From: Peter Seibel
Subject: Standards compliance question
Date: 
Message-ID: <m3adfs6deo.fsf@localhost.localdomain>
A couple questions about how having an ANSI standard works:

 1) Is there any enforcement mechanism for compliance with the
 standard other than market pressure. That is, if someone writes an
 implementation that they call ANSI Common Lisp that doesn't actually
 conform to the standard, is there someone who's going to make them
 stop (e.g. by suing them).

 2) Can anyone who plunks down $18 for a copy of the PDF of ANSI
 INCITS 226-1994 (R1999) implement the language and call it ANSI
 Common Lisp? Or do you have to license the name, etc.

My impression is that the answers are "No" and "Yes", but there are
folks on this group who surely know much more about it than I do.

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

  The intellectual level needed   for  system design is  in  general
  grossly  underestimated. I am  convinced  more than ever that this
  type of work is very difficult and that every effort to do it with
  other than the best people is doomed to either failure or moderate
  success at enormous expense. --Edsger Dijkstra

From: Steven M. Haflich
Subject: Re: Standards compliance question
Date: 
Message-ID: <3E77C505.5030108@alum.mit.edu>
Peter Seibel wrote:
> A couple questions about how having an ANSI standard works:
> 
>  1) Is there any enforcement mechanism for compliance with the
>  standard other than market pressure. That is, if someone writes an
>  implementation that they call ANSI Common Lisp that doesn't actually
>  conform to the standard, is there someone who's going to make them
>  stop (e.g. by suing them).

There is no enforcement.  Standards conformance (unless related to
something with legitimate government concern, like safety, health,
or pollution) is always voluntary in the U.S., and subject only to
market pressure.

In any case, how could there be any enforcement absent any defined
test procedure for conformance?  Even though you think you know what
is required from the language, and what is prohibited, the cracks are
many and wide.

What does your favorite conforming implementation do with the following:

(make-package (copy-seq "FOO"))
(setf (aref (package-name (find-package "FOO")) 2) #\B)
(find-package "FOB")

I can find no prohibition in the ANS against executing this sequence,
and if I were sufficiently ignorant of both common practice and
implementation, I might expect it to work.  There are zillions of
things like this in the ANS.
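A sketch probing the same crack from another angle (the package name "PROBE" and the whole probe are illustrative, not from the ANS): nothing in the standard says whether PACKAGE-NAME returns the very string the package was created with or a fresh copy, so the result of this EQ test is implementation-dependent.

```lisp
;; Does PACKAGE-NAME hand back the very string MAKE-PACKAGE was given?
;; The ANS does not say; a conforming implementation may answer either way.
(let ((name (copy-seq "PROBE")))
  (make-package name)
  (eq name (package-name (find-package "PROBE"))))  ; T or NIL, per implementation
```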

>  2) Can anyone who plunks down $18 for a copy of the PDF of ANSI
>  INCITS 226-1994 (R1999) implement the language and call it ANSI
>  Common Lisp? Or do you have to license the name, etc.

There is no mechanism for licensing.  Note carefully the dictionary
entry for *FEATURES*, and the links there to the glossary term
"purports to conform."  Any implementation can purport anything,
and an implementation can lie blatantly if it likes.

By the way, despite the signature below, the content on this message
is my own and is not endorsed by INCITS or anyone else.  If you
really wanted I might be able to find some SDO materials online
explaining the benefits of _voluntary_ standards, either at INCITS
or ISO.

Steve Haflich
Chair, INCITS/J13
From: Barry Margolin
Subject: Re: Standards compliance question
Date: 
Message-ID: <1Y4ea.11$F4.665@paloalto-snr1.gtei.net>
In article <················@alum.mit.edu>,
Steven M. Haflich <·················@alum.mit.edu> wrote:
>Peter Seibel wrote:
>> A couple questions about how having an ANSI standard works:
>> 
>>  1) Is there any enforcement mechanism for compliance with the
>>  standard other than market pressure. That is, if someone writes an
>>  implementation that they call ANSI Common Lisp that doesn't actually
>>  conform to the standard, is there someone who's going to make them
>>  stop (e.g. by suing them).
>
>There is no enforcement.  Standards conformance (unless related to
>something with legitimate government concern, like safety, health,
>or pollution) is always voluntary in the U.S., and subject only to
>market pressure.

Or sometimes contractual issues.  E.g. your contract with the
implementation vendor stipulates that they'll provide a conformant
implementation, or you hire a programmer to write an application that
should be portable among conforming implementations.

>In any case, how could there be any enforcement absent any defined
>test procedure for conformance?  Even though you think you know what
>is required from the language, and what is prohibited, the cracks are
>many and wide.

While a conformance test would be nice, all you need to show that someone has
violated one of the above types of contracts is a single example of
non-conformance.

-- 
Barry Margolin, ··············@level3.com
Genuity Managed Services, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Kent M Pitman
Subject: Re: Standards compliance question
Date: 
Message-ID: <sfwk7evow7z.fsf@shell01.TheWorld.com>
Barry Margolin <··············@level3.com> writes:

> In article <················@alum.mit.edu>,
> Steven M. Haflich <·················@alum.mit.edu> wrote:
>
> >In any case, how could there be any enforcement absent any defined
> >test procedure for conformance?  Even though you think you know what
> >is required from the language, and what is prohibited, the cracks are
> >many and wide.
> 
> While a conformance test would be nice, all you need to show that someone has
> violated one of the above types of contracts is a single example of
> non-conformance.

Well, I recall being told once (quite a while ago, so my memory may
have faded) that in the Ada community, where there is testing, it is
expected that you will have non-standard results and you end up having
to go through and justify your non-standard results.  I think the issue
is that some "requirements" are vague and you have to say how you 
interpret them.  I'm not sure if anyone out-and-out passes that test.
I am not sure either that a single deviation ends up counting as failure.
To say that a deviation was an automatic failure is to say there is a
canonical resolution to each ambiguity, which I don't believe there is.
From: Barry Margolin
Subject: Re: Standards compliance question
Date: 
Message-ID: <5a6ea.12$F4.760@paloalto-snr1.gtei.net>
In article <···············@shell01.TheWorld.com>,
Kent M Pitman  <······@world.std.com> wrote:
>Barry Margolin <··············@level3.com> writes:
>
>> In article <················@alum.mit.edu>,
>> Steven M. Haflich <·················@alum.mit.edu> wrote:
>>
>> >In any case, how could there be any enforcement absent any defined
>> >test procedure for conformance?  Even though you think you know what
>> >is required from the language, and what is prohibited, the cracks are
>> >many and wide.
>> 
>> While a conformance test would be nice, all you need to show that someone has
>> violated one of the above types of contracts is a single example of
>> non-conformance.
>
>Well, I recall being told once (quite a while ago, so my memory may
>have faded) that in the Ada community, where there is testing, it is
>expected that you will have non-standard results and you end up having
>to go through and justify your non-standard results.  I think the issue
>is that some "requirements" are vague and you have to say how you 
>interpret them.  I'm not sure if anyone out-and-out passes that test.
>I am not sure either that a single deviation ends up counting as failure.
>To say that a deviation was an automatic failure is to say there is a
>canonical resolution to each ambiguity, which I don't believe there is.

Not an "automatic failure", of course; it becomes a point of negotiation
between the two parties.  They'll each try to convince the other that their
interpretation is correct, and if they can't resolve it themselves it might
go to arbitration or court.  Also, ANSI and ISO have procedures for getting
the appropriate technical committees to provide clarifications.

But my point was that compliance is like a physical law: you can't prove it
conclusively, you can only provide lots of confirming evidence; however,
any counterexample will disprove it.

And a standard with minor ambiguities could be considered analogous to an
obsolete physical law, like Newton's Laws of Motion.  They're not
completely correct -- Einstein's Theories of Relativity have obsoleted them
-- but they're good enough to use in most contexts.

-- 
Barry Margolin, ··············@level3.com
Genuity Managed Services, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Kent M Pitman
Subject: Re: Standards compliance question
Date: 
Message-ID: <sfwd6knhqzp.fsf@shell01.TheWorld.com>
Barry Margolin <··············@level3.com> writes:

> But my point was that compliance is like a physical law: you can't prove it
> conclusively, you can only provide lots of confirming evidence; however,
> any counterexample will disprove it.

Only if you conclusively show the counterexample.  The wording of the
spec (of any spec) may be such that there are multiple possible solutions.
In this regard, it is different than a physical system.

Language is inherently like this.  And by design.  One of the only reasons
people ever forge agreements is that words stretch to accommodate multiple
readings.   Sometimes two people disagree in concept, but find a wording 
that they agree on.

This happened recently with the UN resolution on Iraq.  The claim is
that Powell asked France not to sign if they were not prepared for
action in the face of non-compliance.  France claims it was prepared
for action in the face of non-compliance.  France thinks the
resolution intended that the UN would determine non-compliance and the
US thinks the resolution doesn't require that.  Had they been forced
to actually word it one way or the other, and had they realized this
was necessary, it probably would not have gotten signed in the first
place.  But the words stretched where their intent didn't.  There is
no judge of non-compliance.  So you can't say "any counterexample
disproves it" because you don't know what a counterexample is without
a judge.  And there is no canonically defined judge.  The US asserts
that since the UN is not a ruling body, the US's interpretation is
enough to justify its own actions. France apparently asserts that
since the UN is not a ruling body, it doesn't need the US's permission
in order to conclude that the UN _is_ a ruling body, and that it is therefore
entitled to believe that the US needs the UN's permission. (or something
like that).  My real point is not to make fun of France (even though I
don't understand their position) but to say that when there is no 
overarching authority (and there is not in either the international 
arena, nor in this standards body situation), one has to just be reasonable
and then if one of probably several possible courts of competent jurisdiction
is ever convened to test what happened, one has to be ready to defend 
their actions in whatever jurisdiction that happens to be (which may not
be canonically determined).  [Best case is if you work a low-cost
way to resolve such disputes without involving the actual court system
when you write your contract! e.g., they might contract with me or you
or somebody to figure out who needs to bend...]

But when you say "a single counterexample" you're sounding very binary.
I don't think binary logic applies.

> And a standard with minor ambiguities could be considered analogous to an
> obsolete physical law, like Newton's Laws of Motion.  They're not
> completely correct -- Einstein's Theories of Relativity have obsoleted them
> -- but they're good enough to use in most contexts.

I don't like this analogy because in the analogy space, Newton is believed
to be definitively wrong (and merely handy for guessing because the math
is easier) and Einstein is believed to be definitively right (and merely
a pain to do the math for).  In the case of the standards process, ambiguities
don't render the standard obsolete--they render the solution space 
multi-valued.  The committee might after-the-fact choose to clarify the
intent, but if the claim of conformance was not to the clarified version,
then the claim of conformance is not really false in the sense that no vendor
can be held to say that it conforms to a decision that has not yet been made.
I would be surprised (though I don't assert it can't happen--I'd just think
it weird) if you could sue for specific performance to bring a vendor into
compliance based on its claim of conformance to a later-clarified point.
I would assume that unless the vendor claims to offer a maintenance agreement
that will KEEP the implementation in conformance, then all a contract 
requires is that it conforms to the best interpretation known at time of
sale.
From: Barry Margolin
Subject: Re: Standards compliance question
Date: 
Message-ID: <fA7ea.14$F4.1035@paloalto-snr1.gtei.net>
In article <···············@shell01.TheWorld.com>,
Kent M Pitman  <······@world.std.com> wrote:
>Barry Margolin <··············@level3.com> writes:
>
>> But my point was that compliance is like a physical law: you can't prove it
>> conclusively, you can only provide lots of confirming evidence; however,
>> any counterexample will disprove it.
>
>Only if you conclusively show the counterexample.

Come on, I think I explained that I understand this.

My point is that there's *no* way to conclusively show that you conform,
unless the standard is written with mathematical precision and a similarly
precise analysis of the implementation can be done.  Conformance tests
don't prove anything except that you handle all the cases that the test
checks for.  Unless the language is so simple as to be practically useless,
it's infeasible for a conformance test to be exhaustive.

On the other hand, it *is* possible to show conclusively that an
implementation doesn't conform.  If (+ 1 2) returns something other than 3,
or (defun foo ...) doesn't define a function named foo, you don't conform.
If you fail a conformance test, you can be presumed not to conform.
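Barry's two examples, spelled out as the kind of trivially checkable assertions a conformance test might contain (a minimal sketch; the helper FOO is illustrative). Any failure here conclusively shows non-conformance, while passing shows nothing beyond these cases.

```lisp
;; Checks whose failure would conclusively demonstrate non-conformance:
(assert (= (+ 1 2) 3))            ; addition must work
(defun foo () 'ok)                ; DEFUN must define a function named FOO
(assert (fboundp 'foo))
(assert (eq (foo) 'ok))
```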

In areas where the standard is unclear, the very concept of conformance is
not well defined, although if an implementation doesn't match *any* of the
reasonable interpretations then it's presumably non-conformant.

Going back to my original point, though, the value of a standard is in
defining contracts.  In this respect, it's not very different from most
other contractual requirements.  In general, it's not up to any party to
prove that he's met all his obligations; rather, if the other party is not
satisfied he must point out an obligation that was not met.  The standard
can be thought of as a laundry list of requirements that may be referenced
by a contract.

If a conformance test exists, it could be used as a compromise -- "conforms
to the standard" might be treated as equivalent to "passes the conformance
test".  This is often done to save time and reduce litigation.  But it
should be understood that it's just an approximation, due to the
limitations of technology.

-- 
Barry Margolin, ··············@level3.com
Genuity Managed Services, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Paul F. Dietz
Subject: Re: Standards compliance question
Date: 
Message-ID: <_oGdnXFx-_ZTlOSjXTWcqQ@dls.net>
Barry Margolin wrote:

> If a conformance test exists, it could be used as a compromise -- "conforms
> to the standard" might be treated as equivalent to "passes the conformance
> test".

IMO, it's a shame the CL standard didn't come with a set of agreed-upon
conformance tests.
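What such an agreed-upon test might have looked like, sketched in the style of the RT regression-test framework's DEFTEST (the test names and the choice of framework are illustrative assumptions, not part of any actual suite):

```lisp
;; (deftest name form expected-value) -- the test passes when FORM's
;; value is EQUAL to the expected value.
(deftest plus.1
  (+ 1 2)
  3)

(deftest subseq.1
  (subseq "hello" 1 3)
  "el")
```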

	Paul
From: Kent M Pitman
Subject: Re: Standards compliance question
Date: 
Message-ID: <sfw1y12vkcy.fsf@shell01.TheWorld.com>
"Paul F. Dietz" <·····@dls.net> writes:

> Barry Margolin wrote:
> 
> > If a conformance test exists, it could be used as a compromise -- "conforms
> > to the standard" might be treated as equivalent to "passes the conformance
> > test".
> 
> IMO, it's a shame the CL standard didn't come with a set of agreed-upon
> conformance tests.

I disagree.

In my personal opinion (for whatever that's worth), we had finite
budget and this would have consumed enormous amounts of it,
accomplishing nothing, leaving you no more able than you are now to
tell what was useful and what was not.  Meanwhile, some aspect of the
standard would have been less well done because our attentions would
have been dragged down reviewing test cases instead of spec.

That's just my guess, of course.  No way to know.
But maybe it will make you feel less bad.

No reason a voluntary testing place can't arise and offer the service
to those who care after-the-fact.  They won't then be measuring compliance
per se, but they can be measuring "x's belief that blah complies", which
might be commercially useful.  Or not.

The individual vendors talked about contributing their own test cases.
Most vendors have some.  But eventually they realized that there was no way
to assure they were not giving away competitive advantage, and each was
worried it would contribute more than the others.  And so my recollection
is that they backed out once it got close to time to show their cards.

A community repository of test cases "just for the hell of it" 
could also be made.  Just like benchmarks.

IMO, it's more important to worry that a few important areas are done
well and equivalently than it is to worry that every detail of every
implementation conforms.  A lot of conformance testing is likely wasted 
because no one really cares.  Again, just a guess of mine. YMMV.
From: Christophe Rhodes
Subject: Re: Standards compliance question
Date: 
Message-ID: <sqvfyebeng.fsf@lambda.jcn.srcf.net>
Kent M Pitman <······@world.std.com> writes:

> IMO, it's more important to worry that a few important areas are done
> well and equivalently than it is to worry that every detail of every
> implementation conforms.  A lot of conformance testing is likely wasted 
> because no one really cares.  Again, just a guess of mine. YMMV.

A rather surprising number of people here seemed to care about the
existence of (ARRAY NIL), despite the fact that, as far as I know,
before Paul read the relevant sections carefully no-one seemed to have
noticed that the spec required that such a specialized array type
existed.

Or maybe those were just words, and those being vocal wouldn't be
willing to back up their words with their money, whether that be
money(1) or money(2) or money(3).
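The (ARRAY NIL) corner can be probed directly. On a strict reading of the spec's type-upgrading rules, NIL is a legal element type, yielding arrays that can hold no elements; implementations historically disagreed, which is the point. A hedged sketch, with no guaranteed results:

```lisp
;; On a strict reading these should be accepted; actual behavior
;; varied among implementations at the time of this thread.
(upgraded-array-element-type 'nil)    ; NIL, per the strict reading
(make-array 3 :element-type nil)      ; a (VECTOR NIL 3): nothing can be stored in it
```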

Christophe
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: Kent M Pitman
Subject: Re: Standards compliance question
Date: 
Message-ID: <sfwy93a4aft.fsf@shell01.TheWorld.com>
Christophe Rhodes <·····@cam.ac.uk> writes:

> Kent M Pitman <······@world.std.com> writes:
> 
> > IMO, it's more important to worry that a few important areas are done
> > well and equivalently than it is to worry that every detail of every
> > implementation conforms.  A lot of conformance testing is likely wasted 
> > because no one really cares.  Again, just a guess of mine. YMMV.
> 
> A rather surprising number of people here seemed to care about the
> existence of (ARRAY NIL), despite the fact that, as far as I know,
> before Paul read the relevant sections carefully no-one seemed to have
> noticed that the spec required that such a specialized array type
> existed.

People used to complain about the massive number of cleanup issues 
I submitted during the CLTL -> ANSI CL process.  Some people said
"there are a zillion other things that are potential portability
problems, too, why these?"  My reply was "because these have actually
been encountered and they are known to matter to at least one person".

There are a million things that you _could_ test for.  Each has a finite
cost to fix.  I'd rather have vendors accumulate test cases from actual
reported bugs and know that the vendors had budget left to fix the things
that matter than know that vendors had squandered their budget chasing
bugs that might be noticed.  If it turns out that all users are happy and
vendors have _extra_ time, I'm all set for them to burn surplus cash
that isn't going to corporate dividends on forward-chaining bugfixes
instead of backward-chaining ones.  Or if one or more vendors wants to 
spend a lot on making sure they have fixed bugs before users encounter
them, that's up to the vendor.  But to have the community REQUIRE that ALL
implementations adopt this particular business plan just as a cost of
being part of the community means that implementations can't compete 
commercially on the basis of their choice of where to invest their energies
and where not to.  (If you believe they can compete at all; see my recent
remarks on capitalism [search words "capitalist", "capitalism", "free
software"] in other threads. ;)

In the end, the only effect of all this testing is that _maybe_ an
implementation or two will be found unable to conform.  The questions
you should really then ask are:
 (a) is it likely that no one would know that this is inferior?
     that is, are people building product that is likely to care
     really likely to be unaware, simply by monitoring word on the street,
     that this implementation is not as good as another, or would this
     test be the only clue.  If the knowledge is out there anyway,
     why go to a lot of extra expense to do the same things?
 (b) if there are people whose needs are not as strong as others,
     should everyone in the industry have to pay for conformance testing?
     (requiring conformance tests drives up the price, whereas if they
     are not required, users can still ask for whatever tests they want
     voluntarily, meaning that people who need that quality can pay for
     it and people who don't will not have to)
 (c) does this make the implementation unsuitable for the task?
     (i.e., might a non-conforming implementation still suffice
      for some applications?  experience says "yes")
 (d) does this make the implementation unable to survive, and is
     that good?  (i.e., if a buggy implementation is out there,
     will it be better for it to be made to just go away, or at least
     wait to make any sale until it is perfect, or is it better to let
     it have some revenue from people who don't mind its deficiencies?)
There are probably other questions as well, but these are the kinds of
serious business questions that one has to ask when figuring out
how to proceed.

A lot of times if one has come from an academic background, or at
least if one has not personally confronted the finite budget issue,
one may just assume that purity and testing and whatnot are
unconditionally good.  But CL is not just an academic thing.  It is a
commercial standard, and it accepts and attempts to accommodate
commercial realities.  CL implementations are expensive to produce.
CL vendors are lucky, and so is the community, if they get to market
at all.  Fortunately, the world doesn't need to be kneedeep in CL
implementations--it just needs enough to keep the market healthy--and
it certainly has that many.  Nevertheless, we don't want the bar impossibly
high.

clisp, just to take an example, has traditionally been griped about
for little non-conformances.  It has come a long way and may well be
getting close to real conformance.  Would the world be better off if
it had not been able to claim to be a CL?  Maybe Paul Graham
would not have thought to try it.  Would that have done us any good?

> Or maybe those were just words, and those being vocal wouldn't be
> willing to back up their words with their money, whether that be
> money(1) or money(2) or money(3).

Sometimes purity and good can lead strange places that benefit no one
if you're not doing a careful accounting.  If you're not convinced,
watch the movie Quiz Show, which nicely zings the thoughtful viewer
with some interesting questions posed by the network at the end on
the question of who is injured... I don't want to totally spoil the
movie by elaborating further, but I found it interesting to see ethics
working so hard toward no apparent end.  And I see a parallel here,
though if you don't, it won't be the first time one of my many odd
analogies fell flat. ;)
From: Paul F. Dietz
Subject: Re: Standards compliance question
Date: 
Message-ID: <FsqdnQ9N9M5aheajXTWcpQ@dls.net>
Kent M Pitman wrote:

> In the end, the only effect of all this testing is that _maybe_ an
> implementation or two will be found unable to conform.

Kent, as you may be aware, I've been putting together an ANSI CL
compliance test suite.

Of the implementations I've run or tried to run it on, every one
has initially failed *many* tests.  I'm not talking about a few
failures here.  Every lisp implementation has been substantially
noncompliant with the standard.

Now, maybe users would not encounter these problems.  But maybe
they would, and would learn to work around them (for example,
by avoiding sections of the language that are not properly
implemented.)  Or, worse, learn to depend on them.  The users
would not realize something was wrong until they tried to port
to another lisp implementation (although more important portability
problems could arise from areas where the standard is silent.)

Users who do find the bugs will get the impression that the standard
is not to be trusted.  And perhaps it shouldn't be.  If the lisp
market is so small that compliance with the standard is not economically
justified, then the standard is, in a real sense, dead.  Maybe the
Scheme partisans are right, and Common Lisp is just too big to implement
with any confidence the implementations will actually do what was promised.

But I don't believe that.  While many lisp implementations had
impressive numbers of failures, most of the failures were also
not difficult to correct.  It seems to me the problem was not that fixing
the bugs was expensive, but rather, some combination of: the developers
(or their managers, if any) didn't realize the standard required particular
behaviors, did not have the resources to *find* the noncompliances,
overestimated the effort needed to fix most of them once identified,
or just didn't pay enough attention to the issue.

Personally, I believe conformance to the standard is very important
for the future of Common Lisp.  One thing that holds the lisp community
together is a pride in lisp and what it can do.  Failures of lisp
implementations to live up to the standard dampen this pride.

	Paul
From: Kent M Pitman
Subject: Re: Standards compliance question
Date: 
Message-ID: <sfw8yv8vhb6.fsf@shell01.TheWorld.com>
"Paul F. Dietz" <·····@dls.net> writes:

> Kent M Pitman wrote:
> 
> > In the end, the only effect of all this testing is that _maybe_ an
> > implementation or two will be found unable to conform.
> 
> Kent, as you may be aware, I've been putting together an ANSI CL
> compliance test suite.
> 
> Of the implementations I've run or tried to run it on, every one
> has initially failed *many* tests.  I'm not talking about a few
> failures here.  Every lisp implementation has been substantially
> noncompliant with the standard.

Ah, you misread the intent of my statement.  I wasn't intending to make a
numeric prediction about how many failures there would be.  I was intending
to say that disqualifying implementations will have two possible major
effects:

 - To hurt commerce by making people feel falsely that the Lisps are not
   suitably compliant for getting business done.  [This is what I mean
   by dismissing the fact that some would fail to conform. I'm sure some
   will, but that doesn't make them not suitable.  So I don't like the idea
   of leaning heavily on the binary nature of the term 'conform' which reduces
   an important and multi-axis concept, 'suitability for purpose', to 
   an overly simplistic catch-all phrase that simply misses the point in
   most or all cases.]

 - To tell vendors information they can use to make themselves better.
   [For this I think any test suite that is itself compliant can help,
   but it doesn't have to be a test that claims conformance.  Moreover,
   I do think the creation of this test suite can divert energy
   from things that should be done that have more commercial impact.]

> Now, maybe users would not encounter these problems.  But maybe
> they would, and would learn to work around them (for example,
> by avoiding sections of the language that are not properly
> implemented.)

Or by asking for support from their vendors.  Which they would at that time
be paying for, and the vendors could then afford to fix.

> Or, worse, learn to depend on them.

This means the user is a cheapskate and doesn't want to pay for a correct
implementation.  How is the world helped by requiring all vendors to publicly
be ashamed of this bug when the only user experiencing the bug is too cheap
to pay for it being fixed?  Maybe this would 'shame' the vendor into fixing
it, but you're just effectively asking those who DON'T care about the bug
fix to subsidize the fix that the person who does care won't pay for.
I'm claiming that withholding the 'conformance' label in order to coerce
compliance when the market doesn't want to pay for it is not good.

This is the reason I created the 'purports to conform' term in the spec and
wanted people to lean heavier on that.  One doesn't remove :ANSI-CL from
their *features* list because a bug is found.  One asks the vendor to remove
it if they have no willingness to come in line with reported bugs, subject
to budget.  That is, it represents an intent to fix bugs, not a claim to
be bug-free.
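The "purports to conform" machinery is observable from a program, though only as a claim, never as verified fact. A minimal sketch:

```lisp
;; A program can ask whether the implementation *claims* conformance
;; via *FEATURES*; it cannot verify the claim.
(if (member :ansi-cl *features*)
    (format t "~&This implementation purports to conform to ANSI CL.~%")
    (format t "~&This implementation makes no such claim.~%"))
```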

It's great to assert bug-freeness, but until the industry insists it will
only buy such lisps, I don't think it's good to require it.  It just 
makes a lot of players into non-players, and it ends up making people who
make Lisps think about whether they want to use a different standard.
I think it is potentially quite destructive to the community.

> The users
> would not realize something was wrong until they tried to port
> to another lisp implementation (although more important portability
> problems could arise from areas where the standard is silent.)

I think they won't realize anyway.  If they aren't trying THEIR code
on the other platform, they aren't going to realize that theirs is
impacted by the non-portability issues that are inherent.
 
No amount of conformance testing will assure that a conforming program in
one implementation will run in another.  

A conforming processor can run this conforming program:
  (= (random 3) 1)
and it can come out T.  But that's no guarantee that it even will again
in the same implementation, much less in another.  Running programs and
running tests is no assurance that something will port.

You have to read and analyze source code to know if the user is relying
on the aspects of the program that they are told to rely on.

> Users who do find the bugs will get the impression that the standard
> is not to be trusted.

People who rely on use testing will get this impression anyway.

Moreover, Lisp users trust CL.  We have decades of commercial
experience on that.  To my knowledge, all the prominent criticisms of
Lisp, even the ones that have supposedly brought down companies, are
not based on conformance issues.

> And perhaps it shouldn't be.  If the lisp
> market is so small that compliance with the standard is not economically
> justified, then the standard is, in a real sense, dead.  Maybe the
> Scheme partisans are right, and Common Lisp is just too big to implement
> with any confidence the implementations will actually do what was promised.

No.  "too big" implies "can't be done".  It is "bigger" and therefore
"harder".  The emphasis on the market is to emphasize what can be done
when the market needs it.  Overemphasis on theory can sink a market
needlessly.

When vendors NEED a certain degree of correctness, they PAY for it.
AT&T a while back needed to make a phone switch and needed certain
realtime behaviors.  They didn't at that point say "oh, I hope one is
available off the shelf this way and if there's nothing like that the
language must suck".  They just went to the vendors and said "we need
it to this spec and we'll pay for it to be that way" and the vendors
produced.  (Why AT&T didn't use the resulting switch is beyond me, but
as far as I know, the product they commissioned, including its realtime
behavior [which, incidentally, was a commercial barrier, not a conformance
issue] was produced.)

> But I don't believe that.  While many lisp implementations had
> impressive numbers of failures, most of the failures were also
> not difficult to correct.  It seems to me the problem was not that fixing
> the bugs was expensive, but rather, some combination of: the developers
> (or their managers, if any) didn't realize the standard required particular
> behaviors, did not have the resources to *find* the noncompliances,
> overestimated the effort needed to fix most of them once identified,
> or just didn't pay enough attention to the issue.

I'm not saying that there is no value in finding bugs.  I don't want
you to think that there's no value in what you're doing.  BUT...

I think there is a great deal more use in the technical work you are doing
than in the rhetoric you are now putting out.

I'm saying there IS real danger to the community in focusing this
discussion on whether the term 'conformance' is meaningful or not
in a given context.  Focusing it simply on fixing bugs that people
are going to care about makes more sense to me.

It's useful to know where there are bugs.  Saying that the bugs MUST be
fixed is what should be left to the market.

But it creates an environment in which people with again too much time
on their hands and no intent at commerce are going to run around saying
"the sky is falling, the sky is falling" and creating a problem that
is much more real than the problem you're focused on.

The community does not have infinite resources.  Non-commercial 
implementations won't be hurt by what you're doing; non-commercial 
implementations will just sit there being non-conforming if you demonstrate
them to be so.  But commercial implementations can be hurt, and if they
are hurt, can lose revenue.  This is another reason I think free software
threatens commercial software--it doesn't have any duty to anyone in order
to survive. It can just shrug its shoulders and say "who cares. you can't
hurt me."    If they lose enough revenue, they don't just get "not used
for a while", they get "gone".

The standard itself is not perfect.  It is FULL of errors. Typos. Technical
glitches.  We went with it.  Not because we had no pride but because we knew
that a flawed standard people could use was better than a perfect standard
that was never coming.  This is a commercial trade-off, not a statement of
our having no pride.

> Personally, I believe conformance to the standard is very important
> for the future of Common Lisp.

Personally I don't believe in one-place predicates.

Political arguments of the form "I like x" or "I don't like y" mask
the real issue of finite resources.

Sample one-place predicates:

 - I think conformance matters.
 - I think getting new implementations with new features that address
   the problems of today matters.

Sample two-place predicates:

 - I think getting new implementations with new features that address
   the problems of today matters more than conformance.

> One thing that holds the lisp community
> together is a pride in lisp and what it can do.  Failures of lisp
> implementations to live up to the standard dampen this pride.

Pride does not have a whit to do with commerce.

Focus on pride instead of customer need can bring us down.

Failures to live up to customer needs dampen more than just pride.

(All of the above just my personal opinion, btw.
I know I tend to assert these things as fact, but there will
be others who disagree.)
From: Paul F. Dietz
Subject: Re: Standards compliance question
Date: 
Message-ID: <9ZqcnaGbWsOXUeajXTWcpg@dls.net>
Kent M Pitman wrote:

> Ah, you misread the intent of my statement.  I wasn't intending to make a
> numeric prediction about how many failures there would be.  I was intending
> to say that disqualifying implementations will have two possible major
> effects:  [ deleted ]

Ok.  Just to be clear, I don't think an implementation need be 100% compliant
to be useful, or to be called Common Lisp.  I have a personal preference for
compliance over noncompliance (obviously!) but it's not the sole, or even
necessarily the most important, dimension by which I judge an implementation.


>>Or, worse, learn to depend on them [ noncompliances ]
> 
> This means the user is a cheapskate and doesn't want to pay for a correct
> implementation.

Or, it could mean that the user has not fully understood the specification,
and assumes that the way his lisp is implementing a feature is as
required by the standard.


>  How is the world helped by requiring all vendors to publicly
> be ashamed of this bug when the only user experiencing the bug is too cheap
> to pay for it being fixed? 

Perhaps you should ask the person who wrote section 1.5.1.5 of the ANSI
CL standard, paragraphs 3 and 4:

   If the implementation conforms with some but not all of the requirements
   of this standard, then the conformance statement shall be

   ``<<Implementation>> conforms with the requirements of ANSI <<standard number>>
    with the following exceptions: <<reference to or complete list of
    the requirements of the standard with which the implementation does not conform>>.''

This section doesn't go so far as to require the implementors to put a scarlet 'B'
on their foreheads :), but it does require documentation of each noncompliance.


> The community does not have infinite resources.  Non-commercial 
> implementations won't be hurt by what you're doing; non-commercial 
> implementations will just sit there being non-conforming if you demonstrate
> them to be so.

Oddly enough, they don't do this.  The maintainers of free CL implementations
have been quite aggressive about using the test suite and fixing problems it
reveals.  Their problem was more one of lack of bug reports and lack of adequate
tests rather than lack of desire to be conforming.


> The standard itself is not perfect.  It is FULL of errors. Typos. Technical
> glitches.  We went with it.  Not because we had no pride but because we knew
> that a flawed standard people could use was better than a perfect standard
> that was never coming.  This is a commercial trade-off, not a statement of
> our having no pride.

Let me express my gratitude to you and your colleagues for producing
the standard.  I appreciate all the hard work that went into it.  It is
very useful, even if it is imperfect.  I do regret that the resources to
continue to refine it were no longer available, but that does not negate
the accomplishment.

	Paul
From: Kent M Pitman
Subject: Re: Standards compliance question
Date: 
Message-ID: <sfw65qcdp6c.fsf@shell01.TheWorld.com>
"Paul F. Dietz" <·····@dls.net> writes:

> Kent M Pitman wrote:
...
> >  How is the world helped by requiring all vendors to publicly
> > be ashamed of this bug when the only user experiencing the bug is too cheap
> > to pay for it being fixed?
> 
> Perhaps you should ask the person who wrote section 1.5.1.5 of the ANSI
> CL standard, paragraphs 3 and 4:

Heh.  Nice try.  I didn't write that stuff, I just plopped it in place.
I think ANSI requires this text. :)

>    If the implementation conforms with some but not all of the requirements
>    of this standard, then the conformance statement shall be
> 
>    ``<<Implementation>> conforms with the requirements of ANSI
>    <<standard number>> with the following exceptions: <<reference to
>    or complete list of the requirements of the standard with which
>    the implementation does not conform>>.''
> 
> This section doesn't go so far as to require the implementors to put
> a scarlet 'B' on their foreheads :), but it does require
> documentation of each noncompliance.

Just because I got involved in an ANSI process doesn't mean I subscribe
to every detail of their style.  I have come to think ANSI has overly high
overhead, and this is as good an example as any.

> > The community does not have infinite resources.  Non-commercial
> > implementations won't be hurt by what you're doing; non-commercial
> > implementations will just sit there being non-conforming if you
> > demonstrate them to be so.
> 
> Oddly enough, they don't do this.

Again, I didn't mean to say this as a predictive thing.  I meant they are
able to tolerate this as a worst case.  Their worst-case behavior is better
than commercial worst-case behavior in the same situation, so they have an
unfair advantage.

> The maintainers of 
[some]
> free CL implementations
> have been quite aggressive about using the test suite and fixing problems it
> reveals.

Certainly.  I know this and did not mean to imply otherwise.

> Their problem was more one of lack of bug reports and lack of adequate
> tests rather than lack of desire to be conforming.

Uh, sometimes.  This has not always been true, although over time they
have gotten better.

> > The standard itself is not perfect.  It is FULL of
> > errors. Typos. Technical glitches.  We went with it.  Not because
> > we had no pride but because we knew that a flawed standard people
> > could use was better than a perfect standard that was never
> > coming.  This is a commercial trade-off, not a statement of our
> > having no pride.
> 
> Let me express my gratitude to you and your colleagues for producing
> the standard.  I appreciate all the hard work that went into it.  It is
> very useful, even if it is imperfect.  I do regret that the resources to
> continue to refine it were no longer available, but that does not negate
> the accomplishment.

Well, I'm not worried about that.  Yeah, it would have been fun to continue,
but it's still "good enough" so I rest easy.

And likewise I don't mean to be denigrating your work on the test cases.

I'm just again trying to emphasize that the goal should not be this badge
of compliance because that's very double-edged and divisive.  I'm all for
improving quality where it doesn't interfere with markets and doesn't
lock out competitors.

And I also agree with Duane's remark about counting bugs.  The "frequency"
of bugs is a multi-dimensional space.  By this I mean you have to compare
the incidence rate on multiple dimensions.  e.g., (just off top of my
head):
 - how often does the bug occur in a given piece of code
 - how often does code need the feature in the first place
 - how often does the bug get exercised if a given piece of code runs
 - how often does the given piece of code run for a given person
 - how often is that given person the one to run code

I first ran into this problem when I first came to Symbolics and I made them
hold up a release because binding of special variables in lambda combinations
was broken.  That is,
 ((lambda (*foo*) (declare (special *foo*)) ...*foo*...) 1)
wasn't doing the right thing.  The question was, how often did this happen.
I made a lot of arguments that it happened a lot, but others argued it hardly
ever happened, that rare was the case of anyone using raw lambda combinations
and even rarer was that subset of people who were gonna bind special variables
in that way; lambda combinations are mostly used by purists and they mostly
don't use specials.  (Turns out Macsyma had lots of legacy code that expanded
its private LET macro into lambda combinations, so it hit our group more than
average.)  But since that time I've thought a lot about situations like this
and I've decided there is no a priori standard about whether this is or isn't
a showstopper bug.  It really depends on what level of quality the 
implementation has promised, and what level of quality the user needs.
LAMBDA still seems darned primitive to me, and maybe I'd still hold up that
release.  But then, maybe not...  I'm much more conscious of finite resources
now and of the importance of release schedules and things like that.
Live and learn.  
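(To make the legacy pattern concrete: here is a hypothetical MY-LET, a
made-up stand-in for Macsyma's private macro; the real one was surely
more elaborate, but the expansion is the point.)

  ;; MY-LET is an imagined stand-in for Macsyma's private LET macro,
  ;; which expanded into raw lambda combinations.
  (defmacro my-let (bindings &body body)
    `((lambda ,(mapcar #'first bindings) ,@body)
      ,@(mapcar #'second bindings)))

  ;; This expands into exactly the form that was broken:
  ;; ((lambda (*foo*) (declare (special *foo*)) *foo*) 1)
  (my-let ((*foo* 1))
    (declare (special *foo*))
    *foo*)    ; => 1 in a conforming implementation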
From: Rob Warnock
Subject: Re: Standards compliance question
Date: 
Message-ID: <z3adnSkkgvIxF-CjXTWc-g@speakeasy.net>
Kent M Pitman <······@world.std.com> wrote:
+---------------
| The standard itself is not perfect.  It is FULL of errors. Typos.
| Technical glitches.
+---------------

As I've recently discovered...  ;-}  I've been slowly working my way
through the CLHS from one end to the other (a little bit at a time),
and I've run across several "howlers".

+---------------
| We went with it.  Not because we had no pride but because we knew
| that a flawed standard people could use was better than a perfect
| standard that was never coming.  This is a commercial trade-off,
| not a statement of our having no pride.
+---------------

Understood. But having bumped into a number of the typos [and after
having finally stopped patting myself on the back for having learned
enough CL that I could be confident that they *were* typos!], I began
to wonder if it would be at all useful to maintain a public-access
collection of CLHS "errata" somewhere (CLiki, maybe?), perhaps for
convenience structured to parallel the CLHS hierarchy, so that when
people (newbies especially) have a question about whether something
in the CLHS is a typo or not they could at least get a quick "second
opinion".

That is, I'm not suggesting that such an errata list needs to be at
all "perfect" to be useful. Realistically, errata lists will always
have bugs in them, too. But I think it would have been useful to me
to have had a parallel errata tree available as I was reading.

So on the grounds that sometimes what is useful to the one may be useful
to the many...  Comments?


-Rob

-----
Rob Warnock, PP-ASEL-IA		<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Gabe Garza
Subject: Re: Standards compliance question
Date: 
Message-ID: <87d6khvr6k.fsf@ix.netcom.com>
····@rpw3.org (Rob Warnock) writes:

> to wonder if it would be at all useful to maintain a public-access
> collection of CLHS "errata" somewhere (CLiki, maybe?)

I'd say the new ALU site would be a better choice, simply because CLHS
errata has a wider audience than those who use free Lisp on Unix.

> So on the grounds that sometimes what is useful to the one may be useful
> to the many...  Comments?

I think it's a good idea.

Gabe Garza
From: Duane Rettig
Subject: Re: Standards compliance question
Date: 
Message-ID: <4r890cw14.fsf@beta.franz.com>
"Paul F. Dietz" <·····@dls.net> writes:

> Kent M Pitman wrote:
> 
> > In the end, the only effect of all this testing is that _maybe_ an
> > implementation or two will be found unable to conform.
> 
> Kent, as you may be aware, I've been putting together an ANSI CL
> compliance test suite.

I, for one, am extremely grateful for your efforts in this area.
[For the newsgroup: I have been working with Paul's test suite,
and have been able to remove almost all of the nonconformances
the current form of his test suite has found, within my development
version of Allegro CL which will eventually become the next release.]

> Of the implementations I've run or tried to run it on, every one
> has initially failed *many* tests.  I'm not talking about a few
> failures here.  Every lisp implementation has been substantially
> noncompliant with the standard.

I have to disagree here, for a couple of reasons:

 1. As Kent says in another post, it is not necessary to call an
implementation nonconforming just because it fails to conform in
a particular area.  Indeed, if that had been the case, then _no_
implementation could _ever_ call itself conforming, even after
passing a test suite, because I guarantee I could find a new
nonconformance which the test suite missed (i.e. perfect conformance
would imply a perfect test suite, and how would we determine when
such a test suite exists?)  Kent instead invented the term "purports
to conform", which is more a matter of intentions and direction
than of specific current behaviors.  This is the most logical approach
to take, because it allows implementations to be constantly _moving_
in the direction of conformance (or not, if they decide not to comply).
This is what I love about your test suite, because it allows us all to
take a great leap in the direction of conformance, and while still not
getting all the way there, it makes the direction clear.

 2. Your emphasis on *many* could be misleading.  As an example,
Allegro CL failed a test that called for (car 10) => <type-error>,
because it instead returned a simple-error.  OK, there's one
nonconformance (which I've fixed in my development version).  And I'm
not sure how many people will count on the condition of the error
being a type-error (as opposed to any error at all), but since it is
specified as such, and since I want my users to be able to count on
the correctness of the diagnostic, I make the change.  But now, the
test suite goes on to check that (cdr 10) => <type-error> also,
which will also fail in the pre-fixed lisp.  Also, (caar '(10)),
(cdar '(10)), etc.  I believe that there were over 100 failures based
on this same basic nonconformance.

Do I believe that every one of those tests is necessary?  Absolutely -
the c..r functions are all specific functions, and must be tested
individually in order to be complete.  Does that mean that this failing
of *many* c..r tests to be substantial non-conformance?  Not at all; it
is essentially the same single nonconformance caused by a single
mechanism in the implementation.
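(For concreteness, a sketch of my own of the kind of check involved --
this is not Paul's actual test code:)

  ;; Each accessor, applied to a non-list, must signal TYPE-ERROR.
  ;; In the pre-fixed lisp each of these signaled a SIMPLE-ERROR
  ;; instead, so each counted as a separate failure.
  (dolist (fn (list #'car #'cdr #'caar #'cdar))
    (assert (typep (handler-case (funcall fn 10)
                     (error (c) c))
                   'type-error)))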

[Paul and I also had a conversation about this issue, where I requested
that he group his tests in such a way that the number of failures more
accurately reflect the extent of nonconformance.  I understand his point
of view that such groupings, especially after the fact, are hard, but I've
also noticed that many of his newer tests do have such groupings, and so
a failure will in the future tend to more accurately measure the extent
of the nonconformance.]

 3. It does not take into consideration the response and interaction
between the vendor and the customer.  Kent has already touched on that.
The "many" failures that the test suite shows on Allegro CL does not tend
to reflect the needs of our paying customers.  We tend to work with
our customers, prioritizing (with their help) which to patch
immediately, and which to put off to another release (sometimes, putting
off is a "Good Thing", because patching does tend to destabilize the
product, possibly introducing bugs that weren't there before).  And
indeed, there are nonconformances which our customers have never
seen, due to the nature of their applications.  We maintain a list of
these nonconformances, and work on them as we get time to do so.


> Now, maybe users would not encounter these problems.  But maybe
> they would, and would learn to work around them (for example,
> by avoiding sections of the language that are not properly
> implemented.)  Or, worse, learn to depend on them.  The users
> would not realize something was wrong until they tried to port
> to another lisp implementation (although more important portability
> problems could arise from areas where the standard is silent.)
> 
> Users who do find the bugs will get the impression that the standard
> is not to be trusted.  And perhaps it shouldn't be.  If the lisp
> market is so small that compliance with the standard is not economically
> justified, then the standard is, in a real sense, dead.  Maybe the
> Scheme partisans are right, and Common Lisp is just too big to implement
> with any confidence the implementations will actually do what was promised.
> 
> But I don't believe that.  While many lisp implementations had
> impressive numbers of failures, most of the failures were also
> not difficult to correct.  It seems to me the problem was not that fixing
> the bugs was expensive, but rather, some combination of: the developers
> (or their managers, if any) didn't realize the standard required particular
> behaviors, did not have the resources to *find* the noncompliances,
> overestimated the effort needed to fix most of them once identified,
> or just didn't pay enough attention to the issue.

I agree with these paragraphs.  To paraphrase what I think are the
major points:

 1. The most important aspect of conformance is trust: Can I trust my
implementation to do what I expect it to do?  If it happens not to do
what I expect, can I trust that my vendor will quickly prioritize it,
and either give me fixes or workarounds, or help me solve my problem
in other ways?

 2. It is essential that the standard be continually tested, and that
the implementations be continually tested against the standard, in order
to continuously increase the level of trust in CL (as per #1).  To this
extent, I believe that you are doing the CL community a great and
invaluable service.

> Personally, I believe conformance to the standard is very important
> for the future of Common Lisp.  One thing that holds the lisp community
> together is a pride in lisp and what it can do.  Failures of lisp
> implementations to live up to the standard dampen this pride.

I agree completely with this, and that is why I take part in this
very valuable test suite.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Nils Goesche
Subject: Re: Standards compliance question
Date: 
Message-ID: <ly7kas3j8c.fsf@cartan.de>
"Paul F. Dietz" <·····@dls.net> writes:

> Kent M Pitman wrote:
> 
> > In the end, the only effect of all this testing is that _maybe_ an
> > implementation or two will be found unable to conform.
> 
> Kent, as you may be aware, I've been putting together an ANSI CL
> compliance test suite.

FWIW, I think what you're doing is great.  I hope your suite will be
downloadable somewhere, some time.  And everybody agrees, I think,
that standard compliance is not only nice but important.  But let's
not get carried away too much.  Yes, it is a good argument against an
implementation if it is deviant in numerous, significant places.  But
that doesn't mean that it is all bad, or even evil.  Let's regard your
suite as a helping tool for implementors and users, but not as a
guillotine or some such.

Regards,
-- 
Nils Gösche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Paul F. Dietz
Subject: Re: Standards compliance question
Date: 
Message-ID: <J-ycncjDJrHCIeajXTWcoQ@dls.net>
Nils Goesche wrote:

>   Yes, it is a good argument against an
> implementation if it is deviant in numerous, significant places.  But
> that doesn't mean that it is all bad, or even evil.  Let's regard your
> suite as a helping tool for implementors and users, but not as a
> guillotine or some such.

Oh dear.  I do regard it as a tool to help implementors, and am gratified
that they have allowed me the opportunity to contribute.  I don't regard
anyone here, or any implementation I've tried it on, as bad or evil.  I am
sorry if I used wording that has given that impression.

	Paul
From: Barry Margolin
Subject: Re: Standards compliance question
Date: 
Message-ID: <yQGea.3$0i.422@paloalto-snr1.gtei.net>
In article <··············@cartan.de>, Nils Goesche  <······@cartan.de> wrote:
>"Paul F. Dietz" <·····@dls.net> writes:
>
>> Kent M Pitman wrote:
>> 
>> > In the end, the only effect of all this testing is that _maybe_ an
>> > implementation or two will be found unable to conform.
>> 
>> Kent, as you may be aware, I've been putting together an ANSI CL
>> compliance test suite.
>
>FWIW, I think what you're doing is great.  I hope your suite will be
>downloadable somewhere, some time.  And everybody agrees, I think,
>that standard compliance is not only nice but important.  But let's
>not get carried away too much.

I agree.

If I were a vendor, and I were faced with a choice of fixing problems found
by a compliance test suite, or fixing problems that real users had
reported, which do you think I'd tackle first?

I'm reminded of the CACM article from the 80's, where they fed random input
data to a number of Unix programs, to see how they reacted.  Many of them
dumped core, often due to buffer overflows or other poor input validity
checking, rather than producing reasonable diagnostics.  They reported the
problems to the vendors.  5 or 10 years later they published another
article revisiting this issue.  They ran the test again, and most of the
problems still hadn't been fixed.

Moral: Bugs that actually impact customers get fixed; other bugs are low
priority.  Corollary: since bug reports often come in faster than they can
be fixed, it's rare that the vendor gets to the non-impacting bugs (some
might be fixed if it's convenient to do so while working on "real" bugs).

-- 
Barry Margolin, ··············@level3.com
Genuity Managed Services, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Paul F. Dietz
Subject: Re: Standards compliance question
Date: 
Message-ID: <2OOcnXtAW_zRROajXTWcqA@dls.net>
Barry Margolin wrote:

> I'm reminded of the CACM article from the 80's, where they fed random input
> data to a number of Unix programs, to see how they reacted.  Many of them
> dumped core, often due to buffer overflows or other poor input validity
> checking, rather than producing reasonable diagnostics.  They reported the
> problems to the vendors.  5 or 10 years later they published another
> article revisiting this issue.  They ran the test again, and most of the
> problems still hadn't been fixed.
> 
> Moral: Bugs that actually impact customers get fixed; other bugs are low
> priority.  Corollary: since bug reports often come in faster than they can
> be fixed, it's rare that the vendor gets to the non-impacting bugs (some
> might be fixed if it's convenient to do so while working on "real" bugs).

'Fuzz testing', it was called.

http://www.cs.wisc.edu/~bart/fuzz/fuzz.html

These results also illuminate the free-vs-commercial debate we've been
having here -- they found that the GNU/Linux utilities were more
reliable than their commercial counterparts.

I also remember a random testing project I saw a few years ago from
DEC (before it was part of HP, and before it was part of Compaq).  They
had a system for generating random C programs.  These programs were
fed to C compilers to perform differential testing.  If a program caused
a compiler to crash or hang, they'd found a bug; if a program caused two C
compilers to generate code that did different things they checked the
C program to make sure it was legal.  If so, they'd found a bug.

This approach is interesting because it can generate enormous numbers
of tests with a fixed investment in programmer time, leveraging any idle
computers you have.  They did encounter a problem, though: the testing
revealed utterly bizarre bugs that no human-written program would ever
stimulate.  The teams maintaining the DEC C compiler refused to fix these
bugs -- after all, no customer would ever see them.  As a result, the interesting
bugs were hidden in a stream of programs exhibiting the unfixed bizarre bugs.

http://research.compaq.com/wrl/DECarchives/DTJ/DTJT08/DTJT08HM.HTM
http://nar-associates.com/site/sdr/projects/ddt/download/kits/

DEC/Compaq/HP has a patent on this testing method, but it doesn't apply
if the compiler is in the same program as the generated/executed code, so lisp
is safe. :)  I've thought testing (a subset of) lisp would be really
easy using this approach, instead of being forced to use Tcl like they did.
But for this approach to be useful the implementors will have to be willing
to fix the easily stimulated weird bugs with no immediate customer impact.
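(A minimal sketch of what I have in mind, assuming RANDOM-FORM is some
generator of conforming, side-effect-free forms:)

  ;; Differential testing within one image: EVAL and COMPILE should
  ;; agree on the value of any conforming, side-effect-free form.
  (defun differential-test (n random-form)
    (loop repeat n
          for form = (funcall random-form)
          for interpreted = (eval form)
          for compiled = (funcall (compile nil `(lambda () ,form)))
          unless (equalp interpreted compiled)
            collect (list form interpreted compiled)))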

	Paul
From: Tim Bradshaw
Subject: Re: Standards compliance question
Date: 
Message-ID: <ey3of44ehaa.fsf@cley.com>
* Barry Margolin wrote:
> I'm reminded of the CACM article from the 80's, where they fed
> random input data to a number of Unix programs, to see how they
> reacted.  Many of them dumped core, often due to buffer overflows or
> other poor input validity checking, rather than producing reasonable
> diagnostics.  They reported the problems to the vendors.  5 or 10
> years later they published another article revisiting this issue.
> They ran the test again, and most of the problems still hadn't been
> fixed.

There's something interesting here which is sort-of tangential to the
conformance issue.  Some bugs turn out to be enormously more
important[1] than others - these ones need to get fixed first,
obviously.  And one of the things that, it seems to me, snuck under the
radar in these tests is what exactly the nature of the failure was and
how important that might be.  Plenty of the failures, if I remember
rightly, turned out to be buffer overflows.  And it turns out that
undetected buffer overflows are *critically* important bugs in many
programs, because people will indeed feed carefully-crafted `random'
input data to them in order to induce such a buffer overflow and cause
the program to execute some of this `random' data as code.

With hindsight we can look back and say that they (the vendors) ought
to have worried about these buffer overflows because of the terrible
consequences they would have, but probably no one saw it at the time
(was the article before or after the Morris worm?).  It would have
likely been much more important to fix a bug in awk (say) that caused
it to execute input data as code than to fix some minor conformance
bug.

The same thing goes for conformance and other bugs, I think.  I was
picked up the other day by Christophe Rhodes for arguing that
implementations should support arrays that have NIL element types.
Well, I think they should, but I think it's probably a fair way down
my list of priorities (so, really, he was right to pick me up on it
(:-)).  In particular I'd much rather have an implementation of CL
that had no buffer overflows, or an implementation that caught *all*
errors (even - especially - the ones the spec does not require you to
catch) than one that supported such arrays.

So I think conformance tests are interesting, but one needs to weigh
the noncomformances against each other and against other bugs - like
some bit of undefined behaviour being `delete all your files' in a
given implementation - which aren't even conformance issues.

--tim

Footnotes: [1] by which I mean `commercially important' or possibly
`safety-wise important' or something that makes `important' have some
meaning.
From: Russell McManus
Subject: Re: Standards compliance question
Date: 
Message-ID: <87vfycjrn6.fsf@thelonious.dyndns.org>
Tim Bradshaw <···@cley.com> writes:

> Plenty of the failures, if I remember rightly, turned out to be
> buffer overflows.  And it turns out that undetected buffer overflows
> are *critically* important bugs in many programs, because people
> will indeed feed carefully-crafted `random' input data to them in
> order to induce such a buffer overflow and cause the program to
> execute some of this `random' data as code.

In an interesting recent development, NetBSD has stolen the
implementation ideas from Irix that allow stack pages to be
non-executable, thus turning most buffer overflow attacks into denial
of service events (program crashes) rather than root exploits.  This
is a big deal, if eventually other OS'es pile on.

-russ
From: Paul F. Dietz
Subject: Protected stacks (was Re: Standards compliance question)
Date: 
Message-ID: <W5Scnc1GdODTUOajXTWcoA@dls.net>
Russell McManus wrote:

> In an interesting recent development, NetBSD has stolen the
> implementation ideas from Irix that allow stack pages to be
> non-executable, thus turning most buffer overflow attacks into denial
> of service events (program crashes) rather than root exploits.  This
> is a big deal, if eventually other OS'es pile on.

Doesn't this break gcc's implementation of the extension to C
involving nested functions?  IIRC, they implemented references to
these functions as pointers to a stack-allocated trampoline, because
they wanted function pointers to continue to fit in a single 32 bit
word.

	Paul
From: Daniel Barlow
Subject: Re: Protected stacks (was Re: Standards compliance question)
Date: 
Message-ID: <87el4z73sv.fsf@noetbook.telent.net>
"Paul F. Dietz" <·····@dls.net> writes:

> Doesn't this break gcc's implementation of the extension to C
> involving nested functions?  IIRC, they implemented references to
> these functions as pointers to a stack-allocated trampoline, because
> they wanted function pointers to continue to fit in a single 32 bit
> word.

At least in its initial version, it did, yes.  I don't know if that's
still the case, but my impression is that the people finding this
feature useful weren't using gcc's nested functions much anyway.

See http://www.x86-64.org/lists/discuss/msg03002.html and followups

There's a point to be made here about priorities, but I'm not entirely
sure what.  Perhaps that optimizing based on your current constituency
may have deleterious effects on other people who /might/ want to use
your system but currently don't.

This doesn't detract from the - obvious - point that implementors have
limited resources and have to prioritize them, but it does suggest
that they should be factoring in the priorities of the _potential_
market (in the example there, the ADA users) as well as of the actual
current user base (for x86-64, the development is probably being
driven mostly by Linux distributors).  And I'm sure that the
implementors do this already, because the alternative is that they end
up with a bunch of extremely happy customers who've already paid for
their licences, and nothing to sell to new markets with other
priorities.

Yes, prioritize.  No, don't prioritize exclusively by drawing up an
ordered list of reported bugs and working through from start to end.


-dan

-- 

   http://www.cliki.net/ - Link farm for free CL-on-Unix resources 
From: Paul F. Dietz
Subject: Re: Protected stacks (was Re: Standards compliance question)
Date: 
Message-ID: <Uoqcnd4IkPTJJ-GjXTWcow@dls.net>
Daniel Barlow wrote:

> At least in its initial version, it did, yes.  I don't know if that's
> still the case, but my impression is that the people finding this
> feature useful weren't using gcc's nested functions much anyway.

Yes, it still does.  This has recently been a topic of discussion
on the gcc development mailing list.  I don't know how often nested
functions are used in practice.

BTW, do lisps use stack-allocated trampolines for closures with
dynamic extent?


> This doesn't detract from the - obvious - point that implementors have
> limited resources and have to prioritize them, but it does suggest
> that they should be factoring in the priorities of the _potential_
> market (in the example there, the ADA users) as well as of the actual
> current user base (for x86-64, the development is probably being
> driven mostly by Linux distributors).

Interesting that you should mention Ada -- GNAT doesn't use trampolines,
I understand, so programs compiled with it can use protected stacks.

	Paul
From: Duane Rettig
Subject: Re: Protected stacks (was Re: Standards compliance question)
Date: 
Message-ID: <4k7er2m4w.fsf@beta.franz.com>
"Paul F. Dietz" <·····@dls.net> writes:

> Daniel Barlow wrote:
> 
> > At least in its initial version, it did, yes.  I don't know if that's
> > still the case, but my impression is that the people finding this
> > feature useful weren't using gcc's nested functions much anyway.
> 
> Yes, it still does.  This has recently been a topic of discussion
> on the gcc development mailing list.  I don't know how often nested
> functions are used in practice.
> 
> BTW, do lisps use stack-allocated trampolines for closures with
> dynamic extent?

Not Allegro CL.  Closure objects and/or their environment data are
allocated on the stack if dynamic-extent, but there is only ever
just one trampoline code bit for closures, and it is allocated
just below the lisp heap.

Allegro CL never allocates text (code) in the stack.  Not portable,
and too risky.  In earlier days, it was even hard getting some
operating system vendors to allow rwx access in the data segment -
they would cite the evils of self-modifying code and the separation
of text and data...

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: rydis (Martin Rydstr|m) @CD.Chalmers.SE
Subject: Re: Protected stacks (was Re: Standards compliance question)
Date: 
Message-ID: <w4cptoi9z8s.fsf@haddock.cd.chalmers.se>
"Paul F. Dietz" <·····@dls.net> writes:
> Interesting that you should mention Ada -- GNAT doesn't use trampolines,
> I understand, so programs compiled with it can use protected stacks.

I'm fairly certain that GNAT used to use something that didn't work
with Solaris' protected stacks just a few years back, when they were
new, I believe in 2.6 (or if they were made default, I'm not sure.)

Regards,

'mr

-- 
[Emacs] is written in Lisp, which is the only computer language that is
beautiful.  -- Neal Stephenson, _In the Beginning was the Command Line_
From: Paul F. Dietz
Subject: Re: Protected stacks (was Re: Standards compliance question)
Date: 
Message-ID: <DoGcnXfA9IrUPeCjXTWcqA@dls.net>
Martin Rydstr|m wrote:

> I'm fairly certain that GNAT used to use something that didn't work
> with Solaris' protected stacks just a few years back, when they were
> new, I believe in 2.6 (or if they were made default, I'm not sure.)

Yes, I misremembered.  GNAT still uses trampolines, but their removal
is 'quite high up on [the developers'] list for GNAT improvements'.

http://gcc.gnu.org/ml/gcc/2003-03/msg01010.html

	Paul
From: Lars Brinkhoff
Subject: Lisp programmers in Gothenburg, Sweden
Date: 
Message-ID: <85isu6woxg.fsf_-_@junk.nocrew.org>
Since everyone else is forming local groups, I might as well ask if
there are any comp.lang.lispers in Gothenburg other than me and Martin?

rydis (Martin Rydstr|m) @CD.Chalmers.SE writes:
> I'm fairly certain that GNAT used to use something that didn't work
From: Tobias Andersson
Subject: Re: Lisp programmers in Gothenburg, Sweden
Date: 
Message-ID: <3oh58v8l01q4usccii1bg98dgmmi32849u@4ax.com>
>Since everyone else is forming local groups, I might as well ask if
>there are any comp.lang.lispers in Gothenburg other than me and Martin?

If you find any, could you ask them to wake up their pals in Stockholm
for me? I have tried but failed several times.

/Tobias A
From: Patrik Nordebo
Subject: Re: Lisp programmers in Gothenburg, Sweden
Date: 
Message-ID: <slrnb85uuu.2a1f.patrik@pluto.elizium.org>
On Thu, 27 Mar 2003 10:41:12 +0100, Tobias Andersson
<·······················@optosweden.se> wrote:
>>Since everyone else is forming local groups, I might as well ask if
>>there are any comp.lang.lispers in Gothenburg other than me and Martin?
> 
> If you find any, could you ask them to wake up their pals in Stockholm
> for me? I have tried but failed several times.

I'm a comp.lang.lisper in Stockholm. I don't know of anyone else, alas,
but that makes two of us, which I suppose is better than one.
From: Tobias Andersson
Subject: Re: Lisp programmers in Gothenburg, Sweden
Date: 
Message-ID: <p4288v05e4j592ccf1fit96vs2vovq9cqi@4ax.com>
Is there a Swedish chapter of the ALU? I cannot say that I do "major
work" in Lisp myself... but like many others I'm a persistent lurker
in comp.lang.lisp.
From: Johan Bockgård
Subject: Re: Lisp programmers in Gothenburg, Sweden
Date: 
Message-ID: <yoijistqksp3.fsf@helm.dd.chalmers.se>
Tobias Andersson <·······················@optosweden.se> writes:

> I cannot say that I do "major work" in Lisp myself... but like many
> others I'm a persistent lurker in comp.lang.lisp.

So am I, not having done even minor work in Common Lisp (but I do
think it's cool). I use Emacs and Gnus on a daily basis, though, and
hang out on the *.emacs.* newsgroups. I know how to read and write a
little bit of elisp too.

-- 
What did you expect? The average Elisp hacker is so cool that he is in
no need of a table/desktop, placing his laptop where the name
indicates it belongs. But the precarious balance of the same is often
in need of structural support. -- DK about http://www.emacslisp.org
From: Jonas Öster
Subject: Re: Lisp programmers in Gothenburg, Sweden
Date: 
Message-ID: <slrnb86ge0.sh2.d97ost@licia.dtek.chalmers.se>
On 26 Mar 2003 08:46:19 +0100,
Lars Brinkhoff <·········@nocrew.org> wrote:
>Since everyone else is forming local groups, I might as well ask if
>there are any comp.lang.lispers in Gothenburg other than me and Martin?
>
>rydis (Martin Rydstr|m) @CD.Chalmers.SE writes:
>> I'm fairly certain that GNAT used to use something that didn't work

I'm here. I do not use Lisp in any significant way, but I do lurk in c.l.l
and very long ago, I implemented a toy Lisp interpreter (similar to
Autolisp, which was what I knew about then).

Of course you'll find obscene numbers of Haskell programmers, though.

/Jonas Öster
From: rydis (Martin Rydstr|m) @CD.Chalmers.SE
Subject: Re: Lisp programmers in Gothenburg, Sweden
Date: 
Message-ID: <w4c7kah9mb3.fsf@haddock.cd.chalmers.se>
Lars Brinkhoff <·········@nocrew.org> writes:
> Since everyone else is forming local groups, I might as well ask if
> there are any comp.lang.lispers in Gothenburg other than me and Martin?

I seem to recall that Robert Feldt has posted here once or twice, but
I might be confusing c.l.l with LL1, where he posts more frequently.

Regards,

'mr

-- 
[Emacs] is written in Lisp, which is the only computer language that is
beautiful.  -- Neal Stephenson, _In the Beginning was the Command Line_
From: Gisle Sælensminde
Subject: Re: Protected stacks (was Re: Standards compliance question)
Date: 
Message-ID: <slrnb8448b.gi8.gisle@apal.ii.uib.no>
rydis (Martin Rydstr|m) @CD.Chalmers.SE wrote:
> "Paul F. Dietz" <·····@dls.net> writes:
>> Interesting that you should mention Ada -- GNAT doesn't use trampolines,
>> I understand, so programs compiled with it can use protected stacks.
> 
> I'm fairly certain that GNAT used to use something that didn't work
> with Solaris' protected stacks just a few years back, when they were
> new, I believe in 2.6 (or if they were made default, I'm not sure.)

The problem was with taking the address of a nested function; GNAT emits
trampolines for such code.  It is not the most commonly used feature in
Ada, but the GNAT compiler, which is written in Ada itself, uses the
feature.  This means that you can't compile Ada programs at all, even
ones that don't use the feature.  I ran into that problem with the code
for my master's thesis when they patched the Solaris installation at the
computer science department: I could suddenly not continue developing my
program, though I could still run the existing version of it.

On Linux this was no problem, even with a non-executable stack, since
the similar patch for Linux could recognize a trampoline and let it
execute.  The reason is that glibc uses (or at least used) trampolines;
with a non-executable stack a glibc-based Linux would not even boot.  On
IRIX, GNAT either doesn't use trampolines, or the stack is executable.

--
Gisle Sælensminde  
Computational biology unit, University of Bergen, Norway
Email: ·····@cbu.uib.no 
Biology easily has 500 years of exciting problems to work on. (Donald Knuth)
From: Duane Rettig
Subject: Re: Protected stacks (was Re: Standards compliance question)
Date: 
Message-ID: <4of43clmy.fsf@beta.franz.com>
Daniel Barlow <···@telent.net> writes:

> There's a point to be made here about priorities, but I'm not entirely
> sure what.  Perhaps that optimizing based on your current constituency
> may have deleterious effects on other people who /might/ want to use
> your system but currently don't.

That is correct.  The customer set on which to base priorities consists
of both the current customer base (usually at highest priority) and the
set of potential customers.  It's just a lot harder to determine that
second set.  Some effort should be made: it is hard to keep income
growing by just maintaining the current customer base, and it is hard
to keep expenses from soaring out of hand if you assume that _everyone_
is a potential customer and thus make everything a priority.

> This doesn't detract from the - obvious - point that implementors have
> limited resources and have to prioritize them, but it does suggest
> that they should be factoring in the priorities of the _potential_
> market (in the example there, the ADA users) as well as of the actual
> current user base (for x86-64, the development is probably being
> driven mostly by Linux distributors).  And I'm sure that the
> implementors do this already, because the alternative is that they end
> up with a bunch of extremely happy customers who've already paid for
> their licences, and nothing to sell to new markets with other
> priorities.
> 
> Yes, prioritize.  No, don't prioritize exclusively by drawing up an
> ordered list of reported bugs and working through from start to end.

The real trick is in identifying these markets; a job for really good
marketing research.  And it's not just potential customers; it's also
customers that might go away.  At the turn of the last century the
buggy-whip manufacturers might have benefitted from some marketing
research identifying automobiles as a threat.  And more recently, we
were mostly prepared for the dot-com bust because of knowledge we had
of the weaknesses in that sector of our customer base.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Tim Bradshaw
Subject: Re: Standards compliance question
Date: 
Message-ID: <ey3bs04l4hm.fsf@cley.com>
* Russell McManus wrote:

> In an interesting recent development, NetBSD has stolen the
> implementation ideas from Irix that allow stack pages to be
> non-executable, thus turning most buffer overflow attacks into denial
> of service events (program crashes) rather than root exploits.  This
> is a big deal, if eventually other OS'es pile on.

Solaris has had this since at least 2.6, I don't think it's that
recent...

--tim
From: Russell McManus
Subject: Re: Standards compliance question
Date: 
Message-ID: <87ptohkft0.fsf@thelonious.dyndns.org>
Tim Bradshaw <···@cley.com> writes:

> * Russell McManus wrote:
> 
> > In an interesting recent development, NetBSD has stolen the
> > implementation ideas from Irix that allow stack pages to be
> > non-executable, thus turning most buffer overflow attacks into denial
> > of service events (program crashes) rather than root exploits.  This
> > is a big deal, if eventually other OS'es pile on.
> 
> Solaris has had this since at least 2.6, I don't think it's that
> recent...

I agree that the implementation idea is not recent. I did not know
that this feature was in Solaris, but I was aware that Irix has had it
for some time.  

What is recent is the feature's inclusion in NetBSD.  If Linux,
FreeBSD, etc. pick it up, it may have a significant security effect.
Readers of bugtraq will note the high percentage of incident reports
that rely on buffer overflows to gain a root exploit.

-russ
From: Michael Livshin
Subject: Re: Standards compliance question
Date: 
Message-ID: <s3r890qs56.fsf@laredo.verisity.com.cmm>
Russell McManus <···············@yahoo.com> writes:

> In an interesting recent development, NetBSD has stolen the
> implementation ideas from Irix that allow stack pages to be
> non-executable, thus turning most buffer overflow attacks into denial
> of service events (program crashes) rather than root exploits.  This
> is a big deal, if eventually other OS'es pile on.

is that something that can be configured individually for each process
invocation?  if not, there is a possibility that some native-compiling
dynamic language implementations will not be amused.

-- 
All ITS machines now have hardware for a new machine instruction --
SETS
Set to Self.
Please update your programs.
From: Håkon Alstadheim
Subject: Re: Standards compliance question
Date: 
Message-ID: <m08yv6xvwz.fsf@alstadhome.dyndns.org>
Barry Margolin <··············@level3.com> writes:

> In article <··············@cartan.de>, Nils Goesche  <······@cartan.de> wrote:
> >"Paul F. Dietz" <·····@dls.net> writes:
> >
> >> Kent M Pitman wrote:
> >> 
> >> > In the end, the only effect of all this testing is that _maybe_
> >> > an implementation or two will be found unable to conform.
> >> 
> >> Kent, as you may be aware, I've been putting together an ANSI CL
> >> compliance test suite.

> >
> >FWIW, I think what you're doing is great. I hope your suite will be
> >downloadable somewhere, some time. And everybody agrees, I think,
> >that standard compliance is not only nice but important. But let's
> >not get carried away too much.
> 
> I agree.
> 
> If I were a vendor, and I were faced with a choice of fixing problems found
> by a compliance test suite, or fixing problems that real users had
> reported, which do you think I'd tackle first?

Or maybe the vendor could devise 2-3 tests that tripped up on the
user's problem, get the tests included in the suite, fix the user's
problem and suddenly be 2-3 points ahead of competitors?
-- 
Håkon Alstadheim, hjemmepappa.
From: Paul F. Dietz
Subject: Re: Standards compliance question
Date: 
Message-ID: <elidnVSqAo4eUeCjXTWcqg@dls.net>
Håkon Alstadheim wrote:

> Or maybe the vendor could devise 2-3 tests that tripped up on the
> user's problem, get the tests included in the suite, fix the user's
> problem and suddenly be 2-3 points ahead of competitors?

The test suite is not designed to produce a numerical measure of
compliance.  It's not fair to say 'Lisp X fails N tests, but Lisp
Y fails 2N tests, so X is more compliant than Y.'  This is because
there's a lot of redundancy in some tests, so some bugs will stimulate
a few failures while other bugs will cause many more.  The test suite
is also not yet 'complete'.

Having said that, I welcome tests that should be valid in any
compliant implementation and that have also come up in practice.

	Paul
From: Steven M. Haflich
Subject: Re: Standards compliance question
Date: 
Message-ID: <3E79223F.7010201@alum.mit.edu>
Paul F. Dietz wrote:
> Barry Margolin wrote:
> 
>> If a conformance test exists, it could be used as a compromise -- 
>> "conforms
>> to the standard" might be treated as equivalent to "passes the 
>> conformance
>> test".
> 
> 
> IMO, it's a shame the CL standard didn't come with a set of agreed-upon
> conformance tests.

In the early years of the X3J13 effort, there was a small group within
the committee who considered implementing such a test.  Eventually the
job was considered much larger than the perceived benefit, especially
considering that there were other more central tasks.

But to reconsider: what would it mean to have a conformance test?
It would mean next to nothing with regard to the standard, unless the
conformance test were a part of the standard.  (In SDO terminology,
this would mean "a normative part of the standard.")  That is very
hard to manage; you will note that W3C does not generally provide
conformance tests even for relatively small, simple [sic gaack]
specifications like XML.

Every implementation has many many nonconformances.  We still discover
blatant nonconformances in the implementation that I have worked on for 
18 years.  Those nonconformances that are evident in code that users
actually write, of course, were discovered and eliminated quickly.
Those nonconformances that depend upon obscure details or usage >> that
no one ever writes << are the ones that occasionally still come to
light.  It is good to discover and fix these (and thanks for your
ongoing work on your test suite) but empirically, most of these things
really aren't so important.

If I were argumentative, I might posit that it would be more helpful to
write scaling tests -- to test somehow that a particular implementation 
doesn't fail too soon as problems get large, or that there aren't
unnecessary O(n^2) functions where O(n) or O(n log n) are possible.  However, I
don't want to put myself in the position of having to argue _against_
conformance testing, so I'll merely suggest the alternative and no doubt
some idiot will pick up the argument for me.
From: Paul F. Dietz
Subject: Scalability tests (was Re: Standards compliance question)
Date: 
Message-ID: <ykmdnQR_TIWOw-GjXTWcqQ@dls.net>
Steven M. Haflich wrote:

> If I were argumentative, I might posit that it would be more helpful to
> write scaling tests -- to test somehow that a particular implementation 
> doesn't fail too soon as problems get large, or that there aren't
> unnecessary O(n^2) functions where O(n) or O(n log n) are possible.

That's a good idea.  My favorite location for nonscalability in
some current lisps is SUBTYPEP.  Obvious algorithms here are exponential
time or worse; there has to be some way to terminate computation when
bad cases are encountered. Consider this expression

(subtypep
  '(and
    (not (or
	 (cons integer (cons symbol  integer))
	 (cons symbol  (cons integer symbol))
	 (cons symbol  (cons symbol  integer))
	 (cons symbol  (cons integer integer))
	 (cons integer (cons integer symbol))
	 (cons symbol  (cons symbol  symbol))
	 (cons integer (cons integer integer))
	 (cons integer (cons symbol  symbol))))
    (cons (or symbol integer)
	 (cons (or symbol integer)
	       (or symbol integer))))
  nil)


This runs for a long time on one current lisp (I interrupted it after
ten minutes on a recently purchased Athlon XP+).

	Paul
From: Rob Warnock
Subject: Re: Standards compliance question
Date: 
Message-ID: <ee6dnW9hXeNeLeSjXTWc-w@speakeasy.net>
Steven M. Haflich <·················@alum.mit.edu> wrote:
+---------------
| Even though you think you know what is required from the language,
| and what is prohibited, the cracks are many and wide.
| 
| What does your favorite conforming implementation do with the following:
| 
| (make-package (copy-seq "FOO"))
| (setf (aref (package-name (find-package "FOO")) 2) #\B)
| (find-package "FOB")
| 
| I can find no prohibition in the ANS against executing this sequence,
| and if I were sufficiently ignorant of both common practice and
| implementation, I might expect it to work.
+---------------

Yes, but if one had hung out here in c.l.l. and seen the number of
catcalls about modifying constants [yes, yes, even though you explicitly
made a mutable copy of "FOO" in this case] one might hesitate a moment
about trying to do anything quite so "under-the-covers". *I* certainly
got an uneasy feeling about it, and I'm relatively new to CL (though not
to Scheme or its implementation).

For example, it seems reasonable to me for an implementation to use
a hash table internally to map package names to packages, and we *do*
have a restriction [CLHS 18.1.2] against modifying hash table keys!
[Also see CLHS 18.1.2.2.2.]

Plus, it says under Functions INTERN and MAKE-SYMBOL and Class SYMBOL,

	Once a string has been given as the name argument to MAKE-SYMBOL
	[or to INTERN, in the situation where a new symbol is created],
	the consequences are undefined if a subsequent attempt is made
	to alter that string. 
and:
	Every symbol has a name, and the consequences are undefined if
	that name is altered.

Packages seem to me to be enough like symbols to cause one to be at least
a little bit hesitant about risking mutating their names, too.  Enough
that one might have to ask, "Well, how *should* I think about doing this?"
One might look again at the spec [as I did] and find that in the case of
packages there's a way which *is* explicitly sanctioned:

        (make-package (copy-seq "FOO"))  ==>  #<PACKAGE FOO>

        (let* ((package (find-package "FOO"))
               (new-name (copy-seq (package-name package))))
          (setf (aref new-name 2) #\B)
          (rename-package package new-name))  ==>  #<PACKAGE FOB>

        (find-package "FOB")  ==>  #<PACKAGE FOB>


-Rob

-----
Rob Warnock, PP-ASEL-IA		<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607