From: Dowe Keller
Subject: newbie: please don't smash my case
Date: 
Message-ID: <slrn8maqmr.ahq.dowe@localhost.localdomain>
Hello, I am a lisp newbie with a slight problem: the following code works
fine except that it smashes case when it converts from symbols to strings
(I hope my jargon is correct).  How do I keep the case information intact?


(defun quote-all (lst)
    (if (not (null lst))
	(cons (string (car lst))(quote-all (cdr lst)))))

-- 
····@sierratel.com
---
Garbage In, Gospel Out

From: Paul Rudin
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <m3bt0az6d1.fsf@cara.scientia.com>
>>>>> "Dowe" == Dowe Keller <····@krikkit.localdomain> writes:

 Dowe> Hello, I am a lisp newbie with a slight problem, the following code works
 Dowe> fine except that it smashes case when it converts from symbols to strings
 Dowe> (I hope my jargon is correct).  How do I keep the case information intact?


 Dowe> (defun quote-all (lst)
 Dowe>     (if (not (null lst))
 Dowe> 	(cons (string (car lst))(quote-all (cdr lst)))))

Your problem is that string, given a symbol, returns its name, which
is normally uppercase. You can change the case using e.g. 

....
  (write-to-string (car lst) :case :downcase) ...


Incidentally your code can be expressed more succinctly by the
equivalent:

(defun quote-all (lst)
  (mapcar #'string lst))
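
Putting the two suggestions together, a minimal sketch (the function
name here is made up, and it assumes you want lower-case strings):

(defun quote-all-downcased (lst)
  (mapcar (lambda (sym)
            (write-to-string sym :case :downcase :escape nil))
          lst))

;; (quote-all-downcased '(foo bar baz)) => ("foo" "bar" "baz")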


-- 
class struggle SDI Rule Psix CIA Noriega assassination Clinton $400
million in gold bullion kibo genetic Saddam Hussein FSF explosion
supercomputer ammunition
From: Rudolf Schlatte
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <lxn1jupc4z.fsf@ist.tu-graz.ac.at>
····@krikkit.localdomain (Dowe Keller) writes:

> Hello, I am a lisp newbie with a slight problem, the following code
> works fine except that it smashes case when it converts from symbols
> to strings (I hope my jargon is correct).  How do I keep the case
> information intact?
> 
> (defun quote-all (lst)
>     (if (not (null lst))
> 	(cons (string (car lst))(quote-all (cdr lst)))))

If I understand you, you mean this effect:

> (quote-all '(foo bar baz))
("FOO" "BAR" "BAZ")

That's not a bug in your code.  Symbols in Lisp are case-insensitive,
look:

> 'foo
FOO


How that came to be and if it's a good idea still is being debated by
people much more experienced than me.  Just some things you could
explore, depending on what you want to do:

- If you want to process textual information, use strings, not
symbols.  A symbol is much more than just some characters (it can have
a value, a function value and a property list, to begin with).

- If you want to use lowercase symbols, you can quote them like this:

> '|fOo|
|fOo|
> (string '|fOo|)
"fOo"

- If you are feeling adventurous, you can make the lisp reader case
sensitive.  Read all about it in the HyperSpec:

http://www.xanalys.com/software_tools/reference/HyperSpec/Body/acc_readtable-case.html

The operative word is "adventurous" here.  Avoid clobbering the
standard readtable or be surprised at the amount of breakage that
occurs -- including having to type everything in UPPER CASE.  Instead,
use something like:

(let ((*readtable* (copy-readtable nil)))
   (setf (readtable-case *readtable*) :preserve)
   (do-something))
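
For instance, a small sketch of the effect (read-from-string returns
the object it read plus the index where it stopped):

(let ((*readtable* (copy-readtable nil)))
  (setf (readtable-case *readtable*) :preserve)
  (read-from-string "(Foo BAR baz)"))
=> (|Foo| BAR |baz|), 13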

And while you're exploring, take a look at the mapping functions.
Your example function could be written as

(defun quote-all (lst)
  (mapcar #'string lst))


Have fun,

Rudi
From: Simon Brooke
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <m28zvekqsf.fsf@gododdin.internal.jasmine.org.uk>
Rudolf Schlatte <········@ist.tu-graz.ac.at> writes:

> ····@krikkit.localdomain (Dowe Keller) writes:
> 
> > Hello, I am a lisp newbie with a slight problem, the following code
> > works fine except that it smashes case when it converts from symbols
> > to strings (I hope my jargon is correct).  How do I keep the case
> > information intact?
> > 
> > (defun quote-all (lst)
> >     (if (not (null lst))
> > 	(cons (string (car lst))(quote-all (cdr lst)))))
> 
> If I understand you, you mean this effect:
> 
> > (quote-all '(foo bar baz))
> ("FOO" "BAR" "BAZ")
> 
> That's not a bug in your code.  Symbols in Lisp are case-insensitive,
> look:
> 
> > 'foo
> FOO

Errr... this needs a bit of expansion. Symbols in Common LISP are
case-insensitive. Common LISP is the most prevalent LISP in use these
days, but that doesn't make it the only one. Symbols in InterLISP and
Portable Standard LISP, for example, are case sensitive.

> How that came to be and if it's a good idea still is being debated by
> people much more experienced than me.

Allegedly[1], part of the US DoD requirement was that Common LISP
should be usable with a type of terminal which had no shift key, and
could display only upper case. The DoD was very influential in the
beginnings of the Common LISP project.

Like the separation of function and value cells, it's a feature of
Common LISP we're stuck with now, but which I don't think anyone any
longer seriously defends.

[1] Several people have given me this story independently. I can't at
this moment find a reference for it.

-- 
·····@jasmine.org.uk (Simon Brooke) http://www.jasmine.org.uk/~simon/
         'Victories are not solutions.'
        ;; John Hume, Northern Irish politician, on Radio Scotland 1/2/95
        ;; Nobel Peace Prize laureate 1998; few have deserved it so much
From: Frank A. Adrian
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <4xo95.108$kT5.89248@news.uswest.net>
Simon Brooke <·····@jasmine.org.uk> wrote in message
> Like the separation of function and value cells, it's a feature of
> Common LISP we're stuck with now, but which I don't think anyone any
> longer seriously defends.

Actually, several do.  Just look at the last few months' postings on this
newsgroup.  Every so often a Schemer tries to make the point that Scheme's
"Lisp-1" approach is better than Common Lisp's "Lisp-2" approach and gets
shouted down.  As for myself, I've come to like the fact that I can name a
function parameter "list" without having the system go into a tizzy, even
though I am still somewhat dismayed each time I need to use a "funcall".
But, all-in-all, it's just another choice in language design that can be
easily defended either way.

faa
From: Christopher J. Vogt
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <3965FE86.55D9F50A@computer.org>
"Frank A. Adrian" wrote:
> 
> Simon Brooke <·····@jasmine.org.uk> wrote in message
> > Like the separation of function and value cells, it's a feature of
> > Common LISP we're stuck with now, but which I don't think anyone any
> > longer seriously defends.
> 
> Actually, several do.  Just look at the last few months' postings on this
> newsgroup.  Every so often a Schemer tries to make the point that Scheme's
> "Lisp-1" approach is better than Common Lisp's "Lisp-2" approach and gets
> shouted down.  As for myself, I've come to like the fact that I can name a
> function parameter "list" without having the system go into a tizzy, even
> though I am still somewhat dismayed each time I need to use a "funcall".
> But, all-in-all, it's just another choice in language design that can be
> easily defended either way.

maybe we need a new news group: comp.lang.common-lisp
From: Erik Naggum
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <3171974984694744@naggum.net>
* "Christopher J. Vogt" <····@computer.org>
| maybe we need a new news group: comp.lang.common-lisp

  comp.lang.lisp.other is probably better.

#:Erik
-- 
  If this is not what you expected, please alter your expectations.
From: Joe Marshall
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <vgyiue4v.fsf@alum.mit.edu>
Simon Brooke <·····@jasmine.org.uk> writes:

> Errr... this needs a bit of expansion. Symbols in Common LISP are
> case-insensitive. Common LISP is the most prevalent LISP in use these
> days, but that doesn't make it the only one. Symbols in InterLISP and
> Portable Standard LISP, for example, are case sensitive.

Common Lisp symbols *are* case sensitive:

USER(1): (eq 'foo '|foo|)
NIL

However, the CommonLisp reader may canonicalize the case of an
unescaped symbol before interning it.  (And the printer may
canonicalize the case before printing it, as well.)
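
A few REPL interactions illustrating the distinction (all standard
behaviour under the default :upcase readtable):

(symbol-name 'foo)         => "FOO"   ; the reader upcased the name
(symbol-name '|foo|)       => "foo"   ; escaped, so preserved
(eq 'foo (intern "FOO"))   => T       ; intern itself is case sensitive
(eq 'foo (intern "foo"))   => NIL     ; different name, different symbol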

> Like the separation of function and value cells, it's a feature of
> Common LISP we're stuck with now, but which I don't think anyone any
> longer seriously defends.

There are certainly numerous vehement defenders of this.
From: Rainer Joswig
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <rainer.joswig-017530.12005008072000@news.is-europe.net>
In article <············@alum.mit.edu>, Joe Marshall 
<·········@alum.mit.edu> wrote:

> Simon Brooke <·····@jasmine.org.uk> writes:
> 
> > Errr... this needs a bit of expansion. Symbols in Common LISP are
> > case-insensitive. Common LISP is the most prevalent LISP in use these
> > days, but that doesn't make it the only one. Symbols in InterLISP and
> > Portable Standard LISP, for example, are case sensitive.
> 
> Common Lisp symbols *are* case sensitive:
> 
> USER(1): (eq 'foo '|foo|)
> NIL
> 
> However, the CommonLisp reader may canonicalize the case of an
> unescaped symbol before interning it.  (And the printer may
> canonicalize the case before printing it, as well.)

Well, what also happens is that these symbols are INTERNed
in a package. Depending on what INTERNing does you can have
the same symbol or not. The behaviour of Common Lisp's
interning of symbols can be found in the standard docs.

-- 
Rainer Joswig, BU Partner,
ISION Internet AG, Steinhöft 9, 20459 Hamburg, Germany
Tel: +49 40 3070 2950, Fax: +49 40 3070 2999
Email: ····················@ision.net WWW: http://www.ision.net/
From: Steven M. Haflich
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <3967677E.B534B640@pacbell.net>
Rainer Joswig wrote:

> > However, the CommonLisp reader may canonicalize the case of an
> > unescaped symbol before interning it.  (And the printer may
> > canonicalize the case before printing it, as well.)
> 
> Well, what also happens is that these symbols are INTERNed
> in a package. Depending on what INTERNing does you can have
> the same symbol or not. The behaviour of Common Lisp's
> interning of symbols can be found in the standard docs.

I know Rainer knows better, but he has expressed something
sloppily in a way that may someday confuse someone.

INTERN (and FIND-SYMBOL) operate on SYMBOL-NAMEs, which are
STRINGs.  INTERN and FIND-SYMBOL never manipulate the case
of characters in these names.  Conceptually, they compare
string names as if by EQUAL and are therefore case sensitive
and case preserving.

The reader does indeed call INTERN, and the reader does do
various things with case, but all that is the business of the
reader, not INTERN.

The reader normally shifts all lower-case characters in a symbol
name to upper case.  There is a settable mode in a READTABLE
which can change this to DOWNCASE, PRESERVE, or INVERT.  The
standard reader syntax allows any character (e.g. whitespace
and lower-case) to be escaped so that it loses any special
syntactic meaning and is not case shifted.  See backslash and
vertical bar.

The printer normally preserves case on output, although if
*PRINT-ESCAPE* or other controls are true, it may add escapes
necessary so that the printed output could be read back
the same.  The actual case behavior of the printer is controlled
by a combination of READTABLE-CASE and *PRINT-CASE* according
to a set of rules that is far too complex to be worth learning,
much less remembering.  You can see them in ANS 22.1.3.3.2.
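
Two small examples of those rules under the default (:UPCASE) readtable
-- just illustrations, not a summary of the whole table:

(let ((*print-case* :downcase)) (prin1-to-string 'foo))    => "foo"
(let ((*print-case* :downcase)) (prin1-to-string '|Foo|))  => "|Foo|"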

A point often missed by beginners (because it only _very_ rarely
has any effect) is that SYMBOL-NAMEs are not used when Lisp
executes.  In particular, comparison of symbols by EQ, EQL,
EQUAL, and EQUALP do not look at the SYMBOL-NAMEs or the
SYMBOL-PACKAGEs of the argument SYMBOLs.  SYMBOLs are first-class
objects for which all four equality predicates operate as by EQ.
It is rare for CL code ever to call INTERN or FIND-SYMBOL, and the
only important parts of the implementation that call them are
the reader and printer.  SYMBOL-NAMES are not referenced at all
during the execution of random Lisp code, unless that code explicitly 
calls INTERN, FIND-SYMBOL, various package functions, or invokes
the reader or printer on SYMBOLs.  Informally, all the messy case
manipulations of CL happen at read and print time.
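
For instance (standard behaviour):

(equal   'foo (make-symbol "FOO"))  => NIL  ; same name, different symbol
(equalp  'foo (make-symbol "FOO"))  => NIL  ; symbols compare as by EQ
(string= 'foo (make-symbol "FOO"))  => T    ; comparing the names explicitly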

Bill Ackerman told me the following more than 30 years ago:
Lisp would be the perfect computing language if only it didn't
have I/O.
From: Rainer Joswig
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <rainer.joswig-879176.20570208072000@news.is-europe.net>
In article <·················@pacbell.net>, ···@alum.mit.edu wrote:

> > Well, what also happens is that these symbols are INTERNed
> > in a package. Depending on what INTERNing does you can have
> > the same symbol or not. The behaviour of Common Lisp's
> > interning of symbols can be found in the standard docs.
> 
> I know Rainer knows better, but he has expressed something
> sloppily in a way that may someday confuse someone.
> 
> INTERN (and FIND-SYMBOL) operate on SYMBOL-NAMEs, which are
> STRINGs.  INTERN and FIND-SYMBOL never manipulate the case
> of characters in these names.  Conceptually, they compare
> string names as if by EQUAL and are therefore case sensitive
> and case preserving.
> 
> The reader does indeed call INTERN,

Unless you create non-interned symbols. ;-)

? (eq ':foo ':foo)
T
? (eq '#:foo '#:foo)
NIL

> and the reader does do
> various things with case, but all that is the business of the
> reader, not INTERN.

Yes. Hence my hint to read the docs to see what
INTERN is doing.

But if you lean back and look at it more abstractly, you
intern some object into a data structure. Later
you may want to retrieve the object based on some
input. You also may want to iterate in some order
over this data structure. Removing objects may also
be nice.

In this case we have for packages:

- data structure:            package
- interning in packages:     INTERN
- retrieving from packages:  FIND-SYMBOL
- iterating:                 DO-SYMBOLS, ...
- removing:                  UNINTERN
- ...

So in some Lisp system it might be possible
to think of an INTERN operation that uses
a different mechanism and/or a different
data structure. In Symbolics Genera, for example,
INTERN is also sensitive to character styles
in the strings.  So when you intern "FOO" and
"FOO" you won't get the same symbol if the two
strings carry different styles. Yes, this is
ancient and only in Symbolics Genera.

Btw., in CL-HTTP you often find these mechanisms.
An example for URLs:

- data structure:            EQUAL hashtable
- interning of URLs:         URL:INTERN-URL
- retrieving URLs:           URL:INTERN-URL :if-does-not-exist :SOFT
- iterating:                 URL:MAP-URL-TABLE
- removing:                  URL:UNINTERN-URL

Hey, even more confusion.

-- 
Rainer Joswig, BU Partner,
ISION Internet AG, Steinhöft 9, 20459 Hamburg, Germany
Tel: +49 40 3070 2950, Fax: +49 40 3070 2999
Email: ····················@ision.net WWW: http://www.ision.net/
From: Erik Naggum
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <3172082763674005@naggum.net>
* Rainer Joswig
| Unless you create non-interned symbols. ;-)

  So complete the list of functions by adding make-symbol.  Furrfu.

#:Erik
-- 
  If this is not what you expected, please alter your expectations.
From: Rob Warnock
Subject: gensym vs make-symbol [was: Re: newbie: please don't smash my case ]
Date: 
Message-ID: <8k8t64$374i0$1@fido.engr.sgi.com>
Erik Naggum  <····@naggum.net> wrote:
+---------------
| * Rainer Joswig
| | Unless you create non-interned symbols. ;-)
| 
|   So complete the list of functions by adding make-symbol.  Furrfu.
+---------------

Now that you bring it up, here's a (style?) question I've been wanting
to ask (still being somewhat only a casual user of CL):

Other than reducing possible confusion while debugging with "macroexpand",
is there any real reason to prefer (gensym "foo") over (make-symbol "foo")
when defining macros that need non-capturing local variables? Both produce
fresh uninterned symbols which can't conflict with (capture) any other symbol.
So in what circumstances is one preferred over the other, or vice-versa?


-Rob

p.s. I think I know why one doesn't use literal uninterned symbols for
such things, e.g.:

	(defmacro foo (arg1 arg2 &body body)
	  (let ((tmp '#:tmp))			; BUG!
	    ...stuff that uses ,tmp ...))

You only get *one* uninterned symbol when the macro is defined, which
gets used for all possible instances of the macro. So if you had nested
occurrences, in the expansion an inner occurrence of #:tmp could shadow
an outer one. But that can't happen with:

	(defmacro foo (arg1 arg2 &body body)
	  (let ((tmp (make-symbol "tmp")))
	    ...stuff that uses ,tmp ...))

because "make-symbol" gets called for each occurrence.

-----
Rob Warnock, 41L-955		····@sgi.com
Applied Networking		http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673
1600 Amphitheatre Pkwy.		PP-ASEL-IA
Mountain View, CA  94043
From: Erik Naggum
Subject: Re: gensym vs make-symbol [was: Re: newbie: please don't smash my case ]
Date: 
Message-ID: <3172117141705385@naggum.net>
* Rob Warnock
| Other than reducing possible confusion while debugging with
| "macroexpand", is there any real reason to prefer (gensym "foo")
| over (make-symbol "foo") when defining macros that need
| non-capturing local variables? Both produce fresh uninterned symbols
| which can't conflict with (capture) any other symbol.  So in what
| circumstances is one preferred over the other, or vice-versa?

  Well, I use make-symbol exclusively and see no reason for gensym or
  gentemp at all.  I tend to set *print-circle* to t, anyway.

#:Erik
-- 
  If this is not what you expected, please alter your expectations.
From: Christopher Browne
Subject: Re: gensym vs make-symbol [was: Re: newbie: please don't smash my case ]
Date: 
Message-ID: <slrn8mhekh.4di.cbbrowne@knuth.brownes.org>
Centuries ago, Nostradamus foresaw a time when Erik Naggum would say:
>* Rob Warnock
>| Other than reducing possible confusion while debugging with
>| "macroexpand", is there any real reason to prefer (gensym "foo")
>| over (make-symbol "foo") when defining macros that need
>| non-capturing local variables? Both produce fresh uninterned symbols
>| which can't conflict with (capture) any other symbol.  So in what
>| circumstances is one preferred over the other, or vice-versa?
>
> Well, I use make-symbol exclusively and see no reason for gensym or
> gentemp at all.  I tend to set *print-circle* to t, anyway.

Interesting.

That means that rather than macro-expanding into something that the
system provides, you generate whatever symbol name you want, right?
-- 
········@hex.net - <http://www.hex.net/~cbbrowne/>
Rules of the Evil Overlord #157. "Whenever plans are drawn up that
include a time-table, I'll post-date the completion 3 days after it's
actually scheduled to occur and not worry too much if they get
stolen." <http://www.eviloverlord.com/>
From: Kent M Pitman
Subject: Re: gensym vs make-symbol [was: Re: newbie: please don't smash my case ]
Date: 
Message-ID: <sfwn1jrrq6k.fsf@world.std.com>
····@rigden.engr.sgi.com (Rob Warnock) writes:

> Other than reducing possible confusion while debugging with "macroexpand",
> is there any real reason to prefer (gensym "foo") over (make-symbol "foo")

I'm reminded of the joke
 
  "Other than that, Mrs. Lincoln, how did you like the play?"

This is, after all, hardly a minor issue.  I think it's the key
reason, since sometimes there may be several calls to the same macro
in the same space yielding visually-otherwise-indistinguishable values
that you couldn't sort out otherwise.

Of course, MAKE-SYMBOL might be chosen in some case where such 
debugging wasn't needed, or where it was known that only one such
instance was to be made, or where non-program-readable visual brevity
might be needed.

MAKE-SYMBOL can, of course, also be used in bootstrapping package
configurations where you want to hold a symbol hostage for a little
while before you INTERN it.

And, finally, MAKE-SYMBOL is subprimitive to GENSYM.  You need it to
implement GENSYM.  So you might as well expose it just in case.  In case
someone wants to write a better GENSYM... as people often used to want
back when GENSYM didn't have as many options.  And even now sometimes
people want to vary it...
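
A minimal sketch of that relationship (the names below are invented;
real implementations also handle GENSYM's optional string/number
argument and its other details):

 (defvar *my-gensym-counter* 0)
 (defun my-gensym (&optional (prefix "G"))
   (make-symbol (format nil "~A~D" prefix (incf *my-gensym-counter*))))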
From: Steven M. Haflich
Subject: Re: gensym vs make-symbol [was: Re: newbie: please don't smash my case ]
Date: 
Message-ID: <3968EA69.49CD02AD@pacbell.net>
Rob Warnock wrote:
> Other than reducing possible confusion while debugging with "macroexpand",
> is there any real reason to prefer (gensym "foo") over (make-symbol "foo")
> when defining macros that need non-capturing local variables? Both produce
> fresh uninterned symbols which can't conflict with (capture) any other symbol.

You obviously realize that the additional behavior GENSYM provides
is that each generated symbol is visibly different from other
generated symbols.  This doesn't matter to the compiler, evaluator, or
other code walkers (which of course only depend on EQness of variable
names) but is helpful to human readers of macroexpanded code.

If it weren't for this, the save-space-at-any-cost luddite faction
would have invented some semistandard alternative to GENSYM that
would give all generated symbols the same (as in EQ) null symbol
name.  One could still differentiate these using *PRINT-CIRCLE*, but
this would hardly improve the readability of macroexpanded code.

GENSYM is a rather old function and I don't feel its behavior
is optimal for its typical use.  In particular, each time it is
used for macroexpansion the names of the generated variables are
numerically different.  This means that when my program throws
into the debugger and the stack backtrace shows gensyms, one cannot
usefully macroexpand the source functions to determine which
generated variable is which.  This has slowed me down many times
when debugging.

I wonder if some better conventions could be devised.  Suppose
the expansion of DEFMACRO wrapped the expander body in a MACROLET
for some GENSYM-like operator that would communicate the nested
origin of the macro.  That way, each macroexpansion (relative to
top level) would produce the "same" macroexpansion, where all
uninterned variable names would always have a reproducible
SYMBOL-NAME, even though the symbols themselves would remain
distinct, as required by hygiene.  This would enhance readability
and debuggability of macro-generated code.
From: ···@healy.washington.dc.us
Subject: Re: gensym vs make-symbol [was: Re: newbie: please don't smash my case ]
Date: 
Message-ID: <5166qcdl8o.fsf@zip.local>
>>>>> "Steven" == Steven M Haflich <·······@pacbell.net> writes:

    Steven> GENSYM is a rather old function and I don't feel its behavior
    Steven> is optimal for its typical use.  In particular, each time it is
    Steven> used for macroexpansion the names of the generated variables are
    Steven> numerically different.  This means that when my program throws
    Steven> into the debugger and the stack backtrace shows gensyms, one cannot
    Steven> usefully macroexpand the source functions to determine which
    Steven> generated variable is which.  This has slowed me down many times
    Steven> when debugging.

    Steven> I wonder if some better conventions could be devised.  Suppose
    Steven> the expansion of DEFMACRO wrapped the expander body in a MACROLET
    Steven> for some GENSYM-like operator that would communicate the nested
    Steven> origin of the macro.  That way, each macroexpansion (relative to
    Steven> top level) would produce the "same" macroexpansion, where all
    Steven> uninterned variable names would always have a reproducible
    Steven> SYMBOL-NAME, even though the symbols themselves would remain
    Steven> distinct, as required by hygiene.  This would enhance readability
    Steven> and debuggability of macro-generated code.

I've been thinking about this recently; specifically, I am trying to
make a regression tester for a bunch of nested macros.  It's hard to
test changes when every expansion (even without changes) produces
different symbols!  Though I could get away with make-symbol in some
spots, gensym is necessary for reasons elaborated previously in this
thread.

What I've come up with is using *gensym-counter*.  Prior to each
top-level expansion, bind this to the same fixed value.  Each
expansion should produce the same thing, no?  The loophole is that
there's no _guarantee_ that the code walker will expand things in the
same order (is there?) but in practice, I see no reason why it would
change.  There's a more significant chance that a macro rewrite will
produce functionally equivalent expansions, but that gensyms will be
called in different order, so the counter will not match.  In my case
I think the risk is minimal.

I haven't tried this yet, but I'm hopeful it will work.  Are there any
flaws in my reasoning?
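
A sketch of that approach, for concreteness -- comparing *printed*
expansions with the counter rebound to a fixed value; expansion-string
is just a hypothetical helper name:

 (defun expansion-string (form)
   (let ((*gensym-counter* 0)
         (*print-pretty* t))
     (write-to-string (macroexpand-1 form))))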

Liam
From: Kent M Pitman
Subject: Re: gensym vs make-symbol [was: Re: newbie: please don't smash my case ]
Date: 
Message-ID: <sfw8zv7v59x.fsf@world.std.com>
···@healy.washington.dc.us writes:

> I've been thinking about this recently, specifically, I am trying to
> make a regression tester for a bunch of nested macros.  It's hard to
> test changes when every expansion (even without changes) produces
> different symbols!  Though I could get away with make-symbol in some
> spots, gensym is necessary for reasons elaborated previously in this
> thread. [...]

Is there some reason that you don't just write a gensym-p predicate and
notice uninterned symbols going by and just assume that any pair of gensyms
both of which are unknown are test-case-equal, and then log the pair so that
they must be considered test-case-equal only to each other?  Something like:

 (defvar *test-case-equivalents*)
 (defmacro with-test-case (&body forms)
   `(call-with-test-case #'(lambda () ,@forms)))
 (defun call-with-test-case (thunk)
   (let ((*test-case-equivalents* (make-hash-table)))
     (funcall thunk)))
 (defun gensym-p (x)
   (and (symbolp x) 
        (not (symbol-package x))))
 (defun equivalent-gensym (x)
   (and (gensym-p x)
        (gethash x *test-case-equivalents*)))
 (defun register-equivalent-gensym (x y)
   (setf (gethash x *test-case-equivalents*) y))
 (defun test-case-equal (x y)
   (or (equal x y)
       (let ((eqv (equivalent-gensym x)))
         (when eqv
           (eq eqv y)))
       (when (and (gensym-p x) (gensym-p y))
         (register-equivalent-gensym x y)
         t)))

I didn't check this code, so it might be buggy, but you get the idea.
There is no dependence on counters here at all. Indeed, you can make
the WITH-TEST-CASE return the equivalence table as a secondary value
if you want...
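
A sketch of how this might be applied to whole expansions: walk the two
trees in parallel and let TEST-CASE-EQUAL judge the leaves (call it
inside WITH-TEST-CASE so the table is bound; expansion-equal is a name
invented here):

 (defun expansion-equal (x y)
   (if (and (consp x) (consp y))
       (and (expansion-equal (car x) (car y))
            (expansion-equal (cdr x) (cdr y)))
       (test-case-equal x y)))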
From: Erik Naggum
Subject: Re: gensym vs make-symbol [was: Re: newbie: please don't smash my case ]
Date: 
Message-ID: <3172397911664803@naggum.net>
* ···@healy.washington.dc.us
| Though I could get away with make-symbol in some spots, gensym is
| necessary for reasons elaborated previously in this thread.

  Huh?  To code, there is no difference between using make-symbol and
  gensym.  And if you compare uninterned symbols with string=, you're
  doing it the wrong way, anyway.  You have to do the same thing the
  circularity detector in the printer does, except you get away
  with a single pass.

| What I've come up with is using *gensym-counter*.  Prior to each
| top-level expansion, bind this to the same fixed value.  Each
| expansion should produce the same thing, no?

  No, the gensym'ed symbols are still unique to each invocation.
  That's the whole point.

| Are there any flaws in my reasoning?

  Yes, you have not understood what an uninterned symbol is.  They are
  symbols, and as such eq if you have more of the same, but they are
  _not_ the same symbol just because they print the same.

(eq '#:foo '#:foo)
=> nil

(eq 'foo 'foo)
=> t

(string= '#:foo '#:foo)
=> t

(eq (symbol-name '#:foo) (symbol-name '#:foo))
=> <unspecified>

(eq '#1=#:foo '#1#)
=> t

#:Erik
-- 
  If this is not what you expected, please alter your expectations.
From: Joe Marshall
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <wvivcfrx.fsf@alum.mit.edu>
Rainer Joswig <·············@ision.net> writes:

> In article <············@alum.mit.edu>, Joe Marshall 
> <·········@alum.mit.edu> wrote:
> 
> > Simon Brooke <·····@jasmine.org.uk> writes:
> > 
> > > Errr... this needs a bit of expansion. Symbols in Common LISP are
> > > case-insensitive. Common LISP is the most prevalent LISP in use these
> > > days, but that doesn't make it the only one. Symbols in InterLISP and
> > > Portable Standard LISP, for example, are case sensitive.
> > 
> > Common Lisp symbols *are* case sensitive:
> > 
> > USER(1): (eq 'foo '|foo|)
> > NIL
> > 
> > However, the CommonLisp reader may canonicalize the case of an
> > unescaped symbol before interning it.  (And the printer may
> > canonicalize the case before printing it, as well.)
> 
> Well, what also happens is that these symbols are INTERNed
> in a package. Depending on what INTERNing does you can have
> the same symbol or not. The behaviour of Common Lisp's
> interning of symbols can be found in the standard docs.

Whether the symbols are interned or not makes no difference to case
sensitivity.  Two symbols that differ in case can *never* be EQ,
regardless of whether they are interned or not.  (This does *not*
imply that two symbols that have the same symbol name *are*
necessarily EQ.)
  
From: Christopher J. Vogt
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <3965FE2D.81027675@computer.org>
Simon Brooke wrote:
> 
> [ ... ]

> Errr... this needs a bit of expansion. Symbols in Common LISP are
> case-insensitive. Common LISP is the most prevalent LISP in use these
> days, but that doesn't make it the only one. Symbols in InterLISP and
> Portable Standard LISP, for example, are case sensitive.

> [ ... ]
 
> Like the separation of function and value cells, it's a feature of
> Common LISP we're stuck with now, but which I don't think anyone any
> longer seriously defends.

Assuming I haven't misunderstood, I'll defend:

I think case-sensitivity is one of the worst mis-features of a language.
From a human cognitive standpoint, there is no semantic difference between
House, house, HOUSE ... so to design a language that allows tokens with
different capitalization to have different semantics is, imho, insanity.
One of the 327 things I love about CL is that it *is* case-insensitive.

I also like the fact that variable names don't collide with function names.
Given that (foo foo) is (ignoring special-forms and macros) semantically
clear, the first foo is a function and the second foo is a value, it seems
to me that forbidding a symbol from being used for both is a waste of
legalese.

> [ ... ]
From: John Foderaro
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <MPG.13cfa56bd36d9340989681@news.dnai.com>
In article <·················@computer.org>, ····@computer.org says...
> I think case-sensitivity is one of the worst mis-features of a language.
> From a human cognitive standpoint, there is no semantic difference between
> House, house, HOUSE ... so to design a language that allows tokens with
> different capitalization to have different semantics is, imho, insanity.

Is there a semantic difference between  color  and  colour  ?   Should
the lisp reader stop the current insane behavior (behaviour) and make 
them the same symbol?

If your answer is 'no' then what have you learned about the 
responsibility of a language lexer vis-a-vis the semantics of a human 
language?
From: Hartmann Schaffer
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <39676880@news.sentex.net>
In article <·················@computer.org>,
	"Christopher J. Vogt" <····@computer.org> writes:
> ...
>> Like the separation of function and value cells, it's a feature of
>> Common LISP we're stuck with now, but which I don't think anyone any
>> longer seriously defends.
> 
> Assuming I haven't misunderstood, I'll defend:
> 
> I think case-sensitivity is one of the worst mis-features of a language.
> From a human cognitive standpoint, there is no semantic difference between
> House, house, HOUSE ... so to design a language that allows tokens with
> different capitalization to have different semantics is, imho, insanity.

this seems to be largely a question of your background.  in
mathematics, e.g., it is quite common to treat differences in fonts and
capitalization as significant.  some people seem to be more visually
oriented, in which case font, capitalization etc are meaningful
distinctions, while other people seem to be more verbally oriented, in
which case case sensitivity is very difficult to grasp.

> ...

-- 

Hartmann Schaffer
From: Dowe Keller
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <m3sntk8j6v.fsf@localhost.localdomain>
It also can be painful when the language smashes case while the OS
is case sensitive.

Where I come from case matters. "ls" is a command, "LS" is an error.
I happen not to see any similarity in those two strings (`a' and `A'
are different letters damn-it ;-).

BTW: Even if the language is case insensitive it shouldn't smash case.

BTW version 2: Thanks to everyone for their thoughtful and insightful
answers.
-- 
····@sierratel.com
---
This is the theory that Jack built.
This is the flaw that lay in the theory that Jack built.
This is the palpable verbal haze that hid the flaw that lay in...
From: Erik Naggum
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <3171974897841999@naggum.net>
* Simon Brooke <·····@jasmine.org.uk>
| Errr... this needs a bit of expansion.  Symbols in Common LISP are
| case-insensitive.  Common LISP is the most prevalent LISP in use these
| days, but that doesn't make it the only one.  Symbols in InterLISP and
| Portable Standard LISP, for example, are case sensitive.

  Wrong.  Stop confusing the reader with the symbol!

| Like the separation of function and value cells, it's a feature of
| Common LISP we're stuck with now, but which I don't think anyone any
| longer seriously defends.

  What!?  Are you saying that YOU THINK the existence of function and
  value cells in symbols is as stupid as upper-case symbol names?  If
  so, are you insane, trolling, stupid, or another Scheme bigot?

#:Erik
-- 
  If this is not what you expected, please alter your expectations.
From: Simon Brooke
Subject: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <m27lax57e0.fsf_-_@gododdin.internal.jasmine.org.uk>
Erik Naggum <····@naggum.net> writes:

> * Simon Brooke <·····@jasmine.org.uk>
> | Like the separation of function and value cells, it's a feature of
> | Common LISP we're stuck with now, but which I don't think anyone any
> | longer seriously defends.
> 
>   What!?  Are you saying that YOU THINK the existence of function and
>   value cells in symbols is as stupid as upper-case symbol names?  If
>   so, are you insane, trolling, stupid, or another Scheme bigot?

I'm far from being a Scheme bigot; I don't like (and don't often use)
the language. I'm not trolling either. Whether I am insane is a matter
for the psychiatric profession; whether I am stupid is for you to
judge.

I do think that treating functions as different from other data is
'stupid', yes; it leads to all sorts of hacks and kludges and results
in no benefits that I am aware of. It is, in my opinion, historical
baggage left over from implementation details in LISP
1.5. Furthermore, it's clear that many of the most influential people
in the design of Common LISP are now of the same opinion; for example,
Richard Gabriel's 1988 paper in _LISP & Symbolic Computation_ on just
this issue, or some of Scott Fahlman's posts on LISP2 and the design
of Dylan on this very group in February and March of 1995.

However, if you have a different opinion, please feel free to argue
it. What benefit is there, beyond having to invoke baroque syntax to
use a lambda expression, and having to do two operations instead of
one at every step in a structure walker?

-- 
·····@jasmine.org.uk (Simon Brooke) http://www.jasmine.org.uk/~simon/
	"The result is a language that... not even its mother could 
	love.  Like the camel, Common Lisp is a horse designed by 
	committee. Camels do have their uses." 
				    ;; Scott Fahlman,  7 March 1995
From: Pierre R. Mai
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <87k8ewinam.fsf@orion.bln.pmsf.de>
Simon Brooke <·····@jasmine.org.uk> writes:

> I do think that treating functions as different from other data is
> 'stupid', yes; it leads to all sorts of hacks and kludges and results
> in no benefits that I am aware of. It is, in my opinion, historical

Functions are not treated any differently from other data in Common
Lisp.  Period.  There exist different namespaces in Common Lisp for
different constructs, among them are namespaces for types, classes, go
tags, restarts, ..., and finally for the operator position of compound
forms and another one for symbol-macros and variables.

While there have been long ranging and often violent discussions on
the merits and demerits of separating the last two namespaces, I
have as yet not seen the same amount of effort expended on the
unification of the other namespaces.  Why are we not seeing enraged
calls for the unification of say the class namespace with the s-m&v
namespace?  Why is no one raging about the ugliness of having to use
(find-class 'my-class-name) to get the class object for my-class-name?
Why doesn't my-class-name evaluate to the class object?  What about
restarts?

It seems to me that many instantly recognize the utility of different
namespaces in those cases, and implicitly agree that the trade-off
involved isn't very burdensome.  Yet when it comes to operator names
and s-m&v names, suddenly we enter holy territory, and no amount of
utility can ever weigh up the cardinal sin of separating those
namespaces.  I find this rather remarkable, from a meta-viewpoint...

I won't argue in depth why I find the separation both of high utility
and little downside _in the context of CL_, because I find that others
have done this far more eloquently than I could have done.  Take a
look at postings and papers by Kent M. Pitman in this newsgroup and
elsewhere[1].

Regs, Pierre.

Footnotes: 
[1]  Among others the following message-ids should provide food for
     thought: 
     <···············@world.std.com>
     <···············@world.std.com>
     <···············@world.std.com>

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Erik Naggum
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <3172064189434540@naggum.net>
* Simon Brooke <·····@jasmine.org.uk>
| I do think that treating functions as different from other data is
| 'stupid', yes

  And just _how_ does Common Lisp do this?  Having two different value
  slots for a symbol is not at all what you describe as stupid.  If
  you are thinking about the need to use the `function' special form
  where (presumably) you would have been happier with regular variable
  access, that is _so_ not the issue.  You can bind a variable to a
  function object any day and funcall it.  Try, you may be shocked.
  If this is the old "I hate funcall" whine, please say so right away
  so I can ignore you completely.
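
  For instance (nothing non-standard assumed):

    (let ((f #'string-upcase))
      (funcall f "hello"))
    => "HELLO"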

| it leads to all sorts of hacks and kludges and results in no
| benefits that I am aware of.

  Really?  Assuming that we still talk about symbol-function and
  symbol-value, the hysterical need for "hygienic" macros in Scheme is
  _not_ caused by their conflated namespaces?  Come on, now.  Why
  exaggerate your case when there is at least one _obvious_ benefit
  (and one obvious drawback to conflating the namespaces, namely the
  Scheme experiment)?

  The main human benefits are that you don't have to worry about name
  collisions when you define a lexical variable and that you don't
  have to look for all sorts of bindings when reading code.  The main
  compiler benefit is that if you maintain control over what you stuff
  into the symbol-function slot, you don't have to test it for a
  function object every time you use it.

| It is, in my opinion, historical baggage left over from
| implementation details in LISP 1.5.

  And which "implementation details" are you referring to?

| Furthermore, it's clear that many of the most influential people
| in the design of Common LISP are now of the same opinion; for example,
| Richard Gabriel's 1988 paper in _LISP & Symbolic Computation_ on just
| this issue, or some of Scott Fahlman's posts on LISP2 and the design
| of Dylan on this very group in February and March of 1995.

  Yeah, _sure_ I'll trust a _Dylan_ designer to have useful comments
  about Common Lisp.  "I hate this, let's go make our _own_ language"
  people are so full of agendas both hidden and overt that you can't
  take anything they say about the old language seriously.

| However, if you have a different opinion, please feel free to argue
| it.  What benefit is there, beyond having to invoke baroque syntax
| to use a lambda expression, and having to do two operations instead
| of one at every step in a structure walker?

  And what the fuck do lambda expressions have to do with function and
  value cells in symbols?

  All of trolling, stupid, and insane, that's my conclusion.

| 	"The result is a language that... not even its mother could 
| 	love.  Like the camel, Common Lisp is a horse designed by 
| 	committee. Camels do have their uses." 
| 				    ;; Scott Fahlman,  7 March 1995

  If you don't like Common Lisp, Simon, you don't have to suffer it.
  There are plenty of other languages out there you can use.  Those
  who have hated parts of the language have gone elsewhere.  Those who
  remain actually _like_ the language, even if this is unfathomable to
  you.  It is also my firm opinion that computer professionals who use
  tools they dislike or hate _are_ insane at best.  (Worse would be
  slavery, criminal incompetence, extremely low self-esteem and other
  results of prostitution, or extreme poverty, all of which remove
  one's ability to choose rationally among the _generally_ available
  options.)  One simply does not choose to use something one hates for
  whatever reasons.  That's why Scott Fahlman went to Dylan, I guess,
  but I know that's why I dropped working with SGML, C++, and Perl.

#:Erik
-- 
  If this is not what you expected, please alter your expectations.
From: Kalle Olavi Niemitalo
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <iznhf9zk6cg.fsf@stekt.oulu.fi>
Erik Naggum <····@naggum.net> writes:

>   Really?  Assuming that we still talk about symbol-function and
>   symbol-value, the hysterical need for "hygienic" macros in Scheme is
>   _not_ caused by their conflated namespaces?

Please explain what you mean with that.  Common Lisp has its separate
namespaces, and even there macros commonly use gensym so as not to
accidentally use variables of their callers.  How does this differ
from Scheme's hygienic macros?

(Did you mean that Scheme macros are _forced_ to be hygienic?)
From: Harald Hanche-Olsen
Subject: Hygienic macros (Was: Separation in function and value cells)
Date: 
Message-ID: <pcoya3b1e3k.fsf_-_@math.ntnu.no>
+ Kalle Olavi Niemitalo <····@stekt.oulu.fi>:

| Common Lisp has its separate namespaces, and even there macros
| commonly use gensym so as not to accidentally use variables of their
| callers.  How does this differ from Scheme's hygienic macros?  (Did
| you mean that Scheme macros are _forced_ to be hygienic?)

Yes, Scheme has a "high-level" macro mechanism in which hygiene is
enforced.  This is the only standardized macro mechanism in Scheme.
Schemers have also realized that it is sometimes necessary to break
hygiene intentionally, so there are various proposals to accomplish
that.  It's been too long since I kept track of the Scheme world;
maybe one of these proposals is emerging as standard, or maybe not.

Scheme's hygienic macros are really easy to use, and ought to
eliminate a whole host of bugs, at least in the hands of inexperienced
programmers.  I see no reason why they cannot be implemented in CL --
not replacing CL macros of course, but as a supplement.  Does anybody
know if this has been tried?  Of course, if there are some deeper
reasons why this cannot or should not be done, I'd be thrilled to hear
about them.

-- 
* Harald Hanche-Olsen     <URL:http://www.math.ntnu.no/~hanche/>
- "There arises from a bad and unapt formation of words
   a wonderful obstruction to the mind."  - Francis Bacon
From: Erik Naggum
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <3172156073583918@naggum.net>
* Kalle Olavi Niemitalo
| Please explain what you mean with that.

  How can it be any clearer?  Scheme decided to conflate the function
  and variable namespaces, and approximately 50 milliseconds later,
  began whining about Common Lisp's lack of hygienic macros (Scheme
  people are always whining about something in Common Lisp), simply
  because a variable binding in Scheme screws up function calls, so an
  unintentional variable binding colliding with a local function will
  make Scheme code using non-hygienic macros very unsafe.  Scheme code
  uses local functions a lot more than Common Lisp, so you'd think
  they'd realize the need to separate the namespaces is even stronger
  than in Common Lisp, but noooo, "separate namespaces is stupid" so
  they'd rather suffer the complicating consequences of their decision
  than fix it.

| Common Lisp has its separate namespaces, and even there macros
| commonly use gensym so as not to accidentally use variables of their
| callers.

  That's _so_ irrelevant.  Common Lisp macros don't use fresh symbols
  in order to avoid accidentally clobbering the functional value of
  the symbols it uses.  Scheme's hysterical need for hygiene comes
  from the very significant danger of clobbering _functional_ values.
  (That's why you see Scheme code use silly variable names like "lst"
  instead of "list", too -- in case you want to call `list', it'd
  better not be some arbitrary list of data.)

| How does this differ from Scheme's hygienic macros?

  It doesn't, of course, but if you look only for similarities to what
  you already know, you will find very little of interest in your life.

  Human beings don't have any problems with the noun "house" naming a
  very different concept than the _verb_ "house".  Anyone who argued
  that "no, no, you can't call your building a `house' because that's
  already a verb" should just be shot to help him out of his misery.

#:Erik
-- 
  If this is not what you expected, please alter your expectations.
From: Xah
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <B591A233.CE04%xah@xahlee.org>
Erik Naggum <····@naggum.net> 09 Jul 2000 18:27:53 +0000 wrote:
> Human beings don't have any problems with the noun "house" naming a
> very different concept than the _verb_ "house".  Anyone who argued
> that "no, no, you can't call your building a `house' because that's
> already a verb" should just be shot to help him out of his misery.

and some people have zero foresight, that they shudder at any inkling of
rectification toward advances; gluing their asses to their pants in fear of
losing their seats of flight.

such clan could never discover sex. Lucky human beings that there are those
enterprising kind, with unrelenting curiosity jumped out of their threadbare
suits and seek out adventures.

it is due to the latter kind, that evolutionary copulation took place;
pushing and pulling, leading and guiding, grinding out the next generation
of common senses, spawning offspring who'd not be burdened to death by
mind-boggling legacies and traditions that are laughable only in hindsight.

those glue-panted simpletons who prescribe a missionary position law should
just be shot, tooo.

As for crying out loud, thank providence that the Scheme & Dylan people who
lopped lisp left and right.

 Xah
 ···@xahlee.org
 http://xahlee.org/PageTwo_dir/more.html
From: Tim Bradshaw
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <ey3n1jrbk2w.fsf@cley.com>
* Simon Brooke wrote:
> I do think that treating functions as different from other data is
> 'stupid', yes; it leads to all sorts of hacks and kludges and results
> in no benefits that I am aware of. 

Do you see the same issue with the class namespace, the package
namespace, or any of the other n namespaces that CL has?

--tim
From: Kent M Pitman
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <sfwog47rqf7.fsf@world.std.com>
Simon Brooke <·····@jasmine.org.uk> writes:

> I do think that treating functions as different from other data is
> 'stupid', yes;

Separation of namespaces is not about treating functions different from
other data.  As data, CL treats functions identically to other data.

However, there can be no mistaking that most of everything that functions do
is (a) access variables, and (b) call functions.  As such, having special
syntax for dealing with the "calling" of functions (nothing to do with the
function datatype itself) seems appropriate, just as having special syntax
for dealing with slot or array access in many languages is meaningful and
appropriate.  Long experience with natural languages shows that people 
prefer to take things that they use a lot and make shorthand notations for
them.  That's one reason people like to use macros a lot, btw, because it
allows them to optimize the presentation of common idioms in a manner 
appropriate to their frequency of use.

> it leads to all sorts of hacks and kludges and results
> in no benefits that I am aware of.

Hacks and kludges?  I don't think so.  You're welcome to cite some.

As to benefits, you need only look to the paper you cite below (for which
I was a co-author and for which my contribution was to do the "benefits"
part).

> It is, in my opinion, historical
> baggage left over from implementation details in LISP
> 1.5. 

Nonsense.  Taken on its face, this kind of conclusion would imply that left
to start anew, with no constraints of compatibility, people would not make
the same choice again.  I and many others would readily make the same choice
again and NUMEROUS popular languages do distinguish the namespaces of 
functions and variables, not just Lisp.  For good reason:  People's linguistic
hardware is, as evidenced by the extremely complex mechanisms for context
exhibited in human languages, well optimized for resolving names in different
contexts with only the context as a cue.   It would be a waste of already
existing brain wetware, and a pessimization of that wetware's function,
to fail to take advantage of such context information.

> Furthermore, it's clear that many of the most influential people
> in the design of Common LISP are now of the same opinion; for example,
> Richard Gabriel's 1988 paper in _LISP & Symbolic Computation_ on just
> this issue,

My paper, too.  And no, it was not on the issue of why the separation is
bad.  It was on the PROS AND cons of the situation.  That paper's direct
result was that CL decided the weight of the evidence was technically in
favor of keeping the status quo.

Among other things, the paper specifically cites a speed issue that is not
easily resolvable without either two namespaces or an EXTREMELY POWERFUL
inference engine--more powerful than most Scheme implementations promise.
In particular, if you do
  (setq x #'foo)
you don't know for sure that X is going to be used as a function.  It might
be it will just be used as data.  And certainly if you do 
  (setq x 3)
you don't know that X is *not* going to be used as a function.  As a 
consequence, when you call X in a Lisp1 (the term I coined in the paper
you cite for the language subfamily that includes languages like Scheme
which have only a single namespace), then you have to check at call time
whether X contains something valid to execute as a function.  The only case
in which you can just immediately do a machine-level instruction jump to the
contents of X is where you have done theorem proving enough to know that X
contains good data.  In the case of some functions like
 (define (foo x y) (x y))
this can be arbitrarily complicated to prove because it means knowing all 
callers of FOO, and since neither Scheme nor Lisp is strongly typed, you
rapidly run up against the halting problem proving that (X Y) is a valid
function call at compile time, leaving the call as inefficient as 
 (defun foo (x y) (funcall x y))
in Common Lisp, except that in this case, Common Lisp calls attention to the
fact that there is something unusual about the call, and some Common Lisp
programmers appreciate that.  Indeed, CL also provides the ability to declare
that X is promised to be a function, consequently allowing an efficient
compilation even where theorem proving in Scheme might fail (because Scheme
designers, myself among them but outvoted, have stubbornly resisted providing
a declaration mechanism in Scheme that would allow it to know what the
programmer knows but what cannot easily be detected by general inferencing).

It's not just the FUNCALL case, though; in general, in Scheme, any
name whose value is ever assigned free is at risk of being a slow call
unless that call can be demonstrated to always get safe values.  And yet,
Scheme can't compile in safety checks at the time of assignment because it
doesn't know you will do a call to the function during the time with that
bad value.  Consider:
  (begin
     (set! x 3)
     (display (+ x 1))
     (set! x car)
     (display (x '(a b c))))
Here it requires a great deal of compiler smarts to jump directly to the
function X efficiently because the compiler needs to be fearful that X will
contain 3 at the time of jump.  Only very careful flow analysis shows this
to be safe, and it's more complicated if these two pairs of SET!/PRINT 
operations occur in unrelated parts of programs.

By contrast, the CL design means that when you do the assignment (which
most people agree occurs statistically a LOT less often than the calling),
you can do the test then.  That is, 
  (progn
    (setf (symbol-function 'x) 3) ; this can signal an error
    ...)
As a consequence, all references to the symbol function namespace are safe,
and as a consequence of that, all references of the form 
  (f x)
can be 100% sure that the function cell of f contains something that is truly
safe to jump to.  This is a big speed gain, and the speed gain is enjoyed
regardless of whether the compiler has had a chance to completely analyze 
all of the flow paths, so there's no chance that a later addition of new code
will violate this contract in being-debugged code.

> or some of Scott Fahlman's posts on LISP2 and the design
> of Dylan on this very group in February and March of 1995.

I didn't read this so have no opinion.

> However, if you have a different opinion, please feel free to argue
> it. What benefit is there, beyond having to invoke baroque syntax to
> use a lambda expression, and having to do two operations instead of
> one at every step in a structure walker?

There are several additional benefits.

One is that statistically, functions are used less often in CL.  A
number of people don't like "functional programming" as a style, or
feel nervous about it, and want it "case marked" when it happens so
they can pay attention closer.  That benefit doesn't apply to all, but
is a feature to those that it does apply to.

Another benefit is that it allows the correct spelling of variable names.
I notice a lot of Scheme people who spell variables whose English name
is "list" as LST just to avoid an accidental collision with the LIST 
operator. CL people don't worry about this.  They write variables like LIST
because they don't worry that
 (defun foo (list)
   (mapcar #'(lambda (x) (list 'foo x))
           list))
will have some confusion about which list means what.  Practical experience
in CL says functions are almost always constant, and so (list x y) is reliable
to write anywhere to mean "make a list of two things".  And binding a lambda
variable doesn't change that.
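
For instance, calling it does the obvious thing:

  (foo '(1 2))   ; => ((FOO 1) (FOO 2)) -- the parameter never shadows LIST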

Another benefit is that when you see (list x y) on a screen of a definition
that is not the start of definition, you don't have to scroll back in order
to understand its meaning.  It's true that someone could try to FLET LIST,
but as a practical convention, good programmers just don't ever FLET or LABELS
definitions that are globally defined.  So when you see a function that looks
like it's doing a certain function call, it generally is.  Not so at all in
Scheme.

It's well and good to say that you'd make this arbitrary choice differently
if you were designing the language.  But it's not well and good to say there
are no reasons for the language being as it is, nor is it well and good to
ascribe the reasons for the change as being "historical" and "ones that would
obviously be made differently with hindsight".  Many choices in the language
are about either predicting a usage pattern, or about accommodating a usage
pattern that will select a certain user group you desire to cater to.  CL
sought to cater to a set of people who are largely quite pleased with this
decision and most of whom would be gravely offended if you changed it.  In
that regard, it made the "right" decision.  Not because "right" is a uniquely
determined attribute of the world, but because "right" is a term meaningless
without context.  ... Uh, that is to say, I guess, that there are multiple
namespaces in which the term "right" is assigned a value.

I'll close with a favorite quote of mine (wish I knew the source) and the
obligatory analysis of the quote.

   There are two kinds of people in the world.
   Those who think there are two kinds of people and those who don't.

The war about two namespaces isn't, I think, just about whether or not it's 
useful to sometimes distinguish context in functions and variables. In my
opinion, it's about whether there is one and only one right way to think 
about programs.  Most Lisp2 supporters don't object to the Lisp1 world 
existing; they see a multiplicity of options and are happy for the one they 
have.  But Lisp1'ers tend to be annoyed at Lisp2 more often.  And I think it's
not just about namespaces.  It's about the fact that other programmers have
a choice of how to think, and that they can be happy in a world where the
axioms are chosen differently.  I think that's terribly sad.

We all benefit from multiple points of view.  We are robbed when denied them.
We should all be seeking to learn to appreciate others' points of view, not
seeking to teach people to unlearn an appreciation for points of view we've
not taken the trouble to understand.

I've worked enough in Lisp1 languages to know that there are some cool things
about those.  Most of them, if I were to take your conversational posture, I
could have attacked as "hacks and kludges", but I choose not to.   Each 
notation yields interesting felicities which perhaps we should never take
advantage of if we really wanted to be ultra-purist.  But we pick specific
notations BECAUSE we want some things to be "accidentally easy", and so what's
the point of then avoiding those things which become so?  And, as a corollary,
we end up making some things "accidentally hard" as a result of some of our
choices, too.  That doesn't make those choices wrong.  It just means life is
full of tough choices, and we do the best we can making the ones we do.

Sure, the y-operator looks cooler in Scheme.  But I rarely write y-operators.
And on the other hand, in most of the functions I write, I like seeing the
funcall and the #' on functional items.  I find it a useful piece of 
documentation highlighting intent.  I don't see it as scary or asymmetric.
Trade-offs.  It's all just trade-offs.
From: Joe Marshall
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <aefr2cz4.fsf@alum.mit.edu>
Excuse the huge amount of quoted text, but the original article was
quite lengthy and I wanted to excerpt the relevant part.

Kent M Pitman <······@world.std.com> writes:

> Among other things, the paper specifically cites a speed issue that is not
> easily resolvable without either two namespaces or an EXTREMELY POWERFUL
> inference engine--more powerful than most Scheme implementations promise.
> In particular, if you do
>   (setq x #'foo)
> you don't know for sure that X is going to be used as a function.  It might
> be it will just be used as data.  And certainly if you do 
>   (setq x 3)
> you don't know that X is *not* going to be used as a function.  As a 
> consequence, when you call X in a Lisp1 (the term I coined in the paper
> you cite for the language subfamily that includes languages like Scheme
> which have only a single namespace), then you have to check at call time
> whether X contains something valid to execute as a function.  The only case
> in which you can just immediately do a machine-level instruction jump to the
> contents of X is where you have done theorem proving enough to know that X
> contains good data.  In the case of some functions like
>  (define (foo x y) (x y))
> this can be arbitrarily complicated to prove because it means knowing all 
> callers of FOO, and since neither Scheme nor Lisp is strongly typed, you
> rapidly run up against the halting problem proving that (X Y) is a valid
> function call at compile time, leaving the call as inefficient as 
>  (defun foo (x y) (funcall x y))
> in Common Lisp, except that in this case, Common Lisp calls attention to the
> fact that there is something unusual about the call, and some Common Lisp
> programmers appreciate that.  Indeed, CL also provides the ability to declare
> that X is promised to be a function, consequently allowing an efficient
> compilation even where theorem proving in Scheme might fail (because Scheme
> designers, myself among them but outvoted, have stubbornly resisted providing
> a declaration mechanism in Scheme that would allow it to know what the
> programmer knows but what cannot easily be detected by general inferencing).
> 
> It's not just the FUNCALL case, though; in general, in Scheme, any
> name whose value is ever assigned free is at risk of being a slow call
> unless that call can be demonstrated to always get safe values.  And yet,
> Scheme can't compile in safety checks at the time of assignment because it
> doesn't know you will do a call to the function during the time with that
> bad value.  Consider:
>   (begin
>      (set! x 3)
>      (print (+ x 1))
>      (set! x car)
>      (print (x '(a b c))))
> Here it requires a great deal of compiler smarts to jump directly to the
> function X efficiently because the compiler needs to be fearful that X will
> contain 3 at the time of jump.  Only very careful flow analysis shows this
> to be safe, and it's more complicated if these two pairs of SET!/PRINT 
> operations occur in unrelated parts of programs.

You don't need two namespaces to handle this efficiently, you can do
it with function cacheing the way MIT Scheme does it.  When a free
variable is first invoked as a function, the value is checked to make
sure it is a function, and the entry point is computed.  This entry
point is placed in a cache in the caller.  The next time the function
is called, it is jumped to directly via the cache.

(Actually, the entry in the cache is *always* directly jumped to.
To invalidate the cache, you replace the entry with a call to a
trampoline that fetches the value, checks it, and updates the cache.)

> By contrast, the CL design means that when you do the assignment (which
> most people agree occurs statistically a LOT less often than the calling),
> you can do the test then.  

The cacheing mechanism described above depends on this feature.  When
you assign a variable that contains a function, you must invalidate
all cached links.  You could be aggressive and relink at this point,
but being lazy works just as well. 
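
Transliterated into Common Lisp just to show the moving parts (the names, the
single global cache slot, and the explicit invalidate step are all invented
for illustration; MIT Scheme does this below the language, in its linker),
the idea is roughly:

  (defvar *x* #'car)                    ; the free variable being called
  (defvar *x-cache* nil)                ; one cache slot per call site

  (defun x-trampoline (&rest args)
    (let ((f *x*))
      (unless (functionp f)
        (error "~S is not a function" f))
      (setf *x-cache* f)                ; link the call site
      (apply f args)))

  (defun invalidate-x-cache ()          ; run on every assignment to the variable
    (setf *x-cache* #'x-trampoline))

  (invalidate-x-cache)
  (funcall *x-cache* '(a b c))          ; first call goes through the trampoline => A
  (funcall *x-cache* '(1 2 3))          ; later calls jump straight at CAR => 1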

> There are several additional benefits.
> 
> One is that statistically, functions are used less often in CL.  A
> number of people don't like "functional programming" as a style, or
> feel nervous about it, and want it "case marked" when it happens so
> they can pay attention closer.  That benefit doesn't apply to all, but
> is a feature to those that it does apply to.

I don't consider this a benefit.  In fact, the special `case marked'
syntax may make people feel that `something weird' is happening.  If
you are using a single namespace lisp, you could certainly add some
syntax to mark the places you are invoking a computed function.

> Another benefit is that it allows the correct spelling of variable names.
> I notice a lot of Scheme people who spell variables whose english name
> is "list" as LST just to avoid an accidental collision with the LIST 
> operator. CL people don't worry about this.  They write variables like LIST
> because they don't worry that
>  (defun foo (list)
>    (mapcar #'(lambda (x) (list 'foo x))
>            list))
> will have some confusion about which list means what.  Practical experience
> in CL says functions are almost always constant, and so (list x y) is reliable
> to write anywhere to mean "make a list of two things".  And binding a lambda
> variable doesn't change that.

I agree that this is a benefit.

> Another benefit is that when you see (list x y) on a screen of a definition
> that is not the start of definition, you don't have to scroll back in order
> to understand its meaning.  It's true that someone could try to FLET LIST,
> but as a practical convention, good programmers just don't ever FLET or LABELS
> definitions that are globally defined.  So when you see a function that looks
> like it's doing a certain function call, it generally is.  Not so at all in
> Scheme.

Not true.  Good Scheme programmers don't shadow global variables any
more capriciously than good lisp programmers.

> Trade-offs.  It's all just trade-offs.

Agreed.
From: Kent M Pitman
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <sfwn1jpr9qs.fsf@world.std.com>
Joe Marshall <·········@alum.mit.edu> writes:

> You don't need two namespaces to handle this efficiently, you can do
> it with function cacheing the way MIT Scheme does it.
> [non-controversial stuff about caching omitted]

I was comparing apples-to-apples an implementation with two namespaces
and one without.  Your reply says to me, in effect, that it's possible 
to build two namespaces and then to hide the fact.  So much for functions
not being "treated specially" and so much for a "kludge and hack free"
system.  But certainly I did not mean to imply there was no way to win;
I was merely pointing out, as I said, that it's all trade-offs.  In CL
you get the speed for free in a straightforward way if you like 2 namespaces
(and I do); in Scheme you don't get it for free because you bargained away
the two namespaces for something you valued more, so you pay some other place.
The world is full of options, and the key thing to understand is that this
just emphasizes that the choice is arbitrary.

The key thing isn't that I don't need two namespaces.  I WANT two namespaces.
I want them for linguistic reasons alone.  But you asserted there were no
benefits and I pointed out that in addition to the linguistic reasons, there
are efficiency benefits from having made the choice thusly.

> > There are several additional benefits.
> > 
> > One is that statistically, functions are used less often in CL.  [...]
> 
> I don't consider this a benefit.  In fact, the special `case marked'
> syntax may make people feel that `something weird' is happening.  If
> you are using a single namespace lisp, you could certainly add some
> syntax to mark the places you are invoking a computed function.

I realize you don't consider this a benefit, but I want to emphasize that
this does not mean it's not.  No benefit is something everyone uses and so
the test of benefitness is not "there exists a person who doesn't care";
rather, the test is "there exists a person who DOES care".

On the other hand, some features do work as misfeatures with some people
or in some situations.  I'd prefer to say "this is also a non-benefit".
I regard making lists of "pros and cons" as mostly an exercise in
monotonic logic--barring people being outright deceitful, you can't really
refute what they have said by saying the opposite--you can just point out
that the world is pretty complicated, logically speaking.  And it is.

But I specifically want to negate the claim that "There are no benefits
to a Lisp2" since that's what you had been saying.  What I think you meant
to say was "The benefits of a lisp2 don't speak to me."  That's ok.

Btw, as an aside, I should point out that I coined the terms Lisp1 and Lisp2
when writing this paper with Gabriel, and I prefer it in all discussion of
this kind over any use of "Scheme" or "CL".  I found that some people have
personally passionate feelings about CL and Scheme which have nothing to
do with this argument, and I find it especially useful to sometimes think
about a Lisp2 variant of Scheme and a Lisp1 variant of CL to keep the 
discussion honest and to keep people's feelings about unrelated issues from
swaying things.

Oh, and one final note: I personally have long wanted to see a Lisp Omega
in which there were an arbitrarily extensible number of namespaces and
where you could define both Lisp and Scheme in terms of common operators.
I once sketched out how such a system would work and partially implemented
it but have never completed it.  Sometime if I get time I will.  The basic
essence would be to have operators that take namespace keywords like
 (ASSIGN (VALUE CAR FUNCTION) (VALUE CDR VARIABLE))
or that take compound tokens as "variable" names like:
 (SET! (IDENTIFIER CAR FUNCTION) ...)
In this way, programs written in one or the other language could interoperate
by appealing to a more general substrate of lower-level interchange notations.
I often wonder if the Lisp2 haters don't hate Lisp2 not for having multiple
namespaces, but for having too few (that is, fewer than infinity).  2 has a
certain arbitrariness that 1 and 0 seem not to.  Though one is able to play
fast and loose with which of 1 or 0 gets to be the non-arbitrary value
of those two arbitrarily chosen non-arbitrary values.  Heh...
Anyway, it's on my list to finish up the implementation of this sometime
and maybe make a conference paper out of it or something, but time is always
so short...
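
For concreteness, a toy version of the substrate might look like the
following, with the identifiers quoted rather than treated as real syntax,
and with everything except the operator names ASSIGN and VALUE invented on
the spot:

  (defvar *namespaces* (make-hash-table :test #'eq))

  (defun namespace (name)
    (or (gethash name *namespaces*)
        (setf (gethash name *namespaces*) (make-hash-table :test #'eq))))

  (defun value (identifier ns)
    (gethash identifier (namespace ns)))

  (defun (setf value) (new identifier ns)
    (setf (gethash identifier (namespace ns)) new))

  (defmacro assign (place form)         ; ASSIGN is just SETF in this sketch
    `(setf ,place ,form))

  (assign (value 'car 'function) #'car)
  (funcall (value 'car 'function) '(a b c))   ; => A

A Lisp2 front end would map (F X) onto (VALUE F FUNCTION) and a plain F onto
(VALUE F VARIABLE); a Lisp1 front end would funnel both through a single
namespace.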
From: Joe Marshall
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <itudwtga.fsf@alum.mit.edu>
Kent M Pitman <······@world.std.com> writes:

> Joe Marshall <·········@alum.mit.edu> writes:
> 
> > You don't need two namespaces to handle this efficiently, you can do
> > it with function cacheing the way MIT Scheme does it.
> > [non-controversial stuff about caching omitted]
> 
> I was comparing apples-to-apples an implementation with two namespaces
> and one without.  

I was taking issue with your statement that ``the speed issue is not
easily resolvable without either two namespaces or an EXTREMELY
POWERFUL inference engine.''  A relatively simple cacheing mechanism
can also be used to speed up function calling.

> Your reply says to me, in effect, that it's possible 
> to build two namespaces and then to hide the fact.  

I don't understand why you think that there are two namespaces.  There
is only one.  What is done differently is that when a variable appears
as the operator element of a combination, it is treated as a
function.  But you *have* to do this.

> So much for functions not being "treated specially" and so much for
> a "kludge and hack free" system.

Functions are special in that, in any combination, the leftmost element is
applied to the remainder of the elements.  I don't think cacheing the
result of a variable lookup is a `kludge'.

> But certainly I did not mean to imply there was no way to win;
> I was merely pointing out, as I said, that it's all trade-offs.  In CL
> you get the speed for free in a straightforward way if you like 2 namespaces
> (and I do); in Scheme you don't get it for free because you bargained away
> the two namespaces for something you valued more, so you pay some other place.

Yes.  But you seemed to be asserting that there is an irreducible
cost in terms of performance that can only be ameliorated by heroic
efforts.  

> But you asserted there were no benefits and I pointed out that in
> addition to the linguistic reasons, there are efficiency benefits
> from having made the choice thusly.

I don't believe that I asserted that there were no benefits.  I
asserted that a two namespace Lisp does not have an inherent
performance advantage over a single namespace Lisp.

> > > There are several additional benefits.
> > > 
> > > One is that statistically, functions are used less often in CL.  [...]
> > 
> > I don't consider this a benefit.  In fact, the special `case marked'
> > syntax may make people feel that `something weird' is happening.  If
> > you are using a single namespace lisp, you could certainly add some
> > syntax to mark the places you are invoking a computed function.
> 
> I realize you don't consider this a benefit, but I want to emphasize that
> this does not mean it's not.  No benefit is something everyone uses and so
> the test of benefitness is not "there exists a person who doesn't care";
> rather, the test is "there exists a person who DOES care".

I *do* care, actually, and I dislike having to use FUNCTION and
FUNCALL to mark the code.

> On the other hand, some features do work as misfeatures with some people
> or in some situations.  I'd prefer to say "this is also a non-benefit".
> I regard making lists of "pros and cons" as mostly an exercise in
> monotonic logic--barring people being outright deceitful, you can't really
> refute what they have said by saying the opposite--you can just point out
> that the world is pretty complicated, logically speaking.  And it is.

I think I made myself clear:  I don't consider special marking of
functions passed as values to be a benefit.  If you wish to recast
that statement as `Jrm does not appreciate the many benefits of
special marking of functions passed as values', feel free to read
it that way.

> But I specifically want to negate the claim that "There are no benefits
> to a Lisp2" since that's what you had been saying.  

I never made that claim.

> What I think you meant to say was "The benefits of a lisp2 don't
> speak to me."  That's ok.

Nor that one.
From: Rainer Joswig
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <rainer.joswig-749D16.00053111072000@news.is-europe.net>
In article <············@alum.mit.edu>, Joe Marshall 
<·········@alum.mit.edu> wrote:

> I *do* care, actually, and I dislike having to use FUNCTION and
> FUNCALL to mark the code.

I think it makes the code clearer to understand in the long run.

-- 
Rainer Joswig, BU Partner,
ISION Internet AG, Steinhöft 9, 20459 Hamburg, Germany
Tel: +49 40 3070 2950, Fax: +49 40 3070 2999
Email: ····················@ision.net WWW: http://www.ision.net/
From: Hartmann Schaffer
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <396a734d@news.sentex.net>
In article <···································@news.is-europe.net>,
	Rainer Joswig <·············@ision.net> writes:
> In article <············@alum.mit.edu>, Joe Marshall 
> <·········@alum.mit.edu> wrote:
> 
>> I *do* care, actually, and I dislike having to use FUNCTION and
>> FUNCALL to mark the code.
> 
> I think it makes the code clearer to understand in the long run.

in the same way that coding conventions do, including all kinds of
declarative information in variable names


-- 

Hartmann Schaffer
From: Robert Monfera
Subject: Re: Separation in function and value cells (was Re: newbie: please don't  smash my case)
Date: 
Message-ID: <396A868B.5391E2F6@fisec.com>
Hartmann Schaffer wrote:

> > I think [FUNCTION and FUNCALL] make the code clearer to understand
> > in the long run.
> 
> in the same way that coding conventions [do,] including all kind of
> declarative information in variable names

You may think it is the same, but you can not objectively state it is. A
lot of people care about the difference for good reasons.

Robert
From: Hartmann Schaffer
Subject: Re: Separation in function and value cells (was Re: newbie: please don't  smash my case)
Date: 
Message-ID: <396bbead@news.sentex.net>
In article <·················@fisec.com>,
	Robert Monfera <·······@fisec.com> writes:
> Hartmann Schaffer wrote:
> 
>> > I think [FUNCTION and FUNCALL] make the code clearer to understand
>> > in the long run.
>> 
>> in the same way that coding conventions [do,] including all kind of
>> declarative information in variable names
> 
> You may think it is the same, but you can not objectively state it is. A
> lot of people care about the difference for good reasons.

ok, FUNCTION is needed because of the dual value cell of symbols.
FUNCALL however is similar to the type-encoding variable names
insofar as you have to explicitly specify with each use what should be
obvious from the definition or declaration of the referenced item (at
least with the, in my view reasonable, convention that the head of a
list appearing in executed code must be a function object).

Note that I didn't give a judgement on the value of these constructs.

-- 

Hartmann Schaffer
From: Kent M Pitman
Subject: Re: Separation in function and value cells (was Re: newbie: please don't  smash my case)
Date: 
Message-ID: <sfwn1jo3t0x.fsf@world.std.com>
··@inferno.nirvananet (Hartmann Schaffer) writes:

> In article <·················@fisec.com>,
> 	Robert Monfera <·······@fisec.com> writes:
> > Hartmann Schaffer wrote:
> > 
> >> > I think [FUNCTION and FUNCALL] make the code clearer to understand
> >> > in the long run.
> >> 
> >> in the same way that coding conventions [do,] including all kind of
> >> declarative information in variable names
> > 
> > You may think it is the same, but you can not objectively state it is. A
> > lot of people care about the difference for good reasons.
> 
> ok, FUNCTION is needed because of the dual value cell of symbols.
> FUNCALL however is similar to the type-encoding variable names
> insofar as you have to explicitly specify with each use what should be
> obvious from the definition or declaration of the referenced item (at
> least with the, in my view reasonable, convention that the head of a
> list appearing in executed code must be a function object).
> 
> Note that I didn't give a judgement on the value of these constructs.

I didn't understand this.  Do you want to try to restate it?  I
thought at first what you were saying was that while FUNCTION was
needed, funcall wasn't because it suffices to write (#'CAR X) instead
of (FUNCALL #'CAR X).  But if I was going to do that, then I would
just write (CAR X).  And on the other hand if I was going to write
(FUNCALL CAR X), then I definitely cannot write (CAR X) since that
already has a meaning and the meaning is not (FUNCALL CAR X).  So I
don't see how FUNCALL can be dispensed with in a Lisp2 situation.
Moreover, I think it's as attractive to have (funcall f x) for
function calling as it is to have (aref a x) for referencing an array.
In Maclisp (the one on pdp10's ages ago, not on the later macintosh
which came after the death of maclisp), something declared as an array
could be referenced as just (a x) because arrays were made
"funcallable" in the "obvious" way.  But it was not perspicuous.
I like seeing words like AREF and FUNCALL as redundant reminders.
Redundancy is not all bad--it is what allows compilers (and human
readers) to detect errors of intent.  If your language allows no
redundancy (I call such languages "dense", in that meaningful expressions
are densely packed), it's hard to detect errors of intent; by contrast,
sparse languages (languages where it's easy to write non-sentences)
are good, I think, because they allow for syntax checkers and the like
to notice inconsistencies.  Declarations, similarly, are sometimes 
apparently redundant, but even when so they serve as opportunities for
consistency checkers to notice a problem of intent... so really they are
not redundant.
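
To put the three forms side by side (CAR is used as the variable name only
because the standard happens to allow lexically binding it as one):

  (let ((car #'cdr))                  ; the VARIABLE car holds a function
    (list (car '(1 2 3))              ; function namespace            => 1
          (funcall #'car '(1 2 3))    ; same function, marked with #' => 1
          (funcall car '(1 2 3))))    ; variable namespace            => (2 3)
  ;; => (1 1 (2 3))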
From: Erik Naggum
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <3172313117530301@naggum.net>
* Joe Marshall <·········@alum.mit.edu>
| I *do* care, actually, and I dislike having to use FUNCTION and
| FUNCALL to mark the code.

  When C got its simplified function pointer calling syntax, from
  (*foo)(...) to foo(...) for some function pointer foo, there were
  some very heated objections to this change.  Readability is not one
  of C's strongest suits, and function pointers are already way too
  complex for most people, but the fact that they were made to look
  like ordinary functions disturbed many people.  You'd think they'd
  appreciate the simplified syntax, but many programmers still prefer
  to make calls to variables explicit in a language which gives you a
  choice.  (I like funcall and dislike Scheme's variable-calling, but
  use the new syntax in C.)

#:Erik
-- 
  If this is not what you expected, please alter your expectations.
From: Tom Breton
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <m366qc5adi.fsf@world.std.com>
Kent M Pitman <······@world.std.com> writes:

> I often wonder if the Lisp2 haters don't hate Lisp2 not for having multiple
> namespaces, but for having too few (that is, fewer than infinity).  2 has a
> certain arbitrariness that 1 and 0 seem not to.  

In a sense, there are already 3 namespaces associated with symbols,
the third being setf forms.  Even more than that, properties
potentially associate an unlimited number of namespaces with symbols.

But I think your point is basically correct.  

-- 
Tom Breton, http://world.std.com/~tob
Not using "gh" since 1997. http://world.std.com/~tob/ugh-free.html
Some vocal people in cll make frequent, hasty personal attacks, but if
you killfile them cll becomes usable.
From: Kent M Pitman
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <sfwog443tgi.fsf@world.std.com>
Tom Breton <···@world.std.com> writes:

> Kent M Pitman <······@world.std.com> writes:
> 
> > I often wonder if the Lisp2 haters don't hate Lisp2 not for having multiple
> > namespaces, but for having too few (that is, fewer than infinity).  2 has a
> > certain arbitrariness that 1 and 0 seem not to.  
> 
> In a sense, there are already 3 namespaces associated with symbols,
> the third being setf forms.  Even more than that, properties
> potentially associate an unlimited number of namespaces with symbols.

Well, more than that.  SETF is a stretch and some people don't agree.
But certainly block tags are a proper namespace, and so are go tags.
It is clear that CL has at least 4 namespaces.  We call it a Lisp2 merely
as a shorthand for Lisp>1.  4 is "even more arbitrary" (heh) than 2, but
2 is "arbitrary enough".

> But I think your point is basically correct.  

Phew.
From: Rob Warnock
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <8khesh$4d5ar$1@fido.engr.sgi.com>
Kent M Pitman  <······@world.std.com> wrote:
+---------------
| It is clear that CL has at least 4 namespaces.  We call it a Lisp2 merely
| as a shorthand for Lisp>1.  4 is "even more arbitrary" (heh) than 2, but
| 2 is "arbitrary enough".
+---------------

Who was it who said, "There are only three magic numbers in computer
science: zero, one, & infinity"?  See also George Gamow's book "One,
Two, Three...Infinity" and the Jargon File's "Zero-One-Infinity Rule"
<URL:http://www.tuxedo.org/~esr/jargon/html/entry/Zero-One-Infinity-Rule.html>


-Rob

-----
Rob Warnock, 41L-955		····@sgi.com
Applied Networking		http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673
1600 Amphitheatre Pkwy.		PP-ASEL-IA
Mountain View, CA  94043
From: Robert Monfera
Subject: Re: Separation in function and value cells (was Re: newbie: please don't  smash my case)
Date: 
Message-ID: <39691950.F04A590A@fisec.com>
Joe Marshall wrote:
[...]
> You don't need two namespaces to handle this efficiently, you can do
> it with function cacheing the way MIT Scheme does it.  When a free
> variable is first invoked as a function, the value is checked to make
> sure it is a function, and the entry point is computed.  This entry
> point is placed in a cache in the caller.  The next time the function
> is called, it is jumped to directly via the cache.
> 
> (Actually, the entry in the cache is *always* directly jumped to.
> To invalidate the cache, you replace the entry with a call to a
> trampoline that fetches the value, checks it, and updates the cache.)
[...]
> When
> you assign a variable that contains a function, you must invalidate
> all cached links.  You could be aggressive and relink at this point,
> but being lazy works just as well.

Caching, trampolines, relinks and lazy invalidation must make
assignments and some function calls much slower.  A similar, more
complex mechanism is (or at least should be) in place for CLOS generic
method call optimizations, but I think that's besides the point Kent was
making.

Robert
From: Joe Marshall
Subject: Re: Separation in function and value cells (was Re: newbie: please don't  smash my case)
Date: 
Message-ID: <wviu0zxj.fsf@alum.mit.edu>
Robert Monfera <·······@fisec.com> writes:

> Joe Marshall wrote:
> [...]
> > You don't need two namespaces to handle this efficiently, you can do
> > it with function cacheing the way MIT Scheme does it.  When a free
> > variable is first invoked as a function, the value is checked to make
> > sure it is a function, and the entry point is computed.  This entry
> > point is placed in a cache in the caller.  The next time the function
> > is called, it is jumped to directly via the cache.
> > 
> > (Actually, the entry in the cache is *always* directly jumped to.
> > To invalidate the cache, you replace the entry with a call to a
> > trampoline that fetches the value, checks it, and updates the cache.)
> [...]
> > When
> > you assign a variable that contains a function, you must invalidate
> > all cached links.  You could be aggressive and relink at this point,
> > but being lazy works just as well.
> 
> Caching, trampolines, relinks and lazy invalidation must make
> assignments and some function calls much slower.  

Assignments are indeed slower, but note that Kent had this to say:

> By contrast, the CL design means that when you do the assignment (which
> most people agree occurs statistically a LOT less often than the calling),
> you can do the [functionp] test then.  

He is proposing that when you assign to a function cell, you ensure
that the cell *always* has a functional value (i.e., you can always
jump to the entry point.  Again, you would use a trampoline to
indicate an unbound function cell, or a function cell bound to an
object other than a function.)
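
At the language level CL already behaves that way: even an "empty" function
cell leads somewhere well-defined, namely to the UNDEFINED-FUNCTION error
machinery rather than to garbage.  A trivial check:

  (defun f (x) (* x 2))
  (fmakunbound 'f)
  (handler-case (f 3)
    (undefined-function (c) (cell-error-name c)))   ; => F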
From: Rob Warnock
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <8kbhvm$3iqif$1@fido.engr.sgi.com>
Joe Marshall  <·········@alum.mit.edu> wrote:
+---------------
| Kent M Pitman <······@world.std.com> writes:
| > but as a practical convention, good programmers just don't ever FLET
| > or LABELS definitions that are globally defined.  So when you see a
| > function that looks like it's doing a certain function call, it generally
| > is.  Not so at all in Scheme.
| 
| Not true.  Good Scheme programmers don't shadow global variables any
| more capriciously than good lisp programmers.
+---------------

Joe, I really *like* Scheme, and I write much more of it than CL, but I
have to agree with Kent (and from previous comments, Erik) on this one.

In Scheme you have to be a *lot* more careful to avoid shadowing global
variables, because *all* of the standard functions are global "variables".
Not only can one not safely use "list" as a local variable, you can't use
"string" or "vector" or "min" or "max" -- or for that matter, "quotient",
"remainder", "real-part", "magnitude" or "angle"!! A *lot* of names that
are natural choices for intermediate-result are denied to the programmer,
because Scheme, a Lisp1, chose to give builtin functions those names.

IMHO, the situation would be far worse did not Scheme use the "?" suffix
convention for predicates and the "!" suffix for mutators. OTOH, that hack
denies all such potential variable names to programmers as well, since one
must assume that someday *somebody* might want to name a function with one
of those.


-Rob

p.s. And while we're making comparisons... nah, never mind, don't get me
started about how in Scheme "length" is not generic...

-----
Rob Warnock, 41L-955		····@sgi.com
Applied Networking		http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673
1600 Amphitheatre Pkwy.		PP-ASEL-IA
Mountain View, CA  94043
From: Jason Trenouth
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <p7djms0rf00008jbcj49iigho69ddhtqac@4ax.com>
On 10 Jul 2000 04:04:06 GMT, ····@rigden.engr.sgi.com (Rob Warnock) wrote:

> IMHO, the situation would be far worse did not Scheme use the "?" suffix
> convention for predicates and the "!" suffix for mutators. OTOH, that hack
> denies all such potential variable names to programmers as well, since one
> must assume that someday *somebody* might want to name a function with one
> of those.

FTR Dylan (an OO Scheme) similarly has conventions for class names to avoid
collisions in a Lisp-1:

define class <foo> ( <object> )
end class;

define function doit ( foo :: <foo> )
  format-out( foo );
end;

And <foo> is a constant that evaluates to a class object. There is no
find-class( #"<foo>" ).

__Jason
From: Joe Marshall
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <snti0zf0.fsf@alum.mit.edu>
····@rigden.engr.sgi.com (Rob Warnock) writes:

> Joe Marshall  <·········@alum.mit.edu> wrote:
> +---------------
> | Kent M Pitman <······@world.std.com> writes:
> | > but as a practical convention, good programmers just don't ever FLET
> | > or LABELS definitions that are globally defined.  So when you see a
> | > function that looks like it's doing a certain function call, it generally
> | > is.  Not so at all in Scheme.
> | 
> | Not true.  Good Scheme programmers don't shadow global variables any
> | more capriciously than good lisp programmers.
> +---------------
> 
> Joe, I really *like* Scheme, and I write much more of it than CL, but I
> have to agree with Kent (and from previous comments, Erik) on this one.
> 
> In Scheme you have to be a *lot* more careful to avoid shadowing global
> variables, because *all* of the standard functions are global "variables".
> Not only can one not safely use "list" as a local variable, you can't use
> "string" or "vector" or "min" or "max" -- or for that matter, "quotient",
> "remainder", "real-part", "magnitude" or "angle"!! A *lot* of names that
> are natural choices for intermediate-result are denied to the programmer,
> because Scheme, a Lisp1, chose to give builtin functions those names.

I agree that there are too many symbols that have `unfortunate'
names.  But you don't find too many scheme programs that bind
`vector-ref', `intern', `+', `expt', `sqrt', `cons-stream',
`load', etc. etc.

I'm not sure that I think that having two namespaces is the
appropriate solution, though.
From: Pierre R. Mai
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <878zv89w05.fsf@orion.bln.pmsf.de>
Joe Marshall <·········@alum.mit.edu> writes:

> I agree that there are too many symbols that have `unfortunate'
> names.  But you don't find too many scheme programs that bind
> `vector-ref', `intern', `+', `expt', `sqrt', `cons-stream',
> `load', etc. etc.

Nit-picking:  I can readily imagine programs that would use VECTOR-REF
(e.g. a reference to or into a vector), INTERN (e.g. in human
resources applications), EXPT (exponent of something, though I'd
encourage the author to use EXPONENT instead), and LOAD (e.g. the
structural load of something or another, etc.)...

> I'm not sure that I think that having two namespaces is the
> appropriate solution, though.

While having more than one namespace doesn't eliminate the problem of
namespace "polution" (you might still want to use LOAD as a (local)
function name), it can alleviate the situation quite well, especially
when seen in context with CL's sparsity of local functions, so that
clashes in the function namespace are much more seldom than they might
be in the variable namespace.  Combine that still further with the CL
convention of putting #\* around global, special variables, and you
have considerably increased the breathing room of programmers, without
resorting to the use of less fitting function names...
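
A contrived illustration of that breathing room (only LOAD itself is a
standard name here; *LOAD* and BEAM-OK-P are made up):

  (defvar *load* 250)                  ; a global, earmuffed as usual
  (defun beam-ok-p (load)              ; LOAD is just a lexical variable here
    (< load *load*))
  (beam-ok-p 100)                      ; => T, and (load "init.lisp") still
                                       ;    names the standard function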

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Dorai Sitaram
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <8kfa1l$lv1$1@news.gte.com>
In article <··············@fido.engr.sgi.com>,
Rob Warnock <····@rigden.engr.sgi.com> wrote:
>
>In Scheme you have to be a *lot* more careful to avoid shadowing global
>variables, because *all* of the standard functions are global "variables".
>Not only can one not safely use "list" as a local variable, you can't use
>"string" or "vector" or "min" or "max" -- or for that matter, "quotient",
>"remainder", "real-part", "magnitude" or "angle"!! A *lot* of names that
>are natural choices for intermediate-result are denied to the programmer,
>because Scheme, a Lisp1, chose to give builtin functions those names.

Ach, it seems that You of the "germanic" [1]
Variablenamingssystem unaware are.  In order to
with Standardnames Nameclashes to avoid, use You
Spellingvariations thereof.  To Example, "kons",
"kar", "kdr", "lyst", "vektor", "schtring", usw.

--d

[1] Not with the "hungarian" System to confuse, which
is also oft in C-like languages to find, and which in
a prefix an abbreviated form of the function
signature puts.
From: Xah
Subject: Re: Separation in function and value cells (was Re: newbie: please don't smash my case)
Date: 
Message-ID: <B592D46C.CE63%xah@xahlee.org>
Kent M Pitman wrote:
[human nature this and human nature that]

... another one of Kent Pitman's prolix relativism.

Person A: food is certainly more valuable than shit.

Kent Pitman: No no no, it really depends on who you speak to: to
coprophagers, shit is definitely much more savory. Life is full of
choices...

> ... But Lisp1'ers tend to be annoyed at Lisp2 more often.  And I think it's
> not just about namespaces.  It's about the fact that other programmers have
> a choice of how to think, and that they can be happy in a world where the
> axioms are chosen differently.  I think that's terribly sad.

The gist of such sadness is not that Lisp1 folks grudge the choices of
Lisp2 folks, but that Lisp2 folks are incapable of rewiring their synapses for
tomorrow. When pressed, praxes are rebutted as "choices". Such is the
argument from sloppy jolly fellows. With such people there cannot come out
anything breathtaking.

> We all benefit from multiple points of view.  We are robbed when denied them.
> We should all be seeking to learn to appreciate others' points of view, not
> seeking to teach people to unlearn an appreciation for points of view we've
> not taken the trouble to understand.

What a swelling science! What a discovery! Let's have a séance and settle
the existence of ghosts once for all.

> I've worked enough in Lisp1 languages to know that there are some cool things
> about those.  Most of them, if I were to take your conversational posture, I
> could have attacked as "hacks and kludges", but I choose not to.   Each
> notation yields interesting felicities which perhaps we should never take
> advantage of if we really wanted to be ultra-purist.  But we pick specific
> notations BECAUSE we want some things to be "accidentally easy", and so what's
> the point of then avoiding those things which become so?  And, as a corollary,
> we end up making some things "accidentally hard" as a result of some of our
> choices, too.  That doesn't make those choices wrong.

> It just means life is
> full of tough choices, and we do the best we can making the ones we do.

By that merry-go-round token, let's stop insisting that 1+1==1 is wrong.
Because, after all, that needs a context too!

> Sure, the y-operator looks cooler in Scheme.  But I rarely write y-operators.
> And on the other hand, in most of the functions I write, I like seeing the
> funcall and the #' on functional items.  I find it a useful piece of
> documentation highlighting intent.  I don't see it as scary or asymmetric.
> Trade-offs.  It's all just trade-offs.

Common Lisp == one bag of mumble jumble of the feeble.

I guess that statements can be judged upon contexts and preferences and
choices too, not to mention that it is all just trade-offs in relation to
truth.


The nature of progress is not relativism, but absolutes and generalities.
The measure of a (generic human->computer) language lies in purely technical
aspects alone, not psychology or sociology. The merits of a computer
language lie in the measure of its technical properties, not culture or
fads. If it all seems contrary to feelings, that's because human beings have
yet much to learn about their mind-boggling psychology in the context of
hard-sciences. If a superb language is a failure in popularity, then it's a
failure of people & education, not the language. Technology caters
to human beings, but not to those who refuse to update their thought models
with science. It would be wrong to insist this or that because human nature
this or human nature that, because human nature isn't something you can put
down in equations, yet.

One equation is worth more than a billion words. The only real advance in
computer science is mathematics, alone.

 Xah
 ···@xahlee.org
 http://xahlee.org/PageTwo_dir/more.html
From: Marc Battyani
Subject: Re: Separation in function and value cells (was Re: newbie:please don't smash my case)
Date: 
Message-ID: <9E9806871E36D87A.F30E745665BCDDA7.E102E14B16920C4F@lp.airnews.net>
"Xah" <···@xahlee.org> wrote in message ······················@xahlee.org...

> One equation is worth more than a billion words. The only real advance in
> computer science is mathematics, alone.

Good! Please do post equations only from now on.

Marc Battyani
From: Kent M Pitman
Subject: Re: Separation in function and value cells (was Re: newbie: 	please don't smash my case)
Date: 
Message-ID: <sfwu2ducd32.fsf@world.std.com>
Xah <···@xahlee.org> writes:

> The nature of progress is not relativism, but absolutes and generalities.
> The measure of a (generic human->computer) language lies in purely technical
> aspects alone, not psychology or sociology. The merits of a computer
> language lie in the measure of its technical properties, not culture or
> fads.

I disagree with this, but appreciate your putting it in such cold terms
because it makes it easier to understand your attitudes expressed prior
(edited out here only to conserve space, since I had no particularly useful
reply in mind).

Personally, I think the merits of anything to do with computers are judged
on outcome in the sense of how it leverages the talents of people or how it
improves their happiness.  SGML, for example, was much more coherent and 
general in design than HTML, yet did mankind overall little good (other than
indirectly as a breeding ground for HTML-ish ideas, which I don't mean to
discount).  HTML, by contrast, was a technical kludge, but has connected
mankind's accumulated trivia inextricably, and is therefore reasonably counted
as superior.  

CL's value, to me, lies in the ability of people to learn competence in it
and to use it effectively to some other end than the language itself.  Any 
language whose value is seen only by metrics intrinsic to itself, and not by
its power and usefulness to its users, is not a success in my book.

> If it all seems contrary to feelings, that's because human beings have
> yet much to learn about their mind-boggling psychology in the context of
> hard-sciences. If a superb language is a failure in popularity, then it's a
> failure of people & education, not the language. Technology caters
> to human beings, but not to those who refuse to update their thought models
> with science. It would be wrong to insist this or that because human nature
> this or human nature that, because human nature isn't something you can put
> down in equations, yet.

As with the HTML example I cited, there is a complex problem with getting
ordinary people interested in stuff.  HTML got people interested enough in
SGML-ish concepts that they wanted to reinvent SGML in the form of XML.  I
think XML has its own problems, but it will likely survive SGML because it
first created a need and then satisfied the need, unlike Scheme, which does
it backward.  CL is a straight-to-the-point language that directly addresses
problems, and I like that.  It's kludgey here and there, but it's 
"solid enough" to get real work done, even work where one system must build
on another.

> One equation is worth more than a billion words. The only real advance in
> computer science is mathematics, alone.

I suppose this is the point at which I'm supposed to come up with some witty
retort.  Gosh, I can feel the heat from those stage lights bearing down on me
and sweat forming on my brow.  Ok, here goes with my big closer:

 A billion equations are worth nothing if they can't be reduced to meaningful
 practice.  The only real advance in the ability to express ideas is the 
 ability to hear ideas.

(At Symbolics, I remember proposing at one point that instead of building a
release full of new tools, we just spend a release excavating old tools that
were already there and no one had yet found.  Implementing great things is
cool.. as long as people use them.  But what's the point of putting in cool
stuff no one ever finds?)

 Math isn't the goal of CS, it's just a tool.  Fruit smoothies on a tropical
 island is the goal.

Well, at least it's obvious how we get to different conclusions, having started
from different premises.
From: Xah
Subject: Re: Separation in function and value cells (was Re: newbie:  please don't smash my case)
Date: 
Message-ID: <B595CA49.D148%xah@xahlee.org>
Kent Pitman & readers,

Kent M Pitman <······@world.std.com> 13 Jul 2000 14:35:45 GMT wrote:
> Personally, I think the merits of anything to do with computers are judged
> on outcome in the sense of how it leverages the talents of people or how it
> improves their happiness.  SGML, for example, was much more coherent and
> general in design than HTML, yet did mankind overall little good (other than
> indirectly as a breeding ground for HTML-ish ideas, which I don't mean to
> discount).  HTML, by contrast, was a technical kludge, but has connected
> mankind's accumulated trivia inextricably, and is therefore reasonably counted
> as superior.  

If rubes don't recognize a computer, the problem ain't faulty technology but
ignorance. If SGML hasn't made a splash but HTML did, that's a problem of
social immaturity. SGML remains a superior tool. If C droids don't
appreciate lisp, that's trauma of the droids, not lisp.

Computer languages have a solid mathematical foundation. As such tools, the
more solid their technical properties, the better they are. Because they are
not 100% mathematics, their validity or usefulness is partially dependent on
their user. C gurus may beat Lisper wannabes but that does not mean C is a
superior language. The fact that C has made much more contribution to the
world does not mean C is superior. The basis of computer languages' merit
lies in their mathematical properties. It is this metric that we should use
as a guide for direction.

As an analogy, we measure the quality of a hammer by scientific principles:
ergonomics, material (weight, hardness...), construction, statistical
analysis of accidents/productivity/... ...etc., not by vogue or lore. If we
go by feelings and preferences, hammer's future will deviate and perhaps
become dildos or maces.

> As with the HTML example I cited, there is a complex problem with getting
> ordinary people interested in stuff.  HTML got people interested enough in
> SGML-ish concepts that they wanted to reinvent SGML in the form of XML.  I
> think XML has its own problems, but it will likely survive SGML because it
> first created a need and then satisfied the need, unlike Scheme, which does
> it backward.

Aim to have superior tools and knowledgeable people, not to downgrade tools
or stagnate to fit ordinary people. Education is the key. (education is
always good in general.)

> Math isn't the goal of CS, it's just a tool.  Fruit smoothies on a tropical
> island is the goal.

If you thirst for an elixir, math is the only way to go. Anything else will
get you less. Nobody said anything about math being a goal.

--

a core aspect of progress is rationalism, science, and math. (math is
_"anything"_ reduced to its bare abstraction.) The progress of human beings
through history is pretty much a reflection of its body of (scientific)
knowledge. Eastern philosophies (e.g. Taoism) ignore logic but focus on
metaphysics and mumble jumble much the same way you or the farting Naggum
like to speak about human nature this or that when obsessed with Common
Lisp. At best it's a vacuous art; of little significance.

having read many of your verbose newsgroup writings about human nature &
Common Lisp, i'm drawn to the thought that Perl must be a god-like language
in your mind, because every one of your naturalism/relativism sentiments
fits it well. Apparently, some lispers like Perl (e.g. Barry Margolin ?),
and some don't (e.g. nagging Naggum). Perhaps i'm being redundant, but could
you explain why you DO or DO NOT like Perl?

Barry Margolin about 2 or 3 years ago pointed out the strong similarity of
Perl & Common Lisp, so did other lispers occasionally. Such marvelous
indiscretion makes one vomit.

 Xah
 ···@xahlee.org
 http://xahlee.org/PageTwo_dir/more.html
From: Lieven Marchand
Subject: Re: Separation in function and value cells (was Re: newbie:   please don't smash my case)
Date: 
Message-ID: <m3lmz31f2n.fsf@localhost.localdomain>
Xah <···@xahlee.org> writes:

> Aim to have superior tools and knowledgeable people, not to downgrade tools
> or stagnate to fit ordinary people. Education is the key. (education is
> always good in general.)

From a business point of view, if your problem can be handled by
throwing a lot of average people at it, this is probably the best
solution. It increases the Bus Number of your project (being defined
as the number of people on your project that can be hit by a bus
without a too serious impact on the project). Suppose a problem can be
solved by an effort of 10 programmers just graduated from a trade
school by using Visual Basic in 6 months and suppose one experienced
Lisp hacker could do it in 8 months. (These kinds of differences in
productivity have been measured in studies. Computer programming is
one of the fields in which the difference in productivity by
professional practitioners is huge.) Now when your Common Lisp guru
breaks something and is hospitalized for 2 months you're in
trouble. When one of your 10 programmers does the same, your project
goes on.

Mass market tools will always cater to the "ordinary people" (your
term) because that's what defines that market. People nagging about
the cost of commercial CL systems miss this point.

-- 
Lieven Marchand <···@bewoner.dma.be>
When C++ is your hammer, everything looks like a thumb.      Steven M. Haflich
From: Xah
Subject: Re: Separation in function and value cells (was Re: newbie:   please don't smash my case)
Date: 
Message-ID: <B59647A9.D187%xah@xahlee.org>
Lieven Marchand <···@bewoner.dma.be> Jul 2000 19:25:04 +0200 wrote:
> From a business point of view, if your problem can be handled by
> throwing a lot of average people at it, this is probably the best
> solution. ...
>...
> Mass market tools will always cater to the "ordinary people" (your
> term) because that's what defines that market. People nagging about
> the cost of commercial CL systems miss this point.

I was talking about language criteria in the context of long-term progress
in a critique of Kent Pitman's unflagging appeal to soft socio-psych-ology
in defense of CL. I was not talking about how businesses cater or how tools
actually take shape.

Where does your shit come from?

(A: the mouth of an ass.)

PS: Answers to "Why" questions have two distinct flavors. One is the
_history_ of something, and the other is the cause and/or reason behind
something. This confusion and stupidity of English is the error in a lot of
debates.

 Xah
 ···@xahlee.org
 http://xahlee.org/PageTwo_dir/more.html
From: William Deakin
Subject: Re: Separation in function and value cells (was Re: newbie:  please  don't smash my case)
Date: 
Message-ID: <397326CD.1EDE12C5@pindar.com>
Xah wrote:
> Answers to "Why" questions have two distinct flavors. One is the
> _history_ of something, and the other is the cause and/or reason behind
> something. 
are these not the same?

> This confusion and stupidity of English is the error in a lot of
> debates.
AI lang-parse recode time,

;)will
From: Lieven Marchand
Subject: Re: Separation in function and value cells (was Re: newbie:  please don't smash my case)
Date: 
Message-ID: <m3r98yt0zg.fsf@localhost.localdomain>
Xah <···@xahlee.org> writes:

> By that merry-go-round token, let's stop insisting that 1+1==1 is wrong.
> Because, after all, that needs a context too!
> 

In the abelian group of order 1 it is right. How's that for context?

> The nature of progress is not relativism, but absolutes and generalities.

Historical evidence seems to be against you. Most one-issue languages
are fairly soon left behind. (Everything is a matrix -> APL,
Everything is an object -> Smalltalk, ... ) Languages like Lisp that
support multiple paradigms are able to adapt to the times and
incorporate new insights. A monoculture is inherently unstable.

> The measure of a (generic human->computer) language lies in purely technical
> aspects alone, not psychology or sociology. The merits of a computer
> language lie in the measure of its technical properties, not culture or
> fads. 

Years of experience have taught me that it's never the technical stuff
that gets you in trouble, it's the politics and the
personalities. Must be nice on your planet though.

> One equation is worth more than a billion words. The only real advance in
> computer science is mathematics, alone.

Mathematics is more diverse and more a matter of personal choice and
insight than computer languages. Put together some set theorists, some
category theorists and some intuitionists and let them argue over the
foundations of mathematics and you get a worse fight than with
programmers over computer languages.

-- 
Lieven Marchand <···@bewoner.dma.be>
When C++ is your hammer, everything looks like a thumb.      Steven M. Haflich
From: Thomas A. Russ
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <ymig0plevmm.fsf@sevak.isi.edu>
Simon Brooke <·····@jasmine.org.uk> writes:

> Rudolf Schlatte <········@ist.tu-graz.ac.at> writes:
> 
> > ····@krikkit.localdomain (Dowe Keller) writes:
> > 
> > > Hello, I am a lisp newbie with a slight problem, the following code
> > > works fine except that it smashes case when it converts from symbols
> > > to strings (I hope my jargon is correct).  How do I keep the case
> > > information intact?
> > > 
> > > (defun quote-all (lst)
> > >     (if (not (null lst))
> > > 	(cons (string (car lst))(quote-all (cdr lst)))))
> > 
> > If I understand you, you mean this effect:
> > 
> > > (quote-all '(foo bar baz))
> > ("FOO" "BAR" "BAZ")
> > 
> > That's not a bug in your code.  Symbols in Lisp are case-insensitive,
> > look:
> > 
> > > 'foo
> > FOO
> 
> Errr... this needs a bit of expansion. Symbols in Common LISP are
> case-insensitive. Common LISP is the most prevalent LISP in use these
> days, but that doesn't make it the only one. Symbols in InterLISP and
> Portable Standard LISP, for example, are case sensitive.

Actually, I would quibble with this.  Symbols in Common Lisp are, in
fact, case sensitive.  It is just that the default reader will upcase
all (non-escaped) symbol names before interning the symbol.

That is why  (eq 'foo 'FOO)  =>  T   
        and  (eq '|foo| '|FOO|)  => NIL
The symbols must be case sensitive for the latter to return NIL.  The
fact that the reader is doing the case conversion for you makes it
appear that lisp is case insensitive when in fact it isn't.
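
A quick sketch of what the reader has already done before QUOTE-ALL ever
sees the symbols (untested, default readtable assumed):

  (symbol-name 'foo)      ; => "FOO"  -- the reader upcased the name
  (symbol-name '|foo|)    ; => "foo"  -- escaped, so the case survives
  (symbol-name 'f\oo)     ; => "FoO"  -- single-character escape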

The "problem" that the newbie described was occurring during the reading
in of the symbols, so that means that nothing that could be done in the
function QUOTE-ALL could fix it.  That is because information was lost
during the reading process.

Now it turns out that you can set the Lisp reader to behave
differently.  If you set the readtable case to :preserve, then you tell
the reader not to do the case conversions.  One could do this to the
global readtable by
   (setf (readtable-case *readtable*) :preserve)
but I would be reluctant to do this for fear of breaking other code that
doesn't expect this.  For example, after doing this, the following would
fail:

  (defun quote-all (lst)
    (if (not (null lst))
        (cons (string (car lst))
              (quote-all (cdr lst)))))

because the symbol |defun| isn't defined (among others).  You would have
to write all of the built-in lisp functions in uppercase:

  (DEFUN quote-all (lst)
    (IF (NOT (NULL lst))
        (CONS (STRING (CAR lst))
              (quote-all (CDR lst)))))

Generally when I want to preserve the case, I create a new readtable and
then bind *READTABLE* to it around the forms that need to be read
case-sensitively.  Usually I end up doing something like:

  (defvar *case-sensitive-readtable* (copy-readtable nil))
  (setf (readtable-case *case-sensitive-readtable*) :preserve)

and then using it in a form like:

  (let ((*readtable* *case-sensitive-readtable*))
   (read ...)
   (read-from-string ....)
   ...)

For the example in question, one could do:

  (let ((*readtable* *case-sensitive-readtable*))
    (quote-all (read-from-string "(foo bar baz)")))

 => ("foo" "bar" "baz")

Finally, one can ask why symbols are being used instead of strings in
this particular application at all.

-- 
Thomas A. Russ,  USC/Information Sciences Institute          ···@isi.edu    
From: Johan Kullstam
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <uem55ixmk.fsf@res.raytheon.com>
···@sevak.isi.edu (Thomas A. Russ) writes:

> Finally, one can ask why symbols are being used instead of strings in
> this particular application at all.

it seems to be a lisp tradition to do so.  many lisp texts do this,
e.g., peter norvig's PAIP (look at the eliza program).

you make the symbol-name be your string and let the lisp reader
tokenize.  you can also hang stuff off the symbol on the value,
function or property-list slots.

the reader case-smash makes sense for symbols used as code, but less
so for symbols used as a kind of ersatz string.

the symbol as string idea doesn't sit well with me either, but perhaps
i am missing something important.  if i were doing it, i'd use strings
(or a structure or class containing the string and whatever hangers-on
it needed) and write a string-tokenizer instead of (ab)using the
reader.
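
something along these lines is what i have in mind -- a rough, untested
sketch (the name tokenize is made up):

  ;; split a string on a separator character, dropping empty tokens
  (defun tokenize (string &optional (separator #\Space))
    (let ((tokens '())
          (start 0))
      (loop
        (let ((end (position separator string :start start)))
          ;; collect the piece between start and end unless it is empty
          (unless (eql start (or end (length string)))
            (push (subseq string start end) tokens))
          (if end
              (setf start (1+ end))
              (return (nreverse tokens)))))))

  (tokenize "foo bar baz")   ; => ("foo" "bar" "baz"), case intact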

can someone with more experience provide some insights?

-- 
johan kullstam l72t00052
From: Rainer Joswig
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <rainer.joswig-FF04C2.11533308072000@news.is-europe.net>
In article <·············@res.raytheon.com>, Johan Kullstam 
<········@ne.mediaone.net> wrote:

> the symbol as string idea doesn't sit well with me either, but perhaps
> i am missing something important.  if i were doing it, i'd use strings
> (or a structure or class containing the string and whatever hangers-on
> it needed) and write a string-tokenizer instead of (ab)using the
> reader.
> 
> can someone with more experience provide some insights?

You can "INTERN" symbols. Interning symbols means that
you put them in a package. Reading FOO and later
reading FOO will give you the same symbol (an object).
Reading "FOO" and later reading "FOO" will give you
two string objects.
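
For example (untested, default readtable, same current package for both
reads):

  (eq (read-from-string "FOO") (read-from-string "FOO"))
  ;; => T   -- both reads return the one interned symbol FOO

  (eq (read-from-string "\"FOO\"") (read-from-string "\"FOO\""))
  ;; => NIL -- each read constructs a fresh string object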

-- 
Rainer Joswig, BU Partner,
ISION Internet AG, Steinhöft 9, 20459 Hamburg, Germany
Tel: +49 40 3070 2950, Fax: +49 40 3070 2999
Email: ····················@ision.net WWW: http://www.ision.net/
From: Steven M. Haflich
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <39693588.860A336C@pacbell.net>
Rainer Joswig wrote:
> 
> You can "INTERN" symbols. Interning symbols means that
> you put them in a package.

No no no.  Nein nein nein.  Gubbish, rubbish, and false:

You cannot INTERN a SYMBOL.  INTERN accepts only a STRING as the
first argument.  The result of calling INTERN of a STRING is
either the SYMBOL previously present in the PACKAGE, or else a
newly created SYMBOL INTERNed in the PACKAGE.
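
A rough sketch of the actual contract (return values shown as comments,
assuming FRED is not already present in the current PACKAGE):

  (intern "FRED")              ; => FRED, NIL        -- newly created
  (intern "FRED")              ; => FRED, :INTERNAL  -- already present
  (eq (intern "FRED") 'fred)   ; => T                -- the same SYMBOL
  (intern "fred")              ; => |fred|, NIL      -- a different SYMBOL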

In normal usage it doesn't matter when one writes informally and
inaccurately.  But when investigating and explicating the
semantics of the language, clarity and precision are paramount.

IMPORT and other operators may cause a SYMBOL to become present
in a PACKAGE.
From: Hartmann Schaffer
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <396766db@news.sentex.net>
In article <··············@gododdin.internal.jasmine.org.uk>,
	Simon Brooke <·····@jasmine.org.uk> writes:
> ...
> Errr... this needs a bit of expansion. Symbols in Common LISP are
> case-insensitive. Common LISP is the most prevalent LISP in use these

are they?  what about x\yz or |aBcD| ?  unless i am grossly mistaken,
those symbols retain their case, and i also think that the
conversion to upper case is done in the reader.
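
e.g. (untested):

  (symbol-name 'x\yz)     ; => "XyZ"  -- only the escaped character keeps its case
  (symbol-name '|aBcD|)   ; => "aBcD" -- everything between the bars is preserved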

> days, but that doesn't make it the only one. Symbols in InterLISP and
> Portable Standard LISP, for example, are case sensitive.
> 
>> How that came to be and if it's a good idea still is being debated by
>> people much more experienced than me.
> 
> Allegedly[1], part of the US DoD requirement was that Common LISP
> should be usable with a type of terminal which had no shift key, and
> could display only upper case. The DoD was very influential in the
> beginnings of the Common LISP project.

available hardware at the time of development is one reason.  another
factor that influenced the decision about case sensitivity seems to be
the question of whether program text should be talked about or only
viewed.

> ...

-- 

Hartmann Schaffer
From: John Foderaro
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <MPG.13d39228f6c5dd45989682@news.dnai.com>
In article <··············@gododdin.internal.jasmine.org.uk>, 
·····@jasmine.org.uk says...
> 
> Allegedly[1], part of the US DoD requirement was that Common LISP
> should be usable with a type of terminal which had no shift key, and
> could display only upper case. The DoD was very influential in the
> beginnings of the Common LISP project.

No, this had nothing to do with the decision.   When Common Lisp was 
being designed it was technically possible to make the case-sensitive 
and case-insensitive people happy.  In fact the most widely distributed 
Lisp at the time could support both modes.  You could write case-
sensitive code and also load in Macsyma, a very large Lisp application 
written for a case-insensitive Lisp (MacLisp).   [By case-sensitive 
support I mean that the reader has a mode where you can naturally 
program in a case-sensitive manner in your environment; on Unix that 
means you don't have to write everything in all caps, and that 
symbol names behave normally (i.e. the symbol foo has a print name 
of "foo").]

Allegro CL has always supported both modes, although it could have been 
done in a much nicer way if the support had been part of the Common Lisp 
spec.   I tried to convince the committee that case sensitivity was 
important for Lisp as it was being used in case-sensitive environments 
(e.g. the Unix operating systems where system library names are case 
sensitive).   Some members didn't believe that it was possible to have a 
programming language with case-sensitive identifiers (because you 
couldn't speak your programs), and others didn't want anyone to have the 
ability to write case-sensitive lisp code for fear that they might one day 
have to read it [I'm serious, these are the actual arguments against 
support for a case-sensitive lisp].

So rather than design a language that would have made everyone happy, 
the committee chose explicitly to exclude the case-sensitive group.   
Since that time the case-sensitive world has grown dramatically (C, C++, 
Perl, Java, XML) and interfacing with it is critical for Lisp's survival 
(we Lisp programmers can't write everything; we have to link to code 
written by outsiders).   I've been using a case-sensitive lisp for 20 
years now and I have to say that it makes interfacing with things in the 
Unix and Windows world much easier.
From: Erik Naggum
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <3171968032376514@naggum.net>
* Rudolf Schlatte <········@ist.tu-graz.ac.at>
| Symbols in Lisp are case-insensitive,

  Wrong.

| - If you are feeling adventurous, you can make the lisp reader case
| sensitive.

  Correct, the Lisp reader is doing the case manipulation.

#:Erik
-- 
  If this is not what you expected, please alter your expectations.
From: Wolfhard Buß
Subject: Re: newbie: please don't smash my case
Date: 
Message-ID: <m3ya3ei60s.fsf@minka.katzen.ol>
····@krikkit.localdomain (Dowe Keller) writes:


> ... How do I keep the case information intact?

> (defun quote-all (lst)
>     (if (not (null lst))
> 	(cons (string (car lst))(quote-all (cdr lst)))))


How about


(defun quote-all (lst)
  (if lst
      (cons (symbol-name (first lst)) (quote-all (rest lst)))
      nil))

(quote-all '(|Common| |Lisp| |is| |a| |big| |language|))
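;; should return ("Common" "Lisp" "is" "a" "big" "language")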


or
	   
(defun quote-all (lst)
  (mapcar #'symbol-name lst))


-wb