From: David Fox
Subject: Re: please tell me the design faults of CL & Scheme
Date: 
Message-ID: <72CAF6F0FA013229.CE93BDD56DC6F6E0.A9D06992CDD1E9A0@lp.airnews.net>
"Julian Morrison" <······@extropy.demon.co.uk> writes:

> Please tell me the parts of (1)Common Lisp (2) Scheme that are "broken as
> designed". 

One problem I have with Common Lisp is the separate name spaces for
functions and variables.
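
For example, in CL the same symbol can name a variable and a function
at the same time, and the reader has to keep track of which namespace
each occurrence lives in:

```lisp
;; The parameter LIST (variable namespace) does not shadow
;; the standard function LIST (function namespace):
(defun first-two (list)
  (list (first list) (second list)))

(first-two '(a b c))   ; => (A B)
```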

From: Kent M Pitman
Subject: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfwofvge0py.fsf_-_@world.std.com>
[ comp.lang.scheme removed.
  http://world.std.com/~pitman/pfaq/cross-posting.html ]

·····@cogsci.ucsd.edu (David Fox) writes:

> "Julian Morrison" <······@extropy.demon.co.uk> writes:
> 
> > Please tell me the parts of (1)Common Lisp (2) Scheme that are "broken as
> > designed". 
> 
> One problem I have with Common Lisp is the separate name spaces for
> functions and variables.

We looked at this issue during the design of ANSI CL and decided there
were some strong reasons for our approach.  I doubt that you can
adequately defend the single namespace approach as a "design flaw".

Btw, this is a great time for me to put out a plug for the paper Dick
Gabriel and I wrote on this at the time of the ANSI CL stuff.  The
original discussion at ANSI was longer because it included both
technical and non-technical issues.  We distilled it down to just the
technical stuff for inclusion in the first edition of the journal
"Lisp and Symbolic Computation".  Anyway, you can read the distilled
version on the web now at

 http://world.std.com/~pitman/Papers/Technical-Issues.html

NOTE WELL: If you look closely, this paper reads a little like a
debate.  Gabriel and I wrote it because we disagreed on the answer,
and it goes back and forth like a dialog in places, suggesting one
thing and then immediately countering it.  If you find such places,
that's probably interleaved paragraphs of him talking and me talking.
But I learned long ago that people who follow a debate always come out
thinking their hero won.  So I've talked to people on both
sides of the issue who believe this is finally the conclusive paper
supporting their position, whichever position they have.  Personally,
and perhaps because I'm on that side of things, I think the
*technical* arguments argue for multiple namespaces because there is
an efficiency issue that is tough to get around in a single namespace
Lisp, ESPECIALLY one that religiously eschews declarations to help the
compiler in places where automated proof techniques are going to slow
things down a lot.  But I think at minimum a fair reading of this will
tell you that there is no substantial technical reason to believe a
multi-namespace Lisp is flawed, and that this is largely an issue of
style.

I also think, although the paper doesn't get into it, that
people's brains plainly handle multiple namespaces and contexts naturally
because it comes up all the time in natural language, and that it's a
shame for a computer language not to take advantage of wetware we already
have for things.  Claims of simplicity are often, and without justification,
measured against mathematical notions of an empty processor or a simple
processor that you could build.  But since programming languages are designed
for people, I think simplicity should be measured against our best guess
as to what processor PEOPLE have, and that leads to wholly different
conclusions.
From: Raffael Cavallaro
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <raffael-BC571A.13012505032001@news.ne.mediaone.net>
In article <··················@world.std.com>, Kent M Pitman 
<······@world.std.com> wrote:

>But since programming languages are designed
>for people, I think simplicity should be measured against our best guess
>as to what processor PEOPLE have, and that leads to wholly different
>conclusions.

Just to play devil's advocate, isn't this Larry Wall's argument for the 
complexity and TMTOWTDI of Perl? I guess the question then becomes what 
is the right balance between consistent abstraction and the complexity 
and inconsistency introduced by multiple contexts.

Ralph

-- 

Raffael Cavallaro, Ph.D.
·······@mediaone.net
From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfwlmqkqdc0.fsf@world.std.com>
Raffael Cavallaro <·······@mediaone.net> writes:

> In article <··················@world.std.com>, Kent M Pitman 
> <······@world.std.com> wrote:
> 
> >But since programming languages are designed
> >for people, I think simplicity should be measured against our best guess
> >as to what processor PEOPLE have, and that leads to wholly different
> >conclusions.
> 
> Just to play devil's advocate, isn't this Larry Wall's argument for the 
> complexity and TMTOWTDI of Perl? I guess the question then becomes what 
> is the right balance between consistent abstraction and the complexity 
> and inconsistency introduced by multiple contexts.

I'm not sure we're disagreeing.  I would count the bookkeeping required to
keep track of notations like Perl or Teco, and to be sure you're correctly
composing things, as part of what the human processor does and must be
measured against.

I do think there is a balance to be struck, and that the solution isn't at
one end of the spectrum or the other.  Probably little factoids like the
number of short term memory slots and other such things create the parameters
that dictate where the "middle" is on such a spectrum.

Indeed, one of the criticisms that is made against multiple namespaces is
that it increases the complexity of the formal semantics.  I don't do formal
semantics stuff myself, so I can't say.  However, people I trust have assured
me that supporting an infinite number of namespaces would be trivial.
However, I think that would also increase program complexity because of the
mental bookkeeping, etc.  That's why I don't think the formal semantics
is predictive.  I think the middle ground of "just a few namespaces"
is most appropriate to how people think, regardless of what the formal
semantics says.  The more the formal semantics leads me into spaces that
I think don't use the brain well, the more I dislike it as a guiding force
in language design.
From: Janis Dzerins
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <87r90bmg5r.fsf@asaka.latnet.lv>
Kent M Pitman <······@world.std.com> writes:

>  http://world.std.com/~pitman/Papers/Technical-Issues.html
> 
> NOTE WELL: If you look closely, this paper reads a little like a
> debate.  Gabriel and I wrote it because we disagreed on the answer,
> and it goes back and forth like a dialog in places, suggesting one
> thing and then immediately countering it.  If you find such places,
> that's probably interleaved paragraphs of him talking and me talking.

XPW: eXtreme paper-writing! (ok, just a subset -- pair paper-writing.)

-- 
Janis Dzerins

  If a million people say a stupid thing, it's still a stupid thing.
From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfw66hnw3el.fsf@world.std.com>
Janis Dzerins <·····@latnet.lv> writes:

> Kent M Pitman <······@world.std.com> writes:
> 
> >  http://world.std.com/~pitman/Papers/Technical-Issues.html
> > 
> > NOTE WELL: If you look closely, this paper reads a little like a
> > debate.  Gabriel and I wrote it because we disagreed on the answer,
> > and it goes back and forth like a dialog in places, suggesting one
> > thing and then immediately countering it.  If you find such places,
> > that's probably interleaved paragraphs of him talking and me talking.
> 
> XPW: eXtreme paper-writing! (ok, just a subset -- pair paper-writing.)

At a distance, btw.  3000 miles.  FWIW.
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-09FC6A.14430906032001@news.nzl.ihugultra.co.nz>
In article <··················@world.std.com>, Kent M Pitman 
<······@world.std.com> wrote:

>  http://world.std.com/~pitman/Papers/Technical-Issues.html

Thanks for this.

(Re the instructions at the top, there *are* quite a few typos, starting 
with "proved on[e] of the most important" in the second para and the 
same typo in the very next sentence.)


The strongest arguments I see there are:

1) Lisp2 conveys extra type information, namely that you can call what 
is in the function cell *knowing* that it is a function -- you don't 
have to check first.

2) macros vs namespaces: "There are two ways to look at the arguments 
regarding macros and namespaces. The first is that a single namespace is 
of fundamental importance, and therefore macros are problematic. The 
second is that macros are fundamental, and therefore a single namespace 
is problematic."


I believe that a lexically-scoped Lisp1 that has a) type declarations, 
and b) hygienic macros avoids both problems.

I think that 1) is pretty obvious.  Two namespaces is a pretty weak type 
system -- why not go further and have different namespaces for scalars, 
arrays, hashes, labels, globs and God-only-knows-what-else.  You can 
introduce special symbols such as $, @, % to distinguish them in 
ambiguous contexts.  Well, we know what *that* language is called :-)

Even if the function cell is known not to be data, what if it's empty?  
Don't you have to check for that?  Or are symbols in CL bound to some 
sort of error function by default?


Re 2): <quote>

(DEFMACRO MAKE-FOO (THINGS) `(LIST 'FOO ,THINGS))

Here FOO is quoted, THINGS is taken from the parameter list for the 
Macro, but LIST is free. The writer of this macro definition is almost 
certainly assuming either that LIST is locally bound in the calling 
environment and is trying to refer to that locally bound name or that 
list is to be treated as constant and that the author of the code will 
not locally bind LIST. In practice, the latter assumption is almost 
always made. 

If the consumer of the above macro definition writes 

 (DEFUN FOO (LIST) (MAKE-FOO  (CAR  LIST)))

in Lisp1, there will probably be a bug in the code. 

</quote>

If the free use of LIST in the macro is defined by the language to refer 
to the lexical binding of LIST at the point where the macro is *defined* 
then there is no problem.  It will continue to refer (presumably) to the 
global function that creates a list from its arguments.  The (CAR LIST) 
in the use of the macro will refer to the argument of FOO.

If the writer of the macro (*not* the user of the macro) intends the use 
of LIST to refer to the binding at the point of use of the macro then 
they can indicate this using a suitable "hygiene-breaking" notation.  
This is something that should be done only rarely -- better in this case 
to make LIST another explicit argument of the macro.
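
For concreteness, here is the quoted example as a Lisp2 resolves it
(no hygiene machinery needed, because the two uses of LIST live in
different namespaces):

```lisp
(defmacro make-foo (things) `(list 'foo ,things))

;; The parameter LIST occupies only the variable namespace, so the
;; LIST in the expansion still refers to the standard function:
(defun foo (list) (make-foo (car list)))

(foo '(1 2 3))   ; => (FOO 1)
```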


Of course none of this is new today and this *is* an old paper, but 
since it is being re-presented today to justify Lisp2 perhaps some note 
should be made of the advances made (e.g. by Dylan, but also recently by 
Scheme) since the paper was written?

As a historical explanation of why things were done the way they were 
twenty years ago it is of course great.



> I also think, although I think the paper doesn't get into it, that
> people's brains plainly handle multiple namespaces and contexts naturally

Well, perl certainly seems to prove that.  I just like to write perl 
code with as many different uses of the same name as possible.  Such as 
...

  next b if $b{$b} = <b>;

Yum :-)

-- Bruce
From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfw4rx7w35r.fsf@world.std.com>
Erik Naggum <····@naggum.net> writes:

> * Bruce Hoult <·····@hoult.org>
> > Even if the function cell is know not to be data, what if it's empty?  
> > Don't you have to check for that?  Or are symbols in CL bound to some 
> > sort of error function by default?
> 
>   You mean, unbound?  What the implementation does with an unbound function
>   slot in a symbol is not specified in the standard.  One smart way is to
>   make the internal representation of "unbound" be a function that signals
>   the appropriate error.  That would make function calls faster, and you
>   could not have optimized away the check for boundness if you asked for
>   the functional value, anyway.  Note that the user of this code would
>   never know how you represented the unbound value unless he peeked under
>   the hood, say by inspecting a symbol.

Exactly.  This is the efficiency issue I mentioned, which cannot be 
duplicated in a Lisp1 without either massive theorem proving (takes lots
of time) or declarations (which Scheme, for example, won't do, it seems
to me at least partially because the same minimalist mindset that drives
them to want to be a Lisp1 also drives them to want to be declaration-free).
Consequently, unless you are happy with just having programs execute 
machine level garbage, there are certain function calls which are inherently
faster in a Lisp2 than in a Lisp1, assuming you believe (as I believe both
CL and Scheme designers believe) that functions are called more often than
they are defined.  A Lisp2 can take advantage of this to check once at 
definition time, but a Lisp1 cannot take advantage because it can't 
(due to the halting problem) check the data flow into every (f x) to be
sure that f contained a valid machine-runnable function.
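
The trick Erik describes can be modeled in a few lines of toy CL
(hypothetical names, not any implementation's actual code):

```lisp
;; The function cell is never "empty": the unbound marker is itself
;; a function that signals the error, so the calling sequence can
;; jump through the cell with no boundness check at all.
(defun make-fcell (name)
  (list (lambda (&rest args)
          (declare (ignore args))
          (error "Undefined function: ~S" name))))

(defun set-fcell (cell fn)
  (setf (car cell) fn))        ; check FN once, at definition time

(defun fcell-call (cell &rest args)
  (apply (car cell) args))     ; fast path: no check, ever
```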
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <g0gq7ip4.fsf@content-integrity.com>
Kent M Pitman <······@world.std.com> writes:

> Exactly.  This is the efficiency issue I mentioned, which cannot be 
> duplicated in a Lisp1 without either massive theorem proving (takes lots
> of time) or declarations (which Scheme, for example, won't do, it seems
> to me at least partially because the same minimalist mindset that drives
> them to want to be a Lisp1 also drives them to want to be declaration-free).
> Consequently, unless you are happy with just having programs execute 
> machine level garbage, there are certain function calls which are inherently
> faster in a Lisp2 than in a Lisp1, assuming you believe (as I believe both
> CL and Scheme designers believe) that functions are called more often than
> they are defined.  A Lisp2 can take advantage of this to check once at 
> definition time, but a Lisp1 cannot take advantage because it can't 
> (due to the halting problem) check the data flow into every (f x) to be
> sure that f contained a valid machine-runnable function.

This is a red herring.

The issue of whether a particular address contains executable code
and whether it would be legal to load that address into the program
counter is an issue of linker protocol.  Lisp hackers tend to forget
about linking because lisp links things on the fly and makes it easy
to run a partially linked image.

Having a particular `slot' in a symbol to hold the function value is
an implementation detail.  There is no necessity for such a slot to
actually exist, but rather that such a slot *appear* to exist for the
intents and purposes of SYMBOL-FUNCTION and for free-references to the
function in code.  What matters is that when a piece of code calls
function FOO, it either invokes the most recent piece of code
associated with FOO, or invokes the error handler for an `unbound
function'.

One way to implement this is to have a cell in every symbol that can
contain the `function' definition for that symbol.  You could `link'
by having the compiler cause all function calls to push the symbol
naming the target function and jump to the linker.  The linker would
then look in the function cell of the symbol, and if it finds a
function, jump to the entry point, otherwise jump to the `unbound
function' handler.  You could call your linker `FUNCALL'.

Another way to implement this is to inline the linker functionality at
the call point.  The compiler would `open code' funcall by inserting
the instructions to fetch the contents of the function cell, test to
ensure it is a function, and either jump to the entry point or to the
error handler.

You could go a step further.  Arrange for the function cell to
*always* have a valid entry point, so that the `open coded funcall'
wouldn't have to check the validity.  The default entry point would be
the error handler.

But why stop there?

You could arrange for the compiler to go one step further:  rather
than open coding a funcall, it could simply place a jump or call
template in the code itself.  In essence, there is no longer one
function cell, but a set of function cells --- one at each call
point.  The code that implements SYMBOL-FUNCTION would be much more
complicated, of course.  (Note, too, that some architectures may
not be amenable to this since it requires patching code on the fly).

Take it further:  do arity checking at link time.  Only link to those
functions when the number of arguments is correct.

And further:  arrange for multiple function entry points.  Link to the
appropriate one based upon arity (for optional and rest arguments).
Special case to allow unboxed floats.

Why does this require a separate function and value space?  It
doesn't.  The same techniques will work in a single namespace lisp,
and the resulting code will run as quickly (why would a jump
instruction care what the source code looks like?)  The difference
occurs in the ease of implementation.  In a two-namespace lisp, the
more complicated you make the linker protocol, the more complicated
SYMBOL-FUNCTION and (SETF SYMBOL-FUNCTION) have to be.  In a
one-namespace lisp, this complexity will extend to SETQ and
special-variable binding as well.

There is no need for dataflow analysis or declarations.

If you would like more detail on how this works in practice, email me.

-----= Posted via Newsfeeds.Com, Uncensored Usenet News =-----
http://www.newsfeeds.com - The #1 Newsgroup Service in the World!
-----==  Over 80,000 Newsgroups - 16 Different Servers! =-----
From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfw8zmiv8ht.fsf@world.std.com>
Joe Marshall <···@content-integrity.com> writes:

> Kent M Pitman <······@world.std.com> writes:
> 
> > Exactly.  This is the efficiency issue I mentioned, which cannot
> > be duplicated in a Lisp1 without either massive theorem proving
> > (takes lots of time) or declarations (which Scheme, for example,
> > won't do, it seems to me at least partially because the same
> > minimalist mindset that drives them to want to be a Lisp1 also
> > drives them to want to be declaration-free).
> >
> > Consequently, unless you are happy with just having programs
> > execute machine level garbage, there are certain function calls
> > which are inherently faster in a Lisp2 than in a Lisp1, assuming
> > you believe (as I believe both CL and Scheme designers believe)
> > that functions are called more often than they are defined.  A
> > Lisp2 can take advantage of this to check once at definition time,
> > but a Lisp1 cannot take advantage because it can't (due to the
> > halting problem) check the data flow into every (f x) to be sure
> > that f contained a valid machine-runnable function.
> 
> This is a red herring.

Well, I don't agree.
 
> The issue of whether a particular address contains executable code
> and whether it would be legal to load that address into the program
> counter is an issue of linker protocol.  Lisp hackers tend to forget
> about linking because lisp links things on the fly and makes it easy
> to run a partially linked image.

And modern programmers tend to assume the only hardware Lisp was designed
for is the stuff you can buy right now.  On the PDP10, for example, you
could load the contents of any address into memory and execute it.
And you could just JRST or JSP to any memory location.  The linker
was not involved.

Surely it is the case that there are operating systems that protect you
better, and maybe increasingly this is how operating systems are designed.
But CL is not designed merely to accommodate a specific memory architecture
or operating system.
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <hf165w2d.fsf@content-integrity.com>
Kent M Pitman <······@world.std.com> writes:

> Joe Marshall <···@content-integrity.com> writes:
> 
> > Kent M Pitman <······@world.std.com> writes:
> > 
> > > Exactly.  This is the efficiency issue I mentioned, which cannot
> > > be duplicated in a Lisp1 without either massive theorem proving
> > > (takes lots of time) or declarations (which Scheme, for example,
> > > won't do, it seems to me at least partially because the same
> > > minimalist mindset that drives them to want to be a Lisp1 also
> > > drives them to want to be declaration-free).
> > >
> > > Consequently, unless you are happy with just having programs
> > > execute machine level garbage, there are certain function calls
> > > which are inherently faster in a Lisp2 than in a Lisp1, assuming
> > > you believe (as I believe both CL and Scheme designers believe)
> > > that functions are called more often than they are defined.  A
> > > Lisp2 can take advantage of this to check once at definition time,
> > > but a Lisp1 cannot take advantage because it can't (due to the
> > > halting problem) check the data flow into every (f x) to be sure
> > > that f contained a valid machine-runnable function.
> > 
> > This is a red herring.
> 
> Well, I don't agree.

I understand that you do, but I have outlined a mechanism that is used
in practice and appears to refute your claim.

> > The issue of whether a particular address contains executable code
> > and whether it would be legal to load that address into the program
> > counter is an issue of linker protocol.  Lisp hackers tend to forget
> > about linking because lisp links things on the fly and makes it easy
> > to run a partially linked image.
> 
> And modern programmers tend to assume the only hardware Lisp was designed
> for is the stuff you can buy right now.  

I am assuming modern hardware.

> On the PDP10, for example, you could load the contents of any
> address into memory and execute it.  And you could just JRST or JSP
> to any memory location.  The linker was not involved.

I am speaking of the `linker' as the abstract `thing that resolves
jump targets', not LD or whatever the OS provides.

> Surely it is the case that there are operating systems that protect you
> better, and maybe increasingly this is how operating systems are designed.
> But CL is not designed merely to accomodate a specific memory architecture
> or operating system.

Actually, the more modern operating systems are *more* amenable to
this technique, not less (because of DLLs and shared libraries).  Any
OS that allows dynamic loading of code has enough power to link in the
way I described.

This technique doesn't require anything unusual or too implementation
dependent, just a bit of cleverness.
From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfwhf16igjo.fsf@world.std.com>
Joe Marshall <···@content-integrity.com> writes:

> I understand that you do, but I have outlined a mechanism that is used
> in practice and appears to refute your claim.

Then what I'm saying is that you might have a fixnum pointer whose
backing store held a syntactically valid instruction to execute.  It
could, for example, be seen as a system
call.  And yet you could do (setq x that-fixnum) and if you could just
funcall to it without checking it for pointerness (as in the PDP10
bibop scheme, where checking meant consulting an external table), then
you'd end up jumping to garbage and executing it. (We used to do this
stuff intentionally in Maclisp.  But if you do it by accident, it's
scary.  Now, a loader, either whole-image loader or a dynamic loader,
might protect you.  But it might not.  That's my only point.)

> > > The issue of whether a particular address contains executable code
> > > and whether it would be legal to load that address into the program
> > > counter is an issue of linker protocol.  Lisp hackers tend to forget
> > > about linking because lisp links things on the fly and makes it easy
> > > to run a partially linked image.
> > 
> > And modern programmers tend to assume the only hardware Lisp was designed
> > for is the stuff you can buy right now.  
> 
> I am assuming modern hardware.

"current" hardware.  My point is that hardware continues to change and not
all changes are monotonically in a given direction.  You cannot quantify over
existing operating systems and assume you have quantified over the target
platforms for CL.

It would have been possible to pick a set of plausible architectures and
work over only those, and that would have led to a much different language.
I think more short-sighted but the trade-off might be "more useful".  I'm
not taking a position on that.  Dylan is an example of a language that I
think I remember making some very specific tactical assumptions about the
architecture (e.g., for numbers and character codes, maybe other things
too, like files).
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <3dcq5s2y.fsf@content-integrity.com>
Kent M Pitman <······@world.std.com> writes:

> Joe Marshall <···@content-integrity.com> writes:
> 
> > I understand that you do, but I have outlined a mechanism that is used
> > in practice and appears to refute your claim.
> 
> Then what I'm saying is that you might have a fixnum pointer whose
> backing store held an instruction which was a syntactically valid
> instruction to execute.  It could, for example, be seen as a system
> call.  And yet you could do (setq x that-fixnum) and if you could just
> funcall to it without checking it for pointerness (as in the PDP10
> bibop scheme, where checking meant consulting an external table), then
> you'd end up jumping to garbage and executing it. (We used to do this
> stuff intentionally in Maclisp.  But if you do it by accident, it's
> scary.  Now, a loader, either whole-image loader or a dynamic loader,
> might protect you.  But it might not.  That's my only point.)

Yes, I understand this point.  I'm arguing that you don't need to have
separate function and value namespaces in the source language in order
to efficiently deal with functions at compile, link, or run time.
Assume, for the moment, that your code has an expression (foo 'bar)
where FOO is free.  At link time, you check to see if FOO is bound to
a function.  If it is, you arrange for the code to be linked to FOO,
either directly (by modifying the code itself) or indirectly (by
modifying a jump table or uuo link, or even having a special `function
cell' associated with the symbol FOO).

Now supposing at some later time, someone does (setq foo 42).  You
arrange to invalidate the links to the function FOO.  You can do this
via hash tables, weak pointers in the symbol, groveling through all of
memory, or replacing the trampoline in the `function cell'.  Now every
place that used to call FOO directly ends up calling an error handler
trampoline, instead.

But all of this is *implementation* detail.  It can be done regardless
of whether your source language has a separate namespace for functions
and variables or not.
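
A toy model of that relinking protocol (hypothetical names; a real
implementation would patch code or UUO links rather than walk a list
of cells):

```lisp
;; Every call site for FOO jumps indirectly through its own link cell.
(defvar *foo-call-sites* '())

(defun unbound-foo (&rest args)
  (declare (ignore args))
  (error "Undefined function: FOO"))

(defun new-foo-call-site ()
  (let ((cell (list #'unbound-foo)))
    (push cell *foo-call-sites*)
    cell))

(defun link-foo (fn)                    ; at definition time
  (dolist (cell *foo-call-sites*)
    (setf (car cell) fn)))

(defun unlink-foo ()                    ; after, e.g., (SETQ FOO 42)
  (dolist (cell *foo-call-sites*)
    (setf (car cell) #'unbound-foo)))

(defun call-through (cell &rest args)
  (apply (car cell) args))              ; unconditional indirect jump
```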

It is certainly the case that the *implementation* can (and ought to)
treat functions and values as different things.

> > > > The issue of whether a particular address contains executable code
> > > > and whether it would be legal to load that address into the program
> > > > counter is an issue of linker protocol.  Lisp hackers tend to forget
> > > > about linking because lisp links things on the fly and makes it easy
> > > > to run a partially linked image.
> > > 
> > > And modern programmers tend to assume the only hardware Lisp was designed
> > > for is the stuff you can buy right now.  
> > 
> > I am assuming modern hardware.
> 
> "current" hardware.  

Something like a MIPS, Alpha or Pentium, for example.

> My point is that hardware continues to change and not
> all changes are monotonically in a given direction.  You cannot quantify over
> existing operating systems and assume you have quantified over the target
> platforms for CL.

No, but there are some generalizations I can make.  For instance, if
the OS disallows dynamic loading of code, you can't use UUO links.  On
the other hand, you couldn't incrementally compile, either.

> It would have been possible to pick a set of plausible architectures and
> work over only those, and that would have led to a much different language.
> I think more short-sighted but the trade-off might be "more useful".  I'm
> not taking a position on that.  Dylan is an example of a language that I
> think I remember making some very specific tactical assumptions about the
> architecture (e.g., for numbers and character codes, maybe other things
> too, like files).

I'm not suggesting that Common Lisp adopt a single namespace, or that
a single namespace is `better' than two namespaces.  I'm asserting
that a single namespace can be implemented with no less efficiency
than a dual namespace, and that such an implementation does not
require declarations or complex dataflow analysis.  This is because
the mechanism of linking does not depend on what happens at the
syntactic level.

From: Frode Vatvedt Fjeld
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <2hd7but3zn.fsf@dslab7.cs.uit.no>
Joe Marshall <···@content-integrity.com> writes:

> Now supposing at some later time, someone does (setq foo 42).  You
> arrange to invalidate the links to the function FOO. [...]

But this means _every_ setq needs to check if the previous value was a
function, no? Thus you've just changed the time of error checking from
apply-time to setq-time, which is still expectedly much more frequent
than defun-time.

-- 
Frode Vatvedt Fjeld
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-4DC7A7.00214608032001@news.nzl.ihugultra.co.nz>
In article <··············@dslab7.cs.uit.no>, Frode Vatvedt Fjeld 
<······@acm.org> wrote:

> Joe Marshall <···@content-integrity.com> writes:
> 
> > Now supposing at some later time, someone does (setq foo 42).  You
> > arrange to invalidate the links to the function FOO. [...]
> 
> But this means _every_ setq needs to check if the previous value was a
> function, no? Thus you've just changed the time of error checking from
> apply-time to setq-time, which is still expectedly much more frequent
> than defun-time.

You can easily make the setq merely set the code value to a known 
constant function which does the error checking, thus deferring the vast 
majority of the work to the first time the object is used as a function.
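
Sketched as toy code (hypothetical names):

```lisp
;; SETQ stays cheap: it stores the value and resets the code cell to
;; a checking trampoline.  The FUNCTIONP test runs only if the name
;; is actually used as a function afterward, and only once, because
;; the trampoline caches the function back into the cell.
(defvar *value* nil)
(defvar *code-cell* nil)

(defun checking-trampoline (&rest args)
  (if (functionp *value*)
      (progn (setq *code-cell* *value*)
             (apply *value* args))
      (error "~S is not a function" *value*)))

(defun %setq (v)
  (setq *value* v
        *code-cell* #'checking-trampoline))  ; no type test here

(defun %call (&rest args)
  (apply *code-cell* args))
```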

-- Bruce
From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfwpuft64tq.fsf@world.std.com>
Bruce Hoult <·····@hoult.org> writes:

> In article <··············@dslab7.cs.uit.no>, Frode Vatvedt Fjeld 
> <······@acm.org> wrote:
> 
> > Joe Marshall <···@content-integrity.com> writes:
> > 
> > > Now supposing at some later time, someone does (setq foo 42).  You
> > > arrange to invalidate the links to the function FOO. [...]
> > 
> > But this means _every_ setq needs to check if the previous value was a
> > function, no? Thus you've just changed the time of error checking from
> > apply-time to setq-time, which is still expectedly much more frequent
> > than defun-time.
> 
> You can easily make the setq merely set the code value to a known 
> constant function which does the error checking, thus deferring the vast 
> majority of the work to the first time the object is used as a function.

But here you have implementationally two namespaces.  You're just hiding one.
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-4F10B6.12141308032001@news.nzl.ihugultra.co.nz>
In article <···············@world.std.com>, Kent M Pitman 
<······@world.std.com> wrote:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > In article <··············@dslab7.cs.uit.no>, Frode Vatvedt Fjeld 
> > <······@acm.org> wrote:
> > 
> > > Joe Marshall <···@content-integrity.com> writes:
> > > 
> > > > Now supposing at some later time, someone does (setq foo 42).  You
> > > > arrange to invalidate the links to the function FOO. [...]
> > > 
> > > But this means _every_ setq needs to check if the previous value was 
> > > a
> > > function, no? Thus you've just changed the time of error checking 
> > > from
> > > apply-time to setq-time, which is still expectedly much more frequent
> > > than defun-time.
> > 
> > You can easily make the setq merely set the code value to a known 
> > constant function which does the error checking, thus deferring the 
> > vast 
> > majority of the work to the first time the object is used as a 
> > function.
> 
> But here you have implementationally two namespaces.  You're just hiding 
> one.

No, here I have implemented a cache.  This is an implementation 
technique not a language feature and there is no program for which the 
meaning changes as a result.

Two namespaces is a language feature whose presence or absence changes 
the meanings of programs.

-- Bruce
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <4rx59vg7.fsf@content-integrity.com>
In article <··············@dslab7.cs.uit.no>, Frode Vatvedt Fjeld 
<······@acm.org> wrote:

> Joe Marshall <···@content-integrity.com> writes:
> 
> > Now supposing at some later time, someone does (setq foo 42).  You
> > arrange to invalidate the links to the function FOO. [...]
> 
> But this means _every_ setq needs to check if the previous value was a
> function, no? 

No, only SETQs to free variables.

Note that writing a value cell often involves more than just a move
instruction.  (Consider ephemeral GC, forwarding pointers, etc.)  Thus
checking whether the prior value was a function adds little, if any,
overhead.

> Thus you've just changed the time of error checking from
> apply-time to setq-time, which is still expectedly much more frequent
> than defun-time.

SETQs to free variables are much less frequent than SETQs to bound
variables.

It may be true that SETQs to free variables are more frequent than
SETFs of symbol functions, but funcall-time is the dominant factor.
Even if the performance of SETQ were to drop noticeably, your code
would have to have a high ratio of SETQs to function calls for it to
make a significant performance difference.


From: Frode Vatvedt Fjeld
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <2h66hlqn40.fsf@dslab7.cs.uit.no>
Joe Marshall <···@content-integrity.com> writes:

> In article <··············@dslab7.cs.uit.no>, Frode Vatvedt Fjeld 
> <······@acm.org> wrote:
>
> > But this means _every_ setq needs to check if the previous value was a
> > function, no? 
> 
> No, only SETQs to free variables.

Do you mean free variables as opposed to lexically bound variables? I
fail to see why this is so (i.e. why wouldn't you have to check
lexically bound variables)..?

> Note that writing a value cell often involves more than just a move
> instruction.  (Consider ephemeral GC, forwarding pointers, etc.)
> Thus checking whether the prior value was a function adds little, if
> any, overhead.

Unless you have some better GC scheme.

-- 
Frode Vatvedt Fjeld
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <lmqh8cz7.fsf@content-integrity.com>
Frode Vatvedt Fjeld <······@acm.org> writes:

> Joe Marshall <···@content-integrity.com> writes:
> 
> > In article <··············@dslab7.cs.uit.no>, Frode Vatvedt Fjeld 
> > <······@acm.org> wrote:
> >
> > > But this means _every_ setq needs to check if the previous value was a
> > > function, no? 
> > 
> > No, only SETQs to free variables.
> 
> Do you mean free variables as opposed to lexically bound variables? 

Yes.

> I fail to see why this is so (i.e. why wouldn't you have to check
> lexically bound variables)..?

Because it is trivial for the compiler to determine whether they have
the potential to be used as functions.

(let ((answer nil))
  (dotimes (i 30) (push (foo i) answer)))

Since answer is not being used as a function anywhere it is visible,
there is no need to invalidate any links when assigning a non-function
value to it.


> > Note that writing a value cell often involves more than just a move
> > instruction.  (Consider ephemeral GC, forwarding pointers, etc.)
> > Thus checking whether the prior value was a function adds little, if
> > any, overhead.
> 
> Unless you have some better GC scheme.

GC isn't the only reason to read before a write.


From: Frode Vatvedt Fjeld
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <2hy9uhp5wv.fsf@dslab7.cs.uit.no>
Joe Marshall <···@content-integrity.com> writes:

> Frode Vatvedt Fjeld <······@acm.org> writes:
> 
> > I fail to see why this is so (i.e. why wouldn't you have to check
> > lexically bound variables)..?
> 
> Because it is trivial for the compiler to determine whether they
> have the potential to be used as functions.

Ok. This kind of thing doesn't really give me good vibes, but I
suppose the compiler can determine this in most situations.

But what if it determines that the variable does have the potential to
be used as a function?

> GC isn't the only reason to read before a write.

What are you thinking of?

-- 
Frode Vatvedt Fjeld
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <1ys0xgna.fsf@content-integrity.com>
Frode Vatvedt Fjeld <······@acm.org> writes:

> Joe Marshall <···@content-integrity.com> writes:
> 
> > Frode Vatvedt Fjeld <······@acm.org> writes:
> > 
> > > I fail to see why this is so (i.e. why wouldn't you have to check
> > > lexically bound variables)..?
> > 
> > Because it is trivial for the compiler to determine whether they
> > have the potential to be used as functions.
> 
> Ok. This kind of thing doesn't really give me good vibes, but I
> suppose the compiler can determine this in most situations.

It is a variant of free-variable analysis, and can be done in a single
top-down pass.

> But what if it determines that the variable does have the potential to
> be used as a function?

Then you do the check to invalidate the function cache when assigning
to the variable.
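The analysis can be sketched roughly as well.  Below is a hypothetical 
single top-down pass (Python, treating S-expressions as nested tuples; 
it handles only LAMBDA and ignores other special forms, and all names 
are invented) that collects the free variables appearing in operator 
position -- only assignments to those variables need the 
cache-invalidation check:

```python
def called_free_vars(form, bound=frozenset()):
    """Free variables used in operator (function) position in `form`."""
    if not isinstance(form, tuple) or not form:
        return set()                        # atoms call nothing
    head, rest = form[0], form[1:]
    if head == 'lambda':                    # (lambda (params...) body...)
        params, body = rest[0], rest[1:]
        inner = bound | set(params)
        result = set()
        for sub in body:
            result |= called_free_vars(sub, inner)
        return result
    result = set()
    if isinstance(head, str):
        if head not in bound:
            result.add(head)                # free variable, call position
    else:
        result |= called_free_vars(head, bound)
    for arg in rest:                        # arguments: recurse, but an
        result |= called_free_vars(arg, bound)   # atom there is not a call
    return result

# (lambda (answer) (push (foo answer) answer)): PUSH and FOO are called,
# ANSWER never is, so a SETQ to ANSWER needs no function-cache check.
form = ('lambda', ('answer',), ('push', ('foo', 'answer'), 'answer'))
assert called_free_vars(form) == {'push', 'foo'}
```

This mirrors the earlier DOTIMES example: a variable that never appears 
in operator position anywhere it is visible can be assigned with no 
extra work at all.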

> > GC isn't the only reason to read before a write.
> 
> What are you thinking of?

All the categories of invisible or forwarding pointers.


From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfwy9uhw5bc.fsf@world.std.com>
Joe Marshall <···@content-integrity.com> writes:

> In article <··············@dslab7.cs.uit.no>, Frode Vatvedt Fjeld 
> <······@acm.org> wrote:
>
> > Thus you've just changed the time of error checking from
> > apply-time to setq-time, which is still expectedly much more frequent
> > than defun-time.
> 
> SETQs to free variables are much less frequent than SETQs to bound
> variables.

But SETQs to defuns are much less frequent than SETQs to free variables.
So technically I win on the efficiency of a Lisp2.

I did say at the outset that this was a slim claim.  This is what I meant.
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <d7bt8778.fsf@content-integrity.com>
Kent M Pitman <······@world.std.com> writes:

> Joe Marshall <···@content-integrity.com> writes:
> 
> > In article <··············@dslab7.cs.uit.no>, Frode Vatvedt Fjeld 
> > <······@acm.org> wrote:
> >
> > > Thus you've just changed the time of error checking from
> > > apply-time to setq-time, which is still expectedly much more frequent
> > > than defun-time.
> > 
> > SETQs to free variables are much less frequent than SETQs to bound
> > variables.
> 
> But SETQs to defuns are much less frequent than SETQs to free variables.
> So technically I win on the efficiency of a Lisp2.

Not so fast, I get to split hairs as well.

If I arrange for the compiler to annotate where in the object code
SETQs are being performed, and what free variables are being SETQd,
then I can arrange it so that the linker can `patch up' only those
SETQs that modify free variables that could actually be used as a
function.

> I did say at the outset that this was a slim claim.  This is what I meant.

It's getting very slim...



From: David Thornley
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <4_bq6.275$Tg.44268@ruti.visi.com>
In article <···························@news.nzl.ihugultra.co.nz>,
Bruce Hoult  <·····@hoult.org> wrote:
>In article <··············@dslab7.cs.uit.no>, Frode Vatvedt Fjeld 
><······@acm.org> wrote:
>
>> Joe Marshall <···@content-integrity.com> writes:
>> 
>> > Now supposing at some later time, someone does (setq foo 42).  You
>> > arrange to invalidate the links to the function FOO. [...]
>> 
>> But this means _every_ setq needs to check if the previous value was a
>> function, no? Thus you've just changed the time of error checking from
>> apply-time to setq-time, which is still expectedly much more frequent
>> than defun-time.
>
>You can easily make the setq merely set the code value to a known 
>constant function which does the error checking, thus deferring the vast 
>majority of the work to the first time the object is used as a function.
>
At this point, any symbol has to have a value that is a function and
a value that isn't, and one hopes an easy way to tell which is which.
The implementation is therefore that of a Lisp-2 (or more), and you're
doing extra work to transform it into a Lisp-1.  In a Lisp-2, the symbol
can officially have two values, and the system can use one of those
or the other as needed.  In a Lisp-1, you have to change one of the
values to a known invalid whenever you set the other.

To be specific, if you execute (setq foo #'car) first and (setq foo 42)
later, in a Lisp-1 you have to change the function value of foo on the
second setq, or set some flag that indicates that it is invalid, or you
get a de facto Lisp-2, because foo means car or 42 depending on whether
it's in a function or a value context.
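The two storage layouts can be put side by side.  This is a 
hypothetical model (Python, with invented class names, and abs standing 
in for #'car): in a Lisp-2 the symbol carries two independent cells, so 
the second setq cannot disturb the function binding; in a Lisp-1 there 
is one cell, so the assignment clobbers the function meaning and the 
implementation must notice that, one way or another.

```python
class Lisp2Symbol:
    def __init__(self):
        self.value = None        # consulted in value context
        self.function = None     # consulted in operator context

class Lisp1Symbol:
    def __init__(self):
        self.value = None        # one cell serves both contexts

foo2 = Lisp2Symbol()
foo2.function = abs              # (setf (symbol-function 'foo) #'abs)
foo2.value = 42                  # (setq foo 42)
assert foo2.function(-3) == 3    # (foo -3) still calls the function
assert foo2.value == 42          # foo in value context is 42

foo1 = Lisp1Symbol()
foo1.value = abs                 # (set! foo abs)
foo1.value = 42                  # (set! foo 42): function meaning gone
assert not callable(foo1.value)  # (foo -3) would now be an error
```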

(This was the argument on implementation efficiencies of Lisp-1s
vs Lisp-2s, wasn't it?)


--
David H. Thornley                        | If you want my opinion, ask.
·····@thornley.net                       | If you don't, flee.
http://www.thornley.net/~thornley/david/ | O-
From: Janis Dzerins
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <874rx6m09f.fsf@asaka.latnet.lv>
Joe Marshall <···@content-integrity.com> writes:

> I'm asserting that a single namespace can be implemented with no
> less efficiency than a dual namespace, and that such an
> implementation does not require declarations or complex dataflow
> analysis.

But that's a flawed assertion. All you have described is an emulation of
two namespaces with one namespace. How can they be equally efficient?

-- 
Janis Dzerins

  If million people say a stupid thing it's still a stupid thing.
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <8zmh9vzm.fsf@content-integrity.com>
Janis Dzerins <·····@latnet.lv> writes:

> Joe Marshall <···@content-integrity.com> writes:
> 
> > I'm asserting that a single namespace can be implemented with no
> > less efficiency than a dual namespace, and that such an
> > implementation does not require declarations or complex dataflow
> > analysis.
> 
> But that's a flawed assertion.  All you have described is an emulation of
> two namespaces with one namespace.  How can they be equally efficient?

I wasn't *trying* to do that, but I can see how that could be
construed.  What I forgot to include is an outline of why this
mechanism is no less efficient than indirecting through the function
cell.

Consider this function:  (defun foo () (bar))

Suppose we had a lisp on an intel processor, with the convention that
ESI will point to the beginning of the current code block, and that
invoking a function could be done via an indirect jump to the function
cell of a symbol.

The call sequence would roughly look like this:

; fetch the pointer to symbol bar
    movl        ebx,[esi+18]  
; fetch the function cell, put function object in esi
    movl        esi,[ebx-11]  
    jmp         *esi

Now if we arrange for the linker to maintain a `uuo-link' to the function
bar in the code for function foo, the call sequence would be:

    jmp    pc + 23  ; jump into link table

link table + 23:
    jmp    <entry point for bar>

This latter will execute quicker.
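As a rough model of the link-table idea (Python standing in for the 
linker and object code; all names here are invented), redefinition 
patches each linked call site once, and the call itself is a single 
jump through the table with no indirection through the function cell:

```python
link_table = {}          # (name, callsite) -> entry point
function_cells = {}      # symbol name -> current function

def defun(name, fn):
    # Install the new function and patch every call site already
    # linked to this name (the linker's uuo-link pass).
    function_cells[name] = fn
    for site in link_table:
        if site[0] == name:
            link_table[site] = fn

def link(name, callsite):
    # Done once per call site, at link time.
    link_table[(name, callsite)] = function_cells[name]

def call(name, callsite, *args):
    # The call is one direct jump through the link table.
    return link_table[(name, callsite)](*args)

defun('bar', lambda: 'old')
link('bar', 0)
assert call('bar', 0) == 'old'
defun('bar', lambda: 'new')      # redefinition patches the link
assert call('bar', 0) == 'new'
```

The cost of redefinition grows with the number of call sites, but the 
per-call cost drops, which is the trade the thread is arguing about.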

Here are some numbers.  I use the tak function because it is dominated
by function call time.  In the Common Lisp version, optimization is
set to 3, safety 0, and I explicitly load the function cells to ensure
the compiler isn't short-circuiting anything.

    (defun takx (x y z)
      (declare (fixnum x y z)
	       (optimize (speed 3) (safety 0)))
      (cond ((not (< y x)) z)
	    (t
	     (taka
	       (takb (the fixnum (1- x)) y z)
	       (takc (the fixnum (1- y)) z x)
	       (takd (the fixnum (1- z)) x y)))))

    (defun test ()
      (setf (symbol-function 'taka) (symbol-function 'takx))
      (setf (symbol-function 'takb) (symbol-function 'takx))
      (setf (symbol-function 'takc) (symbol-function 'takx))
      (setf (symbol-function 'takd) (symbol-function 'takx))
      (time (dotimes (i 10000) (taka 18 12 6))))

In the scheme version I declare (usual-integrations) (this allows the
compiler to assume that I have not redefined the standard procedures).
I use the fixnum-specific < and decrement operators.  I set a switch
in the compiler to tell it to not perform stack checks (as the lisp
version does not do this when speed is at 3).  I couldn't figure out
how to instruct the compiler to not poll for interrupts.

    (declare (usual-integrations))

    (define taka)
    (define takb)
    (define takc)
    (define takd)

    (define (takx x y z)
      (cond ((not (fix:< y x)) z)
	    (else
	     (taka
	       (takb (fix:-1+ x) y z)
	       (takc (fix:-1+ y) z x)
	       (takd (fix:-1+ z) x y)))))

    (define (test)
      (set! taka takx)
      (set! takb takx)
      (set! takc takx)
      (set! takd takx)
      (time (lambda () 
	      (do ((i 0 (+ i 1))) 
		  ((= i 10000) #f)
		(taka 18 12 6)))))


On my machine, the lisp version takes 33.5 seconds, the scheme version
takes 29.2 seconds.

This demonstrates that function calls in a single namespace are not
significantly less time efficient than those in a dual namespace under
similar conditions of optimization.


From: Janis Dzerins
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <87elw8led5.fsf@asaka.latnet.lv>
Joe Marshall <···@content-integrity.com> writes:

> Here are some numbers.  I use the tak function because it is dominated
> by function call time.  In the Common Lisp version, optimization is
> set to 3, safety 0, and I explicitly load the function cells to ensure
> the compiler isn't short-circuiting anything.
> 
>     (defun takx (x y z)
>       (declare (fixnum x y z)
> 	       (optimize (speed 3) (safety 0)))
>       (cond ((not (< y x)) z)
> 	    (t
> 	     (taka
> 	       (takb (the fixnum (1- x)) y z)
> 	       (takc (the fixnum (1- y)) z x)
> 	       (takd (the fixnum (1- z)) x y)))))
> 
>     (defun test ()
>       (setf (symbol-function 'taka) (symbol-function 'takx))
>       (setf (symbol-function 'takb) (symbol-function 'takx))
>       (setf (symbol-function 'takc) (symbol-function 'takx))
>       (setf (symbol-function 'takd) (symbol-function 'takx))
>       (time (dotimes (i 10000) (taka 18 12 6))))
> 
> In the scheme version I declare (usual-integrations) (this allows the
> compiler to assume that I have not redefined the standard procedures).
> I use the fixnum-specific < and decrement operators.  I set a switch
> in the compiler to tell it to not perform stack checks (as the lisp
> version does not do this when speed is at 3).  I couldn't figure out
> how to instruct the compiler to not poll for interrupts.
> 
>     (declare (usual-integrations))
> 
>     (define taka)
>     (define takb)
>     (define takc)
>     (define takd)
> 
>     (define (takx x y z)
>       (cond ((not (fix:< y x)) z)
> 	    (else
> 	     (taka
> 	       (takb (fix:-1+ x) y z)
> 	       (takc (fix:-1+ y) z x)
> 	       (takd (fix:-1+ z) x y)))))
> 
>     (define (test)
>       (set! taka takx)
>       (set! takb takx)
>       (set! takc takx)
>       (set! takd takx)
>       (time (lambda () 
> 	      (do ((i 0 (+ i 1))) 
> 		  ((= i 10000) #f)
> 		(taka 18 12 6)))))
> 
> 
> On my machine, the lisp version takes 33.5 seconds, the scheme version
> takes 29.2 seconds.
> 
> This demonstrates that function calls in a single namespace are not
> significantly less time efficient than those in a dual namespace under
> similar conditions of optimization.

To me this demonstrates that a hidden function namespace is as efficient
as a non-hidden one when comparing some mysterious CL and Scheme
implementations. So where are we now?

-- 
Janis Dzerins

  If million people say a stupid thing it's still a stupid thing.
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <zoew72ec.fsf@content-integrity.com>
Janis Dzerins <·····@latnet.lv> writes:

> > This demonstrates that function calls in a single namespace are not
> > significantly less time efficient than those in a dual namespace under
> > similar conditions of optimization.
> 
> To me this demonstrates that a hidden function namespace is as efficient
> as a non-hidden one when comparing some mysterious CL and Scheme
> implementations.  So where are we now?

I think we are now in agreement that having two namespaces
syntactically present in the source language is no more efficient than
having a single namespace in the source language.


From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <4rx4xm4r.fsf@content-integrity.com>
Erik Naggum <····@naggum.net> writes:

> * Joe Marshall <···@content-integrity.com>
> > I think we are now in agreement that having two namespaces syntactically
> > present in the source language is no more efficient than having a single
> > namespace in the source language.
> 
>   When were we _not_ in agreement over this?

In message <···············@world.std.com>

>   Efficiency has been a red herring all along, IMNSHO.

Agreed.


From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfwlmqgmgez.fsf@world.std.com>
Erik Naggum <····@naggum.net> writes:

> * Joe Marshall <···@content-integrity.com>
> > I think we are now in agreement that having two namespaces syntactically
> > present in the source language is no more efficient than having a single
> > namespace in the source language.
> 
>   When were we _not_ in agreement over this?
> 
>   Efficiency has been a red herring all along, IMNSHO.

Blame that on me.

Perhaps out of stubbornness, perhaps out of lack of time to make a
better judgment, and maybe even because there's a slim chance I'm
right, I'll continue to adhere to the belief that there is a slight
efficiency gain for a two-namespace Lisp.  How much does depend on how
much "mechanism" you add to compensate.  I never meant to suggest it
was more than a constant, just that it was probably a little bigger
than a factor of 1 and nowhere near 2.  Joe has argued that it can be
brought very close to 1, and I've not denied that.  I have never had
any material problem with saying that for practical purposes,
the difference is unimportant--I just think the difference remains
there in theory.  Perhaps like the difference between -0.0 and true 0...

But at least part of my claim is that the driving "simplicity" of the
one-namespace Lisp's description hides the need for this extra
mechanism.  Nothing in the cute, simple, sleek formal semantics leaps
out and says "but you'll need to do lots of extra work to make this
competitively efficient".  I find it a deceptive kind of "simple" when
the simple has to be just an illusion to the user and a bunch of extra
stuff has to be done under the sheets to make it look like this
"simpleness" was really all it took.  You have to learn about this
"the hard way".  On the other hand, it seems to me, a straightforward
implementation of a Lisp2 will tend to lead to natural efficiency. And
I think there's some unaccounted-for grace in that which goes
unacknowledged by the self-appointed guardians of aesthetics in the
lisp1 camp.  (I have never denied that they have A useful notion of
aesthetics; I just get bugged when they think they have THE notion
of aesthetics.)
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-555493.12324809032001@news.nzl.ihugultra.co.nz>
In article <···············@world.std.com>, Kent M Pitman 
<······@world.std.com> wrote:

> I find it a deceptive kind of "simple" when
> the simple has to be just an illusion to the user and a bunch of extra
> stuff has to be done under the sheets to make it look like this
> "simpleness" was really all it took.

So how do you feel about the illusion that you have an infinite number 
of named registers in your machine?  Wouldn't it be simpler with less 
illusion to be discovered the "hard way" to write all your programs 
using only variables named after the machine registers used to store 
them?

-- Bruce
From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfwwv9zepam.fsf@world.std.com>
Bruce Hoult <·····@hoult.org> writes:

> In article <···············@world.std.com>, Kent M Pitman 
> <······@world.std.com> wrote:
> 
> > I find it a deceptive kind of "simple" when
> > the simple has to be just an illusion to the user and a bunch of extra
> > stuff has to be done under the sheets to make it look like this
> > "simpleness" was really all it took.
> 
> So how do you feel about the illusion that you have an infinite number 
> of named registers in your machine?  Wouldn't it be simpler with less 
> illusion to be discovered the "hard way" to write all your programs 
> using only variables named after the machine registers used to store 
> them?

I'm not sure what you expect me to say.

We don't abstract away the registers for efficiency reasons nor for reasons
of hiding that which we know has to be there.  We abstract it away because
it isn't reliably there--some machines have registers and some don't.  If all
machines had registers, we might expose them for user use.  The simplicity
that's involved isn't about "people can't handle knowledge of registers"
but rather is about "telling people there are registers when there aren't
will just confuse them into trying to optimize their programs in ways that
can't reliably win".

In other words, I think this is a red herring and I don't see the point 
other than to try to trap me into somehow having some weird "aha!" moment
over having some inconsistency in the way I apply my reasoning.  But that's
not going to happen because I don't assert I am always consistent in the
first place.  At most you're going to get a "hohum!" moment...

I basically think this discussion is at an end.  The points to be made have
been made. 
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-C6FC33.18100909032001@news.nzl.ihugultra.co.nz>
In article <···············@world.std.com>, Kent M Pitman 
<······@world.std.com> wrote:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > In article <···············@world.std.com>, Kent M Pitman 
> > <······@world.std.com> wrote:
> > 
> > > I find it a deceptive kind of "simple" when
> > > the simple has to be just an illusion to the user and a bunch of 
> > > extra
> > > stuff has to be done under the sheets to make it look like this
> > > "simpleness" was really all it took.
> > 
> > So how do you feel about the illusion that you have an infinite number 
> > of named registers in your machine?  Wouldn't it be simpler with less 
> > illusion to be discovered the "hard way" to write all your programs 
> > using only variables named after the machine registers used to store 
> > them?
> 
> I'm not sure what you expect me to say.
> 
> We don't abstract away the registers for efficiency reasons nor
> for reasons of hiding that which we know has to be there.  We
> abstract it away because it isn't reliably there--some machines
> have registers and some don't.  If all machines had registers,
> we might expose them for user use.  The simplicity that's involved
> isn't about "people can't handle knowledge of registers" but rather
> is about "telling people there are registers when there aren't
> will just confuse them into trying to optimize their programs in ways 
> that can't reliably win".
> 
> In other words, I think this is a red herring and I don't see the point 
> other than to try to trap me

I have no intentions to somehow "trap" you.

As someone coming to the discussion from a background of using C++ and 
Java and other similar languages I'm just trying to find out what the 
big advantage of having two namespaces is.  The only languages I've 
previously used with multiple namespaces are C (for structs, and C++ has 
backtracked from that) and Perl (which has about half a dozen 
namespaces).


Programming languages are what we use to organise and express our 
designs for solving problems.  Historically, programming language 
features tend to be there either a) because they are needed to enable 
efficient implementation of the language, or b) because they make it 
easier to express our ideas about the problem, or c) by historical 
accident, because it was the first thing the designer thought of or in 
order to be compatible with something older.

Generally, at any given time the best language is the most abstract and 
powerful one that we know how to efficiently implement, without at the 
same time being so complex or obscure that we can't understand it 
(strike PL/I and C++).  Just as in science, a major source of advances 
is people realising that several previously distinct concepts or 
features can be unified into a single concept -- and simultaneously 
coming up with the technique whereby the unified concept can be 
efficiently implemented.

Programming directly with machine registers enables us to write more 
efficient programs.  Programming with an unlimited supply of variables 
enables us to write programs that are closer to the problem we are 
trying to solve than to the machine.  In order to meet the conflicting 
goals of abstractness and portability while enabling efficiency, C 
developed the "register" declaration.  The programmer could indicate 
which of their variables were the "most important" and tell the compiler 
to keep them in registers.  And it worked well, as long as you didn't 
blindly plaster "register" on more variables than the particular machine 
had registers.  But then compiler technology improved and modern 
compilers more or less totally ignore "register" declarations because 
they can do as good or better by themselves.

There are numerous examples from, for example, the gradual development 
of C++, where Stroustrup introduced a new feature -- inheritance, 
multiple inheritance, overloaded operators, exceptions, templates -- only 
when he understood how to efficiently implement it.

For another example, take the Hindley-Milner type system.  Once H&M 
showed how to efficiently implement type inferencing in such a type 
system, languages appeared based upon it.  Or take Prolog.  A language 
with the syntax of prolog could have been developed in the 1950's, but 
it wasn't going to go anywhere until Colmerauer and Roussel showed how 
to efficiently implement unification of Horn Clauses.


So what is the reason for having two namespaces in Common Lisp -- and 
should other language designs emulate it?

a) "more efficient" looked like a good bet and people here were claiming 
that but in the last 24 hours or so I think both you and Erik have 
agreed that if it is more efficient the effect is down in the noise, 
even for systems without (other) type declarations.

b) "makes it easier to reason about problems and express solutions to 
them" looks like a very personal and non-universal thing and may well 
simply be closely related to what people happen to be used to.  Dual 
namespaces makes it slightly easier to come up with names for everything 
in a program because you can use the same name for two different things 
in certain contexts.  On the other hand, it makes it a little harder to 
read the program because you have to analyse the context to determine 
the meaning.  Either way the effect is *very* small.

c) "historical accident" is certainly plausible.  Common Lisp was 
developed by a political process from a bunch of different Lisps of the 
1970's, all of which followed the 2-namespace example set by Lisp 1.5.


So in the end it doesn't appear to provide any compelling advantage or 
disadvantage.  People who use CL seem happy enough with it.  People who 
use other languages don't seem to feel any burning desire to have it.  I 
personally wouldn't decline to work on a job just because the language 
to be used has two namespaces (hell, I used Perl already), and I 
certainly wouldn't decline to work on a job just because the language to 
be used only has one namespace.

There being no significant difference between the two, I'd also choose 
the simpler concept -- a single namespace and evaluation rule -- for any 
new language I happened to be involved with designing.


>I basically think this discussion is at an end.  The points to
>be made have been made. 

Yep, seems that way.

-- Bruce
From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfwbsrbmqad.fsf@world.std.com>
Bruce Hoult <·····@hoult.org> writes:

> [...] in the end it doesn't appear to provide any compelling advantage 
> or disadvantage.

Then you've ignored the entirety of the discussion.  This issue is
major to me in the selection of a language.  Not as major as a few
other things, but definitely something that drives me absolutely nuts
in systems that don't have it.

> People who use CL seem happy enough with it.  People who 
> use other languages don't seem to feel any burning desire to have it.  

This neglects the possibility that the world is divided into camps of people
who feel strongly and have chosen the language by this feature.  That could
lead to the same outcome as you observe, but very different causal paths
and very different conclusions.  I suspect the truth is somewhere in between.
Some people probably care a lot, and some people less.  People never get
to choose a language based only on one feature, but I'm sure there are cases
where this has an incremental effect.

> I personally wouldn't decline to work on a job just because the language 
> to be used has two namespaces (hell, I used Perl already), 

This is like saying that most people won't leave the Democratic party
just because of a change of one little position on abortion.  Or most
people won't stop going to a restaurant just because they get rid of
one popular menu item.  Sure, that might be.  But that's because the
promise of the free market is not really true: people can't shop with
their feet UNLESS there is an alternative which offers them something
better.  When people don't leave, you aren't measuring "don't
care"--you're measuring "not enough threshold of caring to make up for
the set of negatives that would be acquired in a switch".  There is no
doubt in my mind that if you offered me a CL with 2 namespaces and one
without, which one I would choose.  There is also no doubt in my mind
that if you changed this and a few other features I hold dear in CL
that I would start to think seriously that maybe CL wasn't meeting my
needs any more.

> I certainly wouldn't decline to work on a job just because the language to 
> be used only has one namespace.

That's probably because you don't mind using only one namespace.
(I assume you meant "has two namespaces" here.)

> There being no significant difference between the two,

You left out "Since I believe" at the start of this.

Or else you forgot to qualify that there is no "technical" difference.
Surely there are emotional and expressional reasons that were plainly
expressed in this discussion.  If you missed them, you had your eyes closed.

> I'd also choose 
> the simpler concept -- a single namespace and evaluation rule -- for any 
> new language I happened to be involved with designing.

Sure.  And you'd, by doing so, incrementally attract people like yourself 
and push away people not like yourself.  Further, as you talked to the people
you had thus attracted, you would become increasingly convinced that the world
was full of people like you and that would support your belief that everyone
liked what you had done.  No reason you shouldn't feel happy about pleasing
the set of people you attracted, but you should not confuse this with having
found something universally acceptable.

Consider a best-selling book author.  No matter how high you are on the best
seller list with your murder mystery, there will still be people who don't
like murder mysteries.

> >I basically think this discussion is at an end.  The points to
> >be made have been made. 
> 
> Yep, seems that way.

(Just had to correct some of the summing up.  Feel free to reply if you 
want the last word.  I'll hold my tongue now.)
From: Peter Wood
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <8066hj2xf5.fsf@localhost.localdomain>
Bruce Hoult <·····@hoult.org> writes:


> b) "makes it easier to reason about problems and express solutions to 
> them" looks like a very personal and non-universal thing and may well 
> simply be closely related to what people happen to be used to.  Dual 
> namespaces makes it slightly easier to come up with names for everything 
> in a program because you can use the same name for two different things 
> in certain contexts.  

I disagree that it is personal and non-universal.  It is necessary in
all natural languages to infer meaning from context.  Even if some 
languages (Russian and Finnish, according to some posts) do not allow
identical verbs and nouns, I refuse to believe they have *NO* words
which have more than one meaning.  And why limit this discussion to
words - what about phrases?  Can a short phrase (eg "I see", in
English) not have more than one meaning in any natural language?

> On the other hand, it makes it a little hard to 
> read the program because you have to analyse the context to determine 
> the meaning.  

In order to read (and understand) a program[fragment] in any
meaningful sense, you have to know what the context is, anyway.  If
this was not true, we would not be writing programs.  Our machines
would be doing it for us, and programming would be excruciatingly
boring.

> Either way the effect is *very* small.

I disagree.  The point is it *is* natural for humans to infer meaning
from context.  An efficient language will utilise human resources
optimally, not machine resources.  What do you know about which machine
resources will be available in 10 years?  But you can be certain that
people will not have changed significantly in that time.

Peter
From: Thomas A. Russ
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <ymir906pote.fsf@sevak.isi.edu>
Bruce Hoult <·····@hoult.org> writes:

> As someone coming to the discussion from a background of using C++ and 
> Java and other similar languages I'm just trying to find out what the 
> big advantage of having two namespaces is.  The only languages I've 
> previously used with multiple namespaces are C (for structs, and C++ has 
> backtracked from that) and Perl (which has about half a dozen 
> namespaces).

Actually, MOST programming languages have separate variable and function
namespaces.  It is just not quite so obvious, since those languages also
have more syntax as well.  The extra syntax obscures the fact that, at
some level, there is a separate name lookup for identifying function
names and variable names.  Add to this the fact that linking is
generally done at compile time rather than run time, and function
name lookup becomes something that isn't really even thought about by
programmers in those languages.

Also, without dynamic function linking, definition and application, you
don't really have much cause to consider such issues as where the
compiler looks to find the function definition when it encounters a
form.  (OK, I'll ignore the issue of the linker dealing with multiple
definitions of functions with the same name....)

For example, the following Java program demonstrates that the compiler
is using two separate namespaces for resolving the symbol "f":

public class Test {

  static double f (double f) {
    return f * f;
  }

  public  static void main (String[] args) { 
    double f = 4.0;
    System.out.println(" f = " + f + "    f(f) = " + f(f));
  }
}

There is no prohibition on using a variable with the same name as a
function, and the compiler doesn't have any trouble sorting out which is
which.  This is similar to the Lisp2 paradigm.

-- 
Thomas A. Russ,  USC/Information Sciences Institute          ···@isi.edu    
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-3C4807.23052912032001@news.nzl.ihugultra.co.nz>
In article <···············@sevak.isi.edu>, ···@sevak.isi.edu (Thomas 
A. Russ) wrote:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > As someone coming to the discussion from a background of using C++ and 
> > Java and other similar languages I'm just trying to find out what the 
> > big advantage of having two namespaces is.  The only languages I've 
> > previously used with multiple namespaces are C (for structs, and C++ 
> > has 
> > backtracked from that) and Perl (which has about half a dozen 
> > namespaces).
> 
> Actually, MOST programming languages have separate variable and
> function namespaces.  [...]  For example, the following Java
> program demonstrates that the compiler is using two separate
> namespaces for resolving the symbol "f":
> 
> public class Test {
> 
>   static double f (double f) {
>     return f * f;
>   }
> 
>   public  static void main (String[] args) { 
>     double f = 4.0;
>     System.out.println(" f = " + f + "    f(f) = " + f(f));
>   }
> }
> 
> There is no prohibition on using a variable with the same name as a
> function, and the compiler doesn't have any trouble sorting out which is
> which.  This is similar to the Lisp2 paradigm.

Well, methods are not first class objects in Java -- you can't even pass 
them as function arguments -- so I guess this isn't too surprising 
(though I bet most Java programmers don't realise it).  But *MOST* 
languages?  I wouldn't think so.  It certainly doesn't work in C:

-------------------------------------------------------
bash$ cat test.c
#include <stdio.h>

double f (double f) {
  return f * f;
}

int main () { 
  double f = 4.0;
  printf(" f = %f    f(f) = %f\n", f,  f(f));
  return 0;
}

bash$ make test
cc     test.c   -o test
test.c: In function `main':
test.c:9: called object is not a function
make: *** [test] Error 1
-------------------------------------------------------


It's also not going to work in Pascal or Modula-2 and I assume not in 
Oberon.  It doesn't work in JavaScript:

-------------------------------------------------------
<html><body><script>

function f (f) {
    return f * f;
}

function test() { 
    var f = 4.0;
    document.write(" f = " + f + "<p>f(f) = " + f(f));
}

test();

</script></body></html>
-------------------------------------------------------
JavaScript Error: file:/BruceHD/scope.html, line 9:
f is not a function. 
-------------------------------------------------------


Have you got a list of these MOST languages?

-- Bruce
From: David Bakhash
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <m33dcmtfp5.fsf@alum.mit.edu>
Bruce Hoult <·····@hoult.org> writes:

> So in the end it doesn't appear to provide any compelling advantage or 
> disadvantage.  People who use CL seem happy enough with it.  People who 
> use other languages don't seem to feel any burning desire to have it.  I 
> personally wouldn't decline to work on a job just because the language 
> to be used has two namespaces (hell, I used Perl already), and I 
> certainly wouldn't decline to work on a job just because the language to 
> be used only has one namespace.

I still don't know why Perl keeps coming up here.  While Perl probably
does have multiple namespaces, I think people are interpreting foo,
<foo>, $foo, @foo, %foo, *foo, and &foo as the same name, when clearly 
they are not.

The best way to liken this to CL is to view the objects through
reference notation:

$scalar = 8;
$array = [1,2,3];
$hash = {one => 1, two => 2};
$sub = sub {return (shift() + 3);};

Now this is like Common Lisp:

(setq $scalar 8)
(setq $array #(1 2 3))
(setq $hash (let ((ht (make-hash-table :test #'equal)))
              (setf (gethash "one" ht) 1
                    (gethash "two" ht) 2)
              ht))
(setq $sub (function (lambda (x) (+ x 3))))

and accessors are done "notationally" in each, e.g.

$array->[1]                ==          (aref $array 1)
$hash->{two}               ==          (gethash "two" $hash)
$sub->(10)                 ==          (funcall $sub 10)

So, to me, Perl and CL are very *very* similar in this way.  The only
difference, I guess, is that if you say:

sub mysub { ... }          ==          (defun mysub (...) (...))

Then you get to say, in Perl (among other equivalent notations):

if (mysub(4) > 0) {...}    ==          (if (> (mysub 4) 0) ...)

and again, it's just the syntactic sugar case that we're arguing about 
because that's how we _typically_ manage functional dispatch.
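For comparison, the same reference notation can be sketched in a single-namespace language; a Python illustration (not from the thread; names are hypothetical):

```python
# Python analogue of the Perl/CL reference notation above: scalars,
# containers, and functions are all reached through one namespace.
scalar = 8
array = [1, 2, 3]
hashtable = {"one": 1, "two": 2}
sub = lambda x: x + 3              # like $sub / (setq $sub ...)

print(array[1])          # like $array->[1]   == (aref $array 1)       -> 2
print(hashtable["two"])  # like $hash->{two}  == (gethash "two" $hash) -> 2
print(sub(10))           # like $sub->(10)    == (funcall $sub 10)     -> 13
```

Here the call syntax sub(10) needs no funcall or sigil, because the one namespace holds functions and data alike.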

Anyway, this is my second post describing why Perl variables are not
exactly as multi-namespace as people keep saying.  Of course,
internally, they are, but on the surface, the symbol names for the
different "namespaces" are different, and so at the lexical level
(i.e. scanning, lexical analysis, pattern matching, etc.) they don't
look the same.  In Common Lisp, we access a symbol's field based on
the grammar-level syntax surrounding it, though it's also a mixture:

x vs. (function x)

Of course, since our reader is programmable, we can modify this so
that it occurs at the lexical level if we want:

x vs. #'x

which might be analogous to Perl's

$x vs. &x

Conclusions:

 o At the human reader's level, CL can be made to be Lisp1 style, if
   the programmer so chooses.

 o Namespaces of languages outside the Lisp (syntax) family don't
   always make the best analogies with respect to namespacing.

 o If Perl is really `multi-namespace' in the way people keep saying
   it is, then Scheme can be so as well with mere convention,
   e.g. starting all user-defined function names with some chosen
   character.  It's just that Perl _enforces_ this.

Lastly, I just wanted to reiterate that this is only how _I_ see it --
just my interpretation of the surface namespacing issues that affect
programming, and not compiler issues.

dave
From: Xah Lee
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <B6DB4DF1.68AC%xah@xahlee.org>
Dear Readers,

One of my co-workers forwarded me the following allegory:


THE THREE CORPORATE LESSONS

  LESSON NUMBER ONE

  A crow was sitting on a tree, doing nothing all day.
  A small rabbit saw the crow, and asked him,
  "Can I also sit like you and do nothing all day long?"
  The crow answered: "Sure, why not."  So, the rabbit
  sat on the ground below the crow, and rested.  All of
  a sudden, a fox appeared, jumped on the rabbit and
  ate it.

  Moral of the story: to be sitting
  and doing nothing, you must be sitting very, very
  high up.

  LESSON NUMBER TWO

  A turkey was chatting with a bull.  "I would love to be
  able to get to the top of that tree," sighed the turkey,
  "but I haven't got the energy."  "Well, why don't you
  nibble on some of my droppings?" replied the bull.
  "They're packed with nutrients."  The turkey pecked at
  a lump of dung and found that it actually gave him
  enough strength to reach the first branch of the tree.
  The next day, after eating some more dung, he reached
  the second branch.  Finally, after a fortnight, there he
  was, proudly perched at the top of the tree.  Soon he
  was spotted by a farmer, who shot the turkey out of
  the tree.

  Moral of the story: bullshit might get
  you to the top, but it won't keep you there.

  LESSON NUMBER THREE

  A little bird was flying south for the
  winter.  It was so cold, the bird froze and fell to the
  ground in a large field.  While it was lying there, a
  cow came by and dropped some dung on it.
  As the frozen bird lay there in the pile
  of cow dung, it began to realize how warm it was.  The
  dung was actually thawing him out!  He lay there all
  warm and happy, and soon began to sing for joy.

  A passing cat heard the bird singing and came to
  investigate.  Following the sound, the cat discovered
  the bird under the pile of cow dung, and promptly dug
  him out and ate him!

The morals of this story are:

   1) Not everyone who drops shit on you is your enemy.
   2) Not everyone who gets you out of shit is your friend.
   3) And when you're in deep shit, keep your mouth shut.


----

I hadn't read such a fat allegory for years. A little search on google.com
showed that apparently this is circulating on the net.

Allegory is a powerful device for nailing a point. Similar to simile,
analogous to analogy, figures like figures of speech, it makes you see
something that's otherwise hard to see, or that you refused to see. It's kinda like a
trap. You start to read with amusement about animals and their affairs, but
by the end of the story some moral you don't want to hear dawns and seizes
you by force.

However, just like analogies, there is a problem with them: they have
absolutely nothing to do with facts or truths. Even though their palpability
pushes your buttons, they actually prove nothing. Like, you don't see math
proofs littered with analogies.

Looking at the above parable, we could ask: "Can cows and bulls and shit
really prove something about modern corporate environment?" Of course, you
won't seriously consider asking that if you are not a turkey.

In Erik Naggum's last message, he relied on the analogy of unix shells & DOS
to propound his belief that the ability of a single name to have multiple
meanings in a computer language is advantageous. By analogy, I'm using an
allegory to illustrate the vacuity of his method of persuasion.

By the way, the unix shells' environment variables and ways are quite
fucked up. It is amazing to see their stupidities alluded to as an advance
in some language design argument. The whole morbid prospect of placing an
executable script under any program name in any path, with the fucked up
way of searching for programs to execute and the fucked up way of
determining whether something is a program by the fucked up permission-bit
system, is one giant unpurgeable shit pile arisen from the ad hoc hacks of
unixism.

In defense of Common Lisp's namespace problems, Erik Naggum has a favorite
analogy, that people have no problem dealing with English words that are
both noun and verb.

This is similar to Larry Wall's habit of using de facto human languages to
defend the status quo as a design merit; contriving that Perl is such and
such finely "designed" because English this and that. (Kent Pitman falls
into the same pit.)

In the last 100 years or so, we have made tremendous advances in AI-related
sciences: logic, computer science, language theories, cognitive psychology,
neuroscience, unimaginable mountains of discrete mathematics. Only in the
last 60 years or so have we human beings been _able_ to conceive and _build_
constructed languages like Loglan. We do not yet begin to have much info on
how a specially constructed language like Loglan/Lojban can affect human
thinking as a native language. The Larry Wall type of moron seems to have
already decided that status quo natural languages like English are
superior, or has no facility for imagination.

This line of moronicity is typical of sightless visionaries. They see the
present, and they deduce that it is the best of possible worlds, and they
pout and cry that the present state of things is the best state of things
and must be pervasively maintained and guarded. They take an active stance
to smite down those mathematicians who cater for tomorrow, and who brought
them today's common sense yesterday.

Open the book of history, and we shall see that when irrational numbers were
discovered and introduced, there were insistent naysayers. When the Arabic
number system was introduced, these naysayers we encounter again. When the
new calendar was introduced, again these morons. When machinery was
introduced, we had Luddites. When contraceptives were introduced, we had
Christians. When negative numbers were introduced, when "imaginary numbers"
were introduced, when set theory was introduced, when non-Euclidean
geometry was introduced, when typewriters were introduced, when
computational mathematics was introduced, when functional programming was
introduced... these fucking naysaying morons are the fighters against
progress, fighting to keep the world at a standstill in their complacency
or ignorance.

The fact is, if the world is not filled with these morons in totality, then
scientific advances and new concepts and technologies are inevitable, only a
matter of time. Concepts such as Scheme's single namespace, or pure and
non-strict functional languages, or other advanced ideas with a superior
mathematical basis, will mature and prevail. First-generation legacy fads
like C, Common Lisp, and Perl will die. Like a force of nature, inevitable,
only a matter of time.

The key to intellectual progress is science. The fuel to all sciences is
mathematics.

There is a difference between science and pseudo-science. Alonzo Church's
stuff, for example, is the former. Larry Wall's stuff is the latter. Larry
Wall's crime is that he trumpets his pseudo-science as science, using humor
as his mask.

Bonus:

George Orwell "Animal Farm". Classic allegory.
http://www.kulichki.com/moshkow/ORWELL/animal.txt

Category theory: mathematician's version of analogy:
http://plato.stanford.edu/entries/category-theory/

 Xah
 ···@xahlee.org
 http://xahlee.org/PageTwo_dir/more.html



> From: Erik Naggum <····@naggum.net>
> Organization: Naggum Software, Oslo, Norway
> Newsgroups: comp.lang.lisp
> Date: 09 Mar 2001 08:05:55 +0000
> Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
> 
> * Bruce Hoult <·····@hoult.org>
>> As someone coming to the discussion from a background of using C++ and
>> Java and other similar languages I'm just trying to find out what the
>> big advantage of having two namespaces is.  The only languages I've
>> previously used with multiple namespaces are C (for structs, and C++ has
>> backtracked from that) and Perl (which has about half a dozen
>> namespaces).
> 
> Do you use an environment where you can give commands to a shell?  Have
> you noticed that the first word of a command is treated differently than
> all the other words?  It is looked for as internal commands to the shell,
> and it is searched for in directories in a PATH variable of some kind, in
> case you are unfamiliar with it.  In the MS-DOS world, the name of a
> command is searched for with a particular extension (type).  In the Unix
> world, the file so named would have to have the execute bit set and . would
> have to be in the search path for the file to be eligible as a command,
> but normally, neither of these conditions are met.  In both cases, this
> means that you can name a file in your local directory the same as the
> command, and there will be no confusion about which is command and which
> is local file.  I hope this is so simple you can understand that you are
> already using, and accepting, an environment with two namespaces.
> 
> That you can name your files anything you want and not affect the
> execution of any scripts or other programs that may invoke other programs
> that may be called the same by accident is a big win.  That you can
> change to a different directory and not be surprised by trojan horses
> there just because a file is named the same as a command is a big win.
> (One prank pulled on ignorant students at the time people thought . in
> $PATH was convenient _and_ safe, was to place an executable file named
> "ls" in directories that others would likely snoop in.)
> 
> Now, can you _imagine_ why anyone would want to name files in a local
> directory accidentally the same as some command someplace in the search
> list and blithely expect the command to work and the file to be seen as a
> simple file?  Perhaps the fact that you don't have full control over the
> growth of the command namespace can be a clue.  You _don't_ want a file
> you have had lying around for years to inhibit you from using a new
> program.  Perhaps just the freedom to name files as you like is enough of
> a value for people that it would be an undue burden to make certain that
> you don't make a command unavailable.
> 
> I suppose I'm wasting my time, again, being as you are so dense that you
> don't see anything that looks like clues to see why a namespace for
> functions different from variables makes sense, but it is a result of the
> desire for scalability at all levels.  In particular, in Common Lisp we
> don't want to change the meaning of a function in some _other_ package
> just because its symbol is accessible in our package by using it as a
> variable.  We even ensure that we don't step on other packages' symbols
> by using *foo* for global variables and foo for functions, so they split
> the one _symbol_ namespace amongst them, as well.
> 
> All of this is very carefully thought out and the practice of Common Lisp
> is very different from languages where scalability is an after-thought.
> 
> I suppose you'll dismiss this as irrelevant, again, being as you are so
> amazingly stupid to believe in omniscience and people knowing _exactly_
> this and _exactly_ that, but maybe, just _maybe_, there's a remnant of
> working brain that might make you realize that you have been using a
> system with just this separation of functions from variables all along,
> and the reason it is like that is that it scales better than any other
> approach, and it gives you freedom from worry that you nuke commands by
> naming your files what you think is best.
> 
>> So in the end it doesn't appear to provide any compelling advantage or
>> disadvantage.  People who use CL seem happy enough with it.  People who
>> use other languages don't seem to feel any burning desire to have it.
> 
> You're mistaken about the last part, and the first part is simply a
> statement of your staggering desire to remain ignorant, nothing else.
> 
>> There being no significant difference between the two, I'd also choose
>> the simpler concept -- a single namespace and evaluation rule -- for any
>> new language I happened to be involved with designing.
> 
> Then implement this in your shell or other command processor and let us
> know how comfortable you are with it after a while.  Search the current
> working directory first, and don't exclude files without an execute bit
> under Unix and look for files regardless of file type under MS-DOS.  If
> we can take your above paragraph as an indication, you would actually
> design a shell or command processor that made no distinction between
> files at all and would happily try to execute non-executable files.  Now,
> can you _imagine_ why Unix has execute bits and MS-DOS .EXE and the like?
> Does it make _any_ sense to you to try to distinguish functions from
> variables in the file system?  Do you _really_ think it's that different
> from programming languages that _nothing_ can be learned from the need
> for scalability and convenience and a shot at _security_ in shells?
> 
> If I had to deal with a computer that did not allow me to call a file "cat"
> because there was a command by that name, I'd consider it broken as
> designed, and that's exactly what I feel about Scheme and Dylan and other
> retarded languages that conflate the function namespace with the variable
> namespace.  Sometimes, I think you one-namespace guys are just plain
> idiots, but it's probably a cultural thing -- you don't know any better
> because you never saw any better.  That would be OK, but when you're as
> stupid as to _refuse_ to _listen_ to people who know a better way, it's
> no longer a cultural thing, it's stupidity by choice.
> 
> And yes, I actually _do_ think of the Lisps I use as shells.  I live _in_
> Emacs and _in_ Allegro CL as well as _in_ the Unix shell (bash).  All of
> them enforce a separation of functions/commands from variables/data files.
> 
> Oh, I come from a Unix/C background.  The first time I was annoyed by the
> conflated namespaces in C was when I couldn't call a variable "time" when
> I wanted to call "time" in the same function.  That was in 1980, mere
> weeks after I first used a Unix system.  It was intuitively evident then
> and it has remained so, that functions and variables are different.  That
> I hadn't run into the problem before is sheer luck, plus I had used a few
> systems where it was naturally a difference so I wouldn't have noticed it
> if I had "exploited" the difference.  My experience leads me to believe
> that one-namespace-ness is a learned thing, an acquired braindamage.
> 
> #:Erik
> -- 
> "Hope is contagious"  -- American Cancer Society
> "Despair is more contagious"  -- British Farmers Society
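The shell behavior appealed to in the quoted message is easy to check directly; a sketch, assuming a Unix shell with mktemp and ls on $PATH:

```shell
# A data file named "ls" does not shadow the ls *command*: commands are
# resolved via $PATH, files via the working directory -- two namespaces.
cd "$(mktemp -d)"           # work in a scratch directory
echo "just data" > ls       # create a plain file named "ls"
ls ls                       # still runs the ls command; prints: ls
```

The single-namespace design Bruce favors would correspond to a shell that searched the current directory first and treated any file as callable.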
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-6B1C47.12280409032001@news.nzl.ihugultra.co.nz>
In article <················@naggum.net>, Erik Naggum <····@naggum.net> 
wrote:

> * Joe Marshall <···@content-integrity.com>
> > I think we are now in agreement that having two namespaces syntactically
> > present in the source language is no more efficient than having a single
> > namespace in the source language.
> 
>   When were we _not_ in agreement over this?
> 
>   Efficiency has been a red herring all along, IMNSHO.

Every time you talked about "utility".

-- Bruce
From: David Bakhash
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <m3lmqevkgx.fsf@alum.mit.edu>
Bruce Hoult <·····@hoult.org> writes:

> In article <················@naggum.net>, Erik Naggum <····@naggum.net> 
> wrote:
> 
> > * Joe Marshall <···@content-integrity.com>
> > > I think we are now in agreement that having two namespaces syntactically
> > > present in the source language is no more efficient than having a single
> > > namespace in the source language.
> > 
> >   When were we _not_ in agreement over this?
> > 
> >   Efficiency has been a red herring all along, IMNSHO.
> 
> Every time you talked about "utility".

I definitely agree that there have been suggestions that Lisp1
compilers have limitations that Lisp2 compilers do not which affect
the overall efficiency of compiled code.  Though I've seen and read
this, my instinct is that it is mostly wrong.

However, efficiency with respect to speed of compiled code vs. with
respect to productivity of programmers is probably the question here,
since the term "efficiency" is highly overloaded.  I consider myself
someone who operates more efficiently in a Lisp2 world, and know
people who are the opposite.  Neither group can reasonably argue on
the basis of efficiency based on these preferences.

However, Lisp2 seems to have more utility by simply noting that human
beings can handle the two namespaces with ease, especially given the
denotational differences in how functions and values are accessed.

The more I think about the Lisp1 vs. Lisp2 issue, the more I think
it's not very significant with respect to re-coding.  I could probably 
go back and forth without much problem.  Considering the *savings* in 
the size of a CL image if CL had been Lisp1, I personally see the
Lisp1 argument.  For me, it's just a matter of style, and Lisp2
affords _me_ a more intuitive style, and so that's where _my_ vote
is.

At my last job, I argued over this Lisp1 vs. Lisp2 issue with someone
relatively senior, and realize that it has nothing to do with
intellect; it's a simple matter of preference, and that's what all of
this is about to a large extent.  That's probably why it's been a bit
hostile: most of the points made on either side aren't enough (even
marginally) to make one seem better than the other.  But I can see 
why it's a bit easier to argue Lisp1 over Lisp2 since it does have
some more concrete advantages (e.g. size, uniformity).

Consider, however, that a good CL delivery system may detect that, for
example, symbol plists are never used at runtime, and thus optimize
the delivery with respect to space.  The compiler says "If you use it,
you pay for it."  That's all there is to it.  In CL, you have the
choice to use it or not, and eventually the compilers will catch up
with the theory so that when you don't use it, you don't suffer from
it either.  Right now, though, the language affords it.  I see this as 
optimistic language design, and will invest in it now because I
believe its value over time will increase (or rather, its cost will
decrease).
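
For reference, symbol plists (the feature used as the example above) look like this; a delivery tool that could prove GET, SYMBOL-PLIST, and friends are never reached at runtime could then omit the plist slot from every symbol:

```lisp
;; Attach and read ad-hoc properties on a symbol's plist.
(setf (get 'rectangle 'sides) 4)
(get 'rectangle 'sides)     ; => 4
(symbol-plist 'rectangle)   ; => (SIDES 4)
```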

dave
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <u254725m.fsf@content-integrity.com>
Janis Dzerins <·····@latnet.lv> writes:

> > This demonstrates that function calls in a single namespace are not
> > significantly less time efficient than those in a dual namespace under
> > similar conditions of optimization.
> 
> To me this demonstrates that hidden function namespace is as efficient
> as not hidden one when comparing some mysterious CL and Scheme
> implementations.  So where are we now?

I think we are now in agreement that having two namespaces
syntactically present in the source language is no more efficient than
having a single namespace in the source language.


-----= Posted via Newsfeeds.Com, Uncensored Usenet News =-----
http://www.newsfeeds.com - The #1 Newsgroup Service in the World!
-----==  Over 80,000 Newsgroups - 16 Different Servers! =-----
From: Marco Antoniotti
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <y6cpufsurq6.fsf@octagon.mrl.nyu.edu>
Janis Dzerins <·····@latnet.lv> writes:

> Joe Marshall <···@content-integrity.com> writes:
> 
> > Here are some numbers.  I use the tak function because it is dominated
> > by function call time.  In the Common Lisp version, optimization is
> > set to 3, safety 0, and I explicitly load the function cells to ensure
> > the compiler isn't short-circuiting anything.
> > 
> >     (defun takx (x y z)
> >       (declare (fixnum x y z)
> > 	       (optimize (speed 3) (safety 0)))
> >       (cond ((not (< y x)) z)
> > 	    (t
> > 	     (taka
> > 	       (takb (the fixnum (1- x)) y z)
> > 	       (takc (the fixnum (1- y)) z x)
> > 	       (takd (the fixnum (1- z)) x y)))))
> > 
> >     (defun test ()
> >       (setf (symbol-function 'taka) (symbol-function 'takx))
> >       (setf (symbol-function 'takb) (symbol-function 'takx))
> >       (setf (symbol-function 'takc) (symbol-function 'takx))
> >       (setf (symbol-function 'takd) (symbol-function 'takx))
> >       (time (dotimes (i 10000) (taka 18 12 6))))
> > 
> > In the scheme version I declare (usual-integrations) (this allows the
> > compiler to assume that I have not redefined the standard procedures).
> > I use the fixnum-specific < and decrement operators.  I set a switch
> > in the compiler to tell it to not perform stack checks (as the lisp
> > version does not do this when speed is at 3).  I couldn't figure out
> > how to instruct the compiler to not poll for interrupts.
> > 
> >     (declare (usual-integrations))
> > 
> >     (define taka)
> >     (define takb)
> >     (define takc)
> >     (define takd)
> > 
> >     (define (takx x y z)
> >       (cond ((not (fix:< y x)) z)
> > 	    (else
> > 	     (taka
> > 	       (takb (fix:-1+ x) y z)
> > 	       (takc (fix:-1+ y) z x)
> > 	       (takd (fix:-1+ z) x y)))))
> > 
> >     (define (test)
> >       (set! taka takx)
> >       (set! takb takx)
> >       (set! takc takx)
> >       (set! takd takx)
> >       (time (lambda () 
> > 	      (do ((i 0 (+ i 1))) 
> > 		  ((= i 10000) #f)
> > 		(taka 18 12 6)))))
> > 
> > 
> > On my machine, the lisp version takes 33.5 seconds, the scheme version
> > takes 29.2 seconds.
> > 
> > This demonstrates that function calls in a single namespace are not
> > significantly less time efficient than those in a dual namespace under
> > similar conditions of optimization.
> 
> To me this demonstrates that hidden function namespace is as efficient
> as not hidden one when comparing some mysterious CL and Scheme
> implementations. So where are we now?

Not quite.  The Scheme example is non-standard.  The CL example is
ANSI compliant.  That is the main difference.

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group	tel. +1 - 212 - 998 3488
719 Broadway 12th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA			http://bioinformatics.cat.nyu.edu
             Like DNA, such a language [Lisp] does not go out of style.
			      Paul Graham, ANSI Common Lisp
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <d7bst93l.fsf@content-integrity.com>
Marco Antoniotti <·······@cs.nyu.edu> writes:


> Not quite.  The Scheme example is non-standard.  The CL example is
> ANSI compliant.  That is the main difference.

I just *knew* someone would say that.

The Scheme example is definitely not RnRS standard, nor is it likely
to be repeatable on any version of Scheme that doesn't use
uuo-linking.

The point I was making had little to do with Scheme, it was the most
convenient (the only?) single-namespace lisp that uses uuo-links that
I could compare against.



From: Janis Dzerins
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <8766hkkxj4.fsf@asaka.latnet.lv>
Marco Antoniotti <·······@cs.nyu.edu> writes:

> Not quite.  The Scheme example is non-standard.  The CL example is
> ANSI compliant.  That is the main difference.

And you keep bringing fuel-tanks to our little picnic. How nice of you :)

-- 
Janis Dzerins

  If million people say a stupid thing it's still a stupid thing.
From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfwn1awmgzu.fsf@world.std.com>
Janis Dzerins <·····@latnet.lv> writes:

> Joe Marshall <···@content-integrity.com> writes:
> 
> > Here are some numbers. [...]
> > On my machine, the lisp version takes 33.5 seconds, the scheme version
> > takes 29.2 seconds.
> > 
> > This demonstrates that function calls in a single namespace are not
> > significantly less time efficient than those in a dual namespace under
> > similar conditions of optimization.

I'm not sure about this.  It says this for a specific configuration of those.
I'd have to study it longer than I have the patience for to find out if
this specific result generalizes.

> To me this demonstrates that hidden function namespace is as efficient
> as not hidden one when comparing some mysterious CL and Scheme
> implementations. So where are we now?

Also, it isn't obvious to me, and I don't have time to check, that
Joe has adequately controlled this experiment for other potential
cross-language and compiler differences, so nothing obvious leaps from
it to me.  That doesn't mean either that I believe or disbelieve Joe's
conclusions, just that I don't personally find this a compelling proof
technique.

But this is a weird discussion because of the conversational drift.  The
original claims in the conversation are that single namespace systems are
simpler.  But, to me, it's a weird notion of "simple" when the simplicity
does not automatically lead to efficiency and where you have to magically
know to implement all this extra mechanism (almost the same mechanism as
in the so-called "less-simple" system) in order to achieve parity.  And 
in the original formulation of the problem, the claim is made that there
is nothing special about function references over other references, yet
to achieve this equivalent efficiency you have to special-case function
references.

It's a sort of necessary result that if you allow arbitrary mechanism
under the sheets, a single namespace Lisp has to approach the same speed
as a multi-namespace one since, worst case, you can always make the second
namespace impossible to get to and just use the two-namespace Lisp as an
implementation vehicle for a one-namespace Lisp.  But this seems, to me,
a weak proof that a one-namespace Lisp is better from any kind of
implementation point of view.
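
Kent's worst-case construction can be sketched concretely as a defining form that keeps a symbol's value cell and function cell in sync, so user code sees only one namespace.  (A hypothetical `define1`, offered purely as an illustration; it is not a real CL facility, and the value-cell reference below works as shown at top level.)

```lisp
;; Hypothetical sketch: a one-namespace defining form layered on CL.
;; Every definition fills BOTH cells, so (f x) and plain f agree.
(defmacro define1 (name value)
  `(let ((v ,value))
     (setf (symbol-value ',name) v)   ; value namespace
     (when (functionp v)
       (setf (symbol-function ',name) v))  ; function namespace, kept in sync
     ',name))

(define1 double (lambda (x) (* 2 x)))
(double 21)          ; => 42, through the function cell
(funcall double 21)  ; => 42, through the value cell
```

With the raw SYMBOL-FUNCTION accessor hidden from the user, the second namespace becomes an invisible implementation detail, which is exactly the point above.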

It just leaves you back to the question of which one you'd want from a
user point of view.  And we already (hopefully) acknowledged that this 
choice is arbitrary, and IF it is uniquely determined by your choice of
aesthetics, that is only because you've made an arbitrary choice of
aesthetics in the first place, there being no uniquely determined set of
aesthetic rules.

Lisp1 people like their goal state of a one-namespace lisp, and
through Scheme they get it.  They fuss over the punning in a Lisp2, but I've
seen plenty of equally egregious punning of different kinds in a Lisp1.
That's the nature of the game.  We use the tools we're given and get the
most we can out of them.  But let's not play holier-than-thou about it.

Lisp2 people like their goal state of a two-namespace lisp, and through
CL they get it.  
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <8zmgxmfz.fsf@content-integrity.com>
Kent M Pitman <······@world.std.com> writes:

> Also, it isn't obvious to me, and I don't have time to check, that
> Joe has adequately controlled this experiment for other potential
> cross-language and compiler differences, so nothing obvious leaps from
> it to me.  That doesn't mean either that I believe or disbelieve Joe's
> conclusions, just that I don't personally find this a compelling proof
> technique.

Yes, I have not rigorously defended this as a proof.  I did make an
attempt to control language and compiler differences, but whether that
attempt was adequate is certainly not shown.

However, I was intending to show that having two namespaces does not
lead to improved performance (or not so improved that one would
desire two namespaces for performance only.)

I believe, however, that the results of my timing may be generalized
to other CL compilers.  I don't think that they will generalize to
other Scheme compilers because very few Scheme systems have this kind
of uuo-link mechanism.  Anyone who is interested in trying it out
should be able to reproduce my results.

> But this is a weird discussion because of the conversational drift.  The
> original claims in the conversation are that single namespace systems are
> simpler.  

I didn't claim this.

> And in the original formulation of the problem, the claim is made
> that there is nothing special about function references over other
> references, yet to achieve this equivalent efficiency you have to
> special-case function references.

Nor would I claim this, either.  Functions *are* quite different from
variables.

> It's a sort of necessary result that if you allow arbitrary mechanism
> under the sheets, a single namespace Lisp has to approach the same speed
> as a multi-namespace one since, worst case, you can always make the second
> namespace impossible to get to and just use the two-namespace Lisp as an
> implementation vehicle for a one-namespace Lisp.  

Exactly.

> But this seems, to me, a weak proof that a one-namespace Lisp is
> better from any kind of implementation point of view.

It is a strong suggestion that a one-namespace lisp is no worse than a
two-namespace lisp from a performance point of view.


I think we are both in agreement that one-namespace vs. two-namespaces
is one of aesthetics.


From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-D7D22F.00131508032001@news.nzl.ihugultra.co.nz>
In article <···············@world.std.com>, Kent M Pitman 
<······@world.std.com> wrote:

> Joe Marshall <···@content-integrity.com> writes:
> 
> > I understand that you do, but I have outlined a mechanism that is used
> > in practice and appears to refute your claim.
> 
> Then what I'm saying is that you might have a fixnum pointer whose
> backing store held an instruction which was a syntactically valid
> instruction to execute.  It could, for example, be seen as a system
> call.  And yet you could do (setq x that-fixnum) and if you could just
> funcall to it without checking it for pointerness (as in the PDP10
> bibop scheme, where checking meant consulting an external table), then
> you'd end up jumping to garbage and executing it. (We used to do this
> stuff intentionally in Maclisp.  But if you do it by accident, it's
> scary.  Now, a loader, either whole-image loader or a dynamic loader,
> might protect you.  But it might not.  That's my only point.)

You can be protected simply by actually having a code pointer which 
always points to valid code, and by arranging for setq to maintain that 
invariant.  This doesn't require exposing the existence of the code 
pointer to the gaze of the programmer.

We're assuming, of course, that you can actually tell a fixnum pointer 
from a function pointer by some means -- whether by a tag in the object 
referred to, or bitfields in the pointer itself doesn't matter.


> > I am assuming modern hardware.
> 
> "current" hardware.  My point is that hardware continues to
> change and not all changes are monotonically in a given direction.

All the more reason to keep code slots as a mere implementation 
technique, rather than explicitly exposing them to the programmer by 
having a different namespace for them.


> I think more short-sighted but the trade-off might be "more
> useful".  I'm not taking a position on that.  Dylan is an
> example of a language that I think I remember making some
> very specific tactical assumptions about the architecture
> (e.g., for numbers and character codes, maybe other things
> too, like files).

The only assumptions I'm aware of (from reading both the original 1992 
prefix syntax book and the current reference manual) is that <integer> 
should have at least 28 bits of precision.  It is implementation-defined 
whether <integer> operations are modulo, trap, or overflow into bignums 
-- and in fact Harlequin Dylan (now Functional Developer) allows this to 
be a compile-time choice via importing <integer> from one of several 
different libraries.

Perhaps you could be more clear about what "tactical assumptions" you 
are talking about?

-- Bruce
From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfwn1ax633a.fsf@world.std.com>
Bruce Hoult <·····@hoult.org> writes:

> > I think more short-sighted but the trade-off might be "more
> > useful".  I'm not taking a position on that.  Dylan is an
> > example of a language that I think I remember making some
> > very specific tactical assumptions about the architecture
> > (e.g., for numbers and character codes, maybe other things
> > too, like files).
> 
> The only assumptions I'm aware of (from reading both the original 1992 
> prefix syntax book and the current reference manual) is that <integer> 
> should have at least 28 bits of precision.

I'll take your word for this, though I thought there was more.

> It is implementation-defined whether <integer> operations are modulo,
> trap, or overflow into bignums 

This may be how they got out of what I thought was an "assumption".
(This sounds awful.  Who can do real work not knowing whether they
have modular arithmetic or real arithmetic?)

> Perhaps you could be more clear about what "tactical assumptions" you 
> are talking about?

I think I was talking both about the stuff above (had they not been
wishy-washy on the "implementation-defined" part) and also about
character.  As I recall, they made it impossible to have something of
type character that was not unicode--in particular that was bigger
than unicode.  Am I wrong on this?  A couple of times I almost had to
use Dylan for work, but mostly only ever wrote a few small programs to
test it, and read various versions of emerging manuals.  I didn't get
very deep into it, nor am I sure we're talking about suitably similar
versions since it changed a lot during the time I'm talking about.

Again with files, there are a lot of notions of files that we encountered
in the CL design that I don't expect the Dylan pathname (locator? can't
recall what they called them) model admits.  In CL's design we
were up against file systems that had no directory notion, that had multiple
hosts, that had two name components (or that could have either a version
or a type but not both), that had no hierarchy, that used spaces as 
component separators, some that used "<" and ">" to separate directory 
components directionally ("up" vs "down" in ">foo>bar>baz<x>y>z>w")
while others used them to wrap around the directory part ("<foo.bar.baz>").
And that's just at the syntax level; at the semantics level there were 
other differences.  It was a lot to unify.  I'm near certain Dylan didn't
attempt to follow in the "generality" CL sought, but thrived (such as
they did) on discarding the "baggage" of generality.

I'd rather not lean too heavily on this if we can't be in agreement here.
I thought I was saying something fairly non-controversial about Dylan
(i.e., something I'd heard directly from a Dylan designer--that this was
a difference between CL and Dylan--Dylan was designed specifically for
the Mac/PC platforms, not for arbitrary platforms, because those were
the ones that had "won", i.e., killed off competing platforms).
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-C45FC2.12504008032001@news.nzl.ihugultra.co.nz>
In article <···············@world.std.com>, Kent M Pitman 
<······@world.std.com> wrote:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > > I think more short-sighted but the trade-off might be "more
> > > useful".  I'm not taking a position on that.  Dylan is an
> > > example of a language that I think I remember making some
> > > very specific tactical assumptions about the architecture
> > > (e.g., for numbers and character codes, maybe other things
> > > too, like files).
> > 
> > The only assumptions I'm aware of (from reading both the original 1992 
> > prefix syntax book and the current reference manual) is that <integer> 
> > should have at least 28 bits of precision.
> 
> I'll take your word for this, though I thought there was more.
> 
> > It is implementation-defined whether <integer> operations are modulo,
> > trap, or overflow into bignums 
> 
> This may be how they got out of what I thought was an "assumption".
> (This sounds awful.  Who can do real work not knowing whether they
> have modulare arithmetic or real arithmetic?)

Well, you either know that your data values are nowhere near caring, or 
else you explicitly use an <int32> or <bignum> type.  And if you read 
the manual for your implementation then you'll know anyway.

People seem to get work done in C, which doesn't specify all sorts of 
things, including the results of divisions.


> > Perhaps you could be more clear about what "tactical assumptions" you 
> > are talking about?
> 
> I think I was talking both about the stuff above (had they not been
> wishy-washy on the "implementation-defined" part) and also about
> character.  As I recall, they made it impossible to have something of
> type character that was not unicode--in particular that was bigger
> than unicode.  Am I wrong on this?

Well, the manual certainly *says* "Unicode", but the syntax for 
character constants is '\<xxxx>', which readily admits extension to 
larger than unicode if your machine/compiler supports it.

And if your machine has big characters that don't happen to be Unicode 
then I guess the compiler gets to use a table mapping unicode numbers to 
whatever you actually have.  Assuming you want programs using wide 
character constants to be portable, which seems like a good thing.


> Again with files, there are a lot of notions of files that we encountered
> in the CL design that I don't expect the dylan pathname (locator? can't
> recall what they called them) model doesn't admit.  In CL's design we
> were up agains file systems that had no directory notion, that had 
> multiple
> hosts, that had two name components (or that could have either a version
> or a type but not both), that had no hierarchy, that used spaces as 
> component separators, some that used "<" and ">" to separate directory 
> components directionally ("up" vs "down" in ">foo>bar>baz<x>y>z>w")
> while ithers that used them to wrap around directory part 
> ("<foo.bar.baz>").

The Dylan Reference Manual says nothing at all about files and I/O.

The two existing implementations (Harlequin/Fun-O and Gwydion) have 
agreed on a common library spec that is sufficiently general to support 
Unix, MSDOS, and Macintosh.  This isn't ideal but it's better than 
nothing.  I expect any move from this at this point would be in the 
direction of using URLs (i.e. basically Unix format plus the protocol) 
and let the implementation map that to the local standard.

-- Bruce
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-036FA9.23495007032001@news.nzl.ihugultra.co.nz>
In article <···············@world.std.com>, Kent M Pitman 
<······@world.std.com> wrote:

> > The issue of whether a particular address contains executable code
> > and whether it would be legal to load that address into the program
> > counter is an issue of linker protocol.  Lisp hackers tend to forget
> > about linking because lisp links things on the fly and makes it easy
> > to run a partially linked image.
> 
> And modern programmers tend to assume the only hardware Lisp was designed
> for is the stuff you can buy right now.  On the PDP10, for example, you
> could load the contents of any  address into memory and execute it.
> And you could just JRST or JSP to any memory location.  The linker
> was not involved.

Ah, this time it is you rather than me who raises the possibility of 
decisions being time-dependent!

The reasons why Common Lisp was made the way it was in 1980 are very 
interesting, but I don't think it is reasonable to ignore current 
hardware and blindly maintain traditions that may (or may not) only be 
appropriate to stuff that you can't buy any more.

-- Bruce
From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfwofvd63v1.fsf@world.std.com>
Bruce Hoult <·····@hoult.org> writes:

> In article <···············@world.std.com>, Kent M Pitman 
> <······@world.std.com> wrote:
> 
> > > The issue of whether a particular address contains executable code
> > > and whether it would be legal to load that address into the program
> > > counter is an issue of linker protocol.  Lisp hackers tend to forget
> > > about linking because lisp links things on the fly and makes it easy
> > > to run a partially linked image.
> > 
> > And modern programmers tend to assume the only hardware Lisp was designed
> > for is the stuff you can buy right now.  On the PDP10, for example, you
> > could load the contents of any  address into memory and execute it.
> > And you could just JRST or JSP to any memory location.  The linker
> > was not involved.
> 
> Ah, this time it is you rather than me who raises the possibility of 
> decisions being time-dependent!

No, I observe others doing it.  I'm saying design for older hardware is
just as relevant as design for newer hardware in a system that seeks to
be timeless.  It's the people who are injecting phrases like "modern
operating systems do xxx" that are being time-dependent.  They have perhaps
not had the "luxury" (and I use the term advisedly) of publishing a paper
that refers to "modern" something and then looking back at it 20 years later
to see how laughable it sounds.  I've started trying to substitute words
like "contemporary" for modern in words I write to forums that I think will
survive into the future because it doesn't have the pejorative sense of
monotonic wisdom about it that I feel hides behind the use of "modern".
 
> The reasons why Common Lisp was made the way it was in 1980 are very 
> interesting, but I don't think it is reasonable to ignore current 
> hardware and blindly maintain traditions that may (or may not) only be 
> appropriate to stuff that you can't buy any more.

To the contrary, I think it's being built for those operating systems
that caused CL to be timeless in its design.  The people making the
decisions (and I was there at the time, but it wasn't me deciding
things like this, so it's not me patting myself on the back) were
sharp enough to realize that to have the language survive changes in
hardware over time, they ought not rely on features of hardware past
or present or future, but rather make a design that was as neutral as
possible.

To take a neutral (to this topic) example, they could have built in
ASCII encoding but they tolerated EBCDIC.  Whether EBCDIC was or is on
the way out is of no relevance; the point is that avoiding ASCII
implicitly avoided assumptions about Unicode, and if/when the world
outgrows Unicode (assuming Unicode doesn't just outright kill the
character sets that got left out), CL's character model will continue
to apply where Dylan's (for example) will have to be revised at the
language level.

Likewise for memory and operating system.  Geez, the "modern" operating
systems of the time were the Lisp Machine's.  They had hardware assist
for GC and there were tons of cool assumptions they could have built in
to the language about what the operating system and hardware would or
wouldn't do for them.  Automatic tracking of invisible pointers was a 
big deal and would have allowed much easier implementation of some language
features, but reliance on it would have limited the set of target platforms.

I firmly believe it's short-sighted to assume that what happens today
is "better" than what happened yesterday just because it killed off
yesterday.  Things run in cycles more than people like to admit.  Good
ideas get killed as much for market or political reasons as technical
ones.  Old concepts have a way of coming back--sometimes in our
lifetime if we're lucky enough not to have lost all remembrance of
them and have to invent them from scratch.  And the best way I know to
insulate myself from the problems that could cost is not to rely on
either past OR present in situations where I don't need to.
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-A1E77A.12335208032001@news.nzl.ihugultra.co.nz>
In article <···············@world.std.com>, Kent M Pitman 
<······@world.std.com> wrote:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > In article <···············@world.std.com>, Kent M Pitman 
> > <······@world.std.com> wrote:
> > 
> > > > The issue of whether a particular address contains executable code
> > > > and whether it would be legal to load that address into the program
> > > > counter is an issue of linker protocol.  Lisp hackers tend to 
> > > > forget
> > > > about linking because lisp links things on the fly and makes it 
> > > > easy
> > > > to run a partially linked image.
> > > 
> > > And modern programmers tend to assume the only hardware Lisp was 
> > > designed
> > > for is the stuff you can buy right now.  On the PDP10, for example, 
> > > you
> > > could load the contents of any  address into memory and execute it.
> > > And you could just JRST or JSP to any memory location.  The linker
> > > was not involved.
> > 
> > Ah, this time it is you rather than me who raises the possibility of 
> > decisions being time-dependent!
> 
> No, I observe others doing it.  I'm saying design for older hardware is
> just as relevant as design for newer hardware in a system that seeks to
> be timeless.

I agree, and therefore think that you should not design too closely to 
*either*.  You shouldn't put in language features that you can't see how 
to implement efficiently, but at the same time it's even *more* 
important to not put in language features that in some way *depend* on 
the machine you happen to be using at the time.

"Two namespaces was better on the PDP-10" strikes me as making the 
second mistake.



> To take a neutral (to this topic) example, they could have built in
> ASCII encoding but they tolerated EBCDIC.  Whether EBCDIC was or is on
> the way out is of no relevance; the point is that avoiding ASCII
> implicitly avoided assumptions about Unicode, and if/when the world
> outgrows Unicode (assuming Unicode doesn't just outright kill the
> character sets that got left out), CL's character model will continue
> to apply where Dylan's (for example) will have to be revised at the
> language level.

I don't understand this reference.

Dylan includes a "character" type but doesn't define how big it is or 
what the native encoding is.  Character and string literals can contain 
characters either as they are (e.g. "Hello world", in whatever the 
current character set is) or else as delimited hex strings such as 
"\<44>\<79>\<6c>\<61>\<6e>" (which is "Dylan").  There is no limit on 
the size of these hex strings, so Unicode, or extensions to Unicode are 
transparently supported.

Now, yes, the encoding is listed as ASCII/Unicode, but I don't see how 
that can be avoided.  How does Common Lisp allow you to portably 
specify, say, an a-umlaut, such that it works on EBCDIC systems?



> I firmly believe it's short-sighted to assume that what happens today
> is "better" than what happened yesterday just because it killed off
> yesterday.  Things run in cycles more than people like to admit.

I completely agree.  I believe Ivan Sutherland was the first to write 
about this.


> Good
> ideas get killed as much for market or political reasons as technical
> ones.  Old concepts have a way of coming back--sometimes in our
> lifetime if we're lucky enough not to have lost all remembrance of
> them and have to invent them from scratch.  And the best way I know to
> insulate myself from the problems that could cost is not to rely on
> either past OR present in situations where I don't need to.

I totally agree.

-- Bruce
From: Ray Blaak
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <m3hf16xcp9.fsf@blight.transcend.org>
Kent M Pitman <······@world.std.com> writes:
> Exactly.  This is the efficiency issue I mentioned, which cannot be
> duplicated in a Lisp1 without either massive theorem proving (takes lots of
> time) or declarations [...]  Consequently, unless you are happy with just
> having programs execute machine level garbage, there are certain function
> calls which are inherently faster in a Lisp2 than in a Lisp1 [...] A Lisp2
> can take advantage of this to check once at definition time, but a Lisp1
> cannot take advantage because it can't (due to the halting problem) check the
> data flow into every (f x) to be sure that f contained a valid
> machine-runnable function.

Another solution to the Lisp1 function-call efficiency problem that does not
require type declarations (although that suits my language preferences just
fine) is to have function bindings be (usually) immutable.

That way one can guarantee not only that a valid function is bound to the
symbol, but that a *particular* function is bound, allowing for further
optimizations.

For example, one could have (in some Lispy language that is not currently
Scheme or CL or anything in particular):

(define-constant foo (lambda (blah) blah))

or

(define (foo blah) blah)

or even

(defun foo (blah) blah)

all create immutable bindings.

If one truly does need functional variables, then the general symbol binding
can still be available to do the job:

(define foo (lambda (blah) blah))

This idea comes from Henry Baker's paper "Critique of DIN Kernel Lisp
Definition Version 1.2" at http://linux.rice.edu/~rahul/hbaker/CritLisp.html

One still has the problem of what to do with local function bindings, e.g.

(let ((foo (lambda (blah) blah)))
  ...)

Baker also recommends (for more general reasons) that (let ...) should create
immutable bindings by default, which would solve this problem. Alternatively,
data flow analysis in a (let ...) construct might be practical enough to allow
for function calls to be made efficient. Local (defun foo ...) or (define (foo
...) ...) declarations would also work.

At any rate, with such an approach, type declarations are no longer needed for
efficient function calls. Note however, that this approach requires strict
lexical scoping in order to work.
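The definition-time-versus-call-time distinction can be sketched outside Lisp, too. Here is a toy illustration in Python rather than Lisp (all names are invented for the sketch): Python, like a Lisp1, resolves a global function name at every call, so a rebinding is visible to all callers; capturing the binding once, at definition time, behaves like an immutable function binding, and the "is this a valid function?" question need only be answered once.

```python
def foo(x):
    return x

def calls_foo_late(x):
    return foo(x)            # "foo" is resolved afresh on every call

def make_early_caller(f):
    return lambda x: f(x)    # "f" is fixed once, when the caller is made

calls_foo_early = make_early_caller(foo)

foo = [1, 2, 3]              # rebind the name to data, as a Lisp1 permits

calls_foo_early(10)          # still the original function; returns 10
# calls_foo_late(10) would now raise TypeError: 'list' object is not callable
```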

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
·····@infomatch.com                            The Rhythm has my soul.
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-D26152.01252008032001@news.nzl.ihugultra.co.nz>
In article <··············@blight.transcend.org>, Ray Blaak 
<·····@infomatch.com> wrote:

> For example, one could have (in some Lispy language that is not currently
> Scheme or CL or anything in particular):
> 
> (define-constant foo (lambda (blah) blah))
> 
> or
> 
> (define (foo blah) blah)
> 
> or even
> 
> (defun foo (blah) blah)
> 
> all create immutable bindings.
> 
> If one truly does need functional variables, then the general symbol 
> binding
> can still be available to do the job:
> 
> (define foo (lambda (blah) blah))

Which is exactly what Dylan does.

// immutable bindings
define constant foo = method(blah) blah end;
define method foo(blah) blah end;

// mutable binding
define variable foo = method(blah) blah end;


The difference between the first two examples is that define method 
creates an implicit generic function (if it doesn't already exist) 
whereas the define constant doesn't.

Both Dylan implementations now provide a "define function" macro that 
expands to the "define constant" form.


> One still has the problem of what to do with local function bindings, 
> e.g.
> 
> (let ((foo (lambda (blah) blah)))
>   ...)

Dylan provides a syntax...

   local
      method a()  end,
      method b()  end,
      method c()  end;

... which provides immutable bindings for local functions that can be 
mutually-recursive.  You can of course also make mutable local function 
bindings using let.

-- Bruce
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-BA9536.00585507032001@news.nzl.ihugultra.co.nz>
In article <···············@world.std.com>, Kent M Pitman 
<······@world.std.com> wrote:

> Erik Naggum <····@naggum.net> writes:
> 
> > * Bruce Hoult <·····@hoult.org>
> > > Even if the function cell is known not to be data, what if
> > > it's empty? 
> > >  
> > > Don't you have to check for that?  Or are symbols in CL 
> > > bound to some sort of error function by default?
> > 
> >   You mean, unbound?  What the implementation does with an 
> >   unbound function slot in a symbol is not specified in the
> >   standard.  One smart way is to make the internal representation
> >   of "unbound" be a function that signals the appropriate error.
> >   That would make function calls faster, and you could not have
> >   optimized away the check for boundness if you asked for the
> >   functional value, anyway.  Note that the user of this code would
> >   never know how you represented the unbound value unless he peeked 
> >   under the hood, say by inspecting a symbol.
> 
> Exactly.  This is the efficiency issue I mentioned, which cannot
> be duplicated in a Lisp1 without either massive theorem proving
> (takes lots of time) or declarations (which Scheme, for example,
> won't do, it seems to me at least partially because the same
> minimalist mindset that drives them to want to be a Lisp1 also
> drives them to want to be declaration-free).
> Consequently, unless you are happy with just having programs execute 
> machine level garbage, there are certain function calls which are 
> inherently faster in a Lisp2 than in a Lisp1, assuming you believe
> (as I believe both CL and Scheme designers believe) that functions
> are called more often than they are defined.  A Lisp2 can take
> advantage of this to check once at  definition time, but a Lisp1
> cannot take advantage because it can't (due to the halting problem)
> check the data flow into every (f x) to be sure that f contained a
> valid machine-runnable function.

I see at least two serious problems with this argument:

1) you appear to be assuming that "Lisp1" is identically equal to 
"Scheme", when that's not the case at all.  Well, I know you *invented* 
the term "Lisp1", but I understand that you defined it in terms of the 
number of namespaces and not "actually, when I say Lisp1 I *really* mean 
Scheme but don't want to say so".

Other Lisp1's, such as Dylan, do in fact have declarations which enable 
the compiler, just as in a Lisp2, to put any necessary type checks at 
the point of assignment of the function instead of the point of use.


2) if the ability to move type checks from the point of use to the point 
of definition is in fact so important then why do it only for *function* 
values?  Why not do it for integers, floats, chars, strings, arrays, 
lists?  Perhaps each symbol should have a slot for a possible integer 
binding, a slot for a possible float binding, a slot for a possible char 
binding, a slot for a possible string binding, a slot for a possible 
array binding, and a slot for a possible pair binding?

If you do, for example, (CAR X) then the CAR operator will go direct to 
the slot in the symbol X that has been reserved for pairs.  No type 
check is necessary.  It is undefined what happens if that slot is 
unbound, but perhaps a smart implementation will put a pair there which 
refers to itself, or maybe which contains an illegal hardware address to 
cause a controlled fault?


But what about user-defined types, such as classes?  There are an 
infinite number of those possible.  You can't reserve a slot in every 
symbol for each one.


At some point doesn't it just become easier to break down and use type 
declarations and symbols that can be bound to only one value at any 
given time?

Or is the benefit from not having to type check function calls *so* much 
greater than the benefit from not having to type check integer addition 
or CAR/CDR that two namespaces (and not type declarations) is the 
optimum answer?  I wouldn't have thought so.

-- Bruce
From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfwr90bc86m.fsf@world.std.com>
Bruce Hoult <·····@hoult.org> writes:

> In article <···············@world.std.com>, Kent M Pitman 
> <······@world.std.com> wrote:
> 
> > Erik Naggum <····@naggum.net> writes:
> > 
> > > * Bruce Hoult <·····@hoult.org>
> > > > Even if the function cell is known not to be data, what if
> > > > it's empty? 
> > > >  
> > > > Don't you have to check for that?  Or are symbols in CL 
> > > > bound to some sort of error function by default?
> > > 
> > >   You mean, unbound?  What the implementation does with an 
> > >   unbound function slot in a symbol is not specified in the
> > >   standard.  One smart way is to make the internal representation
> > >   of "unbound" be a function that signals the appropriate error.
> > >   That would make function calls faster, and you could not have
> > >   optimized away the check for boundness if you asked for the
> > >   functional value, anyway.  Note that the user of this code would
> > >   never know how you represented the unbound value unless he peeked 
> > >   under the hood, say by inspecting a symbol.
> > 
> > Exactly.  This is the efficiency issue I mentioned, which cannot
> > be duplicated in a Lisp1 without either massive theorem proving
> > (takes lots of time) or declarations (which Scheme, for example,
> > won't do, it seems to me at least partially because the same
> > minimalist mindset that drives them to want to be a Lisp1 also
> > drives them to want to be declaration-free).
> > Consequently, unless you are happy with just having programs execute 
> > machine level garbage, there are certain function calls which are 
> > inherently faster in a Lisp2 than in a Lisp1, assuming you believe
> > (as I believe both CL and Scheme designers believe) that functions
> > are called more often than they are defined.  A Lisp2 can take
> > advantage of this to check once at  definition time, but a Lisp1
> > cannot take advantage because it can't (due to the halting problem)
> > check the data flow into every (f x) to be sure that f contained a
> > valid machine-runnable function.
> 
> I see at least two serious problems with this argument:
> 
> 1) you appear to be assuming that "Lisp1" is identically equal to 
> "Scheme", when that's not the case at all.  Well, I know you *invented* 
> the term "Lisp1", but I understand that you defined it in terms of the 
> number of namespaces and not "actually, when I say Lisp1 I *really* mean 
> Scheme but don't want to say so".
> 
> Other Lisp1's, such as Dylan, do in fact have declarations which enable 
> the compiler, just as in a Lisp2, to put any necessary type checks at 
> the point of assignment of the function instead of the point of use.

Absolutely.  This thread initiated discussing Scheme and namespaces though.
Just as a Lisp1 calls for hygienic macros when a Lisp2 doesn't, it also
calls for declarations.

> 2) if the ability to move type checks from the point of use to the point 
> of definition is in fact so important then why do it only for *function* 
> values? 

Because when you illegally reference a pointer, the worst you get is 
generally a pointer into a non-existent page.  When you jump to garbage
thinking it's machine executable data, the worst case can be much worse:
it could be an integer whose bit configuration coincidentally says
"delete all my files".

> Why not do it for integers, floats, chars, strings, arrays, 
> lists?

Not a bad plan, but not as essential, in the sense of image integrity.

> ...
> At some point doesn't it just become easier to break down and use type 
> declarations and symbols that can be bound to only one value at any 
> given time?

No.  Because the decision to use only one namespace is expressionally
limiting.  I simply would not want to use only one namespace for 
expressional reasons.  I'm only using the technical argument to reinforce
that this is a sound choice.
 
> Or is the benefit from not having to type check function calls *so* much 
> greater than the benefit from not having to type check integer addition 
> or CAR/CDR that two namespaces (and not type declarations) is the 
> optimum answer?  I wouldn't have thought so.

I personally think so.  Perhaps this is just an opinion.  I haven't
coded machine code in a long time, so it's possible that the
equivalent "danger" has been created in other areas since then, but
function calling in my day used to be special (danger-wise) in the way
I'm describing, in a way ordinary data is not.
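For concreteness, the "unbound slot" representation Erik describes can be sketched as follows, transliterated into Python (the names and the dictionary-based function-cell table are invented for the sketch; the real trick lives at the machine level): every function cell starts out holding an error-signaling function, so call sites can jump through the cell unconditionally, with no boundness check.

```python
from collections import defaultdict

class UndefinedFunctionError(Exception):
    pass

def signal_unbound(*args):
    # The "unbound" marker is itself a callable that signals the error,
    # so no call site ever needs a separate boundness check.
    raise UndefinedFunctionError("undefined function called")

# Every symbol's function cell defaults to the error-signaling function.
function_cells = defaultdict(lambda: signal_unbound)

def funcall(name, *args):
    return function_cells[name](*args)   # unconditional "jump" through the cell

function_cells["double"] = lambda x: 2 * x
funcall("double", 21)                    # returns 42
# funcall("missing") raises UndefinedFunctionError
```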
From: Duane Rettig
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <4u256hkj6.fsf@beta.franz.com>
This is really more of a response to Bruce Hoult than to Kent Pitman,
but since Kent started a tentative argument in the direction I wanted
to go anyway, I am answering his article.

Up to this point, the arguments between Lisp1 and Lisp2 have either
been religious or aesthetic.  I'd like to introduce an "implementational"
argument, that is, that the number of namespaces should closely follow
what the underlying hardware best implements.  In the case of code vs
data, _all_ modern computer hardware of any significance establishes a
clear distinction between code and data spaces, even though that
distinction could be blurred a little because the spaces tend to overlap
in practical situations.  However, anyone who has had to deal with
cache-flushing mechanisms whenever establishing or moving a code vector
will see first-hand this distinction.

Kent M Pitman <······@world.std.com> writes:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > 2) if the ability to move type checks from the point of use to the point 
> > of definition is in fact so important then why do it only for *function* 
> > values? 
> 
> Because when you illegally reference a pointer, the worst you get is 
> generally a pointer into a non-existent page.  When you jump to garbage
> thinking it's machine executable data, the worst case can be much worse:
> it could be an integer whose bit configuration coincidentally says
> "delete all my files".

A cogent argument, but I actually think it's more of an efficiency
argument than a safety argument.  It's true that one architecture's
"garbage" is another architecture's machine instruction.  Nowadays,
even newer versions of the "same" architecture will relegate a deprecated
bit pattern to an "emulation trap", so that the machine treats the code
as garbage (somewhat) but a trap handler will simulate an execution of
the instruction anyway.  Taking this emulation a step further, any data
at all could be made to _look_ like instructions, with the proper
emulator (it doesn't even have to look like the same architecture as
the one doing the emulation).  Any such emulation could possibly result
in the "delete all my files" coincidence.  But the most efficient way
to do so :-) is through native code, as much as possible, where the
actual level of native-ness depends on your design and portability
requirements.

The way this all ties in with the Lisp1/Lisp2 argument is that if you
implement your lisp at a native-down-to-the-hardware level, you can
take advantage of codespace vectoring to perform your functionality
checks, as I believe Erik and Kent have discussed already, but if you
must treat your code as potential data, even though it is in a functional
position, then you must either make checks at runtime or elide them by
checking at compile-time.  This reduces dynamicity.  And since CL has
a way to transition from data to code (i.e. via funcall) it loses
nothing in practice.

> > Why not do it for integers, floats, chars, strings, arrays, 
> > lists?
> 
> Not a bad plan, but not as essential, in the sense of image integrity.

Along the same lines as my efficiency argument above:  I submit that
ANSI C does this very thing, more and more as time progresses.  Each
architecture has a Standard Calling convention, where register uses are
assigned.  Many of the RISC architectures define a set of N integer
registers and a set of N floating point registers that will be used
to pass the first N arguments between functions.  So, for example,
if a C function is defined as

  int foo (int a, int b, float c, double d, int e);

then the arguments might be passed in gr1, gr2, fr3, fr4, and gr5,
respectively (gr => general or integer register, fr => float register).

The advantage of passing in this manner is one of efficiency.  The
floating point units tend to be separate, and a move and/or conversion
to an integer register tends to add to the instruction and cycle count.
Passing a float argument in a float register is the "natural" thing to
do.

The disadvantage of this kind of passing is one of normalization (the
lack thereof); both caller and callee must agree on where the arguments
will be, or the results could be disastrous.  For example, in the above
declaration, if the caller of foo placed the third argument into gr3
instead of fr3, then the argument seen would be garbage.

Performing a hand-wave, I conclude that the reasons for using the first
style vs the second style have to do with dynamism.  The first style eschews
dynamism and the second style allows it.  CL defines the second style
for its calling convention, and this allows maximum dynamism.  As we lisp
vendors have had to provide foreign calling capabilities, such capabilities
inherently tend to force such resulting code to be static in nature, to
the extent that it is made efficient.

> > At some point doesn't it just become easier to break down and use type 
> > declarations and symbols that can be bound to only one value at any 
> > given time?
> 
> No.  Because the decision to use only one namespace is expressionally
> limiting.  I simply would not want to use only one namespace for 
> expressional reasons.  I'm only using the technical argument to reinforce
> that this is a sound choice.

Perhaps when it comes down to it, the technical argument becomes the
only one.  If one sticks only with arguments of Turing completeness, one
could argue that a Turing machine is just as good as a CL (better, in fact,
because it is simpler).  Note that in rejecting this previous statement as
ridiculous, we all use the obvious efficiency argument to disprove the
statement, even if only subconsciously.

Perhaps the best way to answer Mr. Hoult's question is to invite him to
continue on with the thought process and to flesh out his design, to see
if he can come to a point where such multiple-bindings-per-type style
is really easier or not...

> > Or is the benefit from not having to type check function calls *so* much 
> > greater than the benefit from not having to type check integer addition 
> > or CAR/CDR that two namespaces (and not type declarations) is the 
> > optimum answer?  I wouldn't have thought so.
> 
> I personally think so.  Perhaps this is just an opinion.  I haven't
> coded machine code in a long time, so it's possible that the
> equivalent "danger" has been created in other areas since then, but
> function calling in my day used to be special (danger-wise) in the way
> I'm describing, in a way ordinary data is not.

Your instincts are good, as I have mentioned above with the float-vs-int
parameter passing.  However, the code-vs-data has always been much more
distinctive in the Von Neumann model, and will probably always be the
most clear dividing line between namespaces.

-- 
Duane Rettig          Franz Inc.            http://www.franz.com/ (www)
1995 University Ave Suite 275  Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253   ·····@Franz.COM (internet)
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bsre7idp.fsf@content-integrity.com>
Kent M Pitman <······@world.std.com> writes:

> Because when you illegally reference a pointer, the worst you get is 
> generally a pointer into a non-existent page.  When you jump to garbage
> thinking it's machine executable data, the worst case can be much worse:
> it could be an integer whose bit configuration coincidentally says
> "delete all my files".

The worst you can get when you dereference an illegal pointer can be
destruction of your hardware.  I remember a nasty garbage collector bug
that caused a stray write to the screen controller in a PC.  Certain values
loaded into the controller could cause physical damage to the screen.
In this case, the screen was completely trashed.

Another case I remember involved landing the heads of a disk drive off
the platter, then attempting a seek to an inner cylinder.

Come to think of it, I don't think I've *ever* heard of a stray jump
causing all files to be deleted.




-----= Posted via Newsfeeds.Com, Uncensored Usenet News =-----
http://www.newsfeeds.com - The #1 Newsgroup Service in the World!
-----==  Over 80,000 Newsgroups - 16 Different Servers! =-----
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-1F66D5.00314708032001@news.nzl.ihugultra.co.nz>
In article <···············@world.std.com>, Kent M Pitman 
<······@world.std.com> wrote:

> > 2) if the ability to move type checks from the point of use to the 
> > point of definition is in fact so important then why do it only for 
> > *function* values? 
> 
> Because when you illegally reference a pointer, the worst you get is 
> generally a pointer into a non-existent page.  When you jump to garbage
> thinking it's machine executable data, the worst case can be much worse:
> it could be an integer whose bit configuration coincidentally says
> "delete all my files".

I don't see anyone advocating not providing suitable checks at *all*.  
Waterproof type-safety is an *extremely* important characteristic.


> > At some point doesn't it just become easier to break down and use type 
> > declarations and symbols that can be bound to only one value at any 
> > given time?
> 
> No.  Because the decision to use only one namespace is expressionally
> limiting.  I simply would not want to use only one namespace for 
> expressional reasons.

In what way is it limiting?  Can you give examples where you would 
habitually pun functions and values onto the same name?

-- Bruce
From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfwk86162ul.fsf@world.std.com>
Bruce Hoult <·····@hoult.org> writes:

> In article <···············@world.std.com>, Kent M Pitman 
> <······@world.std.com> wrote:

> > ... the decision to use only one namespace is expressionally
> > limiting.  I simply would not want to use only one namespace for 
> > expressional reasons.
> 
> In what way is it limiting?  Can you give examples where you would 
> habitually pun functions and values onto the same name?

(defun foo (list)
  (loop for x in list
        collect (list (first x) (third x))))

I do this all the time.  And after 20 years of practice, it continues to
drive me nuts in Scheme when it blows up "needlessly".  In Scheme, you
learn to misspell your parameters ("lst") to reduce the risk of this.
I hate that.  It is obvious from context what I meant.  In the rare case
that I want to funcall my parameter, and it IS rare enough in CL to be
meaningful, I would prefer to designate that by putting in a FUNCALL.
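The same collision, transliterated into Python (a single-namespace language; the function names here are invented for the sketch), shows exactly the "needless" blowup: the parameter shadows the builtin constructor.

```python
def foo(items):
    # In CL, (list (first x) (third x)) works even with a parameter named
    # LIST, because functions and variables live in separate namespaces.
    return [list((x[0], x[2])) for x in items]

def foo_shadowed(list):
    # Same body, but the parameter is named "list": the constructor is
    # now shadowed by the argument, and the call blows up.
    return [list((x[0], x[2])) for x in list]

foo([(1, 2, 3)])             # returns [[1, 3]]
# foo_shadowed([(1, 2, 3)])  raises TypeError: 'list' object is not callable
```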
From: Tim Bradshaw
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <ey3snkom3io.fsf@cley.com>
* Kent M Pitman wrote:

> I do this all the time.  And after 20 years of practice, it continues to
> drive me nuts in Scheme when it blows up "needlessly".  In Scheme, you
> learn to misspell your parameters ("lst") to reduce the risk of this.
> I hate that.  It is obvious from context what I meant.  In the rare case
> that I want to funcall my parameter, and it IS rare enough in CL to be
> meaningful, I would prefer to designate that by putting in a FUNCALL.

I once used a single-namespace lisp with dynamic scope, which can lead
to bugs of completely amazing obscurity along these lines.

--tim
From: Ray Blaak
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <m3hf13tpg0.fsf@blight.transcend.org>
Tim Bradshaw <···@cley.com> writes:
> I once used a single-namespace lisp with dynamic scope, which can lead
> to bugs of completely amazing obscurity along these lines.

It occurred to me to wonder if the function/variable separation originally came
about as a solution to this problem in dynamically scoped lisps, given that
lisps were originally dynamically scoped (check: they were, were they not?).

That is, to fix this practical problem, as opposed to efficiency/aesthetics
considerations.

God knows, a single-spaced dynamic lisp could lead to abominations:

  (defun some-other-list ()
    (list 'a 'b 'c))

  (let ((list '(1 2 3)))
    (let ((another-list (some-other-list)))
      (cons list another-list)))

When some-other-list is called, list is no longer the function. Not only would
it execute incorrectly, but it could no longer even execute at all.

Lisp2 in a dynamically-scoped lisp is not a matter of preference but a
requirement. Sure you can still screw functions up, but not quite as
accidentally.
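A toy model of this failure mode, in Python rather than Lisp (dynamic scope is simulated with an explicit stack of binding frames; everything here is invented for illustration), shows how a callee sees the caller's data binding for "list":

```python
# A toy single-namespace, dynamically scoped "Lisp": one shared stack of
# binding frames for functions and variables alike.
env = [{"list": lambda *args: [*args]}]   # global frame: "list" is a function

def lookup(name):
    for frame in reversed(env):           # dynamic scope: newest frame wins
        if name in frame:
            return frame[name]
    raise NameError(name)

def some_other_list():
    # Calls whatever "list" happens to be dynamically bound to right now.
    return lookup("list")("a", "b", "c")

some_other_list()                         # returns ["a", "b", "c"]

env.append({"list": [1, 2, 3]})           # the (let ((list '(1 2 3))) ...) frame
# some_other_list() now finds the caller's data binding and blows up:
# TypeError: 'list' object is not callable
env.pop()
```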

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
·····@infomatch.com                            The Rhythm has my soul.
From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfwpufh6kzb.fsf@world.std.com>
Ray Blaak <·····@infomatch.com> writes:
>
> Tim Bradshaw <···@cley.com> writes:
> > I once used a single-namespace lisp with dynamic scope, which can
> > lead to bugs of completely amazing obscurity along these lines.
> 
> It occurred to me to wonder if the function/variable separation
> originally came about as a solution to this problem in dynamically
> scoped lisps, given that lisps were originally dynamically scoped
> (check: they were, were they not?).

(Yes, mostly.  It didn't used to be as semantically rigorous as all 
of that, and some compilation processes played fast and loose with it,
but generally you're right.)
 
> That is, to fix this practical problem, as opposed to efficiency/aesthetics
> considerations.
> 
> God knows, a single-spaced dynamic lisp could lead to abominations:
> 
>   (defun some-other-list ()
>     (list 'a 'b 'c))
> 
>   (let ((list '(1 2 3)))
>     (let ((another-list (some-other-list)))
>       (cons list another-list)))
> 
> When some-other-list is called, list is no longer the function. Not
> only would it execute incorrectly, but it could no longer even
> execute at all.
> 
> Lisp2 in a dynamically-scoped lisp is not a matter of preference but a
> requirement. Sure you can still screw functions up, but not quite as
> accidentally.

I passed this post along to several people who I thought might know; the
only reply I got back was the following from JonL White, which he said I
could quote back to the group:

JonL> I recall advancing that argument way back in the early-to-mid
JonL> 1970's, when first hearing about the "simplifying" nature of the
JonL> lambda-calculus using only one name-to-value mapping.  [and this
JonL> _may_ have been an issue with Scheme design; Guy, wdyt?]  This
JonL> particular presentation based on the liklihood of having a
JonL> putatively local variable named 'list' is so obvious that
JonL> *everyone* facing the issue must think of it at de novo at one
JonL> point or another.
JonL>
JonL> A typical counterargument is that there are "reserved words" in
JonL> any language and one just has to get used to it.  Consider having
JonL> functions Horse, Buggy, and Car.  But the compelling interest of
JonL> the present example is that 'list' is just too obvious a
JonL> candidate for _both_ a function and a local variable name.
JonL>
JonL> Incidentally, even Lisp1.5 mixed a true interpreter with
JonL> compiled code.  I _believe_, based on the somewhat copycat
JonL> design of the PDP6 Lisp, that it "compiled away" local
JonL> (non-SPECIAL) variable references so that the compiler's model
JonL> indeed did have true local name-scoping for local variables.
JonL> But the interpreter didn't.  Inspired by similar problems in the
JonL> redesign effort called NIL (the "New Implementation of Lisp") I
JonL> proposed some tricks for efficiently implementing an interpreter
JonL> that would correctly separate out the lexical boundaries both
JonL> for variable names, for function names, and for declaration
JonL> information [I think it was presented in the 1982 L&FP.]
From: Tim Bradshaw
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <nkjlmq1hpr9.fsf@tfeb.org>
Kent M Pitman <······@world.std.com> writes:

> 
> (Yes, mostly.  It didn't used to be as semantically rigorous as all 
> of that, and some compilation processes played fast and loose with it,
> but generally you're right.)

I remember now what the issue was that bit me, and it was exactly this
compilation/interpretation thing.

Somewhere there was a function whose definition looked like this (in
reconstructed form, I think the defining form was DF not DEFUN (or
maybe DE?)):

  (defun foo (x)
    (let ((list (cons x nil)))
      ...))

This called a bunch of stuff, and somewhere down the call tree (a long
way down) was:

  (defun bar (...)
    ...
    (list ...))

Now: if FOO was compiled, then the LIST variable was compiled away and
was not visible to its callees.  If FOO was interpreted but all the
callees were compiled then if they called LIST this got resolved at
compile time, so everything was still OK.  But if FOO and a callee
which called LIST were both interpreted, then things would fail in
exciting ways, even if the callee which was interpreted was a long way
down the stack.

This leaves you with a bomb waiting to go off if you are developing a
system -- two apparently unrelated changes to the code -- one is not
enough -- cause things to blow up in completely mysterious ways.  I
also think that the system this was written in didn't have an in-core
compiler, or if it did I didn't use it (it probably autoloaded and we
had pretty serious memory issues), so when developing programs things
did move from being compiled to interpreted like this.  So when it
blows up, the first thing you do is stop the system, rebuild it from
cold, and then the problem is gone, until a few days later, when it
comes back.

--tim
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <4rww8mte.fsf@content-integrity.com>
Kent M Pitman <······@world.std.com> writes:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > In article <···············@world.std.com>, Kent M Pitman 
> > <······@world.std.com> wrote:
> 
> > > ... the decision to use only one namespace is expressionally
> > > limiting.  I simply would not want to use only one namespace for 
> > > expressional reasons.
> > 
> > In what way is it limiting?  Can you give examples where you would 
> > habitually pun functions and values onto the same name?
> 
> (defun foo (list)
>   (loop for x in list
>         collect (list (first x) (third x))))
> 
> I do this all the time.  And after 20 years of practice, it continues to
> drive me nuts in Scheme when it blows up "needlessly".  In Scheme, you
> learn to misspell your parameters ("lst") to reduce the risk of this.

This is, of course, the canonical example, but do you often (as in
constantly) have this name collision problem for any symbol other than
LIST ?


From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <r900734d.fsf@content-integrity.com>
Erik Naggum <····@naggum.net> writes:

> * Joe Marshall <···@content-integrity.com>
> > This is, of course, the canonical example, but do you often (as in
> > constantly) have this name collision problem for any symbol other than
> > LIST?
> 
>   Do you have insurance?  Even if accidents don't regularly happen to you?

I don't have complete coverage.  I don't have insurance against things
that are very unlikely to happen to me;  I have no flood insurance
because I live on the top of a 50' (15 m) hill.  I have no insurance
against things that don't regularly happen to *anyone* (like alien
abduction).

I often use the symbol LIST for a value, occasionally CONS, but I
don't suppose many people use the value cell of the symbol
LOAD-LOGICAL-PATHNAME-TRANSLATIONS.  Since I rarely see examples other
than LIST used to demonstrate name collision, I wanted some real-world
examples of other common name collisions.


From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfw4rww8fmj.fsf@world.std.com>
Joe Marshall <···@content-integrity.com> writes:

> I often use the symbol LIST for a value, occasionally CONS, but I
> don't suppose many people use the value cell of the symbol
> LOAD-LOGICAL-PATHNAME-TRANSLATIONS.  Since I rarely see examples other
> than LIST used to demonstrate name collision, I wanted some real-world
> examples of other common name collisions.


(defun load-my-system (&key load-logical-pathname-translations)
  ...
  (when load-logical-pathname-translations
    (load-logical-pathname-translations ...))
  ...)

The issue isn't whether most people do it.  The issue is whether it's
a reasonable and natural thing to want to do.  I claim it is.

Relevant quote:

 "Those who like this kind of thing will find that this is the kind
  of thing they like."

  --not sure of the author, but a quick web search turned up at
    least one person claiming it was Abraham Lincoln
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <66hcxgyc.fsf@content-integrity.com>
Kent M Pitman <······@world.std.com> writes:

> Joe Marshall <···@content-integrity.com> writes:
> 
> > I often use the symbol LIST for a value, occasionally CONS, but I
> > don't suppose many people use the value cell of the symbol
> > LOAD-LOGICAL-PATHNAME-TRANSLATIONS.  Since I rarely see examples other
> > than LIST used to demonstrate name collision, I wanted some real-world
> > examples of other common name collisions.
> 
> 
> (defun load-my-system (&key load-logical-pathname-translations)
>   ...
>   (when load-logical-pathname-translations
>     (load-logical-pathname-translations ...))
>   ...)
> 

I would probably name that `load-logical-pathname-translations-p'

> The issue isn't whether most people do it.  The issue is whether it's
> a reasonable and natural thing to want to do.  I claim it is.

I would claim that if it is a reasonable and natural thing to do, most
people would do it (or attempt to do it).  That's why I wanted more
examples. 


> Relevant quote:
> 
>  "Those who like this kind of thing will find that this is the kind
>   of thing they like."
> 
>   --not sure of the author, but a quick web search turned up at
>     least one person claiming it was Abraham Lincoln

This is one of my favorite quotes.  It has been attributed to Lincoln
for quite some time, but there is no compelling evidence that he
actually said it.  I like to think he did, though.
 


From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfwitlckngs.fsf@world.std.com>
Joe Marshall <···@content-integrity.com> writes:

> Kent M Pitman <······@world.std.com> writes:
> 
> > Joe Marshall <···@content-integrity.com> writes:
> > 
> > > I often use the symbol LIST for a value, occasionally CONS, but I
> > > don't suppose many people use the value cell of the symbol
> > > LOAD-LOGICAL-PATHNAME-TRANSLATIONS.  Since I rarely see examples other
> > > than LIST used to demonstrate name collision, I wanted some real-world
> > > examples of other common name collisions.
> > 
> > 
> > (defun load-my-system (&key load-logical-pathname-translations)
> >   ...
> >   (when load-logical-pathname-translations
> >     (load-logical-pathname-translations ...))
> >   ...)
> > 
> 
> I would probably name that `load-logical-pathname-translations-p'
> 
> > The issue isn't whether most people do it.  The issue is whether it's
> > a reasonable and natural thing to want to do.  I claim it is.
> 
> I would claim that if it is a reasonable and natural thing to do, most
> people would do it (or attempt to do it).  That's why I wanted more
> examples. 

I often don't, to avoid confusion over
 load-logical-pathname-translationsp
 load-logical-pathname-translations?
etc.  Also, sometimes it's not a simple boolean, as in

(defun load-my-system (&key load-logical-pathname-translations)
  ...
  (when load-logical-pathname-translations
    (case load-logical-pathname-translations
      ((t) (load-logical-pathname-translations *default-translations*))
      (otherwise (load-logical-pathname-translations
                  load-logical-pathname-translations)))))

> > The issue isn't whether most people do it.  The issue is whether it's
> > a reasonable and natural thing to want to do.  I claim it is.
>
> I would claim that if it is a reasonable and natural thing to do, most
> people would do it (or attempt to do it).

By this argument, being honest is not reasonable and natural for kids
to do, since statistics show that upward of 90% of students cheat or
have cheated.  One might note that other things that most people don't
do or attempt to do are: being gay, being smart, being catholic, being 
jewish, being republican, being democrat, or even being male.  Surely a
statistical approach to what is reasonable/natural (i.e., normal) is
inappropriate.

When a doctor tells a person they have type AB blood, and they
ask if this is a reasonable/natural result or, again I'm just going to
say a "normal" result, the doctor doesn't say "no" even though he knows
that it's statistically not predicted.  The reason is that the person is
not asking whether it's common or not common, the person is asking whether
remedial action is called for--whether there is cause for alarm.  It is not
statistically likely to have any given birthday, yet it is not "out of the
norm" or "beyond reason" or "unnatural" to have any given birthday.

I think the question is not whether the output is reasonable but
whether the process leading to the output is flawed in such a way that
it is not functioning and should be repaired.  In this regard, I've
seen some pretty darned weird variable names coming from some pretty
rational people, so I wouldn't push this too far.  MACSYMA had some
that were English abbreviations for transliterations of Chinese words,
for example.  Geez, MACLISP even used the variables CAR, CDR, and
ERRSET as option variables to control the effect of the operators CAR
and CDR and ERRSET, respectively; it made the names easy to remember,
and one can see definite reason in that.

I think the question as to whether something is natural should mean that
someone operating with a properly reasoning processor might be expected to
sometimes come across this approach without some special design to thwart
the ongoing test or otherwise make mischief.  So if we checked code
that was written without an attempt to pass this test and we found it common
not to follow the "-P" convention either because people feel it looks stupid
or they can't figure out when to use the "hyphen" or they don't like the
way it's pronounced or it feels like gratuitous extra typing, then I think
we'd have to conclude it's natural not to use it.  I think even without 
looking very far, we'd find systems that have a :load-patches argument rather
than :load-patches-p to control whether patches get loaded.  I know LispWorks'
scm:compile-system takes a :load argument to control whether the file is 
loaded; no, it's not called :load-p.  I'm sufficiently sure that I could come
up with tons more data on this that I'll just stop here and rest my case
on natural.
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <k85s542h.fsf@content-integrity.com>
Kent M Pitman <······@world.std.com> writes:

> Joe Marshall <···@content-integrity.com> writes:
> 
> > I would probably name that `load-logical-pathname-translations-p'
> 
> I often don't to avoid confusion over
>  load-logical-pathname-translationsp
>  load-logical-pathname-translations?
> etc.  Also, sometimes it's not simple boolean, as in...

Fair enough.

Now for something completely off topic:

> > I would claim that if it is a reasonable and natural thing to do, most
> > people would do it (or attempt to do it).

> By this argument, being honest is not reasonable and natural for kids
> to do, since statistics show that upward of 90% of students cheat or
> have cheated.  

Careful there, I didn't claim the contrary.  I hope that 90% of
students don't think cheating is reasonable or natural.

> One might note that other things that most people don't do or
> attempt to do are: being gay, being smart, being catholic, being
> jewish, being republican, being democrat, or even being male.

Being male isn't natural for a woman, and some of those things listed
aren't reasonable...

> Surely a statistical approach to what is reasonable/natural (i.e.,
> normal) is inappropriate.

I'd expect *some* statistical correlation.  A reasonable and natural
activity that no one ever does seems to be stretching the definition.

> When a doctor tells a person they have type AB blood, and they
> ask if this is a reasonable/natural result or, again I'm just going to
> say a "normal" result, the doctor doesn't say "no" even though he knows
> that it's statistically not predicted.  

Sure, but I didn't say that it had to be statistically predicted, only
that there ought to be some common examples.

> The reason is that the person is
> not asking whether it's common or not common, the person is asking whether
> remedial action is called for--whether there is cause for alarm.  It is not
> statistically likely to have any given birthday, yet it is not "out of the
> norm" or "beyond reason" or "unnatural" to have any given birthday.

Again, any given day is the birthday for a large number of people.
 
> I think the question as to whether something is natural should mean that
> someone operating with a properly reasoning processor might be expected to
> sometimes come across this approach without some special design to thwart
> the ongoing test or otherwise make mischief.  So if we checked code
> that was written without an attempt to pass this test and we found it common
> not to follow the "-P" convention either because people feel it looks stupid
> or they can't figure out when to use the "hyphen" or they don't like the
> way it's pronounced or it feels like gratuitous extra typing, then I think
> we'd have to conclude it's natural not to use it.  

Again, that's the reverse of what I am claiming.  If you, for example,
were to claim that it is reasonable and natural to use the ^ character
as a word separator in an identifier, I'd expect you to be able to
defend that position with some easily found examples of people doing
just that.

> I think even without looking very far, we'd find systems that have a
> :load-patches argument rather than :load-patches-p to control
> whether patches get loaded.  I know LispWorks' scm:compile-system
> takes a :load argument to control whether the file is loaded; no,
> it's not called :load-p.  I'm sufficiently sure that I could come up
> with tons more data on this that I'll just stop here and rest my
> case on natural.

I'm sure you could, but that wasn't my thesis.  I'm arguing that it
would be hard to defend something as reasonable and natural if it were
in fact quite rare and unusual.


From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfwwv9rkio3.fsf@world.std.com>
Joe Marshall <···@content-integrity.com> writes:

> > Surely a statistical approach to what is reasonable/natural (i.e.,
> > normal) is inappropriate.
> 
> I'd expect *some* statistical correlation.  A reasonable and natural
> activity that no one ever does seems to be stretching the definition.

This may be a common model people have for reasonableness and naturalness,
but I'm claiming it's flawed because it does not reasonably explain many
natural facts about the use of these words.

> > The reason is that the person is
> > not asking whether it's common or not common, the person is asking whether
> > remedial action is called for--whether there is cause for alarm.  It is not
> > statistically likely to have any given birthday, yet it is not "out of the
> > norm" or "beyond reason" or "unnatural" to have any given birthday.
> 
> Again, any given day is the birthday for a large number of people.

You used the word "most" in your original message. It was the use of this
word that set me off.  But even so, an appeal to statistics is not relevant.

> I'm sure you could, but that wasn't my thesis.  I'm arguing that it
> would be hard to defend something as reasonable and natural if it were
> in fact quite rare and unusual.

Nonsense.  Type AB blood is reasonable and natural even if rare and unusual.
All recessive traits are this way, yet not all are "unreasonable".

The question of reasonableness has to do with whether they follow logic
both in implementation and intent; in effect, whether they have ill
effects, not whether they are common.  Tail recursion, for example, might
be argued to be unreasonable if you took a look at how many people use it.
But it can be shown to be a structurally sound approach to iteration,
and, I think, therefore "reasonable" regardless of statistics of use.

The question of natural has to do with whether the engine producing them is
functioning properly not whether it kicks out these things often.  It is
reasonable and natural to write Gone With The Wind ... once. (It would be 
surprising to see it done twice, of course.  Just as it would be reasonable
to see a random number generator generate any given long sequence of numbers,
but surprising to see it do the same long series twice without more time
between than we would normally want to wait.)

Rare is not the opposite of reasonable.

Natural is not the opposite of unusual.
From: Joe Marshall
Subject: OT:  reasonable and natural Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bsr36g1u.fsf_-_@content-integrity.com>
Kent M Pitman <······@world.std.com> writes:

> Joe Marshall <···@content-integrity.com> writes:
> 
> > > Surely a statistical approach to what is reasonable/natural (i.e.,
> > > normal) is inappropriate.
> > 
> > I'd expect *some* statistical correlation.  A reasonable and natural
> > activity that no one ever does seems to be stretching the definition.
> 
> This may be a common model people have for reasonableness and naturalness,
> but I'm claiming it's flawed because it does not reasonably explain many
> natural facts about the use of these words.

It may be flawed, but I think it is a reasonable and natural model.

> > > The reason is that the person is
> > > not asking whether it's common or not common, the person is asking whether
> > > remedial action is called for--whether there is cause for alarm.  It is not
> > > statistically likely to have any given birthday, yet it is not "out of the
> > > norm" or "beyond reason" or "unnatural" to have any given birthday.
> > 
> > Again, any given day is the birthday for a large number of people.
> 
> You used the word "most" in your original message. It was the use of this
> word that set me off.  But even so, an appeal to statistics is not relevant.
> 
> > I'm sure you could, but that wasn't my thesis.  I'm arguing that it
> > would be hard to defend something as reasonable and natural if it were
> > in fact quite rare and unusual.
> 
> Nonsense.  Type AB blood is reasonable and natural even if rare and unusual.
> All recessive traits are this way, yet not all are "unreasonable".

There are around 11 million people in the US with AB blood.  Hardly a
rare occurrence.  But can we restrict ourselves to deliberate actions
rather than freaks of nature?

> The question of reasonableness has to do with whether they follow logic
> both in implementation and intent; in effect, whether they have ill
> effects, not whether they are common.  Tail recursion, for example, might
> be argued to be unreasonable if you took a look at how many people use it.
> But it can be shown to be a structurally sound approach to iteration,
> and, I think, therefore "reasonable" regardless of statistics of use.
>
> Rare is not the opposite of reasonable.
> 
> Natural is not the opposite of unusual.

Again, note that I'm not arguing that rare things are unreasonable,
I'm arguing that if you claim some activity is `reasonable and natural',
you ought to be able to trivially find examples of people doing it.




From: Tim Bradshaw
Subject: Re: OT:  reasonable and natural Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <nkj4rwvs326.fsf@tfeb.org>
Joe Marshall <···@content-integrity.com> writes:

> Again, note that I'm not arguing that rare things are unreasonable,
> I'm arguing that if you claim some activity is `reasonable and natural',
> you ought to be able to trivially find examples of people doing it.
> 
> 

I think -- going back to the issue of using things with function
bindings as variable bindings -- the problem is that because there is
quite a large space of things with function bindings (and that space
is much larger in a substantial application written in CL than in
naked CL), actual occurrences of any *particular* case, other than
perhaps a few really common ones, could be quite rare; but that doesn't
mean that *some* case doesn't happen often enough to be a pain.

In my case I definitely have caught myself doing it for some quite
obscure things (SLOT-VALUE was a recent example), and I do it really a
lot for common things, my favourite being

	(let ((read (read ...))) ...)

which reads as `let red read ...'.
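
A self-contained sketch of that pattern (my own hypothetical example,
not from the original post), showing READ serving as both a local
variable and the standard reader function:

```lisp
;; READ as a local variable bound to the result of the standard
;; function READ -- unremarkable in CL, a collision in a Lisp1.
(defun read-and-echo (stream)
  (let ((read (read stream nil nil)))   ; variable READ, function READ
    (when read
      (print read))
    read))
```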

--tim
From: Pierre R. Mai
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <878zm8s39x.fsf@orion.bln.pmsf.de>
Joe Marshall <···@content-integrity.com> writes:

> I often use the symbol LIST for a value, occasionally CONS, but I
> don't suppose many people use the value cell of the symbol
> LOAD-LOGICAL-PATHNAME-TRANSLATIONS.  Since I rarely see examples other
> than LIST used to demonstrate name collision, I wanted some real-world
> examples of other common name collisions.

Well, in CL how surprised would you be if you found yourself unable to
use the following nouns as variable names, just because there's also a
function by that name:

- float, max, min, round, random, complex, conjugate, phase, ash,
  byte,
- string, char
- sequence, fill, map, count, length, reverse, sort, position, search,
  mismatch, substitute
- first, second, third, fourth, ..., tenth, rest, list, cons, atom,
  car, null, last, member, intersection, union
- vector, bit
- read, load

This is just a small sampling of 4-5 CLHS section dictionaries.
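
To make that concrete, a hedged sketch (my example, not Pierre's) in
which a few of these nouns serve as ordinary local variables while the
standard functions of the same names stay callable:

```lisp
;; LENGTH, POSITION, and COUNT are simultaneously local variables and
;; the standard functions that compute their values -- no conflict,
;; because CL keeps function and value namespaces separate.
(defun occurrence-stats (sequence string)
  (let ((length (length sequence))
        (position (position string sequence :test #'string=))
        (count (count string sequence :test #'string=)))
    (list length position count)))
```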

Regs, Pierre.

-- 
Pierre R. Mai <····@acm.org>                    http://www.pmsf.de/pmai/
 The most likely way for the world to be destroyed, most experts agree,
 is by accident. That's where we come in; we're computer professionals.
 We cause accidents.                           -- Nathaniel Borenstein
From: Janis Dzerins
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <878zm8474r.fsf@asaka.latnet.lv>
Joe Marshall <···@content-integrity.com> writes:

> I often use the symbol LIST for a value, occasionally CONS, but I
> don't suppose many people use the value cell of the symbol
> LOAD-LOGICAL-PATHNAME-TRANSLATIONS.  Since I rarely see examples other
> than LIST used to demonstrate name collision, I wanted some real-world
> examples of other common name collisions.

(defun frob-list (list)
  (loop for rest on list
     do ...))

Can you imagine this in _your_ real-world?

-- 
Janis Dzerins

  If a million people say a stupid thing, it's still a stupid thing.
From: Nicolas Neuss
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <wsn1annzw7.fsf@ortler.iwr.uni-heidelberg.de>
Joe Marshall <···@content-integrity.com> writes:

> I often use the symbol LIST for a value, occasionally CONS, but I
> don't suppose many people use the value cell of the symbol
> LOAD-LOGICAL-PATHNAME-TRANSLATIONS.  Since I rarely see examples other
> than LIST used to demonstrate name collision, I wanted some real-world
> examples of other common name collisions.

Some name collisions occurred for me when using Goops (CLOS for Guile).
I often wanted local variables that had the same name as slot
accessors, something like

(let ((blocks (blocks object)))
	....

thus shadowing the blocks method.  As far as I've seen, this works in
CL.  (In Scheme, my remedy was to rename the accessors to "my-blocks".
I don't know if there is a better way.)
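
For reference, a minimal CL sketch of the situation described here
(the class and accessor names are hypothetical):

```lisp
;; The accessor BLOCKS and a local variable BLOCKS coexist: the LET
;; binding lives in the value namespace and shadows nothing in the
;; function namespace, so the accessor remains callable.
(defclass grid ()
  ((blocks :initarg :blocks :accessor blocks)))

(defun block-count (object)
  (let ((blocks (blocks object)))   ; call the accessor, bind the variable
    (length blocks)))
```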

Yours, Nicolas.
From: Espen Vestre
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <w6n1ao3028.fsf@wallace.ws.nextra.no>
Joe Marshall <···@content-integrity.com> writes:

> This is, of course, the canonical example, but do you often (as in
> constantly) have this name collision problem for any symbol other than
> LIST ?

I can imagine that the (many!) car manufacturers that use Common Lisp
would have had ;-)
-- 
  (espen)
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <vgpc73wz.fsf@content-integrity.com>
Espen Vestre <·····@*do-not-spam-me*.vestre.net> writes:

> Joe Marshall <···@content-integrity.com> writes:
> 
> > This is, of course, the canonical example, but do you often (as in
> > constantly) have this name collision problem for any symbol other than
> > LIST ?
> 
> I can imagine that the (many!) car manufacturers that use Common Lisp
> would have had ;-)

Ah yes, maybe C would be better....



From: Marius Vollmer
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <87hf0w3pr5.fsf@zagadka.ping.de>
Joe Marshall <···@content-integrity.com> writes:

> Espen Vestre <·····@*do-not-spam-me*.vestre.net> writes:
> 
> > Joe Marshall <···@content-integrity.com> writes:
> > 
> > > This is, of course, the canonical example, but do you often (as in
> > > constantly) have this name collision problem for any symbol other than
> > > LIST ?
> > 
> > I can imagine that the (many!) car manufacturers that use Common Lisp
> > would have had ;-)
> 
> Ah yes, maybe C would be better....

You mean, like, "struct car auto"?
From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfw66hc8fxa.fsf@world.std.com>
Joe Marshall <···@content-integrity.com> writes:

> Kent M Pitman <······@world.std.com> writes:
> 
> > Bruce Hoult <·····@hoult.org> writes:
> > 
> > > In article <···············@world.std.com>, Kent M Pitman 
> > > <······@world.std.com> wrote:
> > 
> > > > ... the decision to use only one namespace is expressionally
> > > > limiting.  I simply would not want to use only one namespace for 
> > > > expressional reasons.
> > > 
> > > In what way is it limiting?  Can you give examples where you would 
> > > habitually pun functions and values onto the same name?
> > 
> > (defun foo (list)
> >   (loop for x in list
> >         collect (list (first x) (third x))))
> > 
> > I do this all the time.  And after 20 years of practice, it continues to
> > drive me nuts in Scheme when it blows up "needlessly".  In Scheme, you
> > learn to misspell your parameters ("lst") to reduce the risk of this.
> 
> This is, of course, the canonical example, but do you often (as in
> constantly) have this name collision problem for any symbol other than
> LIST ?

One class of symbols that this style runs up against a lot is the set
of type names that have associated constructors by the same name.
That's why LIST.  If the function LIST were called MAKE-LIST
and the function STRING were called TO-STRING, this would happen less.
But the language would look uglier even when the name collision issue
was not coming up.  I very much like the short names for container class
constructors like these and if "fixing" this problem meant breaking those
names (e.g., turning CONS to MAKE-CONS), I wouldn't want that.

Constructors aren't the only case.  Accessors, too, have the issue when
you pass a part of something and want to name the argument by its role-name,
such as box-handle or symbol-name.
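
A small sketch of that accessor case (a hypothetical helper, not from
the post): the variable SYMBOL-NAME names the value by its role while
the standard function SYMBOL-NAME remains available in the same scope:

```lisp
;; The LET-bound SYMBOL-NAME holds the string extracted by the
;; standard function SYMBOL-NAME; the two uses never collide.
(defun name-length (symbol)
  (let ((symbol-name (symbol-name symbol)))
    (length symbol-name)))
```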

Basically I've been trained ever since forever that it's good not to
create extra symbols unless you need them:

 (defun foo (really-cool-space-ship) (frob really-cool-space-ship))

 (defun bar (extra-cool-spaceship) (frob2 extra-cool-spaceship))

is not nearly as good as

 (defun foo (spaceship) (frob  spaceship))
 (defun bar (spaceship) (frob2 spaceship))

because (a) the latter has one fewer symbol used in the image [space
efficiency], (b) the latter doesn't force billions of unwanted apropos
hits in debugging [debuggability], and (c) it is easier to detect
typos and other design errors in the latter because of good parallel
construction [programming hygiene => easier debugging].

A consequence of this (and remember everyone has different style rules,
but I promise you that I use this one a lot) is that I have the following
style rule.  I do not apply this rule all the time, but it contributes
to my choice of variable names where there isn't some supervening issue
of clarity driving another decision.

To avoid unnecessary symbol proliferation in my images, I like to use
existing symbols as variables, whether or not they are already names of
types or functions, and often indirectly BECAUSE they are (since, by
induction, they had to be pre-existing for SOME reason, and it's usually
that).  This means I often aggressively write code like:

  (let ((package (find-package "FOO")))
    (do-external-symbols (symbol package)
       (let ((symbol-package (symbol-package symbol)))
         (unless (eq symbol-package package)
           (setf (symbol-package symbol) package)))))

Ignore the issue of whether symbol-package is portably setf-able; it's
not.  I'm just grabbing the first example that comes to mind.  My
point is that this is a coding style I use and enjoy.  I aggressively
lock down symbols that are pre-existing names, and the point is that in
a lisp2, I just don't have to care that they have other meanings
because I am not trampling on those other meanings.  Without inviting
a gratuitous case of carpal tunnel syndrome, one simply cannot use
this style in general in a Lisp1 because it's inviting trouble.  As
you move inward in a form, you have fewer and fewer functions
available in exactly the set of functions that I bet are
probabilistically most likely to be needed.  The LAST thing I'd want
to have to do would be to rewrite my code if I needed to call one of
those names as a function while in that space, but since such a rewrite
will not have to occur, I don't give the use of the variable a second
thought.

It is extremely common that I name a variable STRING, by the way.  And
that is not a declaration that I wish to give up the use of STRING within
the name.  e.g.,

 (defmethod compare-symbol-designators ((string string) (symbol symbol))
   (equal string (string symbol)))

I don't see an obviously clearer way to write this.  In Lisp1 style, I'd
likely do 

 (defmethod compare-symbol-designators ((str string) (sym symbol))
   (equal str (string sym)))

but I don't find that visually appealing.  If I had to tell someone out
loud over the phone I would not say "Call the string str and call the
symbol sym, then use equal to compare str to string of sym."  I would
instead say:  "Use equal to compare the string to string of the symbol."

Or, since we don't talk a lot about such things as strings, and so you
can test the naturalness: "Make sure the price is less than the price
you paid yesterday."  The hearer of this sentence does not balk and
say "Wait a minute! You want me to apply a dollar amount as a
function? What is this, church numeral pricing?"  They instead
correctly understand that the first use of "price" is an anaphor to a
recently introduced noun and the second use is a function call (a
different and more long-ranging anaphor to a well-known verb), and
they have no difficulty resolving the sentence.

A thing that was noticed when people introduced cdr-coding on the Lisp
Machine was that it was based on a guess that some amount, let's say
90%, of lists could benefit in some way from this encoding and that
some other number, say 10%, would be pessimized.  However, knowledge
of the decision and a belief in its reliable availability allows users
to actually make the facility outperform the original guess of how
good a job it will do because people can code to it.  I'd say the same
is true of Lisp2.  Assuming people want to take advantage of this
overlap (and I do), they will aggressively move to do so.  This
shouldn't be thought of as some conspiracy to make Lisp2 look more
important than it is, any more than it's a conspiracy to make Lisp1
look more critical when people pun in Lisp1 systems in the
corresponding way.  (I sometimes hear people mistakenly assume that only
the doubled-namespace allows an opportunity for punning and that Lisp1
is clearer for avoiding that, but they don't realize that folding the two
namespaces in on themselves creates other opportunities for puns elsewhere that
are just as vile, and Lisp1 is not the morally high-minded pun-free zone
it is sometimes touted to be.  I've seen at least one such pun put-down
in this conversation, I believe, but many others in my time.)
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <ae6oxh3h.fsf@content-integrity.com>
Kent M Pitman <······@world.std.com> writes:

> (I sometimes hear people mistakenly assume that only
> the doubled-namespace allows an opportunity for punning and that Lisp1
> is clearer for avoiding that, but they don't realize that folding the two
> namespaces in on themselves creates other opportunities for puns elsewhere that
> are just as vile, and Lisp1 is not the morally high-minded pun-free zone
> it is sometimes touted to be.  I've seen at least one such pun put-down
> in this conversation, I believe, but many others in my time.)

Apparently they also don't realize that punning can be an important
abstraction tool.


From: Tim Bradshaw
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <ey3r90bnfnp.fsf@cley.com>
* Bruce Hoult wrote:

> 2) if the ability to move type checks from the point of use to the point 
> of definition is in fact so important then why do it only for *function* 
> values?  Why not do it for integers, floats, chars, strings, arrays, 
> lists?  Perhaps each symbol should have a slot for a possible integer 
> binding, a slot for a possible float binding, a slot for a possible char 
> binding, a slot for a possible string binding, a slot for a possible 
> array binding, and a slot for a possible pair binding?

I think the fact that the language considers function call so
important that it has a special syntax to support it might be a clue
here.  It looks like you need one.

--tim
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-F6F706.13493907032001@news.nzl.ihugultra.co.nz>
In article <···············@cley.com>, Tim Bradshaw <···@cley.com> 
wrote:

> I think the fact that the language considers function call so
> important that it has a special syntax to support it might be a clue
> here.  It looks like you need one.

I can't think offhand of any language that *doesn't* have special syntax 
to support function calls, so that's hardly a distinguishing feature of 
Common Lisp.

-- Bruce
From: Michael Parker
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <3C024FF307AAA78B.8B92BABA9A445130.E9826819F0598659@lp.airnews.net>
Bruce Hoult wrote:
> 
> In article <···············@cley.com>, Tim Bradshaw <···@cley.com>
> wrote:
> 
> > I think the fact that the language considers function call so
> > important that it has a special syntax to support it might be a clue
> > here.  It looks like you need one.
> 
> I can't think offhand of any language that *doesn't* have special syntax
> to support function calls, so that's hardly a distinguishing feature of
> Common Lisp.

Forth.
From: Duane Rettig
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <48zmif0za.fsf@beta.franz.com>
Michael Parker <·······@pdq.net> writes:

> Bruce Hoult wrote:
> > 
> > In article <···············@cley.com>, Tim Bradshaw <···@cley.com>
> > wrote:
> > 
> > > I think the fact that the language considers function call so
> > > important that it has a special syntax to support it might be a clue
> > > here.  It looks like you need one.
> > 
> > I can't think offhand of any language that *doesn't* have special syntax
> > to support function calls, so that's hardly a distinguishing feature of
> > Common Lisp.
> 
> Forth.

Actually, in Forth _every_ operation is a function call (or, more
properly, a word execution).  Even a variable reference is defined
by the <builds does> construct as the execution of a "variable" class
of word which places its address on the stack.  Forth goes to the
opposite extreme; instead of everything being data, everything is
code...

But, then again, I suppose you could argue that this verifies your
argument that a function call is not a special thing in Forth, since
it is the _only_ thing.

-- 
Duane Rettig          Franz Inc.            http://www.franz.com/ (www)
1995 University Ave Suite 275  Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253   ·····@Franz.COM (internet)
From: ········@hex.net
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <%7qp6.26682$lj4.567350@news6.giganews.com>
Duane Rettig <·····@franz.com> writes:
> Michael Parker <·······@pdq.net> writes:
> > Forth.
> 
> Actually, in Forth _every_ operation is a function call (or, more
> properly, a word execution).  Even a variable reference is defined
> by the <builds does> construct as the execution of a "variable" class
> of word which places its address on the stack.  Forth goes to the
> opposite extreme; instead of everything being data, everything is
> code...
> 
> But, then again, I suppose you could argue that this verifies your
> argument that a function call is not a special thing in Forth, since
> it is the _only_ thing.

Close, but possibly not _quite_ there; when Forth deals with comments,
there winds up being something _very_ vaguely like a CL *readtable*
involved.  

And you can connect in your own parser if you need to; the default
behaviour of "everything is a space-separated WORD" is enforced by
WORD, but if you write words that parse input otherwise, then WORD
doesn't get involved, and Stranger Things Can Happen...

But to be sure, just about everything in Forth is a WORD, or "function
call," so that is pretty nearly the only thing...
-- 
(reverse (concatenate 'string ····················@" "454aa"))
http://www.ntlug.org/~cbbrowne/sap.html
"I once went  to a shrink.  He  told me to speak freely.   I did.  The
damn fool tried to charge me $90 an hour."
-- ·····@qis.net (Jim Moore Jr)
From: Michael Parker
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <42D8F6F913DAF917.B0C9CC3898C668CD.A73114510A8D2B03@lp.airnews.net>
········@hex.net wrote:
> 
> Duane Rettig <·····@franz.com> writes:
> > Michael Parker <·······@pdq.net> writes:
> > > Forth.
> >
> > Actually, in Forth _every_ operation is a function call (or, more
> > properly, a word execution).  Even a variable reference is defined
> > by the <builds does> construct as the execution of a "variable" class
> > of word which places its address on the stack.  Forth goes to the
> > opposite extreme; instead of everything being data, everything is
> > code...
> >
> > But, then again, I suppose you could argue that this verifies your
> > argument that a function call is not a special thing in Forth, since
> > it is the _only_ thing.
> 
> Close, but possibly not _quite_ there; when Forth deals with comments,
> there winds up being something _very_ vaguely like a CL *readtable*
> involved.

Not that I've ever seen.  The comment word just scans to its terminator
char (usually either eol or ')' depending on the comment word) and
returns to the outer interpreter.  In a block-based system the \ word
doesn't even scan for a terminator, it just skips x chars.

> And you can connect in your own parser if you need to; the default
> behaviour of "everything is a space-separated WORD" is enforced by
> WORD, but if you write words that parse input otherwise, then WORD
> doesn't get involved, and Stranger Things Can Happen...
> 
> But to be sure, just about everything in Forth is a WORD, or "function
> call," so that is pretty nearly the only thing...

But a word *is* a function call, whether it is executed immediately or
whether it is compiled into the code stream.  The only things that are
non-functions in most forths I've used/written are numbers except for
possibly a handful of small numbers implemented as CONSTANTs.

...Definitely getting off topic here, although the original topic isn't
too hot either...
From: Tim Bradshaw
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <ey3ae6ynwna.fsf@cley.com>
* Bruce Hoult wrote:
> In article <···············@cley.com>, Tim Bradshaw <···@cley.com> 
> wrote:

> I can't think offhand of any language that *doesn't* have special syntax 
> to support function calls, so that's hardly a distinguishing feature of 
> Common Lisp.

It is, however, a good indication that treating functions as a special
case is a useful and practical thing to do (in *all* languages), while
inventing special cases for every possible type might be less useful.

--tim
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-85A1A1.00265108032001@news.nzl.ihugultra.co.nz>
In article <···············@cley.com>, Tim Bradshaw <···@cley.com> 
wrote:

> * Bruce Hoult wrote:
> > In article <···············@cley.com>, Tim Bradshaw <···@cley.com> 
> > wrote:
> 
> > I can't think offhand of any language that *doesn't* have special 
> > syntax 
> > to support function calls, so that's hardly a distinguishing feature of 
> > Common Lisp.
> 
> It is, however, a good indication that treating functions as a special
> case is a useful and practical thing to do (in *all* languages), while
> inventing special cases for every possible type might be less useful.

A special *syntactic* case, certainly.  Function call is commonly 
signalled using (), while array indexing is signalled using [], pointer 
dereference by * or ^ or ->, and so forth.  This doesn't imply anything 
about the implementation details.

-- Bruce
From: Tim Bradshaw
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <ey366hlohgr.fsf@cley.com>
* Bruce Hoult wrote:

> A special *syntactic* case, certainly.  Function call is commonly 
> signalled using (), while array indexing is signalled using [], pointer 
> dereference by * or ^ or ->, and so forth.  

Note that Lisp (and Scheme) do *not* have special syntactic cases for
the things other than function call above.  Function call is
considered that special.

> This doesn't imply anything about the implementation details.

OK, I think I'll give up now.

--tim
From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfwlmqh630w.fsf@world.std.com>
Erik Naggum <····@naggum.net> writes:

>   In Fortran, foo(1) is either an array reference or a function call.

In Maclisp, you could declare and use an array this way, too.
I agree this was sometimes useful.
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-3D4ED3.00493309032001@news.nzl.ihugultra.co.nz>
In article <················@naggum.net>, Erik Naggum <····@naggum.net> 
wrote:

> * Bruce Hoult <·····@hoult.org>
> > A special *syntactic* case, certainly.  Function call is commonly 
> > signalled using (), while array indexing is signalled using [], pointer 
> > dereference by * or ^ or ->, and so forth.  This doesn't imply anything 
> > about the implementation details.
> 
>   In Fortran, foo(1) is either an array reference or a function call.

Yes, FORTRAN IV (on a B1700) having been my first programming language, 
I am aware of that.


>   I had expected you to see some of the several interesting implications
>   from this when I tried to show how aref and funcall are related and how
>   it would work to create a new namespace for array references, but it
>   looks like you were simply unaware of the Fortran way and that anyone
>   could actually have done this.

It's trivially obvious that the compiler has two choices -- to either 
realise that the reference is to an array and generate inline code to 
calculate the offset and fetch the data, or else to bind the name of the 
array to a closure containing standard code and the array bounds and 
just treat the things like a function.  And so...?


>   Perhaps you would like to read that
>   article again and respond to the point I made, instead of to each
>   paragraph.  Please note: It's a "for sake of argument"-style idea, not
>   something I would actually want to change in Common Lisp.

But what was your point?

You seem to like to point out a series of trivially obvious things and 
make a huge leap to a conclusion not supported in any obvious way by 
those things.

-- Bruce
From: Hallvard B Furuseth
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <HBF.20010307pw32@bombur.uio.no>
Erik Naggum <····@naggum.net> writes:
>   Scheme works very, very hard not to distinguish a function call from
>   any other variable reference.  And vice versa.  At least give them credit
>   for having achieved that, even though it is a fundamentally silly thing
>   to want to do.

It would be silly in CL, but it seems curiously *right* for Scheme.
Scheme is fond of exposing the programmer to Neat Ideas, and "Code Is
Data" is the Neatest Idea I ever saw in programming.  If Scheme didn't
reflect _that_, what would be the point of the language?

-- 
Hallvard
From: Hallvard B Furuseth
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <HBF.20010307ou8l@bombur.uio.no>
Erik Naggum <····@naggum.net> writes:
>* Hallvard B Furuseth <············@usit.uio.no>
>> It would be silly in CL, but it seems curiously *right* for Scheme.
>> Scheme is fond of exposing the programmer to Neat Ideas, and "Code Is
>> Data" is the Neatest Idea I ever saw in programming.  If Scheme didn't
>> reflect _that_, what would be the point of the language?
> 
>   That's the weirdest use of the idea "code is data" I have ever seen.

Well, it's "use" as in illustration more than application.

> I don't think the ability to read source code as data should be confused
> with the values of variables.

Why "source"?

> For instance, you cannot really work with _compiled_ code in any other
> way than to call it.

Sure, but there are other entities you can do very few operations on too
- like variables.  Usually a few more operations, but so what?  Remember
Hoult's argument that the only numbers that make sense as "how many?"
are "zero", "one", and "unlimited".  For that matter, _compiled_ code is
just a special case of _code_ which we can do more with.

It just seems to me to fit well with Scheme and the pro-Scheme mindset
I've seen in the Scheme vs CL wars.

-- 
Hallvard
From: Hallvard B Furuseth
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <HBF.20010307okao@bombur.uio.no>
Erik Naggum <····@naggum.net> writes:
> * Hallvard B Furuseth <············@usit.uio.no>
>> Why "source"?
> 
>   Because that is the traditional understanding of "code is data".  It is
>   _not_ referring to bytes of machine memory that can be regarded as data
>   and also be executable code.

Duh.  Of course.  My problem is, "code is data" reached _me_ at a time
when I lived with a Basic & Machine Code (no, not assembly) dingbat and
no printer, so there was little difference:-)

>> Remember Hoult's argument that the only numbers that make sense as "how
>> many?" are "zero", "one", and "unlimited".
> 
>   FWIW, I think that "argument" makes absolutely zero sense.

Think "theory is more important than practice" - in short, Scheme.
"Zero" and "one" give nice theoretical solutions to various things,
"two" makes the theory more complicated to analyze even though it can
usually be emulated with a (less practical) "one" solution.

-- 
Hallvard
From: Craig Brozefsky
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <878zmhgp5b.fsf@piracy.red-bean.com>
Hallvard B Furuseth <············@usit.uio.no> writes:

> Think "theory is more important than practice"

Yes, in theory it is easier for me to write programs in Scheme than
CL.

Yes, in theory I am an American citizen with full democratic rights.

Practice is how I feed myself and my family, so you can understand
that as someone whose family or government cannot/won't pay for them to
put theory before practice, I subjugate theory to practice and
evaluate its worth based on practical results.

-- 
Craig Brozefsky                             <·····@red-bean.com>
In the rich man's house there is nowhere to spit but in his face
					             -- Diogenes
From: Alain Picard
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <8666hky4vc.fsf@localhost.apana.org.au>
>>>>> Hallvard B Furuseth writes:

Hallvard> Think "theory is more important than practice" - in short, Scheme.

That's an astounding statement.


But this may brighten your day.  Physicists have a saying:

"In theory, theory and practice are the same.
 In practice, theory and practice are different."  




-- 
It would be difficult to construe        Larry Wall, in  article
this as a feature.			 <·····················@netlabs.com>
From: Tim Bradshaw
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <ey3ofvcm39h.fsf@cley.com>
* Hallvard B Furuseth wrote:

> Think "theory is more important than practice" - in short, Scheme.
> "Zero" and "one" give nice theoretical solutions to various things,
> "two" makes the theory more complicated to analyze even though it can
> usually be emulated with a (less practical) "one" solution.

I thought the ancient Greeks were the last people who took this
attitude to science seriously?

--tim
From: David Bakhash
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <m3itl3i555.fsf@alum.mit.edu>
Hallvard B Furuseth <············@usit.uio.no> writes:

> Think "theory is more important than practice" - in short, Scheme.
> "Zero" and "one" give nice theoretical solutions to various things,
> "two" makes the theory more complicated to analyze even though it
> can usually be emulated with a (less practical) "one" solution.

I was so hoping that this nonsense would end.  The zero/one business
was and is completely bogus.  The whole base 2 (binary) system is one
of the fundamental backbones of computer science.  Things being either 
A or B, on or off, true or false, FUNCTION or VARIABLE is essential.
The number 2 is *very* special too.

And no, I don't think that it's "zero", "one", or "two"; I just want
to point out again that this argument is the dumbest thing I've
ever heard as a defense for Scheme's single namespace.

dave
From: Hallvard B Furuseth
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <HBF.20010326oj5t@bombur.uio.no>
David Bakhash <·····@alum.mit.edu> writes:
>Hallvard B Furuseth <············@usit.uio.no> writes:
>> Think "theory is more important than practice" - in short, Scheme.
>> (...)
> 
> I was so hoping that this nonsense would end.

What did I do now?  That was more than two weeks ago.
Anyway, kill files are nice things.  Personally I'm glad I caught
this thread's latest twist to call/cc, though.

> the zero/one business was and is completely bogus.  (...)
> And no, I don't think that it's "zero", "one", or "two"; I just want
> to point out again that this argument is the dumbest thing I've
> ever heard as a defense for Scheme's single namespace.

I wasn't defending it, I just said I thought it fit nicely with the
pro-Scheme arguments in that language war.  Perhaps I should have
mentioned that it looks like a tie to DFA machines with 0 tapes, 1, or
2+, but it wasn't supposed to be a long enough post (or thread) to need
that many words.

-- 
Hallvard
From: David Bakhash
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <m3elvletmq.fsf@alum.mit.edu>
>>>>> "hbf" == Hallvard B Furuseth <Hallvard> writes:

 hbf> David Bakhash <·····@alum.mit.edu> writes:
 >> Hallvard B Furuseth <············@usit.uio.no> writes:
 >>> Think "theory is more important than practice" - in short,
 >>> Scheme.  (...)
 >>  I was so hoping that this nonsense would end.

 hbf> What did I do now?  That was more than two weeks ago.  Anyway,
 hbf> kill files are nice things.  Personally I'm glad I caught this
 hbf> thread's latest twist to call/cc, though.

I didn't mean this against you, of course.  I just wanted to see this
logic disappearing from the group, since it's completely unfounded in
my opinion.  If ever a good thing were not to happen because of
arguments like these, computers would be very unfriendly, in my
opinion.  So I do my best to come down hard on them.  I would have
pretty much done the same if someone made the argument that two
namespaces was ideal, and that the number 2 had special meaning in
computer science, e.g. base 2, and hence Lisp2 semantics was
superior.

In fact, I'm happy that this kind of logic was used in favor of
Scheme.  It would be embarrassing if CL had Lisp nuts running around
making arguments like these.  I think the points raised by the Lisp
guys were more thought-provoking, more consistent, and were aimed
toward a better, more flexible language, while the Scheme arguments
seemed minimalist to me.

dave
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-A06C91.16202507032001@news.nzl.ihugultra.co.nz>
In article <················@naggum.net>, Erik Naggum <····@naggum.net> 
wrote:

> * Bruce Hoult <·····@hoult.org>
> > I can't think offhand of any language that *doesn't* have special 
> > syntax 
> > to support function calls, so that's hardly a distinguishing feature of 
> > Common Lisp.
> 
>   Scheme works very, very hard not to distinguish a function call from
>   any other variable reference.  And vice versa.  At least give them
>   credit for having achieved that, even though it is a fundamentally
>   silly thing to want to do.

How is that?  If you see something at the start of a non-quoted list 
then you know it must be a reference to a function (or possibly, an 
error).

That's just as special as, say, putting the reference to the function 
outside (in front of) the argument list.

-- Bruce
From: Rob Warnock
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <984bmp$1ag21$1@fido.engr.sgi.com>
Bruce Hoult  <·····@hoult.org> wrote:
+---------------
| Erik Naggum <····@naggum.net> wrote:
| > Scheme works very, very hard not to distinguish a function call from
| > any other variable reference.  And vice versa.  At least give them 
| > credit for having achieved that, even though it is a fundamentally
| > silly thing to want to do.
| 
| How is that? If you see something at the start of a non-quoted list 
| then you know it must be a reference to a function (or possibly, an error).
+---------------

I think what Erik might be referring to is that Scheme insists that the
evaluator use *THE EXACT SAME* evaluation rules on the function position
as on the argument positions. That is, the evaluator basically does this:

	(let ((evaled-args (mapcar #'eval exp)))
	  (apply (car evaled-args) (cdr evaled-args)))

[Except the "mapcar" is *not* required to execute left-to-right or
right-to-left or any other fixed order -- only *some* serializable order.]

That lets Scheme get away with writing stuff like this, where the function
position can be an arbitrary expression:

	> (define x 13)
	> ((if (odd? x) + *) 2 3)
	5
	> 

instead of as in CL:

	> (defvar x 13)
	X
	> (funcall (if (oddp x) #'+ #'*) 2 3)
	5
	> 

[In CL, of course, the Scheme style is an error: ]

	> ((if (oddp x) #'+ #'*) 2 3)

	*** - EVAL: (IF (ODDP X) #'+ #'*) is not a function name
	1. Break> 

Now do Scheme programmers ever *use* that generality? Actually, very
seldom. I've used it maybe a couple of times, total, in several years of
Scheme hacking. I probably wouldn't even miss it much if it were gone.
(You'd still have "apply", and you can trivially define "funcall" in
terms of "apply".)
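That trivial definition is one line of Scheme (a sketch; the name
FUNCALL is borrowed from CL, it is not standard Scheme):

	(define (funcall f . args)
	  (apply f args))

	> (funcall + 2 3)
	5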


-Rob

-----
Rob Warnock, 31-2-510		····@sgi.com
SGI Network Engineering		<URL:http://reality.sgi.com/rpw3/>
1600 Amphitheatre Pkwy.		Phone: 650-933-1673
Mountain View, CA  94043	PP-ASEL-IA
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <d7bta24t.fsf@content-integrity.com>
····@rigden.engr.sgi.com (Rob Warnock) writes:

> Now do Scheme programmers ever *use* that [the ability to invoke
> computed functions without FUNCALL] generality? 

It would depend on the programmer and the program, but I have seen it
used to good effect in several situations.  For instance, suppose you
were doing some classical mechanics.  You would be working a fair
amount with derivatives of functions.  Suppose you had a functions
that compute derivatives and partial derivatives.  You might wish to
evaluate the a partial derivative of a function at a particular
point.  You would write:  (((partial 2) F) state) rather than 
(funcall (funcall (partial 2) f) state)
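For illustration only, a curried PARTIAL in that style might be
sketched with finite differences (hypothetical code, not the actual
mechanics system being described; assumes R7RS VECTOR-COPY):

	(define (partial i)
	  (lambda (f)
	    (lambda (state)            ; state is a vector of coordinates
	      (let ((h 1e-6)
	            (shifted (vector-copy state)))
	        (vector-set! shifted i (+ (vector-ref shifted i) h))
	        (/ (- (f shifted) (f state)) h)))))

	;; d/dx0 of x0^2 at x0 = 3.0 is approximately 6.0, and the
	;; operator position is itself a computed expression:
	(((partial 0) (lambda (s) (* (vector-ref s 0) (vector-ref s 0))))
	 (vector 3.0))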

> Actually, very seldom. I've used it maybe a couple of times, total,
> in several years of Scheme hacking. I probably wouldn't even miss it
> much if it were gone.  (You'd still have "apply", and you can
> trivially define "funcall" in terms of "apply".)

Not being able to do this would not be a `showstopper' by any means.


From: Hartmann Schaffer
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <slrn9adj7g.htt.hs@paradise.nirvananet>
In article <··············@fido.engr.sgi.com>, Rob Warnock wrote:
> ...
>That lets Scheme get away with writing stuff like this, where the function
>position can be an arbitrary expression:
>
>	> (define x 13)
>	> ((if (odd? x) + *) 2 3)
>	5
>	> 
>
>instead of as in CL:
>
>	> (defvar x 13)
>	X
>	> (funcall (if (oddp x) #'+ #'*) 2 3)
>	5
>	> 
>
>[In CL, of course, the Scheme style is an error: ]
>
>	> ((if (oddp x) #'+ #'*) 2 3)
>
>	*** - EVAL: (IF (ODDP X) #'+ #'*) is not a function name
>	1. Break> 
>
>Now do Scheme programmers ever *use* that generality? Actually, very
>seldom. I've used it maybe a couple of times, total, in several years of
>Scheme hacking. I probably wouldn't even miss it much if it were gone.
>(You'd still have "apply", and you can trivially define "funcall" in
>terms of "apply".)

isn't that feature of scheme used in some object systems?  something like

((class-method 'window 'move) mywin z y)  ?
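That style comes from representing objects as closures that dispatch
on a message symbol; a minimal sketch (hypothetical names):

	(define (make-counter)
	  (let ((n 0))
	    (lambda (msg)
	      (cond ((eq? msg 'get)  (lambda () n))
	            ((eq? msg 'add!) (lambda (k) (set! n (+ n k)) n))
	            (else (error "unknown message" msg))))))

	> (define c (make-counter))
	> ((c 'add!) 5)
	5
	> ((c 'get))
	5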

hs
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-CAC1FE.01460008032001@news.nzl.ihugultra.co.nz>
In article <··············@fido.engr.sgi.com>, ····@rigden.engr.sgi.com 
(Rob Warnock) wrote:

> Bruce Hoult  <·····@hoult.org> wrote:
> +---------------
> | Erik Naggum <····@naggum.net> wrote:
> | > Scheme works very, very hard not to distinguish a function call 
> | > from
> | > any other variable reference.  And vice versa.  At least give them 
> | > credit for having achieved that, even though it is a fundamentally
> | > silly thing to want to do.
> | 
> | How is that? If you see something at the start of a non-quoted list 
> | then you know it must be a reference to a function (or possibly, an 
> | error).
> +---------------
> 
> I think what Erik might be referring to is that Scheme insists that the
> evaluator use *THE EXACT SAME* evaluation rules on the function position
> as on the argument positions. That is, the evaluator basically does this:
> 
> 	(let ((evaled-args (mapcar #'eval exp)))
> 	  (apply (car evaled-args) (cdr evaled-args)))
> 
> [Except the "mapcar" is *not* required to execute left-to-right or
> right-to-left or any other fixed order -- only *some* serializable 
> order.]

Plus, of course, the compiler can optimize the hell out of it :-)


> That lets Scheme get away with writing stuff like this, where the 
> function position can be an arbitrary expression:
> 
> 	> (define x 13)
> 	> ((if (odd? x) + *) 2 3)
> 	5

You can do the same in Dylan:

-----------------------------------------------------
module: funcall

define function test(x :: <integer>)
  if (odd?(x)) \+ else \* end (2, 3)
end
-----------------------------------------------------
descriptor_t * funcallZfuncallZtest_FUN(descriptor_t *orig_sp, long A_x)
{
    descriptor_t *cluster_0_top;
    heapptr_t L_function; /* function */
    descriptor_t L_temp;
    descriptor_t L_temp_2;

    if (((A_x & 1) == 0)) {
        L_function = &dylanZdylan_visceraZV_HEAP;
    }
    else {
        L_function = &dylanZdylan_visceraZPLUS_HEAP;
    }

    L_temp.heapptr = funcallZliteral.heapptr;
    L_temp.dataword.l = 2;
    L_temp_2.heapptr = funcallZliteral.heapptr;
    L_temp_2.dataword.l = 3;
    orig_sp[0] = L_temp;
    orig_sp[1] = L_temp_2;
    cluster_0_top = GENERAL_ENTRY(L_function)(orig_sp + 2, L_function, 2);
    return cluster_0_top;
}
-----------------------------------------------------

(if you put "let x=13" inside the function then it would simply return 5)



> Now do Scheme programmers ever *use* that generality? Actually, very
> seldom. I've used it maybe a couple of times, total, in several years of
> Scheme hacking. I probably wouldn't even miss it much if it were gone.
> (You'd still have "apply", and you can trivially define "funcall" in
> terms of "apply".)

I don't think you'd often use an "if" in that position, but grabbing a 
value out of an array, or calling a function that returns a function are 
both pretty common, I suspect.

In certain versions of OO implemented in Scheme, styles such as...

  ((myObj 'set-foo) newVal)

... are natural.

-- Bruce
From: David Bakhash
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <m3y9ucl1ro.fsf@alum.mit.edu>
Bruce Hoult <·····@hoult.org> writes:

> In certain versions of OO implemented in Scheme, styles such as...
> 
>   ((myObj 'set-foo) newVal)
> 
> ... are natural.

Yeah, and to CL programmers, SETF is natural:

(setf (slot-value my-obj 'foo) new-val)

The mere idea of `set'-ers and `get'-ers to me is nasty after you've
used SETF, and know that it's all taken care of at compile-time.
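For comparison, SETF can be told about an existing getter/setter pair
and then hides it at macroexpansion time.  A sketch, where FOO and
SET-FOO are hypothetical names invented for illustration:

	(defun foo (obj) (getf (cdr obj) :foo))
	(defun set-foo (obj new-val)
	  (setf (getf (cdr obj) :foo) new-val))   ; returns NEW-VAL
	(defsetf foo set-foo)

	;; (setf (foo obj) 42) now macroexpands into (set-foo obj 42):
	(let ((obj (list 'thing :foo 1)))
	  (setf (foo obj) 42)
	  (foo obj))                              ; => 42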

In my opinion, this is an example of the sloppiness that is common
among Scheme programmers.  That may be "natural", but so is death.  It 
still sucks.

dave
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-0647F7.23324812032001@news.nzl.ihugultra.co.nz>
In article <··············@alum.mit.edu>, David Bakhash 
<·····@alum.mit.edu> wrote:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > In certain versions of OO implemented in Scheme, styles such as...
> > 
> >   ((myObj 'set-foo) newVal)
> > 
> > ... are natural.
> 
> Yeah, and to CL programmers, SETF is natural:
> 
> (setf (slot-value my-obj 'foo) new-val)
> 
> The mere idea of `set'-ers and `get'-ers to me is nasty after you've
> used SETF, and know that it's all taken care of at compile-time.

Dylan is closer to CL in this regard.  You'd write the above as:

   my-obj.foo := new-val

... which is defined to mean the same as ...

   foo(my-obj) := new-val

... or ...

   foo-setter(new-val, my-obj)


The assignment operator ":=" in Dylan is pretty similar to SETF in CL.  
It will assign symmetrically to lexical variables, globals, slots in 
objects, array elements, or generalised setter functions.

And of course it's taken care of at compile time too.


> In my opinion, this is an example of the sloppiness that is common
> among Scheme programmers.  That may be "natural", but so is death.  It 
> still sucks.

Actually, the Scheme code can be taken care of at compile time, too.  
The "Stalin" compiler keeps track of which closure-generating expression 
is used to create things such as "myObj" above, and it can therefore 
very often evaluate "(myObj 'set-foo)" and select the correct lambda 
expression to call at compile time.

-- Bruce
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-51FCC7.23402507032001@news.nzl.ihugultra.co.nz>
In article <················@naggum.net>, Erik Naggum <····@naggum.net> 
wrote:

> * Bruce Hoult <·····@hoult.org>
> > 2) if the ability to move type checks from the point of use
> > to the  point of definition is in fact so important then why
> > do it only for *function* values?  Why not do it for integers,
> > floats, chars, strings, arrays, lists?
> 
>   Because of the distinct way function values are used.

And what is the nature of this "distinct way"?  Do you mean that 
function calling is extremely common?  Or that functions are mostly 
"called" while other values are "used"?  Or something else?  Syntax?


>   Incidentally, I prefer to think of arrays as functions and
>   I'm annoyed by the way I have to refer to slots in arrays in
>   Common Lisp, but Scheme suffers from the same problem, only
>   more so.

I don't have a problem with that.  Arrays are functions over 
discrete-valued arguments.  The mutability is interesting, of course, 
but at any given time it's a function.
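In CL you can recover that function view just by wrapping the array in
a closure (a sketch):

	;; Viewing an array as a function of its index:
	(let* ((a (make-array 5 :initial-contents '(10 20 30 40 50)))
	       (a-as-fn (lambda (i) (aref a i))))
	  (funcall a-as-fn 2))          ; => 30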


> > Perhaps each symbol should have a slot for a possible
> > integer binding, a slot for a possible float binding, a
> > slot for a possible char binding, a slot for a possible
> > string binding, a slot for a possible array binding,
> > and a slot for a possible pair binding?
> 
>   Pardon me for being such a party pooper, but what would you
>   do with these slots?  Do you want integer-let and float-let
>   binding forms

Well, *I* wouldn't like such a language, but yes things such as 
float-let would probably be what you would do.  Or you could go the Perl 
route (which I dislike, but I use it).


>   the above ridiculous suggestion suggests that you fail
>   completely to see the need for cost/benefit analyses of both
>   the Lisp2 approach and your own silly exaggerations.

No, not at all.  I'm trying to explore what the benefits of a 
user-visible function slot in symbols are, and why you'd want that but 
not other user-visible slots for different types of values.

One of the very first things I learned when studying software design a 
couple of decades ago was that the only numbers that make sense as an 
answer to "how many" were "zero", "one", and "unlimited".  Now of course 
that's not an iron-clad rule, but I think you've got to make a pretty 
good case before you break it.


> > At some point doesn't it just become easier to break down and
> > use type declarations and symbols that can be bound to only
> > one value at any given time?
> 
>   You seem not to grasp the point that has been made several
>   times: that the functional value is used very, very differently
>   from all other types of values.  So differently, in fact, that
>   separating it from the rest has a relatively low cost and
>   relatively many benefits.  As long as you see this in terms of
>   type declarations and general types, you will not see any of
>   the benefits that come from separating only the functional
>   value from the rest of the types.

I can see reasons to separate it in the implementation -- basically as
some sort of a cached pointer to stuff known to be code -- but I can't 
see any good reason to expose this to the programmer.


>   But suppose we let arrays be like functions.  An array reference
>   is then like a function call with the array indices as arguments.
>   In a Lisp1 without mandatory and explicit type declarations or
>   sufficient type inference, whatever actually does the function
>   calls would have to make a decision whether to do the array
>   reference or make a function call, for every function call and
>   array reference.

Why wouldn't you make the array reference *be* a function call that has 
the address of the data as a closure value and grabs the index values 
and does the right thing?
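
That idea can be sketched directly (in Python here, with hypothetical
names): the "array" the caller sees is just a closure over the
underlying storage, so referencing it *is* a function call.

```python
# Sketch of an array reference *being* a function call: the closure
# captures the underlying storage, and "calling the array" grabs the
# index and does the lookup.  Names are hypothetical.
def make_array_fn(data):
    def ref(i):
        return data[i]
    return ref

squares = make_array_fn([0, 1, 4, 9, 16])
print(squares(3))  # → 9
```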


> > Or is the benefit from not having to type check function calls *so* much 
> > greater than the benefit from not having to type check integer addition 
> > or CAR/CDR that two namespaces (and not type declarations) is the 
> > optimum answer?  I wouldn't have thought so.
> 
>   The Lisp1 mindset almost forces you to think this is about types.
>   The fact that the first position in a function call form must have
>   the functional type is a consequence of the design decision, not
>   the design decision itself.

The design decision itself being?


>   This is pretty easy to see if you shed the Lisp1 mindset and
>   really understand that type information is an optimization
>   that is _optional_.

If it's _optional_, then why force the Lisp2 user to butt his head 
against the distinction between function and non-function values, when 
you could instead do that entirely within the implementation, as an 
optimization?


>   If you didn't optimize the way the functional value
>   is typed, and did a type check at every function call

Those are not the only options.

A Lisp1 is free to provide a "code" slot along with the necessary data 
slot within objects, and if the object is used in a function position 
then it can blindly and safely jump to that code, provided only that it 
maintains an appropriate invariant.

One method, for example, would be to have every set! with an unknown 
value store the new value in the data slot, and also check the type 
of the value and fill the code slot with either a pointer to the actual 
executable code or else to an error routine.

This would slow down every set! quite a bit -- but we're assuming that 
set! is relatively rare.  Another option would be to *always* have set! 
blindly store the address of a special function in the code slot, such 
that if the object is some time later used as a function the code will 
at that time check the data slot to see if it contains a function, and 
if so patch the code slot, ready for the next time.

In this way, what looks to the user to be (and is) a Lisp1, can gain 
exactly the same benefits you claim to be unique to a Lisp2 -- the cost 
being one extra memory store on each set!.  Which, being to the same 
cache line you're already writing to, is very nearly free on modern 
machines.
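
A minimal sketch of that second variant (in Python, purely to
illustrate the invariant, not any real Lisp1 implementation): set!
blindly stores a checking trampoline in the code slot, and the first
call through it patches the slot.

```python
# Sketch of the two-slot scheme described above: every binding has a
# data slot and a code slot.  set! blindly installs a trampoline; the
# first call through the code slot checks the data slot once and then
# patches the code slot so later calls are a blind jump.
class Binding:
    def __init__(self):
        self.data = None
        self.code = self._check_then_patch

    def _check_then_patch(self, *args):
        if callable(self.data):
            self.code = self.data      # patch: next call jumps directly
            return self.code(*args)
        raise TypeError("value in function position is not a function")

    def set(self, value):              # the "blind store" variant of set!
        self.data = value
        self.code = self._check_then_patch

b = Binding()
b.set(lambda x: x + 1)
print(b.code(41))  # → 42 (first call checks and patches)
print(b.code(1))   # → 2  (subsequent calls jump blindly)
```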

All this is assuming an implementation that makes no attempt at static 
type checking or type propagation.

Which is a very curious level of implementation.  It is not an 
interpreter, since it is generating machine code from Lisp functions, 
but it is being almost perverse in studiously *not* analysing the code 
it is generating.

-- Bruce
From: Aaron Crane
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <djae6xmao1.fsf@planet.dcs.ed.ac.uk>
In article <···························@news.nzl.ihugultra.co.nz>,
Bruce Hoult <·····@hoult.org> writes:
> I'm trying to explore what the benefits of a user-visible function slot in
> symbols are, and why you'd want that but not other user-visible slots for
> different types of values.
> 
> One of the very first things I learned when studying software design a
> couple of decades ago was that the only numbers that make sense as an
> answer to "how many" were "zero", "one", and "unlimited".  Now of course
> that's not an iron-clad rule, but I think you've got to make a pretty good
> case before you break it.

Regardless of whether that is an iron-clad rule for software design, please
note that we're actually discussing language design.  It seems to me that
language design is much closer to interface design than to software design,
and that in interface design, zero, one, and "unlimited" are _not_ the
interesting numbers.  An example I encountered recently: a search over a
database produces some non-negative integer number of query results.  The
cases you want to identify in the user interface are as follows:

  0
    Give the user a message saying that nothing matched, and offer help in
    formulating queries.

  1
    Give the user a message saying that one item matched.  Use the singular
    of the relevant noun.

  A few
    Give the user one page of results, and say how many there were.  Use the
    plural of the relevant noun.

  A lot
    Give the user one page of results.  Say that there are more pages
    available, and provide some means of obtaining the remaining pages.

  Too many
    Negotiate with the user for a more precise search query, and offer help
    in formulating queries.

For some (human) languages, we also have to add:

  2
    Give the user one page of results, and say how many there were.  Use the
    dual of the relevant noun.

Note also that, if there are multiple pages, we must be careful to use the
right grammatical number (single/dual/plural) of the word for "page".
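
The case analysis above can be sketched as a simple selector (Python;
the thresholds, wording, and function name are all hypothetical, and
the grammatical-number handling shown is English-only):

```python
# Hypothetical selector for the result-count cases listed above.
# Thresholds and message wording are illustrative only.
def results_message(n, page_size=10, too_many=1000):
    if n == 0:
        return "Nothing matched; here is some help with queries."
    if n == 1:
        return "1 item matched."              # singular noun
    if n <= page_size:
        return f"{n} items matched."          # "a few": one page, plural
    if n <= too_many:
        pages = -(-n // page_size)            # ceiling division
        return f"{n} items matched; showing page 1 of {pages}."
    return "Too many matches; please refine your query."

print(results_message(25))  # → "25 items matched; showing page 1 of 3."
```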

One could also argue (though I don't personally subscribe to this view in
its strong form) that, for humans, the interesting numbers are in fact one,
two, and many.  This argument would be based simply on the fact that (some)
human languages distinguish singular, dual, and plural number.  Zero isn't
included in this list, because the natural-language way of saying "zero" is
more akin to "there aren't any" than to a number.  Note also that we have
"many" rather than "unlimited", because the notion of an arbitrarily large
number is not one that typically occurs in human language outside of
scientific or mathematical discourse.

Alternatively, one could argue that numbers fall into three classes: "one",
"two through four", and "five or more".  (See selection of case on Russian
nouns in the presence of numerals.)

These arguments rapidly become silly, but the central idea -- that the
zero/one/infinity distinction has nothing in particular to do with human
capabilities -- should remain clear.

-- 
Aaron Crane
From: David Combs
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <997shh$5v7$3@news.panix.com>
In article <··············@planet.dcs.ed.ac.uk>,
Aaron Crane  <···········@pobox.com> wrote:
><SNIP>
>For some (human) languages, we also have to add:
>
>  2
>    Give the user one page of results, and say how many there were.  Use the
>    dual of the relevant noun.
>
>Note also that, if there are multiple pages, we must be careful to use the
>right grammatical number (single/dual/plural) of the word for "page".
>

Could you please define your particular use
of the word "dual", with a few examples
of duals.

Thanks

David
From: Lieven Marchand
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <m3ae6gbal3.fsf@localhost.localdomain>
·······@panix.com (David Combs) writes:

> In article <··············@planet.dcs.ed.ac.uk>,
> Aaron Crane  <···········@pobox.com> wrote:
> >Note also that, if there are multiple pages, we must be careful to use the
> >right grammatical number (single/dual/plural) of the word for "page".
> >
> 
> Could you please define your particular use
> of the word "dual", with a few examples
> of duals.

It refers to the grammatical category of 'number'. Most languages only
distinguish singular and plural but there are exceptions.
Indo-European originally had a dual number (two). In Old English the
pronouns wit, git (we two, you two) were survivors of that feature,
but the corresponding verb declensions were lost. In Icelandic the
forms survived until recently but with a shift in meaning to a
politeness form.

Another example is a Melanesian language that has four numbers:
singular, dual, trial, and plural.

-- 
Lieven Marchand <···@wyrd.be>
Glaðr ok reifr skyli gumna hverr, unz sinn bíðr bana.
From: Aaron Crane
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <dju24o2jp8.fsf@planet.dcs.ed.ac.uk>
In article <············@news.panix.com>,
·······@panix.com (David Combs) writes:
> In article <··············@planet.dcs.ed.ac.uk>,
> Aaron Crane  <···········@pobox.com> wrote:
> >Note also that, if there are multiple pages, we must be careful to use
> >the right grammatical number (single/dual/plural) of the word for "page".
> 
> Could you please define your particular use of the word "dual", with a few
> examples of duals.

Some languages have a feature called "grammatical number".  In such
languages, some parts of speech (typically one or more of nouns, adjectives,
and verbs) inflect for grammatical number; some words must agree in
grammatical number with other words in the same phrase or with an antecedent
in some other phrase.  Most languages distinguish two grammatical numbers:
singular (which is used when the inflected word relates to a single
referent) and plural (which is used when the inflected word relates to
multiple referents).  Some languages distinguish a third grammatical number:
the dual.  Duals are used when the inflected word relates to _precisely two_
referents.  (Some languages have forms such as a trial (for precisely three
referents), or a paucal (for an indeterminate but small number of
referents), but these are much less common.)

Languages that have a dual are perhaps a little uncommon in comparison with
those exhibiting merely a singular/plural distinction, but there are
nonetheless plenty of examples.  In Vranian (New Slavonic) the masculine
noun "rab" (slave) is "rab", "raba", "rabi" in the nominative singular,
dual, plural respectively.  Other modern languages exhibiting a dual include
Aleut, Arabic, Lithuanian, and Yupik.  Other languages have historically had
a productive dual form, even if modern descendants either do not have it or
have it only in a fossilised form.  These include classical Hebrew, several
varieties of classical Greek, (Irish) Gaelic, Akkadian, Old Church Slavonic,
and Sanskrit.  I'm sure a little Googling would reveal more.

-- 
Aaron Crane
From: Stefan Ljungstrand
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <Pine.SOL.4.30.0103221644260.24906-100000@fraggel13.mdstud.chalmers.se>
On 20 Mar 2001, Aaron Crane wrote:

> In article <············@news.panix.com>,
> ·······@panix.com (David Combs) writes:
> > In article <··············@planet.dcs.ed.ac.uk>,
> > Aaron Crane  <···········@pobox.com> wrote:
> > >Note also that, if there are multiple pages, we must be careful to use
> > >the right grammatical number (single/dual/plural) of the word for "page".
> >
> > Could you please define your particular use of the word "dual", with a few
> > examples of duals.
>
> Some languages have a feature called "grammatical number".  In such
> languages, some parts of speech (typically one or more of nouns, adjectives,
> and verbs) inflect for grammatical number; some words must agree in
> grammatical number with other words in the same phrase or with an antecedent
> in some other phrase.  Most languages distinguish two grammatical numbers:
> singular (which is used when the inflected word relates to a single
> referent) and plural (which is used when the inflected word relates to
> multiple referents).  Some languages distinguish a third grammatical number:
> the dual.  Duals are used when the inflected word relates to _precisely two_

(Often referring to a natural pair, like hands, eyes, the legs of a
pair of pants, glasses, ...)

> referents.  (Some languages have forms such as a trial (for precisely three
> referents), or a paucal (for an indeterminate but small number of
> referents), but these are much less common.)

Tolkien's Quenya has four numbers :
singular
plural
partitive plural ("multiple plural")
dual

see :
http://www.uib.no/People/hnohf/quenya.htm#Heading7
(http://www.move.to/ardalambion)

> Languages that have a dual are perhaps a little uncommon in comparison with
> those exhibiting merely a singular/plural distinction, but there are
> nonetheless plenty of examples.  In Vranian (New Slavonic) the masculine
> noun "rab" (slave) is "rab", "raba", "rabi" in the nominative singular,
> dual, plural respectively.  Other modern languages exhibiting a dual include
> Aleut, Arabic, Lithuanian, and Yupik.  Other languages have historically had
> a productive dual form, even if modern descendants either do not have it or
> have it only in a fossilised form.  These include classical Hebrew, several

(Hebrew:
  Jerushalajim - the two Jerusalems (?),
  Misrajim - the two Egypts (upper and lower))

> varieties of classical Greek, (Irish) Gaelic, Akkadian, Old Church Slavonic,
> and Sanskrit.  I'm sure a little Googling would reveal more.
>
> --
> Aaron Crane
>

--
Stefan Lj
md9slj

The infinity that can be finitely expressed is not the true infinity
From: Marco Antoniotti
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <y6c4rwkpmy2.fsf@octagon.mrl.nyu.edu>
Stefan Ljungstrand <······@mdstud.chalmers.se> writes:

> Tolkien's Quenya has four numbers :
> singular
> plural
> partitive plural ("multiple plural")
> dual
> 
> see :
> http://www.uib.no/People/hnohf/quenya.htm#Heading7
> (http://www.move.to/ardalambion)
> 

Well.  Has anybody looked into www.kli.org, just to round up this
thread? :)

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group	tel. +1 - 212 - 998 3488
719 Broadway 12th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA			http://bioinformatics.cat.nyu.edu
             Like DNA, such a language [Lisp] does not go out of style.
			      Paul Graham, ANSI Common Lisp
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-B2E332.00573009032001@news.nzl.ihugultra.co.nz>
In article <················@naggum.net>, Erik Naggum <····@naggum.net> 
wrote:

> * Bruce Hoult <·····@hoult.org>
> > And what is the nature of this "distinct way"?
> 
>   Sigh.  You failed to grasp even the most obvious points in
>   my message, so I won't waste any more time on you.  Your
>   paragraph-by-paragraph response also shows that you're into
>   bickering, not arguing.

On the contrary, your messages appear to consist of a series of 
extremely obvious points, interspersed with enigmatic remarks bearing no 
obvious relationship to those obvious points.  It's like the 
stereotypical mathematics proof with lots of working and then "and then 
a miracle happens" and the answer popping out.
 

>   If you at least could understand that the Lisp1 mindset requires
>   a focus on types (which you have to such an extent that you can't
>   even see anything else), and the Lisp2 mindset is the result of
>   focus on utility,

What do you mean, specifically, by utility?  And how does Lisp2 achieve 
this better than Lisp1?

It's all very well saying that you're in favour of world peace and 
brotherly love -- who wouldn't be? -- but what do you actually propose 
to *do* about it, and why do you think it will help rather than hurt?

-- Bruce
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-AD5B55.12264509032001@news.nzl.ihugultra.co.nz>
In article <················@naggum.net>, Erik Naggum <····@naggum.net> 
wrote:

> * Bruce Hoult
> > On the contrary, your messages appear to consist of a series of
> > extremely obvious points, interspersed by enigmatic remarks bearing no
> > obvious relationship to those obvious points.
> 
>   It would look different to you if you were a lot smarter, but you
>   could compensate for your handicap by working a little more to
>   figure out those "enigmatic remarks".  I actually think there's a
>   pretty strong clue right there that you _don't_ understand enough
>   to make the kind of remarks you make, and that insisting that the
>   only things you find are extremely obvious is a reflection of your
>   inability to grasp anything that runs counter in any meaningful
>   way to what you have made up your mind about.
> 
> * Erik Naggum
> > If you at least could understand that the Lisp1 mindset requires
> > a focus on types (which you have to such an extent that you can't
> > even see anything else), and the Lisp2 mindset is the result of
> > focus on utility,
> 
> * Bruce Hoult
> > What do you mean, specifically, by utility?  And how does Lisp2 achieve 
> > this better than Lisp1?
> 
>   This is one of those cases where you miss the point completely
>   because you're too damn stupid.  It is so incredibly annoying to
>   have to spoon-feed some retard by telling him that what you focus
>   on is not the same as what you achieve; it is what you look at
>   when you make your decisions about how to achieve whatever it is
>   you want to achieve, which is, like, orthogonal to your focus, OK?
>   So it's like "what does type theory say about this" versus "what
>   would be more useful to do".  To Lisp1 guys, there is nowhere else
>   to go but to types, so the focus is very natural, but Lisp2 guys
>   became Lisp2 guys because they found the Lisp2 solution more
>   useful than putting some theoretical constructs first.  Note that
>   this is _not_ the trivially obvious "theory vs practice" bullshit
>   I'm 99.4% certain you think it is.  The question remains _how_
>   anyone found type theory to be their pet peeve.  _My_ bet is that
>   it was once seen as a more _useful_ approach than the competing
>   approaches at the time, but then some people of limited
>   intelligence and too much dedication to some completely unrelated
>   goal (such as getting a PhD with the least amount of effort, which
>   by itself is a pretty smart thing to do) saw that they could
>   achieve that goal by going all gung-ho for type theory, losing
>   every concept of its usefulness in the process.  In other words,
>   most theories start out with practice, but if you lose track of
>   what you wanted to accomplish or may accomplish with it and focus
>   on the theory itself, you can get really seriously lost and fuel
>   the theory-vs-practice myth that also really bugs me.
> 
> > It's all very well saying that you're in favour of world peace and 
> > brotherly love -- who wouldn't be? -- but what do you actually propose 
> > to *do* about it, and why do you think it will help rather than hurt?
> 
>   What an annoyingly stupid thing to say.

What I'm really impressed by in all this is that you're so omniscient 
that you know exactly how intelligent I am (far less than you, 
obviously), you know exactly what's wrong with the brains of those guys 
over there who use Scheme, you know exactly what is the best thing in 
practice in all cases directly from your feelings without requiring 
evidence or logic to substantiate it, you know exactly which pigeonhole 
to put everyone in (e.g. I'm *clearly* a Scheme freak because I dare to 
question the utility of some part of Common Lisp).

And yet you've got just about zero skill in explaining whatever it is 
that you're actually trying to talk about in a way that anyone other 
than Erik Naggum can understand.

I've made a serious attempt to understand your points, have asked for 
clarification on some unclear parts, have suggested where I think 
you're mistaken.  And all I get for my trouble is exhortations that I 
need to reply to your writing as a whole, not to the actual points.  Oh, 
and continual reminding and harping on the fact and extent of my mental 
retardation, which while no doubt perfectly true is nonetheless 
something that is not under my control and therefore your continual 
pointing out of it has zero utility other than, perhaps, to reassure you 
in your obvious and genuine superiority over me.  Which reassurance you 
certainly appear to badly need.

-- Bruce
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-38E01C.21065109032001@news.nzl.ihugultra.co.nz>
In article <················@naggum.net>, Erik Naggum <····@naggum.net> 
wrote:

>   Did it make you feel better to have a much worse enemy than
>   you actually have?  _All_ of this just shows that you're a
>   moron who can't do anything better.

*Enemy*?  Where did that come from?  *Enemy*?  Over a technical 
disagreement?  Well I never.


> > I've made a serious attempt to understand your points, have asked for 
> > clarification on some not clear parts, have suggested where I think 
> > you're mistaken.
> 
>   I have tried to explain to you, and I have seen several other
>   people get the idea very quickly, and you not getting it at all.

Care to name them?  I'll lay money that they were simply brow-beaten 
into submission by your venom and abuse in a way that I've refused to be.


> > Which reassurance you certainly appear to badly need.
> 
>   You've spent your entire message telling me that you need to erect a
>   monster you can be justified in hating

I don't hate you Erik Naggum.  Highly amused would be closer to the 
mark.  Hell, I'd probably even buy you a beer if we ever found ourselves 
in the same hemisphere and I felt like a bubbly shower.


>   Look, Bruce of limited brainpower, my irritation with your
>   particular idiocy is at an end.  You're just too damn stupid
>   to be worth talking to.

And yet you do.  And yet you do...


>   You can do something else that will change my mind about this,
>   but if you're as stupid as I think you are, you won't. 

Care to tell me the email address of the last person who changed your 
mind about anything?

Hell, anyone out there who thinks they managed it, feel free to email me 
with the message-ID.  I'll keep count, but I won't reveal your name to 
Erik, I promise!


>   actually keep hoping that some of the incredible morons that
>   come visiting this newsgroup every once in a while can be
>   convinced to think, but when all hope is lost, I'm just so
>   irritated with the waste of time that you have engaged in.
>   I consider it on par with fraud for someone who has no
>   willingness or ability to listen and understand to engage
>   others in a discussion of moderately complex issues when all
>   he's after is proving that he's right and others wrong.

You know, I've been happy to learn a lot over the years from some of the 
polite people who hang out here, such as .. oh ... Kent Pitman and 
Markus Mottl and Lieven Marchand and Chris Double and Jason Trenouth and 
Matt Curtin and Duane Rettig and no doubt plenty of others whose names 
don't happen to spring to mind at this particular instant.

From Erik Naggum I've been pleased to learn ... well, let's just say 
that if they ever bring back the draft then I know I'll survive boot 
camp just fine :-)

-- Bruce
From: David Combs
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <997s4e$5v7$2@news.panix.com>
In article <················@naggum.net>, Erik Naggum  <····@naggum.net> wrote:
>* Bruce Hoult <·····@hoult.org>
>> *Enemy*?  Where did that come from?  *Enemy*?  Over a technical 
>> disagreement?  Well I never.
>
>  Can't you even admit to _yourself_ what you're doing?  The rest of your
>  message is all about how you need to describe me in terms that defend and
>  justify your own emotional responses.  You're not even close, of course.
>  Your insanely exaggerated image is indicative of only one thing: Your
>  constant lack of mental capacity to deal with a simple technical issue
>  that evidently would invalidate your beliefs if you _understood_ what it
>  is all about.  We have the opportunity, again, to watch a moron in action
>  when he realizes that if he grasped the point he's trying desperately to
>  avoid understanding, all that he has spent so much time defending would
>  just evaporate.  Therefore, he _cannot_ understand the issue at hand.
>  Instead, he has to go _purely_ personal, and the best evidence of a
>  simple-minded idiot is that he has to pretend that another person is a
>  one-dimensional archetype-like caricature, but that only shows how he has
>  been approaching the technical issue, too: Without the ability to deal
>  with _any_ complexity.  And hence the need for just one namespace, too.
>
>#:Erik


Or another idea -- keep (some of) your invective, but
INTERSPERSE it with TECHNICAL stuff and examples that
we can LEARN from.

Put in enough technical ah-ha's and explanations and
we won't mind the name-calling quite so much, as
long as the rewards along the way are there, we'll wade
through the personal-attack stuff.

(Just be sure to give us *enough*.)

Thanks

David
From: Geoffrey Summerhayes
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <CBZt6.427553$Pm2.6665750@news20.bellglobal.com>
"Erik Naggum" <····@naggum.net> wrote in message
·····················@naggum.net...
> * David Combs
> > Or another idea -- keep (some of) your invective, but INTERSPERSE
> > it with TECHNICAL stuff and examples that we can LEARN from.
>
>   Why do you demand this of others, but not of yourself?
>
>   Why do you post your demands instead of mailing them if you are serious?
>

Speaking for myself, I find some of your posts insightful and
some not particularly useful, from a programming POV. Since mine
is heading into the useless category, there is a question I've
been meaning to ask. I've heard people complain about continuations
in Scheme; Graham creates the thing in On Lisp and uses it (accidental
pun, honest, just finished rereading the anaphoric section) for a
version of Prolog. What's the problem with cc and what is the Lisp
alternative?

Geoff
(still can't see Erik's posts at work, haven't been able to
determine why)
From: Tim Bradshaw
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <nkjelvrpgod.fsf@tfeb.org>
"Geoffrey Summerhayes" <·············@hNoOtSmPAaMil.com> writes:

> Speaking for myself, I find some of your posts insightful and
> some not particularly useful, from a programming POV. Since mine
> is heading into the useless category, there is a question I've
> been meaning to ask. I've heard people complain about continuations
> in Scheme, Graham creates the thing in OnLisp and uses it (accidental
> pun, honest, just finished rereading the anaphoric section) for a
> version of Prolog. What's the problem with cc and what is the Lisp
> alternative?
> 

I think that this has been discussed here at some length, so looking
in deja (erm, google?) might be useful.  The basic answer is, I think,
that it's really pretty hard to implement upward continuations
efficiently, and the presence of them makes it really pretty hard to
do a lot of other things efficiently (like stack-allocate) unless
you're willing to do heroic whole-program analysis which would clearly
not be appropriate for a system like CL.  CL instead provides
primitives which do a lot of the things continuations are used for
(BLOCK, RETURN-FROM and so on), and elects not to provide the really
hairy stuff.

As far as I can see `the really hairy stuff' is an easy way of doing
nondeterminism so you can do prolog-type things (but I'm not sure if a
heavy-duty prolog system would do it this way), and something that
looks like a substrate for multiprocessing but is in fact a completely
toxic approach to it, and finally a lot of really obscure programming
tricks.

--tim
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <y9tz81qy.fsf@content-integrity.com>
"Geoffrey Summerhayes" <·············@hNoOtSmPAaMil.com> writes:

> What's the problem with cc and what is the Lisp alternative?

The big problem with cc is that it is a C compiler, and has nothing to
do with Lisp.

call-with-current-continuation is useful for modeling complex control
flow, but the largest problem with it is that it is ``too powerful'';
it lets you use a continuation more than once and it allows you to
break the LIFO order of continuation invocation.

If you wish to use a continuation more than once, you have to retain a
copy of it.  The stack is part of the continuation, so it would have
to be copied.  However, if you copy the stack, you can no longer
allocate data structures (like &rest args, downward funargs, etc.)
there because these structures would end up unshared when copied.
Additionally, this copying has to appear to be done at the time the
continuation is reified (at catch time).  You have a few options as to
how to implement this, all of them have drawbacks.

You could actually copy the stack into the heap.  This is simple, but
computationally expensive, and it would have to be done every time you
reified a continuation (every catch).  Presumably, catch is called far
more often than throw.

You could implement a `copy on write' mechanism and only copy the
parts of the stack that need copying.  However, this is both hairy and
it would be necessary to check if a continuation needed copying every
time one was invoked.  Given that function return is a continuation
invocation, this would be quite expensive as well.

You could implement a `write only' `stack' by allocating stack frames
in the heap, and letting the GC pick them up.  This is a popular
method, and ephemeral GCs can clean up stack frames without too much
of a performance hit, BUT, Miller and Rozas point out in AIM 1462 that
stack allocation of continuations will still outperform heap
allocation (by as much as a factor of three), even if the
implementation has an efficient GC and a lot of memory.

So call-with-current-continuation incurs a performance hit if you use
it in its most general form.  But what if you specialized it for the
most common uses?  If you restrict continuations to be invoked only
once and in a strict last-in first-out order, it becomes easy to
efficiently implement them.  But Common Lisp already has this:

(defun call-with-current-continuation (receiver)
  (block continuation
    (funcall receiver (lambda (arg) (return-from continuation arg)))))
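To make the one-shot behavior concrete, here is how the definition above
acts (a sketch; the continuation is valid only while the receiver's
BLOCK is still active):

```lisp
;; Escaping: the continuation aborts the rest of the receiver,
;; so the surrounding (+ 1 ...) is never performed.
(call-with-current-continuation
 (lambda (k) (+ 1 (funcall k 42))))          ; => 42

;; Not escaping: the receiver's value is returned normally.
(call-with-current-continuation
 (lambda (k) (declare (ignore k)) 'done))    ; => DONE

;; Upward use is out: once the BLOCK exits, the continuation's
;; dynamic extent has ended, and invoking a saved K later has
;; undefined consequences in CL.
```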

The other common use of continuations is to implement co-routines.
But these are easily implemented via other mechanisms:  lazy
evaluation, objects that maintain state, and `stack groups' (found on
many lisp systems) are commonly used.
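As an illustration of the `objects that maintain state' alternative, a
closure can stand in for a suspended coroutine (a sketch, not from the
post):

```lisp
;; A closure over local state plays the role of a suspended
;; coroutine: each call "resumes" where the last one left off.
(defun make-fib-generator ()
  (let ((a 0) (b 1))
    (lambda ()
      (prog1 a
        (psetq a b
               b (+ a b))))))

;; (let ((next (make-fib-generator)))
;;   (list (funcall next) (funcall next) (funcall next) (funcall next)))
;; => (0 1 1 2)
```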

The last common use of call-with-current-continuation is to mimic
multitasking.  I argue that this is an abuse of continuations, not a
valid implementation technique.  Many Common Lisp implementations
provide a `process' (thread) abstraction.

First-class continuations are hard to understand.  CATCH and THROW or
setjmp are hard enough for beginners to grok, but first-class
continuations can baffle much more experienced people.

First-class continuations raise some thorny questions.  When escaping
from a (with-open-file ...) construct, do you close the file or not?
What if you `escape' back in?  Jonathan Rees and Alan Bawden showed
how you can take a `pure functional' subset of Scheme with LETREC and
first-class continuations and create side effects.

So continuations are expensive, confusing, poke some holes in the
language semantics, and essentially redundant for virtually all Common
Lisp applications.  There seems no compelling reason to add them to
Common Lisp.




They are, however, in the `spirit' of Scheme, required by the Scheme
standard, and if you don't mind the performance hit, they make a very
flexible substrate for implementing the standard control-flow
constructs such as catch/throw, block/return, etc.  Common Lisp
implements these constructs `outside the language'.  So there do exist
some valid reasons for retaining them in Scheme, although many people
think that the *requirement* of having them in Scheme is a significant
problem.

One language implementation I did allocated the `stack frames' on the
heap (the language was context-sensitive, so EVAL ended up requiring
two continuations!).  Since the continuations were already in the
heap, there was no additional performance penalty to add a user-level
call-with-current-continuation to the language.  While this made
implementation of the error handling system much easier, I wouldn't
have added it to the language if it weren't already there for other
reasons. 



-----= Posted via Newsfeeds.Com, Uncensored Usenet News =-----
http://www.newsfeeds.com - The #1 Newsgroup Service in the World!
-----==  Over 80,000 Newsgroups - 16 Different Servers! =-----
From: Paolo Amoroso
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <LAu5OiCUUtH3Jk3r6Z7rDRz5iFtF@4ax.com>
On Wed, 21 Mar 2001 08:27:14 GMT, "Geoffrey Summerhayes"
<·············@hNoOtSmPAaMil.com> wrote:

> been meaning to ask. I've heard people complain about continuations
> in Scheme, Graham creates the thing in OnLisp and uses it (accidental
> pun, honest, just finished rereading the anaphoric section) for a
> version of Prolog. What's the problem with cc and what is the Lisp
> alternative?

I don't know about continuations in general. As for the
continuation-passing macros for Common Lisp discussed in "On Lisp", Graham
gives in the same book some hints on their potential problems. In section
21.3 "The Less-than-Rapid Prototype" on page 284 he writes (my notes are in
square brackets):

  "The program described in this chapter [an implementation of multiple
  processes] is, like those in succeeding chapters, a sketch. It suggests
  the outlines of multiprocessing in a few, broad strokes. And though it
  would not be efficient enough for use in production software, it could be
  quite useful for experimenting with other aspects of multiple processes,
  like scheduling algorithms.
  Chapters 22-24 present other applications of continuations [including the
  Prolog implementation based on continuations]. None of them is efficient
  enough for use in production software. [...]"


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/
From: David Combs
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <997rl2$5v7$1@news.panix.com>
In article <················@naggum.net>, Erik Naggum  <····@naggum.net> wrote:
>* Bruce Hoult <·····@hoult.org>
>> What I'm really impressed by in all this is that you're so omniscient 
>> that you know exactly how intelligent I am (far less than you, 
<snip>
>
>  I don't know where you get all this crap, but you're clearly stupid
>  enough to believe it.  Where _did_ all these "exactly" come from?  Why
>  all the incredibly _stupid_ exaggerations?  Did it make you feel better
>  to have a much worse enemy than you actually have?  _All_ of this just
>  shows that you're a moron who can't do anything better.
>
>> And yet you've got just about zero skill in explaining whatever it is
>> that you're actually trying to talk about in a way that anyone other
>> than Erik Naggum can understand.
>
>  You're extrapolating from yourself to the entire world.  How desperate
>  _are_ you in trying to make yourself look normal?
><SNIP>

Erik, please realize that others besides Bruce are
reading through your responses, which, to the extent
they contain mostly personal-attacks, are a waste
of our precious newsgroup-reading time, as we 
proceed through, searching for a return to normal
Naggum genius-level discourse on the *technical* subject.

Now, if you (and Bruce) *promised* that you two would
*not* include any *technical* argument or explanation
in the rest of this sub-thread you two are beginning,
then those of us looking only for info on *lisp* and
related topics would be safe in killing the subthread
rooted at the node you had just generated, without
losing anything *we* (lurkers) were interested in --
we would just kill that subthread and go on to the
next unseen part of the thread.

(eg, in trn, by hitting "," and then "n")

---

Instead, we are stuck wading through all this stuff
that indeed might be of vital interest to you and
Bruce, in the often-but-not-always in-vain effort
to hunt out a bit of *technical* discussion that
isn't on the qualities of someone's brain.

---

So, a hint about the subthread's future would
be helpful!

Thanks!

David
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-ED21C4.08420221032001@news.nzl.ihugultra.co.nz>
In article <············@news.panix.com>, ·······@panix.com (David 
Combs) wrote:

> Now, if you (and Bruce) *promised* that you two would
> *not* include any *technical* argument or explanation

I'm afraid I can make no such promise.

I'm extremely interested in a technical discussion, but Erik apparently 
wishes to make that impossible, so I've elected to not talk to him.  I 
will, however, talk to civil people.

-- Bruce
From: David Bakhash
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <m37l1wmhwv.fsf@alum.mit.edu>
Bruce Hoult <·····@hoult.org> writes:

> Well, *I* wouldn't like such a language, but yes things such as 
> float-let would probably be what you would do.  Or you could go the Perl 
                                                  ^^
> route (which I dislike, but I use it).

what do you mean `Or' ???

That is *exactly* what Perl does:

our (%global);

 {
  our (%global);
  local %global = (one => 1, two => 2); # dynamic extent; `local' used 
                                        # for emphasis
  ... # code
  another_sub() # %global has its dynamic value there
 } # %global gets re-bound to its value before entering the block
 
Your "float-let" is the combination of "let" with the type
information.  In the Perl version, we use "local" for the "let" and
"%" for the type (here, a hash).  The "our()" is analogous to the
(declare (special ...)) we use in CL.  So to me, there's barely a
difference.
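For comparison, a sketch of the CL analogue of the Perl snippet above
(*global* and another-sub are illustrative names, not from the post):

```lisp
;; DEFVAR proclaims *GLOBAL* special, playing the role of our().
(defvar *global* nil)

(defun another-sub ()
  ;; Sees the dynamic value of *GLOBAL* current at call time.
  *global*)

;; LET on a special variable rebinds it with dynamic extent,
;; like Perl's local(); the old value is restored on exit.
(let ((*global* '(:one 1 :two 2)))
  (another-sub))        ; => (:ONE 1 :TWO 2)
```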

[a NOTE for Perl enthusiasts: The only difference here is that in
Perl, even for dynamic variables like the %global above, you can use
my() instead of local() to make %global lexical within that lexical
block.  Therefore %global, if bound with my() instead, would *not*
affect code outside that block that is called in that block's dynamic
extent.  I don't know if such a thing can be done in CL, nor have I
ever wanted this.  And the *reason* I never wanted this is that I'M
THE ONE WRITING THAT BLOCK OF CODE, and so, if I don't want something
to refer to the special %global variable, then I'll _create_ a new
lexical variable and bind it.  This just shows how sloppy Perl is,
just in order to have these extra subtle features that people trip on
all the time, though admittedly compiler warnings with -w help,
e.g. in this case.]

> One of the very first things I learned when studying software design
> a couple of decades ago was that the only numbers that make sense as
> an answer to "how many" were "zero", "one", and "unlimited".  Now of
> course that's not an iron-clad rule, but I think you've got to make
> a pretty good case before you break it.

That case is right before us.  Even with functional programming in the
extreme, this is obvious.  This is why the evaluator treats the first
argument's *value* differently in Scheme.

[On another note, the number 2 in Computer Science and software design
is by far the most massive!  Just think of binary: it's either 0 or 1;
boolean: true or false.  How many values, you ask?  TWO.  Please ask
whoever taught you this how computers would ever have come about if
base 2 were not universal in the utmost.]

dave
From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfw66hf68v5.fsf@world.std.com>
Bruce Hoult <·····@hoult.org> writes:

> One of the very first things I learned when studying software design a 
> couple of decades ago was that the only numbers that make sense as an 
> answer to "how many" were "zero", "one", and "unlimited".  Now of course 
> that's not an iron-clad rule, but I think you've got to make a pretty 
> good case before you break it.

I'm curious if you have a problem with cars having accelerators?
Do you think they'd be better with just "velocitors", or are
you implicitly lobbying for full access to all positional change 
derivatives?  Just curious... ;-)

(Not an iron clad rule indeed.)
From: Bruce Hoult
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <bruce-CCD5D4.23184612032001@news.nzl.ihugultra.co.nz>
In article <···············@world.std.com>, Kent M Pitman 
<······@world.std.com> wrote:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > One of the very first things I learned when studying software design a 
> > couple of decades ago was that the only numbers that make sense as an 
> > answer to "how many" were "zero", "one", and "unlimited".  Now of 
> > course 
> > that's not an iron-clad rule, but I think you've got to make a pretty 
> > good case before you break it.
> 
> I'm curious if you have a problem with cars having accelerators?
> Do you think they'd be better with just just "velocitors", or are
> you implicitly lobbying for full access to all positional change 
> derivatives?  Just curious... ;-)

Are you suggesting that I think the accelerator and brake should be 
combined?

Interesting suggestion, and I certainly prefer to drive vehicles which 
don't require much use of the brakes -- ones with manual transmissions 
and good engine braking, such as my BMW R1100RT motorcycle.

OTOH, I also enjoy operating vehicles which don't have accelerators at 
all, such as sailplanes and bicycles.  You get interesting instruments 
in sailplanes, such as direct pneumatic measurement of the rate of 
change of total energy (kinetic + gravitational potential), but you 
don't get simple controls for that.


I see little point (and many reasons not to) in having direct access to
the rate of change of acceleration (i.e. the 3rd derivative of position),
though that is a useful quantity for engine management computers to use
in assessing the "seriousness" of a request for more power.

But this seems off topic.

-- Bruce
From: Kent M Pitman
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <sfwelw32p9x.fsf@world.std.com>
Bruce Hoult <·····@hoult.org> writes:

> In article <···············@world.std.com>, Kent M Pitman 
> <······@world.std.com> wrote:
> 
> > Bruce Hoult <·····@hoult.org> writes:
> > 
> > > One of the very first things I learned when studying software design a 
> > > couple of decades ago was that the only numbers that make sense as an 
> > > answer to "how many" were "zero", "one", and "unlimited".  Now of 
> > > course 
> > > that's not an iron-clad rule, but I think you've got to make a pretty 
> > > good case before you break it.
> > 
> > I'm curious if you have a problem with cars having accelerators?
> > Do you think they'd be better with just just "velocitors", or are
> > you implicitly lobbying for full access to all positional change 
> > derivatives?  Just curious... ;-)
> 
> Are you suggesting that I think the accelerator and brake should be 
> combined?
> [...]
> I see little point (and many reasons not to) in having direct access to 
> rate of change of acceleration (i.e. 3rd derivitive of position), though 
> that is a useful quantity for engine management computers to use in 
> accessing the "seriousness" of a request for more power.
> 
> But this seems off topic.

No more off topic than your remark about 0, 1, infinity.

I thought some more after posting the question, and here's what I think is
the relevance:  When creating independent things, things with no dependence
on anything else, that rule might make sense.  However, once constraints are
applied, we find that other numbers have relevance.

Pi, for example, would be more convenient if it were at least rational but
we're stuck with it because it is dictated by the constraint of the pragmatic
need for which it was created--the need to compare diameters and 
circumferences.

Aside: If you want to have fun winning a lot of bets at a bar
sometime, make a bet with someone about whether a tall drinking glass
you see around you is taller than it is big around.  Empirically, you
can find that people think the value of pi is about 2, since you can
tell by their threshold in this bet that they will typically only
double the diameter in order to guess how big around it is, and will
often guess that it's taller than wide.  There are some, but very few,
drink glasses that are taller than they are big around, and you will
win most of the time without premeasuring.  You can wrap a napkin
around the mouth and straighten it out to show them.

But the point is that, convenient or not, pi isn't 2, or 3, or 22/7, or even 
3.14 but is the irrational that it is because that's what works.

And, in general, one might say the whole "purpose" of math is to create names
for the uncomfortably many different relationships that there can be to 
describe ordinary practicality.

So it'd be nice if the golden mean were an integer, but it's not.  It
would be nice if certain computer keyboard designers would finally
learn that certain keys on the keyboard, no matter how cheaply they
can be manufactured that way, are not well-placed at integral,
rectangular spacings.  Typical hands don't match up to that.

Math is about the ability to take down all the independent variables, crank
through their relationship, and tell you the "like it or not" value of
what the dependent variables are.

Now, I allege that one independent variable in this equation about Lisp1
vs Lisp2 is the wetware in my head, and the number of things it is
capable of managing.  I doubt that is "0".  I doubt that is "1".  I doubt
that is "infinity".  It's some unknown quantity that is probably best
described as "messy".  And "messy", like pi or bottom, when multiplied
or otherwise combined in most ways with other "neat" numbers, tends to
result in a "messy" result.  So that CL is a Lisp2 (or, more properly,
a Lisp4, but in any case not a Lisp1) seems as appropriate to me as that
pi is a number other than 0, 1, or infinity.  The N in LispN is not an
independently chosen quantity that I pick in order to drive the universe.
It is a dependent quantity that I pick in order to be in harmony with
specific, fixed parts of the universe, one of those being messy old me.

Just like the accelerator in the car.  We fuss with the second derivative
because it seems to work.  Saying "I'd like to be going at 100mph"
might be a fine control at 99mph but might cause serious whiplash if you
were going at 10mph or 1000mph and the car misunderstood your urgency.
Experimentally, we determine that control of the acceleration gives most
of what we need.  And, as you observe, the third derivative is just more
control than we really need or want.  So we go with it.  It's a dependent
quantity, and the notion of "fit" is pre-determined by its setting, not
by some context-independent truth about mathematics and the value of 
certain numbers over others.

Or so it seems to me.
 --Kent
From: David Thornley
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <zOtr6.1218$Tg.162726@ruti.visi.com>
In article <···············@world.std.com>,
Kent M Pitman  <······@world.std.com> wrote:
>
>I thought some more after posting the question, and here's what I think is
>the relevance:  When creating independent things, things with no dependence
>on anything else, that rule might make sense.  However, once constraints are
>applied, we find that other numbers have relevance.
>
It seems to me that what the 0, 1, infinity rule really does is make the
system easier to work with from a theoretical point of view.  It's easier
to prove theorems about system performance if we don't have, say, 2 or
17 of anything.  This doesn't mean that it's a good rule in the real
world.  If we can easily find the optimal performance of a system with
one whatsit, and find that we can't find the optimal performance of
a system with seven whatsits but we can get close enough so that it's
twice as fast as with one whatsit, then there may be very good reasons
to put in seven whatsits.

>Pi, for example, would be more convenient if it were at least rational but
>we're stuck with it because it is dictated by the constraint of the pragmatic
>need for which it was created--the need to compare diameters and 
>circumferences.
>
And everything else it's used for.  It shows up in a lot of different
places in mathematics.  It would be a shame to have probability
distributions change in weird ways because it's more convenient
to have a rational pi.

>Math is about the ability to take down all the independent variables, crank
>through their relationship, and tell you the "like it or not" value of
>what the dependent variables are.
>
Having studied mathematics, I'm not so sure about that.  Mathematics
is, roughly, the study of interesting things that are provable, and
so it tends towards theoretically tractable problems.  Applied
math has always been rather messy.  (Of course, there have been
new ideas since I stopped keeping up with it, probably messing things
up further.)

But, yes, zero-one-infinity seems to go more with the Scheme side
than the Common Lisp side of the Lisp 1.5 descendants.


--
David H. Thornley                        | If you want my opinion, ask.
·····@thornley.net                       | If you don't, flee.
http://www.thornley.net/~thornley/david/ | O-
From: Lieven Marchand
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <m3itldz7lq.fsf@localhost.localdomain>
········@visi.com (David Thornley) writes:

> Having studied mathematics, I'm not so sure about that.  Mathematics
> is, roughly, the study of interesting things that are provable, and
> so it tends towards theoretically tractable problems.  Applied
> math has always been rather messy.  (Of course, there have been
> new ideas since I stopped keeping up with it, probably messing things
> up further.)

There's a nice paper by Knuth[1] where he analyses the difference
between mathematics and computer science by studying page 100 of 9
randomly chosen books on mathematics. One very striking difference he
finds is that "a computer scientist tends to be much more willing to
deal with a multitude of quite different cases".

[1] Algorithms in Modern Mathematics and Computer Science

-- 
Lieven Marchand <···@wyrd.be>
Glaðr ok reifr skyli gumna hverr, unz sinn bíðr bana.
From: Tim Bradshaw
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <nkjofv4pqna.fsf@tfeb.org>
········@visi.com (David Thornley) writes:

> It seems to me that what the 0, 1, infinity rule really does is make the
> system easier to work with from a theoretical point of view.  It's easier
> to prove theorems about system performance if we don't have, say, 2 or
> 17 of anything.  This doesn't mean that it's a good rule in the real
> world.  If we can easily find the optimal performance of a system with
> one whatsit, and find that we can't find the optimal performance of
> a system with seven whatsits but we can get close enough so that it's
> twice as fast as with one whatsit, then there may be very good reasons
> to put in seven whatsits.
> 

I think that the cases where this rule doesn't apply are called
`engineering', and it's just incredibly easy to find instances.
Typically theorists try to solve engineering problems by trying to
find general solutions for n (where n is not 0, 1, or infinity), and
fail.  Meanwhile engineers build bridges and land spacecraft on the
moon.

There is an amusing definition of a theoretical physicist which I
think describes the problem:

	A theoretical physicist is someone who, when asked to
	calculate the stability of an ordinary, four-legged, table
	rapidly arrives at preliminary results regarding the stability
	of tables with zero, one or an infinite number of legs.  He
	then spends the rest of his life trying to calculate the stability
	of a table with an arbitrary, finite number of legs.

--tim
From: Paolo Amoroso
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <Q4WwOmq4s4PxERSZZZuFK+1nJbQI@4ax.com>
On 14 Mar 2001 10:54:17 +0000, Tim Bradshaw <···@tfeb.org> wrote:

> fail.  Meanwhile engineers build bridges and land spacecraft on the
> moon.

Or they safely land on an asteroid a spacecraft that is not designed for
landing (the NEAR mission to asteroid Eros is cooler than deep space :)


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/
From: Will Deakin
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <3AB0C19E.3020704@pindar.com>
Paolo wrote:

> ...(the NEAR mission to asteroid Eros is cooler than deep space :)
I'm not sure. The temperature of deep space is closer to -273.16
Celsius than NEAR the sun.

;)will
From: David Bakhash
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <m3ae6vmgvx.fsf@alum.mit.edu>
Bruce Hoult <·····@hoult.org> writes:

> > I also think, although I think the paper doesn't get into it, that
> > people's brains plainly handle multiple namespaces and contexts
> > naturally
> 
> Well, perl certainly seems to prove that.  I just like to write perl
> code with as many different uses of the same name as possible.  Such
> as ...
> 
>   next b if $b{$b} = <b>;

I don't think this argument holds for Perl.  In Perl, the multiple
namespaces are notationally separated: $x, @x, and %x are all separate 
objects, for example, but in Common Lisp, these would all be different 
symbols!  So in your example, there is no ambiguity at all about what
you're doing.  Ambiguity in Perl arises when you don't "use strict"
and leave barewords around for Perl to try to figure out the meanings
of.

A cleaner analogy into Perl would be if you coded everything as a
reference.  Perl's great for this, since you can refer to anything in
Perl (including functions).  So, for example:

$square = sub {my $arg = shift; return \($arg * $arg);};

$nine = $square->(3);

This starts to look very much like a Lisp1.

Now, let's talk about the simple case in Lisp2, where symbols have
function and value cells.  The form:

(square 3)

in CL implies the function slot of 'square whereas:

(cube square) implies the value slot of 'square.
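To make the two cells concrete, a small sketch in the same spirit:

```lisp
(defun square (x) (* x x))      ; fills the function cell of SQUARE

;; SQUARE here names a lexical variable *and* the function: the
;; operator position uses the function cell, the argument position
;; uses the variable binding.
(let ((square 9))
  (square square))              ; => 81

;; To treat the function as a value, you must say so explicitly:
(funcall #'square 3)            ; => 9
```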

In a Lisp1 (like Scheme), writing:

(square 3)

will generate an error if evaluated and the value in 'square is not a
function.  Same deal in perl, if you write:

$square->(3) and the value referred to by $square is not CODE.

I don't know how to think of Perl with respect to this discussion, but 
Perl, to me, feels more like a Lisp1 than a Lisp2 when you consider
the additional notational meaning added by the {&,$,@,%,*} characters.

It's also important to note that, given that all functions are
dispatched with FUNCALL in Common Lisp, the non-macro part of CL can
very much look Lisp1 as well.  Unfortunately, Common Lisp doesn't have 
a LETREC, so if you tried to re-write all your CL code in Lisp1 style, 
I don't think you can completely get away with it (i.e. a LETREC would 
be needed to do what LABELS does, and we don't have it, as far as I
know).

Anyway, of course we could implement a LETREC in CL to do the
following transformation:

(letrec ((fib (lambda (x)
                (if (< x 2)
                    x
                  (+ (funcall fib (1- x))
                     (funcall fib (- x 2)))))))
  (funcall fib 10))

into, perhaps:

(let* (fib)
 (setq fib (lambda (x)
             (if (< x 2)
                 x
               (+ (funcall fib (1- x))
                  (funcall fib (- x 2))))))
 (funcall fib 10))

(I think this would work for the more general case that LABELS
handles, which is mutually dependent recursive functions in which the
function definitions are evaluated in an environment where the function
bindings are defined.)
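Packaged as a macro, that transformation might look like this minimal
sketch (no handling of declarations; assignments are sequential, as in
the expansion above):

```lisp
(defmacro letrec (bindings &body body)
  ;; Bind all the names first (to NIL), then assign the init forms,
  ;; so every init form can refer to any of the bound names.
  `(let ,(mapcar #'first bindings)
     ,@(mapcar (lambda (binding)
                 `(setq ,(first binding) ,(second binding)))
               bindings)
     ,@body))

;; (letrec ((fib (lambda (x)
;;                 (if (< x 2) x
;;                     (+ (funcall fib (- x 1))
;;                        (funcall fib (- x 2)))))))
;;   (funcall fib 10))
;; => 55
```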

My point being that nothing is insurmountable.  If the funcall bothers 
people, then I'm sure they can create a reader macro that expands into 
the funcall, such as:

$fib(x y z) ==> (funcall fib x y z)

or something like this.  Bottom line is that if you insist on a
functional Lisp1 style, then you can do it on top of Common Lisp.  I
haven't given much thought to how macros and special operators would
be affected, but hopefully I've provided enough to make my point.
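Such a reader macro takes only a few lines; a sketch (note it makes #\$
a terminating macro character, so symbols containing $ would then need
escaping):

```lisp
(set-macro-character #\$
  (lambda (stream char)
    (declare (ignore char))
    ;; Reading the symbol stops at the open paren (a terminating
    ;; macro character); the next READ picks up the argument list.
    ;; So $fib(x y z) reads as (FUNCALL FIB X Y Z).
    (let ((fn   (read stream t nil t))
          (args (read stream t nil t)))
      `(funcall ,fn ,@args))))
```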

But what all this says is that:

  Given the current design of CL, we can handle the preferred style of 
  the Lisp1 folk without an overhaul of the language.

That's exactly *not* the point of many people in this discussion;
their point was "What is the best design for the language at the
lowest level?"

Handling the many styles that people program in, but in a single
language, is not my deepest concern, but my deepest interest.  Common
Lisp certainly affords this, despite the talked-about limitations with
respect to Scheme and Lisp1.  For example, CL doesn't have a
DYNAMIC-FLET, but such a macro can easily be defined (using
(SETF FDEFINITION) and UNWIND-PROTECT).  Or, better yet,
CL:FLET can be shadowed into a code-walking macro that looks for a new
kind of declaration declaring a function to be dynamic.
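A single-binding sketch of such a DYNAMIC-FLET (illustrative only: it
swaps the global definition, so it is not safe with multiple threads,
and redefining COMMON-LISP package functions is forbidden):

```lisp
(defmacro dynamic-flet ((name lambda-list &body fbody) &body body)
  "Install a global definition for NAME with dynamic extent,
restoring the old definition on exit, even via a throw."
  (let ((old (gensym "OLD-")))
    `(let ((,old (fdefinition ',name)))
       (setf (fdefinition ',name) (lambda ,lambda-list ,@fbody))
       (unwind-protect
            (progn ,@body)
         (setf (fdefinition ',name) ,old)))))

;; (defun greet () :lexical)
;; (defun caller () (greet))
;; (dynamic-flet (greet () :dynamic)
;;   (caller))   ; => :DYNAMIC, and GREET is restored afterwards
```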

I don't know how hard it would be to give Scheme Lisp2 semantics, but
I definitely think that CL can be made to offer Lisp1 advocates
something palatable given their stylistic preferences.

dave

[note: I'm not suggesting that such a task is easy; only that it's
       do-able in Common Lisp, and while it does not address the
       underlying problem of making CL smaller, simpler, faster, it
       does address the stylistic needs of some members of the Lisp
       community who may like to write some or all of their code in ways
       not immediately endorsed by CL.

       Also, since such a system would have to be done on top of CL,
       it couldn't `undo' low-level limitations imposed by CL at its
       lowest level, e.g. real multi-processing.]



                            
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <y9uewsq1.fsf@content-integrity.com>
David Bakhash <·····@alum.mit.edu> writes:

> Anyway, of course we could implement a LETREC in CL to do the
> following transformation:
> 
> (letrec ((fib (lambda (x)
>                 (if (< x 2)
>                     x
>                   (+ (funcall fib (1- x))
>                      (funcall fib (- x 2)))))))
>   (funcall fib 10))
> 
> into, perhaps:
> 
> (let* (fib)
>  (setq fib (lambda (x)
>              (if (< x 2)
>                  x
>                (+ (funcall fib (1- x))
>                   (funcall fib (- x 2))))))
>  (funcall fib 10))
> 
> (I think this would work for the more general case that LABELS
> handles, which is mutually dependent recursive functions in which the
> function definitions are evaluated in a environment where the function 
> bindings are defined.)

This is in fact how LETREC is defined in R5RS.

> I don't know how hard it would be to give Scheme Lisp2 semantics, 

It would probably be tricky to do this portably, but not too hard to
do it for the better Scheme implementations.  (Provided you didn't try
to get too fancy with packages).

> but I definitely think that CL can be made to offer Lisp1 advocates
> something palatable given their stylistic preferences.

Jonathan Rees has a package called `pseudo-scheme' that translates
Scheme into common-lisp, but I think that is mostly useful for porting
existing Scheme code.


From: Dorai Sitaram
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <98db2r$q2a$1@news.gte.com>
In article <············@content-integrity.com>,
Joe Marshall  <···@content-integrity.com> wrote:
>David Bakhash <·····@alum.mit.edu> writes:
>> I don't know how hard it would be to give Scheme Lisp2 semantics, 
>
>It would probably be tricky to do this portably, but not too hard to
>do it for the better Scheme implementations.  (Provided you didn't try
>to get too fancy with packages).
>
>> but I definitely think that CL can be made to offer Lisp1 advocates
>> something palatable given their stylistic preferences.
>
>Jonathan Rees has a package called `pseudo-scheme' that translates
>Scheme into common-lisp, but I think that is mostly useful for porting
>existing Scheme code.

I thought PseudoScheme _ran_ Scheme code as
opposed to _translating_ it -- i.e., in the sense of
producing operationally equivalent CL source for a
given Scheme source.  

--d
From: Joe Marshall
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <4rwzvtgl.fsf@content-integrity.com>
····@goldshoe.gte.com (Dorai Sitaram) writes:

> >Jonathan Rees has a package called `pseudo-scheme' that translates
> >Scheme into common-lisp, but I think that is mostly useful for porting
> >existing Scheme code.
> 
> I thought PseudoScheme _ran_ Scheme code as
> opposed to _translating_ it -- i.e., in the sense of
> producing operationally equivalent CL source for a
> given Scheme source.  

PseudoScheme compiles (translates) Scheme to CommonLisp.


From: David Bakhash
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <m38zlxwhot.fsf@alum.mit.edu>
····@goldshoe.gte.com (Dorai Sitaram) writes:

> I thought PseudoScheme _ran_ Scheme code as opposed to _translating_
> it -- i.e., in the sense of producing operationally equivalent CL
> source for a given Scheme source.

Yes.  Scheme in CL is a fairly well-known problem, solved in a variety 
of ways.  There's even a Scheme implementation in Emacs Lisp.  But
going the other way is a huge challenge (probably because of the
magnitude of Common Lisp).  The BBN Butterfly implementation of CL is, 
I think, written on top of Scheme.  I think I remember looking through 
the sources once, unimpressed, saying to myself, "this must be slow as 
hell".  Of course, I don't know anything about it; just got that
impression.

Dorai wrote a macro for the SYNTAX-RULES thingy for Scheme-style
macros in CL.  Though I like CL macros (using DEFMACRO), when a macro
can be written using the SYNTAX-RULES style, I think they're cleaner
to look at, and writing the macro without gensym is (to me) a minor
plus -- enough that in a large system I might include it as a utility, 
and ask others to use it when applicable.  The point being that ANSI
CL provides the ability to create a Scheme utility, whereas R5RS
Scheme doesn't seem to provide enough to create a DEFMACRO without
writing a new parser/reader/whatever.  CL gives the programmer lots of 
low-level syntactic freedom that Scheme users don't get, as far as I
know.  I love that freedom, messy as it looks.  I've used it over and
over, and seen it used (infix.lisp, Common SQL, etc.).
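A minimal sketch of the contrast being drawn here (using a hypothetical
two-branch MY-OR macro, not anything from the thread): the DEFMACRO
version needs an explicit GENSYM so the expansion can't capture a user
variable, while the SYNTAX-RULES version gets hygiene for free.

```lisp
;; Common Lisp: DEFMACRO must gensym the temporary by hand,
;; or (my-or tmp 3) would capture the user's TMP.
(defmacro my-or (a b)
  (let ((tmp (gensym)))
    `(let ((,tmp ,a))
       (if ,tmp ,tmp ,b))))

;; Scheme: SYNTAX-RULES is hygienic, so the literal TMP below
;; can never collide with a TMP at the macro's use site.
;; (define-syntax my-or
;;   (syntax-rules ()
;;     ((_ a b)
;;      (let ((tmp a))
;;        (if tmp tmp b)))))
```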

dave
From: Dorai Sitaram
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <99fpup$27j$1@news.gte.com>
In article <··············@alum.mit.edu>,
David Bakhash  <·····@alum.mit.edu> wrote:
>
>Though I like CL macros (using DEFMACRO), when a macro
>can be written using the SYNTAX-RULES style, I think they're cleaner
>to look at, and writing the macro without gensym is (to me) a minor
>plus -- enough that in a large system I might include it as a utility, 
>and ask others to use it when applicable.  The point being that ANSI
>CL provides the ability to create a Scheme utility, whereas R5RS
>Scheme doesn't seem to provide enough to create a DEFMACRO without
>writing a new parser/reader/whatever.  CL gives the programmer lots of 
>low-level syntactic freedom that Scheme users don't get, as far as I
>know.  I love that freedom, messy as it looks.  I've used it over and
>over, and seen it used (infix.lisp, Common SQL, etc.).

One could make a case that defmacro is more Scheme-ly
than R5RS's macro-by-example (mbe) syntax-rules.  I.e.,
it's simple and powerful (easy macros are easy to
write, hard macros are proportionally hard to write but
not impossible), and it doesn't add a whole new
language like say the CL format or loop.  You may be
overstating by a smidgeon the ability of defmacro to
implement the R5RS mbe given the latter's subtle
"referential transparency" requirements.  For
most practical purposes though, this is irrelevant and
I'm going to grant you your point.  

That said, I descry some movement toward consolidating
a low-level macro system for the Scheme standard based
on a special form "syntax-case".  A CL-style defmacro
is writable (or stealable -- see examples in Kent
Dybvig's Petite Chez Scheme distro) fairly easily in
terms of syntax-case.  Using syntax-case directly is I
find still a lot hairier and lot less intuitive than
using defmacro, but that may be a function of
unfamiliarity.  
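For the curious, the construction has roughly this shape (a
non-authoritative sketch after the DEFINE-MACRO shims distributed with
syntax-case systems; all names here are illustrative): the trick is to
run the DEFMACRO body on the raw datum and then deliberately re-wrap
the result with DATUM->SYNTAX in the use site's context, i.e.
deliberately forgoing hygiene.

```lisp
;; Scheme (R6RS/Chez-style syntax-case), sketched from memory:
(define-syntax defmacro
  (lambda (x)
    (syntax-case x ()
      ((_ name args b1 b2 ...)
       #'(define-syntax name
           (lambda (y)
             (syntax-case y ()
               ((k . rest)
                ;; Strip syntax wrappers, run the old-style expander
                ;; function, and re-inject the result unhygienically
                ;; in the context of the macro keyword K.
                (datum->syntax #'k
                  (apply (lambda args b1 b2 ...)
                         (syntax->datum #'rest)))))))))))
```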

--d
From: Lieven Marchand
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <m3vgp0795f.fsf@localhost.localdomain>
David Bakhash <·····@alum.mit.edu> writes:

> Though I like CL macros (using DEFMACRO), when a macro can be
> written using the SYNTAX-RULES style, I think they're cleaner to
> look at, and writing the macro without gensym is (to me) a minor
> plus -- enough that in a large system I might include it as a
> utility, and ask others to use it when applicable.

Most people who like macrology have tools like WITH-GENSYMS or
LispWorks WITH-UNIQUE-NAMES hanging around.
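One common formulation of such a utility (sketched from memory, not
LispWorks's actual WITH-UNIQUE-NAMES code):

```lisp
;; Bind each NAME to a fresh uninterned symbol for use in a
;; backquoted expansion.
(defmacro with-gensyms ((&rest names) &body body)
  `(let ,(loop for n in names
               collect `(,n (gensym ,(string n))))
     ,@body))

;; Typical use -- the LET-of-GENSYM boilerplate disappears:
(defmacro my-or (a b)
  (with-gensyms (tmp)
    `(let ((,tmp ,a))
       (if ,tmp ,tmp ,b))))
```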

-- 
Lieven Marchand <···@wyrd.be>
Glaðr ok reifr skyli gumna hverr, unz sinn bíðr bana.
From: Tim Bradshaw
Subject: Re: Separate namespaces [was: Re: please tell me the design faults]
Date: 
Message-ID: <ey3g0gmlsfc.fsf@cley.com>
* David Bakhash wrote:
> For example, CL doesn't have a
> DYNAMIC-FLET, but such a macro can easily be defined (using
> (SETF FDEFINITION) and UNWIND-PROTECT).  Or, better yet,
> CL:FLET can be shadowed into a code-walking macro that looks for a new
> kind of declaration declaring a function to be dynamic.

Not in the presence of multiple processes it can't.  Further (just to
prevent the inevitable followup from someone), dynamic-wind does not
solve this problem *either*, unless you are willing to assume that
your multithreaded system is actually an emulation running on a single
processor, so stack unwinding/rewinding actually happens.  If you want
dynamic bindings you have to do it at a much lower level than that,
which is why it is *right* that CL should have specials in the
language.
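For concreteness, the single-threaded version of the idea being shot
down looks roughly like this (DYNAMIC-FLET-1 is a hypothetical name,
not a real CL operator; the SETF of the global FDEFINITION is exactly
what every other thread would observe while BODY runs):

```lisp
;; Temporarily replace the global definition of NAME with FN
;; around BODY.  Only safe in a single-threaded image.
(defmacro dynamic-flet-1 (name fn &body body)
  (let ((old (gensym "OLD")))
    `(let ((,old (fdefinition ',name)))
       (setf (fdefinition ',name) ,fn)
       (unwind-protect (progn ,@body)
         (setf (fdefinition ',name) ,old)))))
```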

--tim
From: Ole Myren Rohne
Subject: Re: please tell me the design faults of CL & Scheme
Date: 
Message-ID: <ebwvgpo1fqv.fsf@pcedu3.cern.ch>
·····@cogsci.ucsd.edu (David Fox) writes:

> "Julian Morrison" <······@extropy.demon.co.uk> writes:
> 
> > Please tell me the parts of (1)Common Lisp (2) Scheme that are "broken as
> > designed". 
> 
> One problem I have with Common Lisp is the separate name spaces for
> functions and variables.

One problem I have with Scheme is the unified name space for 
functions and variables.

Sorry, I just couldn't resist;-)