From: Scott Burson
Subject: Lexical contexts; or, Lambda, the Ultimate Module?
Date: 
Message-ID: <ab2784f8-1bd3-4463-84ca-75b0abcfbe09@c19g2000prf.googlegroups.com>
So I was thinking about Ron Garret's lexicons, as I had once thought
about and played around with T locales back in the mid-1980s, and I
was having the same experience I had back then: that they were
interesting, but they didn't seem to do all the things I wanted them
to and thought they should, somehow, be able to do.  I had puzzled
over this, back then, but never managed to come up with a better
mechanism.

And then, working on another project, I was writing a piece of code of
the form

(defun foo (...)
  (let (... some bindings ...)
    (labels (... lots of clauses ...)
       ...)))

and I had the thought -- wouldn't it be nice if there were some way to
write this so that I could call the various `labels' functions
interactively, for testing and debugging?  And then I had an idea.
What if, instead of writing

(defun foo (x)
  (let ((y (bar x)))            ; `bar' is a global function
    (labels ((baz (z) ... x ... y ...))
      (baz (quux x)))))      ; `quux' is a global function

we could instead write the two top-level forms

(defcontext foo-context (x)

  (deflex y (bar x))          ; a lexical `defvar'

  (defun baz (z) ... x ...  y ...))

(defun foo (q)
  (with-context (foo-context q)
    (baz (quux x))))

Here the `defcontext' declares, not a lexical environment, but a
lexical environment _generator_ -- a function that constructs lexical
environments with a known set of bound names (in this case, `x', `y',
and `baz').  `with-context' does two things: it invokes the generator
with the supplied argument(s) to construct a "context instance", and
it arranges for its body to be evaluated in the lexical environment
that binds the names declared in the context to the values in the
instance.
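
For concreteness, here's one way the pair might plausibly expand.
This is just a minimal sketch, not what the posted implementation
actually emits -- `make-foo-context', the vector layout, and the
placeholder bodies of `bar' and `baz' are all my inventions:

```lisp
;; Placeholder for the global `bar' from the example above:
(defun bar (x) (* x 10))

;; The generator computes the context's bindings once and packages
;; them into a "context instance":
(defun make-foo-context (x)
  (let ((y (bar x)))
    (labels ((baz (z) (+ x y z)))     ; placeholder body
      (vector x y #'baz))))

;; `with-context' then re-establishes those names lexically around its
;; body, so `x', `y', and `baz' are ordinary lexical references:
(defmacro with-foo-context ((&rest args) &body body)
  `(let ((inst (make-foo-context ,@args)))
     (let ((x (svref inst 0))
           (y (svref inst 1)))
       (declare (ignorable x y))
       (flet ((baz (z) (funcall (svref inst 2) z)))
         ,@body))))

;; (with-foo-context (3) (baz 4)) => 37
```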

Notice that the body of `baz' is written in a lexical environment
containing the bindings of `x' and `y'.  The effect is similar to that
of dynamic binding; consider for comparison (ignoring `y' for the
moment):

(defvar *x*)

(defun baz1 (z) ... *x* ...)

(defun foo1 (q)
  (let ((*x* q))
    (baz1 (quux *x*))))

This will do the same thing in many cases.  But the context
implementation is absolutely lexical; you could write

(defun foo (q)
  (with-context (foo-context q)
    #'baz))

and get back a function which is closed over the supplied value of
`x' (the value which was passed to `foo' as `q').

The primary price to be paid for this is that where `baz1', the
dynamic version, can be compiled separately, the entire context
`foo-context' has to be compiled at one time.  If one were to make
heavy use of `defcontext', one could easily wind up with a large
source file consisting entirely of a single `defcontext' form.  This will
certainly slow down incremental compilation a bit.  On modern
hardware, though, I think it's unlikely to be a problem (though I have
yet to try it).

And there's an upside too.  Since all the `defun' forms in a single
`defcontext' body are gathered into a single `labels', you get what
amounts to automatic block compilation -- the calls between these
routines have less overhead than calls between ordinary top-level
functions.  (In most implementations, at least.)  Also, the references
to the context variables -- its parameters plus those additional
variables defined with `deflex' -- are quite fast; there's no need to
deal with the overhead of special binding (which is not as cheap as it
used to be, in this brave new SMP world).

Some may recall that I have previously expressed in this forum my
dislike of dynamic binding, my feeling that it's a hack that obscures
the true structure of the program and is at least a little bit
bug-prone.  Yet the alternatives that have been available have not been
entirely satisfactory -- the best, usually, has been to gather up the
desired bits of context into an object and pass this object around
explicitly everywhere it's needed.  I am glad to finally have an
alternative to suggest that permits the same elegance of notation as
dynamic binding without (what I see as) the downside.

Another subtopic I'd like to raise before I turn to the implementation
details.  I have seen, on various occasions and in various places
around the Web, people criticizing CL for not having a "real" module
system.  The package system, these people say, doesn't cut it.  I have
some sympathy with their position.  So it's natural to wonder, should
I be calling these "modules" rather than "lexical contexts"?  It's
tempting.  But when I think about the features a "real" module system
should have, one of them is the ability for modules to define types,
and I haven't thought enough yet about how to do that.  I also need to
do some background reading on other module systems to see if there's
any other functionality worth including.

Unfortunately, while CL has notions of variable names, function names,
and even macro names with non-global scopes, it does not have any
notion of a type name with a non-global scope.  Types are normally
named by symbols, which means that the only scoping mechanism for them
is the package; it also means that type names live in their own
namespace.  It's possible this could be circumvented, at least
partially, by thinking of types as the values of variables rather than
global names in the type namespace.  CL makes some provision for this
by allowing classes (that is, class objects) to be used as type
specifiers; but the support is not complete -- it's not the case that
every type specifier has a corresponding class, and it's also not the
case that classes which are the values of variables (and may not be
named at all, or not with the same name as the variable) can be used
in every context (consider `defmethod' parameter specializers).  So
I'm not sure what can be done here.

So anyway, I don't yet think of this as a module system.  Just to put
it in context: CL already has two other modularity mechanisms,
packages and classes.  I certainly don't think that these lexical
contexts are going to supplant classes, despite a certain
resemblance.  And they can't supplant packages either.  My sense is
that lexical contexts complement these other mechanisms rather than
subsuming them.

Well, there's more I could say about that, but for now I want to move
on to my initial implementation.  Here's what I have.

The macro `defcontext' takes, as you see in the examples above, a
parameter list and a body.  The parameter list is just like the
ordinary parameter list of a function, with one exception (ha, you
guys are going to love this! [donning flame-retardant suit]): if you
want the parameter name to be bound in the function namespace rather
than the variable namespace, you can indicate that by marking the
parameter name with #'.  That is, such a parameter can be referenced
within the body of the `defcontext' as a function name rather than a
variable name; for instance

  (defcontext foo (x #'f)
    (defun bar () (f x)))

Why did I do this?  Obviously, it isn't strictly necessary.  I just
thought it would be common to want to supply functional parameters to
a context and that it would be nice to be able to invoke them without
`funcall', and this occurred to me.  The only little weirdness is that
you can't use `function' as the name of an optional or keyword
parameter; conversely, if you have an optional or keyword function
parameter, you must supply a default (`nil' wouldn't make much sense
anyway).  Since the two situations -- an optional parameter named
`function' with a default, or an optional function parameter with no
default -- are indistinguishable, the implementation disallows either.

The body of the `defcontext' must contain only these kinds of forms:

  (import-context (context-name arg0 arg1 ...))

The named context is instantiated with the supplied arguments; the
names it exports are made available to code in the invoking context.
(Currently there is no way to get any of the imported names
re-exported from the invoking context; there should be.)  If two
`import-context' forms import the same name, the later one shadows the
earlier one.

  (deflex name init-form ["doc-string"])

Defines a lexical variable.  The init-form is evaluated in an
environment that includes the parameters, all names imported via
`import-context', and all preceding `deflex' variables (but _not_
functions defined with `defun').  The variable name will be exported
from the context.

  (defmacro ...)

Defines a macro.  The macro will be visible to the init-forms of
`deflex' forms as well as to the bodies of `defun' forms.  It will
also be exported from the context.  The macro bodies can see only the
context parameters and imported names.

  (defun ...)

Defines a function.  Functions defined in a context may be mutually
recursive.  The name will be exported from the context.

I could allow these forms to be intermixed arbitrarily, but I thought
it might lead to confusion -- users might think that by putting a
`deflex' after a `defun', they could make the function available to
the variable's init-form -- so the implementation restricts the
ordering: `import-context' forms must come first, followed by `deflex'
and `defmacro' forms in any combination (I figure these don't tend to
interact much), followed by `defun'.
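
To pull the pieces together, here's an illustrative `defcontext'
exercising all four body forms in the required order.  The specifics
are hypothetical -- in particular it assumes some previously defined
context `counter-context' that exports a function `next-id':

```lisp
(defcontext logger-context (stream)

  (import-context (counter-context 0))  ; makes `next-id' available

  (deflex prefix "log: ")               ; lexical variable; sees `stream'

  (defmacro logging (&body body)        ; visible to the `defun's below
    `(progn (write-string prefix stream) ,@body))

  (defun log-line (msg)                 ; exported from the context
    (logging (format stream "~D ~A~%" (next-id) msg))))
```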

To use a context, all you have to do, as shown in the examples above,
is to use `with-context' to instantiate it and enter the lexical
environment containing its exported names.

Oh, one more thing.  `with-context' forms have a compile-time
dependency on the corresponding `defcontext' similar to that which a
struct accessor invocation has on the corresponding `defstruct'.  The
implementation goes to some effort to minimize this, however.  As long
as fasl continuity is maintained for the `defcontext' form, it can be
edited and recompiled arbitrarily without necessarily having to
recompile the `with-context' forms that reference it.  The only
strange thing that can happen is if one version of the `defcontext' is
compiled that exports a name `foo', a `with-context' is compiled that
references `foo', and then a new version of the `defcontext' is
compiled that no longer contains `foo'; in that case, when the
`with-context' is executed, the value provided for `foo' will be a
symbol whose name contains a message telling you to recompile the
`with-context'.  (This recompilation will fail, since `foo' is no
longer exported(*), but the compiler will give you a nice error to
that effect.)

(Obviously there are other ways to get your code out of sync, but
they're all familiar; e.g., changing the context parameter list
incompatibly.)

(* Unless there's an outer `foo', previously shadowed, that now
becomes visible, of course.)

So, here it is.  This code is EXPERIMENTAL.  Use At Your Own Risk,
Your Mileage May Vary, Post No Bills, etc.

   http://www.ergy.com/contexts.lisp

The best way to see how it works is to macroexpand some `defcontext'
forms.

-- Scott

From: Scott Burson
Subject: Re: Lexical contexts; or, Lambda, the Ultimate Module?
Date: 
Message-ID: <cba02203-a8bf-4ac3-84a2-89bd901ce585@s19g2000prg.googlegroups.com>
As long as that was, I still forgot something important :)

What about interactive debugging?  Wasn't that part of what got me on
this path?

Yes.  One idea would be to interface contexts to Ron's lexicons, by
writing something that turns a context instance into a lexicon.  The
question I have about lexicons, though, is how well they can work in
practice without being integrated into the REPL.  (What happens if you
call `compile-file' with some lexicon active?)

And I had a different idea.  It's easy enough to interactively define
macros like this:

  (defmacro c (&body body) `(with-context (foo 27 #'mumble) . ,body))

Then you can use it to interactively access variables, functions, and
macros exported by the context:

  (c (bar 13))

The slight downside of this is that each invocation of `c'
re-instantiates the context, which could in principle be expensive, as
the init-forms of the `deflex'es have to be re-evaluated.  The right
fix is probably to pull the expensive computation out of the
`defcontext', instead passing the result in as an argument.
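
A sketch of that fix (all the names are hypothetical, and it assumes
`foo' has grown a third parameter to receive the precomputed value):

```lisp
;; Computed once at top level, not on every invocation of `c':
(defvar *big-table* (build-big-table))

(defmacro c (&body body)
  `(with-context (foo 27 #'mumble *big-table*) . ,body))
```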

Oh yes, also, I want to add a feature that lets you declare certain
names as private to the `defcontext' form, so the body of a
`with-context' can't see them.  Along with that, there will need to be
something like `with-context-all' that gives you access to the private
names too, for debugging.

I'll have to get some actual experience with this thing to see if this
approach to interaction suffices, but I suspect it will.  If not, I
guess it's back to lexicons.

-- Scott
From: Scott Burson
Subject: Re: Lexical contexts; or, Lambda, the Ultimate Module?
Date: 
Message-ID: <34d3f670-7ecd-4367-b91e-412f261a706b@c19g2000prf.googlegroups.com>
Huh, no replies yet?  I think it's because the friggin' spam wave
pushed me off the first page of Google Groups.  Replying to my own
thread to at least get it back into the active-older-threads list.

Surely somebody will have something to say about this thing :)

-- Scott
From: Ron Garret
Subject: Re: Lexical contexts; or, Lambda, the Ultimate Module?
Date: 
Message-ID: <rNOSPAMon-8C6419.21561924032008@news.gha.chartermi.net>
In article 
<····································@c19g2000prf.googlegroups.com>,
 Scott Burson <········@gmail.com> wrote:

> Huh, no replies yet?  I think it's because the friggin' spam wave
> pushed me off the first page of Google Groups.  Replying to my own
> thread to at least get it back into the active-older-threads list.
> 
> Surely somebody will have something to say about this thing :)
> 
> -- Scott

FWIW, I think it's kind of a cool idea.

rg
From: D Herring
Subject: Re: Lexical contexts; or, Lambda, the Ultimate Module?
Date: 
Message-ID: <w6idnXWcgr5FO3TanZ2dnUVZ_uKpnZ2d@comcast.com>
Scott Burson wrote:
> Huh, no replies yet?  I think it's because the friggin' spam wave
> pushed me off the first page of Google Groups.  Replying to my own
> thread to at least get it back into the active-older-threads list.
> 
> Surely somebody will have something to say about this thing :)

Hijacking your thread (but keeping it alive)...

Symbols are objects.  Packages are places where symbols live.

Files are objects.  Directories are places where files live.

See the similarity?

I don't see how to implement this in Lisp without heavily hacking the 
reader, but I envision hierarchical packages with a full set of 
unix-like commands.

(use-package :super-symbols)
(cd package-path) ; similar to IN-PACKAGE
(mkdir new-package) ; similar to DEFPACKAGE
(rm symbol-or-package) ; similar to UNINTERN
(mv symbol-or-package-old symbol-or-package-new)
(ln symbolic-or-hard-link old-name new-name)
(set-path &rest paths-for-finding-symbols) ; similar to USE-PACKAGE
(get-path)

Thoughts?

- Daniel
From: Scott Burson
Subject: Re: Lexical contexts; or, Lambda, the Ultimate Module?
Date: 
Message-ID: <76af627c-74a8-4066-9313-d7234b322aa4@i29g2000prf.googlegroups.com>
On Mar 25, 6:29 pm, D Herring <········@at.tentpost.dot.com> wrote:
>
> Hijacking your thread (but keeping it alive)...
>
> Symbols are objects.  Packages are places where symbols live.
>
> Files are objects.  Directories are places where files live.

Huh.  Don't know how smart it is for me to participate in the
hijacking of my own thread, but here's a comment anyway :)

Seems to me that directories are much more like lexicons than they are
like packages.

If you don't understand what that means, well, you need to read up on
some material that's relevant to the original thread topic :)  Here
are Ron's previous lexicon threads:

http://groups.google.com/group/comp.lang.lisp/browse_thread/thread/ee3538af0af6167b/6aa0377eae172427
http://groups.google.com/group/comp.lang.lisp/browse_thread/thread/995c913742377050/861d58ad74071240

-- Scott
From: D Herring
Subject: Re: Lexical contexts; or, Lambda, the Ultimate Module?
Date: 
Message-ID: <-tOdnZX8e4aVdHTanZ2dnUVZ_jmdnZ2d@comcast.com>
Scott Burson wrote:
> On Mar 25, 6:29 pm, D Herring <········@at.tentpost.dot.com> wrote:
>> Hijacking your thread (but keeping it alive)...
>>
>> Symbols are objects.  Packages are places where symbols live.
>>
>> Files are objects.  Directories are places where files live.
> 
> Huh.  Don't know how smart it is for me to participate in the
> hijacking of my own thread, but here's a comment anyway :)
> 
> Seems to me that directories are much more like lexicons than they are
> like packages.

In many ways, directories are more like lexicons.  In some ways, I 
feel that packages are a holdover from the old static/dynamic 
variables debate.

Regardless of whether my analogy holds water, I just want to explore 
the design space and move towards an even better Common Lisp.

> If you don't understand what that means, well, you need to read up on
> some material that's relevant to the original thread topic :)  Here
> are Ron's previous lexicon threads:
> 
> http://groups.google.com/group/comp.lang.lisp/browse_thread/thread/ee3538af0af6167b/6aa0377eae172427
> http://groups.google.com/group/comp.lang.lisp/browse_thread/thread/995c913742377050/861d58ad74071240

Yes, and here is a related thread from 2003 (before I saw the light)
http://groups.google.com/group/comp.lang.lisp/browse_thread/thread/98d32d48786d5670/66ca639d0a0c07dc?lnk=gst&q=erann+gat+modules#66ca639d0a0c07dc
(s/module/lexicon/)

- Daniel
From: Kent M Pitman
Subject: Re: Lexical contexts; or, Lambda, the Ultimate Module?
Date: 
Message-ID: <uej9ykxbe.fsf@nhplace.com>
D Herring <········@at.tentpost.dot.com> writes:

> Scott Burson wrote:
> > Huh, no replies yet?  I think it's because the friggin' spam wave
> > pushed me off the first page of Google Groups.  Replying to my own
> > thread to at least get it back into the active-older-threads list.
> > Surely somebody will have something to say about this thing :)
> 
> Hijacking your thread (but keeping it alive)...
> 
> Symbols are objects.  Packages are places where symbols live.
> 
> Files are objects.  Directories are places where files live.
> 
> See the similarity?
> 
> I don't see how to implement this in Lisp without heavily hacking the
> reader, but I envision hierarchical packages with a full set of
> unix-like commands.
> 
> (use-package :super-symbols)
> (cd package-path) ; similar to IN-PACKAGE
> (mkdir new-package) ; similar to DEFPACKAGE
> (rm symbol-or-package) ; similar to UNINTERN
> (mv symbol-or-package-old symbol-or-package-new)
> (ln symbolic-or-hard-link old-name new-name)
> (set-path &rest paths-for-finding-symbols) ; similar to USE-PACKAGE
> (get-path)
> 
> Thoughts?

The hard part is what your notion of identity is.  Put another way,
once you've "lunk" two symbols together, how does EQ behave.  Or just
print for that matter.  In order to make this work on unix, you have
to have the underlying concept that the file system is made of inodes,
not filenames, and that the filenames are just window-dressing.  And
you have to know when people are wanting to talk about names and when
they are talking about file contents.  Since the lisp system already has
an elaborated notion of how that all works, the things you have to do to
make the above work cannot be an add-on, if it's serious--it has to redefine
how all the other functions relate.  For example, if EQ or EQL doesn't do what
it did before, that's an incompatible change. If it does what it did before,
then I'm assuming that the symbols linked together are different under at
least EQ.  If that's so, what's a hard link mean?  And does
 (member 'x '(a b c))
find anything if x and b are hard-linked to one another?  The thing
you're suggesting is a legitimate thought exercise.  But the implications
of it are complicated.  To get it right relies on two things: working
through the details AND having a theory of what you're trying to accomplish.
The latter is subtle and hard to explain other than to say that you will
confront many choices that will seem arbitrary, yet the choice of which way
you make them will have big effects, and where the decision about which is
right is not dictated by science, but by an understanding of which of many
possible alternate universes you'd like to live in after you get done making
all those choices. There are also implications on speed and space and other
tradeoffs that such choices trip over along the way.

I myself have been looking at this problem lately for unrelated
reasons (not your particular cute choice of names, but just the
problem of sharing symbols with unlike names), and I've seen (private)
systems where it's played out with interesting effects (with costs,
too, though).  I don't think it's a simple area, nor one with obvious
answers, but neither do I think it's an utter waste of time to ponder.
I do think that good data requires a lot of care to gather though.
From: Scott Burson
Subject: Re: Lexical contexts; or, Lambda, the Ultimate Module?
Date: 
Message-ID: <017bdaa0-a281-4e2a-a399-742e5b18eeaf@h11g2000prf.googlegroups.com>
On Mar 25, 7:56 pm, Kent M Pitman <······@nhplace.com> wrote:
> D Herring <········@at.tentpost.dot.com> writes:
> > Symbols are objects.  Packages are places where symbols live.
>
> > Files are objects.  Directories are places where files live.
>
> > See the similarity?
>
> > I don't see how to implement this in Lisp without heavily hacking the
> > reader, but I envision hierarchical packages with a full set of
> > unix-like commands.
>
> > (use-package :super-symbols)
> > (cd package-path) ; similar to IN-PACKAGE
> > (mkdir new-package) ; similar to DEFPACKAGE
> > (rm symbol-or-package) ; similar to UNINTERN
> > (mv symbol-or-package-old symbol-or-package-new)
> > (ln symbolic-or-hard-link old-name new-name)
> > (set-path &rest paths-for-finding-symbols) ; similar to USE-PACKAGE
> > (get-path)
>
> > Thoughts?
>
> The hard part is what your notion of identity is.  Put another way,
> once you've "lunk" two symbols together, how does EQ behave.

To expand on my reply to Daniel, if directories are like lexicons, and
filenames are like variables rather than symbols, seems to me many of
these questions go away.  `ln a b' is just (setq b a).  No?

-- Scott
From: D Herring
Subject: Re: Lexical contexts; or, Lambda, the Ultimate Module?
Date: 
Message-ID: <rJ-dndsYtIgXfnTanZ2dnUVZ_j6dnZ2d@comcast.com>
Kent M Pitman wrote:
> D Herring <········@at.tentpost.dot.com> writes:
>> Hijacking your thread (but keeping it alive)...
>> 
>> Symbols are objects.  Packages are places where symbols live.
>> 
>> Files are objects.  Directories are places where files live.
>> 
>> See the similarity?
>> 
>> I don't see how to implement this in Lisp without heavily hacking
>>  the reader, but I envision hierarchical packages with a full set
>>  of unix-like commands.
>> 
>> (use-package :super-symbols)
>> (cd package-path) ; similar to IN-PACKAGE
>> (mkdir new-package) ; similar to DEFPACKAGE
>> (rm symbol-or-package) ; similar to UNINTERN
>> (mv symbol-or-package-old symbol-or-package-new)
>> (ln symbolic-or-hard-link old-name new-name)
>> (set-path &rest paths-for-finding-symbols) ; similar to USE-PACKAGE
>> (get-path)
>> 
>> Thoughts?
> 
> The hard part is what your notion of identity is.  Put another way,
>  once you've "lunk" two symbols together, how does EQ behave.  Or 
> just print for that matter.  In order to make this work on unix, 
> you have to have the underlying concept that the file system is 
> made of inodes, not filenames, and that the filenames are just 
> window-dressing.

Lisp symbols are almost unix inodes.  Names are keys for accessing
symbols.  The hard part is how to handle SYMBOL-NAME...  The name
merely provides access; it would no longer be part of the symbol.  A
compromise might be to leave the name in the symbol, and update it
whenever the current name link is deleted or moved -- you aren't
guaranteed to get the name you used, but you are guaranteed to get a
usable name.

Code which carelessly[1] uses names should be expected to break when
named things move, just like programs which open files by name.  If
this bothers people, then they shouldn't move or delete the symbol's
original name, much as I don't rename things in /bin.
[1] in the "not careful" sense, not in the negligent sense


> And you have to know when people are wanting to talk about names 
> and when they are talking about file contents.  Since the lisp 
> system already has an elaborated notion of how that all works, the
>  things you have to do to make the above work cannot be an add-on,
>  if it's serious--it has to redefine how all the other functions 
> relate. for example, if EQ or EQL doesn't do what it did before, 
> that's an incompatible change. If it does what it did before, then
>  I'm assuming that the symbols linked together are different under
>  at least EQ.  If that's so, what's a hard link mean?

In unix, a symlink is a (fixed or relative) path from one file to
another.  If I delete the target, the symlink is broken until a new
symbol reclaims the target name.  If I replace the target, the symlink
takes on a new value.  A hard link is different; if I move or replace
the target, the link is unaffected; it still points to the same object.

CLHS:  EQ Returns true if its arguments are the same, identical
object; otherwise, returns false.

So hard links are definitely EQ; maybe symlinks should be too.  Other
options include making them EQL but not EQ, EQ but not SYM=, or
SYM= but not EQ.


> And does (member 'x '(a b c)) find anything if x and b are 
> hard-linked to one another.

Yes; they are EQ in the truest sense of the word.


> The thing you're suggesting is a legitimate thought exercise.  But 
> the implications of it are complicated.  To get it right relies on 
> two things: working through the details AND having a theory of what
>  you're trying to accomplish. The latter is subtle and hard to 
> explain other than to say that you will confront many choices that 
> will seem arbitrary, yet the choice of which way you make them will
>  have big effects, and where the decision about which is right is 
> not dictated by science, but by an understanding of which of many 
> possible alternate universes you'd like to live in after you get 
> done making all those choices. There are also implications on speed
>  and space and other tradeoffs that such choices trip over along
> the way.

A few design goals:

- not affect the performance or behavior of normal symbols (not links)

- give hard links the same performance but possibly modified behavior
(e.g. SYMBOL-NAME may act funny)

- provide symlinks (or Lispy variants) so the user can selectively 
trade performance for other behaviors


Some use cases:

- nestable packages -- this is very useful in C++ and Java for 
isolating and manipulating third-party libraries.

- the ability to "backup" symbols/packages and "restore" them later
(e.g. by moving them into a .bak package)

- the ability to collect symbols in aggregate packages (impetus
for hijacking this thread)

- creating an "alternate-common-lisp" package
It would export *the-same-symbols* as CL, but verbosely rearranged 
into subpackages for easier browsing (e.g. ACONS => ALIST:CONS, RPLACA 
=> CONS:SET-CAR, RPLACD => CONS:SET-CDR).  This package should be 
fully compatible and bidirectionally usable with normal code written 
using the stock CL symbols.  Here, having SYMBOL-NAME return the 
original CL name could be seen as a feature.


And a couple pie-in-the-sky ideas:

- support arbitrary "keyword" packages, thereby implementing "C style" 
enumerations.
Example of (use-keyword &rest keyword-packages):
(use-keyword :enumerate-something)
(defun f (x) (eq x :test))
(f :test) => T
(use-keyword :enumerate-something-else)
(f :test) => nil

- provide a standard place for threads to store thread-local data 
(much like the /proc filesystem)


> I myself have been looking at this problem lately for unrelated 
> reasons (not your particular cute choice of names, but just the 
> problem of sharing symbols with unlike names), and I've seen 
> (private) systems where it's played out with interesting effects 
> (with costs, too, though).  I don't think it's a simple area, nor 
> one with obvious answers, but neither do I think it's an utter 
> waste of time to ponder. I do think that good data requires a lot 
> of care to gather though.

If you feel like sharing references, I'm all ears.  I pored over
Ron's lexicons last summer, but came to the conclusion that my desires
couldn't be met within the current reader/symbol/package framework
without creating a name-mangled monster that puts C++ to shame.

- Daniel
From: Ron Garret
Subject: Re: Lexical contexts; or, Lambda, the Ultimate Module?
Date: 
Message-ID: <rNOSPAMon-5A8B9E.12555726032008@news.gha.chartermi.net>
In article <································@comcast.com>,
 D Herring <········@at.tentpost.dot.com> wrote:

> If you feel like sharing references, I'm all ears.  I poured over
> Ron's lexicons last summer, but came to the conclusion that my desires
> couldn't be met within the current reader/symbol/package framework
> without creating a name-mangled monster that puts C++ to shame.

You might want to take another look at lexicons.  Things have changed 
significantly since last summer.  In particular, there are now two 
orthogonal implementations of lexicons.  One implements lexicons as a 
first-class abstract associative map from symbols to bindings (with 
multiple namespaces).  The other is a thin layer that just maps symbols 
onto other symbols at macroexpand time.  The net effect is very similar 
in its essential quality of allowing top-level bindings to be resolved 
at macroexpand/compile time instead of at read-time which is what 
packages do.  But some of the emergent properties are different in 
interesting ways.  For example, the first-class implementation allows 
enormous flexibility in how namespaces are treated.  It allows, for 
example, Lisp-1 and Lisp-2 lexicons to co-exist, and indeed for one to 
be transformed into the other.  (In fact, doing that is a trivial 
operation!)  The symbol-macro-expanding-to-other-symbols solution has a 
better "impedance match" with packages, because it's really just a thin 
macroexpand-time layer on top of packages.  So, for example, you can 
short-circuit the lexicon code and refer to the underlying symbols 
directly using the usual package::symbol syntax.  (It's possible to do 
effectively the same thing in the other implementation too, but it 
requires a major reader hack to hijack the colon syntax.)

rg
From: D Herring
Subject: Re: Lexical contexts; or, Lambda, the Ultimate Module?
Date: 
Message-ID: <7p6dnVODhdgjjnbanZ2dnUVZ_uDinZ2d@comcast.com>
Ron Garret wrote:
> In article <································@comcast.com>,
>  D Herring <········@at.tentpost.dot.com> wrote:
> 
>> If you feel like sharing references, I'm all ears.  I poured over
>> Ron's lexicons last summer, but came to the conclusion that my desires
>> couldn't be met within the current reader/symbol/package framework
>> without creating a name-mangled monster that puts C++ to shame.
> 
> You might want to take another look at lexicons.  Things have changed 
> significantly since last summer.  In particular, there are now two 
> orthogonal implementations of lexicons.  One implements lexicons as a 
> first-class abstract associative map from symbols to bindings (with 
> multiple namespaces).  The other is a thin layer that just maps symbols 
> onto other symbols at macroexpand time.  The net effect is very similar 
> in its essential quality of allowing top-level bindings to be resolved 
> at macroexpand/compile time instead of at read-time which is what 
> packages do.  But some of the emergent properties are different in 
> interesting ways.  For example, the first-class implementation allows 
> enormous flexibility in how namespaces are treated.  It allows, for 
> example, Lisp-1 and Lisp-2 lexicons to co-exist, and indeed for one to 
> be transformed into the other.  (In fact, doing that is a trivial 
> operation!)  The symbol-macro-expanding-to-other-symbols solution has a 
> better "impedance match" with packages, because it's really just a thin 
> macroexpand-time layer on top of packages.  So, for example, you can 
> short-circuit the lexicon code and refer to the underlying symbols 
> directly using the usual package::symbol syntax.  (It's possible to do 
> effectively the same thing in the other implementation too, but it 
> requires a major reader hack to hijack the colon syntax.)

Hi Ron,

Don't get me wrong, it's impressive what you pulled off, but I'm not 
convinced lexicons (even as currently implemented) are the proper 
solution to the problem.

Some questions:
- Have you found a way to handle macros/functions so that they can 
have the proper lambda lists?  Or is that impossible due to delayed 
binding?
- Is there a way to move/link symbols?
- Do (funcall 'lexical-fun args) and #'lexical-fun work correctly?
- What's the syntax for referring to a nested package? e.g. p1:p2:sym
- Would it require major hackery to have the printer show 
fully-qualified names?

Later,
Daniel
From: Ron Garret
Subject: Re: Lexical contexts; or, Lambda, the Ultimate Module?
Date: 
Message-ID: <rNOSPAMon-72688C.01205427032008@news.gha.chartermi.net>
In article <································@comcast.com>,
 D Herring <········@at.tentpost.dot.com> wrote:

> Ron Garret wrote:
> > In article <································@comcast.com>,
> >  D Herring <········@at.tentpost.dot.com> wrote:
> > 
> >> If you feel like sharing references, I'm all ears.  I pored over
> >> Ron's lexicons last summer, but came to the conclusion that my desires
> >> couldn't be met within the current reader/symbol/package framework
> >> without creating a name-mangled monster that puts C++ to shame.
> > 
> > You might want to take another look at lexicons.  Things have changed 
> > significantly since last summer.  In particular, there are now two 
> > orthogonal implementations of lexicons.  One implements lexicons as a 
> > first-class abstract associative map from symbols to bindings (with 
> > multiple namespaces).  The other is a thin layer that just maps symbols 
> > onto other symbols at macroexpand time.  The net effect is very similar 
> > in its essential quality of allowing top-level bindings to be resolved 
> > at macroexpand/compile time instead of at read-time which is what 
> > packages do.  But some of the emergent properties are different in 
> > interesting ways.  For example, the first-class implementation allows 
> > enormous flexibility in how namespaces are treated.  It allows, for 
> > example, Lisp-1 and Lisp-2 lexicons to co-exist, and indeed for one to 
> > be transformed into the other.  (In fact, doing that is a trivial 
> > operation!)  The symbol-macro-expanding-to-other-symbols solution has a 
> > better "impedance match" with packages, because it's really just a thin 
> > macroexpand-time layer on top of packages.  So, for example, you can 
> > short-circuit the lexicon code and refer to the underlying symbols 
> > directly using the usual package::symbol syntax.  (It's possible to do 
> > effectively the same thing in the other implementation too, but it 
> > requires a major reader hack to hijack the colon syntax.)
> 
> Hi Ron,
> 
> Don't get me wrong, it's impressive what you pulled off,

Thanks!

> but I'm not 
> convinced lexicons (even as currently implemented) are the proper 
> solution to the problem.

What do you consider to be "the problem"?  It's possible that lexicons 
were designed to solve a different problem than the one you're trying to 
solve.

> Some questions:
> - Have you found a way to handle macros/functions so that they can 
> have the proper lambda lists?  Or is that impossible due to delayed 
> binding?

I don't understand that question.  What do you mean by "have the proper 
lambda lists"?

> - Is there a way to move/link symbols?

Again, I don't understand what this means.  Lexicons don't have anything 
to do with symbols, they just map symbols into bindings.  It is possible 
for two lexicons to share the same binding, which is somewhat analogous 
to having a "hard link" in unix, but it's the binding that's shared, not 
the symbol.  (Well, the symbol is shared too.  The whole point of 
lexicons is to provide multiple top-level bindings for the same symbol.)
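
A minimal model of that sharing (my own sketch of the idea, not Ron's actual data structures) treats a binding as a mutable box that can be entered in more than one lexicon table:

```lisp
;; Sketch: a binding is a box; two "lexicons" (here just hash tables)
;; can hold the very same box, like a Unix hard link.
(defstruct binding value)

(defvar *lex-a* (make-hash-table))
(defvar *lex-b* (make-hash-table))

(let ((b (make-binding :value 1)))
  (setf (gethash 'counter *lex-a*) b    ; one binding object...
        (gethash 'counter *lex-b*) b))  ; ...entered in both lexicons

;; Assignment through either lexicon is visible through the other:
(setf (binding-value (gethash 'counter *lex-a*)) 99)
(binding-value (gethash 'counter *lex-b*))   ; => 99
```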

> - Do (funcall 'lexical-fun args) and #'lexical-fun work correctly?

No, though they could be made to by shadowing FUNCALL and FUNCTION (and 
hijacking the #' reader macro).  At the moment you have to do (funcall 
lexical-fun args) (without the quote) and (lfunction lexical-fun) 
instead of (function lexical-fun) == #'lexical-fun.  But the ldefun 
macro helpfully binds the symbol's value slot to the function being 
defined so you usually don't have to use LFUNCTION (unless you reassign 
the top-level value binding to be something else).
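
That value-slot convention can be sketched in plain CL like so (MY-LDEFUN is my guess at the shape of LDEFUN's expansion, not the actual macro):

```lisp
;; Sketch of the ldefun value-slot trick: after defining the function,
;; also stash it in the symbol's value slot, so a variable reference
;; to the name yields the function without quote or #'.
(defmacro my-ldefun (name args &body body)
  `(progn
     (defun ,name ,args ,@body)
     (setf (symbol-value ',name) (symbol-function ',name))
     ',name))

(my-ldefun double (n) (* 2 n))

;; In a lexicon body, DOUBLE would be resolved as a value binding,
;; making (funcall double 21) work; portably we can show it via:
(funcall (symbol-value 'double) 21)   ; => 42
```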

> - What's the syntax for referring to a nested package? e.g. p1:p2:sym

I presume you mean "nested lexicon."  There is no special syntax for 
this, though one could be provided if people wanted it.

> - Would it require major hackery to have the printer show 
> fully-qualified names?

Again, I don't understand what you mean by this.  Lexicons are all about 
bindings, not names.  There is no such thing as a "fully qualified name" 
in the context of lexicons; the concept is nonsensical.  It would not 
even be possible to print out the lexicon to which a binding "belongs" 
because a single binding can be in multiple lexicons at the same time.  
(I suppose one could add the concept of a binding's "home lexicon" 
rather like a symbol's home package.)

rg
From: D Herring
Subject: Re: Lexical contexts; or, Lambda, the Ultimate Module?
Date: 
Message-ID: <yrCdnclr4_aw4HHanZ2dnUVZ_h2pnZ2d@comcast.com>
Ron Garret wrote:
> In article <································@comcast.com>,
>  D Herring <········@at.tentpost.dot.com> wrote:
>> but I'm not 
>> convinced lexicons (even as currently implemented) are the proper 
>> solution to the problem.
> 
> What do you consider to be "the problem"?  It's possible that lexicons 
> were designed to solve a different problem than the one you're trying to 
> solve.

Don't worry about "the correct definition of the problem" for now.  As 
Kent pointed out, there are many subtle issues, and judgment calls 
will have to be made.  For now, I'll say the basic problem is that 
Lisp's namespace support is not hierarchical and that there is a 
one-to-one correspondence between symbols and their names.

The fundamental issue I had with lexicons was that they use reader 
macros to redirect names/symbols to their bindings.  While this works 
in many common cases, it fails for unevaluated forms (e.g. 'sym) and 
symbols which are used before being lbound.  These are two important 
cases which probably don't have satisfactory solutions without 
tweaking the lisp reader itself.

Additionally, using reader macros for lexicons means that they are no 
longer available for other purposes.  Other packages could be taught 
how to identify and wrap lexicon reader macros, but that's starting to 
sound fragile.


>> Some questions:
>> - Have you found a way to handle macros/functions so that they can 
>> have the proper lambda lists?  Or is that impossible due to delayed 
>> binding?
> 
> I don't understand that question.  What do you mean by "have the proper 
> lambda lists"?

With packages, I can enter "(mapc " and an IDE like SLIME can query 
the lambda list to show me "(mapc FUNCTION LIST &REST MORE-LISTS)". 
With the original lexicons, the lambda list for all lexicon functions 
and macros was always "(mapc &rest args &environment env)". 
Lexicons2.lisp appears to have fixed this by relying more directly on 
raw CL symbols.


>> - Is there a way to move/link symbols?
> 
> Again, I don't understand what this means.  Lexicons don't have anything 
> to do with symbols, they just map symbols into bindings.  It is possible 
> for two lexicons to share the same binding, which is somewhat analogous 
> to having a "hard link" in unix, but it's the binding that's shared, not 
> the symbol.  (Well, the symbol is shared too.  The whole point of 
> lexicons is to provide multiple top-level bindings for the same symbol.)

Good enough for now; I essentially want the ability to change the 
name/binding mappings without changing the underlying symbols.


>> - Do (funcall 'lexical-fun args) and #'lexical-fun work correctly?
> 
> No, though they could be made to by shadowing FUNCALL and FUNCTION (and 
> hijacking the #' reader macro).  At the moment you have to do (funcall 
> lexical-fun args) (without the quote) and (lfunction lexical-fun) 
> instead of (function lexical-fun) == #'lexical-fun.  But the ldefun 
> macro helpfully binds the symbol's value slot to the function being 
> defined so you usually don't have to use LFUNCTION (unless you reassign 
> the top-level value binding to be something else).

Understood.


>> - What's the syntax for referring to a nested package? e.g. p1:p2:sym
> 
> I presume you mean "nested lexicon."  There is no special syntax for 
> this, though one could be provided if people wanted it.

Yeah; s/package/lexicon/.  Just asking; it's frequently useful to say 
"::a::b::c" in C++ (or the equivalent in Java) in order to specify a 
name which is shadowed by the current scope.


>> - Would it require major hackery to have the printer show 
>> fully-qualified names?
> 
> Again, I don't understand what you mean by this.  Lexicons are all about 
> bindings, not names.  There is no such thing as a "fully qualified name" 
> in the context of lexicons; the concept is nonsensical.  It would not 
> even be possible to print out the lexicon to which a binding "belongs" 
> because a single binding can be in multiple lexicons at the same time.  
> (I suppose one could add the concept of a binding's "home lexicon" 
> rather like a symbol's home package.)

In C++/Java, all names have a unique path, rooted from the "global 
namespace"; I was just looking for a way to print the lexicon 
equivalent.  Normal lisp symbols print as PACKAGE:SYMBOL unless the 
symbol is accessible in *package*; it would be nice if lexicon 
bindings could do the same, in a format that allowed unambiguous 
retrieval of the same binding from within any other lexicon context.
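
For comparison, here is the stock printer behavior being asked about, with nothing lexicon-specific (the DEMO package is made up for illustration):

```lisp
;; A symbol prints with a package prefix exactly when it is not
;; accessible in *PACKAGE* (CLHS 22.1.3.3).
(defpackage :demo (:use) (:export :thing))

(let ((*package* (find-package :cl-user)))
  (prin1-to-string 'demo:thing))      ; => "DEMO:THING"

(let ((*package* (find-package :demo)))
  (prin1-to-string 'demo:thing))      ; => "THING"
```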

- Daniel
From: Ron Garret
Subject: Re: Lexical contexts; or, Lambda, the Ultimate Module?
Date: 
Message-ID: <rNOSPAMon-53D097.23430127032008@news.gha.chartermi.net>
In article <································@comcast.com>,
 D Herring <········@at.tentpost.dot.com> wrote:

> Ron Garret wrote:
> > In article <································@comcast.com>,
> >  D Herring <········@at.tentpost.dot.com> wrote:
> >> but I'm not 
> >> convinced lexicons (even as currently implemented) are the proper 
> >> solution to the problem.
> > 
> > What do you consider to be "the problem"?  It's possible that lexicons 
> > were designed to solve a different problem than the one you're trying to 
> > solve.
> 
> Don't worry about "the correct definition of the problem" for now.  As 
> Kent pointed out, there are many subtle issues, and judgment calls 
> will have to be made.  For now, I'll say the basic problem is that 
> Lisp's namespace support is not hierarchical and that there is a 1-1 
> correlation between symbols and their names.

OK, I'm down with that.

> The fundamental issue I had with lexicons was that they use reader 
> macros to redirect names/symbols to their bindings.

Only to implement the ^foo syntax, and it's actually pretty easy to fix 
it so even that is not necessary.  (In fact, the Lexicons2 
implementation doesn't use reader macros.)

> While this works 
> in many common cases, it fails for unevaluated forms (e.g. 'sym)

Well, it forces you into a different programming style, where you use 
symbols as identifiers whose semantics are supplied by the environment 
(which is to say, the current lexicon) rather than as first-class data 
objects with their own semantics.

> and symbols which are used before being lbound.

That is a real problem.  (Ironically, it could be solved by using reader 
macros, but as I noted above I don't do that for precisely the reasons 
that you cite.)

> These are two important 
> cases which probably don't have satisfactory solutions without 
> tweaking the lisp reader itself.

There are other possible solutions, but they involve changes to the 
implementation.  For example, you could introduce an 
undefined-free-variable hook function.

> Additionally, using reader macros for lexicons means that they are no 
> longer available for other purposes.  Other packages could be taught 
> how to identify and wrap lexicon reader macros, but that's starting to 
> sound fragile.

As I noted above, lexicons don't really need reader macros.

> >> Some questions:
> >> - Have you found a way to handle macros/functions so that they can 
> >> have the proper lambda lists?  Or is that impossible due to delayed 
> >> binding?
> > 
> > I don't understand that question.  What do you mean by "have the proper 
> > lambda lists"?
> 
> With packages, I can enter "(mapc " and an IDE like SLIME can query 
> the lambda list to show me "(mapc FUNCTION LIST &REST MORE-LISTS)". 
> With the original lexicons, the lambda list for all lexicon functions 
> and macros was always "(mapc &rest args &environment env)". 
> Lexicons2.lisp appears to have fixed this by relying more directly on 
> raw CL symbols.

Ah.  Yes, interacting with the development environment is a whole can of 
worms that I have not yet opened.  I was hoping to get some help with 
that since I've never been much of an emacs hacker.

> >> - Is there a way to move/link symbols?
> > 
> > Again, I don't understand what this means.  Lexicons don't have anything 
> > to do with symbols, they just map symbols into bindings.  It is possible 
> > for two lexicons to share the same binding, which is somewhat analogous 
> > to having a "hard link" in unix, but it's the binding that's shared, not 
> > the symbol.  (Well, the symbol is shared too.  The whole point of 
> > lexicons is to provide multiple top-level bindings for the same symbol.)
> 
> Good enough for now; I essentially want the ability to change the 
> name/binding mappings without changing the underlying symbols.

Look at the LREBIND function.  (It's only implemented in one of the 
implementations at the moment, but it's pretty straightforward to port 
it.)

> >> - What's the syntax for referring to a nested package? e.g. p1:p2:sym
> > 
> > I presume you mean "nested lexicon."  There is no special syntax for 
> > this, though one could be provided if people wanted it.
> 
> Yeah; s/package/lexicon/.  Just asking; it's frequently useful to say 
> "::a::b::c" in C++ (or the equivalent in Java) in order to specify a 
> name which is shadowed by the current scope.

I strongly suspect that you'll want to do that less when using lexicons.  
The reason you sometimes want to do that when using packages is that 
everything has to be exported manually, and so often something doesn't 
get exported that you end up needing.  But with lexicons everything is 
exported by default, so this becomes less of an issue.  But it would be 
very easy to add this feature if people wanted it.

> >> - Would it require major hackery to have the printer show 
> >> fully-qualified names?
> > 
> > Again, I don't understand what you mean by this.  Lexicons are all about 
> > bindings, not names.  There is no such thing as a "fully qualified name" 
> > in the context of lexicons; the concept is nonsensical.  It would not 
> > even be possible to print out the lexicon to which a binding "belongs" 
> > because a single binding can be in multiple lexicons at the same time.  
> > (I suppose one could add the concept of a binding's "home lexicon" 
> > rather like a symbol's home package.)
> 
> In C++/Java, all names have a unique path, rooted from the "global 
> namespace"; I was just looking for a way to print the lexicon 
> equivalent.  Normal lisp symbols print as PACKAGE:SYMBOL unless the 
> symbol is accessible in *package*; it would be nice if lexicon 
> bindings could do the same, in a format that allowed unambiguous 
> retrieval of the same binding from within any other lexicon context.

Ah.  The problem here is that lexicons don't store symbols, they store 
bindings.  Right now there isn't a way to take a binding and figure out 
what symbol it is bound to.  This could get quite tricky, since one 
binding can actually be associated with multiple symbols, or one symbol 
but in multiple lexicons.  So the whole concept of "fully qualified 
name" gets a little slippery.  If you can show me an actual use-case 
that might help.

rg
From: Ken Tilton
Subject: Re: Lexical contexts; or, Lambda, the Ultimate Module?
Date: 
Message-ID: <47ea41d0$0$5622$607ed4bc@cv.net>
Scott Burson wrote:
> Huh, no replies yet?  I think it's because the friggin' spam wave
> pushed me off the first page of Google Groups.  Replying to my own
> thread to at least get it back into the active-older-threads list.
> 
> Surely somebody will have something to say about this thing :)

(a) I am having flashbacks to my announcement here of Cells, née 
Semaphors, back in '95 or '96. Little did I know that c.l.lisp was and 
is the land of the walking dead. Of course back then there was no PG 
(er, his writings, of course) and the RtL had yet to be spawned in 
response to the flood (yep) of once-a-month-or-two noobs coming here to 
ask their first questions, but still, we are all busy (with FreeCell if 
nothing else) and long technical articles such as yours are easy to skip 
over to find the good stuff like debates over God and the FSF. 
Meanwhile, thirteen years later we have Celtk, Cells-Gtk, and Hunchncells 
rising from the sea and there is talk (I am hearing voices anyway) of 
Cells 3.1 to add referential integrity and maybe clean up unfinished 
business handling, so these things take time.

(b) Please don't call me Shirley.

hth, kenny

(c) Just comparing it to lexicons was enough to make me skip it -- that 
had me thinking solution in search of problem. k

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/

"In the morning, hear the Way;
  in the evening, die content!"
                     -- Confucius
From: Scott Burson
Subject: Re: Lexical contexts; or, Lambda, the Ultimate Module?
Date: 
Message-ID: <3b76aed9-7f65-4615-ace3-efae3876d6dc@i12g2000prf.googlegroups.com>
On Mar 26, 5:30 am, Ken Tilton <···········@optonline.net> wrote:
> Scott Burson wrote:
> > Huh, no replies yet?  I think it's because the friggin' spam wave
> > pushed me off the first page of Google Groups.  Replying to my own
> > thread to at least get it back into the active-older-threads list.
>
> > Surely somebody will have something to say about this thing :)
>
> (a) I am having flashbacks to my announcement here of Cells

Yeah.  Nobody's using my FSet library either, AFAIK (except me, of
course).  Probably, I should write some tutorial material -- that will
make it easier to get started with -- except it's hard to get myself
to do that without some indication that anyone will care.

> long technical aricles such as yours are easy to skip
> over to find the good stuff like debates over God and the FSF.

Yeah, TLDR as the bloggers say :)

> (c) Just comparing it to lexicons was enough to make me skip it -- that
> had me thinking solution in search of problem. k

Point taken.  I think it's difficult to know how useful this thing is
now.  I'll have to use it in other projects for a while to see.

-- Scott
From: Ken Tilton
Subject: Re: Lexical contexts; or, Lambda, the Ultimate Module?
Date: 
Message-ID: <47eaa848$0$15195$607ed4bc@cv.net>
Scott Burson wrote:
> On Mar 26, 5:30 am, Ken Tilton <···········@optonline.net> wrote:
> 
>>Scott Burson wrote:
>>
>>>Huh, no replies yet?  I think it's because the friggin' spam wave
>>>pushed me off the first page of Google Groups.  Replying to my own
>>>thread to at least get it back into the active-older-threads list.
>>
>>>Surely somebody will have something to say about this thing :)
>>
>>(a) I am having flashbacks to my announcement here of Cells
> 
> 
> Yeah.  Nobody's using my FSet library either, AFAIK (except me, of
> course).  Probably, I should write some tutorial material -- that will
> make it easier to get started with -- except it's hard to get myself
> to do that without some indication that anyone will care.

Oh, Christ, I thought I had written that... Many are called, few are 
chosen -- who wants users who need documentation?

Less abusively, if they aren't asking for doc now, they won't read it if 
you provide it, and doc is way too hard and un-fun for that.

kenny

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/

"In the morning, hear the Way;
  in the evening, die content!"
                     -- Confucius
From: Scott Burson
Subject: Re: Lexical contexts; or, Lambda, the Ultimate Module?
Date: 
Message-ID: <47881627-a87f-48bc-920c-c2433d642c31@i29g2000prf.googlegroups.com>
On Mar 26, 12:46 pm, Ken Tilton <···········@optonline.net> wrote:
> Scott Burson wrote:
> > Nobody's using my FSet library either, AFAIK (except me, of
> > course).  Probably, I should write some tutorial material -- that will
> > make it easier to get started with -- except it's hard to get myself
> > to do that without some indication that anyone will care.
>
> Oh, Christ, I thought I had written that... Many are called, few are
> chosen -- who wants users who need documentation?
>
> Less abusively, if they aren't asking for doc now, they won't read it if
> you provide it, and doc is way too hard and un-fun for that.

Just to be clear -- it's tutorial material specifically that I have
yet to write for FSet.  The API reference is quite complete and
thorough, I believe; the design rationale is also fairly complete,
though it could use a bit of reorganization and is written in rather
too breathless a style.

I hoped that that would be enough documentation for at least a couple
senior Lisp people to take to FSet -- and then one of them would be
inspired to write a tutorial for the less experienced.  (Didn't
someone other than yourself write a Cells tutorial?)  For whatever
reasons, it hasn't happened.  Maybe I need the tutorial to even
attract the interest of the senior people.  Or -- maybe no one will
ever care :)

-- Scott
From: Ken Tilton
Subject: Re: Lexical contexts; or, Lambda, the Ultimate Module?
Date: 
Message-ID: <47eacdcb$0$25023$607ed4bc@cv.net>
Scott Burson wrote:
> On Mar 26, 12:46 pm, Ken Tilton <···········@optonline.net> wrote:
> 
>>Scott Burson wrote:
>>
>>>Nobody's using my FSet library either, AFAIK (except me, of
>>>course).  Probably, I should write some tutorial material -- that will
>>>make it easier to get started with -- except it's hard to get myself
>>>to do that without some indication that anyone will care.
>>
>>Oh, Christ, I thought I had written that... Many are called, few are
>>chosen -- who wants users who need documentation?
>>
>>Less abusively, if they aren't asking for doc now, they won't read it if
>>you provide it, and doc is way too hard and un-fun for that.
> 
> 
> Just to be clear -- it's tutorial material specifically that I have
> yet to write for FSet.  The API reference is quite complete and
> thorough, I believe; the design rationale is also fairly complete,
> though it could use a bit of reorganization and is written in rather
> too breathless a style.

Ah, OK. You might be proving my point, tho. :(

Me, I knew from day one Cells would need a killer app to be understood, 
but I thought it would be a Lisp GUI. Of course that was before the 
interweb; now I am betting on Hunchncells... to prove that I am right, I 
am the only one actually programming in Lisp.

> 
> I hoped that that would be enough documentation for at least a couple
> senior Lisp people to take to FSet -- and then one of them would be
> inspired to write a tutorial for the less experienced.  (Didn't
> someone other than yourself write a Cells tutorial?)

Good memory:

    http://bc.tech.coop/blog/030911.html

>  For whatever
> reasons, it hasn't happened.  Maybe I need the tutorial to even
> attract the interest of the senior people.  Or -- maybe no one will
> ever care :)

Do what Froggy and Qi Boy do, spam us with superior FSet solutions every 
time you see code being discussed here. The denizens will ignore you, 
the lurkers will check out your site.

I think c.l.lisp has the coolest lurkers going, they're kinda the dark 
matter of Lisp.

kenny

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/

"In the morning, hear the Way;
  in the evening, die content!"
                     -- Confucius