From: Joerg Hoehle
Subject: packages and preventing name-collisions in large systems
Date:
Message-ID: <5g4brs$fpk@omega.gmd.de>
Hi,
In my quest for the "do it small and do it large as well" language
Gral, which led me to investigate Scheme, Haskell, Sather, Icon and
others for useful ideas and literature, I came across Henry Baker's
"Critique of DIN Kernel Lisp Definition Version 1.2".
It reads:
: We therefore recommend that DKLisp drop the use of "packages"[24]
: and better utilize the power of lexical scoping to solve the
: name-collision problems of large-scale systems.
I wanted to comment that I'm well aware of the possibilities to write
code using lexical scoping in Lisp that reduces the top-level defuns
to strictly the exported functions. I guess that Pascal and Modula
programmers would also be familiar with this style.
However, the problem I see is with prototyping, ease of debugging and
the natural laziness of programmers. It's much easier to add a
"static" in front of a C declaration and definition (two lines to
change) and thus ensure some kind of privacy than it is to rewrite
a defun into a flet, as many more editing operations are involved.
The key is that I (and presumably other programmers as well) tend to
first write small functions and test them, then compose and integrate
them into larger functions. I can easily test top-level functions,
but can test local flet functions deep inside a defun only with pain.
Maybe the Lisp system environment that I use is to blame, but I feel
there's more to it: until it's as easy to make and test a private
function as it is to add a "static" in C, Lisp source files will
continue to contain many top-level definitions that clutter the name
space.
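A toy example of the difference (invented names):

;; prototyping style: the helper is a top-level defun,
;; directly testable at the read-eval-print loop
(defun square (x)
  (* x x))

(defun sum-of-squares (a b)
  (+ (square a) (square b)))

;; after hiding the helper: one flet, reindent the body --
;; many edits instead of C's single "static"
(defun sum-of-squares (a b)
  (flet ((square (x)
           (* x x)))
    (+ (square a) (square b))))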
But this also leads to another topic, namely the significance of files
to the language. Typically, Lisp systems tend to load everything into
one incrementally growing, big space (or the complete OS is that one
space, as in Lisp machines), whereas for many (small) applications,
the shell script or C file approach is more appropriate: accessible
functions are limited in number, lexically known and limited to the
file being executed (Shell) or compiled (C, Pascal etc.). Common Lisp
ignores the concept of files except for *readtable* and *package*
being bound. I do not claim that files are the ideal grain for
scoping, just that the "all in one" approach is not satisfactory, as
you all know.
Any ideas about how to improve modularity and privacy while
maintaining ease of prototyping and debugging in Lisp?
Regards,
Jo"rg Ho"hle.
············@gmd.de http://zeus.gmd.de/~hoehle/amiga-clisp.html
From: Marc Wachowitz
Subject: Re: packages and preventing name-collisions in large systems
Date:
Message-ID: <5gh1tk$fqh@trumpet.uni-mannheim.de>
Joerg Hoehle (······@zeus.gmd.de) wrote:
> Any ideas about how to improve modularity and privacy while
> maintaining ease of prototyping and debugging in Lisp?
Common Lisp's concept of "official access" with PACKAGE:NAME and
"intimate access" with PACKAGE::NAME seems quite appopriate for
that purpose; one might just wish to emphasize modularity more by
not allowing intimate access all over the place, but restricting
it in some way, and generate diagnostic messages for general usage
in some modes of operation. This should also be done if one would
replace the package concept with a more conventional module system,
where MODULE:NAME (and similarly MODULE::NAME) would refer to the
top-level binding of NAME in MODULE, and symbols themselves wouldn't
belong to any particular modules/packages. The representation which
is observed by macros and related processing phases would see two
special forms - say, (access MODULE NAME) and (private-access MODULE
NAME) - to which the short-hand notation would expand - quite similar
to #'f becoming (function f). It should still be possible to import
some names explicitly (with renaming where needed) into a name space,
similar to EuLisp, but the general form should always be allowed
without any need for further declarations, for the benefit of clean
generated code.
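To make the idea concrete, here is a toy Common Lisp emulation (the
registry and DEFINE-IN-MODULE are invented for this sketch, and ACCESS
is modelled as an ordinary macro rather than a special form):

(defvar *modules* (make-hash-table :test 'eq)) ; module name -> bindings

(defun module-table (module)
  (or (gethash module *modules*)
      (setf (gethash module *modules*)
            (make-hash-table :test 'eq))))

(defmacro define-in-module (module name value)
  `(setf (gethash ',name (module-table ',module)) ,value))

;; MODULE:NAME would read as (ACCESS MODULE NAME), just as #'F
;; reads as (FUNCTION F); here ACCESS is merely a macro.
(defmacro access (module name)
  `(gethash ',name (module-table ',module)))

;; (define-in-module utils answer 42)
;; (access utils answer)  =>  42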
Given such facilities, one could then write hygienic macros with an IMO
quite acceptable effort, and in a nice style, without introducing a
lot of new concepts (like extending Scheme-style automatic hygiene to
a module system, and allowing it to use the full power of the language
for transformations). With Common Lisp's packages, the best way I know
to write reasonably safe macros is to alias the referred entities in an
obscure package which is only meant for that purpose, such that nobody
is likely to shadow those bindings. With a module system, the output of
macros could safely refer to top-level bindings of any module, use
(gensym) as usual for its "private" auxiliary names, and use ordinary
symbols for names to be looked up in the caller's environment. I'm
currently considering adding such a module system on top of ISO Lisp,
which is otherwise a nice start for a small (yet extendable) base
Lisp, appropriate for the delivery of applications which don't carry
much around beyond their actual needs. (For efficiency and interaction
with the non-Lisp world, a more expressive type system or some other
way to indicate low-level properties of specific entities will surely
be necessary, but at least it gets rid of Common Lisp's extreme dynamic
modifiability, which often isn't needed for non-development purposes.)
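In today's Common Lisp, that discipline looks roughly like this (a
minimal sketch; the package and all other names are invented):

(defpackage #:my-macro-internals   ; obscure package, unlikely to be
  (:use #:common-lisp)             ; shadowed by anyone
  (:export #:do-step))

(defun my-macro-internals:do-step (x)
  (1+ x))

(defmacro with-stepped ((var form) &body body)
  (let ((tmp (gensym "STEP")))     ; private auxiliary name
    `(let* ((,tmp ,form)
            (,var (my-macro-internals:do-step ,tmp)))
       ,@body)))

;; (with-stepped (y 41) (print y))  prints 42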
If the one-to-one connection between modules and files is too strong,
another middle-way solution may be to let a module definition list the
files belonging to it (or generate such information automatically from
some statically visible form at the beginning of source files, with a
way to make new relevant source files known to the implementation). One
may also contemplate a hierarchical name space for module names, e.g.
M1:M2:M3:NAME (not implying anything about the contents of modules, but
just providing an explicit structure for several independently developed
libraries), or simply something like Dylan's two-layered library/module
concept.
Note that my above comments e.g. about packages and the ability to
do many things at run time aren't meant to say that these facilities
aren't sometimes useful - but IMO there are many cases where a clear
separation between a development environment and a delivered program
can be very helpful, e.g. for optimizations by the implementation, as
well as more obvious dependencies for a human reader who's not very
familiar with the whole system, to allow relatively isolated analysis
and testing of components in large systems. It may be nice to have
mechanisms for run-time adaption available in the language, but for
typical maintenance, this could usually also be provided with special
system tools, and for those parts which are expected to change as
a normal part of system execution, it may be preferable to make the
dynamic nature of the behaviour explicit, e.g. in ISO Lisp (draft 19):
(defglobal *current-foo* (lambda (...) ...)) ; initial definition
(defun current-foo (:rest args) (apply *current-foo* args))
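; stable entry point: callers go through CURRENT-FOO, and the
; behaviour is changed only by assigning *CURRENT-FOO*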
instead of allowing any modification, and expecting dynamic extraction
of a symbol's function cell at run-time for general execution, as in
Common Lisp (even if there are cases where a good compiler will be able
perform further optimizations, a general free-standing defun, which is
not meant to be inlined, is open to such modifications, though for most
function definitions, no such modification is going to happen).
-- Marc Wachowitz <··@ipx2.rz.uni-mannheim.de>
With a mighty <··········@trumpet.uni-mannheim.de>,
··@ipx2.rz.uni-mannheim.de uttered these wise words...
> Given such facilities, one could then write hygienic macros with an IMO
> quite acceptable effort, and in a nice style, without introducing a
> lot of new concepts (like extending Scheme-style automatic hygiene to
> a module system, and allowing it to use the full power of the language
> for transformations). With Common Lisp's packages, the best way I know
> to write reasonably safe macros is to alias the referred entities in an
> obscure package which is only meant for that purpose, such that nobody
> is likely to shadow those bindings. With a module system, the output of
> macros could safely refer to top-level bindings of any module, use
> (gensym) as usual for its "private" auxiliary names, and use ordinary
> symbols for names to be looked up in the caller's environment. I'm
I'd be very happy if hygienic macros were a standard feature
in CL, perhaps with the existing macro system underneath it, as an
option. After all, implementations may well use the existing macro
system _anyway_, but some of us might choose not to use it, preferring
a more hygienic style.
I know we have that option already, but it's not exactly a standard
feature of CL. How many CL systems provide a hygienic macro system as
part of the system? (I feel the same way about the series functions,
but I won't go into that now.)
> currently considering adding such a module system on top of ISO Lisp,
> which is otherwise a nice start for a small (yet extendable) base
> Lisp, appropriate for the delivery of applications which don't carry
> much around beyond their actual needs. (For efficiency and interaction
> with the non-Lisp world, a more expressive type system or some other
> way to indicate low-level properties of specific entities will surely
> be necessary, but at least it gets rid of Common Lisp's extreme dynamic
> modifiability, which often isn't needed for non-development purposes.)
Yes, not all of us desire the most dynamic CL features at runtime. I
expect that there'll be some disagreement about what these features
should be, but if you're not proposing changes to CL itself, but a
modified dialect of CL, then I don't see how there could be a problem.
> If the one-to-one connection between modules and files is too strong,
> another middle-way solution may be to let a module definition list the
> files belonging to it (or generate such information automatically from
> some statically visible form at the beginning of source files, with a
> way to make new relevant source files known to the implementation). One
> may also contemplate a hierarchical name space for module names, e.g.
> M1:M2:M3:NAME (not implying anything about the contents of modules, but
> just providing an explicit structure for several independently developed
> libraries), or simply something like Dylan's two-layered library/module
> concept.
Couldn't you just use declarations to mark the start and end of a
module, or an enclosing special form? How about the following:
(defmodule "module_name"
(load "file1") ; not indented for convenience only
(load "file2")
(expr1)
(expr2)
...
(exprn)
)
Each (expr) could be a normal toplevel expression. Would that not
work? If not, then a Dylan style module system also sounds good.
--
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
Martin Rodgers | Programmer and Information Broker | London, UK
Please remove the "nospam" if you want to email me.
From: Marc Wachowitz
Subject: Re: packages and preventing name-collisions in large systems
Date:
Message-ID: <5gmvl6$326@trumpet.uni-mannheim.de>
Cyber Surfer (············@nospam.wildcard.demon.co.uk) wrote:
> I know we have that option already, but it's not exactly a standard
> feature of CL. How many CL systems provide a hygienic macro system as
> part of the system?
Yes, let's remember that hygiene is a property of macro behaviour,
not necessarily something which happens automatically due to the
semantic properties of a transformation language. If one is somewhat
aware of the relevant interactions (which are admittedly quite
complicated in Common Lisp), it's not very difficult to get such an
effect, at least for those cases where the macro user doesn't want
to abuse the mechanisms (and preventing intentional abuse isn't
so interesting, since it will eventually prohibit any interesting
usage as well). If one does that frequently, the tools for pattern
matching and structure composition (beyond what's already in CL
anyway) and support for a hygienic style are expressible on top
of CL. While it would surely be nice to have this already done
as part of CL, in my view, someone who is not yet able to understand
and build such support should probably leave macro-writing alone.
Without understanding the context of program transformations, the
results of even seemingly simple macros on the software development
process are likely to make things much worse than just staying
with functional abstractions, and perhaps
creating a few superfluous intermediate functions. Using abstraction
techniques without understanding abstraction processes yields the kind
of "black boxes" which those having to use them would prefer to throw
against a wall. On the other hand, _only_ inventing lots of trivial
syntactic abbreviations to save typing easily leads to feature
explosion, hiding the conceptual structure of the program, instead
of emphasizing the essential notions. (Frankly, for more complicated
semantic transformations via macros, which aren't just some obvious
syntactic sugar, I'd prefer a combination of top-level references,
GENSYMs and ordinary symbols over a completely automatic mechanism
which isn't very well integrated with the other data structures and
manipulations of Lisp, even if something like Scheme's syntax-rules
may be more convenient for simple surface transformations.)
> Couldn't you just use declarations to mark the start and end of a
> module, or an enclosing special form? How about the following:
> (defmodule "module_name"
> (load "file1") ; not indented for convenience only
> (load "file2")
[...]
> )
> Each (expr) could be a normal toplevel expression. Would that not
> work?
Yes, but I wouldn't want to use LOAD, which is traditionally used
as an execution-time mechanism (with some "interesting" features
to allow pre-compilation and optimizations), but prefer a conceptual
separation between program compilation and program execution, quite
similar to conventional compiler technology (just that macros can
extend the compiler). In my vision, there are compile-time dependencies
due to used macros (and functions/data used by macros) and imported
entities from other modules (possibly including knowledge e.g. about
a function's arity and type signature etc.), but beyond that, quite
traditional object files are produced in independent translation steps,
without any shared state beyond the file system, and linked together
to arbitrary applications (where "linking" could be any mixture of
monolithic application-linking, shared libraries, and loading object
files on demand, but all without implying redefinition of what has
already been defined - that's left for development/debugging tools).
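Common Lisp's EVAL-WHEN already hints at this compile-time/run-time
split, though without the file-level guarantees described above (a
minimal sketch, all names invented):

;; needed at compile time, because the macro below calls it
(eval-when (:compile-toplevel :load-toplevel :execute)
  (defun expand-repeat (n form)
    (make-list n :initial-element form)))

(defmacro repeat (n &body body)
  `(progn ,@(expand-repeat n `(progn ,@body))))

;; ordinary run-time code; needs only the compiled output above
(defun beep-three-times ()
  (repeat 3 (write-line "beep")))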
IMO, Lisp shouldn't try to dominate an application, and besides the
need for being more explicit (on one side or more, depending on the
situation) about data structures shared between code from several
languages, and perhaps mentioning the names of some specific libraries,
one shouldn't even have to know whether some object/library which one
is linking has been written in Lisp, C, or whatever else. There should
be no need to hesitate about implementing in Lisp e.g. most of what's
in /usr/bin on Unix, or most of what's usually done as shell script or
in Perl, or most larger applications.
(In case you wonder: I'm exploring this for education/enjoyment, happy to
share it when it's ready, but not to make money or to reach the masses.
My computer at home runs Linux, has so far not even X installed, and will
never carry one of those popular GUIs with embedded bootstrap loader. I'm
programming and using enough customer-centered machinery at my job, where
the likelihood for being able to use something like Lisp is a very close
approximation of zero, mostly due to factors which have little to do with
the technological matters above. Nevertheless, the problems to be solved
there do influence my thinking about programming, like e.g. the usage of
large database applications, concurrently developed/maintained/extended
and used over several years, by many people with a variety of skills.)
-- Marc Wachowitz <··@ipx2.rz.uni-mannheim.de>
With a mighty <··········@trumpet.uni-mannheim.de>,
··@ipx2.rz.uni-mannheim.de uttered these wise words...
[hygiene stuff deleted]
> creating a few superfluous intermediate functions. Using abstraction
> techniques without understanding abstraction processes yields the kind
> of "black boxes" which those having to use them would prefer to throw
> against a wall. On the other hand, _only_ inventing lots of trivial
> syntactic abbreviations to save typing easily leads to feature
> explosion, hiding the conceptual structure of the program, instead
> of emphasizing the essential notions. (Frankly, for more complicated
> semantic transformations via macros, which aren't just some obvious
> syntactic sugar, I'd prefer a combination of top-level references,
> GENSYMs and ordinary symbols over a completely automatic mechanism
> which isn't very well integrated with the other data structures and
> manipulations of Lisp, even if something like Scheme's syntax-rules
> may be more convenient for simple surface transformations.)
That's why I like higher order functions so much. Macros have their
uses, and so do HOFs. The trick is to know when to use one and not the
other. IMHO that comes with experience.
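A stock example of that trade-off (a minimal sketch, names invented):

;; as a HOF: no new syntax, composes, can be passed around,
;; but the caller must write an explicit LAMBDA
(defun call-with-timing (thunk)
  (let ((start (get-internal-real-time)))
    (prog1 (funcall thunk)
      (format t "~&took ~D ticks~%"
              (- (get-internal-real-time) start)))))

;; as a macro: nicer call site, but it's new syntax and you
;; can't MAPCAR over it
(defmacro with-timing (&body body)
  `(call-with-timing (lambda () ,@body)))

;; (with-timing (sleep 1))
;; (call-with-timing (lambda () (sleep 1)))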
> Yes, but I wouldn't want to use LOAD, which is traditionally used
> as an execution-time mechanism (with some "interesting" features
> to allow pre-compilation and optimizations), but prefer a conceptual
> separation between program compilation and program execution, quite
> similar to conventional compiler technology (just that macros can
Yes, ideally LOAD would be hidden behind some syntactic sugar.
> extend the compiler). In my vision, there are compile-time dependencies
> due to used macros (and functions/data used by macros) and imported
> entities from other modules (possibly including knowledge e.g. about
> a function's arity and type signature etc.), but beyond that, quite
> traditional object files are produced in independent translation steps,
> without any shared state beyond the file system, and linked together
> to arbitrary applications (where "linking" could be any mixture of
> monolithic application-linking, shared libraries, and loading object
> files on demand, but all without implying redefinition of what has
> already been defined - that's left for development/debugging tools).
That's what I've been looking for. I'd probably be using Gambit-C if
it had more support for working with Windows features like DLLs. I'd
like to write Lisp code that'll go into a DLL, to be called by other
tools (like a web server). Alas, it seems that the only Lisp that can
do this is too expensive for me. Still, I'd like to know how ILOG Talk
does it - if indeed it _can_ do it (I've not seen it).
> IMO, Lisp shouldn't try to dominate an application, and besides the
> need for being more explicit (on one side or more, depending on the
> situation) about data structures shared between code from several
> languages, and perhaps mentioning the names of some specific libraries,
> one shouldn't even have to know whether some object/library which one
> is linking has been written in Lisp, C, or whatever else. There should
> be no need to hesitate about implementing in Lisp e.g. most of what's
> in /usr/bin on Unix, or most of what's usually done as shell script or
> in Perl, or most larger applications.
Exactly! The problem I (and probably many others) have is that there are
tools designed to work with C and C++, and unless your Lisp (or
whatever language you use) can work the same way, you're stuffed.
Of course, a lot of Lisp systems appear to assume that you won't want
to do things the C/C++ way, as that's too ugly. Well, it's true that
it's ugly, but there are too many people who feel otherwise for us to
argue with them.
> (In case you wonder: I'm exploring this for education/enjoyment, happy to
> share it when it's ready, but not to make money or to reach the masses.
Even if it's only for education/enjoyment, that should be enough to
show that it's possible to do it, _and_ useful. Go for it.
> My computer at home runs Linux, has so far not even X installed, and will
> never carry one of those popular GUIs with embedded bootstrap loader. I'm
> programming and using enough customer-centered machinery at my job, where
> the likelihood for being able to use something like Lisp is a very close
> approximation of zero, mostly due to factors which have little to do with
> the technological matters above. Nevertheless, the problems to be solved
> there do influence my thinking about programming, like e.g. the usage of
> large database applications, concurrently developed/maintained/extended
> and used over several years, by many people with a variety of skills.)
I'm in a similar position, except writing software to run on a web
server. There's _no chance_ of using Lisp and CL-HTTP, as I'd then be
the only one in the company who could use it (ouch). I'd also have to
reverse engineer a lot of other tools. Besides, these tools have been
purchased, so _of course_ we're going to use them.
So, rather than demanding that everyone else in the company work _my_
way (yeah, right), I have to work their way. That means writing DLLs
to be called by various web server tools. ACL/PC and LispWorks can't
do that. Gambit-C can't do it. Hmm. So I use C++...
I'm hoping that Amzi Prolog can help change this, as that _can_ do it,
and at a price that I can (just about) afford. I'd be far happier
using Lisp, as there'd then be less of a learning curve. A Common Lisp
compiler would be _perfect_, as I've been using that for years. <sigh>
So, good luck. I'll be interested to see the results.
--
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
Martin Rodgers | Programmer and Information Broker | London, UK
Please remove the "nospam" if you want to email me.