From: Don Geddis
Subject: Static type checking in Lisp
Date: 
Message-ID: <877k42rjqw.fsf@sidious.geddis.org>
On a different newsgroup, I had an extensive discussion with a fan of
Ada/Haskell/ML about compile-time type checking.  He didn't know much about
Lisp, but, believing it to have dynamic typing, criticized Lisp for not having
the compiler catch some possible bugs.  I then mentioned that some Lisp
compilers (notably CMUCL, with the "python" system) have extensive compile-time
type inference.

He then said that Lisp usually uses type inference for optimizations (probably
true).  However, some compilers (e.g. CMUCL again) can give you compile-time
type errors (as warnings).

We finally got to a specific example, which intrigued me.  I don't use types
in Lisp much, but you can probably get the intent from this code fragment:

(deftype index () '(integer 1 1000))
(deftype byte  () '(integer 0 255))
(defun foo (x y)
   (declare (type index x))
   (declare (type byte y))
   (+ x y) )

Most Lisp systems probably happily compile this fragment, because the
implementations of the data types are compatible (both fixnums), so the
plus operation is possible.

The anti-Lisp complaint becomes: "index" and "byte" are not just labels for
a subset of an implementation type; they are actually semantic categories in
the problem domain.  And it doesn't make sense to add them together, so
"(+ x y)" should yield a compiler error (or at least a warning).

And if, in fact, you do want them to add together, then I should create a
third type, which is the union of index and byte, and cast the x & y variables
to this third type, to inform the compiler that I am aware of what I'm doing.
Much like "apples" and "oranges" shouldn't be added together, but two piles of
"fruit" can be.

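(For concreteness, here's roughly how that "third type" idea might be spelled
in Lisp.  The FRUIT name and the THE forms are mine, purely for illustration;
I'm not claiming any particular compiler treats them as a static check.)

(deftype fruit () '(or index byte))

(defun foo (x y)
   (declare (type index x))
   (declare (type byte y))
   ;; "Cast" both arguments to the common semantic type before adding.
   (+ (the fruit x) (the fruit y)) )
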
I generally think that Lisp can encompass pretty much any computational
style.  And this one is close.  But I do wonder: can it accommodate the
needs of those who think that static type checking is a critical element in
modern programming?

In particular, do you believe that some Common Lisp compiler could refuse to
compile the code fragment above, giving a "type error", and yet be a
conforming implementation?

(Perhaps I'm misusing the Lisp type system.  If you have another suggestion
for incorporating semantic type characteristics for compile-time static type
checks, I'd be interested in hearing about it.)

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
Measure wealth not by the things you have, but by the things you have for which
you would not take money.

From: ·············@comcast.net
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <brtdg2oe.fsf@comcast.net>
Don Geddis <···@geddis.org> writes:

> We finally got to a specific example, which intrigued me.  I don't use types
> in Lisp much, but you can probably get the intent from this code fragment:
>
> (deftype index () '(integer 1 1000))
> (deftype byte  () '(integer 0 255))
> (defun foo (x y)
>    (declare (type index x))
>    (declare (type byte y))
>    (+ x y) )
>
> Most Lisp systems probably happily compile this fragment, because the
> implementations of the data types are compatible (both fixnums), so the
> plus operation is possible.

While they are both fixnums, that isn't the reason that Lisp will
compile the FOO function.  The DECLARE forms tell Lisp that it is
allowed to assume that X is an integer in the range [1, 1000] and that
Y is an integer in the range [0, 255].  The + operation is certainly
defined over integers, so there is no reason it would not be defined
over a subset of those integers.  Not only will lisp compile those
fragments, but those declarations specifically allow it to omit any
dynamic type or range checks that are less restrictive than the declared types.

> The anti-Lisp complaint becomes:  "index" and "byte" are not just labels for
> a subset of an implementation type; they are actually semantic categories in
> the problem domain.  And it doesn't make sense to add them together, so
> "(+ x y)" should yield a compiler error (or at least a warning).

If this is the case, they should be using a DEFSTRUCT or DEFCLASS;
Lisp will then behave exactly like they want.  In addition, they
should insert CHECK-TYPE forms where they feel that polymorphism is
undesirable.
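
A rough sketch of what I mean (the names are only illustrative, and I've
used OCTET rather than BYTE so as not to step on the standard symbol):

(defstruct index (value 1 :type (integer 1 1000)))
(defstruct octet (value 0 :type (integer 0 255)))

(defun foo (x y)
  (check-type x index)
  (check-type y octet)
  ;; (+ x y) would now signal an error -- + is not defined on these
  ;; structures.  If you really do mean to add them, you say so explicitly:
  (+ (index-value x) (octet-value y)))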

> And if, in fact, you do want them to add together, then I should create a
> third type, which is the union of index and byte, and cast the x & y variables
> to this third type, to inform the compiler that I am aware of what I'm doing.
> Much like "apples" and "oranges" shouldn't be added together, but two piles of
> "fruit" can be.

This is what the class hierarchy is for.

> I generally think that Lisp can encompass pretty much any computational
> style.  And this one is close.  But I do wonder: can it accommodate the
> needs of those who think that static type checking is a critical element in
> modern programming?

Yes.  It can do that just fine.  It is important, however, to use the
correct tools.  The DECLARE form, although it superficially resembles
a static type declaration that static type aficionados are fond of,
is *not* the same sort of thing at all.

> In particular, do you believe that some Common Lisp compiler could refuse to
> compile the code fragment above, giving a "type error", and yet be a
> conforming implementation?

No.  A conforming implementation must *not* signal a type error given
the above code (although it could issue a warning if it could prove
that there exists a call to FOO that violates the declaration).

> (Perhaps I'm misusing the Lisp type system.  If you have another suggestion
> for incorporating semantic type characteristics for compile-time static type
> checks, I'd be interested in hearing about it.)

Yep.  Use DEFCLASS.
From: Don Geddis
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <87zngwln8s.fsf@sidious.geddis.org>
> Don Geddis <···@geddis.org> writes:
> > (deftype index () '(integer 1 1000))
> > (deftype byte  () '(integer 0 255))
> > (defun foo (x y)
> >    (declare (type index x))
> >    (declare (type byte y))
> >    (+ x y) )
> > The anti-Lisp complaint becomes: "index" and "byte" are not just labels
> > for a subset of an implementation type; they are actually semantic
> > categories in the problem domain.  And it doesn't make sense to add
> > them together, so "(+ x y)" should yield a compiler error (or at least
> > a warning).

·············@comcast.net writes:
> If this is the case, they should be using a DEFSTRUCT or DEFCLASS;
> Lisp will then behave exactly like they want.

I don't think so.  You _should_ be able to add two "index" types together
(or two "byte" types).  If you make them classes (or structs), then all
the usual math functions won't work directly.

So you'll have to write code that accesses the slot, to get the value.
But then, suddenly, you can add "index" values and "byte" values again.

I'm looking for a representation that makes
        (+ x y)
an error if X is INDEX and Y is BYTE, but _not_ an error if both are
INDEX or both are BYTE.

I don't see how class/struct solves this problem.

> In addition, they should insert CHECK-TYPE forms where they feel that
> polymorphism is undesirable.

That requires manual effort on the part of the programmer.  One of the
supposed advantages of static type languages like Ada/Haskell/ML is that
the compiler does extensive type inference, so you don't need to specify
this kind of thing all over the place.

> The DECLARE form, although it superficially resembles a static type
> declaration that static type aficionados are fond of, is *not* the same
> sort of thing at all.

I had imagined that DECLARE of types might be interpreted for optimizations
at high SPEED settings, but perhaps instead for compile-time static type
checks at high SAFETY.

You're saying that's not possible, and that the intent is solely for
optimization.  That's fine, but almost a shame, as it seems pretty much the
same information that the compiler needs in both cases.

> > In particular, do you believe that some Common Lisp compiler could
> > refuse to compile the code fragment above, giving a "type error", and
> > yet be a conforming implementation?
> 
> No.  A conforming implementation must *not* signal a type error given
> the above code (although it could issue a warning if it could prove
> that there exists a call to FOO that violates the declaration).

The point of static type checking was that the code as it stands should
already be a type error; it doesn't require a call to FOO to determine that.

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
But the fact that some geniuses were laughed at does not imply that all who are
laughed at are geniuses.  They laughed at Columbus, they laughed at Fulton,
they laughed at the Wright brothers.  But they also laughed at Bozo the Clown.
	-- Carl Sagan
From: Joe Marshall
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <ad8vwjps.fsf@ccs.neu.edu>
Don Geddis <···@geddis.org> writes:

>> Don Geddis <···@geddis.org> writes:
>> > (deftype index () '(integer 1 1000))
>> > (deftype byte  () '(integer 0 255))
>> > (defun foo (x y)
>> >    (declare (type index x))
>> >    (declare (type byte y))
>> >    (+ x y) )
>> > The anti-Lisp complaint becomes: "index" and "byte" are not just labels
>> > for a subset of an implementation type; they are actually semantic
>> > categories in the problem domain.  And it doesn't make sense to add
>> > them together, so "(+ x y)" should yield a compiler error (or at least
>> > a warning).
>
> ·············@comcast.net writes:
>> If this is the case, they should be using a DEFSTRUCT or DEFCLASS;
>> Lisp will then behave exactly like they want.
>
> I don't think so.  You _should_ be able to add two "index" types together
> (or two "byte" types).  If you make them classes (or structs), then all
> the usual math functions won't work directly.

The built-in + doesn't do what you want.  Shadow +, then define
methods on the index and byte classes.

As to `the usual math functions won't work directly', I'd have to ask,
isn't that the desired effect?  The usual math functions are defined
over integers and floating point numbers, not over `index' or `byte'
quantities.
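
Roughly like this (a sketch only; INDEX, BYTE, and ->INTEGER are assumed
to be the classes and reader I define further down in this message):

(defpackage "SEMANTIC-ARITHMETIC"
  (:use "COMMON-LISP")
  (:shadow "+" "BYTE"))          ; BYTE would otherwise clash with CL:BYTE

(in-package "SEMANTIC-ARITHMETIC")

(defgeneric + (left right)
  ;; Ordinary numbers still add the ordinary way.
  (:method ((left number) (right number)) (cl:+ left right)))

(defmethod + ((left index) (right index))
  (cl:+ (->integer left) (->integer right)))

(defmethod + ((left byte) (right byte))
  (cl:+ (->integer left) (->integer right)))

;; There is deliberately no (+ INDEX BYTE) method, so mixing the two
;; signals a NO-APPLICABLE-METHOD error.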

>> In addition, they should insert CHECK-TYPE forms where they feel that
>> polymorphism is undesirable.
>
> That requires manual effort on the part of the programmer.  

Yes.  By default lisp forms are polymorphic, so it requires effort on
the part of the programmer to make them not so.  The default is different
for Ada/Haskell or ML.

BTW, why is inserting DECLARE ok, but inserting CHECK-TYPE a problem?

> One of the supposed advantages of static type languages like
> Ada/Haskell/ML is that the compiler does extensive type inference,
> so you don't need to specify this kind of thing all over the place.

Common Lisp does not supply a built-in type inferencer, but let us
suppose that you have one (I bet there is an H-M type inferencer out
there).  You could shadow DEFUN to automagically insert the appropriate
type checks.
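
For example, a toy version that merely turns TYPE declarations into
CHECK-TYPEs might look like this (a sketch, not a real inferencer, and
given its own name rather than actually shadowing DEFUN):

(defmacro checked-defun (name lambda-list &body body)
  "Like DEFUN, but each (DECLARE (TYPE type var ...)) in the body also
becomes a run-time CHECK-TYPE.  Ignores docstrings and shorthand
declarations such as (DECLARE (FIXNUM X))."
  (let ((decls (loop while (and (consp (first body))
                                (eq (first (first body)) 'declare))
                     collect (pop body)))
        (checks '()))
    (dolist (decl decls)
      (dolist (spec (rest decl))
        (when (eq (first spec) 'type)
          (destructuring-bind (typespec &rest vars) (rest spec)
            (dolist (var vars)
              (push `(check-type ,var ,typespec) checks))))))
    `(defun ,name ,lambda-list
       ,@decls
       ,@(nreverse checks)
       ,@body)))

;; (checked-defun foo (x y)
;;   (declare (type index x))
;;   (declare (type byte y))
;;   (+ x y))
;; expands into a DEFUN whose body begins with the two CHECK-TYPEs.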

>> The DECLARE form, although it superficially resembles a static type
>> declaration that static type aficionados are fond of, is *not* the same
>> sort of thing at all.
>
> I had imagined that DECLARE of types might be interpreted for optimizations
> at high SPEED settings, but perhaps instead for compile-time static type
> checks at high SAFETY.

No, DECLARE is for telling the compiler information it might not be able
to figure out on its own.

> You're saying that's not possible, and that the intent is solely for
> optimization.  That's fine, but almost a shame, as it seems pretty much the
> same information that the compiler needs in both cases.
>
>> > In particular, do you believe that some Common Lisp compiler could
>> > refuse to compile the code fragment above, giving a "type error", and
>> > yet be a conforming implementation?
>> 
>> No.  A conforming implementation must *not* signal a type error given
>> the above code (although it could issue a warning if it could prove
>> that there exists a call to FOO that violates the declaration).
>
> The point of static type checking was that the code as it stands should
> already be a type error; it doesn't require a call to FOO to determine that.

Yes, but there is an implication there that the compiler has total knowledge
about the program.  Suppose I wrote the following:

(defclass index ()
  ((integer-value :initarg :integer-value
                  :initform (error "Required initarg :integer-value omitted.")
                  :reader ->integer)))

(defclass byte ()
  ((integer-value :initarg :integer-value
                  :initform (error "Required initarg :integer-value omitted.")
                  :reader ->integer)))

(defgeneric add (left right)
  (:method ((left index) (right index)) (+ (->integer left) (->integer right)))
  (:method ((left byte) (right byte)) (+ (->integer left) (->integer right))))

(defun foo (x y)
  (check-type x index)
  (check-type y byte)
  (add x y))

Can I infer a type error in foo?  Not if at some time later I'm allowed to

(defmethod add ((left index) (right byte)) ...)
From: Marco Antoniotti
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <3F71AE80.2010909@cs.nyu.edu>
Random notes.


Joe Marshall wrote:
> Don Geddis <···@geddis.org> writes:
> 
> 
>>>Don Geddis <···@geddis.org> writes:
>>>
>>>>(deftype index () '(integer 1 1000))
>>>>(deftype byte  () '(integer 0 255))
>>>>(defun foo (x y)
>>>>   (declare (type index x))
>>>>   (declare (type byte y))
>>>>   (+ x y) )
>>>>The anti-Lisp complaint becomes: "index" and "byte" are not just labels
>>>>for a subset of an implementation type; they are actually semantic
>>>>categories in the problem domain.  And it doesn't make sense to add
>>>>them together, so "(+ x y)" should yield a compiler error (or at least
>>>>a warning).
>>>
>>·············@comcast.net writes:
>>
>>>If this is the case, they should be using a DEFSTRUCT or DEFCLASS;
>>>Lisp will then behave exactly like they want.
>>
>>I don't think so.  You _should_ be able to add two "index" types together
>>(or two "byte" types).  If you make them classes (or structs), then all
>>the usual math functions won't work directly.

The usual '+' does not "work directly" in *ML.  You need different ones 
for floats and integers.

> BTW, why is inserting DECLARE ok, but inserting CHECK-TYPE a problem?

DECLARE is checked at compile time, CHECK-TYPE is a run time check.

	...
> 
> No, DECLARE is for telling the compiler information it might not be able
> to figure out on its own.
> 
> 
>>You're saying that's not possible, and that the intent is solely for
>>optimization.  That's fine, but almost a shame, as it seems pretty much the
>>same information that the compiler needs in both cases.
>>
>>
>>>>In particular, do you believe that some Common Lisp compiler could
>>>>refuse to compile the code fragment above, giving a "type error", and
>>>>yet be a conforming implementation?
>>>
>>>No.  A conforming implementation must *not* signal a type error given
>>>the above code (although it could issue a warning if it could prove
>>>that there exists a call to FOO that violates the declaration).
>>
>>The point of static type checking was that the code as it stands should
>>already be a type error; it doesn't require a call to FOO to determine that.
> 
> 
> Yes, but there is an implication there that the compiler has total knowledge
> about the program.  Suppose I wrote the following:
> 
> (defclass index ()
>   ((integer-value :initarg :integer-value
>                   :initform (error "Required initarg :integer-value omitted.")
>                   :reader ->integer)))
> 
> (defclass byte ()
>   ((integer-value :initarg :integer-value
>                   :initform (error "Required initarg :integer-value omitted.")
>                   :reader ->integer)))
> 
> (defgeneric add (left right)
>   (:method ((left index) (right index)) (+ (->integer left) (->integer right)))
>   (:method ((left byte) (right byte)) (+ (->integer left) (->integer right))))
> 
> (defun foo (x y)
>   (check-type x index)
>   (check-type y byte)
>   (add x y))
> 
> Can I infer a type error in foo?  Not if at some time later I'm allowed to
> 
> (defmethod add ((left index) (right byte)) ...)

I always think of the compiler as a CA = Computer Aid tool.  IMHO Your 
argument that the dynamicity of the environment does not allow you to 
use information at compile time is not all that valid.

In the previous example, with

	(defun foo (x y)
	  (declare (type index x) (type byte y))
	  (add x y))

I think that the compiler should be able to tell you that something is 
fishy if the (method add (index byte)) is not known at compile time.

I have had this discussion before.  The counter argument is that by
inserting the declaration in FOO I am "lying to the compiler".  I also 
think that this is not a valid argument.

But then again these are just user's complaints, as I do not write 
compilers. :)

Cheers
--
Marco
From: ·············@comcast.net
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <llseb2me.fsf@comcast.net>
Marco Antoniotti <·······@cs.nyu.edu> writes:

>
> DECLARE is checked at compile time, CHECK-TYPE is a run time check.
>

In general, yes, but not necessarily.  If you declare something
to be a fixnum, it is not disallowed for the compiler to insert
a type check (and under high safety conditions, some compilers
may indeed do this, but it isn't required).  Conversely, if a
compiler is able to prove that a CHECK-TYPE will never fail, it
could omit it.
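
For instance (purely an illustration; what any particular implementation
actually does with these is up to it):

(defun f (x)
  (declare (type fixnum x)
           (optimize (safety 3)))
  ;; Under high safety some compilers turn the declaration itself into
  ;; a run-time check of X.
  (1+ x))

(defun g (x)
  (declare (type fixnum x))
  ;; A compiler that trusts the declaration can prove this CHECK-TYPE
  ;; never fails and is free to compile it away.
  (check-type x fixnum)
  (1+ x))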

> I always think of the compiler as a CA = Computer Aid tool.  IMHO Your
> argument that the dynamicity of the environment does not allow you to
> use information at compile time is not all that valid.

I think the compiler should *use* every bit of information it can,
but I also think the compiler should not necessarily *trust* all
of it, nor should the compiler assume that it has total knowledge.

> In the previous example, with
>
> 	(defun foo (x y)
> 	  (declare (type index x) (type byte y))
> 	  (add x y))
>
> I think that the compiler should be able to tell you that something is
> fishy if the (method add (index byte)) is not known at compile time.

I agree.  If the compiler has not seen a method for ADD for the
correct types, it would be fine for it to say something.  On the other
hand, because it doesn't have total knowledge, it shouldn't refuse to
compile the form.  It seems a warning is appropriate here.

> I have had this discussion before.  The counter argument is that by
> inserting the declaration in FOO I am "lying to the compiler".  I also
> think that this is not a valid argument.

I don't see how this is `lying to the compiler'.  When you put in a
declaration you *know* is invalid, then you are lying to the compiler,
but there doesn't seem to be anything wrong here.
From: Marco Antoniotti
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <3F730841.6050009@cs.nyu.edu>
·············@comcast.net wrote:
> Marco Antoniotti <·······@cs.nyu.edu> writes:
> 
> 
>>DECLARE is checked at compile time, CHECK-TYPE is a run time check.
>>
> 
> 
> In general, yes, but not necessarily.  If you declare something
> to be a fixnum, it is not disallowed for the compiler to insert
> a type check (and under high safety conditions, some compilers
> may indeed do this, but it isn't required).  Conversely, if a
> compiler is able to prove that a CHECK-TYPE will never fail, it
> could omit it.

Ok.  I suppose that for "simple" uses of CHECK-TYPE you could somehow make

	(declare (fixnum x))
and
	(check-type x fixnum)

equivalent and compiler checkable.


>>I always think of the compiler as a CA = Computer Aid tool.  IMHO Your
>>argument that the dynamicity of the environment does not allow you to
>>use information at compile time is not all that valid.
> 
> 
> I think the compiler should *use* every bit of information it can,
> but I also think the compiler should not necessarily *trust* all
> of it, nor should the compiler assume that it has total knowledge.
> 
> 
>>In the previous example, with
>>
>>	(defun foo (x y)
>>	  (declare (type index x) (type byte y))
>>	  (add x y))
>>
>>I think that the compiler should be able to tell you that something is
>>fishy if the (method add (index byte)) is not known at compile time.
> 
> 
> I agree.  If the compiler has not seen a method for ADD for the
> correct types, it would be fine for it to say something.  On the other
> hand, because it doesn't have total knowledge, it shouldn't refuse to
> compile the form.  It seems a warning is appropriate here.

Of course.  I want the compiler to give me a warning here.  That is what 
I want.

>>I have had this discussion before.  The counter argument is that by
>>inserting the declaration in FOO I am "lying to the compiler".  I also
>>think that this is not a valid argument.
> 
> 
> I don't see how this is `lying to the compiler'.  When you put in a
> declaration you *know* is invalid, then you are lying to the compiler,
> but there doesn't seem to be anything wrong here.

The point is that I may not know it is invalid.  That's when I need help 
from the compiler.

Cheers
--
Marco
From: Bob Riemenschneider
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <ce15782d.0309231751.1dfb6a4e@posting.google.com>
Don Geddis <···@geddis.org> wrote in message news:<··············@sidious.geddis.org>...
> > Don Geddis <···@geddis.org> writes:
> > > (deftype index () '(integer 1 1000))
> > > (deftype byte  () '(integer 0 255))
> > > (defun foo (x y)
> > >    (declare (type index x))
> > >    (declare (type byte y))
> > >    (+ x y) )
> > > The anti-Lisp complaint becomes: "index" and "byte" are not just labels
> > > for a subset of an implementation type; they are actually semantic
> > > categories in the problem domain.  And it doesn't make sense to add
> > > them together, so "(+ x y)" should yield a compiler error (or at least
> > > a warning).
> 
> ·············@comcast.net writes:
> > If this is the case, they should be using a DEFSTRUCT or DEFCLASS;
> > Lisp will then behave exactly like they want.
> 
> I don't think so.  You _should_ be able to add two "index" types together
> (or two "byte" types).  If you make them classes (or structs), then all
> the usual math functions won't work directly.

I'm somewhat echoing Joe Marshall here, but ... No, you *shouldn't*,
unless you say so as part of the type declaration.  Type (in this
sense, henceforth "ITS") declaration is something Lisp got
more-or-less right, whereas the "standard" treatment in Ada et al.
is based on a thorough conceptual muddle.

Briefly, types (ITS) are algebras, not simply sets of values.  The
"standard" treatment recognizes this, and, in cases like your example,
supposes that the algebra you want to represent is always a copy of
the set of numerical values, together with a copy of all the
(primitive, if you prefer) operations defined on those values. 
Occasionally, that's exactly what you want; much more often, it isn't.
 You seem to think (Lisp-)adding INDEXes always makes semantic sense,
in other words, that indices (the things you're representing in Lisp
by INDEXes) are the sort of things that can be (Platonically-)added. 
But what about multiplying INDEXes?  What about applying EXPT or LOG
or REM or MOD or ... to a pair of INDEXes?  And, suppose that, instead
of INDEX, you'd defined JERSEY-NUMBER -- would addition still make
semantic sense?  Conversely, suppose that you'd defined LENGTH and
AREA -- should taking the product of a LENGTH and an AREA be an error?
 Admittedly, there are workarounds, but the basic question is: What is
the justification for this choice of defaults, given that they
typically (pun not especially intended, but what the heck) result in
both missed semantic errors and identification of semantically-correct
operations as erroneous?  As far as I can tell, just that they allow
some errors to be caught and that the required piecemeal patching to
eliminate false-positives is pretty easy.  So, it's not *right*, but
it often works "well enough" for people who are too lazy to actually
define their semantic models using something like Lisp's class system.

If you really want the "standard" notion, it would be easy enough to
implement in Lisp.  Doesn't seem to me like it's worth the effort. 
But, by all means, feel free.
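
To make the "define your semantic models using the class system" remark
slightly more concrete, here is the sort of thing I have in mind (the
names are purely illustrative, and I've written LEN rather than LENGTH
so as not to collide with the standard symbol):

(defclass len  () ((measure :initarg :measure :reader measure)))
(defclass area () ((measure :initarg :measure :reader measure)))

(defgeneric mult (q1 q2))

;; The product of two lengths is an AREA ...
(defmethod mult ((a len) (b len))
  (make-instance 'area :measure (* (measure a) (measure b))))

;; ... the product of a length and an area would be a VOLUME, and there
;; is simply no method at all for multiplying two JERSEY-NUMBERs, so
;; that remains an error.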

> I had imagined that DECLARE of types might be interpreted for optimizations
> at high SPEED settings, but perhaps instead for compile-time static type
> checks at high SAFETY.
> 
> You're saying that's not possible, and that the intent is solely for
> optimization.  That's fine, but almost a shame, as it seems pretty much the
> same information that the compiler needs in both cases.

As you might have inferred, I completely disagree.  How does telling
the compiler that the value of some variable will always fit in 8 bits
provide any information about what the value represents?  How does the
DEFTYPE provide any information about what operations on values of the
type don't make sense?

                                                     -- rar

P.S. [You probably don't want to read this.]  On the chance in a
million you were wondering what I meant by saying Lisp only
more-or-less gets it right, I was thinking that the class-based
version seems to encourage confusing three different algebras that are
floating around.  First, there's the algebra of physical objects and
physical operations performed on them (e.g., measuring rods and
physical concatenation), then there's the algebra of relevant
attributes of the physical objects (e.g., lengths and an ordering
relation on them), and finally there's the algebra of numerical values
that measure the lengths, which should be a homomorphic image of
algebra of attributes.  Based on empirical observation, people's class
definitions don't tend to keep these all straight, but instead provide
a representation for some odd mixture of the three.   Higher-level
support for defining classes specifically intended to represent bits
of the real world could take care of this, and, thanks to the Miracle
that is Macros, would be easy to provide.  But, again based on
empirical observation, the frequent confusion of, say, the mass of an
electron (a possibly variable physical quantity) with its numerical
measure (an immutable mathematical object) when representing whatever
you're interested in as a Lisp data structure just doesn't lead to any
real problems.

The only reason I mentioned all this is that, strictly speaking,
lengths don't get added -- physical objects get concatenated to create
a complex object, and the measure of the length of that complex object
is the sum of the measures of the objects concatenated.  So
Lisp-adding LENGTHs actually involves an implicit assumption that
certain facts about lengths -- e.g., that different lengths can have
the same measure -- can be safely ignored. Since there's always
potential for trouble when representing different things by the same
thing in your program -- the raison d'etre of Ada-style type
definitions! -- a little bit of thought should be, at least ideally,
devoted to such matters at representation-definition-time.  [Don't say
I didn't warn you!]
From: Pascal Costanza
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <bkunqa$17ro$1@f1node01.rhrz.uni-bonn.de>
Don Geddis wrote:

> You _should_ be able to add two "index" types together
> (or two "byte" types).  If you make them classes (or structs), then all
> the usual math functions won't work directly.

No, you shouldn't. If you really want to be that picky, you should 
notice that it doesn't make sense to add two indices. 
Increasing/decreasing an index by a distance makes sense, and this 
results in a new index, but adding two indices is a type error in a 
strict sense.
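
In other words, something like this (just a sketch; the names are mine):

(defclass index    () ((value :initarg :value :reader value)))
(defclass distance () ((value :initarg :value :reader value)))

(defgeneric offset (i d))

;; Moving an index by a distance yields a new index ...
(defmethod offset ((i index) (d distance))
  (make-instance 'index :value (+ (value i) (value d))))

;; ... but there is no method for combining two indices, so that stays
;; an error.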

Maybe it's better to worry about real problems. ;)


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Kaz Kylheku
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <cf333042.0309220910.61daf1b0@posting.google.com>
Don Geddis <···@geddis.org> wrote in message news:<··············@sidious.geddis.org>...
> We finally got to a specific example, which intrigued me.  I don't use types
> in Lisp much, but you can probably get the intent from this code fragment:
> 
> (deftype index () '(integer 1 1000))
> (deftype byte  () '(integer 0 255))
> (defun foo (x y)
>    (declare (type index x))
>    (declare (type byte y))
>    (+ x y) )
> 
> Most Lisp systems probably happily compile this fragment, because the
> implementations of the data types are compatible (both fixnums), so the
> plus operation is possible.
> 
> The anti-Lisp complaint becomes: "index" and "byte" are not just labels for
> a subset of an implementation type; they are actually semantic categories in
> the problem domain.  And it doesn't make sense to add them together, so
> "(+ x y)" should yield a compiler error (or at least a warning).

Lisp supports this type of data typing with classes and structs, not
unlike say C++. If you want two incompatible types INDEX and BYTE,
make them classes or structures. Then you can precisely control what
methods apply to them, and in combination with what other types.

> And if, in fact, you do want them to add together, then I should create a
> third type, which is the union of index and byte, and cast the x & y variables
> to this third type, to inform the compiler that I am aware of what I'm doing.
> Much like "apples" and "oranges" shouldn't be added together, but two piles of
> "fruit" can be.
> 
> I generally think that Lisp can encompass pretty much any computational
> style.  And this one is close.  But I do wonder: can it accommodate the
> needs of those who think that static type checking is a critical element in
> modern programming?

Lisp can accommodate a wide range of psychological problems.
From: Christopher C. Stacy
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <u1xu8afb0.fsf@dtpq.com>
 Kaz> Lisp can accommodate a wide range of psychological problems.

I nominate that for Quote Of The Month.
From: Rayiner Hashem
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <a3995c0d.0309251706.963a663@posting.google.com>
> > I generally think that Lisp can encompass pretty much any computational
> > style.  And this one is close.  But I do wonder: can it accommodate the
> > needs of those who think that static type checking is a critical element in
> > modern programming?
> 
> Lisp can accommodate a wide range of psychological problems.
Glibness might be funny, but you're ignoring some very important
research that has gone into static typing systems. Read:
http://www.cs.washington.edu/research/projects/cecil/www/pubs/vass-thesis.html.
If you don't want to read a 150+ page doctoral dissertation, try
http://www.cs.washington.edu/research/projects/cecil/www/pubs/vass-thesis.html
Note that the latter is five years older and not up-to-date. The
salient point of the above papers is that, using a newly developed
static type system, they were able to get the Cecil compiler, which
was originally written in a dynamic style, to typecheck with very
minimal changes to the actual code. Even though some of the results of
the paper are a little disconcerting (they did bump up against the
type system several times), the research is promising and could be
useful if applied creatively.
From: Raffael Cavallaro
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <aeb7ff58.0309260517.29ea12cf@posting.google.com>
·······@mindspring.com (Rayiner Hashem) wrote in message news:<···························@posting.google.com>...

> Glibness might be funny, but you're ignoring some very important
> research that has gone into static typing systems. Read:
> [references snipped] ... the research is promising and could be
> useful if applied creatively.

I think Paul Graham got it right when he wrote the following:

"For example, it is a huge win in developing software to have an
interactive toplevel, what in Lisp is called a read-eval-print loop. 
And when you have one this has real effects on the design of the
language.  It would not work well for a language where you have to
declare variables before using them, for example.  When you're just
typing expressions into the toplevel, you want to be  able to set x to
some value and then start doing things to x.  You don't want to have
to declare the type of x first.  You may dispute either of the
premises, but if a language has to have a toplevel to be convenient,
and mandatory type declarations are incompatible with a toplevel, then
no language that makes type declarations   mandatory could be
convenient to program in."
from: <http://www.paulgraham.com/desres.html>

To the extent that any language *requires* any type declarations at
all, it defeats most of the usefulness of a top-level. No matter how
smart the type inferencing is, real gains will require programmers to
make many type declarations, which, I concur with Paul Graham, would
be incompatible with lisp development. At that point, you might as
well be using any of the dozens of statically compiled languages,
since you'll never be quickly writing code, testing it, rewriting it,
etc., in the typical lisp, top-level, iterative, bottom-up, style.
From: Rayiner Hashem
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <a3995c0d.0309261356.4026f602@posting.google.com>
> At that point, you might as
> well be using any of the dozens of statically compiled languages,
> since you'll never be quickly writing code, testing it, rewriting it,
> etc., in the typical lisp, top-level, iterative, bottom-up, style.
I think *required* type declarations would be counter-productive in a
Lisp-like-language, but note I spoke of "creative applications" of
this static typing research. I was thinking of something not along the
lines of ML, where the language is statically typed all the time, but
of CL and Dylan, where you have optional static type checking in
limited portions of the program. The new research in static typing
could be used to extend the power of CL's type declaration mechanism,
and also used in CL compilers to improve detection of type errors.
From: Raffael Cavallaro
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <raffaelcavallaro-04FED5.09083227092003@netnews.attbi.com>
In article <····························@posting.google.com>,
 ·······@mindspring.com (Rayiner Hashem) wrote:

> I think *required* type declarations would be counter-productive in a
> Lisp-like-language,

agreed.

> but note I spoke of "creative applications" of
> this static typing research. I was thinking of something not along the
> lines of ML, where the language is statically typed all the time, but
> of CL and Dylan, where you have optional static type checking in
> limited portions of the program. The new research in static typing
> could be used to extend the power of CL's type declaration mechanism,
> and also used in CL compilers to improve detection of type errors.

You might want to look at goo <http://www.ai.mit.edu/~jrb/goo/>

It follows pretty much this model. Goo is the child of Jonathan 
Bachrach, who led Harlequin's (now Functional Object's) Dylan team. It 
has a lisp syntax. Declarations are not required, but when used, the 
compiler tries to use this information to optimize output. I believe jrb 
has mentioned type inferencing as well.


raf

n.b. I am the grandparent of this post as well - I just happened to post 
that one from google.
From: Rayiner Hashem
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <a3995c0d.0309271255.6861fcbb@posting.google.com>
> You might want to look at goo <http://www.ai.mit.edu/~jrb/goo/>
I've seen Goo (which is at googoogaga.org now). It's still in the very
early stages of development, though. I like the direction it's heading,
though I'm really ambivalent about the new typing proposal.

PS> Has anybody noticed something odd about Goo code? It's so clean and
regular, it's actually kind of hard to read. Everything just kind of
looks the same. I do wish Graham would hurry up and unveil his
syntax ideas for Arc.
From: Raffael Cavallaro
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <aeb7ff58.0309272125.1f2c421b@posting.google.com>
·······@mindspring.com (Rayiner Hashem) wrote in message news:<····························@posting.google.com>...
> > You might want to look at goo <http://www.ai.mit.edu/~jrb/goo/>
> I've seen Goo (which is at googoogaga.org now).

Which just redirects you to the original link I posted, the one
pointing to jrb's mit site.

> It's still in the very
> early stages of development, though. I like the direction it's heading,
> though I'm really ambivalent about the new typing proposal.

Specifically, what is it that you like/dislike about it? I'm curious
as you seem to be familiar with the research into the tradeoffs
involved in adding type inferencing to a language.


> PS> Has anybody noticed something odd about Goo code? It's so clean and
> regular, it's actually kind of hard to read. Everything just kind of
> looks the same.

There was a thread related to this on the goo mailing list. Some
posters, myself included, felt that it was because so many of the
built-ins are so similar, and mostly two characters each:
df, dm, dc instead of defun, defmethod, defclass, etc. Although easier
to type, two-character identifiers differing by only one character are
much harder to read.

> I do wish Graham would hurry up and unveil his
> syntax ideas for Arc.

I'm in no rush at all. I think Graham got it wrong with "Why Arc isn't
especially object oriented," <http://www.paulgraham.com/noop.html>
and jrb got it right when he argued that, by making the language
completely oo, all the way down, and completely extensible (e.g.,
defining new methods on #'+), you're forced to come up with
optimizations that will benefit user code just as much as
language/runtime code.
<http://www.ai.mit.edu/~jrb/goo/wiki/index.php?GooWhy>

Raf
From: Rayiner Hashem
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <a3995c0d.0309281252.21f323ef@posting.google.com>
> Specifically, what is it that you like/dislike about it? I'm curious
> as you seem to be familiar with the research into the tradeoffs
> involved in adding type inferencing to a language.
I think I should make myself more clear. I don't have a problem with
the typing mechanism, but with the proposal to add parameterized types.
There doesn't seem to be any overriding philosophy behind the type
system --- it's just kind of tacked on. Compare this to the
Hindley-Milner type system (or the Cecil proposal I linked to earlier),
which has some theoretical underpinning. Even from a language-user
standpoint, it just seems kind of inelegant, especially the stuff
about the procedural type-variable narrowing. I'm not suggesting a
better way --- incorporating a formal type system into a
dynamically-typed language is quite challenging --- but I'm just saying
the current proposal strikes me as inelegant.

> I'm in no rush at all. I think Graham got it wrong with "Why Arc isn't
> especially object oriented," <http://www.paulgraham.com/noop.html>
> and jrb got it right when he argued that, by making the language
> completely oo, all the way down, and completely extensible (e.g.,
> defining new methods on #'+), you're forced to come up with
> optimizations that will benefit user code just as much as
> language/runtime code.
I agree 100%. The nice thing about Goo (and Dylan, btw) is that it's
objects all the way down, but you don't have to do any OO programming
unless you want to. Even when you do, it doesn't entail a lot of the
"busy-work" that Graham was complaining is often associated with OOP
code. However, I was referring not to Arc in general, but to its ideas
for adding some syntax to Arc without affecting the power of its macro
system. IIRC, there are also some plans within the Goo camp to explore
new directions for its syntax?
From: ·············@comcast.net
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <smmg60mx.fsf@comcast.net>
·······@mindspring.com (Rayiner Hashem) writes:

> PS> Has anybody noticed something odd about Goo code? It's so clean and
> regular, it's actually kind of hard to read. Everything just kind of
> looks the same. I do wish Graham would hurry up and unveil his
> syntax ideas for Arc.

Wl, I thnk tht goo hs too mny abbrv.
From: Rayiner Hashem
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <a3995c0d.0309282042.2275c3d2@posting.google.com>
> Wl, I thnk tht goo hs too mny abbrv.
There is a happy medium between "dc," "df," and
"make-dispatch-macro-character." The older bits of CL (cons, princ,
progn, etc) are pretty good. One wonders whether, if the CL guys had
been involved from the beginning, lambda might have been
called "make-anonymous-function." I think the C++ and Python libraries
are pretty good about naming. Some things are cryptic (push_back)
standing by themselves, but are easily understood in context. The C
style of shortened names (like sbrk, fputs) is nice, but I don't know
how well it would scale to a library with 1000 functions like CL's.
From: Gareth McCaughan
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <87vfrcq89i.fsf@g.mccaughan.ntlworld.com>
Rayiner Hashem wrote:

> > Wl, I thnk tht goo hs too mny abbrv.
> There is a happy medium between "dc," "df," and
> "make-dispatch-macro-character." The older bits of CL (cons, princ,
> progn, etc) are pretty good. One wonders whether, if the CL guys had
> been involved from the beginning, lambda might have been
> called "make-anonymous-function." I think the C++ and Python libraries
> are pretty good about naming. Some things are cryptic (push_back)
> standing by themselves, but are easily understood in context. The C
> style of shortened names (like sbrk, fputs) is nice, but I don't know
> how well it would scale to a library with 1000 functions like CL's.

Ugh. One of the original Unix folks (Ken Thompson?) was asked
some years into Unix's lifetime what he'd do differently if
he were designing the system again. He thought for a bit and
said "I'd spell 'creat' with an 'e'."

sbrk is rarely enough used that there's basically no reason
to make it short. What possible advantage does it have over
"set-break"? (Other than being a valid identifier in C.
"set_break", then.)

fputs isn't so bad, but the first character is basically
only there because C doesn't have optional arguments.
And, further, the existence of fputs, puts, fwrite, write,
putc, fputc, putchar, all as separate functions, is only
required because of the limitations of the language. A
*good* interface to all that functionality would be a
single "write" function taking two arguments: an object
to be written, and an optional thing to write to. Then
the first object could be, e.g.,
  - a character (causing a single char to be written)
  - a string (causing the whole string to be written)
  - some sort of block-of-data abstraction
and the second could be, e.g.,
  - a low-level file without any buffering beyond what
    the OS gives (compare "write")
  - a higher-level stream object with buffering and
    possibly other funky features (compare "fwrite")

Of course you can't do that in C; it doesn't have the
necessary type-discrimination facilities. But you can
do it in C++, and indeed that's how the iostream model
works. They've made rather a mess of it, but the
underlying idea is pretty sound and it avoids needing
all those 7 write-to-a-file functions I listed above
(not to mention send, sendto, sendmsg, for writing
to sockets, which *of course* are completely different).
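
In Lisp terms that's just generic dispatch.  A sketch (I've called it
EMIT so as not to collide with CL:WRITE):

(defgeneric emit (object &optional destination)
  (:documentation "Write OBJECT to DESTINATION, default standard output."))

(defmethod emit ((object character) &optional (destination *standard-output*))
  (write-char object destination))

(defmethod emit ((object string) &optional (destination *standard-output*))
  (write-string object destination))

;; Further methods could dispatch on a block-of-data abstraction, and
;; DESTINATION could just as well be a socket stream or an unbuffered
;; file wrapper -- one entry point instead of seven.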

There are some very neat things in C's cryptic function
names. For instance, the fact that although "strstr" gives
no indication of which way round the arguments go, you
can work it out by considering "strchr". It's amazing how
if you design something badly enough you can make people
grateful for every little concession to sanity there is
in it. :-)

-- 
Gareth McCaughan
.sig under construc
From: Brian Downing
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <Ta_db.637387$uu5.101671@sccrnsc04>
In article <··············@g.mccaughan.ntlworld.com>,
Gareth McCaughan  <················@pobox.com> wrote:
> sbrk is rarely enough used that there's basically no reason
> to make it short. What possible advantage does it have over
> "set-break"? (Other than being a valid identifier in C.
> "set_break", then.)

I believe older linkers had six-character limits on names, so lots of
old C programs assumed that the first six characters of symbol names had
to be unique.

Here's a post in the middle of a thread from 1988, where it looks like
someone is discussing a "mangling" scheme to avoid this problem.  

http://groups.google.com/groups?selm=4111%40bsu-cs.UUCP

-bcd
-- 
*** Brian Downing <bdowning at lavos dot net> 
From: Rayiner Hashem
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <blarh6$18q$1@news-int.gatech.edu>
Well, it's not so much a matter of the capabilities of C, but the relative
merits of a shortened but still comprehensible naming scheme. For the
purposes of this discussion, fputs and puts could be functions to set the
time and read from the network device, respectively. The point is that you
don't have to have 30-character names to keep function names memorable. To
experienced programmers, common abbreviations like str, sqrt, etc, are as
recognizable as string, square-root, etc. Take, for example, the
'lisp-implementation-version' function. Discount, for demonstration
purposes, that it's probably not a function used terribly often. Would
naming it "impl-ver" be any less comprehensible to a programmer? I mean,
it's implied that you're talking about a lisp implementation here, and
"impl" and "ver" are widely understood abbreviations in the programming
community. 
From: Matthias
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <36w7k3qadia.fsf@chagall.ti.uni-mannheim.de>
Rayiner Hashem <·······@mail.gatech.edu> writes:

> Well, it's not so much a matter of the capabilities of C, but the relative
> merits of a shortened but still comprehensible naming scheme. For the
> purposes of this discussion, fputs and puts could be functions to set the
> time and read from the network device, respectively. The point is that you
> don't have to have 30-character names to keep function names memorable.

I think the trick is to put related functions into packages ("list",
"string", "network", "io", "gui") and use a more-or-less consistent
naming scheme within the packages.  This structures available
functions and names and makes finding and memorizing them painless.
Most programmers are accustomed to different naming schemes being
used.  If you use packages and keep your names consistent within the
individual packages few people will have difficulties remembering.

For the few global identifiers needed: As long as there are not too
many, people will adapt to whatever you give them.
From: Rayiner Hashem
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <a3995c0d.0309301812.460febca@posting.google.com>
> I think the trick is to put related functions into packages ("list",
> "string", "network", "io", "gui") and use a more-or-less consistent
> naming scheme within the packages.  
Speaking of packages, I have to bitch about the symbol naming system.
CL's is a lot like C++, with the additional feature that you can
rename symbols upon importing them. That mechanism always struck me as
either too manual, or too verbose. I hate having to write foo:bar (or
foo::bar) everywhere, where the foo should be implicit from the usage.
There has got to be a more automagical way of doing namespaces that
requires less user intervention. I'm thinking of something like Koenig
lookup, but more generalized. Anybody know a language that implements
"smarter" namespaces than does Java/C++/etc?
From: Thomas A. Russ
Subject: Packages [Was Re: Static type checking in Lisp]
Date: 
Message-ID: <ymi7k3on0oo.fsf_-_@sevak.isi.edu>
·······@mindspring.com (Rayiner Hashem) writes:

> Speaking of packages, I have to bitch about the symbol naming system.
> CL's is a lot like C++, with the additional feature that you can
> rename symbols upon importing them. That mechanism always struck me as
> either too manual, or too verbose. I hate having to write foo:bar (or
> foo::bar) everywhere, where the foo should be implicit from the usage.

Um, exactly how should the package name be implicit from the usage?  I
don't see what would allow a system to know that when you write
"bar" that you mean the "bar" in package foo instead of in your current
package.  After all, you are free to make forward references in lisp code 
to functions that you haven't yet defined.

> There has got to be a more automagical way of doing namespaces that
> requires less user intervention.

Well, there is always the option of importing the symbols you want from
other packages.  You do that once, up front, and then they act as
internal symbols in the current package.  That is then explicit rather
than implicit, which has maintenance advantages.
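
For example (the package and symbol names here are just placeholders):

(defpackage "MY-APP"
  (:use "COMMON-LISP")
  (:import-from "FOO" "BAR" "BAZ"))   ; pull in just the symbols you want, once

(in-package "MY-APP")

(bar (baz 42))                        ; no FOO: prefix needed from here on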

> I'm thinking of something like Koenig
> lookup, but more generalized. Anybody know a language that implements
> "smarter" namespaces than does Java/C++/etc?

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Rayiner Hashem
Subject: Re: Packages [Was Re: Static type checking in Lisp]
Date: 
Message-ID: <a3995c0d.0310012243.46bf5c63@posting.google.com>
> Um, exactly how should the package name be implicit from the usage?
A couple of things. The most straightforward one would be that if a
symbol is not ambiguous (ie. only one currently visible package
exports a symbol with that name) it would not need to be qualified
with a package name. You can also go further. A symbol could be looked
up relative to the package of one of its parameters. So if you have a
generic function named 'foo' that operates on an object of type 'bar',
both of which are in package 'foobar',  you could use 'foo' with 'bar'
parameters without explicitly qualifying its name. That's basically
Koenig lookup. There are probably more things you can do, but I
haven't really thought it through all the way. There is an element of
DWIM (do what I mean) to all of this, and you probably wouldn't want
to use these features at a larger scope. However, I often find myself
getting irritated that I keep having to qualify names even though I'm
working in a small scope (ie. was careful about what I imported) and
it should be obvious what packages the symbols are referring to. This
is less of a problem in CL and Dylan, because the core language
symbols don't need to be qualified (like std:: in C++) but that's not
really a general solution.

> Well, there is always the option of importing the symbols you want from
> other packages.  You do that once, up front, and then they act as
> internal symbols in the current package.  That is then explicit rather
> than implicit, which has maintenance advantages.
True, you probably would want more predictability in a larger system.
However, if you're doing a one-off or a small program, but use lots of
outside packages, explicit qualification seems like an imposed level
of rigor that's rather unwarranted.
From: Kenny Tilton
Subject: Re: Packages [Was Re: Static type checking in Lisp]
Date: 
Message-ID: <g8Qeb.14537$q71.12129@twister.nyc.rr.com>
Rayiner Hashem wrote:

>>Um, exactly how should the package name be implicit from the usage?
> 
> A couple of things. The most straightforward one would be that if a
> symbol is not ambiguous (ie. only one currently visible package
> exports a symbol with that name) it would not need to be qualified
> with a package name. You can also go further. A symbol could be looked
> up relative to the package of one of its parameters. So if you have a
> generic function named 'foo' that operates on an object of type 'bar',
> both of which are in package 'foobar',  you could use 'foo' with 'bar'
> parameters without explicitly qualifying its name. That's basically
> Koenig lookup. There are probably more things you can do, but I
> haven't really thought it through all the way. There is an element of
> DWIM (do what I mean) to all of this, 

Well this is exactly the problem. It's not DWIM, it's GWIM: "Guess what 
I mean." Including, if I make a typo and it happens to be visible in 
some package I did not even know existed, initiate launch sequence on 
all ICBMs. This because, yawn, I cannot be bothered to work out what I 
am doing. This is progress? This is "smarter"? Good lord, I do not even 
know what I am doing, how on Earth can a stupid compiler guess at that?

> ...and you probably wouldn't want
> to use these features at a larger scope. However, I often find myself
> getting irritated that I keep having to qualify names even though I'm
> working in a small scope (ie. was careful about what I imported) and
> it should be obvious what packages the symbols are referring to. This
> is less of a problem in CL and Dylan, because the core language
> symbols don't need to be qualified (like std:: in C++) but that's not
> really a general solution.
> 
> 
>>Well, there is always the option of importing the symbols you want from
>>other packages.  You do that once, up front, and then they act as
>>internal symbols in the current package.  That is then explicit rather
>>than implicit, which has maintenance advantages.
> 
> True, you probably would want more predictability in a larger system.
> However, if you're doing a one-off or a small program, but use lots of
> outside packages, explicit qualification seems like an imposed level
> of rigor that's rather unwarranted.


Ah, I see, you are proposing a new:

     (declare (mindset just-pissing-around))

I like it! But I think we are working on a non-problem here. You know 
what I do when I get a compiler error because some symbol is not 
visible? I make it visible and recompile. Whew! I need a comp day!

:)

kenny
From: John Thingstad
Subject: Re: Packages [Was Re: Static type checking in Lisp]
Date: 
Message-ID: <oprwe2y5uoxfnb1n@news.chello.no>
On Thu, 02 Oct 2003 07:06:52 GMT, Kenny Tilton <·······@nyc.rr.com> wrote:

In his book On Lisp, Graham introduces anaphoric functions,

like: (aif (bind ?x 'foo) it nil)

Here IT refers to the result of BIND.  This seems to be an adequate
method for dealing with the same object in code, just like in a
paragraph of text: the 'green house' is referenced in the first
sentence, and later on in the paragraph it is referenced as 'it'.  As
long as the context is unambiguous this is permissible.
Is something like this what you are looking for?
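
The macro behind it is tiny; this is essentially the On Lisp definition:

(defmacro aif (test-form then-form &optional else-form)
  ;; Capture the value of TEST-FORM as IT, visible in both branches.
  `(let ((it ,test-form))
     (if it ,then-form ,else-form)))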

-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
From: Rayiner Hashem
Subject: Re: Packages [Was Re: Static type checking in Lisp]
Date: 
Message-ID: <blin1m$b7h$1@news-int.gatech.edu>
> this is permissible.
> Is something like this what you are looking for?
Something like that, yes. It doesn't address the namespace issue I'm
thinking of directly, but it's exactly this sort of DWIM that I was
referring to.
From: Mario S. Mommer
Subject: Re: Packages [Was Re: Static type checking in Lisp]
Date: 
Message-ID: <fzsmmc6or5.fsf@cupid.igpm.rwth-aachen.de>
·······@mindspring.com (Rayiner Hashem) writes:
> > Um, exactly how should the package name be implicit from the usage?
> A couple of things. The most straightforward one would be that if a
> symbol is not ambiguous (ie. only one currently visible package
> exports a symbol with that name) it would not need to be qualified
> with a package name.

The above paragraph just makes me wonder if you are aware of
use-package, export, etc.

I very rarely qualify symbols with a package name. I export all
symbols I need, and use the packages I need. End of problem. And
afaict, this is what most other people do.
From: james anderson
Subject: Re: Packages [Was Re: Static type checking in Lisp]
Date: 
Message-ID: <3F7BDE6A.A6700C90@setf.de>
Rayiner Hashem wrote:
> 
> > Um, exactly how should the package name be implicit from the usage?
> A couple of things. The most straightforward one would be that if a
> symbol is not ambiguous (ie. only one currently visible package
> exports a symbol with that name) it would not need to be qualified
> with a package name.

if one needs this, one can accomplish it by constructing an "implementation"
package which does just this. this can either be the current package, or it
can re-export the symbols and be used by the current package. if, as you note
below, you do this, and yet you are "irritated that [you] keep having to
qualify names even though [you are] working in a small scope (ie. was careful
about what [you] imported)" then you need to restate the problem, as it is not clear.

if the constituent (ie exporting) packages are amenable to consolidation (ie.
no conflicting exports), then the "implementation" package need just use them
all, otherwise (shadowing) imports and (if the implementation package is not
the current package) re-exports may be required.
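
for example (a sketch, with made-up package names):

(defpackage :lib-a (:use :common-lisp) (:export #:frob))
(defpackage :lib-b (:use :common-lisp) (:export #:grovel))

;; the consolidating "implementation" package uses both constituent
;; packages and re-exports the inherited symbols of interest
(defpackage :impl
  (:use :common-lisp :lib-a :lib-b)        ; assumes no conflicting exports
  (:export #:frob #:grovel))

;; the current package then just uses IMPL and sees FROB and GROVEL unqualified
(defpackage :current
  (:use :common-lisp :impl))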

>   You can also go further. A symbol could be looked
> up relative to the package of one of its parameters. So if you have a
> generic function named 'foo' that operates on an object of type 'bar',
> both of which are in package 'foobar',  you could use 'foo' with 'bar'
> parameters without explicitly qualifying its name.

this is underspecified. is it the symbol of the argument variable, the symbol
which names a declared argument type, the symbol which names the
type of the effective argument value, the symbol naming the function
operator in an argument expression, or what? should it be possible to pin it
down, it would be straightforward to write an alternative to defun which
walked the function body and did this.

>    That's basically
> Koenig lookup. There are probably more things you can do, but I
> haven't really thought it through all the way.

oh. lisp packages are first class objects. once one figures out what one needs
to do, one might be surprised that one can actually implement it.
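
for instance, a small illustration (independent of the proposal above):

;; packages are ordinary run-time objects and can be inspected programmatically
(let ((pkg (find-package :common-lisp)))
  (format t "~A exports ~D symbols~%"
          (package-name pkg)
          (let ((n 0))
            (do-external-symbols (s pkg n)
              (declare (ignore s))
              (incf n)))))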

>    There is an element of
> DWIM (do what I mean) to all of this, and you probably wouldn't want
> to use these features at a larger scope. However, I often find myself
> getting irritated that I keep having to qualify names even though I'm
> working in a small scope (ie. was careful about what I imported) and
> it should be obvious what packages the symbols are referring to.

as this statement stands, that is without examples, it appears to contradict itself.

>    This
> is less of a problem in CL and Dylan, because the core language
> symbols don't need to be qualified (like std:: in C++) but that's not
> really a general solution.
> 
> > Well, there is always the option of importing the symbols you want from
> > other packages.  You do that once, up front, and then they act as
> > internal symbols in the current package.  That is then explicit rather
> > than implicit, which has maintenance advantages.
> True, you probably would want more predictability in a larger system.
> However, if you're doing a one-off or a small program, but use lots of
> outside packages, explicit qualification seems like an imposed level
> of rigor that's rather unwarranted.

this passage reads as if "importing the symbols you want" leads to "explicit
qualification." that cannot have been the intended reading. an example might
help to make the problem clearer.

...
From: Thomas A. Russ
Subject: Re: Packages [Was Re: Static type checking in Lisp]
Date: 
Message-ID: <ymi65j7mzkh.fsf@sevak.isi.edu>
james anderson <··············@setf.de> writes:

Rayiner Hashem wrote:
> 
> > Um, exactly how should the package name be implicit from the usage?
> A couple of things. The most straightforward one would be that if a
> symbol is not ambiguous (ie. only one currently visible package
> exports a symbol with that name) it would not need to be qualified
> with a package name.

But then it means that you would have lots of trouble writing mutually
recursive functions, if there were some conflicting symbol name.  For
example:

(in-package "FOO")

(defun f (x) ...)

(in-package "BAR")

;; In the following, F must be read as FOO::F, since that is the
;; only existing symbol.

(defun g (x) (if ... (f x) ...))

;; But BAR::F is really the function that we wanted:

(defun f (x) (if ... (g (1- x)) ...))


Note that grabbing the already existing symbol regardless of its
package pretty much subverts the entire idea of a package system,
by making you need to know about the (exported) symbols in all of the
packages loaded into the system, just so you can know whether you need
to qualify names or not.  And the proposed solution falls down as soon as
some other loaded package happens to export a symbol with the same name.

Instead of needing to know only about the packages you currently care to 
use for the project, you suddenly need to know about all of the packages 
in the current lisp system.

This also doesn't even address the issue that symbols are created and
packages resolved at read time.  This makes it rather difficult to have
semantic tests like whether a symbol should name a function, since that
level of analysis by the interpreter or compiler happens long after the
form is read and the decision about which package a symbol belongs to
has been made.  I suppose it wouldn't be impossible to allow the
interpreter or compiler to substitute a different symbol for the one it
got.  In fact, that is probably where such a substitution would have to happen.
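
A small illustration of that read-time behavior (FROBNICATE is just a
made-up name):

;; READ interns symbols as soon as the form is read, long before any
;; compilation or semantic analysis takes place:
(read-from-string "(frobnicate 42)")
(find-symbol "FROBNICATE")   ; => FROBNICATE, :INTERNAL -- already in *PACKAGE*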

On the other hand, the real problem with DWIM systems like this is that
it makes the code somewhat (even a lot?) unpredictable.  I could imagine 
that trying to debug a function where the compiler had "helpfully"
substituted an entirely different function automatically would be rather 
difficult.  It would be similar to trying to track down bugs due to
unexpected destructive modification of shared list structure, except
that it was the compiler that had unexpectedly modified your program.


-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Rayiner Hashem
Subject: Re: Packages [Was Re: Static type checking in Lisp]
Date: 
Message-ID: <blinaf$b7h$2@news-int.gatech.edu>
> On the other hand, the real problem with DWIM systems like this is that
> it makes the code somewhat (even a lot?) unpredictable.
True. I didn't think the initial idea through very well; I was just trying
to give an idea of the kind of functionality I had in mind. Now that I
think of it, this functionality is probably better implemented in a smart
editor. You type in a function name and it tries to automatically fill in
the package name as you type. Hmm, there is probably an emacs mode for this
already --- time for a Google search...
From: Bruce Hoult
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <bruce-F05E25.18102401102003@copper.ipg.tsnz.net>
In article <····························@posting.google.com>,
 ·······@mindspring.com (Rayiner Hashem) wrote:

> > I think the trick is to put related functions into packages ("list",
> > "string", "network", "io", "gui") and use a more-or-less consistent
> > naming scheme within the packages.  
> Speaking of packages, I have to bitch about the symbol naming system.
> CL's is a lot like C++, with the additional feature that you can
> rename symbols upon importing them. That mechanism always struck me as
> either too manual, or too verbose. I hate having to write foo:bar (or
> foo::bar) everywhere, where the foo should be implicit from the usage.
> There has got to be a more automagical way of doing namespaces that
> requires less user intervention. I'm thinking of something like Koenig
> lookup, but more generalized. Anybody know a language that implements
> "smarter" namespaces than does Java/C++/etc?

Dylan is fairly flexible.

By default, all names are imported as-is.  If something conflicts then 
you can rename it individually to something else.  You can also import 
only names you explicitly specify, or import everything except names you 
explicitly specify.  You can add an arbitrary prefix to every name 
imported from a given library.

-- Bruce
From: Joe Marshall
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <brt7psyn.fsf@ccs.neu.edu>
·······@mindspring.com (Rayiner Hashem) writes:

>> > I generally think that Lisp can encompass pretty much any computational
>> > style.  And this one is close.  But I do wonder: can it accommodate the
>> > needs of those who think that static type checking is a critical element in
>> > modern programming?
>> 
>> Lisp can accommodate a wide range of psychological problems.
> Glibness might be funny, but you're ignoring some very important
> research that has gone into static typing systems. Read:
> http://www.cs.washington.edu/research/projects/cecil/www/pubs/vass-thesis.html.
> If you don't want to read a 150+ page doctoral dissertation, try 
> http://www.cs.washington.edu/research/projects/cecil/www/pubs/vass-thesis.html

I am always amused by people who put *huge* obstacles into their
paths, then invest an extraordinary amount of resources to mitigate
the problem.  A large amount of research in static type checking has
been invested with the goal of making it almost as easy to use as
those systems that don't require it.

I appreciate having a smart enough compiler that can tell me when I've
written code that is provably wrong, but there is enough interesting
code that cannot be proved correct for this to be a general solution.
From: Pascal Costanza
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <bl0t9n$vfi$1@f1node01.rhrz.uni-bonn.de>
Rayiner Hashem wrote:
>>>I generally think that Lisp can encompass pretty much any computational
>>>style.  And this one is close.  But I do wonder: can it accommodate the
>>>needs of those who think that static type checking is a critical element in
>>>modern programming?
>>
>>Lisp can accommodate a wide range of psychological problems.
> 
> Glibness might be funny, but you're ignoring some very important
> research that has gone into static typing systems. Read:
> http://www.cs.washington.edu/research/projects/cecil/www/pubs/vass-thesis.html.
> If you don't want to read a 150+ page doctoral dissertation, try 
> http://www.cs.washington.edu/research/projects/cecil/www/pubs/vass-thesis.html
> Note that the latter is five years older and not up-to-date. The
> salient point of the above papers is that, using a newly developed
> static type system, they were able to get the Cecil compiler, which
> was originally written in a dynamic style, to typecheck with very
> minimal changes to the actual code. Even though some of the results of
> the paper are a little disconcerting --- they did bump up against the
> type system several times --- the research is promising and could be
> useful if applied creatively.

...just that they were able to do this doesn't prove anything. Where are 
the empirical studies that show that static type systems are actually 
useful in practice?

To argue that static type systems are useful by showing how sound they 
are is a category error.

Here is some experience obviously made with the Erlang language:

"The type system has uncovered no errors.  The kernel libraries were 
written by Erlang "experts"--it seems that good programmers don't make 
type errors.  It will be interesting to see if this remains true when we 
start type checking code written by less experienced programmers."

as seen at 
http://www.ai.mit.edu/~gregs/ll1-discuss-archive-html/msg02779.html



Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Brian Downing
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <7MMbb.554042$uu5.91779@sccrnsc04>
In article <··············@sidious.geddis.org>,
Don Geddis  <···@geddis.org> wrote:
> (deftype index (integer 1 1000))
> (deftype byte  (integer 0 255))
> (defun foo (x y)
>    (declare (type index x))
>    (declare (type byte y))
>    (+ x y) )
> 
> Most Lisp systems probably happily compile this fragment, because the
> implementation of the data types are compatible (both fixnums), so the
> plus operation is possible.
> 
> The anti-Lisp complaint becomes: "index" and "byte" are not just labels for
> a subset of an implementation type; they are actually semantic categories in
> the problem domain.  And it doesn't make sense to add them together, so
> "(+ x y)" should yield a compiler error (or at least a warning).

But assuming a byte is (integer 0 255), the type of the result of adding
a byte to a byte is NOT a byte, but an (integer 0 510).

So why would it make sense to restrict + to adding only bytes together, when
the result is not itself a byte?

(This is what I really like about the type-inferring CLs, once I figured
it out; the type of the result is correct depending on the type of the
inputs, not just blindly the same as the inputs.)
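
For instance, a sketch of what such a compiler derives (CMUCL-style
inference assumed):

(defun add-bytes (x y)
  (declare (type (integer 0 255) x y))
  (+ x y))

;; a type-inferring compiler derives the return type (INTEGER 0 510)
;; from the argument types, rather than assuming the result is still a "byte".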

-bcd
-- 
*** Brian Downing <bdowning at lavos dot net> 
From: Edi Weitz
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <87wubz4fl2.fsf@bird.agharta.de>
On Sun, 21 Sep 2003 22:06:23 GMT, Don Geddis <···@geddis.org> wrote:

> I generally think that Lisp can encompass pretty much any
> computational style.  And this one is close.  But I do wonder: can
> it accommodate the needs of those who think that static type
> checking is a critical element in modern programming?

By chance I just came across this one:

  <http://www.simulys.com/guideto.htm>

Maybe that's what you're looking for?

Edi.
From: Carl Shapiro
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <ouy7k3zl5zn.fsf@panix3.panix.com>
Edi Weitz <···@agharta.de> writes:

> By chance I just came across this one:
> 
>   <http://www.simulys.com/guideto.htm>

While the Qi language is implemented on top of the Common Lisp
substrate, it does not give you static type checking for ordinary
Common Lisp definitions.  Type checking is only available for definitions
written in the Qi language.

However, you can use Qi's theorem prover to write an ML-style type
inferencer for Common Lisp (I have authored the beginnings of just
such a system).
From: Joerg Hoehle
Subject: Re: Static type checking in Lisp
Date: 
Message-ID: <ur825gioz.fsf@users.sourceforge.net>
Don Geddis <···@geddis.org> writes:

> On a different newsgroup, I had an extensive discussion with a fan of
> Ada/Haskell/ML, about compile-time type checking.

>    (declare (type index x))
>    (declare (type byte y))
>    (+ x y) )
> The anti-Lisp complaint becomes: "index" and "byte" are not just labels for
> a subset of an implementation type;

That's also a complaint against C, Java, C#, possibly C++ and many
other languages.

> they are actually semantic categories in
> the problem domain.  And it doesn't make sense to add them together, so
> "(+ x y)" should yield a compiler error (or at least a warning).

Ada is, to me, the one language I know where you can really and easily
have distinct "3" literals for apples and oranges, preventing you from
adding them, yet still have them handled as efficiently as you learned
in your assembler language courses.
Maybe Dylan can do that as well.

That's a useful thing, indeed.[*]

Typical OO approaches would encapsulate the 3 inside a class, with all
the overhead. Ada can avoid that.
ML, Haskell and other FP languages are likely to encapsulate it into a
type, for which you must supply definitions (e.g. define
+ - * / etc.), causing overhead similar to the OO case.


> He didn't know much about Lisp, but believing it to have dynamic
> typing, criticized Lisp for not having the compiler catch some
> possible bugs.
A typical symptom of somebody who's in his camp.

The Haskell/ML/Ada compilers could also be criticized for not catching
possible bugs. No compiler can catch them all. Does the other guy know that well enough?
The type-system ML uses is known to be incomplete, and not all
errors are type errors.
What about dead-locks, protocol violations etc.?
What about good use of exceptions (when to raise an error or signal a
Lisp condition)?
What about liveness analysis (memory leaks, even in FP)?

Smalltalkers' and Lispers' experience is that very few of the bugs they
find are type errors. That's part of the reason why some of these
people may see little value in static typing.

I hope I won't be misunderstood: I like Haskell, SDL, VDM and other
languages with which I can express and check specifications. I also
value Lisp and other languages with strong, latent typing.

Latent typing probably characterizes Lisp and Smalltalk better than
dynamic typing (there was a thread on this, once).

> And if, in fact, you do want them to add together, then I should create a
> third type, which is the union of index and byte, and cast the x & y variables
> to this third type, to inform the compiler that I am aware of what I'm doing.
> Much like "apples" and "oranges" shouldn't be added together, but two piles of
> "fruit" can be.
Uh,oh...

You know: this is the schism: some people very much prefer to have the
implicit (default) contract be "I'm aware of what I'm doing, thank you".

There's a lot of work going into type inference in non-statically
typed languages, and I remember having read about various projects for
Erlang, Common Lisp, Scheme and Smalltalk.

> style.  And this one is close.  But I do wonder: can it accommodate the
> needs of those who think that static type checking is a critical element in
> modern programming?
What is modern programming?

Working in a large company, I observe different needs for software
than in other settings. The real goal should be reliable software. The dogma
(or goal) must not be "static typing". It can be a means to that end, though.

Design for testability is another such means.
Designing to use the type system so as to catch as many errors as
possible is another one, when you have such a type system. But few
people in the ML/Haskell camp actually program this way.

[*] I was once told that Linus Torvalds devised a trick for C to
nail down type errors in the Linux kernel: the trick is to encapsulate
each distinct int type inside its own struct definition, and rely on the
compiler to keep this from being any less efficient. The C compiler
will then warn when there's a struct mismatch. He found a few errors this
way.
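
A rough Common Lisp analogue of that struct-wrapping trick, purely as a
sketch (APPLES and ORANGES are made-up names):

;; wrap each semantic category in its own structure type, so the two
;; kinds of counts cannot be mixed by accident
(defstruct (apples  (:constructor apples  (count))) count)
(defstruct (oranges (:constructor oranges (count))) count)

(defun add-apples (a b)
  (declare (type apples a b))
  (apples (+ (apples-count a) (apples-count b))))

;; (add-apples (apples 3) (oranges 3)) is then a type error; implementations
;; that treat declarations as assertions (e.g. CMUCL) catch it at run time
;; or warn at compile time, instead of silently adding apples to oranges.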

Regards,
	Jorg Hohle
Telekom/T-Systems Technology Center