Hmm, that's not so much a weakness of static typing as of
those who use the type system as a mental crutch to avoid reasoning
about the code. The comment about changing a type and then having the
compiler tell you where you need to make modifications made me cringe.
That's an abuse of the type system, not a good way to take advantage
of it! Of course, dynamic typing would not have helped in your case.
Only proper testing and code review would have.
Static typing will tell you exactly one thing --- that your code is
free of type errors. It has its utility --- it means that, if your
code typechecks correctly, you don't have to pay attention to looking
for type errors, rather, you can spend your efforts (testing, code
review time) looking for other sorts of errors. People who sell static
typing as a panacea (like anybody who sells anything as a panacea) are
selling you snake oil.
From: Joe Marshall
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <4qx2t4af.fsf@comcast.net>
·······@mindspring.com (Rayiner Hashem) writes:
> Static typing will tell you exactly one thing --- that your code is
> free of type errors. It has its utility --- it means that, if your
^
static
> code typechecks correctly, you don't have to pay attention to looking
> for type errors, rather, you can spend your efforts (testing, code
^
static
> review time) looking for other sorts of errors.
Like the dynamic type errors.
--
~jrm
Joe Marshall wrote:
> ·······@mindspring.com (Rayiner Hashem) writes:
>
>>Static typing will tell you exactly one thing --- that your code is
>>free of type errors. It has its utility --- it means that, if your
> ^
> static
>
>>code typechecks correctly, you don't have to pay attention to looking
>>for type errors, rather, you can spend your efforts (testing, code
> ^
> static
>
>>review time) looking for other sorts of errors.
>
> Like the dynamic type errors.
There is no such thing as a dynamic type error. Not in a
statically-typed language.
Unless you count type casts in. Type-inferring languages like ML or
Haskell don't even have them though - they are unnecessary. (Well, you
can always do a type cast by going through C, which would be excessively
awkward. The important point being: there was never pressure from the
language users to add a more convenient way to do type casts.)
Regards,
Jo
From: Joe Marshall
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <llqdvco5.fsf@ccs.neu.edu>
Joachim Durchholz <·················@web.de> writes:
> Joe Marshall wrote:
>
>> ·······@mindspring.com (Rayiner Hashem) writes:
>>
>>>Static typing will tell you exactly one thing --- that your code is
>>>free of type errors. It has its utility --- it means that, if your
>> ^
>> static
>>
>>>code typechecks correctly, you don't have to pay attention to looking
>>>for type errors, rather, you can spend your efforts (testing, code
>> ^
>> static
>>
>>> review time) looking for other sorts of errors.
>> Like the dynamic type errors.
>
> There is no such thing as a dynamic type error. Not in a
> statically-typed language.
> Unless you count type casts in.
Type casts *and* cases where a type union needs to be discriminated
and at least one clause raises an error. I would consider both of
those to be dynamic type errors.
If you do not consider the union discrimination case to be a type
error, then Lisp trivially does not raise dynamic type errors, either.
Joe Marshall wrote:
> Joachim Durchholz <·················@web.de> writes:
>>There is no such thing as a dynamic type error. Not in a
>>statically-typed language.
>>Unless you count type casts in.
>
> Type casts *and* cases where a type union needs to be discriminated
> and at least one clause raises an error. I would consider both of
> those to be dynamic type errors.
The error case for discriminated unions sounds interesting, but I don't
know what kind of error situation you mean.
With typical functional-language systems, there's certainly room for
detecting error cases, but they are usually handled in ways that make
them non-errors. I'm not sure whether that makes them "dynamic type
errors" by your definition, and I'm not sure that your definition
matches mine, so could you elaborate?
Regards,
Jo
From: Joe Marshall
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <65hg3pdz.fsf@comcast.net>
Joachim Durchholz <·················@web.de> writes:
> Joe Marshall wrote:
>
>> Joachim Durchholz <·················@web.de> writes:
>>>There is no such thing as a dynamic type error. Not in a
>>>statically-typed language.
>>> Unless you count type casts in.
>> Type casts *and* cases where a type union needs to be discriminated
>> and at least one clause raises an error. I would consider both of
>> those to be dynamic type errors.
>
> The error case for discriminated unions sounds interesting, but I
> don't know what kind of error situation you mean.
> With typical functional-language systems, there's certainly room for
> detecting error cases, but they are usually handled in ways that make
> them non-errors. I'm not sure whether that makes them "dynamic type
> errors" by your definition, and I'm not sure that your definition
> matches mine, so could you elaborate?
This code (thanks to Dirk Thierbach) showed up earlier in this thread:
data Foo a = Op (a -> a -> a) | Val a

instance Show a => Show (Foo a) where
    show (Op f)  = "operator"
    show (Val x) = show x

foocall (Op p) (Val x) (Val y) = p x y
foocall _ _ _ = error "dynamic type error"

foo f = foocall (f (Op (+))) (f (Val 3)) (f (Val 2))
As you can see, this code will pass a `static type check' and
therefore has no `static type errors', yet there is a line in there
that throws a runtime error, and in other sections of code, there
could be a call that matches that line, at which point a runtime
error ought to be generated.
This really is the same thing as when you call (CAR X) when X is bound
to something other than a pair or nil. The only difference is that
the definition of CAR has the error case built in to it, but FOOCALL
needed it manually written.
--
~jrm
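A usage sketch of the code above, assuming it loads as shown (with the
catch-all clause written foocall _ _ _): the first call leaves its
arguments alone, while the second hands foocall three Vals, so only the
catch-all clause matches and the run-time error fires.

    *Main> foo id
    5
    *Main> foo (\_ -> Val 0)
    *** Exception: dynamic type error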
Joe Marshall <·············@comcast.net> writes:
> This really is the same thing as when you call (CAR X) when X is bound
> to something other than a pair or nil. The only difference is that
> the definition of CAR has the error case built in to it, but FOOCALL
> needed it manually written.
Uh, that's not true, there will be a canned error message (and of
course a compiler warning) should you choose not to provide the
specific case -- typically, GHC will say:
Prelude Ratio> let f (x:xs) = True
Prelude Ratio> f []
*** Exception: <interactive>:1: Non-exhaustive patterns in function f
Prelude Ratio> head []
*** Exception: Prelude.head: empty list
-kzm
--
If I haven't seen further, it is by standing in the footprints of giants
From: Esa Pulkkinen <esa.pulkkinen>
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <87r7zzeioi.fsf@212.246.110.78>
··········@ii.uib.no writes:
> Uh, that's not true, there will be a canned error message (and of
> course a compiler warning) should you choose not to provide the
> specific case -- typically, GHC will say:
[SNIP]
> Prelude Ratio> head []
> *** Exception: Prelude.head: empty list
Interesting. This example is actually something I wish was fixed in
Haskell. I would rather have
head :: [a] -> Maybe a
or, even more generally:
head :: (Monad m) => [a] -> m a
Both of which would make this behaviour obvious. Of course, this should
apply to all projection functions for data types with more than one
alternative. At least if the projection function is not defined for
each alternative.
--
Esa Pulkkinen
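A minimal sketch of the two signatures suggested above (hypothetical
names; the second version leans on the Monad class's fail method as it
stood in Haskell 98):

    safeHead :: [a] -> Maybe a
    safeHead []    = Nothing
    safeHead (x:_) = Just x

    headM :: Monad m => [a] -> m a
    headM []    = fail "headM: empty list"
    headM (x:_) = return x

Used at type Maybe a, headM [] is just Nothing; used at IO a, it raises
an exception the caller can catch.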
On Sun, 23 Nov 2003 07:51:41 +0200, Esa Pulkkinen <esa.pulkkinen> wrote:
> Interesting. This example is actually something I wish was fixed in
> Haskell. I would rather have
>
> head :: [a] -> Maybe a
I rarely use head at all - pattern matching is usually more convenient and
it allows you to handle the potential error. It's definitely not as common as
car in Lisp.
--
__("< Marcin Kowalczyk
\__/ ······@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/
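A small sketch of that style (hypothetical name; the empty-list case is
handled right where the list is taken apart, instead of calling head and
hoping):

    firstOrDefault :: a -> [a] -> a
    firstOrDefault def xs =
      case xs of
        []    -> def        -- the empty-list case, handled on the spot
        (x:_) -> x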
Esa Pulkkinen <esa.pulkkinen> wrote:
> Interesting. This example is actually something I wish was fixed in
> Haskell. I would rather have
>
> head :: [a] -> Maybe a
That's Maybe.listToMaybe
Anyway, you can define such functions yourself if you like.
Best regards,
Tom
--
.signature: Too many levels of symbolic links
Joe Marshall <·············@comcast.net> wrote:
> This really is the same thing as when you call (CAR X) when X is bound
> to something other than a pair or nil. The only difference is that
> the definition of CAR has the error case built in to it, but FOOCALL
> needed it manually written.
And that's the important point: You have *control* over whether exceptions
will be thrown, or whether they won't.
If you don't want exceptions, then you can (given a reasonable application)
very often change your code slightly (as I did for the above example),
and you have solved the same problem in a way that is guaranteed to
throw no exception.
- Dirk
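Dirk's actual rewrite is not quoted here, but a minimal sketch of the
idea is to replace the error clause of the earlier foocall with an
ordinary Maybe value, so the type forces every caller to say what a
failed match means:

    data Foo a = Op (a -> a -> a) | Val a

    foocallSafe :: Foo a -> Foo a -> Foo a -> Maybe a
    foocallSafe (Op p) (Val x) (Val y) = Just (p x y)
    foocallSafe _      _       _       = Nothing

No exception can be thrown from foocallSafe; the "dynamic type error"
has become a value the caller must pattern-match on.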
From: Joe Marshall
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <k75v58ck.fsf@ccs.neu.edu>
Dirk Thierbach <··········@gmx.de> writes:
> Joe Marshall <·············@comcast.net> wrote:
>
>> This really is the same thing as when you call (CAR X) when X is bound
>> to something other than a pair or nil. The only difference is that
>> the definition of CAR has the error case built in to it, but FOOCALL
>> needed it manually written.
>
> And that's the important point: You have *control* about whether exceptions
> will be thrown, if whether they won't.
>
> If you don't want exceptions, then you can (given a reasonable application)
> very often change your code slightly (as I did for the above example),
> and you have solved the same problem in a way that is guaranteed to
> throw no exception.
As one can in Lisp:
(if (consp x) (car x) *reasonable-default-value*)
or the sledgehammer approach:
(ignore-errors (car x))
In article <············@news.oberberg.net>,
Joachim Durchholz <·················@web.de> wrote:
>Joe Marshall wrote:
>
>> ·······@mindspring.com (Rayiner Hashem) writes:
>>
>>>Static typing will tell you exactly one thing --- that your code is
>>>free of type errors. It has its utility --- it means that, if your
>> ^
>> static
>>
>>>code typechecks correctly, you don't have to pay attention to looking
>>>for type errors, rather, you can spend your efforts (testing, code
>> ^
>> static
>>
>>>review time) looking for other sorts of errors.
>>
>> Like the dynamic type errors.
>
>There is no such thing as a dynamic type error. Not in a
>statically-typed language.
You can still have dynamic type errors when the language supports
subtypes. A function that accepts a particular type also accepts any of
its subtypes, but it's possible that it doesn't work properly for some of
them.
There's a simple proof of this: A dynamically-typed language is equivalent
to a statically-typed language where all functions are declared to accept
and return the type at the root of the type hierarchy. Since dynamic type
errors are possible in this language, it shows that there's at least one
statically-typed language that permits dynamic type errors.
--
Barry Margolin, ··············@level3.com
Level(3), Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
Barry Margolin <··············@level3.com> wrote in message news:<·················@news.level3.com>...
> In article <············@news.oberberg.net>,
> Joachim Durchholz <·················@web.de> wrote:
> >There is no such thing as a dynamic type error. Not in a
> >statically-typed language.
>
> You can still have dynamic type errors when the language supports
> subtypes. A function that accepts a particular type also accepts any of
> its subtypes, but it's possible that it doesn't work properly for some of
> them.
You can have dynamic type errors in a static language if it has an
escape hatch that allows for discriminated unions. In other words, it
allows the program to conditionally treat a piece of memory as having
more than one possible type.
If you don't have this escape hatch, then dynamic typing can be
bootstrapped only in some very circuitous, expensive way. For example,
you pick the most flexible data structure in the language---such as
the character string---and force it to represent everything. The
integer 42 is represented, perhaps, as the string "42", and addition
is done on strings, and so forth. Or, instead of a discriminated union,
you make a large aggregate. Since you can't overlap the memory
between an integer, floating point number or string (for instance),
you make a container that has all three and an indicator of which
value is intended.
We can look at Lisp as a dynamic language that is bootstrapped in a
simple, low-level static language with discriminated unions. That
static language ensures that a memory cell is always treated properly.
A rigid representation scheme allows some bits of memory to be trusted
to indicate what can be done with the remaining bits. Unless an escape
hatch has been invoked to defeat the safety, Lisp will not, for
instance, add two cons cell references as though they were integers.
Or, for instance, passing an integer to a function that needs a cons
cell will not blow the parameter passing mechanism: the representation
was chosen so that the call is compatible.
Your Lisp compiler and/or interpreter eliminated these errors at the
time they were built.
When your Lisp system throws some type mismatch condition at you,
that's not really a type error; it's a soft condition detected by a
statically typed program (or safe machine code generated by a program
that was statically typed).
Static typing forms that rigid substrate that lets us bootstrap more
flexible things. At some point, it helps to have the assurance that a
variable points to valid memory with a known, rigid representation.
Some programmers want to carry this ``stone knife'' with them to
higher levels of programming.
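A minimal sketch of that substrate idea, written in Haskell purely for
illustration (this is not how any particular Lisp is implemented): one
static sum type whose tag is inspected at run time.

    data LispValue = LInt Integer
                   | LReal Double
                   | LString String
                   | LCons LispValue LispValue
                   | LNil

    -- The static checker is satisfied once and for all; it is the
    -- run-time tag check that turns a misuse into a soft error signal
    -- rather than a corrupted memory access.
    car :: LispValue -> LispValue
    car (LCons a _) = a
    car LNil        = LNil
    car _           = error "car: argument is not a list"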
> There's a simple proof of this: A dynamically-typed language is equivalent
> to a statically-typed language where all functions are declared to accept
> and return the type at the root of the type hierarchy.
Right, but that static language needs some type introspection. It's
not an entirely static language. To be useful, this type hierarchy
needs a way to ask an object what its type is, so for instance the
most specific method can be applied to it. That constitutes a form of
dynamic typing. Otherwise, if a reference to an object can be
converted to a supertype reference, but the reverse conversion is not
possible, that is next to useless!
Barry Margolin wrote:
>>
>>There is no such thing as a dynamic type error. Not in a
>>statically-typed language.
>
> You can still have dynamic type errors when the language supports
> subtypes.
Only if the type system has holes or its subtyping rules are unsound (one
instance of the former is the presence of downcasts, instances of the
latter are the infamous covariant function types, or subtyping on mutable
collections).
> A function that accepts a particular type also accepts any of
> its subtypes, but it's possible that it doesn't work properly for some of
> them.
Then it would not be a subtype.
> There's a simple proof of this: A dynamically-typed language is equivalent
> to a statically-typed language where all functions are declared to accept
> and return the type at the root of the type hierarchy.
No, this is wrong. All subtypes of the type TOP->TOP you sketch have the
form TOP->t (for arbitrary t). That is, no function of this type can do
anything interesting with its argument, as long as it does not do an
(inherently type-unsafe) downcast.
This confusion about the direction (variance) of subtyping for function
types unfortunately still is common.
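For reference, the standard subtyping rule for function types, which is
where the variance comes from:

    if  A' <: A  and  B <: B'  then  (A -> B) <: (A' -> B')

Functions are contravariant in the argument and covariant in the result.
In particular, (A -> B) <: (TOP -> TOP) forces TOP <: A, hence A = TOP,
while B may be anything -- exactly the TOP -> t shape described above.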
- Andreas
--
Andreas Rossberg, ········@ps.uni-sb.de
"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.
In article <············@grizzly.ps.uni-sb.de>,
Andreas Rossberg <········@ps.uni-sb.de> wrote:
>Barry Margolin wrote:
>>>
>>>There is no such thing as a dynamic type error. Not in a
>>>statically-typed language.
>>
>> You can still have dynamic type errors when the language supports
>> subtypes.
>
>Only if the type system has holes or its subtyping rules are unsound (one
>instance of the former is the presence of downcasts, instances of the
>latter are the infamous covariant function types, or subtyping on mutable
>collections).
>
>> A function that accepts a particular type also accepts any of
>> its subtypes, but it's possible that it doesn't work properly for some of
>> them.
>
>Then it would not be a subtype.
I disagree. Sometimes you need to special-case certain subtypes. For
instance, flightless-bird is a subtype of bird, but a function that takes
objects of type bird and performs operations on the assumption that they
can fly would have a type error.
Perhaps the bird type should be split into two subtypes, flying-bird and
flightless-bird, and that function should only accept flying-bird. What
this shows is that you have to design your type hierarchy carefully to
prevent type errors like this.
--
Barry Margolin, ··············@level3.com
Level(3), Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
Barry Margolin wrote:
> In article <············@grizzly.ps.uni-sb.de>,
> Andreas Rossberg <········@ps.uni-sb.de> wrote:
>
>>Barry Margolin wrote:
[...]
>>>A function that accepts a particular type also accepts any of
>>>its subtypes, but it's possible that it doesn't work properly for some of
>>>them.
>>
>>Then it would not be a subtype.
>
>
> I disagree. Sometimes you need to special-case certain subtypes. For
> instance, flightless-bird is a subtype of bird, but a function that takes
> objects of type bird and performs operations on the assumption that they
> can fly would have a type error.
What you're describing is inheritance. Inheritance is one way to provide
for subtyping, but it places constraints on designs that encourage the
introduction of holes in the type system to allow for the
"special-cases" to which you refer. Some people even erroneously
conclude we're better off with no static typing systems at all. :-)
[...]
-thant
Barry Margolin <··············@level3.com> wrote:
> >
> >> A function that accepts a particular type also accepts any of
> >> its subtypes, but it's possible that it doesn't work properly for some
of
> >> them.
> >
> >Then it would not be a subtype.
>
> I disagree. Sometimes you need to special-case certain subtypes. For
> instance, flightless-bird is a subtype of bird, but a function that takes
> objects of type bird and performs operations on the assumption that they
> can fly would have a type error.
Sorry, but that is false. Restriction is not subtyping. The very meaning of
A being a subtype of B is that all values of type A can be used in every
context where values of type B can be used. The type of objects without a "fly"
method can never be a subtype of the type of objects with such a method.
If, on the other hand, you give your flightless bird a "fly" method and
raise an error explicitly on every call, then you have created a subtype
(well, actually the same type), and as far as typing is concerned everything
is fine because the method exists. You cannot reasonably argue that an
exception you throw yourself manually is a type error. Rather, you have
created a design error, by violating what often is called the Liskov
Substitution Principle in the OO community (which essentially says that you
should only piggyback subtyping if there also is a true semantic inclusion
relation in the problem domain you model).
Or perhaps you are confusing subtyping with inheritance, and types with
classes?
- Andreas
In article <············@grizzly.ps.uni-sb.de>,
Andreas Rossberg <········@ps.uni-sb.de> wrote:
>Or perhaps you are confusing subtyping with inheritance, and types with
>classes?
Since those are the tools provided to implement types in many programming
languages, I think I'm not the only one.
--
Barry Margolin, ··············@level3.com
Level(3), Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
Barry Margolin <··············@level3.com> wrote:
> >Or perhaps you are confusing subtyping with inheritance, and types with
> >classes?
>
> Since those are the tools provided to implement types in many programming
> languages, I think I'm not the only one.
Yes, unfortunately many mainstream languages lump them together (<sarcasm>
which is not really surprising, since lumping orthogonal things together
into one mess is at the core of the OO philosophy </sarcasm> :-) ).
The following classic paper may interest you:
William Cook, Walter Hill, Peter Canning
Inheritance Is Not Subtyping.
Proc. of the ACM Symp. on Principles of Programming Languages (POPL), 1990
Sorry, I have no online link.
- Andreas
Andreas Rossberg wrote:
> The following classic paper may interest you:
>
> William Cook, Walter Hill, Peter Canning
> Inheritance Is Not Subtyping.
> Proc. of the ACM Symp. on Principles of Programming Languages (POPL), 1990
>
> Sorry, I have no online link.
A Google search revealed that it is available for money on ACM's site, at
http://portal.acm.org/citation.cfm?id=96721&coll=portal&dl=ACM&CFID=14242028&CFTOKEN=58602608
Citeseer comes up empty, but lists lots of interesting (and influential)
papers that cite from that paper.
Regards,
Jo
In article <···················@news.indigo.ie>,
David Golden <············@oceanfree.net> wrote:
>It might be worth pointing out that types and classes aren't the same in
>Common Lisp anyway, although "defining a class defines a type of the same
>name", you CAN make types without defining classes.
That's true, although the distinction between types and classes in CL is
not the same as the distinction that others are talking about. Types in CL
are a messy mixture of many different concepts, and I'm not sure if any of
them correspond directly to the way the term is used in computer science or
mathematical literature.
--
Barry Margolin, ··············@level3.com
Level(3), Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
Thant Tessman <·····@acm.org> wrote:
>
> > Mh, I'm not sure in what sense templates support subtyping (or typing at
> > all, for that matter). ;-)
>
> They do in the sense I just described, only the type is inferred from
> the template code (which is itself annotated). Admittedly this is not
> the way C++ or SML programmers think about it, but I've become convinced
> this is a potentially fruitful point-of-view.
The problem is: template declarations are not really type-checked themselves
(and cannot be, as things stand), only template instantiations are (of
course). So I feel that calling templates typed is equally justified as
calling preprocessor macros typed. ;-)
- Andreas
Andreas Rossberg wrote:
> Thant Tessman <·····@acm.org> wrote:
>
>>>Mh, I'm not sure in what sense templates support subtyping (or typing at
>>>all, for that matter). ;-)
>>
>>They do in the sense I just described, only the type is inferred from
>>the template code (which is itself annotated). Admittedly this is not
>>the way C++ or SML programmers think about it, but I've become convinced
>>this is a potentially fruitful point-of-view.
>
>
> The problem is: template declarations are not really type-checked themselves
> (and cannot be, as things stand), only template instantiations are (of
> course). So I feel that calling templates typed is equally justified as
> calling preprocessor macros typed. ;-)
It's not that templates are type-checked (which as you said they're
not). It's that the process of instantiating a template *is* the process
of inferring the type(s) it describes *and* checking its validity. A
template instantiation error is a type error. For example, if you
instantiate std::set with a type T and compile the resulting code, you
implicitly confirm that type T supports a copy constructor and a
comparison operator named '<'. In this sense, std::set describes a type T.
I started a rambling essay, but I'll cut myself off and just say that
what I'm interested in is a calculus that will allow me to describe a
type in the same sense that C++ template *instantiation* describes a
type. I think it would be similar to the way SML functors describe
types, but with the twist that C++ supports various other features. I
think the only really important one is static overloading, but this is
the feature that convinces me that--at least in this system--type
annotations are unavoidable.
-thant
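For comparison, a sketch of how the same requirement reads in Haskell,
where the constraint is written into the type rather than discovered
when a particular instantiation is compiled (uses the standard Data.Set;
the helper name is made up):

    import qualified Data.Set as Set

    -- The Ord constraint plays the role of "T supports a comparison
    -- operator '<'": it is stated up front, not checked per instantiation.
    insertBoth :: Ord a => a -> a -> Set.Set a -> Set.Set a
    insertBoth x y s = Set.insert x (Set.insert y s)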
Andreas Rossberg wrote:
> Marco Antoniotti wrote:
>
>>>Or perhaps you are confusing subtyping with inheritance, and types with
>>>classes?
>>
>>... and here we go... So what is exactly the difference between
>>subtyping and inheritance and between types and classes? If possible
>>we'd like a short summary of length between the bibtex reference for
>>Russell's "Principia Mathematica" and the textual inclusion of its
>>contents. :)
>
>
> :)
>
> Types describe the interfaces of objects, while classes are factories for
> objects. Subtyping is between types, and inheritance happens between
> classes, and is just implementation detail. A class produces objects of
> some particular type, but a derived class does not necessarily produce
> objects of a subtype (because of self types etc). On the other hand,
> objects of unrelated classes can well be in subtyping relation wrt the
> operations they provide. Coupling both concepts leads to unnecessary
> restrictions on subtyping as well as inheritance, because you always have
> to enforce consistency.
This is very interesting, however, I fail to see what could be a
counterexample (an "instance" of a subclass that is not a subtype of the
supertype; btw. are types associated to classes or not in this
framework?). Is there a simple example of this?
> Sorry, but providing more than this superficial summary would be a
> significant amount of work for me, which already has been done by more
> competent people. Either try to get hold of the paper I cited, or look into
> some of the Theory of Objects papers from Abadi/Cardelli (all available
> from Luca Cardelli's homepage). Their work is also available in book form:
>
> @book{Abadi+Cardelli:TheoryOfObjects,
> author = "Mart{\'\i}n Abadi and Luca Cardelli",
> title = "A Theory of Objects",
> publisher = "Springer-Verlag",
> address = "New York",
> year = 1996,
> }
>
> But it's heavy stuff.
That's why I asked :)
Cheers
--
Marco
Marco Antoniotti wrote:
> This is very interesting, however, I fail to see what could be a
> counterexample (an "instance" of a subclass that is not a subtype of the
> supertype; btw. are types associated to classes or not in this
> framework?). Is there a simple example of this?
A square is a rectangle, but...
-thant
David Golden <············@oceanfree.net> writes:
> From hyperspec:
>
> (defun equidimensional (a)
>   (or (< (array-rank a) 2)
>       (apply #'= (array-dimensions a)))) => EQUIDIMENSIONAL
Which raises a semi-interesting question. How come you can't
call = with no arguments (getting the result T, of course)?
You *can* call it with exactly one argument. I appreciate
that there may be no answer other than "just because, and
anyway who cares?".
--
Gareth McCaughan
.sig under construc
Gareth McCaughan <·····@g.local> writes:
> David Golden <············@oceanfree.net> writes:
>
> > From hyperspec:
> >
> > (defun equidimensional (a)
> >   (or (< (array-rank a) 2)
> >       (apply #'= (array-dimensions a)))) => EQUIDIMENSIONAL
>
> Which raises a semi-interesting question. How come you can't
> call = with no arguments (getting the result T, of course)?
> You *can* call it with exactly one argument. I appreciate
> that there may be no answer other than "just because, and
> anyway who cares?".
Probably the reason is either simple oversight or else the fact that it
catches a lot of program errors, compared to the likelihood of someone
doing it for legit reasons. Neither of these is a strong argument, mind
you.
Then again, I'd feel even more queasy about (> 5), (>), (<), and (< 5)
returning T, which would probably be the next thing asked for. No good
reason here either, I guess, but...
Kent M Pitman wrote:
> Then again, I'd feel even more queasy about (> 5), (>), (<), and (< 5)
> returning T, which would probably be the next thing asked for. No good
> reason here either, I guess, but...
But (> 5) and (< 5) *do* return true in a conforming implementation.
Paul
+ Gareth McCaughan <·····@g.local>:
| Which raises a semi-interesting question. How come you can't
| call = with no arguments (getting the result T, of course)?
| You *can* call it with exactly one argument.
Well, but why should you? The case with one argument is a natural
boundary case. The case with none is not. Consider: A call to #'=
returns true if all arguments are equal. It is the kind of mistake a
beginning student of mathematical logic would make to translate this
into "for every argument x, x is equal" -- the universal quantifier over
the empty set is always true. But of course, that translation is
nonsensical -- not that I am accusing you of this mistake, mind you.
A better one might be recursive: (= x y z ...) is the same as
(and (= x y) (= y z ...)), and (= x) => T is a natural starting point
for that recursion, but there is no way to continue that argument to
cover the case of no arguments.
--
* Harald Hanche-Olsen <URL:http://www.math.ntnu.no/~hanche/>
- Debating gives most of us much more psychological satisfaction
than thinking does: but it deprives us of whatever chance there is
of getting closer to the truth. -- C.P. Snow
Harald Hanche-Olsen <······@math.ntnu.no> writes:
> + Gareth McCaughan <·····@g.local>:
>
> | Which raises a semi-interesting question. How come you can't
> | call = with no arguments (getting the result T, of course)?
> | You *can* call it with exactly one argument.
>
> Well, but why should you? The case with one argument is a natural
> boundary case. The case with none is not. Consider: A call to #'=
> returns true if all arguments are equal. It is the kind of mistake a
> beginning student of mathematical logic would make to translate this
> into "for every argument x, x is equal", the universal quantifier over
> the empty set is always true, But of course, that translation is
> nonsensical -- not that I am accusing you of this mistake, mind you.
> A better one might be recursive: (= x y z ...) is the same as
> (and (= x y) (= y z ...)), and (= x) => T is a natural starting point
> for that recursion, but there is no way to continue that argument to
> cover the case of no arguments.
The correct translation is: For every pair x,y of arguments, x=y.
With this translation, (=) should return T.
I'm not sure n=1 is a more natural stopping point for the recursion
you mention than n=2 is. If we didn't have to handle n=1 we could
say
(defun = (&rest args)
  (if (just-two args)
      (apply #'two-arg-= args)
      (and (two-arg-= (first args) (second args))
           (apply #'= (rest args)))))
which seems a lot more "natural" to me than doing (= a b)
using a call to (two-arg-= a b) *and* one to (= b), which
is the result if you continue the recursion so as to use
the n=1 case:
(defun = (&rest args)
  (if (just-one args)
      t
      (and (two-arg-= (first args) (second args))
           (apply #'= (rest args)))))
On the other hand, it's true that the code is a bit
shorter that way.
--
Gareth McCaughan
.sig under construc
Gareth McCaughan <·····@g.local> writes:
> The correct translation is: For every pair x,y of arguments, x=y.
> With this translation, (=) should return T.
nil ==> T, therefore (=) should return T.
(and <, <=, >, >= and /= too).
--
__Pascal_Bourguignon__ . * * . * .* .
http://www.informatimago.com/ . * . .*
* . . /\ () . *
Living free in Alaska or in Siberia, a . . / .\ . * .
grizzli's life expectancy is 35 years, .*. / * \ . .
but no more than 8 years in captivity. . /* o \ .
http://www.theadvocates.org/ * '''||''' .
SCO Spam-magnet: ··········@sco.com ******************
Pascal Bourguignon <····@thalassa.informatimago.com> writes:
> Gareth McCaughan <·····@g.local> writes:
> > The correct translation is: For every pair x,y of arguments, x=y.
> > With this translation, (=) should return T.
>
> nil ==> T, therefore (=) should return T.
>
> (and <, <=, >, >= and /= too).
I agree with your conclusion, but I'm not sure I understand
the reasoning that leads up to it. Was I meant to?
--
Gareth McCaughan
.sig under construc
Gareth McCaughan <·····@g.local> writes:
> Pascal Bourguignon <····@thalassa.informatimago.com> writes:
>
> > Gareth McCaughan <·····@g.local> writes:
> > > The correct translation is: For every pair x,y of arguments, x=y.
> > > With this translation, (=) should return T.
> >
> > nil ==> T, therefore (=) should return T.
> >
> > (and <, <=, >, >= and /= too).
>
> I agree with your conclusion, but I'm not sure I understand
> the reasoning that leads up to it. Was I meant to?
All right, we also have nil ==> nil so they could return nil as well.
But the reasoning comes from universal quantifier behavior.
for all x in R such that ((x<0) and (x>0)), x /= x
is true. The only thing is that there is no x such that ((x<0) and (x>0)).
But we still have x /= x, in those cases...
That's why I'd prefer to have (/=) be T rather than nil.
The same for the other comparisons.
--
__Pascal_Bourguignon__ . * * . * .* .
http://www.informatimago.com/ . * . .*
There is no worse tyranny than to force * . . /\ () . *
a man to pay for what he does not . . / .\ . * .
want merely because you think it .*. / * \ . .
would be good for him. -- Robert Heinlein . /* o \ .
http://www.theadvocates.org/ * '''||''' .
SCO Spam-magnet: ··········@sco.com ******************
Pascal Bourguignon <····@thalassa.informatimago.com> writes:
> Gareth McCaughan <·····@g.local> writes:
>
> > Pascal Bourguignon <····@thalassa.informatimago.com> writes:
> >
> > > Gareth McCaughan <·····@g.local> writes:
> > > > The correct translation is: For every pair x,y of arguments, x=y.
> > > > With this translation, (=) should return T.
> > >
> > > nil ==> T, therefore (=) should return T.
> > >
> > > (and <, <=, >, >= and /= too).
> >
> > I agree with your conclusion, but I'm not sure I understand
> > the reasoning that leads up to it. Was I meant to?
>
> All right, we also have nil ==> nil so they could return nil as well.
>
> But the reasoning comes from universal quantifier behavior.
>
> for all x in R such that ((x<0) and (x>0)), x /= x
>
> is true. The only thing is that there is no x such that ((x<0) and (x>0)).
>
> But we still have x /= x, in those cases...
>
> That's why I'd prefer to have (/=) be T rather than nil.
> The same for the other comparisons.
I *think* you're saying the same as me. (=) means
"for every pair of arguments x,y in <the empty set>,
x=y"; there is no such pair, so the statement is
vacuously true, so (=) should return T. I'm still
not sure this has anything to do with the fact
that NIL implies T.
(/=) should indeed return T for the same reason.
Anyway, none of this is very important. (=) and (/=)
are not defined, and it's hard to make a very convincing
case that defining them would change the world :-).
--
Gareth McCaughan
.sig under construc
Gareth McCaughan <·····@g.local> writes:
> Pascal Bourguignon <····@thalassa.informatimago.com> writes:
>
> > Gareth McCaughan <·····@g.local> writes:
> >
> > > Pascal Bourguignon <····@thalassa.informatimago.com> writes:
> > >
> > > > Gareth McCaughan <·····@g.local> writes:
> > > > > The correct translation is: For every pair x,y of arguments, x=y.
> > > > > With this translation, (=) should return T.
> > > >
> > > > nil ==> T, therefore (=) should return T.
> > > >
> > > > (and <, <=, >, >= and /= too).
> > >
> > > I agree with your conclusion, but I'm not sure I understand
> > > the reasoning that leads up to it. Was I meant to?
> >
> > All right, we also have nil ==> nil so they could return nil as well.
> >
> > But the reasoning comes from universal quantifier behavior.
> >
> > for all x in R such that ((x<0) and (x>0)), x /= x
> >
> > is true. The only thing is that there is no x such that ((x<0) and (x>0)).
> >
> > But we still have x /= x, in those cases...
> >
> > That's why I'd prefer to have (/=) be T rather than nil.
> > The same for the other comparisons.
>
> I *think* you're saying the same as me. (=) means
Yes.
> "for every pair of arguments x,y in <the empty set>,
> x=y"; there is no such pair, so the statement is
> vacuously true, so (=) should return T. I'm still
> not sure this has anything to do with the fact
> that NIL implies T.
You could define the empty set intensionally with the predicate:
ES(x,y) = FALSE
for every pair (x,y), (x,y) in {} <=> ES(x,y)
[ (x,y) in {} ==> x=y ] <=> [ ES(x,y) ==> x=y ]
[ FALSE ==> TRUE ] <=> [ FALSE ==> TRUE ]
> (/=) should indeed return T for the same reason.
>
> Anyway, none of this is very important. (=) and (/=)
> are not defined, and it's hard to make a very convincing
> case that defining them would change the world :-).
--
__Pascal_Bourguignon__ . * * . * .* .
http://www.informatimago.com/ . * . .*
There is no worse tyranny than to force * . . /\ () . *
a man to pay for what he does not . . / .\ . * .
want merely because you think it .*. / * \ . .
would be good for him. -- Robert Heinlein . /* o \ .
http://www.theadvocates.org/ * '''||''' .
SCO Spam-magnet: ··········@sco.com ******************
Joachim Durchholz <·················@web.de> writes:
> There is no such thing as a dynamic type error. Not in a
> statically-typed language.
> Unless you count type casts in. Type-inferring languages like ML or
> Haskell don't even have them though - they are unnecessary. (Well, you
> can always do a type cast by going through C, which would be
> excessively awkward. The important point being: there was never
> pressure from the language users to add a more convenient way to do
> type casts.)
That's interesting. How do you handle a case like this in such a language?:
double b;
b = sqrt(4*x^4+8*x^3-3*x^2+3*x-8*x+2);
with x a floating point number? Do you have no other choice but to
define b as a complex number, or do these languages fire up maple and
figure it out?
Mario S. Mommer <········@yahoo.com> wrote in message news:<··············@cupid.igpm.rwth-aachen.de>...
> Joachim Durchholz <·················@web.de> writes:
> > There is no such thing as a dynamic type error. Not in a
> > statically-typed language.
> > Unless you count type casts in. Type-inferring languages like ML or
> > Haskell don't even have them though - they are unnecessary. (Well, you
> > can always do a type cast by going through C, which would be
> > excessively awkward. The important point being: there was never
> > pressure from the language users to add a more convenient way to do
> > type casts.)
>
> That's interesting. How do you handle a case like this in such a language?:
>
> double b;
> b = sqrt(4*x^4+8*x^3-3*x^2+3*x-8*x+2);
>
> with x a floating point number? Do you have no other choice but to
> define b as a complex number, or do these languages fire up maple and
> figure it out?
Hmmm maybe you could do this:
f x = sqrt (4.0*x^4.0+8.0*x^3.0-3.0*x^2.0+3.0*x-8.0*x+2.0)
best wishes, Isaac
·····@yahoo.com (Isaac Gouy) writes:
> Mario S. Mommer <········@yahoo.com> wrote:
> > That's interesting. How do you handle a case like this in such a language?:
> >
> > double b;
> > b = sqrt(4*x^4+8*x^3-3*x^2+3*x-8*x+2);
> >
> > with x a floating point number? Do you have no other choice but to
> > define b as a complex number, or do these languages fire up maple and
> > figure it out?
>
> Hmmm maybe you could do this:
> f x = sqrt (4.0*x^4.0+8.0*x^3.0-3.0*x^2.0+3.0*x-8.0*x+2.0)
That wasn't the point.
How does "it" rule out that the argument to sqrt is not negative?
Since you seem to have tested it, may I conclude that there is a hole
in your type system?
Mario S. Mommer <········@yahoo.com> writes:
> How does "it" rule out that the argument to sqrt is negative?
I guess this is another "static typing cannot guarantee against all
runtime errors so-it-must-be-worthless" argument?
I'm sure you can get yet another long thread from this, but do us all
a favor and look up the earlier discussions on Google first.
-kzm
PS: If you are genuinely asking about this, and not just
trolling/working hard to confirm your prejudices, then you have at
least two options in a statically typed language, along the lines of
sqrt :: Num a => a -> Complex
and
sqrt :: Num a => a -> a -- giving a runtime error for
-- negative argument
--
If I haven't seen further, it is by standing in the footprints of giants
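Two concrete sketches along those lines (hypothetical names; for the
record, the Prelude's own sqrt on Double is total and yields NaN for a
negative argument rather than raising an error):

    import Data.Complex

    -- Option 1: never fails; a negative argument just produces a
    -- complex result.
    sqrtC :: Double -> Complex Double
    sqrtC x = sqrt (x :+ 0)

    -- A Maybe-returning variant of option 2: the run-time check is
    -- still there, but the type forces the caller to handle the
    -- negative case.
    sqrtM :: Double -> Maybe Double
    sqrtM x | x < 0     = Nothing
            | otherwise = Just (sqrt x)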
··········@ii.uib.no writes:
> Mario S. Mommer <········@yahoo.com> writes:
>
> > How does "it" rule out that the argument to sqrt is negative?
>
> I guess this is another "static typing cannot guarantee against all
> runtime errors so-it-must-be-worthless" argument?
It is not me that is making unreasonable claims. The above is a fairly
simple error, so it seems that in the end you will have to test
anyway.
> PS: If you are genuinely asking about this, and not just
> trolling/working hard to confirm your prejudices, then you have at
> least two options in a statically typed language, along the lines of
>
> sqrt :: Num a => a -> Complex
> and
> sqrt :: Num a => a -> a -- giving a runtime error for
> -- negative argument
Someone said that in Haskell you never made casts. I find this rather
hard to believe, so in that sense it is a genuine question. I don't
understand what you have written above, so I don't know what it
implies.
How do you deal with this case? Given integer n, evaluate the
expression
(n^3)/3+n^2-n/3
The result is always an integer. Can Haskell infer that? And if not,
how do I tell him to believe me that this is an integer? And if I can
do that, then what is the guarantee that I am not mistaken?
Mario S. Mommer <········@yahoo.com> wrote in message news:<··············@cupid.igpm.rwth-aachen.de>...
> ··········@ii.uib.no writes:
> > Mario S. Mommer <········@yahoo.com> writes:
> >
> > > How does "it" rule out that the argument to sqrt is negative?
> >
> > I guess this is another "static typing cannot guarantee against all
> > runtime errors so-it-must-be-worthless" argument?
>
> It is not me that is making unreasonable claims. The above is a fairly
> simple error, so it seems that in the end you will have to test
> anyway.
The arch staticist Cardelli states: "Even statically checked languages
usually need to perform tests at run time to achieve safety." p4 'Type
Systems'
> Someone said that in Haskell you never made casts. I find this rather
> hard to believe, so in that sense it is a genuine question. I don't
> understand what you have written above, so I don't know what it
> implies.
>
> How do you deal with this case? Given integer n, evaluate the
> expression
>
> (n^3)/3+n^2-n/3
>
> The result is always an integer. Can Haskell infer that? And if not,
> how do I tell him to believe me that this is an integer? And if I can
> do that, then what is the guarantee that I am not mistaken?
Start = f 5
f n = (n^3)/3+n^2-n/3
answer was 65 (This is Clean, not Haskell)
The inferred types are:
f :: !.Int -> Int
Start :: Int
> what is the guarantee that I am not mistaken?
can you rephrase the question please
best wishes, Isaac
> > The inferred types are:
> > f :: !.Int -> Int
> > Start :: Int
>
> I don't understand your programming language, so the above doesn't
> mean much to me.
That's your problem. If you don't know Clean or Haskell, the type
system won't make sense to you. It's very different from Common Lisp's.
Basically, the above means the following:
f :: !.Int -> Int --- The function 'f' maps an Int (integer) to an Int
Start :: Int --- The symbol 'Start' has the type Int
These type declarations are what the compiler uses to infer the types
of variables.
> Does Clean resort to an algebra system?
Nope. It inferred it from the types of the operations. It uses the
type of each function in the expression, as shown above, to determine
the type of the result.
> What if the value of 'n' wasn't available at compile-time, but was
> rather entered by a user, for instance?
You generally have to know at least the type of 'n'. See the slide
titled "getting ints with prompt" at
http://216.239.37.104/search?q=cache:5a5iH5RO9dYJ:www.cs.waikato.ac.nz/~bernhard/209/haskell/IO_2.ppt+input+output+in+haskell+read+float&hl=en&ie=UTF-8
> If I add a cast to integer for the above formula (or construct an int
> from the result of the above), how does "it" know that I am right?
You can't cast, so it's a moot point. Anyway, Haskell (which apparently
has a different type definition for '/' than Clean) interprets the
result of the original expression as 65.0, which is of type float. If
you want to store it to an int, you have to use 'truncate' or 'round'.
That's not a cast, but a numeric conversion that explicitly states the
intention to drop any fractional component.
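For what it's worth, that reading is easy to check in GHCi (a sketch;
defaulting resolves n to Double because of the Fractional '/'):

    Prelude> let n = 5 in (n^3)/3 + n^2 - n/3
    65.0
    Prelude> let n = 5 in truncate ((n^3)/3 + n^2 - n/3) :: Int
    65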
·····@yahoo.com (Isaac Gouy) writes:
> > How do you deal with this case? Given integer n, evaluate the
> > expression
> >
> > (n^3)/3+n^2-n/3
> >
> > The result is always an integer. Can Haskell infer that? And if not,
> > how do I tell him to believe me that this is an integer? And if I can
> > do that, then what is the guarantee that I am not mistaken?
>
> Start = f 5
> f n = (n^3)/3+n^2-n/3
>
> answer was 65 (This is Clean, not Haskell)
>
> The inferred types are:
> f :: !.Int -> Int
> Start :: Int
I don't understand your programming language, so the above doesn't
mean much to me.
Does Clean resort to an algebra system? The above looks more like a
trick, where it actually performed the computation (with n=5, I
presume) and concluded it was an int. CMUCL would do exactly the same,
btw.
What if the value of 'n' wasn't available at compile-time, but was
rather entered by a user, for instance?
> > what is the guarantee that I am not mistaken?
> can you rephrase the question please
If I add a cast to integer for the above formula (or construct an int
from the result of the above), how does "it" know that I am right?
Mario S. Mommer <········@yahoo.com> wrote in message news:<··············@cupid.igpm.rwth-aachen.de>...
> Does Clean resort to an algebra system? The above looks more like a
> trick, where it actually performed the computation (with n=5, I
> presume) and concluded it was an int. CMUCL would do exactly the same,
> btw.
Let me put that concern to rest:
Start = "hello"
f n = (n^3)/3 + n^2 - n/3
The inferred types are:
Start :: .{#Char}
f :: !.Int -> Int
As we haven't performed any type conversions on 'n', and we haven't
included any rational library, and the literals in f are Int - n must
be an Int (if it wasn't there'd be an error performing one of the
operations - there's no implicit coercion between Real and Int).
> > > what is the guarantee that I am not mistaken?
> > can you rephrase the question please
>
> If I add a cast to integer for the above formula (or construct an int
> from the result of the above), how does "it" know that I am right?
Given that 'n' must be Int, so must be the result.
best wishes, Isaac
·····@yahoo.com (Isaac Gouy) writes:
> Mario S. Mommer <········@yahoo.com> wrote
>
> > Does Clean resort to an algebra system? The above looks more like a
> > trick, where it actually performed the computation (with n=5, I
> > presume) and concluded it was an int. CMUCL would do exactly the same,
> > btw.
>
> Let me put that concern to rest:
> Start = "hello"
> f n = (n^3)/3 + n^2 - n/3
>
> The inferred types are:
> Start :: .{#Char}
> f :: !.Int -> Int
Ah - I see.
> As we haven't performed any type conversions on 'n', and we haven't
> included any rational library, and the literals in f are Int - n must
> be an Int (if it wasn't there'd be an error performing one of the
> operations - there's no implicit coercion between Real and Int).
Well, it seems that it is doing 5/3 -> 1. Excuse me, but this is
really broken.
> Given that 'n' must be Int, so must be the result.
Oh no:
n^2/2+n/3
is very likely not an integer, even for integer n.
The point of the example was that the arithmetic included rationals,
but the result was always an integer. One cannot reasonably expect the
type checker to notice that.
Mario S. Mommer <········@yahoo.com> wrote in message news:<··············@cupid.igpm.rwth-aachen.de>...
> Well, it seems that it is doing 5/3 -> 1. Excuse me, but this is
> really broken.
It may not meet your expectations.
It agrees with the language spec, so it doesn't seem "really broken"
to me ;-)
> > Given that 'n' must be Int, so must be the result.
> Oh no:
> n^2/2+n/3
> is very likely not an integer, even for integer n.
As we've seen already, that depends on the meaning of '/'
> The point of the example was that the arithmetic included rationals,
> but the result was always an integer. One cannot reasonably expect the
> type checker to notice that.
Also implicit in your understanding of the example is an
implementation that supports mixed arithmetic - Int & Rational. With
different assumptions, we get different results.
best wishes, Isaac
Mario S. Mommer <········@yahoo.com> once said:
>Someone said that in Haskell you never made casts. I find this rather
The issue here may merely be (again) a matter of terminology (see below).
>hard to believe, so in that sense it is a genuine question. I don't
>understand what you have written above, so I don't know what it
>implies.
>
>How do you deal with this case? Given integer n, evaluate the
>expression
>
>(n^3)/3+n^2-n/3
>
>The result is always an integer. Can Haskell infer that? And if not,
>how do I tell him to believe me that this is an integer? And if I can
>do that, then what is the guarantee that I am not mistaken?
A static type system will not infer that the expression has type
"integer", rather it will infer some other type (e.g. "rational")[*].
To convert an arbitrary rational into an integer, one would write a
function:
convert :: Rational -> Maybe Int
convert r = if (some_dynamic_value_check r)
              then Just (integer_part_of r)
              else Nothing
Or in a Java-like language:
Integer convert( Rational r ) throws BadConversionException {...}
The exceptions (Maybe/BadConversionException) create the "guarantee".
You might call a function like "convert" a "cast", but I think a
Haskeller would consider it "just another function" (terminology issue
from above).
[*] Unless your type system is very expressive and someone has taken a
great deal of time implementing the library of types to deal with cases
just like this.
--
Brian M. McNamara ······@acm.org : I am a parsing fool!
** Reduce - Reuse - Recycle ** : (Where's my medication? ;) )
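A concrete version of the sketch above, with the two placeholder helpers
filled in via the standard Data.Ratio accessors (still only a sketch;
the result type is widened to Integer for simplicity):

    import Data.Ratio (numerator, denominator)

    convert :: Rational -> Maybe Integer
    convert r
      | denominator r == 1 = Just (numerator r)
      | otherwise          = Nothing

Rationals are kept in lowest terms, so the denominator test is exactly
the "is this really an integer" check.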
·······@prism.gatech.edu (Brian McNamara!) writes:
> Mario S. Mommer <········@yahoo.com> once said:
> >Someone said that in Haskell you never made casts. I find this rather
>
> The issue here may merely be (again) a matter of terminology (see below).
Yes, casts are never made, but new objects of one type are
constructed from values of the other. Ketil wrote that to me by email
(thanks).
> >hard to believe, so in that sense it is a genuine question. I don't
> >understand what you have written above, so I don't know what it
> >implies.
> >
> >How do you deal with this case? Given integer n, evaluate the
> >expression
> >
> >(n^3)/3+n^2-n/3
> >
> >The result is always an integer. Can Haskell infer that? And if not,
> >how do I tell him to believe me that this is an integer? And if I can
> >do that, then what is the guarantee that I am not mistaken?
>
> A static type system will not infer that the expression has type
> "integer", rather it will infer some other type (e.g. "rational")[*].
> To convert an arbitrary rational into an integer, one would write a
> function:
>
> convert :: Rational -> Maybe Int
> convert r = if (some_dynamic_value_check r)
>               then Just (integer_part_of r)
>               else Nothing
You see? Redundant stuff just to appease your type system and
circumvent its limitations (the limitation being that it cannot prove
such things, and it would not even be reasonable to expect it).
> Or in a Java-like language:
>
> Integer convert( Rational r ) throws BadConversionException {...}
>
> The exceptions (Maybe/BadConversionException) create the "guarantee".
It is a "guarantee" in the sense that the program is type correct, not
that the program is correct. You are not any better off than if you
just used a lisp without declaring anything, which would have done all
of the above automatically. In fact, here you were forced to do extra
work for nothing.
> You might call a function like "convert" a "cast", but I think a
> Haskeller would consider it "just another function" (terminology issue
> from above).
Fair enough.
> [*] Unless your type system is very expressive and someone has taken a
> great deal of time implementing the library of types to deal with cases
> just like this.
This would be, in my opinion, Quixotic.
Mario S. Mommer <········@yahoo.com> once said:
>·······@prism.gatech.edu (Brian McNamara!) writes:
>> Mario S. Mommer <········@yahoo.com> once said:
>> >How do you deal with this case? Given integer n, evaluate the
>> >expression
>> >
>> >(n^3)/3+n^2-n/3
>> >
>> >The result is always an integer. Can Haskell infer that? And if not,
>> >how do I tell him to believe me that this is an integer? And if I can
>> >do that, then what is the guarantee that I am not mistaken?
>>
>> A static type system will not infer that the expression has type
>> "integer", rather it will infer some other type (e.g. "rational")[*].
>> To convert an arbitrary rational into an integer, one would write a
>> function:
>>
>> convert :: Rational -> Maybe Int
>> convert r = if (some_dynamic_value_check r)
>>               then Just (integer_part_of r)
>>               else Nothing
>
>You see? Redundant stuff just to appease your type system and
>circumvent its limitations (the limitation being that it cannot prove
>such things, and it would not even be reasonable to expect it).
>
>> Or in a Java-like language:
>>
>> Integer convert( Rational r ) throws BadConversionException {...}
>>
>> The exceptions (Maybe/BadConversionException) create the "guarantee".
>
>It is a "guarantee" in the sense that the program is type correct, not
>that the program is correct. You are not any better off than if you
>just used a lisp without declaring anything, which would have done all
>of the above automatically. In fact, here you were forced to do extra
>work for nothing.
It is not "for nothing". The static typing ensures that I have
explicitly dealt with the possibility of the result not being an
integer. If I go and mistakenly change the code from
x = convert ( (n^3)/3+n^2-n/3 )
to
x = convert ( (n^2)/3+n^2-n/3 ) -- n^3 became n^2
I am "ok", because my code downstream is already prepared to deal with
the problem. That is, somewhere downstream there must eventually be
code along the lines of
case x of
  (Just n) -> do_whatever n      -- do_whatever expects an integer
  Nothing  -> handle_the_problem
Sure, in Lisp you could just say the equivalent of
let x = (n^3)/3+n^2-n/3
in do_whatever x
and not deal with "convert" and "Maybe". But then if I later introduce
the error:
let x = (n^2)/3+n^2-n/3 -- oops, n^2 again
in do_whatever x
the Lisp code will "fail at runtime" in the body of do_whatever, which
was unprepared to handle a parameter with a non-integer value.
It is a trade-off:
- Dynamic typing lets you do whatever you like. If you do things right,
you "win" for free. If you do them wrong, the penalty you pay is
that your program ends up in the generic top-level error-handler at
run-time.
- Static typing sometimes forces you to do things more verbosely. If
you do things right, you have paid the penalty for nothing. If you
do them wrong, then the advantage you gain is that the type system
will have foreseen the potential problem and forced you to deal with
it explicitly, so that you can take the appropriate specific action.
(In the above bullets, by "right" and "wrong" I mean, for example, the
case above where "right" is the "(n^3)/3+n^2-n/3" case, which really
truly is always an integer, whereas "wrong" is "(n^2)/3+n^2-n/3", where
the programmer apparently made a mistake.)
Finally, I should emphasize that cases like "(n^3)/3+n^2-n/3" are "edge
cases". That is, in my opinion, the "sometimes" in my previous
statement:
Static typing sometimes forces you...
is really a "rarely". Most expressions do not require such complex
reasoning to discover their "narrowest"/"truest" type.
It just occurred to me that Java exceptions are a nice example of a
system which, in a sense, allows for both static and dynamic typing.
The Java code
Integer convert( Rational r ) throws BadConversionException {...}
can behave in two different ways:
- If BadConversionException is a subtype of RuntimeException or Error,
then clients are allowed to ignore the possibility of an exception.
(The exception may propagate all the way out of the application in
this case.) This is analogous to dynamic typing.
- Otherwise, clients are forced to provide an explicit handler for
this exception somewhere. This is analogous to static typing.
>> [*] Unless your type system is very expressive and someone has taken a
>> great deal of time implementing the library of types to deal with cases
>> just like this.
>
>This would be, in my opinion, Quixotic.
I agree. (At least for today. Maybe in 100 years this technology will
be commonplace. Who knows.)
--
Brian M. McNamara ······@acm.org : I am a parsing fool!
** Reduce - Reuse - Recycle ** : (Where's my medication? ;) )
Brian McNamara! wrote:
> It is a trade-off:
>
> - Dynamic typing lets you do whatever you like. If you do things right,
> you "win" for free. If you do them wrong, the penalty you pay is
> that your program ends up in the generic top-level error-handler at
> run-time.
...and that top-level error-handler allows you to correct the situation,
and then to proceed computation at the point where the error occurred,
or even perhaps at a different point in the execution path, which might
be exactly the flexibility you need.
This is to say that this alleged penalty might not be a penalty at all.
> - Static typing sometimes forces you to do things more verbosely. If
> you do things right, you have paid the penalty for nothing. If you
> do them wrong, then the advantage you gain is that the type system
> will have foreseen the potential problem and forced you to deal with
> it explicitly, so that you can take the appropriate specific action.
...and if you cannot provide a completely automatic way to deal with the
erroneous situation, you might want to implement an interactive solution
that asks the user to correct the situation. This can be a very high
overhead, especially when you want to get close to the flexibility Lisp
provides by default.
For example, the correction of a situation might be to completely change
the function definitions of parts of the program. For a certain class of
programs, it can be invaluable to make such deep modifications to a
running program. This is one of the possible scenarios for which we
actually _want_ type errors to be thrown at runtime instead of handled
completely automatically, because we might not know upfront what the
best way is to deal with such a situation. The automated solution might
turn out to be just the wrong one.
> (In the above bullets, by "right" and "wrong" I mean, for example, the
> case above where "right" is the "(n^3)/3+n^2-n/3" case, which really
> truly is always an integer, whereas "wrong" is "(n^2)/3+n^2-n/3", where
> the programmer apparently made a mistake.)
>
> Finally, I should emphasize that cases like "(n^3)/3+n^2-n/3" are "edge
> cases". That is, in my opinion, the "sometimes" in my previous
> statement:
> Static typing sometimes forces you...
> is really a "rarely". Most expressions do not require such complex
> reasoning to discover their "narrowest"/"truest" type.
"edge cases", "sometimes", "rarely", "most". Where do you get these
assessments from? Please cite your sources.
(I happen to know in what application domain Mario mainly works -
although admittedly only superficially. However, from what he has told
me, I am convinced that these cases occur very often in his domain,
including the flexibility I have sketched above that gives him a real
competitive advantage. I leave it to you to actually ask Mario about his
application domain, and to him to actually give away more details.)
> It just occurred to me that Java exceptions are a nice example of a
> system which, in a sense, allows for both static and dynamic typing.
> The Java code
>
> Integer convert( Rational r ) throws BadConversionException {...}
>
> can behave in two different ways:
>
> - If BadConversionException is a subtype of RuntimeException or Error,
> then clients are allowed to ignore the possibility of an exception.
> (The exception may propagate all the way out of the application in
> this case.) This is analogous to dynamic typing.
>
> - Otherwise, clients are forced to provide an explicit handler for
> this exception somewhere. This is analogous to static typing.
Yes, and the Java community is slowly understanding that this sucks very
badly.
What you actually _need_ in order to be able to deal with unexpected
situations are exceptions that are not statically checked and that don't
unwind your call stack by default. Statically checked exceptions only
work for toy examples.
(This is actually one of the best examples that shows how a "static
mindset" has crippled an otherwise useful language feature.)
Pascal
--
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
Pascal Costanza <········@web.de> once said:
>Brian McNamara! wrote:
>> It is a trade-off:
>>
>> - Dynamic typing lets you do whatever you like. If you do things right,
>> you "win" for free. If you do them wrong, the penalty you pay is
>> that your program ends up in the generic top-level error-handler at
>> run-time.
>
>...and that top-level error-handler allows you to correct the situation,
>and then to proceed computation at the point where the error occurred,
...
>This is to say that this alleged penalty might not be a penalty at all.
Indeed. Some of this is a "library issue", rather than a "static
typing" issue, though. Lisp comes with a nice to-level error-handler.
Haskell doesn't. But you can write a monad which does the same kind of
thing: instead of
convert :: Rational -> Maybe Int
convert r = if really_isa_int r
              then Just (int_val r)
              else Nothing
you'd write
convert :: Rational -> TopLevelHandler Int
convert r = if really_isa_int r
              then return (int_val r)
              else get_int_from_user
or whatever.
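To make that a little more concrete, here is one plain-IO reading of the
second version; IO stands in for the hypothetical TopLevelHandler, and
really_isa_int, int_val and get_int_from_user are stand-ins as above.
This only hints at (and is far weaker than) Lisp's interactive restarts:

import Data.Ratio (numerator, denominator)

-- On failure, report the problem and ask the user for a replacement
-- value instead of giving up.
convertIO :: Rational -> IO Integer
convertIO r
  | denominator r == 1 = return (numerator r)
  | otherwise          = do
      putStrLn ("Not an integer: " ++ show r ++ "; please enter one:")
      readLn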
But yes,
>> - Static typing sometimes forces you to do things more verbosely. If
>> you do things right, you have paid the penalty for nothing. If you
>> do them wrong, then the advantage you gain is that the type system
>> will have foreseen the potential problem and forced you to deal with
>> it explicitly, so that you can take the appropriate specific action.
>
>...and if you cannot provide a completely automatic way to deal with the
>erroneous situation, you might want to implement an interactive solution
>that asks the user to correct the situation. This can be a very high
>overhead, especially when you want to get close to the flexibility Lisp
>provides by default.
>
>For example, the correction of a situation might be to completely change
>the function definitions of parts of the program. For a certain class of
>programs, it can be invaluable to make such deep modifications to a
>running program. This is one of the possible scenarios for which we
>actually _want_ type errors to be thrown at runtime instead of handled
>completely automatically, because we might not know upfront what the
>best way is to deal with such a situation. The automated solution might
>turn out to be just the wrong one.
...if you want or need to be able to change the program interactively at
run-time, then static typing does seem more likely to be a hindrance.
Whether or not this is something that people do (or desire to do)
frequently seems to be a matter of debate (or at least a matter of
application domain), as you suggest here:
>> Finally, I should emphasize that cases like "(n^3)/3+n^2-n/3" are "edge
>> cases". That is, in my opinion, the "sometimes" in my previous
>> statement:
>> Static typing sometimes forces you...
>> is really a "rarely". Most expressions do not require such complex
>> reasoning to discover their "narrowest"/"truest" type.
>
>"edge cases", "sometimes", "rarely", "most". Where do you get these
>assessments from? Please cite your sources.
>
>(I happen to know in what application domain Mario mainly works -
>although admittedly only superficially. However, from what he has told
>me, I am convinced that these cases occur very often in his domain,
>including the flexibility I have sketched above that gives him a real
>competitive advantage. I leave it to you to actually ask Mario about his
>application domain, and to him to actually give away more details.)
(To answer your question above: My "sources" are my own experience. I
think many of the other "staticists" here would agree with me. I
don't mind if we disagree on this point (how "frequently" these issues
occur in practice).)
>> It just occurred to me that Java exceptions are a nice example of a
>> system which, in a sense, allows for both static and dynamic typing.
...
>What you actually _need_ in order to be able to deal with unexpected
>situations are exceptions that are not statically checked and that don't
>unwind your call stack by default.
We "already have" statically checked exceptions which don't necessarily
unwind your call stack. (E.g. continuation monads in Haskell.)
>Statically checked exceptions only work for toy examples.
I am tempted to say "please cite your sources", but I shall refrain.
Instead, I'll just ask: can you explain how "static typing" typically
"gets in the way" here? I can understand how "unwinding" may not always
be desirable, but as I just mentioned, you can easily create
statically-typed continuation-exceptions as well as
unwinding-exceptions. What makes you think checked exceptions are bad?
--
Brian M. McNamara ······@acm.org : I am a parsing fool!
** Reduce - Reuse - Recycle ** : (Where's my medication? ;) )
Brian McNamara! wrote:
> (To answer your question above: My "sources" are my own experience. I
> think many of the other "staticists" here would agree with me. I
> don't mind if we disagree on this point (how "frequently" these issues
> occur in practice).)
Thanks a lot for admitting that your opinions are based on your own
experiences. I think this holds for everyone involved in such
discussions, including, of course, the dynamic typers in this discussion.
>>>It just occurred to me that Java exceptions are a nice example of a
>>>system which, in a sense, allows for both static and dynamic typing.
>
> ...
>
>>What you actually _need_ in order to be able to deal with unexpected
>>situations are exceptions that are not statically checked and that don't
>>unwind your call stack by default.
>
> We "already have" statically checked exceptions which don't necessarily
> unwind your call stack. (E.g. continuation monads in Haskell.)
To make this very explicit: I am criticizing Java's model for checked
exceptions. Maybe there are better solutions in other statically typed
languages. (I wouldn't be surprised.)
>>Statically checked exceptions only work for toy examples.
>
> I am tempted to say "please cite your sources", but I shall refrain.
> Instead, I'll just ask: can you explain how "static typing" typically
> "gets in the way" here? I can understand how "unwinding" may not always
> be desirable, but as I just mentioned, you can easily create
> statically-typed continuation-exceptions as well as
> unwinding-exceptions. What makes you think checked exceptions are bad?
Other people can explain this better than I can:
http://www.mindview.net/Etc/Discussions/CheckedExceptions
http://radio.weblogs.com/0122027/stories/2003/04/01/JavasCheckedExceptionsWereAMistake.html
The short version: Checked exceptions make your abstractions leak in
large projects. When component A calls component B, and B in turn calls
component C, A might be the only place to reasonably deal with
exceptions thrown in C. Yet a statically checked exception from C either needs
to be declared in the interface of B - which might not have anything to
do with B's tasks - or else B needs to package them in dynamically typed
exceptions which makes exception handling in A harder.
For example, component A might be a GUI layer, component B might be some
application-specific component and C might be the RMI layer of the Java
API. Why should B declare to throw RMI exceptions, especially when you
want to use the same component sometimes in a distributed setting and
other times in local settings? Yet the best way to deal with RMI
exceptions might be to let the end-user decide (via dialog boxes) what
to do.
The problem here is that it is generally hard to anticipate in what
settings a component might be used. So in general, when in doubt,
dynamic exception handling is the better idea.
Again, this is mainly a criticism of Java's design in this regard.
Pascal
--
Pascal Costanza University of Bonn
···············@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)
"Mario S. Mommer" wrote:
> Yes, casts are never made, but new objects from one type are
> constructed from the other. Ketil wrote that to me by email
> (thanks).
Here's a cast (and I'm pretty sure I got this right):
newtype A = A Integer
newtype B = B Integer
aToB :: A -> B
aToB (A i) = B i
This cast can never fail.
Mario S. Mommer <········@yahoo.com> wrote:
> How do you deal with this case? Given integer n, evaluate the
> expression
>
> (n^3)/3+n^2-n/3
>
> The result is always an integer.
Lisp doesn't deal with that case, either. Try
* (defun f (n) (+ (/ (expt n 3) 3) (expt n 2) (- (/ n 3))))
F
* (f 90000)
243008099970000
* (f 9e4)
2.430081e+14
Oops, not an integer, and I have lost digits.
> Can Haskell infer that?
No, but Haskell can tell you that you will get a floating point result
if you do a floating division.
> And if not, how do I tell him to believe me that this is an integer?
You could always convert the float back to an integer. But you would
be wrong, because it is not, and you will lose digits if you insist
you're right.
> And if I can do that, then what is the guarantee that I am not
> mistaken?
As has been shown, the correct thing is to factor out the division by
3, and use integer division instead of floating division.
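Concretely, since n^3 - n is a product of three consecutive integers and
hence always divisible by 3, the factored version can use integer
division throughout. A Haskell sketch (the function name is arbitrary):

f :: Integer -> Integer
f n = (n^3 - n) `div` 3 + n^2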
A type system is not a 'do what I mean button'. The computer cannot
second guess you (if it could, we wouldn't need programmers), but it
can point out cases where your expectations about what the code does
don't match what the code really does.
- Dirk
Dirk Thierbach <··········@gmx.de> writes:
> * (f 90000)
> 243008099970000
>
> * (f 9e4)
> 2.430081e+14
>
> Oops, not an integer, and I have lost digits.
This is _so_ boring. 9e4 reads as a floating point and nothing else
- and so what?
--
(espen)
Espen Vestre <·····@*do-not-spam-me*.vestre.net> wrote:
> This is _so_ boring. 9e4 reads as a floating point and nothing else
> - and so what?
Sorry, my mistake. I didn't realize it was converted into rationals.
- Dirk
Joachim Durchholz wrote:
> Unless you count type casts in. Type-inferring languages like ML or
> Haskell don't even have them though - they are unnecessary.
Eh?
data Foo = F Int | G Float
newtype Bar = B Int --Forgive me if this notation is wrong. It's
--been a while.
fooToBar :: Foo -> Bar
fooToBar (F i) = B i
fooToBar _ = error "Bad Cast"
David
Feuer <·····@his.com> writes:
> Joachim Durchholz wrote:
>> Unless you count type casts in. Type-inferring languages like ML or
>> Haskell don't even have them though - they are unnecessary.
> fooToBar _ = error "Bad Cast"
Uh, what is a cast, anyway? Any function that returns a value of a
different type than its argument? Is "sum" a cast from Num a => [a]
to a?
Perhaps we should rather talk about implicit conversions?
-kzm
--
If I haven't seen further, it is by standing in the footprints of giants
··········@ii.uib.no wrote:
>
> Feuer <·····@his.com> writes:
>
> > Joachim Durchholz wrote:
>
> >> Unless you count type casts in. Type-inferring languages like ML or
> >> Haskell don't even have them though - they are unnecessary.
>
> > fooToBar _ = error "Bad Cast"
>
> Uh, what is a cast, anyway? Any function that returns a value of a
> different type than its argument? Is "sum" a cast from Num a => [a]
> to a?
>
> Perhaps we should rather talk about implicit conversions?
I screwed up making Bar a newtype. If it's a data, there at least
could be identical representations between the input and output
of the function.
David
> Like the dynamic type errors.
Dynamic type errors in statically typed languages stem from attempts
to bypass the type system, like casts. Most strictly typed languages
do not support them, and even for the ones that do, their usage is
discouraged. I'm curious to see how else you'd get a dynamic type
error. There might very well be a way to do this, but
I really can't think of one.
From: Joe Marshall
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <r805ql5n.fsf@comcast.net>
·······@mindspring.com (Rayiner Hashem) writes:
>> Like the dynamic type errors.
> Dynamic type errors in statically typed languages stem from attempts
> to bypass the type system, like casts. Most strictly typed languages
> do not support them, and even for the ones that do, their usage is
> discouraged. I'm curious to see how else you'd get a dynamic type
> error. There might very well be a way to do this, but
> I really can't think of one.
You might be more familiar with them as union discrimination failures.
--
~jrm
Rayiner Hashem <·······@mindspring.com> wrote:
> The comment about changing a type and then having the
> compiler tell you where you need to make modifications made me cringe.
> That's an abuse of the type system, not a good way to take advantage
> of it!
Why? Types encode invariants. They form a contract about the behaviour
of some part of my code. If I break this contract in such a way that
it changes the type, the type checker *will* point out all places
I have to think about. (Of course, if I change it in a way that keeps
the type, I have to do testing and code review).
> Only proper testing and code review would have.
But static typing *does* perform testing, by abstract interpretation. And it
does this more thoroughly than any test suite or code review ever
can, because it is automated.
> People who sell static typing as a panacea (like anybody who sells
> anything as a panacea) are selling you snake oil.
Yes. Nobody tries to sell static typing as a panacea. (Though I
sometimes got the impression that people were trying to sell dynamic
typing as one). Static typing is a great tool, nothing more. It has
its limits, but within those limits, it is really helpful.
- Dirk
Dirk Thierbach wrote:
> Types encode invariants. They form a contract about the behaviour
> of some part of my code. If I break this contract in such a way that
> it changes the type, the type checker will point out all places
> I have to think about.
Why do you believe that all variables and parameters need this very strict
invariant during all phases of program development, even disallowing the
testing of a portion that does type-check safely simply because some other
portion of the program does not?
faa
Frank A. Adrian <·······@ancar.org> wrote:
> Dirk Thierbach wrote:
>
>> Types encode invariants. They form a contract about the behaviour
>> of some part of my code. If I break this contract in such a way that
>> it changes the type, the type checker will point out all places
>> I have to think about.
> Why do you believe that all variables and parameters need this very strict
> invariant during all phases of program development,
Because unless I explicitly say so, this invariant will be the minimal
invariant needed to make the program work at all. Let's again have a
look at the type of map
map :: (a -> b) -> [a] -> [b]
This just says that one argument should be a function, and the other
argument should be a list. Duh. Moreover, the function should be able
to deal with the elements that are in the list. In return, I am
promised a list of whatever things the function produces.
If any of these invariants is invalidated, map cannot possibly work.
So either I have made a typo somewhere, or I have made a design error.
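A tiny made-up illustration of the point:

ok :: [Int]
ok = map length ["ab", "cde"]   -- fine: the function fits the element type

-- bad = map length [1, 2, 3]   -- rejected at compile time:
--                              -- an Int is not a list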
> even disallowing the testing of a portion that does type-check
> safely simply because some other portion of the program does not?
We already discussed this. If I typed too fast and I really want
to experiment with parts of the code, I just comment the other part
out. Usually, I don't want to do any experiments at all, because
the type checks *are* the experiments I am interested in in the
first place. And in situations where I haven't dealt with all
the possible tags in a datatype (as in Joe's example), I don't have
to do anything at all, it *will* typecheck and I can do the experiments,
though I will get warnings.
- Dirk
[Followup ignored]
In article <··············@ID-7776.user.dfncis.de>, Dirk Thierbach
<··········@gmx.de> wrote:
> Frank A. Adrian <·······@ancar.org> wrote:
> > Dirk Thierbach wrote:
> >
> >> Types encode invariants. They form a contract about the behaviour
> >> of some part of my code. If I break this contract in such a way that
> >> it changes the type, the type checker will point out all places
> >> I have to think about.
>
> > Why do you believe that all variables and parameters need this very strict
> > invariant during all phases of program development,
>
> Because unless I explicitly say so, this invariant will be the minimal
> invariant needed to make the program work at all. Let's again have a
> look at the type of map
>
> map :: (a -> b) -> [a] -> [b]
>
> This just says that one argument should be a function, and the other
> argument should be a list. Duh. Moreover, the function should be able
> to deal with the elements that are in the list. In return, I am
> promised a list of whatever things the function produces.
>
> If any of these invariants is invalidated, map cannot possibly work.
> So either I have made a typo somewhere, or I have made a design error.
What is the type of this function:
(defun mapn (fn &rest lists)
(if (every #'null lists)
nil
(cons (apply fn (remove nil (mapcar #'car lists)))
(apply #'mapn fn (mapcar #'cdr lists)))))
in light of this example:
? (mapcar 'type-of (mapn (compose 'sqrt '+) '(1 2 3) '(nil 5 -41/4 1)
'(-101.7 5 5 -3/4 nil)))
((COMPLEX FLOAT) DOUBLE-FLOAT (COMPLEX RATIONAL) RATIO FIXNUM)
E.
···@jpl.nasa.gov (Erann Gat) once said:
>What is the type of this function:
>
>(defun mapn (fn &rest lists)
> (if (every #'null lists)
> nil
> (cons (apply fn (remove nil (mapcar #'car lists)))
> (apply #'mapn fn (mapcar #'cdr lists)))))
Hugs says:
Main> :type mapn
mapn :: Eq a => ([[a]] -> b) -> [[[a]]] -> [b]
based on
remove x l = filter (/= x) l
mapn f l = if all null l
             then []
             else (f (remove [] (map head l))) :
                  (mapn f (map tail l))
This isn't the type you "want", though:
>in light of this example:
>
>? (mapcar 'type-of (mapn (compose 'sqrt '+) '(1 2 3) '(nil 5 -41/4 1)
>'(-101.7 5 5 -3/4 nil)))
>((COMPLEX FLOAT) DOUBLE-FLOAT (COMPLEX RATIONAL) RATIO FIXNUM)
as we static typers don't usually make lists like this:
'(nil 5 -41/4 1)
which contain different types. We would instead probably represent such
a list as a value like
[ Nothing, Just (AnyNum 5), Just (AnyNum (-41/4)), Just (AnyNum 1) ]
which has type
[ Maybe AnyNum ] -- where data AnyNum = forall a . (Num a)=> AnyNum a
or (perhaps more likely) we would not be in the business of trying to
represent such a list at all in the first place, because such unusual
types do not seem to come up very often in practice. (Whether or not
this is a "cause" or an "effect" is probably up for debate. By "this"
in the previous sentence, I mean the observation that "such types don't
seem to occur much in practice".)
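Spelled out, that wrapper looks roughly like this (it needs GHC's
existential types extension; the Show constraint is there only so the
values can be printed):

{-# LANGUAGE ExistentialQuantification #-}

data AnyNum = forall a. (Num a, Show a) => AnyNum a

instance Show AnyNum where
  show (AnyNum x) = show x

mixed :: [Maybe AnyNum]
mixed = [ Nothing
        , Just (AnyNum (5 :: Integer))
        , Just (AnyNum (-41/4 :: Rational))
        , Just (AnyNum (1 :: Integer)) ]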
--
Brian M. McNamara ······@acm.org : I am a parsing fool!
** Reduce - Reuse - Recycle ** : (Where's my medication? ;) )
In article <············@news-int2.gatech.edu>, ·······@prism.gatech.edu
(Brian McNamara!) wrote:
> as we static typers don't usually make lists like this:
>
> '(nil 5 -41/4 1)
Really? Then what would "you static typers" return from the following:
(mapcar (lambda (x y) (ignore-errors (/ x y))) '(1 10 -41 5) '(0 2 4 5))
E.
In article <··························@k-137-79-50-101.jpl.nasa.gov>,
Erann Gat <·········@jpl.nasa.gov> wrote:
(snip)
>(mapcar (lambda (x y) (ignore-errors (/ x y))) '(1 10 -41 5) '(0 2 4 5))
map (\(x, y) -> x / y) $ zip [1, 10, -41, 5] [0, 2, 4, 5] in my
Haskell environment gives,
[Infinity,5.0,-10.25,1.0]
If you want to encode a special sort of "no answer" value among your
others, the Maybe monad might be what you want. For instance, if I
redefine "/" to use that sort of scheme, then I get:
[Nothing,Just 5.0,Just (-10.25),Just 1.0]
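That redefinition could be sketched like so, using an invented name
safeDiv rather than actually shadowing /:

safeDiv :: Double -> Double -> Maybe Double
safeDiv _ 0 = Nothing
safeDiv x y = Just (x / y)

-- map (uncurry safeDiv) (zip [1, 10, -41, 5] [0, 2, 4, 5])
--   gives [Nothing, Just 5.0, Just (-10.25), Just 1.0]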
Of course, if you want fractions instead of floating-point, then you
might want to use the % operator which makes ratios, where you might
want to define your own version of that to do a "Maybe" thing because
that does get upset if you try to use a zero denominator.
-- Mark
From: Joe Marshall
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <znetqllz.fsf@comcast.net>
·········@jpl.nasa.gov (Erann Gat) writes:
> In article <············@news-int2.gatech.edu>, ·······@prism.gatech.edu
> (Brian McNamara!) wrote:
>
>> as we static typers don't usually make lists like this:
>>
>> '(nil 5 -41/4 1)
>
> Really? Then what would "you static typers" return from the following:
>
> (mapcar (lambda (x y) (ignore-errors (/ x y))) '(1 10 -41 5) '(0 2 4 5))
Careful, there. I can statically analyze that program and determine
that it cannot cause a runtime exception.
--
~jrm
·········@jpl.nasa.gov (Erann Gat) once said:
>(Brian McNamara!) wrote:
>> as we static typers don't usually make lists like this:
>>
>> '(nil 5 -41/4 1)
>
>Really? Then what would "you static typers" return from the following:
>
>(mapcar (lambda (x y) (ignore-errors (/ x y))) '(1 10 -41 5) '(0 2 4 5))
Mark Carroll already wrote a good reply to this.
In general, I would say that in cases where you have an operation
which is "well typed" except for a tiny portion of its domain, you use
an exception type (like Maybe) to handle the exceptional portion. That
is, in a statically typed language, you would probably say
add :: (Num a)=> a -> a -> a
-- adding two numbers produces another number
divide :: (Num a)=> a -> a -> Maybe a
-- dividing two numbers either produces another number or fails
In a Java-like language, the same signatures would probably be written as
Number add( Number x, Number y ) { ... }
Number divide( Number x, Number y ) throws DivisionByZeroException { ... }
In both cases, the type system will force you to deal with the
possibility of failure.
--
Brian M. McNamara ······@acm.org : I am a parsing fool!
** Reduce - Reuse - Recycle ** : (Where's my medication? ;) )
·······@prism.gatech.edu (Brian McNamara!) writes:
> as we static typers don't usually make lists like this:
>
> '(nil 5 -41/4 1)
>
> which contain different types. We would instead probably represent such
> a list as a value like
>
> [ Nothing, Just (AnyNum 5), Just (AnyNum (-41/4)), Just (AnyNum 1) ]
>
> or (perhaps more likely) we would not be in the business of trying to
> represent such a list at all in the first place, because such unusual
> types do not seem to come up very often in practice.
A list of objects of different types rarely comes up in practice? Are
you serious???
> (Whether or not this is a "cause" or an "effect" is probably up for
> debate. By "this" in the previous sentence, I mean the observation
> that "such types don't seem to occur much in practice".)
Well, if you have to write bloat like the above to be able to have such
a simple thing as a list of objects of different types, no wonder you
avoid them.
Sorry, but the more I read about static typing from their advocates,
the more I think it is a full-fledged nightmare.
Mario S. Mommer <········@yahoo.com> writes:
> A list of objects of different types rarely comes up in practice? Are
> you serious???
Why not? After all, most modern statically typed languages have
better facilities for grouping together values of different types.
Lists are best used when you have uniform data of unknown count; for
non-uniform data of known count there are tuples (and, more generally,
records). Non-uniform data of unknown count is, in my experience,
very rare indeed.
Matthias Blume <····@my.address.elsewhere> writes:
> Mario S. Mommer <········@yahoo.com> writes:
>
> > A list of objects of different types rarely comes up in practice? Are
> > you serious???
>
> Why not? After all, most modern statically typed languages have
> better facilities for grouping together values of different types.
> Lists are best used when you have uniform data of unknown count; for
> non-uniform data of known count there are tuples (and, more generally,
> records). Non-uniform data of unknown count is, in my experience,
> very rare indeed.
Just from the top of my head: Could be output of a tokenizer, for
instance. Or a list of objects in the bag of some adventure game
hero.
The list goes on and on, full of all sorts of different things...
Mario S. Mommer wrote:
[...]
> Just from the top of my head: Could be output of a tokenizer, for
> instance. Or a list of objects in the bag of some adventure game
> hero.
To add to Lauri Alanko's response, I will say from experience that
ml-lex and ml-yacc are a true joy to use. Yes, it's sometimes (okay,
usually) a pain to get things to compile, but once you do, your program
most likely really does just work. I add the "most likely" to avoid
accusations of hyperbole, but in my case, the bugs I have found have
always been conceptual mistakes--that is, the program did what I wanted,
but I wanted the wrong thing. And even these are much rarer.
-thant
From: Lauri Alanko
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <bpe8dr$aog$1@la.iki.fi>
Mario S. Mommer <········@yahoo.com> virkkoi:
> Just from the top of my head: Could be output of a tokenizer, for
> instance. Or a list of objects in the bag of some adventure game
> hero.
If you tokenize things, you know what kind of tokens you are going to
have. Thus you can make them into a new data type. This is not a
particularly big deal, since you have to list the kinds of tokens you
are going to have _somewhere_ anyway.
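Something like the following is all it takes (the token kinds here are
made up):

data Token = TIdent  String
           | TNumber Integer
           | TSymbol Char
  deriving (Eq, Show)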
If a list of game objects is to be of any use, all those objects must
share a common form of some kind. This common form then determines the
type of the elements of the list. Again, the matter of writing down the
type is just a natural part of the design process. You have to define
the form of the things you are dealing with _somewhere_, or else your
_design_ is sloppy.
Lauri Alanko
··@iki.fi
From: Joe Marshall
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <8ymds0b2.fsf@comcast.net>
Lauri Alanko <··@iki.fi> writes:
> If a list of game objects is to be of any use, all those objects must
> share a common form of some kind. This common form then determines the
> type of the elements of the list.
Yes, they are all subtypes of lisp type T.
> Again, the matter of writing down the type is just a natural part of
> the design process. You have to define the form of the things you
> are dealing with _somewhere_, or else your _design_ is sloppy.
That's the point, actually. Early on in the process you really don't
know what your design is going to end up like. You'd *like* to know
what you're dealing with, but you're not sure if you have all the
bases covered, or if maybe you need to segregate things or coerce
them, etc. At this point, writing it down is premature. Later on, it
may be that you do want a uniform list.
Then again, maybe not.
--
~jrm
Joe Marshall <·············@comcast.net> once said:
>> Again, the matter of writing down the type is just a natural part of
>> the design process. You have to define the form of the things you
>> are dealing with _somewhere_, or else your _design_ is sloppy.
>
>That's the point, actually. Early on in the process you really don't
>know what your design is going to end up like. You'd *like* to know
>what you're dealing with, but you're not sure if you have all the
>bases covered, or if maybe you need to segregate things or coerce
>them, etc. At this point, writing it down is premature. Later on, it
>may be that you do want a uniform list.
That's fine. I dealt with this issue in a message a day or two ago.
Static typing doesn't force you to commit to a representation early.
In the example of a tokenizer that was being discussed, all you have to
do is create a type _name_ called "Token". You don't have to commit
to any representation; you just write functions in terms of the "Token"
type. Later on, when you decide what the representation ought to be,
then you go and actually link the type name "Token" to some actual
representation. (Like most abstractions, the name provides the shield.)
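As a sketch of that, with the module and names invented for illustration:

module Lexer (Token, tokenize) where   -- Token is exported abstractly

-- Placeholder representation; client modules depend only on the name
-- "Token", so this definition can change later without touching them.
data Token = Tok String

tokenize :: String -> [Token]
tokenize = map Tok . words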
--
Brian M. McNamara ······@acm.org : I am a parsing fool!
** Reduce - Reuse - Recycle ** : (Where's my medication? ;) )
Joe Marshall <·············@comcast.net> wrote:
> That's the point, actually. Early on in the process you really don't
> know what your design is going to end up like. You'd *like* to know
> what you're dealing with, but you're not sure if you have all the
> bases covered, or if maybe you need to segregate things or coerce
> them, etc. At this point, writing it down is premature. Later on, it
> may be that you do want a uniform list.
>
> Then again, maybe not.
I'd really like to see an example of a design process where you cannot
decide if your data is used uniformly or not. I already asked Raffael,
but didn't get one.
If these things happen so often, it surely must be possible to find
an example.
- Dirk
Dirk Thierbach <··········@gmx.de> writes:
> I'd really like to see an example of a design process where you cannot
> decide if your data is used uniformly or not.
Of course, you almost always can do that! The point is that you can
just design and never make that decision. Making that decision, and
notifying the compiler, is an additional effort which may backfire
when something unexpected happens. Often, the benefit of deciding this
kind of thing is very small.
I know you can plan for the unexpected, but you can't do so reliably
without a decent crystal ball.
> If these things happen so often, it surely must be possible to find
> an example.
Just take any example where you make these decisions, and dispense with
them.
Mario S. Mommer <········@yahoo.com> wrote:
>
> Dirk Thierbach <··········@gmx.de> writes:
>> I'd really like to see an example of a design process where you cannot
>> decide if your data is used uniformly or not.
> Of course, you almost always can do that! The point is that you can
> just design and never make that decision.
*Sigh*
Please show me how you "just design and never make this decision"
in Smalltalk. After all, Smalltalk is also dynamically typed.
- Dirk
Dirk Thierbach wrote:
> Joe Marshall <·············@comcast.net> wrote:
>
>
>>That's the point, actually. Early on in the process you really don't
>>know what your design is going to end up like. You'd *like* to know
>>what you're dealing with, but you're not sure if you have all the
>>bases covered, or if maybe you need to segregate things or coerce
>>them, etc. At this point, writing it down is premature. Later on, it
>>may be that you do want a uniform list.
>>
>>Then again, maybe not.
>
>
> I'd really like to see an example of a design process where you cannot
> decide if your data is used uniformly or not. I already asked Raffael,
> but didn't get one.
>
> If these things happen so often, it surely must be possible to find
> an example.
XP?
(This might not be the answer you expected, but extreme programming is
built around the notion that you can change your mind about the design
of a program very late in the game.)
Pascal
--
Pascal Costanza University of Bonn
···············@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)
Pascal Costanza <········@web.de> wrote in message news:<·············@f1node01.rhrz.uni-bonn.de>...
> Dirk Thierbach wrote:
> > Joe Marshall <·············@comcast.net> wrote:
> >
> >
> >>That's the point, actually. Early on in the process you really don't
> >>know what your design is going to end up like. You'd *like* to know
> >>what you're dealing with, but you're not sure if you have all the
> >>bases covered, or if maybe you need to segregate things or coerce
> >>them, etc. At this point, writing it down is premature. Later on, it
> >>may be that you do want a uniform list.
> >>
> >>Then again, maybe not.
And so we write the code to deal with what we know to be required now
(or assume to be required in the immediate future). And when things
change, we change the code. So the interesting question is what are
the relative costs of changing the code. In a modern statically typed
language, how do you reduce the ripple from changing data-type
definitions?
In my Smalltalk code that would include going through and changing the
method parameter names from anInteger to anIntegerOrString ;-)
> > I'd really like to see an example of a design process where you cannot
> > decide if your data is used uniformly or not. I already asked Raffael,
> > but didn't get one.
> >
> > If these things happen so often, it surely must be possible to find
> > an example.
>
> XP?
>
> (This might not be the answer you expected, but extreme programming is
> built around the notion that you can change your mind about the design
> of a program very late in the game.)
No! No! No! Not that!
I've more or less given up on comp.object since they became comp.xp
;-)
best wishes, Isaac
Isaac Gouy <·····@yahoo.com> wrote:
> So the interesting question is what are the relative costs of
> changing the code. In a modern statically typed language, how do you
> reduce the ripple from changing data-type definitions?
I have given a (very simplistic) example of how it could work in
<··············@ID-7776.user.dfncis.de>
- Dirk
In article <····························@posting.google.com>,
Isaac Gouy <·····@yahoo.com> wrote:
(snip)
>the relative costs of changing the code. In a modern statically typed
>language, how do you reduce the ripple from changing data-type
>definitions?
>
>In my Smalltalk code that would include going through and changing the
>method parameter names from anInteger to anIntegerOrString ;-)
(snip)
HaRe, the Haskell Refactorer, at
http://www.cs.kent.ac.uk/projects/refactor-fp/hare.html is on the way
to this, but it's early days yet. In principle, all sorts of cool
stuff should be possible, though: a programming language that lends
itself to static analysis probably also tends to lend itself to clever
refactoring of source code.
IIRC things like Eclipse for Java are meant to do this sort of thing,
but its online documentation isn't working for me and I've never used
it before myself.
-- Mark
From: Joe Marshall
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <ad6s3txg.fsf@comcast.net>
Dirk Thierbach <··········@gmx.de> writes:
> And I would like to have a *concrete*, very specific example, so
> we can actually go through it and see what happens if we attempt the
> same thing in different languages.
You won't like this, but here goes:
In the change management product I'm working on we provide a versioned
object system. As changes are made to objects, information about what
object changed, what slot in the object changed, what the change was,
who requested the change, and when is recorded in a log.
Since *any* slot in *any* object in the system can change to
essentially *any* value, the various log entries have all sorts of
different types of objects in them.
Since the versioned object system is a substrate to the overlying
change management system, it is compiled as a separate module *before*
the overlying system, so it has no access to the type schema. The
compiler has no knowledge of the types that may be used.
--
~jrm
Joe Marshall <·············@comcast.net> wrote:
> Dirk Thierbach <··········@gmx.de> writes:
>> And I would like to have a *concrete*, very specific example, so
>> we can actually go through it and see what happens if we attempt the
>> same thing in different languages.
> You won't like this, but here goes:
I do like it, but it is again on a very abstract level. Why can't
anybody produce a *concrete* example with code one can actually touch?
And it isn't about uniform vs. non-uniform lists either.
> In the change management product I'm working on we provide a versioned
> object system. As changes are made to objects, information about what
> object changed, what slot in the object changed, what the change was,
> who requested the change, and when is recorded in a log.
I am not sure whether these are objects in the programming (CLOS) sense or
objects in the data sense. It seems like you want to integrate a
language management tool into the language itself. This only makes
sense for languages that have introspection, and collapse development
and runtime system into one. For languages which don't, you're on the
developing side, so you have to deal on the meta level with it.
This is not a matter of static vs. dynamic typing.
You can get some of this introspection with the Dynamic type,
but that is probably not what you want.
(Now you will say: But why do the difficult thing and keep runtime
and development system apart? You're right that this is a nice
feature of Lisp, but that doesn't mean you cannot do such things
with static typing. Template Haskell is an example for a tool that
works on the "development" level, i.e. at "compile" time.)
- Dirk
From: Joe Marshall
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <fzgj57xk.fsf@ccs.neu.edu>
Dirk Thierbach <··········@gmx.de> writes:
> Joe Marshall <·············@comcast.net> wrote:
>> Dirk Thierbach <··········@gmx.de> writes:
>
>>> And I would like to have a *concrete*, very specific example, so
>>> we can actually go through it and see what happens if we attempt the
>>> same thing in different languages.
>
>> You won't like this, but here goes:
>
> I do like it, but it is again on a very abstract level. Why can't
> anybody produce a *concrete* example with code one can actually touch?
It is concrete, but unfortunately I can't just post the code.
> I am not sure whether these are objects in the programming (CLOS) sense or
> objects in the data sense.
The managed objects are CLOS objects, but the managed data within them
may be arbitrary.
> It seems like you want to integrate a language management tool into
> the language itself. This only makes sense for languages that have
> introspection, and collapse development and runtime system into
> one. For languages which don't, you're on the developing side, so
> you have to deal on the meta level with it.
>
> This is not a matter of static vs. dynamic typing.
I told you you wouldn't like it.
I don't know what you mean by `this is not a matter of static
vs. dynamic typing' though. You wanted a non-trivial case where
heterogeneous lists were used and I gave you one.
> (Now you will say: But why do the difficult thing and keep runtime
> and development system apart? You're right that this is a nice
> feature of Lisp, but that doesn't mean you cannot do such things
> with static typing. Template Haskell is an example for a tool that
> works on the "development" level, i.e. at "compile" time.)
Even if we separated the development time from the runtime, there is a
type schema associated with the runtime model and we have to model it
within our system without having any details about the types at
compile time. For instance, the user may wish to specify that objects
of type FOO must have a string in the first slot and must be contained
in objects of type BAR. The runtime must enforce this, but it cannot
do so statically.
Joe Marshall wrote:
> In the change management product I'm working on we provide a versioned
> object system. As changes are made to objects, information about what
> object changed, what slot in the object changed, what the change was,
> who requested the change, and when is recorded in a log.
What for? And how can you deal with objects referenced
in multiple ways? (if slot 1 in object A is changed so it contains
object B, and then something in B is changed, how do you figure out
which B you want when you're looking for the old A?)
David
From: Joe Marshall
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <vfpdlmd0.fsf@ccs.neu.edu>
Feuer <·····@his.com> writes:
> Joe Marshall wrote:
>
>> In the change management product I'm working on we provide a versioned
>> object system. As changes are made to objects, information about what
>> object changed, what slot in the object changed, what the change was,
>> who requested the change, and when is recorded in a log.
>
> What for?
The versioned object system is the substrate for the change management.
Given the versioned object system, I can define versioned CLOS classes
to model whatever needs changing. For instance:
(defclass file ()
  ((contents ...))
  (:metaclass versioned-standard-class))
and now calling (setf (file-contents file) new-contents) is all that
is necessary to create a new version of the file.
> And how can you deal with objects referenced in multiple ways? (if
> slot 1 in object A is changed so it contains object B, and then
> something in B is changed, how do you figure out which B you want
> when you're looking for the old A?)
In each transaction you specify a version scope. This consists of a
timestamp and the change-sets you want to apply. Within the
transaction, you get a particular view of the world.
So suppose slot 1 in object A is changed to contain object B at time
t0, and then something in object B is changed at time t1. If you set
the version scope to be before t0, then you see A as it was before you
modified the slot. If you set the version scope between t0 and t1,
you will see A with the new object B, but without the change to B. If
you set the version scope to `now', you get A with the latest version
of B in it.
And, if you want, you can synthesize A unchanged and B changed, but
you won't be able to refer to B via A. You *may* want to do this if,
for instance, A represents a directory and B represents a file. The
first change involves moving B to a new directory in the file system,
the second change is a patch to file B. Since these are orthogonal
changes, it makes sense to be able to combine them independent of time
order.
Joe Marshall wrote:
> The versioned object system is the substrate for the change management.
<description>
> And, if you want, you can synthesize A unchanged and B changed, but
> you won't be able to refer to B via A. You *may* want to do this if,
> for instance, A represents a directory and B represents a file. The
> first change involves moving B to a new directory in the file system,
> the second change is a patch to file B. Since these are orthogonal
> changes, it makes sense to be able to combine them independent of time
> order.
I'm thinking that in Haskell a system like this would probably
work rather differently. An object would be represented by a list
of versions (or if you have objects with many fields then by some
more clever means to save space), so there would be no need to record
the versions somewhere else, so no need to deal with dynamic types.
Things like named slots and record-to-file functionality would be
fairly easy to add.
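A rough sketch of that shape, with the names and the timestamp
representation invented:

import Data.List (find)

type Timestamp = Integer

-- An object is simply its own history, newest version first.
newtype Versioned a = Versioned [(Timestamp, a)]

current :: Versioned a -> a              -- assumes at least one version
current (Versioned vs) = snd (head vs)

asOf :: Timestamp -> Versioned a -> Maybe a
asOf t (Versioned vs) = fmap snd (find (\(stamp, _) -> stamp <= t) vs)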
David
From: Joe Marshall
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <ad6plji4.fsf@ccs.neu.edu>
Feuer <·····@his.com> writes:
> Joe Marshall wrote:
>
>> The versioned object system is the substrate for the change management.
>
> <description>
>
>> And, if you want, you can synthesize A unchanged and B changed, but
>> you won't be able to refer to B via A. You *may* want to do this if,
>> for instance, A represents a directory and B represents a file. The
>> first change involves moving B to a new directory in the file system,
>> the second change is a patch to file B. Since these are orthogonal
>> changes, it makes sense to be able to combine them independent of time
>> order.
>
> I'm thinking that in Haskell a system like this would probably
> work rather differently. An object would be represented by a list
> of versions (or if you have objects with many fields then by some
> more clever means to save space), so there would be no need to record
> the versions somewhere else, so no need to deal with dynamic types.
> Things like named slots and record-to-file functionality would be
> fairly easy to add.
Ah, but then you get into other difficulties. The administrator comes
along and says, `Gee, things were ok last week, what changed?' So
unless you have a central index of changes, you would have to examine
*every* object in the system!
Also consider that the database is intended to last for years, so
objects may acquire or shed slots over time.
Joe Marshall wrote:
> Ah, but then you get into other difficulties. The administrator comes
> along and says, `Gee, things were ok last week, what changed?' So
> unless you have a central index of changes, you would have to examine
> *every* object in the system!
You don't need a central index of changes: just a central index of
which objects have changed. In a static system, this is not generally
possible, but it becomes possible if you name all the objects in the
system, and then just put the names in the log. So every member of
the "Loggable" class, say, would support method "getName". Presumably,
once you have the name of an object, you have the means to inspect
it, and all its previous versions.
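In code, the idea might look something like this (only "Loggable" and
"getName" come from the paragraph above; the rest is assumed):

type Timestamp = Integer

class Loggable a where
  getName :: a -> String

-- The central log records only names and times, not the objects
-- themselves, so it needs no knowledge of the objects' types.
type ChangeLog = [(Timestamp, String)]

logChange :: Loggable a => Timestamp -> a -> ChangeLog -> ChangeLog
logChange t x entries = (t, getName x) : entries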
> Also consider that the database is intended to last for years, so
> objects may acquire or shed slots over time.
This is a nasty problem. How do you manage this in Lisp?
David
BTW: the real way to deal with these sorts of things in a
statically typed functional language is to use an RTS that
does most of the work for you. That's cheating, of course,
but it is better to confine icky representational stuff to
the RTS than to deal with it explicitly.
From: Joe Marshall
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <65ha63bu.fsf@comcast.net>
Feuer <·····@his.com> writes:
> Joe Marshall wrote:
>
>> Ah, but then you get into other difficulties. The administrator comes
>> along and says, `Gee, things were ok last week, what changed?' So
>> unless you have a central index of changes, you would have to examine
>> *every* object in the system!
>
> You don't need a central index of changes: just a central index of
> which objects have changed. In a static system, this is not generally
> possible, but it becomes possible if you name all the objects in the
> system, and then just put the names in the log. So every member of
> the "Loggable" class, say, would support method "getName". Presumably,
> once you have the name of an object, you have the means to inspect
> it, and all its previous versions.
This just defers the problem. If you have the name of an object, you
need to have some mapping from name to object, the output of such
mapping would have to allow any object and we're back to where we
started. You'd probably want to do this by making some sort of
global type that you'd downcast when necessary.
>> Also consider that the database is intended to last for years, so
>> objects may acquire or shed slots over time.
>
> This is a nasty problem. How do you manage this in Lisp?
The type schema is also versioned, but the meta-schema need not be.
Selecting a version involves selecting the appropriate meta-version.
This being said, I wouldn't mind a bit of static typing in parts of
the system. Some of the low level bookkeeping and utility routines
explicitly type check their args to ensure that the high-level
routines are using them correctly. This is a place where static
checking might come in handy.
--
~jrm
Lauri Alanko <··@iki.fi> writes:
> Mario S. Mommer <········@yahoo.com> virkkoi:
> > Just from the top of my head: Could be output of a tokenizer, for
> > instance. Or a list of objects in the bag of some adventure game
> > hero.
>
> If you tokenize things, you know what kind of tokens you are going to
> have.
So much so that specifying it is just redundant.
> Thus you can make them into a new data type. This is not a
> particularly big deal, since you have to list the kinds of tokens you
> are going have _somewhere_ anyway.
Says who?
> If a list of game objects is to be of any use, all those objects must
> share a common form of some kind. This common form then determines the
> type of the elements of the list. Again, the matter of writing down the
> type is just a natural part of the design process. You have to define
> the form of the things you are dealing with _somewhere_, or else your
> _design_ is sloppy.
Just because you say so? If I just stuff these things in a list, and
it just works, what is wrong with it?
From: Lauri Alanko
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <bpeaei$aog$3@la.iki.fi>
Mario S. Mommer <········@yahoo.com> virkkoi:
> > Thus you can make them into a new data type. This is not a
> > particularly big deal, since you have to list the kinds of tokens you
> > are going to have _somewhere_ anyway.
>
> Says who?
Good design principles, quite independent of typing issues. Your lexer
ought to have a documented interface. If you just say "just look at the
friggin source to see what kind of tokens it produces", well, then
someone using your lexer gets the additional burden of wading through
the implementation, and moreover cannot be sure whether some bizarre
behavior is intentional or a bug, since there is no specification to
compare it against.
Writing down the type of the token just means that in addition to the
human reader, also the compiler gets to know the intended interface.
> > You have to define
> > the form of the things you are dealing with _somewhere_, or else your
> > _design_ is sloppy.
>
> Just because you say so? If I just stuff these things in a list, and
> it just works, what is wrong with it?
What is wrong with it is that it is brittle and undocumented: if you
haven't defined what kind of objects your code actually can deal with,
nothing prevents anyone from accidentally putting the wrong kind of
object in the list, since there is nothing telling otherwise, except the
source. And again, having to rely on reading the source to determine the
interface is, IMNSHO, Bad Design. And if you do add documentation, you
might as well write it in a language that both a human and the compiler
understand, i.e. types.
Lauri Alanko
··@iki.fi
Mario S. Mommer <········@yahoo.com> writes:
>Matthias Blume <····@my.address.elsewhere> writes:
>> Mario S. Mommer <········@yahoo.com> writes:
>>
>> > A list of objects of different types rarely comes up in practice? Are
>> > you serious???
>>
>> Why not? After all, most modern statically typed languages have
>> better facilities for grouping together values of different types.
>> Lists are best used when you have uniform data of unknown count; for
>> non-uniform data of known count there are tuples (and, more generally,
>> records). Non-uniform data of unknown count is, in my experience,
>> very rare indeed.
>
>Just from the top of my head: Could be output of a tokenizer, for
>instance.
That'd be a list of tokens.
>Or a list of objects in the bag of some adventure game
>hero.
That'd be a list of game objects.
Neither of those would require non-uniform data.
--
Fergus Henderson <···@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Fergus Henderson <···@cs.mu.oz.au> writes:
> >Just from the top of my head: Could be output of a tokenizer, for
> >instance.
>
> That'd be a list of tokens.
>
> >Or a list of objects in the bag of some adventure game
> >hero.
>
> That'd be a list of game objects.
>
> Neither of those would require non-uniform data.
No, but they do not require a uniform rep either.
Only for religious reasons would they.
On 18 Nov 2003 23:14:45 +0100
Mario S. Mommer <········@yahoo.com> wrote:
> Sorry, but the more I read about static typing from their advocates,
> the more I think it is a full-fledged nightmare.
I get a similar feeling when I see things like
(list 3 t #\a) where a Haskeller would have (3,True,'a') and things like
Joe Marshall's lazy-fringe which is not parametric (e.g. consider a
"tree" of pairs or lists or "trees").
From: Joe Marshall
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <4qx1s08d.fsf@comcast.net>
Darius <·······@hotpop.com> writes:
> On 18 Nov 2003 23:14:45 +0100
> Mario S. Mommer <········@yahoo.com> wrote:
>
>> Sorry, but the more I read about static typing from their advocates,
>> the more I think it is a full-fledged nightmare.
>
> I get a similar feeling when I see things like
> (list 3 t #\a) where a Haskeller would have (3,True,'a') and things like
> Joe Marshall's lazy-fringe which is not parametric (e.g. consider a
> "tree" of pairs or lists or "trees").
I'm not sure whether I should be flattered or just enjoy the
schadenfreude.
--
~jrm
Darius <·······@hotpop.com> writes:
> On 18 Nov 2003 23:14:45 +0100
> Mario S. Mommer <········@yahoo.com> wrote:
> > Sorry, but the more I read about static typing from their advocates,
> > the more I think it is a full-fledged nightmare.
>
> I get a similar feeling when I see things like
> (list 3 t #\a)
What's the friggin' problem?
> where a Haskeller would have (3,True,'a')
...and what's the advantage of that? Commas more readable?
> and things like Joe Marshall's lazy-fringe which is not parametric
> (e.g. consider a "tree" of pairs or lists or "trees").
This is only something you /can/ do, not something you /have/ to
do. Nobody forces you to program in that style.
On 19 Nov 2003 00:08:28 +0100
Mario S. Mommer <········@yahoo.com> wrote:
> Darius <·······@hotpop.com> writes:
> > On 18 Nov 2003 23:14:45 +0100
> > Mario S. Mommer <········@yahoo.com> wrote:
> > > Sorry, but the more I read about static typing from their
> > > advocates, the more I think it is a full-fledged nightmare.
> >
> > I get a similar feeling when I see things like
> > (list 3 t #\a)
>
> What's the friggin' problem?
I could say the same thing.
From: Lauri Alanko
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <bpe9ds$aog$2@la.iki.fi>
Mario S. Mommer <········@yahoo.com> virkkoi:
> > > Sorry, but the more I read about static typing from their advocates,
> > > the more I think it is a full-fledged nightmare.
> >
> > I get a similar feeling when I see things like
> > (list 3 t #\a)
>
> What's the friggin' problem?
This is exactly what one could ask you. And the answer is the same: to a
static typing advocate, that heterogeneous untyped list is a source of
great discomfort since it goes against his design principles. Likewise,
the considerations in statically typed programs are a source of great
discomfort to you, since they go against your design principles.
This is all quite all right, but I just don't think that his problem is
any more friggin' than yours.
Lauri Alanko
··@iki.fi
Lauri Alanko <··@iki.fi> writes:
> Mario S. Mommer <········@yahoo.com> virkkoi:
> > > > Sorry, but the more I read about static typing from their advocates,
> > > > the more I think it is a full-fledged nightmare.
> > >
> > > I get a similar feeling when I see things like
> > > (list 3 t #\a)
> >
> > What's the friggin' problem?
>
> This is exactly what one could ask you. And the answer is the same: to a
> static typing advocate, that heterogeneous untyped list is a source of
> great discomfort since it goes against his design principles. Likewise,
> the considerations in statically typed programs are a source of great
> discomfort to you, since they go against your design principles.
>
> This is all quite all right, but I just don't think that his problem is
> any more friggin' than yours.
But Lisp IS a strongly, statically typed language:
(DECLAIM (FTYPE (FUNCTION ((INTEGER 0 10) (INTEGER 0 10)) T) FUN))
(DEFUN FUN (A B)
  (IF (= 0 (THE (INTEGER 0 20) (+ (THE (INTEGER 0 10) A)
                                  (THE (INTEGER 0 10) B))))
      (LIST 3 T #\A B)
      (MAKE-ARRAY 3 :INITIAL-CONTENTS '(3 T #\B))))
In all the modern languages I know you can always STRONGLY and
STATICALLY type everything as T (or Object, or whatever it's named in
the language you're using).
For example in Objective-C, you could use the type id (not strongly
statically typed?) or the type Object* (strongly statically typed!)
but they happen to be the same thing (at least at the Objective-C
level, if not at the C level).
So on one hand, statically/dynamically is not very significant when
you consider that the compilation-time and run-time border is fuzzy,
and on the other hand, you can always statically type all your
variables as the ANY type and process the values dynamically. So,
really,
> What's the friggin' problem?
I'd agree to say that typing can help prove some assertions or
properties of the program. But since there is proof of the existence of
good ways to deduce the types automatically, why should the programmer
be forced to _write_ type declarations? Of course, any programmer
will still _think_ about types; how would we navigate in complicated
(or even simple) cons trees otherwise?
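To make the "type everything as the ANY type" point concrete in a statically
typed setting, here is a rough Haskell analogue of the same idea, using
Data.Dynamic as the ANY type (only a sketch of the idea, not a recommendation):
import Data.Dynamic (Dynamic, toDyn, fromDynamic)
-- A heterogeneous list, statically typed as [Dynamic]:
anyList :: [Dynamic]
anyList = [toDyn (3 :: Int), toDyn True, toDyn 'a']
-- The values are checked dynamically when they are taken back out:
firstInt :: [Dynamic] -> Maybe Int
firstInt xs = case [ i | Just i <- map fromDynamic xs ] of
                (i:_) -> Just i
                []    -> Nothing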
--
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Lauri Alanko
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <bpei8q$f64$1@la.iki.fi>
Pascal Bourguignon <····@thalassa.informatimago.com> virkkoi:
> So on one hand, statically/dynamically is not much significant when
> you consider that the compilation-time and run-time border is fuzzy,
Indeed. However, even then you can distinguish between "check as much as
possible as soon as you can" (eval does type checking) and "check only
when it's absolutely necessary" (i.e. during primitive calls).
> I'd agree to say that typing can help prove some assertions or
> properties of the program. But since there is proof of the existence of
> good ways to deduce the types automatically, why should the programmer
> be forced to _write_ type declarations? Of course, any programmer
> will still _think_ about types; how would we navigate in complicated
> (or even simple) cons trees otherwise?
Many people think that it's good design and good coding practice to
write down your thoughts explicitly. It helps uncover logical errors
that you'd miss if you just kept everything in your head.
But that aside, there's always inference. For example, using ocaml's
polymorphic variants you can do many things without ever writing down
the exact types:
# let rec eval = function `Num(i) -> i | `Plus(e1,e2) -> eval e1 + eval e2;;
val eval : ([< `Num of int | `Plus of 'a * 'a ] as 'a) -> int = <fun>
(The second line is produced by the compiler.)
So static typing by no means mandates writing down all your types. It
is usually done, but this is by _choice_. The idea is that types of
terms are simpler and easier to grasp than the terms themselves, so the
programmer first figures out the types of the things he deals with, and
once he's sure that those correspond to the problem, he starts writing
the functions. Now the compiler can tell if he makes a mistake in the
implementation that gives it the wrong type, since this obviously
means that he got something wrong. Without the prior type declaration,
he could make a logical mistake that still resulted in a well-typed (but
differently typed) function, and he wouldn't realize the error until
later.
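As a tiny illustration of that workflow (the function and its type are made
up for the example): the declared type records the intention first, and an
implementation that drifts away from it is rejected immediately.
import Data.List (group, sort)
-- Written down first, as the specification of intent:
wordFrequencies :: String -> [(String, Int)]
-- A first attempt that simply returned (words s) would be rejected, since
-- [String] is not [(String, Int)]; the mistake surfaces at compile time.
wordFrequencies s = map (\ws -> (head ws, length ws)) (group (sort (words s)))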
This is why to a static typing advocate these complaints about type
declarations seem so silly. To him they are not a burden: often they are
quite optional. But still he uses them because they are a powerful
design and certification tool.
So, pure inference without declarations is less work for the programmer
but it only guarantees that the parts of the program are compatible with
each other. Declarations guarantee that they are also compatible with
the programmer's intentions.
Lauri Alanko
··@iki.fi
In article <············@la.iki.fi>, Lauri Alanko <··@iki.fi> wrote:
(snip)
>So static typing by no means mandates writing down all your types. It
>is usually done, but this is by _choice_. The idea is that types of
(snip)
>quite optional. But still he uses them because they are a powerful
>design and certification tool.
(snip)
Moreover, they're very useful for debugging: the compiler is more able
to give you helpful error messages if it has more information about
what you really meant.
The Haskell code I write normally needs no type signatures at
all. However, I provide them, because if I do then the compiler tells
me exactly where most of my bugs are in a helpful manner. (Also, just
as with adding declares for CMU-CL, if I can be more specific, that
can help the optimiser.)
Of course, I'm not saying that such bugs wouldn't be just as easily
diagnosed in the interactive debugger a good Lisp development
environment can provide after a runtime error is raised during a unit
test. The static thing works well for me, though, partly because the
errors are phrased at the abstraction level at which I would tend to
think about how to fix the bug, instead of in terms of concrete data.
-- Mark
Pascal Bourguignon <····@thalassa.informatimago.com> writes:
>> What's the friggin' problem?
> I'd agree to say that typing can help prove some assertions or
> properties of the program. But since there is proof of the existence of
> good ways to deduce the types automatically, why should the programmer
> be forced to _write_ type declarations?
Indeed. Is anybody actually advocating this? I'm amazed that after
miles and miles of debate, you still think anybody involved is trying
to suggest this.
Just to make this clear: the static typing side of this thread is
about Hindley-Milner type systems, where type declarations are
*optional*, and where the system *infers* the type of expressions for
you.
If you (for any value of you, not P.B. in particular) don't have
experience with these systems, please don't extrapolate and generalize
too lightly from previous experience with C++ or similar languages.
Type declarations are of course considered good practice -- it is
documentation, and it confirms for the programmer that he has the
right idea.
-----
If you want to add types, you do need *data* declarations (which for
some is, apparently, the friggin' problem). I must admit I don't
quite understand this point of view -- if you have a tokenizer, surely
it produces tokens, and not objects for adventure game heroes' bags?
The disadvantage is that you need to type "data Token" (and probably
add the different types of token). Whoa, that's ten characters
wasted, and productivity down the drain. What you get for free is
that the system will tell you about coverage problems in all cases
where tokens are consumed, and it will prove that you won't ever feed
non-tokens to those consumers.
In my opinion, the effort is small and the benefit substantial, but
it's clearly something we can agree to disagree on.
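A sketch of what those ten characters buy, with made-up token constructors:
data Token = TNum Int | TIdent String | TPlus | TLParen | TRParen
-- A consumer that forgets a case is flagged by the coverage check (GHC with
-- -Wall warns about non-exhaustive patterns if one equation below is removed),
-- and nothing that is not a Token can ever be passed to it:
describe :: Token -> String
describe (TNum n)   = "number " ++ show n
describe (TIdent s) = "identifier " ++ s
describe TPlus      = "plus sign"
describe TLParen    = "open paren"
describe TRParen    = "close paren"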
I haven't programmed much Lisp, but one of the problems is that
it is sometimes hard to grok the data structures used -- programmers
throw around lists of T, but of course, functions rely on some
particular internal structure, and it's sometimes hard to read this
purely from the code. (You can blame this on sloppy programmers, of
course. Or on my incompetence, I won't mind.)
-----
> Of course, any programmer will still _think_ about types, how would
> we navigate in complicated (or even simple) cons trees otherwise?
Exactly. I often do
my_function :: a -> [b] -> c
my_function = undefined -- I'm not sure how, but I do know what
You can see this as thinking about types, or you can see it as writing
"unit tests" in the form of a static invariant for my_function. I can
still test (the rest of) my code, and the system will tell me if my
invariant for my_function is incompatible with the code it interacts
with.
Equally, I often do
other_function = ...some expression...
and leave it to the system to work out the appropriate type. (And if
I'm lazy, I can of course query the type, and paste the declaration.)
Still the type needs to be compatible with how the other_function is
used.
So if you think about types anyway, why not express those thoughts to
the system?
-kzm
PS: Some people have posted examples in C-like languages, please don't
interpret it as endorsement of those languages.
--
If I haven't seen further, it is by standing in the footprints of giants
Andreas Rossberg wrote:
> Pascal Bourguignon wrote:
>
>>I can't see the difference between Lisp and Haskell or other strongly,
>>statically or dynamically typed languages. I've shown examples where
>>the Lisp compiler clearly checked types at compilation time.
>>
>>We may discuss the strength and depth of the theorem prover used by
>>Lisp compilers for type inference and type checking, but that's an
>>implementation detail...
>
>
> No, a proper static type system is *complete* wrt the properties it proves
> (except if the language designers shoot holes in it by providing unsound
> casting, but then it's not proper), while a soft type system is necessarily
> incomplete (and highly so). A fundamental difference, not just an
> implementation detail.
Yes. But the issue is what are you willing to trade off. On one side
you have flexibility on the other you have (type) safety. I have not
done my homework, but I bet that it wouldn't be impossible to come up
with a complete type system for the "static" (quotes mandatory) parts of
Common Lisp (I think Drew McDermott is doing this and maybe more).
The stand of Common Lispers is that it does not have to be an
all-or-nothing stance. Flexibility is extremely important. That is why
we also insist on an S-expr syntax.
Cheers
--
marco
Marco Antoniotti wrote:
>
>> No, a proper static type system is *complete* wrt the properties it
>> proves (except if the language designers shoot holes in it by providing
>> unsound casting, but then it's not proper), while a soft type system is
>> necessarily incomplete (and highly so). A fundamental difference, not
>> just an implementation detail.
>
> Yes. But the issue is what are you willing to trade off. On one side
> you have flexibility on the other you have (type) safety.
Sure.
> I have not
> done my homework, but I bet that it wouldn't be impossible to come up
> with a complete type system for the "static" (quotes mandatory) parts of
> Common Lisp (I think Drew McDermott is doing this and maybe more).
I am absolutely convinced that this is indeed impossible without severe
changes to the language (eg. distinguishing tuples from lists, to start
with).
> The stand of Common Lispers is that it does not have to be an
> all-or-nothing stance. Flexibility is extremely important. That is why
> we also insist on an S-expr syntax.
I'd say that S-expressions are pretty much on one side of the stance. YMMV
about which one... :-)
--
Andreas Rossberg, ········@ps.uni-sb.de
"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.
Andreas Rossberg wrote:
> Marco Antoniotti wrote:
>
>>>No, a proper static type system is *complete* wrt the properties it
>>>proves (except if the language designers shoot holes in it by providing
>>>unsound casting, but then it's not proper), while a soft type system is
>>>necessarily incomplete (and highly so). A fundamental difference, not
>>>just an implementation detail.
>>
>>Yes. But the issue is what are you willing to trade off. On one side
>>you have flexibility on the other you have (type) safety.
>
>
> Sure.
>
>
>>I have not
>>done my homework, but I bet that it wouldn't be impossible to come up
>>with a complete type system for the "static" (quotes mandatory) parts of
>>Common Lisp (I think Drew McDermott is doing this and maybe more).
>
>
> I am absolutely convinced that this is indeed impossible without severe
> changes to the language (eg. distinguishing tuples from lists, to start
> with).
But you have tuples in CL. They are called DEFSTRUCTs. What you want is
some restriction on lists (incidentally, I strongly believe that the CL
type system should be extended to include SIMPLE-LISTs and COMPLEX-LISTs
- or whatever you want to call them). However, whenever you use
something other than LISTs, you can already get a lot of mileage.
CMUCL being an existence proof.
Cheers
--
Marco
From: Feuer
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <3FBB15BD.67BD9CF@his.com>
"Mario S. Mommer" wrote:
>
> Darius <·······@hotpop.com> writes:
> > I get a similar feeling when I see things like
> > (list 3 t #\a)
>
> What's the friggin' problem?
>
> > where a Haskeller would have (3,True,'a')
>
> ...and what's the advantage of that? Commas more readable?
Actually, (3,True,'a') is not a list. It is a triple (3-tuple)
which is much like a Pascal record or C struct. Lists are
homogeneous; tuples are heterogeneous. The good thing is that
if it's in the list, you know what you can do with it. In Java
(all the time) or CL (at least sometimes), a runtime check is needed
to make sure the thing coming out of the list is what it should be.
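For example, in Haskell the components of the tuple keep their own static
types, so using them needs no run-time check (the names below are invented):
triple :: (Int, Bool, Char)   -- heterogeneous, but each slot's type is known
triple = (3, True, 'a')
flags :: [Bool]               -- homogeneous: whatever is in the list is a Bool
flags = [True, False, True]
summary :: String
summary = let (n, b, c) = triple
          in show (n + length (filter id flags)) ++ [c | b]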
David
Mario S. Mommer <········@yahoo.com> once said:
>A list of objects of different types rarely comes up in practice? Are
>you serious???
Yes. However your incredulity suggests to me that we have different
notions about what "list" or "different type" means in this context.
Looking at prior messages here, it appears that "terminology" is always
a big sticking point. It might be better to only consider specific
examples. That is, if you have a few instances of what you would call
"lists of different types" which typically occur in your practice,
perhaps you can describe them (along with the general "problem
context") here. Then we can see if/how we would approach the same
problem/example in a static-typing setting.
--
Brian M. McNamara ······@acm.org : I am a parsing fool!
** Reduce - Reuse - Recycle ** : (Where's my medication? ;) )
Erann Gat <···@jpl.nasa.gov> wrote:
> What is the type of this function:
>
> (defun mapn (fn &rest lists)
>   (if (every #'null lists)
>       nil
>       (cons (apply fn (remove nil (mapcar #'car lists)))
>             (apply #'mapn fn (mapcar #'cdr lists)))))
If you ignore the NILs (see below for that), the function is
mapn f = map f . transpose
and has type
mapn :: ([a] -> b) -> [[a]] -> [b]
> in light of this example:
>
> ? (mapcar 'type-of (mapn (compose 'sqrt '+) '(1 2 3) '(nil 5 -41/4 1)
> '(-101.7 5 5 -3/4 nil)))
> ((COMPLEX FLOAT) DOUBLE-FLOAT (COMPLEX RATIONAL) RATIO FIXNUM)
That's the same old game; you think that because there is a
single type 'a', it would be difficult to mix different 'lisp types'.
I have shown often enough now how to do that; I'll do it once more,
but by now you should really have understood the principle.
Haskell does not come with library functions that mix different
numeric types. So starting with the examples I already gave for
a general numeric datatype Number, one can easily extend this
with floats and a square function that works on every type. (I
didn't do complex on rationals, only complex on floats, but that's
easy to add, too). Then, make it an instance of the Num typeclass.
We can even write a 'typeof' function.
So we have a new datatype Number, new functions nsqrt, ntypeof, and
constructors <%> and <:+> for rational and complex datatypes. Then,
from GHCI (replacing NIL with 0):
Ok, modules loaded: Main.
Main> mapn (nsqrt . sum) [[1,2,3],[0,5,-41<%>4,1],[-101.7,5,5,-3<%>4,0]]
[0.0 :+ 10.034939,3.4641016,0.0 :+ 1.5,1 % 2,0]
Main> mapn (ntypeof.nsqrt.sum) [[1,2,3],[0,5,-41<%>4,1],[-101.7,5,5,-3<%>4,0]]
["Complex-Float","Float","Complex-Float","Rational","Integer"]
Here mapn specialises to ([Number] -> Number) -> [[Number]] -> [Number].
Finally, it's easy to simulate the NIL with a Maybe type. I'll leave this
as an exercise. (Hint: Use the Prelude function catMaybes if you want to
avoid doing it 'manually').
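For reference, here is one minimal way such a Number type could look; this is
only a sketch of the shape of the idea (it promotes everything to complex
doubles rather than keeping rationals exact, unlike the code referred to above):
import Data.Complex (Complex((:+)))
import Data.List (transpose)
data Number = NInt Integer | NRat Rational | NFloat Double | NCplx (Complex Double)
  deriving Show
ntypeof :: Number -> String
ntypeof (NInt _)   = "Integer"
ntypeof (NRat _)   = "Rational"
ntypeof (NFloat _) = "Float"
ntypeof (NCplx _)  = "Complex-Float"
toCplx :: Number -> Complex Double
toCplx (NInt i)   = fromInteger i :+ 0
toCplx (NRat r)   = fromRational r :+ 0
toCplx (NFloat f) = f :+ 0
toCplx (NCplx c)  = c
nsum :: [Number] -> Number
nsum = NCplx . sum . map toCplx
nsqrt :: Number -> Number
nsqrt n = case sqrt (toCplx n) of
            (re :+ 0) -> NFloat re
            c         -> NCplx c
mapn :: ([a] -> b) -> [[a]] -> [b]
mapn f = map f . transpose
-- mapn (ntypeof . nsqrt . nsum) then has type [[Number]] -> [String].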
- Dirk
In article <··············@ID-7776.user.dfncis.de>, Dirk Thierbach
<··········@gmx.de> wrote:
> That's the same old game
...
> So starting with the examples I already gave for
> a general numeric datatype Number, one can easily extend this
> with floats and a square function that works on every type.
Yes, this is indeed the "same old game." I call this game "proving
Greenspun's tenth law."
E.
From: Lauri Alanko
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <bpefq2$e9k$1@la.iki.fi>
·········@jpl.nasa.gov (Erann Gat) virkkoi:
> > So starting with the examples I already gave for
> > a general numeric datatype Number, one can easily extend this
> > with floats and a square function that works on every type.
>
> Yes, this is indeed the "same old game." I call this game "proving
> Greenspun's tenth law."
It's only proven once you show that in any large enough Haskell
application such heterogeneous numeric types always crop up. Just
because you need them doesn't mean that a Haskell programmer needs them.
Lauri Alanko
··@iki.fi
Erann Gat <·········@jpl.nasa.gov> wrote:
> In article <··············@ID-7776.user.dfncis.de>, Dirk Thierbach
> <··········@gmx.de> wrote:
>
>> That's the same old game
> ...
>> So starting with the examples I already gave for
>> a general numeric datatype Number, one can easily extend this
>> with floats and a square function that works on every type.
>
> Yes, this is indeed the "same old game." I call this game "proving
> Greenspun's tenth law."
I guess in your case that boils down to "I want to program in Lisp,
no matter what language I use". You can actually program in other
languages than Lisp by using their features, instead of imitating
those of Lisp because for some reason you insist that you need them.
But then, that means you have to actually understand the language
you're using.
And libraries are different in different languages. Is that so
surprising?
Have you ever tried programming in Smalltalk? Did you complain when
you couldn't find the equivalent to some Lisp library function
at once?
- Dirk
Dirk Thierbach <··········@gmx.de> writes:
> I guess in your case that boils down to "I want to program in Lisp,
> no matter what language I use". You can actually program in other
> languages than Lisp by using their features, instead of imitating
> those of Lisp because for some reason you insist that you need them.
Well, some of us (lisp hackers) have tried quite a number of languages
(the languages I've done reasonably substantial work in myself include
quite different beasts like fortran, simula, pascal, prolog and perl,
and most of these I learned _before_ lisp) and getting back to Common
Lisp is like coming home from a horror trip. CL is an amazingly
versatile language and an extremely powerful tool (and best of all, a
meta-tool!). Please keep that in mind when CL hackers are reluctant to
believe that Haskell or whatever will solve any problems for them.
For me, this discussion has sparked some interest in really learning
Haskell, but I'm still a bit sceptical, mainly because (Computer
Science-oriented) Type Theory always seemed pretty disgusting to me (I
used to do mathematical logic).
--
(espen)
In article <··············@ID-7776.user.dfncis.de>,
Dirk Thierbach <··········@gmx.de> wrote:
> I want them to rethink the silly name calling and prejudices
> that seem to be an automatic reaction whenever someone mentions
> static typing. Raffael is a good example of this.
Let me say that I have absolutely no doubt that you and others are quite
productive using Haskell, or Ocaml. I'll go farther to say that for
certain tasks, for example, software that puts people's lives in
jeopardy, such as an x-ray controller for treating tumors, I would
*prefer* to use a statically typed language for the additional
protection against type errors that it provides. In such a case, I would
try to use Haskell, or Ocaml, as they seem to me the statically typed
languages closest to lisp in spirit.
However, I don't write such software, so I do not wish to have to
conform my thinking to a language's type system in order to avoid having
type errors show up at runtime. With software that puts people's lives
at risk, detecting type errors at runtime is often too late. With
software that doesn't put people's lives on the line, detecting type
errors at runtime and having the facilities to deal with them gracefully
is not a problem. So I choose not to have to deal with static type
checking if I don't have to, and I don't have to.
Please be aware, however, that my responses that seem like name calling
to you have been to comments of the form:
"lisp has no real facilities for abstraction."
"lisp doesn't really have types."
(Note that I am *not* attributing these statements to you, Dirk.)
Such statements are merely condescending and insulting, and such posters
(again, not you, Dirk) should expect to get as good as they gave.
Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:
> Please be aware, however, that my responses that seem like name calling
> to you have been to comments of the form:
>
> "lisp has no real facilities for abstraction."
The abstraction facilities thing was brought up by me, and I did *not*
say "lisp has no real facilities for abstraction". I merely said that
it has few good ones, which is an entirely different statement.
> "lisp doesn't really have types."
It has been said (including by myself) that "lisp is untyped". This
is true in the sense that, e.g., the Untyped Lambda Calculus is
untyped. It is not true in another sense, namely the one that says
"lisp has no types" -- provided you take a broad enough range of
possible definitions for "type" including the one that is "set of
values". Notice, however, that the statement "X has no types" is then
trivially false for pretty much every single general-purpose
programming language, so that is not saying all that much. For that
reason alone it should be pretty clear that it is not the
interpretation that was intended when one says "lisp is untyped".
There has been a third statement which has caused widespread dispute,
including name-calling, and that was "lisp has no type system".
Again, this is true in the technical sense where the phrase "type
system" is used the way, e.g., Pierce defines it. It may be false
under certain other re-interpretations of the same phrase, but that
does not really matter. (After all, we all know that everything
depends on the meaning of the word "is".)
In any case, I still don't know why people feel offended or insulted
by any of these statements. I, for one, have been trying to be as
cordial as possible with my replies (being tempted to "retaliate in
kind" more than once, but having -- at least this time -- always
resisted). Likewise, I have not seen any hostility on the parts of
Andreas or Dirk or anybody else on "our" side of the debate. The
bottom line is that you and some other people here simply reject
terminology that is accepted for most of the scientific discourse on
the subjects that were discussed (or attempted to discuss), and any
disagreement is viewed as an affront.
On the other hand, there has been much openly hostile language and
ridicule for those among us who prefer static type checking, so
complaining about insults is more than just a little bit hypocritical.
Matthias
In article <··············@tti5.uchicago.edu>,
Matthias Blume <····@my.address.elsewhere> wrote:
> In any case, I still don't know why people feel offended or insulted
> by any of these statements. I, for one, have been trying to be as
> cordial as possible with my replies (being tempted to "retaliate in
> kind" more than once, but having -- at least this time -- always
> resisted).
Oh really:
In article <··············@hanabi-air.shimizu.blume>,
Matthias Blume <····@my.address.elsewhere> wrote:
> I think you don't know what an abstraction is.
If you think that this was "cordial" then you really have to work on
your people skills. It was, in fact, condescending, and insulting.
From: Lauri Alanko
Subject: Re: Why I don't believe in static typing
Date:
Message-ID: <bph3m3$c0g$1@la.iki.fi>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> virkkoi:
> Matthias Blume <····@my.address.elsewhere> wrote:
> > I think you don't know what an abstraction is.
>
> If you think that this was "cordial" then you really have to work on
> your people skills. It was, in fact, condescending, and insulting.
That's a strange thing to find insulting. If someone said that to me,
I'd consider two options: either
a) the speaker is mistaken and has made a hasty judgement concerning my
knowledge of abstractions, perhaps due to miscommunication,
or
b) the speaker is correct and indeed _I_ have had a faulty or inaccurate
idea of what is really meant by an abstraction.
Thus a sensible way of action would seem to be continuing discussion to
find out which is the case. In either case _someone_ is going to learn
something.
If I got insulted when someone questioned my knowledge, I might well
miss an opportunity for increasing it.
Lauri Alanko
··@iki.fi
Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:
> In article <··············@tti5.uchicago.edu>,
> Matthias Blume <····@my.address.elsewhere> wrote:
>
> > In any case, I still don't know why people feel offended or insulted
> > by any of these statements. I, for one, have been trying to be as
> > cordial as possible with my replies (being tempted to "retaliate in
> > kind" more than once, but having -- at least this time -- always
> > resisted).
>
> Oh really:
>
> In article <··············@hanabi-air.shimizu.blume>,
> Matthias Blume <····@my.address.elsewhere> wrote:
>
> > I think you don't know what an abstraction is.
>
> If you think that this was "cordial" then you really have to work on
> your people skills. It was, in fact, condescending, and insulting.
See Lauri's reply. But I apologize anyway: Yes, it was certainly a
slip-up and below the standards that I tried to set for myself.
In article <··············@hanabi-air.shimizu.blume>,
Matthias Blume <····@my.address.elsewhere> wrote:
> See Lauri's reply. But I apologize anyway: Yes, it was certainly a
> slip-up and below the standards that I tried to set for myself.
Thanks. I apologise if any of my remarks have offended you, or any other
participants in this discussion.
On Wed, 19 Nov 2003, Raffael Cavallaro wrote:
> [...] I do not wish to have to conform my thinking to a language's type
> system in order to avoid having type errors show up at runtime.
Sometimes subjugating yourself to an arbitrary formalism is useful in
itself. Good programmers do this instinctively when designing frameworks.
You set up some constraints, and, if you have trouble solving a problem
within them it forces you to think. Sometimes you realize you were
thinking about the problem the wrong way, sometimes you realize why your
constraints were too tight and how they don't properly capture the
problem domain.
In both cases you have reached a better understanding of the problem.
That's why it's good to err on the side of too-tight constraints
ESPECIALLY when you're still experimenting and trying to understand.
A type system places logical constraints on what you can do, and usually
when you have difficulty with them you'll find it's your thinking that is
flawed. There are still a few valid cases that don't fit, which is why
type systems are still being extended and researched (existential types
etc.).
Raffael Cavallaro <················@junk.mail.me.not.mac.com> wrote:
>
> Let me say that I have absolutely no doubt that you and others are quite
> productive using Haskell, or Ocaml. I'll go farther to say that for
> certain tasks, for example, software that puts people's lives in
> jeopardy, such as an x-ray controller for treating tumors, I would
> *prefer* to use a statically typed language for the additional
> protection against type errors that it provides. In such a case, I would
> try to use Haskell, or Ocaml, as they seem to me the statically typed
> languages closest to lisp in spirit.
>
> However, I don't write such software, so I do not wish to have to
> conform my thinking to a language's type system in order to avoid having
> type errors show up at runtime. With software that puts people's lives
> at risk, detecting type errors at runtime is often too late. With
> software that doesn't put people's lives on the line, detecting type
> errors at runtime and having the facilities to deal with them gracefully
> is not a problem. So I choose not to have to deal with static type
> checking if I don't have to, and I don't have to.
Let me rephrase that with a bit of sarcasm: "In general I don't care if my
program works as it is supposed to, because that would be more work for me.
I'm not willing to do that work unless lives depend on it."
Andreas Rossberg <········@ps.uni-sb.de> wrote:
> Let me rephrase that with a bit of sarcasm: "In general I don't care if my
> program works as it is supposed to, because that would be more work for me.
> I'm not willing to do that work unless lives depend on it."
Let me rephrase that with a bit of realism: "In general the real world
is extremely complicated and my resources are limited. When I write a
program that models some part of the real world, I'll have to make
a lot of simplifying assumptions to get anything done. Most of those
assumptions are likely to be wrong, but it's very hard to know in advance
whether that is critical to the success of my program. The best way to learn
something is by trying something, making some mistakes and analyzing those
mistakes. So I'd better get started right now, instead of waiting
until I understand the real world well enough that I can build a perfect
model (especially because the real world is changing so fast that my
model will probably be obsolete by the time it's 'finished')."
The static typing mentality that all errors are fundamentally wrong and must
be avoided at all costs corresponds badly to most things in real life. I think
it's more productive to see programming as a learning process, where errors
are inevitable. The interesting questions are: how can I deal gracefully
with the errors that are bound to happen, and how can I get the most
information out of them so I'll make smarter errors the next time?
Arthur Lemmens
Arthur Lemmens <········@xs4all.nl> wrote:
> The static typing mentality that all errors are fundamentally wrong
> and must be avoided at all costs corresponds badly to most things in
> real life.
No, it isn't. The static typing mentality (at least mine) is that
errors are fundamentally wrong, and if you can avoid them, you should
avoid them. If you can avoid a whole class of them with no or little
additional cost, then it's natural to do so. If the cost is too
high, you do something else.
> I think it's more productive to see programming as a learning
> process, where errors are inevitable.
Errors *are* inevitable. After all, we are humans. So it's good to
safely get rid of the "stupid" errors, to be able to concentrate on
the "interesting" ones.
> The interesting questions are: how can I deal gracefully
> with the errors that are bound to happen, and how can I get the most
> information out of them so I'll make smarter errors the next time?
You do that in the same way in a statically typed language as you do it
in a dynamically typed one.
- Dirk
Dirk Thierbach <··········@gmx.de> wrote:
> Arthur Lemmens <········@xs4all.nl> wrote:
>
>> The interesting questions are: how can I deal gracefully
>> with the errors that are bound to happen, and how can I get the most
>> information out of them so I'll make smarter errors the next time?
>
> You do that in the same way in a statically typed language as you do it
> in a dynamically typed one.
Not if your statically typed language prevents you from running the
program because it contains a type error. Yes, the type error is an
error, so my program will probably fail in some circumstances.
But what if the program contains other, more important, errors and
the best way to discover those errors is by running the program? A
static compiler won't let me discover those errors until I've done
my bookkeeping. It forces me to follow a design process (first get
your types right, then test) that may not be appropriate to my
circumstances.
It's like a parent who prevents his child from taking his first
steps, because the child is going to fall anyway. How will the
child learn to walk if you don't give it the opportunity to try?
Arthur Lemmens
Arthur Lemmens <········@xs4all.nl>:
> Not if your statically typed language prevents you from running the
> program because it contains a type error. Yes, the type error is an
> error, so my program will probably fail in some circumstances.
> But what if the program contains other, more important, errors and
> the best way to discover those errors is by running the program?
You guys keep saying this, but it's a red herring. If I'm working on
a program in Haskell, and I have a type error somewhere but want to
try out some other code path, it's always a trivial change to get it
to typecheck, knowing that if it actually gets to the place that didn't check,
it will (as it should) fail. I'll give a tiny example. Suppose I have:
f a b = if a then do_this else do_that
But when I try to run it, the compiler complains that do_this and do_that
can't be unified. But I want to try do_this! What should I do? I can
simply insert a marker that tells me that something isn't right yet and
makes the function typecheck:
f a b = if a then do_this else xxx $ do_that
Where I have xxx defined elsewhere:
xxx = error "You need to fix this (xxx)"
xxx has type (forall a. a), which means it can be used at any type,
including (forall a b. a -> b), that is, a function from any type to
any other type. If the first argument to f is True, it will test out
do_this as I intended; if _a_ is False, then it will raise an exception,
as it ought to. I consider inserting "xxx $" a small price to pay for
soundness the rest of the time, but I expect some don't. That's cool
too.
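Spelled out as a complete little program (do_this and do_that are just
stand-ins here, deliberately given different types so the plain `if` would
not unify):
xxx :: a
xxx = error "You need to fix this (xxx)"
do_this :: String
do_this = "took the first branch"
do_that :: Int
do_that = 42
f :: Bool -> String
f a = if a then do_this else xxx do_that
main :: IO ()
main = putStrLn (f True)   -- tries out do_this; f False raises the xxx error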
Jesse
In article <············@grizzly.ps.uni-sb.de>,
"Andreas Rossberg" <········@ps.uni-sb.de> wrote:
> Let me rephrase that with a bit of sarcasm: "In general I don't care if my
> program works as it is supposed to, because that would be more work for me.
> I'm not willing to do that work unless lives depend on it."
I think you miss my point which is not that these errors must be
detected - of course they must. The issue is whether one should do so
statically, at compile time, or dynamically, at run time.
In cases where lives are on the line, we can't allow errors to slip
through until runtime, that might be fatal. In such cases it makes sense
to have a fixed design spec, a rigid methodology, as many resources
devoted to testing as to coding, and probably, very limited interaction
built into the software, if any, as interaction is something that forces
us to do runtime type checking and adds to unpredictability. See
<http://www.fastcompany.com/online/06/writestuff.html> for a description
of such a methodology used by NASA's on-board shuttle group.
Now, let's consider software that is just the opposite of this. What if
your project has an ill-defined, often changing specification
(frequently the case), or you are discovering the specification as
you code and test (exploratory programming)? Further, your software is
highly interactive, meaning that many important type checks have to be
done at runtime anyway. Since we are doing important type checks at
runtime, and we can't know as we are coding, learning, and recoding,
exactly how our software will be structured in its final form, it makes
sense to design our whole methodology around the absolutely vital
runtime checks. This was put well by Arthur Lemmens when he wrote, in
article <················@news.xs4all.nl>:
> When I write a
> program that models some part of the real world, I'll have to make
> a lot of simplifying assumptions to get anything done. Most of those
> assumptions are likely to be wrong, but it's very hard to know in advance
> whether that is critical to the success of my program. The best way to learn
> something is by trying something, making some mistakes and analyzing those
> mistakes. So I'd better get started right now, instead of waiting
> until I understand the real world well enough that I can build a perfect
> model (especially because the real world is changing so fast that my
> model will probably be obsolete by the time it's 'finished')."
Since I can't afford to wait until I have a complete understanding of
the problem before I start coding and testing, trying to satisfy a
static type checker is going to be particularly difficult and time
consuming unless I use some means of evading it (type Dynamic, for
example) which essentially defeats the static type checker's purpose. It
will also be fairly pointless to try to statically verify early code, as
I know this code will change radically before it is done. Thus, the
final program will evolve into a form where these type checks are done
at runtime.
As my design matures to completion, I may think I should re-code my work
so that it can be statically type checked (say, by porting it to Haskell
from lisp). But that would only lay me open to the same set of problems
when I'm called upon to modify the working application in equally
ill-specified, changing ways. Of course one could keep two versions, one
in a statically typed language for absolute type safety of delivery, and
one in a dynamically typed language for exploratory development. But,
apart from the added resources needed to do this, by the time my design
has been nailed down sufficiently to re-code it in a statically typed
language, the continuous testing done throughout the development process
has already found essentially all the type errors that a static type
checker would find. This is especially true of applications with a great
deal of interactivity, where even statically typed languages must
perform runtime type checks to achieve type safety.
Would it be nice if there were a language that gave us both a flexible
default to start our exploratory programming in, and the ability to add
stricter static type checks as we nail down a more complete
understanding of our problem and our solution? Yes. And I agree with
Marco Antoniotti that cmucl/sbcl are the closest thing we have to that
goal today.
Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:
>What if your project has an ill-defined, often changing specification
>(frequently the case), or you are discovering the specification as
>you code and test (exploratory programming)?
In that case, I think static typing would be ideal.
>Further, your software is highly interactive,
I think that is irrelevant to the choice of static or dynamic typing.
>Since we are doing important type checks at runtime,
>and we can't know as we are coding, learning, and recoding,
>exactly how our software will be structured in its final form, it makes
>sense to design our whole methodology around the absolutely vital
>runtime checks.
Complete non-sequitur, IMHO. The fact that some checks need to be done
at runtime is no reason to throw away all static type checking, and the
fact that you don't know exactly how the software will be structured is
all the more reason to use a statically typed language.
>Since I can't afford to wait until I have a complete understanding of
>the problem before I start coding and testing, trying to satisfy a
>static type checker is going to be particularly difficult and time
>consuming unless I use some means of evading it (type Dynamic, for
>example) which essentially defeats the static type checker's purpose.
This opinion just suggests to me that you don't have a good understanding
of how to do program development in a statically typed language.
As people have explained, you start off using abstract types that
may gradually get fleshed out as you gain more understanding of the
problem. The fact that you don't understand the problem doesn't make
it harder to use static typing; instead, the fact that static typing
will quickly pick up inconsistencies in your design makes it easier
to change your designs as you go along.
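A tiny illustration of that style (all names invented): the type can stay
abstract while the code written against it already typechecks, and gets
re-checked automatically as the design changes.
data Design                 -- no constructors yet; the representation is undecided
propose :: String -> Design
propose = undefined         -- stub, to be fleshed out as understanding grows
evaluate :: Design -> Double
evaluate = undefined
-- Code written against the abstract interface already typechecks (the stubs
-- obviously crash if run; the point is only that the design is checked):
best :: [String] -> String
best ideas = snd (maximum [ (evaluate (propose i), i) | i <- ideas ])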
--
Fergus Henderson <···@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Raffael Cavallaro wrote:
> Let me say that I have absolutely no doubt that you and others are quite
> productive using Haskell, or Ocaml. I'll go farther to say that for
> certain tasks, for example, software that puts people's lives in
> jeopardy, such as an x-ray controller for treating tumors, I would
> *prefer* to use a statically typed language for the additional
> protection against type errors that it provides. In such a case, I would
> try to use Haskell, or Ocaml, as they seem to me the statically typed
> languages closest to lisp in spirit.
That would probably be insane. If lives are at risk, you don't
want to use a complex system like Haskell or O'Caml which was
designed to give the most flexibility possible. You want to use a
simple real-time language with a type system designed to give the bare
minimum of functionality necessary. There are systems out there that
give you a good bit of computing power while making guarantees about
space and time that you really want when writing critical apps.
Haskell and O'Caml are designed for software that does a lot with
information between the time it goes in and the time it goes out.
As mentioned before, if the data go in and the data go right back
out, a type system is not so useful. But if a lot of thinking goes
on in between, a type system can help make sure that the in between
part works as it should, and save debugging time.
David
In article <·················@his.com>, Feuer <·····@his.com> wrote:
> That would probably be insane.
Thanks for the compliment. I'd react more strongly, but I now understand
that such statements are considered cordial among static typing
advocates.
In any event, I don't believe that the software I was referring to has
hard real time requirements - just failsafes against exposing the
patient to too much radiation.
> Haskell or O'Caml which was
> designed to give the most flexibility possible.
Both have static typing, so, no, neither was designed for maximum
flexibility. Both were designed to enforce constraints *inflexibly*, but
as painlessly as possible - by doing type inference so the programmer
doesn't have to do a lot of manual type declaration.
> But if a lot of thinking goes
> on in between, a type system can help make sure that the in between
> part works as it should, and save debugging time.
The failsafes need to be fairly intelligent - they have to second guess
the operator, and the programmer to ensure patient safety. One could
characterize that sort of minimal AI as "a lot of thinking," at least
for a computer.
Raffael Cavallaro wrote:
> In any event, I don't believe that the software I was refering to has
> hard real time requirements - just failsafes against exposing the
> patient to too much radiation.
Which, generally speaking, requires hard realtime guarantees. You
need to _know_ that the machine will be turned off when the timer
goes. You need to _know_ that when the operator hits the cancel
button, the machine will go off in time. You need to _know_ that the
machine will report accurate timings to the operator no matter what
weirdness the operator does. The big problem with the Therac _____
a while back was caused by idiots writing multi-threaded programs
in assembly in a critical application. Really, the radiation machine
is a terrible example because it is too simple. Parameters for
treatment are entered, treatment begins, the operator watches
progress of treatment, treatment ends. A correctly written program
would be simple, simple, simple. A realtime language is more useful
in a more dynamic application like a flight control system or
electrical grid control system where things are changing all the
time and there's no guaranteed "safest thing to do at all times".
> > Haskell or O'Caml which was
> > designed to give the most flexibility possible.
>
> Both have static typing, so, no, neither was designed for maximum
> flexibility. Both were designed to enforce constraints *inflexibly*, but
> as painlessly as possible - by doing type inference so the programmer
> doesn't have to do a lot of manual type declaration.
These languages are designed to be as flexible as possible within the
context of static typing while still offering useful type inference.
> > But if a lot of thinking goes
> > on in between, a type system can help make sure that the in between
> > part works as it should, and save debugging time.
>
> The failsafes need to be fairly intelligent - they have to second guess
> the operator, and the programmer to ensure patient safety. One could
> characterize that sort of minimal AI as "a lot of thinking," at least
> for a computer.
Failsafes on a radiation machine need to be simple, not intelligent.
If you are using some kind of AI, however minimal, then you have
designed the system wrong. If a programmer cannot tell by a short
inspection that the failsafe system is right, then it is wrong.
David
In article <·················@his.com>, Feuer <·····@his.com> wrote:
> Really, the radiation machine
> is a terrible example because it is too simple. Parameters for
> treatment are entered, treatment begins, the operator watches
> progress of treatment, treatment ends. A correctly written program
> would be simple, simple, simple.
I'm sorry - I guess I was thinking of the software that computes dosage
based on tumor size, location, etc., before treatment begins, not the
hardware microcontroller for the xray itself, or for a robot arm.
Dosimetry software still needs sanity checks and failsafes, no? The
hardware microcontrollers would, as you say, be implemented with hard
real time guarantees.
In any event, before you go to town about the details of software I gave
as a hypothetical example of the kind of software I *don't* write,
consider this: One time you really *do* want static type checking is in
such a hard real time system with lives at risk. But, in your opinion at
least, it would be "insane" to use Haskell or Ocaml. So in these
situations, statically typed really does mean C-like languages, where
the programmer has to make explicit type declarations. This makes it
seem like the protestations of static typing advocates that statically
typed doesn't mean C and its brethren are a little misleading. In the
situation where you arguably most want static type guarantees, static
typing does mean C-like explicit type declarations, and not the type
inference of Haskell or Ocaml.
Raffael Cavallaro wrote:
>
> I'm sorry - I guess I was thinking of the software that computes dosage
> based on tumor size, location, etc., before treatment begins, not the
> hardware microcontroller for the xray itself, or for a robot arm.
> Dosimetry software still needs sanity checks and failsafes, no? The
> hardware microcontrollers would, as you say, be implemented with hard
> real time guarantees.
I see. That software has a pretty good failsafe: whatever it
spits out is reviewed by a doctor.
> So in these
> situations, statically typed really does mean C-like languages, where
> the programmer has to make explicit type declarations. This makes it
> seem like the protestations of static typing advocates that statically
> typed doesn't mean C and its brethren are a little misleading. In the
> situation where you arguably most want static type guarantees, static
> typing does mean C-like explicit type declarations, and not the type
> inference of Haskell or Ocaml.
The ability to make explicit type declarations does not imply
the necessity to declare all types. Also, when I think of
"C-like languages", I don't even begin to think of languages
with type systems designed for real-time use.
David
In article <·················@his.com>, Feuer <·····@his.com> wrote:
> I see. That software has a pretty good failsafe: whatever it
> spits out is reviewed by a doctor.
Actually, what it can spit out is the intensity and duration of
potentially scores of different radiation sources - some of the devices
commonly in use consist of a couple of hundred xray beams - so it's not
clear that the doctor could make much sense of its output, except in
some synthetic form, such as a 3D model of the patient and an overlay of
the region to be treated. So, yes, the software would need sanity checks
and fail safes because doctors aren't always so good at doing 3D
geometry in their heads just given raw data.
> Also, when I think of
> "C-like languages", I don't even begin to think of languages
> with type systems designed for real-time use.
I meant C-like (or Java-like, or C++ like) in the sense that we have
been discussing in the threads here relating to static typing -
requiring explicit type declarations on the part of the programmer.
So when considering languages with type systems designed for real-time
use, what do you think of Timber? Isn't that like O'Haskell with
real-time extensions?
Raffael Cavallaro wrote:
> So when considering languages with type systems designed for real-time
> use, what do you think of Timber? Isn't that like O'Haskell with
> real-time extensions?
No clue. I don't really know anything about this stuff.
David
In article <······································@netnews.attbi.com>,
Raffael Cavallaro <················@junk.mail.me.not.mac.com> wrote:
...
> I meant C-like (or Java-like, or C++ like) in the sense that we have
> been discussing in the threads here relating to static typing -
> requiring explicit type declarations on the part of the programmer.
You are probably aware that this still draws a somewhat inaccurate
picture. The type system employed by ML et al. is not achieved by
starting with C and relieving us of the need for explicit declarations.
> So when considering languages with type systems designed for real-time
> use, what do you think of Timber? Isn't that like O'Haskell with
> real-time extensions?
I'm no authority on such things, but I have written some small
programs in Timber. I don't think the author is really very
interested in widespread adoption right now, but rumor has it
he's working on a substantial revision that might be a more
interesting candidate. The versions I've seen of O'Haskell and
Timber have been adaptations from an old Hugs interpreter for
the sake of exploring a couple of interesting notions about the
execution model of a declarative functional program, and from
that standpoint I think they're very interesting. That appears
to put me in a very small crowd, though.
The way I see it, O'Haskell is about reactive objects, which
are kind of a procedural elaboration of the Monad that drive
the execution of a program in response to external events and
interactions with each other. Could be seen as "event driven
Haskell" - the OOP aspect is not very interesting on its own.
(Sometimes I wonder if it was a big mistake if he ever wanted
to get people interested, as I think the FP world is not really
dying to see OOP bolted on to their favorite FPL and the OOP
people aren't going to get O'Haskell at all.) Timber extends
the event model to include time - now a non-event is an event too!
Donn Cave, ····@u.washington.edu
Raffael Cavallaro wrote:
>
> In any event, before you go to town about the details of software I gave
> as a hypothetical example of the kind of software I *don't* write,
> consider this: One time you really *do* want static type checking is in
> such a hard real time system with lives at risk. But, in your opinion at
> least, it would be "insane" to use Haskell or Ocaml. So in these
> situations, statically typed really does mean C-like languages, where
> the programmer has to make explicit type declarations. This makes it
> seem like the protestations of static typing advocates that statically
> typed doesn't mean C and its brethren are a little misleading. In the
> situation where you arguably most want static type guarantees, static
> typing does mean C-like explicit type declarations, and not the type
> inference of Haskell or Ocaml.
Static typing would actually speak *for* Haskell or OCaml when critical
data is processed.
However, there are factors that speak *against* it. First of all, the
compilers aren't verified (at least not in the way that an Ada compiler
is). Second, if hard real time were involved, high-level languages are
generally at a disadvantage since it's more difficult to predict
run-time efficiency; even if amortized efficiency is good, that's not of
much help if your HRT requirements say that it must not only be good but
also predictable.
Note that at this level of required reliability, C would be far more
insane than Haskell or OCaml. I'd use a validated Ada compiler. And I
wouldn't moan too much about the language because I won't write more
than an amortized three lines of code per day - the rest is testing and
documentation. At that rate, programmer productivity and code quality
are determined more by factors like compiler validity and the
availability of validated (!) libraries than by language quality.
Regards,
Jo
Joachim Durchholz wrote:
> Static typing would actually speak *for* Haskell or OCaml when critical
> data is processed.
I disagree. The Haskell type system is far too complex for me to
trust implementations, and I suspect the same is true for O'Caml.
David
> I disagree. The Haskell type system is far too complex for me to
> trust implementations, and I suspect the same is true for O'Caml.
Huh? The Haskell type system isn't all that complicated. More
importantly, it's theoretically sound.
·······@mindspring.com (Rayiner Hashem) writes:
>[someone wrote:]
>> I disagree. The Haskell type system is far too complex for me to
>> trust implementations, and I suspect the same is true for O'Caml.
>
>Huh? The Haskell type system isn't all that complicated. More
>importantly, it's theoretically sound.
The Glasgow Haskell type system is certainly quite complicated.
Even if the Haskell type system is in theory sound, there is the question
of whether the implementation actually implements it correctly.
ghc is a quite complicated program, and I'm sure it has some bugs.
Even Hugs is by no means trivial, and I'm sure it too has some bugs.
However, that said, GCC has some bugs too.
--
Fergus Henderson <···@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Feuer wrote:
> I disagree. The Haskell type system is far too complex for me to
> trust implementations, and I suspect the same is true for O'Caml.
This makes absolutely no sense at all. What precisely would you not
trust? That the program is well typed? I mean: do you think you can
write an ill-typed program and let ocamlc compile it? Without resorting
to explicitly dangerous constructs like the ffi?
V.
Feuer wrote:
> Joachim Durchholz wrote:
>
>>Static typing would actually speak *for* Haskell or OCaml when critical
>>data is processed.
>
> I disagree. The Haskell type system is far too complex for me to
> trust implementations, and I suspect the same is true for O'Caml.
I don't know the OCaml type system well enough to comment, but AFAIK
it's comparable to Haskell's.
The Haskell type system is roughly at the same complexity as that of
Ada, which is routinely used in verified compilers.
Actually, turning Haskell into a verified language that could be trusted
is easier than for most other languages, since a complete formal
semantics (including type systems) exists. I don't have any concrete
numbers, but I know the formal semantics of Ada is a fat book; AFAIK the
Haskell semantics doesn't even come near to that volume. Speaking of
language complexity, this seems to be a clear advantage for Haskell.
I wouldn't like Haskell under real-time conditions, since its laziness
makes it difficult to predict whether some code will meet a deadline.
Also, the code generation backends of Haskell are unverified AFAIK,
which means it cannot be used for critical systems.
However, turning Haskell into a language with verified implementations
would be easier than for most other languages. Plug a verified Ada code
generation backend into the compiler, explicitly declare the language as
"not for real-time use", and get those tags that make the language
suitable for, say, constructing critical technical systems (nuclear
power plants, airplanes, or whatnot).
In fact, I'd trust software created using such a compiler more than
software created using a verified Ada compiler - not because I distrust
Ada compilers, but because Haskell is the more expressive language with
less of a gap between concept and implementation.
Regards,
Jo
In comp.lang.functional Joachim Durchholz <·················@web.de> wrote:
> I don't know the OCaml type system well enough to comment, but AFAIK
> it's comparable to Haskell's.
There are big differences, e.g. Haskell has overloading with classes,
which _significantly_ adds to the complexity of the type system.
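To make that concrete, a small made-up example of my own (plain Haskell 98,
no particular library assumed) of what class-based overloading brings in:

class Pretty a where
  pretty :: a -> String

instance Pretty Int where
  pretty = show

instance Pretty Bool where
  pretty b = if b then "yes" else "no"

instance Pretty a => Pretty [a] where
  pretty = unwords . map pretty

-- The inferred type of the function below is "Pretty a => [a] -> String".
-- The checker now has to carry constraint sets around, resolve instances,
-- and reject ambiguous uses - machinery that plain Hindley-Milner
-- inference, as in core ML, simply doesn't need.
prettyAll :: Pretty a => [a] -> String
prettyAll = pretty
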
> The Haskell type system is roughly at the same complexity as that of
> Ada, which is routinely used in verified compilers.
It's much cleaner and more elegant since it builds on strong theoretic
foundations.
> Actually, turning Haskell into a verified language that could be trusted
> is easier than for most other languages, since a complete formal
> semantics (including type systems) exists.
I knew that SML has a completely specified formal semantics, but didn't
know that Haskell has one. Any links?
> I don't have any concrete numbers, but I know the formal semantics of
> Ada is a fat book;
I can't imagine that anybody has ever written a fully formal semantics
for the whole of Ada.
> I wouldn't like Haskell under real-time conditions, since its laziness
> makes it difficult to predict whether some code will meet a deadline.
In fact, it's laziness that in some cases makes real-time
operations possible for certain persistent (i.e. purely functional)
datastructures! See Okasaki's book on "Purely Functional Datastructures"
for details.
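As a rough sketch (my own toy version, not code from the book), here is
the core of Okasaki's banker's queue. The suspended reverse is what lets
the amortized bounds survive persistent use; his real-time queue then
adds an explicit schedule to turn them into worst-case bounds:

data Queue a = Queue Int [a] Int [a]   -- front length, front, rear length, rear

empty :: Queue a
empty = Queue 0 [] 0 []

-- Restore the invariant |rear| <= |front|. The "f ++ reverse r" is built
-- lazily; its cost is paid off incrementally as the front is consumed,
-- which is exactly where laziness does the real work.
check :: Int -> [a] -> Int -> [a] -> Queue a
check lenf f lenr r
  | lenr <= lenf = Queue lenf f lenr r
  | otherwise    = Queue (lenf + lenr) (f ++ reverse r) 0 []

snoc :: Queue a -> a -> Queue a
snoc (Queue lenf f lenr r) x = check lenf f (lenr + 1) (x : r)

uncons :: Queue a -> Maybe (a, Queue a)
uncons (Queue _    []     _    _) = Nothing
uncons (Queue lenf (x:f') lenr r) = Just (x, check (lenf - 1) f' lenr r)
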
> Also, the code generation backends of Haskell are unverified AFAIK,
> which means it cannot be used for critical systems.
What do you mean by "verified"?
Regards,
Markus Mottl
--
Markus Mottl http://www.oefai.at/~markus ······@oefai.at
Markus Mottl wrote:
> Joachim Durchholz <·················@web.de> wrote:
>
>>Actually, turning Haskell into a verified language that could be trusted
>>is easier than for most other languages, since a complete formal
>>semantics (including type systems) exists.
>
> I knew that SML has a completely specified formal semantics, but didn't
> know that Haskell has one. Any links?
Got me - should have checked my facts before posting.
http://www.haskell.org/definition/ has links to "old definitions of the
semantics of Haskell" (of 1991/1992, so they are probably quite outdated).
>>I don't have any concrete numbers, but I know the formal semantics of
>>Ada is a fat book;
>
> I can't imagine that anybody has ever written a fully formal semantics
> for the whole of Ada.
Not sure about completeness (it seems that complete formal semantics are
indeed rare). However, when Ada was young, my professor once showed me
the book, so I /know/ it exists :-)
>>I wouldn't like Haskell under real-time conditions, since its laziness
>>makes it difficult to predict whether some code will meet a deadline.
>
> In fact, it's laziness that in some cases makes real-time
> operations possible for certain persistent (i.e. purely functional)
> datastructures! See Okasaki's book on "Purely Functional Datastructures"
> for details.
Been there, read the book, got the t-shirt :-)
However, I don't think that Okasaki's book is relevant for the
discussion. Laziness analysis is whole-system analysis, so even the best
library cannot make many guarantees (and in fact there aren't many in
Okasaki's book).
Strictness analysis (which is laziness analysis by another name) would
be a matter-of-fact part of lazy language implementations if it were
easy, or modular. AFAIK it's one of the things that a good compiler
spends a noticeable fraction of its time on, so I assume the opposite is
the case.
And if compilers spend a good deal of time on it, this implies that
humans will have a hard time verifying it, too.
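A trivial illustration of my own (and a good compiler may well optimise
this particular case away) of why the timing is hard to see locally:

import Data.List (foldl')

-- Both compute the same sum, but the lazy foldl defers every addition
-- into a chain of thunks and pays the whole cost only when the result
-- is finally demanded - the kind of behaviour that makes worst-case
-- timing hard to predict without whole-program strictness information.
lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0   -- builds ~n thunks, forced at the end
strictSum = foldl' (+) 0   -- evaluates as it goes

main :: IO ()
main = print (lazySum [1 .. 1000000], strictSum [1 .. 1000000])
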
>>Also, the code generation backends of Haskell are unverified AFAIK,
>>which means it cannot be used for critical systems.
>
> What do you mean by "verified"?
Roughly speaking, "verified in the way that Ada compilers are verified".
Which includes verification along the lines of a formal semantics, with
massive compiler test suites, and whatever else you might wish to have.
(The DoD did a massive campaign to get the Ada compilers verified.
Consequently, early compilers were slow, bulky, /very/ expensive - and,
I assume, reliable. I didn't and don't like Ada very much, but I was and
am impressed by the effort invested into getting the compilers right.)
Regards,
Jo
In article <············@news.oberberg.net>, Joachim Durchholz wrote:
>
> Not sure about completeness (it seems that complete formal semantics are
> indeed rare). However, when Ada was young, my professor once showed me
> the book, so I /know/ it exists :-)
As far as I know, there are three production languages with full
formal semantics. It's a short list, but at least one of the members
is surprising: SML, Scheme, and Javascript 2.0!
--
Neel Krishnaswami
·····@cs.cmu.edu
Neelakantan Krishnaswami <·····@cs.cmu.edu> writes:
> In article <············@news.oberberg.net>, Joachim Durchholz wrote:
>>
>> Not sure about completeness (it seems that complete formal semantics are
>> indeed rare). However, when Ada was young, my professor once showed me
>> the book, so I /know/ it exists :-)
>
> As far as I know, there are three production languages with full
> formal semantics. It's a short list, but at least one of the members
> is surprising: SML, Scheme, and Javascript 2.0!
Doesn't Occam belong in this group, too?
--
Raymond Wiker Mail: ·············@fast.no
Senior Software Engineer Web: http://www.fast.no/
Fast Search & Transfer ASA Phone: +47 23 01 11 60
P.O. Box 1677 Vika Fax: +47 35 54 87 99
NO-0120 Oslo, NORWAY Mob: +47 48 01 11 60
Try FAST Search: http://alltheweb.com/
Raymond Wiker <·············@fast.no> writes:
>Neelakantan Krishnaswami <·····@cs.cmu.edu> writes:
>
>> In article <············@news.oberberg.net>, Joachim Durchholz wrote:
>>>
>>> Not sure about completeness (it seems that complete formal semantics are
>>> indeed rare). However, when Ada was young, my professor once showed me
>>> the book, so I /know/ it exists :-)
>>
>> As far as I know, there are three production languages with full
>> formal semantics. It's a short list, but at least one of the members
>> is surprising: SML, Scheme, and Javascript 2.0!
>
> Doesn't Occam belong in this group, too?
Also Prolog... and perhaps also Oberon?
For Prolog, there is a formal semantics included in the ISO Prolog standard.
For Oberon,
see <http://www.uni-paderborn.de/cs/asm/Available_Materials/proglang.html>.
--
Fergus Henderson <···@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
On Wed, 26 Nov 2003, Fergus Henderson wrote:
> Raymond Wiker <·············@fast.no> writes:
>
> >Neelakantan Krishnaswami <·····@cs.cmu.edu> writes:
> >
> >> In article <············@news.oberberg.net>, Joachim Durchholz wrote:
> >>>
> >>> Not sure about completeness (it seems that complete formal semantics are
> >>> indeed rare). However, when Ada was young, my professor once showed me
> >>> the book, so I /know/ it exists :-)
> >>
> >> As far as I know, there are three production languages with full
> >> formal semantics. It's a short list, but at least one of the members
> >> is surprising: SML, Scheme, and Javascript 2.0!
> >
> > Doesn't Occam belong in this group, too?
>
> Also Prolog... and perhaps also Oberon?
>
> For Prolog, there is a formal semantics included in the ISO Prolog standard.
> For Oberon,
> see <http://www.uni-paderborn.de/cs/asm/Available_Materials/proglang.html>.
And XQuery?
http://www.w3.org/TR/xquery-semantics/
Cheers,
Bijan Parsia.
In article <············@news.oberberg.net>,
Joachim Durchholz <·················@web.de> wrote:
> I wouldn't like Haskell under real-time conditions, since its laziness
> makes it difficult to predict whether some code will meet a deadline.
Just so everyone here is on the same page, and I know many of you will
know much of this already, but there is a language,
O'Haskell: <http://www.math.chalmers.se/~nordland/ohaskell/>
which is Haskell with OO extensions, and there is a language,
Timber: <http://citeseer.nj.nec.com/black02timber.html>
which is O'Haskell with real-time extensions. I don't know how mature
Timber is yet, but it is intended to be an OO, real-time, Haskell.
On Sun, 23 Nov 2003 19:18:21 GMT
Raffael Cavallaro <················@junk.mail.me.not.mac.com> wrote:
> In article <············@news.oberberg.net>,
> Joachim Durchholz <·················@web.de> wrote:
>
> > I wouldn't like Haskell under real-time conditions, since its
> > laziness makes it difficult to predict whether some code will meet a
> > deadline.
>
> Just so everyone here is on the same page, and I know many of you will
> know much of this already, but there is a language,
> O'Haskell: <http://www.math.chalmers.se/~nordland/ohaskell/>
> which is Haskell with OO extensions, and there is a language,
> Timber: <http://citeseer.nj.nec.com/black02timber.html>
> which is O'Haskell with real-time extensions. I don't know how mature
> Timber is yet, but it is intended to be an OO, real-time, Haskell.
There's also RT-FRP.
Raffael Cavallaro wrote:
> Joachim Durchholz <·················@web.de> wrote:
>> I wouldn't like Haskell under real-time conditions, since its
>> laziness makes it difficult to predict whether some code will meet
>> a deadline.
>
> Timber: <http://citeseer.nj.nec.com/black02timber.html> which is
> O'Haskell with real-time extensions. I don't know how mature Timber
> is yet, but it is intended to be an OO, real-time, Haskell.
The Timber home page is at
http://www.cse.ogi.edu/PacSoft/projects/Timber/Default.htm .
It seems to be at an experimental stage. At least that's how I read the
sentence "Ongoing work on the language report and the Timper compiler"
under "Accomplishments".
Regarding laziness in a real-time environment,
http://www.cse.ogi.edu/PacSoft/projects/Timber/Timber-2002-04.pdf has to
say the following:
"One of the consequences of laziness is that it can sometimes become
quite hard to predict when computation will actually take place, and
calculating worst case execution times is correspondingly difficult.
Whether the costs of laziness outweigh the benefits in a language
intended for real-time programming is an open question, and one that we
will continue to examine experimentally. It is important to note that
none of the extensions to Haskell that we put forward in Timber relies
on laziness. Thus it is perfectly reasonable to judge the merits of our
extensions as if they were intended for an eager programming language,
and it would be perfectly possible to give Timber an eager semantics
without major surgery."
In other words, the Timber team reserve their judgement, just as I did
above :-)
Regards,
Jo
Joachim Durchholz wrote:
>
> Feuer wrote:
> > Joachim Durchholz wrote:
> >
> >>Static typing would actually speak *for* Haskell or OCaml when critical
> >>data is processed.
> >
> > I disagree. The Haskell type system is far too complex for me to
> > trust implementations, and I suspect the same is true for O'Caml.
>
> I don't know the OCaml type system well enough to comment, but AFAIK
> it's comparable to Haskell's.
>
> The Haskell type system is roughly at the same complexity as that of
> Ada, which is routinely used in verified compilers.
So I guess I'm wrong.
David
David Golden wrote:
> The best lisp approach to haskell might be to make a haskell-like
> sublanguage IN lisp, similarly to the lisp approach to prolog, or the lisp
> approach to SQL, or the lisp approach to OO.
Methinks that would be disgustingly inefficient. (just see what
kinds of nastiness GHC has to go through to get good performance
semi-semi-portably).
David
> Why? Types encode invariants. They form a contract about the behaviour
> of some part of my code.
In C++, the language Erann Gatt used in his example, types form a very
weak contract. As such, it's not a good idea to depend on the compiler
to flag all the places where a type change needs to be propagated,
because the compiler will *not* catch a large percentage of those
cases. The same is true for other supposedly statically type-safe
languages like Java. For example, consider the case of component A
transferring some objects of a certain type to component B via a
standard container. If the type of the objects changes, the compiler
will *not* flag B as needing updating. The compiler will happily
accept the code and throw a runtime exception when B tries to use
objects of an unexpected type. Also, a change in type is usually
associated with a non-type-related change in behavior, and the only
way to catch those is to start from the top-level design and see what
needs updating.
> But static typing *does* testing by abstract interpretation. And it
> does this more thoroughly than any test suite or code review ever
> can, because it is automated.
It tests for type errors. However, the error Erann Gatt used in his
example was not a type error, but rather a race condition. A type
system could not have prevented his error.
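To illustrate with a contrived example of my own (in Haskell, since that
is the statically typed language under discussion here): the program
below is perfectly well typed, yet the two threads race on the shared
counter, so increments can be lost and the final value is
nondeterministic. No type checker will flag it.

import Control.Concurrent
import Control.Concurrent.MVar

main :: IO ()
main = do
  counter <- newMVar (0 :: Int)
  done1   <- newEmptyMVar
  done2   <- newEmptyMVar
  -- Read, then write, without holding the MVar across both steps:
  -- two threads can read the same value, and one increment gets lost.
  let bump = do n <- readMVar counter
                modifyMVar_ counter (\_ -> return (n + 1))
  _ <- forkIO (mapM_ (const bump) [1 .. 1000 :: Int] >> putMVar done1 ())
  _ <- forkIO (mapM_ (const bump) [1 .. 1000 :: Int] >> putMVar done2 ())
  takeMVar done1
  takeMVar done2
  readMVar counter >>= print   -- not guaranteed to be 2000
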
> It has its limits, but within those limits, it is really helpful.
I said as much in my post. Static type systems are good at preventing
type errors. However, they aren't a catch-all. The fact that static
typing did not prevent the problem in Mr. Gatt's example doesn't
demonstrate a weakness of static typing, but merely a problem outside
of its scope.
Rayiner Hashem wrote:
> Hmm, that's not so much a weakness in static typing, but rather of
> those who use the type system as a mental crutch to avoid reasoning
> about the code. The comment about changing a type and then having the
> compiler tell you where you need to make modifications made me cringe.
> That's an abuse of the type system, not a good way to take advantage
> of it! Of course, dynamic typing would not have helped in your case.
> Only proper testing and code review would have.
>
> Static typing will tell you exactly one thing --- that your code is
> free of type errors. It has its utility --- it means that, if your
> code typechecks correctly, you don't have to pay attention to looking
> for type errors, rather, you can spend your efforts (testing, code
> review time) looking for other sorts of errors. People who sell static
> typing as a panacea (like anybody who sells anything as a panacea) are
> selling you snake oil.
Exactly. I agree with this. This is why CMUCL/SBCL are the closest
thing to THE RIGHT THING that exists on the planet. OTOH, spending ages
to satisfy the Type Checker before you can run your program (in a
*ML-like language without S-exprs and macros - sorry the Ocaml
preprocessor and Haskell macros do not even come close) is not my idea
of fun or productivity :)
Cheers
--
Marco
Marco Antoniotti <·······@cs.nyu.edu> wrote:
> Exactly. I agree with this. This is why CMUCL/SBCL are the closest
> thing to THE RIGHT THING that exists on the planet.
There we go again with claims that there is ONLY ONE RIGHT THING.
From the "static camp" point of view, it is the WRONG THING, because
it's much too weak. It combines some of the disadvantages of both
worlds: You are forced to use type annotations, *and* you don't
get complete code coverage.
> OTOH, spending ages to satisfy the Type Checker before you can run
> your program [...] is not my idea of fun or productivity :)
And if you don't want to do that, then don't do it. But from the
"static camp" point of view, you don't have to "satisfy" the
type checker or invest "extra work". It's part of the normal
development process, in the same way you write your tests, let
your code fail, and then change your code until it works. The only
difference is that the tests are not hand written.
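A throwaway example of what I mean, in Haskell (any language of the ML
family would do):

-- The signature is the "test": every call site that passes anything
-- other than a list of Doubles is rejected at compile time, with no
-- test case written by hand.
mean :: [Double] -> Maybe Double
mean [] = Nothing
mean xs = Just (sum xs / fromIntegral (length xs))

main :: IO ()
main = print (mean [1, 2, 3])
-- main = print (mean "not numbers")   -- rejected by the compiler
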
Is it so hard to accept that there might be actually different ways
of doing things, and your way, however you personally like it, is
not the only one, or the "best" one?
- Dirk
Dirk Thierbach wrote:
> Marco Antoniotti <·······@cs.nyu.edu> wrote:
>
>
>>Exactly. I agree with this. This is why CMUCL/SBCL are the closest
>>thing to THE RIGHT THING that exists on the planet.
>
>
> There we go again with claims that there is ONLY ONE RIGHT THING.
Yes. The RIGHT THING is a Common Lisp plus as much Type Inference as
you can get.
>
> From the "static camp" point of view, it is the WRONG THING, because
> it's much too weak. It combines some of the disadvantages of both
> worlds: You are forced to use type annotations, *and* you don't
> get complete code coverage.
I never said that what we have is the RIGHT THING. I said that
CMUCL/SBCL (which *DOES* quite a bit of type inference) gets "close" to it.
>
>
>>OTOH, spending ages to satisfy the Type Checker before you can run
>>your program [...] is not my idea of fun or productivity :)
>
>
> And if you don't want to do that, then don't do it. But from the
> "static camp" point of view, you don't have to "satisfy" the
> type checker or invest "extra work". It's part of the normal
> development process, in the same way you write your tests, let
> your code fail, and then change your code until it works. The only
> difference is that the tests are not hand written.
>
> Is it so hard to accept that there might be actually different ways
> of doing things, and your way, however you personally like it, is
> not the only one, or the "best" one?
I am accepting that. I have never stopped anybody from using O'Caml or
Haskell or whatever. I am just stating my preference: Common Lisp +
Type Inference. Now it is up to you to convince me and others that
dropping the Common Lisp part is worth while. It is up to me to convice
you that picking it up is worth while.
So, when is the last time you did some serious Common Lisp programming?
(Remember. I asked first. You cannot retort with the same question :))
Cheers
--
Marco
Dirk Thierbach wrote:
> Is it so hard to accept that there might be actually different ways
> of doing things, and your way, however you personally like it, is
> not the only one, or the "best" one?
Maybe the discussion is indeed conducted along the wrong axis.
Perhaps this should be along the axis of "there is one way to do it" vs.
"there are many ways to do it".
There are proponents of "the right thing" vs. "many right things" in all
camps.
(This is also a typical misconception of Common Lisp - it is essentially
a multi-paradigm language, and I prefer it over other languages because
it doesn't tell me what "the right thing" is. Of course, there are also
"right-thing" thinkers in the Common Lisp community.)
Pascal
--
Pascal Costanza University of Bonn
···············@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)
On Tue, 18 Nov 2003 12:08:10 -0500, Marco Antoniotti posted:
>
> Exactly. I agree with this. This is why CMUCL/SBCL are the closest
> thing to THE RIGHT THING that exists on the planet. OTOH, spending ages
> to satisfy the Type Checker before you can run your program (in a
> *ML-like language without S-exprs and macros - sorry the Ocaml
> preprocessor and Haskell macros do not even come close) is not my idea
> of fun or productivity :)
What's wrong with Haskell macros from a Lisp programmer's perspective?
I know they're not quite as regular: is that all?
(I've talked with a few people about the possibility of running macros over
Core Haskell: without that, code-manipulating macros become a bit hairy
because you have to cope with different but equivalent forms (eg, using
pattern matching vs case statements))
apologies for the changed f-up, but I don't read cll.
mrak
--
realise your life was only bait for a bigger fish
-- aesop rock