From: ·······@ziplip.com
Subject: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <LVOAILABAJAFKMCPJ0F1IFP5N3JTNUL0EPMGKMDS@ziplip.com>
THE GOOD:

1. pickle

2. simplicity and uniformity

3. big library (bigger would be even better)

THE BAD:

1. f(x,y,z) sucks. f x y z  would be much easier to type (see Haskell)
   90% of the code is function applications. Why not make it convenient?

2. Statements vs Expressions business is very dumb. Try writing 
   a = if x : 
           y
       else: z

3. no multimethods (why? Guido did not know Lisp, so he did not know 
   about them). You now have to suffer from visitor patterns, etc., like
   lowly Java monkeys.

4. splintering of the language: you have the inefficient main language,
   and you have a different dialect being developed that needs type    
   declarations. Why not allow type declarations in the main language
   instead as an option (Lisp does it)

5. Why do you need "def" ? In Haskell, you'd write
   square x = x * x

6. Requiring "return" is also dumb (see #5)

7. Syntax and semantics of "lambda" should be identical to 
   function definitions (for simplicity and uniformity)

8. Can you undefine a function, value, class or unimport a module?
   (If the answer is no to any of these questions, Python is simply
    not interactive enough)

9. Syntax for arrays is also bad: [a (b c d) e f] would be better
   than [a, b(c,d), e, f]

P.S. If someone can forward this to python-dev, you can probably save some
people a lot of soul-searching

From: Jarek Zgoda
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bmu1bj$l82$1@nemesis.news.tpi.pl>
·······@ziplip.com <·······@ziplip.com> writes:

> 8. Can you undefine a function, value, class or unimport a module?
>    (If the answer is no to any of these questions, Python is simply
>     not interactive enough)

Yes. By deleting the name from the namespace. You'd better read a
tutorial, this will save you some time.
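For instance, here is a minimal sketch of "deleting a name from the
namespace" using an explicit namespace dict (all names made up):

```python
# A Python namespace is just a dictionary: defining a function binds a
# name in it, and 'del' removes that binding again.
namespace = {}
exec("def foo():\n    return 42", namespace)

existed = 'foo' in namespace
result = namespace['foo']()        # callable while the binding exists
del namespace['foo']               # what "del foo" does in that namespace
removed = 'foo' not in namespace
```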

-- 
Jarek Zgoda
Registered Linux User #-1
http://www.zgoda.biz/ ·········@jabberpl.org http://zgoda.jogger.pl/
From: Frode Vatvedt Fjeld
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <2hvfqlmkcd.fsf@vserver.cs.uit.no>
> ·······@ziplip.com <·······@ziplip.com> writes:
>
>> 8. Can you undefine a function, value, class or unimport a module?
>>    (If the answer is no to any of these questions, Python is simply
>>     not interactive enough)

Jarek Zgoda <······@gazeta.usun.pl> writes:

> Yes. By deleting a name from namespace. You better read some
> tutorial, this will save you some time.

Excuse my ignorance with respect to Python, but to me this seems to
imply that one of these statements about functions in Python is true:

  1. Function names (strings) are resolved (looked up in the
     namespace) each time a function is called.

  2. You can't really undefine a function such that existing calls to
     the function will be affected.

Is this (i.e. one of these) correct?

-- 
Frode Vatvedt Fjeld
From: Peter Hansen
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3F928F52.A422AB0B@engcorp.com>
Frode Vatvedt Fjeld wrote:
> 
> > ·······@ziplip.com <·······@ziplip.com> writes:
> >
> >> 8. Can you undefine a function, value, class or unimport a module?
> >>    (If the answer is no to any of these questions, Python is simply
> >>     not interactive enough)
> 
> Jarek Zgoda <······@gazeta.usun.pl> writes:
> 
> > Yes. By deleting a name from namespace. You better read some
> > tutorial, this will save you some time.
> 
> Excuse my ignorance wrt. to Python, but to me this seems to imply that
> one of these statements about functions in Python are true:
> 
>   1. Function names (strings) are resolved (looked up in the
>      namespace) each time a function is called.
> 
>   2. You can't really undefine a function such that existing calls to
>      the function will be affected.
> 
> Is this (i.e. one of these) correct?

Both are correct, in essence.  (And depending on how one interprets
your second point, which is quite ambiguous.)

-Peter
From: Frode Vatvedt Fjeld
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <2hr819mj9t.fsf@vserver.cs.uit.no>
Peter Hansen <·····@engcorp.com> writes:

> Both are correct, in essence.  (And depending on how one interprets
> your second point, which is quite ambiguous.)

Frode Vatvedt Fjeld wrote:

>>   1. Function names (strings) are resolved (looked up in the
>>      namespace) each time a function is called.

But this implies a rather enormous overhead in calling a function,
doesn't it?

>>   2. You can't really undefine a function such that existing calls to
>>      the function will be affected.

What I meant was that if you do the following, in sequence:

  a. Define function foo.
  b. Define function bar, that calls function foo.
  c. Undefine function foo

Now, if you call function bar, will you get an "undefined function"
exception? But if point 1 really is true, I'd expect to get an
"undefined name" exception or somesuch.

-- 
Frode Vatvedt Fjeld
From: Peter Hansen
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3F929D9B.5FCDFE57@engcorp.com>
(I'm replying only because I made the mistake of replying to a
triply-crossposted thread which was, in light of that, obviously
troll-bait.  I don't plan to continue the thread except to respond
to Frode's questions.  Apologies for c.l.p readers.)

Frode Vatvedt Fjeld wrote:
> 
> Peter Hansen <·····@engcorp.com> writes:
> 
> > Both are correct, in essence.  (And depending on how one interprets
> > your second point, which is quite ambiguous.)
> 
> Frode Vatvedt Fjeld wrote:
> 
> >>   1. Function names (strings) are resolved (looked up in the
> >>      namespace) each time a function is called.
> 
> But this implies a rather enormous overhead in calling a function,
> doesn't it?

"Enormous" is of course relative.  Yes, the overhead is more than in,
say C, but I think it's obvious (since people program useful software
using Python) that the overhead is not unacceptably high? 

As John Thingstad wrote in his reply, there is a dictionary lookup
involved and dictionaries are extremely fast (yes, yet another relative
term... imagine that!) in Python so that part of the overhead is
relatively unimportant.  There is actually other overhead which is
involved (e.g. setting up the stack frame which is, I believe, much larger
than the trivial dictionary lookup).

Note also that if you have a reference to the original function in,
say, a local variable, removing the original doesn't really remove it,
but merely makes it unavailable under the original name.  The local
variable can still be used to call it.
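A quick sketch of that (names are made up):

```python
def original():
    return 'still here'

alias = original    # e.g. a local variable holding a second reference
del original        # the original name is now unavailable...
survived = alias()  # ...but the object is still reachable and callable
```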

> >>   2. You can't really undefine a function such that existing calls to
> >>      the function will be affected.
> 
> What I meant was that if you do the following, in sequence:
> 
>   a. Define function foo.
>   b. Define function bar, that calls function foo.
>   c. Undefine function foo
> 
> Now, if you call function bar, will you get a "undefined function"
> exception? But if point 1. really is true, I'd expect you get a
> "undefined name" execption or somesuch.

See below.

Python 2.3.1 (#47, Sep 23 2003, 23:47:32) [MSC v.1200 32 bit (Intel)] on win32
>>> def foo():
...   print 'in foo'
...
>>> def bar():
...   foo()
...
>>> bar()
in foo
>>> del foo
>>> bar()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 2, in bar
NameError: global name 'foo' is not defined

On the other hand, as I said above, one can keep a reference to the original.
If I'd done "baz = foo" just before the "del foo", then I could easily have
done "baz()" and the original method would still have been called.

Python is dynamic.  Almost everything is looked up in dictionaries at 
runtime like this.  That's its nature, and much of its power (as with
the many other such languages).
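For instance, because the lookup happens at call time, rebinding a
global name changes what existing callers see on their next call (a
small sketch; the function names are hypothetical):

```python
def foo():
    return 'first'

def bar():
    return foo()    # 'foo' is looked up afresh on every call

before = bar()

def foo():          # rebind the name to a new function object
    return 'second'

after = bar()       # bar picks up the new binding automatically
```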

-Peter
From: John Thingstad
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <oprxalmqlzxfnb1n@news.chello.no>
On Sun, 19 Oct 2003 15:24:18 +0200, Frode Vatvedt Fjeld <······@cs.uit.no> 
wrote:

>> ·······@ziplip.com <·······@ziplip.com> writes:
>>
>>> 8. Can you undefine a function, value, class or unimport a module?
>>>    (If the answer is no to any of these questions, Python is simply
>>>     not interactive enough)
>
> Jarek Zgoda <······@gazeta.usun.pl> writes:
>
>> Yes. By deleting a name from namespace. You better read some
>> tutorial, this will save you some time.
>
> Excuse my ignorance wrt. to Python, but to me this seems to imply that
> one of these statements about functions in Python are true:
>
>   1. Function names (strings) are resolved (looked up in the
>      namespace) each time a function is called.
>
>   2. You can't really undefine a function such that existing calls to
>      the function will be affected.
>
> Is this (i.e. one of these) correct?
>
Neither is completely correct. Functions are internally dealt with
using dictionaries (pythonese for hash-tables). The bytecode compiler
gives it an ID and the lookup is done using a dictionary. Removing the
function from the dictionary removes the function.


-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
From: Frode Vatvedt Fjeld
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <2hn0bxm8kf.fsf@vserver.cs.uit.no>
John Thingstad <··············@chello.no> writes:

> [..] Functions are internally delt with using dictionaies.  The
> bytecode compiler gives it a ID and the look up is done using a
> dictionary.  Removing the function from the dictionary removes the
> function.  (pythonese for hash-table)

So to get from the ID to the bytecode, you go through a dictionary?
And the mapping from name to ID happens perhaps when the caller is
bytecode-compiled?

-- 
Frode Vatvedt Fjeld
From: Paul Rubin
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <7xismlw1cr.fsf@ruckus.brouhaha.com>
Frode Vatvedt Fjeld <······@cs.uit.no> writes:
> > [..] Functions are internally delt with using dictionaies.  The
> > bytecode compiler gives it a ID and the look up is done using a
> > dictionary.  Removing the function from the dictionary removes the
> > function.  (pythonese for hash-table)
> 
> So to get from the ID to the bytecode, you go through a dictionary?
> And the mapping from name to ID happens perhaps when the caller is
> bytecode-compiled?

Hah, you wish.  If the function name is global, there is a dictionary
lookup, at runtime, on every call.

   def square(x):
      return x*x

   def sum_of_squares(n):
      sum = 0
      for i in range(n):
         sum += square(i)
      return sum

   print sum_of_squares(100)

looks up "square" in the dictionary 100 times.  An optimization:

   def sum_of_squares(n):
      sum = 0
      sq = square
      for i in range(n):
        sum += sq(i)
      return sum

Here, "sq" is a local copy of "square".  It lives in a stack slot in
the function frame, so the dictionary lookup is avoided.
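A runnable variant of the two versions (hypothetical names; the
equality check only confirms both compute the same sum, not their
relative speed):

```python
def square(n):
    return n * n

def via_global(reps):
    total = 0
    for i in range(reps):
        total += square(i)   # global name: one dict lookup per call
    return total

def via_local(reps):
    sq = square              # looked up once, then kept in a frame slot
    total = 0
    for i in range(reps):
        total += sq(i)       # local name: no dict lookup
    return total
```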
From: Alex Martelli
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <NoEkb.314461$R32.10385761@news2.tin.it>
Frode Vatvedt Fjeld wrote:

> John Thingstad <··············@chello.no> writes:
> 
>> [..] Functions are internally delt with using dictionaies.  The

Rather, _names_ are dealt that way (for globals; it's faster for
locals -- then, the compiler can turn the name into an index
into the table of locals' values), whether they're names of functions
or names of other values (Python doesn't separate those namespaces).

>> bytecode compiler gives it a ID and the look up is done using a
>> dictionary.  Removing the function from the dictionary removes the
>> function.  (pythonese for hash-table)
> 
> So to get from the ID to the bytecode, you go through a dictionary?

No; it's up to the implementation, but in CPython the id is the
memory address of the function object, so the bytecode's directly
accessed from there (well, there's a couple of indirections --
function object to code object to code string -- nothing important).

> And the mapping from name to ID happens perhaps when the caller is
> bytecode-compiled?

No, it's a lookup.  Dict lookup for globals, fast (index in table)
lookup for locals (making locals much faster to access), but a
lookup anyway.  I've already posted about how psyco can optimize
this, being a specializing compiler, when it notices the dynamic
possibilities are not being used in a given case.
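The difference is visible in the generated bytecode; here is a sketch
using the dis module of present-day CPython (the function names are
invented for illustration):

```python
import dis

def square(n):
    return n * n

def calls_global():
    return square(2)   # the name 'square' compiles to a LOAD_GLOBAL

def calls_local():
    sq = square        # one global lookup, stored into a local slot...
    return sq(2)       # ...then accessed with the faster LOAD_FAST

ops_global = {i.opname for i in dis.get_instructions(calls_global)}
ops_local = {i.opname for i in dis.get_instructions(calls_local)}
```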


Alex
From: Alex Martelli
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <kjBkb.313494$R32.10342512@news2.tin.it>
Frode Vatvedt Fjeld wrote:
   ...
> Excuse my ignorance wrt. to Python, but to me this seems to imply that
> one of these statements about functions in Python are true:
> 
>   1. Function names (strings) are resolved (looked up in the
>      namespace) each time a function is called.
> 
>   2. You can't really undefine a function such that existing calls to
>      the function will be affected.
> 
> Is this (i.e. one of these) correct?

Both, depending on how you define "existing call".  A "call" that IS
in fact existing, that is, pending on the stack, will NOT in any way
be "affected"; e.g.:

def foo():
    print 'foo, before'
    remove_foo()
    print 'foo, after'

def remove_foo():
    global foo
    print 'rmf, before'
    del foo
    print 'rmf, after'

the EXISTING call to foo() will NOT be "affected" by the "del foo" that
happens right in the middle of it, since there is no further attempt to 
look up the name "foo" in the rest of that call's progress.

But any _further_ lookup is indeed affected, since the name just isn't
bound to the function object any more.  Note that other references to
the function object may have been stashed away in many other places (by
other names, in a list, in a dict, ...), so it may still be quite
possible to call that function object -- just not to look up its name
in the scope where it was earlier defined, once it has been undefined.
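A runnable sketch of that scenario (note that unbinding a module-level
name from inside a function requires a `global` declaration; the names
here are made up):

```python
trace = []

def foo():
    trace.append('before')
    remove_foo()
    trace.append('after')    # still executes: the call already in
                             # progress never looks 'foo' up again

def remove_foo():
    global foo               # needed to unbind a module-level name
    del foo

foo()
gone = 'foo' not in globals()
```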

As for your worries elsewhere expressed that name lookup may impose
excessive overhead, in Python we like to MEASURE performance issues
rather than just reason about them "abstractly"; which is why Python
comes with a handy timeit.py script to time a code snippet accurately.
So, on my 30-months-old creaky main box (I keep mentioning its venerable
age in the hope Santa will notice...:-)...:

[····@lancelot ext]$ timeit.py -c -s'def foo():pass' 'foo'
10000000 loops, best of 3: 0.143 usec per loop
[····@lancelot ext]$ timeit.py -c -s'def foo():return' 'foo()'
1000000 loops, best of 3: 0.54 usec per loop

So: a name lookup takes about 140 nanoseconds; a name lookup plus a
call of the simplest possible function -- one that just returns at
once -- about 540 nanoseconds.  I.e., the call itself plus the
return take about 400 nanoseconds _in the simplest possible case_;
the lookup adds a further 140 nanoseconds, accounting for about 25%
of the overall lookup-call-return pure overhead.

Yes, managing less than 2 million function calls a second, albeit on
an old machine, is NOT good enough for some applications (although,
for many of practical importance, it already is).  But the need for speed
is exactly the reason optimizing compilers exist -- for those times
in which you need MANY more millions of function calls per second.
Currently, the best optimizing compiler for Python is Psyco, the
"specializing compiler" by Armin Rigo.  Unfortunately, it currently only
only supports Intel-386-and-compatible CPU's -- so I can use it on my
old AMD Athlon, but not, e.g., on my tiny Palmtop, whose little CPU is
an "ARM" (Intel-made these days I believe, but not 386-compatible)
[ for plans by Armin, and many others of us, on how to fix that in the
reasonably near future, see http://codespeak.net/pypy/ ]

Anyway, here's psyco in action on the issue in question:

import time
import psyco

def non_compiled(name):
    def foo(): return
    start = time.clock()
    for x in xrange(10*1000*1000): foo()
    stend = time.clock()
    print '%s %.2f' % (name, stend-start)

compiled = psyco.proxy(non_compiled)

non_compiled('noncomp')
compiled('psycomp')


Running this on the same good old machine produces:

[····@lancelot ext]$ python2.3 calfoo.py
noncomp 5.93
psycomp 0.13

The NON-compiled 10 million calls took an average of 593 nanoseconds
per call -- roughly the already-measured 540 nanoseconds for the
call itself, plus about 50 nanoseconds for each leg of the loop's
overhead.  But, as you can see, Psyco has no trouble optimizing that
by over 45 times -- to about 80 million function calls per second,
which _is_ good enough for many more applications than the original
less-than-2 million function calls per second was.

Psyco entirely respects Python's semantics, but its speed-ups take
particular good advantage of the "specialized" cases in which the
possibilities for extremely dynamic behavior are not, in fact, being
used in a given function that's on the bottleneck of your application
(Psyco can also automatically use a profiler to find out about that
bottleneck, if you want -- here, I used the finer-grained approach
of having it compile ["build a compiled proxy for"] just one function
in order to be able to show the speed-ups it was giving).

Oh, BTW, you'll notice I explicitly ran that little test with
python2.3 -- that was to ensure I was using the OLD release of
psyco, 1.0; as my default Python I use the current CVS snapshot,
and on that one I have installed psyco 1.1, which does more
optimizations and in particular _inlines function calls_ under
propitious conditions -- therefore, the fact that running
just "python calfoo.py" would have shown a speed-up of _120_
(rather than just 45) would have been "cheating", a bit, as it's
not measuring any more anything related to name lookup and function
call overhead.  That's a common problem with optimizing compilers:
once they get smart enough they may "optimize away" the very
construct whose optimization you were trying to check with a
sufficiently small benchmark.  I remember when the whole "SPEC"
suite of benchmarks was made obsolete at a stroke by one advance
in compiler optimization techniques, for example:-).

Anyway, if your main interest is in having your applications run
fast, rather than in studying optimization yields on specific
constructs in various circumstances, be sure to get the current
Psyco, 1.1.1, to go with the current Python, 2.3.2 (the pre-alpha
Python 2.4a0 is recommended only to those who want to help with
Python's development, including testing -- throughout at least 2004
you can count on 2.3.something, NOT 2.4, being the production,
_stable_ version of Python, recommended to all).


Alex
From: Frode Vatvedt Fjeld
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <2hk76ylj39.fsf@vserver.cs.uit.no>
Alex Martelli <·····@aleax.it> writes:

> [..] the EXISTING call to foo() will NOT be "affected" by the "del
> foo" that happens right in the middle of it, since there is no
> further attempt to look up the name "foo" in the rest of that call's
> progress. [..]

What this and my other investigations amount to, is that in Python a
"name" is somewhat like a lisp symbol [1]. In particluar, it is an
object that has a pre-computed hash-key, which is why
hash-table/dictionary lookups are reasonably efficient. My worry was
that the actual string hash-key would have to be computed at every
function call, which I believe would slow down the process some 10-100
times. I'm happy to hear it is not so.

[1] One major difference being that Python names are not first-class
    objects. This is a big mistake with respect to supporting
    interactive programming, in my personal opinion.

> As for your worries elsewhere expressed that name lookup may impose
> excessive overhead, in Python we like to MEASURE performance issues
> rather than just reason about them "abstractly"; which is why Python
> comes with a handy timeit.py script to time a code snippet
> accurately. [...]

Thank you for the detailed information. Still, I'm sure you will agree
that sometimes reasoning about things can provide insight with
predictive powers that you cannot achieve by mere experimentation.

-- 
Frode Vatvedt Fjeld
From: Terry Reedy
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <jJmdnY9p88gR7QiiRVn-sw@comcast.com>
"Frode Vatvedt Fjeld" <······@cs.uit.no> wrote in message
···················@vserver.cs.uit.no...
> What this and my other investigations amount to, is that in Python a
> "name" is somewhat like a lisp symbol [1].

This is true in that names are bound to objects rather than
representing a block of memory.

>In particluar, it is an object that has a pre-computed hash-key,

NO.  There is no name type. 'Name' is a grammatical category, with
particular syntax rules, for Python code, just like 'expression',
'statement' and many others.

A name *may* be represented at runtime as a string, as CPython
*sometimes* does.  The implementation *may*, for efficiency, give
strings a hidden hash value attribute, which CPython does.

For even faster runtime 'name lookup' an implementation may represent
names as slot numbers (indexes) for a hidden, non-Python array.
CPython does this (with C pointer arrays) for function locals whenever
the list of locals is fixed at compile time, which is usually.  (To
prevent this optimization, add to a function body something like 'from
mymod import *', if still allowed, which makes the number of locals
unknowable until runtime.)

To learn about generated bytecodes, read the dis module docs and use
dis.dis.
For example:
>>> import dis
>>> def f(a):
...   b=a+1
...
>>> dis.dis(f)
          0 SET_LINENO               1

          3 SET_LINENO               2
          6 LOAD_FAST                0 (a)
          9 LOAD_CONST               1 (1)
         12 BINARY_ADD
         13 STORE_FAST               1 (b)
         16 LOAD_CONST               0 (None)
         19 RETURN_VALUE
This says: load (onto stack) first pointer in local_vars array and
second pointer in local-constants array, add referenced values and
replace operand pointers with pointer to result, store that result
pointer in the second slot of local_vars, load first constant pointer
(always to None), and return.

Who knows what *we* do when we read, parse, and possibly execute
Python code.

Terry J. Reedy
From: Frode Vatvedt Fjeld
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <2h7k2yl9xo.fsf@vserver.cs.uit.no>
"Terry Reedy" <·······@udel.edu> writes:

> [..] For even faster runtime 'name lookup' an implementation may
> represent names as slot numbers (indexes) for a hiddem, non-Python
> array.  CPython does this (with C pointer arrays) for function
> locals whenever the list of locals is fixed at compile time, which
> is usually.  (To prevent this optimization, add to a function body
> something like 'from mymod import *', if still allowed, that makes
> the number of locals unknowable until runtime.) [..]

This certainly does not ease my worries over Pythons abilities with
respect to interactivity and dynamism.

-- 
Frode Vatvedt Fjeld
From: Alex Martelli
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <hMilb.28634$e5.1003296@news1.tin.it>
Frode Vatvedt Fjeld wrote:
   ...
>> excessive overhead, in Python we like to MEASURE performance issues
>> rather than just reason about them "abstractly"; which is why Python
>> comes with a handy timeit.py script to time a code snippet
>> accurately. [...]
> 
> Thank you for the detailed information. Still, I'm sure you will agree
> that sometimes reasoning about things can provide insight with
> predictive powers that you cannot achieve by mere experimentation.

A few centuries ago, a compatriot of mine was threatened with
torture, and backed off, because he had dared state that "all
science comes from experience" -- he refuted the "reasoning
about things" by MEASURING (and fudging the numbers, if the
chi square tests about his reports about the sloping-plane
experiments are right -- but then, Italians _are_ notoriously
untrustworthy, even though sometimes geniuses;-).

These days, I'd hope not to be threatened with torture if I assert:
"reasoning" is cheap, that's its advantage -- it can lead you to
advance predictive hypotheses much faster than mere "data
mining" through masses of data might yield them.  But those
hypotheses are very dubious until you've MEASURED what they
predict.  If you don't (or can't) measure, you don't _really KNOW_;
you just _OPINE_ (reasonably or not, justifiably or not, etc).  One
independently repeatable measurement trumps a thousand clever
reasonings, when that measurement gives numbers contradicting
the reasonings' predictions -- that one number sends you back to
the drawing board.

Or, at least, that's how we humble engineers see the world...


Alex
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bmujf9$vgh$1@news.oberberg.net>
Oh, you're trolling for an inter-language flame fest...
well, anyway:

> 3. no multimethods (why? Guido did not know Lisp, so he did not know 
>    about them) You now have to suffer from visitor patterns, etc. like
>     lowly Java monkeys.

Multimethods suck.

The longer answer: Multimethods have modularity issues (if whatever 
domain they're dispatching on can be extended by independent developers: 
different developers may extend the dispatch domain of a function in 
different directions, and leave undefined combinations; standard 
dispatch strategies as I've seen in some Lisps just cover up the 
undefined behaviour, with a slightly less than 50% chance of being correct).

Regards,
Jo
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <pan.2003.10.19.18.38.05.184834@knm.org.pl>
On Sun, 19 Oct 2003 20:01:03 +0200, Joachim Durchholz wrote:

> The longer answer: Multimethods have modularity issues (if whatever domain
> they're dispatching on can be extended by independent developers:
> different developers may extend the dispatch domain of a function in
> different directions, and leave undefined combinations;

This doesn't matter until you provide an equally powerful mechanism which
fixes that. Which is it?

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn0fh9$qeg$1@news.oberberg.net>
Marcin 'Qrczak' Kowalczyk wrote:

> On Sun, 19 Oct 2003 20:01:03 +0200, Joachim Durchholz wrote:
> 
>>The longer answer: Multimethods have modularity issues (if whatever domain
>>they're dispatching on can be extended by independent developers:
>>different developers may extend the dispatch domain of a function in
>>different directions, and leave undefined combinations;
> 
> This doesn't matter until you provide an equally powerful mechanism which
> fixes that. Which is it?

I don't think there is a satisfactory one. It's a fundamental problem: 
if two people who don't know of each other can extend the same thing 
(framework, base class, whatever) in different directions, who's 
responsible for writing the code needed to combine these extensions?

Solutions that I have seen or thought about are:

1. Let the system decide. Technically feasible for base classes (in the 
form of prioritisation rules for multimethods), technically infeasible for 
frameworks. The problem here is that the system doesn't (usually) have 
enough information to reliably make the correct decision.

2. Let the system declare an error if the glue code isn't there. 
Effectively prohibits all forms of dynamic code loading. Can create 
risks in project management (unexpected error messages during code 
integration near a project deadline - yuck). Creates a temptation to 
hack the glue code up, by people who don't know the details of the two 
modules involved.

3. Disallow extending in multiple directions. In other words, no 
multimethods, and live with the asymmetry.
Too restricted to be comfortable with.

4. As (3), but allow multiple extensions if they are contained within 
the same module. I.e. allow multiple dispatch within an "arithmetics" 
module that defines the classes Integer, Real, Complex, etc. etc., but 
don't allow additional multiple dispatch outside the module. (Single 
dispatch would, of course, be OK.)

5. As (3), but require manual intervention. IOW let the two authors who 
did the orthogonal extensions know about each other, and have each 
module refer to the other, and each module carry the glue code required 
to combine with the other.
Actually, this is the practice for various open source projects. For 
example, authors of MTAs, mail servers etc. cooperate to set standards.
Of course, if the authors aren't interested in cooperating, this doesn't 
work well either.

6. Don't use dynamic dispatch, use parametric polymorphism (or whatever 
your language offers for that purpose, be it "generics" or "templates").

Regards,
Jo
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <pan.2003.10.20.12.46.12.841236@knm.org.pl>
Followup-To: comp.lang.misc

On Mon, 20 Oct 2003 13:06:08 +0200, Joachim Durchholz wrote:

>>>The longer answer: Multimethods have modularity issues (if whatever
>>>domain they're dispatching on can be extended by independent developers:
>>>different developers may extend the dispatch domain of a function in
>>>different directions, and leave undefined combinations;
>> 
>> This doesn't matter until you provide an equally powerful mechanism
>> which fixes that. Which is it?
> 
> I don't think there is a satisfactory one. It's a fundamental problem:
> if two people who don't know of each other can extend the same thing
> (framework, base class, whatever) in different directions, who's
> responsible for writing the code needed to combine these extensions?

Indeed. I wouldn't thus blame the language mechanism.

> 1. Let the system decide. Technically feasible for base classes (in the
> form of priorisation rules for multimethods), technically infeasible for
> frameworks. The problem here is that the system doesn't (usually) have
> enough information to reliably make the correct decision.

Sometimes the programmer can write enough default specializations that it
can be freely extended. Example: drawing shapes on devices. If every shape
is convertible to Bezier curves, and every device is capable of drawing
Bezier curves, then the most generic specialization, for arbitrary shape
and arbitrary device, will call 'draw' again with the shape converted to
Bezier curves.

The potential of multimethods is used: particular shapes have specialized
implementations for particular devices (drawing text is usually better
done more directly than through curves), separate modules can provide
additional shapes and additional devices. Yet it is safe and modular, as
long as people agree who provides a particular specialization.
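A toy sketch of that shape/device scheme in Python (modern decorator
syntax, which postdates this thread; every class and function name
here is invented for illustration):

```python
# A tiny double-dispatch registry: specific (shape, device) pairs get
# specialized implementations, and the most generic specialization
# converts any shape to Bezier curves and dispatches again.

registry = {}

def defmethod(shape_type, device_type):
    """Register an implementation for a (shape, device) pair."""
    def register(fn):
        registry[(shape_type, device_type)] = fn
        return fn
    return register

class Bezier:
    def __init__(self, source):
        self.source = source

class Text: pass
class Circle: pass
class Screen: pass
class Plotter: pass

def to_bezier(shape):
    # every shape is convertible to Bezier curves
    return Bezier(type(shape).__name__)

def draw(shape, device):
    impl = registry.get((type(shape), type(device)))
    if impl is not None:
        return impl(shape, device)
    if isinstance(shape, Bezier):
        raise TypeError('device cannot draw curves: %r' % device)
    # generic fallback: convert to curves, then dispatch again
    return draw(to_bezier(shape), device)

@defmethod(Bezier, Screen)
def draw_curves_screen(shape, device):
    return 'curves on screen'

@defmethod(Bezier, Plotter)
def draw_curves_plotter(shape, device):
    return 'curves on plotter'

@defmethod(Text, Screen)
def draw_text_screen(shape, device):
    return 'direct text on screen'   # specialized: nicer than curves

specialized = draw(Text(), Screen())      # uses the specific method
fallback = draw(Text(), Plotter())        # falls back through Bezier
circle_result = draw(Circle(), Screen())  # falls back through Bezier
```

A separate module could register, say, a (Circle, Plotter) method
without touching this one, which is the extensibility being discussed.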

It's easy to agree with a certain restriction: the specialization is
provided either by the module providing the shape or by the module
providing the device. In practice the restriction doesn't always have
to be followed
- it's enough that the module providing the specialization is known to all
people who might want to write their own, so I wouldn't advocate enforcing
the restriction on the language level.

I would favor multimethods even if they provided only solutions extensible
in one dimension, since they are nicer than having to enumerate all cases
in one place. Better to have a partially extensible mechanism than nothing.
Here it is extensible.

> 2. Let the system declare an error if the glue code isn't there.
> Effectively prohibits all forms of dynamic code loading. Can create risks
> in project management (unexpected error messages during code integration
> near a project deadline - yuck). Creates a temptation to hack the glue
> code up, by people who don't know the details of the two modules involved.

It would be interesting to let the system find the coverage of multimethods,
but without making it an error if not all combinations are covered. It's
useful to be able to test an incomplete program.

There is no definite answer to what kinds of errors should prevent
running the program. It's similar to the static/dynamic typing question,
or to whether calls to unimplemented functions should compile at all.

Even if the system shows that all combinations are covered, it doesn't
imply that they do the right thing. It's analogous to failing to override
a method in class-based OOP - the system doesn't know if the superclass
implementation is appropriate for the subclass. So you can't completely
rely on detection of such errors anyway.
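Such a coverage check could report gaps as warnings rather than errors, so an incomplete program still runs. A toy sketch (the registry layout here is hypothetical):

```python
# Report uncovered (shape, device) combinations as warnings, not errors,
# so an incomplete program can still be tested and run.
import itertools
import warnings

def check_coverage(registry, shapes, devices):
    """Return the (shape, device) pairs with no registered method."""
    missing = [(s, d) for s, d in itertools.product(shapes, devices)
               if (s, d) not in registry]
    for s, d in missing:
        warnings.warn(f"no draw method covers ({s}, {d})")
    return missing

# Hypothetical registry: the keys are the implemented combinations.
registry = {("circle", "plotter"), ("circle", "screen")}
gaps = check_coverage(registry, ["circle", "text"], ["plotter", "screen"])
# gaps lists the two uncovered ("text", ...) combinations
```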

> 3. Disallow extending in multiple directions. In other words, no
> multimethods, and live with the asymmetry. Too restricted to be
> comfortable with.

I agree.

> 4. As (3), but allow multiple extensions if they are contained within the
> same module. I.e. allow multiple dispatch within an "arithmetics" module
> that defines the classes Integer, Real, Complex, etc. etc., but don't
> allow additional multiple dispatch outside the module. (Single dispatch
> would, of course, be OK.)

For me it's still too restricted. It's a useful guideline to follow but
it should not be a hard requirement.

> 5. As (3), but require manual intervention. IOW let the two authors who
> did the orthogonal extensions know about each other, and have each module
> refer to the other, and each module carry the glue code required to
> combine with the other.

The glue code might reside in yet another module, especially if each of
the modules makes sense without the other (so neither should be made to
depend on the other). Again, for me it's just a guideline - if one of
the modules can ensure that it's composable with the other, it's a good
idea to change it - but I would like to be able to provide the glue code
elsewhere to make them work together in my program which uses both, and
to remove it once the modules include the glue code themselves.

> Actually, this is the practice for various open source projects. For
> example, authors of MTAs, mail servers etc. cooperate to set standards. Of
> course, if the authors aren't interested in cooperating, this doesn't work
> well either.

The modules might also be parts of one program, where it's relatively
easy to make them cooperate. Inability to cope with some uses is generally
not a sufficient reason to reject a language mechanism which also has
uses that work well.

> 6. Don't use dynamic dispatch, use parametric polymorphism (or whatever
> your language offers for that purpose, be it "generics" or "templates").

I think it can rarely solve the same problem. C++ templates (which can
use overloaded operations, i.e. with implementation dependent on type
parameters) help only in statically resolvable cases. Fully parametric
polymorphism doesn't seem to help at all even in these cases (equality,
arithmetic).

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Alex Martelli
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <GeDkb.17844$e5.652934@news1.tin.it>
Joachim Durchholz wrote:

> Oh, you're trolling for an inter-language flame fest...
> well, anyway:
> 
>> 3. no multimethods (why? Guido did not know Lisp, so he did not know
>>    about them) You now have to suffer from visitor patterns, etc. like
>>     lowly Java monkeys.
> 
> Multimethods suck.

Multimethods are wonderful, and we're using them as part of the
implementation of pypy, the Python runtime coded in Python.  Sure,
we had to implement them, but that was a drop in the ocean in
comparison to the amount of other code in pypy as it stands, much
less the amount of code we want to add to it in the future.  See
http://codespeak.net/ for more about pypy (including all of its
code -- subversion makes it available for download as well as for
online browsing).

So, you're both wrong:-).


Alex
From: Thomas F. Burdick
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <xcvfzho7967.fsf@famine.OCF.Berkeley.EDU>
Alex Martelli <·····@aleax.it> writes:

> Joachim Durchholz wrote:
> 
> > Oh, you're trolling for an inter-language flame fest...
> > well, anyway:
> > 
> >> 3. no multimethods (why? Guido did not know Lisp, so he did not know
> >>    about them) You now have to suffer from visitor patterns, etc. like
> >>     lowly Java monkeys.
> > 
> > Multimethods suck.
> 
> Multimethods are wonderful, and we're using them as part of the
> implementation of pypy, the Python runtime coded in Python.  Sure,
> we had to implement them, but that was a drop in the ocean in
> comparison to the amount of other code in pypy as it stands, much
> less the amount of code we want to add to it in the future.

So do the Python masses get to use multimethods?

(with-lisp-trolling
  And have you seen the asymptote yet, or do you need to grow macros first?)

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Alex Martelli
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <PPTkb.21307$e5.763476@news1.tin.it>
Thomas F. Burdick wrote:
   ...
> So do the Python masses get to use multimethods?

Sure!  Check out http://codespeak.net/ : pypy is aggressively open-source,
and both the masses and the elites get to download and reuse all they want.


> (with-lisp-trolling
>   And have you seen the asymptote yet, or do you need to grow macros
>   first?)

We felt absolutely no need to tweak Python's syntax in the least in
order to implement multi-methods, so, no need for macros (including
Armin Rigo, who, I think, does have extensive experience using CL).

"The asymptote" of pypy is Python -- an implementation more flexible
than the current C and Java ones, giving better optimization (Armin
is convinced he can easily surpass his own psyco, that way), ease of
fine-grained subsetting (building tiny runtimes for cellphones &c),
and also, no doubt, ease of play and experimentation (oops, we'd
better say "Research", it sounds way more dignified, doesn't it?).

Macros are definitely not part of our current plans.  But, hey, this
is just a summary: visit http://codespeak.net/ and see for yourself --
everything is spelled out in great detail, we have no secrets.  Get
a subversion client and download everything, check out all of the
mailing lists' archives -- have a ball.  Anybody who wants to play
along is welcome to join any of our "sprints" for a week or so of
nearly-nonstop heavy duty pair-programming -- "nearly" because we
generally manage to schedule a barbecue, picnic, beer-bash, or other
suchlike outing (and a lot of fruitful design discussion takes place
during that scheduled break, in my observation).  Between sprints,
mailing lists, wikis, IRC and the like keep the fires going.  Indeed,
the social aspects of the pypy experience manage to be almost more
fascinating than the technical ones, which IS saying something (and
reinforces my beliefs about programming being first and foremost an
issue of social interaction, but that's another thread:-).

Ah, yeah, one sad thing for non-Europeans -- pypy's very much a
European thing -- everybody's welcome, but you'll have a hard time
convincing us to schedule a sprint elsewhere (each participant pays
his or her own travel costs, you see...).  Still, codespeak.net
does give free access to all material anyway, wherever you are:-).

[ducking back out of c.l.lisp...:-)]


Alex
From: Kenny Tilton
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <_8Ekb.7543$pT1.318@twister.nyc.rr.com>
Joachim Durchholz wrote:

> Oh, you're trolling for an inter-language flame fest...
> well, anyway:
> 
>> 3. no multimethods (why? Guido did not know Lisp, so he did not know 
>>    about them) You now have to suffer from visitor patterns, etc. like
>>     lowly Java monkeys.
> 
> 
> Multimethods suck.
> 
> The longer answer: Multimethods have modularity issues 

Lisp consistently errs on the side of more expressive power. The idea of 
putting on a straitjacket while coding to protect us from ourselves 
just seems batty. Similarly, an ex-C++ journal editor recently 
wrote that test-driven development now gives him the code QA peace of 
mind he once sought from strong static typing. An admitted former static 
typing bigot, he finished by wondering aloud, "Will we all be coding in 
Python ten years from now?"

kenny

-- 
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
  http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
From: Kenny Tilton
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <sgEkb.7545$pT1.2273@twister.nyc.rr.com>
Kenny Tilton wrote:

> 
> 
> Joachim Durchholz wrote:
> 
>> Oh, you're trolling for an inter-language flame fest...
>> well, anyway:
>>
>>> 3. no multimethods (why? Guido did not know Lisp, so he did not know 
>>>    about them) You now have to suffer from visitor patterns, etc. like
>>>     lowly Java monkeys.
>>
>>
>>
>> Multimethods suck.
>>
>> The longer answer: Multimethods have modularity issues 
> 
> 
> Lisp consistently errs on the side of more expressive power. The idea of 
> putting on a straitjacket while coding to protect us from ourselves 
> just seems batty. Similarly, an ex-C++ journal editor recently 
> wrote that test-driven development now gives him the code QA peace of 
> mind he once sought from strong static typing. An admitted former static 
> typing bigot, he finished by wondering aloud, "Will we all be coding in 
> Python ten years from now?"

    http://www.artima.com/weblogs/viewpost.jsp?thread=4639

-- 
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
  http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
From: Tomasz Zielonka
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <slrnbp64hl.el6.t.zielonka@zodiac.mimuw.edu.pl>
Kenny Tilton wrote:
> 
> Lisp consistently errs on the side of more expressive power. The idea of 
> putting on a straitjacket while coding to protect us from ourselves 
> just seems batty. Similarly, an ex-C++ journal editor recently 
> wrote that test-driven development now gives him the code QA peace of 
> mind he once sought from strong static typing.

C++ is not the best example of strong static typing. It is a language
full of traps that its type system can't detect.

> An admitted former static typing bigot, he finished by wondering
> aloud, "Will we all be coding in Python ten years from now?"
> 
> kenny

Best regards,
Tom

-- 
.signature: Too many levels of symbolic links
From: Scott McIntire
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <MoEkb.821534$YN5.832338@sccrnsc01>
"Kenny Tilton" <·······@nyc.rr.com> wrote in message
·······················@twister.nyc.rr.com...
>
>
> Joachim Durchholz wrote:
>
> > Oh, you're trolling for an inter-language flame fest...
> > well, anyway:
> >
> >> 3. no multimethods (why? Guido did not know Lisp, so he did not know
> >>    about them) You now have to suffer from visitor patterns, etc. like
> >>     lowly Java monkeys.
> >
> >
> > Multimethods suck.
> >
> > The longer answer: Multimethods have modularity issues
>
> Lisp consistently errs on the side of more expressive power. The idea of
> putting on a straitjacket while coding to protect us from ourselves
> just seems batty. Similarly, an ex-C++ journal editor recently
> wrote that test-driven development now gives him the code QA peace of
> mind he once sought from strong static typing. An admitted former static
> typing bigot, he finished by wondering aloud, "Will we all be coding in
> Python ten years from now?"
>
> kenny
>

There was a nice example from one of the ILC 2003 talks about a European
Space Agency rocket exploding with a valuable payload. My understanding was
that there was testing, but maybe too much emphasis was placed on the static
type checking of the language used to control the rocket. The end result was
a run-time arithmetic overflow which the code interpreted as "rocket off
course". The rocket code's instructions in this event were to self-destruct.
It seems to me that the Agency would have fared better if they had just used
Lisp - which has bignums - and relied more on regression suites and less on
the belief that static type checking systems would save the day.

 I'd be interested in hearing more about this from someone who knows the
details.

-R. Scott McIntire
From: Terry Reedy
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <cumdnd4b_LRkgA6iRVn-gA@comcast.com>
"Scott McIntire" <····················@comcast.net> wrote in message
····························@sccrnsc01...
> There was a nice example from one of the ILC 2003 talks about a
Europian
> Space Agency rocket exploding with a valueable payload. My
understanding was
> that there was testing, but maybe too much emphasis was placed the
static
> type checking of the language used to control the rocket. The end
result was
> a run time arithmetic overflow which the code intepreted as "rocket
off
> course". The rocket code instructions in this event were to self
destruct.
> It seems to me that the Agency would have fared better if they just
used
> Lisp - which has bignums - and relied more on regression suites and
less on
> the belief that static type checking systems would save the day.
>
>  I'd be interested in hearing more about this from someone who knows
the
> details.

 I believe you are referring to the first flight of the Ariane 5.
The report of the investigating commission is on the web somewhere
and is an interesting read.  They identified about five distinct
errors.  Try Google.

Terry
From: Kenny Tilton
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <FXIkb.7570$pT1.1148@twister.nyc.rr.com>
Dennis Lee Bieber wrote:

> Scott McIntire fed this fish to the penguins on Sunday 19 October 2003 
> 15:39 pm:
> 
> 
>>There was a nice example from one of the ILC 2003 talks about a
>>European Space Agency rocket exploding with a valuable payload. My
>>understanding was that there was testing, but maybe too much emphasis
>>was placed on the static type checking of the language used to control
>>the rocket. The end result was a run-time arithmetic overflow which
>>the code interpreted as "rocket off course". The rocket code's
>>instructions in this event were to self-destruct. It seems to me that
>>the Agency would have fared better if they had just used Lisp - which
>>has bignums - and relied more on regression suites and less on the
>>belief that static type checking systems would save the day.
>>
>> I'd be interested in hearing more about this from someone who knows
>> the details.
>>
> 
>         Just check the archives for comp.lang.ada and Ariane-5.
> 
>         Short version: The software performed correctly, to specification 
> (including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS 
> DESIGNED.

Nonsense. From: http://www.sp.ph.ic.ac.uk/Cluster/report.html

"The internal SRI software exception was caused during execution of a 
data conversion from 64-bit floating point to 16-bit signed integer 
value. The floating point number which was converted had a value greater 
than what could be represented by a 16-bit signed integer. This resulted 
in an Operand Error. The data conversion instructions (in Ada code) were 
not protected from causing an Operand Error, although other conversions 
of comparable variables in the same place in the code were protected. 
The error occurred in a part of the software that only performs 
alignment of the strap-down inertial platform. This software module 
computes meaningful results only before lift-off. As soon as the 
launcher lifts off, this function serves no purpose."


>         LISP wouldn't have helped -- since the A-4 code was supposed to 
> failure with values that large... And would have done the same thing if 
> plugged in the A-5. (Or are you proposing that the A-4 code is supposed 
> to ignore a performance requirement?)

"supposed to" fail? chya. This was nothing more than an unhandled 
exception crashing the sytem and its identical backup. Other conversions 
were protected so they could handle things intelligently, this bad boy 
went unguarded. Note also that the code functionality was pre-ignition 
only, so there is no way they were thinking that a cool way to abort the 
flight would be to leave a program exception unhandled.

What happened (aside from an unnecessary chunk of code running, 
increasing risk to no good end) is that the extra power of the A5 caused 
oscillations greater than those seen in the A4. Those greater 
oscillations took the 64-bit float beyond what would fit in the 16-bit 
int. Kablam. Operand Error. This is not a system saying "whoa, out of 
range, abort".

As for Lisp not helping:

 > most-positive-fixnum ;; constant provided by implementation
536870911

 > (1+ most-positive-fixnum) ;; overflow fixnum type and...
536870912

 > (type-of (1+ most-positive-fixnum)) ;; ...auto bignum type
BIGNUM

 > (round most-positive-single-float) ;; or floor or ceiling
340282346638528859811704183484516925440
0.0

 > (type-of *)
BIGNUM
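A rough Python analog of the same point: its integers are arbitrary precision, like Lisp bignums, so overflow can only appear where the program explicitly crosses a fixed-width boundary.

```python
# Python ints never overflow: there is no fixnum boundary to fall off.
n = 2 ** 62                  # well past any 16- or 64-bit range
assert (n + 1) - n == 1      # no overflow, no wraparound

# An "Operand Error" analog can only arise at an explicit fixed-width
# boundary, e.g. when packing for hardware or a wire format:
import struct
try:
    struct.pack("<h", 100000)    # "<h" = little-endian 16-bit signed int
    overflowed = False
except struct.error:             # raised: 100000 doesn't fit in int16
    overflowed = True
```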

kenny

-- 
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
  http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9390d5$1@news.unimelb.edu.au>
Kenny Tilton <·······@nyc.rr.com> writes:

>Dennis Lee Bieber wrote:
>
>>         Just check the archives for comp.lang.ada and Ariane-5.
>> 
>>         Short version: The software performed correctly, to specification 
>> (including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS 
>> DESIGNED.
>
>Nonsense.

No, that is exactly right.  Like the man said, read the archives for
comp.lang.ada.

>From: http://www.sp.ph.ic.ac.uk/Cluster/report.html
>
>"The internal SRI software exception was caused during execution of a 
>data conversion from 64-bit floating point to 16-bit signed integer 
>value. The floating point number which was converted had a value greater 
>than what could be represented by a 16-bit signed integer. This resulted 
>in an Operand Error. The data conversion instructions (in Ada code) were 
>not protected from causing an Operand Error, although other conversions 
>of comparable variables in the same place in the code were protected. 
>The error occurred in a part of the software that only performs 
>alignment of the strap-down inertial platform. This software module 
>computes meaningful results only before lift-off. As soon as the 
>launcher lifts off, this function serves no purpose."

That's all true, but it is only part of the story, and selectively quoting
just that part is misleading in this context.

For a more detailed answer, see
<http://www.google.com.au/groups?as_umsgid=359BFC60.446B%40lanl.gov>.

>>         LISP wouldn't have helped -- since the A-4 code was supposed to 
>> failure with values that large... And would have done the same thing if 
>> plugged in the A-5. (Or are you proposing that the A-4 code is supposed 
>> to ignore a performance requirement?)
>
>"supposed to" fail? chya. This was nothing more than an unhandled 
>exception crashing the sytem and its identical backup. Other conversions 
>were protected so they could handle things intelligently, this bad boy 
>went unguarded.

The reason that it went unguarded is that the programmers DELIBERATELY
omitted an exception handler for it.  The post at the URL quoted above
explains why.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Kenny Tilton
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bNTkb.12409$pT1.7163@twister.nyc.rr.com>
Fergus Henderson wrote:

> Kenny Tilton <·······@nyc.rr.com> writes:
> 
> 
>>Dennis Lee Bieber wrote:
>>
>>
>>>        Just check the archives for comp.lang.ada and Ariane-5.
>>>
>>>        Short version: The software performed correctly, to specification 
>>>(including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS 
>>>DESIGNED.
>>
>>Nonsense.
> 
> 
> No, that is exactly right.  Like the man said, read the archives for
> comp.lang.ada.

Yep, I was wrong. They /did/ handle the overflow by leaving the 
operation unguarded, trusting it to eventually bring down the system, 
their design goal. Apologies to Dennis.

> 
> 
>>From: http://www.sp.ph.ic.ac.uk/Cluster/report.html
>>
>>"The internal SRI software exception was caused during execution of a 
>>data conversion from 64-bit floating point to 16-bit signed integer 
>>value. The floating point number which was converted had a value greater 
>>than what could be represented by a 16-bit signed integer. This resulted 
>>in an Operand Error. The data conversion instructions (in Ada code) were 
>>not protected from causing an Operand Error, although other conversions 
>>of comparable variables in the same place in the code were protected. 
>>The error occurred in a part of the software that only performs 
>>alignment of the strap-down inertial platform. This software module 
>>computes meaningful results only before lift-off. As soon as the 
>>launcher lifts off, this function serves no purpose."
> 
> 
> That's all true, but it is only part of the story, and selectively quoting
> just that part is misleading in this context.

I quoted the entire paragraph and it seemed conclusive, so I did not 
read the rest of the report. I.e., I was not being selective; I just 
assumed no one would consider crashing to be a form of error handling. 
My mistake: they did.

Well, the original question was, "Would Lisp have helped?". Let's see. 
They dutifully went looking for overflowable conversions and decided 
what to do with each, choosing in this case to do something appropriate 
for the A4 - which was then inappropriately allowed by management to go 
into the A5 unexamined.

In Lisp, well, there are two cases. Did they have to dump a number into 
a 16-bit hardware channel? There was some reason for the conversion. If 
not, no Operand Error arises. It is an open question whether they decide 
to check anyway for large values and abort if found, but this one arose 
only during a sweep of all such conversions, so probably not.

But suppose they did have to dance to the 16-bit tune of some hardware 
black box. They would go through the same reasoning and decide to shut 
down the system. No advantage to Lisp. But they'd have to do some work 
to bring the system down, because there would be no overflow. So:

(define-condition e-hardware-broken (e-pre-ignition e-fatal)
   ((component-id :initarg :component-id :reader component-id)
    (bad-value :initarg :bad-value :initform nil :reader bad-value)
    ...etc etc...

And then they would have to kick it off, and the exception handler of 
the controlling logic would get a look at the condition on the way out. 
Of course, it also sees operand errors, so one can only hope that at 
some point during testing they for some reason had /some/ condition of 
type e-pre-ignition get trapped by the in-flight supervisor, at which 
point someone would have said either throw it away or why is that module 
still running?

Or, if they were as meticulous with their handlers as they were with 
numeric conversions, they would, during the inventory of explicit 
conditions to handle, have gotten to the pre-ignition module's conditions 
and decided, "what does that software (which should not even be running) 
know about the hardware that the rest of the system does not know?"

The case is not so strong now, but the odds are still better with Lisp.

kenny


-- 
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
  http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87ismjg7u9.fsf@thalassa.informatimago.com>
Fergus Henderson <···@cs.mu.oz.au> writes:
> <http://www.google.com.au/groups?as_umsgid=359BFC60.446B%40lanl.gov>.

The post at that URL talks about the culture of the Ariane team, but
I would say that it's an even more fundamental problem of our culture
in general: we build brittle stuff with very little margin for error.
Granted, it would be costly to increase the physical margins, but in
this case, adopting a point of view more like _robotics_ could help.
Even in case of hardware failure, there's no reason to shut down the
mind; just go on with what you have.


-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Lying for having sex or lying for making war?  Trust US presidents :-(
From: Steve Schafer
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <4i78pvgpmiih1bohm1o9u6tumksmuoosc1@4ax.com>
On 20 Oct 2003 19:03:10 +0200, Pascal Bourguignon
<····@thalassa.informatimago.com> wrote:

>Even in case of hardware failure, there's no reason to shut down the
>mind; just go on with what you have.

When the thing that failed is a very large rocket having a very large
momentum, and containing a very large amount of very volatile fuel, it
makes sense to give up and shut down in the safest possible way.

Also keep in mind that this was a "can't possibly happen" failure
scenario. If you've deemed that it is something that can't possibly
happen, you are necessarily admitting that you have no idea how to
respond in a meaningful way if it somehow does happen.

-Steve
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87ekx7fz9d.fsf@thalassa.informatimago.com>
Steve Schafer <···@reply.to.header> writes:

> On 20 Oct 2003 19:03:10 +0200, Pascal Bourguignon
> <····@thalassa.informatimago.com> wrote:
> 
> >Even in case of hardware failure, there's no reason to shut down the
> >mind; just go on with what you have.
> 
> When the thing that failed is a very large rocket having a very large
> momentum, and containing a very large amount of very volatile fuel, it
> makes sense to give up and shut down in the safest possible way.

You have to define a "dangerous" situation. Remember that this
"safest possible way" is usually to blow the rocket up. AFAIK, while
this parameter was out of range, there was no instability and the
rocket was not uncontrollable.
 

> Also keep in mind that this was a "can't possibly happen" failure
> scenario. If you've deemed that it is something that can't possibly
> happen, you are necessarily admitting that you have no idea how to
> respond in a meaningful way if it somehow does happen.

My point. This "can't possibly happen" failure did happen, so clearly
it was not a "can't possibly happen" physically, which means that the
problem was with the software. We know it, but what I'm saying is that
smarter software could have deduced it on the fly.

We all agree that it would be better to have a perfect world and
perfect, bug-free software. But since that's not the case, I'm saying
that instead of having software that behaves like simple Unix C tools,
which call perror() and exit() as soon as there is an unexpected
situation, it would be better to have smarter software that can try to
handle UNEXPECTED error situations, including its own bugs. I would
feel safer in an AI rocket.


-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Lying for having sex or lying for making war?  Trust US presidents :-(
From: Andrew Dalke
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <D0Zkb.4766$S52.1095@newsread4.news.pas.earthlink.net>
Pascal Bourguignon:
> We  all agree that  it would  be better  to have  a perfect  world and
> perfect,  bug-free, software.   But  since that's  not  the case,  I'm
> saying that instead of having software that behaves like simple unix C
> tools, where  as soon  as there is  an unexpected situation,  it calls
> perror() and exit(), it would  be better to have smarter software that
> can  try and  handle UNEXPECTED  error situations,  including  its own
> bugs.  I would feel safer in an AI rocket.

Since it was written in Ada and not C, and since it properly raised
an exception at that point (as originally designed), which wasn't
caught at a recoverable point, ending up in the default "better blow
up than kill people" handler ... what would your AI rocket have
done with that exception?  How does it decide that an UNEXPECTED
error situation can be recovered?  How would you implement it?
How would you test it?  (Note that the above software wasn't
tested under realistic conditions; I assume in part because of cost.)

I agree it would be better to have software which can do that.
I have no good idea of  how that's done.  (And bear in mind that
my XEmacs session dies about once a year, eg, once when NFS
was acting flaky underneath it and a couple times because it
couldn't handle something X threw at it. ;)

The best examples of resilient architectures I've seen come from
genetic algorithms and other sorts of feedback training; eg,
subsumptive architectures for robotics and evolvable hardware.
There was a great article in CACM on programming an FPGA
via GAs, in 1998/'99 (link, anyone?).  It worked quite well (as
I recall) but pointed out the hard part about this approach is
that it's hard to understand, and the result used various defects
on the chip (part of the circuit wasn't used but the chip wouldn't
work without it) which makes the result harder to mass produce.

                    Andrew
                    ·····@dalkescientific.com
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87ismjdz1g.fsf@thalassa.informatimago.com>
"Andrew Dalke" <······@mindspring.com> writes:

> Pascal Bourguignon:
> > We  all agree that  it would  be better  to have  a perfect  world and
> > perfect,  bug-free, software.   But  since that's  not  the case,  I'm
> > saying that instead of having software that behaves like simple unix C
> > tools, where  as soon  as there is  an unexpected situation,  it calls
> > perror() and exit(), it would  be better to have smarter software that
> > can  try and  handle UNEXPECTED  error situations,  including  its own
> > bugs.  I would feel safer in an AI rocket.
> 
> Since it was written in Ada and not C, and since it properly raised
> an exception at that point (as originally designed), which wasn't
> caught at a recoverable point, ending up in the default "better blow
> up than kill people" handler ... what would your AI rocket have
> done with that exception?  How does it decide that an UNEXPECTED
> error situation can be recovered?  

By looking at the big picture!

The blow-up action would be activated only when the big picture shows
that the AI has no control of the rocket and that it is going down.


> How would you implement it?

Like any AI.

> How would you test it?  (Note that the above software wasn't
> tested under realistic conditions; I assume in part because of cost.)

In a simulator.  In any case, the point is to have software that is
able to handle even unexpected failures.


> I agree it would be better to have software which can do that.
> I have no good idea of  how that's done.  (And bear in mind that
> my XEmacs session dies about once a year, eg, once when NFS
> was acting flaky underneath it and a couple times because it
> couldn't handle something X threw at it. ;)

XEmacs is not AI.
 
> The best examples of resilent architectures I've seen come from
> genetic algorithms and other sorts of feedback training; eg,
> subsumptive architectures for robotics and evolvable hardware.
> There was a great article in CACM on programming an FPGA
> via GAs, in 1998/'99 (link, anyone?).  It worked quite well (as
> I recall) but pointed out the hard part about this approach is
> that it's hard to understand, and the result used various defects
> on the chip (part of the circuit wasn't used but the chip wouldn't
> work without it) which makes the result harder to mass produce.
> 
>                     Andrew
>                     ·····@dalkescientific.com

In any case, you're right, the main problem may be that it was
specified to blow up when an unhandled exception was raised...



-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Lying for having sex or lying for making war?  Trust US presidents :-(
From: Andrew Dalke
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <DN2lb.5398$S52.3443@newsread4.news.pas.earthlink.net>
Me:
> > How would you test it?  (Note that the above software wasn't
> > tested under realistic conditions; I assume in part because of cost.)

Pascal Bourguignon:
> In a simulator.  In any case, the  point is to have a software that is
> able to handle even unexpected failures.

Like I said, the existing code was not tested in a simulator.  Why
do you think some AI code *would* be tested for this same case?
(Actually, I believe that an AI would need to be trained in a
simulator, just like humans, but that it would require so much
testing as to preclude its use, for now, in rocket control systems.)

Nor have you given any sort of guideline on how to implement
this sort of AI in the first place.  Without it, you've just restated
the dream of many people over the last few centuries.  It's a
dream I would like to see happen, which is why I agreed with you.

> > couldn't handle something X threw at it. ;)

> XEmacs is not AI

Yup, which is why the smiley is there.  You said that C was
not the language to use (cf your perror/exit comment) and implied
that Ada wasn't either, so I assumed you had a more resilient
programming language in mind.  My response was to point
out that Emacs Lisp also crashes (rarely) given unexpected
errors and so imply that Lisp is not the answer.

Truly I believe that programming languages as we know
them are not the (direct) solution, hence my pointers to
evolvable hardware and similar techniques.

Even then, we still have a long way to go before they
can be used to control a rocket.  They require a lot of
training (just like people) and software simulators just
won't cut it.  The first "AI"s will replace those things
we find simple and commonplace [*] (because our brain
evolved to handle it), and not hard and rare.

                    Andrew
                    ·····@dalkescientific.com
[*]
In thinking of some examples, I remembered a passage in
one of Cordwainer Smith's stories.  In them, dogs, cats,
eagles, cows, and many other animals were artificially
endowed with intelligence and a human-like shape.
Turtles were bred for tasks which required long patience.
For example, one turtle was assigned the task of standing
by a door in case there was trouble, which he did for
100 years, without complaint.
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87vfqicfm5.fsf@thalassa.informatimago.com>
"Andrew Dalke" <······@mindspring.com> writes:
> [...]
> Nor have you given any sort of guideline on how to implement
> this sort of AI in the first place.  Without it, you've just restated
> the dream of many people over the last few centuries.  It's a
> dream I would like to see happen, which is why I agreed with you.
> [...]
> Truely I believe that programming languages as we know
> them are not the (direct) solution, hence my pointers to
> evolvable hardware and similar techniques.

You're right, I did not answer.  I think that what is missing in
classic software, and that ought to be present in AI software, is some
introspective control: having a process checking that the other
processes are live and progressing, and able to act to correct any
infinite loop, breakdown or deadlock.  Some hardware may help in
controlling this controlling software, like on the latest Macintosh:
they automatically restart when the system is hung.  And purely at the
hardware level, for a real-life system, you can't rely on only one
processor.
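Pascal's introspective control idea, a process that checks the others
are live and progressing, is essentially a software watchdog.  A minimal
Python sketch (purely illustrative; the class and names here are
invented, not from the thread):

```python
import time


class Watchdog:
    """Minimal introspective control: monitored processes report a
    heartbeat; anything silent for longer than `timeout` is flagged
    and a supplied restart action is invoked."""

    def __init__(self, timeout, restart):
        self.timeout = timeout
        self.restart = restart          # corrective action for a hung process
        self.last_beat = {}

    def beat(self, name, now=None):
        # Called by a live, progressing process.
        self.last_beat[name] = time.time() if now is None else now

    def check(self, now=None):
        # Called periodically by the controlling process.
        now = time.time() if now is None else now
        hung = [n for n, t in self.last_beat.items()
                if now - t > self.timeout]
        for name in hung:
            self.restart(name)          # correct the breakdown
            self.last_beat[name] = now  # give the restarted process time
        return hung
```

A real flight system would, as Pascal notes, back this up in hardware or
on a separate processor, so the watchdog cannot hang along with its wards.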

-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Garry Hodgson
Subject: Re: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <2003102211431066837384@k2.sage.att.com>
Pascal Bourguignon <····@thalassa.informatimago.com> wrote:

> You're  right, I  did not  answer.  I  think that  what is  missing in
> classic software, and that ought to be present in AI software, is some
> introspective  control:  having  a  process checking  that  the  other
> processes are  live and  progressing, and able  to act to  correct any
> infinite loop,  break down  or dead-lock.  

so assume this AI software was running on Ariane 5, and the same
condition occurs.  based on the previously referenced design 
assumptions, it is told that there's been a hardware failure, and that 
numerical calculations can no longer be trusted.  how does it cope 
with this?

> Some  hardware may  help in
> controling  this controling  software, like  on the  latest Macintosh:
> they automatically restart when the system is hung. 

in this case, a restart would cause the same calculations to occur,
and the same failure to be reported.

> And purely at the
> hardware level,  for a real  life system, you  can't rely on  only one
> processor.

absolutely right.  though, in this case, this wouldn't have helped either.

the fatal error was a process error, and it occurred long before launch.

----
Garry Hodgson, Technology Consultant, AT&T Labs

Be happy for this moment.
This moment is your life.
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87he21c88w.fsf@thalassa.informatimago.com>
Garry Hodgson <·····@sage.att.com> writes:

> Pascal Bourguignon <····@thalassa.informatimago.com> wrote:
> 
> > You're  right, I  did not  answer.  I  think that  what is  missing in
> > classic software, and that ought to be present in AI software, is some
> > introspective  control:  having  a  process checking  that  the  other
> > processes are  live and  progressing, and able  to act to  correct any
> > infinite loop,  break down  or dead-lock.  
> 
> so assume this AI software was running on Ariane 5, and the same
> condition occurs.  based on the previously referenced design 
> assumptions, it is told that there's been a hardware failure, and that 
> numerical calculations can no longer be trusted.  how does it cope 
> with this?

I just read yesterday an old paper by Sussman about how they designed
a Lisp on a chip, including the garbage collector and the eval
function.  Strangely enough, it did not include any ALU (only a test
for zero and an incrementer, for address scanning).

You can implement an eval without arithmetic, and you can implement a
theorem prover on top of it, still without arithmetic.  You can do a
great deal of thinking without any arithmetic...


> > Some  hardware may  help in
> > controling  this controling  software, like  on the  latest Macintosh:
> > they automatically restart when the system is hung. 
> 
> in this case, a restart would cause the same calculations to occur,
> and the same failure to be reported.

In this case, since the problem was not in the supposed AI controlling
agent, there would have been no restart.


> > And purely at the
> > hardware level,  for a real  life system, you  can't rely on  only one
> > processor.
> 
> absolutely right.  though, in this case, this wouldn't have helped either.
> the fatal error was a process error, and it occurred long before launch.

I think it would have helped.  For example, an architecture like the
Shuttle's, where there are five differently programmed computers,
would have helped, because at least one of the computers would not
have had the Ariane-4 module.
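The Shuttle arrangement Pascal refers to is N-version programming:
several independently developed implementations run in parallel and
their outputs are voted on.  A hypothetical sketch of just the voting
step, in Python:

```python
from collections import Counter


def majority_vote(results):
    """N-version programming in miniature: given outputs from
    independently developed modules, accept the majority answer and
    treat the absence of a majority as a detected fault."""
    answer, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: redundant modules disagree")
    return answer
```

With five versions, a defect confined to one module (say, the one
carrying the Ariane-4 alignment code) is simply outvoted; disagreement
without a majority is itself a detected fault.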

-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn7706$s9o$1@news.oberberg.net>
Pascal Bourguignon wrote:
> [...] For example, an architecture like
> the  Shuttle's where  there are  five computer  differently programmed
> would have  helped, because  at least one  of the computers  would not
> have had the Ariane-4 module.

Even the Ariane team is working under budget constraints. Obviously, in 
this case, the budget didn't allow a re-check of the SRI design wrt. 
Ariane-5 specifications, much less programming the same software five(!) 
times over.

Besides, programming the same software multiple times would have helped 
regardless of whether you're doing it with an AI or traditionally. I 
still don't see how AI could have helped prevent the Ariane-5 crash. As 
far as I have seen, any advances in making chips or programs smarter 
have consistently been offset by higher testing efforts: you still have 
to formally specify what the system is supposed to do, and then test 
against that specification.
Actually, AI wouldn't have helped in the least bit here: the 
specification was wrong, so even an AI module, at whatever 
sophistication level, wouldn't have worked.

The only difference is that AI might allow people to write higher-level 
specifications. I.e. something like "the rocket must be stable" instead 
of "the rocket must not deviate more than 12.4 degrees from the 
vertical"... but even "the rocket must be stable" would have to be 
broken down into much more technical terms, with leeway for much the 
same design and specification errors as those that caused the Ariane-5 
software to lose control.

Regards,
Jo
From: Andrew Dalke
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <saIlb.1534$I04.40@newsread4.news.pas.earthlink.net>
Pascal Bourguignon:
> You can  implement an  eval without arithmetic  and you  can implement
> theorem prover above it still  without arithmetic.  You can still do a
> great deal of thinking without any arithmetic...

But theorem proving and arithmetic are isomorphic.  TP-> arithmetic
is obvious.  Arithmetic -> TP is through Godel.

> I think it would have  been helped.  For example, an architecture like
> the  Shuttle's where  there are  five computer  differently programmed
> would have  helped, because  at least one  of the computers  would not
> have had the Ariane-4 module.

Manned space projects get a lot more money for safety checks.
If a few rockets blow up for testing then it's still cheaper than
quintupling the development costs.

                    Andrew
                    ·····@dalkescientific.com
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8b4f$cl4$1@news.oberberg.net>
Andrew Dalke wrote:
> If a few rockets blow up for testing then it's still cheaper than
> quintupling the development costs.

Not quite - that was a loss of 500 million dollars. I don't know what 
the software development costs were, so I'm just guessing here, but I 
think it's relatively safe to assume a doubly redundant system would 
already have paid off if it had caught the problem.

The point is that no amount of software technology would have caught the 
problem if the specifications are wrong. I think it would have been more 
successful if there had been some automated specification checking, 
which is safely in the area of theorem proving - which has interesting 
connections to static type checking but is otherwise unrelated to 
programming.

Regards,
Jo
From: Andrew Dalke
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <cDWlb.3375$I04.1198@newsread4.news.pas.earthlink.net>
Joachim Durchholz
> Not quite - that was a loss of 500 million dollars. I don't know what
> the software development costs were, so I'm just guessing here, but I
> think it's relatively safe to assume a doubly redundant system would
> already have paid off if it had caught the problem.

Since the Mars rover mission a few years ago cost only about $250million,
I'm going to assume that you included payload cost.  Here are some
relevant references I found [after sig], which suggest a price per rocket
of well under $100 million and the cost of the "four uninsured scientific
satellites" made it be about $500 million.

It used to be that rockets needed a lot of real-world tests before
people would stick expensive payloads on them.  For a while, dead
weight was used, but people like amateur hams got permission to
put "cheap" satellites in its place, and as the reliability increased,
more and more people were willing to take chances with unproven
rockets.

So there's an interesting tradeoff here between time spent on live
testing and the chance it will blow up.  Suppose Ariane decided
to launch with just bricks as a payload.  Then they would have
been out ~$75 million.  But suppose they could convince someone
to take a 10% chance of failure to launch a $100 million satellite
for half price, at $40 million.  Statistically speaking, that's a good
deal.  As long as it really is a 10% chance.

(The satellites were uninsured, which suggests that this was
indeed the case.)

However, it seems that 4 of the first 14 missions failed, making
about a 30% failure rate.  It also doesn't appear that all of those
were caused by software failures; the 4th was in a "cooling
circuit."
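The tradeoff above can be made concrete.  Taking the post's figures at
face value ("half price, at $40 million" implies a full fare of about
$80 million; that full fare is my assumption, not stated in the post),
the customer's expected cost works out as:

```python
def expected_cost(fare, satellite_value, p_failure):
    # Total expected cost to the customer: the launch fare is paid
    # either way; with probability p_failure the uninsured satellite
    # is lost on top of it.
    return fare + p_failure * satellite_value


FULL_PRICE = 80  # $M, assumed from "half price, at $40 million"

cheap_10 = expected_cost(40, 100, 0.10)  # ~50: clearly beats $80M
cheap_30 = expected_cost(40, 100, 0.30)  # ~70: still cheaper, but barely
```

At the advertised 10% risk the discount is clearly worth it; at the
observed ~30% failure rate the margin nearly vanishes, which is the
"as long as it really is a 10% chance" caveat.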

> The point is that no amount of software technology would have caught the
> problem if the specifications are wrong.

I agree.

                    Andrew
                    ·····@dalkescientific.com


http://www.namibian.com.na/2002/june/techtalk/02651A180A.html
] The Ariane 44L rocket equipped with four liquid strap-on boosters --
] the most powerful in the Ariane-4 series --
 ...
] Specialists estimated the cost of the satellite, launch and insurance at
] more than $250 million.


http://www.cnn.com/2000/TECH/space/12/20/rocket.ariane.reut/
] Western Europe's new generation Ariane-5 rocket has placed three
] satellites into space
 ...
] Experts have estimated the cost of the [ASTRA 2D] satellite,
] launch and insurance at over $85 million
  ...
] The estimated cost of the GE-8 satellite, launch and insurance is
] over $125 million.

] But Ariane-5's career began with a spectacular failure during its
] maiden test launch in June 1996, exploding 37 seconds after
] lift-off and sending four uninsured scientific satellites worth $500
] million plunging into mangrove swamps on French Guiana's coast.

http://www.centennialofflight.gov/essay/SPACEFLIGHT/ariane/SP42.htm
] After Arianespace engineers rewrote the rocket's control software,
] the second Ariane-5 launch successfully took place on October 30,
] 1997. More launches followed and the rocket soon entered full commercial
] service, although it suffered another failure on its tenth launch in
] July 2001.
] Ariane-5 joined the Russian Proton, American Titan IV and Japanese
] H-IIA as the most powerful rockets in service. Ariane-5 initially had
] a very high vehicle cost, but Arianespace mounted an aggressive campaign
] to significantly reduce this cost and make the rocket more cost-effective.
] The company also planned further upgrades to the Ariane-5 to enable it
] to remain competitive against a growing number of competitors.

http://www.chron.com/cgi-bin/auth/story.mpl/content/interactive/space/news/99/990824.html
] Each launch of Japan's flagship H-2 rocket to place a satellite into
] geostationary orbit costs close to 19 billion yen, about double the cost
] of competitors such as the European Space Agency's Ariane rocket.
(19 billion yen ~ $190 million => ~$100 million for geostationary orbit
on Ariane)

] Part of the six-billion European-Currency-Unit ($6.28 billion U.S.)
] cost of the Ariane 5 project went toward construction of new facilities
] at ESA's Kourou, French Guiana launch complex

http://www.rte.ie/news/2002/1212/satellite.html
] It is the fourth failure of an Ariane-5 in its 14-mission history, and is
] being seen as a major setback for the European space programme.
See also
http://www.dw-world.de/english/0,3367,1446_A_713425,00.html
] the problem occurred in the cooling circuit of one of the rocket's main
] engines. A change in engine speed around 180 seconds after take-off
] caused the launcher to "demonstrate erratic behaviour".
From: John Atwood
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn4iot$4tv$1@cvpjaws03.dhcp.cv.hp.com>
Andrew Dalke <······@mindspring.com> wrote:

>The best examples of resilent architectures I've seen come from
>genetic algorithms and other sorts of feedback training; eg,
>subsumptive architectures for robotics and evolvable hardware.
>There was a great article in CACM on programming an FPGA
>via GAs, in 1998/'99 (link, anyone?).  It worked quite well (as
>I recall) but pointed out the hard part about this approach is
>that it's hard to understand, and the result used various defects
>on the chip (part of the circuit wasn't used but the chip wouldn't
>work without it) which makes the result harder to mass produce.

something along these lines?
  http://www.cogs.susx.ac.uk/users/adrianth/cacm99/node3.html


John
From: Steve Schafer
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <hdu8pvg2gn3nr5nbu1bapmvtdgjfqpdmkd@4ax.com>
On 20 Oct 2003 22:08:30 +0200, Pascal Bourguignon
<····@thalassa.informatimago.com> wrote:

>AFAIK, while this parameter was out of range, there was no instability
>and the rocket was not uncontrolable.

That's perfectly true, but also perfectly irrelevant. When your
carefully designed software has just told you that your rocket, which,
you may recall, is traveling at several thousand metres per second, has
just entered a "can't possibly happen" state, you don't exactly have a
lot of time in which to analyze all of the conflicting information and
decide which to trust and which not to trust. Whether that sort of
decision-making is done by engineers on the ground or by human pilots or
by some as yet undesigned intelligent flight control system, the answer
is the same: Do the safe thing first, and then try to figure out what
happened.

All well-posed problems have boundary conditions, and the solutions to
those problems are bounded as well. No matter what the problem or its
means of solution, a boundary is there, and if you somehow cross that
boundary, you're toast. In particular, the difficulty with AI systems is
that while they can certainly enlarge the boundary, they also tend to
make it fuzzier and less predictable, which means that testing becomes
much less reliable. There are numerous examples where human operators
have done the "sensible" thing, with catastrophic consequences.

>My point.

Well, actually, no. I assure you that my point is very different from
yours.

>This "can't possibly happen" failure did happen, so clearly it was not
>a "can't  possibly happen" physically, which means that the problem was 
>with the software.

No, it still was a "can't possibly happen" scenario, from the point of
view of the designed solution. And there was nothing wrong with the
software. The difficulty arose because the solution for one problem was
applied to a different problem (i.e., the boundary was crossed).

>it would  be better to have smarter software that can try and handle
>UNEXPECTED error situations

I think you're failing to grasp the enormity of the concept of "can't
possibly happen." There's a big difference between merely "unexpected"
and "can't possibly happen." "Unexpected" most often means that you
haven't sufficiently analyzed the situation. "Can't possibly happen," on
the other hand, means that you've analyzed the situation and determined
that the scenario is outside the realm of physical or logical
possibility. There is simply no meaningful means of recovery from a
"can't possibly happen" scenario. No matter how smart your software is,
there will be "can't possibly happen" scenarios outside the boundary,
and your software is going to have to shut down.

>I would feel safer in an AI rocket.

What frightens me most is that I know that there are engineers working
on safety-critical systems that feel the same way. By all means, make
your flight control system as sophisticated and intelligent as you want,
but don't forget to include a simple, reliable, dumber-than-dirt
ejection system that "can't possibly fail" when the "can't possibly
happen" scenario happens.

Let me try to summarize the philosophical differences here: First of
all, I wholeheartedly agree that a more sophisticated software system
_may_ have prevented the destruction of the rocket. Even so, I think the
likelihood of that is rather small. (For some insight into why I think
so, you might want to take a look at Henry Petroski's _To Engineer is
Human_.) Where we differ is how much impact we believe that more
sophisticated software would have on the problem. I get the impression
that you believe that an AI-based system would drastically reduce
(perhaps even eliminate?) the "can't possibly happen" scenario. I, on
the other hand, believe that even the most sophisticated system enlarges
the boundary of the solution space by only a very small amount--the area
occupied by "can't possibly happen" scenarios remains far greater than
that occupied by "software works correctly and saves the rocket"
scenarios.

-Steve
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn31t4$161$1@news.oberberg.net>
Pascal Bourguignon wrote:
> AFAIK, while this  parameter was out  of range,  there was  no
> instability  and the rocket was not uncontrolable.

Actually, the rocket had started correcting its orientation according to
the bogus data, which resulted in uncontrollable turning. The rocket
would have broken into parts in an uncontrollable manner, so it was
blown up.
(The human operator decided to press the emergency self-destruct button
seconds before the control software would have initiated self destruct.)

> My point.  This "can't possibly happen" failure did happen, so
> clearly it was not a "can't  possibly happen" physically, which means
> that the problem was with the software. We know it, but what I'm
> saying is that a smarter software could have deduced it on fly.

No. The smartest software will not save you from human error. It was a
specification error.
The only way to detect this error (apart from more testing) would have
been to model the physics of the rocket, in software, and either verify
the flight control software against the rocket model or to test run the
whole thing in software. (I guess neither of these options would have
been cheaper than the simple test runs that were deliberately omitted,
probably on the grounds of "we /know/ it works, it worked in the Ariane 4".)

> We  all agree that  it would  be better  to have  a perfect  world
> and perfect,  bug-free, software.   But  since that's  not  the case,
> I'm saying that instead of having software that behaves like simple
> unix C tools, where  as soon  as there is  an unexpected situation,
> it calls perror() and exit(), it would  be better to have smarter
> software that can  try and  handle UNEXPECTED  error situations,
> including  its own bugs.  I would feel safer in an AI rocket.

This all may be true, but you're solving problems that didn't cause the
Ariane crash.

Regards,
Jo
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn19dt$7qc$1@news.oberberg.net>
Pascal Bourguignon wrote:
> The post at that url writes  about the culture of the Ariane team, but
> I would say  that it's even a more fundamental  problem of our culture
> in general: we build brittle  stuff with very little margin for error.
> Granted, it would  be costly to increase physical  margin,

Which is exactly why the margin is kept as small as possible.
Occasionally, it will be /too/ small.

Anybody seen a car model series, every one working perfectly from the 
first one?
From what I read, every new model has its small quirks and
"near-perfect" gotchas. The difference is just that you're not allowed 
to do that in expensive things like rockets (which is, among many other 
things, one of the reasons why space vehicles and aircraft are so d*mn 
expensive: if something goes wrong, you can't just drive them on the 
nearest parking lot and wait for maintenance and repair...)

> but in this
> case, adopting a point of  view more like _robotics_ could help.  Even
> in case of hardware failure, there's  no reason to shut down the mind;
> just go on with what you have.

As Steve wrote, letting a rocket carry on regardless isn't a good idea 
in the general case: it would be a major disaster if it made it to the 
next coast and crashed into the next town. Heck, it would be enough if 
the fuel tanks leaked, and the whole fuel rained down on a ship 
somewhere in the Atlantic - most rocket fuels are toxic.

Regards,
Jo
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn0gf9$qqt$1@news.oberberg.net>
Kenny Tilton wrote:
> 
> Dennis Lee Bieber wrote:
> 
>>         Short version: The software performed correctly, to 
>> specification (including the failure mode) -- ON THE ARIANE 4 FOR 
>> WHICH IT WAS DESIGNED.
> 
> Nonsense. From: http://www.sp.ph.ic.ac.uk/Cluster/report.html
> 
> "The internal SRI software exception was caused during execution of a 
> data conversion from 64-bit floating point to 16-bit signed integer 
> value. The floating point number which was converted had a value greater 
> than what could be represented by a 16-bit signed integer. This resulted 
> in an Operand Error. The data conversion instructions (in Ada code) were 
> not protected from causing an Operand Error, although other conversions 
> of comparable variables in the same place in the code were protected. 
> The error occurred in a part of the software that only performs 
> alignment of the strap-down inertial platform. This software module 
> computes meaningful results only before lift-off. As soon as the 
> launcher lifts off, this function serves no purpose."

That's the sequence of events that led to the crash.
Why this sequence could happen though it shouldn't have happened is 
exactly how Dennis wrote it: the conversion caused an exception because 
the Ariane-5 had a tilt angle beyond what the SRI was designed for.

> What happened (aside from an unnecessary chunk of code running 
> increasing risk to no good end) is that the extra power of the A5 caused 
> oscillations greater than those seen in the A4. Those greater 
> oscillations took the 64-bit float beyond what would fit in the 16-bit 
> int. kablam. Operand Error. This is not a system saying "whoa, out of 
> range, abort".
> 
> As for Lisp not helping:
> 
>  > most-positive-fixnum ;; constant provided by implementation
> 536870911
> 
>  > (1+ most-positive-fixnum) ;; overflow fixnum type and...
> 536870912
> 
>  > (type-of (1+ most-positive-fixnum)) ;; ...auto bignum type
> BIGNUM
> 
>  > (round most-positive-single-float) ;; or floor or ceiling
> 340282346638528859811704183484516925440
> 0.0
> 
>  > (type-of *)
> BIGNUM

Lisp might not have helped even in that case.
1. The SRI was designed for an angle that would have fit into a 16-bit 
operand. If the exception hadn't been thrown, some hardware might still 
have malfunctioned.
2. I'm pretty sure there's a reason (other than saving space) for that 
conversion to 16 bits. I suspect it was to be fed into some hardware 
register... in which case all bignums of the world aren't going to help.

Ariane 5 is mostly a lesson in management errors. Software methodology 
might have helped, but just replacing the programming language would 
have been insufficient (as usual - languages can make proper testing 
easier or harder, but the trade-off will always be present).

Regards,
Jo
From: Markus Mottl
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn1017$m39$1@bird.wu-wien.ac.at>
In comp.lang.functional Kenny Tilton <·······@nyc.rr.com> wrote:
> Dennis Lee Bieber wrote:
>>         Short version: The software performed correctly, to specification 
>> (including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS 
>> DESIGNED.

> Nonsense. From: http://www.sp.ph.ic.ac.uk/Cluster/report.html

Dennis is right: it was indeed a specification problem. AFAIK, the coder
had actually even proved formally that the exception could not arise
with the spec of Ariane 4. Lisp code, too, can suddenly raise unexpected
exceptions. The default behaviour of the system was to abort the mission
for safety reasons by blasting the rocket. This wasn't justified in this
case, but one is always more clever after the event...

> "supposed to" fail? chya.

Indeed. Values this extreme were considered impossible on Ariane 4 and
taken as indication of such a serious failure that it would justify
aborting the mission.

> This was nothing more than an unhandled exception crashing the system
> and its identical backup.

Depends on what you mean by "crash": it certainly didn't segfault. It
just realized that something happened that wasn't supposed to happen
and reacted AS REQUIRED.

> Other conversions were protected so they could handle things
> intelligently, this bad boy went unguarded.

Bad, indeed, but absolutely safe with regard to the spec of Ariane 4.

> Note also that the code functionality was pre-ignition 
> only, so there is no way they were thinking that a cool way to abort the 
> flight would be to leave a program exception unhandled.

This is a serious design error, not a problem of the programming language.

> What happened (aside from an unnecessary chunk of code running 
> increasing risk to no good end)

Again, it's a design error.

> is that the extra power of the A5 caused 
> oscillations greater than those seen in the A4. Those greater 
> oscillations took the 64-bit float beyond what would fit in the 16-bit 
> int. kablam. Operand Error. This is not a system saying "whoa, out of 
> range, abort".

Well, the system was indeed programmed to say "whoa, out of range, abort".
A design error.

> As for Lisp not helping:

There is basically no difference between checking the type of a value
dynamically for validity and catching exceptions that get raised on
violations of certain constraints. One can forget to do both or react
to those events in a stupid way (or prove in both cases that the check /
exception handling is unnecessary given the spec).
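That equivalence, an explicit range check versus a caught exception, can
be sketched with the failure's actual shape: a 64-bit float forced into
a signed 16-bit integer.  (Python for illustration only; the flight code
was Ada, and these function names are invented.)

```python
import struct

INT16_MIN, INT16_MAX = -2**15, 2**15 - 1


def convert_guarded(x):
    # The "protected" style: check the constraint explicitly and
    # decide at the call site what an out-of-range value means.
    n = int(x)
    if not INT16_MIN <= n <= INT16_MAX:
        raise OverflowError("value exceeds signed 16-bit range")
    return n


def convert_unguarded(x):
    # The unprotected style: let the low-level conversion blow up,
    # analogous to the Ada Operand Error on the unguarded path.
    return struct.unpack('>h', struct.pack('>h', int(x)))[0]
```

Both versions fail on the same inputs; the only difference is whether
the out-of-range case was anticipated and handled where it occurs.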

Note that I am not defending Ada in any way or arguing against FPLs: in
fact, being an FPL advocate myself, I do think that FPLs (including Lisp)
have an edge when it comes to writing safe code. But the Ariane example just
doesn't support this claim. It was an absolutely horrible management
mistake to not check old code for compliance with the new spec. End
of story...

Regards,
Markus Mottl

--
Markus Mottl          http://www.oefai.at/~markus          ······@oefai.at
From: Terry Reedy
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <QI2dnV8AWb4WvAmiRVn-tw@comcast.com>
"Markus Mottl" <······@oefai.at> wrote in message
·················@bird.wu-wien.ac.at...
> Note that I am not defending ADA in any way or arguing against FPLs: in
> fact, being an FPL-advocate myself I do think that FPLs (including Lisp)
> have an edge what concerns writing safe code. But the Ariane-example just
> doesn't support this claim. It was an absolutely horrible management
> mistake to not check old code for compliance with the new spec. End
> of story...

The investigating commission reported about 5 errors that, in series,
allowed the disaster.  As I remember, another non-programming-language
one was in mockup testing.  The particular black box, known to be
'good', was not included, but just simulated according to its expected
behavior.  If it had been included, and a flight simulated in real
time with appropriate tilting and shaking, it would probably have
given the spurious abort message that it did in the real flight.

TJR
From: Kenny Tilton
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <OAUkb.12491$pT1.1778@twister.nyc.rr.com>
Markus Mottl wrote:

> In comp.lang.functional Kenny Tilton <·······@nyc.rr.com> wrote:
> 
>>Dennis Lee Bieber wrote:
>>
>>>        Short version: The software performed correctly, to specification 
>>>(including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS 
>>>DESIGNED.
> 
> 
>>Nonsense. From: http://www.sp.ph.ic.ac.uk/Cluster/report.html
> 
> 
> Dennis is right: it was indeed a specification problem. AFAIK, the coder
> had actually even proved formally that the exception could not arise
> with the spec of Ariane 4. Lisp code, too, can suddenly raise unexpected
> exceptions. The default behaviour of the system was to abort the mission
> for safety reasons by blasting the rocket. This wasn't justified in this
> case, but one is always more clever after the event...
> 
> 
>>"supposed to" fail? chya.
> 
> 
> Indeed. Values this extreme were considered impossible on Ariane 4 and
> taken as indication of such a serious failure that it would justify
> aborting the mission.

Yes, I have acknowledged in another post that I was completely wrong in 
my guesswork: everything was intentional and signed-off on by many.

A small side-note: as I now understand things, the idea was not to abort 
the mission, but to bring down the system. The thinking was that the 
error would signify a hardware failure, and with any luck shutting down 
would mean either loss of the backup system (if that was where the HW 
fault occurred) or correctly falling back on the still-functioning 
backup system if the supposed HW fault had been in the primary unit. ie, 
an HW fault would likely be isolated to one unit.

kenny


-- 
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
  http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <myfirstname.mylastname-2010031017240001@k-137-79-50-101.jpl.nasa.gov>
In article <····················@twister.nyc.rr.com>, Kenny Tilton
<·······@nyc.rr.com> wrote:

[Discussing the Ariane failure]

> A small side-note: as I now understand things, the idea was not to abort 
> the mission, but to bring down the system. The thinking was that the 
> error would signify a hardware failure, and with any luck shutting down 
> would mean either loss of the backup system (if that was where the HW 
> fault occurred) or correctly falling back on the still-functioning 
> backup system if the supposed HW fault had been in the primary unit. ie, 
> an HW fault would likely be isolated to one unit.

That's right.  This is why hardware folks spend a lot of time thinking
about common mode failures, and why software folks could learn a thing or
two from the hardware folks in this regard.  

E.
From: CezaryB
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bo61a8$ptp$1@nemesis.news.tpi.pl>
On 10/20/2003 5:49 AM, Kenny Tilton wrote:
> 
> 
> Dennis Lee Bieber wrote:
> 
>>         Just check the archives for comp.lang.ada and Ariane-5.
>>
>>         Short version: The software performed correctly, to 
>> specification (including the failure mode) -- ON THE ARIANE 4 FOR 
>> WHICH IT WAS DESIGNED.
> 
> 
> Nonsense. From: http://www.sp.ph.ic.ac.uk/Cluster/report.html
> 
> "The internal SRI software exception was caused during execution of a 
> data conversion from 64-bit floating point to 16-bit signed integer 
[...]


> 
> 
>>         LISP wouldn't have helped -- since the A-4 code was supposed 
>> to fail with values that large... And would have done the same 
>> thing if plugged in the A-5. (Or are you proposing that the A-4 code 
>> is supposed to ignore a performance requirement?)
> 
> "supposed to" fail? chya. This was nothing more than an unhandled 
> exception crashing the system and its identical backup. Other conversions 
> were protected so they could handle things intelligently, this bad boy 
> went unguarded. Note also that the code functionality was pre-ignition 
> only, so there is no way they were thinking that a cool way to abort the 
> flight would be to leave a program exception unhandled.
> 
> What happened (aside from an unnecessary chunk of code running 
> increasing risk to no good end) is that the extra power of the A5 caused 
> oscillations greater than those seen in the A4. Those greater 
> oscillations took the 64-bit float beyond what would fit in the 16-bit 
> int. kablam. Operand Error. This is not a system saying "whoa, out of 
> range, abort".


"To determine the vulnerability of unprotected code, an analysis was performed 
on every operation which could give rise to an exception, including an Operand 
Error. [...] It is important to note that the decision to protect certain 
variables but not others was taken jointly by project partners at several 
contractual levels."

"There is no evidence that any trajectory data were used to analyse the 
behaviour of the unprotected variables, and it is even more important to note 
that it was jointly agreed not to include the Ariane 5 trajectory data in the 
SRI requirements and specification."


"It was the decision to cease the processor operation which finally proved 
fatal. Restart is not feasible since attitude is too difficult to re-calculate 
after a processor shutdown; therefore the Inertial Reference System becomes 
useless. The reason behind this drastic action lies in the culture within the 
Ariane programme of only addressing random hardware failures. From this point of 
view exception - or error - handling mechanisms are designed for a random 
hardware failure which can quite rationally be handled by a backup system."



> As for Lisp not helping:

"It has been stated to the Board that not all the conversions were protected 
because a maximum workload target of 80% had been set for the SRI computer"



CB
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <871xt8hbfz.fsf@thalassa.informatimago.com>
Dennis Lee Bieber <·······@ix.netcom.com> writes:
>         LISP wouldn't have helped -- since the A-4 code was supposed to 
> fail with values that large... And would have done the same thing if 
> plugged in the A-5. (Or are you proposing that the A-4 code is supposed 
> to ignore a performance requirement?)

Or perhaps it would have helped since LISP sources would have included
a little expert system that would have asked itself: "Do I really want
to commit  suicide now?  Let's see,  everything looks ok  but this old
code from A4...  I guess it's got Alzheimer, I'll ignore it for now".


-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Lying for having sex or lying for making war?  Trust US presidents :-(
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <Olxlb.329829$mp.272035@rwcrnsc51.ops.asp.att.net>
"Scott McIntire" <····················@comcast.net> wrote in message ····························@sccrnsc01...
> It seems to me that the Agency would have fared better if they just used
> Lisp - which has bignums - and relied more on regression suites and less on
> the belief that static type checking systems would save the day.

I find that an odd conclusion. Given that the cost of bugs is so high
(especially in the cited case) I don't see a good reason for discarding
*anything* that leads to better correctness. Yes, bignums is a good
idea: overflow bugs in this day and age are as bad as C-style buffer
overruns. Why work with a language that allows them when there
are languages that don't?
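Marshall's bignum point can be illustrated with a small Python sketch
(the variable name merely echoes the Ariane report; the code is mine):
Python integers are unbounded, so the arithmetic itself cannot overflow,
and the failure mode only appears when the value is forced into a
fixed-width representation, emulated here with struct.

```python
import struct

bias = 2 ** 40            # far beyond 16-bit range, but a perfectly good bignum
assert bias + 1 > bias    # no silent wrap-around in the arithmetic itself

# The overflow only appears at the fixed-width boundary:
try:
    struct.pack('>h', bias)   # '>h' is a big-endian signed 16-bit integer
    overflowed = False
except struct.error:
    overflowed = True
```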

But why should more regression testing mean less static type checking?
Both are useful. Both catch bugs. Why ditch one for the other?


Marshall
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn687n$l6u$1@f1node01.rhrz.uni-bonn.de>
Marshall Spight wrote:
> "Scott McIntire" <····················@comcast.net> wrote in message ····························@sccrnsc01...
> 
>>It seems to me that the Agency would have fared better if they just used
>>Lisp - which has bignums - and relied more on regression suites and less on
>>the belief that static type checking systems would save the day.
> 
> 
> I find that an odd conclusion. Given that the cost of bugs is so high
> (especially in the cited case) I don't see a good reason for discarding
> *anything* that leads to better correctness. Yes, bignums is a good
> idea: overflow bugs in this day and age are as bad as C-style buffer
> overruns. Why work with a language that allows them when there
> are languages that don't?
> 
> But why should more regression testing mean less static type checking?
> Both are useful. Both catch bugs. Why ditch one for the other?

...because static type systems work by reducing the expressive power of 
a language. It can't be any different for a strict static type system. 
You can't solve the halting problem in a general-purpose language.

This means that eventually you might need to work around language 
restrictions, and this introduces new potential sources for bugs.

(Now you could argue that current sophisticated type systems cover 90% 
of all cases and that this is good enough, but then I would ask you for 
empirical studies that back this claim. ;)

I think soft typing is a good compromise, because it is a mere add-on to 
an otherwise dynamically typed language, and it allows programmers to 
override the decisions of the static type system when they know better.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1fzhlqc72.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> ...because static type systems work by reducing the expressive power
> of a language.

It depends a whole lot on what you consider "expressive".  In my book,
static type systems (at least some of them) work by increasing the
expressive power of the language because they let me express certain
intended invariants in a way that a compiler can check (and enforce!)
statically, thereby expediting the discovery of problems by shortening
the edit-compile-run-debug cycle.

> (Now you could argue that current sophisticated type systems cover 90%
> of all cases and that this is good enough, but then I would ask you
> for empirical studies that back this claim. ;)

In my own experience they seem to cover at least 99%.

(And where are _your_ empirical studies which show that "working around
language restrictions increases the potential for bugs"?)

Matthias
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn780t$rnj$1@newsreader2.netcologne.de>
Matthias Blume wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>...because static type systems work by reducing the expressive power
>>of a language.
> 
> 
> It depends a whole lot on what you consider "expressive".  In my book,
> static type systems (at least some of them) work by increasing the
> expressive power of the language because they let me express certain
> intended invariants in a way that a compiler can check (and enforce!)
> statically, thereby expediting the discovery of problems by shortening
> the edit-compile-run-debug cycle.

The set of programs that are useful but cannot be checked by a static 
type system is by definition bigger than the set of useful programs that 
can be statically checked. So dynamically typed languages allow me to 
express more useful programs than statically typed languages.
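A minimal Python sketch of the kind of program Pascal presumably has in
mind (the example is mine, not his): a function whose result type depends
on its input, which a classical Hindley-Milner-style checker rejects
unless you introduce an explicit sum type.

```python
def parse(token):
    # Returns an int for numeric input and leaves other tokens as strings.
    # Under a simple static discipline this function has no single result
    # type; a dynamically typed language runs it without ceremony.
    if token.isdigit():
        return int(token)
    return token
```

In ML or Haskell the same behaviour needs an explicit `Either`-style
wrapper -- a small cost in this toy case, but the kind of workaround
being alluded to.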

>>(Now you could argue that current sophisticated type systems cover 90%
>>of all cases and that this is good enough, but then I would ask you
>>for empirical studies that back this claim. ;)
> 
> In my own experience they seem to cover at least 99%.

I don't question that. If this works well for you, keep it up. ;)

> (And where are _your_ empirical studies which show that "working around
> language restrictions increases the potential for bugs"?)

I don't need a study for that statement because it's a simple argument: 
if the language doesn't allow me to express something in a direct way, 
but requires me to write considerably more code, then I have considerably 
more opportunities for making mistakes.


Pascal
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m2ismg7l4h.fsf@hanabi-air.shimizu.blume>
Pascal Costanza <········@web.de> writes:

> The set of programs that are useful but cannot be checked by a static
> type system is by definition bigger than the set of useful programs
> that can be statically checked.

By whose definition?  What *is* your definition of "useful"?  It is
clear to me that static typing improves maintainability, scalability,
and helps with the overall design of software.  (At least that's my
personal experience, and as others can attest, I do have reasonably
extensive experience either way.)

A 100,000 line program in an untyped language is useless to me if I am
trying to make modifications -- unless it is written in a highly
stylized way which is extensively documented (and which usually means
that you could have captured this style in static types).  So under
this definition of "useful" it may very well be that there are fewer
programs which are useful under dynamic typing than there are under
(modern) static typing.

> So dynamically typed languages allow
> me to express more useful programs than statically typed languages.

There are also programs which I cannot express at all in a purely
dynamically typed language.  (By "program" I mean not only the executable
code itself but also the things that I know about this code.)
Those are the programs which are protected against certain bad things
from happening without having to do dynamic tests to that effect
themselves.  (Some of these "bad things" are, in fact, not dynamically
testable at all.)

> I don't question that. If this works well for you, keep it up. ;)

Don't fear.  I will.

> > (And where are _your_ empirical studies which show that "working around
> > language restrictions increases the potential for bugs"?)
> 
> I don't need a study for that statement because it's a simple
> argument: if the language doesn't allow me to express something in a
> direct way, but requires me to write considerably more code then I
> have considerably more opportunities for making mistakes.

This assumes that there is a monotone function which maps token count
to error-proneness and that the latter depends on nothing else.  This
is a highly dubious assumption.  In many cases the few extra tokens
you write are exactly the ones that let the compiler verify that your
thinking process was accurate (to the degree that this fact is
captured by types).  If you get them wrong *or* if you got the
original code wrong, then the compiler can tell you.  Without the
extra tokens, the compiler is helpless in this regard.

To make a (not so far-fetched, btw :) analogy: Consider logical
statements and formal proofs. Making a logical statement is easy and
can be very short.  It is also easy to make mistakes without noticing;
after all saying something that is false while still believing it to
be true is extremely easy.  Just by looking at the statement it is
also often hard to tell whether the statement is right.  In fact,
computers have a hard time with this task, too.  Theorem-proving is
hard.
On the other hand, writing down the statement with a formal proof is
impossible to get wrong without anyone noticing because checking the
proof for validity is trivial compared to coming up with it in the
first place.  So even though writing the statement with a proof seems
harder, once you have done it and it passes the proof checker you can
rest assured that you got it right.  The longer "program" will have fewer
"bugs" on average.

Matthias
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <u160i905.fsf@comcast.net>
Matthias Blume <····@my.address.elsewhere> writes:

> Pascal Costanza <········@web.de> writes:
>
>> The set of programs that are useful but cannot be checked by a static
>> type system is by definition bigger than the set of useful programs
>> that can be statically checked.
>
> By whose definition?  What *is* your definition of "useful"?  It is
> clear to me that static typing improves maintainability, scalability,
> and helps with the overall design of software.  (At least that's my
> personal experience, and as others can attest, I do have reasonably
> extensive experience either way.)

The opposing point is to assert that *no* program that cannot be
statically checked is useful.  Are you really asserting that?
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m17k2wq9s7.fsf@tti5.uchicago.edu>
·············@comcast.net writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > Pascal Costanza <········@web.de> writes:
> >
> >> The set of programs that are useful but cannot be checked by a static
> >> type system is by definition bigger than the set of useful programs
> >> that can be statically checked.
> >
> > By whose definition?  What *is* your definition of "useful"?  It is
> > clear to me that static typing improves maintainability, scalability,
> > and helps with the overall design of software.  (At least that's my
> > personal experience, and as others can attest, I do have reasonably
> > extensive experience either way.)
> 
> The opposing point is to assert that *no* program that cannot be
> statically checked is useful.  Are you really asserting that?

Actually, viewed from a certain angle, yes.  Every programmer who
writes a program ought to have a proof that the program is correct in
her mind.  (If not, fire her.)  It ought to be possible to formalize
that proof and to statically check it.

(Now, I am not saying that current type systems that are in practical
use let you do that.  But they go some of the way.)

Matthias
From: Matthew Danish
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031023161045.GX1454@mapcar.org>
On Thu, Oct 23, 2003 at 10:00:08AM -0500, Matthias Blume wrote:
> Actually, viewed from a certain angle, yes.  Every programmer who
> writes a program ought to have a proof that the program is correct in
> her mind.  (If not, fire her.)  It ought to be possible to formalize
> that proof and to statically check it.

Personally, I can't ever remember all the letters of the Greek alphabet.
Not to mention what that little squiggly arrow means.  How could I ever
formulate a proof in my mind without those?

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bna0qm$4n7$1@news.oberberg.net>
Matthew Danish wrote:

> On Thu, Oct 23, 2003 at 10:00:08AM -0500, Matthias Blume wrote:
> 
>>Actually, viewed from a certain angle, yes.  Every programmer who
>>writes a program ought to have a proof that the program is correct in
>>her mind.  (If not, fire her.)  It ought to be possible to formalize
>>that proof and to statically check it.
> 
> Personally, I can't ever remember all the letters of the Greek alphabet.
> Not to mention what that little squiggly arrow means.  How could I ever
> formulate a proof in my mind without those?

Getting the type declarations right in Java is nothing but a formal 
proof that the parameters and local variables in your code will never be 
assigned a value of an incompatible type.
And all of that without a single Greek letter or squiggly arrow...

In essence, a proof is nothing but an explanation of why your code has 
certain properties - with the added property that the explanation is so 
standardized that there is no room for misinterpretation (i.e. it's a 
"formal" proof), and the explanation is so complete that even the 
dumbest and most malevolent jackass will be forced to admit that the 
reasoning is correct (that's what makes it a "proof").

The formalism used for writing the proof down can be as mathematical or 
un-mathematical as you like.

Oh, and there is no such thing as a "correctness" proof (since "being 
correct" always means "conforming to some specification", so you don't 
prove correctness, you prove that the code fulfils some properties given 
by a specification).
Things that can be proven about code are:
* that the code never dereferences a null pointer
* that the code never accesses an out-of-index-range item in an array
* that the code never segfaults (usually by proving the above two items 
and some others)
* that the code will always give specific outputs to specific inputs 
(that's what most people mean when they say "correctness proof") (and 
you don't always prove full correctness - often, just some specific 
crucial parts of the computation are proven, the rest is demonstrated by 
testing)

These various proofs have wildly varying difficulties in different 
languages. For C, the first three items are exceedingly much work to 
prove, while in languages that don't use arrays as a standard data 
structure and that don't have null pointers, the first two items are 
almost trivial and the third item is essentially some reasoning that the 
total memory consumption is limited. (Memory leaks are an "interesting" 
topic in any language though - even garbage-collected languages can leak 
memory by inadvertently building progressively larger data structures. 
Of course, it's still much more work to check/prove that a C program 
leaks no memory - that's why we have such things like BoundsChecker for C.)

Regards,
Jo
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnapnd$v7c$1@f1node01.rhrz.uni-bonn.de>
Joachim Durchholz wrote:

> Oh, and there is not such thing as a "correctness" proof (since "being 
> correct" always means "conforming to some specification", so you don't 
> prove correctness, you prove that the code fulfils some properties given 
> by a specification).

A static type system checks conformity to restrictions that may not be 
part of the specification. So this might be superfluous.

> Things that can be proven about code are:
> * that the code never dereferences a null pointer
> * that the code never accesses an out-of-index-range item in an array
> * that the code never segfaults (usually by proving the above two items 
> and some others)
> * that the code will always give specific outputs to specific inputs 
> (that's what most people mean when they say "correctness proof") (and 
> you don't always prove full correctness - often, just some specific 
> crucial parts of the computation are proven, the rest is demonstrated by 
> testing)

The programming language that I use doesn't do the first three, without 
using a static type system. It does this by using dynamic checks. (BTW, 
as far as I know, most languages perform index-out-of-bounds checks at 
runtime, for very good reasons. Or do you still specify constant bounds?)

Google doesn't always give specific outputs to specific inputs. Is 
Google incorrect?

> These various proofs have wildly varying difficulties in different 
> languages. For C, the first three items are exceedingly much work to 
> prove, while in languages that don't use arrays as a standard data 
> structure and that don't have null pointers, the first two items are 
> almost trivial and the third item is essentially some reasoning that the 
> total memory consumption is limited.

C is as much a straw man wrt untyped languages as Java is wrt 
statically type-checked languages.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Ed Avis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <l1ptgmztl7.fsf@budvar.future-i.net>
Pascal Costanza <········@web.de> writes:

>>Things that can be proven about code are:
>>* that the code never dereferences a null pointer
>>* that the code never accesses an out-of-index-range item in an array
>>* that the code never segfaults (usually by proving the above two items 
>>and some others)

>The programming language that I use doesn't do the first three,
>without using a static type system. It does this by using dynamic
>checks.

That's not quite the same: by 'proven about code' one normally means
proven for certain and without the need to run the program and see
what happens first.  Do your test cases cover all possible inputs?
If you think they do, can you prove that?

(Not that I'm arguing against comprehensive testing - but dynamic
checking at run time is really not the same as having something
statically checked for you right at the start, and checked for all
uses of the program.)

>(BTW, as far as I know, most languages perform index-out-of-bounds
>checks at runtime, for very good reasons. Or do you still specify
>constant bounds?)

Yes, this is an example of something you normally can't expect the
machine to prove, and so has to be checked for at run time.  Of course
there are many, many such properties (including the property of 'the
program works', for any non-trivial program) and so you have to test
and you have to check.  But still, static checking is sometimes so
nice to have that people will restrict their programs to help it
happen - for example using fixed array bounds as you suggest.  It
depends on how paranoid you are and how confident that your testing
will catch all bugs.

-- 
Ed Avis <··@membled.com>
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnb8d3$utu$1@f1node01.rhrz.uni-bonn.de>
Ed Avis wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>>Things that can be proven about code are:
>>>* that the code never dereferences a null pointer
>>>* that the code never accesses an out-of-index-range item in an array
>>>* that the code never segfaults (usually by proving the above two items 
>>>and some others)
> 
> 
>>The programming language that I use doesn't do the first three,
>>without using a static type system. It does this by using dynamic
>>checks.
> 
> 
> That's not quite the same: by 'proven about code' one normally means
> proven for certain and without the need to run the program and see
> what happens first.  Do your test cases cover all possible inputs?
> If you think they do, can you prove that?

I don't need test cases for that. The programming language that I use 
_never_ _ever_ dereferences null pointers, _never_ _ever_ accesses 
arrays beyond their bounds and _never_ produces core dumps, at least not 
in normal circumstances. I don't need a proof for that. All these things 
are checked dynamically and may throw appropriate exceptions. When they 
do so I have the opportunity to fix the problem that has caused the 
attempt to dereference a null pointer, or access an element that doesn't 
exist, etc., and then just continue the execution of the program.

>>(BTW, as far as I know, most languages perform index-out-of-bounds
>>checks at runtime, for very good reasons. Or do you still specify
>>constant bounds?)
> 
> Yes, this is an example of something you normally can't expect the
> machine to prove, and so has to be checked for at run time.  Of course
> there are many, many such properties (including the property of 'the
> program works', for any non-trivial program) and so you have to test
> and you have to check.  But still, static checking is sometimes so
> nice to have that people will restrict their programs to help it
> happen - for example using fixed array bounds as you suggest.

...and sometimes not.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Thant Tessman
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnbtin$mcn$1@terabinaries.xmission.com>
Pascal Costanza wrote:

[...]

> I don't need test cases for that. The programming language that I use
>  _never_ _ever_ dereferences null pointers, _never_ _ever_ accesses 
> arrays beyond their bounds and _never_ produces core dumps, at least 
> not in normal circumstances. I don't need a proof for that. All these
>  things are checked dynamically and may throw appropriate exceptions.
>  When they do so I have the opportunity to fix the problem that has 
> caused the attempt to dereference a null pointer, or access an 
> element that doesn't exist, etc., and then just continue the 
> execution of the program.

There is a fundamental difference between throwing an exception at
runtime if something unreasonable is attempted, and proving at
compile-time that something unreasonable will not happen.

Elsewhere:

> This is why we are having this discussion. As I said before, there 
> are different programming styles.

I don't know whether Matthias Blume has been too humble to mention it
yet, but he not only has experience programming in dynamically-typed
languages, he has implemented one. He is more than well-acquainted with
the style of programming you're defending.

Like others, I will not declare my unquestioning alliance to either side
of this debate, but I will say that static typing (with a good type
system) has all the positive properties Matthias attributes to it to a
degree that is impossible to comprehend without genuine
experience--experience it's not obvious you have.

(And lately, when *I* think about dynamic type systems, it's always with
the covert agenda of trying to translate the things about them I like
into a statically-typed system, even if no language (yet) supports such
a system.)

-thant
From: Ed Avis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <l1fzhiz5sq.fsf@budvar.future-i.net>
Pascal Costanza <········@web.de> writes:

>I don't need test cases for that. The programming language that I use
>_never_ _ever_ dereferences null pointers,

I meant the analogue of a 'null pointer' access in your language of
choice.  Many languages have such a thing.  For example in Python:

    # Let's make a data structure containing two lists.
    a = [[1, 2, 3], [4, 5, 6]]

    # Now set one of them to 'null' or its moral equivalent.
    a[1] = None

    for l in a:
        print l[0]    # Whoops!

Of course this is a trivial example, just as deliberately setting a
pointer to null in C would be a trivial example.  But it's meant to
illustrate that dereferencing null (or nil) things can happen.

In a language with an expressive enough type system you could declare
a to have type 'list of lists of things that are not null' (in one way
or another) and have the compiler prove that no element of a ever
becomes null.  Of course, this may cramp your style in the rest of
your program and you may not enjoy writing in such a style that the
compiler can understand what is going on.  It's a tradeoff: how much
effort you're prepared to spend helping the compiler in exchange for
the help the compiler can give you.  This depends on the language.

>_never_ _ever_ accesses arrays beyond their bounds

Does your language not have dynamically sized arrays or lists?  If so,
what would happen with

    l = [0, 1, 2];
    print l[99];

or its moral equivalent?  Surely this is accessing an array beyond its
bounds.  Sometimes you'd be happy to catch this at run time, but for
some applications you might instead want the reassurance that the
out-of-bounds access can _never_ happen, which in turn simplifies the
rest of the code because there are fewer error cases to consider.

>and _never_ produces core dumps, at least not in normal
>circumstances.

Ah.  I didn't mean checking these things in the narrow sense of
avoiding core dumps in low-level languages like C - although that is a
good use for static typing in such languages.  I meant being able to
prove properties of programs in general, and while the things a
compiler can figure out are necessarily limited, the two examples
above are not the only things that can be checked.

For example you can also use a type system to say: the following
variable contains only positive values.  This function takes a list
and that one takes a pair of two lists.
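One can at least sketch the shape of such declarations (all names here are made up for illustration); what a refinement-capable type system would verify statically, plain Python can only assert at run time.

```python
from typing import List, NewType, Tuple

# "Positive" is a made-up refinement: a checker that understood it could
# prove statically that only positive values flow in.  At run time the
# best we can do is check the claim with an assertion.
Positive = NewType('Positive', int)

def positive(n: int) -> Positive:
    assert n > 0, "refinement violated: expected a positive value"
    return Positive(n)

def head(xs: List[int]) -> int:
    # "this function takes a list" -- the annotation states it
    return xs[0]

def zip_pair(pair: Tuple[List[int], List[int]]) -> List[Tuple[int, int]]:
    # "that one takes a pair of two lists"
    xs, ys = pair
    return list(zip(xs, ys))
```

The difference is when the claim is checked: the assertion fires on a bad run, while the static version would refuse to compile the bad call at all.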

>I don't need a proof for that. All these things are checked
>dynamically and may throw appropriate exceptions.  When they do so I
>have the opportunity to fix the problem that has caused the attempt
>to dereference a null pointer, or access an element that doesn't
>exist, etc., and then just continue the execution of the program.

And this is great, but you must recognize that for at least _some_
applications this isn't an option.

Really though, checking array bounds and nullness is at the
worthy-but-dull end of static type checking.  There's also the more
general principle of being able to say what type of objects a routine
takes and what it returns.  Again, you can check this at run time.
But I have found in the past that I get the bug fixed sooner if it can
be caught earlier.  (Writing the same algorithm in Haskell and Perl,
roughly concurrently, I found the Haskell version much easier to write
because the compiler could point out 90% of my mistakes, and those the
compiler didn't catch were usually fairly obvious.  The Perl version
was just too confusing: even when I put in lots of runtime checking
and debugging information, I was slow to figure out the errors.
Still, Perl syntax may take some of the blame; perhaps Haskell vs
Scheme would be a fairer contest.)

-- 
Ed Avis <··@membled.com>
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bndutm$8fh$1@news.oberberg.net>
Pascal,

either you consistently misunderstand my points, or you don't /want/ to 
understand, or you pretend not to understand.

Regardless of which of these possibilities holds, arguing with you isn't 
going to help you, me, or any onlookers - and I think that answering 
questions and points raised by others is going to be more productive for 
all of us.

Regards,
Jo
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3cdjg8pj.fsf@ccs.neu.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> ·············@comcast.net writes:
>> 
>> The opposing point is to assert that *no* program that cannot be
>> statically checked is useful.  Are you really asserting that?
>
> Actually, viewed from a certain angle, yes.  Every programmer who
> writes a program ought to have a proof that the program is correct in
> her mind.  (If not, fire her.)  It ought to be possible to formalize
> that proof and to statically check it.

That's a little draconian.  When I write programs I often have no clue
as to what I am doing, let alone a proof that it is correct!
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1r813onxg.fsf@tti5.uchicago.edu>
Joe Marshall <···@ccs.neu.edu> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > ·············@comcast.net writes:
> >> 
> >> The opposing point is to assert that *no* program that cannot be
> >> statically checked is useful.  Are you really asserting that?
> >
> > Actually, viewed from a certain angle, yes.  Every programmer who
> > writes a program ought to have a proof that the program is correct in
> > her mind.  (If not, fire her.)  It ought to be possible to formalize
> > that proof and to statically check it.
> 
> That's a little draconian.  When I write programs I often have no clue
> as to what I am doing, let alone a proof that it is correct!

You're fired.
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <myfirstname.mylastname-2310031108550001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@tti5.uchicago.edu>, Matthias Blume
<····@my.address.elsewhere> wrote:

> Joe Marshall <···@ccs.neu.edu> writes:
> 
> > Matthias Blume <····@my.address.elsewhere> writes:
> > 
> > > ·············@comcast.net writes:
> > >> 
> > >> The opposing point is to assert that *no* program that cannot be
> > >> statically checked is useful.  Are you really asserting that?
> > >
> > > Actually, viewed from a certain angle, yes.  Every programmer who
> > > writes a program ought to have a proof that the program is correct in
> > > her mind.  (If not, fire her.)  It ought to be possible to formalize
> > > that proof and to statically check it.
> > 
> > That's a little draconian.  When I write programs I often have no clue
> > as to what I am doing, let alone a proof that it is correct!
> 
> You're fired.

Really?

While you're looking for someone to replace this person you just fired
(and rejecting all the applicants who aren't trained in how to produce
formal proofs of correctness) your competition is iteratively testing and
refining a product which, while it doesn't have a proof of correctness,
works well enough from the customer's point of view.  So your competition
gets the business because they have something to ship and you don't.  The
best you can offer is, "Wait!  Don't buy their stuff.  It might be
broken.  Just wait until we get our HR act together and you can buy *our*
product which we can *prove* doesn't have any bugs."  (Except, of course,
that all you can really prove is that it doesn't have any type errors,
which is not the same thing.)

So the result of getting up on your formal-proof high-horse is that the
company is now bankrupt.

If I were one of your stockholders I'd say the wrong person got fired.

E.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1ekx3oi0v.fsf@tti5.uchicago.edu>
······················@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@tti5.uchicago.edu>, Matthias Blume
> <····@my.address.elsewhere> wrote:
> 
> > Joe Marshall <···@ccs.neu.edu> writes:
> > 
> > > Matthias Blume <····@my.address.elsewhere> writes:
> > > 
> > > > ·············@comcast.net writes:
> > > >> 
> > > >> The opposing point is to assert that *no* program that cannot be
> > > >> statically checked is useful.  Are you really asserting that?
> > > >
> > > > Actually, viewed from a certain angle, yes.  Every programmer who
> > > > writes a program ought to have a proof that the program is correct in
> > > > her mind.  (If not, fire her.)  It ought to be possible to formalize
> > > > that proof and to statically check it.
> > > 
> > > That's a little draconian.  When I write programs I often have no clue
> > > as to what I am doing, let alone a proof that it is correct!
> > 
> > You're fired.
> 
> Really?

Relax.  This was a joke.  (Wasn't that obvious?)

> While you're looking for someone to replace this person you just fired
> (and rejecting all the applicants who aren't trained in how to produce
> formal proofs of correctness) your competition is iteratively testing and
> refining a product which, while it doesn't have a proof of correctness,
> works well enough from the customer's point of view.  So your competition
> gets the business because they have something to ship and you don't.  The
> best you can offer is, "Wait!  Don't buy their stuff.  It might be
> broken.  Just wait until we get our HR act together and you can buy *our*
> product which we can *prove* doesn't have any bugs."

That's not what I said.  I said that the programmer has a proof in her
head. (At least she thinks she does.)  My point was that since she has
a proof, the proof obviously *exists* and *could* be written down and
*could* be statically verified if one only went to the trouble of
doing so.  (And again, even this is obviously much easier said than
done.)

>  (Except, of course,
> that all you can really prove is that it doesn't have any type errors,
> which is not the same thing.)

No, I wasn't thinking of contemporary type errors.  I was thinking of
a real proof of correctness, in all glory.  The point is that even
though we all know that we cannot prove all correct programs correct
in general, we can do so for the programs we actually write (which is
a proper subset of the set of all correct programs).  Anyone who
claims his program is correct but it cannot be proven correct must
face the question "How do you know?"

> So the result of getting up on your formal-proof high-horse is that the
> company is now bankrupt.
>
> If I were one of your stockholders I'd say the wrong person got fired.

Let me repeat one more time what I actually said: I want to fire the
programmer who does not have a thorough understanding of what he/she
is doing.  Is that so wrong?  What I did not say was: Let's fire the
programmer who does not write down formal proofs for the things he/she
is doing.

Now, if you want to fire me because I insist on working with competent
colleagues, well, so be it.  Actually, you don't have to because I
quit.  :-)

Matthias
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2310031318010001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@tti5.uchicago.edu>, Matthias Blume
<····@my.address.elsewhere> wrote:

> ······················@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <··············@tti5.uchicago.edu>, Matthias Blume
> > <····@my.address.elsewhere> wrote:
> > 
> > > Joe Marshall <···@ccs.neu.edu> writes:
> > > 
> > > > Matthias Blume <····@my.address.elsewhere> writes:
> > > > 
> > > > > ·············@comcast.net writes:
> > > > >> 
> > > > >> The opposing point is to assert that *no* program that cannot be
> > > > >> statically checked is useful.  Are you really asserting that?
> > > > >
> > > > > Actually, viewed from a certain angle, yes.  Every programmer who
> > > > > writes a program ought to have a proof that the program is correct in
> > > > > her mind.  (If not, fire her.)  It ought to be possible to formalize
> > > > > that proof and to statically check it.
> > > > 
> > > > That's a little draconian.  When I write programs I often have no clue
> > > > as to what I am doing, let alone a proof that it is correct!
> > > 
> > > You're fired.
> > 
> > Really?
> 
> Relax.  This was a joke.  (Wasn't that obvious?)

Actually no.  When you wrote "Every programmer who writes a program ought
to have a proof that the program is correct in her mind.  (If not, fire
her.)" you sounded quite serious to me.


> > While you're looking for someone to replace this person you just fired
> > (and rejecting all the applicants who aren't trained in how to produce
> > formal proofs of correctness) your competition is iteratively testing and
> > refining a product which, while it doesn't have a proof of correctness,
> > works well enough from the customer's point of view.  So your competition
> > gets the business because they have something to ship and you don't.  The
> > best you can offer is, "Wait!  Don't buy their stuff.  It might be
> > broken.  Just wait until we get our HR act together and you can buy *our*
> > product which we can *prove* doesn't have any bugs."
> 
> That's not what I said.  I said that the programmer has a proof in her
> head. (At least she thinks she does.)

No, you said more than that.  You said that "it ought to be possible to
formalize the proof and to statically check it."  But the only way to tell
whether that is in fact possible is to actually do it.  So either you are
implicitly insisting that this proof be actually constructed and checked
or your position is vacuous.  If you're willing to take someone's word for
it that they have a proof in their head then you may just as well take
their word for it that the code works for whatever reason they choose to
have for saying so.

> My point was that since she has
> a proof, the proof obviously *exists* and *could* be written down and
> *could* be statically verified if one only went to the trouble of
> doing so.  (And again, even this is obviously much easier said than
> done.)

And unless you actually do it then all you really know is that she
*thinks* she has a proof in her head (and actually you don't really know
that either, especially if she knows that she will not be expected to
actually produce the proof, and that confessing to not having one will get
her fired).  So once again I say that unless you insist on having people
carry out the formal proofs your position is vacuous.


> > (Except, of course,
> > that all you can really prove is that it doesn't have any type errors,
> > which is not the same thing.)
> 
> No, I wasn't thinking of contemporary type errors.  I was thinking of
> a real proof of correctness, in all glory.  The point is that even
> though we all know that we cannot prove all correct programs correct
> in general, we can do so for the programs we actually write (which is
> a proper subset of the set of all correct programs).  Anyone who
> claims his program is correct but it cannot be proven correct must
> face the question "How do you know?"

And IMO a perfectly legitimate answer to that question is, "Because I ran
it and it worked."  To which you will no doubt counter: but how do you
know that it will work the *next* time you run it, or if you run it under
different circumstances than those under which you tested it?  To which my
reply will be: how do you know that the exhibited proof is correct?  Oh,
you're going to run an automatic proof checker on it?  How do you know
that the proof checker is correct?  How do you know that the hardware on
which your proof checker runs is correct?  What happens if you get a
single-event upset in a processor register, or a bad byte of RAM?

The whole business of computing, theoreticians' wishes to the contrary
notwithstanding, is at the end of the day still an empirical enterprise,
and always will be as long as computers and their users are part of the
physical world.


> > So the result of getting up on your formal-proof high-horse is that the
> > company is now bankrupt.
> >
> > If I were one of your stockholders I'd say the wrong person got fired.
> 
> Let me repeat one more time what I actually said: I want to fire the
> programmer who does not have a thorough understanding of what he/she
> is doing.  Is that so wrong?

That's not what you said.  What you said is that you want to fire the
programmer who lacks a very particular kind of understanding of what s/he
is doing.  I don't know whether it's "so wrong", but I submit that it
would be ultimately counterproductive.

> What I did not say was: Let's fire the
> programmer who does not write down formal proofs for the things he/she
> is doing.

Well, then your position is vacuous, as I pointed out above.

> Now, if you want to fire me because I insist on working with competent
> colleagues, well, so be it.  Actually, you don't have to because I
> quit.  :-)

Thanks for saving me the trouble.  Seriously, if you were working for me
and you judged your colleagues incompetent simply because they confessed
to not having a formal proof in their head for the correctness of the code
they had written I would fire you.  I would do so with regret because I
think you're very smart and capable, but I would do it without
hesitation.  If you really think that having a formal proof of correctness
trumps all other considerations then IMO you have completely lost sight of
the big picture.

E.

P.S.  Suppose your task is to write a typesetting program and one of the
requirements is that the output look aesthetically pleasing.  How would
you go about proving that your code is correct?
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m165ifobqz.fsf@tti5.uchicago.edu>
·················@jpl.nasa.gov (Erann Gat) writes:


[ I leave this for reference: ]
> > > > > > Actually, viewed from a certain angle, yes.  Every programmer who
> > > > > > writes a program ought to have a proof that the program is correct in
> > > > > > her mind.  (If not, fire her.)  It ought to be possible to formalize
> > > > > > that proof and to statically check it.

[...]

> Actually no.  When you wrote "Every programmer who writes a program ought
> to have a proof that the program is correct in her mind.  (If not, fire
> her.)" you sounded quite serious to me.

I was then.  The "joke" was me firing Joe.  (I don't think that Joe
fits the description of the programmer I would fire -- even if he
himself claims otherwise. Not to mention that I have no power over
Joe's employment.)

> No, you said more than that.  You said that "it ought to be possible to
> formalize the proof and to statically check it."  But the only way to tell
> whether that is in fact possible is to actually do it.  So either you are
> implicitly insisting that this proof be actually constructed and checked
> or your position is vacuous.  If you're willing to take someone's word for
> it that they have a proof in their head then you may just as well take
> their word for it that the code works for whatever reason they choose to
> have for saying so.

The question was whether statically uncheckable programs are useful or
not.  My assertion is that useful programs must be statically
checkable, at least in principle.  The point is that this is not a
serious restriction: competent programmers *already* have proofs for
their programs in their heads, so *in principle* it should be
possible to formalize those proofs.

The only time no formal proof can be given *in principle* is when
there, in fact, is no proof.  If there is no proof, we cannot know
whether the program is correct.  I don't consider programs for which I
cannot know whether they are correct (even in principle!) useful.

Now, the programmer might think he has a proof but, in fact, does not.
In that case the attempt to formalize it would fail -- something
that would be relatively easy to detect.  So human errors
notwithstanding, programmers do reason about their code, and that
reasoning COULD be used either to verify correctness or to reject
the program, the reasoning, or both, on the grounds that the
reasoning was illogical.

> And unless you actually do it then all you really know is that she
> *thinks* she has a proof in her head (and actually you don't really know
> that either, especially if she knows that she will not be expected to
> actually produce the proof, and that confessing to not having one will get
> her fired).

This is true, but besides my point.

> And IMO a perfectly legitimate answer to that question is, "Because I ran
> it and it worked."  To which you will no doubt counter: but how do you
> know that it will work the *next* time you run it, or if you run it under
> different circumstances than those under which you tested it?

Right, that's how I would counter.  Your answer is not "perfectly
legitimate".

> To which my reply will be: how do you know that the exhibited proof
> is correct?

Because I checked it.

> Oh, you're going to run an automatic proof checker on
> it?  How do you know that the proof checker is correct?
>  How do you
> know that the hardware on which your proof checker runs is correct?
> What happens if you get a single-event upset in a processor
> register, or a bad byte of RAM?

What is the probability of this falsely giving a "proof ok" rather
than a core dump?

Sure, there has to be a "trusted computing base" just like every logic
has a set of axioms which we don't question any further.  The point is
that the trusted computing base can be small: proof checkers are
fairly simple programs, and enough inspection, paper-and-pencil
reasoning, and, yes, testing, will provide us with the confidence we
need.
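To illustrate how small such a checker can be, here is a toy of my own devising (not a real proof checker): it checks propositional proofs in which every line must be an axiom or follow from already-derived formulas by modus ponens.

```python
# Toy proof checker (illustration only): formulas are strings, and an
# implication "p -> q" is represented as the tuple ('->', p, q).  A
# proof is a list of (formula, rule) pairs.

def check_proof(axioms, proof):
    """Return True iff every proof line is an axiom or follows from
    earlier-derived formulas by modus ponens."""
    derived = set(axioms)
    for formula, rule in proof:
        if rule == 'axiom':
            ok = formula in axioms
        elif rule == 'mp':
            # need some derived p with ('->', p, formula) also derived
            ok = any(('->', p, formula) in derived for p in derived)
        else:
            ok = False        # unknown rule: reject
        if not ok:
            return False
        derived.add(formula)
    return True

# From axioms p and p -> q, the line q follows by modus ponens:
print(check_proof({'p', ('->', 'p', 'q')}, [('q', 'mp')]))   # True
# Without the implication, the same "proof" is rejected:
print(check_proof({'p'}, [('q', 'mp')]))                     # False
```

The checker only verifies each step locally; it never has to be clever, which is exactly why it can stay in the trusted base.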

> Thanks for saving me the trouble.  Seriously, if you were working for me
> and you judged your colleagues incompetent simply because they confessed
> to not having a formal proof in their head for the correctness of the code
> they had written I would fire you.

I did not say "formal proof in their head", please!  I said that the
proof in their head ought to be formalizable, which is something
entirely different.

You are right, btw, in that the discussion is becoming increasingly
vacuous.  Let's forget about the "I would fire her" remark, ok?  What
I really meant to express with that remark was my belief that every
programmer actually *does* have a proof of correctness for her program
in her head.  Otherwise what she is doing amounts to randomly cranking
out code without any understanding at all.  A monkey at the keyboard.
That this is not what's going on was precisely my point: people do
reason about their code (albeit informally in most cases, and many of
them wouldn't be able to clearly communicate their reasoning).  That's
why I think that -- in principle -- all programs that people write and
that turn out to be actually correct are, in fact, provably correct.
Finding and writing down the formal proof in practice, however, is a
different story.

> I would do so with regret because I
> think you're very smart and capable, but I would do it without
> hesitation.  If you really think that having a formal proof of correctness
> trumps all other considerations then IMO you have completely lost sight of
> the big picture.

Well, glad we cleared that up.  I have in no way demanded formal
proofs of correctness from each of my co-workers.  Sorry for not
communicating well enough to make this clear from the beginning.

> P.S.  Suppose your task is to write a typesetting program and one of the
> requirements is that the output look aesthetically pleasing.  How would
> you go about proving that your code is correct?

Obviously, this task is not well-defined, so first I would ask the
person who requested the above to specify what he/she means by
"aesthetically pleasing" in concrete, well-defined terms. If I get a
good answer, I work with that.  If I don't, I would quit the job.

Matthias
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2310031737290001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@tti5.uchicago.edu>, Matthias Blume
<····@my.address.elsewhere> wrote:

> The only time no formal proof can be given *in principle* is when
> there, in fact, is no proof.  If there is no proof, we cannot know
> whether the program is correct.  I don't consider programs for which I
> cannot know whether they are correct (even in principle!) useful.

I doubt that very much.  You're posting to usenet, which means you are
making use of a significant software infrastructure, which means that you
ipso facto find it useful.  I doubt very much that you could prove the
software infrastructure correct.  (I doubt very much that it *is*
correct.)  I am absolutely certain that you do not know that it is
correct, and that you never will.  I am also quite certain that you will
continue to use it (and therefore judge it useful) regardless.


> Now, the programmer might think he has a proof but, in fact, does not.
> In that case the attempt to formalize it would fail -- something
> that would be relatively easy to detect.  So human errors
> notwithstanding, programmers do reason about their code, and that
> reasoning COULD be used either to verify correctness or to reject
> the program, the reasoning, or both, on the grounds that the
> reasoning was illogical.

Yes, but the only way to know whether or not this is possible in principle
is to actually do it in practice.  There are no non-constructive proofs of
the existence of a proof.


> > To which my reply will be: how do you know that the exhibited proof
> > is correct?
> 
> Because I checked it.

How do you know that you didn't make a mistake when you checked it?


> > Oh, you're going to run an automatic proof checker on
> > it?  How do you know that the proof checker is correct?
> >  How do you
> > know that the hardware on which your proof checker runs is correct?
> > What happens if you get a single-event upset in a processor
> > register, or a bad byte of RAM?
> 
> What is the probability of this falsely giving a "proof ok" rather
> than a core dump?

First, what difference does that make?  I thought you were arguing for the
desirability of proofs in an absolute sense, not a probabilistic one.  If
you are arguing probabilistically then we have to compare apples and
apples and ask how much effort you have to put into a proof to gain a
certain confidence in its correctness and compare that to how much effort
you have to put into empirical testing to gain the same level of
correctness.

Second, I'll bet you that even a "proven" correct program would not
produce the expected results when run on a Pentium with the fdiv bug.


> Sure, there has to be a "trusted computing base" just like every logic
> has a set of axioms which we don't question any further.  The point is
> that the trusted computing base can be small: proof checkers are
> fairly simple programs,

But they run on very complicated hardware.  And they are compiled by very
complicated compilers, at least if you want them to run fast.

> I did not say "formal proof in their head", please!  I said that the
> proof in their head ought to be formalizable, which is something
> entirely different.

I do not see it as entirely different.  The only way to know if a proof in
someone's head is formalizable is to formalize it.  Maybe they don't need
to do this in their head, but they do have to do it.  Otherwise you're
just blowing smoke.


> You are right, btw, in that the discussion is becoming increasingly
> vacuous.  Let's forget about the "I would fire her" remark, ok?

Fine.

> What
> I really meant to express with that remark was my belief that every
> programmer actually *does* have a proof of correctness for her program
> in her head.  Otherwise what she is doing amounts to randomly cranking
> out code without any understanding at all.  A monkey at the keyboard.

There is a whole branch of research in evolutionary programming that uses
precisely that technique.  In fact, some biologists are starting to look
at biological systems in computational terms.  No one proved your DNA
correct, but it seems to get the job done nonetheless.

> That this is not what's going on was precisely my point: people do
> reason about their code (albeit informally in most cases, and many of
> them wouldn't be able to clearly communicate their reasoning).  That's
> why I think that -- in principle -- all programs that people write and
> that turn out to be actually correct are, in fact, provably correct.

And I'm saying that you're wrong.  You are wrong when you say that this is
the case, and you are wrong when you say (or imply) that this ought to be
the case.

> Finding and writing down the formal proof in practice, however, is a
> different story.

Indeed.  But unless one does write down the formal proof in practice, what
is the point?  Is there any content in your position beyond simply saying
that all else being equal it is better to think clearly about a problem
than not?  I'll agree with that, but it doesn't strike me as a
particularly noteworthy observation.

> > P.S.  Suppose your task is to write a typesetting program and one of the
> > requirements is that the output look aesthetically pleasing.  How would
> > you go about proving that your code is correct?
> 
> Obviously, this task is not well-defined

It is perfectly well defined, it's just defined in terms that are not
logical but rather psychological.  There are people who make their living
(indeed an entire industry devoted to) solving this problem.  You will
obviously not be among them.

E.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m2ad7r74je.fsf@hanabi-air.shimizu.blume>
·················@jpl.nasa.gov (Erann Gat) writes:

> It is perfectly well defined, it's just defined in terms that are not
> logical but rather psychological.

I don't think it is well-defined at all.  Ask n people and you get n
answers.  That's not "well-defined".

> There are people who make their living (indeed an entire industry
> devoted to) solving this problem.

I know.  The point is that one can never say the program is "correct"
with respect to the requirement of having the typesetting be
aesthetic.  One can, maybe, make statements like "the majority of
our customers seems to be satisfied with the results".  But that's not
what "correctness" is about.

> You will obviously not be among them.

Indeed, I will not.  But that's more because I'm not very good at
arts.

Matthias
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2410030852220001@192.168.1.51>
In article <··············@hanabi-air.shimizu.blume>, Matthias Blume
<····@my.address.elsewhere> wrote:

> ·················@jpl.nasa.gov (Erann Gat) writes:
> 
> > It is perfectly well defined, it's just defined in terms that are not
> > logical but rather psychological.
> 
> I don't think it is well-defined at all.  Ask n people and you get n
> answers.  That's not "well-defined".

No, that's not true.  It turns out that there are universal aesthetic
principles that are hard-wired into the human brain.  That's why the
Parthenon or a Frank Gehry building look better than a Bronx tenement.  To
everyone.

> > There are people who make their living (indeed an entire industry
> > devoted to) solving this problem.
> 
> I know.  The point is that one can never say the program is "correct"
> with respect to the requirement of having the typesetting be
> aesthetic.  One can, maybe, make statements like "the majority of
> our customers seems to be satisfied with the results".  But that's not
> what "correctness" is about.

What is it about then?  I thought that correctness is about conforming to
a specification.  But now you insist that only certain kinds of
specifications are allowed.  They have to be "well defined" whatever that
means.  Well, I think it's perfectly legitimate to desire a typesetting
program that produces good looking output, so I'd say your view of
correctness is too narrow.

> > You will obviously not be among them.
> 
> Indeed, I will not.  But that's more because I'm not very good at
> arts.

There's nothing wrong with that.  There is something wrong with saying
that it is illegitimate for others to strive to understand or care about
them, to dismiss an aesthetic specification because it is "not well
defined."  There are more things in heaven and earth, Matthias Blume, than
are dreamt of in your mathematics.

E.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m14qxy39ht.fsf@tti5.uchicago.edu>
·················@jpl.nasa.gov (Erann Gat) writes:

> [...] It turns out that there are universal aesthetic principles that are
> hard-wired into the human brain.  That's why the Parthenon or a
> Frank Gehry building look better than a Bronx tenement.  To
> everyone.

You still get n answers.  Admittedly, they will tend to be correlated.
If aesthetics are so universal, how come Windows XP looks so hideously
ugly?  (To name just one example.)

Matthias
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2410031007420001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@tti5.uchicago.edu>, Matthias Blume
<····@my.address.elsewhere> wrote:

> ·················@jpl.nasa.gov (Erann Gat) writes:
> 
> > [...] It turns out that there are universal aesthetic principles that are
> > hard-wired into the human brain.  That's why the Parthenon or a
> > Frank Gehry building look better than a Bronx tenement.  To
> > everyone.
> 
> You still get n answers.  Admittedly, they will tend to be correlated.
> If aesthetics are so universal, how come Windows XP looks so hideously
> ugly?  (To name just one example.)

Do you realize that you just proved my point by stating unequivocally that
Windows XP looks hideous?  You're right about that.  (Maybe you're not as
hopeless as an artist as you think.)  The reason is very simple: Microsoft
doesn't care about aesthetics.  Never has.  Probably never will.

That's one of the reasons I use a Mac.

E.

---

"The only problem with Microsoft is they just have no taste...I don't mean
that in a small way--I mean that in a big way, in the sense that they
don't think of original ideas, and they don't bring much culture into
their product... So I guess I am saddened, not by Microsoft's success--I have
no problem with their success; they've earned their success for the most
part--I have a problem with the fact that they just make really third-rate
products." 

-- Steve Jobs.  Triumph of the Nerds PBS documentary interview (May 1996)
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m11xt24iul.fsf@tti5.uchicago.edu>
·················@jpl.nasa.gov (Erann Gat) writes:

> > If aesthetics are so universal, how come Windows XP looks so hideously
> > ugly?  (To name just one example.)
> 
> Do you realize that you just proved my point by stating unequivocally that
> Windows XP looks hideous?  You're right about that.  (Maybe you're not as
> hopeless as an artist as you think.)  The reason is very simple: Microsoft
> doesn't care about aesthetics.  Never has.  Probably never will.

I bet you $100 that someone at Microsoft thinks that XP looks good.

> That's one of the reasons I use a Mac.

Same here.

Matthias

PS: Today is Panther day!!!
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2410031311320001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@tti5.uchicago.edu>, Matthias Blume
<····@my.address.elsewhere> wrote:

> I bet you $100 that someone at Microsoft thinks that XP looks good.

Choose any large enough group of people and you are bound to find one who
has no taste.

E.
From: Nils Goesche
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87smlidwpz.fsf@darkstar.cartan>
·················@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@tti5.uchicago.edu>, Matthias Blume
> <····@my.address.elsewhere> wrote:
> 
> > I bet you $100 that someone at Microsoft thinks that XP looks
> > good.
> 
> Choose any large enough group of people and you are bound to
> find one who has no taste.

Well, actually, you have to choose a large group of people to
find one who /has/ taste.

Every day you see more evidence for this fact if you just have a
look at the people who willingly and voluntarily enter theaters
where so-called "musicals" are performed.  Millions of people
watch these celebrations of Kitsch and stupidity all over the
Free World and leave the musical theater as mindless drones,
their brains turned to mush (so it's probably a leftist plot, as
this is exactly the same effect a college education in the social
sciences has, nowadays).  The sole effect of the musical-an-sich
is to lower the taste of the proletarian masses down to the point
where they are willing to vote for those who claim that notions
like "taste" are entirely meaningless and not watching musicals
constitutes a so-called hate crime.  Hence there will be no
justice in this world until those Temples of Stupidity will be
torn down and the evil priests who perform these abominations
burn at the stake so they will rot in hell (I am told by reliable
sources that in hell, musicals are performed on a daily basis.
If I were the devil, I'd certainly do that, at least if I had
some ear-plugs at my disposal (and Barbra Streisand will play a
main part in each one of them)).

No, taste is out of fashion, these days.

Regards,
-- 
Nils Gösche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID #xD26EF2A0
From: Russell Wallace
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9cc915.64291953@news.eircom.net>
On Fri, 24 Oct 2003 10:07:42 -0700, ·················@jpl.nasa.gov
(Erann Gat) wrote:

>Do you realize that you just proved my point by stating unequivocally that
>Windows XP looks hideous?  You're right about that.

No he isn't. (But that's a matter of opinion.)

>The reason is very simple: Microsoft
>doesn't care about aesthetics.  Never has.  Probably never will.

Now, this is not a matter of opinion: it's self-evident from a cursory
glance at how Microsoft's products have evolved over the years that
they care a great deal about aesthetics [1]. Whether you like the
result or not comes down to personal taste, but the resources they put
into it don't.

[1] I mean in the visual sense, of course. I wish they also cared a
bit more about aesthetics in the sense of engineering elegance.

-- 
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
http://www.esatclear.ie/~rwallace
From: Russell Wallace
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9cc5f1.63488037@news.eircom.net>
On 24 Oct 2003 11:07:10 -0500, Matthias Blume
<····@my.address.elsewhere> wrote:

>You still get n answers.  Admittedly, they will tend to be correlated.
>If aesthetics are so universal, how come Windows XP looks so hideously
>ugly?  (To name just one example.)

I think Windows XP looks good. But really, the "they will tend to be
correlated" suffices, provided the correlation is strong enough to
make a living from - which, in fact, it is.

Now you're certainly entitled to say you're not interested in doing
any sort of work for which the output could not in principle be
mathematically defined, and if you can get a paycheck for that sort of
work, great! (What area do you work in, btw?)

But this does not make the dismissal of all other fields of endeavor
as "ill-defined" anything other than a form of mental blindness.

-- 
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
http://www.esatclear.ie/~rwallace
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m2ismaiyg5.fsf@hanabi-air.shimizu.blume>
················@eircom.net (Russell Wallace) writes:

> On 24 Oct 2003 11:07:10 -0500, Matthias Blume
> <····@my.address.elsewhere> wrote:
> 
> >You still get n answers.  Admittedly, they will tend to be correlated.
> >If aesthetics are so universal, how come Windows XP looks so hideously
> >ugly?  (To name just one example.)
> 
> I think Windows XP looks good.

Erann, are you paying attention? :-)

> But this does not make the dismissal of all other fields of endeavor
> as "ill-defined" anything other than a form of mental blindness.

I didn't dismiss them.  Their requirements simply cannot be part of a
statement of "correctness".  Picasso did not paint "correct"
paintings.  Mozart did not compose "correct" Music.  Microsoft does
not make a user interface which is aesthetically "correct", and
neither does Apple (even though I take the latter over the former any
day -- for reasons other than "correctness", at least as long as we
are talking aesthetics).

The bulk of this thread seems to confuse the issues of a) being unable
to prove something "correct" (for a given notion of correctness) and
b) being unable to state precisely what "correct" means in the first
place.

To the extent that software requirements can be considered
"correctness" requirements, I stand by my claim that "correct"
programs actually written by humans can be rigorously proved correct.
(To me, a "correctness requirement", by definition, can be formally
stated.)
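Blume's distinction can be made concrete with a small sketch. An aesthetic requirement ("the output looks good") cannot be written down as a predicate, but a correctness requirement like "this function sorts its input" can. The names below (`meets_sorting_spec`, `my_sort`) are invented for illustration:

```python
# A correctness requirement stated formally: a sorting function is
# correct iff its output is ordered and is a permutation of its input.
from collections import Counter

def meets_sorting_spec(inp, out):
    """Executable form of the specification: ordered + same multiset."""
    ordered = all(a <= b for a, b in zip(out, out[1:]))
    permutation = Counter(inp) == Counter(out)
    return ordered and permutation

def my_sort(xs):
    # Candidate implementation whose correctness we can now judge
    # against the stated specification.
    return sorted(xs)

sample = [3, 1, 2, 3]
print(meets_sorting_spec(sample, my_sort(sample)))  # True
```

No comparable predicate exists for "the typesetting is aesthetically pleasing", which is exactly the line Blume is drawing.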

Matthias
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2710031316410001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@hanabi-air.shimizu.blume>, Matthias Blume
<····@my.address.elsewhere> wrote:

> ················@eircom.net (Russell Wallace) writes:
> 
> > On 24 Oct 2003 11:07:10 -0500, Matthias Blume
> > <····@my.address.elsewhere> wrote:
> > 
> > >You still get n answers.  Admittedly, they will tend to be correlated.
> > >If aesthetics are so universal, how come Windows XP looks so hideously
> > >ugly?  (To name just one example.)
> > 
> > I think Windows XP looks good.
> 
> Erann, are you paying attention? :-)

Some people like disco.  What can I say?

;-)

E.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnjasn$v4c$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:
> ················@eircom.net (Russell Wallace) writes:

>>But this does not make the dismissal of all other fields of endeavor
>>as "ill-defined" anything other than a form of mental blindness.
> 
> I didn't dismiss them.  Their requirements simply cannot be part of a
> statement of "correctness".  Picasso did not paint "correct"
> paintings.  Mozart did not compose "correct" Music.  Microsoft does
> not make a user interface which is aesthetically "correct", and
> neither does Apple (even though I take the latter over the former any
> day -- for reasons other than "correctness", at least as long as we
> are talking aesthetics).

I think that "correctness" and "aesthetically pleasing" are not far from 
each other. For example, mathematicians strongly appreciate "elegant 
proofs". "Elegance" is definitely an aesthetical measure. And I am 
strongly convinced that "elegance" is also a suitable category for 
programs that we write. I believe that engineering terms like 
maintainability have some kind of elegance and beauty at their roots. I am 
convinced that fans of static type systems rave about the beauty and 
elegance of their type systems.

One important point is that aesthetics are apparently objectively 
measurable. See http://www.patternlanguage.com/archive/ieee/ieeetext.htm

Here is a quote: "The essence of the experiments is that you take the 
two things you are trying to compare and ask, for each one, is my 
wholeness increasing in the presence of this object? How about in the 
presence of this one? Is it increasing more or less? You might say this 
is a strange question; What if the answer is Don't know or They don't 
have any effect on me? Perfectly reasonable! That can happen. But the 
resolution is easy. What turns out to happen is that if you say to a 
person "Yes, it is a difficult question, it might even sound a bit 
nutty. But anyway, please humor me and just answer the question." Then 
it turns out that there is quite a striking statistical agreement, 
80-90%, very strong, as strong a level of agreement as one gets in any 
experiments in social science. [...] It turns out that these kind of 
measurements do correlate with real structural features in the thing and 
with the presence of life in the thing measured by other methods, so 
that it isn't just some sort of subjective "I groove to this, and I don't 
groove to that" and so on. But it is a way of measuring a real deep 
condition in the particular things that are being compared or looked at."

The fact that we have such a strong division between fans of static type 
systems and dynamic type systems is strange when you consider this. 
Obviously both sides see some very deep reasons for their respective 
positions, otherwise we wouldn't have such heated arguments.

It would be interesting to study where this division stems from.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m17k2q3a6d.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> I think that "correctness" and "aesthetically pleasing" are not far
> from each other. For example, mathematicians strongly appreciate
> "elegant proofs".

They may appreciate them, but they don't confuse them with "correct"
ones.  (Or so I should hope.)

> I am convinced that fans of static type systems rave about the
> beauty and elegance of their type systems.

Yes.  But as with mathematical proofs, this is once again not directly
related to correctness.  An ugly program can be correct (but it might
be harder to prove it correct), and a beautiful program can be buggy.

Matthias
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnjj7t$qb6$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:
> Pascal Costanza <········@web.de> writes:
> 
>>I think that "correctness" and "aesthetically pleasing" are not far
>>from each other. For example, mathematicians strongly appreciate
>>"elegant proofs".
> 
> They may appreciate them, but they don't confuse them with "correct"
> ones.  (Or so I should hope.)

Sure. But if there exist two proofs for the same theorem, and one is 
ugly and the other one elegant, they would clearly prefer the elegant 
proof. Some would even consider it to be closer to the "truth", in some 
irrational sense.

>>I am convinced that fans of static type systems rave about the
>>beauty and elegance of their type systems.
> 
> Yes.  But as with mathematical proofs, this is once again not directly
> related to correctness.  An ugly program can be correct (but it might
> be harder to prove it correct), and a beautiful program can be buggy.

Note that your remark doesn't fit my statement. I didn't talk about 
beautiful and elegant programs here.

Would you accept to work with a type system that you would consider ugly?


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Ingvar Mattsson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87znfm3ckj.fsf@gruk.tech.ensign.ftech.net>
Pascal Costanza <········@web.de> writes:

> Matthias Blume wrote:
> > ················@eircom.net (Russell Wallace) writes:
> 
> >>But this does not make the dismissal of all other fields of endeavor
> >>as "ill-defined" anything other than a form of mental blindness.
> > I didn't dismiss them.  Their requirements simply cannot be part of a
> > statement of "correctness".  Picasso did not paint "correct"
> > paintings.  Mozart did not compose "correct" Music.  Microsoft does
> > not make a user interface which is aesthetically "correct", and
> > neither does Apple (even though I take the latter over the former any
> > day -- for reasons other than "correctness", at least as long as we
> > are talking aesthetics).
> 
> I think that "correctness" and "aesthetically pleasing" are not far
> from each other. For example, mathematicians strongly appreciate
> "elegant proofs". "Elegance" is definitely an aesthetical measure. And
> I am strongly convinced that "elegance" is also a suitable category
> for programs that we write. I believe that engineering terms like
> maintainability have some kind of elegance and beauty at its roots. I
> am convinced that fans of static type systems rave about the beauty
> and elegance of their type systems.
> 
> One important point is that aesthetics are apparently objectively
> measurable. See
> http://www.patternlanguage.com/archive/ieee/ieeetext.htm
[Quote elided]
> The fact that we have such a strong division between fans of static
> type systems and dynamic type systems is strange when you consider
> this. Obviously both sides see some very deep reasons for their
> respective positions, otherwise we wouldn't have such heated arguments.
> 
> It would be interesting to study where this division stems from.

I think it stems from a very deep divide between "technology" and
"science" (where the two are quite often confounded as being "the
same"). Technology is, deep down, "this works" and science is, deep
down, "this is truth". Quite often, "this is truth" leads to "this
works" and likewise, "this works" is often because "this is true".

But there are edge cases where all we can say with current
understanding is "this works, but we have no clue as to why". At that
point, the scientist gets uncomfortable and the technologist says "ah,
well, it works, we can bother about why another day".

In this divide, I'd class the fans of dynamic typing in the
"technology" camp and the fans of static typing in the "science"
camp. The fact that it works in practice is not good enough, since it
is Ugly and Demonstrably Not Correct. On the other side, we see "it
works, it has some edge cases where it breaks down horribly run-time,
but we can and should use the condition-handling system to cope with
that and it's not that bad anyway, so mostly we can get away with a
naive approach".

Looks like I actually managed to find a real-life use for my
"philosophy of science" class, even if it wasn't where I expected. :)

//Ingvar
-- 
Sysadmin is brave
Machine is running for now
Backup on Friday
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnjg9v$fre$1@f1node01.rhrz.uni-bonn.de>
Ingvar Mattsson wrote:

>>It would be interesting to study where this division stems from.
> 
> I think it stems from a very deep divide between "technology" and
> "science" (where the two are quite often confounded as being "the
> same"). Technology is, deep down, "this works" and science is, deep
> down, "this is truth". Quite often, "this is truth" leads to "this
> works" and likewise, "this works" is often because "this is true".
> 
> But there are edge cases where all we can say with current
> understanding "this works, but we have no clue as to why". At that
> point, the scientist gets uncomfortable and the technologist says "ah,
> well, it works, we can bother about why another day".
> 
> In this divide, I'd class the fans of dynamic typing in the
> "technology" camp and the fans of static typing in the "science"
> camp. The fact that it works in practice is not good enough, since it
> is Ugly and Demonstrably Not Correct. On the other side, we see "it
> works, it has some edge cases where it breaks down horribly run-time,
> but we can and should use the condition-handling system to cope with
> that and it's not that bad anyway, so mostly we can get away with a
> naive approach".

This is a very interesting thought. It looks like you are on to something.

However, I would object to the characterization "break down horribly at 
run-time". Dynamically typed languages don't let programs "break down 
horribly" (in the sense of producing core dumps). To the contrary, a 
condition-handling system can provide you features that might be 
_exactly_ what you need, and they do so at no cost whatsoever to the 
programmer.
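A minimal sketch of the point, using Python exceptions as a stand-in for a full condition system (Python lacks Common Lisp's restarts, so this only approximates it); `safe_add` and its recovery strategy are hypothetical examples:

```python
# A type mismatch at run time raises an exception object that the
# caller can inspect and handle -- the program does not crash with a
# core dump.
def add(a, b):
    return a + b

def safe_add(a, b):
    try:
        return add(a, b)
    except TypeError as err:           # the "condition" carries information
        print(f"recovering from: {err}")
        return add(int(a), int(b))     # one possible recovery strategy

print(safe_add(1, "2"))  # prints the recovery note, then 3
```

In Common Lisp the handler could additionally invoke a restart to resume the interrupted computation in place, which is the stronger feature Costanza is alluding to.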

Apart from that, your ideas seem to be a good place to start from.

> Looks like I actually managed to find a real-life use for my
> "philosophy of science" class, even if it wasn't where I expected. :)

Hmm, do you happen to have links or references? I would be interested to 
hear more about it.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Ingvar Mattsson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87vfqa3aur.fsf@gruk.tech.ensign.ftech.net>
Pascal Costanza <········@web.de> writes:

> Ingvar Mattsson wrote:
> 
> >>It would be interesting to study where this division stems from.
> > I think it stems from a very deep divide between "technology" and
> > "science" (where the two are quite often confounded as being "the
> > same"). Technology is, deep down, "this works" and science is, deep
> > down, "this is truth". Quite often, "this is truth" leads to "this
> > works" and likewise, "this works" is often because "this is true".
> > But there are edge cases where all we can say with current
> > understanding "this works, but we have no clue as to why". At that
> > point, the scientist gets uncomfortable and the technologist says "ah,
> > well, it works, we can bother about why another day".
> > In this divide, I'd class the fans of dynamic typing in the
> > "technology" camp and the fans of static typing in the "science"
> > camp. The fact that it works in practice is not good enough, since it
> > is Ugly and Demonstrably Not Correct. On the other side, we see "it
> > works, it has some edge cases where it breaks down horribly run-time,
> > but we can and should use the condition-handling system to cope with
> > that and it's not that bad anyway, so mostly we can get away with a
> > naive approach".
> 
> This is a very interesting thought. It looks like you are on to something.

Possibly. Rehashing stuff I heard in a lecture in 1996 doesn't really
make me feel I am on to something, apart from possibly seeing things
in a slightly different light. Who knows, it may even be a better
explanatory model than what has gone before.

> However, I would object to the characterization "break down horribly
> at run-time". Dynamically typed languages don't let programs "break
> down horribly" (in the sense of producing core dumps). To the
> contrary, a condition-handling system can provide you features that
> might be _exactly_ what you need, and they do so at no cost whatsoever
> to the programmer.

Well, that's why I said "we can and should use the condition-handling
system". Trying to cover all angles, you know.

> Apart from that, your ideas seem to be a good place to start from.
> 
> > Looks like I actually managed to find a real-life use for my
> > "philosophy of science" class, even if it wasn't where I expected. :)
> 
> Hmm, do you happen to have links or references? I would be interested
> to hear more about it.

Um. My lecturer was Ingemar Nordin, at Linköping University. A quick
check seems to indicate that the literature we used is only available
in Swedish (so "non-translated"). 

There seems to be some stuff on the web. I'll check if I still have
the compendium from the course I took and see if there are any
references in there. If I haven't mentioned anything in a week-or-so,
mail me and remind me?

//Ingvar
-- 
((lambda (x y l) (format nil "~{~a~}" (loop for a in x for b in y with c = t
if a collect (funcall (if c #'char-upcase #'char-downcase) (elt (elt l a) b))
else collect #\space if c do (setq c ())))) '(76 1 0 0 nil 0 nil 0 3 0 5 nil 0
0 12 0 0 0) '(2 2 16 8 nil 1 nil 2 4 16 2 nil 9 1 1 13 10 11) (sort (loop for
foo being the external-symbols in :cl collect (string-upcase foo)) #'string<))
From: Peter Seibel
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3d6cisgvh.fsf@javamonkey.com>
Ingvar Mattsson <······@cathouse.bofh.se> writes:

> I think it stems from a very deep divide between "technology" and
> "science" (where the two are quite often confounded as being "the
> same"). Technology is, deep down, "this works" and science is, deep
> down, "this is truth". Quite often, "this is truth" leads to "this
> works" and likewise, "this works" is often because "this is true".
> 
> But there are edge cases where all we can say with current
> understanding "this works, but we have no clue as to why". At that
> point, the scientist gets uncomfortable and the technologist says
> "ah, well, it works, we can bother about why another day".

As long as we're discussing edge cases, isn't there another edge case
where the scientists say, "This is true, therefore there *must* be a
way to make it work"? At which point the technologist gets
uncomfortable.

Given the nature of scientific knowledge--things that were once
*known* to be true are later discovered to be simply untrue or at
least inadequate descriptions of reality--it seems like careful
scientists would at least allow for the possibility that their "truth"
may not be.

-Peter

P.S. This is in no way intended as commentary on whether sufficiently
sophisticated theories of static typing are or are not actual "truth".
Just pointing out that there are ways the "scientific" approach can
break down as well.

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Ingvar Mattsson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87ekwx3c5e.fsf@gruk.tech.ensign.ftech.net>
Peter Seibel <·····@javamonkey.com> writes:

> Ingvar Mattsson <······@cathouse.bofh.se> writes:
> 
> > I think it stems from a very deep divide between "technology" and
> > "science" (where the two are quite often confounded as being "the
> > same"). Technology is, deep down, "this works" and science is, deep
> > down, "this is truth". Quite often, "this is truth" leads to "this
> > works" and likewise, "this works" is often because "this is true".
> > 
> > But there are edge cases where all we can say with current
> > understanding "this works, but we have no clue as to why". At that
> > point, the scientist gets uncomfortable and the technologist says
> > "ah, well, it works, we can bother about why another day".
> 
> As long as we're discussing edge cases, isn't there another edge case
> where the scientists say, "This is true, therefore there *must* be a
> way to make it work"? At which point the technologist gets
> uncomfortable.

Yes. That said, most brutally honest scientists will (when pressed)
admit that "this is true" is actually code for "this is true, to the
best of our knowledge". Even so, some things that are true are not of
much practical use, yet. Wasn't it Hardy who said "I am greatly
comforted that theoretical mathematics has no practical use"? And yet
it is the foundation that modern crypto systems are built on.

//Ingvar
-- 
Q: What do you call a Discworld admin?
A: Chelonius Monk
From: Steve Schafer
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <c86tpvko825q8irh3klv8v8uvo8mfcl1rc@4ax.com>
On 28 Oct 2003 10:11:09 +0000, Ingvar Mattsson <······@cathouse.bofh.se>
wrote:

>Yes. Saying that, most brutally honest scientist will (when pressed)
>admit that "this is true" is actually code for "this is true, to the
>best of our knowledge".

As a scientist who sometimes thinks about such things, I'd go even
further: "This is the best model that we currently have for..." (and
leave "truth" completely out of the whole thing).

This is perhaps the hardest thing for the non-scientifically-inclined to
comprehend: While science is the process of searching for The Truth, we
never really get there. Instead, we just build models that are better
and better approximations of The Truth. The fundamental axiom of the
scientific method is that we believe that our approach takes us, over
time, asymptotically closer to The Truth (hopefully monotonically, but
not always).

Consequently, statements such as "It's a scientific fact" don't have any
meaning. If something is a fact, it's a fact, period, and science has
nothing to do with it. On the other hand, this opens science up to
attack from the outside, with statements like "It's only a theory" from
those who do not (or choose not to) understand that in science, the word
"theory" has a very specific meaning that isn't quite the same as its
colloquial interpretation. In science, "theory" is actually a much
loftier concept than "fact": A fact is merely a small chunk of The
Truth, whereas a theory is the path by which we discover new chunks of
The Truth. To mix metaphors, the difference between fact and theory is
the difference between having a fish and knowing how to fish.

-Steve
From: Adam Warner
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <pan.2003.10.28.00.31.32.41773@consulting.net.nz>
Hi Ingvar Mattsson,

> I think it stems from a very deep divide between "technology" and
> "science" (where the two are quite often confounded as being "the
> same"). Technology is, deep down, "this works" and science is, deep
> down, "this is truth". Quite often, "this is truth" leads to "this
> works" and likewise, "this works" is often because "this is true".
> 
> But there are edge cases where all we can say with current understanding
> "this works, but we have no clue as to why". At that point, the
> scientist gets uncomfortable and the technologist says "ah, well, it
> works, we can bother about why another day".
> 
> In this divide, I'd class the fans of dynamic typing in the "technology"
> camp and the fans of static typing in the "science" camp. The fact that
> it works in practice is not good enough, since it is Ugly and
> Demonstrably Not Correct. On the other side, we see "it works, it has
> some edge cases where it breaks down horribly run-time, but we can and
> should use the condition-handling system to cope with that and it's not
> that bad anyway, so mostly we can get away with a naive approach".
> 
> Looks like I actually managed to find a real-life use for my "philosophy
> of science" class, even if it wasn't where I expected. :)

You might like to consider that I reach the opposite conclusion from your
helpful exposition. It is the truth that static type implementations limit
the creation of some programs that would otherwise run correctly but can't
be proved correct at compile time. The scientist gets extremely
uncomfortable when the technologist says this doesn't matter because the
compiler seems to work for most programs we have encountered to date.

Wanting to write generically correct programs first and being able to rely
upon simple truths like 1+ an integer returns a bigger integer are
fundamental scientific desires. It is the technologist (perhaps engineer
is also a suitable term) that is more inclined to break abstractions in
the interests of machine efficiency. Many static type systems elevate
these broken abstractions to a pivotal position in the language (e.g.
through the widespread use of machine-level integers).
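The broken abstraction Warner describes is easy to demonstrate. In the sketch below (Python, whose integers are arbitrary-precision), adding 1 always yields a bigger integer; the hypothetical helper `add1_int64` emulates the wraparound behavior of a signed 64-bit machine integer:

```python
# Python's integers preserve the abstraction: adding 1 always yields
# a bigger integer.
n = 2**63 - 1                 # largest signed 64-bit value
print(n + 1 > n)              # True: no overflow, the integer just grows

# Emulating a signed 64-bit machine integer (the representation many
# static type systems expose directly) shows the abstraction breaking:
def add1_int64(x):
    r = (x + 1) & (2**64 - 1)              # keep the low 64 bits
    return r - 2**64 if r >= 2**63 else r  # reinterpret as signed

print(add1_int64(n))  # -9223372036854775808: "one more" became smaller
```

Common Lisp's default of promoting fixnums to bignums on overflow keeps the generic truth intact, at some cost in machine efficiency, which is precisely the trade-off being discussed.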

I see Lisp at its heart having a desire to allow the creation of any
conceivable generically correct program. Yet many pragmatic decisions have
been made in the interests of machine-level efficiency. The type system is
designed so that generically correct programs can be created first and
type restrictions imposed to ensure the program meets more constrained
specifications or efficiency goals.

Furthermore it is the scientist that wants every programming paradigm at
their disposal whereas the technologist is quick to tell someone that a
particular paradigm works. Lisp as the prototypical example of a
dynamically typed language is closer to this multi-paradigm approach than
any other statically typed language (plus the static type systems getting
the most praise appear to be linked to single-paradigm languages).

Regards,
Adam
From: Brian McNamara!
Subject: Psychology of type systems (was Re: Python from Wise Guy's Viewpoint)
Date: 
Message-ID: <bnjesr$bku$1@news-int.gatech.edu>
Pascal Costanza <········@web.de> once said:
>I think that "correctness" and "aesthetically pleasing" are not far from 
>each other. For example, mathematicians strongly appreciate "elegant 
>proofs". "Elegance" is definitely an aesthetical measure. And I am 
...
>One important point is that aesthetics are apparently objectively 
>measurable. See http://www.patternlanguage.com/archive/ieee/ieeetext.htm
...
>The fact that we have such a strong division between fans of static type 
>systems and dynamic type systems is strange when you consider this. 
>Obviously both sides see some very deep reasons for their respective 
>positions, otherwise we wouldn't have such heated arguments.
>
>It would be interesting to study where this division stems from.

Indeed.

I speculate that it might be a matter of human adaptiveness to the
tools at hand.  When you are accustomed to programming with static
types, suddenly being forced to use a dynamically-typed language feels
like being thrown into the middle of the ocean without a life
preserver.  Conversely, my impression is that when you are accustomed
to programming with dynamic types, being forced to use a
statically-typed language feels like being put in a straitjacket and
thrown into a padded room.

There is an interesting anecdote, which I think is printed in "History
of Programming Languages II" (I am recalling it from memory now, as I
don't have the book at hand) where Bjarne Stroustrup talks about early
design decisions for C++.  C++ is statically-typed (much more so than
C), but when experimenting with new language features, there was some
new feature that (for whatever reason) was not being statically
checked.  Despite the fact that this non-checked aspect of this feature
was well-documented, Bjarne found that good programmers became
completely incapable of finding bugs in their program stemming from the
lack of a static typecheck with this feature.  The moral seemed to be
that C++ programmers had become adapted to static typing, to the point
that they relied on it completely--when the tool didn't do it for them,
they were incapable of locating the problem themselves (this, despite
the fact that, a few years earlier, these same programmers had all been
using untyped languages, where they had to locate such errors "by hand"
all the time).

That particular anecdote was comparing "static typing" and "untyped",
but it seems that the same kind of thing still happens to a lesser
degree between static and dynamic typing.  For whatever reason, people
don't seem to be easily capable of "migrating" the skill sets used in
one setting to those needed in the other.

-- 
 Brian M. McNamara   ······@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
From: Pascal Costanza
Subject: Re: Psychology of type systems (was Re: Python from Wise Guy's Viewpoint)
Date: 
Message-ID: <bnjgrm$frg$1@f1node01.rhrz.uni-bonn.de>
Brian McNamara! wrote:

>>It would be interesting to study where this division stems from.
> 
> Indeed.
> 
> I speculate that it might be a matter of human adaptiveness to the
> tools at hand.  When you are accustomed to programming with static
> types, suddenly being forced to use a dynamically-typed language feels
> like being thrown into the middle of the ocean without a life
> preserver.  Conversely, my impression is that when you are accustomed
> to programming with dynamic types, being forced to use a
> statically-typed language feels like being put in a straitjacket and
> thrown into a padded room.

Those are telling analogies, and they partly explain the strong reactions 
on both sides.

In an argument like we currently have in this thread, the arguments of 
the respective other party then sound like this:

"Being thrown into the middle of the ocean without a life preserver 
might be exactly what you want, and more often than you might expect."

"Being put in a straitjacket and thrown into a padded room still 
allows you to do 95% of the things you want to do, and in those rare 
circumstances in which this is not possible, you can still find a 
different approach that still works without the need to get rid of the 
straightjacket."

Both statements, put like that, sound clearly ridiculous. And neither 
statement correctly rephrases what has actually been said.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnaptn$v7c$2@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:

> I know.  The point is that one can never say the program is "correct"
> with respect to the requirement of having the typesetting be
> aesthetical.  One can, maybe, make statements like "the majority of
> our customers seems to be satisfied with the results".  But that's not
> what "correctness" is about.

On the other hand, the only important goal is that customers are 
satisfied with the results, no matter what program you write. If you do 
more than that, you are wasting resources.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3d6cm3ams.fsf@rigel.goldenthreadtech.com>
Matthias Blume <····@my.address.elsewhere> writes:

> ·················@jpl.nasa.gov (Erann Gat) writes:
> 
> > It is perfectly well defined, it's just defined in terms that are not
> > logical but rather psychological.
> 
> I don't think it is well-defined at all.  Ask n people and you get n
> answers.  That's not "well-defined".

What makes you think that this would be true (n people --> n totally
different answers)?  There's plenty of evidence that suggests you're
just plain wrong here.


> > There are people who make their living (indeed an entire industry
> > devoted to) solving this problem.
> 
> I know.  The point is that one can never say the program is "correct"
> with respect to the requirement of having the typesetting be
> aesthetical.

Why not?  Presumably because you have some very narrow notion of "correct".


>  One can, maybe, make statements like "the majority of our customers
> seems to be satisfied with the results".  But that's not what
> "correctness" is about.

Ah, you _do_ have a very narrow (mostly useless) notion of "correct".


> > You will obviously not be among them.
> 
> Indeed, I will not.  But that's more because I'm not very good at
> arts.

I don't see how that has anything much to do with it.


/Jon
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3y8vb36vp.fsf@rigel.goldenthreadtech.com>
Matthias Blume <····@my.address.elsewhere> writes:

> ·················@jpl.nasa.gov (Erann Gat) writes:
> 
> > P.S.  Suppose your task is to write a typesetting program and one of the
> > requirements is that the output look aesthetically pleasing.  How would
> > you go about proving that your code is correct?
> 
> Obviously, this task is not well-defined, so first I would ask the
> person who requested the above to specify what he/she means by
> "aesthetically pleasing" in concrete, well-defined terms. If I get a
> good answer, I work with that.  If I don't, I would quit the job.

Hmmm.  Maybe I actually did have a proof in my head that you were
clueless.  You've even done the work here of giving a good first draft
of writing it out for me.

/Jon
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m265if74fa.fsf@hanabi-air.shimizu.blume>
·········@rcn.com (Jon S. Anthony) writes:

> Hmmm.  Maybe I actually did have a proof in my head that you were
> clueless.  You've even done the work here of giving a good first draft
> of writing it out for me.

Glad to see some really coherent, intelligent contributions to this
discussion.

Thanks!
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m38yna3a7v.fsf@rigel.goldenthreadtech.com>
Matthias Blume <····@my.address.elsewhere> writes:

> ·········@rcn.com (Jon S. Anthony) writes:
> 
> > Hmmm.  Maybe I actually did have a proof in my head that you were
> > clueless.  You've even done the work here of giving a good first draft
> > of writing it out for me.
> 
> Glad to see some really coherent, intelligent contributions to this
> discussion.

Glad to see you are beginning to get the point.

> Thanks!

You're welcome.

/Jon
From: Thomas F. Burdick
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <xcvbrs7qil3.fsf@famine.OCF.Berkeley.EDU>
·················@jpl.nasa.gov (Erann Gat) writes:

> And IMO a perfectly legitimate answer to that question is, "Because I ran
> it and it worked."  To which you will no doubt counter: but how do you
> know that it will work the *next* time you run it, or if you run it under
> different circumstances than those under which you tested it?  To which my
> reply will be: how do you know that the exhibited proof is correct?  Oh,
> you're going to run an automatic proof checker on it?  How do you know
> that the proof checker is correct?
[ snip ]

That last one's easy -- there are nice logical systems for which proof
checkers are *really* easy to write.  So you can bootstrap your way
up to a useful one, with the base case being a hand-generated and
hand-checked proof.  The *far* more difficult question is: how do you
know that your logical specification means what you think it does?
That's essentially the initial question: how do you know your program
is correct?  "Because I ran it and it worked."
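As a toy illustration of how small such a checker can be (a sketch of mine, not
part of the original post): a checker for Hilbert-style derivations whose only
inference rule is modus ponens, in Python.

```python
def follows_by_mp(formula, derived):
    # Modus ponens: from p and p -> formula, conclude formula.
    # An implication p -> q is represented as the tuple ('->', p, q).
    return any(('->', p, formula) in derived for p in derived)

def check_proof(axioms, proof):
    # A derivation is valid if every line is an axiom or follows
    # from two earlier lines by modus ponens.
    derived = []
    for formula in proof:
        if formula not in axioms and not follows_by_mp(formula, derived):
            return False
        derived.append(formula)
    return True

axioms = ['p', ('->', 'p', 'q')]
print(check_proof(axioms, ['p', ('->', 'p', 'q'), 'q']))  # a valid derivation of q
print(check_proof(axioms, ['q']))                         # q asserted with no justification
```

As the post notes, the hard part is not checking derivations like these but
knowing that the axioms and the encoding mean what you intend.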

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m33cdj4lvk.fsf@rigel.goldenthreadtech.com>
Matthias Blume <····@my.address.elsewhere> writes:

> ······················@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <··············@tti5.uchicago.edu>, Matthias Blume
> > <····@my.address.elsewhere> wrote:
> > 
> > > Joe Marshall <···@ccs.neu.edu> writes:
> > > 
...
> > > That's a little draconian.  When I write programs I often have no clue
> > > > as to what I am doing, let alone a proof that it is correct!
> > > 
> > > You're fired.
> > 
> > Really?
> 
> Relax.  This was a joke.  (Wasn't that obvious?)
> 
> > While you're looking for someone to replace this person you just fired
> > (and rejecting all the applicants who aren't trained in how to produce
> > formal proofs of correctness) your competition is iteratively testing and
> > refining a product which, while it doesn't have a proof of correctness,
> > works well enough from the customer's point of view.  So your competition
> > gets the business because they have something to ship and you don't.  The
> > best you can offer is, "Wait!  Don't buy their stuff.  It might be
> > broken.  Just wait until we get our HR act together and you can buy *our*
> > product which we can *prove* doesn't have any bugs."
> 
> That's not what I said.  I said that the programmer has a proof in her
> head. (At least she thinks she does.)  My point was that since she has

The problem for you here is that this makes Erann's interpretation
downright generous to your position.  Really.  I'd say no one
(including you) has a "proof" in their heads in such circumstances.


> a proof, the proof obviously *exists* and *could* be written down and
> *could* be statically verified if one only went to the trouble of
> doing so.  (And again, even this is obviously much easier said than
> done.)

You can't be serious.  Even if we take your premise as true (that she
_thinks_ she has a proof) this in absolutely no way implies that she
does and even less that such a proof exists.  Let's see... I _think_ I
have a proof (in my head) that you are completely clueless wrt this
topic, therefore such a proof "obviously" exists and could be written
down.  Yep, makes real good sense.

/Jon
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m2vfqf5oep.fsf@hanabi-air.shimizu.blume>
·········@rcn.com (Jon S. Anthony) writes:

> You can't be serious.  Even we take your premise as true (that she
> _thinks_ she has a proof) this in absolutely no way implies that she
> does and even less that such a proof exists.  Let's see... I _think_ I
> have a proof (in my head) that you are completely clueless wrt this
> topic, therefore such a proof "obviously" exists and could be written
> down.  Yep, makes real good sense.

No, my claim is: For every correct program written by a human there is
a correctness proof.  In other words, I find it unlikely that someone
writes a correct program, but there actually is no such proof.  People
do reason about the programs they write, and usually they are not too
far off from the truth -- especially if they actually got the code
right.

Your attempt at insulting me is cute, but it has little to do with
what I said.

Matthias
From: Ray Blaak
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <un0brm9az.fsf@STRIPCAPStelus.net>
Matthias Blume <····@my.address.elsewhere> writes:
> No, my claim is: For every correct program written by a human there is
> a correctness proof.  In other words, I find it unlikely that someone
> writes a correct program, but there actually is no such proof.  

There is room for Goedel here somewhere. [Hmm, see "Gödel on Net" at
http://www.sm.luth.se/~torkel/eget/godel.html]

Formal proofs can only be done relative to a particular formalism. Within a
given (sufficiently complex) formalism it is impossible to prove all
statements that can be expressed in it.

Given that humans can bang out just about any possible piece of crap on the
keyboard if they are patient enough, it certainly follows that there are
programs that humans can write that cannot be proved correct.

Even if you restrict yourself to saying all *correct* programs humans can
write can be proved correct, you are on weak grounds. Formal proofs are only
meaningful in the context of a particular formalism. It is certainly conceivable
that humans could consider a program correct according to informal standards
("hey it's working for me!", "hey that's pretty!"). It does not follow that
the formalisms currently at our disposal are rich enough to express the
correctness formally, however.

Humans work with a combination of logic, intuition, observation and caffeine.
What "feels" like a reasonable proof in one's head is unlikely to be easily
expressed as a formal proof.

> People do reason about the programs they write, and usually they are not too
> far off from the truth -- especially if they actually got the code right.

Reasonable enough. But this does not mean that there is a correctness *proof*
in the formal sense.

There is also the whole notion of a program being correct in one situation but
incorrect in another (e.g. using tried and true Ariane 4 software in the
Ariane 5 rocket), so even if you actually took the trouble of doing a formal
proof, you very quickly have to engage in informal thinking to adapt it to new
situations.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m2r81268qk.fsf@hanabi-air.shimizu.blume>
Ray Blaak <········@STRIPCAPStelus.net> writes:

> Given that humans can bang out just about any possible piece of crap on the
> keyboard if they are patient enough, it certainly follows that there are
> programs that humans can write that cannot be proved correct.

I know.  The question was whether that actually happens in practice.
Typical "Gödel" statements tend to be pretty contrived.

(The reason why I _believe_ this (yes, I do not -- and cannot, see above --
have a proof for that belief!) is that people reason (however
informally) about the correctness of the programs they write while
they are writing them.  That was my whole point.  I am actually pretty
amazed that there is such resistance to this idea.)

Matthias

PS: Anyway, enough time wasted.  Over and out from me.
From: Aatu Koskensilta
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <%d9nb.138$EL7.84@reader1.news.jippii.net>
Ray Blaak wrote:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
>>No, my claim is: For every correct program written by a human there is
>>a correctness proof.  In other words, I find it unlikely that someone
>>writes a correct program, but there actually is no such proof.  
> 
> 
> There is room for Goedel here somewhere. [Hmm, see "Gödel on Net" at
> http://www.sm.luth.se/~torkel/eget/godel.html]
> 
> Formal proofs can only be done relative to a particular formalism. Within a
> given (sufficiently complex) formalism it is impossible to prove all
> statements that can be expressed in it.

And this is a good thing. If you could prove all statements that can be 
expressed in the formalism, you'd have an inconsistent system. Of 
course, what you meant to say is that there are - relative to a standard 
semantics - true statements which can't be proved.

There's another catch here: assuming human beings will only write a 
finite number of programs, there is a sound formalism in which all the 
correct programs can be proved correct and all incorrect proved incorrect.

Also, if we study formalisms that are stronger than Peano Arithmetic, 
say some subsystem of second order arithmetic, no statement not 
specifically constructed to demonstrate incompleteness is known to be 
undecided. For PA we have the Paris-Harrington theorem.

> Given that humans can bang out just about any possible piece of crap on the
> keyboard if they are patient enough, it certainly follows that there are
> programs that humans can write that cannot be proved correct.

Only if human beings produce an infinite number of programs.

> Even if you restrict yourself to saying all *correct* programs humans can
> write can be proved correct, you are on weak grounds. Formal proofs are only
> meaningful in the context of a particular formalism. It is certainly conceivable
> that humans could consider a program correct according to informal standards
> ("hey it's working for me!", "hey that's pretty!"). It does not follow that
> the formalisms currently at our disposal are rich enough to express the
> correctness formally, however.

Then we use richer formalisms in which hopefully these criteria - as far 
as they are at all desirable - can be expressed and facts about programs 
proved. For example, we could add to Peano arithmetic various sorts of 
reflection schemata (which prove the consistency of PA), and get a 
stronger theory, and then repeat this for the new formalism so obtained 
and so on (this produces axiomatisable theories).

> Humans work with a combination of logic, intuition, observation and caffeine.
> What "feels" like a reasonable proof in one's head is unlikely to be easily
> expressed as a formal proof.

Obviously. You don't see many mathematicians producing formal proofs, 
either.

>>People do reason about the programs they write, and usually they are not too
>>far off from the truth -- especially if they actually got the code right.
> 
> 
> Reasonable enough. But this does not mean that there is a correctness *proof*
> in the formal sense.

There trivially is. If the program is correct, then there is a sound 
formalism with this bare fact as its sole axiom, which proves the 
correctness. What use this would be, I can't fathom.

> There is also the whole notion of a program being correct in one situation but
> incorrect in another (e.g. using tried and true Ariane 4 software in the
> Ariane 5 rocket), so even if you actually took the trouble of doing a formal
> proof, you very quickly have to engage in informal thinking to adapt it to new
> situations.

True. A program is correct relative to some specification as to how it 
should behave. Basically we have a situation like this (in an ideal case):

  1) We have a number of extensional properties the program should have,
     e.g. in case of a function we have various sort of statements
     which should be true about the function
  2) We have an algorithm, which we wish to prove calculates a function
     for which the axioms of 1) come out true
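As a concrete sketch of this two-part picture (my illustration in Python, not
part of the original post), the specification can be written as checkable
extensional properties and a candidate algorithm checked against them:

```python
def meets_spec(sort_fn, xs):
    # 1) Extensional properties the result should have: it is in
    #    nondecreasing order and is a permutation of the input.
    ys = sort_fn(xs)
    return all(a <= b for a, b in zip(ys, ys[1:])) and sorted(xs) == sorted(ys)

def insertion_sort(xs):
    # 2) The algorithm we wish to show computes a function for which
    #    the properties in 1) come out true.
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

assert meets_spec(insertion_sort, [3, 1, 4, 1, 5])
```

Checks like this only exercise the specification on particular inputs; a proof
would have to establish the properties for all inputs, and whether these are
the *right* properties remains an informal judgement.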

An algorithm or program can be correct relative to some specification 
and incorrect relative to some other. Informal thinking is required at 
some level, since there is no formal system (at least currently, and not 
in the foreseeable future) which would determine what human beings 
actually want. For example, it's intuitively obvious that a spacecraft 
should not explode. There are ways of formalising this sort of thing in 
modal logic, but none provide replacement for human insight.

-- 
Aatu Koskensilta (················@xortec.fi)

"Wovon man nicht sprechen kann, darüber muss man schweigen"
  - Ludwig Wittgenstein, Tractatus Logico-Philosophicus
From: Ray Blaak
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uvfq96fi5.fsf@STRIPCAPStelus.net>
Aatu Koskensilta <················@xortec.fi> writes:
> Ray Blaak wrote:
> > Formal proofs can only be done relative to a particular formalism. Within a
> > given (sufficiently complex) formalism it is impossible to prove all
> > statements that can be expressed in it.
> 
> And this is a good thing. If you could prove all statements that can be 
> expressed in the formalism, you'd have an inconsistent system. Of 
> course, what you meant to say is that there are - relative to a standard 
> semantics - true statements which can't be proved.

Since I am trying to be conservative and avoid the philosophical debate over truth,
I meant exactly what I said and no more: there are statements that can't be
proved in a given formalism.

For true statements that can't be proved, how do you know they are true? Were
they proved in some other formalism, perhaps a more powerful one? True according
to some oracle? According to human intuitive guesses? Platonically in an
absolute sense?

> There's another catch here: assuming human beings will only write a 
> finite number of programs, there is a sound formalism in which all the 
> correct programs can be proved correct and all incorrect proved incorrect.

The notion of "a correct program necessarily has a proof" is actually rather
vacuous. In the context of being formally correct, a correct program is only
correct if it can be shown that the statement describing the program can be
ultimately derived from the formalism's axioms. But that derivation is exactly
what a formal proof is.

That is, when doing formal proofs, programs are not "absolutely correct", but
only "formally correct".

In other words, "a correct program necessarily has a proof" means exactly "a
program with a proof necessarily has a proof". Not a very useful fact.

When a program is being initially written, the programmer does not know it is
correct, so even if there is a proof they still have to take the trouble to
define/discover it.

> > Given that humans can bang out just about any possible piece of crap on the
> > keyboard if they are patient enough, it certainly follows that there are
> > programs that humans can write that cannot be proved correct.
> 
> Only if human beings produce an infinite number of programs.

"This statement is false" is not provable in your favourite formalism, and I
just typed it. In Lisp: (defun Q () (eq nil (Q)))

> > It does not follow that the formalisms currently at our disposal are
> > rich enough to express the correctness formally, however.
> 
> Then we use richer formalisms in which hopefully these criteria - as far 
> as they are at all desirable - can be expressed and facts about programs 
> proved. For example, we could add to Peano arithmetic various sorts of 
> reflection schemata (which prove the consistency of PA), and get a 
> stronger theory, and then repeat this for the new formalism so obtained 
> and so on (this produces axiomatisable theories).

That we do this is a useful human activity and research into the kinds of
formalisms we need.

The axioms we introduce, however, are subject to considerable debate
over whether they correspond to things we observe, need, and can
implement on physical devices.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2810030728460001@192.168.1.51>
In article <·············@STRIPCAPStelus.net>, Ray Blaak
<········@STRIPCAPStelus.net> wrote:

> > > Given that humans can bang out just about any possible piece of crap on the
> > > keyboard if they are patient enough, it certainly follows that there are
> > > programs that humans can write that cannot be proved correct.
> > 
> > Only if human beings produce an infinite number of programs.
> 
> "This statement is false" is not provable in your favourite formalism, and I
> just typed it. In Lisp: (defun Q () (eq nil (Q)))

Or:

(defun epimenides () (if (epimenides) nil t))

This one is also interesting:

(defun self-affirming () (if (self-affirming) t nil))

or the more succinct versions:

(defun ep () (not (ep)))
(defun sa () (sa))

These are of course infinite loops in Lisp, but they might not be in a
lazy language like Haskell.  Hm, I wonder if these would typecheck.

E.
From: Philip Armstrong
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <omk271-51r.ln1@trigger.kantaka.co.uk>
In article <··································@192.168.1.51>,
Erann Gat <·················@jpl.nasa.gov> wrote:
>These are of course infinite loops in Lisp, but they might not be in a
>lazy language like Haskell.  Hm, I wonder if these would typecheck.

Yup. hugs segfaults, whilst ghc6 says "*** Exception: <<loop>>"
if you try and invoke the ep function though :)

Phil

-- 
http://www.kantaka.co.uk/ .oOo. public key: http://www.kantaka.co.uk/gpg.txt
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <llr4by8r.fsf@comcast.net>
Philip Armstrong <····@armstrong.invalid> writes:

> In article <··································@192.168.1.51>,
> Erann Gat <·················@jpl.nasa.gov> wrote:
>>These are of course infinite loops in Lisp, but they might not be in a
>>lazy language like Haskell.  Hm, I wonder if these would typecheck.
>
> Yup. hugs segfaults, whilst ghc6 says "*** Exception: <<loop>>"
> if you try and invoke the ep function though :)

I thought code that typechecks would `never crash'?
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-2810031830550001@192.168.1.51>
In article <············@comcast.net>, ·············@comcast.net wrote:

> Philip Armstrong <····@armstrong.invalid> writes:
> 
> > In article <··································@192.168.1.51>,
> > Erann Gat <·················@jpl.nasa.gov> wrote:
> >>These are of course infinite loops in Lisp, but they might not be in a
> >>lazy language like Haskell.  Hm, I wonder if these would typecheck.
> >
> > Yup. hugs segfaults, whilst ghc6 says "*** Exception: <<loop>>"
> > if you try and invoke the ep function though :)
> 
> I thought code that typechecks would `never crash'?

That's a very good point.  If Hugs can be made to segfault something is
seriously wrong (particularly since Hugs says it's specifically designed
for teaching).

E.
From: Adrian Hey
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnnpjf$hbj$1$8300dec7@news.demon.co.uk>
Erann Gat wrote:

> In article <············@comcast.net>, ·············@comcast.net wrote:
> 
>> Philip Armstrong <····@armstrong.invalid> writes:
>> 
>> > In article <··································@192.168.1.51>,
>> > Erann Gat <·················@jpl.nasa.gov> wrote:
>> >>These are of course infinite loops in Lisp, but they might not be in a
>> >>lazy language like Haskell.  Hm, I wonder if these would typecheck.
>> >
>> > Yup. hugs segfaults, whilst ghc6 says "*** Exception: <<loop>>"
>> > if you try and invoke the ep function though :)
>> 
>> I thought code that typechecks would `never crash'?
> 
> That's a very good point.  If Hugs can be made to segfault something is
> seriously wrong (particularly since Hugs says it's specifically designed
> for teaching).

Personally I think it's an utterly trivial point, though the fact that
Hugs crashes rather than exits gracefully if it runs out of stack or
heap would seem to be a bit of a wart in its implementation, I must
admit.

BTW, for the record, even static typing fans don't claim to have
solved the halting problem or offer guarantees that any program
will run in the finite memory available on any real computer.

Regards
--
Adrian Hey 
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-2910030829040001@192.168.1.51>
In article <·····················@news.demon.co.uk>, Adrian Hey
<····@NoSpicedHam.iee.org> wrote:

> Erann Gat wrote:
> 
> > In article <············@comcast.net>, ·············@comcast.net wrote:
> > 
> >> Philip Armstrong <····@armstrong.invalid> writes:
> >> 
> >> > In article <··································@192.168.1.51>,
> >> > Erann Gat <·················@jpl.nasa.gov> wrote:
> >> >>These are of course infinite loops in Lisp, but they might not be in a
> >> >>lazy language like Haskell.  Hm, I wonder if these would typecheck.
> >> >
> >> > Yup. hugs segfaults, whilst ghc6 says "*** Exception: <<loop>>"
> >> > if you try and invoke the ep function though :)
> >> 
> >> I thought code that typechecks would `never crash'?
> > 
> > That's a very good point.  If Hugs can be made to segfault something is
> > seriously wrong (particularly since Hugs says it's specifically designed
> > for teaching).
> 
> Personally I think it's an utterly trivial point, though the fact that
> Hugs crashes rather than exits gracefully if it runs out of stack or
> heap would seem to be a bit of a wart in its implementation, I must
> admit.
>
> BTW, for the record, even static typing fans don't claim to have
> solved the halting problem or offer guarantees that any program
> will run in the finite memory available on any real computer.

Neither have Lisp fans, but in Lisp (at least in MCL) I can do this:

? (defun ep () (not (ep)))
EP
? (handler-case (ep) (serious-condition () "Sorry, not enough memory"))
"Sorry, not enough memory"
? 
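(As an aside not in the original post: the same recovery can be sketched in
Python, where a runaway recursion signals an ordinary, catchable
RecursionError rather than crashing the system.)

```python
import sys

def ep():
    # The same self-referential definition as (defun ep () (not (ep))):
    # in a strict language it recurses without end.
    return not ep()

sys.setrecursionlimit(1000)  # keep the demonstration quick

# Handle the out-of-stack condition in-language, much as handler-case
# does in the MCL transcript above.
try:
    result = ep()
except RecursionError:
    result = "Sorry, not enough stack"

print(result)
```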

Whether the absence of this capability is a "wart" or a "serious problem"
is, I suppose, a matter of opinion.  In my opinion, a core dump is the
moral equivalent of having the wheels fall off your car.  That might have
been acceptable in 1903, but it isn't in 2003.  Likewise for core dumps,
especially in a language that touts itself as "safe" in some sense of the
word.  IMHO of course.

E.
From: Adrian Hey
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnqc3g$5im$1$8300dec7@news.demon.co.uk>
Erann Gat wrote:

> Whether the absence of this capability is a "wart" or a "serious problem"
> is, I suppose, a matter of opinion.  In my opinion, a core dump is the
> moral equivalent of having the wheels fall off your car.  That might have
> been acceptable in 1903, but it isn't in 2003.  Likewise for core dumps,
> especially in a language that touts itself as "safe" in some sense of the
> word.  IMHO of course.

Sure, but this isn't really a language issue, or even a static typing
issue. It's just 1 implementation that would appear to have a broken
rts (not that I'm speaking from personal experience, I don't use Hugs).

Regards
--
Adrian Hey
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-3010030717040001@192.168.1.51>
In article <·····················@news.demon.co.uk>, Adrian Hey
<····@NoSpicedHam.iee.org> wrote:

> Erann Gat wrote:
> 
> > Whether the absence of this capability is a "wart" or a "serious problem"
> > is, I suppose, a matter of opinion.  In my opinion, a core dump is the
> > moral equivalent of having the wheels fall off your car.  That might have
> > been acceptable in 1903, but it isn't in 2003.  Likewise for core dumps,
> > especially in a language that touts itself as "safe" in some sense of the
> > word.  IMHO of course.
> 
> Sure, but this isn't really a language issue, or even a static typing
> issue. It's just 1 implementation that would appear to have a broken
> rts (not that I'm speaking from personal experience, I don't use Hugs).

But the mere existence of such an implementation (and the absence of loud
condemnation of that implementation from the static typing community)
completely undermines the central claim of static typing, which is that it
can allow you to avoid run-time errors.  I obviously cannot rely on peer
review from the static typing community to ensure that such bugs are not
present, so the burden is on me to do so.  That leaves me in exactly the
same boat as with any other programming language, namely, that I have to
test my code -- all of it -- and I can't really rely on the compiler for
anything.

E.
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnrfc6$os$1@news.oberberg.net>
Erann Gat wrote:

> Adrian Hey <····@NoSpicedHam.iee.org> wrote:
> 
>>Erann Gat wrote:
>>
>>>Whether the absence of this capability is a "wart" or a "serious problem"
>>>is, I suppose, a matter of opinion.  In my opinion, a core dump is the
>>>moral equivalent of having the wheels fall off your car.  That might have
>>>been acceptable in 1903, but it isn't in 2003.  Likewise for core dumps,
>>>especially in a language that touts itself as "safe" in some sense of the
>>>word.  IMHO of course.
>>
>>Sure, but this isn't really a language issue, or even a static typing
>>issue. It's just 1 implementation that would appear to have a broken
>>rts (not that I'm speaking from personal experience, I don't use Hugs).
> 
> But the mere existence of such an implementation (and the absence of loud
> condemnation of that implementation from the static typing community)
> completely undermines the central claim of static typing, which is that it
> can allow you to avoid run-time errors.

Actually, Hugs has been condemned more than once, last time within 
another subthread.
The condemnation isn't loud because loud condemnation would immediately 
trigger an invitation to create a better interpreter :-)

BTW while Haskell is making things easy on the programmer, it 
certainly doesn't make them easy for the implementor. Making a better 
interpreter would be a daunting task, even for somebody who has a good 
idea of what exactly he wants to improve.

The saving grace of Hugs is that it is supposed to install 
out-of-the-box on any operating system.
This is also what makes Hugs prominent.
Now that GHC installs easily on Windows boxes, it's possible that GHCi 
is going to displace Hugs from its public predominance. At least for me, 
that's already the case: I downloaded both Hugs and GHC, but I didn't 
install Hugs because GHCi already worked for me.

Regards,
Jo
From: Adrian Hey
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bns0qb$49p$1$8302bc10@news.demon.co.uk>
Erann Gat wrote:

> But the mere existence of such an implementation (and the absence of loud
> condemnation of that implementation from the static typing community)
> completely undermines the central claim of static typing, which is that it
> can allow you to avoid run-time errors.  I obviously cannot rely on peer
> review from the static typing community to ensure that such bugs are not
> present, so the burden is on me to do so.  That leaves me in exactly the
> same boat as with any other programming language, namely, that I have to
> test my code -- all of it -- and I can't really rely on the compiler for
> anything.

I really don't know what to say in response to this nonsense. Your
position is that the mere existence of one broken implementation is 
sufficient to destroy the credibility of a large body of theory and
all other non-broken implementations. Let us hope that no mere mortal
ever dares to exploit this theory again.

Get real.

Regards
--
Adrian Hey
    
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-3010031415360001@k-137-79-50-101.jpl.nasa.gov>
In article <·····················@news.demon.co.uk>, Adrian Hey
<····@NoSpicedHam.iee.org> wrote:

> Erann Gat wrote:
> 
> > But the mere existence of such an implementation (and the absence of loud
> > condemnation of that implementation from the static typing community)
> > completely undermines the central claim of static typing, which is that it
> > can allow you to avoid run-time errors.  I obviously cannot rely on peer
> > review from the static typing community to ensure that such bugs are not
> > present, so the burden is on me to do so.  That leaves me in exactly the
> > same boat as with any other programming language, namely, that I have to
> > test my code -- all of it -- and I can't really rely on the compiler for
> > anything.
> 
> I really don't know what to say in response to this nonsense. Your
> position is that the mere existence of one broken implementation is 
> sufficient to destroy the credibility of a large body of theory and
> all other non-broken implementations.

That's right.  That is exactly my position.  It's not that buggy
implementations of theories render those theories suspect in general. 
Rather, it is a feature of this particular theory that a buggy
implementation renders it suspect.  That feature is that this particular
theory makes the claim that if one applies the theory then certain classes
of bugs are impossible.  A buggy implementation of the theory renders the
theory suspect because either: 1) the theory is wrong, 2) the theory is
correct but an implementor chose not to use it (in which case one must
wonder at the motivation for this design decision, particularly in an
implementation which specifically bills itself as targeted to newcomers,
on whom making a good first impression is very important) or 3) the theory
is correct, the implementor used it, and the result contained bugs
regardless, in which case that is an indication that the theory is too
complex (or something) to use reliably.

The situation is not unlike the infamous Pentium FDIV bug.  The mere existence
of one buggy floating point coprocessor shows that you cannot rely ab
initio on any floating point coprocessor to give you the right answers.

> Let us hope that no mere mortal
> ever dares to exploit this theory again.

<shrug> Let us hope that no one lets down their guard in the testing
department because they believe that static typing makes it impossible to
have certain classes of run-time errors.

E.
From: Darius
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031031000528.000047b7.ddarius@hotpop.com>
On Thu, 30 Oct 2003 14:15:36 -0800
···@jpl.nasa.gov (Erann Gat) wrote:

> In article <·····················@news.demon.co.uk>, Adrian Hey
> <····@NoSpicedHam.iee.org> wrote:
> 
> > Erann Gat wrote:
> > 
> > > But the mere existence of such an implementation (and the absence
> > > of loud condemnation of that implementation from the static typing
> > > community) completely undermines the central claim of static
> > > typing, which is that it can allow you to avoid run-time errors. 
> > > I obviously cannot rely on peer review from the static typing
> > > community to ensure that such bugs are not present, so the burden
> > > is on me to do so.  That leaves me in exactly the same boat as
> > > with any other programming language, namely, that I have to test
> > > my code -- all of it -- and I can't really rely on the compiler
> > > for anything.
> > 
> > I really don't know what to say in response to this nonsense. Your
> > position is that the mere existence of one broken implementation is 
> > sufficient to destroy the credibility of a large body of theory and
> > all other non-broken implementations.
> 
> That's right.  That is exactly my position.  It's not that buggy
> implementations of theories render those theories suspect in general. 
> Rather, it is a feature of this particular theory that a buggy
> implementation renders it suspect.  That feature is that this
> particular theory makes the claim that if one applies the theory then
> certain classes of bugs are impossible.  A buggy implementation of the
> theory renders the theory suspect because either: 1) the theory is
> wrong, 2) the theory is correct but an implementor chose not to use it
> (in which case one must wonder at the motivation for this design
> decision, particularly in an implementation which specifically bills
> itself as targeted to newcomers, on whom making a good first
> impression is very important) or 3) the theory is correct, the
> implementor used it, and the result contained bugs regardless, in
> which case that is an indication that the theory is too complex (or
> something) to use reliably.
> 
> The situation is not unlike the infamous Pentium FDIV bug.  The mere
> existence of one buggy floating point coprocessor shows that you
> cannot rely ab initio on any floating point coprocessor to give you
> the right answers.
> 
> > Let us hope that no mere mortal
> > ever dares to exploit this theory again.
> 
> <shrug> Let us hope that no one lets down their guard in the testing
> department because they believe that static typing makes it impossible
> to have certain classes of run-time errors.

Yes.  -Type- errors.  What makes you think that that was related to a
type error?  No one has claimed static typing gets rid of all
run-time errors... well, except for dynamic typers.  Also, as others
have mentioned, even before this post, Hugs is written in C. C is weakly
typed.  I doubt anyone, static or dynamic typer, believes a weakly typed
system, static or dynamic, is better than a strongly typed one.  So...
I'm assuming no Lisp program has -ever- segfaulted (or
equivalent). What, they have? Heck, at least C doesn't claim to be a
strongly typed language, I guess we can't trust anything out of the Lisp
community. This situation is not unlike the infamous Pentium FDIV bug.
Let us hope no one lets down their guard and expects the Lisp runtime
system to throw an exception in any erroneous situation.  Oh well, I
guess this means everyone will have to implement their own typing just
like everyone implements their own floating point arithmetic.

Anyways, you do know who you sound like, right?
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-3010032222090001@192.168.1.51>
In article <·······························@hotpop.com>, Darius
<·······@hotpop.com> wrote:

> Yes.  -Type- errors.  What makes you think that that was related to a
> type error?

Because it compiled.  That means that the system thought - incorrectly -
that it returned a valid type when in fact it did not.  Given what I know
about how these things work (which admittedly isn't much) my guess is that
this is a flaw in the theory.  I believe this:

(defun foo () (not (foo)))

has type bottom, but I suspect the theory says it has type boolean.

> No one has claimed static typing gets rid of all
> run-time errors... well, except for dynamic typers.

No, the claim is that static typing eliminates errors that can be found
statically, as this one clearly could.

> Anyways, you do know who you sound like, right?

No, but I know who you sound like.

E.
From: Darius
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031031023637.00005ad9.ddarius@hotpop.com>
On Thu, 30 Oct 2003 22:22:09 -0800
···@jpl.nasa.gov (Erann Gat) wrote:

> In article <·······························@hotpop.com>, Darius
> <·······@hotpop.com> wrote:
> 
> > Yes.  -Type- errors.  What makes you think that that was related to
> > a type error?
> 
> Because it compiled.  That means that the system thought - incorrectly
> - that it returned a valid type when in fact it did not.

In fact, it did.

>  Given what I
> know about how these things work (which admittedly isn't much) my guess
> is that this is a flaw in the theory.

Perhaps, you should learn more.  The little that you do admit to knowing
seems wrong.  Unfortunately, I fear that you'd apply your vaunted
"open-mindedness".

>  I believe this:
> 
> (defun foo () (not (foo)))
> 
> has type bottom,

Bottom isn't a type, it's a value.

> but I suspect the theory says it has type boolean.

Because it does.  Furthermore, bottom is one of the three values that a
Bool can hold in Haskell (bottom is a member of all types) so the above
function "returning" bottom is still fine with type Bool.
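
A loose analogy in Python, if it helps (hypothetical code, not Haskell
semantics; Python has no built-in static checker, but an annotation
checker such as mypy reasons the same way): a diverging definition
still satisfies its declared result type, because no value is ever
produced that could violate it.  What the runtime eventually raises is
a resource error, not a type error:

```python
def foo() -> bool:       # the declared type; a checker accepts this definition
    return not foo()     # diverges -- no bool is ever actually produced

# Running it exhausts the stack; the declared type is never violated.
try:
    foo()
except RecursionError:
    print("divergence, not a type error")
```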

> > No one has claimed static typing gets rid of all
> > run-time errors... well, except for dynamic typers.
> 
> No, the claim is that static typing eliminates errors that can be
> found statically, as this one clearly could.

No, the claim is that static typing eliminates -typing- errors that can
be found statically.  Hence being called static -typing-.  The further
claim is that it -can be used- to -help- find logic errors as well, but
that is certainly not the same as saying it finds -all- logic errors
statically.

> > Anyways, you do know who you sound like, right?
 
> No

I'll give you a hint.
"Ugh, +." O'Caml
"Ugh, mandatory whitespace" Python
"Ugh, parentheses" Lisp
From: Lauri Alanko
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnt3vr$178$1@la.iki.fi>
···@jpl.nasa.gov (Erann Gat) virkkoi:
> (defun foo () (not (foo)))
> 
> has type bottom, but I suspect the theory says it has type boolean.

Uh? You expect a type system to solve the halting problem?


Lauri Alanko
··@iki.fi
From: Adrian Hey
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bntoe2$kgr$2$8302bc10@news.demon.co.uk>
Lauri Alanko wrote:

> ···@jpl.nasa.gov (Erann Gat) virkkoi:
>> (defun foo () (not (foo)))
>> 
>> has type bottom, but I suspect the theory says it has type boolean.
> 
> Uh? You expect a type system to solve the halting problem?

Maybe this is the source of the confusion. The apparent bug in
Hugs seems to be nothing to do with its type system; it just
doesn't have ghc's black hole detection at runtime, so gets
caught in a runaway bottom until it dies (rather ungracefully).

This is certainly ugly, but it isn't a type error. (Though I can
see why it might look like one to someone under the impression
that bottom was a type.)

Regards
--
Adrian Hey
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-3110030850200001@192.168.1.51>
In article <·····················@news.demon.co.uk>, Adrian Hey
<····@NoSpicedHam.iee.org> wrote:

> Lauri Alanko wrote:
> 
> > ···@jpl.nasa.gov (Erann Gat) virkkoi:
> >> (defun foo () (not (foo)))
> >> 
> >> has type bottom, but I suspect the theory says it has type boolean.
> > 
> > Uh? You expect a type system to solve the halting problem?
> 
> Maybe this is the source of the confusion. The apparent bug in
> Hugs seems to be nothing to do with its type system; it just
> doesn't have ghc's black hole detection at runtime, so gets
> caught in a runaway bottom until it dies (rather ungracefully).
> 
> This is certainly ugly, but it isn't a type error. (Though I can
> see why it might look like one to someone under the impression
> that bottom was a type.)

I concede this point; I was wrong that this is a type error, and so this
is not as serious a problem as I thought.

I apologize to anyone I offended by making this claim in a confrontational
way.  (In retrospect I really should have posed it as a question, not a
statement.)  My intent was not to troll, but rather to present the
situation as I saw it.  I hope you see how the situation looked on the --
admittedly naive -- assumption that this was a type error.  I did make the
up-front disclaimer several times that I did not (and still do not) have a
good understanding of type theory.  But I'm working on it.

Thanks to all who offered constructive feedback.

E.
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <wualndxy.fsf@comcast.net>
Darius <·······@hotpop.com> writes:

> I doubt anyone, static or dynamic typer, believes a weakly typed
> system, static or dynamic, is better than a strongly typed one.

Hey, I'll claim it.  In certain circumstances, a weakly typed system
is better.  Assembly code is a pain in the ass, but if you need
performance, it's almost always your best bet.  And because assembly
is weakly typed, you can do some really interesting tricks with
introspection and `runnable data'.

Of course, the first thing you should do with assembly code is
bootstrap a strongly dynamically typed system like Lisp.
From: Greg Menke
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3oevwkdwd.fsf@europa.pienet>
·············@comcast.net writes:

> Darius <·······@hotpop.com> writes:
> 
> > I doubt anyone, static or dynamic typer, believes a weakly typed
> > system, static or dynamic, is better than a strongly typed one.
> 
> Hey, I'll claim it.  In certain circumstances, a weakly typed system
> is better.  Assembly code is a pain in the ass, but if you need
> performance, it's almost always your best bet.  And because assembly
> is weakly typed, you can do some really interesting tricks with
> introspection and `runnable data'.

On modern processors, it's <very> difficult to write assembly that
maximizes performance for anything but the simplest routines.  A
reasonable C compiler will nearly always beat your best efforts.
OTOH, assembly is very handy for highly architecture-dependent
optimization of small routines like memory copies or managing context
when the registers which the C library depends on are changing.

Runnable data as binary code is entertaining as a hack, but it's not
what you want when you're dealing with big systems that you'll have to
understand a year or two from now.

The advantage of using assembly is never weak typing; it's brevity and
the ability to directly manipulate the bare metal, for better or
worse.

Gregm
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3cd8oyyo.fsf@comcast.net>
Greg Menke <··········@toadmail.com> writes:

> On modern processors, it's <very> difficult to write assembly that
> maximizes performance for anything but the simplest routines.  A
> reasonable C compiler will nearly always beat your best efforts.

C compilers are good, but they don't take into account a number of
things that can be done in assembly.  Global register allocation is
a prime example.

> OTOH, assembly is very handy for highly architecture-dependent
> optimization of small routines like memory copies or managing context
> when the registers which the C library depends on are changing.

Yep.

> Runnable data as binary code is entertaining as a hack, but it's not
> what you want when you're dealing with big systems that you'll have to
> understand a year or two from now.

But isn't that what a closure is?
From: Greg Menke
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m34qxoovj8.fsf@europa.pienet>
·············@comcast.net writes:

> Greg Menke <··········@toadmail.com> writes:
> 
> > On modern processors, it's <very> difficult to write assembly that
> > maximizes performance for anything but the simplest routines.  A
> > reasonable C compiler will nearly always beat your best efforts.
> 
> C compilers are good, but they don't take into account a number of
> things that can be done in assembly.  Global register allocation is
> a prime example.

True to some extent, but a C compiler isn't bound by maintainability
at the assembly level, available registers or memory can be allocated
as desired, loops and inlining arranged at arbitrary complexity, etc.
Which means the C compiler can structure code in whatever fashion
suits its perception of the software pattern.  A human usually won't,
as they frequently operate under maintainability constraints, among
others, which effectively reduce the degree of achievable
optimization.

I'm not saying C beats assembly in all cases, but that
hand-constructed assembly seems most effective for small, well bounded
problems where abstract knowledge of the algorithm permits
optimization that a compiler cannot achieve.  As always, tradeoffs
apply.

Most platforms specify an ABI which imposes some general rules for
register allocation which put the human and C compiler on a similar
footing in terms of which registers are available.  What does "global
register allocation" mean, specifically?

> 
> > Runnable data as binary code is entertaining as a hack, but it's not
> > what you want when you're dealing with big systems that you'll have to
> > understand a year or two from now.
> 
> But isn't that what a closure is?

Perhaps in some implementations of some languages.  I'm personally
glad theres a compiler between me and the assembly in such cases.

Gregm
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <fzh7ooie.fsf@comcast.net>
Greg Menke <··········@toadmail.com> writes:

> Most platforms specify an ABI which imposes some general rules for
> register allocation which put the human and C compiler on a similar
> footing in terms of which registers are available.  

It depends on the platform, of course, but in general the system will
run your code directly and only transfer control to its code if you
make a system call or an interrupt occurs.  You need to know which
registers are preserved across interrupts (generally, you hope that
*all* of them are because you can't depend on the ones that aren't
preserved), and which are preserved across system call (much more
leeway here because you know when they occur).

> What does "global register allocation" mean, specifically?

Inter-procedural register allocation is another term for it.

Assume for a moment that you are implementing a lisp system.  If you
write it in C, you may have a global value:

void * allocation_frontier;

That contains the address of the next unallocated word.  It is likely
that this will be allocated in a data block somewhere, so the code
that examines this will probably be a `load indirect absolute', i.e.,
read the address from the instruction stream then fetch the contents
of memory at that address.

It might be better for performance to commandeer one of the `scratch'
registers from the compiler and dedicate it to holding that constant.
From: Greg Menke
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m38ymzeram.fsf@europa.pienet>
·············@comcast.net writes:

> Greg Menke <··········@toadmail.com> writes:
> 
> > Most platforms specify an ABI which imposes some general rules for
> > register allocation which put the human and C compiler on a similar
> > footing in terms of which registers are available.  
> 
> It depends on the platform, of course, but in general the system will
> run your code directly and only transfer control to its code if you
> make a system call or an interrupt occurs.  You need to know which
> registers are preserved across interrupts (generally, you hope that
> *all* of them are because you can't depend on the ones that aren't
> preserved), and which are preserved across system call (much more
> leeway here because you know when they occur).

The ABI speaks directly to issues like this, leaving pretty much the
same set available to the human and the C compiler.  Use the registers
otherwise at your peril.

> 
> > What does "global register allocation" mean, specifically?
> 
> Inter-procedural register allocation is another term for it.

<snip>

> It might be better for performance to commandeer one of the `scratch'
> registers from the compiler and dedicate it to holding that constant.

Thus likely reducing the optimization effectiveness of the compiler
for the entire program, since this register must now be left alone by
all code in the process, and potentially, in all libraries the program
calls.  Careful attention is required to the tradeoff between the
global optimization cost and whatever increased optimization the
reserved register might yield.

This seems a dubious argument for preferring assembly to C.  Are you
sure you're better at optimized assembly than a C compiler?

MIPS and PowerPC have ABI & instruction set features that allow the
compiler to do single instruction loads/stores from a chunk of memory
referenced by a given base address register, instead of the 2 or more
instruction sequences used for full 32 bit address indirect
loads/stores.  These regions are intended for small, heavily used data
items, just like the "frontier" pointer you suggest.

The effect at runtime is loads/stores to the frontier pointer symbol
take only 1 instruction.  It potentially costs a memory cycle, but if
the symbol is used frequently, it's probably already in the cache.
Regardless, it doesn't cost the application the use of a register.

Gregm
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <4qxnoksp.fsf@comcast.net>
Greg Menke <··········@toadmail.com> writes:

> This seems a dubious argument for preferring assembly to C.  Are you
> sure you're better at optimized assembly than a C compiler?

I'm sure I'm no worse because I can always fall back on the optimized
output from the C compiler as a default.

I wouldn't say that in general that I am, but every now and then I
find a bottleneck in some piece of compiled C code where I can do
better.  If the bottleneck is serious enough, and the assembly code
replacement is simple enough and fast enough, then the tradeoff
may be worth it.

I certainly don't use very much assembly code for the reasons you
brought up, but every now and then it comes in handy.  It's still my
favorite untyped language.
From: Stephen J. Bevan
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3wuamdqt0.fsf@dino.dnsalias.com>
···@jpl.nasa.gov (Erann Gat) writes:
> <shrug> Let us hope that no one lets down their guard in the testing
> department because they believe that static typing makes it impossible to
> have certain classes of run-time errors.

Hugs is mostly written in C, unlike GHC which is mostly written in
Haskell.  I don't think anyone here is trying to argue that getting
code past a C compiler means it won't have run-time errors or doesn't
need testing.
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <fzh9mon7.fsf@ccs.neu.edu>
·······@dino.dnsalias.com (Stephen J. Bevan) writes:

> Hugs is mostly written in C, unlike GHC which is mostly written in
> Haskell.  I don't think anyone here is trying to argue that getting
> code past a C compiler means it won't have run-time errors or doesn't
> need testing.

Ah, so the theory is correct but the implementor chose not to use it.

Interesting....
 
From: Adrian Hey
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnu1bm$du$1$8300dec7@news.demon.co.uk>
Joe Marshall wrote:

> Ah, so the theory is correct but the implementor chose not to use it.
> 
> Interesting....

Nope. It's factually incorrect and illogical, but not interesting.

Regards
--
Adrian Hey
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <vfq5l5yx.fsf@ccs.neu.edu>
Adrian Hey <····@NoSpicedHam.iee.org> writes:

> Joe Marshall wrote:
>
>> Ah, so the theory is correct but the implementor chose not to use it.
>> 
>> Interesting....
>
> Nope. It's factually incorrect and illogical, but not interesting.

How factually incorrect?  

Stephen J. Bevan wrote:
> Hugs is mostly written in C


Is this not true?
From: Adrian Hey
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bo0o6v$7te$1$830fa17d@news.demon.co.uk>
Joe Marshall wrote:

> Adrian Hey <····@NoSpicedHam.iee.org> writes:
> 
>> Joe Marshall wrote:
>>
>>> Ah, so the theory is correct but the implementor chose not to use it.
>>> 
>>> Interesting....
>>
>> Nope. It's factually incorrect and illogical, but not interesting.
> 
> How factually incorrect?

Sorry, I should've been more explicit..

Factually Incorrect:
The implementor did not choose not to use the theory, nor did he choose
not to use Haskell. (It's a bit difficult to write programs in a
language which doesn't exist, as Haskell didn't at the time the
gofer system was originally developed.)

Now I suppose that Hugs could have gone through the same bootstrapping
process as ghc so that the compiler itself was written in Haskell.
But seeing as the Hugs runtime system is a bytecode interpreter this
would probably give rather slow compilation.  

Illogical:
The fact that gofer/Hugs was written in C does not imply that the
implementor chose not to use the theory (i.e. type theory).
It's the designers of C who chose not to use it. 
 
> Stephen J. Bevan wrote:
>> Hugs is mostly written in C
> 
> 
> Is this not true?

Yes, that's true.

Regards
--
Adrian Hey
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <7lf971-t51.ln1@ID-7776.user.dfncis.de>
Erann Gat <···@jpl.nasa.gov> wrote:

> That's right.  That is exactly my position.  It's not that buggy
> implementations of theories render those theories suspect in general. 
> Rather, it is a feature of this particular theory that a buggy
> implementation renders it suspect.  

Hugs is implemented in C, not in Haskell.

> That feature is that this particular theory makes the claim that if
> one applies the theory then certain classes of bugs are impossible.

Even if Hugs were implemented in Haskell, that doesn't rule out that
Hugs contains errors that are not of the "certain class" caught
by the type system.

Nobody says that a static type system will catch all possible bugs.

But it will help you to automatically (or semi-automatically) test 
a class of bugs, with no need to write tests for it. So it speeds
up development.

Please don't confuse these two.
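
To make the distinction concrete with a small, hypothetical Python
fragment (Python being the thread's nominal subject): the type bug
below is only detected when the bad call is actually executed, which
is precisely the class of error a static checker reports before the
program ever runs -- no hand-written test required.

```python
def average(xs):
    return sum(xs) / len(xs)

print(average([1, 2, 3]))      # the well-typed call works: 2.0

try:
    average(3)                 # ill-typed, but only detected when reached
except TypeError:
    print("caught at run time; a static checker would flag it beforehand")
```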

- Dirk
From: Thomas Lindgren
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3he1pvcyo.fsf@localhost.localdomain>
Dirk Thierbach <··········@gmx.de> writes:

> Nobody says that a static type system will catch all possible bugs.
> 
> But it will help you to automatically (or semi-automatically) test 
> a class of bugs, with no need to write tests for it. So it speeds
> up development.

So, *does* static typing speed up development (compared to dynamic
typing)? Are there any studies to support this claim?

Best,
                        Thomas
-- 
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin
 
From: Adrian Hey
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bntoe0$kgr$1$8302bc10@news.demon.co.uk>
Thomas Lindgren wrote:

> 
> Dirk Thierbach <··········@gmx.de> writes:
> 
>> Nobody says that a static type system will catch all possible bugs.
>> 
>> But it will help you to automatically (or semi-automatically) test
>> a class of bugs, with no need to write tests for it. So it speeds
>> up development.
> 
> So, *does* static typing speed up development (compared to dynamic
> typing)? Are there any studies to support this claim?

Erann pointed out one study that indicates the opposite. However
this was in comparison with certain well-known languages which are
known to have static type systems that are awkward, inflexible and
broken (weak) all at the same time.

If ML,OCaml,Clean and Haskell had been included in this study
I think a rather different picture would have emerged. It would
be interesting to have a rematch :-)

Regards
--
Adrian Hey
  
From: Thomas Lindgren
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m38yn1e64l.fsf@localhost.localdomain>
Adrian Hey <····@NoSpicedHam.iee.org> writes:

> If ML,OCaml,Clean and Haskell had been included in this study
> I think a rather different picture would have emerged. It would
> be interesting to have a rematch :-)

While programming language matches are exciting, what with all the
roaring and bloodletting, they also tend to be inconclusive. I'd
prefer a proper study :-) (Even better, several independent ones. ;-)

My guess is that most of the productivity advantages come from
features such as being garbage collected, not making type errors
(dynamically or statically), not inducing core dumps, having an
interactive toploop, and being higher-level than the
competition. Though by now competitors have busily assimilated most of
those points, of course.

Best,
                        Thomas
-- 
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin
 
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <egoevx1pkx.fsf@sefirot.ii.uib.no>
Adrian Hey <····@NoSpicedHam.iee.org> writes:

> Thomas Lindgren wrote:

>> So, *does* static typing speed up development (compared to dynamic
>> typing)? Are there any studies to support this claim?

> If ML,OCaml,Clean and Haskell had been included in this study
> I think a rather different picture would have emerged. It would
> be interesting to have a rematch :-)

Interesting, but hardly convincing.  Any such study would really pit
entire languages, not language features, against each other.  The
quality of the programmers and the nature of the task would also be
important parameters.

Regardless of any outcome, there would be plenty of room for
dismissing it.

That said, there's always the ICFP contest.

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <r80tndqr.fsf@comcast.net>
Thomas Lindgren <···········@*****.***> writes:

> Dirk Thierbach <··········@gmx.de> writes:
>
>> Nobody says that a static type system will catch all possible bugs.
>> 
>> But it will help you to automatically (or semi-automatically) test 
>> a class of bugs, with no need to write tests for it. So it speeds
>> up development.
>
> So, *does* static typing speed up development (compared to dynamic
> typing)? Are there any studies to support this claim?

As I pointed out before, OCaml is a consistent contender in the ICFP
functional programming contest.

Nonetheless, I prefer Lisp.  I may bitch and moan about Haskell and ML
variants, but when you put it on an absolute scale, I'd much prefer
them to something like Visual Basic, Java, C, etc.

On usenet, de gustibus semper disputandum est.
From: Thomas Lindgren
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3sml8nyk7.fsf@localhost.localdomain>
·············@comcast.net writes:

> Thomas Lindgren <···········@*****.***> writes:
> 
> > So, *does* static typing speed up development (compared to dynamic
> > typing)? Are there any studies to support this claim?
> 
> As I pointed out before, OCaml is a consistent contender in the ICFP
> functional programming contest.

I was thinking of a more ordinary scenario: e.g., average joe
programmers developing and maintaining a full project over a longer
time. Or preferably, several variants of such a study.

That is, can we a priori, when setting out to develop a new product,
say, expect higher productivity by using static typing (as compared to
dynamic typing)?

> I may bitch and moan about Haskell and ML variants, but when you put
> it on an absolute scale, I'd much prefer them to something like
> Visual Basic, Java, C, etc.

Well, to *that* I can only agree.

Best,
                        Thomas
-- 
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin
 
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa25c37$1@news.unimelb.edu.au>
···@jpl.nasa.gov (Erann Gat) writes:

>Adrian Hey <····@NoSpicedHam.iee.org> wrote:
>
>> I really don't know what to say in response to this nonsense.

Indeed!  Erann Gat's position here is so silly that it's hard to know
where to start in pointing out the flaws.

>> Your position is that the mere existence of 1 broken implementation is
>> sufficient to destroy the credibility of a large body of theory and
>> all other non-broken implementations.
>
>That's right.  That is exactly my position.  It's not that buggy
>implementations of theories render those theories suspect in general. 
>Rather, it is a feature of this particular theory that a buggy
>implementation renders it suspect.  That feature is that this particular
>theory makes the claim that if one applies the theory then certain classes
>of bugs are impossible.

Do you remember _which_ class(es) of bugs?

Do you understand the relationship between the symptom exhibited in this
case, and those classes of bugs?

Let's try a quick quiz.  Which of the following kinds of errors fall in the
class which are ruled out by static type systems?

	- stack overflow?
	- array bounds errors?
	- memory management errors?
	- dereferencing uninitialized pointers?
	- type errors?

Which of those can result in segmentation faults?

>A buggy implementation of the theory renders the
>theory suspect because either: 1) the theory is wrong, 2) the theory is
>correct but an implementor chose not to use it (in which case one must
>wonder at the motivation for this design decision, particularly in an
>implementation which specifically bills itself as targeted to newcomers,
>on whom making a good first impression is very important)

You shouldn't have to wonder too hard.  Just think about the issues of
bootstrapping, portability, and efficiency for a little while.

>or 3) the theory
>is correct, the implementor used it, and the result contained bugs
>regardless, in which case that is an indication that the theory is too
>complex (or something) to use reliably.

Hah!

Let us hypothesize, for the sake of argument, that the seg fault in Hugs
was caused by overflowing the C stack.  There are other possibilities,
but this is at least a plausible cause.

Would that bug (not trapping stack overflow) still have occurred if Hugs
had been implemented in Haskell or some other strongly typed language?
Quite possibly.

Would this be an indication that Haskell is "too complex ... to use
reliably"?  No.  It would just be an indication that this particular
developer did not use it reliably.  (In fact of course, the reality is
that the problem occurred with C, not Haskell.)

Even if we did conclude that Haskell was "too complex ... to use reliably",
would this be an indication that we should stop using Haskell?  No, since
all the other alternatives are also too complex to use reliably.
The fact is that software development as a whole is a task which we do not
yet know how to do with complete reliability, no matter what languages we
use.  The fact that there exists a bug in a C program written by a Haskell
implementor says absolutely nothing about the relative effects on program
reliability of Haskell and Lisp.  Anyone who would argue that it does must
be clutching at straws.
 
--  
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <n0bhndby.fsf@comcast.net>
Fergus Henderson <···@cs.mu.oz.au> writes:

> Indeed!  Erann Gat's position here is so silly that it's hard to know
> where to start in pointing out the flaws.
>
> The fact that there exists a bug in a C program written by a Haskell
> implementor says absolutely nothing about the relative effects on program
> reliability of Haskell and Lisp.  Anyone who would argue that it does must
> be clutching at straws.

I don't find Erann Gat's position that silly.  When I heard him report
that Hugs ended up in a segfault I laughed out loud.

Now I grant you that the C code is to blame, and of course the Haskell
cannot be held responsible for type checking its implementation language,
but the irony is so perfect that the implementors really ought to have
done something about it.

Something like:

  ``Error:  Damnit, the underlying implementation segfaulted.
    That's what I get for implementing Haskell in C.

    Live dangerously (Y/N)?''
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bPQob.60092$9E1.261408@attbi_s52>
<·············@comcast.net> wrote in message ·················@comcast.net...
> Fergus Henderson <···@cs.mu.oz.au> writes:
>
> I don't find Erann Gat's position that silly.  When I heard him report
> that Hugs ended up in a segfault I laughed out loud.

The two most popular flavors of ice cream are chocolate and schadenfreude.
Actually, they taste pretty good together, too; sometimes I get a
double scoop, one of each. I like the waffle cones best.


> Now I grant you that the C code is to blame, and of course the Haskell
> cannot be held responsible for type checking its implementation language,
> but the irony is so perfect that the implementors really ought to have
> done something about it.

Vanilla and irony go together pretty well, too.


Marshall
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <egr80t510x.fsf@sefirot.ii.uib.no>
···@jpl.nasa.gov (Erann Gat) writes:

> But the mere existence of such an implementation (and the absence of
> loud condemnation of that implementation from the static typing
> community)

I don't follow the lisp newsgroup much, but from this crossposted
thread, it would appear that the standards for "loud condemnation"
could be different enough in the two communities that you wouldn't
easily recognize it.

I was going to respond to your other points, but I think it may be
more constructive to just accept our opinions are going to differ, and
go back to programming in our respective languages of preference.

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9fc155$1@news.unimelb.edu.au>
Philip Armstrong <····@armstrong.invalid> writes:
>
> Yup. hugs segfaults, whilst ghc6 says "*** Exception: <<loop>>"
> if you try and invoke the ep function though :)

Which version of Hugs are you using?  I'm using the Feb 2001 version of
Hugs 98, and I get "ERROR - Control stack overflow", not a seg fault.

	main = print ep
	ep = not ep

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <eg65i8l73c.fsf@sefirot.ii.uib.no>
···@jpl.nasa.gov (Erann Gat) writes:

>>> Yup. hugs segfaults, whilst ghc6 says "*** Exception: <<loop>>"
>>> if you try and invoke the ep function though :)

>> I thought code that typechecks would `never crash'?

> That's a very good point.  If Hugs can be made to segfault something is
> seriously wrong (particularly since Hugs says it's specifically designed
> for teaching).

Yes.  It's called a bug.  These things happen, you know, in particular
since Hugs is implemented (mostly?) in C.

I've experienced bugs with any compiler or programming system I've
ever used seriously, usually in the form of an "internal error" of
some sort -- not quite as crude as a segfault, I suppose.

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9fcfa8$1@news.unimelb.edu.au>
··········@ii.uib.no writes:

>···@jpl.nasa.gov (Erann Gat) writes:
>
>> That's a very good point.  If Hugs can be made to segfault something is
>> seriously wrong (particularly since Hugs says it's specifically designed
>> for teaching).
>
>Yes.  It's called a bug.  These things happen, you know, in particular
>since Hugs is implemented (mostly?) in C.
>
>I've experienced bugs with any compiler or programming system I've
>ever used seriously, usually in the form of an "internal error" of
>some sort -- not quite as crude as a segfault, I suppose.

Bear in mind that although a segfault in C code often means a wild pointer
of some kind, it is also C's user-unfriendly way of reporting stack overflow.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnomcc$o63$1@news.oberberg.net>
··········@ii.uib.no wrote:
> I've experienced bugs with any compiler or programming system I've
> ever used seriously, usually in the form of an "internal error" of
> some sort -- not quite as crude as a segfault, I suppose.

Actually, the first compiler I ever worked with didn't show any bugs. 
(It was a Pascal-like language with parameter type overloading - quite 
nice by the standards of the '80s.)

Finding out that most compilers are buggy came as a shock to me...

Regards,
Jo
From: Mark Alexander Wotton
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <slrnbpu9lg.gp9.mwotton@pill3.orchestra.cse.unsw.EDU.AU>
On Wed, 29 Oct 2003 01:59:04 GMT, ·············@comcast.net posted:
> Philip Armstrong <····@armstrong.invalid> writes:
> 
>> In article <··································@192.168.1.51>,
>> Erann Gat <·················@jpl.nasa.gov> wrote:
>>>These are of course infinite loops in Lisp, but they might not be in a
>>>lazy language like Haskell.  Hm, I wonder if these would typecheck.
>>
>> Yup. hugs segfaults, whilst ghc6 says "*** Exception: <<loop>>"
>> if you try and invoke the ep function though :)
> 
> I thought code that typechecks would `never crash'?

For the second one, it either runs forever producing nothing useful or
stops with an error message. I know which I'd prefer.

And for the first... implementers are human too, you know. :)

mrak

-- 
realise your life was only bait for a bigger fish
	-- aesop rock
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <pan.2003.10.28.16.42.24.178186@knm.org.pl>
On Tue, 28 Oct 2003 07:28:46 -0800, Erann Gat wrote:

> (defun ep () (not (ep)))
> (defun sa () (sa))
> 
> These are of course infinite loops in Lisp, but they might not be in a
> lazy language like Haskell.  Hm, I wonder if these would typecheck.

They do typecheck and they loop too. In fact GHC detects that they loop
and throws an exception instead.

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2810031024240001@k-137-79-50-101.jpl.nasa.gov>
In article <······························@knm.org.pl>, Marcin 'Qrczak'
Kowalczyk <······@knm.org.pl> wrote:

> On Tue, 28 Oct 2003 07:28:46 -0800, Erann Gat wrote:
> 
> > (defun ep () (not (ep)))
> > (defun sa () (sa))
> > 
> > These are of course infinite loops in Lisp, but they might not be in a
> > lazy language like Haskell.  Hm, I wonder if these would typecheck.
> 
> They do typecheck and they loop too.

I presume they loop only if the result is actually used, otherwise what
does it mean for Haskell to be lazy?

> In fact GHC detects that they loop and throws an exception instead.

That seems wrong.  What would it do with:

(defun sa1 ()
  (process-http-transaction)
  (sa1))

Sigh.  I guess I really ought to bite the bullet and learn Haskell so I
can answer these questions myself.

E.
From: Jesse Tov
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <slrnbptgpq.717.tov@tov.student.harvard.edu>
Erann Gat <·················@jpl.nasa.gov>:
> I presume they loop only if the result is actually used, otherwise what
> does it mean for Haskell to be lazy?

Right (in GHCi):

Prelude> let a = a
Prelude> print (const "hello" a)
"hello"

> That seems wrong.  What would it do with:
> 
> (defun sa1 ()
>   (process-http-transaction)
>   (sa1))

Prelude> a
*** Exception: <<loop>>
Prelude> let b = b + 1
Prelude> b
*** Exception: <<loop>>
Prelude> let c = putStrLn "hi" >> c
Prelude> c
hi
hi
hi
...

(Someone in this thread posted that something like c above rather than
just putStrLn "hello world" is a better model for computing.  You need a
semantics in which effectful computations are never _|_, even if they
don't terminate.)  Anyway, I'm pretty sure you only get a <<loop>>
exception if 1) it doesn't terminate and 2) there are no side effects.
It's easy to see there are no side effects if the type doesn't contain
IO [1]:

Prelude> :type a
a :: forall t. t
Prelude> :type b
b :: Integer
Prelude> :type c
c :: forall b. IO b

As far as I know, this behavior is implementation defined.  In Hugs:

Prelude> let a = a in a `seq` 0

After about a minute, it's still trying.

> Sigh.  I guess I really ought to bite the bullet and learn Haskell so I
> can answer these questions myself.

It might be fun.  I certainly want to learn Common Lisp for that reason.

Jesse
[1] Unless you've done something sneaky with  unsafePerformIO :: IO a -> a
or the FFI.
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <pan.2003.10.28.19.12.56.933026@knm.org.pl>
On Tue, 28 Oct 2003 10:24:24 -0800, Erann Gat wrote:

> I presume they loop only if the result is actually used, otherwise what
> does it mean for Haskell to be lazy?

Indeed.

> That seems wrong.  What would it do with:
> 
> (defun sa1 ()
>   (process-http-transaction)
>   (sa1))

This runs correctly. GHC detects at runtime when a value depends on
itself, like 'ep = not ep' and 'sa = sa', but not when a function just
applies itself or when an IO action does something before it reenters
itself.
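A minimal sketch of the distinction, in GHC Haskell (the behaviour is implementation-defined; this module assumes GHC's runtime loop detection and is illustrative only):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Erann's Lisp definitions, transliterated.

ep :: Bool
ep = not ep        -- a value that depends directly on itself

sa :: a
sa = sa            -- likewise: a self-dependent thunk

-- An IO action that does real work before re-entering itself is an
-- ordinary infinite loop; the runtime cannot flag it as <<loop>>.
sa1 :: IO a
sa1 = putStrLn "transaction" >> sa1

main :: IO ()
main = do
  -- GHC notices when the thread demanding 'ep' blocks on a thunk it is
  -- itself evaluating, and raises an exception instead of spinning.
  r <- try (evaluate ep) :: IO (Either SomeException Bool)
  putStrLn $ case r of
    Left _  -> "ep: runtime detected the loop"
    Right b -> "ep evaluated to " ++ show b
```

Running `sa1` instead would print "transaction" forever, exactly as with the quoted `sa1` above.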

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2810031121590001@k-137-79-50-101.jpl.nasa.gov>
In article <······························@knm.org.pl>, Marcin 'Qrczak'
Kowalczyk <······@knm.org.pl> wrote:

> On Tue, 28 Oct 2003 10:24:24 -0800, Erann Gat wrote:
> 
> > I presume they loop only if the result is actually used, otherwise what
> > does it mean for Haskell to be lazy?
> 
> Indeed.
> 
> > That seems wrong.  What would it do with:
> > 
> > (defun sa1 ()
> >   (process-http-transaction)
> >   (sa1))
> 
> This runs correctly. GHC detects at runtime when a value depends on
> itself, like 'ep = not ep' and 'sa = sa', but not when a function just
> applies itself or when an IO action does something before it reenters
> itself.

Cool.

E.
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <egvfq9x2hv.fsf@sefirot.ii.uib.no>
Ray Blaak <········@STRIPCAPStelus.net> writes:

> In other words, "a correct program necessarily has a proof" means
> exactly "a program with a proof necessarily has a proof". Not a very
> useful fact.

So, the argument against formally provable programs is that incorrect
programs are (occasionally? more?) useful?

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Ray Blaak
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <usmld46fq.fsf@STRIPCAPStelus.net>
··········@ii.uib.no writes:
> Ray Blaak <········@STRIPCAPStelus.net> writes:
> 
> > In other words, "a correct program necessarily has a proof" means
> > exactly "a program with a proof necessarily has a proof". Not a very
> > useful fact.
> 
> So, the argument against formally provable programs is that incorrect
> programs are (occasionally? more?) useful?

No, it's that correctness is a real bitch to prove, however much we want to.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <egr80wla9q.fsf@sefirot.ii.uib.no>
Ray Blaak <········@STRIPCAPStelus.net> writes:

> ··········@ii.uib.no writes:

>> So, the argument against formally provable programs is that incorrect
>> programs are (occasionally? more?) useful?

> No, it's that correctness is a real bitch to prove, however much we
> want to. 

Yes, but that's why I say provABLE rather than provEN programs.
My impression was that arguments were put forth (possibly not by you)
that a substantial class of useful programs can be correct, but not
provably so, or simply incorrect, but still useful.

Perhaps this is a straw man?

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <znfk6jj3.fsf@ccs.neu.edu>
··········@ii.uib.no writes:

> Yes, but that's why I say provABLE rather than provEN programs.
> My impression was that arguments were put forth (possibly not by you)
> that a substantial class of useful programs can be correct, but not
> provable so, or simply incorrect, but still useful.

I believe that a substantial number of useful programs are correct,
but not provably so.  I don't see how an incorrect program would be
that useful.  I *do* however see how a partially correct program would
be.
From: Daniel C. Wang
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uznfjl7nj.fsf@hotmail.com>
Joe Marshall <···@ccs.neu.edu> writes:

> ··········@ii.uib.no writes:
> 
> > Yes, but that's why I say provABLE rather than provEN programs.
> > My impression was that arguments were put forth (possibly not by you)
> > that a substantial class of useful programs can be correct, but not
> > provable so, or simply incorrect, but still useful.
> 
> I believe that a substantial number of useful programs are correct,
> but not provably so.  I don't see how an incorrect program would be
> that useful.  I *do* however see how a partially correct program would
> be.

Is partially correct like being partially pregnant? 
I assume that when you say partially correct you mean that the program
correctly implements something close to the appropriate specification.

The vast majority of useful correct programs are provably so. Constructing a
given proof may take more man-years than it's worth, but I believe it can be
done.

The one example of a program that is possibly correct but not provably so
that easily comes to mind is the Miller-Rabin primality testing algorithm. It
was known to be probabilistically correct, but not known to be completely
correct.  If you believe the Riemann Hypothesis is true, then you could prove
that the Miller-Rabin algorithm is always correct and not just
probabilistically correct. Many mathematicians believe the Riemann Hypothesis
is true. Perhaps it is, and there is no proof of this fact; that is something
we have to accept because of Gödel.

However, esoteric mathematics aside, most everyday useful correct programs
ought to be provably correct. I say this because they were constructed by a
programmer to do a specific task. The fact that they work or almost work is
not some random accident.

I will have to admit that I'm sure there are many programs that tackle
specific instances of NP-hard problems and run in polynomial time for "most
inputs". I will admit that few people probably understand why they work for
"most inputs". I just do not believe that the majority of code for databases,
operating systems, web browsers, real-time control systems, and everyday
software relies on mathematics so subtle that there isn't a relatively boring
but simple proof of its correctness.
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <fzhar4pk.fsf@ccs.neu.edu>
·········@hotmail.com (Daniel C. Wang) writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>
>> ··········@ii.uib.no writes:
>> 
>> > Yes, but that's why I say provABLE rather than provEN programs.
>> > My impression was that arguments were put forth (possibly not by you)
>> > that a substantial class of useful programs can be correct, but not
>> > provable so, or simply incorrect, but still useful.
>> 
>> I believe that a substantial number of useful programs are correct,
>> but not provably so.  I don't see how an incorrect program would be
>> that useful.  I *do* however see how a partially correct program would
>> be.
>
> Is partially correct like being partially pregnant? 

Most programs are partial functions.  I consider a `partially correct'
program one that is correct on a limited domain.

> I just do not believe the majority of code for database, OS,
> web browsers, real time control systems, and every day software rely on
> mathematics so subtle that there isn't a relatively boring but simple proof
> of their correctness.

We definitely differ here.  I believe that the majority of code for
databases, OSes, web browsers, etc. relies on theorems and conjectures
that will *never* be proven.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m2ptgeyzsx.fsf@hanabi-air.shimizu.blume>
Joe Marshall <···@ccs.neu.edu> writes:

> ·········@hotmail.com (Daniel C. Wang) writes:
> 
> > Joe Marshall <···@ccs.neu.edu> writes:
> >
> >> ··········@ii.uib.no writes:
> >> 
> >> > Yes, but that's why I say provABLE rather than provEN programs.
> >> > My impression was that arguments were put forth (possibly not by you)
> >> > that a substantial class of useful programs can be correct, but not
> >> > provable so, or simply incorrect, but still useful.
> >> 
> >> I believe that a substantial number of useful programs are correct,
> >> but not provably so.  I don't see how an incorrect program would be
> >> that useful.  I *do* however see how a partially correct program would
> >> be.
> >
> > Is partially correct like being partially pregnant? 
> 
> Most programs are partial functions.  I consider a `partially correct'
> program one that is correct on a limited domain.

Let's make this slightly more formal: If the original correctness
claim is P but you can only prove some P' such that P implies P' (but
not vice versa), then you can say that the program is partially
correct.  (Of course, all programs are partially correct if you just
choose a weak enough P'.)
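A toy Haskell illustration of this P versus P' gap (the function and both claims are hypothetical, made up purely for this example):

```haskell
-- Claimed:  P  = "intSqrt n is the integer square root of every Int n"
-- Provable: P' = "intSqrt n is the integer square root of every n >= 0"
-- P implies P', but only P' holds: for negative n the function quietly
-- returns 0, so the program is "partially correct" in the above sense.
intSqrt :: Int -> Int
intSqrt n = length (takeWhile (<= n) [k * k | k <- [1 ..]])
```

For example, `intSqrt 15` is 3 and `intSqrt 16` is 4, as P promises; `intSqrt (-5)` is 0, about which P' says nothing and P is simply false.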

> > I just do not believe the majority of code for database, OS,
> > web browsers, real time control systems, and every day software rely on
> > mathematics so subtle that there isn't a relatively boring but simple proof
> > of their correctness.
> 
> We definitely differ here.  I believe that the majority of code for
> database, OS, web browsers, etc.  rely on theorems and conjectures
> that will *never* be proven.

You don't really differ, I think.  Or are you saying that they cannot
be proven *in principle*, i.e., that there truly *is* no proof -- no
matter how much effort it would take to actually write it down?

Matthias
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <871xsu87vr.fsf@thalassa.informatimago.com>
Matthias Blume <····@my.address.elsewhere> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
> 
> > ·········@hotmail.com (Daniel C. Wang) writes:
> > 
> > > Joe Marshall <···@ccs.neu.edu> writes:
> > >
> > >> ··········@ii.uib.no writes:
> > >> 
> > >> > Yes, but that's why I say provABLE rather than provEN programs.
> > >> > My impression was that arguments were put forth (possibly not by you)
> > >> > that a substantial class of useful programs can be correct, but not
> > >> > provable so, or simply incorrect, but still useful.
> > >> 
> > >> I believe that a substantial number of useful programs are correct,
> > >> but not provably so.  I don't see how an incorrect program would be
> > >> that useful.  I *do* however see how a partially correct program would
> > >> be.
> > >
> > > Is partially correct like being partially pregnant? 
> > 
> > Most programs are partial functions.  I consider a `partially correct'
> > program one that is correct on a limited domain.
> 
> Let's make this slightly more formal: If the original correctness
> claim is P but you can only prove some P' such that P implies P' (but
> not vice versa), then you can say that the program is partially
> correct.  (Of course, all programs are partially correct if you just
> chose a weak enough P'.)

More concretely: you have buggy software, but as a user you end up
knowing the inputs that lead to bugs and avoiding (more or less
consciously) those specific inputs. Over time, you may be able to use
the buggy software proficiently without crashing it anymore.

You've determined empirically the limited domain of inputs for which
the program is "correct".

Hundreds of millions of users of commercial desktop software do it this way.

-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <sml9mpxz.fsf@ccs.neu.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> Let's make this slightly more formal: If the original correctness
> claim is P but you can only prove some P' such that P implies P' (but
> not vice versa), then you can say that the program is partially
> correct.  (Of course, all programs are partially correct if you just
> chose a weak enough P'.)

Yes.  A good example is a program with a limited and unchecked string
buffer.  It works fine so long as the string doesn't get too long, but
it is clearly broken.  I'm sorry to report that I use such software on
a daily basis.

>> > I just do not believe the majority of code for database, OS,
>> > web browsers, real time control systems, and every day software rely on
>> > mathematics so subtle that there isn't a relatively boring but simple proof
>> > of their correctness.
>> 
>> We definitely differ here.  I believe that the majority of code for
>> database, OS, web browsers, etc.  rely on theorems and conjectures
>> that will *never* be proven.
>
> You don't really differ, I think.  Or are you saying that they cannot
> be proven *in principle*, i.e., that there truly *is* no proof -- no
> matter how much effort it would take to actually write it down?

The latter, actually.

Consider something like sendmail.  The rewrite rules for sendmail are
Turing-complete.  By Rice's theorem, you really can't prove anything
about what might happen to your mail if the rules are anything but
trivial.  Not even in principle.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1ad7h1jxv.fsf@tti5.uchicago.edu>
Joe Marshall <···@ccs.neu.edu> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> >> We definitely differ here.  I believe that the majority of code for
> >> database, OS, web browsers, etc.  rely on theorems and conjectures
> >> that will *never* be proven.
> >
> > You don't really differ, I think.  Or are you saying that they cannot
> > be proven *in principle*, i.e., that there truly *is* no proof -- no
> > matter how much effort it would take to actually write it down?
> 
> The latter, actually.
> 
> Consider something like sendmail.

What, exactly, is the sense in which sendmail is "correct"?  You get
to choose, but you have to tell me what it is that you consider
"correct".  Then we talk.

> The rewrite rules for sendmail are
> turing complete.  By Rice's theorem, you really can't prove anything
> about what might happen to your mail if the rules are anything but
> trivial.  Not even in priniciple.

True (but whether invoking Rice's theorem here is appropriate seems
somewhat questionable to me).  But you surely could make claims of the form

  "If the rules have such-and-such a property, then we can say
  this-and-that about what happens to my mail."

and, had sendmail actually been correct, prove them rigorously.

I don't claim that sendmail is actually correct in any desirable way.
So it is not really an example of a correct program which you cannot
prove correct.

[Invoking Turing or Rice is not helpful unless you make the connection
a bit more concrete.  Just because something is Turing-complete does
not mean that we cannot prove anything at all about it.  For example,
one can certainly prove the implementation of an LC reducer correct --
even though the (untyped) LC is Turing-complete.]

Matthias
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ad7houqp.fsf@comcast.net>
Matthias Blume <····@my.address.elsewhere> writes:

> I don't claim that sendmail is actually correct in any desirable way.
> So it is not really an example of a correct program which you cannot
> prove correct.

It's more of a `useful' (for certain definitions of useful) program
that will likely never be proven correct.

I'll continue to try to think of a concrete example.
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnpsi7$9dv$2@news.oberberg.net>
Joe Marshall wrote:

> I believe that a substantial number of useful programs are correct,
> but not provably so.

With "not provably", do you mean "not provable with reasonable effort", 
or do you mean "not provable due to the Goedel incompleteness theorem"?

(Personally, I think the former is quite common though less different 
from the latter than many people think, and that the latter has yet to 
occur for anybody.)

Regards,
Jo
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <k76mr6jr.fsf@ccs.neu.edu>
Joachim Durchholz <·················@web.de> writes:

> Joe Marshall wrote:
>
>> I believe that a substantial number of useful programs are correct,
>> but not provably so.
>
> With "not provably", do you mean "not provable with reasonable
> effort", or do you mean "not provable due to the Goedel incompleteness
> theorem"?

This is a subtle question.  Godel proved that no `sufficiently
powerful mathematical system can be both complete and consistent' (and
furthermore that such a system cannot be made complete and consistent
through the addition of axioms).  He proved this by demonstrating a
mechanism by
which one can synthesize true and unprovable statements.  Furthermore,
since the construction `crosses levels' it can't be overcome by adding
meta-rules.

But this doesn't mean that all incompleteness arises as a direct
consequence of this construction.  (Or that all constructable
incompleteness is isomorphic to this construction!)

The same is true of the halting problem.  It is easy to construct a
program that through self-reference cannot analyze whether it halts,
but there are other reasons a program might not halt, and there are
other ways to create programs whose halting cannot be decided.

The question is:  does Godel's incompleteness theorem or the halting
problem matter in `normal' programs?  This is a hot topic in math
(google `Natural independence phenomena') and there are applications
in computer science.

A good example is the `Hercules vs. the Hydra' game.

A Hydra is represented as a simple acyclic rooted tree.  On each turn,
Hercules cuts one head off the hydra by severing an edge that points
to a leaf node.  The Hydra, however, sprouts new heads:  the
grandparent node of the severed head grows copies of the damaged
branch.  On the first turn it grows one copy.  On the second, it grows
two copies, on the third it grows three copies, etc.

So if we start with this Hydra:

            *
           /|\
          * * *
         /| | |\
        * * * * *
       /  | | |\
      *   * * * *

Hercules can cut any edge pointing to a leaf node.  Suppose he chooses the
rightmost edge:

            *
           /|\
          * * *  <snip>
         /| | |     \   
        * * * *      *
       /  | | |\
      *   * * * *
 
The grandfather node of the cut edge is the root node.  It sprouts a copy
of the damaged branch


            *                        *
           /|\                    / | | \
          * * *     *            *  * *  *
         /| | |     |           /|  | |   \
        * * * *  +  *    =>    * *  * *    *
       /  | | |\    |\        /  |  | |\   |\
      *   * * * *   * *      *   *  * * *  * *

Now it's Hercules' turn; he cuts another head off, but the Hydra now
sprouts two copies of the damaged branch.

The question is, is there a strategy by which Hercules can kill the
hydra by decapitating all the leaf nodes until only the root node is
left?  The answer turns out to be `yes'.  Remarkably, *any* strategy
will do --- the Hydra will be defeated in a finite number of moves no
matter what Hercules does.  Also remarkable is the fact that this
*cannot* be proven using standard formal number theory (first-order
Peano axioms).
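
For concreteness, the game can be simulated in a few lines of Python
(an illustrative sketch; the encoding of a Hydra as nested lists, with
a head as an empty list, and the function names are choices made here,
not anything from the thread):

```python
import copy

def chop(node, n):
    """Sever the first head (leaf) found at depth >= 2 below `node`.
    The grandparent of the severed head sprouts n copies of the
    damaged branch.  Returns True if a head was chopped."""
    for child in node:
        for j, grandchild in enumerate(child):
            if not grandchild:                  # a head two levels down
                del child[j]                    # sever it
                node.extend(copy.deepcopy(child) for _ in range(n))
                return True
        if chop(child, n):                      # otherwise look deeper
            return True
    return False

def play(hydra):
    """Decapitate until only the root remains; return the turn count.
    Heads attached directly to the root cause no regrowth."""
    turns = 0
    while hydra:
        turns += 1
        if not chop(hydra, turns):
            hydra.pop()                         # only root-level heads left
    return turns

# Even tiny Hydras take a while; any strategy terminates eventually.
assert play([[[]]]) == 3
assert play([[[], []]]) == 10
```

The growing turn count makes the regrowth ever fiercer, yet (as the
theorem says) the loop always terminates; deeper starting Hydras just
take astronomically many turns.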

How does this apply to computer science?  Well, Kruskal's tree theorem
establishes that certain orderings on trees are well-founded.  This is
useful in showing that certain systems of rewrite rules will
terminate.  But Kruskal's theorem is unprovable in some relatively
strong formal systems.

This is just one example of incompleteness, but there are a lot of
other *really* simple examples:

   1.  Collatz's problem:  Take any positive integer; if it is even,
       divide it by 2, if odd, multiply by 3 and add 1.  Does iterating
       this *always* eventually reach 1?

   2.  Goldbach's conjecture:  Every even number greater than 2 is the
       sum of two primes.

   3.  A perfect number is the sum of its proper divisors, for instance,
        6 = 1 + 2 + 3
       28 = 1 + 2 + 4 + 7 + 14
       Are there any odd perfect numbers?
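
The first of these takes only a few lines to state in code; an
empirical check like this (the function name is invented here) is
evidence, not a proof:

```python
def collatz_steps(n):
    """Number of Collatz steps for n to reach 1 (may, in principle, loop forever)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Terminates for every n anyone has ever tried -- e.g. 27 takes 111 steps --
# but nobody has proved that it terminates for *all* n.
assert collatz_steps(27) == 111
```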

The fact that some of these extraordinarily difficult problems are so
trivial to state, and the fact that universal computation is so easy
to achieve (there exists a universal Turing machine with 5 symbols
and 2 states!  There is a Turing machine with 3 symbols and 2 states
that may be universal), lead me to the conclusion that it is rather
easy to accidentally run into one of these undecidable problems.
From: Torkel Franzen
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <vcb3cdapot6.fsf@beta13.sm.luth.se>
Joe Marshall <···@ccs.neu.edu> writes:

> This is just one example of incompleteness, but there are a lot of
> other *really* simple examples:

  There is no reason to believe that any of these three statements
is undecidable in PA.
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <y8v2pokd.fsf@ccs.neu.edu>
Torkel Franzen <······@sm.luth.se> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>
>> This is just one example of incompleteness, but there are a lot of
>> other *really* simple examples:
>
>   There is no reason to believe that any of these three statements
> is undecidable in PA.

True.  On the other hand, there's no reason to believe that they are
decidable, either.
From: Torkel Franzen
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <vcb1xsupnlk.fsf@beta13.sm.luth.se>
Joe Marshall <···@ccs.neu.edu> writes:

 > True.  On the other hand, there's no reason to believe that they are
 > decidable, either.

 Right. But what, if anything, do you mean by saying that "there are a
lot of other *really* simple examples" of incompleteness, citing these
undecided statements?
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ekwuo5tq.fsf@ccs.neu.edu>
Torkel Franzen <······@sm.luth.se> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>
>  > True.  On the other hand, there's no reason to believe that they are
>  > decidable, either.
>
>  Right. But what, if anything, do you mean by saying that "there are a
> lot of other *really* simple examples" of incompleteness, citing these
> undecided statements?

I admit that these aren't proven examples of incompleteness (unlike
the Hercules and Hydra problem).  They could turn out to be provably
true, provably false, or yet other examples of incompleteness.  But
from a practical viewpoint, until a proof one way or the other is
shown, there is little difference between something that has resisted
all attempts at proof and something that will continue to resist
all attempts at proof.
From: Torkel Franzen
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <vcbznfio3rk.fsf@beta13.sm.luth.se>
Joe Marshall <···@ccs.neu.edu> writes:

> But from a practical viewpoint, until a proof one way or the other is
> shown, there is little difference between something that has resisted
> all attempts at proof and something that will continue to resist
> all attempts at proof.

  So this is what you meant by saying that these are examples of
incompleteness?
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <65i6o133.fsf@ccs.neu.edu>
Torkel Franzen <······@sm.luth.se> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>
>> But from a practical viewpoint, until a proof one way or the other is
>> shown, there is little difference between something that has resisted
>> all attempts at proof and something that will continue to resist
>> all attempts at proof.
>
>   So this is what you meant by saying that these are examples of
> incompleteness?

I should have said `possible examples of incompleteness'.

I think I read that some people suspect Collatz's problem to be
undecidable in PA.
From: Torkel Franzen
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <vcbd6cez920.fsf@beta13.sm.luth.se>
Joe Marshall <···@ccs.neu.edu> writes:

> I should have said `possible examples of incompleteness'.

  In other words, as yet unsolved problems, the way the problem of the
existence of solutions of x^n+y^n=z^n for n>2 and x,y,z>0 was
unsolved.

> I think I read that some people suspect Collatz's problem to be
> undecidable in PA.

  Such suggestions are sometimes made, but they have no basis
whatever.

  We do know that there are infinitely many true statements of the
form "the diophantine equation P(x1,..,xn)=0 has no solutions" that
are unprovable in PA.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m2ad7j751v.fsf@hanabi-air.shimizu.blume>
Joe Marshall <···@ccs.neu.edu> writes:

> I believe that a substantial number of useful programs are correct,
> but not provably so.

Show me *one* example, and convince me that it is actually correct!
("Because I tested it and it worked." will not convince me.)

Matthias
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <egism7w3fu.fsf@vipe.ii.uib.no>
Matthias Blume <····@my.address.elsewhere> writes:

> Joe Marshall <···@ccs.neu.edu> writes:

>> I believe that a substantial number of useful programs are correct,
>> but not provably so.

> Show me *one* example, and convince me that it is actually correct!
> ("Because I tested it and it worked." will not convince me.)

How about, "because I tested it for all possible inputs"? 

(I would have a hard time accepting that a program could actually
produce correct output for all input, but that there would not exist a
logical explanation why it was so.  But my intuition may be wrong, of
course)

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Tomasz Zielonka
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <slrnbq1jgl.inf.t.zielonka@zodiac.mimuw.edu.pl>
··········@ii.uib.no wrote:
> Matthias Blume <····@my.address.elsewhere> writes:
> 
>> Joe Marshall <···@ccs.neu.edu> writes:
> 
>>> I believe that a substantial number of useful programs are correct,
>>> but not provably so.
> 
>> Show me *one* example, and convince me that it is actually correct!
>> ("Because I tested it and it worked." will not convince me.)
> 
> How about, "because I tested it for all possible inputs"? 

But then you have a proof, don't you?

Best regards,
Tom

-- 
.signature: Too many levels of symbolic links
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3ptgevf0e.fsf@rigel.goldenthreadtech.com>
Tomasz Zielonka <··········@students.mimuw.edu.pl> writes:

> ··········@ii.uib.no wrote:
> > Matthias Blume <····@my.address.elsewhere> writes:
> > 
> >> Joe Marshall <···@ccs.neu.edu> writes:
> > 
> >>> I believe that a substantial number of useful programs are correct,
> >>> but not provably so.
> > 
> >> Show me *one* example, and convince me that it is actually correct!
> >> ("Because I tested it and it worked." will not convince me.)
> > 
> > How about, "because I tested it for all possible inputs"? 
> 
> But then you have a proof, don't you?

Yes.  By complete enumeration.


/Jon
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3d6cevd4a.fsf@rigel.goldenthreadtech.com>
·········@rcn.com (Jon S. Anthony) writes:

> Tomasz Zielonka <··········@students.mimuw.edu.pl> writes:
> 
> > ··········@ii.uib.no wrote:
> > > Matthias Blume <····@my.address.elsewhere> writes:
> > > 
> > >> Joe Marshall <···@ccs.neu.edu> writes:
> > > 
> > >>> I believe that a substantial number of useful programs are correct,
> > >>> but not provably so.
> > > 
> > >> Show me *one* example, and convince me that it is actually correct!
> > >> ("Because I tested it and it worked." will not convince me.)
> > > 
> > > How about, "because I tested it for all possible inputs"? 
> > 
> > But then you have a proof, don't you?
> 
> Yes.  By complete enumeration.

I have to retract this as Erann Gat's comments elsewhere indicate that
it is actually not true...

/Jon
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m21xsu4ztt.fsf@hanabi-air.shimizu.blume>
··········@ii.uib.no writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > Joe Marshall <···@ccs.neu.edu> writes:
> 
> >> I believe that a substantial number of useful programs are correct,
> >> but not provably so.
> 
> > Show me *one* example, and convince me that it is actually correct!
> > ("Because I tested it and it worked." will not convince me.)
> 
> How about, "because I tested it for all possible inputs"? 

If you have actually done so, then you have provided a proof of its
correctness.  But Joe claims that the program is not provably correct.
Contradiction.
 
> (I would have a hard time accepting that a program could actually
> produce correct output for all input, but that there would not exist a
> logical explanation why it was so.  But my intuition may be wrong, of
> course)

Testing for all inputs can be done only if there is a finite set of
possible inputs.  In that case I have no problem with the claim that
someone exhaustively tested the program.

Matthias
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-3010030722060001@192.168.1.51>
In article <··············@hanabi-air.shimizu.blume>, Matthias Blume
<····@my.address.elsewhere> wrote:

> ··········@ii.uib.no writes:
> 
> > Matthias Blume <····@my.address.elsewhere> writes:
> > 
> > > Joe Marshall <···@ccs.neu.edu> writes:
> > 
> > >> I believe that a substantial number of useful programs are correct,
> > >> but not provably so.
> > 
> > > Show me *one* example, and convince me that it is actually correct!
> > > ("Because I tested it and it worked." will not convince me.)
> > 
> > How about, "because I tested it for all possible inputs"? 
> 
> If you have actually done so, then you have provided a proof of its
> correctness.

No you haven't.  It's entirely possible for a program to return the
correct answer for a given input on one run, and to return a different,
incorrect answer for the same input on a different run.  (Figuring out the
circumstances under which this can happen is left as an exercise.  Hint:
hidden persistent state is not required.)

E.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1n0bi1ze2.fsf@tti5.uchicago.edu>
···@jpl.nasa.gov (Erann Gat) writes:

> [...] It's entirely possible for a program to return the
> correct answer for a given input on one run, and to return a different,
> incorrect answer for the same input on a different run.

Well, assuming a deterministic setting and assuming that you are
actually talking about *all* of the input of the program, then, no,
this is not possible.  Non-determinism, btw, can be modeled using
additional input from an infinite tape containing random bits.  If
your program depends on those (as all programs on real hardware do to
some degree because of the possibility of hardware faults), then you
would have to take those into account, too.
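
As an illustration of modeling nondeterminism as extra input (a sketch;
the function names are invented): a function that consults hidden
random state becomes a pure function once the random bits are passed
explicitly:

```python
import random

def flaky(x):
    # Looks nondeterministic: the result depends on hidden RNG state.
    return 0 if random.randrange(999999) == 0 else x

def flaky_explicit(x, tape):
    # The same program with its "random tape" as explicit input:
    # now a deterministic function of *all* of its inputs.
    return 0 if next(tape) % 999999 == 0 else x

assert flaky_explicit(42, iter([7])) == 42   # same inputs, same output
assert flaky_explicit(42, iter([0])) == 0    # the "fault" is now an input
```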

Matthias
From: Rene de Visser
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnrffe$8b8$1@news1.wdf.sap-ag.de>
"Matthias Blume" <····@my.address.elsewhere> wrote in message
···················@tti5.uchicago.edu...
> Well, assuming a deterministic setting and assuming that you are
> actually talking about *all* of the input of the program, then, no,
> this is not possible.  Non-determinism, btw, can be modeled using
> additional input from an infinite tape containing random bits.  If
> your program depends on those (as all programs on real hardware do to
> some degree because of the possibility of hardware faults), then you
> would have to take those into account, too.
>
> Matthias

Chess programs are highly nondeterministic when run on machines with
more than one processor, because there is one set of hash tables that
all processors access.  The processors examine different search lines,
but these lines share common positions (though examined to different
depths).

The first processor thread to reach a position stores its evaluation in
the common hash table.  Later threads update the entry only if the
position was analysed to less than the required depth; otherwise they
take the evaluation stored in the hash table.

And yes, this makes bugs very hard to find, because the problem is hard
to reproduce even with the same input data.

Rene.
From: Wade Humeniuk
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <1gcob.6471$EY3.2679@edtnps84>
Rene de Visser wrote:

> And yes this results in large problems find bugs, because it is hard to
> reproduce to the problem even with the same
> input data.

This reminds me of a story about a Northern Telecom phone switch
system.  The switches in the system would pass administrative information
around, with one switch acting as the "master" controller.  If this
master system encountered unresolvable errors and decided it was
unreliable, it would delegate its functions to another switch.  Well, a
problem occurred that was so systematic that each time the roles of the
switches changed, the new master would fail in the same way and delegate
out, only to have the next switch fail.  The techs had a terrible time
debugging: just when they had isolated the problem in one switch for
analysis, it would delegate its responsibility away and the problem
would move to another machine.


Wade
From: Albert Lai
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <4uekwt3lz9.fsf@vex.net>
···@jpl.nasa.gov (Erann Gat) writes:

> No you haven't.  It's entirely possible for a program to return the
> correct answer for a given input on one run, and to return a different,
> incorrect answer for the same input on a different run.  (Figuring out the
> circumstances under which this can happen is left as an exercise.  Hint:
> hidden persistent state is not required.)

Sun flares!  Remember to see the aurora!

(There was a generation of Cray computers with redundancy built in
because the circuitry was fast enough to be vulnerable to
cosmic-ray-induced bit mutation...)
From: Adam Warner
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <pan.2003.10.31.10.50.06.126907@consulting.net.nz>
Hi Albert Lai,

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
>> No you haven't.  It's entirely possible for a program to return the
>> correct answer for a given input on one run, and to return a different,
>> incorrect answer for the same input on a different run.  (Figuring out the
>> circumstances under which this can happen is left as an exercise.  Hint:
>> hidden persistent state is not required.)
> 
> Sun flares!  Remember to see the aurora!
> 
> (There was a generation of Cray computers with redundancy built in
> because the circuitry was fast enough to be vulnerable to
> cosmic-ray-induced bit mutation...)

I suspect Erann wasn't intending you to break the model of software
abstraction. Input to this program can be tested extensively:

(defun mirror (object)
  (if (zerop (random 999999))
      0
      object))

Without discovering that the function occasionally doesn't return the same
object.

Regards,
Adam
From: Mark Alexander Wotton
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <slrnbq4g83.5es.mwotton@pill3.orchestra.cse.unsw.EDU.AU>
On Fri, 31 Oct 2003 23:50:08 +1300, Adam Warner posted:
> Hi Albert Lai,
> 
>> ···@jpl.nasa.gov (Erann Gat) writes:
>> 
>>> No you haven't.  It's entirely possible for a program to return the
>>> correct answer for a given input on one run, and to return a different,
>>> incorrect answer for the same input on a different run.  (Figuring out the
>>> circumstances under which this can happen is left as an exercise.  Hint:
>>> hidden persistent state is not required.)
>> 
>> Sun flares!  Remember to see the aurora!
>> 
>> (There was a generation of Cray computers with redundancy built in
>> because the circuitry was fast enough to be vulnerable to
>> cosmic-ray-induced bit mutation...)
> 
> I suspect Erann wasn't intending you to break the model of software
> abstraction. Input to this program can be tested extensively:
> 
> (defun mirror (object)
>   (if (zerop (random 999999))
>       0
>       object))
> 
> Without discovering that the function occasionally doesn't return the same
> object.

However, in order for this not to respond the same way in each program run,
there is some hidden persistent state in the random number generator. This
is just a shell-game trick. :)

mrak

-- 
realise your life was only bait for a bigger fish
	-- aesop rock
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <oevxmppg.fsf@ccs.neu.edu>
···@jpl.nasa.gov (Erann Gat) writes:

> No you haven't.  It's entirely possible for a program to return the
> correct answer for a given input on one run, and to return a different,
> incorrect answer for the same input on a different run.  (Figuring out the
> circumstances under which this can happen is left as an exercise.  Hint:
> hidden persistent state is not required.)

(get-universal-time) on a fast-moving computer.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1oevxz6kj.fsf@tti5.uchicago.edu>
Joe Marshall <···@ccs.neu.edu> writes:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > No you haven't.  It's entirely possible for a program to return the
> > correct answer for a given input on one run, and to return a different,
> > incorrect answer for the same input on a different run.  (Figuring out the
> > circumstances under which this can happen is left as an exercise.  Hint:
> > hidden persistent state is not required.)
> 
> (get-universal-time) on a fast-moving computer.

The output of (get-universal-time) is certainly to be considered input
to the program, so if you enumerate all inputs, the enumeration would
have to account for this.  (Of course, this only shows that exhaustive
enumeration of all inputs is not really feasible for most programs.)

Matthias
From: Nick Name
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <qjwob.388602$R32.12853060@news2.tin.it>
Joe Marshall wrote:

> 
> (get-universal-time) on a fast-moving computer.

These are all side-effecting functions, and you can't write them in
Haskell as functions.  The Haskell type system ensures that your
functions are always pure.  You can fool the compiler by using an
extension such as FFI or unsafePerformIO inappropriately, in
implementations which allow that, but that is another story.  In
general, when you write a function in Haskell and the compiler infers
the type "a -> b", you have a proof that it is side-effect free.  IMHO,
this is a good thing.

V.
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ekwtnd5q.fsf@comcast.net>
Nick Name <·········@RE.MOVEinwind.it> writes:

> Joe Marshall wrote:
>
>> 
>> (get-universal-time) on a fast-moving computer.
>
> These all are side-effect functions, and you can't write them in haskell
> as functions. 

I'm not sure everyone agrees that time is a side effect (the
Everettists might object there).

I think we can all agree that a function that returns a different
output for the same input cannot be considered `pure'.
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnr5nm$s2n$1@news.oberberg.net>
··········@ii.uib.no wrote:
> Matthias Blume <····@my.address.elsewhere> writes:
>>Joe Marshall <···@ccs.neu.edu> writes:
>>>I believe that a substantial number of useful programs are correct,
>>>but not provably so.
>>Show me *one* example, and convince me that it is actually correct!
>>("Because I tested it and it worked." will not convince me.)
> How about, "because I tested it for all possible inputs"? 
> 
> (I would have a hard time accepting that a program could actually
> produce correct output for all input, but that there would not exist a
> logical explanation why it was so.  But my intuition may be wrong, of
> course)

There's one case where I have seen such proofs: it's relatively easy to 
prove the equivalence of boolean operator combinations by enumerating 
all possible inputs (you just have to check 2^N combinations for N input 
parameters, which is practical up to about a dozen or so).
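
Such a proof by enumeration fits in a few lines; for example, checking
one of De Morgan's laws over all 2^2 inputs (an illustrative sketch,
with invented names):

```python
from itertools import product

def equivalent(f, g, n):
    """Prove two n-input boolean functions equal by checking all 2**n inputs."""
    return all(f(*bits) == g(*bits)
               for bits in product((False, True), repeat=n))

# De Morgan: not (a and b)  ==  (not a) or (not b)
assert equivalent(lambda a, b: not (a and b),
                  lambda a, b: (not a) or (not b), 2)
```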

OTOH having to manually check all combinations always leaves a deep 
dissatisfaction within me: I'm forced to accept that it works, but I 
don't know /why/.

Regards,
Jo
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnqq5c$459$1@newsreader2.netcologne.de>
Matthias Blume wrote:

> Joe Marshall <···@ccs.neu.edu> writes:
> 
> 
>>I believe that a substantial number of useful programs are correct,
>>but not provably so.
> 
> 
> Show me *one* example, and convince me that it is actually correct!
> ("Because I tested it and it worked." will not convince me.)

new Object();

allocates an object on the heap most of the time ;)


Pascal

P.S.: This is only a joke. Of course, proofs of program correctness need 
to abstract away partial failures, and are still valid proofs.
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <8765i688eh.fsf@thalassa.informatimago.com>
Pascal Costanza <········@web.de> writes:
> P.S.: This is only a joke. Of course, proofs of program correctness
> need to abstract away partial failures, and are still valid proofs.

I don't agree.

Proofs of algorithms may abstract all they want and still be valid
mathematical proofs.

But proving a program is a different affair, and it would be useless
to prove that a given program works in perfect conditions if, when
the user clicks on the pixel at (432,542) while pressing the 'k' key
at the same time, it crashes or just gives a wrong answer.

Or when the Ariane-4 module was proved to be correct and did not work
so correctly once put in the Ariane-5.

Or again, it's completely useless to prove that your program is
correct if it crashes with the system when the user launches umpteen
background processes, because your abstract universe did not take into
account the fact that the OS may be heavily loaded at some point.


Of course, this means that all program proof is either impossible or
too costly (hence why it's never done formally, i.e., it's never done),
or concerns useless toy systems.


I'd still very much like to be able to have program proofs, but I hate
to get only "abstracted" partial proofs.  A test only shows that a bug
exists, not that there's no bug.  Same for abstract proofs.

-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m11xst1ixd.fsf@tti5.uchicago.edu>
Pascal Bourguignon <····@thalassa.informatimago.com> writes:

> Or, when the Ariane-4 module was proved to be correct and did not work
> so correctly once put in the Ariane-5.

This only shows that "correctness" is never absolute but always
relative to some correctness claim P.  Ariane-4's P was sufficient to
work well with the Ariane-4, but the Ariane-5 required a different
criterion P'.  The module was certainly still correct relative to P,
even when put into the Ariane-5.  It was never proven correct with
respect to P' -- which is where the trouble came from.

> Of course, this means that all program proof is either impossible or
> too costly (hence why it's never done formally, i.e., it's never done),
> or concerns useless toy systems.

"useless toy system" as in, e.g, "AMD CPU"?

Matthias
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <brryr4lz.fsf@ccs.neu.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>
>> I believe that a substantial number of useful programs are correct,
>> but not provably so.
>
> Show me *one* example, and convince me that it is actually correct!
> ("Because I tested it and it worked." will not convince me.)

(+ x 1) defined over integer values of x.

No doubt there exists a value of x that cannot be incremented on your
computer.  This clearly does *not* correctly implement the increment
function.
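
The failure mode is concrete on fixed-width machine integers; a sketch
emulating a 32-bit two's-complement increment in Python (whose own
integers are unbounded, so the wraparound is modeled by masking):

```python
def inc32(x):
    """Increment the way a 32-bit two's-complement machine would."""
    r = (x + 1) & 0xFFFFFFFF
    return r - 2**32 if r >= 2**31 else r

assert inc32(5) == 6                    # agrees with the increment function...
assert inc32(2**31 - 1) == -(2**31)     # ...until it silently overflows
```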
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1ekwu1qou.fsf@tti5.uchicago.edu>
Joe Marshall <···@ccs.neu.edu> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > Joe Marshall <···@ccs.neu.edu> writes:
> >
> >> I believe that a substantial number of useful programs are correct,
> >> but not provably so.
> >
> > Show me *one* example, and convince me that it is actually correct!
> > ("Because I tested it and it worked." will not convince me.)
> 
> (+ x 1) defined over integer values of x.
> 
> No doubt there exists a value of x that cannot be incremented on your
> computer.  This clearly does *not* correctly implement the increment
> function.

So then this is an /in/correct program, right?  You were supposed to
show me a /correct/ one.

Matthias
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ad7io4tl.fsf@ccs.neu.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>
>> I believe that a substantial number of useful programs are correct,
>> but not provably so.
>
> Show me *one* example, and convince me that it is actually correct!
> ("Because I tested it and it worked." will not convince me.)

What, you want a proof?
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m2llr2yzr2.fsf@hanabi-air.shimizu.blume>
Joe Marshall <···@ccs.neu.edu> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > Joe Marshall <···@ccs.neu.edu> writes:
> >
> >> I believe that a substantial number of useful programs are correct,
> >> but not provably so.
> >
> > Show me *one* example, and convince me that it is actually correct!
> > ("Because I tested it and it worked." will not convince me.)
> 
> What, you want a proof?

Not if you can devise some other way of convincing me.

Matthias
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <d6cerxh1.fsf@comcast.net>
Matthias Blume <····@my.address.elsewhere> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>
>> Matthias Blume <····@my.address.elsewhere> writes:
>> 
>> > Joe Marshall <···@ccs.neu.edu> writes:
>> >
>> >> I believe that a substantial number of useful programs are correct,
>> >> but not provably so.
>> >
>> > Show me *one* example, and convince me that it is actually correct!
>> > ("Because I tested it and it worked." will not convince me.)
>> 
>> What, you want a proof?
>
> Not if you can devise some other way of convincing me.

The usual second resort is physical intimidation, but I suppose we
could out-vote you, that's a popular alternative.
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <egn0bh4zhf.fsf@sefirot.ii.uib.no>
·············@comcast.net writes:

> Matthias Blume <····@my.address.elsewhere> writes:

>> Joe Marshall <···@ccs.neu.edu> writes:

>>> Matthias Blume <····@my.address.elsewhere> writes:

>>>> Joe Marshall <···@ccs.neu.edu> writes:

>>>>> I believe that a substantial number of useful programs are correct,
>>>>> but not provably so.

>>>> Show me *one* example, and convince me that it is actually
>>>> correct!  ("Because I tested it and it worked." will not convince
>>>> me.)

>>> What, you want a proof?

>> Not if you can devise some other way of convincing me.

> The usual second resort is physical intimidation, 

And the result is feigned submission, while muttering under one's
breath that the earth still orbits the sun. :-)

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <k76lmouj.fsf@ccs.neu.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>
>> Matthias Blume <····@my.address.elsewhere> writes:
>> 
>> > Joe Marshall <···@ccs.neu.edu> writes:
>> >
>> >> I believe that a substantial number of useful programs are correct,
>> >> but not provably so.
>> >
>> > Show me *one* example, and convince me that it is actually correct!
>> > ("Because I tested it and it worked." will not convince me.)
>> 
>> What, you want a proof?
>
> Not if you can devise some other way of convincing me.

I've actually been giving this some serious thought.

It is easy to construct programs that are either provable or correct,
but not both.  One can prove that there exist programs that are
correct, but not provably so, or incorrect but not provably so.

So there are three classes of programs: ones that are provably
correct, ones that are provably incorrect, and ones that are neither.
Now it is clear that a substantial number of useful programs are
provably incorrect (buggy software).  I'm sure that you believe that
there exist several useful programs that are provably correct.  Given
how difficult it is to prove some easily stated problems, do you
believe that every useful program falls into the first or second class?

Consider RSA.  It is secure only if factoring is hard.  No one has
proven that factoring is hard, so RSA may not actually encrypt
anything.  Nonetheless many people use it and believe that it does
work.  There are other cryptosystems that rely on P != NP.

People generally have a good rationale behind their software design,
but that is a hell of a long way from a proof.
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-3110030835580001@192.168.1.51>
In article <············@ccs.neu.edu>, Joe Marshall <···@ccs.neu.edu> wrote:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > Joe Marshall <···@ccs.neu.edu> writes:
> >
> >> Matthias Blume <····@my.address.elsewhere> writes:
> >> 
> >> > Joe Marshall <···@ccs.neu.edu> writes:
> >> >
> >> >> I believe that a substantial number of useful programs are correct,
> >> >> but not provably so.
> >> >
> >> > Show me *one* example, and convince me that it is actually correct!
> >> > ("Because I tested it and it worked." will not convince me.)
> >> 
> >> What, you want a proof?
> >
> > Not if you can devise some other way of convincing me.
> 
> I've actually been giving this some serious thought.
> 
> It is easy to construct programs that are either provable or correct,
> but not both.  One can prove that there exist programs that are
> correct, but not provably so, or incorrect but not provably so.
> 
> So there are three classes of programs: ones that are provably
> correct, ones that are provably incorrect, and ones that are neither.

There is also a class of programs that are provably probably approximately
correct.  Are such programs correct or not?

E.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m165i51j5h.fsf@tti5.uchicago.edu>
Joe Marshall <···@ccs.neu.edu> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > Joe Marshall <···@ccs.neu.edu> writes:
> >
> >> Matthias Blume <····@my.address.elsewhere> writes:
> >> 
> >> > Joe Marshall <···@ccs.neu.edu> writes:
> >> >
> >> >> I believe that a substantial number of useful programs are correct,
> >> >> but not provably so.
> >> >
> >> > Show me *one* example, and convince me that it is actually correct!
> >> > ("Because I tested it and it worked." will not convince me.)
> >> 
> >> What, you want a proof?
> >
> > Not if you can devise some other way of convincing me.
> 
> I've actually been giving this some serious thought.
> 
> It is easy to construct programs that are either provable or correct,
> but not both.

Huh?  If they are provable, then they better be correct!  I doubt that
it is easy to construct programs that are correct but not provable.
How do you know they are correct?

>  One can prove that there exist programs that are
> correct, but not provably so, or incorrect but not provably so.

Yes.  They certainly exist.  But that it is easy to construct a
particular example, even if one tries.

Let P be some reasonable correctness criterion (i.e., one that is
satisfiable by at least one program).  The only way that I can see to
generate a program satisfying P but not provably so is to enumerate a
certain infinite sequence of programs, e.g., *all* programs.  Then you
can use the above theorem and say that there is at least one program
(in fact, infinitely many) in this sequence which satisfies P but not
provably so.  The trouble with this is that even given this sequence
you cannot know which of its members are of that nature.  I find it
highly unlikely that humans intentionally or even just accidentally
write such programs.

> So there are three classes of programs: ones that are provably
> correct, ones that are provably incorrect, and ones that are neither.
> Now it is clear that a substantial number of useful programs are
> provably incorrect (buggy software).  I'm sure that you believe that
> there exist several useful programs that are provably correct.

Sure.

> Consider RSA.  It is secure only if factoring is hard.  No one has
> proven that factoring is hard, so RSA may not actually encrypt
> anything.  Nonetheless many people use it and believe that it does
> work.  There are other cryptosystems that rely on P!=NP 

So the correctness criterion "encrypts reliably" may or may not hold
for RSA.  If it doesn't, then RSA is not one of the examples that I am
asking you to provide.  If it does, then first convince me that it is
actually correct.  RSA may or may not be a counterexample to my
claim.  The funny thing is that the moment we find out it will be
clear that it is not.

> People generally have a good rationale behind their software design,
> but that is a hell of a long way from a proof.

True.  But just because it is a long way does not mean that the proof
does not exist.

Matthias
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1sml9z6on.fsf@tti5.uchicago.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> Yes.  They certainly exist.  But that it is easy to construct a
                  does not mean that --^
> particular example, even if one tries.
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ad7gor1y.fsf@comcast.net>
Matthias Blume <····@my.address.elsewhere> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>
>> Matthias Blume <····@my.address.elsewhere> writes:
>> 
>> > Joe Marshall <···@ccs.neu.edu> writes:
>> >
>> >> Matthias Blume <····@my.address.elsewhere> writes:
>> >> 
>> >> > Joe Marshall <···@ccs.neu.edu> writes:
>> >> >
>> >> >> I believe that a substantial number of useful programs are correct,
>> >> >> but not provably so.
>> >> >
>> >> > Show me *one* example, and convince me that it is actually correct!
>> >> > ("Because I tested it and it worked." will not convince me.)
>> >> 
>> >> What, you want a proof?
>> >
>> > Not if you can devise some other way of convincing me.
>> 
>> I've actually been giving this some serious thought.
>> 
>> It is easy to construct programs that are either provable or correct,
>> but not both.
>
> Huh?  If they are provable, then they better be correct!  

Well, mumble.  If the program asserts something provably true, it had
better be correct.  Correspondingly, if the program asserts something
provably wrong it had better be incorrect.

(defun add-one (x) x) ;; returns the successor of X under PA.

Trivially provable and terribly wrong.

> I doubt that it is easy to construct programs that are correct but
> not provable.  How do you know they are correct?

If Goldbach's conjecture is false, then it is decidable in PA.  

Trivial proof:  if Goldbach's conjecture is false, there exists an even
number > 4 that is not the sum of two primes.  A simple program can
examine all the integers less than that number and it could
demonstrate the fact that the number violates Goldbach's conjecture.

If A implies B, then ~B implies ~A.

Therefore, if Goldbach's conjecture is NOT decidable in PA, then
Goldbach's conjecture is NOT false.
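The checking program described above can be sketched in Python (a
hypothetical illustration, not part of the thread): it reports whether
a given even number > 4 violates Goldbach's conjecture by examining all
smaller candidates.

```python
def is_prime(n):
    """Trial-division primality test; adequate for a sketch."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def violates_goldbach(n):
    """True iff n is an even number > 4 that is not the sum of two primes."""
    if n <= 4 or n % 2 != 0:
        return False
    return not any(is_prime(p) and is_prime(n - p)
                   for p in range(2, n // 2 + 1))
```

If a counterexample existed, this finite check would exhibit it, which
is exactly why a false Goldbach's conjecture would be decidable in PA.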
From: Daniel C. Wang
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uism4ykzf.fsf@hotmail.com>
Joe Marshall <···@ccs.neu.edu> writes:
{stuff deleted}
> Consider RSA.  It is secure only if factoring is hard.  No one has
> proven that factoring is hard, so RSA may not actually encrypt
> anything.  Nonetheless many people use it and believe that it does
> work.  There are other cryptosystems that rely on P!=NP 

Actually, I think there are few if any deployed cryptosystems which are
known to be NP-hard. Factoring is not known to be NP-hard.

> People generally have a good rationale behind their software design,
> but that is a hell of a long way from a proof.

If your notion of a secure crypto system is one that is NP-hard, then
RSA is not known to be a secure crypto system, i.e. no one knows if RSA
is "correct".

People use it as if it were secure, but they should know the risks involved.
RSA is a program that is not known to be correct by anyone. So it is
difficult to construct a proof for it.

So the claim is that every program that is *known to be correct* by the
programmer has a proof of its correctness.

This claim is trivially true. If there is any misunderstanding, it's because
the notions of "correctness" and proof are being misunderstood or interpreted
in a fuzzy way that muddles the issues.

If you want to continue this discussion, you need to nail down what you
understand as a "correct program". If you can't provide a definition for it,
this is a pretty pointless argument.

When I say correct program I mean a program that correctly implements a
given specification.

In fact, "correct program" is a misnomer; we should only talk about a
correct program with respect to a particular specification.

If you give the programmer a specification and ask for an implementation of
that specification, and the programmer claims to have an implementation that
meets the specification, and if that claim is true (or close enough to true),
you can take the programmer's implementation and prove that the program is
correct.
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <wuaknbg8.fsf@comcast.net>
·········@hotmail.com (Daniel C. Wang) writes:

> If you want to continue this discussion. You need to nail down what you
> understand as a "correct program". If you can't provide a defintion for it
> this is a pretty pointless argument.

Quite true.

I'm going after a rather squirrely version of `correctness'.  A
program is `correct' if all users of the program would cast doubt
elsewhere if they got an incorrect answer.  It is correct if everyone
seems to behave as if it is correct.

It's not a very satisfactory criterion.

On the other hand, what is a `proof' but a convincing argument?  

  If modus tollens is true, everyone will believe in it.
  Everyone doesn't believe in modus tollens
  Therefore, modus tollens is false, QED....?
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87d6cc58wj.fsf@thalassa.informatimago.com>
·········@hotmail.com (Daniel C. Wang) writes:
> In fact a "correct program" is a misnomer we should only talk about a
> correct program with respect to a particular specification.

Right.  Then all my programs have always been perfectly correct, since
I've never been asked to implement any _formal_ specification. :-)



-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnur07$mi$1@newsreader2.netcologne.de>
Joe Marshall wrote:

> ··········@ii.uib.no writes:
> 
> 
>>Yes, but that's why I say provABLE rather than provEN programs.
>>My impression was that arguments were put forth (possibly not by you)
>>that a substantial class of useful programs can be correct, but not
>>provable so, or simply incorrect, but still useful.
> 
> 
> I believe that a substantial number of useful programs are correct,
> but not provably so.  I don't see how an incorrect program would be
> that useful.  I *do* however see how a partially correct program would
> be.

Here is something I have found by accident: 
http://www.idiom.com/~zilla/Work/Softestim/softestim.html

 From that page:

"In addition to the claims in the paper, there is one additional claim 
that can easily be made:

* Program correctness cannot be determined

This claim is contrary to the view that formal specifications and 
program proofs can prove program correctness.

The argument here is that the specification must be formal in order to 
participate in the proof. Since the specification fully specifies the 
behavior of the program, the complexity (C) of the specification must be 
at least approximately as large as the complexity of a minimal program 
for the task at hand. More specifically, given some input it must be 
possible to determine the desired behavior of the program from the 
specification. Using this, a small program can be written that, given 
some input, exhaustively queries the specification until the 
corresponding output bit pattern (if any) is determined; this bit 
pattern is then output, thereby simulating the desired program. 
Formally, C(specification) + C(query program) >= C(program). We are left 
with the obvious question: if the specification is approximately of the 
same order of complexity as the program, how can we know that the 
specification is correct?"
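The construction just quoted can be sketched in Python; `spec` here is
a hypothetical predicate standing in for the formal specification, and
the search diverges if no output satisfies it:

```python
from itertools import count, product

def run_via_spec(spec, inp):
    """Simulate the desired program by exhaustively querying the
    specification: enumerate output bit patterns in length order and
    return the first one the spec accepts for this input."""
    for n in count(0):
        for bits in product("01", repeat=n):
            out = "".join(bits)
            if spec(inp, out):
                return out
```

For example, with `spec = lambda i, o: o == bin(len(i))[2:]`,
`run_via_spec(spec, "abc")` returns `"11"`.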


Pascal
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-3110031602150001@k-137-79-50-101.jpl.nasa.gov>
In article <···········@newsreader2.netcologne.de>, Pascal Costanza
<········@web.de> wrote:

> Here is something I have found by accident: 
> http://www.idiom.com/~zilla/Work/Softestim/softestim.html
> 
>  From that page:
> 
> "In addition to the claims in the paper, there is one additional claim 
> that can easily be made:
> 
> * Program correctness cannot be determined
> 
> This claim is contrary to the view that formal specifications and 
> program proofs can prove program correctness.
> 
> The argument here is that the specification must be formal in order to 
> participate in the proof. Since the specification fully specifies the 
> behavior of the program,the complexity (C) of the specification must be 
> at least approximately as large as the complexity of a minimal program 
> for the task at hand.

No, I don't think this is true.

This issue was actually raised by McCarthy at the lisp conference during
Paul Graham's presentation.  His counterexample was the problem of matrix
inversion.  It is much easier to give a specification for matrix inversion
than it is to given an algorithm.  To specify what matrix inversion is you
don't have to know how to invert a matrix, only how to multiply and
compare them.
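McCarthy's example can be made concrete with a Python sketch
(hypothetical, not from the thread): checking that B is the inverse of
A needs only multiplication and comparison, never an inversion
algorithm.

```python
def is_inverse(a, b, eps=1e-9):
    """True iff the matrix product a*b is (approximately) the identity.
    Matrices are lists of rows; only multiplication and comparison
    are needed to state the specification."""
    n = len(a)
    for i in range(n):
        for j in range(n):
            entry = sum(a[i][k] * b[k][j] for k in range(n))
            want = 1.0 if i == j else 0.0
            if abs(entry - want) > eps:
                return False
    return True
```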

E.
From: Daniel C. Wang
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uad7gykea.fsf@hotmail.com>
Pascal Costanza <········@web.de> writes:
{stuff deleted}
> pattern is then output, thereby simulating the desired
> program. Formally, C(specification) + C(query program) >=
> C(program). We are left with the obvious question: if the
> specification is approximately of the same order of complexity as the
> program, how can we know that the specification is correct?"

By constructing a larger, hairier, even more complex proof! At no point has
anyone claimed that complicated proofs don't exist. Also, I'm the first one
to admit that sometimes the formal proof just isn't worth the trouble. We
are arguing about whether most programs claimed to be correct by programmers
have a proof of their correctness.

There really isn't any point in arguing about this. Rather than trying to
misunderstand the claim in order to contradict it, put some thought into
understanding it; that would save some of us a lot of consternation.
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <pan.2003.10.31.23.47.36.264933@knm.org.pl>
On Sat, 01 Nov 2003 00:24:56 +0100, Pascal Costanza wrote:

> Using this, a small program can be written that, given 
> some input, exhaustively queries the specification until the 
> corresponding output bit pattern (if any) is determined; this bit 
> pattern is then output, thereby simulating the desired program. 
> Formally, C(specification) + C(query program) >= C(program).

No. How did you get that inequality? The complexity of the specification +
the complexity of the program which queries the specification can be much
smaller than the complexity of the actual practical program that we want
to verify.

I don't say that today it's realistic to formally verify whole big
real-life programs but that the argument is flawed.

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <4qxooqnn.fsf@comcast.net>
Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> writes:

> On Sat, 01 Nov 2003 00:24:56 +0100, Pascal Costanza wrote:
>
>> Using this, a small program can be written that, given 
>> some input, exhaustively queries the specification until the 
>> corresponding output bit pattern (if any) is determined; this bit 
>> pattern is then output, thereby simulating the desired program. 
>> Formally, C(specification) + C(query program) >= C(program).
>
> No. How did you get that inequality? The complexity of the specification +
> the complexity of the program which queries the specification can be much
> smaller than the complexity of the actual practical program that we want
> to verify.

Not via measures such as Kolmogorov complexity.
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa3b97c$1@news.unimelb.edu.au>
·············@comcast.net writes:

>Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> writes:
>
>> On Sat, 01 Nov 2003 00:24:56 +0100, Pascal Costanza wrote:
>>
>>> Using this, a small program can be written that, given 
>>> some input, exhaustively queries the specification until the 
>>> corresponding output bit pattern (if any) is determined; this bit 
>>> pattern is then output, thereby simulating the desired program. 
>>> Formally, C(specification) + C(query program) >= C(program).
>>
>> No. How did you get that inequality? The complexity of the specification +
>> the complexity of the program which queries the specification can be much
>> smaller than the complexity of the actual practical program that we want
>> to verify.
>
>Not via measures such as Kolmogorv complexity.

Which just goes to show that a program's Kolmogorov complexity has
no relationship with the ease of showing whether that program is correct.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <brrwp0cm.fsf@comcast.net>
Fergus Henderson <···@cs.mu.oz.au> writes:

> ·············@comcast.net writes:
>
>>Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> writes:
>>
>>> On Sat, 01 Nov 2003 00:24:56 +0100, Pascal Costanza wrote:
>>>
>>>> Using this, a small program can be written that, given 
>>>> some input, exhaustively queries the specification until the 
>>>> corresponding output bit pattern (if any) is determined; this bit 
>>>> pattern is then output, thereby simulating the desired program. 
>>>> Formally, C(specification) + C(query program) >= C(program).
>>>
>>> No. How did you get that inequality? The complexity of the specification +
>>> the complexity of the program which queries the specification can be much
>>> smaller than the complexity of the actual practical program that we want
>>> to verify.
>>
>>Not via measures such as Kolmogorv complexity.
>
> Which just goes to show that a program's Kolmogorv complexity has
> no relationship with the ease of showing whether that program is correct.

Good lord, you can't be serious!
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa428b3$1@news.unimelb.edu.au>
·············@comcast.net writes:

>Fergus Henderson <···@cs.mu.oz.au> writes:
>
>> ·············@comcast.net writes:
>>
>>>Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> writes:
>>>
>>>> On Sat, 01 Nov 2003 00:24:56 +0100, Pascal Costanza wrote:
>>>>
>>>>> Formally, C(specification) + C(query program) >= C(program).
>>>>
>>>> No. How did you get that inequality? The complexity of the specification +
>>>> the complexity of the program which queries the specification can be much
>>>> smaller than the complexity of the actual practical program that we want
>>>> to verify.
>>>
>>>Not via measures such as Kolmogorv complexity.
>>
>> Which just goes to show that a program's Kolmogorv complexity has
>> no relationship with the ease of showing whether that program is correct.
>
>Good lord, you can't be serious!

Ah... now that you ask me to explain my reasoning, I see that I made
a mistake.  So I have to retract that.  The Kolmogorov complexity of a
program does have some relationship with the ease of showing whether it
is correct.

However, your statement "not via measures such as Kolmogorov complexity"
above is wrong.

I was thinking along the following lines.
Consider the following two programs to compute 1 + 1:

 (a)
	int main() { return 1 + 1; }
  
 (b)
 	int length_of_shortest_proof_of_fermats_last_theorem() {
		/* do an exhaustive search to find the shortest proof
		   of Fermat's last theorem, and return the length of
		   that proof */
		...
	}
	int main() {
		int len = length_of_shortest_proof_of_fermats_last_theorem();
		return (len > 10000) ? 2 : 3;
	}

Now, if there is a proof of Fermat's last theorem which takes less than
10000 proof steps, the second program is equivalent to the first.
But obviously the first program is a lot easier to prove correct
than the second.

My mistake was to think that if these two programs are equivalent,
then they have the same Kolmogorov complexity.  However, that's wrong.
The Kolmogorov complexity of a program P is not the length of the shortest
program equivalent to P; it is the length of the shortest program that
will generate the text of P as its output.

But I think your statement above, that the Kolmogorov complexity of the
executable specification can't be much smaller than the complexity of
the actual practical program that we want to verify, was also based
on the same mistake.  If the actual program is significantly longer
than the specification, and its text is not especially compressible,
then it will obviously have a higher Kolmogorov complexity than the
executable specification.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <sml7muzi.fsf@comcast.net>
Fergus Henderson <···@cs.mu.oz.au> writes:

> But I think your statement above, that the Kolmogorov complexity of the
> executable specification can't be much smaller than the complexity of
> the actual practical program that we want to verify, was also based
> on the same mistake.  

That's not what I'm trying to assert.  I'm saying that the Kolmogorov
complexity of the executable specification *must* be at least as large
as that of the *simplest* program that implements it.  The specification can be
no simpler than that.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m2n0bg90k2.fsf@hanabi-air.shimizu.blume>
Pascal Costanza <········@web.de> writes:

> * Program correctness cannot be determined
> 
> This claim is contrary to the view that formal specifications and
> program proofs can prove program correctness.
> 
> The argument here is that the specification must be formal in order to
> participate in the proof. Since the specification fully specifies the
> behavior of the program,the complexity (C) of the specification must
> be at least approximately as large as the complexity of a minimal
> program for the task at hand. More specifically, given some input it
> must be possible to determine the desired behavior of the program from
> the specification. Using this, a small program can be written that,
> given some input, exhaustively queries the specification until the
> corresponding output bit pattern (if any) is determined; this bit
> pattern is then output, thereby simulating the desired
> program. Formally, C(specification) + C(query program) >=
> C(program). We are left with the obvious question: if the
> specification is approximately of the same order of complexity as the
> program, how can we know that the specification is correct?"

Of course this is complete nonsense.  It is *much* easier, to, e.g.,
say what a correct sorting algorithm is than to actually implement
one.  Moreover, the "enumerate all outputs and pick the first correct
one" construction might not be so easy to do for a variety of reasons.
I can see at least two:

  - the correctness criterion might not be effectively testable
  - the correctness criterion might include the time- and/or space-complexity
    of the program

Matthias
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bo0h6o$kbu$1@newsreader2.netcologne.de>
Matthias Blume wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>* Program correctness cannot be determined
>>
>>This claim is contrary to the view that formal specifications and
>>program proofs can prove program correctness.
>>
>>The argument here is that the specification must be formal in order to
>>participate in the proof. Since the specification fully specifies the
>>behavior of the program,the complexity (C) of the specification must
>>be at least approximately as large as the complexity of a minimal
>>program for the task at hand. More specifically, given some input it
>>must be possible to determine the desired behavior of the program from
>>the specification. Using this, a small program can be written that,
>>given some input, exhaustively queries the specification until the
>>corresponding output bit pattern (if any) is determined; this bit
>>pattern is then output, thereby simulating the desired
>>program. Formally, C(specification) + C(query program) >=
>>C(program). We are left with the obvious question: if the
>>specification is approximately of the same order of complexity as the
>>program, how can we know that the specification is correct?"
> 
> 
> Of course this is complete nonsense.  It is *much* easier, to, e.g.,
> say what a correct sorting algorithm is than to actually implement
> one.

Yes, but is it also easier to _formally specify_ what a correct sorting 
algorithm is than to implement one?

Without a formal specification for the correctness of an algorithm, you 
cannot prove the correctness of an algorithm.


Pascal
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-0111030725030001@192.168.1.51>
In article <············@newsreader2.netcologne.de>, Pascal Costanza
<········@web.de> wrote:

> > Of course this is complete nonsense.  It is *much* easier, to, e.g.,
> > say what a correct sorting algorithm is than to actually implement
> > one.
> 
> Yes, but is it also easier to _formally specify_ what a correct sorting 
> algorithm is than to implement one?

Yes, of course.

s2 = sort(s1) iff:

len(s2) = len(s1) and
forall(x): if member(x,s2) then member(x,s1) and
forall integers x,y in (0,len(s1)-1): if x<y then s2[x]<=s2[y]

That's much easier than actually sorting.
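The specification above, transcribed into an executable Python checker
(an illustration, not from the thread; the last clause is read as
ordering s2 against itself, and note that as stated the spec does not
require s2 to be a permutation of s1):

```python
def satisfies_sort_spec(s1, s2):
    """Check the three clauses: equal length, membership of s2's
    elements in s1, and nondecreasing order of s2."""
    if len(s2) != len(s1):
        return False
    if not all(x in s1 for x in s2):
        return False
    return all(s2[x] <= s2[y]
               for x in range(len(s1))
               for y in range(x + 1, len(s1)))
```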

Matrix inversion makes an even better example.  Prime factoring is even
better than that.

E.
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <y8v0nk39.fsf@comcast.net>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <············@newsreader2.netcologne.de>, Pascal Costanza
> <········@web.de> wrote:
>
>> > Of course this is complete nonsense.  It is *much* easier, to, e.g.,
>> > say what a correct sorting algorithm is than to actually implement
>> > one.
>> 
>> Yes, but is it also easier to _formally specify_ what a correct sorting 
>> algorithm is than to implement one?
>
> Yes, of course.
>
> s2 = sort(s1) iff:
>
> len(s2) = len(s1) and
> forall(x): if member(x,s2) then member(x,s1) and
> forall integers x,y in (0,len(s1)-1): if x<y then s1[x]<=s2[y]
>
> That's much easier than actually sorting.

Nah, it's easier than sorting *efficiently*, but not any easier than
sorting.  Here's my trivial sort routine:

(defun trivial-sort (input)
  ;; Permute until the result satisfies the specification.
  (loop until (satisfies-eranns-sort-specification input)
        do (setf input (permute input)))
  input)

My program is not much larger than your specification.

This generalizes to any program that formally specifies a correct
algorithm.
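The same brute-force idea, sketched in Python with deterministic
enumeration of permutations instead of repeated random permuting (an
illustration, not from the thread):

```python
from itertools import permutations

def trivial_sort(xs):
    """Sort by exhaustive search: return the first permutation of xs
    that is in nondecreasing order."""
    for p in permutations(xs):
        if all(p[i] <= p[i + 1] for i in range(len(p) - 1)):
            return list(p)
    # Unreachable: every finite sequence has a sorted permutation
    # (the empty sequence is vacuously sorted).
    return list(xs)
```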
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <pan.2003.11.01.16.43.34.185473@knm.org.pl>
On Sat, 01 Nov 2003 16:12:59 +0000, prunesquallor wrote:

>> That's much easier than actually sorting.
> 
> Nah, it's easier than sorting *efficiently*, but not any easier than
> sorting.

But we want to prove the correctness of an efficient program, so it
doesn't matter that there exist another program which can be proven
correct.

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <sml8nhlp.fsf@comcast.net>
Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> writes:

> On Sat, 01 Nov 2003 16:12:59 +0000, prunesquallor wrote:
>
>>> That's much easier than actually sorting.
>> 
>> Nah, it's easier than sorting *efficiently*, but not any easier than
>> sorting.
>
> But we want to prove the correctness of an efficient program, so it
> doesn't matter that there exist another program which can be proven
> correct.

The original question was whether the complexity of the proof is less
than the complexity of the program.  The Kolmogorov complexity of a
program is informally `the shortest program that can generate the
output'.  By construction, any decidable algorithmic proof can be
trivially turned into a decidable program that computes the answer, so
the Kolmogorov complexity of any program is almost the same as the
Kolmogorov complexity of any proof of that program.

The *difficulty* of determining that a particular algorithm is correct
is a different story, but it should be clear that the algorithmic
complexity of the proof is on par with the algorithmic complexity of
the program itself.
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <pan.2003.11.01.17.31.55.111556@knm.org.pl>
On Sat, 01 Nov 2003 17:06:44 +0000, prunesquallor wrote:

> The original question was whether the complexity of the proof is less
> than the complexity of the program.  The Kolmogorov complexity of a
> program is informally `the shortest program that can generate the
> output'.

The Kolmogorov complexity is useless as a measure of how hard it is to
see whether a program is correct.  That's because finding such a shortest
program for a given long program is generally harder than proving the
short program correct.

The claim that the program is as obvious as the specification because
there exists another program which does the same and has the size of
the specification is absurd, because we can't easily find that short
program until we prove the big program correct.

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <k76jopac.fsf@comcast.net>
Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> writes:

> On Sat, 01 Nov 2003 17:06:44 +0000, prunesquallor wrote:
>
>> The original question was whether the complexity of the proof is less
>> than the complexity of the program.  The Kolmogorov complexity of a
>> program is informally `the shortest program that can generate the
>> output'.
>
> The Kolmogorov complexity is useless as a measure of how hard is to see
> whether a program is correct. 

Not completely useless:  it gives a lower bound.

> The claim that the program is as obvious as the specification because
> there exists another program which does the same and has the size of
> the specification is absurd, because we can't easily find that short
> program until we prove the big program correct.

You misunderstand.  If the specification is correct, then it is
trivial to construct the short program from the specification.  It
will be a sucky short program, but it will exactly match the
specification.  

Now the problem is `proving' the specification to be correct.
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <pan.2003.11.01.19.49.59.586031@knm.org.pl>
On Sat, 01 Nov 2003 19:35:25 +0000, prunesquallor wrote:

> You misunderstand.  If the specification is correct, then it is
> trivial to construct the short program from the specification.
> It will be a sucky short program, but it will exactly match the
> specification.

So what? How does it imply that the big program we want to verify is as
obvious as the specification?

> Now the problem is `proving' the specification to be correct.

The specification is meant to be so simple that a person agrees that if
a program conforms to the specification, he believes it does what he wants
it to do. It defines what it means to be correct.

The program we want to verify is much longer. It's not obvious from
looking at its code that it does what one wants it to do. A proof would
assure that. That it has the same Kolmogorov complexity is irrelevant -
it's still much longer and harder to understand.

The argument that the program and the specification are equivalently easy
to understand and verify manually is flawed. It talks about *a* program
derived from the specification, it doesn't imply anything about verifying
other programs that we want to verify.

Moreover, it's completely wrong if the specification includes complexity
requirements. Then the program automatically derived from the specification
usually doesn't even exist.

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <8ymzom0u.fsf@comcast.net>
Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> writes:

> On Sat, 01 Nov 2003 19:35:25 +0000, prunesquallor wrote:
>
>> You misunderstand.  If the specification is correct, then it is
>> trivial to construct the short program from the specification.
>> It will be a sucky short program, but it will exactly match the
>> specification.
>
> So what?  How does it imply that the big program we want to verify is as
> obvious as the specification?

It doesn't.  I never said it did.  I only said that they have equal
complexity under a particular formal definition of complexity.

>> Now the problem is `proving' the specification to be correct.
>
> The specification is meant to be so simple that a person agrees that if
> a program conforms to the specification, he believes it does what he wants
> it to do.  It defines what does it mean to be correct.

Yes, but if the specification is meant to be *formally correct*,
i.e. provable by an algorithm, then it must be at *least* as complex
as the simplest program that implements the specification.  The
specification cannot be simpler than the simplest program that
implements it.

But if the specification is so simple that a person believes it does
what he wants it to do, then the simplest program implementing the
specification is *also* at most that simple.  In this case, a formal
proof is rather pointless.  On the other hand, if the simplest program
that implements a specification is too difficult to be understood,
then the specification itself must be too difficult to be understood,
and again a formal proof is pointless.

> The argument that the program and the specification are equivalently easy
> to understand and verify manually is flawed.

I'm not arguing that.  I'm arguing that for any specification there
exists a *minimal* program that is at least as easy to verify as the
specification.

That is:  for all formal specifications there exists a simplest program
such that the complexity of the specification plus the complexity of
the proof that the program meets it is higher than the complexity of
that simplest program.  i.e.

   C(specification) + C(proof) >= C(minimal program implementing spec)

Which is the inequality that started this subthread.

> It talks about *a* program derived from the specification, it
> doesn't imply anything about verifying other programs that we want
> to verify.

No, indeed it does not.  What it implies is that verifying the
*specification* will be at least as hard as verifying the minimal
program that fulfills it.

> Moreover, it's completely wrong if the specification includes complexity
> requirements. Then the program automatically derived from the specification
> usually doesn't even exist.

If the specification includes a maximum complexity requirement less
than its own complexity, such a program cannot exist.  This is
somewhat like `the smallest number not expressible in fewer than
eleven words'.

Here's such a spec:

    A program satisfying this specification is no longer than 100
    bytes.  It takes no arguments and it prints this:

    F529531CDDBC9542AB1317C69E128787BFB116D848CAE667DC3B4C3A12200D56
    A4FA514450595E134BD84FD80258411C098B4CD6A27BD657D3E33744FB16FF69
    6D08C1586A8691B049CB80A967804E91FC468A02D0F03ADA7A00C6CB6A889F1C
    8F2EF8D64E8AA26D0752AD1D06E5FAA3A6416A4F2D48BCEAEBC63A403207B2F8
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <pan.2003.11.01.21.04.13.455373@knm.org.pl>
On Sat, 01 Nov 2003 20:45:58 +0000, prunesquallor wrote:

>> Moreover, it's completely wrong if the specification includes complexity
>> requirements. Then the program automatically derived from the specification
>> usually doesn't even exist.
> 
> If the specification includes a maximum complexity requirement less
> than its own complexity, such a program cannot exist.

I mean requirements about time and space complexity, i.e. efficiency.
In such a case you can't write a program which tries every possible
output and checks whether it conforms, yet compliant programs might
still exist.

I believe that for computational problems the user is often satisfied with
the proof that the result is correct and empirical evidence that it's
computed fast enough. This rules out the program which tries every output
possible, even though this requirement was not a formal part of the
specification. In this case it makes sense to try to prove the correctness
of a program written by hand, and the fact that it's theoretically
possible to generate another program with this property (but horribly
slow) is irrelevant.

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Ed Avis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <l1vfq2aoy9.fsf@budvar.future-i.net>
·············@comcast.net writes:

>Yes, but if the specification is meant to be *formally correct*,
>i.e. provable by an algorithm, then it must be at *least* as complex
>as the simplest program that implements the specification.  The
>specification cannot be simpler than simplest program that implements
>it.

Not if the specification includes time complexity:

    The output list is a permutation of the input list
    and the output list is sorted
    and the time taken is O(n * log n) where n = length input

Of course this whole discussion is restricted to specifications that
do have an implementation.

>On the other hand, if the simplest program that implements a
>specification is too difficult to be understood, then the
>specification itself must be too difficult to be understood and again
>a formal proof is pointless.

Only if you allow the trick of generating all possible outputs and
checking each one against the specification - which cannot be done in
the above example.  If time is not an issue then you are right of
course.

>If the specification includes a maximum complexity requirement less
>than its own complexity, such a program cannot exist.

You talked about this but I didn't really understand it, so I'll
assume that the rest of this article is not completely redundant :-(.

-- 
Ed Avis <··@membled.com>
From: Peter G. Hancock
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <oqptgbzpcz.fsf@premise.demon.co.uk>
>>>>> prunesquallor  wrote (on Sat, 01 Nov 2003 at 19:35):

    > If the specification is correct ...

I don't think that you can talk about a specification being correct.
You can talk about what it means, which may or may not match what 
the customer wants or needs. 

With a formal specification, you can draw consequences from it,
or attempt to prove expected behaviour.  Very often, nasty surprises
emerge at this point -- and people go away and revise the spec, which
they now understand a little better.  I'd guess that this is nowadays
the main benefit of attempting to write a precise specification of
vague and confused (and correctable!) requirements.

None of this is impeded by having a formal specification.  On the contrary,
because it is precise, you can explore its meaning. 

I may be wrong, but I think that correctness, to speak strictly, is an
attribute of (purported) proofs, not statements. Correctness = in
accordance with rules (of inference). 

Peter Hancock
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87znff39y4.fsf@thalassa.informatimago.com>
·······@spamcop.net (Peter G. Hancock) writes:

> >>>>> prunesquallor  wrote (on Sat, 01 Nov 2003 at 19:35):
> 
>     > If the specification is correct ...
> 
> I don't think that you can talk about a specification being correct.
> You can talk about what it means, which may or may not match what 
> the customer wants or needs. 
> 
> With a formal specifications, you can draw consequences from it,
> or attempt to prove expected behaviour.  Very often, nasty surprises
> emerge at this point -- and people go away and revise the spec which
> they now understand a little better.  I'd guess that this is nowadays
> the benefit of attempting to write a precise specification of vague
> and confused (and correctible!) requirements. 
> 
> None of this is impeded by having a formal specification.  On the contrary,
> because it is precise, you can explore its meaning. 
> 
> I may be wrong, but I think that correctness, to speak strictly, is an
> attribute of (purported) proofs, not statements. Correctness = in
> accordance with rules (of inference). 

Compare the specifications of sort given by Erann and by Gareth.  The
difference is that one is correct and the other is not.  Checking the
correctness of a specification, apart from internal consistency checks,
is a question of semantics.  You could formalize it the same way as
semantics are assigned to formal systems.  There is a semantic model
for a correct sort:
    
    s1 = ( 2 3 4 1 3 2 )
    s2 = ( 1 2 2 3 3 4 )

is an example for a correct formal specification, while:

    s1 = ( 2 3 4 1 3 2 )
    s2 = ( 4 4 4 4 4 4 )

is not.


-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa45040$1@news.unimelb.edu.au>
·············@comcast.net writes:

>The original question was whether the complexity of the proof is less
>than the complexity of the program.

No.  The original question was whether the complexity of the _specification_
is less than the complexity of the program.  This was the question raised
by the web page that Pascal Costanza quoted:

 | We are left with the obvious question: if the specification is
 | approximately of the same order of complexity as the program, how can
 | we know that the specification is correct?

>The Kolmogorov complexity of a program is informally `the shortest program
>that can generate the output'.

That seems odd.  The Kolmogorov complexity of a string is the shortest
program that can generate the string, and a program is a string, so the
Kolmogorov complexity of a program P should be the shortest program that
can generate P, not the shortest program that can generate the output of P.

However, looking at the paper "Large Limits to Software Estimation",
which the web page that Pascal quoted refers to, it does seem to use
that definition.  So I guess this inconsistent terminology is already
established.

To disambiguate, I will always refer to "the Kolmogorov complexity of a
program's text" or "the Kolmogorov complexity of a program's output",
never just to "the Kolmogorov complexity of a program".

Now I can reiterate my earlier statement (this time with less ambiguous
terminology): the Kolmogorov complexity of a program's output has
absolutely no relationship with the ease of proving that program to be
correct.

>By construction, any decidable algorithmic proof can be
>trivially turned into a decidable program that computes the answer, so
>the Kolmogorov complexity of any program is almost the same as the
>Kolmogorov complexity of any proof of that program.

That doesn't follow.  At best I think you've only shown an inequality
(the Kolmogorov complexity of a program's output is no greater than the
Kolmogorov complexity of a proof of that program), not an equivalence.

Different proofs will have different Kolmogorov complexities
(I can always increase the complexity by inserting a sequence of
random steps in the proof, which do not help in reaching the conclusion).
So the idea that the complexity of the program's output could be
the same as all of these different proofs is clearly wrong.

>The *difficulty* of determining that a particular algorithm is a different
>story, but it should be clear that the algorithmic complexity of the
>proof is on par with the algorithmic complexity of the program itself.

No.  The complexity of the least complex proof will depend on how the
program is expressed.  But the complexity of the program's output (which
you are referring to as the complexity of the program) will only depend
on the output, not on the way the program is expressed.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Paul F. Dietz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <75ednV_6cYiWxTmiRVn-vA@dls.net>
Fergus Henderson wrote:

> Now I can reiterate my earlier statement (this time with less ambiguous
> terminology): the Komolgorov complexity of a program's output has absolutely
> no relationship with the ease of proving that program to be correct.

In any given formal system, there is only a finite number of programs
for which the Kolmogorov complexity of the program can be proved.  There
are infinitely many programs that can be shown to satisfy some
specification.

	Paul
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa463e0$1@news.unimelb.edu.au>
"Paul F. Dietz" <·····@dls.net> writes:

>Fergus Henderson wrote:
>
>> Now I can reiterate my earlier statement (this time with less ambiguous
>> terminology): the Komolgorov complexity of a program's output has absolutely
>> no relationship with the ease of proving that program to be correct.
>
>In any given formal system, there is only a finite number of programs
>for which the Kolmogorov complexity of the program can be proved.

Please don't use ambiguous terminology.  Are you talking about the
Kolmogorov complexity of the program's text, or of the program's output?

If the former, fine, but I don't see what that has to do with my statement.
If the latter, then you are wrong.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <oevvmu8q.fsf@comcast.net>
Fergus Henderson <···@cs.mu.oz.au> writes:

> ·············@comcast.net writes:
>
>>The original question was whether the complexity of the proof is less
>>than the complexity of the program.
>
> No.  The original question was whether the complexity of the _specification_
> is less than the complexity of the program.  This was the question raised
> by the web page that Pascal Constanza quoted:
>
>  | We are left with the obvious question: if the specification is
>  | approximately of the same order of complexity as the program, how can
>  | we know that the specification is correct?

If you look earlier in Pascal Costanza's message you will see that he
is talking about the *minimal* program.

>>The Kolmogorov complexity of a program is informally `the shortest program
>>that can generate the output'.
>
> That seems odd.  The Kolmogorov complexity of a string is the shortest
> program that can generate the string, and a program is a string, so the
> Kolmogorov complexity of a program P should be the shortest program that
> can generate P, not the shortest program than can generate the output of P.
>
> However, looking at the paper "Large Limits to Software Estimation" which
> the web page that Pascal quoted, it does seem to use that definition.  So
> I guess this inconsistent terminology is already established.

That's ok, it can be shown that these two usages are related by an
O(log n) factor.

> To disambiguate, I will always refer to "the Kolmogorov complexity of a
> program's text" or "the Kolmogorov complexity of a program's output",
> never just to "the Kolmogorov complexity of a program".
>
> Now I can reiterate my earlier statement (this time with less ambiguous
> terminology):  the Komolgorov complexity of a program's output has absolutely
> no relationship with the ease of proving that program to be correct.

Actually it does.  

Define the conditional Kolmogorov complexity K(b|a) as the minimal
length of a program text that computes b when given input a.

It can be shown that K(<a,b>) = K(a) + K(b|a)
(modulo an O(log n) factor)

 where K(<a,b>) is the complexity of the encoded pair of input
 and output
   and
 K(a) is the complexity of the input.

So the Kolmogorov complexity of a program's output, K(b), is bounded
by the complexity of the program text that computes it.  If we make
the simplifying assumption of `null input', the conditional form
reduces to K(b|null) = K(b),

thus the complexity of a program with no input is >= the complexity
of its output, modulo an O(log n) factor.

>>By construction, any decidable algorithmic proof can be
>>trivially turned into a decidable program that computes the answer, so
>>the Kolmogorov complexity of any program is almost the same as the
>>Kolmogorov complexity of any proof of that program.
>
> That doesn't follow.  At best I think you've only shown an inequality
> (the Kolmogorov complexity of program's output is no greater than the
> Kolmogorov complexity of a proof of that program), not an equivalence.
>
> Different proofs will have different Komolgorov complexities
> (I can always increase the complexity by inserting a sequence of
> random steps in the proof, which do not help in reaching the conclusion).
> So the idea that the complexity of the program's output could be
> the same as all of these different proofs is clearly wrong.

Obviously.  But there is a *minimum* below which you cannot go.

>>The *difficulty* of determining that a particular algorithm is a different
>>story, but it should be clear that the algorithmic complexity of the
>>proof is on par with the algorithmic complexity of the program itself.
>
> No.  The complexity of the least complex proof will depend on how the
> program is expressed.  But the complexity of the program's output (which
> you are referring to as the complexity of the program) will only depend
> on the output, not on the way the program is expressed.

Yes, but the complexity of the program is at least that of its output.
From: Gareth McCaughan
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <874qxnaj3o.fsf@g.mccaughan.ntlworld.com>
··············@comcast.net" wrote:

[Erann:]
>> s2 = sort(s1) iff:
>>
>> len(s2) = len(s1) and
>> forall(x): if member(x,s2) then member(x,s1) and
>> forall integers x,y in (0,len(s1)-1): if x<y then s1[x]<=s2[y]
>>
>> That's much easier than actually sorting.

[Prunesquallor:]
> Nah, it's easier than sorting *efficiently*, but not any easier than
> sorting.  Here's my trivial sort routine:
> 
> (defun trivial-sort (input)
>   (while (not (satisfies-eranns-sort-specification input))
>     (permute input)))
> 
> My program is not much larger than your specification.

You've omitted the definition of "permute", which is
quite important and not entirely trivial to implement
so as to make the algorithm correct.

> This generalizes to any program that formally specifies a correct
> algorithm.

Only when it is possible to enumerate possible outputs,
and there is no interactive element, and the specification
includes no statements about how long the program is
allowed to take to run.

-- 
Gareth McCaughan
.sig under construc
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-0111032307380001@192.168.1.51>
In article <············@comcast.net>, ·············@comcast.net wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <············@newsreader2.netcologne.de>, Pascal Costanza
> > <········@web.de> wrote:
> >
> >> > Of course this is complete nonsense.  It is *much* easier, to, e.g.,
> >> > say what a correct sorting algorithm is than to actually implement
> >> > one.
> >> 
> >> Yes, but is it also easier to _formally specify_ what a correct sorting 
> >> algorithm is than to implement one?
> >
> > Yes, of course.
> >
> > s2 = sort(s1) iff:
> >
> > len(s2) = len(s1) and
> > forall(x): if member(x,s2) then member(x,s1) and
> > forall integers x,y in (0,len(s1)-1): if x<y then s1[x]<=s2[y]
> >
> > That's much easier than actually sorting.
> 
> Nah, it's easier than sorting *efficiently*, but not any easier than
> sorting.  Here's my trivial sort routine:
> 
> (defun trivial-sort (input)
>   (while (not (satisfies-eranns-sort-specification input))
>     (permute input)))
> 
> My program is not much larger than your specification.

Only because you didn't write "permute".

> This generalizes to any program that formally specifies a correct
> algorithm.

No, it doesn't.

E.
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <8ymynciv.fsf@comcast.net>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <············@comcast.net>, ·············@comcast.net wrote:
>> 
>> (defun trivial-sort (input)
>>   (while (not (satisfies-eranns-sort-specification input))
>>     (permute input)))
>> 
>> My program is not much larger than your specification.
>
> Only because you didn't write "permute".
>
>> This generalizes to any program that formally specifies a correct
>> algorithm.
>
> No, it doesn't.

Not this algorithm per se, but assuming that the output of the
specification is a printable Lisp object, this algorithm will work:

(defun universal-algorithm (spec input)
  (loop for i from 0
        for output = (to-object i)
        when (satisfies-spec spec input output)
          return output))

(defun to-object (i)
  (let ((*read-eval* nil))
    (with-standard-io-syntax
      (ignore-errors (read-from-string (to-string i))))))

(defun to-string (i)
  (coerce (to-chars i) 'string))

(defun to-chars (i)
  (unless (zerop i)
    (multiple-value-bind (quo rem) (floor i 96)
      (cons 
        (char "
 !\"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~"
         rem)
        (to-chars quo)))))
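The same generate-and-test idea, sketched in Python (hypothetical names of my choosing; `decode` plays the role of TO-OBJECT above, mapping the naturals onto candidate outputs):

```python
# Universal generate-and-test search: enumerate candidate outputs
# via a decoding of the natural numbers, and return the first
# candidate that the (decidable) specification accepts.  Loops
# forever if no candidate satisfies the spec.
from itertools import count

def universal_algorithm(satisfies_spec, inp, decode):
    for i in count():
        output = decode(i)
        if satisfies_spec(inp, output):
            return output
```

For example, with `decode = lambda i: i` and the spec "r*r >= n and (r-1)**2 < n", the search returns the ceiling of the square root of n.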
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-0211030724440001@192.168.1.51>
In article <············@comcast.net>, ·············@comcast.net wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <············@comcast.net>, ·············@comcast.net wrote:
> >> 
> >> (defun trivial-sort (input)
> >>   (while (not (satisfies-eranns-sort-specification input))
> >>     (permute input)))
> >> 
> >> My program is not much larger than your specification.
> >
> > Only because you didn't write "permute".
> >
> >> This generalizes to any program that formally specifies a correct
> >> algorithm.
> >
> > No, it doesn't.
> 
> Not this algorithm per se, but assuming that the output of
> the specification is a printable lisp object, then this algorithm
> will work:
> 
> (defun universal-algorithm (spec input)
>   (loop with i from 0
>         and output = (to-object i)
>         when (satisfies-spec spec input output)
>         return output))
> 
> (defun to-object (i)
>   (let ((*read-eval* nil))
>     (with-standard-io-syntax
>       (ignore-errors (read-from-string (to-string i))))))
> 
> (defun to-string (i)
>   (coerce (to-chars i) 'string))
> 
> (defun to-chars (i)
>   (unless (zerop i)
>     (multiple-value-bind (quo rem) (floor i 96)
>       (cons 
>         (char "
>  !\"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~"
>          rem)
>         (to-chars quo)))))

SATISFIES-SPEC is undefined.  And would you kindly show me the adaptation
that solves the collatz problem?  That encrypts text according to the
rijndael algorithm?  That filters spam?

E.
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <znfelp34.fsf@comcast.net>
···@jpl.nasa.gov (Erann Gat) writes:

>
> SATISFIES-SPEC is undefined.  And would you kindly show me the adaptation
> that solves the collatz problem?  That encrypts text according to the
> rijndael algorithm?  That filters spam?

Certainly.  Oh, wait, *you* have to supply the formal specification, I
just supply the algorithm that solves it.

Assuming you have a provable formal specification of the collatz
problem, the following paper discusses a technique that finds an
algorithm within a factor of 5 of the fastest algorithm that provably
implements the formal spec.

See:
@article{ hutter:01fast,
  author =       "M. Hutter",
  title =        "The Fastest and Shortest Algorithm for All Well-Defined Problems",
  journal =      "International Journal of Foundations of Computer Science",
  publisher =    "World Scientific",
  volume =       "13",
  number =       "3",
  pages =        "431--443",
  year =         "2002",
  keywords =     "Acceleration, Computational Complexity,
                  Algorithmic Information Theory, Kolmogorov Complexity, Blum's
                  Speed-up Theorem, Levin Search.",
  http =          "http://www.hutter1.de/ai/pfastprg.htm",
  url =          "citeseer.nj.nec.com/hutter02fastest.html",
  url =          "http://arxiv.org/abs/cs.CC/0206022",
  ftp =          "ftp://ftp.idsia.ch/pub/techrep/IDSIA-16-00.ps.gz" }
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-0211032002390001@192.168.1.51>
In article <············@comcast.net>, ·············@comcast.net wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> >
> > SATISFIES-SPEC is undefined.  And would you kindly show me the adaptation
> > that solves the collatz problem?  That encrypts text according to the
> > rijndael algorithm?  That filters spam?
> 
> Certainly.  Oh, wait, *you* have to supply the formal specification, I
> just supply the algorithm that solves it.

Fine, but you still have to write SATISFIES-SPEC first.

> Assuming you have a provable formal specification of the collatz
> problem,

I'm not sure what you mean by a "provable" specification.  The Collatz
problem is extremely simple.  See
http://mathworld.wolfram.com/CollatzProblem.html
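The Collatz map itself takes only a few lines; here is a sketch (names are mine) that also checks empirically, up to a step bound, that a trajectory reaches 1:

```python
# Collatz map: n -> n/2 for even n, 3n+1 for odd n.
def collatz_step(n):
    return n // 2 if n % 2 == 0 else 3 * n + 1

# Bounded empirical check that n's trajectory hits 1; the open
# question is whether this holds for every positive integer.
def reaches_one(n, max_steps=10_000):
    for _ in range(max_steps):
        if n == 1:
            return True
        n = collatz_step(n)
    return False
```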

> the following paper discusses a technique finds an algorithm
> that is within a factor of 5 of the fastest algorithm that provably
> implements the formal spec.

That is a truly remarkable result.  I have taken only a brief look at
this paper and it doesn't come across as immediately bogus, but I have
a hard time believing that it is true, because if it were, it seems to
me that it would provide the answer to, e.g., whether P=NP, whether
prime factoring really is hard, etc. etc.

E.
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa5e76b$1@news.unimelb.edu.au>
···@jpl.nasa.gov (Erann Gat) writes:

>·············@comcast.net wrote:
>
>> the following paper discusses a technique finds an algorithm
>> that is within a factor of 5 of the fastest algorithm that provably
>> implements the formal spec.
>
>That is a truly remarkable result.  I have taken only a brief look at this
>paper and it doesn't come acros as immediately bogus, but I have a hard
>time believing that it is true because if it were it seems to me that it
>would provide the answer to, e.g. whether P=NP,

It doesn't.  What it does do is this: given a formal system F, it produces
an algorithm A for solving NP-complete problems such that A is in P iff P=NP
is provable in F.  However, we don't know if A is actually in P or not,
and finding out whether A is in P is no easier than finding out whether
P=NP is provable in F.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-0311030825320001@192.168.1.51>
In article <··········@news.unimelb.edu.au>, Fergus Henderson
<···@cs.mu.oz.au> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> >·············@comcast.net wrote:
> >
> >> the following paper discusses a technique that finds an algorithm
> >> that is within a factor of 5 of the fastest algorithm that provably
> >> implements the formal spec.
> >
> >That is a truly remarkable result.  I have taken only a brief look at this
> >paper and it doesn't come across as immediately bogus, but I have a hard
> >time believing that it is true because if it were it seems to me that it
> >would provide the answer to, e.g. whether P=NP,
> 
> It doesn't.  What it does do is this: given a formal system F, it produces
> an algorithm A for solving NP-complete problems such that A is in P iff P=NP
> is provable in F.  However, we don't know if A is actually in P or not,
> and finding out whether A is in P is no easier than finding out whether
> P=NP is provable in F.

That may well be, but according to what the abstract says (and what you
have said) I don't actually have to provide the proof that P=NP, the
algorithm will find that for me.  So I'll give it the Peano axioms and ask
it to solve the traveling salesman problem.  If it does so in polynomial
time (which can be determined empirically) then P=NP.  If not, then we
have proof that P=NP is not provable in PA.  Either way it's a major
breakthrough in mathematics, and the fact that it hasn't made the papers
indicates that it doesn't really do what the short description says it
does.

E.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1k76h1jaj.fsf@tti5.uchicago.edu>
···@jpl.nasa.gov (Erann Gat) writes:

>  So I'll give it the Peano axioms and ask
> it to solve the traveling salesman problem.  If it does so in polynomial
> time (which can be determined empirically) then P=NP.
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

How do you do that?
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-0311031014530001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@tti5.uchicago.edu>, Matthias Blume
<····@my.address.elsewhere> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> >  So I'll give it the Peano axioms and ask
> > it to solve the traveling salesman problem.  If it does so in polynomial
> > time (which can be determined empirically) then P=NP.
>         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> 
> How do you do that?

Er, isn't it obvious?  You run the program on problems of various sizes,
measure the running time, and see if it grows exponentially or not (i.e.
if the log of the running time grows linearly with the problem size).
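Concretely, such an empirical check might be sketched like this (hypothetical Python: step counts stand in for measured running times, and a crude curve fit is of course not a proof of anything):

```python
import math

def growth_class(step_counts, sizes):
    """Crude empirical classifier: fit log(cost) linearly against n
    (exponential model) and against log(n) (polynomial model), and
    report whichever model fits better."""
    def resid(xs, ys):
        # Residual of the least-squares line through (xs, ys).
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        b = sxy / sxx
        a = my - b * mx
        return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

    logc = [math.log(c) for c in step_counts]
    exp_resid = resid(sizes, logc)                          # log c ~ n
    poly_resid = resid([math.log(n) for n in sizes], logc)  # log c ~ log n
    return "polynomial" if poly_resid < exp_resid else "exponential"

sizes = list(range(4, 20))
print(growth_class([n ** 3 for n in sizes], sizes))  # polynomial
print(growth_class([2 ** n for n in sizes], sizes))  # exponential
```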

E.
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa6946b$1@news.unimelb.edu.au>
···@jpl.nasa.gov (Erann Gat) writes:

>Fergus Henderson <···@cs.mu.oz.au> wrote:
>
>> ···@jpl.nasa.gov (Erann Gat) writes:
>> 
>> >·············@comcast.net wrote:
>> >
>> >> the following paper discusses a technique that finds an algorithm
>> >> that is within a factor of 5 of the fastest algorithm that provably
>> >> implements the formal spec.
>> >
>> >That is a truly remarkable result.  I have taken only a brief look at this
>> >paper and it doesn't come across as immediately bogus, but I have a hard
>> >time believing that it is true because if it were it seems to me that it
>> >would provide the answer to, e.g. whether P=NP,
>> 
>> It doesn't.  What it does do is this: given a formal system F, it produces
>> an algorithm A for solving NP-complete problems such that A is in P iff P=NP
>> is provable in F.  However, we don't know if A is actually in P or not,
>> and finding out whether A is in P is no easier than finding out whether
>> P=NP is provable in F.
>
>That may well be, but according to what the abstract says (and what you
>have said) I don't actually have to provide the proof that P=NP, the
>algorithm will find that for me.  So I'll give it the Peano axioms and ask
>it to solve the traveling salesman problem.  If it does so in polynomial
>time (which can be determined empirically)
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

No amount of black-box testing can determine whether an algorithm runs
in polynomial time.

Say you run exhaustive tests for all input up to a certain size, and up
to that point the performance curve looks exponential.  You still can't
be sure that it isn't about to suddenly level off.
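A contrived cost function makes the point (the crossover at 50 is arbitrary):

```python
def cost(n):
    """Hypothetical step count of some black-box algorithm."""
    if n < 50:
        return 2 ** n   # every feasible measurement looks exponential...
    return n ** 2       # ...but asymptotically the cost is O(n^2)

# Testing up to n = 49 is fully consistent with exponential growth,
# yet the algorithm is, in fact, polynomial-time.
```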

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-0311031128420001@k-137-79-50-101.jpl.nasa.gov>
In article <··········@news.unimelb.edu.au>, Fergus Henderson
<···@cs.mu.oz.au> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> >Fergus Henderson <···@cs.mu.oz.au> wrote:
> >
> >> ···@jpl.nasa.gov (Erann Gat) writes:
> >> 
> >> >·············@comcast.net wrote:
> >> >
> >> >> the following paper discusses a technique that finds an algorithm
> >> >> that is within a factor of 5 of the fastest algorithm that provably
> >> >> implements the formal spec.
> >> >
> >> >That is a truly remarkable result.  I have taken only a brief look at this
> >> >paper and it doesn't come across as immediately bogus, but I have a hard
> >> >time believing that it is true because if it were it seems to me that it
> >> >would provide the answer to, e.g. whether P=NP,
> >> 
> >> It doesn't.  What it does do is this: given a formal system F, it produces
> >> an algorithm A for solving NP-complete problems such that A is in P iff P=NP
> >> is provable in F.  However, we don't know if A is actually in P or not,
> >> and finding out whether A is in P is no easier than finding out whether
> >> P=NP is provable in F.
> >
> >That may well be, but according to what the abstract says (and what you
> >have said) I don't actually have to provide the proof that P=NP, the
> >algorithm will find that for me.  So I'll give it the Peano axioms and ask
> >it to solve the traveling salesman problem.  If it does so in polynomial
> >time (which can be determined empirically)
>        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> 
> No amount of black-box testing can determine whether an algorithm runs
> in polynomial time.
> 
> Say you run exhaustive tests for all input up to a certain size, and up
> to that point the performance curve looks exponential.  You still can't
> be sure that it isn't about to suddenly level off.

Ah, a good point, but a moot one.  It turns out that the critical loophole
in Hutter's result is that he assumes that the high-order time bound of
the algorithm is computable in a time that is a function of program size
alone.  Hutter's method produces not only an algorithm but also its
high-order time complexity (and a proof of correctness thrown in for good
measure).  The cost of this is that he will miss programs that are faster
and provably correct, but whose high-order time complexity is not cheaply
computable.

So the title and the abstract are disingenuous.  The title should be, "The
fastest and shortest program with easily determinable high-order running
times..."

That is one very thick hedge.

E.
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ptg9zdzk.fsf@comcast.net>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <············@comcast.net>, ·············@comcast.net wrote:
>
>> ···@jpl.nasa.gov (Erann Gat) writes:
>> 
>> >
>> > SATISFIES-SPEC is undefined.  And would you kindly show me the adaptation
>> > that solves the collatz problem?  That encrypts text according to the
>> > rijndael algorithm?  That filters spam?
>> 
>> Certainly.  Oh, wait, *you* have to supply the formal specification, I
>> just supply the algorithm that solves it.
>
> Fine, but you still have to write SATISFIES-SPEC first.

If the specification is formal, then it can algorithmically decide
if a putative input/output pair is indeed correct.  So I'll just
generate putative outputs and funcall your spec until I get one
that matches.
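In Python terms the idea is just this (a sketch: it terminates only when the spec really is a decidable predicate and a solution exists, which is exactly the point at issue below):

```python
from itertools import count

def solve(satisfies_spec, candidates):
    """Generate-and-test: enumerate putative outputs and return the
    first one the spec accepts.  `satisfies_spec` plays the role of
    SATISFIES-SPEC and must be a total, computable predicate."""
    for candidate in candidates:
        if satisfies_spec(candidate):
            return candidate

# Toy spec: "x is a natural-number square root of 144".
print(solve(lambda x: x * x == 144, count()))  # 12
```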

>> Assuming you have a provable formal specification of the collatz
>> problem,
>
> I'm not sure what you mean by a "provable" specification.  The Collatz
> problem is extremely simple.  See
> http://mathworld.wolfram.com/CollatzProblem.html

Yep, I'm familiar with it.

The problem with a `provable' specification is that it has shifted all
the correctness burden onto the specification rather than the program.
So the program I write really doesn't have to do anything clever but
systematically look for answers and ask the specification if they are
correct.  If the specification is provably correct, then the program
is.

>> the following paper discusses a technique that finds an algorithm
>> that is within a factor of 5 of the fastest algorithm that provably
>> implements the formal spec.
>
> That is a truly remarkable result.  I have taken only a brief look at this
> paper and it doesn't come across as immediately bogus, but I have a hard
> time believing that it is true because if it were it seems to me that it
> would provide the answer to, e.g. whether P=NP, whether prime factoring
> really is hard, etc. etc.

It *could*, but you'd have to supply the proof in the specification!

There are a couple of `loopholes' in the paper.  It requires a formal
specification, and there's the usual asymptotic and constant factor
stuff, but the big one is that it ignores those programs that are
correct, but for which a proof that they implement the specification
doesn't exist within the proof framework.  (If it could find those,
then you'd be able to prove statements like `this statement is true
but not provable'.)
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-0311030840030001@192.168.1.51>
In article <············@comcast.net>, ·············@comcast.net wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <············@comcast.net>, ·············@comcast.net wrote:
> >
> >> ···@jpl.nasa.gov (Erann Gat) writes:
> >> 
> >> >
> >> > SATISFIES-SPEC is undefined.  And would you kindly show me the adaptation
> >> > that solves the collatz problem?  That encrypts text according to the
> >> > rijndael algorithm?  That filters spam?
> >> 
> >> Certainly.  Oh, wait, *you* have to supply the formal specification, I
> >> just supply the algorithm that solves it.
> >
> > Fine, but you still have to write SATISFIES-SPEC first.
> 
> If the specification is formal, then it can algorithmically decide
> if a putative input/output pair is indeed correct.  So I'll just
> generate putative outputs and funcall your spec until I get one
> that matches.

Ah, but now you've added an additional constraint, namely, that my spec be
funcallable.  Not all formal specifications can be rendered as functions. 
For example, "This program terminates" can be formally rendered, but not
as a program.

So I say again, you must write SATISFIES-SPEC.  The reason I insist on
this is that you will find if you actually sit down to try to do it that
it is not nearly as simple as you seem to imagine it to be.

> >> Assuming you have a provable formal specification of the collatz
> >> problem,
> >
> > I'm not sure what you mean by a "provable" specification.  The Collatz
> > problem is extremely simple.  See
> > http://mathworld.wolfram.com/CollatzProblem.html
> 
> Yep, I'm familiar with it.
> 
> The problem with a `provable' specification is that it has shifted all
> the correctness burden onto the specification rather than the program.
> So the program I write really doesn't have to do anything clever but
> systematically look for answers and ask the specification if they are
> correct.  If the specification is provably correct, then the program
> is.

But that only works if the specification is computable.  Not all formal
specifications are computable.  I can render any specification
uncomputable by adding the stipulation that the program must halt (or that
it must not halt).


> >> the following paper discusses a technique that finds an algorithm
> >> that is within a factor of 5 of the fastest algorithm that provably
> >> implements the formal spec.
> >
> > That is a truly remarkable result.  I have taken only a brief look at this
> > paper and it doesn't come across as immediately bogus, but I have a hard
> > time believing that it is true because if it were it seems to me that it
> > would provide the answer to, e.g. whether P=NP, whether prime factoring
> > really is hard, etc. etc.
> 
> It *could*, but you'd have to supply the proof in the specification!

That's not what the abstract says.  It only says that you have to provide
a formal system in which the correctness of the fastest algorithm can be
proven, not the proof itself.  The paper's algorithm purports to do that
for you.  What's more, it purports to do so in time that is within a
factor of 5 of the fastest algorithm for solving the problem itself plus a
(presumably very large) constant!  If this were true, it would be a
constructive proof that the question of whether P=NP is tractable, though
to actually get the answer you'd have to run the program, which could take
a long time.  Still, a constructive proof that P=NP is decidable would be
big, big news.

E.
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <u15lxozs.fsf@comcast.net>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <············@comcast.net>, ·············@comcast.net wrote:
>
>> ···@jpl.nasa.gov (Erann Gat) writes:
>> 
>> > In article <············@comcast.net>, ·············@comcast.net wrote:
>> >
>> >> ···@jpl.nasa.gov (Erann Gat) writes:
>> >> 
>> >> >
>> >> > SATISFIES-SPEC is undefined.  And would you kindly show me the adaptation
>> >> > that solves the collatz problem?  That encrypts text according to the
>> >> > rijndael algorithm?  That filters spam?
>> >> 
>> >> Certainly.  Oh, wait, *you* have to supply the formal specification, I
>> >> just supply the algorithm that solves it.
>> >
>> > Fine, but you still have to write SATISFIES-SPEC first.
>> 
>> If the specification is formal, then it can algorithmically decide
>> if a putative input/output pair is indeed correct.  So I'll just
>> generate putative outputs and funcall your spec until I get one
>> that matches.
>
> Ah, but now you've added an additional constraint, namely, that my spec be
> funcallable.  Not all formal specifications can be rendered as functions. 
> For example, "This program terminates" can be formally rendered, but not
> as a program.

I think we're using different definitions of `formal'.

A `formal' spec would consist of a series of algorithmically
verifiable assertions.  A termination statement in your spec would not
be algorithmically verifiable.

>> > That is a truly remarkable result.  I have taken only a brief look at this
>> > paper and it doesn't come across as immediately bogus, but I have a hard
>> > time believing that it is true because if it were it seems to me that it
>> > would provide the answer to, e.g. whether P=NP, whether prime factoring
>> > really is hard, etc. etc.
>> 
>> It *could*, but you'd have to supply the proof in the specification!
>
> That's not what the abstract says.  It only says that you have to provide
> a formal system in which the correctness of the fastest algorithm can be
> proven, not the proof itself.  The paper's algorithm purports to do that
> for you.  What's more, it purports to do so in time that is within a
> factor of 5 of the fastest algorithm for solving the problem itself plus a
> (presumably very large) constant!  If this were true, it would be a
> constructive proof that the question of whether P=NP is tractable, though
> to actually get the answer you'd have to run the program, which could take
> a long time.  Still, a constructive proof that P=NP is decidable would be
> big, big news.
>
> E.

Here's the abstract:

    An algorithm M is described that solves any well-defined problem p
    as quickly as the fastest algorithm computing a solution to p,
    save for a factor of 5 and low-order additive terms. M optimally
    distributes resources between the execution of provably correct
    p-solving programs and an enumeration of all proofs, including
    relevant proofs of program correctness and of time bounds on
    program runtimes. M avoids Blum's speed-up theorem by ignoring
    programs without correctness proof. M has broader applicability
    and can be faster than Levin's universal search, the fastest
    method for inverting functions save for a large multiplicative
    constant. An extension of Kolmogorov complexity and two novel
    natural measures of function complexity are used to show that the
    most efficient program computing some function f is also among the
    shortest programs provably computing f.

So it requires that p is a `well defined problem', *and* it only
searches those algorithms that have correctness proofs.

If your problem is undecidable within your formal framework, then it
isn't well-defined, and this algorithm won't help.
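Stripped of the proof enumeration, the resource-sharing part of M can be caricatured as dovetailing (toy code; this is Levin-style interleaving, not Hutter's actual construction):

```python
def dovetail(accepts, programs):
    """Interleave execution of candidate programs, one step each per
    round, returning the first output the checker accepts.  The proof
    search that M uses to prune non-verified programs is elided."""
    gens = [p() for p in programs]
    while gens:
        alive = []
        for g in gens:
            try:
                out = next(g)     # one step of this program
            except StopIteration:
                continue          # program finished with no output
            if out is not None and accepts(out):
                return out
            alive.append(g)
        gens = alive

def slow():                       # needs 1000 steps before answering
    for _ in range(1000):
        yield None
    yield 42

def fast():                       # answers immediately
    yield 42

print(dovetail(lambda x: x == 42, [slow, fast]))  # 42
```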
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-0311031149480001@k-137-79-50-101.jpl.nasa.gov>
In article <············@comcast.net>, ·············@comcast.net wrote:

> I think we're using different definitions of `formal'.

Indeed.

> A `formal' spec would consist of a series of algorithmically
> verifiable assertions.

I see you are employing Humpty Dumpty's theory of semantics.

>  A termination statement in your spec would not
> be algorithmically verifiable.

Indeed.  Neither would a host of other useful assertions that can be
formally rendered, at least on the usual meaning of the word "formal", not
(obviously) in your Wonderland version.

[xnip]

> So it requires that p is a `well defined problem', *and* it only
> searches those algorithms that have correctness proofs.

And that have easily computable time bounds.  That's the hidden (and fatal
IMO) flaw.

> If your problem is undecidable within your formal framework, then it
> isn't well-defined, and this algorithm won't help.

Right.  But if the fastest algorithm for your problem happens not to have
easily computable time bounds then this algorithm will not find it either,
even if it provably solves the problem.

E.
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <k76hxkhp.fsf@comcast.net>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <············@comcast.net>, ·············@comcast.net wrote:
>
>> I think we're using different definitions of `formal'.
>
> Indeed.
>
>> A `formal' spec would consist of a series of algorithmically
>> verifiable assertions.
>
> I see you are employing Humpty Dumpty's theory of semantics.
>
>>  A termination statement in your spec would not
>> be algorithmically verifiable.
>
> Indeed.  Neither would a host of other useful assertions that can be
> formally rendered, at least on the usual meaning of the word "formal", not
> (obviously) in your Wonderland version.

Certainly if a specification is `formal' one could `decide' whether a
particular input/output pair satisfied it.  If you put undecidable
statements in your specification, how could you possibly determine if
the spec was violated?

I'd be happy to use your definition of `formal', though.
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-0311031301230001@k-137-79-50-101.jpl.nasa.gov>
In article <············@comcast.net>, ·············@comcast.net wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <············@comcast.net>, ·············@comcast.net wrote:
> >
> >> I think we're using different definitions of `formal'.
> >
> > Indeed.
> >
> >> A `formal' spec would consist of a series of algorithmically
> >> verifiable assertions.
> >
> > I see you are employing Humpty Dumpty's theory of semantics.
> >
> >>  A termination statement in your spec would not
> >> be algorithmically verifiable.
> >
> > Indeed.  Neither would a host of other useful assertions that can be
> > formally rendered, at least on the usual meaning of the word "formal", not
> > (obviously) in your Wonderland version.
> 
> Certainly if a specification is `formal' one could `decide' whether a
> particular input/output pair satisfied it.

No.  Please read, e.g.:

Gödel, K.  On Formally Undecidable Propositions of Principia Mathematica
and Related Systems.

> If you put undecidable
> statements in your specification, how could you possibly determine if
> the spec was violated?

That is your problem, not mine.  You are the one making the claim.  It is
up to you to support it.

E.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1wuah5go6.fsf@tti5.uchicago.edu>
·············@comcast.net writes:

> Certainly if a specification is `formal' one could `decide' whether a
> particular input/output pair satisfied it.

Not if, for example, input and output are programs.

>  If you put undecidable
> statements in your specification, how could you possibly determine
> if the spec was violated?

Testing is the prototypical example: It shows the presence of bugs
(violations), not their absence (correctness).

Matthias
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bo6nq4$bia$1@newsreader2.netcologne.de>
Matthias Blume wrote:

> Testing is the prototypical example: It shows the presence of bugs
> (violations), not their absence (correctness).

Sidenote: I have heard this statement mentioned several times by several 
people, but it is complete and utter nonsense. Of course, test suites 
show the absence of bugs for all the tests that they execute. 
Furthermore, they do show the absence of bugs in the _running program_, 
not just some arbitrary statically determinable properties that might or 
might not be of interest.

Only successfully executing test suites provide a guarantee that a 
program actually _behaves_ in a useful way, at least for the cases 
covered in those test suites.

In principle, no statically checked property is immune against flaws in 
either the thinking process of the programmer, the specification against 
which a property is checked, or the tool that does the actual checking.

Tools that check static properties are only correct insofar as they have 
been tested themselves.

You can't escape the fact that at some stage you just have to _believe_ 
that your set of tools is correct. You might have a higher or a lower 
degree of assurance that your code is correct, but there are effectively 
no 100% guarantees.


Pascal
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1sml558b3.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> Matthias Blume wrote:
> 
> > Testing is the prototypical example: It shows the presence of bugs
> > (violations), not their absence (correctness).
> 
> Sidenote: I have heard this statement mentioned several times by
> several people, but it is complete and utter nonsense.

I wouldn't go that far.  It is pretty obviously true that the
statement is exactly right: if a test fails then you know there is a
bug.  If it does not fail, you only know that this time the program
worked.  *No* guarantee for next time or different input.
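A two-line illustration (hypothetical buggy function):

```python
def add(a, b):
    return a * b          # bug: multiplies instead of adding

# The test passes, so "the program worked this time"...
assert add(2, 2) == 4
# ...but that is no guarantee for different input:
assert add(2, 3) == 6     # should have been 5; the suite never noticed
```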

> Of course, test suites show the absence of bugs for all the tests
> that they execute.

Well, of course.  And not really: they only do that for those tests
/under the particular circumstances/ that happened to be in effect at
test time.

> Furthermore, they do show the absence of bugs in the _running
> program_, not just some arbitrary statically determinable properties
> that might or might not be of interest.

Again, they only show that the program did not encounter a bug *this
time*.  With formal methods you get to choose which property you are
going to verify.  It is not the method's fault if you pick something
that is not relevant.

> Only successfully executing test suites provide a guarantee that a
> program actually _behaves_ in a useful way, at least for the cases
> covered in those test suites.

First, that's a pretty weak form of "guarantee": Only the cases
actually tested (and modulo what I wrote above regarding testing
conditions).  Second: Executing a program is a form of formal proof if
you look at it from the right angle.  It just so happens that it is
not proving a very general theorem.

> In principle, no statically checked property is immune against flaws
> in either the thinking process of the programmer, the specification
> against which a property is checked, or the tool that does the actual
> checking.

Indeed.  And, in principle, no testing framework is immune against
flaws in either the thinking process of the tester, the test cases
which are being checked, or the test framework that does the actual
testing.

> Tools that check static properties are only correct insofar as they have
> been tested themselves.

Sure.  But they could be bootstrapped out of a much smaller "trusted
computing base".

> You can't escape the fact that at some stage you just have to
> _believe_ that your set of tools is correct. You might have a higher
> or a lower degree of assurance that your code is correct, but there
> are effectively no 100% guarantees.

True.  The point is that the less code you have to trust blindly, the
better.  That is pretty much the name of the game.

Matthias
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bo6pdq$e66$1@newsreader2.netcologne.de>
Matthias Blume wrote:

> Pascal Costanza <········@web.de> writes:
> 
>>Matthias Blume wrote:
>>
>>>Testing is the prototypical example: It shows the presence of bugs
>>>(violations), not their absence (correctness).
>>
>>Sidenote: I have heard this statement mentioned several times by
>>several people, but it is complete and utter nonsense.
> 
> I wouldn't go that far.  It is pretty obviously true that the
> statement is exactly right: if a test fails then you know there is a
> bug.  If it does not fail, you only know that this time the program
> worked.  *No* guarantee for next time or different input.
> 
>>Of course, test suites show the absence of bugs for all the tests
>>that they execute.
> 
> Well, of course.  And not really: they only do that for those tests
> /under the particular circumstances/ that happened to be in effect at
> test time.

...just repeat the test runs as often as you want, and make sure that 
you change the circumstances under which you run the tests in 
non-trivial ways.



Pascal
From: Ed Avis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <l1r80o71rc.fsf@budvar.future-i.net>
Of course you also have to test your test suite... it could be that
the program does have a bug and the test isn't picking it up because
it is comparing expected output to expected output, instead of
expected output to actual output.  Or maybe the code which prints 'ok'
or 'not ok' is buggy so it always prints 'ok'.  And so on.  So you
also have to test your test suite.

I think it's pretty clear that no test suite will guarantee lack of
bugs, except for the particular inputs tested, and maybe not even then
if you allow that the test suite could be buggy.  Nobody is saying
that testing isn't a useful tool in practice, just as static checking
can be useful in practice.

-- 
Ed Avis <··@membled.com>
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1oevs5c51.fsf@tti5.uchicago.edu>
Ed Avis <··@membled.com> writes:

> Of course you also have to test your test suite... it could be that
> the program does have a bug and the test isn't picking it up because
> it is comparing expected output to expected output, instead of
> expected output to actual output.  Or maybe the code which prints 'ok'
> or 'not ok' is buggy so it always prints 'ok'.  And so on.  So you
> also have to test your test suite.
> 
> I think it's pretty clear that no test suite will guarantee lack of
> bugs, except for the particular inputs tested, and maybe not even then
> if you allow that the test suite could be buggy.  Nobody is saying
> that testing isn't a useful tool in practice, just as static checking
> can be useful in practice.

Exactly.  One thing that those who dismiss static checking in favor of
testing always neglect to mention is that testing involves several
static components.  As Ed said, it is necessary to convince oneself
that the test suite itself is correct.  Beyond that there is a strong
(and certainly not completely unjustified) belief that if I test a
program N times for some reasonably large N, then I can extrapolate
success to cases that were not tested with fairly high confidence.
But in reality this is a theorem which in itself needs justification.
The justification /cannot/ be obtained by more testing (or we have a
classic case of circular reasoning).

Matthias
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <wuagwlkm.fsf@comcast.net>
Matthias Blume <····@my.address.elsewhere> writes:

> ·············@comcast.net writes:
>
>> Certainly if a specification is `formal' one could `decide' whether a
>> particular input/output pair satisfied it.
>
> Not if, for example, input and output are programs.

That's true.  But again, how do you know if your output is in fact
correct?

>>  If you put undecidable
>> statements in your specification, how could you possibly determine
>> if the spec was violated?
>
> Testing is the prototypical example: It shows the presence of bugs
> (violations), not their absence (correctness).

Yes, but undecidable statements in the specification are equivalent
to writing tests that only fail if they halt.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m2u15kxmur.fsf@hanabi-air.shimizu.blume>
·············@comcast.net writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > ·············@comcast.net writes:
> >
> >> Certainly if a specification is `formal' one could `decide' whether a
> >> particular input/output pair satisfied it.
> >
> > Not if, for example, input and output are programs.
> 
> That's true.  But again, how do you know if your output is in fact
> correct?

By proving the program correct that produces that output.

Matthias

 
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <1xsoy0dg.fsf@comcast.net>
·············@comcast.net writes:

> ···@jpl.nasa.gov (Erann Gat) writes:
>
>> In article <············@comcast.net>, ·············@comcast.net wrote:
>>
>>> I think we're using different definitions of `formal'.
>>
>> Indeed.
>>
>>> A `formal' spec would consist of a series of algorithmically
>>> verifiable assertions.
>>
>> I see you are employing Humpty Dumpty's theory of semantics.
>>
>>>  A termination statement in your spec would not
>>> be algorithmically verifiable.
>>
>> Indeed.  Neither would a host of other useful assertions that can be
>> formally rendered, at least on the usual meaning of the word "formal", not
>> (obviously) in your Wonderland version.

Since this entire discussion is predicated on using a computer to
prove things about a program, it hardly seems unreasonable to assume
that we are working in that particular domain!

I'll grant you that you could come up with any number of non-testable
assertions, print them on engraved invitations and call them `formal',
and I'd have no easy means to write a program to check them.

Except... since these are your specifications, and you are well aware
that there is no mechanical means to verify them, perhaps you would like
to act as an `oracle'?  (y-or-n-p "Did this pass?")
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-0411030922060001@192.168.1.51>
In article <············@comcast.net>, ·············@comcast.net wrote:

> ·············@comcast.net writes:
> 
> > ···@jpl.nasa.gov (Erann Gat) writes:
> >
> >> In article <············@comcast.net>, ·············@comcast.net wrote:
> >>
> >>> I think we're using different definitions of `formal'.
> >>
> >> Indeed.
> >>
> >>> A `formal' spec would consist of a series of algorithmically
> >>> verifiable assertions.
> >>
> >> I see you are employing Humpty Dumpty's theory of semantics.
> >>
> >>>  A termination statement in your spec would not
> >>> be algorithmically verifiable.
> >>
> >> Indeed.  Neither would a host of other useful assertions that can be
> >> formally rendered, at least on the usual meaning of the word "formal", not
> >> (obviously) in your Wonderland version.
> 
> Since this entire discussion is predicated on using a computer to
> prove things about a program, it hardly seems unreasonable to assume
> that we are working in that particular domain!

True, but it is a big leap from there to suppose that specifications must
be funcallable.

> I'll grant you that you could come up with any number of non-testable
> assertions, print them on engraved invitations and call them `formal',
> and I'd have no easy means to write a program to check them.

I have a hard time believing you are truly as ignorant as you are
pretending to be here.

> Except... since these are your specifications, and you are well aware
> that there is no mechanical means to verify them, perhaps you would like
> to act as an `oracle'?  (y-or-n-p "Did this pass?")

That is in fact an example of a formal specification that cannot be
proved.  Here are some others:

1.  "The program shall control the attitude of the spacecraft to the
commanded attitude with an error of no more than one arcsecond."  (This
specification is not complete, but I hope you can see why your proposed
universal solution would fail in this case.)

2.  (forall x (equalp (f x) (g x)))

   where f is the program being constructed, and g is a supplied black-box
   executable.
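[Example 2 is checkable by sampling but not provable by it, which circles back to the testing point at the top of the thread. A minimal Python sketch of testing a candidate f against a black-box g; all names here are illustrative, not from the thread.]

```python
import random

def agrees_on_samples(f, g, trials=1000):
    """Random testing against a black-box reference g: this can expose
    a violation of (forall x (equalp (f x) (g x))), but no number of
    passing samples proves the forall."""
    for _ in range(trials):
        x = random.randint(-10**6, 10**6)
        if f(x) != g(x):
            return False, x          # found a counterexample
    return True, None                # no violation found (not a proof!)

ok, witness = agrees_on_samples(lambda x: 2 * x, lambda x: x + x)
print(ok)
```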

E.
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <7k2guduq.fsf@ccs.neu.edu>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <············@comcast.net>, ·············@comcast.net wrote:
>> 
>> Since this entire discussion is predicated on using a computer to
>> prove things about a program, it hardly seems unreasonable to assume
>> that we are working in that particular domain!
>
> True, but it is a big leap from there to suppose that specifications must
> be funcallable.

In the absence of a *particular* specification language, there is
presumably some method by which one could take a specification and a
program and either prove or disprove that the program meets the
specification.  Are we starting with this assumption?  

If the spec is `formal', then presumably it is precise enough that one
could mechanically verify a proof within that spec.  (i.e. given a
putative proof and a specification, show that the putative proof is
indeed derivable from the formal spec.)  Does this match your
definition of `formal'?

If you agree with these, do you agree that it is possible to write a
program that uses the appropriate logical rules in the specification
to determine if the program is valid?

Would you agree that such a program would result in one of three
outcomes:  the program determines that the proof is correct, the
program determines that the proof is incorrect, the program is unable
to determine if the proof is correct or not?

So in theory one could write a program VERIFY that takes two
arguments, a formal specification and a putative proof, and returns
TRUE if the proof is correct, and FALSE if the proof is incorrect, and
*perhaps* returns an `I-dont-know' for *some* cases where it cannot
tell (or it enters an infinite loop).

Now if you curry the VERIFY program --- give it the specification
argument only --- do you not now have a predicate that you can apply
to a putative proof?

If you can write the VERIFY program in any language, you can certainly
write it in lisp, so wouldn't 
(lambda (specification) 
  (lambda (proof) (verify specification proof)))
be a curried version?  And once the value of specification was bound,
could I not FUNCALL the result?
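[The curried VERIFY above transcribes directly into Python. A toy sketch, with a deliberately trivial stand-in for the real proof checker:]

```python
def verify(specification, proof):
    """Stand-in VERIFY: in this toy, a 'proof' (a list of lines) checks
    out iff its last line is the specification's goal.  Purely
    illustrative; a real checker would walk inference rules."""
    return bool(proof) and proof[-1] == specification

def curried_verify(specification):
    # The Lisp (lambda (specification) (lambda (proof) ...)) above,
    # transcribed: binding the spec yields a predicate over proofs.
    return lambda proof: verify(specification, proof)

check = curried_verify("P -> P")
print(check(["assume P", "P -> P"]))
print(check(["assume Q"]))
```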

>> I'll grant you that you could come up with any number of non-testable
>> assertions, print them on engraved invitations and call them `formal',
>> and I'd have no easy means to write a program to check them.
>
> I have a hard time believing you are truly as ignorant as you are
> pretending to be here.

I'm certainly becoming convinced that I am.  I'm beginning to believe that
the words `formal' and `proof' and `static' and `type' mean the opposite
of what I always thought they meant.

>> Except... since these are your specifications, and you are well aware
>> that there is no mechanical means to verify them, perhaps you would like
>> to act as an `oracle'?  (y-or-n-p "Did this pass?")
>
> That is in fact an example of a formal specification that cannot be
> proved.  Here are some others:
>
> 1.  "The program shall control the attitude of the spacecraft to the
> commanded attitude with an error of no more than one arcsecond."  (This
> specification is not complete, but I hope you can see why your proposed
> universal solution would fail in this case.)
>
> 2.  (forall x (equalp (f x) (g x)))
>
>    where f is the program being constructed, and g is a supplied black-box
>    executable.

Before I start down the road of arguing these, what do you consider a
reasonable basis for a formal specification?
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-0411031258110001@k-137-79-50-101.jpl.nasa.gov>
In article <············@ccs.neu.edu>, Joe Marshall <···@ccs.neu.edu> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <············@comcast.net>, ·············@comcast.net wrote:
> >> 
> >> Since this entire discussion is predicated on using a computer to
> >> prove things about a program, it hardly seems unreasonable to assume
> >> that we are working in that particular domain!
> >
> > True, but it is a big leap from there to suppose that specifications must
> > be funcallable.
> 
> In the absence of a *particular* specification language, there is
> presumably some method by which one could take a specification and a
> program and either prove or disprove that the program meets the
> specification.  Are we starting with this assumption?  

Of course not.  Sheesh, have none of you formalists read Goedel?

> If the spec is `formal', then presumably it is precise enough that one
> could mechanically verify a proof within that spec.  (i.e. given a
> putative proof and a specification, show that the putative proof is
> indeed derivable from the formal spec.)  Does this match your
> definition of `formal'?

Yes.  Given a putative proof one can verify it, but *generating* such a
proof is an entirely different matter.

> If you agree with these

But I do not agree with these.  That's the whole point.  Obviously if
generating a proof were a simple mechanical process there would be nothing
to discuss.

[snip]

> If you can write the VERIFY program in any language, you can certainly
> write it in lisp, so wouldn't 
> (lambda (specification) 
>   (lambda (proof) (verify specification proof)))
> be a curried version?  And once the value of specification was bound,
> could I not FUNCALL the result?

Certainly, but 1) the burden is still on you then to write VERIFY, and 2)
the argument to your curried function is a proof, not a program, so this
whole line of reasoning is a red herring (at least with respect to the
argument that prunesquallor was making).

BTW, prunesquallor's original "universal algorithm" did not even attempt
to verify the program, but merely attempted to verify the I/O behavior of
the program on a particular input.


> Before I start down the road of arguing these, what do you consider a
> reasonable basis for a formal specification?

You tell me.  *You* are the one who is making the claim that formal
specifications are useful for a particular purpose, so the burden is on
*you* to specify what *you* mean when you make that claim.  If you specify
that the formal specification must be funcallable, that's fine, but my
immediate response is that your "language" for making formal
specifications is overly constrained, that there are things that I want
to be able to say, that I am able to say in certain formal specification
languages, that I am unable to say in your formal specification language
(because your "language" requires that my spec be computable).

Alternatively, if the specification language that you specify is powerful
enough that I can say everything I want to say then I can show (following
Goedel/Turing) that there are correct programs that have no proofs of
correctness.  I can even display programs that are correct with respect to
*useful* formal specifications that nonetheless have no proofs of
correctness.  None of this ought to be news to anyone.

It's really very simple.  "Formal" is not the same thing as "computable". 
Computable is a proper subset of formal.  That's really all there is to
it.  (Well, there are some other issues around the bend, like the fact
that we live in a finite universe, and so "computable in practice" is a
proper subset of "computable in principle", but we haven't yet gotten to
that part of the conversation.)

E.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1k76f6glg.fsf@tti5.uchicago.edu>
Joe Marshall <···@ccs.neu.edu> writes:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <············@comcast.net>, ·············@comcast.net wrote:
> >> 
> >> Since this entire discussion is predicated on using a computer to
> >> prove things about a program, it hardly seems unreasonable to assume
> >> that we are working in that particular domain!
> >
> > True, but it is a big leap from there to suppose that specifications must
> > be funcallable.
> 
> In the absence of a *particular* specification language, there is
> presumably some method by which one could take a specification and a
> program and either prove or disprove that the program meets the
> specification.  Are we starting with this assumption?  
> 
> If the spec is `formal', then presumably it is precise enough that one
> could mechanically verify a proof within that spec.  (i.e. given a
> putative proof and a specification, show that the putative proof is
> indeed derivable from the formal spec.)  Does this match your
> definition of `formal'?

Let's give a concrete example:

Let C be a function mapping programs to programs.  For simplicity,
let's assume that input and output have equal syntax and semantics.
Let us write [[p]] for the "meaning" of a program p under our chosen
semantics.

Now let us specify what we consider a correct C:

   C is correct iff  \forall p . [[p]] = [[C(p)]]

Clearly, in most common calculi the question "[[p]] = [[q]]?" is not
decidable for arbitrary p and q (by Rice's theorem).  So the idea of
simply enumerating all q and trying to find one such that [[p]] =
[[q]] will not work.

Nevertheless, for specific C it is trivial to show that [[p]] = [[C(p)]].
For example, take C to be the identity function.
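[A toy rendering of this setup in Python, treating a "program" as an expression in x and its meaning as the function it denotes; an illustrative assumption, not a real compiler pipeline.]

```python
# Toy semantics: a 'program' is a Python expression in x, and its
# meaning [[p]] is the function x -> eval(p).  The identity compiler
# then satisfies [[p]] = [[C(p)]] trivially; sampling confirms
# agreement, though sampling alone could never prove the forall.
def meaning(p):
    return lambda x: eval(p, {"x": x})

def C(p):
    return p          # the identity 'compiler'

programs = ["x + 1", "x * x", "2 ** x"]
assert all(meaning(p)(x) == meaning(C(p))(x)
           for p in programs for x in range(10))
print("[[p]] == [[C(p)]] on all sampled inputs")
```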

Now make input and output languages different from each other, and the
above problem becomes that of compiler correctness.  In the same line
of argument that I used before, I strongly believe that any correct
compiler (in this sense) written by humans is provably correct even
though the set { q | [[q]]_{out} = [[p]]_{in} } is not recursive.
(The proof is just somewhat more involved than the one needed for the
identity function above. :-)

Matthias
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-0411031303000001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@tti5.uchicago.edu>, Matthias Blume
<····@my.address.elsewhere> wrote:

>   In the same line
> of argument that I used before, I strongly believe that any correct
> compiler (in this sense) written by humans is provably correct even
> though the set { q | [[q]]_{out} = [[p]]_{in} } is not recursive.
> (The proof is just somewhat more involved than the one needed for the
> identity function above. :-)

That's fine, but do you acknowledge that given our current state of
knowledge this is an article of faith, not something that you can actually
demonstrate to be true?

E.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m17k2f6b2l.fsf@tti5.uchicago.edu>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@tti5.uchicago.edu>, Matthias Blume
> <····@my.address.elsewhere> wrote:
> 
> >   In the same line
> > of argument that I used before, I strongly believe that any correct
> > compiler (in this sense) written by humans is provably correct even
> > though the set { q | [[q]]_{out} = [[p]]_{in} } is not recursive.
> > (The proof is just somewhat more involved than the one needed for the
> > identity function above. :-)
> 
> That's fine, but do you acknowledge that given our current state of
> knowledge this is an article of faith, not something that you can actually
> demonstrate to be true?

Yes.  I never claimed otherwise.  (If someone mistook what I said,
then I apologize for not being clear enough.)

Matthias
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <8ymvzxs4.fsf@ccs.neu.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> Let's give a concrete example:

Yay!!!!

> Let C be a function mapping programs to programs.  For simplicity,
> let's assume that input and output have equal syntax and semantics.
> Let us write [[p]] for the "meaning" of a program p under our chosen
> semantics.
>
> Now let us specify what we consider a correct C:
>
>    C is correct iff  \forall p . [[p]] = [[C(p)]]

Ok, I'm not sure I understand how this is `formal'.  The problem is
that you are defining correctness as a function of `meaning', but
computers don't understand `meaning', they understand syntax.

In other words, the [[]] operator takes a string which we can operate
on and projects it into the domain of `meanings' which we cannot
represent in the computer.

Let me stop there and clarify that before getting to the next part.

> Clearly, in most common calculi the question "[[p]] = [[q]]?" is not
> decidable for arbitrary p and q (by Rice's theorem).  So the idea of
> simply enumerating all q and trying to find one such that [[p]] =
> [[q]] will not work.
>
> Nevertheless, for specific C it is trival to show that [[p]] = [[C(p)]].
> For example, take C to be the identity function.
>
> Now make input and output languages different from each other, and the
> above problem becomes that of compiler correctness.  In the same line
> of argument that I used before, I strongly believe that any correct
> compiler (in this sense) written by humans is provably correct even
> though the set { q | [[q]]_{out} = [[p]]_{in} } is not recursive.
> (The proof is just somewhat more involved than the one needed for the
> identity function above. :-)
>
> Matthias
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1fzh36fa1.fsf@tti5.uchicago.edu>
Joe Marshall <···@ccs.neu.edu> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > Let's give a concrete example:
> 
> Yay!!!!
> 
> > Let C be a function mapping programs to programs.  For simplicity,
> > let's assume that input and output have equal syntax and semantics.
> > Let us write [[p]] for the "meaning" of a program p under our chosen
> > semantics.
> >
> > Now let us specify what we consider a correct C:
> >
> >    C is correct iff  \forall p . [[p]] = [[C(p)]]
> 
> Ok, I'm not sure I understand how this is `formal'.  The problem is
> that you are defining correctness as a function of `meaning', but
> computers don't understand `meaning', they understand syntax.

Look at any textbook on programming language semantics.

> In other words, the [[]] operator takes a string which we can operate
> on and projects it into the domain of `meanings' which we cannot
> represent in the computer.

Of course we can. See above.

Matthias
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <4qxjztna.fsf@ccs.neu.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>
>> Matthias Blume <····@my.address.elsewhere> writes:
>> 
>> > Let's give a concrete example:
>> 
>> Yay!!!!
>> 
>> > Let C be a function mapping programs to programs.  For simplicity,
>> > let's assume that input and output have equal syntax and semantics.
>> > Let us write [[p]] for the "meaning" of a program p under our chosen
>> > semantics.
>> >
>> > Now let us specify what we consider a correct C:
>> >
>> >    C is correct iff  \forall p . [[p]] = [[C(p)]]
>> 
>> Ok, I'm not sure I understand how this is `formal'.  The problem is
>> that you are defining correctness as a function of `meaning', but
>> computers don't understand `meaning', they understand syntax.
>
> Look at any textbook on programming language semantics.

I have.  I'm familiar with semantics.  Nonetheless, equations mapping
syntax to semantic meaning aren't representable (at least as-is.  Do
you wish to establish a mapping?) because the right-hand sides of the
equations are mathematical functions.

>> In other words, the [[]] operator takes a string which we can operate
>> on and projects it into the domain of `meanings' which we cannot
>> represent in the computer.
>
> Of course we can. See above.

Now *there's* a convincing argument.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1brrr6b7d.fsf@tti5.uchicago.edu>
Joe Marshall <···@ccs.neu.edu> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > Joe Marshall <···@ccs.neu.edu> writes:
> >
> >> Matthias Blume <····@my.address.elsewhere> writes:
> >> 
> >> > Let's give a concrete example:
> >> 
> >> Yay!!!!
> >> 
> >> > Let C be a function mapping programs to programs.  For simplicity,
> >> > let's assume that input and output have equal syntax and semantics.
> >> > Let us write [[p]] for the "meaning" of a program p under our chosen
> >> > semantics.
> >> >
> >> > Now let us specify what we consider a correct C:
> >> >
> >> >    C is correct iff  \forall p . [[p]] = [[C(p)]]
> >> 
> >> Ok, I'm not sure I understand how this is `formal'.  The problem is
> >> that you are defining correctness as a function of `meaning', but
> >> computers don't understand `meaning', they understand syntax.
> >
> > Look at any textbook on programming language semantics.
> 
> I have.  I'm familiar with semantics.  Nonetheless, equations mapping
> syntax to semantic meaning aren't representable (at least as-is.  Do
> you wish to establish a mapping?) because the right-hand sides of the
> equations are mathematical functions.

I don't understand where your problem is.  You certainly can define
[[.]] formally, which makes the above definition of "correct" formal.
It is not computable, though -- which is my point.  You cannot write a
program that proves or disproves [[p]] = [[q]] for arbitrary p and q.
Neither can you write a program that proves or disproves an arbitrary
C correct.  But there are certainly some functions C for which we can
prove correctness.

My original claim, narrowed to this particular problem, reads: For any
correct C that was written by a human with the purpose of writing a
correct C, there will be a proof of C's correctness.

The other, more immediate point is that you cannot implement a correct
C by coding up a search that finds, for a given p, a q such that [[p]]
= [[q]].

> >> In other words, the [[]] operator takes a string which we can operate
> >> on and projects it into the domain of `meanings' which we cannot
> >> represent in the computer.
> >
> > Of course we can. See above.
> 
> Now *there's* a convincing argument.

Actually, the computer is the ultimate device for representing the
meaning of programs.  (But in the end that is beside the point.  You
don't have to "represent" something in order to be able to prove
theorems about it.)

Matthias
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa88a2d$1@news.unimelb.edu.au>
Joe Marshall <···@ccs.neu.edu> writes:

>If the spec is `formal', then presumably it is precise enough that one
>could mechanically verify a proof within that spec.  (i.e. given a
>putative proof and a specification, show that the putative proof is
>indeed derivable from the formal spec.)  Does this match your
>definition of `formal'?

No.  I can certainly imagine specifications and programs which are
written in a formal system, but for which the proof that the program
meets the specification cannot be expressed in the same formal system.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Ray Dillinger
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3FA92D5C.64368E74@sonic.net>
Fergus Henderson wrote:
> 
> Joe Marshall <···@ccs.neu.edu> writes:
> 
> >If the spec is `formal', then presumably it is precise enough that one
> >could mechanically verify a proof within that spec.  (i.e. given a
> >putative proof and a specification, show that the putative proof is
> >indeed derivable from the formal spec.)  Does this match your
> >definition of `formal'?
> 
> No.  I can certainly imagine specifications and programs which are
> written in a formal system, but for which the proof that the program
> meets the specification cannot be expressed in the same formal system.
> 

Isn't this exactly what Kurt Godel proved?  For any formally specified 
system beyond a minimal complexity, there will be some things that are 
true but undecidable? 

				Bear
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m2y8uwxmww.fsf@hanabi-air.shimizu.blume>
·············@comcast.net writes:

> ·············@comcast.net writes:
> 
> > ···@jpl.nasa.gov (Erann Gat) writes:
> >
> >> In article <············@comcast.net>, ·············@comcast.net wrote:
> >>
> >>> I think we're using different definitions of `formal'.
> >>
> >> Indeed.
> >>
> >>> A `formal' spec would consist of a series of algorithmically
> >>> verifiable assertions.
> >>
> >> I see you are employing Humpty Dumpty's theory of semantics.
> >>
> >>>  A termination statement in your spec would not
> >>> be algorithmically verifiable.
> >>
> >> Indeed.  Neither would a host of other useful assertions that can be
> >> formally rendered, at least on the usual meaning of the word "formal", not
> >> (obviously) in your Wonderland version.
> 
> Since this entire discussion is predicated on using a computer to
> prove things about a program, it hardly seems unreasonable to assume
> that we are working in that particular domain!

Nonsense.  For example, there are many statements of the form A->B that
can be proved quite easily without being able to prove either A or B.

> I'll grant you that you could come up with any number of non-testable
> assertions, print them on engraved invitations and call them `formal',
> and I'd have no easy means to write a program to check them.

I already gave you some: a compiler's output should be semantically
equivalent to its input (under their respective semantics).  Even
though I cannot prove program equivalence for arbitrary pairs of
programs, I can do so for input-output pairs of a correct compiler.
 
> Except... since these are your specifications, and you are well aware
> that there is no mechanical means to verify them, perhaps you would like
> to act as an `oracle'?  (y-or-n-p "Did this pass?")

You don't need such an oracle.  Whoever brought up the idea of just
enumerating all outputs and testing for the one that fits the
specification needs it.  (I forgot if that was you.)  The rest of us
don't.

Matthias
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m11xsp6vdx.fsf@tti5.uchicago.edu>
·············@comcast.net writes:

> A `formal' spec would consist of a series of algorithmically
> verifiable assertions.  A termination statement in your spec would not
> be algorithmically verifiable.

I don't think that we want to limit ourselves to algorithmically
verifiable assertions.  For example, a compiler is usually considered
correct if its output (under the semantics of the output language)
computes the same function as the input (under the semantics of the
input language).  Comparing functions for equality is undecidable in
general, but it is perfectly fine as a formal spec.

Matthias
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <sml4wl5t.fsf@comcast.net>
Matthias Blume <····@my.address.elsewhere> writes:

> ·············@comcast.net writes:
>
>> A `formal' spec would consist of a series of algorithmically
>> verifiable assertions.  A termination statement in your spec would not
>> be algorithmically verifiable.
>
> I don't think that we want to limit ourselves to algorithmically
> verifiable assertions.  

I think *you* do.  You were the one doubting the existence of useful
programs that were correct but not verifiable.

> For example, a compiler is usually considered
> correct if its output (under the semantics of the output language)
> computes the same function as the input (under the semantics of the
> input language).  Comparing functions for equality is undecidable in
> general, but it is perfectly fine as a formal spec.

It would, of course, mean that one could not mechanically verify that
the compiler is correct.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m2ptg8xmsk.fsf@hanabi-air.shimizu.blume>
·············@comcast.net writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > ·············@comcast.net writes:
> >
> >> A `formal' spec would consist of a series of algorithmically
> >> verifiable assertions.  A termination statement in your spec would not
> >> be algorithmically verifiable.
> >
> > I don't think that we want to limit ourselves to algorithmically
> > verifiable assertions.  
> 
> I think *you* do.  You were the one doubting the existance of useful
> programs that were correct but not verifiable.

And what has one to do with the other?
 
> > For example, a compiler is usually considered
> > correct if its output (under the semantics of the output language)
> > computes the same function as the input (under the semantics of the
> > input language).  Comparing functions for equality is undecidable in
> > general, but it is perfectly fine as a formal spec.
> 
> It would, of course, mean that one could not mechanically verify that
> the compiler is correct.

Nonsense.
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87ptg9z4ra.fsf@thalassa.informatimago.com>
···@jpl.nasa.gov (Erann Gat) writes:
> 
> Ah, but now you've added an additional constraint, namely, that my spec be
> funcallable.  Not all formal specifications can be rendered as functions. 
> For example, "This program terminates" can be formally rendered, but not
> as a program.
> [...]
> But that only works if the specification is computable.  Not all formal
> specifications are computable.  I can render any specification
> uncomputable by adding the stipulation that the program must halt (or that
> it must not halt).

Pragmatically, we don't need a "this program terminates" predicate.  I
could write any number of programs that would provably terminate in 20
billion years.  Too bad the estimated death of the universe is in 15
billion years.  What we need is a "this program terminates before x
seconds".  And this can perfectly be programmed.
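[A sketch of such a bounded-termination predicate in Python, substituting a step budget for wall-clock seconds; the small-step machine interface is hypothetical.]

```python
def terminates_within(step, state, budget):
    """Computable bounded termination: run a small-step machine for at
    most `budget` steps.  `step(state)` returns (halted, next_state).
    Unrestricted 'does it terminate?' is undecidable; 'does it
    terminate within a stated budget?' is decided by just running it."""
    for _ in range(budget):
        halted, state = step(state)
        if halted:
            return True
    return False

# A countdown machine that halts once its counter reaches zero.
countdown = lambda n: (n <= 0, n - 1)
print(terminates_within(countdown, 5, 10))
print(terminates_within(countdown, 50, 10))
```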

-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bo69kt$i6d$1@news.oberberg.net>
Pascal Bourguignon wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
>>Ah, but now you've added an additional constraint, namely, that my spec be
>>funcallable.  Not all formal specifications can be rendered as functions. 
>>For example, "This program terminates" can be formally rendered, but not
>>as a program.
>>[...]
>>But that only works if the specification is computable.  Not all formal
>>specifications are computable.  I can render any specification
>>uncomputable by adding the stipulation that the program must halt (or that
>>it must not halt).
> 
> 
> Pragmatically, we don't need a "this program terminates" predicate.  I
> could write any number of programs that would provably terminate in 20
> billion years.  Too  bad the estimated death of the  universe is in 15
> billion years.   What we need is  a "this program  terminates before x
> seconds".  And this can perfectly be programmed.

You also need a predicate like "this program terminates before x*input 
size seconds" - which cannot be programmed.

Regards,
Jo
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87d6c9yzh2.fsf@thalassa.informatimago.com>
Joachim Durchholz <·················@web.de> writes:
> You also need a predicate like "this program terminates before x*input
> size seconds" - which cannot be programmed.

Either I don't understand what you mean, or such a predicate can be
programmed: there is always a function to measure the input size, and
there is an algorithm to multiply two numbers.

Note that the input size cannot be greater than the "bandwidth" of the
processor, so you could really ignore it.  Here we're not interested
in the complexity of the algorithm, only in its termination.  You can
specify a maximum pragmatic input size, though.

-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa5d426$1@news.unimelb.edu.au>
·············@comcast.net writes:

>Assuming you have a provable formal specification of the collatz
>problem, the following paper discusses a technique that finds an algorithm
>that is within a factor of 5 of the fastest algorithm that provably
>implements the formal spec.

Your statement is incorrect.  You have mis-paraphrased the conclusions
of the paper that you cite, which contained some crucial phrases
("asymptotically", "large additive constant") which limit the practical
applicability of its results.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <877k2k58od.fsf@thalassa.informatimago.com>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <············@newsreader2.netcologne.de>, Pascal Costanza
> <········@web.de> wrote:
> 
> > > Of course this is complete nonsense.  It is *much* easier, to, e.g.,
> > > say what a correct sorting algorithm is than to actually implement
> > > one.
> > 
> > Yes, but is it also easier to _formally specify_ what a correct sorting 
> > algorithm is than to implement one?
> 
> Yes, of course.
> 
> s2 = sort(s1) iff:
> 
> len(s2) = len(s1) and
> forall(x): if member(x,s2) then member(x,s1) and
> forall integers x,y in (0,len(s1)-1): if x<y then s1[x]<=s2[y]
> 
> That's much easier than actually sorting.
> 
> Matrix inversion makes an even better example.  Prime factoring is even
> better than that.
> 
> E.

(defun sort-for-gat (list)
    (make-sequence 'list (length list) :initial-element (car list)))

perfectly implements your specification.

Is it really easier to specify?


-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Gareth McCaughan
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87znff947e.fsf@g.mccaughan.ntlworld.com>
Pascal Bourguignon <····@thalassa.informatimago.com> writes:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <············@newsreader2.netcologne.de>, Pascal Costanza
> > <········@web.de> wrote:
> > 
> > > > Of course this is complete nonsense.  It is *much* easier, to, e.g.,
> > > > say what a correct sorting algorithm is than to actually implement
> > > > one.
> > > 
> > > Yes, but is it also easier to _formally specify_ what a correct sorting 
> > > algorithm is than to implement one?
> > 
> > Yes, of course.
> > 
> > s2 = sort(s1) iff:
> > 
> > len(s2) = len(s1) and
> > forall(x): if member(x,s2) then member(x,s1) and
> > forall integers x,y in (0,len(s1)-1): if x<y then s1[x]<=s2[y]
> > 
> > That's much easier than actually sorting.
> > 
> > Matrix inversion makes an even better example.  Prime factoring is even
> > better than that.
> > 
> > E.
> 
> (defun sort-for-gat (list)
>     (make-sequence 'list (length list) :initial-element (car list)))
> 
> perfectly implements your specification.
> 
> Is it really easier to specify?

It's certainly very easy to specify, though (as you've
demonstrated) it's possible to get it wrong.

s2 results from correctly sorting s1 iff the following
two conditions hold.

  1 For all x (you can restrict to "for all x in s1" or
    "for all x in s2" if you like, to make this more
    checkable), the number of instances of x in s1 and
    the number of instances of x in s2 are equal.

  2 For all pairs (i,j) of valid indices in s2,
    i<j => s2[i]<=s2[j].
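Transcribed into Python, the two conditions might look like this (the
function name and the use of `collections.Counter` are my choices, not
part of the spec):

```python
from collections import Counter

def is_sorted_version(s1, s2):
    # Condition 1: every x occurs equally often in s1 and s2.
    # Counter compares the whole multisets, so no restriction
    # of the quantifier is needed.
    # Condition 2: for all valid indices i < j, s2[i] <= s2[j].
    return Counter(s1) == Counter(s2) and all(
        s2[i] <= s2[j]
        for i in range(len(s2)) for j in range(i + 1, len(s2)))
```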

I'm not sure about "*much* easier", because there are
sorting algorithms that are very easy to specify. For
instance:

    Repeat as many times as the list has elements:
      For each pair of consecutive elements, in order:
        If the first is > the second, swap them.
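That English description translates almost line for line into Python
(a sketch; the name `bubble_sort` is mine):

```python
def bubble_sort(lst):
    s = list(lst)                      # work on a copy
    # Repeat as many times as the list has elements:
    for _ in range(len(s)):
        # For each pair of consecutive elements, in order:
        for i in range(len(s) - 1):
            # If the first is > the second, swap them.
            if s[i] > s[i + 1]:
                s[i], s[i + 1] = s[i + 1], s[i]
    return s
```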

-- 
Gareth McCaughan
.sig under construc
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87ad7f4q3r.fsf@thalassa.informatimago.com>
Gareth McCaughan <·····@g.local> writes:

> Pascal Bourguignon <····@thalassa.informatimago.com> writes:
> 
> > ···@jpl.nasa.gov (Erann Gat) writes:
> > 
> > > In article <············@newsreader2.netcologne.de>, Pascal Costanza
> > > <········@web.de> wrote:
> > > 
> > > > > Of course this is complete nonsense.  It is *much* easier, to, e.g.,
> > > > > say what a correct sorting algorithm is than to actually implement
> > > > > one.
> > > > 
> > > > Yes, but is it also easier to _formally specify_ what a correct sorting 
> > > > algorithm is than to implement one?
> > > 
> > > Yes, of course.
> > > 
> > > s2 = sort(s1) iff:
> > > 
> > > len(s2) = len(s1) and
> > > forall(x): if member(x,s2) then member(x,s1) and
> > > forall integers x,y in (0,len(s1)-1): if x<y then s1[x]<=s2[y]
> > > 
> > > That's much easier than actually sorting.
> > > 
> > > Matrix inversion makes an even better example.  Prime factoring is even
> > > better than that.
> > > 
> > > E.
> > 
> > (defun sort-for-gat (list)
> >     (make-sequence 'list (length list) :initial-element (car list)))
> > 
> > perfectly implements your specification.

I was actually wrong on  this point because the specification contains
two bugs and I addressed only one. 
I read:       if x<y then s2[x]<=s2[y]
instead of:   if x<y then s1[x]<=s2[y]

A correct implementation would be:

  (defun sort-for-gat (list)
     (make-sequence 'list (length list) :initial-element (apply (function max) list)))

> > Is it really easier to specify?
> 
> It's certainly very easy to specify, though (as you've
> demonstrated) it's possible to get it wrong.
> 
> s2 results from correctly sorting s1 iff the following
> two conditions hold.
> 
>   1 For all x (you can restrict to "for all x in s1" or
>     "for all x in s2" if you like, to make this more
>     checkable), the number of instances of x in s1 and
>     the number of instances of x in s2 are equal.
> 
>   2 For all pairs (i,j) of valid indices in s2,
>     i<j => s2[i]<=s2[j].

Your specification  is correct, but  using either one of  the proposed
restrictions would render it incorrect:

   for all x in s1 the number of instances of x in s1 and
          the number of instances of x in s2 are equal.
    
   s1 = ( 1 1 2 3 3 )
   s2 = ( 1 1 2 3 3 4 4 )


    for all x in s2  the number of instances of x in s1 and
          the number of instances of x in s2 are equal.

    s1 = ( 1 1 2 3 3 )
    s2 = ( 1 1 2 )


Bags are tricky.
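Both counterexamples can be checked mechanically; a Python sketch of
the restricted condition (names mine):

```python
def counts_agree_for(xs, s1, s2):
    # "for all x in xs, the number of instances of x in s1
    #  equals the number of instances of x in s2"
    return all(s1.count(x) == s2.count(x) for x in xs)

# Restricting the quantifier to s1 wrongly accepts extra elements in s2:
assert counts_agree_for([1, 1, 2, 3, 3],
                        [1, 1, 2, 3, 3], [1, 1, 2, 3, 3, 4, 4])
# Restricting it to s2 wrongly accepts missing elements:
assert counts_agree_for([1, 1, 2],
                        [1, 1, 2, 3, 3], [1, 1, 2])
```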


> I'm not sure about "*much* easier", because there are
> sorting algorithms that are very easy to specify. For
> instance:
> 
>     Repeat as many times as the list has elements:
>       For each pair of consecutive elements, in order:
>         If the first is > the second, swap them.

An implementation is formal. 

I have the feeling that a _formal_ specification may often be lengthier
than a concise implementation.  That's probably why we make do with
informal specifications most of the time.


Rewriting your specification and implementation in the same formalism,
we can see that, at least in this case, the implementation is shorter
than the specification, and the specification does not even address
what happens when the input occupies more than half the available
memory, how much time and stack space is available to realize it, or
what should happen when the elements are not comparable by <=.  Could
anybody show us a _real_ (non-self-referential) formal specification
that is not bigger than one of its trivial implementations?



(setq s (quote
(defspecification sort (s1) --> s2
  (and (listp s1)
       (listp s2)
       (forall x (= (count x s1) (count x s2)))
       (forall (i j)
               (imply
                (and (integerp i) (<= 0 i) (< i (length s2))
                     (integerp j) (<= 0 j) (< j (length s2))
                     (< i j))
                (<= (nth i s2) (nth j s2))))))
))

(setq i (quote
(defun sort (s1)
  (let ((s2 (copy-seq s1)))
    (dotimes (n (length s2))
      (dotimes (i (1- (length s2)))
        (when (> (nth i s2) (nth (1+ i) s2))
          (psetf (nth i s2)      (nth (1+ i) s2)
                 (nth (1+ i) s2) (nth i s2)))))))
         ))

(defun count-cons (tree)
  (if (atom tree)
    0
    (+ 1 (count-cons (car tree)) (count-cons (cdr tree)))))

(defun count-atom (tree)
  (cond
   ((null tree) 0)
   ((atom tree) 1)
   (t (+ (count-atom (car tree)) (count-atom (cdr tree))))))

(show (count-atom s) (count-atom i) (count-cons s)  (count-cons i))
==> (52 40 76 64)


-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Gareth McCaughan
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87vfq38lbn.fsf@g.mccaughan.ntlworld.com>
Pascal Bourguignon wrote:

[I said:]
> >   1 For all x (you can restrict to "for all x in s1" or
> >     "for all x in s2" if you like, to make this more
> >     checkable), the number of instances of x in s1 and
> >     the number of instances of x in s2 are equal.
> > 
> >   2 For all pairs (i,j) of valid indices in s2,
> >     i<j => s2[i]<=s2[j].
> 
> Your specification  is correct, but  using either one of  the proposed
> restrictions would render it incorrect:

D'oh! Of course they would. Make it: "for all x in the
union of s1 and s2". ... The psychology of bugs interests
me. In case it interests anyone else, here's what
happened. Originally I had three conditions, one of
which was "same number of elements in s1 and s2".
Then I noticed that that condition is unnecessary
when you have "for all x, same number of x's in
s1 and s2", and that (*with* the extra condition)
the restrictions I mentioned are innocuous. So,
being a moron, I made *both* changes. :-)

> Rewriting your specification and  implementation in the same formalism
> we can see  that at least in this case,  the implementation is shorter
> than the specification, and it does not even address what happens when
> the input occupies more than  half the available memory, how much time
> and stack  space is available to  realize it, what  should happen when
> the elements are  not comparable by <=, etc.  Could  anybody show us a
> _real_ (non-self-referential) formal specification that  is not bigger
> than one of its trivial implementations?

I'm not sure what "trivial" means here, but: sure, lots,
especially if restrictions on running time are allowed.

(define-specification partial-factorize
  ;; parameters
  ((n integer) -> (p integer) (q integer))
  ;; preconditions
  ((exist ((a integer) (b integer))
     (and (> a 1) (> b 1) (= n (* a b)))))
  ;; postconditions
  ((>= p q 2)
   (= n (* p q)))
  ;; upper bound on running time, up to constant factor
  (exp (sqrt (* (log n) (log (log n))))))

If you can implement a function that meets those specs
in anything like as small a space, I'd be very interested
to see it.

-- 
Gareth McCaughan
.sig under construc
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <xIQob.60522$ao4.160699@attbi_s51>
"Pascal Costanza" <········@web.de> wrote in message ·················@newsreader2.netcologne.de...
>
> Yes, but is it also easier to _formally specify_ what a correct sorting
> algorithm is than to implement one?

Sure.


Marshall
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa3b700$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>Here is something I have found by accident: 
>http://www.idiom.com/~zilla/Work/Softestim/softestim.html
>
> From that page:
>
>"In addition to the claims in the paper, there is one additional claim 
>that can easily be made:
>
>* Program correctness cannot be determined
>
>This claim is contrary to the view that formal specifications and 
>program proofs can prove program correctness.
>
>The argument here is that the specification must be formal in order to 
>participate in the proof. Since the specification fully specifies the 
>behavior of the program, [...]

That assumption is wrong.  A specification should not necessarily be
required to fully specify the behaviour of the program.  For example,
a specification for a compiler might say that it should report an error
when given syntactically invalid input, but it need not specify exactly
what form the error message should take.

Even if it were true, what follows would still be wrong:

>the complexity (C) of the specification must be 
>at least approximately as large as the complexity of a minimal program 
>for the task at hand. [...]
>Using this, a small program can be written that, given 
>some input, exhaustively queries the specification until the 
>corresponding output bit pattern (if any) is determined; this bit 
>pattern is then output, thereby simulating the desired program. 
>Formally, C(specification) + C(query program) >= C(program).

Here the word "minimal" was suddenly dropped...

>We are left 
>with the obvious question: if the specification is approximately of the 
>same order of complexity as the program, how can we know that the
>specification is correct?"

... and now replaced with "the".

A _complete_ specification would be of the same order of complexity as
a _minimal_ program.  But specifications need not be complete,
and real programs (which must meet reasonable performance requirements)
are much much much more complex than the minimal program.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <7GQob.58509$275.144160@attbi_s53>
"Pascal Costanza" <········@web.de> wrote in message ················@newsreader2.netcologne.de...
> We are left
> with the obvious question: if the specification is approximately of the
> same order of complexity as the program, how can we know that the
> specification is correct?"

Specifications are correct by definition. If that weren't the case,
what would correctness mean? That God favors the program?
That it possesses some transcendent quality?

The definition of correctness is "conforms to the specification."


Marshall
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <873cd858by.fsf@thalassa.informatimago.com>
"Marshall Spight" <·······@dnai.com> writes:

> "Pascal Costanza" <········@web.de> wrote in message ················@newsreader2.netcologne.de...
> > We are left
> > with the obvious question: if the specification is approximately of the
> > same order of complexity as the program, how can we know that the
> > specification is correct?"
> 
> Specifications are correct by definition. If that weren't the case,
> what would correctness mean? That God favors the program?
> That it possesses some transcendant quality?
> 
> The definition of correctness is "conforms to the specification."

There  is  the correctness  of  the program,  and  then  there is  the
correctness of  the specification.  Have  a look at  the specification
Erann gave for sort:

    s2 = sort(s1) iff:

        len(s2) = len(s1) and
        forall(x): if member(x,s2) then member(x,s1) and
        forall integers x,y in (0,len(s1)-1): if x<y then s1[x]<=s2[y]


I would not like to have my life depend on this specification!


And even with a correct specification, you may still want to ask
yourself whether you'd rather implement it, or go and live free on an
island with your S.O.


-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Lex Spoon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3sml7g5rh.fsf@logrus.dnsalias.net>
>> The definition of correctness is "conforms to the specification."
>
> There  is  the correctness  of  the program,  and  then  there is  the
> correctness of  the specification.  Have  a look at  the specification
> Erann gave for sort:
>
>     s2 = sort(s1) iff:
>
>         len(s2) = len(s1) and
>         forall(x): if member(x,s2) then member(x,s1) and
>         forall integers x,y in (0,len(s1)-1): if x<y then s1[x]<=s2[y]
>
>
> I would not like to have my life depend on this specification!

It's a good thing you don't.  Consider:

    s1 = [ 1, 1, 2, 2 ]
    s2 = [ 1, 2, 2, 2 ]

or worse:

    s1 = [ 1, 2, 3, 4 ]
    s2 = [ 4, 4, 4, 4 ]

These both match the spec.  This pair, on the other hand, does not
match the spec:

    s1 = [ 4, 3, 2, 1 ]
    s2 = [ 1, 2, 3, 4 ]

Consider x=0 and y=1.  x<y, but it is not true that s1[x]<=s2[y].
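Erann's spec, transcribed as written into Python (the function name is
mine), confirms both observations:

```python
def meets_spec(s1, s2):
    # len(s2) = len(s1) and
    # forall x: member(x, s2) => member(x, s1) and
    # forall x < y in 0..len(s1)-1: s1[x] <= s2[y]   (s1, as written)
    n = len(s1)
    return (len(s2) == n
            and all(x in s1 for x in s2)
            and all(s1[x] <= s2[y]
                    for x in range(n) for y in range(n) if x < y))

assert meets_spec([1, 2, 3, 4], [4, 4, 4, 4])       # passes, though unsorted
assert not meets_spec([4, 3, 2, 1], [1, 2, 3, 4])   # a correct sort fails
```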


To reiterate Pascal's point, one certainly does not want to treat the
spec as the ultimate definition of correctness.  The spec needs to be
verified, too.


And to rub in an earlier point from the thread, please note that
neither of these bugs is a type error.  I'm glad not to have seen it
in this thread, but I occasionally hear people -- sometimes respected
researchers -- claiming things like "once my program type checks in
SML, it tends to be correct".  I wouldn't want to trust my life to
code written by such a person.  :)  (Granted I wouldn't want to trust
my life to ANY software! :))


-Lex
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <0JVob.81057$e01.264784@attbi_s02>
"Lex Spoon" <···@cc.gatech.edu> wrote in message ···················@logrus.dnsalias.net...
> I'm glad not to have seen it
> in this thread, but I occasionally hear people -- sometimes respected
> researchers -- claiming things like "once my program type checks in
> SML, it tends to be correct".

Is it that you don't believe them?

All this does is describe a tendency; it doesn't say anything
hard and fast. Are you saying their experience is invalid,
or are you saying they are misrepresenting their experience?
Isn't it possible that, in fact, once their programs typecheck
in SML that they tend to be correct?


Marshall
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bo1a21$2i6$1@newsreader2.netcologne.de>
Marshall Spight wrote:
> "Lex Spoon" <···@cc.gatech.edu> wrote in message ···················@logrus.dnsalias.net...
> 
>>I'm glad not to have seen it
>>in this thread, but I occasionally hear people -- sometimes respected
>>researchers -- claiming things like "once my program type checks in
>>SML, it tends to be correct".
> 
> 
> Is it that you don't believe them?
> 
> All this does is describe a tendency; it doesn't say anything
> hard and fast. Are you saying their experience is invalid,
> or are you saying they are misrepresenting their experience?
> Isn't it possible that, in fact, once their programs typecheck
> in SML that they tend to be correct?

Here are my 0.02 eurocents:

Of course, this is perfectly possible.

However: Please keep in mind that such a statement is a description of 
someone's experience. The problems start as soon as you abuse this as a 
prescription for other people's work.


Pascal
From: Lex Spoon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3vfq1gcul.fsf@logrus.dnsalias.net>
"Marshall Spight" <·······@dnai.com> writes:
> "Lex Spoon" <···@cc.gatech.edu> wrote in message ···················@logrus.dnsalias.net...
>> I'm glad not to have seen it
>> in this thread, but I occasionally hear people -- sometimes respected
>> researchers -- claiming things like "once my program type checks in
>> SML, it tends to be correct".
>
> Is it that you don't believe them?

I don't.  They are being honest about their experience, but I am not
convinced that their programs are really that bug free.  For starters,
this is a claim from the person's memory, not from any objective study
of even one person's work.  Memory is very faulty, especially if you
aren't concentrating!  Second, there is no verification that the
programs have no bugs.  I strongly suspect that these people simply
don't run their code very much.

Most of all, it completely boggles my intuition and experience.  Type
errors seem severe to me, and so a program lacking type errors is no
better off than a hospital patient lacking large holes in the head.  When
your program tries to take the FFT of a banana, it tends to drop dead
immediately anyway.  Further, in the code I write and debug, type
errors have never in my memory caused a problem, whereas I've
certainly had plenty of other sorts of bugs.

Finally, I have actually written a moderate amount of SML.  I'm
careful with it, I believe I write in a good style for the language,
and I tend to have a small number of type errors anyway when I compile
it.  (To me, compiler errors tell you you need to inspect your code
some more.)  Nevertheless, my SML code seems to turn up with just as
many bugs as my code in any other language, once it has been in
existence for, say, fifteen minutes.

If someone actually comes up with a language and static checker where
a successful check almost always means a lack of bugs, then I'd very
much like to know about it.  But I don't see it really happening.  It
sounds too much like a silver bullet.  But anyway, if someone thinks
they have found such a checker, then they should be leaping all over
the idea and dropping everything else they are working on.  Correct
code is a BIG deal!


-Lex


PS -- I've heard it said about Ada as well as about SML :)
From: Stephen J. Bevan
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m31xsoerv6.fsf@dino.dnsalias.com>
Lex Spoon <···@cc.gatech.edu> writes:
> "Marshall Spight" <·······@dnai.com> writes:
> > "Lex Spoon" <···@cc.gatech.edu> wrote in message ···················@logrus.dnsalias.net...
> >> I'm glad not to have seen it
> >> in this thread, but I occasionally hear people -- sometimes respected
> >> researchers -- claiming things like "once my program type checks in
> >> SML, it tends to be correct".
> >
> > Is it that you don't believe them?
> 
> I don't.  They are being honest about their experience, but I am not
> convinced that their programs are really that bug free.  For starters,
> this is a claim from the person's memory, not from any objective study
> of even one person's work.  Memory is very faulty, especially if you
> aren't concentrating!  ...
> [snip]

As you note, memory can be faulty, which is why some of those on the
static side of things might well believe you are being honest in the
following, but question whether your memory is actually faulty :-

> ...  Further, in the code I write and debug, type
> errors have never in my memory caused a problem, whereas I've
> certainly had plenty of other sorts of bugs.
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa76ed0$1@news.unimelb.edu.au>
Lex Spoon <···@cc.gatech.edu> writes:

>Further, in the code I write and debug, type
>errors have never in my memory caused a problem, whereas I've
>certainly had plenty of other sorts of bugs.

Many "other sorts of bugs", e.g. passing arguments in the wrong order
or misspelling some symbol, will show up as type errors.
Have you really never passed arguments in the wrong order,
and never misspelt an enumeration constant?

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <vn2f71-461.ln1@ID-7776.user.dfncis.de>
Lex Spoon <···@cc.gatech.edu> wrote:

> And to rub in an earlier point from the thread, please note that
> neither of these bugs is a type error.  I'm glad not to have seen it
> in this thread, but I occasionally hear people -- sometimes respected
> researchers -- claiming things like "once my program type checks in
> SML, it tends to be correct".  I wouldn't want to trust my life to
> code written by such a person.  :)  

But you probably do "trust your life" (if I may say so :-) to people who
check the correctness of their programs statistically with unit tests.
Unit tests are never a complete specification, but still, they help
with changing the program until it does what is expected of it.

Type checking works very similarly. The difference is that it
generates a lot of the low level unit tests that you would otherwise
have to write explicitly. Once your program passes the low level
tests, the experience is that you have invested at this stage so much
thought that the higher level tests very often will also pass. That
doesn't mean you don't have to do them, of course.

The mechanism beneath "once my programs type checks, it tends to
be correct" and "once my program passes all unit tests, it tends to
be correct" is the same.

- Dirk
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-0111032310360001@192.168.1.51>
In article <··············@thalassa.informatimago.com>, Pascal Bourguignon
<····@thalassa.informatimago.com> wrote:

> "Marshall Spight" <·······@dnai.com> writes:
> 
> > "Pascal Costanza" <········@web.de> wrote in message
················@newsreader2.netcologne.de...
> > > We are left
> > > with the obvious question: if the specification is approximately of the
> > > same order of complexity as the program, how can we know that the
> > > specification is correct?"
> > 
> > Specifications are correct by definition. If that weren't the case,
> > what would correctness mean? That God favors the program?
> > That it possesses some transcendent quality?
> > 
> > The definition of correctness is "conforms to the specification."
> 
> There  is  the correctness  of  the program,  and  then  there is  the
> correctness of  the specification.  Have  a look at  the specification
> Erann gave for sort:
> 
>     s2 = sort(s1) iff:
> 
>         len(s2) = len(s1) and
>         forall(x): if member(x,s2) then member(x,s1) and
>         forall integers x,y in (0,len(s1)-1): if x<y then s1[x]<=s2[y]

Yes, this spec has bugs.  Sue me.

It should have been:

forall(x): if member(x,s1) then member(x,s2)
                        ^                 ^
if x<y then s2[x]<=s2[y]
             ^

E.
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87r80rht24.fsf@thalassa.informatimago.com>
···@jpl.nasa.gov (Erann Gat) writes:
> >     s2 = sort(s1) iff:
> > 
> >         len(s2) = len(s1) and
> >         forall(x): if member(x,s2) then member(x,s1) and
> >         forall integers x,y in (0,len(s1)-1): if x<y then s1[x]<=s2[y]
> 
> Yes, this spec has bugs.  Sue me.
> 
> It should have been:
> 
> forall(x): if member(x,s1) then member(x,s2)
>                        ^                 ^
> if x<y then s2[x]<=s2[y]
>              ^

That's not enough.  What  you want to say is that the  BAG s1 is equal
to the BAG s2, that is: the SET  of elements in S1 is equal to the SET
of elements in  S2 and the number of occurrences of  each element in S1
is equal to the number of  occurrences of the same element in S2.  What
you said is that s2 must be a subset of s1.

    s1 = ( 4 1 1 3 2 )
    s2 = ( 1 3 3 3 3 )

would match.


-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-0211030720140001@192.168.1.51>
In article <··············@thalassa.informatimago.com>, Pascal Bourguignon
<····@thalassa.informatimago.com> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> > >     s2 = sort(s1) iff:
> > > 
> > >         len(s2) = len(s1) and
> > >         forall(x): if member(x,s2) then member(x,s1) and
> > >         forall integers x,y in (0,len(s1)-1): if x<y then s1[x]<=s2[y]
> > 
> > Yes, this spec has bugs.  Sue me.
> > 
> > It should have been:
> > 
> > forall(x): if member(x,s1) then member(x,s2)
> >                        ^                 ^
> > if x<y then s2[x]<=s2[y]
> >              ^
> 
> That's not enough.  What  you want to say is that the  BAG s1 is equal
> to the BAG s2, that is: the SET  of elements in S1 is equal to the SET
> of elements in  S2 and the number of occurrences of  each element in S1
> is equal to the number of  occurrences of the same element in S2.  What
> you said is that s2 must be a subset of s1.
> 
>     s1 = ( 4 1 1 3 2 )
>     s2 = ( 1 3 3 3 3 )
> 
> would match.

No, that violates the revised condition 2.  4 is a member of s1 but not s2.

But the following is a counterexample:

s1 = (4 2 2 1)
s2 = (1 2 4 4)

Just goes to show that getting the spec right can sometimes be harder than
getting the program right.
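As a quick check, the revised spec transcribed into Python (a sketch,
function name mine) does accept this pair:

```python
def meets_revised_spec(s1, s2):
    # len(s2) = len(s1), forall x: member(x, s1) => member(x, s2),
    # and forall x < y: s2[x] <= s2[y]
    n = len(s1)
    return (len(s2) == n
            and all(x in s2 for x in s1)
            and all(s2[x] <= s2[y]
                    for x in range(n) for y in range(n) if x < y))

assert meets_revised_spec([4, 2, 2, 1], [1, 2, 4, 4])   # yet (1 2 4 4)
                                                        # is not a sort
                                                        # of (4 2 2 1)
```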

E.
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa430dc$1@news.unimelb.edu.au>
"Marshall Spight" <·······@dnai.com> writes:

>"Pascal Costanza" <········@web.de> wrote in message ················@newsreader2.netcologne.de...
>> We are left
>> with the obvious question: if the specification is approximately of the
>> same order of complexity as the program, how can we know that the
>> specification is correct?"
>
>Specifications are correct by definition. If that weren't the case,
>what would correctness mean? That God favors the program?

A specification can be incorrect with respect to a simpler partial
specification.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <874qxn4pml.fsf@thalassa.informatimago.com>
Fergus Henderson <···@cs.mu.oz.au> writes:

> "Marshall Spight" <·······@dnai.com> writes:
> 
> >"Pascal Costanza" <········@web.de> wrote in message ················@newsreader2.netcologne.de...
> >> We are left
> >> with the obvious question: if the specification is approximately of the
> >> same order of complexity as the program, how can we know that the
> >> specification is correct?"
> >
> >Specifications are correct by definition. If that weren't the case,
> >what would correctness mean? That God favors the program?
> 
> A specification can be incorrect with respect to a simpler partial
> specification.

More importantly, it can be incorrect with respect to what is wanted.
If you want a sort function, you had better not specify a weaker
property, be it feature-wise or performance-wise.


-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Ray Blaak
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uznfdbawa.fsf@STRIPCAPStelus.net>
"Marshall Spight" <·······@dnai.com> writes:
> "Pascal Costanza" <········@web.de> wrote in message ················@newsreader2.netcologne.de...
> > We are left
> > with the obvious question: if the specification is approximately of the
> > same order of complexity as the program, how can we know that the
> > specification is correct?"
> 
> Specifications are correct by definition. If that weren't the case,
> what would correctness mean? That God favors the program?
> That it possesses some transcendent quality?
> 
> The definition of correctness is "conforms to the specification."

Yes it is.

But those specifications usually (and informally) correspond to something
interesting that we want their implementations to do. In as much as they do
not describe what we intended, they can be "incorrect".

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnmq1j$fqu$2@newsreader2.netcologne.de>
··········@ii.uib.no wrote:

> Ray Blaak <········@STRIPCAPStelus.net> writes:
> 
> 
>>In other words, "a correct program necessarily has a proof" means
>>exactly "a program with a proof necessarily has a proof". Not a very
>>useful fact.
> 
> 
> So, the argument against formally provable programs is that incorrect
> programs are (occasionally? more?) useful?

Yes, they are indeed occasionally more useful! ;)

Mind-blowing, eh? ;)

Here is some food for thought: 
http://www.dreamsongs.com/NewFiles/ObjectsHaveFailed.pdf

(I am actually interested to hear some comments from the statically 
typed folks, especially because that paper mentions ML and Haskell every 
now and then.)


Pascal
From: Thant Tessman
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bo8pt6$ic5$1@terabinaries.xmission.com>
Pascal Costanza wrote:


> Here is some food for thought: 
> http://www.dreamsongs.com/NewFiles/ObjectsHaveFailed.pdf
> 
> (I am actually interested to hear some comments from the statically 
> typed folks, especially because that paper mentions ML and Haskell every 
> now and then.)

So when I first saw that video of Sinead O'Connor tearing up that
picture of the Pope on Saturday Night Live, my very first thought was 
"Oh, she's Catholic."

My somewhat unexpected impression of Richard Gabriel's paper "Objects 
Have Failed" is "Oh, he's an OO nut."

As it relates to this thread, the paper seems to be in part about how 
static typing and OO (at least as originally envisioned) are inherently 
incompatible. I have two responses: 1) Duh! and 2) That's a problem with 
OO, not with static typing.

(I've actually thought a lot about OO and static typing, but I'm not 
done, so...)

-thant

-- 
America goes not abroad in search of monsters to destroy. She is
the well-wisher of the freedom and independence of all. She is
the champion and vindicator only of her own. -- John Quincy Adams
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <egu15jw8jt.fsf@sefirot.ii.uib.no>
Thant Tessman <·····@acm.org> writes:

> Pascal Costanza wrote:

>> Here is some food for thought:
>> http://www.dreamsongs.com/NewFiles/ObjectsHaveFailed.pdf

Ah!  I downloaded and read the paper, but I couldn't find this message
in order to reply to it properly.

>> (I am actually interested to hear some comments from the statically
>> typed folks, especially because that paper mentions ML and Haskell
>> every now and then.)

The "Static Thinkers" chapter only deals with languages with weak
typing (it lists C++, Eiffel, Java).  While I agree with most of what
is said, I can't really find any arguments there against HM type
systems.

I like the "living systems" sentence; but I'd also note that quite a
few programs are not "living" in this sense. 

> So when I first saw that video of Sinéad O'Connor tearing up that
> picture of the Pope on Saturday Night Live, my very first thought was
> "Oh, she's Catholic."

:-)

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Aatu Koskensilta
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bWpnb.46$4h4.38@reader1.news.jippii.net>
Ray Blaak wrote:

> Aatu Koskensilta <················@xortec.fi> writes:
> 
>>Ray Blaak wrote:
>>
>>>Formal proofs can only be done relative to a particular formalism. Within a
>>>given (sufficiently complex) formalism it is impossible to prove all
>>>statements that can be expressed in it.
>>
>>And this is a good thing. If you could prove all statements that can be 
>>expressed in the formalism, you'd have an inconsistent system. Of 
>>course, what you meant to say is that there are - relative to a standard 
>>semantics - true statements which can't be proved.
> 
> 
> Since I am trying to be conservative and avoid the philosophical debate of true,
> I meant exactly what I said and no more: there are statements that can't be
> proved in a given formalism.

I don't see how this is weaker. Assume G is a statement that's 
undecidable in the formalism (and assume that we're speaking of an 
interpreted formalism here). Then so is ~G. One of these is true, since G 
or ~G is.

> For true statements that can't be proved, how do you know they are true? Were
> they proved in some other formalism, perhaps a more powerful one? True according
> to some oracle? According to human intuitive guesses? Platonically in an
> absolute sense?

Consider an interpreted theory T which we have accepted. Say T is PA. 
Now add to T all instances of Prov_T('P')-->P. These new reflection 
axioms do not *add* any new commitments: we have already accepted T, so 
we should accept that what it proves is true, and we can formalise this. 
There is no need to appeal to any oracle. We can continue this process 
to get T,T',T'',T''', &c. We need oracles when we get to stage omega+1 
(at least in certain technical renderings of this process), but that 
need not concern us here.

If you really have problems with arithmetical truth or mathematical 
truth in general I doubt I can help you here.

>>There's another catch here: assuming human beings will only write a 
>>finite number of programs, there is a sound formalism in which all the 
>>correct programs can be proved correct and all incorrect proved incorrect.
> 
> 
> The notion of "a correct program necessarily has a proof" is actually rather
> vacuous. In the context of being formally correct, a correct program is only
> correct if it can be shown that the statement describing the program can be
> ultimately derived from the formalism's axioms. But that derivation is exactly
> what a formal proof is.

Yes. But the interest we have in these formal derivations is that they 
are semantically valid.

> That is, when doing formal proofs, programs are not "absolutely correct", but
> only "formally correct".

If we have accepted a formalism T as valid, we have also accepted as 
valid all the derivations. Thus if we show that a desired property P 
provably holds of an algorithm A, then we have absolutely established 
P(A), i.e. the correctness of A.

> In other words, "a correct program necessarily has a proof" means exactly "a
> program with a proof necessarily has a proof". Not a very useful fact.

Every true statement can be proved in some sound formalism. This is not 
a very useful fact, but it's slightly more useful than "a program with a 
proof necessarily has a proof". We *do* have a reasonably good picture 
of what sorts of things are provable about recursive functions in 
systems and which are not.

> When a program is being initially written, the programmer does not know it is
> correct, so even if there is a proof they still have to take the trouble to
> define/discover it.

The programmer should be attempting to produce a correct program, 
however. The criteria of correctness might be very vague, e.g. 
correctness for vaguely defined "ordinary" inputs or some such. If he 
doesn't have any sort of idea what is to count as correct behaviour for 
the program, then there's nothing to prevent him from simply writing

  #include <stdio.h>

  int main()
  {
    while(1)
    {
      printf("Hello world\n");
    }
  }

>>>Given that humans can bang out just about any possible piece of crap on the
>>>keyboard if they are patient enough, it certainly follows that there are
>>>programs that humans can write that cannot be proved correct.
>>
>>Only if human beings produce an infinite number of programs.
> 
> 
> "This statement is false" is not provable in your favourite formalism, and I
> just typed it. In Lisp: (defun Q () (eq nil (Q)))

Huh? Most of the formalisms usually considered are incapable of 
expressing the sentence "This statement is false". I don't see how your 
little Lisp snippet "says" anything. And even so, what is it Q is 
supposed to correctly calculate?

Let's restrict attention to recursive functions of natural numbers. A 
specification is a set of statements S that should be true of the 
function the algorithm A is supposed to compute. A proof of correctness 
for an algorithm A is simply a proof of S(A). How does the liar enter here?
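To make "a proof of S(A)" concrete, here is a toy sketch in Python (an
editorial illustration, not from the original post; checking S on a
finite sample of inputs is of course a test, not a proof):

```python
# A toy specification S for an addition algorithm A on the naturals:
#   S1: A(n, 0) == n           (right identity)
#   S2: A(n, m) == A(m, n)     (commutativity)
def A(n, m):
    # Recursive addition on natural numbers.
    return n if m == 0 else A(n, m - 1) + 1

def S(A):
    # Check the specification on a finite sample of inputs only.
    # A genuine correctness proof would establish S(A) for *all*
    # naturals, e.g. by induction on m; this merely samples.
    return all(A(n, 0) == n and A(n, m) == A(m, n)
               for n in range(10) for m in range(10))

print(S(A))  # True on the sampled inputs
```

The liar does not enter anywhere in this picture: S is a set of claims
about a function, and a correctness proof is an ordinary proof of those
claims.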

>>>It does not follow that the formalisms currently at our disposal are
>>>rich enough to express the correctness formally, however.
>>
>>Then we use richer formalisms in which hopefully these criteria - as far 
>>as they are at all desirable - can be expressed and facts about programs 
>>proved. For example, we could add to Peano arithmetic various sorts of 
>>reflection schemata (which prove the consistency of PA), and get a 
>>stronger theory, and then repeat this for the new formalism so obtained 
>>and so on (this produces axiomatisable theories).
> 
> 
> That we do this is a useful human activity and research into the kinds of
> formalisms we need.

It's also up to an extent a completely mechanical procedure. For 
example, up to omega, adding reflection to PA gives recursively 
enumerable theories, all of which we should accept since we have already 
accepted PA. The resulting theories are very strong indeed.

> The axioms we introduce, however, are subject to considerable debate, such
> that "we" decide that they correspond to things we observe, need, and can
> implement on physical devices.

While we stay in the realm of recursive functions, there are no such 
considerations. We just have to model -- and this is the part where 
debate enters, not in accepting axioms stronger than those of PA -- our 
desiderata as properties of recursive functions.

And if we really do wish to take the limitations of machines into 
account, then we need not worry about undecidability: every question 
about a finite model is decidable. Of course, if the proofs themselves 
are of length 2^100000000000000000000000000000000000000, this might not 
be very helpful. But there are many things to help us here: for example, 
if we simply add to our proof systems higher order principles, then for 
every recursive function f there are an infinite number of statements 
(of first order), the proofs of which take f(m) steps at first order 
level, but only m steps at higher order level.

There might really be cases in which a program can't be proved correct 
by us (although there trivially *exists* a proof). I have seen no 
evidence that there is such a program. Perhaps we could construct a 
program and a specification according to which it is correct just in 
case somehow we can never establish this, but this does not seem like a 
very likely thing.

-- 
Aatu Koskensilta (················@xortec.fi)

"Wovon man nicht sprechen kann, daruber muss man schweigen"
  - Ludwig Wittgenstein, Tractatus Logico-Philosophicus
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87r80x9y3h.fsf@thalassa.informatimago.com>
Aatu Koskensilta <················@xortec.fi> writes:
> The programmer should be attempting to produce a correct program,
> however. The criteria of correctness might be very vague,
> e.g. correctness for a vaguely defined "ordinary" inputs or some
> such. If he doesn't have any sort of idea what is to count as correct
> behaviour for the program, then there's nothing to prevent him from
> simply writing
> 
>   #include <stdio.h>
> 
>   int main()
>   {
>     while(1)
>     {
>       printf("Hello world\n");
>     }
>   }

In any case, this is not an incorrect program!

Indeed,  it  has  a  perfectly  well defined  behavior,  and  it  even
terminates, and this can be  proved. (Like any interactive program, it
ends when the  user decides to type Ctrl-C, or  when the energy supply
is cut or at the end of the universe, whichever comes first).


Just to point out that the points of view of the "scientist" and of
the "engineer" are not different, only that perhaps the "scientist"
may forget some important hypothesis in his demonstrations.

-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnm38v$104e$1@f1node01.rhrz.uni-bonn.de>
Pascal Bourguignon wrote:
> Aatu Koskensilta <················@xortec.fi> writes:
> 
>>The programmer should be attempting to produce a correct program,
>>however. The criteria of correctness might be very vague,
>>e.g. correctness for a vaguely defined "ordinary" inputs or some
>>such. If he doesn't have any sort of idea what is to count as correct
>>behaviour for the program, then there's nothing to prevent him from
>>simply writing
>>
>>  #include <stdio.h>
>>
>>  int main()
>>  {
>>    while(1)
>>    {
>>      printf("Hello world\n");
>>    }
>>  }
> 
> 
> In any case, this is not an incorrect program!
> 
> Indeed,  it  has  a  perfectly  well defined  behavior,  and  it  even
> terminates, and this can be  proved. (Like any interactive program, it
> ends when the  user decides to type Ctrl-C, or  when the energy supply
> is cut or at the end of the universe, whichever comes first).

...and, funnily enough, it can be seen as an excellent first step in a 
transition to a more contemporary understanding of computer science.

See 
http://faculty.olin.edu/~las/2001/07/www.ai.mit.edu/people/las/papers/rug.html 
in which the author argues that not

print("Hello, world!");

should be regarded as the quintessential notion of computing, but rather

while(true){ echo(); }


;-)

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Aatu Koskensilta
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <k0Mnb.333$4h4.88@reader1.news.jippii.net>
Pascal Bourguignon wrote:

> Aatu Koskensilta <················@xortec.fi> writes:
> 
>>The programmer should be attempting to produce a correct program,
>>however. The criteria of correctness might be very vague,
>>e.g. correctness for a vaguely defined "ordinary" inputs or some
>>such. If he doesn't have any sort of idea what is to count as correct
>>behaviour for the program, then there's nothing to prevent him from
>>simply writing
>>
>>  #include <stdio.h>
>>
>>  int main()
>>  {
>>    while(1)
>>    {
>>      printf("Hello world\n");
>>    }
>>  }
> 
> 
> In any case, this is not an incorrect program!

According to what criteria of correctness? My point was that the 
programmer must have *some* criteria of correctness in mind when 
producing a program, or else he can just produce an arbitrary program 
every time he's asked to do something. For example, if someone asks you 
to produce a calculator and you couldn't fathom what sort of correctness 
criteria might apply to the desired program, you could very well produce 
the program listed above. You always have *some* idea what is to count 
as correct behaviour for the program, even if it's in some vague 
form like "for all test inputs my boss comes up with, the program should not 
crash and the answer should not be horridly off from what I get if I do 
the addition with paper and pencil".

> Indeed,  it  has  a  perfectly  well defined  behavior,  and  it  even
> terminates, and this can be  proved. (Like any interactive program, it
> ends when the  user decides to type Ctrl-C, or  when the energy supply
> is cut or at the end of the universe, whichever comes first).

Of course. By this argument there are no non-terminating programs. But 
because these limits can be thought of as arbitrarily great, when 
reasoning about programs we idealise them away.

> Just to  point that the point of  view of the "scientist"  and that of
> the "engineer"  are not different,  only that perhaps  the "scientist"
> may forget some important hypothesis in his demonstrations.

Of course.

-- 
Aatu Koskensilta (················@xortec.fi)

"Wovon man nicht sprechen kann, daruber muss man schweigen"
  - Ludwig Wittgenstein, Tractatus Logico-Philosophicus
From: Ray Blaak
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uk76oio9l.fsf@STRIPCAPStelus.net>
Aatu Koskensilta <················@xortec.fi> writes:
> Ray Blaak wrote:
> > I meant exactly what I said and no more: there are statements that can't be
> > proved in a given formalism.
> 
> I don't see how this is weaker. Assume G is a statement that's 
> undecidable in the formalism (and assume that we're speaking of an 
> intepreted formalism here). Then so is ~G. One of these is true, since G 
> or ~G is.

Not necessarily. "Constructive" formalisms would require a proof of one
of them before asserting the disjunction. "Classical" formalisms allow
the "law of the excluded middle", such that "G or ~G" is a theorem,
regardless of what G is.

If you think this is silly, consider that most people agree that "either God
exists or God does not exist", but no one wants to be on the hook for proving
either disjunct.

> > "This statement is false" is not provable in your favourite formalism, and I
> > just typed it. In Lisp: (defun Q () (eq nil (Q)))
> 
> Huh? Most formalisms usually concerned are incapable of expressing the 
> sentence "This statement is false".

Goedel showed how to do it with just arithmetic.

> I don't see how your little Lisp snippet "says" anything. And even so, what
> is it Q is supposed to correctly calculate?

Q returns the result of comparing nil (i.e. "false") with itself. It doesn't
correctly calculate anything of course, since it is undecidable. This
manifests itself in "real life" as an infinite loop, i.e. no result is
returned.
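A Python analogue of that Lisp snippet (an editorial illustration, not
from the original post) makes the operational point concrete: the
self-comparison never produces a value, and in practice the regress is
cut off by the interpreter's recursion limit:

```python
def q():
    # Analogue of (defun Q () (eq nil (Q))): q asks whether its own
    # result is false, so each call must first complete another call
    # to q -- an infinite regress that never yields a value.
    return q() == False

try:
    q()
    print("q returned a value")
except RecursionError:
    print("no value: the self-reference spins forever")
```

In a language without a stack limit the call would simply never
return, which is the "no result is returned" behaviour described above.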

The clearest, simplest, most elegant presentation of Goedel's result I have
seen is called "Beautifying Gödel", by Eric C.R. Hehner (my old supervisor in
fact) at http://www.cs.toronto.edu/~hehner/God.pdf

It is quite instructive, going right to the heart of the issues. It also deals
with the "G v ~G" business above.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Torkel Franzen
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <623454.0310290528.68f9be7e@posting.google.com>
Ray Blaak <········@STRIPCAPStelus.net> wrote in message news:<·············@STRIPCAPStelus.net>...

> The clearest, simplest, most elegant presentation of Goedel's result I have
> seen is called "Beautifying Gödel", by Eric C.R. Hehner (my old supervisor in
> fact) at http://www.cs.toronto.edu/~hehner/God.pdf
> 
> It is quite instructive, going right to the heart of the issues. It also deals
> with the "G v ~G" business above.

 As pointed out by Aatu, this paper, which would be better titled
"Mangling Gödel", contains some odd misconceptions, including a
highly original one about the second incompleteness theorem.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m2ad7kbmbc.fsf@hanabi-air.shimizu.blume>
Ray Blaak <········@STRIPCAPStelus.net> writes:

> > Huh? Most formalisms usually concerned are incapable of expressing the 
> > sentence "This statement is false".
> 
> Goedel showed how to do it with just arithmetic.

False.  What he managed to express using arithmetic was "This
statement has no proof."  or, more precisely, "There is no number that
is the encoding of a sequence of valid logical deductions ending with
this statement."

If you assume the statement to be true, then you have a true statement
that cannot be proved correct.  If you assume the statement to be
false (and the negation to be true), then you have to deal with
"supernaturals" because the proof encoded by the number (whose existence
the negation of the statement now postulates) must not be finite for
the logic to be consistent.  Otherwise you have a finite proof of a
false statement -- bad.

If you manage to express "This statement is false", then you
immediately have an inconsistent logic at your hands.

Matthias
From: Ray Blaak
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <un0bjjjle.fsf@STRIPCAPStelus.net>
Matthias Blume <····@my.address.elsewhere> writes:
> Ray Blaak <········@STRIPCAPStelus.net> writes:
> > > Huh? Most formalisms usually concerned are incapable of expressing the 
> > > sentence "This statement is false".
> > 
> > Goedel showed how to do it with just arithmetic.
> 
> False.  What he managed to express using arithmetic was "This
> statement has no proof."  or, more precisely, "There is no number that
> is the encoding of a sequence of valid logical deductions ending with
> this statement."

What I read discusses Goedel sentences G such that G is a sentence which says
of itself that it is not a theorem of T. See, for example, a discussion at
http://www.sm.luth.se/~torkel/eget/godel/theorems.html

> If you manage to express "This statement is false", then you
> immediately have an inconsistent logic at your hands.

"False" as in "not a theorem".

One does not immediately have an inconsistency. Just expressing the statement
(i.e. writing it down) does not do anything. One tries to show it is a theorem
or not, and either way cannot be done if we are to assume our formalism is
(omega-)consistent.

So we simply give up interpreting such sentences, leaving them unclassified.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Torkel Franzen
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <vcboevzyw8o.fsf@beta13.sm.luth.se>
Ray Blaak <········@STRIPCAPStelus.net> writes:


> > If you manage to express "This statement is false", then you
> > immediately have an inconsistent logic at your hands.
> 
> "False" as in "not a theorem".

  "False" does not mean "is not a theorem". You might care to look at
http://www.sm.luth.se/~torkel/eget/godel/nothing.html.
From: Aatu Koskensilta
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <lC4ob.23$Ve1.1@reader1.news.jippii.net>
Ray Blaak wrote:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
>>Ray Blaak <········@STRIPCAPStelus.net> writes:
>>
>>>>Huh? Most formalisms usually concerned are incapable of expressing the 
>>>>sentence "This statement is false".
>>>
>>>Goedel showed how to do it with just arithmetic.
>>
>>False.  What he managed to express using arithmetic was "This
>>statement has no proof."  or, more precisely, "There is no number that
>>is the encoding of a sequence of valid logical deductions ending with
>>this statement."
> 
> 
> What I read discusses Goedel sentences G such that G is a sentence which says
> of itself that it is not a theorem of T. See, for example, a discussion at
> http://www.sm.luth.se/~torkel/eget/godel/theorems.html

Is that supposed to contradict what I said?

>>If you manage to express "This statement is false", then you
>>immediately have an inconsistent logic at your hands.
> 
> 
> "False" as in "not a theorem".

You have a very queer notion of falsity of arithmetic statements. Truth 
and falsity of arithmetic statements can be precisely and 
*mathematically* defined, and it can be shown that these notions are not 
arithmetic. The notions "has a proof" and "does not have a proof" (in 
first order logic, from an axiomatisable theory) are both arithmetical, 
and hence cannot be coextensional with truth and falsity of arithmetical 
statements.

> One does not immediately have an inconsistency. Just expressing the statement
> (i.e. writing it down) does not do anything. One tries to show it is a theorem
> or not, and either way cannot be done if we are to assume our formalism is
> (omega-)consistent.

As a corollary of Tarski's result, if we can define a truth predicate 
True in a theory T of a language closed under contradictory negation 
which can represent a suitable subset of its syntactical notions, so that 
all instances of True('P')<==>P are provable, then theory T is inconsistent.
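The derivation behind that corollary is short; as a sketch (standard
notation, with L the liar sentence supplied by the diagonal lemma):

```latex
% Assume T proves True('P') <-> P for every P, and that diagonalisation
% gives a sentence L with T |- L <-> ~True('L').  Then:
\begin{align*}
  &T \vdash \mathrm{True}(\ulcorner L \urcorner) \leftrightarrow L
      && \text{(T-schema instance)}\\
  &T \vdash L \leftrightarrow \lnot\mathrm{True}(\ulcorner L \urcorner)
      && \text{(diagonal lemma)}\\
  &T \vdash \mathrm{True}(\ulcorner L \urcorner)
      \leftrightarrow \lnot\mathrm{True}(\ulcorner L \urcorner)
      && \text{(chaining the two)}
\end{align*}
% The last line is a contradiction, so T is inconsistent.
```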

-- 
Aatu Koskensilta (················@xortec.fi)

"Wovon man nicht sprechen kann, daruber muss man schweigen"
  - Ludwig Wittgenstein, Tractatus Logico-Philosophicus
From: Ray Blaak
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uism527r4.fsf@STRIPCAPStelus.net>
I just read Alfred Tarski's _The Semantic Conception Of Truth And The
Foundations Of Semantics_. It is rare to read such a clear and carefully
written paper.

My question is this: why is it the case that "there must be true sentences
which are not provable"? Given that Tarski himself says "there exists a pair
of contradictory sentences neither of which is provable", couldn't it be the
case that the most you can say is "I don't know", i.e., I can't prove things
either way?

That is, I cannot decide on the sentence's truth, so for practical purposes I
simply give up.

In particular, Gödel sentences don't seem to have any truth meaning: any
attempt to evaluate them gives rise to the cyclical spinning of true then
false then true..., i.e. an infinite loop in practical terms.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Torkel Franzen
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <vcbhe1p25tf.fsf@beta13.sm.luth.se>
Ray Blaak <········@STRIPCAPStelus.net> writes:

> In particular, Gödel sentences don't seem to have any truth meaning: any
> attempt to evaluate them gives rise to the cyclical spinning of true then
> false then true..., i.e. an infinite loop in practical terms.

  Not at all. The Gödel sentence for a theory T is equivalent to "T is
consistent", and this equivalence is provable in T itself.
From: Aatu Koskensilta
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <0aqob.21$kc2.19@reader1.news.jippii.net>
Ray Blaak wrote:

> My question is this: why is it the case that "there must be true sentences
> which are not provable"? Given that Tarski himself says "there exists a pair
> of contradictory sentences neither of which is provable", couldn't it be the
> case that the most you can say is "I don't know", i.e., I can't prove things
> either way?

This is possible, but not without giving up the assumption that the 
language is closed under contradictory negation (and thus classical). 
The problem with the answer "I don't know" is that we can consider a 
sentence - known as the strengthened liar: "I don't know this sentence 
to be true". I don't know it to be true, hence it's true, and hence I 
*do* know that it's true.

There's a huge literature on this sort of problem, and various proposed 
ways out.

> That is, I cannot decide on the sentence's truth, so for practical purposes I
> simply give up.

This makes you embrace a queer notion of truth: truth is simply being 
known by you.

> In particular, Gödel sentences don't seem to have any truth meaning: any
> attempt to evaluate them gives rise to the cyclical spinning of true then
> false then true..., i.e. an infinite loop in practical terms.

No they don't. You are still confusing provability in a particular 
formal system with truth. For example, the Gödel statement for PA says 
of itself that it's not provable, which is equivalent to the consistency 
of PA. We know that PA is consistent, since it's true in the natural 
numbers, and hence the sentence is true. There is no problem with the 
liar when we are dealing with arithmetic sentences, simply because the 
liar can't be formulated as an arithmetical statement.

Let's try: "this sentence is not provable in PA". Ok, assume it's not 
provable in PA... what now? Where does the infinite loop come now? 
Assume it's provable in PA. Ok, then PA is inconsistent (which it 
isn't)... what now? The infinite loop you refer to simply does not occur.

-- 
Aatu Koskensilta (················@xortec.fi)

"Wovon man nicht sprechen kann, daruber muss man schweigen"
  - Ludwig Wittgenstein, Tractatus Logico-Philosophicus
From: Ray Blaak
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <u65i1cq0p.fsf@STRIPCAPStelus.net>
Aatu Koskensilta <················@xortec.fi> writes:
> Ray Blaak wrote:
> > That is, I cannot decide on the sentence's truth, so for practical
> > purposes I simply give up.
> 
> This makes you embrace a queer notion of truth: truth is simply being known by
> you.

Maybe. In practice, is it not what people do anyway? Regardless of the
existence of platonic truths, we don't believe/accept them until we
discover/derive them for ourselves, take them on faith from those in
authority, or have their effects communicated to us in some way (e.g. "the
water supply is poisoned" ==> "I get very sick").

> > In particular, Gödel sentences don't seem to have any truth meaning: any
> > attempt to evaluate them gives rise to the cyclical spinning of true then
> > false then true..., i.e. an infinite loop in practical terms.
> 
> No they don't. You are still confusing provability in a particular formal
> system with truth. 

But isn't Tarski's notion of truth merely in the context of an
"essentially richer" metalanguage?  That is, it is "truth" relative to another
formal system, albeit richer than arithmetic.

> Let's try: "this sentence is not provable in PA". Ok, assume it's not provable
> in PA... what now? Where does the infinite loop come now? Assume it's provable
> in PA. Ok, then PA is inconsistent (which it isn't)... what now? The infinite
> loop you refer to simply does not occur.

Cool. There's the true but unprovable statement. I do wonder though: this is
trivially true for our minds, but how is it true in the Tarski sense? What is
the formal metalanguage that we can express the truth of this sentence in?

Also, consider the stronger "this sentence is provably not a theorem", i.e.,
"this sentence is an antitheorem". This is not "false" in the Tarski sense,
right? Can it be expressed in PA?

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Torkel Franzen
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <vcbvfq0fkqh.fsf@beta13.sm.luth.se>
Ray Blaak <········@STRIPCAPStelus.net> writes:

 >But isn't Tarski's notion of truth merely in the context of an
 >"essentially richer" metalanguage?  That is, it is "truth" relative
 >to another formal system, albeit richer than arithmetic.

 Your comments are confused - I suspect you've learned too much from
Hehner.
From: Ray Blaak
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uislzpb39.fsf@STRIPCAPStelus.net>
Torkel Franzen <······@sm.luth.se> writes:
> Ray Blaak <········@STRIPCAPStelus.net> writes:
> >But isn't Tarski's notion of truth merely in the context of an
> >"essentially richer" metalanguage?  That is, it is "truth" relative
> >to another formal system, albeit richer than arithmetic.
> 
> Your comments are confused - I suspect you've learned too much from Hehner.

Quite possibly and almost certainly, respectively.

I do find, however, the effort of trying to identify the underlying
assumptions in this subthread quite instructive. I have already just learned
about Tarski, for example.

Regarding my statement quoted above, I note that in "The Semantic Conception
Of Truth And The Foundations Of Semantics" (the version I read is at
http://www.ditext.com/tarski/tarski-c.html), Tarski says that for the
equivalence:

     (T) X is true if, and only if, p.

  We shall call any such equivalence (with 'p' replaced by any sentence of
  the language to which the word "true" refers, and 'X' replaced by a name
  of this sentence) an "equivalence of the form (T)."

  Now at last we are able to put into a precise form the conditions under
  which we will consider the usage and the definition of the term "true" as
  adequate from the material point of view: we wish to use the term "true"
  in such a way that all equivalences of the form (T) can be asserted, and
  we shall call a definition of truth "adequate" if all these equivalences
  follow from it.

That is, something is "true" if all the (T) equivalences can be asserted.
Now, given that he is explicitly talking about making such assertions in formal
languages only, I assume that "assert" means "show is a theorem" in the
metalanguage the sentences of the form (T) are expressed in.

I assume that "show is a theorem" means to show that one can, in principle,
derive the theorem from the axioms of the metalanguage in question, that is,
provide a proof.

Is my assumption wrong? One can try to show the "truth" of such (T) forms by
setting up another Tarski equivalence, i.e., "(T)" is true iff (T), but that is
just deferring the same problem to the next level. One can have some sort of
evaluation models, I suppose, but ultimately they are described in terms of yet
another formalism as well.

It seems to me in the end, we either have formal methods to prove things
"blindly", or we defer to (some set of) humans who say yea or nay for the truth
of things.

My normal usage of the word "true" is the everyday, instinctive one that I
think most people understand each other to use: the vaguely platonic one of
true things being that which are, etc.

In the context of formal proofs of correctness, however, such "truth" does 
not enter into it (and indeed, Tarski explicitly says "the notion of truth never
coincides with that of provability"). One has only "is a theorem", "is not a
theorem", and (for most decent formalisms) "can't decide".

What I see with Tarski's notion of "truth" (given my quite possibly confused
misunderstandings), is that "truth" for one level is "provability" in the next
level.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa263aa$1@news.unimelb.edu.au>
Ray Blaak <········@STRIPCAPStelus.net> writes:

>My question is this: why is it the case that "there must be true sentences
>which are not provable"? Given that Tarski himself says "there exists a pair
>of contradictory sentences neither of which is provable", couldn't it be the
>case that the most you can say is "I don't know", i.e., I can't prove things
>either way?

Consider the statements

	P: this sentence is not provable within the formal system.
and
	Q: P is false.

If P was provable within the formal system, then (since we're assuming
that the theory is consistent) it would have to be true, which would
imply that it was NOT provable within the formal system, which would be
a contradiction.  Therefore P must not be provable within the formal system.
That of course implies that P is true, and that Q is false.

>In particular, Gödel sentences don't seem to have any truth meaning:

Sure they do.  For example, P is true, and Q is false.
We just can't prove that within the formal system.
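Fergus's case analysis can be spelled out mechanically. The following Python sketch (a toy model added here for illustration, not part of the original post) enumerates the four combinations of "P is provable" and "P is true" under just two constraints: soundness, and P's own self-referential meaning:

```python
from itertools import product

# Toy model of the argument above: check every combination of
# (P is provable, P is true) against two constraints:
#   soundness:    anything provable is true
#   meaning of P: P is true iff P is not provable
consistent = [
    (provable, true)
    for provable, true in product([False, True], repeat=2)
    if (not provable or true)        # soundness: provable implies true
    and (true == (not provable))     # P says "I am not provable"
]
print(consistent)  # sole survivor: P is true but not provable
```

The only assignment that survives is "true but not provable", matching Fergus's conclusion.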

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Aatu Koskensilta
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <iYLnb.331$4h4.208@reader1.news.jippii.net>
Ray Blaak wrote:
> Aatu Koskensilta <················@xortec.fi> writes:
> 
>>Ray Blaak wrote:
>>
>>>I meant exactly what I said and no more: there are statements that can't be
>>>proved in a given formalism.
>>
>>I don't see how this is weaker. Assume G is a statement that's 
>>undecidable in the formalism (and assume that we're speaking of an 
>>interpreted formalism here). Then so is ~G. One of these is true, since G 
>>or ~G is.
> 
> 
> Not necessarily. "Constructive" formalisms would require one of them to be
> true. "Classical" formalisms allow the "law of the excluded middle", such that
> "G or ~G" is a theorem, regardless what G is.

Gödel's theorem does not rely on non-constructive proof procedures.

> If you think this is silly, consider that most people agree that "either God
> exists or God does not exist", but no one wants to be on the hook for proving
> either disjunct.

I'm well aware of the possibilities of non-classical semantics for 
mathematics. I think most of them are incorrect, and fail to correctly 
model actual mathematical practice, e.g. I believe that the 
constructivists merely conflate truth with constructive provability.

>>>"This statement is false" is not provable in your favourite formalism, and I
>>>just typed it. In Lisp: (defun Q () (eq nil (Q)))
>>
>>Huh? Most formalisms usually concerned are incapable of expressing the 
>>sentence "This statement is false".
> 
> 
> Goedel showed how to do it with just arithmetic.

He showed no such thing. What Gödel showed was that there is a primitive
recursive function G, s.t. when fed with the index A of a recursively 
enumerable theory (I'm of course glossing over many details here), it 
will produce a sentence G(A) which is not decidable in A if A is consistent.

In fact, there provably is no way in arithmetic to express a sentence 
saying of itself that it's not true. It's possible in arithmetic to 
express the truth of sentences up to some complexity, but the result 
will be such that it can't be applied to itself.

Tarski showed that in fact no language closed under contradictory 
negation can be augmented with a predicate T, s.t. all instances of the 
Tarski adequacy schema

  T('P') <==> P

hold. Much later work has gone into investigating what sorts of truth
predicates *can* be introduced if we drop the requirement for closure
under contradictory negation.
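Tarski's impossibility result can be seen in miniature by brute force. This Python sketch (a toy illustration, not Tarski's actual proof) shows that no truth value for the liar sentence L = "T('L') is false" satisfies the adequacy schema:

```python
# Let t stand for the truth value of T('L'), where the liar L is "not T('L')".
# The adequacy schema demands T('L') <==> L; substituting L's meaning gives
# the constraint t == (not t), which no boolean value satisfies -- so no such
# predicate T exists in a language closed under contradictory negation.
satisfying = [t for t in (False, True) if t == (not t)]
print(satisfying)  # empty: the schema has no consistent instance here
```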

>>I don't see how your little Lisp snippet "says" anything. And even so, what
>>is it Q is supposed to correctly calculate?
> 
> 
> Q returns the result of comparing nil (i.e. "false") with itself. It doesn't
> correctly calculate anything of course, since it is undecidable. 

It's not undecidable, it simply does not halt. Single algorithms are not 
"undecidable", they are halting or non-halting - problems are 
undecidable or decidable. And it's very easy to prove this, even in 
certain formal systems. The algorithm Q would be correct relative to 
specification "go on without ever stopping and produce no output".
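Aatu's distinction can be made concrete with a hypothetical Python translation of the Lisp snippet (defun Q () (eq nil (Q))). The recursive call must be evaluated before the comparison, so the function simply never returns; in practice Python signals the runaway recursion with a RecursionError. Nothing about it is "undecidable":

```python
import sys

def q():
    # Python analogue of (defun Q () (eq nil (Q))): the recursive call
    # is evaluated before the comparison, so q never returns a value --
    # it is a non-halting algorithm, not an "undecidable" one.
    return q() is None

sys.setrecursionlimit(100)  # keep the inevitable stack overflow small
try:
    q()
    outcome = "halted"
except RecursionError:
    outcome = "did not halt (stack exhausted)"
print(outcome)
```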

> The clearest, simplest, most elegant presentation of Goedel's result I have
> seen is called "Beautifying Gödel", by Eric C.R. Hehner (my old supervisor in
> fact) at http://www.cs.toronto.edu/~hehner/God.pdf
> 
> It is quite instructive, going right to the heart of the issues. It also deals
> with the "G v ~G" business above.

I'm well aware of these issues.

-- 
Aatu Koskensilta (················@xortec.fi)

"Wovon man nicht sprechen kann, darüber muss man schweigen"
  - Ludwig Wittgenstein, Tractatus Logico-Philosophicus
From: Ray Blaak
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uk76njhmp.fsf@STRIPCAPStelus.net>
Aatu Koskensilta <················@xortec.fi> writes:
> Ray Blaak wrote:
> >>Huh? Most formalisms usually concerned are incapable of expressing the 
> >>sentence "This statement is false".
>
> > Goedel showed how to do it with just arithmetic.
> 
> He showed no such thing. What Gödel showed was that there is a primitive
> recursive function G, s.t. when fed with the index A of a recursively 
> enumerable theory (I'm of course glossing over many details here), it 
> will produce a sentence G(A) which is not decidable in A if A is consistent.

And yet what I read talks about Gödel sentences saying of themselves "I am not
a theorem".

Do you have a different meaning of "true" and "false"? I mean only "is a
theorem" or "is not a theorem".

> In fact, there provably is no way in arithmetic to express a sentence 
> saying of itself that it's not true. It's possible in arithmetic to 
> express the truth of sentences up to some complexity, but the result 
> will be such that it can't be applied to itself.

Can you point me to one or two sources where I can read further? Tarski? What
I have googled so far seems to be discussing "semantic truth".

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Aatu Koskensilta
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <Fc5ob.31$Ve1.30@reader1.news.jippii.net>
Ray Blaak wrote:

> Aatu Koskensilta <················@xortec.fi> writes:
> 
>>Ray Blaak wrote:
>>
>>>>Huh? Most formalisms usually concerned are incapable of expressing the 
>>>>sentence "This statement is false".
>>
>>>Goedel showed how to do it with just arithmetic.
>>
>>He showed no such thing. What Gödel showed was that there is a primitive
>>recursive function G, s.t. when fed with the index A of a recursively 
>>enumerable theory (I'm of course glossing over many details here), it 
>>will produce a sentence G(A) which is not decidable in A if A is consistent.
> 
> 
> And yet what I read talks about Gödel sentences saying of themselves "I am not
> a theorem".
> 
> Do you have a different meaning of "true" and "false"? I mean only "is a
> theorem" or "is not a theorem".

Of course. I have the perfectly ordinary mathematical notion of 
arithmetical truth, according to which, say "AxEyEz(x>2 ==> y and z are 
primes & x=y+z)" is true just in case every number greater than two can 
be written as the sum of two primes. This notion has a precise 
mathematical definition.
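The Goldbach-style sentence Aatu cites can be tested instance by instance (for even numbers, as in the usual statement of the conjecture). A finite sample like this Python sketch of course settles nothing about the universal claim, which is exactly the gap between checking instances and arithmetical truth:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def sum_of_two_primes(n):
    # "n can be written as the sum of two primes"
    return any(is_prime(a) and is_prime(n - a) for a in range(2, n // 2 + 1))

# Check the first few even instances; the arithmetical sentence itself
# quantifies over *all* of them, which no finite computation decides.
sample = all(sum_of_two_primes(n) for n in range(4, 200, 2))
print(sample)  # True for this finite sample
```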

>>In fact, there provably is no way in arithmetic to express a sentence 
>>saying of itself that it's not true. It's possible in arithmetic to 
>>express the truth of sentences up to some complexity, but the result 
>>will be such that it can't be applied to itself.
> 
> 
> Can you point me to one or two sources where I can read further? Tarski? What
> I have googled so far seems to be discussing "semantic truth".

What other kind of truth would you want? Tarski was the first to provide 
a mathematical definition for truth and satisfaction in a structure, and 
you should find an exposition of his definitions and basic results in 
any decent introductory book on model theory.

-- 
Aatu Koskensilta (················@xortec.fi)

"Wovon man nicht sprechen kann, darüber muss man schweigen"
  - Ludwig Wittgenstein, Tractatus Logico-Philosophicus
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3llr2ve3h.fsf@rigel.goldenthreadtech.com>
Ray Blaak <········@STRIPCAPStelus.net> writes:

> Do you have a different meaning of "true" and "false"? I mean only "is a
> theorem" or "is not a theorem".

If it is a theorem, then it is "true", i.e., there is a model for the
axioms and the derivation rules preserve truth, and the sentence for
the theorem follows from the axioms using the rules.

If it is not a theorem (it cannot be derived), then it may yet be
true, i.e., there is a model of the axioms which also has the sentence
in question true.  Actually, providing such a model is about the only
way to show that the thing is not a theorem.  Of course, it could also
be false, i.e., there is a model of the axioms where the sentence is
false.


/Jon
From: Torkel Franzen
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <vcb4qxqpplp.fsf@beta13.sm.luth.se>
·········@rcn.com (Jon S. Anthony) writes:

> If it is a theorem, then it is "true", i.e., there is a model for the
> axioms and the derivation rules preserve truth, and the sentence for
> the theorem follows from the axioms using the rules.

  What axioms? 

> If it is not a theorem (it cannot be derived), then it may yet be
> true, i.e., there is a model of the axioms which also has the sentence
> in question true. 

  So "PA is inconsistent" is true?
From: Aatu Koskensilta
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ClMnb.346$4h4.149@reader1.news.jippii.net>
Ray Blaak wrote:

> The clearest, simplest, most elegant presentation of Goedel's result I have
> seen is called "Beautifying Gödel", by Eric C.R. Hehner (my old supervisor in
> fact) at http://www.cs.toronto.edu/~hehner/God.pdf

There is at least one error in this presentation. Gödel's second 
incompleteness theorem does not say that

"Gödel's First Incompleteness Theorem says that a particular theory, if 
consistent, is incomplete. Its interest comes from the effort that was 
spent trying to make that theory complete. When a sentence is discovered 
that is neither a theorem nor an antitheorem, it can be made either one 
of those, at our choice, by adding an axiom. Gödel's Second 
Incompleteness Theorem says that this process of adding axioms can never 
make the theory complete (and still consistent)."

but that a theory satisfying the Hilbert-Bernays derivability conditions 
can't prove its own consistency (or, actually, it can't prove any 
sentence which is provably equivalent to the canonical consistency 
statement for the theory - a theory might prove a statement that is in 
reality equivalent to its own consistency).

-- 
Aatu Koskensilta (················@xortec.fi)

"Wovon man nicht sprechen kann, darüber muss man schweigen"
  - Ludwig Wittgenstein, Tractatus Logico-Philosophicus
From: Ray Blaak
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uhe1rjfsh.fsf@STRIPCAPStelus.net>
Aatu Koskensilta <················@xortec.fi> writes:
> Ray Blaak wrote:
> > The clearest, simplest, most elegant presentation of Goedel's result 
> > I have seen is called "Beautifying G�del", by Eric C.R. Hehner 
> > (my old supervisor in fact) at http://www.cs.toronto.edu/~hehner/God.pdf
> 
> There is at least one error in this presentation. Gödel's second
> incompleteness theorem does not say that [you can't fix things by adding
> axioms] but that [a theory cannot prove its own consistency]

Hmm. Upon digging a bit I am running into discussions of Löb's theorem and
Kreisel's proof of *that*, which shows that adding axioms still gives trouble.

So perhaps the paper was stating an indirect result. Still, you are right,
that's not what Gödel was directly saying.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Torkel Franzen
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <vcbn0bjyv4t.fsf@beta13.sm.luth.se>
Ray Blaak <········@STRIPCAPStelus.net> writes:

 > So perhaps the paper was stating an indirect result. Still, you are right,
 > that's not what Gödel was directly saying.

  It's not a matter of what Gödel was or was not "directly saying".
Hehner's formulation of the second incompleteness theorem is a mistake
of the same order as saying that the fundamental theorem of arithmetic
states that there are infinitely many primes.
From: Aatu Koskensilta
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <7_4ob.28$Ve1.26@reader1.news.jippii.net>
Ray Blaak wrote:

> Aatu Koskensilta <················@xortec.fi> writes:
> 
>>Ray Blaak wrote:
>>
>>>The clearest, simplest, most elegant presentation of Goedel's result 
>>>I have seen is called "Beautifying Gödel", by Eric C.R. Hehner 
>>>(my old supervisor in fact) at http://www.cs.toronto.edu/~hehner/God.pdf
>>
>>There is at least one error in this presentation. Gödel's second
>>incompleteness theorem does not say that [you can't fix things by adding
>>axioms] but that [a theory cannot prove its own consistency]
> 
> 
> Hmm. Upon digging a bit I am running into discussions of Löb's theorem and
> Kreisel's proof of *that*, which shows that adding axioms still gives trouble.

There's no need to consider Löb's theorem or invoke Kreisel here. It's 
well known that PA (and very weak subtheories thereof) is essentially 
incomplete, i.e. any consistent axiomatisable extension is incomplete.

Also, in the article Hehner is, perhaps unknowingly, flirting with truth 
predicates - which might explain some of the misconceptions - with his 
"interpreting" function I, which is essentially a minimal partial truth 
predicate. If one wants to go in this direction, it would be much better 
not to confuse people with eccentric expositions and instead provide 
references to the literature on partial truth predicates.

-- 
Aatu Koskensilta (················@xortec.fi)

"Wovon man nicht sprechen kann, darüber muss man schweigen"
  - Ludwig Wittgenstein, Tractatus Logico-Philosophicus
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3n0blwdkp.fsf@rigel.goldenthreadtech.com>
Ray Blaak <········@STRIPCAPStelus.net> writes:

> The notion of "a correct program necessarily has a proof" is
> actually rather vacuous. In the context of being formally correct, a
> correct program is only correct if it can be shown that the
> statement describing the program can be ultimately derived from the
> formalism's axioms. But that derivation is exactly what a formal
> proof is.
> 
> That is, when doing formal proofs, programs are not "absolutely
> correct", but only "formally correct".
> 
> In other words, "a correct program necessarily has a proof" means
> exactly "a program with a proof necessarily has a proof". Not a very
> useful fact.
> 
> When a program is being initially written, the programmer does not
> know it is correct, so even if there is a proof they still have to
> take the trouble to define/discover it.

This is a good synopsis of the issue as I see it as well.  Thanks.


> That we do this is a useful human activity and research into the
> kinds of formalisms we need.
> 
> The axioms we introduce, however, are subject to considerable
> debate, such that "we" decide that they correspond to things we
> observe, need, and can implement on physical devices.

Absolutely.  Also well said.


/Jon
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <znflbh0i.fsf@comcast.net>
Ray Blaak <········@STRIPCAPStelus.net> writes:

> Since I am trying to be conservative and avoid the philosophical debate of truth,
> I meant exactly what I said and no more:  there are statements that can't be
> proved in a given formalism.

Just to save you a headache when someone tries to argue this, you need
a sufficiently powerful formalism.  There are trivial formalisms for
which all true statements are provable, but they aren't turing
complete (or interesting).
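Classical propositional logic is one such formalism: tautologousness is decidable outright by truth tables, and by completeness every tautology is provable. A quick Python sketch of the brute-force decision procedure (an illustration added here, not from the original post):

```python
from itertools import product

def is_tautology(formula, names):
    # Decide a propositional formula by exhaustive truth tables.
    # In propositional logic, "true" (tautologous) and "provable"
    # coincide, and both are decidable -- unlike in arithmetic.
    return all(formula(*vals)
               for vals in product([False, True], repeat=len(names)))

print(is_tautology(lambda p: p or not p, "p"))        # excluded middle
print(is_tautology(lambda p, q: p or q, "pq"))        # not a tautology
```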
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnaqdv$v7g$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:

> No, my claim is: For every correct program written by a human there is
> a correctness proof.  In other words, I find it unlikely that someone
> writes a correct program, but there actually is no such proof.  People
> do reason about the programs they write, and usually they are not too
> far off from the truth -- especially if they actually got the code
> right.

We are getting at the heart of the issue: What does "there is" mean?

Either it means that something exists in principle without necessarily 
existing in reality.

Or it means that something exists in reality.

The first variant is an idealistic point of view, the second is a 
materialistic point of view, in a philosophical sense.

The materialistic point of view means that something cannot exist only 
in principle.

Both points of view cannot be proven. You have to believe either the 
one or the other. This means that it is an irrational choice by definition.

(How do you "prove" that something exists in principle? By transforming 
it into reality. But then it doesn't exist only in principle anymore.)

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m34qxy3a0z.fsf@rigel.goldenthreadtech.com>
Matthias Blume <····@my.address.elsewhere> writes:

> ·········@rcn.com (Jon S. Anthony) writes:
> 
> > You can't be serious.  Even if we take your premise as true (that she
> > _thinks_ she has a proof) this in absolutely no way implies that she
> > does and even less that such a proof exists.  Let's see... I _think_ I
> > have a proof (in my head) that you are completely clueless wrt this
> > topic, therefore such a proof "obviously" exists and could be written
> > down.  Yep, makes real good sense.
> 
> No, my claim is: For every correct program written by a human there is
> a correctness proof.

Well, that is _not_ what you _said_.  For someone so concerned about
correctness, you should pay a little more attention to saying what you
_mean_.

> In other words, I find it unlikely that someone writes a correct
> program, but there actually is no such proof.  People do reason

What's this have to do with "Joe wrote some code, so Joe 'had a proof
in his head'"????


> Your attempt at insulting me is cute, but it has little to do with
> what I said.

Sorry - it has _everything_ to do with what you _said_.

/Jon
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1znfq1tgz.fsf@tti5.uchicago.edu>
·········@rcn.com (Jon S. Anthony) writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > ·········@rcn.com (Jon S. Anthony) writes:
> > 
> > > You can't be serious.  Even if we take your premise as true (that she
> > > _thinks_ she has a proof) this in absolutely no way implies that she
> > > does and even less that such a proof exists.  Let's see... I _think_ I
> > > have a proof (in my head) that you are completely clueless wrt this
> > > topic, therefore such a proof "obviously" exists and could be written
> > > down.  Yep, makes real good sense.
> > 
> > No, my claim is: For every correct program written by a human there is
> > a correctness proof.
> 
> Well, that is _not_ what you _said_.  For someone so concerned about
> correctness, you should pay a little more attention to saying what you
> _mean_.

Right.  Let me quote myself verbatim:

  "I said that the programmer has a proof in her head. (At least she
  thinks she does.)  My point was that since she has a proof, the
  proof obviously *exists* and *could* be written down and *could* be
  statically verified if one only went to the trouble of doing so."

Now, what I should have added is that, of course, the verification
might fail -- indicating that that programmer was wrong thinking she
had a proof.  It is my belief that in those cases where the program
actually is correct it will be possible to either verify the proof
outright, or to fix whatever problems there are with it to make it go
through.  I strongly believe that there are virtually no correct
programs written by humans where this technique must fail.  (And in
those cases where it does, we wouldn't be able to find out that this
is in fact so.)

> > In other words, I find it unlikely that someone writes a correct
> > program, but there actually is no such proof.  People do reason
> 
> What's this have to do with "Joe wrote some code, so Joe 'had a proof
> in his head'"????

The following: The only way that the above could be false is that two
conditions are met:

  - Joe writes a correct program.
  - There is no proof for the correctness of that program (in the sense
    of "there is no such proof now and it is not possible for anyone
    to produce such a proof in the future").

I find this extremely unlikely because I believe that Joe already had
the sketch of the proof in his head when he wrote his correct program.
That sketch could be made into a full proof (by fleshing it out and
possibly by correcting a few non-fatal problems that it might have).

> > Your attempt at insulting me is cute, but it has little to do with
> > what I said.
> 
> Sorry - it has _everything_ to do with what you _said_.

Well, in a way, yes, you are right.  It shows that whenever you are
not absolutely precise, some smartass will come along and poke holes
into your argumentation, be it just for the fun of it or in the
pursuit of more serious agendas.  The software analogy for this is
that whenever you have not made absolutely sure that there are no weak
points in your program, someone will come along and exploit them.  I
find this a very compelling reason to take advantage of static
verification whenever possible.

Matthias
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2410031017430001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@tti5.uchicago.edu>, Matthias Blume
<····@my.address.elsewhere> wrote:

> Well, in a way, yes, you are right.  It shows that whenever you are
> not absolutely precise, some smartass will come along and poke holes
> into your argumentation, be it just for the fun of it or in the
> pursuit of more serious agendas.  The software analogy for this is
> that whenever you have not made absolutely sure that there are no weak
> points in your program, someone will come along and exploit them.  I
> find this a very compelling reason to take advantage of static
> verification whenever possible.

Do you not see the irony here?  When people who like dynamic typing try to
use a statically typed language they often feel like they are engaging in a
dialog with the compiler that is just as frustrating and pointless as the
one you are having with j-anthony et al.  All he is doing is
attempting to hold you to the standards of logic and precision that you
yourself have advocated.

E.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1wuau3440.fsf@tti5.uchicago.edu>
·················@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@tti5.uchicago.edu>, Matthias Blume
> <····@my.address.elsewhere> wrote:
> 
> > Well, in a way, yes, you are right.  It shows that whenever you are
> > not absolutely precise, some smartass will come along and poke holes
> > into your argumentation, be it just for the fun of it or in the
> > pursuit of more serious agendas.  The software analogy for this is
> > that whenever you have not made absolutely sure that there are no weak
> > points in your program, someone will come along and exploit them.  I
> > find this a very compelling reason to take advantage of static
> > verification whenever possible.
> 
> Do you not see the irony here?  When people who like dynamic typing try to
> use a statically typed language they often feel like they are engaging in a
> dialog with the compiler that is just as frustrating and pointless as the
> one you are having with j-anthony et al.  All he is doing is
> attempting to hold you to the standards of logic and precision that you
> yourself have advocated.

Except I wasn't frustrated.  In fact, he made my point, so how could I
be? :-)

Matthias

PS: The only frustrating thing is that some people get into
name-calling.  My compiler never does things like: "You can't use a
string argument with an int->int function, jackass!" or "I think you
are clueless.  Otherwise how come you try to add a function to a
number?"  Anyway, I can read past such syntactic noise, so I'm not as
frustrated as you might think.
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3he1y1mk3.fsf@rigel.goldenthreadtech.com>
Matthias Blume <····@my.address.elsewhere> writes:

> PS: The only frustrating thing is that some people get into
> name-calling.  My compiler never does things like: "You can't use a
> string argument with an int->int function, jackass!" or "I think you
> are clueless.  Otherwise how come you try to add a function to a
> number?"

You know, something like that might actually make these things kinda
fun and so more usable. :-|

/Jon
From: Paolo Amoroso
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <878ynae7is.fsf@plato.moon.paoloamoroso.it>
Matthias Blume writes:

> PS: The only frustrating thing is that some people get into
> name-calling.  My compiler never does things like: "You can't use a
> string argument with an int->int function, jackass!" or "I think you
> are clueless.  Otherwise how come you try to add a function to a

But maybe your compiler does call-by-name.


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2410031036230001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@tti5.uchicago.edu>, Matthias Blume
<····@my.address.elsewhere> wrote:

>   "I said that the programmer has a proof in her head. (At least she
>   thinks she does.)  My point was that since she has a proof, the
>   proof obviously *exists* and *could* be written down and *could* be
>   statically verified if one only went to the trouble of doing so."

I wrote myself a spam filter.  At no time did I have a proof in my head
that it is "correct".  In fact, I am quite certain that it is *not*
"correct" for any reasonable definition of "correct".  It is nonetheless
useful.  (In fact, it is indispensable.  I'm up to 400-500 spams a day now
with a growth rate that seems to be following Moore's law pretty closely. 
Very scary.)

So there is a counterexample to your theory.

It gets even worse than that.  On your view, if someone asked you to write
a spam filter your response would be to demand that they first precisely
define for you what a spam is.  But producing that precise definition is
the hard part.  Once you have a precise definition of spam in hand
rendering that definition into code is trivial.  (Spam detection, by the
way, is precisely analogous to aesthetic typesetting in that there are
some universal principles that one can apply despite the fact that
individual opinions on what is and is not spam will vary.)

Many - perhaps most - interesting programming problems are like that.  The
heavy lifting is in producing the spec, not rendering the spec into code. 
For those kinds of problems enforced static typing is often more of a
hindrance than a help because it prohibits you from discovering certain
kinds of problems (the ones that show up at run time) until you have
resolved *all* of the instances of another kind of problem, whether those
are relevant to the problem at hand or not.

E.
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3u15y1rcn.fsf@rigel.goldenthreadtech.com>
Matthias Blume <····@my.address.elsewhere> writes:

> ·········@rcn.com (Jon S. Anthony) writes:
> 
> > Matthias Blume <····@my.address.elsewhere> writes:
> > 
> > > ·········@rcn.com (Jon S. Anthony) writes:
> > > 
> > > > You can't be serious.  Even if we take your premise as true (that she
> > > > _thinks_ she has a proof) this in absolutely no way implies that she
> > > > does and even less that such a proof exists.  Let's see... I _think_ I
> > > > have a proof (in my head) that you are completely clueless wrt this
> > > > topic, therefore such a proof "obviously" exists and could be written
> > > > down.  Yep, makes real good sense.
> > > 
> > > No, my claim is: For every correct program written by a human there is
> > > a correctness proof.
> > 
> > Well, that is _not_ what you _said_.  For someone so concerned about
> > correctness, you should pay a little more attention to saying what you
> > _mean_.
> 
> Right.  Let me quote myself verbatim:
> 
>   "I said that the programmer has a proof in her head. (At least she
>   thinks she does.)  My point was that since she has a proof, the
>   proof obviously *exists* and *could* be written down and *could* be
>   statically verified if one only went to the trouble of doing so."
> 
> Now, what I should have added is that, of course, the verification
> might fail -- indicating that that programmer was wrong thinking she
> had a proof.

This still indicates that you really believe that she "had a proof in
her head" at the time the she wrote the code.  I maintain that there
is absolutely no evidence for such a remarkable belief.

When you trot out the term "proof", especially in some formal
mathematical context as you do here, it has some pretty specific
meaning which really involves a level of verification.  Probably by
peers (as much as yourself) equally (or more) adept at dealing with
the reasoning and concepts involved.

>  It is my belief that in those cases where the program actually is
> correct it will be possible to either verify the proof outright, or
> to fix whatever problems there are with it to make it go through.

This is basically the halting problem.  I don't see how it in any way
helps your case.


> > What's this have to do with "Joe wrote some code, so Joe 'had a proof
> > in his head'"????
> 
> The following: The only way that the above could be false is that two
> conditions are met:
> 
>   - Joe writes a correct program.
>   - There is no proof for the correctness of that program (in the sense
>     of "there is no such proof now and it is not possible for anyone
>     to produce such a proof in the future").

There is a third (and extremely obvious) way in which it could be
false.  Joe did _not_ have a proof in his head when he wrote the code.
The fact that he reasoned various details through and _believed_ the
code was/is correct in no way, shape, or form indicates 1) that he had a
proof, 2) that there is a proof, or even 3) that he _thought_ he
had a proof.


> I find this extremely unlikely because I believe that Joe already had
> the sketch of the proof in his head when he wrote his correct program.

First a _sketch_ is not a proof.  Second, your belief here is truly
remarkable.


> pursuit of more serious agendas.  The software analogy for this is
> that whenever you have not made absolutely sure that there are no weak
> points in your program, someone will come along and exploit them.  I
> find this a very compelling reason to take advantage of static
> verification whenever possible.

Fair enough.


/Jon
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1ad7q4kxk.fsf@tti5.uchicago.edu>
·········@rcn.com (Jon S. Anthony) writes:

> > I find this extremely unlikely because I believe that Joe already had
> > the sketch of the proof in his head when he wrote his correct program.
> 
> [...]  Second, your belief here is truly remarkable.

What I find remarkable in this discussion is that anyone would find
this belief of mine remarkable.

Cheers,
Matthias
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3llra1quw.fsf@rigel.goldenthreadtech.com>
Matthias Blume <····@my.address.elsewhere> writes:

> ·········@rcn.com (Jon S. Anthony) writes:
> 
> > > I find this extremely unlikely because I believe that Joe already had
> > > the sketch of the proof in his head when he wrote his correct program.
> > 
> > [...]  Second, your belief here is truly remarkable.
> 
> What I find remarkable in this discussion is that anyone would find
> this belief of mine remarkable.

Wow.  That's even _more_ remarkable.

/Jon
From: Thant Tessman
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnc4lh$pcr$1@terabinaries.xmission.com>
Jon S. Anthony wrote:

>>Right.  Let me quote myself verbatim:
>>
>>  "I said that the programmer has a proof in her head. (At least she
>>  thinks she does.)  My point was that since she has a proof, the
>>  proof obviously *exists* and *could* be written down and *could* be
>>  statically verified if one only went to the trouble of doing so."
>>
>>Now, what I should have added is that, of course, the verification
>>might fail -- indicating that that programmer was wrong thinking she
>>had a proof.
> 
> 
> This still indicates that you really believe that she "had a proof in
> her head" at the time the she wrote the code.  I maintain that there
> is absolutely no evidence for such a remarkable belief.

I have actually had the bad fortune to work with what I call 
"cut-and-paste" computer programmers. You look at their work, and you 
get the strong suspicion that they have no genuine understanding *why* 
their programs work. They somehow program through imitation and 
experimentation, as if programming really was merely a matter of getting 
the incantation right. I'm always stunned when these kinds of programmers 
get anything working, but they do--at least enough to keep them employed 
in such numbers that I've seen more than one of them.

I think these are the programmers that *don't* have the "proof" in their 
head that Matthias is referring to.

[...]

-thant
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1k76u2v50.fsf@tti5.uchicago.edu>
Thant Tessman <·····@acm.org> writes:

> I have actually had the bad fortune to work with what I call
> "cut-and-paste" computer programmers. You look at their work, and you
> get the strong suspicion that they have no genuine understanding *why*
> their programs work. They somehow program through imitation and
> experimentation, as if programming really was merely a matter of
> getting the incantation right. I'm always stunned when these kind of
> programmers get anything working, but they do--at least enough to keep
> them employed in such numbers that I've seen more than one of them.
> 
> 
> I think these are the programmers that *don't* have the "proof" in
> their head that Matthias is referring to.

Dang!  And here I was saying they don't exist.  But surely you must
forgive me for saying they ought to be fired...  <*duck*>

Matthias
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2410031526250001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@tti5.uchicago.edu>, Matthias Blume
<····@my.address.elsewhere> wrote:

> Thant Tessman <·····@acm.org> writes:
> 
> > I have actually had the bad fortune to work with what I call
> > "cut-and-paste" computer programmers. You look at their work, and you
> > get the strong suspicion that they have no genuine understanding *why*
> > their programs work. They somehow program through imitation and
> > experimentation, as if programming really was merely a matter of
> > getting the incantation right. I'm always stunned when these kind of
> > programmers get anything working, but they do--at least enough to keep
> > them employed in such numbers that I've seen more than one of them.
> > 
> > 
> > I think these are the programmers that *don't* have the "proof" in
> > their head that Matthias is referring to.
> 
> Dang!  And here I was saying they don't exist.  But surely you must
> forgive me for saying they ought to be fired...  <*duck*>

I surely must not.  My current job is to take a program written by someone
else and port it to a different environment.  The original program is a
big hairball.  I have no idea how or why (or even if) it works.  For a
while I tried "proving" that all the changes I was making preserved the
original semantics, but that turned out to be a long hard slog (as Donald
Rumsfeld might put it).  So what I did instead was to generate a canonical
test case.  Now when I make a change I compare the output of the new
program to that of the original.  If they match I assume the two programs
do the same thing.  Now I can rip out huge chunks of code, rearrange
things to make them cleaner, etc. without having to think very much.  I
can just try them and see if they work.  Progress has been sped up by
orders of magnitude.  At the end of the day (well, the month, or maybe the
fiscal year) I will have written a program that "works" for some
reasonable definition of working, but I won't have the foggiest clue how
it does what it does.  (That's not quite true.  I actually do have a foggy
clue, but I could actually follow my current methodology without it.)
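[The methodology described above is what testers call a golden-master or
characterization test.  A minimal Python sketch, where the command names
are hypothetical stand-ins for the original and ported programs:]

```python
# Golden-master comparison: the ported program is judged by whether it
# reproduces the original program's output on a canonical test input.
import subprocess

def output_of(cmd, input_text):
    """Run a command, feed it input_text on stdin, return its stdout."""
    result = subprocess.run(cmd, input=input_text, capture_output=True,
                            text=True, check=True)
    return result.stdout

def matches_canonical(original_cmd, ported_cmd, test_input):
    """True iff the ported program reproduces the original's output."""
    return output_of(original_cmd, test_input) == \
           output_of(ported_cmd, test_input)
```

No claim is made that the outputs matching implies the programs are
equivalent in general; it only says they agree on the canonical case,
which is exactly the weaker standard of "correctness" being described.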

Point is, abandoning having a proof in my head was the only way to get the
job done in a reasonable amount of time.  Fortunately, my management is
more enlightened than you seem to be and I am still employed (for the
moment).

E.
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bne06i$94l$1@news.oberberg.net>
Erann Gat wrote:
> Now I can rip out huge chunks of code, rearrange
> things to make them cleaner, etc. without having to think very much.
 > [...]
> 
> Point is, abandoning having a proof in my head was the only way to get the
> job done in a reasonable amount of time.  Fortunately, my management is
> more enlightened than you seem to be and I am still employed (for the
> moment).

But you have a "proof" in your head about the code you're authoring 
yourself, I'd guess. (And even the author of the original hairball 
should have had one - well, hairballs and clueless programmers often go 
together, and I'm pretty sure that the programmer who wrote that 
hairball isn't with the company anymore...)

Regards,
Jo
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2510030841030001@192.168.1.51>
In article <············@news.oberberg.net>, Joachim Durchholz
<·················@web.de> wrote:

> Erann Gat wrote:
> > Now I can rip out huge chunks of code, rearrange
> > things to make them cleaner, etc. without having to think very much.
>  > [...]
> > 
> > Point is, abandoning having a proof in my head was the only way to get the
> > job done in a reasonable amount of time.  Fortunately, my management is
> > more enlightened than you seem to be and I am still employed (for the
> > moment).
> 
> But you have a "proof" in your head about the code you're authoring 
> yourself, I'd guess.

Actually, no.  I've brought up the example of my spam filter before, which
I authored without having a proof of correctness in my head, but rather
using the same sort of generate-and-test method.

Certainly there are examples of people (perhaps even me :-) writing code
with a proof in their head, but the claim that this is universally true,
or that it is necessary (or even desirable) for producing useful code is
demonstrably false.

E.
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnelt2$ina$1@news.oberberg.net>
Erann Gat wrote:

> Joachim Durchholz <·················@web.de> wrote:
> 
>>Erann Gat wrote:
>>
>>>Now I can rip out huge chunks of code, rearrange
>>>things to make them cleaner, etc. without having to think very much.
>>
>> > [...]
>>
>>>Point is, abandoning having a proof in my head was the only way to get the
>>>job done in a reasonable amount of time.  Fortunately, my management is
>>>more enlightened than you seem to be and I am still employed (for the
>>>moment).
>>
>>But you have a "proof" in your head about the code you're authoring 
>>yourself, I'd guess.
> 
> Actually, no.  I've brought up the example of my spam filter before, which
> I authored without having a proof of correctness in my head, but rather
> using the same sort of generate-and-test method.
> 
> Certainly there are examples of people (perhaps even me :-) writing code
> with a proof in their head, but the claim that this is universally true,
> or that it is necessary (or even desirable) for producing useful code is
> demonstrably false.

You can't explain why your code works? (and a program proof is nothing 
but "an explanation why the code works")
Come on. I bet you have a good idea of what your code is doing.
I'll grant you that you don't know exactly how the end results of every 
single run come - but that is not the point, the point is that you know 
what the code is doing, and that it is doing what you intended it to do.

That you don't have a formal definition of what is spam means that there 
is no global correctness proof for your code since there is nothing to 
prove against - but that doesn't mean that you don't have ideas about 
particular aspects and algorithms in your code!

Regards,
Jo
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2510031814310001@192.168.1.51>
In article <············@news.oberberg.net>, Joachim Durchholz
<·················@web.de> wrote:

> You can't explain why your code works?

That depends on what level of explanation you are willing to accept.  Of
course I can explain the operation of my spam filter at some level, but at
the heart of the thing is this horrifically complex regexp that has
evolved (literally) over time and that I no longer have a complete grip
on.  Nevertheless, it seems to work.

> Come on. I bet you have a good idea of what your code is doing.

Not one that I could reduce to a formal proof of "correctness" (whatever
that means when the artifact under discussion is a spam filter) even in
principle.

> I'll grant you that you don't know exactly how the end results of every 
> single run come - but that is not the point, the point is that you know 
> what the code is doing, and that it is doing what you intended it to do.

Actually there is behavior of the filter that is a complete mystery to
me.  Today, for example, I got two apparently identical messages.  One was
marked as spam, the other one was not (a false negative as it turns out). 
I have no idea why.  Of course, I could probably look into it and figure
it out, but at the moment I haven't a clue what is going on.

> That you don't have a formal definition of what is spam means that there 
> is no global correctness proof for your code since there is nothing to 
> prove against -

Yes, that is one of the points I am trying to make with this example.

> but that doesn't mean that you don't have ideas about 
> particular aspects and algorithms in your code!

Of course I have ideas.  That's a far cry from having something that could
be reduced to a formal proof of correctness.

E.
From: Daniel C. Wang
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uhe1wbtmp.fsf@hotmail.com>
·················@jpl.nasa.gov (Erann Gat) writes:
{stuff deleted}
> Actually there is behavior of the filter that is a complete mystery to
> me.  Today, for example, I got two apparently identical messages.  One was
> marked as spam, the other one was not (a false negative as it turns out). 
> I have no idea why.  Of course, I could probably look into it and figure
> it out, but at the moment I haven't a clue what is going on.
> 
> > That you don't have a formal definition of what is spam means that there 
> > is no global correctness proof for your code since there is nothing to 
> > prove against -
> 
> Yes, that is one of the points I am trying to make with this example.
> 
> > but that doesn't mean that you don't have ideas about 
> > particular aspects and algorithms in your code!
> 
> Of course I have ideas.  That's a far cry from having something that could
> be reduced to a formal proof of correctness.

Depends on what property you are trying to prove. I sure hope you have some
rigorous informal argument that whatever black magic goes on inside, it
doesn't eat your incoming mail and send it to /dev/null. I'm sure you have
some sort of informal reason to suspect your program is "correct" in this
sense. I suspect making that argument into a formal proof is more tractable
than you think. 
From: Don Geddis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87znfn3obi.fsf@sidious.geddis.org>
·········@hotmail.com (Daniel C. Wang) writes:
> Depends on what property you are trying to prove. I sure hope you have some
> rigrous informal argument that whatever black magic goes on inside, it
> doesn't eat you incoming mail and send it to /dev/null.

Of course there exist _some_ properties that could be proved about the code.
Nobody was claiming that _nothing_ was provable about the code.

The whole point was that the _interesting_ properties (e.g. "correctness")
aren't provable.

> I'm sure you have some sort of informal reason to suspect your program is
> "correct" in this sense. I suspect making that argument into a formal proof
> is more tractable then you think.

Nobody cares about this particular topic.  What he wanted to know was whether
his code correctly categorizes incoming email into spam/ham.  And it's a great
example, because formal proof methods aren't helpful (even in principle).

The fact that you can make up other properties which might be provable is a
useless red herring.

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
From: Daniel C. Wang
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ufzhfa26l.fsf@hotmail.com>
Don Geddis <···@geddis.org> writes:

> ·········@hotmail.com (Daniel C. Wang) writes:
> > Depends on what property you are trying to prove. I sure hope you have some
> > rigrous informal argument that whatever black magic goes on inside, it
> > doesn't eat you incoming mail and send it to /dev/null.
> 
> Of course there exist _some_ properties that could be proved about the code.
> Nobody was claiming that _nothing_ was provable about the code.
> 
> The whole point was that the _interesting_ properties (e.g. "correctness")
> aren't provable.
> 
> > I'm sure you have some sort of informal reason to suspect your program is
> > "correct" in this sense. I suspect making that argument into a formal proof
> > is more tractable then you think.
> 
> Nobody cares about this particular topic.  What he wanted to know was whether
> his code correctly categorizes incoming email into spam/ham.  And it's a great
> example, because formal proof methods aren't helpful (even in principle).
> 
> The fact that you can make up other properties which might be provable is a
> useless red herring.

If the property "it is spam" is well defined, and you can give me a decision
procedure for determining what is and isn't spam, then you've just given me
a definition for it which I can formalize if you like. If you don't have a
decision procedure then it isn't a well-defined idea. It's not clear to me
if "spam" is a well-defined idea or not. 

This is true for any property. 
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnhuih$5cr$1@newsreader2.netcologne.de>
Daniel C. Wang wrote:

> If the property "it is spam" is well defined, and you can give me a decision
> procedure for determining what is spam and isn't spam. Then you've just
> given me a definition for it which I can formalize if you like. If you don't
> have a decision procedure than it isn't a well defined idea. It's not clear
> to me if "spam" is a well defined idea or not. 
> 
> This is true for any property. 

No. The "spam property" is not well defined. It depends largely on 
accidental personal preferences. However, there are algorithms available 
that can decide whether a mail is spam or not. See 
http://www.paulgraham.com/stopspam.html

The kicker really is that you cannot formalize the "spam property" but 
you can still effectively automate spam filtering.
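[A toy sketch of the per-token scoring in the spirit of Graham's article;
the counts and corpus sizes are hypothetical, and a real filter adds
tokenizing, combining the top token probabilities, and retraining:]

```python
# Per-token spam probability, Graham-style: compare how often a token
# appears in the spam corpus vs. the ham corpus, clamp to [0.01, 0.99].
def token_spam_probability(token, spam_counts, ham_counts,
                           n_spam_msgs, n_ham_msgs):
    s = spam_counts.get(token, 0) / max(n_spam_msgs, 1)
    # Graham doubles the ham count to bias against false positives.
    h = 2 * ham_counts.get(token, 0) / max(n_ham_msgs, 1)
    if s + h == 0:
        return 0.4  # Graham's default for never-seen tokens
    return max(0.01, min(0.99, s / (s + h)))
```

Which illustrates the point: every line is simple arithmetic, yet nothing
here formalizes "spam" -- the statistics stand in for the missing
definition.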


Pascal
From: Don Geddis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <8765iazgeu.fsf@sidious.geddis.org>
·········@hotmail.com (Daniel C. Wang) writes:
> If the property "it is spam" is well defined, and you can give me a decision
> procedure for determining what is spam and isn't spam. Then you've just
> given me a definition for it which I can formalize if you like. If you don't
> have a decision procedure than it isn't a well defined idea. It's not clear
> to me if "spam" is a well defined idea or not. 

In some sense spam is well-defined, but still not formalizable.  For any
given individual, the person himself can function as an "oracle", and he can
easily and consistently categorize any given incoming email into spam vs. ham.

The programming task is as follows: given a training set of incoming email that
the oracle has already correctly categorized, construct an algorithm that will
categorize new previously-unseen email in the same way that the oracle's
(unknown) function will.

It's very easy to precisely find just how close you've come.  For any
proposed algorithm, we can run a new testing set of previously-unseen email,
and compare the classification produced by your algorithm to the one produced
by the oracle.  Presumably, you'll have some false positives and false
negatives.  If your algorithm produces _exactly_ the same classifications as
the oracle, then your work is done!
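[The evaluation procedure just described is easy to make concrete; a
sketch, where the classifier and the oracle's labels are hypothetical:]

```python
# Score a candidate classifier against the oracle's labels on a
# held-out test set: the fraction of messages where they agree.
def agreement_with_oracle(classifier, messages, oracle_labels):
    hits = sum(1 for msg, label in zip(messages, oracle_labels)
               if classifier(msg) == label)
    return hits / len(messages)
```

The spec's operational character shows up here: the score is computable
only by running the black box, not by reasoning about the classifier's
text.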

That seems to me to be a perfectly well-formed specification.  Sadly, the spec
has an operational component (evaluation of correctness requires running a
black-box test), so it's tough to reason about it.

But there's no confusion at all about what it means, or whether a given
algorithm satisfies the spec, or even between two potential algorithms, which
is better.

Now that you understand the problem, please tell me how automatic verification
methods are going to add to customer confidence in the delivered code.

        -- Don
From: Daniel C. Wang
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <u1xsy9llc.fsf@hotmail.com>
Don Geddis <···@geddis.org> writes:
{stuff deleted}
> The programming task is as follows: given a training set of incoming email that
> the oracle has already correctly categorized, construct an algorithm that will
> categorize new previously-unseen email in the same way that the oracle's
> (unknown) function will.
> 
> It's very easy to precisely find just how close you've come.  For any
> proposed algorithm, we can run a new testing set of previously-unseen email,
> and compare the classification produced by your algorithm to the one produced
> by the oracle.  Presumably, you'll have some false positives and false
> negatives.  If your algorithm produces _exactly_ the same classifications as
> the oracle, then your work is done!
> 
> That seems to me to be a perfectly well-formed specification.  Sadly, the spec
> has an operational component (evaluation of correctness requires running a
> black-box test), so it's tough to reason about it.

Given that specification, the only possibly correct implementation of a spam
filter is a dialog box that asks the user whether each new email is spam.
You should be able to prove that claim formally with a bit of work.

> But there's no confusion at all about what it means, or whether a given
> algorithm satisfies the spec, or even between two potential algorithms, which
> is better.
> 
> Now that you understand the problem, please tell me how automatic verification
> methods are going to add to customer confidence in the delivered code.

It should tell your customer that what they want is probably not what they
really wanted, and you have to renegotiate a more reasonable specification.
From: Don Geddis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87ekwxw7jp.fsf@sidious.geddis.org>
I wrote:
> > The programming task is as follows: given a training set of incoming
> > email that the oracle has already correctly categorized, construct an
> > algorithm that will categorize new previously-unseen email in the same
> > way that the oracle's (unknown) function will.

·········@hotmail.com (Daniel C. Wang) writes:
> Given that specification the only possibly correct implementation of a spam
> filter is a dialog box that asks the users if the email is spam or not for
> every new email. You should be able to prove that claim formally with a bit
> of work.

No, you're incorrect.  Your answer shows that you understand neither science
nor programming.

Sure, the oracle _might_ implement any arbitrary function, and possibly the
past is no guide at all to the future.  This is exactly the same for the task
of wondering whether the sun will rise tomorrow, and apples fall to the ground.
It isn't logically implied by what we've observed in the past, yet Occam's
razor suggests trying the simplest explanation first.

Machine learning, automatic classification, abduction...you've thrown out whole
specialties of computer science by claiming "the only possible" implementation
is to ask the oracle for each new email.

In fact, what is likely the case is that the oracle is running a relatively
simple function (relative to all possible functions over the space).  There
is likely a lot of structure in oracle's function, and a clever enough program
(or person) can discover that structure.

Among other things, I'd be willing to bet that if you gave me, a human being,
1000 examples of spam/ham classification for some random individual, I could
correctly classify the next 1000 emails for this same individual with great
accuracy (99%?).

In particular: whether you can write a program to be 100% accurate or not for
this spec depends greatly on what (hidden) classification function a given
individual is using for spam/ham partitioning.  Some functions will be
learnable; others won't.  You certainly can't formally prove that the _only_
correct implementation is to ask the oracle each time.  There may be other
correct implementations.

> > Now that you understand the problem, please tell me how automatic
> > verification methods are going to add to customer confidence in the
> > delivered code.

> It should tell your customer that what they want is probably not what they
> really wanted, and you have to renegotiate a more resonable specification.

I'm the customer.  I want my spam to go away without bothering me.  Why don't
you tell me what my specification should have been, and what I "really" wanted.

        -- Don
From: Pekka P. Pirinen
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ixbrs1xj16.fsf@ocoee.cam.harlequin.co.uk>
·········@hotmail.com (Daniel C. Wang) writes:
> Don Geddis <···@geddis.org> writes:
> {stuff deleted}
> > The programming task is as follows: given a training set of
> > incoming email that the oracle has already correctly categorized,
> > construct an algorithm that will categorize new previously-unseen
> > email in the same way that the oracle's (unknown) function will.
> > [...]
> > Now that you understand the problem, please tell me how automatic
> > verification methods are going to add to customer confidence in
> > the delivered code.
> 
> It should tell your customer that what they want is probably not what they
> really wanted, and you have to renegotiate a more resonable specification.

Don't tease him!  Let's cut to the chase: Once we've formulated a
specification that is actually testable and implementable, it'll be
something like "I only need to adjust it once a week; I get N emails a
week.  To prove the program is likely to achieve this, we collect a
representative test set of emails.  [defines procedure] If X% are
correctly classified, I accept the program."  Since the test set won't
be available during development, the programmer can only aim to build
something that has a high probability of passing.  So that would be
the property that one would aim to prove.  Clearly one could develop
statistical arguments for it; these arguments could be automatically
verified.

People have been bringing this up all along: There isn't a single
a-priori "correctness" property that can just be applied to real-world
applications.  There's always a whole host of application-dependent
interesting properties that one would like to be confident of.

To follow that line of thought a bit more: A spec for a spam filter
would also have requirements like "it doesn't lose mail", "it adds an
informative header but does not mangle the rest", "classification
speed is more than V kB/s".  While the classification requirement is
quite slippery, these are clear and can be proved with standard
methods.

Also, you let him redefine the question into "add to customer
confidence".  Usually programming techniques are judged by effort
required, maintainability of the resulting code, and other technical
criteria.  Customers are generally not asked to judge whether one
should use, say, higher-order functions or not.  Not that you wouldn't
use automatic verification as a selling point as well.  If course, the
_original_ issue was whether this was a programming task where the
programmer could not have an informal proof of correctness in her mind
(because the task wasn't in fact described by a set of requirements
that could be formalized).  And we've seen that they could.
-- 
Pekka P. Pirinen
  Quality control, n.:
	Assuring that the quality of a product does not get out of hand
	and add to the cost of its manufacture or design.
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnh47l$smc$1@news.oberberg.net>
Erann Gat wrote:

> Joachim Durchholz <·················@web.de> wrote:
> 
>>You can't explain why your code works?
> 
> That depends on what level of explanation you are willing to accept.  Of
> course I can explain the operation of my spam filter at some level, but at
> the heart of the thing is this horrifically complex regexp that has
> evolved (literally) over time and that I no longer have a complete grip
> on.  Nevertheless, it seems to work.

Well, then it's unmaintainable... which would make me frown seriously on 
your code if I were to judge it.
In other words: it's not professional.

(Sorry for the harsh words. And I don't want to imply that such harsh 
words are an appropriate description for your usual work.)

> Actually there is behavior of the filter that is a complete mystery to
> me.  Today, for example, I got two apparently identical messages.  One was
> marked as spam, the other one was not (a false negative as it turns out). 
> I have no idea why.  Of course, I could probably look into it and figure
> it out, but at the moment I haven't a clue what is going on.

Which is exactly the sort of maintenance problem that plagues 
"programming-by-experimentation" code, and why I frown on such code.

Regards,
Jo
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2610031115310001@192.168.1.51>
In article <············@news.oberberg.net>, Joachim Durchholz
<·················@web.de> wrote:

> Erann Gat wrote:
> 
> > Joachim Durchholz <·················@web.de> wrote:
> > 
> >>You can't explain why your code works?
> > 
> > That depends on what level of explanation you are willing to accept.  Of
> > course I can explain the operation of my spam filter at some level, but at
> > the heart of the thing is this horrifically complex regexp that has
> > evolved (literally) over time and that I no longer have a complete grip
> > on.  Nevertheless, it seems to work.
> 
> Well, then it's unmaintainable...

No, it isn't.  That's the truly interesting thing.  I am constantly
tweaking it to deal with new flavors of spam.

Actually, in my experience maintenance done by people who do not have a
deep understanding of what they are maintaining happens quite a lot.  It
is remarkable that it works at all, but it does (for some value of
"works").

> which would make me frown seriously on your code if I were to judge it.
> In other words: it's not professional.

<shrug> That is your prerogative, but I very much doubt that you could do
much better on this particular problem.  (If you want to contest that
claim please start a new thread with a more descriptive title.  This
thread is getting overcrowded.)

> (Sorry for the harsh words. And I don't want to imply that such harsh 
> words are an appropriate description for your usual work.)

No worries, you are entitled to your opinion, even if it's wrong ;-)

> > Actually there is behavior of the filter that is a complete mystery to
> > me.  Today, for example, I got two apparently identical messages.  One was
> > marked as spam, the other one was not (a false negative as it turns out). 
> > I have no idea why.  Of course, I could probably look into it and figure
> > it out, but at the moment I haven't a clue what is going on.
> 
> Which is exactly the sort of maintenance problem that plagues 
> "programming-by-experimentation" code, and why I frown on such code.

Tell you what, when I get around to looking into this I'll actually time
how long it takes me to figure it out.  I'll bet you I can debug and fix
the problem in under 15 minutes.

E.
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnhnhq$5fe$3@news.oberberg.net>
Erann Gat wrote:

> Joachim Durchholz <·················@web.de> wrote:
> 
>>Erann Gat wrote:
>>
>>
>>>Joachim Durchholz <·················@web.de> wrote:
>>>
>>>
>>>>You can't explain why your code works?
>>>
>>>That depends on what level of explanation you are willing to accept.  Of
>>>course I can explain the operation of my spam filter at some level, but at
>>>the heart of the thing is this horrifically complex regexp that has
>>>evolved (literally) over time and that I no longer have a complete grip
>>>on.  Nevertheless, it seems to work.
>>
>>Well, then it's unmaintainable...
> 
> No, it isn't.  That's the truly interesting thing.  I am constantly
> tweaking it to deal with new flavors of spam.
> 
> Actually, in my experience maintenance done by people who do not have a
> deep understanding of what they are maintaining happens quite a lot.  It
> is remarkable that it works at all, but it does (for some value of
> "works").

Well, yes, indeed.
I've seen enough software that "works" in this sense :-(

>>which would make me frown seriously on your code if I were to judge it.
>>In other words: it's not professional.
> 
> <shrug> That is your prerogative, but I very much doubt that you could do
> much better on this particular problem.  (If you want to contest that
> claim please start a new thread with a more descriptive title.  This
> thread is getting overcrowded.)

Agreed that it's overcrowded - but I wouldn't detect spam using a regexp 
anyway, I'd use a Bayesian filter, which is more adaptable to personal 
taste - one person's spam is another person's bed lecture.

Regards,
Jo
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2710031315460001@k-137-79-50-101.jpl.nasa.gov>
In article <············@news.oberberg.net>, Joachim Durchholz
<·················@web.de> wrote:

> I wouldn't detect spam using a regexp 
> anyway, I'd use a Bayesian filter, which is more adaptable to personal 
> taste - one person's spam is another person's bed lecture.

Agreed, but you can't prove a Bayesian filter "correct" either for that
very reason.

E.
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnk5dj$egb$1@news.oberberg.net>
Erann Gat wrote:

> In article <············@news.oberberg.net>, Joachim Durchholz
> <·················@web.de> wrote:
> 
> 
>>I wouldn't detect spam using a regexp 
>>anyway, I'd use a Bayesian filter, which is more adaptable to personal 
>>taste - one person's spam is another person's bed lecture.
> 
> 
> Agreed, but you can't prove a Bayesian filter "correct" either for that
> very reason.

Hey, I can prove it correct according to specification.
The difference is that I can draw one up for a Bayesian filter: it's 
essentially a specification of how user input should be used to classify data.
The actual behaviour of the filter for each given spam isn't specified 
(and cannot be since it depends on user input, which is inherently 
unspecifiable) - but when I'm the program's maintainer, I don't care 
much about how each mail is classified, I care about other properties of 
the program, namely that it adjusts the word probabilities according to 
specification and user input. (/This/ is a concrete example of what I've 
been trying to say constantly: that a correctness proof for a program 
need not cover all of its input and output. That talking about a 
program's "correctness" in general isn't very fruitful - those ideas 
about Goedel undecidability theorems that have been floating around in 
this thread are both correct and entirely irrelevant; I haven't seen a 
single program in my life where this kind of borderline case was even 
remotely relevant.)
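To make that concrete, the provable "adjusts the word probabilities" part might be sketched like this (a generic naive-Bayes illustration, not anyone's actual filter; all names are invented):

```python
import math
from collections import Counter

class BayesFilter:
    """Generic naive-Bayes sketch; the specifiable part is train()."""

    def __init__(self):
        self.spam_words = Counter()   # word -> count in user-labelled spam
        self.ham_words = Counter()    # word -> count in user-labelled ham
        self.spam_total = 0
        self.ham_total = 0

    def train(self, words, is_spam):
        # The part one *can* specify and prove: counts are adjusted
        # deterministically from user input, one increment per word.
        if is_spam:
            self.spam_words.update(words)
            self.spam_total += len(words)
        else:
            self.ham_words.update(words)
            self.ham_total += len(words)

    def spam_score(self, words):
        # Log-odds of spam, Laplace-smoothed; positive means "spammy".
        # How any particular mail scores depends entirely on the training
        # input -- which is exactly the unspecifiable part.
        score = 0.0
        for w in words:
            p_spam = (self.spam_words[w] + 1) / (self.spam_total + 2)
            p_ham = (self.ham_words[w] + 1) / (self.ham_total + 2)
            score += math.log(p_spam / p_ham)
        return score
```

The specification covers `train` exactly; it deliberately says nothing about the verdict on any individual mail.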

Regards,
Jo
From: Ray Blaak
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <usmld6f6o.fsf@STRIPCAPStelus.net>
Joachim Durchholz <·················@web.de> writes:
> /This/ is a concrete example of what I've been trying to say constantly:
> that a correctness proof for a program need not cover all of its input and
> output.

As baldly stated, this is not true. That stuff you did not cover can sink your
program's correctness.

Certainly practical proofs of actual programs are often doable only if you are
able to restrict yourself to proving well defined subsets of the program, but
you have to (informally!) argue convincingly that your particular subset is a
valid one.

> That talking about a program's "correctness" in general isn't very fruitful
> - those ideas about Goedel undecidability theorems that have been floating
> around in this thread are both correct and entirely irrelevant; I haven't
> seen a single program in my life where this kind of borderline case was even
> remotely relevant.

Given that those Goedel bits tend to involve self reference, and in practice
manifest themselves as infinite loops, I would say that those cases are more
common than you think.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnlp55$96v$1@news.oberberg.net>
Ray Blaak wrote:

> Joachim Durchholz <·················@web.de> writes:
> 
>>/This/ is a concrete example of what I've been trying to say constantly:
>>that a correctness proof for a program need not cover all of its input and
>>output.
> 
> As baldly stated, this is not true. That stuff you did not cover can sink your
> program's correctness.

Not if the uncovered stuff isn't in the specifications.

> Certainly practical proofs of actual programs are often doable only if you are
> able to restrict yourself to proving well defined subsets of the program, but
> you have to (informally!) argue convincingly that your particular subset is a
> valid one.

Again: you almost never prove the entire semantics of a program.
You prove those parts of the semantics that are interesting. Technical 
stuff like freedom from memory leaks, and "social stuff" like adherence 
to the specifications (which almost never cover the entire semantics, 
and the customer couldn't care less about the format of the save files 
or similar internals).

>>That talking about a program's "correctness" in general isn't very fruitful
>>- those ideas about Goedel undecidability theorems that have been floating
>>around in this thread are both correct and entirely irrelevant; I haven't
>>seen a single program in my life where this kind of borderline case was even
>>remotely relevant.
> 
> Given that those Goedel bits tend to involve self reference, and in practice
> manifest themselves as infinite loops, I would say that those cases are more
> common than you think.

Non sequitur:
That Goedel self reference is of a /descriptive/ type: a system is made 
to describe itself, so that the meta level is mapped to the concrete level.
Infinite loops can have many causes. Even in those cases where 
self-reference is involved, the infinite loops that I have seen were 
entirely unrelated to the incompleteness theorems. Most infinite loops 
were simply functions which were shoving the responsibility for actually 
/doing/ something around to each other (such as implementing COND by 
calling COND, one of the first noteworthy bugs in my programming career 
- the bugs that I make today involve much more complicated call 
sequences *g*).
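A toy Python rendition of that responsibility-shoving bug (hypothetical; the original was Lisp): the "implementation" defers the whole job back to itself, so nothing ever gets done -- an everyday infinite loop with no Goedelian content at all.

```python
import sys

def cond(clauses):
    # Buggy: COND "implemented" by calling COND -- the entire job is
    # deferred back to itself, and no clause is ever actually evaluated.
    return cond(clauses)

sys.setrecursionlimit(200)   # make the inevitable failure quick
try:
    cond([(True, 42)])
except RecursionError:
    # In Python the infinite regress surfaces as a RecursionError rather
    # than a hung process, but the bug is the same.
    print("nobody ever actually does anything")
```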

Regards,
Jo
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m365idymrl.fsf@rigel.goldenthreadtech.com>
Joachim Durchholz <·················@web.de> writes:

> You can't explain why your code works? (and a program proof is nothing
> but "an explanation why the code works")

You're committing a conversion fallacy here: a program proof is an
explanation of why the code works, but an explanation of why the
code works is not necessarily a proof.


> Come on. I bet you have a good idea of what your code is doing.

Now you aren't even requiring that there be an explanation, only a
"good idea" of what it is doing.


> I'll grant you that you don't know exactly how the end results of
> every single run come - but that is not the point, the point is that
> you know what the code is doing

Now you've gone back from "good idea" to "know".  Not at all the same.


> and that it is doing what you intended it to do.

He claims that this is true, but that it is a result of "generate and
test", which in this particular case sounds about right.


> That you don't have a formal definition of what is spam means that
> there is no global correctness proof for your code since there is
> nothing to prove against - but that doesn't mean that you don't have
> ideas about particular aspects and algorithms in your code!

Now we are back to ideas and reasoning about aspects of the code.  I
don't think any of us disagrees with this, just that you (and MB) seem
to then leap to the (amazing) conclusion that this _means_ there is a
"correctness proof" in the programmer's head.

/Jon
From: Daniel C. Wang
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ullr8bu5l.fsf@hotmail.com>
·········@rcn.com (Jon S. Anthony) writes:

> Joachim Durchholz <·················@web.de> writes:
> 
> > You can't explain why your code works? (and a program proof is nothing
> > but "an explanation why the code works")
> 
> You're committing a conversion fallacy here: a program proof is an
> explanation of why the code works, but an explanation of why the
> code works is not necessarily a proof.
> 

Let's restate the claim so it's a bit clearer and less controversial.

 Claim 1. Every programmer who is worth employing ought to have a "rigorous
informal argument" in their head of why the code they produced works
correctly.

 Claim 2. Any *correct* "rigorous informal argument" can with some work be
turned into a formal machine-checkable proof. Doing this actually may catch
some subtle corner cases in the rigorous informal argument, so bugs in the
code might have to be fixed.

Therefore it follows that for any software written that is believed to be
correct there exists a machine-checkable proof of its correctness.

Most people will probably find Claim 1 uncontroversial. Claim 2 is something
you believe after doing a lot of proof hacking, or if you happen to be a
decent mathematician.

Someone may point out that most software ships with known bugs. People ship
software with bugs because they can presumably make money selling buggy
software. This is not because the bugs cannot be fixed; it's just not worth
the effort for lots of software. The same is true of complete formal
verification of software.

Complete formal verification of software is technically feasible. However,
it is not economically sensible for all but a few safety-critical
systems. This will change as verification becomes cheaper and the
money-making potential of shipping buggy software trends toward zero.

I hope everything above is completely non-controversial.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnfdv1$pm1$1@newsreader2.netcologne.de>
Daniel C. Wang wrote:

> ·········@rcn.com (Jon S. Anthony) writes:
> 
> 
>>Joachim Durchholz <·················@web.de> writes:
>>
>>
>>>You can't explain why your code works? (and a program proof is nothing
>>>but "an explanation why the code works")
>>
>>You're committing a conversion fallacy here: a program proof is an
>>explanation of why the code works, but an explanation of why the
>>code works is not necessarily a proof.
> 
> Let's restate the claim so it's a bit clearer and less controversial.
> 
>  Claim 1. Every programmer who is worth employing ought to have a "rigorous
> informal argument" in their head of why the code they produced works
> correctly.

See http://www.mcs.vuw.ac.nz/~kjx/papers/nopp.pdf for a strongly 
opposing view. You might completely disagree with it, but at least this 
shows that you have made a controversial statement.

>  Claim 2. Any *correct* "rigorous informal argument" can with some work be
> turned into a formal machine-checkable proof. Doing this actually may catch
> some subtle corner cases in the rigorous informal argument, so bugs in the
> code might have to be fixed.

Here is an example: "Whenever I want to find out whether the GUIs I have 
programmed look good, I let xyz take a close look at it. She does a 
terribly good job in this regard and has given me excellent suggestions 
for improvement numerous times before. Customers are almost always very 
satisfied with the results. I don't quite understand how this works, but 
this is most probably because I am color-blind."

This can be a correct and rigorous argument.

Now, you might think that this side-steps the actual topic of our 
discussion, but let's just change the statement slightly: "Whenever I 
want to find out whether the code I have written contains logical flaws, 
I let xyz take a close look at it. She does a terribly good job in this 
regard and has given me excellent suggestions for improvement numerous 
times before. I don't quite understand how she does it, but this is most 
probably because I am too focused on GUI design."

And if that is still too "human-centric", here is another variant: 
"Whenever I want to use a good library I check out CPAN. They have a 
terribly good collection of components and numerous times before, they 
have provided exactly what I needed. I don't quite understand most of 
the implementations, but this is most probably because I am not very 
good at Perl."

Just another one: "I don't know why this complicated algorithm works. I 
have tried to understand it many times before, and it has always given 
me a headache. Anyway, I have seen it mentioned in various books, 
written by obviously serious authors, and it seems to do the job really 
well. Anyway, it doesn't matter - I only need to implement the user 
interface around it, I don't need to understand quantum mechanics."

There is no clear-cut line between art and science.

> I hope everything above is completely non-controversial.

No, not even remotely.


Pascal
From: Daniel C. Wang
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <u8yn8av4v.fsf@hotmail.com>
Pascal Costanza <········@web.de> writes:

> Daniel C. Wang wrote:
> >>You're committing a conversion fallacy here: a program proof is an
> >>explanation of why the code works, but an explanation of why the
> >>code works is not necessarily a proof.
> > Let's restate the claim so it's a bit clearer and less controversial.
> >  Claim 1. Every programmer who is worth employing ought to have a
> > "rigorous
> > informal argument" in their head of why the code they produced works
> > correctly.
> 
> See http://www.mcs.vuw.ac.nz/~kjx/papers/nopp.pdf for a strongly
> opposing view. You might completely disagree with it, but at least
> this shows that you have made a controversial statement.

I don't get it... skimming the paper, I don't see how it contradicts
the claim?

> >  Claim 2. Any *correct* "rigorous informal argument" can with some work be
> > turned into a formal machine-checkable proof. Doing this actually may catch
> > some subtle corner cases in the rigorous informal argument, so bugs in the
> > code might have to be fixed.
> 
> Here is an example: "Whenever I want to find out whether the GUIs I
> have programmed look good, I let xyz take a close look at it. She does
> a terribly good job in this regard and has given me excellent
> suggestions for improvement numerous times before. Customers are
> almost always very satisfied with the results. I don't quite
> understand how this works, but this is most probably because I am
> color-blind."
> 
> This can be a correct and rigorous argument.

Yes... and I don't see why this is horribly hard to formalize. In the
formalization you get to choose your axioms. Let's say you formally define
the set of individuals needed to audit or inspect your design before it is
released to the customer, as part of a program management process. You
simply need to "prove" that you got the right people to review your
code/design.

I think people misunderstand what a formal proof is. A formal proof is
just a very boring and detailed way of reasoning about anything. Sometimes
the subject is mathematical objects, sometimes it's a business process.

> Now, you might think that this side-steps the actual topic of our
> discussion, but let's just change the statement slightly: "Whenever I
> want to find out whether the code I have written contains logical
> flaws, I let xyz take a close look at it. She does a terribly good job
> in this regard and has given me excellent suggestions for improvement
> numerous times before. I don't quite understand how she does it, but
> this is most probably because I am too focused on GUI design."

Your rigorous informal argument is simply using the "the other hacker down
the hall is smarter" theorem. I didn't ask that the programmer who wrote the
code turn their argument into a proof. I simply said it could be done.

> And if that is still too "human-centric", here is another variant:
> "Whenever I want to use a good library I check out CPAN. They have a
> terribly good collection of components and numerous times before, they
> have provided exactly what I needed. I don't quite understand most of
> the implementations, but this is most probably because I am not very
> good at Perl."

Again, just because you don't understand doesn't mean no one
understands. I'm currently actually proving various theorems in a very
large code base which I don't understand. I don't have to, either. You can
construct a rigorous proof without understanding every little detail.

Writing large mechanically verifiable proofs is very much like
programming. It is, in a very literal sense. If you can write programs, you
can write the proofs for them.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bngr34$8t3$1@newsreader2.netcologne.de>
Daniel C. Wang wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>Daniel C. Wang wrote:

>>> Claim 1. Every programmer who is worth employing ought to have a
>>>"rigorous
>>>informal argument" in their head of why the code they produced works
>>>correctly.
>>
>>See http://www.mcs.vuw.ac.nz/~kjx/papers/nopp.pdf for a strongly
>>opposing view. You might completely disagree with it, but at least
>>this shows that you have made a controversial statement.
> 
> I don't get it.. skiming the paper, I don't see how it is contradicting
> the claim?

Take a closer look at section 13.

> Let's say you formally define
> the set of individuals needed to audit or inspect your design before it is
> released to the customer, as part of a program management process. You
> simply need to "prove" that you got the right people to review your
> code/design.

How do you do that?

>>Now, you might think that this side-steps the actual topic of our
>>discussion, but let's just change the statement slightly: "Whenever I
>>want to find out whether the code I have written contains logical
>>flaws, I let xyz take a close look at it. She does a terribly good job
>>in this regard and has given me excellent suggestions for improvement
>>numerous times before. I don't quite understand how she does it, but
>>this is most probably because I am too focused on GUI design."
> 
> 
> Your rigorous informal argument is simply using the "the other hacker down
> the hall is smarter" theorem. I didn't ask that the programmer who wrote the
> code turn their argument into a proof. I simply said it could be done.

How do you formalize "the other hacker is smarter" and turn it into a 
formal machine checkable proof?

> Writing large mechanically verifiable proofs is very much like
> programming. It is, in a very literal sense. If you can write programs, you
> can write the proofs for them.

The fact that writing proofs is like programming doesn't mean that all 
kinds of programming are like writing proofs.


Pascal
From: Daniel C. Wang
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uptgk9cda.fsf@hotmail.com>
Pascal Costanza <········@web.de> writes:
{stuff deleted}
> 
> How do you do that?

Add the axioms "reviewed_by_bob" and "reviewed_by_alice". You probably ought
to include a PGP signed hash of the code they reviewed.
> 
> How do you formalize "the other hacker is smarter" and turn it into a
> formal machine checkable proof?

You just ask the smarter hacker to write down the proof for you. Writing
down any proof is quite easy, if you have sufficiently high-level
axioms. I'm not asking you to prove the correctness of your system using
basic set theory. Defining the right set of acceptable axioms is
mostly a social process with a few minor technical requirements.


> > Writing large mechanically verifiable proofs is very much like
> > programming. It is, in a very literal sense. If you can write programs, you
> > can write the proofs for them.
> 
> The fact that writing proofs is like programming doesn't mean that all
> kinds of programming are like writing proofs.

They are in fact isomorphic, via the Curry-Howard isomorphism - but
that's an informal claim, not a completely technically true one.
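As a rough illustration of the correspondence (a sketch in Python type hints, which only approximate the typed calculi the isomorphism is actually stated for): read each type as a proposition, and the function body is its proof.

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# (A -> B) and A, therefore B: modus ponens.  The body is the "proof".
def modus_ponens(f: Callable[[A], B], a: A) -> B:
    return f(a)

# A and B, therefore B and A: conjunction commutes.
def swap(pair: Tuple[A, B]) -> Tuple[B, A]:
    a, b = pair
    return (b, a)

# (A -> B) and (B -> C), therefore A -> C: implication composes.
def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda a: g(f(a))
```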
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnh0l0$gmo$2@newsreader2.netcologne.de>
Daniel C. Wang wrote:

> Pascal Costanza <········@web.de> writes:
> {stuff deleted}
> 
>>How do you do that?
> 
> 
> Add the axioms "reviewed_by_bob" and "reviewed_by_alice". You probably ought
> to include a PGP signed hash of the code they reviewed.

Ha! Then I just add the axiom "my code is always correct"! ;-)

>>How do you formalize "the other hacker is smarter" and turn it into a
>>formal machine checkable proof?
> 
> You just ask the smarter hacker to write down the proof for you.

That would be a proof for the concrete code he has written. The 
conjecture is "the other hacker is smarter", not "the code of the other 
hacker is correct". That's a difference.



Pascal
From: Daniel C. Wang
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ullr89byr.fsf@hotmail.com>
Pascal Costanza <········@web.de> writes:
{stuff deleted}

> >>> Claim 1. Every programmer who is worth employing ought to have a
> >>>"rigorous
> >>>informal argument" in their head of why the code they produced works
> >>>correctly.
> >>
> >>See http://www.mcs.vuw.ac.nz/~kjx/papers/nopp.pdf for a strongly
> >>opposing view. You might completely disagree with it, but at least
> >>this shows that you have made a controversial statement.
> > I don't get it... skimming the paper, I don't see how it
> > contradicts
> > the claim?
> 
> Take a closer look at section 13.

That doesn't contradict the claim, since in the scenario described, arguably
the "programmer" didn't produce the code. He/she merely reused it.
We can argue about what it means to "produce a program", but downloading
someone else's code and including it in your own is not what I consider
"producing code".

If you like, I'll restate Claim 1 as:

Claim 1. Every original author of a program worth employing ought to have a
"rigorous informal argument" in their head of why the code they originally
authored has certain useful properties.

(Arguably, this means the same thing as what I originally wrote, but people
seem to have strong notions of what "programming" and "correctness" mean.)
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnh0vr$hvo$1@newsreader2.netcologne.de>
Daniel C. Wang wrote:

> Pascal Costanza <········@web.de> writes:
> {stuff deleted}
> 
> 
>>>>>Claim 1. Every programmer who is worth employing ought to have a
>>>>>"rigorous
>>>>>informal argument" in their head of why the code they produced works
>>>>>correctly.
>>>>
>>>>See http://www.mcs.vuw.ac.nz/~kjx/papers/nopp.pdf for a strongly
>>>>opposing view. You might completely disagree with it, but at least
>>>>this shows that you have made a controversial statement.
>>>
>>>I don't get it... skimming the paper, I don't see how it
>>>contradicts
>>>the claim?
>>
>>Take a closer look at section 13.
> 
> That doesn't contradict the claim, since in the scenario described, arguably
> the "programmer" didn't produce the code. He/she merely reused it.
> We can argue about what it means to "produce a program", but downloading
> someone else's code and including it in your own is not what I consider
> "producing code".

The section mentioned above doesn't describe a mere reuse of someone 
else's code. They made serious changes to the code, without actually 
understanding it. Still, the result is obviously "correct". (At least, 
it was "useful".)

> If you like, I'll restate Claim 1 as:
> 
> Claim 1. Every original author of a program worth employing ought to have a
> "rigorous informal argument" in their head of why the code they originally
> authored has certain useful properties.

There's no "original author" in the example above. The example describes 
an accidental program written by two programmers who don't have a clue 
what the other one is doing. They don't even know each other.


Pascal
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <egptgjnq62.fsf@sefirot.ii.uib.no>
Pascal Costanza <········@web.de> writes:

> "Whenever I want to use a good library I check out CPAN. They have a
> terribly good collection of components and numerous times before, they
> have provided exactly what I needed. I don't quite understand most of
> the implementations, 

Well, you're not writing the code, so I don't see why you should
have to know how it works.  You should know roughly what it does,
though, and when you stitch the pieces together, you should have an
idea of why.

> but this is most probably because I am not very good at Perl."

...and then you may find yourself fired one day :-)

> No, not even remotely.

The only case where I can agree with not having some understanding of why
code should work is the cut-and-paste type of programming (which
isn't really programming), and the random-twiddle-and-test.

(The extreme case being genetic programming, where you actually let the
computer morph/permute/recombine the code in an entirely random way,
and continue with the variants that show the most promise.)
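A bare-bones sketch of that random-twiddle-and-test extreme (a toy mutate-and-keep hill-climber, not a full genetic programming system; the target string and fitness function are invented for illustration):

```python
import random

TARGET = "hello world"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # Number of positions that already match the goal.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # Entirely random twiddle: change one character at random.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

random.seed(0)
best = "".join(random.choice(ALPHABET) for _ in TARGET)
while fitness(best) < len(TARGET):
    variant = mutate(best)
    if fitness(variant) >= fitness(best):   # keep the promising variant
        best = variant
# 'best' now equals TARGET -- reached with no understanding of *why*
# any individual mutation helped.
```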

-kzm

PS: I once tried to untangle a thousand-line function, but failed.  It
was obviously too convoluted and obtuse to actually be understood.
Given good unit tests around it, I could possibly have succeeded, but
we didn't have those, and in the end I just left it alone.
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnio4d$cq7$1@newsreader2.netcologne.de>
··········@ii.uib.no wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>"Whenever I want to use a good library I check out CPAN. They have a
>>terribly good collection of components and numerous times before, they
>>have provided exactly what I needed. I don't quite understand most of
>>the implementations, 
> 
> 
> Well, you're not writing the code, so I don't see why you should
> have to know how it works.  You should know roughly what it does,
> though, and when you stitch the pieces together, you should have an
> idea of why.

Why "should" I? In a strict sense, I only need to understand it when it 
doesn't work.

>>but this is most probably because I am not very good at Perl."
> 
> 
> ...and then you may find yourself fired one day :-)
> 
> 
>>No, not even remotely.
> 
> 
> The only case where I can agree with not having some understanding of why
> code should work is the cut-and-paste type of programming (which
> isn't really programming), and the random-twiddle-and-test.

What's your definition of "programming" then? ;-P

> (The extreme case being genetic programming, where you actually let the
> computer morph/permute/recombine the code in an entirely random way,
> and continue with the variants that show the most promise.)

Huh?!? And they call this "programming"?!? ;)

> PS: I once tried to untangle a thousand-line function, but failed.  It
> was obviously too convoluted and obtuse to actually be understood.
> Given good unit tests around it, I could possibly have succeeded, but
> we didn't have those, and in the end I just left it alone.

What language was it written in that didn't allow you to hack a simple 
unit testing tool and then proceed?


Pascal
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <eg1xszng6t.fsf@sefirot.ii.uib.no>
Pascal Costanza <········@web.de> writes:

>> Well, you're not writing the code, so I don't see why you should
>> have to know how it works.  You should know roughly what it does,
>> though, and when you stitch the pieces together, you should have an
>> idea of why.

> Why "should" I? In a strict sense, I only need to understand it when
> it doesn't work.

Right.  In theory, you could tie together components without any clear
idea, and have it work correctly the first time.  I'm usually not that
good/lucky, so I prefer to have a rough sketch of what I want to
achieve and how to go about it before I start implementing. "Should"
is just my judgement, YMMV.

>> The only case where I can agree with not having some understanding of why
>> code should work is the cut-and-paste type of programming (which
>> isn't really programming), and the random-twiddle-and-test.

> What's your definition of "programming" then? ;-P

The act of writing a program?  What's the problem?  Copying it doesn't
count as writing it, IMHO.

(I interpreted the "informal proof" to be about ab initio
implementation of algorithms, rather than modifying existing code.  As
some people have pointed out, people do copy and apply modifications
with little or no clue as to how and why.)

>> (The extreme case being genetic programming, where you actually let the
>> computer morph/permute/recombine the code in an entirely random way,
>> and continue with the variants that show the most promise.)

> Huh?!? And they call this "programming"?!? ;)

Sure, it produces a program.  
Isn't this more or less what you advocate, btw?

I mean, either you have an idea of how the program achieves its goals
(the mental, non-formal proof method), or you don't.  This is the
extreme case, and it uses an entirely random process to do the
twiddling -- and thus certainly no logical understanding or
intention. 

One could get the impression from this thread that some people
actually do program this way, but I still doubt that they don't at least
have an intuitive reason (aka a hunch) about the causes and effects of
their modifications.

> What language was it written in that didn't allow you to hack a simple
> unit testing tool and then proceed?

C++.  I'm not saying it would have been impossible, but IMHO it wasn't
worth the effort (to me, that is -- it could definitely have improved
the general maintainability of the system).  I suspect the problem
with retrofitting unit tests to OO systems is that unless they are
designed for it from scratch, objects will have quite a lot of
interdependence on each other's state, and it is quite hard to set up
the required environment.  This is obviously bad design, but IME it's
way too easy to end up like that with OO.  (And for the record, I think
Paul Graham agrees in one of his books.) 
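A hypothetical miniature of that interdependence problem (both classes are invented for illustration): exercising one method forces the test to fake the entire chain of state it reaches through.

```python
class Session:
    # Hypothetical: a session object that drags a whole database along.
    def __init__(self, db):
        self.db = db
        self.user = None

class Billing:
    # Hypothetical: reaches through session -> db -> per-user records.
    def __init__(self, session):
        self.session = session

    def invoice_total(self):
        rate = self.session.db["rates"][self.session.user]
        hours = self.session.db["hours"][self.session.user]
        return rate * hours

# A unit test of invoice_total() cannot avoid reconstructing the chain:
fake_db = {"rates": {"alice": 50}, "hours": {"alice": 3}}
session = Session(fake_db)
session.user = "alice"
assert Billing(session).invoice_total() == 150
```

With dependency injection designed in from the start, the test would only need a stub for the one value actually used.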

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnj4k5$ln8$1@f1node01.rhrz.uni-bonn.de>
··········@ii.uib.no wrote:

> I suspect the problem
> with retrofitting unit tests to OO systems, is that unless they are
> designed for it from scratch, objects will have quite a lot of
> interdependence on each other's state, and it is quite hard to set up
> the required environment.  This is obviously bad design, but IME it's
> way too easy to end up like that with OO.  (And for the record, I think
> Paul Graham agrees in one of his books.) 

I don't agree with everything Paul Graham says.

And here is a unit testing framework in 25 lines of ANSI Common Lisp:

(defun string-qualifier-p (qualifiers)
   (and (= (length qualifiers) 1)
        (stringp (car qualifiers))))

(define-method-combination test-suite ()
   ((set-up (:set-up))
    (tear-down (:tear-down))
    (cases string-qualifier-p))
   (flet ((call-cases (cases)
            (mapcar (lambda (case)
                      `(let ((result (call-method ,case)))
                         (format t "~&~A => ~A~%"
                                 ',(car (method-qualifiers case))
                                 result)
                         result))
                    (stable-sort
                     cases #'string-lessp
                     :key (lambda (case)
                            (car (method-qualifiers case)))))))
     `(progn (call-method ,(car set-up))
        (let ((result (and ,@(call-cases cases))))
          (call-method ,(car tear-down))
          (if result (format t "~&succeeded~%")
            (format t "~&failed~%"))
          result))))

(defgeneric test-something ()
   (:method-combination test-suite))

(defmethod test-something :set-up ()
   (format t "~&testing something~%"))

(defmethod test-something "case 1" ()
   (= (+ 1 1) 2))

(defmethod test-something "case 2" ()
   (= (+ 2 2) 4))

(defmethod test-something :tear-down ()
   (format t "~&finished testing something~%")
   nil)

CL-USER 1 > (test-something)
testing something
case 1 => T
case 2 => T
finished testing something
succeeded
T

CL-USER 2 > (defmethod test-something "case 3" ()
               (= (+ 3 4) 8))
#<STANDARD-METHOD TEST-SOMETHING ("case 3") NIL 205FE764>

CL-USER 3 > (test-something)
testing something
case 1 => T
case 2 => T
case 3 => NIL
finished testing something
failed
NIL

CL-USER 4 > (defmethod test-something "case 3" ()
               (= (+ 3 4) 7))
#<STANDARD-METHOD TEST-SOMETHING ("case 3") NIL 205F55FC>

CL-USER 5 > (test-something)
testing something
case 1 => T
case 2 => T
case 3 => T
finished testing something
succeeded
T


Pascal

P.S.: I don't understand all the details about method combination. I 
have mainly copied and pasted example code from 
http://www.lispworks.com/reference/HyperSpec/Body/m_defi_4.htm and 
tweaked it until it worked.

Development time: Roughly 1.5 hours, on my own. (I have used 
define-method-combination before, but I have never implemented a testing 
framework.)

The code still needs some improvements. When a test case fails it should 
signal an exception, and not just print NIL, so that you have a chance 
to inspect the stack and see what exactly happened. I guess this would 
mean an additional two or three lines of code.

I leave this as an exercise to the reader. ;)
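
For the Python side of this thread: the standard unittest module gives you
roughly the same set-up/case/tear-down shape out of the box. A minimal sketch
(class and case names invented here for illustration; test methods run in
sorted name order, much like the stable-sort on qualifiers above -- though
note setUp/tearDown run around each case, not once per suite):

```python
import unittest

class TestSomething(unittest.TestCase):
    """Rough Python analog of the test-suite method combination above."""

    def setUp(self):
        print("testing something")            # like the :set-up method

    def tearDown(self):
        print("finished testing something")   # like the :tear-down method

    def test_case_1(self):                    # cases run in sorted name order
        self.assertEqual(1 + 1, 2)

    def test_case_2(self):
        self.assertEqual(2 + 2, 4)

# Run the suite programmatically, mirroring the CL-USER transcript below.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSomething)
result = unittest.TestResult()
suite.run(result)
print("succeeded" if result.wasSuccessful() else "failed")
```

A failing assertion raises AssertionError, so (unlike the NIL-printing
version) you already get a traceback to inspect.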

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <egoew2n6cg.fsf@vipe.ii.uib.no>
Pascal Costanza <········@web.de> writes:

> ··········@ii.uib.no wrote:

>> I suspect the problem
>> with retrofitting unit tests to OO systems, is that unless they are
>> designed for it from scratch, objects will have quite a lot of
>> interdependence on each other's state, and it is quite hard to set up
>> the required environment.  This is obviously bad design, but IME it's
>> way too easy to end up like that with OO.  (And for the record, I think
>> Paul Graham agrees in one of his books.)

> I don't agree with everything Paul Graham says.

Me neither, but I think he's on to something in this case.

> And here is a unit testing framework in 25 lines of ANSI Common Lisp:

Good for you; but what's the point?  If I'm unable to work out the
prerequisites for and the necessary consequences of running a
particular function, how can I tell whether the test fails because my
changes broke it, or because my tests incorrectly initialize/examine
the surrounding objects?  And if I need to set up virtually every
other object in the system, then what's the point?  It's no longer a
unit test, but a system test, and we did have those.

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnj9tb$v2q$1@f1node01.rhrz.uni-bonn.de>
··········@ii.uib.no wrote:

>>And here is a unit testing framework in 25 lines of ANSI Common Lisp:
> 
> Good for you; but what's the point?  If I'm unable to work out the
> prerequisites for and the necessary consequences of running a
> particular function, how can I tell whether the test fails because my
> changes broke it, or because my tests incorrectly initialize/examine
> the surrounding objects?  And if I need to set up virtually every
> other object in the system, then what's the point?  It's no longer a
> unit test, but a system test, and we did have those.

Ah, I had thought you couldn't add unit tests to the program because of 
a more fundamental problem.

There was no way to interactively examine what the program did?


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Thomas Lindgren
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m34qxwdzc4.fsf@localhost.localdomain>
·········@hotmail.com (Daniel C. Wang) writes:

>  Claim 1. Every programmer who is worth employing ought to have a "rigorous
> informal argument" in their head of why the code they produced works
> correctly.

Again, where there is no clear specification, how can one prove
correctness?

Correctness for a program implementing the typical ITU specification?
Or an IETF RFC? And we haven't even gotten into the typical bespoke
applications, GUIs etc. Or programming languages. Or spam filters,
pattern recognizers, etc.

> I hope everything above is completely non-controversial.

Not quite :-) 

Given the recent societal nostalgia for the 70s and 80s, maybe a
correctness revival isn't so strange after all. But the advocates seem
to be forgetting why it didn't catch on.

Best,
                        Thomas
-- 
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin
 
From: Daniel C. Wang
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uu15w9fyt.fsf@hotmail.com>
Thomas Lindgren <···········@*****.***> writes:

> ·········@hotmail.com (Daniel C. Wang) writes:
> 
> >  Claim 1. Every programmer who is worth employing ought to have a "rigorous
> > informal argument" in their head of why the code they produced works
> > correctly.
> 
> Again, where there is no clear specification, how can one prove
> correctness?

Okay, I'll restate the claim so it is less controversial.

Claim 1. Every programmer who is worth employing ought to have a "rigorous
 informal argument" in their head of why the code they produced has certain
 useful properties (i.e., works in a sensible way).

Here you don't need a "formal spec". Just a clear intention. Like, I'm
writing a spam filter. The spam filter shouldn't send all my mail to
/dev/null. When programmers write pieces of code they had better have a clear
intention of what it is doing, or else I might as well hire a random number
generator to write programs for me. 


> Correctness for a program implementing the typical ITU specification?
> Or an IETF RFC? And we haven't even gotten into the typical bespoke
> applications, GUIs etc. Or programming languages. Or spam filters,
> pattern recognizers, etc.
> 
> > I hope everything above is completely non-controversial.
> 
> Not quite :-) 
> 
> Given the recent societal nostalgia for the 70s and 80s, maybe a
> correctness revival isn't so strange after all. But the advocates seem
> to be forgetting why it didn't catch on.

I'm curious as to why you think it didn't catch on. To me it's clearly an
issue of economics. The informal ways of producing correct-enough software
profitably just make more sense than the formal approaches.  The
costs outweigh the benefits.

However, the costs are getting lower, and the need for correctness guarantees
in everyday "consumer" software is steadily increasing.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bngs30$a9u$1@newsreader2.netcologne.de>
Daniel C. Wang wrote:

> 
> Thomas Lindgren <···········@*****.***> writes:
> 
> 
>>·········@hotmail.com (Daniel C. Wang) writes:
>>
>>
>>> Claim 1. Every programmer who is worth employing ought to have a "rigorous
>>>informal argument" in their head of why the code they produced works
>>>correctly.
>>
>>Again, where there is no clear specification, how can one prove
>>correctness?
> 
> 
> Okay, I'll restate the claim so it is less controversial.
> 
> Claim 1. Every programmer who is worth employing ought to have a "rigorous
>  informal argument" in their head of why the code they produced has certain
>  useful properties (i.e., works in a sensible way).
> 
> Here you don't need a "formal spec". Just a clear intention. Like, I'm
> writing a spam filter. The spam filter shouldn't send all my mail to
> /dev/null. When programmers write pieces of code they had better have a clear
> intention of what it is doing, or else I might as well hire a random number
> generator to write programs for me. 

Who cares about /dev/null?

A useful spec for a spam filter looks roughly as follows: "The spam 
filter should sort all spam mails into a spam mailbox for possible later 
review, and all other mails into my inbox."

The heart of the problem is: how do you formalize what the spam filter 
should do in detail? Or better yet: do you actually _want_ to have a 
formal description for the spam filter?

You might not want a formal spec for the spam filter because this makes 
it easier for spammers to break the filter. This is not guesswork - see 
http://www.paulgraham.com/spam.html

Now, obviously the best approach so far for filtering spam is Bayesian 
filtering. And of course, you can formalize how Bayesian filters work - 
but that's beside the point. The spec above didn't mention the actual 
approach to take. And it shouldn't. The amazing thing about Bayesian 
filters is that you don't know what their actual classification scheme 
is, and that they adapt their classification scheme to the needs of a 
single user.

Furthermore, if someone finds a convincingly better way to classify spam 
in the future, one would most likely want to change the filtering algorithm.

The important point here is: There is no formal way to get from the spec 
above to the actual solution. You can only formalize parts of the 
infrastructure that let you plug in spam filters. And this is relatively 
boring.
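
To make the "no formal classification scheme" point concrete, here is a toy
word-frequency classifier in Python (my sketch, not Graham's actual algorithm;
the training mails and the add-one smoothing are invented for illustration).
The code formalizes the *mechanism*, but the resulting classification scheme
is whatever the user's own training mail makes it:

```python
import math
from collections import Counter

class ToyBayesFilter:
    """Tiny Bayesian-style spam scorer that adapts to the user's own mail."""

    def __init__(self):
        self.words = {"spam": Counter(), "ham": Counter()}
        self.total = {"spam": 0, "ham": 0}

    def train(self, label, text):
        # Each mail the user classifies shifts the filter's future behavior.
        tokens = text.lower().split()
        self.words[label].update(tokens)
        self.total[label] += len(tokens)

    def score(self, text):
        # Log-likelihood ratio with add-one smoothing; > 0 leans "spam".
        s = 0.0
        for w in text.lower().split():
            p_spam = (self.words["spam"][w] + 1) / (self.total["spam"] + 2)
            p_ham = (self.words["ham"][w] + 1) / (self.total["ham"] + 2)
            s += math.log(p_spam / p_ham)
        return s

f = ToyBayesFilter()
f.train("spam", "buy cheap pills now")
f.train("ham", "meeting notes attached")
print(f.score("cheap pills") > 0)   # leans spam after this training
```

Nothing in the spec dictated these word counts; two users training the same
code on different mail end up with two different filters.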


Pascal
From: Thomas Lindgren
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3llr7ht9y.fsf@localhost.localdomain>
·········@hotmail.com (Daniel C. Wang) writes:

> Here you don't need a "formal spec". Just a clear intention. Like, I'm
> writing a spam filter. The spam filter shouldn't send all my mail to
> /dev/null. When programmers write pieces of code they had better have a clear
> intention of what it is doing, or else I might as well hire a random number
> generator to write programs for me. 

However, it also would suggest (as would seem reasonable to me)
gradations of incorrectness. Sending mail to /dev/null may be
unacceptable incorrectness for a spam filter, while displaying it in a
window one pixel off to the left may be acceptable. Add sliding scales
to suit.

Also, there are the non-functional requirements to be considered.

As an internal such requirement, I think "legibility" ranks higher
than "correctness" (in the strong sense). Many programmers actually
can explain their twisted, awful code and be convinced that it works
(it might even work) but that's little or no help. There is an awful
lot of unreadable code out there, from algorithms papers to reams of
open source to clueless undergraduates of varying brilliance. This
invites problems when the author quits, the code has to change, etc.

The way past that appears to be social rather than mathematical,
though. (Or alternatively, the "heroic" way of rewriting the hairball
from scratch, or meditating on the zen-ness of the original code and
gaining Enlightenment.)

> > Given the recent societal nostalgia for the 70s and 80s, maybe a
> > correctness revival isn't so strange after all. But the advocates seem
> > to be forgetting why it didn't catch on.
> 
> I'm curious as to why you think it didn't catch on. To me it's clearly an
> issue of economics. The informal ways of producing correct-enough software
> profitably just make more sense than the formal approaches.  The
> costs outweigh the benefits.

Recall: for these purposes, we're talking "correctness" rather than
"sensibleness".

My impression is that very few software development projects attempt to
prove correctness of their code. The only ones I can think of are a few
academic projects (though no concrete ones offhand), safety-critical programs,
and perhaps "military money".

There was a period in the 70s to mid 80s (I caught the tail end of
that wave while an undergraduate) when proofs of correctness were
touted as "the next big thing" for programmers. But basically it was
too difficult to get this right, and we instead moved on to
higher-level languages that didn't express the worst bugs and various
social processes like code walkthroughs, testing, etc.

(Testing has taken the place of correctness proofs as we know; my
favorite hypothesis is that this is because a testing plan can be
examined and agreed upon more easily than a formal proof or a formal
specification, and that passing the tests at least shows that *the
whole system* can *perform something deemed useful* by the involved
parties. Also, the proven code needs to be tested as well.)

> However, the costs are getting lower, and the need for correctness guarantees
> in everyday "consumer" software is steadily increasing.

Currently, the main issue seems to be to close all the Windows so
thoughtfully left open. I'm not sure to what extent this requires
correctness proofs per se, though.

My impression is that we are moving in the other direction: consumer
software is and will be upgraded dynamically (perhaps even silently)
to get rid of bugs, work around hardware limitations, etc., AND
improve functionality at the same time.

Consumer devices will increasingly do this too, at least partially,
perhaps wholly. (For example, I first upgraded my mobile phone
software in 1999.)

Oh well, this got a bit longer than I thought. Hope you like it :-)

Best,
                        Thomas
-- 
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin
 
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3r810x7yr.fsf@rigel.goldenthreadtech.com>
·········@hotmail.com (Daniel C. Wang) writes:

> ·········@rcn.com (Jon S. Anthony) writes:
> 
> > Joachim Durchholz <·················@web.de> writes:
> > 
> > > You can't explain why your code works? (and a program proof is nothing
> > > but "an explanation why the code works")
> > 
> > You're committing a conversion fallacy here: A program proof is an
> > explanation of why the code works, but an explanation of why the
> > code works is clearly not necessarily a proof.
> > 
> 
>  Claim 1. Every programmer who is worth employing ought to have a "rigorous
> informal argument" in their head of why the code they produced works
> correctly.

This is already very controversial from a variety of viewpoints.  For
example, what is a "rigorous" but "informal" argument?  Doesn't make
much sense to me.  Also, this business of having (let's say _any_ kind
of _argument_) "in their head" is one of the most amazing parts of all
these claims.  Especially when not one shred of evidence for it has
been produced or offered or even plausibly defended "a priori".
Lastly, the notion of "correct" is typically "ill defined" for any
kind of detailed reasoning about it.


>  Claim 2. Any *correct* "rigorous informal argument" can with some
> work be turned into a formal machine checkable proof. Doing this
> actually may catch some subtle corner cases in the rigorous informal
> argument, so bugs in the code might have to be fixed.

Strictly speaking this isn't true (Goedel and all that again).  But
with a little relaxation (less hyperbole) this probably isn't going
to be too controversial.


> Therefore it follows that for any software written that is believed to be
> correct there exists a machine checkable proof of its correctness.

This OTOH is a truly amazing conclusion.  Just because I _believe_ X
is "correct" and even evince a plausible argument for it, in no way
indicates that X _is_ "correct" let alone that there is a proof for it
in a deductive system that is machine checkable.


> Complete formal verification of software is technically feasible.

I'd say that this is true in extremely restricted (though definitely
useful) niches (where it may or may not be practical) or true in some
uninteresting theoretical sense ("given that I can actually provide a
rigorous definition of 'correct', we can proceed to formalize this and
then prove that in this deductive system over here which we have
agreed is "good enough" for what we are doing").


> I hope everything above is completely non-controversial.

You can't be serious.

/Jon
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnh37l$s5j$1@news.oberberg.net>
Jon S. Anthony wrote:

> Joachim Durchholz <·················@web.de> writes:
> 
>>You can't explain why your code works? (and a program proof is nothing
>>but "an explanation why the code works")
> 
> 
> You're committing a conversion fallacy here: A program proof is an
> explanation of why the code works, but an explanation of why the
> code works is clearly not necessarily a proof.

It is.
A proof is nothing but an explanation why some property holds.
It may be elliptic and informal - but in practice, most proofs are quite 
elliptic and informal (apart from those printed in mathematical journals).
More to the point, the reasoning inside programmers' heads is 
already quite near to formality - probably because programs are 
quite formal entities, which makes formalization of statements about them 
relatively easy, and also because programming is a sort of training in 
formal thinking. (The second reason is just a guess.)

Regards,
Jo
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3n0bnydnx.fsf@rigel.goldenthreadtech.com>
Joachim Durchholz <·················@web.de> writes:

> Jon S. Anthony wrote:
> 
> > Joachim Durchholz <·················@web.de> writes:
> >
> >>You can't explain why your code works? (and a program proof is nothing
> >>but "an explanation why the code works")
> > You're committing a conversion fallacy here: A program proof is an
> > explanation of why the code works, but an explanation of why the
> > code works is clearly not necessarily a proof.
> 
> It is.
> A proof is nothing but an explanation why some property holds.

That's clearly insufficient.  It must start with true premises and
proceed by a set of valid deductions which end with the "property".  An
explanation may lack all of these attributes.

Maybe this is a terminology conflict.  From where I come from a
"proof" is a _correct_ _justification_ within some formal context of
some proposition.  There are a lot of details in the background that
most people don't care about most of the time (the rules of inference
used, that they maintain truth, etc.)  But the essential thing is that
the "proof" starts with "truth" maintains it through whatever
reasoning that is used and ends with the thing to be demonstrated.

An "explanation" is a description for why something is, happened, is
believed, etc.  It may (and often does) involve false premises,
invalid reasoning, and even false conclusions.  All of this may be
_very_ subtle and non obvious, yet still wrong.  There are plenty of
examples in the history of mathematics.  There are vastly more (and
more obvious, less subtle) in the daily news.

People give explanations for all sorts of things, whether they are
true or not, and the reasoning exhibited may or may not be valid.

Your claim above is that _any_ explanation is a proof.  If you don't
believe this, then you did indeed commit a conversion fallacy (an
example of invalid reasoning yielding a false conclusion).


/Jon
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnhndc$5fe$2@news.oberberg.net>
Jon S. Anthony wrote:

> Maybe this is a terminology conflict.  From where I come from a
> "proof" is a _correct_ _justification_ within some formal context of
> some proposition.

That's a _formal_ proof.

We were talking about _informal_ proofs inside the programmer's head.

Regards,
Jo
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3ismaxysr.fsf@rigel.goldenthreadtech.com>
Joachim Durchholz <·················@web.de> writes:

> Jon S. Anthony wrote:
> 
> > Maybe this is a terminology conflict.  From where I come from a
> > "proof" is a _correct_ _justification_ within some formal context of
> > some proposition.
> 
> That's a _formal_ proof.
> 
> We were talking about _informal_ proofs inside the programmer's head.

Whether it is "formal" or "informal" is irrelevant.  That really
should be obvious.  You're just talking nonsense here.


And to restate it: do you believe that any explanation is a proof?
Your previous post certainly seemed to claim this, and thus
contained a conversion fallacy.


/Jon
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnk53k$e80$2@news.oberberg.net>
Jon S. Anthony wrote:

> Joachim Durchholz <·················@web.de> writes:
> 
> 
>>Jon S. Anthony wrote:
>>
>>
>>>Maybe this is a terminology conflict.  From where I come from a
>>>"proof" is a _correct_ _justification_ within some formal context of
>>>some proposition.
>>
>>That's a _formal_ proof.
>>
>>We were talking about _informal_ proofs inside the programmer's head.
> 
> 
> Whether it is "formal" or "informal" is irrelevant.  That really
> should be obvious.  You're just talking nonsense here.

Good bye.

-Jo
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m34qxuxmgf.fsf@rigel.goldenthreadtech.com>
Joachim Durchholz <·················@web.de> writes:

> Jon S. Anthony wrote:
> 
> > Joachim Durchholz <·················@web.de> writes:
> >
> >>Jon S. Anthony wrote:
> >>
> >>
> >>>Maybe this is a terminology conflict.  From where I come from a
> >>>"proof" is a _correct_ _justification_ within some formal context of
> >>>some proposition.
> >>
> >>That's a _formal_ proof.
> >>
> >>We were talking about _informal_ proofs inside the programmer's head.
> > Whether it is "formal" or "informal" is irrelevant.  That really
> > should be obvious.  You're just talking nonsense here.
> 
> Good bye.

> >And to restate it: do you believe that any explanation is a proof?
> >Your previous post certainly seemed to claim this, and thus
> >contained a conversion fallacy.

Good answer...

/Jon
From: Peter G. Hancock
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <oqznfo1udy.fsf@premise.demon.co.uk>
>>>>> Erann Gat wrote (on Sat, 25 Oct 2003 at 16:41):

    > Certainly there are examples of people (perhaps even me :-) writing code
    > with a proof in their head, but the claim that this is universally true,
    > or that it is necessary (or even desirable) for producing useful code is
    > demonstrably false.

Suppose you could capture the proof in your head and write it down formally.
It would look pretty much like the program itself, with "extreme" commenting,
showing why each bit worked.  It's obviously not necessary to have it, but it 
might be valuable -- who knows? 
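
A cheap way to write down a fragment of such an in-head proof is to state the
invariant as comments and assertions right in the code. A sketch in Python
(the function is my invented example, not anything from the thread):

```python
def binary_search(xs, target):
    """Return an index of target in sorted xs, or -1 if absent."""
    lo, hi = 0, len(xs)
    while lo < hi:
        # Invariant (the informal proof, written down): any occurrence
        # of target lies within xs[lo:hi].
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1   # xs[0:mid+1] are all < target, by sortedness
        else:
            hi = mid       # xs[mid:] are all > target, by sortedness
        assert 0 <= lo <= hi <= len(xs)   # the range only ever shrinks
    return -1              # lo == hi: the candidate range is empty
```

The assertions don't make the code correct, but they turn part of the mental
argument into something the machine re-checks on every run.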

A spam filter is an interesting case.  I think you have a kind of
weak, bureaucratic, ugly specification for the program you write.
You constantly revise it, to strengthen it (or improve the specification),
or weaken it (to have fewer false positives).

Peter Hancock
From: Christopher C. Stacy
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <u4qxwotdm.fsf@dtpq.com>
>>>>> On Sun, 26 Oct 2003 04:43:21 +0000, Peter G Hancock ("Peter") writes:
 Peter> A spam filter is an interesting case.  I think you have a kind of
 Peter> weak, bureaucratic, ugly specification for the program you write.
 Peter> You constantly revise it, to strengthen it (or improve the specification),
 Peter> or weaken it (to have fewer false positives).

You think this state of affairs is different than most "real world" programs?
From: Peter G. Hancock
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <oqptgi1gqh.fsf@premise.demon.co.uk>
>>>>> Christopher C Stacy wrote (on Sun, 26 Oct 2003 at 16:28):

>>>>> On Sun, 26 Oct 2003 04:43:21 +0000, Peter G Hancock ("Peter") writes:
    Peter> A spam filter is an interesting case.  I think you have a
    Peter> kind of weak, bureaucratic, ugly specification for the
    Peter> program you write.  You constantly revise it, to strengthen
    Peter> it (or improve the specification), or weaken it (to have
    Peter> fewer false positives).

    > You think this state of affairs is different than most "real
    > world" programs?

I think a spam filter is a case par excellence where the specification
evolves all the time, almost biologically.  To some extent that
happens with all "real world" programs. 

Peter
From: Thant Tessman
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnc6qq$qa0$1@terabinaries.xmission.com>
Matthias Blume wrote:
> Thant Tessman <·····@acm.org> writes:
> 
> 
>>I have actually had the bad fortune to work with what I call
>>"cut-and-paste" computer programmers. You look at their work, and you
>>get the strong suspicion that they have no genuine understanding *why*
>>their programs work. They somehow program through imitation and
>>experimentation, as if programming really was merely a matter of
>>getting the incantation right. I'm always stunned when these kinds of
>>programmers get anything working, but they do--at least enough to keep
>>them employed in such numbers that I've seen more than one of them.
>>
>>
>>I think these are the programmers that *don't* have the "proof" in
>>their head that Matthias is referring to.
> 
> 
> Dang!  And here I was saying they don't exist.  But surely you must
> forgive me for saying they ought to be fired...  <*duck*>

Actually, they seem to make great systems administrators.

(And this is not at all meant to slight systems administrators. It's 
just definitely a different skill-set, and one I certainly don't have.)

-thant
From: Peter G. Hancock
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <oq8yn93amw.fsf@premise.demon.co.uk>
>>>>> Thant Tessman wrote (on Fri, 24 Oct 2003 at 22:50):

    > Matthias Blume wrote:
    >> Thant Tessman <·····@acm.org> writes:
    >> 
    >>> I have actually had the bad fortune to work with what I call
    >>> "cut-and-paste" computer programmers. You look at their work, and you
    >>> get the strong suspicion that they have no genuine understanding *why*
    >>> their programs work. They somehow program through imitation and
    >>> experimentation, as if programming really was merely a matter of
    >>> getting the incantation right. I'm always stunned when these kinds of
    >>> programmers get anything working, but they do--at least enough to keep
    >>> them employed in such numbers that I've seen more than one of them.
    >>> 
    >>> 
    >>> I think these are the programmers that *don't* have the "proof" in
    >>> their head that Matthias is referring to.
    >> Dang!  And here I was saying they don't exist.  But surely you must
    >> forgive me for saying they ought to be fired...  <*duck*>

    > Actually, they seem to make great systems administrators.

You're right!  Maybe it's a difference between "making" and "fixing"?

There's a talent or skill I call "voodoo" which I think can be very
valuable when you are debugging hundreds of thousands of lines of
other people's code.  I guess that ultimately it's based on some kind
of (often cynical) insight into human nature.  (If programmers can be
called human, smiley.) I don't think it's incompatible with an approach
to programming based on reasoning things out logically. 

Although it's maybe not directly relevant to any particular point of
this fascinating exploding thread, I'd like to recommend a great
general-audience article by Leslie Lamport at

  http://research.microsoft.com/users/lamport/pubs/future-of-computing.pdf

called "The Future of Computing: Logic or Biology".  It's real interesting
about the role of superstition:

  "You may soon be able to take your laptop to a faith healer, who
   will lay his hands on the keyboard and pray for the recovery of 
   the operating system."

Peter Hancock
From: Rob Warnock
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <vUudnfITfJLM6waiXTWc-g@speakeasy.net>
Peter G. Hancock <·······@spamcop.net> wrote:
+---------------
| Thant Tessman wrote (on Fri, 24 Oct 2003 at 22:50):
| > Actually, they seem to make great systems administrators.
| 
| You're right!  Maybe it's a difference between "making" and "fixing"?
| 
| There's a talent or skill I call "voodoo" ...
+---------------

What, you're saying systems administration is a cargo cult? [1]
(Well, maybe for *poor* sysadmins...)


-Rob

[1] See <URL:http://en.wikipedia.org/wiki/Cargo_cult_programming>
    and <URL:http://www.physics.brocku.ca/etc/cargo_cult_science.html>

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnc74f$pve$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:
> Thant Tessman <·····@acm.org> writes:
> 
> 
>>I have actually had the bad fortune to work with what I call
>>"cut-and-paste" computer programmers. You look at their work, and you
>>get the strong suspicion that they have no genuine understanding *why*
>>their programs work. They somehow program through imitation and
>>experimentation, as if programming really was merely a matter of
>>getting the incantation right. I'm always stunned when these kinds of
>>programmers get anything working, but they do--at least enough to keep
>>them employed in such numbers that I've seen more than one of them.
>>
>>
>>I think these are the programmers that *don't* have the "proof" in
>>their head that Matthias is referring to.
> 
> 
> Dang!  And here I was saying they don't exist.  But surely you must
> forgive me for saying they ought to be fired...  <*duck*>


People learn best by imitating other people's behavior. It helps them to 
get accustomed to something they don't know yet. Later on they start to 
understand what they are actually doing. See 
http://alistair.cockburn.us/crystal/books/asd/asdthumbnail.htm

This is probably one of the reasons why open source models are 
successful. See http://www.dreamsongs.com/MobSoftware.html


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ad7qgtr4.fsf@comcast.net>
Thant Tessman <·····@acm.org> writes:

> I have actually had the bad fortune to work with what I call
> "cut-and-paste" computer programmers. You look at their work, and you
> get the strong suspicion that they have no genuine understanding *why*
> their programs work. They somehow program through imitation and
> experimentation, as if programming really was merely a matter of
> getting the incantation right. I'm always stunned when these kind of
> programmers get anything working, but they do--at least enough to keep
> them employed in such numbers that I've seen more than one of them.

What's even scarier is that they *think* they are programming.
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3znfpz5yg.fsf@rigel.goldenthreadtech.com>
Thant Tessman <·····@acm.org> writes:

> Jon S. Anthony wrote:
> 
> > This still indicates that you really believe that she "had a proof in
> > her head" at the time the she wrote the code.  I maintain that there
> > is absolutely no evidence for such a remarkable belief.
> 
> I have actually had the bad fortune to work with what I call
> "cut-and-paste" computer programmers. You look at their work, and you
> get the strong suspicion that they have no genuine understanding *why*
> their programs work. They somehow program through imitation and
> experimentation, as if programming really was merely a matter of
> getting the incantation right. I'm always stunned when these kind of
> programmers get anything working, but they do--at least enough to keep
> them employed in such numbers that I've seen more than one of them.

Sure, these sorts of "code monkeys" exist, but they are about as many
sigmas to the left as those with "proofs" in their head are to the
right.


> I think these are the programmers that *don't* have the "proof" in
> their head that Matthias is referring to.

If so, it doesn't do his case much good.  I still maintain that even
several sigmas to the right (i.e., excellent programmers) do not have
"proofs" in their heads when they are designing or writing programs
(or their parts).  There's just absolutely no evidence to support such
an amazing claim.

And as I stated - reasoning about things and putting things together
in what "we" might agree is a "logical" way, is no way a _proof_ in
any reasonable sense of the term.  And no, I don't mean it has to be a
_formal_ proof as in all steps show to conform to a deductive system.
I mean they have nothing like even the "informal" proofs that working
mathematicians use on a daily basis.

/Jon
From: Peter G. Hancock
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <oq4qxw39qw.fsf@premise.demon.co.uk>
>>>>> Jon S Anthony wrote (on Sat, 25 Oct 2003 at 16:34):

    > ... - reasoning about things and putting things together
    > in what "we" might agree is a "logical" way, is no way a _proof_ in
    > any reasonable sense of the term.  And no, I don't mean it has to be a
    > _formal_ proof as in all steps show to conform to a deductive system.
    > I mean they have nothing like even the "informal" proofs that working
    > mathematicians use on a daily basis.

Don't you think this "reasoning" is something like the debris of a proof? 
Each step (line of code) is something logical -- if you could capture it,
you could begin to think about putting the steps together. 

What an amazingly interesting thread!

Peter Hancock
From: Thant Tessman
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnk37o$54b$1@terabinaries.xmission.com>
Jon S. Anthony wrote:

> [...]  I still maintain that even
> several sigmas to the right (i.e., excellent programmers) do not have
> "proofs" in their heads when they are designing or writing programs
> (or their parts).  There's just absolutely no evidence to support such
> an amazing claim. [...]

When I have a bug in a program I've written, I assume that at least one 
of the following holds:

1) There is a flaw in my logic,
2) There is a flaw in the translation of my logic into code,
3) There is a disjoint between the behavior of a tool I'm putting to 
use, and my expectations of that tool.

Most importantly, I assume--or more strongly put: I know a priori--that 
the source of the bug can *always* be understood and its solution 
"formally" described given enough time and rational effort. I have no 
problem with describing the above as a "latent proof." I also have no 
problem categorizing those who work under the above conditions as 
programmers, and those who don't as "code monkeys."

-thant

-- 
America goes not abroad in search of monsters to destroy. She is
the well-wisher of the freedom and independence of all. She is
the champion and vindicator only of her own. -- John Quincy Adams
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3ekwyxq1i.fsf@rigel.goldenthreadtech.com>
Thant Tessman <·····@acm.org> writes:

> Jon S. Anthony wrote:
> 
> > [...]  I still maintain that even
> > several sigmas to the right (i.e., excellent programmers) do not have
> > "proofs" in their heads when they are designing or writing programs
> > (or their parts).  There's just absolutely no evidence to support such
> > an amazing claim. [...]
> 
> When I have a bug in a program I've written, I assume that at least
> one of the following holds:
> 
> 1) There is a flaw in my logic,
> 2) There is a flaw in the translation of my logic into code,
> 3) There is a disjoint between the behavior of a tool I'm putting to
> use, and my expectations of that tool.
> 
> Most importantly, I assume--or more strongly put: I know a
> priori--that the source of the bug can *always* be understood and its
> solution "formally" described given enough time and rational effort. I
> have no problem with describing the above as a "latent proof."

1. that's post facto. and thus not relevant
2. "latent proof" (as described) is clearly not a proof


> I also have no problem categorizing those who work under the above
> conditions as programmers, and those who don't as "code monkeys."

Sounds OK to me.  But it's irrelevant.


/Jon
From: Thant Tessman
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnk74c$5qb$1@terabinaries.xmission.com>
Jon S. Anthony wrote:

[...]

> 1. that's post facto. and thus not relevant
> 2. "latent proof" (as described) is clearly not a proof

The claim is not that a programmer has worked out a formal proof in 
their head that their program is correct. The claim is that the 
programmer works under the premise that such a proof exists. Otherwise, 
there is no logic by which a program can be described as working or not 
working.

Given that such a proof exists, the more of that proof we formalize in 
the guise of a type system, the more confident we can be of the 
correctness of our programs.
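
As a small illustration of moving part of such a proof into types, here is a hedged Python sketch (the unit names and classes are illustrative, not anything claimed in the thread): wrapping values in distinct types lets the checker, or even the runtime, mechanically reject a whole class of errors.

```python
# Illustrative sketch: a fragment of the correctness argument
# ("quantities being added have the same unit") is encoded in types,
# so violations are rejected mechanically instead of producing a
# silently wrong number. Names here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Meters:
    value: float
    def __add__(self, other: "Meters") -> "Meters":
        if not isinstance(other, Meters):
            raise TypeError("cannot add Meters to a non-Meters quantity")
        return Meters(self.value + other.value)

@dataclass(frozen=True)
class Feet:
    value: float

# Meters(1.0) + Feet(3.0) now fails loudly; the checked fragment of
# the proof has been formalized, exactly in the sense argued above.
```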

-thant
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m38yn6xmk4.fsf@rigel.goldenthreadtech.com>
Thant Tessman <·····@acm.org> writes:

> Jon S. Anthony wrote:
> 
> [...]
> 
> > 1. that's post facto. and thus not relevant
> > 2. "latent proof" (as described) is clearly not a proof
> 
> The claim is not that a programmer has worked out a formal proof in
> their head that their program is correct. The claim is that the
> programmer works under the premise that such a proof
> exists.

Now _this_ is perfectly sensible and a plausible claim for at least a
number of cases.  I would relax even this a bit by saying the
programmer works under the assumption that his _reasoning_ and
starting points are "correct".  That is, he _believes_ what he is
doing is "correct".

I'm still not convinced they _typically_ work under the impression
that a _proof_ exists that what they are doing is "correct".  Mostly
because typically "correct" is ill (or at least fuzzily) defined and
thus not even amenable to "proof" (in any rigorous sense - formal or
informal).

This is most emphatically _not_ what has been claimed by others
previously.  They have indeed made the _truly remarkable_ claim that
the programmer does indeed have a "proof in his head" _during_
construction.


> Otherwise, there is no logic by which a program can be described as
> working or not working.

Actually, there is - does it do what the consumer of it expects?  If
yes, then it is "working", otherwise not.


> Given that such a proof exists, the more of that proof we formalize
> in the guise of a type system, the more confident we can be of the
> correctness of our programs.

Sure.  But there is a big leap here.  Just because the programmer
believes what he is doing is "correct", doesn't mean that it is.  So,
the proof may not exist.

Now, as I pointed out elsewhere, the "errors" in this belief may be
very subtle and nonobvious.  So,

* the program may be considered to work by its consumer

* it actually works in practice (errors never manifest)

* clearly is useful in practice

* the programmer believes it to be "correct" and may have very strong
  reasons to believe it correct

And yet it is actually not provable (because at any plausible "formal"
level it is actually _not_ correct).


/Jon
From: Thant Tessman
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnki5v$8e0$1@terabinaries.xmission.com>
Jon S. Anthony wrote:

[...]

> This is most emphatically _not_ what has been claimed by others
> previously.  They have indeed made the _truly remarkable_ claim that
> the programmer does indeed have a "proof in his head" _during_
> construction.

A programmer works under the assumption that certain logical properties 
are associated with the bits of code they type in. I happen to think 
that those logical properties and the way they tie together to make a 
working program is not too inaccurately described as a "proof," albeit 
an informal one. And I think any decent programmer has a grasp of 
exactly those logical properties and interactions in mind as they code, 
even if they are constantly revising this "proof" as they code. If you 
don't like the word "proof" used to describe this, then fine. But that's 
not the interesting part of this discussion.


>>Otherwise, there is no logic by which a program can be described as
>>working or not working.
> 
> 
> Actually, there is - does it do what the consumer of it expects?  If
> yes, then it is "working", otherwise not.

If a customer expects a program to produce a given output when a 
specific input is provided, then they have a mental model of what it is 
that program is supposed to do. The fidelity with which the program 
manifests that mental model becomes the measure by which a program is 
said to work or not work. This model and its corresponding program are 
still amenable to proof. The fact that a program produces expected 
output for a given input only serves to increase confidence that the 
program will produce *correct* output for input the customer hasn't 
bothered to consider in advance, that is, output the customer *doesn't* 
expect. (Otherwise, why write the program at all?) This is why 
correctness is usually much more than a matter of "producing expected 
output."

On the other hand, a customer may have no such mental model. Their 
demands may be more aesthetically driven. But in this case, it is 
fallacious to describe the program's output as "expected." The output 
can be described as "satisfactory," but even in this case, the 
*programmer* is not allowed the luxury of not knowing what is really 
going on inside the program.


> Sure.  But there is a big leap here.  Just because the programmer
> believes what he is doing is "correct", doesn't mean that it is.  So,
> the proof may not exist.

No. There is always a rational explanation for why the program produces 
the results it does. (This is in fact at the heart of why computers are 
useful in the first place.) The proof one way or the other whether a 
program works exists independent of whether we as programmers have 
successfully formalized it. If a program works for all the inputs tried, 
but doesn't work for inputs not yet tried, then a proof exists for why 
the program works for the inputs tried, and a proof exists for why the 
program doesn't work for inputs not tried.

While we can never have full confidence that a program works, the more 
of the proof that a program works is made manifest in type systems, the 
more confidence we can have in the correctness of our programs. 
Programmers who are unconcerned with correctness, and only concerned 
with whether the program produces output some customer somewhere is 
willing to pay for are, in my opinion, the reason so much software is of 
such low quality. Of course the customer is the reason the industry 
exists, but I find it much more rewarding to educate the customer into 
demanding higher-quality than trying to get away with whatever just works.

-thant
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3ism9w9ir.fsf@rigel.goldenthreadtech.com>
Thant Tessman <·····@acm.org> writes:

> Jon S. Anthony wrote:
> 
> [...]
> 
> > This is most emphatically _not_ what has been claimed by others
> > previously.  They have indeed made the _truly remarkable_ claim that
> > the programmer does indeed have a "proof in his head" _during_
> > construction.
> 
> A programmer works under the assumption that certain logical
> properties are associated with the bits of code they type in. I happen
> to think that those logical properties and the way they tie together
> to make a working program is not too inaccurately described as a
> "proof," albeit an informal one.

I think it is unfortunate that a term with such a long and established
history of meaning should be "coopted" for this in some circles.  It
only serves to produce miscommunication.

> And I think any decent programmer has a grasp of exactly those
> logical properties and interactions in mind as they code, even if
> they are constantly revising this "proof" as they code. If you don't
> like the word "proof" used to describe this, then fine. But that's
> not the interesting part of this discussion.

I'd say that if this is all that is being claimed (while conflating
well-established terms in the process), then there just plain isn't
anything interesting in this discussion at all.


> >>Otherwise, there is no logic by which a program can be described as
> >>working or not working.
> > Actually, there is - does it do what the consumer of it expects?  If
> > yes, then it is "working", otherwise not.
> 
> If a customer expects a program to produce a given output when a
> specific input is provided, then they have a mental model of what it
> is that program is supposed to do. The fidelity with which the program
> manifests that mental model becomes the measure by which a program is
> said to work or not work. This model and its corresponding program are
> still amenable to proof.

What makes you think the "mental model" is in any way formalizable?
This is another point that seems to separate the camps here.  I see no
evidence that this is typically the case.  Only in certain very narrow
niches (say an ECU or some such) will this be likely.


> The fact that a program produces expected output for a given input
> only serves to increase confidence that the program will produce
> *correct* output for input the customer hasn't bothered to consider
> in advance, that is, output the customer *doesn't* expect.

I disagree with the conclusion and an intermediate assumption.  The
"*correct*" here is really "expected".  That is, when the input is
evinced, the customer will indeed expect the output.  Note: this does
not mean he _knows_ exactly what the output will be, only that it
conforms to some preconceived notion.  If this sounds "fuzzy", well,
you're right.  And that's also part of the point.

I think what you are getting at here is the situation where there is a
set of predetermined inputs and their corresponding outputs (basically
test cases, as in a test suite).  From which you then state (of
course) that there is additional input for which there was no
previously matched output and thus this output cannot have been
"expected".  But that's different from the original point.


> On the other hand, a customer may have no such mental model. Their
> demands may be more aesthetically driven. But in this case, it is
> fallacious to describe the program's output as "expected."

Why?


> > Sure.  But there is a big leap here.  Just because the programmer
> > believes what he is doing is "correct", doesn't mean that it is.  So,
> > the proof may not exist.
> 
> No. There is always a rational explanation for why the program
> produces the results it does.

A "rational explanation" is not a proof - in any reasonable
interpretation of the term "proof" as established by a long
detailed history of where and how it is used (both in formal and
informal contexts).  I think it is _highly_ unfortunate that people
want to make this conflation of "explanation" and "proof", as it
causes all sorts of misunderstanding.

Secondly, a rational explanation (even proof) of why the program does
what it does, does not mean it is "correct".  Actually, I would say that
the _typical_ case for when such detailed reasoning is used (and maybe
even made explicit), is when the programmer is trying to determine why
the program is _broken_.


> computers are useful in the first place.) The proof one way or the
> other whether a program works exists independent of whether we as
> programmers have successfully formalized it.

Hmmm, Platonist eh? :-) If we were talking about a mathematical
proposition, I would in general agree.  The problem here is that the
definition of "works" is typically not clear or precise enough to
claim anything about the existence of any proof one way or the other.
OTOH, post facto (the customer calls up and says, "hey, this thing
does ... for input ... and that's wrong because I _want_ it to do such
and such."), this statement "suddenly" becomes exactly right.


/Jon
From: Thant Tessman
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <xYTnb.2150$5M.53548@dfw-read.news.verio.net>
Jon S. Anthony wrote:

[...]

> Hmmm, Platonist eh? :-)

This discussion is going nowhere, but I feel compelled to defend my 
intellectual honor and declare publicly that I am absolutely not a 
Platonist.

-thant
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m31xsvx1h5.fsf@rigel.goldenthreadtech.com>
Thant Tessman <·····@acm.org> writes:

> Jon S. Anthony wrote:
> 
> [...]
> 
> > Hmmm, Platonist eh? :-)
> 
> This discussion is going nowhere

Agreed.


>, but I feel compelled to defend my intellectual honor and declare
>publicly that I am absolutely not a Platonist.

OK, but to be clear, I certainly in no way meant that as an insult.


/Jon
From: Daniel C. Wang
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uwuaq86nt.fsf@hotmail.com>
Thant Tessman <·····@acm.org> writes:

> Jon S. Anthony wrote:
> 
> [...]
> 
> > 1. that's post facto. and thus not relevant
> > 2. "latent proof" (as described) is clearly not a proof
> 
> The claim is not that a programmer has worked out a formal proof in
> their head that their program is correct. The claim is that the
> programmer works under the premise that such a proof
> exists. Otherwise, there is no logic by which a program can be
> described as working or not working.
> 
> Given that such a proof exists, the more of that proof we formalize in
> the guise of a type system, the more confident we can be of the
> correctness of our programs.
> 

This is such an obvious statement, I'm totally confused as to why we have to
have this long and painful discussion about it. Perhaps it just hasn't been
stated clearly enough. I think Thant's statement is about as clearly stated
as one could hope for, without having to construct a formal argument in
higher-order logic. :)
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3r80xwdvm.fsf@rigel.goldenthreadtech.com>
·········@hotmail.com (Daniel C. Wang) writes:

> Thant Tessman <·····@acm.org> writes:
> 
> > Jon S. Anthony wrote:
> > 
> > [...]
> > 
> > > 1. that's post facto. and thus not relevant
> > > 2. "latent proof" (as described) is clearly not a proof
> > 
> > The claim is not that a programmer has worked out a formal proof in
> > their head that their program is correct. The claim is that the
> > programmer works under the premise that such a proof
> > exists. Otherwise, there is no logic by which a program can be
> > described as working or not working.
> > 
> > Given that such a proof exists, the more of that proof we formalize in
> > the guise of a type system, the more confident we can be of the
> > correctness of our programs.
> > 
> 
> This is such an obvious statement.

See comments elsewhere.  Neither of these claims is really
controversial.  It's just that the first is not what was originally
claimed and for the second the issue is whether the proof exists or
even can exist.

/Jon
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnboat$l4q$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:

>>What's this have to do with "Joe wrote some code, so Joe 'had a proof
>>in his head'"????
> 
> 
> The following: The only way that the above could be false is that two
> conditions are met:
> 
>   - Joe writes a correct program.
>   - There is no proof for the correctness of that program (in the sense
>     of "there is no such proof now and it is not possible for anyone
>     to produce such a proof in the future").
> 
> I find this extremely unlikely because I believe that Joe already had
> the sketch of the proof in his head when he wrote his correct program.
> That sketch could be made into a full proof (by fleshing it out and
> possibly by correcting a few non-fatal problems that it might have).

Here is an example: Assume you are asked to write an interpreter. For 
the sake of simplicity, we assume that it should be a Lisp interpreter, 
because I can give you a complete implementation in one line of code:

(loop (print (eval (read)))) ;-)

Now, the specification states that this interpreter should never run 
into an endless loop.

There are two possible ways to respond to this spec:

- The first programmer tells the customer that he cannot write a program 
that can solve the halting problem. He even gives them a proof.

- The second programmer adds the feature that a certain combination of 
key strokes breaks out of the read-eval-print loop.

In the second case, the programmer has found a solution for the problem.
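
A minimal Python sketch of the second programmer's approach (the escape token ":quit" and the function name are illustrative assumptions, not from the post):

```python
# Hypothetical sketch: a read-eval-print loop with an escape token
# standing in for the "certain combination of key strokes".
def repl(lines):
    """Evaluate expressions until the escape token is seen."""
    results = []
    for line in lines:
        if line.strip() == ":quit":   # the escape combination
            break
        results.append(eval(line))    # note: a single eval may still diverge
    return results
```

The escape only helps *between* evaluations; if one eval never returns, the loop never sees the token. The halting problem is dodged, not solved.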

Do you think the programmer of the second solution had a "proof in her 
head" that this actually works?


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m165ie4iys.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> Matthias Blume wrote:
> 
> >>What's this have to do with "Joe wrote some code, so Joe 'had a proof
> >>in his head'"????
> > The following: The only way that the above could be false is that
> > two
> 
> > conditions are met:
> >   - Joe writes a correct program.
> 
> >   - There is no proof for the correctness of that program (in the sense
> >     of "there is no such proof now and it is not possible for anyone
> >     to produce such a proof in the future").
> > I find this extremely unlikely because I believe that Joe already
> > had
> 
> > the sketch of the proof in his head when he wrote his correct program.
> > That sketch could be made into a full proof (by fleshing it out and
> > possibly by correcting a few non-fatal problems that it might have).
> 
> Here is an example: Assume you are asked to write an interpreter. For
> the sake of simplicity, we assume that it should be a Lisp
> interpreter, because I can give you a complete implementation in one
> line of code:
> 
> 
> (loop (print (eval (read)))) ;-)
> 
> Now, the specification states that this interpreter should never run
> into an endless loop.
> 
> 
> There are two possible ways to respond to this spec:
> 
> - The first programmer tells the customer that he cannot write a
>   program that can solve the halting problem. He even gives them a
>   proof.
> 
> 
> - The second programmer adds the feature that a certain combination of
>   key strokes breaks out of the read-eval-print loop.
> 
> 
> In the second case, the programmer has found a solution for the problem.
> 
> Do you think the programmer of the second solution had a "proof in her
> head" that this actually works?

Yes.  Having been there and done *precisely* that I can assure you of
it (at least in my case).  And, btw., my proof sketch was initially
wrong because I managed to overlook one case.  Trying to write it out
in full would have revealed the problem. (The language in question was
Scheme, and my system would infinitely loop without possibility of
interrupting it using ctrl-c if you typed in ((call/cc call/cc)
(call/cc call/cc)) because it managed to send the bytecode engine onto
an internal loop without a heap limit check in it.)  However, the
proof was at least mostly correct: its flaw was not fatal: both it and
the implementation could be fixed trivially.
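
The failure mode described here, a tight internal path with no interrupt or heap-limit poll, can be sketched in miniature (hypothetical Python, not the actual Scheme system):

```python
# Hypothetical sketch of a bytecode dispatch loop that polls for
# interruption only at certain opcodes. A program whose hot loop
# never reaches a polling opcode becomes uninterruptible, just as
# ((call/cc call/cc) (call/cc call/cc)) bypassed the heap-limit check.
def run(program, interrupted, max_steps=100):
    """Dispatch opcodes; poll for interrupts only on CHECK."""
    pc = 0
    for _ in range(max_steps):
        op = program[pc]
        if op == "CHECK":             # the ctrl-c / heap-limit poll lives here
            if interrupted():
                return "interrupted"
            pc += 1
        elif op == "JUMP":            # tight internal loop: no poll
            pc = 0
        elif op == "HALT":
            return "halted"
    return "stuck"                    # bound so the sketch itself terminates
```

A program consisting only of JUMP never reaches CHECK, so no interrupt is ever observed, however hard the user mashes ctrl-c.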

Anyway, all I'm saying is that we all think about why the code we
write works.  In those cases where it actually does work, I claim that
it would be possible to make the informal reasoning of the programmer
precise in the sense of a formal proof.

I have no idea why this is controversial.

Matthias
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnbq28$ujm$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:

> Anyway, all I'm saying is that we all think about why the code we
> write works.  In those cases where it actually does work, I claim that
> it would be possible to make the informal reasoning of the programmer
> precise in the sense of a formal proof.
> 
> I have no idea why this is controversial.

This is getting really tedious...

Here is a paper that might give you some ideas: 
http://www.ageofsig.org/people/bcsmith/print/smith-foundtns.pdf

(The link seems to be broken at the moment - damn, obviously they 
haven't carried out a formal proof that the web always works. Anyway, 
the paper is "The Foundations of Computing" by Brian Cantwell Smith.)

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Ray Blaak
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uptgmmkjp.fsf@STRIPCAPStelus.net>
Matthias Blume <····@my.address.elsewhere> writes:
> Anyway, all I'm saying is that we all think about why the code we
> write works.  In those cases where it actually does work, I claim that
> it would be possible to make the informal reasoning of the programmer
> precise in the sense of a formal proof.
> 
> I have no idea why this is controversial.

Because of the term "formal". Just how formal was your formal proof? What
formalism were you using? What specification language did you use and how did
you verify the correctness of your proof?

Or did you simply make it "somewhat more formal" than the sketch you started
with? Certainly that is useful to do, perhaps even necessary, but that is not
the same as having a formal proof.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2410031400190001@k-137-79-50-101.jpl.nasa.gov>
Matthias Blume <····@my.address.elsewhere> writes:
> Anyway, all I'm saying is that we all think about why the code we
> write works.  In those cases where it actually does work, I claim that
> it would be possible to make the informal reasoning of the programmer
> precise in the sense of a formal proof.
> 
> I have no idea why this is controversial.

Because it's wrong.  :-)

We do not all think about why the code we write works.  More often than
not working code is produced by a generate-and-test algorithm.

E.
From: Mark Carroll
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3Ug*4OM5p@news.chiark.greenend.org.uk>
In article <··································@k-137-79-50-101.jpl.nasa.gov>,
Erann Gat <·················@jpl.nasa.gov> wrote:
>
>Matthias Blume <····@my.address.elsewhere> writes:
>> Anyway, all I'm saying is that we all think about why the code we
>> write works.  In those cases where it actually does work, I claim that
(snip)
>We do not all think about why the code we write works.  More often than
>not working code is produced by a generate-and-test algorithm.

Generally I find it faster to figure out what would work, satisfy
myself that it would (in an informal proof type of way), and then
write the code. At least, if I have to resort to generate-and-test, I
use the results of those experiments to help me figure out that
informal proof before moving on, otherwise I'll be beset by gnawing
doubts until I do.

Then again, as I've noted before, I really do benefit from static
typing, and it's languages like Modula-3 that have worked well for me
in getting larger, complex programs working well. But, I have friends
for whom the opposite is true. One's mileage varies, I submit.

-- Mark
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2410031705570001@k-137-79-50-101.jpl.nasa.gov>
In article <·········@news.chiark.greenend.org.uk>, Mark Carroll
<·····@chiark.greenend.org.uk> wrote:

> In article <··································@k-137-79-50-101.jpl.nasa.gov>,
> Erann Gat <·················@jpl.nasa.gov> wrote:
> >
> >Matthias Blume <····@my.address.elsewhere> writes:
> >> Anyway, all I'm saying is that we all think about why the code we
> >> write works.  In those cases where it actually does work, I claim that
> (snip)
> >We do not all think about why the code we write works.  More often than
> >not working code is produced by a generate-and-test algorithm.
> 
> Generally I find it faster to figure out what would work, satisfy
> myself that it would (in an informal proof type of way), and then
> write the code. At least, if I have to resort to generate-and-test, I
> use the results of those experiments to help me figure out that
> informal proof before moving on, otherwise I'll be beset by gnawing
> doubts until I do.

I think it depends on the problem you're trying to solve, and I think it's
a continuum, not a dichotomy.  For writing my spam filter the process I
followed was a lot more like generate-and-test than it was like a proof of
correctness.  (The whole notion of "correctness" with regards to spam
filters feels to me like a type error.)

As in all things Gat's first law applies.

E.
From: Mark Carroll
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <5sy*odK6p@news.chiark.greenend.org.uk>
In article <·········@news.chiark.greenend.org.uk>,
Mark Carroll  <·····@chiark.greenend.org.uk> wrote:
(snip)
>Generally I find it faster to figure out what would work, satisfy
>myself that it would (in an informal proof type of way), and then
>write the code. At least, if I have to resort to generate-and-test, I
(snip)

FWIW I've been doing some self-monitoring while programming lately and
AFAICT much of my "informal proof" type of thinking seems to be along
the lines of trying to see if all the possible cases are covered in a
manner that satisfies some set of invariants, where I have enough of a
grasp on what the cases and invariants are that I feel pretty sure
I've covered my bases well.

Where that is on the scale between generate-and-test and formal
correctness proofs, I'll let others judge. (-:

-- Mark
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bob8td$pue$1@news.oberberg.net>
Mark Carroll wrote:

> In article <·········@news.chiark.greenend.org.uk>,
> Mark Carroll  <·····@chiark.greenend.org.uk> wrote:
> (snip)
> 
>>Generally I find it faster to figure out what would work, satisfy
>>myself that it would (in an informal proof type of way), and then
>>write the code. At least, if I have to resort to generate-and-test, I
> 
> (snip)
> 
> FWIW I've been doing some self-monitoring while programming lately and
> AFAICT much of my "informal proof" type of thinking seems to be along
> the lines of trying to see if all the possible cases are covered in a
> manner that satisfies some set of invariants, where I have enough of a
> grasp on what the cases and invariants are that I feel pretty sure
> I've covered my bases well.
> 
> Where that is on the scale between generate-and-test and formal
> correctness proofs, I'll let others judge. (-:

It's quite near to a formal proof.
Generate-and-test actually is a formal proof if you generate all the 
cases; exhaustiveness is orthogonal to the formality of a proof.
BTW formality just means "everything is worded in a precise and 
unambiguous way", something that programmers are accustomed to anyway. 
Formality doesn't necessarily mean funny symbols all over the place :-)

Regards,
Jo
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bne0ae$94l$2@news.oberberg.net>
Ray Blaak wrote:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
>>Anyway, all I'm saying is that we all think about why the code we
>>write works.  In those cases where it actually does work, I claim that
>>it would be possible to make the informal reasoning of the programmer
>>precise in the sense of a formal proof.
>>
>>I have no idea why this is controversial.
> 
> Because of the term "formal". Just how formal was your formal proof?

It's formal enough if you rewrite the reasoning as a series of assertions.
I'm pretty sure that most programmers have a good grasp of boolean 
statements, right?
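
For instance, the informal argument for a trivial routine can be spelled
out as runnable boolean assertions (a hypothetical Python sketch; the
`clamp` function and its invariants are illustrative, not from the thread):

```python
def clamp(x, lo, hi):
    # Precondition: the bounds must be ordered.
    assert lo <= hi
    result = max(lo, min(x, hi))
    # Postconditions: the informal reasoning, rewritten as booleans.
    assert lo <= result <= hi                # result lies within the bounds
    assert result == x or x < lo or x > hi   # clamped only when out of range
    return result

print(clamp(15, 0, 10))  # -> 10
```

Each assertion is one step of the "proof" made precise enough to check.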

Regards,
Jo
From: Ray Blaak
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uad7ni3l4.fsf@STRIPCAPStelus.net>
Joachim Durchholz <·················@web.de> writes:
> Ray Blaak wrote:
> > Because of the term "formal". Just how formal was your formal proof?
> 
> It's formal enough if you rewrite the reasoning as a series of assertions.
> I'm pretty sure that most programmers have a good grasp of boolean 
> statements, right?

A series of assertions only makes sense against a specification of
correctness written in the same formalism (usually a form of boolean logic).

E.g. a proof of an addition algorithm needs to specify what (its version of)
addition is first, as well as to show that the implementation indeed satisfies
such a specification. It is not enough to simply say "it does addition"
(e.g. consider size or precision limitations, execution time, etc.). 

[Note, execution time is a particularly nasty gotcha. If your specification
doesn't even mention it, then the simplest infinite loop is a conforming
implementation, since you cannot prove it will ever fail. You have to also say
when you want your results, at least a little.]
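
To make the addition example concrete: the spec has to pin down its
limitations rather than just say "it does addition" (a hedged Python
sketch; the 32-bit wrap-around spec is an illustrative assumption):

```python
# Specification: add32(a, b) returns (a + b) mod 2**32 for integers
# 0 <= a, b < 2**32.  The size limit is part of the spec, not an afterthought.
M = 2 ** 32

def add32(a, b):
    assert 0 <= a < M and 0 <= b < M   # the stated input domain
    return (a + b) % M                 # wrap-around is specified, not a bug

print(add32(M - 1, 1))  # -> 0, because the spec says so
```

Against the bare claim "it does addition", the wrap-around would count as
a bug; against this specification, it is provably correct behaviour.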

If you don't have formally specified what it is you are trying to prove, then
you are not applying/generating/deriving your proof formally.

Such specifications are difficult to formulate and it is yet another problem
to make them bug free, to know they reasonably or usefully match reality, etc.

Now, one's informal, somewhat rigorous proof might indeed be quite useful and
quite good enough, but that is not the same as being formal.

The (idealized) litmus test for a formal proof is that you can give it to a
machine to verify.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Ray Blaak
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uu15ymla1.fsf@STRIPCAPStelus.net>
Matthias Blume <····@my.address.elsewhere> writes:
> The following: The only way that the above could be false is that two
> conditions are met:
> 
>   - Joe writes a correct program.
>   - There is no proof for the correctness of that program (in the sense
>     of "there is no such proof now and it is not possible for anyone
>     to produce such a proof in the future").
> 
> I find this extremely unlikely because I believe that Joe already had
> the sketch of the proof in his head when he wrote his correct program.
> That sketch could be made into a full proof (by fleshing it out and
> possibly by correcting a few non-fatal problems that it might have).

This is the heart of "our" disagreement with you. The sketch of the proof
(which is necessarily informal) could NOT necessarily be made into a full
proof.

The sketch of the proof could (should) be readily fleshed out enough to
convince another human being of the correctness. The sketch of the proof might
often in principle be able to be developed into a formal proof.

It is just that it cannot be certain that you *always* can do it. In fact it
is usually really really difficult to do it at all, just because of the
tediousness of doing everything formally.

The key here is "formal". As long as you are saying *formal* proofs always
can be found, I disagree.

People's reasoning steps are not restricted to formal logic. People's talents are
in fact at glossing over the details. Formal methods, on the other hand, are
precisely about verifying all of the details, which is what makes them so
difficult.

Note also, that "there is no proof for the correctness" should instead be
stated as "it is not known if there is a proof for the correctness".

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1oew62vck.fsf@tti5.uchicago.edu>
Ray Blaak <········@STRIPCAPStelus.net> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> > The following: The only way that the above could be false is that two
> > conditions are met:
> > 
> >   - Joe writes a correct program.
> >   - There is no proof for the correctness of that program (in the sense
> >     of "there is no such proof now and it is not possible for anyone
> >     to produce such a proof in the future").
> > 
> > I find this extremely unlikely because I believe that Joe already had
> > the sketch of the proof in his head when he wrote his correct program.
> > That sketch could be made into a full proof (by fleshing it out and
> > possibly by correcting a few non-fatal problems that it might have).
> 
> This is the heart of "our" disagreement with you. The sketch of the proof
> (which is necessarily informal) could NOT necessarily be made into a full
> proof.

Indeed, we have to agree to disagree here.  A proof (however informal)
can be called a proof only if -- at least in principle -- it could be
formalized.

> The sketch of the proof could (should) be readily fleshed out enough to
> convince another human being of the correctness. The sketch of the proof might
> often in principle be able to be developed into a formal proof.
> 
> It is just that it cannot be certain that you *always* can do it. In fact it
> is usually really really difficult to do it at all, just because of the
> tediousness of doing everything formally.

I don't dispute the fact that it is exceedingly difficult.

> People's reasoning steps are not restricted formal logic. People's
> talents are in fact at glossing over the details. Formal methods, on
> the other hand, are precisely about verifying all of the details,
> which is what makes them so difficult.

Sure.  But if the humans claim to prove things informally even though
there is no formal proof for them, then this is nothing more than
black magic.  I don't believe in black magic.  If there is an informal
proof that deserves the name, then there is also a formal one.

> Note also, that "there is no proof for the correctness" should instead be
> stated as "it is not known if there is a proof for the correctness".

No, no, no!  I was specifically talking about the existence of proofs,
not whether or not we know about that existence.

To state my belief one last time succinctly:

Conjecture:

   If a human writes a correct (in some formal sense) program that has
   a practical purpose (**), then there is a (formal) proof of
   correctness (in the same formal sense of "correct") for that
   program.

(**) Without this disclaimer one could easily disprove this conjecture
as follows: Systematically write down *all* programs.  Among them
there will be correct programs (infinitely many, in fact) for which
there is no correctness proof. qed.

By the way, it is impossible to give a counterexample to the above
conjecture.  Proof: A counterexample, by definition, is some program
which we can prove to be both correct and impossible to prove correct.
Unless you can (non-constructively) show the existence of a
counterexample some other way, you cannot disprove the conjecture.
(The most obvious non-constructive way of showing the existence of a
counterexample I avoided with that (**) condition.)

Needless to say, I cannot *prove* the conjecture either, so I guess it
will forever have to be that: a conjecture.

Matthias
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2410031512360001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@tti5.uchicago.edu>, Matthias Blume
<····@my.address.elsewhere> wrote:

> To state my belief one last time succinctly:
> 
> Conjecture:
> 
>    If a human writes a correct (in some formal sense) program that has
>    a practical purpose (**), then there is a (formal) proof of
>    correctness (in the same formal sense of "correct") for that
>    program.

This is very different from your original claim, which was that the person
writing the code has a proof of correctness *in their head* when they
wrote the code.


> By the way, it is impossible to give a counterexample to the above
> conjecture.

That's because it's not a conjecture, it's a definition of the word
"useful".  You had to throw that disclaimer in there precisely to rule out
correct programs that have no proofs of correctness, the Goedel programs. 
What you are really conjecturing (now -- your position keeps changing) is
that no Goedel program (that is, no correct program for which a proof does
not exist) is useful.  But you haven't defined "useful" so your (latest)
"conjecture" is vacuous.

BTW, it's easy to give a counterexample to your original conjecture. 
Write a provably correct program.  Give the source code to someone else
and have them copy it.  (Or, if you prefer, describe the algorithm to
someone and have them generate new code based on that description.)  That
second person has now written a provably correct program but (most likely)
without a proof of correctness in their head at the time they wrote it.

It is also possible to write correct programs using generate-and-test
methods (or evolutionary programming methods) without ever having a proof
of correctness in your head.

E.
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3vfqdz4pv.fsf@rigel.goldenthreadtech.com>
·················@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@tti5.uchicago.edu>, Matthias Blume
> <····@my.address.elsewhere> wrote:
> 
> > To state my belief one last time succinctly:
> > 
> > Conjecture:
> > 
> >    If a human writes a correct (in some formal sense) program that has
> >    a practical purpose (**), then there is a (formal) proof of
> >    correctness (in the same formal sense of "correct") for that
> >    program.
> 
> This is very different from your original claim, which was that the person
> writing the code has a proof of correctness *in their head* when they
> wrote the code.

Exactly.  Modulo the fact that "in some formal sense" is ill defined
(i.e., handwaving) I have no problem with the above statement as a
proposition.  The problem I do have with it is that it just punts the
whole issue off to what constitutes "correctness".  Or maybe more to
the point it suggests that "correctness" (often?, typically? always?)
has a rigorously "formal" aspect to it.


/Jon
From: Ray Blaak
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ullramfo1.fsf@STRIPCAPStelus.net>
Matthias Blume <····@my.address.elsewhere> writes:
> To state my belief one last time succinctly:
> 
> Conjecture:
> 
>    If a human writes a correct (in some formal sense) program that has
>    a practical purpose (**), then there is a (formal) proof of
>    correctness (in the same formal sense of "correct") for that
>    program.
[...]
> By the way, it is impossible to give a counterexample to the above
> conjecture.  Proof: A counterexample, by definition, is some program
> which we can prove to be both correct and impossible to prove correct.
> Unless you can (non-constructively) show the existence of a
> counterexample some other way, you cannot disprove the conjecture.

But we cannot usually give a formal sense of "correct" either, but instead
only an informal or at most approximate one. And thus it all unravels.

You end up expressing the correctness as a (very long) statement in some
formalism. And that's where you run into Goedel limitations: not all
statements are provable in a given formalism.

You are saying that *in principle* it could be done. I am saying that *in
principle* it could not, because correctness cannot be nailed down properly.

Now in practice, people can and do (and should do) "good enough" proofs.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <65iegtfr.fsf@comcast.net>
Matthias Blume <····@my.address.elsewhere> writes:

> To state my belief one last time succinctly:
>
> Conjecture:
>
>    If a human writes a correct (in some formal sense) program that has
>    a practical purpose (**), then there is a (formal) proof of
>    correctness (in the same formal sense of "correct") for that
>    program.
>
> (**) Without this disclaimer one could easily disprove this conjecture
> as follows: Systematically write down *all* programs.  Among them
> there will be correct programs (infinitely many, in fact) for which
> there is no correctness proof. qed.

Just as a thought experiment:  consider the space of *all* programs of
a size no larger than, say, 10 megabytes of ascii source.  What
is the ratio of `correct + provable' programs to `correct +
non-provable' programs?

Couldn't I intend to write a correct program but accidentally write
one that is correct and unprovable?
From: Don Geddis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87he1x3pss.fsf@sidious.geddis.org>
> Matthias Blume <····@my.address.elsewhere> writes:
> > Conjecture:
> >    If a human writes a correct (in some formal sense) program that has
> >    a practical purpose (**), then there is a (formal) proof of
> >    correctness (in the same formal sense of "correct") for that
> >    program.

·············@comcast.net writes:
> Just as a thought experiment:  consider the space of *all* programs of
> a size no larger than, say, 10 megabytes of ascii source.  What
> is the ratio of `correct + provable' programs to `correct +
> non-provable' programs?

Just to add another wrinkle to this (silly) debate: it doesn't make any sense
to talk about "provable" in the absence of some specific proof system.
Nothing is non-provable in the abstract.  The various Godel constructions
rely on knowing the fixed proof system first.

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
I learned from a young age that books can be your friends.  But guess what can
be your enemy: a globe.
	-- Deep Thoughts, by Jack Handey [1999]
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ismcyeku.fsf@comcast.net>
Don Geddis <···@geddis.org> writes:

>> Matthias Blume <····@my.address.elsewhere> writes:
>> > Conjecture:
>> >    If a human writes a correct (in some formal sense) program that has
>> >    a practical purpose (**), then there is a (formal) proof of
>> >    correctness (in the same formal sense of "correct") for that
>> >    program.
>
> ·············@comcast.net writes:
>> Just as a thought experiment:  consider the space of *all* programs of
>> a size no larger than, say, 10 megabytes of ascii source.  What
>> is the ratio of `correct + provable' programs to `correct +
>> non-provable' programs?
>
> Just to add another wrinkle to this (silly) debate: it doesn't make any sense
> to talk about "provable" in the absence of some specific proof system.
> Nothing is non-provable in the abstract.  The various Godel constructions
> rely on knowing the fixed proof system first.

I'm pretty flexible.  Let's start with the Peano axioms.
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bne14u$9f2$1@news.oberberg.net>
·············@comcast.net wrote:
> Couldn't I intend to write a correct program but accidentally write
> one that is correct and unprovable?

A program that's accidentally correct is unmaintainable, since it's 
unlikely that the next programmer who adds a modification will preserve 
the accidental correctness property.
(Actually, on rare occasions, I had to maintain such software.)

Note that accidentally correct software properly includes unprovably 
correct software.

Regards,
Jo
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <vfqdcjp0.fsf@comcast.net>
Joachim Durchholz <·················@web.de> writes:

> ·············@comcast.net wrote:
>> Couldn't I intend to write a correct program but accidentally write
>> one that is correct and unprovable?
>
> A program that's accidentally correct is unmaintainable, since it's
> unlikely that the next programmer who adds a modification will
> preserve the accidental correctness property.

Why do you assert this?

Consider public key cryptography.  It rests upon the difficulty of
factoring.  No one has proven that factoring is difficult, but many
have tried.  No one has shown that factoring is easy, either.

It may not be provable one way or the other; it could be an unrecognized
axiom.

Yet it is not very likely that a programmer who adds a modification
will suddenly discover a proof that factoring is easy and thus break
the program.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m2smlhrn9n.fsf@hanabi-air.shimizu.blume>
·············@comcast.net writes:

> Joachim Durchholz <·················@web.de> writes:
> 
> > ·············@comcast.net wrote:
> >> Couldn't I intend to write a correct program but accidentally write
> >> one that is correct and unprovable?
> >
> > A program that's accidentally correct is unmaintainable, since it's
> > unlikely that the next programmer who adds a modification will
> > preserve the accidental correctness property.
> 
> Why do you assert this?
> 
> Consider public key cryptography.  It rests upon the difficulty of
> factoring.  No one has proven that factoring is difficult, but many
> have tried.  No one has shown that factoring is easy, either.

Good example.  One better not make "is unbreakable" part of the
correctness criterion.  But the correctness criterion could be
"implements RSA faithfully".  And the programmer who wrote the code is
probably not thinking "I write this loop here because it makes the
code unbreakable".  She probably thinks "I write this loop here
because it makes my program correspond to what's in those nice papers
by Rivest et al."

Matthias
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnemb6$ir6$1@news.oberberg.net>
·············@comcast.net wrote:

> Joachim Durchholz <·················@web.de> writes:
> 
> 
>>·············@comcast.net wrote:
>>
>>>Couldn't I intend to write a correct program but accidentally write
>>>one that is correct and unprovable?
>>
>>A program that's accidentally correct is unmaintainable, since it's
>>unlikely that the next programmer who adds a modification will
>>preserve the accidental correctness property.
> 
> 
> Why do you assert this?
> 
> Consider public key cryptography.  It rests upon the difficulty of
> factoring.  No one has proven that factoring is difficult, but many
> have tried.  No one has shown that factoring is easy, either.
> 
> It may not be provable one way or the other; it could be an unrecognized
> axiom.
> 
> Yet it is not very likely that a programmer who adds a modification
> will suddenly discover a proof that factoring is easy and thus break
> the program.

Cryptography is rather atypical.
Most code is written for commercial purposes, and is more-or-less 
straightforward formalization of informal statements.
Actually, for such code, I don't want to prove every single detail of 
the code, I want to make sure that the code has the properties that the 
customer wanted... which is (usually) a small subset of what the code 
actually does. (Yes, I have been working on several projects in the 
commercial area. Most of them were seriously underspecified. Actually, 
formal methods would help pinpointing the underspecified areas more 
exactly IMHO... but that can of worms is on a tangent that would lead us 
far away from an already-far-too-bloated thread.)

Regards,
Jo
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <znfpxadw.fsf@comcast.net>
Joachim Durchholz <·················@web.de> writes:

> Cryptography is rather atypical.
> Most code is written for commercial purposes, and is more-or-less
> straightforward formalization of informal statements.

I still don't know why you say this.  Do you have access to most of
the code written for commercial purposes?  Is there some scientific
sampling of commercial code?  Is this industry wide?  Wouldn't some
industries have different needs than others (and thus use different
coding styles)?  

A lot of commercial code uses databases.  SQL is *anything* but
straightforward formalization.  A lot of commercial code uses
transactions.  They aren't straightforward either.  A lot of
commercial code uses asynchronous communication.  That's not very
straightforward.

I've been working in industry and academia for some time and I've seen
only a minuscule amount of code compared to the amount that exists.
The only statement that I'm comfortable making about most code is that
I haven't seen it.
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnh4cn$smc$2@news.oberberg.net>
·············@comcast.net wrote:
> 
> A lot of commercial code uses databases.  SQL is *anything* but
> straightforward formalization.

If you really think that, you've been missing a lot of experience with 
formal methods. SQL has a very solid formal foundation.

Regards,
Jo
From: Don Geddis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <873cdf394m.fsf@sidious.geddis.org>
> ·············@comcast.net wrote:
> > A lot of commercial code uses databases.  SQL is *anything* but
> > straightforward formalization.

Joachim Durchholz <·················@web.de> writes:
> If you really think that, you've been missing a lot of experience with formal
> methods. SQL has a very solid formal foundation.

No it doesn't.  The common negation-as-failure operation results in very
dubious semantics, as related to the assertions of the base facts in the
database.

(SQL is basically an inference engine, and negation-as-failure is similar
to the issue we've been talking about elsewhere in this thread, namely the
confusion between proving false vs. not being able to prove true.  Those
aren't the same thing, and treating them as the same leads to lots of subtle
problems.)
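
One concrete instance of that confusion is how SQL's three-valued logic
handles NULL under NOT IN: "can't prove it's in the set" silently swallows
"it's provably not in the set" (an illustrative sketch using Python's
standard sqlite3 module; the table and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE known (n INTEGER);
    INSERT INTO known VALUES (1), (2), (NULL);  -- one unknown base fact
""")

# "3 is not in known" cannot be proven true once a NULL is present:
# 3 <> NULL is *unknown*, so NOT IN is unknown and the row is dropped.
absent = conn.execute(
    "SELECT 3 WHERE 3 NOT IN (SELECT n FROM known)").fetchall()
print(absent)  # -> []

# Filtering the unknowns out restores the intuitively expected answer.
filtered = conn.execute(
    "SELECT 3 WHERE 3 NOT IN (SELECT n FROM known WHERE n IS NOT NULL)"
).fetchall()
print(filtered)  # -> [(3,)]
```

The first query returns no row at all, which is neither "3 is absent" nor
"3 is present", exactly the false-vs-unprovable blur described above.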

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
Seen on the door to a light-wave lab:
"Do not look into laser with remaining good eye."
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <d6cj8zjg.fsf@comcast.net>
Joachim Durchholz <·················@web.de> writes:

> ·············@comcast.net wrote:
>> A lot of commercial code uses databases.  SQL is *anything* but
>> straightforward formalization.
>
> If you really think that, you've been missing a lot of experience with
> formal methods. SQL has a very solid formal foundation.

I know that SQL is based on formal foundations.
You stated:

> Most code is written for commercial purposes, and is more-or-less
> straightforward formalization of informal statements.

Taking informal statements and turning them into straightforward SQL
frequently causes very serious performance problems.  Taking informal
statements and turning them into rather convoluted SQL that performs
well is how many people make their living.  Apparently it is the case
that straightforward formalization of informal statements is not a
characteristic of some code that is written for commercial purposes.
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnhnku$5fe$4@news.oberberg.net>
·············@comcast.net wrote:

> Joachim Durchholz <·················@web.de> writes:
> 
> 
>>·············@comcast.net wrote:
>>
>>>A lot of commercial code uses databases.  SQL is *anything* but
>>>straightforward formalization.
>>
>>If you really think that, you've been missing a lot of experience with
>>formal methods. SQL has a very solid formal foundation.
> 
> I know that SQL is based on formal foundations.
> You stated:
> 
>>Most code is written for commercial purposes, and is more-or-less
>>straightforward formalization of informal statements.
> 
> Taking informal statements and turning them into straightforward SQL
> frequently causes very serious performance problems.  Taking informal
> statements and turning them into rather convoluted SQL that performs
> well is how many people make their living.  Apparently it is the case
> that straightforward formalization of informal statements is not a
> characteristic of some code that is written for commercial purposes.

Agreed, but SQL equivalences should really be automatically checkable - 
SQL isn't even Turing complete.

Regards,
Jo
From: Garry Hodgson
Subject: Re: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <2003102710171067267854@k2.sage.att.com>
Joachim Durchholz <·················@web.de> wrote:

> ·············@comcast.net wrote:
> > Couldn't I intend to write a correct program but accidentally write
> > one that is correct and unprovable?
> 
> A program that's accidentally correct is unmaintainable, since it's 
> unlikely that the next programmer who adds a modification will preserve 
> the accidental correctness property.
> (Actually, on rare occasions, I had to maintain such software.)

i'll never forget the look on a friends face when, several decades ago,
he was being introduced to the code he'd be maintaining on a new (for him)
project.  he was told, "this routine does xxx.  don't ever, for any reason,
touch this routine.  you can't even change the comments, or the whole
system won't work anymore."  the sad thing was, they were serious.
and correct.


----
Garry Hodgson, Technology Consultant, AT&T Labs

Be happy for this moment.
This moment is your life.
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnjldr$7el$1@news.oberberg.net>
Garry Hodgson wrote:

> [...] "this routine does xxx.  don't ever, for any reason, touch this
> routine.  you can't even change the comments, or the whole system
> won't work anymore."  the sad thing was, they were serious. and
> correct.

Luckily, I never encountered such a program. A friend of mine did... and
it was written in Pascal, so it's possible to write unmaintainable code
in /any/ language...
There's one difference though: months after he had first seen that
function, he found out why changes would break the system, and corrected
the issue. (Unfortunately, I forgot the details.)

Regards,
Jo
From: Ray Blaak
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uvfqfd88v.fsf@STRIPCAPStelus.net>
Matthias Blume <····@my.address.elsewhere> writes:
> That's not what I said.  I said that the programmer has a proof in her
> head. (At least she thinks she does.)  My point was that since she has
> a proof, the proof obviously *exists* and *could* be written down and
> *could* be statically verified if one only went to the trouble of
> doing so.  (And again, even this is obviously much easier said than
> done.)

Much much easier said than done. So much so that practical formal methods are
not currently useful.

Still, one can *attempt* to program in a proof-like style, whereby you code
according to the assumptions you know, reduce the number of exception cases,
etc. 

That is, the attempt, the effort to think about such things even informally
gives better code. 

Prototyping code out of ignorance is still useful though, since it lets you
discover requirements and assumptions better.

> The point is that even though we all know that we cannot prove all correct
> programs correct in general, we can do so for the programs we actually write
> (which is a proper subset of the set of all correct programs).

We can't do so, at least if you insist on being formal. It is too difficult in
general.

> Anyone who claims his program is correct but it cannot be proven correct
> must face the question "How do you know?"

In the end they need to be able to give a convincing argument, but that is not
the same as being formal. There are many ways of convincing humans.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <egy8varhzd.fsf@sefirot.ii.uib.no>
Ray Blaak <········@STRIPCAPStelus.net> writes:

> Matthias Blume <····@my.address.elsewhere> writes:

>> That's not what I said.  I said that the programmer has a proof in her
>> head. (At least she thinks she does.)  

Translation: she needs to have an idea of what kinds of input a
function can expect, and what kind of outputs it should generate, and
some kind of rough mental sketch on how the code goes about ensuring
that.  (That wasn't so hard, was it?)

> Much much easier said than done. So much so that practical formal
> methods are not currently useful.

I wouldn't say "not useful", but perhaps overkill in many cases.  Note
that static typing is the lightweight version of this, and handles the
first two points: ensuring that input and output belong to certain
categories.  If the type system is used a bit actively, you can IMHO
do a lot here.

You can of course use it in cases when you don't know exactly how to
solve a subproblem, but you do know its type. Just throw in:

        solve_subproblem :: (type declaration here)
        solve_subproblem = undefined

to defer it to later, while type-checking the rest of your program.
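The same deferral trick can be sketched in Java (a hypothetical analogue, not from the original post; all names here are made up): fix the signature now, throw from the body, and let the rest of the program compile against the declared type.

```java
// Hypothetical sketch: the Java analogue of deferring a typed subproblem.
// The signature is fixed and checked now; the body is filled in later.
public class Solver {
    static int solveSubproblem(int[] input) {
        // deferred, like Haskell's `undefined`: compiles fine, fails if called
        throw new UnsupportedOperationException("not yet implemented");
    }

    static int solve(int[] input) {
        // the rest of the program already typechecks against the signature
        return solveSubproblem(input) + 1;
    }
}
```

The point is the same as in Haskell: the type of the hole is pinned down early, so everything around it can be checked before the hole is filled.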

BTW, I'm not convinced you could successfully remove the type system
from a language like Haskell -- how would you handle, e.g., partial
application of functions?

I find static typing invaluable when refactoring; perhaps I'm denser
than most programmers or something, but I seem simply unable to
rearrange blocks of code without making errors.  The occasionally
quoted "type correct means correct" is a grave overstatement when
developing code, but when refactoring, it is almost always true.

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Alex McGuire
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn9fam$ko0$1@news-reader1.wanadoo.fr>
Matthias Blume wrote:
<snip>

>>>>>Actually, viewed from a certain angle, yes.  Every programmer who
>>>>>writes a program ought to have a proof that the program is correct in
>>>>>her mind.  (If not, fire her.)  It ought to be possible to formalize
>>>>>that proof and to statically check it.
>>>>>          
>>>>>
<snip>

>That's not what I said.  I said that the programmer has a proof in her
>head. (At least she thinks she does.)  My point was that since she has
>a proof, the proof obviously *exists* and *could* be written down and
>*could* be statically verified if one only went to the trouble of
>doing so.  (And again, even this is obviously much easier said than
>done.)
>
<snip>

>
>  
>
>> (Excecpt, of course,
>>that all you can really prove is that it doesn't have any type errors,
>>which is not the same thing.)
>>    
>>
>
>No, I wasn't thinking of contemporary type errors.  I was thinking of
>a real proof of correctness, in all glory.  The point is that even
>though we all know that we cannot prove all correct programs correct
>in general, we can do so for the programs we actually write (which is
>a proper subset of the set of all correct programs).  Anyone who
>claims his program is correct but it cannot be proven correct must
>face the question "How do you know?"
>  
>

I'm not sure what you mean by a proof here. Do you mean proof as in a 
formal mathematical proof? Formally proving correctness of programs is 
very difficult, even for a few lines of code; it would not be practical 
for much larger programs. A prerequisite would be a formal description 
of the requirements, which I have never seen from a client, nor do I 
want to. To clarify things, can you give me a formal proof that the 
following Java code correctly sums an array of integers?

public double sumArray(int[] array){
    int sum = 0;
    for (int i = 0; i < array.length; ++i){
       sum += array[i];
    }
    return sum;
}

I don't believe anyone can guarantee the correctness of any substantial 
work, nor should they be expected to. Obviously this is not to say that 
people shouldn't do their best to reduce the number of bugs in their 
work. For me, unit testing and constant refactoring seem to work best.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m11xt3obdj.fsf@tti5.uchicago.edu>
Alex McGuire <····@alexmcguire.com> writes:

> I'm not sure what you mean by a proof here. Do you mean proof as in a
> formal mathematical proof? Formally proving correctness of programs is
> very difficult, even for a few lines of code, it would not be
> practical for much larger programs.

I know.  As I have said many times by now, I was merely talking about
the existence of a formal proof (and therefore, about the theoretical
possibility of producing it).

> A pre-requisite would be a formal
> description of the requirements, which I have never seen from a
> client, nor do I want to.

Indeed, this is one of the major hurdles.

> To clarify things, can you give me a formal
> proof that the following java code correctly sums an array of
> integers?
> 
> 
> public double sumArray(int[] array){
>     int sum = 0;
>     for (int i = 0; i < array.length; ++i){
>        sum += array[i];
>     }
>     return sum;
> }

I think this can be proved fairly easily using Hoare-style logic.
Basically, you show that at the beginning of each iteration of the
loop you have the invariant sum = \sum_{k=0}^{i-1} array[k].  At the
end of each iteration you then have sum = \sum_{k=0}^{i} array[k].
The loop terminates when i = array.length, so at this point we have
sum = \sum_{k=0}^{array.length-1} array[k] which is what we wanted to
prove.

Obviously, a truly formal proof is much longer, but it would merely
fill in the gaps that I left in the above...

Matthias
From: Peter Seibel
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3d6cn1owc.fsf@javamonkey.com>
Matthias Blume <····@my.address.elsewhere> writes:

> Alex McGuire <····@alexmcguire.com> writes:

[snip]

> > To clarify things, can you give me a formal proof that the
> > following java code correctly sums an array of integers?
> > 
> > 
> > public double sumArray(int[] array){
> >     int sum = 0;
> >     for (int i = 0; i < array.length; ++i){
> >        sum += array[i];
> >     }
> >     return sum;
> > }
> 
> I think this can be proved fairly easily using Hoare-style logic.
> Basically, you show that at the beginning of each iteration of the
> loop you have the invariant sum = \sum_{k=0}^{i-1} array[k].  At the
> end of each iteration you then have sum = \sum_{k=0}^{i} array[k].
> The loop terminates when i = array.length, so at this point we have
> sum = \sum_{k=0}^{array.length-1} array[k] which is what we wanted to
> prove.

> Obviously, a truly formal proof is much longer, but it would merely
> fill in the gaps that I left in the above...

Uh, unless sum overflows. Better check that proof again.

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m21xt373yj.fsf@hanabi-air.shimizu.blume>
Peter Seibel <·····@javamonkey.com> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > Alex McGuire <····@alexmcguire.com> writes:
> 
> [snip]
> 
> > > To clarify things, can you give me a formal proof that the
> > > following java code correctly sums an array of integers?
> > > 
> > > 
> > > public double sumArray(int[] array){
> > >     int sum = 0;
> > >     for (int i = 0; i < array.length; ++i){
> > >        sum += array[i];
> > >     }
> > >     return sum;
> > > }
> > 
> > I think this can be proved fairly easily using Hoare-style logic.
> > Basically, you show that at the beginning of each iteration of the
> > loop you have the invariant sum = \sum_{k=0}^{i-1} array[k].  At the
> > end of each iteration you then have sum = \sum_{k=0}^{i} array[k].
> > The loop terminates when i = array.length, so at this point we have
> > sum = \sum_{k=0}^{array.length-1} array[k] which is what we wanted to
> > prove.
> 
> > Obviously, a truly formal proof is much longer, but it would merely
> > fill in the gaps that I left in the above...
> 
> Uh, unless sum overflows. Better check that proof again.

Indeed.  Once you sit down and do every step, even the trivial-looking
ones, you find such bugs.  Ok, so the above code is, in fact, not correct.
(This means that one had better not be able to prove it!)

Anyway, assuming Alex was not asking a trick question and truly wanted
to know how one goes about proving things like the one he asked
(provided they are actually true), let's modify his "theorem" to

"... correctly sums an array of integers modulo 2^32." (I'm assuming
32-bit integers here.)

If we are talking about SML code we could look at, say,

   fun sumList l = let fun sl ([], sum) = sum
                         | sl (h :: t, sum) = sl (t, h + sum)
                   in sl (l, 0)
                   end

and try to prove "If sumList returns, then the result is the sum of
                  the integers in the argument list."
(Notice the "if ... returns" condition which will not be satisfied on
overflow -- which is what makes this go through.)

This (or the modified Java statement -- unless I'm overlooking yet another
pitfall) is easily provable using, e.g., the technique that I outlined.
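For what it's worth, the modified claim can also be exercised directly: Java int addition wraps modulo 2^32, so the invariant "sum equals the true sum mod 2^32" holds at every step and can be checked at runtime. The following is an illustrative sketch, not part of the original post; the class and method names are made up.

```java
// Illustrative sketch (hypothetical): check the invariant
//   sum == (mathematical sum of array[0..i-1]) mod 2^32
// at each step. Java int addition wraps mod 2^32, and casting a
// long to int keeps the low 32 bits, so the comparison is exact
// for inputs whose true sum fits in a long.
public class SumMod32 {
    static int sumArray(int[] array) {
        int sum = 0;
        long trueSum = 0; // exact mathematical sum (for inputs this small)
        for (int i = 0; i < array.length; ++i) {
            if (sum != (int) trueSum)          // invariant at top of iteration
                throw new AssertionError("invariant broken at i=" + i);
            sum += array[i];                   // wraps mod 2^32 on overflow
            trueSum += array[i];
        }
        if (sum != (int) trueSum)              // invariant at i == array.length
            throw new AssertionError("invariant broken at exit");
        return sum;
    }

    public static void main(String[] args) {
        // Integer.MAX_VALUE + 1 overflows a plain int and wraps to
        // Integer.MIN_VALUE -- exactly the "mod 2^32" behaviour.
        System.out.println(sumArray(new int[]{Integer.MAX_VALUE, 1}));
        System.out.println(sumArray(new int[]{1, 2, 3}));
    }
}
```

Note how the original, unqualified claim fails on the first input (the return value is negative), while the "mod 2^32" version survives it.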

Matthias
From: Gareth McCaughan
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87vfqfip05.fsf@g.mccaughan.ntlworld.com>
Alex McGuire wrote:

> Matthias Blume wrote:
...
>> No, I wasn't thinking of contemporary type errors.  I was thinking of
>> a real proof of correctness, in all glory.  The point is that even
>> though we all know that we cannot prove all correct programs correct
>> in general, we can do so for the programs we actually write (which is
>> a proper subset of the set of all correct programs).  Anyone who
>> claims his program is correct but it cannot be proven correct must
>> face the question "How do you know?"
> 
> I'm not sure what you mean by a proof here. Do you mean proof as in a
> formal mathematical proof? Formally proving correctness of programs is
> very difficult, even for a few lines of code, it would not be
> practical for much larger programs. A pre-requisite would be a formal
> description of the requirements, which I have never seen from a
> client, nor do I want to. To clarify things, can you give me a formal
> proof that the following java code correctly sums an array of integers?
> 
> public double sumArray(int[] array){
>     int sum = 0;
>     for (int i = 0; i < array.length; ++i){
>        sum += array[i];
>     }
>     return sum;
> }

I'm not Matthias, but here's my guess at the sort of thing
he might consider appropriate.

// Return the sum of all elements in the array,
// mod 2^32.
// XXX: Why does this return a double? If the idea
// is to avoid overflow, why do we accumulate with
// an int?

public double sumArray(int[] array) {
  int sum = 0;
  for (int i=0; i<array.length; ++i) {
    // loop invariant: sum == (sum of array elements with
    // indices < i) mod 2^32
    sum += array[i];
  }
  // on exit from the loop, the invariant holds with
  // i == array.length, so that's the sum of *all*
  // elements.
  return sum;
}

I'd guess Matthias wouldn't expect to see all that
actually embedded in the code, but he would want
the programmer to have a clear enough understanding
that she could provide it quickly and confidently
if required.

Converting that to a really formal proof would be
tiresome (depending on how formal "really formal"
is taken to be) but easy.

I don't agree with Matthias's position, but I wouldn't
want to hire someone who *couldn't* provide a correctness
(or incorrectness) proof for a piece of code that simple.
Would you?

Disclaimer: I've written about 20 lines of Java *ever*,
so I may have missed things. I wouldn't advise hiring
me to write Java without a bit of time in the schedule
for me to learn the language (and, more to the point,
the libraries) better :-).

-- 
Gareth McCaughan
.sig under construc
From: Alex McGuire
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnaqje$4ec$1@news-reader2.wanadoo.fr>
Gareth McCaughan wrote:

>Alex McGuire wrote:
>
>  
>
>>Matthias Blume wrote:
>>    
>>
>...
>  
>
>>>No, I wasn't thinking of contemporary type errors.  I was thinking of
>>>a real proof of correctness, in all glory.  The point is that even
>>>though we all know that we cannot prove all correct programs correct
>>>in general, we can do so for the programs we actually write (which is
>>>a proper subset of the set of all correct programs).  Anyone who
>>>claims his program is correct but it cannot be proven correct must
>>>face the question "How do you know?"
>>>      
>>>
>>I'm not sure what you mean by a proof here. Do you mean proof as in a
>>formal mathematical proof? Formally proving correctness of programs is
>>very difficult, even for a few lines of code, it would not be
>>practical for much larger programs. A pre-requisite would be a formal
>>description of the requirements, which I have never seen from a
>>client, nor do I want to. To clarify things, can you give me a formal
>>proof that the following java code correctly sums an array of integers?
>>
>>public double sumArray(int[] array){
>>    int sum = 0;
>>    for (int i = 0; i < array.length; ++i){
>>       sum += array[i];
>>    }
>>    return sum;
>>}
>>    
>>
>
>I'm not Matthias, but here's my guess at the sort of thing
>he might consider appropriate.
>
>// Return the sum of all elements in the array,
>// mod 2^32.
>// XXX: Why does this return a double? If the idea
>// is to avoid overflow, why do we accumulate with
>// an int?
>  
>

Well spotted. This was really a mistake; I wasn't trying to be clever.
Looks like the code didn't actually do what I thought it did. Is this
another problem with such proofs? For example, I can prove that the
quicksort _algorithm_ works, but how would I prove that my code
correctly implements that algorithm, and that there aren't subtle
errors like the one I inadvertently wrote above?

In my experience, at least, I find many more bugs due to incorrect 
implementation of algorithms than due to the use of invalid algorithms.

>public double sumArray(int[] array) {
>  int sum = 0;
>  for (int i=0; i<array.length; ++i) {
>    // loop invariant: sum == (sum of array elements with
>    // indices < i) mod 2^32
>    sum += array[i];
>  }
>  // on exit from the loop, the invariant holds with
>  // i == array.length, so that's the sum of *all*
>  // elements.
>  return sum;
>}
>I'd guess Matthias wouldn't expect to see all that
>actually embedded in the code, but he would want
>the programmer to have a clear enough understanding
>that she could provide it quickly and confidently
>if required.
>
>Converting that to a really formal proof would be
>tiresome (depending on how formal "really formal"
>is taken to be) but easy.
>
>I don't agree with Matthias's position, but I wouldn't
>want to hire someone who *couldn't* provide a correctness
>(or incorrectness) proof for a piece of code that simple.
>Would you?
>
>Disclaimer: I've written about 20 lines of Java *ever*,
>so I may have missed things. I wouldn't advise hiring
>me to write Java without a bit of time in the schedule
>for me to learn the language (and, more to the point,
>the libraries) better :-).
>
>  
>
From: Gareth McCaughan
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87fzhii3x1.fsf@g.mccaughan.ntlworld.com>
Alex McGuire wrote:

>>> public double sumArray(int[] array){
>>>     int sum = 0;
>>>     for (int i = 0; i < array.length; ++i){
>>>        sum += array[i];
>>>     }
>>>     return sum;
>>> }
>>
>> I'm not Matthias, but here's my guess at the sort of thing
>> he might consider appropriate.
>>
>> // Return the sum of all elements in the array,
>> // mod 2^32.
>> // XXX: Why does this return a double? If the idea
>> // is to avoid overflow, why do we accumulate with
>> // an int?
> 
> Well spotted. This was really a mistake, I wasn't trying to be
> clever. Looks like the code didn't actually do what I
> thought it did. Is this another problem with such proofs? For example,
> I can prove that the quicksort _algorithm_ works, but how would I
> prove that my code correctly implements that algorithm, and there
> aren't subtle errors like the one I inadvertently wrote above.

It seems to me that it's a *strength* of such proofs,
or at least of the careful attention they make you pay
to the algorithm. Witness the fact that I (looking for
a proof) found the mistake and you (not looking for a
proof) didn't :-).

> In my experience at least I find much more bugs due to the incorrect
> implementation of algorithms, rather than the use of invalid
> algorithms.

That doesn't much surprise me. But when you have
(formally or informally, explicitly or implicitly)
a proof that (part of) your program is correct,
that applies to the *program* and not only to
the *algorithm*. (Which is why, e.g., the "proof"
I offered for your sumArray function had all that
"mod 2^32" stuff in it.)

So I don't think "the implementation can be buggy
as well as the algorithm" is a reason for not having
proofs for your programs.

-- 
Gareth McCaughan
.sig under construc
From: Ray Blaak
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uznfrd8jl.fsf@STRIPCAPStelus.net>
······················@jpl.nasa.gov (Erann Gat) writes:
> Just wait until we get our HR act together and you can buy *our* product
> which we can *prove* doesn't have any bugs."  (Excecpt, of course, that all
> you can really prove is that it doesn't have any type errors, which is not
> the same thing.)

While I agree with your essential point, I should point out that with "proper"
formal methods you can do quite a bit better than detecting type errors. You
can prove that, assuming a sane execution environment, your implementation
realizes the specification, where the spec can describe behaviour as well as
types, given a suitable spec language.

The "sane environment" assumption is the kicker, as is the work required to
actually take the trouble of verifying/deriving an honest-to-goodness real
non-trivial application, as well as the work required to write a bug free spec
(and who verifies the spec is anything reasonable or correct?).

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2310031330590001@k-137-79-50-101.jpl.nasa.gov>
In article <·············@STRIPCAPStelus.net>, Ray Blaak
<········@STRIPCAPStelus.net> wrote:

> ······················@jpl.nasa.gov (Erann Gat) writes:
> > Just wait until we get our HR act together and you can buy *our* product
> > which we can *prove* doesn't have any bugs."  (Excecpt, of course, that all
> > you can really prove is that it doesn't have any type errors, which is not
> > the same thing.)
> 
> While I agree with your essential point, I should point out that with "proper"
> formal methods you can do quite a bit better than detecting type errors. You
> can prove that, assuming a sane execution environment, your implementation
> realizes the specification, where the spec can describe behaviour as well as
> types, given a suitable spec language.
> 
> The "sane environment" assumption is the kicker, as is the work required to
> actually take the trouble of verifying/deriving an honest-to-goodness real
> non-trivial application, as well as the work required to write a bug free spec
> (and who verifies the spec is anything reasonable or correct?).

Yes, your point is well taken.  I am in fact a fan of formal methods. 
They can be very useful for finding certain classes of bugs that are hard
to find any other way (race conditions for example).  But they are neither
necessary nor sufficient for producing "correct" code (whatever that
means).

See http://archive.larc.nasa.gov/shemesh/Lfm2000/Proc/cpp15.pdf for an
interesting case study.

E.
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <y8vbetgr.fsf@ccs.neu.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>
>> Matthias Blume <····@my.address.elsewhere> writes:
>> 
>> > ·············@comcast.net writes:
>> >> 
>> >> The opposing point is to assert that *no* program that cannot be
>> >> statically checked is useful.  Are you really asserting that?
>> >
>> > Actually, viewed from a certain angle, yes.  Every programmer who
>> > writes a program ought to have a proof that the program is correct in
>> > her mind.  (If not, fire her.)  It ought to be possible to formalize
>> > that proof and to statically check it.
>> 
>> That's a little draconian.  When I write programs I often have no clue
>> as to what I am doing, let alone a proof that it is correct!
>
> You're fired.

See what static typing does to one's mind?  You've turned into a PHB!
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1n0bromls.fsf@tti5.uchicago.edu>
Joe Marshall <···@ccs.neu.edu> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > Joe Marshall <···@ccs.neu.edu> writes:
> >
> >> Matthias Blume <····@my.address.elsewhere> writes:
> >> 
> >> > ·············@comcast.net writes:
> >> >> 
> >> >> The opposing point is to assert that *no* program that cannot be
> >> >> statically checked is useful.  Are you really asserting that?
> >> >
> >> > Actually, viewed from a certain angle, yes.  Every programmer who
> >> > writes a program ought to have a proof that the program is correct in
> >> > her mind.  (If not, fire her.)  It ought to be possible to formalize
> >> > that proof and to statically check it.
> >> 
> >> That's a little draconian.  When I write programs I often have no clue
> >> as to what I am doing, let alone a proof that it is correct!
> >
> > You're fired.
> 
> See what static typing does to one's mind?  You've turned into a PHB!

Right.  Static typing is strong medicine.  Beware of those side effects!

No, seriously.  I think that static typing actually helps quite a lot
at exactly that "fuzzy" stage of program design.

I bet everyone has experienced the following scenario (I have many
times): You try to figure out some difficult problem, and you are
stumped.  So you go to your buddy next office, meaning to ask for help
with the solution.  And while you are explaining to him what the
problem is in the first place and why you are having difficulties with
it you suddenly go "I got it!"  The mere act of carefully explaining
one's own thinking processes to some patient listener, i.e., the act
of putting these processes into words, helps.

Now, this is exactly my experience with static typing:  I often start
out like Joe without a clue of what I am doing.  (Of course, I don't
tell my PHB so I don't get fired like Joe just did. :-) What I am
doing at this stage is mostly fiddling with types (think "abstract
interfaces").  In effect, I am trying to explain what I am planning on
doing to the computer.  At this stage, no actual interaction with the
machine is necessary -- most of the time I already know what will
typecheck and what won't.  The mere act of trying to express myself in
the language of types helps me with crystallizing my thoughts into
something that is workable (and which, by nature of the process, has a
good chance of passing the type checker).

The fact that the type checker will also detect a certain amount of
clerical errors in my code is a bonus.  The main benefit (as far as
initial design goes) comes from the above effect of being forced to
explain my thoughts clearly to someone.
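That type-first stage might look like this in a language such as Java (a made-up sketch, not Matthias's code; the interface and names are invented for illustration): the abstract interface is written and argued over before any implementation exists, and its signatures force the design questions.

```java
import java.util.HashMap;
import java.util.Map;

// Made-up sketch of "fiddling with types" as design: the interface
// comes first, and its signatures force the design decisions.
interface Store<K, V> {
    void put(K key, V value);
    V get(K key);  // what should a miss return? the type makes you decide (here: null)
}

// Only once the interface feels right does a throwaway implementation appear.
class MapStore<K, V> implements Store<K, V> {
    private final Map<K, V> map = new HashMap<>();
    public void put(K key, V value) { map.put(key, value); }
    public V get(K key) { return map.get(key); }
}
```

The interface alone compiles and typechecks, so the "explaining my plan to the computer" step needs no working implementation at all.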

Matthias
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <myfirstname.mylastname-2310031115120001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@tti5.uchicago.edu>, Matthias Blume
<····@my.address.elsewhere> wrote:

> The fact that the type checker will also detect a certain amount of
> clerical errors in my code is a bonus.

That depends on what you are trying to accomplish.  If you are forced to
spend time fixing clerical errors that are not really relevant to the
problem you are trying to solve and you are competing with someone who is
free to ignore those errors and move on then you will lose.

> The main benefit (as far as
> initial design goes) comes from the above effect of being forced to
> explain my thoughts clearly to someone.

That is not a benefit exclusive to static typing.  The same benefit can be
(and is) had from getting a program to run in a dynamically typed system.

E.
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <eg3cdisx9o.fsf@sefirot.ii.uib.no>
······················@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@tti5.uchicago.edu>, Matthias Blume
> <····@my.address.elsewhere> wrote:

>> The fact that the type checker will also detect a certain amount of
>> clerical errors in my code is a bonus.

> That depends on what you are trying to accomplish.  If you are forced to
> spend time fixing clerical errors that are not really relevant to the
> problem you are trying to solve

I don't follow this.  How can a type error not be relevant?

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Paul F. Dietz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <zqSdnfHYHcA_jQSiRVn-jA@dls.net>
··········@ii.uib.no wrote:

> I don't follow this.  How can a type error not be relevant?

The type annotations you were forced to add could be wrong.

	Paul
From: Dirk Thierbach
Subject: Static typing (was: Python from Wise Guy's Viewpoint)
Date: 
Message-ID: <ohin61-h91.ln1@ID-7776.user.dfncis.de>
Paul F. Dietz <·····@dls.net> wrote:
> ··········@ii.uib.no wrote:

>> I don't follow this.  How can a type error not be relevant?

> The type annotations you were forced to add could be wrong.

In Hindley-Milner style typing (without extensions), this can never
happen. 

No type annotations you add can make the type error go away. A type
error always points to some error in the code, often a quite trivial
one (like wrong parenthesis, or swapped variable names, etc.).

With some of the extensions (e.g., Haskell type classes) you 
sometimes have to add annotations to help the compiler to decide
what particular instance of a type class you mean. But this happens
only with top-level functions, and you cannot add a "wrong" annotation.

This is different to other statically typed languages, where you
indeed have to add type annotations or casts to make an "irrelevant"
type error go away. Yes, this is very annoying.

- Dirk
From: Pascal Costanza
Subject: Re: Static typing
Date: 
Message-ID: <bnb8gl$utu$2@f1node01.rhrz.uni-bonn.de>
Dirk Thierbach wrote:
> Paul F. Dietz <·····@dls.net> wrote:
> 
>>··········@ii.uib.no wrote:
> 
> 
>>>I don't follow this.  How can a type error not be relevant?
> 
> 
>>The type annotations you were forced to add could be wrong.
> 
> 
> In Hindley-Milner style typing (without extensions), this can never
> happen. 
> 
> No type annotations you add can make the type error go away. A type
> error always points to some error in the code, often a quite trivial
> one (like wrong parenthesis, or swapped variable names, etc.).

...or a wrong type annotation?


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Adrian Hey
Subject: Re: Static typing
Date: 
Message-ID: <bnbi12$qnj$2$8302bc10@news.demon.co.uk>
Pascal Costanza wrote:

> Dirk Thierbach wrote:
>> In Hindley-Milner style typing (without extensions), this can never
>> happen.
>> 
>> No type annotations you add can make the type error go away. A type
>> error always points to some error in the code, often a quite trivial
>> one (like wrong parenthesis, or swapped variable names, etc.).
> 
> ...or a wrong type annotation?

Certainly a wrong type annotation will be flagged as an error.
As Dirk pointed out, the Hindley Milner type system does
not *require* any type annotations to type check a program.
But most programmers will add them anyway as essential
documentation for functions. The fact that the compiler
will automagically verify any such annotation is *correct*
is a big win.

Regards
--
Adrian Hey
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <eg1xt2r9cy.fsf@sefirot.ii.uib.no>
"Paul F. Dietz" <·····@dls.net> writes:

>> I don't follow this.  How can a type error not be relevant?

> The type annotations you were forced to add could be wrong.

Why would I be forced to add type annotations?  I often add them
last, when everything else is working. If they're not obvious, I can
query the system for them to see whether they correspond to my
expectations. 

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnbeh9$p74$1@f1node01.rhrz.uni-bonn.de>
··········@ii.uib.no wrote:
> "Paul F. Dietz" <·····@dls.net> writes:
> 
>>>I don't follow this.  How can a type error not be relevant?
> 
>>The type annotations you were forced to add could be wrong.
> 
> Why would I be forced to add type annotations?

I recall reading that even Haskell requires you to add type annotations 
every now and then. But I don't remember where (and I might be wrong).


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Mark Carroll
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <Rjf*GcL5p@news.chiark.greenend.org.uk>
In article <············@f1node01.rhrz.uni-bonn.de>,
Pascal Costanza  <········@web.de> wrote:
(snip)
>I recall reading that even Haskell requires you to add type annotations 
>every now and then. But I don't remember where (and I might be wrong).

Yes, it does, but it's pretty infrequently IMHO, and trivially easy if
you're already thinking clearly enough to be writing correct code at
all.

-- Mark
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <d6cnili9.fsf@comcast.net>
Matthias Blume <····@my.address.elsewhere> writes:

> I bet everyone has experienced the following scenario (I have many
> times): You try to figure out some difficult problem, and you are
> stumped.  So you go to your buddy next office, meaning to ask for help
> with the solution.  And while you are explaining to him what the
> problem is in the first place and why you are having difficulties with
> it you suddenly go "I got it!"  The mere act of carefully explaining
> one's own thinking processes to some patient listener, i.e., the act
> of putting these processes into words, helps.
>
> Now, this is exactly my experience with static typing:  I often start
> out like Joe without a clue of what I am doing.  (Of course, I don't
> tell my PHB so I don't get fired like Joe just did. :-) What I am
> doing at this stage is mostly fiddling with types (think "abstract
> interfaces").  In effect, I am trying to explain what I am planning on
> doing to the computer.  At this stage, no actual interaction with the
> machine is necessary -- most of the time I already know what will
> typecheck and what won't.  The mere act of trying to express myself in
> the language of types helps me with crystallizing my thoughts into
> something that is workable (and which, by nature of the process, has a
> good chance of passing the type checker).

I think you know where I stand on static type checking, but to
re-iterate to the people that didn't read the argument last time
it surfaced....

   I welcome every bit of help the computer gives me, and if it can
   find a problem before I know about it, great!  Static type checking
   is fine with me here.

   I get a little peeved, however, when the computer complains
   because it can't figure out whether there is a problem or not.

   I *really* don't like decorating my code with types.

To the extent that a static type checker lets me live with those
preferences, I'm all for it.  Clearly a lot of brain-dead statically
typed languages violate a lot of those.
From: Dirk Thierbach
Subject: Static typing (was: Python from Wise Guy's Viewpoint)
Date: 
Message-ID: <8h9n61-321.ln1@ID-7776.user.dfncis.de>
·············@comcast.net wrote:

> I think you know where I stand on static type checking, but to
> re-iterate to the people that didn't read the argument last time
> it surfaced....
> 
>   I welcome every bit of help the computer gives me, and if it can
>   find a problem before I know about it, great!  Static type checking
>   is fine with me here.

>   I get a little peeved, however, when the computer complains
>   because it can't figure out whether there is a problem or not.

You get this problem mostly in the "brain-dead" statically typed
languages that have a type system which is just not strong enough, so
they include type casts in the language to work around this problem.

>   I *really* don't like decorating my code with types.

Especially if you have to do it several times, like in some languages.

But in the presence of type-inference, you don't have to decorate your
code with types -- the compiler does that for you.

But on the other hand, you do write tests for your code, don't you?

Writing type annotations is just like writing tests -- it allows you
to focus on what you really want to write, and it allows the compiler
to verify for you that your code really does what you expect it to do.

For simple functions, I usually don't write type annotations. For
difficult functions, I write down the type first (because that's 
easier than writing the function itself), and once I have sorted out
the type, I usually have enough hints in my head to make writing the
function easy.

And once I have corrected all the typing errors in the function
I wrote (added missing parentheses, etc.), i.e. once the tests pass,
the function is usually correct.
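[That "type first, body later" workflow can be sketched even in Python's
optional annotation syntax -- added to the language years after this
thread, so purely illustrative; `merge_sorted` is a hypothetical example,
not taken from any post here:]

```python
# Step 1: commit to the type -- a machine-checkable plan of the function.
def merge_sorted(xs: list[int], ys: list[int]) -> list[int]:
    # Step 2: with the type fixed, the body almost writes itself.
    out: list[int] = []
    i = j = 0
    while i < len(xs) and j < len(ys):
        if xs[i] <= ys[j]:
            out.append(xs[i])
            i += 1
        else:
            out.append(ys[j])
            j += 1
    # Append whichever tail remains once one input is exhausted.
    return out + xs[i:] + ys[j:]

assert merge_sorted([1, 3, 5], [2, 4]) == [1, 2, 3, 4, 5]
```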

> To the extent that a static type checker lets me live with those
> preferences, I'm all for it.  Clearly a lot of brain-dead statically
> typed languages violate a lot of those.

Yes. That's why it is important to distinguish among statically typed
languages. Some of them are quite brain-dead, some are not.

If you're curious about a static type system that lets you live with
those preferences, give Haskell or OCaml a try.

- Dirk
From: Joe Marshall
Subject: Re: Static typing
Date: 
Message-ID: <fzhiu0lv.fsf@ccs.neu.edu>
Dirk Thierbach <··········@gmx.de> writes:

> Writing type annotations is just like writing tests -- it allows you
> to focus on what you really want to write, and it allows the compiler
> to verify for you that your code really does what you expect it to do.

Type annotations are a very limited form of test.  Tests that check
co-variant and contravariant conditions can be hard to encode in a
type.  For example, I write a function that partitions a list into two
sublists.  The sum of the lengths of the two sublists must equal the
length of the input list.  I don't know how to encode that as a type
declaration.
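[The invariant Joe describes is easy to state as an ordinary test,
though. A minimal Python sketch -- `partition` here is a hypothetical
helper written for illustration, not code from any post in this thread:]

```python
def partition(pred, xs):
    """Split xs into (matching, non-matching) sublists."""
    yes, no = [], []
    for x in xs:
        (yes if pred(x) else no).append(x)
    return yes, no

# The length condition from the post, checked by example: the sum of the
# lengths of the two sublists must equal the length of the input list.
xs = [3, 1, 4, 1, 5, 9, 2, 6]
evens, odds = partition(lambda n: n % 2 == 0, xs)
assert len(evens) + len(odds) == len(xs)
```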
From: Dirk Thierbach
Subject: Re: Static typing
Date: 
Message-ID: <mr0o61-3i1.ln1@ID-7776.user.dfncis.de>
Joe Marshall <···@ccs.neu.edu> wrote:
> Dirk Thierbach <··········@gmx.de> writes:
> Type annotations are a very limited form of test.  

Yes. On the other hand, they are more powerful than test-by-example,
because they test classes of values on every execution path, instead
of single values on one particular execution path.

It's a tradeoff.

> Tests that check co-variant and contravariant conditions can be hard
> to encode in a type.  For example, I write a function that
> partitions a list into two sublists.  The sum of the lengths of the
> two sublists must equal the length of the input list.  I don't know
> how to encode that as a type declaration.

And you don't have to: Just write a normal test. A type system doesn't
replace all normal tests. But in some ways it subsumes some of them.

And with type inference, it comes for free: If you don't want it, just
don't write any type annotations.

- Dirk
From: L.J. Buitinck
Subject: Re: Static typing
Date: 
Message-ID: <bnb2d6$lbt$1@info.service.rug.nl>
Dirk Thierbach wrote:
>>  I *really* don't like decorating my code with types.
> 
> Especially if you have to do it several times, like in some languages.

actually, I do like writing explicit type definitions and declarations 
-- I like it better than writing lots of comments explaining what kinds 
of arguments functions expect, as I sometimes have to do in Scheme.
type declarations are comments readable to both machine and programmer.
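[A Python sketch of that idea, using the optional annotation syntax that
was added to the language well after this thread, so purely illustrative;
`scale` is a hypothetical function invented for the example:]

```python
def scale(vec: list[float], k: float) -> list[float]:
    # The signature documents what a comment would otherwise have to say,
    # and a checker such as mypy can verify callers against it.
    return [k * x for x in vec]

assert scale([1.0, 2.0], 3.0) == [3.0, 6.0]
```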

> For simple functions, I usually don't write type annotations. For
> difficult functions, I write down the type first (because that's 
> easier than writing the function itself), and once I have sorted out
> the type, I usually have enough hints in my head to make writing the
> function easy.
> 
> And once I have corrected all the typing errors in the function
> I wrote (added missing parentheses, etc.), i.e. once the tests pass,
> the function is usually correct.

right!

-- 
Segui il tuo corso, e lascia dir le genti.

Lars
From: Ed Avis
Subject: Re: Static typing (was: Python from Wise Guy's Viewpoint)
Date: 
Message-ID: <l1n0bqzt40.fsf@budvar.future-i.net>
Dirk Thierbach <··········@gmx.de> writes:

>Writing type annotations is just like writing tests -- it allows you to
>focus on what you really want to write, and it allows the compiler to
>verify for you that your code really does what you expect it to do.
>
>For simple functions, I usually don't write type annotations. For
>difficult functions, I write down the type first (because that's
>easier than writing the function itself), and once I have sorted out
>the type, I usually have enough hints in my head to make writing the
>function easy.

Heh, so you are saying that this 'type-first development' is akin to
the test-first development method recommended by some gurus.

Unlike some up-front design work, writing a type annotation is design
that can be checked by the computer, and is constantly re-checked so
it cannot get out of date or be inconsistent with the code.

-- 
Ed Avis <··@membled.com>
From: Aurélien Campéas
Subject: Re: Static typing (was: Python from Wise Guy's Viewpoint)
Date: 
Message-ID: <pan.2003.10.31.15.49.25.443033@wanadoo.fr>
On Fri, 24 Oct 2003 11:55:20 +0200, Dirk Thierbach wrote:

 
> You get this problem mostly in the "brain-dead" statically typed
> languages that have a type system which is just not strong enough, so
> they include type casts in the language to work around this problem.

In my understanding, those brain-dead languages have to include downcast
operators because they have object-oriented semantics. It seems type
errors are unavoidable in an OO system - if you want it to be useful (in
other words, type errors are part of the "real world").

I'm curious to know how OCaml "solves" the problem of
covariance/contravariance (without a downcast operator).

Aurélien.
From: Thant Tessman
Subject: Re: Static typing
Date: 
Message-ID: <bo5qe6$ksl$1@terabinaries.xmission.com>
Aurélien Campéas wrote:

> In my understanding, those brain-dead languages have to include downcast
> operators because they have object-oriented semantics. It seems type
> errors are unavoidable in an OO system - if you want it to be useful (in
> other words, type errors are part of the "real world").  [...]

OO isn't inherently unsafe, it's just inherently incomplete--if that 
makes any sense.

-thant
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn9q8j$bfo$1@newsreader2.netcologne.de>
Matthias Blume wrote:

> No, seriously.  I think that static typing actually helps quite a lot
> at exactly that "fuzzy" stage of program design.
> 
> I bet everyone has experienced the following scenario (I have many
> times): You try to figure out some difficult problem, and you are
> stumped.  So you go to your buddy next office, meaning to ask for help
> with the solution.  And while you are explaining to him what the
> problem is in the first place and why you are having difficulties with
> it you suddenly go "I got it!"  The mere act of carefully explaining
> one's own thinking processes to some patient listener, i.e., the act
> of putting these processes into words, helps.
> 
> Now, this is exactly my experience with static typing:  I often start
> out like Joe without a clue of what I am doing.  (Of course, I don't
> tell my PHB so I don't get fired like Joe just did. :-) What I am
> doing at this stage is mostly fiddling with types (think "abstract
> interfaces").  In effect, I am trying to explain what I am planning on
> doing to the computer.  At this stage, no actual interaction with the
> machine is necessary -- most of the time I already know what will
> typecheck and what won't.  The mere act of trying to express myself in
> the language of types helps me with crystallizing my thoughts into
> something that is workable (and which, by nature of the process, has a
> good chance of passing the type checker).

That's great if it works for you. Go ahead, keep it up. (I don't mean 
this sarcastically!)

But why on earth do you want to _force_ anybody else to use the same 
approach when it might not work for everyone?

You have just described a creative process. Creative processes are _by 
definition_ not formal. The important stuff happens exactly in the 
moment when you go "eureka". Everyone has their own preferred approach 
to make this happen.

All this nonsense about proof of program correctness, avoiding certain 
classes of program errors, achieving efficiency, and so on, are just a 
posteriori rationalizations of what is essentially an irrational process.

I think what you really gain when you use a static type system is a 
certain perspective on the problem you are trying to solve. And this is 
exactly what helps in solving problems: gaining different perspectives. 
The perspective you describe is just the perspective you feel most 
comfortable with, not more and not less.

And if some of your assumptions about a program hold from different 
perspectives, it makes you feel more convinced that you have found the 
right solution. But _no approach whatsoever guarantees correctness_.

For people who prefer dynamic type systems, test suites work exactly the 
same way - they are just another perspective on the same problem.

There is another important fact that you should consider: You probably 
know a lot of people who immediately agree with your point of view, and are 
very happy to affirm to you, and to themselves, that the approach you 
describe is really the best way to develop software.

But _exactly the same thing_ happens for people who like the approach 
that goes better with dynamically checked languages. They also 
know quite a lot of people who affirm them and each other.

I see this as clear evidence that there are just different programming 
styles that suit different types of people.

(Of course, it is an open question what is best when you have a team of 
programmers - should they better be a homogeneous or a heterogeneous 
group? I don't know.)


Pascal
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <mjemb.18879$Tr4.38945@attbi_s03>
"Pascal Costanza" <········@web.de> wrote in message ·················@newsreader2.netcologne.de...
>
> That's great if it works for you. Go ahead, keep it up. (I don't mean
> this sarcastically!)
>
> But why on earth do you want to _force_ anybody else to use the same
> approach when it might not work for everyone?

If I'm working on a team of 100 programmers, I would want to
"force" them all to use static typing because doing so leads to
faster development with fewer bugs.


> You have just described a creative process. Creative processes are _by
> definition_ not formal.

Whoops! If creative processes are by definition not formal, then
programming is not a creative process (since programming is
absolutely formal.) I guess I have to disagree with your premise.
Computer programs are formal systems. But that doesn't mean
that constructing these formal systems is not a creative process.


> All this nonsense about proof of program correctness, avoiding certain
> classes of program errors, achieving efficiency, and so on, are just a
> posteriori rationalizations of what is essentially an irrational process.

Uh, what? Are you saying programs are irrational or that programming
is irrational? What can you mean by this? And do you really mean
to say that "avoiding certain classes of errors" is nonsense? If so,
give me more nonsense!


Marshall
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnbs5r$l2s$1@f1node01.rhrz.uni-bonn.de>
Marshall Spight wrote:
> "Pascal Costanza" <········@web.de> wrote in message ·················@newsreader2.netcologne.de...
> 
>>That's great if it works for you. Go ahead, keep it up. (I don't mean
>>this sarcastically!)
>>
>>But why on earth do you want to _force_ anybody else to use the same
>>approach when it might not work for everyone?
> 
> 
> If I'm working on a team of 100 programmers, I would want to
> "force" them all to use static typing because doing so leads to
> faster development with fewer bugs.

Where is the empirical study that backs this claim?

If there is no empirical study, why do you want to force anyone to do 
anything on the basis of pure speculation?

What if that team of programmers are Lispers or Smalltalkers who are 
good at what they are doing?

>>You have just described a creative process. Creative processes are _by
>>definition_ not formal.
> 
> Whoops! If creative processes are by definition not formal, then
> programming is not a creative process (since programming is
> absolutely formal.) I guess I have to disagree with your premise.
> Computer programs are formal systems. But that doesn't mean
> that constructing these formal systems is not a creative process.

Yes, computer programs are formal systems. The process of writing a 
computer program is not a formal process. Otherwise you could completely 
automate it. Don't confuse the process with the end result!

>>All this nonsense about proof of program correctness, avoiding certain
>>classes of program errors, achieving efficiency, and so on, are just a
>>posteriori rationalizations of what is essentially an irrational process.
> 
> Uh, what? Are you saying programs are irrational or that programming
> is irrational?

Programming is irrational. It requires that you go "eureka" every now 
and then. These are the moments when you gain a better understanding of 
a problem domain. Understanding a problem domain is not a rational 
process by itself. This is why formal software development processes 
don't work.

> What can you mean by this? And do you really mean
> to say that "avoiding certain classes of errors" is nonsense? If so,
> give me more nonsense!

You are already doing great. No need to add to that.


Pascal


-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2410031315130001@k-137-79-50-101.jpl.nasa.gov>
In article <············@f1node01.rhrz.uni-bonn.de>, Pascal Costanza
<········@web.de> wrote:

> > If I'm working on a team of 100 programmers, I would want to
> > "force" them all to use static typing because doing so leads to
> > faster development with fewer bugs.
> 
> Where is the empirical study that backs this claim?

In fact, what little empirical evidence exists points to the exact
opposite conclusion.

E.
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <G_gmb.20663$HS4.72132@attbi_s01>
"Erann Gat" <·················@jpl.nasa.gov> wrote in message
·······································@k-137-79-50-101.jpl.nasa.gov...
> In article <············@f1node01.rhrz.uni-bonn.de>, Pascal Costanza
> <········@web.de> wrote:
>
> > > If I'm working on a team of 100 programmers, I would want to
> > > "force" them all to use static typing because doing so leads to
> > > faster development with fewer bugs.
> >
> > Where is the empirical study that backs this claim?
>
> In fact, what little empirical evidence exists points to the exact
> opposite conclusion.

Citation, please.


Marshall
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2410031531320001@k-137-79-50-101.jpl.nasa.gov>
In article <·····················@attbi_s01>, "Marshall Spight"
<·······@dnai.com> wrote:

> "Erann Gat" <·················@jpl.nasa.gov> wrote in message
> ·······································@k-137-79-50-101.jpl.nasa.gov...
> > In article <············@f1node01.rhrz.uni-bonn.de>, Pascal Costanza
> > <········@web.de> wrote:
> >
> > > > If I'm working on a team of 100 programmers, I would want to
> > > > "force" them all to use static typing because doing so leads to
> > > > faster development with fewer bugs.
> > >
> > > Where is the empirical study that backs this claim?
> >
> > In fact, what little empirical evidence exists points to the exact
> > opposite conclusion.
> 
> Citation, please.

Sorry, I thought this was common knowledge around here by now.

Gat, E.  Lisp as an alternative to Java. Intelligence 11(4): 21-24, 2000.

http://www.flownet.com/gat/papers/lisp-java.pdf

There's a more comprehensive followup study by Lutz Prechelt that includes
Python.  I don't have the reference handy.  STFW.

E.
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <yKlmb.22637$HS4.86914@attbi_s01>
"Erann Gat" <·················@jpl.nasa.gov> wrote in message
·······································@k-137-79-50-101.jpl.nasa.gov...
> > >
> > Citation, please.
>
> Sorry, I thought this was common knowledge around here by now.
>
> Gat, E.  Lisp as an alternative to Java. Intelligence 11(4): 21-24, 2000.
>
> http://www.flownet.com/gat/papers/lisp-java.pdf

Extremely interesting read; thank you.


> There's a more comprehensive followup study by Lutz Prechelt that includes
> Python.  I don't have the reference handy.

Easily found with the info you provided. Scanning tickled a memory, and I
remembered this page:

http://www.norvig.com/java-lisp.html

Also interesting. I suppose now I must take the "Norvig challenge"
and write the test code in Java and see how I do. I'll try to locate
a few hours during which I won't have a four-year-old regularly
asking me for attention. Late at night, say. :-)

Unfortunately, I wasn't much satisfied by the reliability
parts of the studies. While development time is an important
metric, reliability is also important.

In fact, I'll pretty much concede that Java is not great as
a language for a single-person development effort, and Lisp
probably is. What more concerns me, though, is what
happens with a hundred programmers over a year or
two. Sadly, this is *much* more difficult to measure.


> STFW.

Smile when you say that, stranger. :-)


Marshall
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2410032135120001@192.168.1.51>
In article <·····················@attbi_s01>, "Marshall Spight"
<·······@dnai.com> wrote:

> In fact, I'll pretty much concede that Java is not great as
> language for a single-person development effort, and Lisp
> probably is. What more concerns me, though, is what
> happens with a hundred programmers over a year or
> two.

With Lisp you never need that many programmers or that much time.  :-)

E.
From: Alain Picard
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87n0bpxxnn.fsf@memetrics.com>
·················@jpl.nasa.gov (Erann Gat) writes:

> In article <·····················@attbi_s01>, "Marshall Spight"
> <·······@dnai.com> wrote:
>
>> In fact, I'll pretty much concede that Java is not great as
>> language for a single-person development effort, and Lisp
>> probably is. What more concerns me, though, is what
>> happens with a hundred programmers over a year or
>> two.
>
> With Lisp you never need that many programmers or that much time.  :-)

I know you put a smiley in there, but, in fact, this is an
_extremely_ important point.  In Lisp, you DO need fewer people and
less time to accomplish the same job [than in Java, C++, lang du jour].

Given that the main obstacle in a programming endeavor is
usually communication, the importance of this fact should
not be underestimated; in fact, if you can halve the number
of programmers, you probably quadruple your chances of success.

I was on a project once with 30 programmers for 20 months.  I'm pretty
sure 4 or 5 good lispers could have done it in 6.
From: Thomas Lindgren
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m365id33cb.fsf@localhost.localdomain>
"Marshall Spight" <·······@dnai.com> writes:

> In fact, I'll pretty much concede that Java is not great as
> language for a single-person development effort, and Lisp
> probably is. What more concerns me, though, is what
> happens with a hundred programmers over a year or
> two. Sadly, this is *much* more difficult to measure.

Have a look at Ericsson's experiences with Erlang.

Best,
                        Thomas
-- 
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin
 
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bndvg0$8nt$1@news.oberberg.net>
Thomas Lindgren wrote:

> "Marshall Spight" <·······@dnai.com> writes:
> 
>>In fact, I'll pretty much concede that Java is not great as
>>language for a single-person development effort, and Lisp
>>probably is. What more concerns me, though, is what
>>happens with a hundred programmers over a year or
>>two. Sadly, this is *much* more difficult to measure.
> 
> Have a look at Ericsson's experiences with Erlang.

Erlang is a language with run-time type checking, but personally, I 
attribute Erlang's productivity gains more to the existence of 
high-level no-fuss idioms.

Which, incidentally, is also what I think that makes Lisp and Smalltalk 
productive. The high-level idioms for strings in C, for example, are far 
from no-fuss - you can't handle strings without constantly thinking 
about handling that extra null byte, about memory allocation issues if 
you want to construct strings at runtime, etc. etc.

For some reason, statically-typed languages tend to have fussy 
interfaces - probably due to over-paranoid efficiency considerations. 
And, maybe, because people with an efficiency focus will not even start 
to think about a runtime-typed language.

That doesn't make statically-typed languages unsuitable for high-level 
no-fuss interfaces though. Take a look at ML languages, or at Haskell :-)

Regards,
Jo
From: Thomas Lindgren
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3n0bp0yit.fsf@localhost.localdomain>
Joachim Durchholz <·················@web.de> writes:

> Thomas Lindgren wrote:
> 
> > "Marshall Spight" <·······@dnai.com> writes:
> >
> >>In fact, I'll pretty much concede that Java is not great as
> >>language for a single-person development effort, and Lisp
> >>probably is. What more concerns me, though, is what
> >>happens with a hundred programmers over a year or
> >>two. Sadly, this is *much* more difficult to measure.
> > Have a look at Ericsson's experiences with Erlang.
> 
> Erlang is a language with run-time type checking, but personally, I
> attribute Erlang's productivity gains more to the existence of
> high-level no-fuss idioms.

Also recall that Erlang has served well in heavy-weight projects. Dynamic
typing has, according to repeated testimonies (in comp.lang.functional
for example), not been found to be a problem.

You raise a good point: I'd claim that for a programming language,
"serving your market", your developers, is far more important than
whatever theory adorns it, e.g., as regards static typing or its
absence.

> That doesn't make statically-typed languages unsuitable for high-level
> no-fuss interfaces though. Take a look at ML languages, or at Haskell
> :-)

I'll ask the audience what Marshall Spight asked: have there been any
50+ programmer multi-year industrial ML or Haskell projects? What were
the experiences?

(As regards stat.typed languages, while I've been in the dynamic camp
for a long time, I still retain a soft spot for Miranda :-)

Best,
                        Thomas
-- 
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin
 
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bghmb.19362$Tr4.39949@attbi_s03>
"Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
> Marshall Spight wrote:
> >
> > If I'm working on a team of 100 programmers, I would want to
> > "force" them all to use static typing because doing so leads to
> > faster development with fewer bugs.
>
> Where is the empirical study that backs this claim?

Certainly having strong empirical evidence around this
issue would be great. Anyone got some? I don't, but
I'd be happy to hear about some.

Lacking such, I'm going to rely on my own experience.
At work, we've got hundreds of programmers at work.
The large systems written in Python don't hold up so
well as they scale up towards tens or hundreds of
thousands of lines. The Java programs scale nicely
into the hundreds of thousands of lines of code.

As a result, many of our larger Python systems are
being rewritten in C++ or Java.

I will totally admit that it is difficult to separate
the performance scalability issue from the development
scalability issue here. But I will assert from observation
that Java scales nicely on both counts.

I will also totally admit that for smaller programs
and for prototypes, Python has a clear edge.


> If there is no empirical study, why do you want
> to force anyone to do anything ...

Risk management. I know that strongly typed languages
scale up in development, and my experience indicates
that dynamically typed languages don't. I do not
consider this proof, and in fact, would be glad
for more empirical evidence. But I also put some
stock in my own experience.


> on the basis of pure speculation?

Actually it's on the basis of decades of experience.
This isn't the end-all, by any means, but it
ain't hay, either.


> What if that team of programmers are Lispers or Smalltalkers
> who are good at what they are doing?

If I were commissioning a new team of developers for
a large commercial system, I would use neither
of these languages, if only out of a concern for
the supply of good programmers. If I took over
management of such a team, I'd be an idiot to
dictate anything technical to them, particularly
language choice.


> >>You have just described a creative process. Creative processes are _by
> >>definition_ not formal.
> >
> > Whoops! If creative processes are by definition not formal, then
> > programming is not a creative process (since programming is
> > absolutely formal.) I guess I have to disagree with your premise.
> > Computer programs are formal systems. But that doesn't mean
> > that constructing these formal systems is not a creative process.
>
> Yes, computer programs are formal systems. The process of writing a
> computer program is not a formal process. Otherwise you could completely
> automate it.

Fair enough.


> Don't confuse the process with the end result!

I certainly did not. In fact, I explicitly drew that distinction. (Below.)


> > Uh, what? Are you saying programs are irrational or that programming
> > is irrational?
>
> Programming is irrational. It requires that you go "eureka" every now
> and then. These are the moments when you gain a better understanding of
> a problem domain. Understanding a problem domain is not a rational
> process by itself. This is why formal software development processes
> don't work.
>
> > What can you mean by this? And do you really mean
> > to say that "avoiding certain classes of errors" is nonsense? If so,
> > give me more nonsense!
>
> You are already doing great. No need to add to that.

With that reply, you win points for rhetoric but not for substance.


Marshall
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <smlifdr7.fsf@comcast.net>
"Marshall Spight" <·······@dnai.com> writes:

> "Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
>> Marshall Spight wrote:
>> >
>> > If I'm working on a team of 100 programmers, I would want to
>> > "force" them all to use static typing because doing so leads to
>> > faster development with fewer bugs.
>>
>> Where is the empirical study that backs this claim?
>
> Certainly having strong empirical evidence around this
> issue would be great. Anyone got some? I don't, but
> I'd be happy to hear about some.

In the ICFP programming contest OCaml seems to be a consistent
contender.

(Even a dynamic-type aficionado such as I has to admit that.)
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <s6imb.19971$e01.37972@attbi_s02>
<·············@comcast.net> wrote in message ·················@comcast.net...
> "Marshall Spight" <·······@dnai.com> writes:
>
> > "Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
> >> Marshall Spight wrote:
> >> >
> >> > If I'm working on a team of 100 programmers, I would want to
> >> > "force" them all to use static typing because doing so leads to
> >> > faster development with fewer bugs.
> >>
> >> Where is the empirical study that backs this claim?
> >
> > Certainly having strong empirical evidence around this
> > issue would be great. Anyone got some? I don't, but
> > I'd be happy to hear about some.
>
> In the ICFP programming contest OCaml seems to be a consistent
> contender.
>
> (Even a dynamic-type aficionado such as I has to admit that.)

Yes, I do find that fact interesting! OCaml certainly seems to be
worth studying closely, for the above and other reasons.


Marshall
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3n0bpz1kg.fsf@rigel.goldenthreadtech.com>
"Marshall Spight" <·······@dnai.com> writes:

> Lacking such, I'm going to rely on my own experience.
> At work, we've got hundreds of programmers at work.
> The large systems written in Python don't hold up so
> well as they scale up towards tens or hundreds of
> thousands of lines.

I've not had this issue with Common Lisp.  And ime, 10K "lines" of
Lisp is worth over an order of magnitude more of Java/C/C++/Ada/et al.


> The Java programs scale nicely into the hundreds of thousands of
> lines of code.


> I will totally admit that it is difficult to separate the
> performance scalability issue from the development scalability issue
> here. But I will assert from observation that Java scales nicely on
> both counts.

Which only shows how dodgy these bits of anecdotal evidence are.  For
example, ime, I've seen the Java side have all sorts of problems wrt
performance scalability.  It's not too good from a maintenance pov
either - the single paradigm aspect of it kills it for many things in
this area.


> Risk management. I know the strong typing languages
> scale up in development, and my experience indicates
> that dynamically typed languages don't. I do not
> consider this proof, and in fact, would be glad
> for more empirical evidence. But I also put some
> stock in my own experience.

Mine is exactly the opposite.  I think a big part of this is that
there are vastly more variables involved.


> > What if that team of programmers are Lispers or Smalltalkers
> > who are good at what they are doing?
> 
> If I were commissioning a new team of developers for
> a large commercial system, I would use neither
> of these languages, if only out of a concern for
> the supply of good programmers.

Since, in general, I really do believe (again ime) you need an order
of magnitude fewer programmers to get the job done and done right with
these (well, Lisp at least - I haven't been involved in a large
Smalltalk project), the supply of good programmers will actually
exceed that for stuff like Java/C/C++.


/Jon
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <%tzmb.26026$e01.52691@attbi_s02>
"Jon S. Anthony" <·········@rcn.com> wrote in message ···················@rigel.goldenthreadtech.com...
> "Marshall Spight" <·······@dnai.com> writes:
>
> > Lacking such, I'm going to rely on my own experience.
> > At work, we've got hundreds of programmers at work.
> > The large systems written in Python don't hold up so
> > well as they scale up towards tens or hundreds of
> > thousands of lines.
>
> I've not had this issue with Common Lisp.  And ime, 10K "lines" of
> Lisp is worth over a magnitude more of Java/C/C++/Ada/et.al.

Given the existence of market pressures, this strikes me
as being impossible. Twice as good might be possible,
but ten times as good? Or perhaps there are other issues
involved. Maybe Lisp is ten times as good, but humans
capable of being worthy Lisp programmers are 20 times
more scarce.


> > The Java programs scale nicely into the hundreds of thousands of
> > lines of code.
>
> > I will totally admit that it is difficult to separate the
> > performance scalability issue from the development scalability issue
> > here. But I will assert from observation that Java scales nicely on
> > both counts.
>
> Which only shows how dodgy these bits of anecdotal evidence are.

Agreed.


> For example, ime, I've seen the Java side have all
> sorts of problems wrt performance scalability.

You can write unscalable code in any language.


>  It's not too good from a maintenance pov
> either - the single paradigm aspect of it kills it for many things in
> this area.

Okay. Your experience is different from mine.


> > Risk management. I know the strong typing languages
> > scale up in development, and my experience indicates
> > that dynamically typed languages don't. I do not
> > consider this proof, and in fact, would be glad
> > for more empirical evidence. But I also put some
> > stock in my own experience.
>
> Mine is exactly the opposite.  I think a big part of this is that
> there are vastly more variables involved.

I'm not sure what you mean by that. Are you saying that
massive development efforts are very complex and
may be quite different from one another? I'd certainly
agree with that.


Marshall
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3ad7pytrs.fsf@rigel.goldenthreadtech.com>
"Marshall Spight" <·······@dnai.com> writes:

> "Jon S. Anthony" <·········@rcn.com> wrote in message ···················@rigel.goldenthreadtech.com...
> > "Marshall Spight" <·······@dnai.com> writes:
> >
> > > Lacking such, I'm going to rely on my own experience.
> > > At work, we've got hundreds of programmers at work.
> > > The large systems written in Python don't hold up so
> > > well as they scale up towards tens or hundreds of
> > > thousands of lines.
> >
> > I've not had this issue with Common Lisp.  And ime, 10K "lines" of
> > Lisp is worth over a magnitude more of Java/C/C++/Ada/et.al.
> 
> Given the existence of market pressures, this strikes me
> as being impossible.

You are forgetting about the completely counteracting pressure of the
PHB.  Really, "market pressures" is trotted out as indicative of
something in these sorts of contexts, but is in reality pretty
irrelevant to how/why choices are made.


> Twice as good might be possible, but ten times as good?

Actually, it can be much _more_ than this, ref: Greenspun's Tenth
Rule.


> > Mine is exactly the opposite.  I think a big part of this is that
> > there are vastly more variables involved.
> 
> I'm not sure what you mean by that. Are you saying that
> massive development efforts are very complex and
> may be quite different from one another? I'd certainly
> agree with that.

You don't have to go "massive" - it's just a fact that for different
projects there are a lot of variables (undoubtedly mostly unknown)
which can affect the outcome.


/Jon
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <oOCmb.17643$mZ5.78960@attbi_s54>
"Jon S. Anthony" <·········@rcn.com> wrote in message ···················@rigel.goldenthreadtech.com...
> "Marshall Spight" <·······@dnai.com> writes:
> > >
> > > I've not had this issue with Common Lisp.  And ime, 10K "lines" of
> > > Lisp is worth over a magnitude more of Java/C/C++/Ada/et.al.
> >
> > Given the existence of market pressures, this strikes me
> > as being impossible.
>
> You are forgetting about the completely counteracting
> pressure of the PHB.

I'm suspicious of that explanation. All the engineering
organizations I've been in have had these kinds of decisions
made by senior engineers who are now managing engineers.


> Really, "market pressures" is trotted out as indicative of
> something in these sorts of contexts, but is in reality pretty
> irrelevant to how/why choices are made.

I could say the same thing for the PHB hypothesis.

Honestly, are you really saying that, despite billions of
dollars to be made in software development, Lisp is
still barely used commercially, though it's ten times
more efficient in software development with no corresponding
downside?

If I thought I had a 10:1 advantage in software development
over everyone else, I'd be doing a startup.


> Actually, it can be much _more_ than this, ref: Greenspun's Tenth
> Rule.

Gotta love Greenspun's Tenth Rule.


> You don't have to go "massive" - it's just a fact that for different
> projects there are a lot of variables (undoubtedly mostly unknown)
> which can affect the outcome.

Agreed.


Marshall
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3vfqdx6wj.fsf@rigel.goldenthreadtech.com>
"Marshall Spight" <·······@dnai.com> writes:

> "Jon S. Anthony" <·········@rcn.com> wrote in message ···················@rigel.goldenthreadtech.com...
> > "Marshall Spight" <·······@dnai.com> writes:
> > > >
> > > > I've not had this issue with Common Lisp.  And ime, 10K "lines" of
> > > > Lisp is worth over a magnitude more of Java/C/C++/Ada/et.al.
> > >
> > > Given the existence of market pressures, this strikes me
> > > as being impossible.
> >
> > You are forgetting about the completely counteracting
> > pressure of the PHB.
> 
> I'm suspicious of that explanation. All the engineering
> organizations I've been in have had these kinds of decisions
> made by senior engineers who are now managing engineers.

Excellent.  Obviously YMMV...


> > Really, "market pressures" is trotted out as indicative of
> > something in these sorts of contexts, but is in reality pretty
> > irrelevant to how/why choices are made.
> 
> I could say the same thing for the PHB hypothesis.
> 
> Honestly, are you really saying that, despite billions of
> dollars to be made in software development, Lisp is
> still barely used commercially, though it's ten times
> more efficient in software development with no corresponding
> downside?

What makes you think there are billions of dollars to be made in SW
development that is significantly more productive?  It's even
plausible that in this context "more" is "worse".


> If I thought I had a 10:1 advantage in software development
> over everyone else, I'd be doing a startup.

Yes.  But note that is still not enough to be successful.  A great
deal more comes into play than simply having a magic wand in this
respect.  Of course, it doesn't hurt to have that wand in this context
either.


/Jon
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <IXDmb.28806$HS4.110867@attbi_s01>
"Jon S. Anthony" <·········@rcn.com> wrote in message ···················@rigel.goldenthreadtech.com...
> "Marshall Spight" <·······@dnai.com> writes:
>
> > I'm suspicious of that explanation. All the engineering
> > organizations I've been in have had these kinds of decisions
> > made by senior engineers who are now managing engineers.
>
> Excellent.  Obviously YMMV...

Yeah, maybe I'm lucky. My current place is awesome...


> > Honestly, are you really saying that, despite billions of
> > dollars to be made in software development, Lisp is
> > still barely used commercially, though it's ten times
> > more efficient in software development with no corresponding
> > downside?
>
> What makes you think there are billions of dollars to be made in SW
> development that is significantly more productive?  It's even
> plausible that in this context "more" is "worse".

Hmmm. I'm not sure I follow. I'll agree that in high tech business,
there are a lot more issues than just software. There's also timing,
luck, marketing, and competition. But I'm not so jaded (yet, anyway)
that I consider the software part irrelevant.


> > If I thought I had a 10:1 advantage in software development
> > over everyone else, I'd be doing a startup.
>
> Yes.  But note that is still not enough to be successful.  A great
> deal more comes into play than simply having a magic wand in this
> respect.  Of course, it doesn't hurt to have that wand in this context
> either.

Agreed.


Marshall
From: Don Geddis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87u15v3m8w.fsf@sidious.geddis.org>
"Marshall Spight" <·······@dnai.com> writes:
> Honestly, are you really saying that, despite billions of dollars to be
> made in software development, Lisp is still barely used commercially,
> though it's ten times more efficient in software development with no
> corresponding downside?

Yes.

(The main downside is lack of market popularity, which has a lot of
consequences.  But there is no technical downside.)

> If I thought I had a 10:1 advantage in software development
> over everyone else, I'd be doing a startup.

Paul Graham did just that:
        http://www.paulgraham.com/avg.html

_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
From: Alain Picard
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87brs6k0rs.fsf@memetrics.com>
Matthias Blume <····@my.address.elsewhere> writes:

>
> You're fired.

No worries.  Joe: You're Hired!

-- 
It would be difficult to construe        Larry Wall, in  article
this as a feature.			 <·····················@netlabs.com>
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <brs6tz4c.fsf@ccs.neu.edu>
Alain Picard <·······················@optushome.com.au> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
>
>>
>> You're fired.
>
> No worries.  Joe: You're Hired!

Swoit!  I'm hooning to Australia for a bonzo job.
Got my lagerphone to scare off the drop bears.
Toss me a tinny.
From: Thomas Lindgren
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3vfqfrchw.fsf@localhost.localdomain>
Matthias Blume <····@my.address.elsewhere> writes:

> Every programmer who writes a program ought to have a proof that the
> program is correct in her mind. (If not, fire her.)

Don't forget to fire the specification writer afterwards. Then the
requirements guy. Then the customer.

Best,
                        Thomas
-- 
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin
 
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1vfqgne2g.fsf@tti5.uchicago.edu>
Thomas Lindgren <···········@*****.***> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > Every programmer who writes a program ought to have a proof that the
> > program is correct in her mind. (If not, fire her.)
> 
> Don't forget to fire the specification writer afterwards. Then the
> requirements guy. Then the customer.

Unfortunately, I am aware of "the Real World".  In any case, is this
really any excuse for shipping code that we don't know will always
work, written by programmers whom we didn't fire even though they
didn't know what they were doing, writing to specifications that were
inconsistent, driven by requirements that were unreasonable to begin
with, asked for by customers who were clueless?

Matthias
From: Thomas F. Burdick
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <xcvismfrcjs.fsf@famine.OCF.Berkeley.EDU>
Matthias Blume <····@my.address.elsewhere> writes:

> Thomas Lindgren <···········@*****.***> writes:
> 
> > Matthias Blume <····@my.address.elsewhere> writes:
> > 
> > > Every programmer who writes a program ought to have a proof that the
> > > program is correct in her mind. (If not, fire her.)
> > 
> > Don't forget to fire the specification writer afterwards. Then the
> > requirements guy. Then the customer.
> 
> Unfortunately, I am aware of "the Real World".  In any case, is this
> really any excuse for shipping code that we don't know will always
> work, written by programmers whom we didn't fire even though they
> didn't know what they were doing, writing to specifications that were
> inconsistent, driven by requirements that were unreasonable to begin
> with, asked for by customers who were clueless?

What a jackass!  So, if you haven't fired yourself, please share your
amazing system that allows you to prove arbitrary properties of your
code, and to specify what "correct" means in a way that's not just
another programming language (which would then need to be proven
correct using ... ?)

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1ad7rohdi.fsf@tti5.uchicago.edu>
···@famine.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> please share your
> amazing system that allows you to prove arbitrary properties of your
> code, and to specify what "correct" means in a way that's not just
> another programming language (which would then need to be proven
> correct using ... ?)

The system: mathematics in general, logic in particular.
Correctness: Certain statements (depending on the problem domain) which
   I want to hold true for my programs.
Another programming language: To some degree, yes, logic is "another
   programming language".  We are getting fairly deep into philosophy
   here if we want to discuss how much justification, e.g.,
   foundational proofs require.  Let's say we take, e.g., ZFC for
   granted.  Let's build things from there.

And in case you ask: No, this does not let me prove "arbitrary"
properties.  But, obviously, the ones I can't prove I can't claim my
code to possess.

In any case, what I tried to express was that even though there are
customers who might not always be on top of things as far as
expectations go (sorry for having offended you with the "clueless"
tongue-in-cheek remark), and even though requirements as well as
specifications are often either imprecise or self-contradictory or
both, this is NO EXCUSE for a programmer to not think about (and prove
to herself) the correctness of the code she writes.  I think what I am
asking here is fairly modest.  Shame on you for calling me names for
it!

Matthias
From: Thomas F. Burdick
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <xcvekx3qitq.fsf@famine.OCF.Berkeley.EDU>
Matthias Blume <····@my.address.elsewhere> writes:

> ···@famine.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
> 
> > please share your
> > amazing system that allows you to prove arbitrary properties of your
> > code, and to specify what "correct" means in a way that's not just
> > another programming language (which would then need to be proven
> > correct using ... ?)
> 
> The system: mathematics in general, logic in particular.
> Correctness: Certain statements (depending on the problem domain) which
>    I want to hold true for my programs.

So it's "correctness" that's broken.  If we're talking about
applications, not just sorting algorithms, at least.  In every logical
system I've seen, any specification of a program's behavior will be so
complex that you've just recreated the problem: so your program
has certain properties, but how do you know that what you wrote means
what you think it does?

> Another programming language: To some degree, yes, logic is "another
>    programming language".

In fact, to such a degree that the solution is just as bad as the
problem.

> I think what I am asking here is fairly modest.  Shame on you for
> calling me names for it!

What you've been asking for seems to have become a moving target, but
in the post I responded to, it sure as hell wasn't modest; it was
phrased in an insulting manner, too.  Shame on you.

(And I got no shame, anyway)

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn9oto$994$2@newsreader2.netcologne.de>
Matthias Blume wrote:

> In any case, what I tried to express was that even though there are
> customers who might not always be on top of things as far as
> expectations go (sorry for having offended you with the "clueless"
> tongue-in-cheek remark), and even though requirements as well as
> specifications are often either imprecise or self-contradictory or
> both, this is NO EXCUSE for a programmer to not think about (and prove
> to herself) the correctness of the code she writes.  I think what I am
> asking here is fairly modest.

No, you are asking for more. You are asking for the proof to be 
automatically executable.


Pascal
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m2ekx37541.fsf@hanabi-air.shimizu.blume>
Pascal Costanza <········@web.de> writes:

> No, you are asking for more. You are asking for the proof to be
> automatically executable.

Would people kindly stop telling me what I am asking for?
Thank you.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnap6b$v7a$2@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>No, you are asking for more. You are asking for the proof to be
>>automatically executable.
> 
> Would people kindly stop telling me what I am asking for?
> Thank you.

I am terribly sorry, but a static type system automatically executes a 
proof about certain properties of a program. And you said you want 
static type systems.
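
A minimal Python-flavored sketch of that point (a hypothetical aside;
it assumes an external checker such as mypy reads the annotations,
since CPython itself ignores them):

```python
def add(x: int, y: int) -> int:
    # A static checker such as mypy proves, before the program ever
    # runs, that every call site passes ints - a small automatic proof
    # of one property of the program.
    return x + y

# Without the static check, the same property is only "verified" by
# crashing at run time:
try:
    add("one", 2)  # a checker would reject this line statically
except TypeError:
    print("caught at run time, not before")
```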


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m2n0bq68le.fsf@hanabi-air.shimizu.blume>
Pascal Costanza <········@web.de> writes:

> Matthias Blume wrote:
> > Pascal Costanza <········@web.de> writes:
> > 
> >>No, you are asking for more. You are asking for the proof to be
> >>automatically executable.
> > Would people kindly stop telling me what I am asking for?
> 
> > Thank you.
> 
> I am terribly sorry, but a static type system automatically executes a
> proof about certain properties of a program. And you said you want
> static type systems.

That was in a different part of the discussion.  The topic had changed
slightly.  I did not ask for automatic correctness proofs.  Even
though it would be nice to have them, it is clearly not reasonable to
ask that.

Matthias
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnbeal$p72$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>Matthias Blume wrote:
>>
>>>Pascal Costanza <········@web.de> writes:
>>>
>>>
>>>>No, you are asking for more. You are asking for the proof to be
>>>>automatically executable.
>>>
>>>Would people kindly stop telling me what I am asking for?
>>
>>>Thank you.
>>
>>I am terribly sorry, but a static type system automatically executes a
>>proof about certain properties of a program. And you said you want
>>static type systems.
> 
> 
> That was in a different part of the discussion.  The topic had changed
> slightly.  I did not ask for automatic correctness proofs.  Even
> though it would be nice to have them, it is clearly not reasonable to
> ask that.

...but at least, you have used this reasoning to justify asking for 
statically checkable proofs.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1smlin0i3.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> Matthias Blume wrote:
> > Pascal Costanza <········@web.de> writes:
> > 
> >>Matthias Blume wrote:
> >>
> >>>Pascal Costanza <········@web.de> writes:
> >>>
> >>>
> >>>>No, you are asking for more. You are asking for the proof to be
> >>>>automatically executable.
> >>>
> >>>Would people kindly stop telling me what I am asking for?
> >>
> >>>Thank you.
> >>
> >>I am terribly sorry, but a static type system automatically executes a
> >>proof about certain properties of a program. And you said you want
> >>static type systems.
> > That was in a different part of the discussion.  The topic had
> > changed
> 
> > slightly.  I did not ask for automatic correctness proofs.  Even
> > though it would be nice to have them, it is clearly not reasonable to
> > ask that.
> 
> ...but at least, you have used this reasoning to justify asking for
> statically checkable proofs.

Let's say it came up during the discussion.  IIRC, at some point it
was you who claimed that there are correct programs which are
impossible to verify statically.  This was the moment when the focus
shifted: I do not believe this claim to be truthful and argued to this
end, but I am not as crazy as to say that static verification in this
sense is possible through existing (real-world) type systems.  Sorry
if this didn't come across clearly.

Do we have agreement, at least as far as this meta-discussion on who
said what when goes?

Matthias
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnbfgv$p76$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:
> Pascal Costanza <········@web.de> writes:
> 
>>Matthias Blume wrote:
>>
>>>Pascal Costanza <········@web.de> writes:
>>>
>>>>Matthias Blume wrote:

>>>I did not ask for automatic correctness proofs.  Even
>>>though it would be nice to have them, it is clearly not reasonable to
>>>ask that.
>>
>>...but at least, you have used this reasoning to justify asking for
>>statically checkable proofs.
> 
> 
> Let's say it came up during the discussion.  IIRC, at some point it
> was you who claimed that there are correct programs which are
> impossible to verify statically.  This was the moment when the focus
> shifted: I do not believe this claim to be truthful and argued to this
> end, but I am not as crazy as to say that static verification in this
> sense is possible through existing (real-world) type systems.  Sorry
> if this didn't come across clearly.

Ah, ok, I didn't see this subtle difference. Thanks for clarification.

> Do we have agreement, at least as far as this meta-discussion on who
> said what when goes?

OK, fine by me. ;)


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <GZdmb.19029$e01.35682@attbi_s02>
"Matthias Blume" <····@my.address.elsewhere> wrote in message ···················@tti5.uchicago.edu...
>  IIRC, at some point it
> was you who claimed that there are correct programs which are
> impossible to verify statically.  This was the moment when the focus
> shifted: I do not believe this claim to be truthful and argued to this
> end ...

Is not Godel's Theorem proof that there exist correct programs
which cannot be proven correct? I say this even though I
am still a strong advocate of static typing.


Marshall
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1smli33h6.fsf@tti5.uchicago.edu>
"Marshall Spight" <·······@dnai.com> writes:

> "Matthias Blume" <····@my.address.elsewhere> wrote in message ···················@tti5.uchicago.edu...
> >  IIRC, at some point it
> > was you who claimed that there are correct programs which are
> > impossible to verify statically.  This was the moment when the focus
> > shifted: I do not believe this claim to be truthful and argued to this
> > end ...
> 
> Is not Godel's Theorem proof that there exist correct programs
> which cannot be proven correct? I say this even though I
> am still a strong advocate of static typing.

Yes.  My point is that humans do not write such programs, at least not
intentionally for real-world applications.

Matthias
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ptgms9gt.fsf@ccs.neu.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> "Marshall Spight" <·······@dnai.com> writes:
>
>> "Matthias Blume" <····@my.address.elsewhere> wrote in message ···················@tti5.uchicago.edu...
>> >  IIRC, at some point it
>> > was you who claimed that there are correct programs which are
>> > impossible to verify statically.  This was the moment when the focus
>> > shifted: I do not believe this claim to be truthful and argued to this
>> > end ...
>> 
>> Is not Godel's Theorem proof that there exist correct programs
>> which cannot be proven correct? I say this even though I
>> am still a strong advocate of static typing.
>
> Yes.  My point is that humans do not write such programs, at least not
> intentionally for real-world applications.

I happen to believe that this sort of program is *much* more common
than most people think.  A single loop is all you need to encode some
*very* difficult problems (e.g. Collatz problem).

Any loop or recursion is the fixed-point of some function.  But fixed
points are very complex.  The fixed point of (lambda (x) (- (* x x)
u)) parameterized over complex values is the Mandelbrot set.
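
A Python sketch of the Collatz point (hypothetical example): the loop
below is about as simple as loops get, yet proving that it terminates
for every positive n is precisely the open Collatz problem.

```python
def collatz_steps(n):
    """Count steps of the 3n+1 iteration until n reaches 1.

    Empirically this halts for every n anyone has tried; proving it
    halts for ALL positive n is the unsolved Collatz conjecture, so
    no static analysis can be expected to verify termination here.
    """
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps
```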
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bndug7$8bg$2@news.oberberg.net>
Marshall Spight wrote:
> 
> Is not Godel's Theorem proof that there exist correct programs
> which cannot be proven correct? I say this even though I
> am still a strong advocate of static typing.

One can translate his proof into this wording, yes.
Not that such programs are of any real-life interest. If you can't argue that
the program is correct, you shouldn't release it for production use.

Regards,
Jo
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <4dwmb.24743$Fm2.11719@attbi_s04>
"Joachim Durchholz" <·················@web.de> wrote in message ·················@news.oberberg.net...
> If you can't argue that the program is correct,
> you shouldn't release it for production use.

I tend to agree. But I'd like to see if I can't nail
this down further, which is why I'm trying to
see if anyone can come up with a program
that is small, useful, and not provably typesafe.
Ideally this would be expressed in a statically
typed language, with a program that would not
compile, but that if the typechecker were somehow
turned off, would run safely and usefully.

It seems to me that the amount of effort required
to produce such an example will be telling.


Marshall
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2510030853150001@192.168.1.51>
In article <·····················@attbi_s04>, "Marshall Spight"
<·······@dnai.com> wrote:

> "Joachim Durchholz" <·················@web.de> wrote in message
·················@news.oberberg.net...
> > If you can't argue that the program is correct,
> > you shouldn't release it for production use.
> 
> I tend to agree. But I'd like to see if I can't nail
> this down further, which is why I'm trying to
> see if anyone can come up with a program
> that is small, useful, and not provably typesafe.
> Ideally this would be expressed in a statically
> typed language, with a program that would not
> compile, but that if the typechecker were somehow
> turned off, would run safely and usefully.
> 
> It seems to me that the amount of effort required
> to produce such an example will be telling.

How about:

(defun example ()
  (format t "~&Enter two numbers")
  (let ( (x (read)) (y (read)) )
    (format t "~&The sum of ~A and ~A is ~A" x y (+ x y))))

This is small, (marginally) useful, and not type safe.

Note that this program can handle cases like this:

? (example)
Enter two numbers
5/3
#c(17/13 5.93)
The sum of 5/3 and #c(1.3076923076923077 5.93) is #c(2.9743589743589745 5.93)


I can express this in a statically typed language:

int main () {
  SOME_TYPE x,y;
  cout << "Enter two numbers\n";
  x = read();
  y = read();
  cout << "The sum of " << x << " and " << y << " is " << x + y;
}

This will not compile in standard C++.  (Interestingly, it can be made to
compile with the addition of a suitable library, but the effort required
to produce such a library is telling.)
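
A rough Python analogue of the Lisp example (a hypothetical sketch for
comparison; unlike CL's reader, Python has no literal syntax for
rationals, so a small parser stands in for READ):

```python
from fractions import Fraction

def parse_number(s):
    # Try progressively richer numeric types, mimicking a fraction of
    # what the Lisp reader accepts (e.g. "5/3" or "2+3j").
    for conv in (int, Fraction, float, complex):
        try:
            return conv(s)
        except ValueError:
            pass
    raise ValueError("not a number: %r" % (s,))

def example():
    print("Enter two numbers")
    x = parse_number(input())
    y = parse_number(input())
    print("The sum of %s and %s is %s" % (x, y, x + y))
```

As in the Lisp version, x and y take whatever numeric type the input
happens to denote, and the + at the end is not statically typecheckable
in the usual sense.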

E.
From: Brian McNamara!
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnednq$f9i$1@news-int2.gatech.edu>
·················@jpl.nasa.gov (Erann Gat) once said:
>How about:
>
>(defun example ()
>  (format t "~&Enter two numbers")
>  (let ( (x (read)) (y (read)) )
>    (format t "~&The sum of ~A and ~A is ~A" x y (+ x y))))
>
>This is small, (marginally) useful, and not type safe.
>
>Note that this program can handle cases like this:
>
>? (example)
>Enter two numbers
>5/3
>#c(17/13 5.93)
>The sum of 5/3 and #c(1.3076923076923077 5.93) is #c(2.9743589743589745 5.93)

I believe the Haskell equivalent would be something like

   readNum :: (Num a)=> IO Maybe a

   do print "Enter two numbers"
      mx <- readNum
      my <- readNum
      print (maybe "Sorry, not numbers"
                   (\sum -> "The sum is "++(show sum))
                   mx `(liftM (+))` my)

and it would exhibit roughly the same behavior.

-- 
 Brian M. McNamara   ······@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2510031152540001@192.168.1.51>
In article <············@news-int2.gatech.edu>, ·······@prism.gatech.edu
(Brian McNamara!) wrote:

> ·················@jpl.nasa.gov (Erann Gat) once said:
> >How about:
> >
> >(defun example ()
> >  (format t "~&Enter two numbers")
> >  (let ( (x (read)) (y (read)) )
> >    (format t "~&The sum of ~A and ~A is ~A" x y (+ x y))))
> >
> >This is small, (marginally) useful, and not type safe.
> >
> >Note that this program can handle cases like this:
> >
> >? (example)
> >Enter two numbers
> >5/3
> >#c(17/13 5.93)
> >The sum of 5/3 and #c(1.3076923076923077 5.93) is #c(2.9743589743589745 5.93)
> 
> I believe the Haskell equivalent would be something like
> 
>    readNum :: (Num a)=> IO Maybe a
> 
>    do print "Enter two numbers"
>       mx <- readNum
>       my <- readNum
>       print (maybe "Sorry, not numbers"
>                    (\sum -> "The sum is "++(show sum))
>                    mx `(liftM (+))` my)
> 
> and it would exhibit roughly the same behavior.

Nope.

__   __ __  __  ____   ___      _________________________________________
||   || ||  || ||  || ||__      Hugs 98: Based on the Haskell 98 standard
||___|| ||__|| ||__||  __||     Copyright (c) 1994-2002
||---||         ___||           World Wide Web: http://haskell.org/hugs
||   ||                         Report bugs to: ·········@haskell.org
||   || Version: November 2002  _________________________________________

Haskell 98 mode: Restart with command line option -98 to enable extensions

Reading file "/usr/local/lib/hugs/lib/Prelude.hs":
                   
Hugs session for:
/usr/local/lib/hugs/lib/Prelude.hs
Type :? for help
Prelude> Prelude> :load "foo.hs"
Reading file "foo.hs":
ERROR "foo.hs":1 - Illegal type "IO Maybe a" in constructor application
From: Brian McNamara!
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnejgq$hl3$1@news-int2.gatech.edu>
·················@jpl.nasa.gov (Erann Gat) once said:
>(Brian McNamara!) wrote:
>> I believe the Haskell equivalent would be something like
>> 
>>    readNum :: (Num a)=> IO Maybe a
>> 
>>    do print "Enter two numbers"
>>       mx <- readNum
>>       my <- readNum
>>       print (maybe "Sorry, not numbers"
>>                    (\sum -> "The sum is "++(show sum))
>>                    mx `(liftM (+))` my)
>> 
>> and it would exhibit roughly the same behavior.
>
>Nope.

<sigh> Yes, yes, you've proven that code written "by hand" typically
contains little bugs.  It should read

   readNum :: (Num a)=> IO (Maybe a)   -- note extra parens

Also, I see now that liftM should be liftM2.  

It is still not a full working program, just a fragment which suggests
how it's possible to do this.  You'll have to find someone who actually
programs in Haskell if you want to turn this into an actual full
working program.  I am just an amateur Haskeller, and I don't have a
Haskell compiler at hand.

-- 
 Brian M. McNamara   ······@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
From: Mark Carroll
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <tVi*dES5p@news.chiark.greenend.org.uk>
In article <············@news-int2.gatech.edu>,
Brian McNamara! <·······@prism.gatech.edu> wrote:
(snip)
><sigh> Yes, yes, you've proven that code written "by hand" typically
>contains little bugs.  It should read
>
>   readNum :: (Num a)=> IO (Maybe a)   -- note extra parens
>
>Also, I see now that liftM should be liftM2.  
(snip)

This is probably nearer what you want:

readNum :: (Num a, Read a) => IO (Maybe a)

readNum = 
    do attempt <- try ((liftM read) getLine >>= evaluate)
       either (const (return Nothing)) (return . Just) attempt

example :: IO ()

example = 
    do putStrLn "Enter two numbers"
       x <- readNum
       y <- readNum
       putStrLn (maybe isNotNumber (isNumber x y) (liftM2 (+) x y))
       where
       isNotNumber = "Sorry, not numbers"
       isNumber (Just x) (Just y) z = "The sum of " ++ show x ++ " and " ++ show y ++ " is " ++ show z


I think it could be prettier - I've not put much thought into it.

-- Mark
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2510031845300001@192.168.1.51>
In article <·········@news.chiark.greenend.org.uk>, Mark Carroll
<·····@chiark.greenend.org.uk> wrote:

> In article <············@news-int2.gatech.edu>,
> Brian McNamara! <·······@prism.gatech.edu> wrote:
> (snip)
> ><sigh> Yes, yes, you've proven that code written "by hand" typically
> >contains little bugs.  It should read

Sorry, I'm not trying to be obstreperous.  I just don't know Haskell.

> 
> This is probably nearer what you want:

Nope.

Prelude> :load "foo.hs"
Reading file "foo.hs":
Dependency analysis
ERROR "foo.hs":3 - Undefined variable "evaluate"
Prelude> 


> I've not put much thought into it.

That is apparent.

E.
From: Mark Carroll
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <nRB*GXS5p@news.chiark.greenend.org.uk>
In article <··································@192.168.1.51>,
Erann Gat <·················@jpl.nasa.gov> wrote:
(snip)
>Prelude> :load "foo.hs"
>Reading file "foo.hs":
>Dependency analysis
>ERROR "foo.hs":3 - Undefined variable "evaluate"
>Prelude> 
(snip)

Just look for it in the libraries that come with your compiler!
For instance, with a recent GHC, importing Control.Exception and
Control.Monad will probably get you everything you need for this
program.
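
For instance, a self-contained sketch of the idea (hedged: this assumes a
reasonably recent GHC; the helper name `parse` is introduced here, and is
not from any of the posts above):

```haskell
import Control.Exception (SomeException, evaluate, try)
import Control.Monad (liftM2)

-- Force the polymorphic `read`; a parse failure becomes Nothing
-- instead of an uncaught exception.
parse :: Read a => String -> IO (Maybe a)
parse s = do
  r <- try (evaluate (read s))
  return (either (\e -> const Nothing (e :: SomeException)) Just r)

readNum :: (Num a, Read a) => IO (Maybe a)
readNum = getLine >>= parse

example :: IO ()
example = do
  putStrLn "Enter two numbers"
  mx <- readNum :: IO (Maybe Double)
  my <- readNum
  putStrLn (maybe "Sorry, not numbers"
                  (\s -> "The sum is " ++ show s)
                  (liftM2 (+) mx my))
```

Feeding it "5" and "7" prints the sum; feeding it "foo" falls through to
the "Sorry, not numbers" branch rather than crashing.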

-- Mark
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <cizmb.25501$Fm2.12604@attbi_s04>
"Erann Gat" <·················@jpl.nasa.gov> wrote in message ·······································@192.168.1.51...
> In article <·····················@attbi_s04>, "Marshall Spight"
> <·······@dnai.com> wrote:
>
> > "Joachim Durchholz" <·················@web.de> wrote in message
> ·················@news.oberberg.net...
> > > If you can't argue that the program is correct,
> > > you shouldn't release it for production use.
> >
> > I tend to agree. But I'd like to see if I can't nail
> > this down further, which is why I'm trying to
> > see if anyone can come up with a program
> > that is small, useful, and not provably typesafe.
> > Ideally this would be expressed in a statically
> > typed language, with a program that would not
> > compile, but that if the typechecker were somehow
> > turned off, would run safely and usefully.
> >
> > It seems to me that the amount of effort required
> > to produce such an example will be telling.
>
> How about:
>
> (defun example ()
>   (format t "~&Enter two numbers")
>   (let ( (x (read)) (y (read)) )
>     (format t "~&The sum of ~A and ~A is ~A" x y (+ x y))))

I am astonished to discover that you think a typesafe
language can't handle adding together two numbers
read as input.

Certainly no compiler can make inferences based on
information that's not available yet, such as input data.
Did you think that typesafe languages are not allowed
to have input?

What does this program do if you feed it bad inputs?
Or is + defined on every possible input, destroying
the symbol's relationship with addition?


> This is small, (marginally) useful, and not type safe.
>
> Note that this program can handle cases like this:

I will admit to being impressed, but this capability
isn't relevant to the question at hand as near as
I can tell.


Marshall
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2510031208000001@192.168.1.51>
In article <·····················@attbi_s04>, "Marshall Spight"
<·······@dnai.com> wrote:

> "Erann Gat" <·················@jpl.nasa.gov> wrote in message
·······································@192.168.1.51...
> > In article <·····················@attbi_s04>, "Marshall Spight"
> > <·······@dnai.com> wrote:
> >
> > > "Joachim Durchholz" <·················@web.de> wrote in message
> > ·················@news.oberberg.net...
> > > > If you can't argue that the program is correct,
> > > > you shouldn't release it for production use.
> > >
> > > I tend to agree. But I'd like to see if I can't nail
> > > this down further, which is why I'm trying to
> > > see if anyone can come up with a program
> > > that is small, useful, and not provably typesafe.
> > > Ideally this would be expressed in a statically
> > > typed language, with a program that would not
> > > compile, but that if the typechecker were somehow
> > > turned off, would run safely and usefully.
> > >
> > > It seems to me that the amount of effort required
> > > to produce such an example will be telling.
> >
> > How about:
> >
> > (defun example ()
> >   (format t "~&Enter two numbers")
> >   (let ( (x (read)) (y (read)) )
> >     (format t "~&The sum of ~A and ~A is ~A" x y (+ x y))))
> 
> I am astonished to discover that you think a typesafe
> language can't handle adding together two numbers
> read as input.

I never made any such claim.  Please do me the courtesy of entertaining
the possibility that I am not a complete moron.

> Certainly no compiler can make inferrences based on
> information that's not available yet, such as input data.

Yes, that would be the point I was making with my example.

> Did you think that typesafe languages are not allowed
> to have input?

Obviously not.  I do think that it is significantly harder to deal with
input (and math) in typesafe languages precisely because, as you say, no
compiler can make inferences based on information that is not available
yet, such as input data.

> What does this program do if you feed it bad inputs?

It throws a run-time exception obviously:

? (example)
Enter two numbers
foo
baz
> Error: value FOO is not of the expected type NUMBER.
> While executing: CCL::+-2
> Type Command-. to abort.
See the Restarts… menu item for further choices.
1 > 

(Could you not have figured that out on your own?)

> > This is small, (marginally) useful, and not type safe.
> >
> > Note that this program can handle cases like this:
> 
> I will admit to being impressed, but this capability
> isn't relevant to the question at hand as near as
> I can tell.

Why not?  You posed a challenge, and I responded to it.  If this is not
relevant perhaps you need to rethink the parameters of your challenge.

E.
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <%ACmb.16829$mZ5.78629@attbi_s54>
"Erann Gat" <·················@jpl.nasa.gov> wrote in message ·······································@192.168.1.51...
> In article <·····················@attbi_s04>, "Marshall Spight"
> <·······@dnai.com> wrote:
> > > > I tend to agree. But I'd like to see if I can't nail
> > > > this down further, which is why I'm trying to
> > > > see if anyone can come up with a program
> > > > that is small, useful, and not provably typesafe.
> > > > Ideally this would be expressed in a statically
> > > > typed language, with a program that would not
> > > > compile, but that if the typechecker were somehow
> > > > turned off, would run safely and usefully.
> > > >
> > > How about:
> > >
> > > (defun example ()
> > >   (format t "~&Enter two numbers")
> > >   (let ( (x (read)) (y (read)) )
> > >     (format t "~&The sum of ~A and ~A is ~A" x y (+ x y))))
> >
> > I am astonished to discover that you think a typesafe
> > language can't handle adding together two numbers
> > read as input.
>
> I never made any such claim.  Please do me the courtesy of entertaining
> the possibility that I am not a complete moron.

Okay, okay, my apologies. But you showed no sign of taking
me seriously. Writing such a program is trivial in most any
statically typed language, so I couldn't see any good-faith
effort to respond to my question on your part.


> > Certainly no compiler can make inferrences based on
> > information that's not available yet, such as input data.
>
> Yes, that would be the point I was making with my example.

Okay, so your example was rhetorical. But it didn't illuminate
anything we didn't both already know.


> > Did you think that typesafe languages are not allowed
> > to have input?
>
> Obivously not.

That was *my* turn to be rhetorical.


> I do think that it is significantly harder to deal with
> input (and math) in typesafe languages precisely because, as you say, no
> compiler can make inferences based on information that is not available
> yet, such as input data.

I don't see how. In both cases, you try to validate the input, and
if it fails you throw a runtime exception. It's the same in both
kinds of languages.

There are a number of places, such as this, where static typing
doesn't help you, but that doesn't mean these situations are
*harder* in statically typed languages; it means the language
has to "fall back" to the behavior of dynamically typed languages.
These situations are *the same* as in dynamically typed languages.
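
The validate-then-fail pattern being described can be sketched like this
(a hedged illustration only; `sumInputs` is a name made up for the
example, and `readMaybe` comes from base's Text.Read):

```haskell
import Text.Read (readMaybe)

-- Validate both inputs at runtime; a statically typed program reports
-- bad input dynamically, just as a dynamically typed one would.
sumInputs :: String -> String -> Either String Integer
sumInputs a b =
  case (readMaybe a, readMaybe b) of
    (Just x, Just y) -> Right (x + y)
    _                -> Left "Sorry, not numbers"
```

Here sumInputs "2" "3" yields Right 5, while sumInputs "two" "3" yields
Left "Sorry, not numbers" -- the failure is detected and reported at
runtime in either kind of language.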


> > What does this program do if you feed it bad inputs?
>
> It throws a run-time exception obviously:
> [...]
> (Could you not have figured that out on your own?)

I have several answers to that:

1) It was a rhetorical question
2) Yes
3) Not without a lisp system
4) It's what I suspected the answer would be, but this is
lisp, so you can never tell when the other guy is going
to say something like, "oh, but the macro system uses
eval to rewrite the parse tree at runtime, and the code
actually goes *back in time* so that, while the programmer
is typing the code into Emacs (of course) his hands
are *guided* into typing correct input. This all
happens in O(-n) time, where n is the number
of extra downcasts you would have had to type
in a Java program, as if Java could ever do anything
so clever." :-)


> > > This is small, (marginally) useful, and not type safe.
> > >
> > > Note that this program can handle cases like this:
> >
> > I will admit to being impressed, but this capability
> > isn't relevant to the question at hand as near as
> > I can tell.
>
> Why not?  You posed a challenge, and I responded to it.

But you didn't, really. You said the point you were
making with your example was to illustrate that the
compiler can't make inferences based on information
that's not available at compile time, such as input data.
So this example demonstrates a limit of the applicability
of type systems, but it doesn't demonstrate a small,
useful program that can't be proven correct by a
type system. The spirit of the challenge is to find
a place where the type system *gets in the way*,
not to find an area where the type system doesn't
help; that's easy.


>  If this is not relevant perhaps you need to rethink
> the parameters of your challenge.

I'm sorry, it sounds like what you're saying here is that
if I set up a challenge, and someone else responds to it,
and the response isn't relevant, then that is my responsibility.
Since I'm doing you the requested courtesy, that can't
be what you mean, but I don't see any other way to
interpret what you just said.


Marshall
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2510031830290001@192.168.1.51>
In article <·····················@attbi_s54>, "Marshall Spight"
<·······@dnai.com> wrote:

> I couldn't see any good-faith
> effort to respond to my question on your part.

Then you need to open your mind.

> > I do think that it is significantly harder to deal with
> > input (and math) in typesafe languages precisely because, as you say, no
> > compiler can make inferences based on information that is not available
> > yet, such as input data.
> 
> I don't see how. In both cases, you try to validate the input, and
> if it fails you throw a runtime exception. It's the same in both
> kinds of languages.

The same you say?  Then would you kindly show me how to reproduce the
behavior of my example in the statically typed language of your choice in
the same number of lines of code?

> There are a number of places, such as this, where static typing
> doesn't help you, but that doesn't mean these situations are
> *harder* in statically typed languages; it means the language
> has to "fall back" to the behavior of dynamically typed languages.
> These situations are *the same* as in dynamically typed languages.

No they aren't, because in a statically typed language you have to do
additional work to produce the behavior that a dynamically typed language
gives you for free.  If you didn't, you would have a dynamically typed
language.


> > > > This is small, (marginally) useful, and not type safe.
> > > >
> > > > Note that this program can handle cases like this:
> > >
> > > I will admit to being impressed, but this capability
> > > isn't relevant to the question at hand as near as
> > > I can tell.
> >
> > Why not?  You posed a challenge, and I responded to it.
> 
> But you didn't, really.

But I did.  Really.

> You said the point you were
> making with your example was to illustrate that the
> compiler can't make inferrences based on information
> that's not available at compile time, such as input data.

That's right.

> So this example demonstrates a limit of the applicability
> of type systems, but it doesn't demonstrate a small,
> useful program that can't be proven correct by a
> type system.

I think it does both.  Why do you think these two possibilities are
mutually exclusive?

> The spirit of the challenge is to find
> a place where the type system *gets in the way*,
> not to find an area where the type system doesn't
> help; that's easy.

I believe that my example illustrates exactly such a circumstance.  Of
course, we can't really settle this until you produce an equivalent
program in a statically typed language so that we can actually compare
them.

E.
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ysQmb.35137$e01.65242@attbi_s02>
"Erann Gat" <·················@jpl.nasa.gov> wrote in message ·······································@192.168.1.51...
> In article <·····················@attbi_s54>, "Marshall Spight"
> <·······@dnai.com> wrote:
>
> > I couldn't see any good-faith
> > effort to respond to my question on your part.
>
> Then you need to open your mind.

Nothing I like better than being talked down to.


> The same you say?  Then would you kindly show me how to reproduce the
> behavior of my example in the statically typed language of your choice in
> the same number of lines of code?

Well, I guess that's *your* challenge, then, but it's not mine.


> No they aren't, because in a statically typed language you have to
> additional work to produce the behavior that a dynamically typed language
> gives you for free.  If you didn't you would have a dynamically typed
> language.

I'll acknowledge that sometimes a statically typed language means
more typing, and I'll decline to quantify the difference.


> > > Why not?  You posed a challenge, and I responded to it.
> >
> > But you didn't, really.
>
> But I did.  Really.

You posted a reply, but the reply was not an answer
to the question, as it purported to be. Your program
was a non-sequitur; it proves a point that was not
under debate, that I already knew, and that I've
explicitly conceded.


> > So this example demonstrates a limit of the applicability
> > of type systems, but it doesn't demonstrate a small,
> > useful program that can't be proven correct by a
> > type system.
>
> I think it does both.  Why do you think these two possibilities are
> mutually exclusive?

I didn't say they were mutually exclusive; I just said
that you only did one of them.


> > The spirit of the challenge is to find
> > a place where the type system *gets in the way*,
> > not to find an area where the type system doesn't
> > help; that's easy.
>
> I believe that my example illustrates exactly such a circumstance.  Of
> course, we can't really settle this until you produce an equivalent
> program in a statically typed language so that we can actually compare
> them.

Even then it won't be settled, because you are debating
one issue while I am debating another.


Marshall
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2610030940280001@192.168.1.51>
In article <·····················@attbi_s02>, "Marshall Spight"
<·······@dnai.com> wrote:

> > The same you say?  Then would you kindly show me how to reproduce the
> > behavior of my example in the statically typed language of your choice in
> > the same number of lines of code?
> 
> Well, I guess that's *your* challenge, then, but it's not mine.

You made the claim.  I'm just asking you to provide support for it.

> You posted a reply, but the reply was not an answer
> to the question, as it purported to be.

Yes it was.  Here's your original post:

> "Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de ...
> > ... there exist programs that work but
> > that cannot be statically typechecked. These programs objectively exist.
> > By definition, I cannot express them in a statically typed language.
> I agree these programs exist.
> 
> It would be really interesting to see a small but useful example
> of a program that will not pass a statically typed language.
> It seems to me that how easy it is to generate such programs
> will be an interesting metric.
> 
> Anyone? (Sorry, I'm a static typing guy, so my brain is
> warped away from such programs. :-) 

My program was small (by any reasonable measure), useful (by some
reasonable measure), and would not compile in a statically typed
language.  (This is not to say that its functionality cannot be reproduced
in a statically typed language, as obviously it can.)

What you originally said you wanted to know was how much effort it took to
produce an example.  The answer is that it took less than a minute.

> Your program was a non-sequitur; it proves a point that was not
> under debate, that I already knew, and that I've explicitly conceded.

My program was not intended to prove any point, it was simply intended to
be a response to your request for a data point on how much effort it takes
to generate an example conforming to your criteria.

> Even then it won't be settled, because you are debating
> one issue while I am debating another.

I think you are very confused.  The only thing being debated here (as far
as I can tell) is whether or not my response was on-point, which I
maintain it was.

It might help us get back on track if you clarified what you believe to be
the topic at hand.

E.
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <OsVmb.27974$9E1.94770@attbi_s52>
"Erann Gat" <·················@jpl.nasa.gov> wrote in message ·······································@192.168.1.51...
>
> It might help us get back on track if you clarified what you believe to be
> the topic at hand.

I must respectfully decline. I no longer hold any hope of
us having a meaningful conversation.


Marshall
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2610031222410001@192.168.1.51>
In article <·····················@attbi_s52>, "Marshall Spight"
<·······@dnai.com> wrote:

> "Erann Gat" <·················@jpl.nasa.gov> wrote in message
·······································@192.168.1.51...
> >
> > It might help us get back on track if you clarified what you believe to be
> > the topic at hand.
> 
> I must respectfully decline. I no longer hold any hope of
> us having a meaningful conversation.

A self-fulfilling prophecy.

Good bye then.

E.
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <b8rs61-m68.ln1@ID-7776.user.dfncis.de>
Erann Gat <·················@jpl.nasa.gov> wrote:

> How about:
> 
> (defun example ()
>  (format t "~&Enter two numbers")
>  (let ( (x (read)) (y (read)) )
>    (format t "~&The sum of ~A and ~A is ~A" x y (+ x y))))
> 
> This is small, (marginally) useful, and not type safe.

> Note that this program can handle cases like this:
> 
> ? (example)
> Enter two numbers
> 5/3
> #c(17/13 5.93)
> The sum of 5/3 and #c(1.3076923076923077 5.93) is #c(2.9743589743589745 5.93)

I think both Brian and, in consequence, Mark ignored that. Here +
seems to deal with dynamic types, casting and converting them
according to some internal linear order.

Of course you cannot write this directly in any language that doesn't
provide such a function in the default libraries. In Haskell, + is
overloaded at compile-time (and only for arguments of the same type,
which has good reasons). It is not overloaded at runtime. So to make
it type safe, we simply have to create a datatype with the appropriate
tags, and do the conversion as required.

Here's the program with a few comments, in literate Haskell format (lhs)
with leading > for program code:

> import Complex
> import Ratio
> import Maybe

I here just do integers, rational numbers and complex numbers. Add
more at your pleasure.

> data Number = NInteger Integer | NRational Rational | 
>               NComplex (Complex Float)

For every sort of number, we have to say how we want the conversion
done. (The standard libraries already provide similar functions for
conversion at compile time.)

> toComplex :: Number -> Complex Float
> toComplex (NComplex x)  = x
> toComplex (NRational x) = (fromRational x) :+ 0
> toComplex (NInteger x)  = (fromInteger x) :+ 0

> toRat :: Number -> Rational
> toRat (NRational x) = x
> toRat (NInteger x) = fromInteger x

Now we can define our new plus operation in a straightforward way.
Note that + is overloaded at compile time, and will resolve to the
right addition once both sides have the same type.

> infixl 6 <+>

> (<+>) :: Number -> Number -> Number
> (NComplex x)  <+> y             = NComplex (x + toComplex y)
> x             <+> (NComplex y)  = NComplex ((toComplex x) + y)
> (NRational x) <+> y             = NRational (x + toRat y)
> x             <+> (NRational y) = NRational ((toRat x) + y)
> (NInteger x)  <+> (NInteger y)  = NInteger (x + y)

That's it. The only thing that is left is to deal with input and 
output. For output, we just make Number an instance of the Show typeclass:

> instance Show Number where
>   showsPrec p (NInteger x)  = showsPrec p x
>   showsPrec p (NRational x) = showsPrec p x
>   showsPrec p (NComplex x)  = showsPrec p x

Input is a bit more difficult, because we have to decide which type
and type tag to use based on the number that is parsed. First, here's
a general function that doesn't abort with an error when reading, but
returns a Maybe type.

> readMaybe :: Read a => String -> Maybe a
> readMaybe s = case [x | (x,t) <- reads s, ("","") <- lex t] of
>                    [x] -> Just x
>                    _   -> Nothing

Now we repeatedly try to parse the input and return the first choice
where we succeed. An instance for the Read class would work in a
similar way, but I didn't do that because the precision and parsing
stuff clutters the example IMHO too much.

> infixr 1 `choose`
> (z, y) `choose` x = maybe x y z

> readNumber :: String -> Number 
> readNumber s = ((readMaybe s), (\x -> NComplex x)) `choose`
>            	 ((readMaybe s), (\x -> NRational x)) `choose`
>            	 ((readMaybe s), (\x -> NInteger x)) `choose`
>            	 (error "Not a number") 

And now we can write the program exactly as you did:

> f = do
>  s <- getLine
>  let x = readNumber s
>  s <- getLine
>  let y = readNumber s
>  print ("The sum of " ++ show x ++ " and " ++ show y ++ " is " ++ 
>         show (x <+> y))
>  return ()

Here's a protocol from ghci:

Main> f
3 % 4
5
"The sum of 3 % 4 and 5 is 23 % 4"
Main> f
1.0 :+ 3.5
3 % 4
"The sum of 1.0 :+ 3.5 and 3 % 4 is 1.75 :+ 3.5"
Main> f
2.4
2  
*** Exception: Not a number

(because I didn't do Floats, 2.4 cannot be parsed.)

Erann Gat <·················@jpl.nasa.gov> wrote:
> I can express this in a statically typed language: [...]
> This will not compile in standard C++.  (Interestingly, it can be made to
> compile with the addition of a suitable library, but the effort required
> to produce such a library is telling.)

It may be interesting to compare the C++ library with the couple of
lines used above. It may be also interesting to compare it with the
implementation of + in Lisp (which I don't have here). The only
important bits IMHO are the conversion functions and the implementation
of the plus operation, and I don't think you can avoid doing a
case distinction for both of them. (The I/O stuff would just "vanish"
into the library if the library had been written with something like
this in mind from the start).

The catch of course is that the plus operation can operate on a lot
of number-like objects, and usually there is no default linear ordering
between their types that you can use for automatic conversion. For
example, you could define plus on a finite field, say GF(3). What
would it mean to add such a number to a real number? How would you add
a complex number over such a finite field to a real number? Etc.

- Dirk
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bngqgm$7u9$1@newsreader2.netcologne.de>
Dirk Thierbach wrote:
> Erann Gat <·················@jpl.nasa.gov> wrote:
> 
> 
>>How about:
>>
>>(defun example ()
>> (format t "~&Enter two numbers")
>> (let ( (x (read)) (y (read)) )
>>   (format t "~&The sum of ~A and ~A is ~A" x y (+ x y))))
>>
>>This is small, (marginally) useful, and not type safe.
> 
> 
>>Note that this program can handle cases like this:
>>
>>? (example)
>>Enter two numbers
>>5/3
>>#c(17/13 5.93)
>>The sum of 5/3 and #c(1.3076923076923077 5.93) is #c(2.9743589743589745 5.93)
> 
> 
> I think both Brian (and in consequence, Mark) ignored that. Here +
> seems to deal with dynamic types, casting and converting them
> according to some internal linear order.
> 
> Of course you cannot write this directly in any language that doesn't
> provide such a function in the default libraries. In Haskell, + is
> overloaded at compile-time (and only for arguments of the same type,
> which has good reasons). It is not overloaded at runtime. So to make
> it type safe, we simply have create a datatype with the apropriate
> tags, and do the conversion as required.

Customer: "No, no, no, I didn't mean only numbers, I mean arbitrary 
expressions!"

Welcome to Macintosh Common Lisp Version 5.0!
? (defun example ()
     (format t "~&Enter two numbers~%")
     (let ((x (eval (read))) (y (eval (read))))
       (format t "~&The sum of ~A and ~A is ~A" x y (+ x y))))
example
? (example)
Enter two numbers
5
(sqrt -1)
The sum of 5 and #c(0 1) is #c(5 1)
nil
? (setf i -1)
-1
? (example)
Enter two numbers
5
(sqrt i)
The sum of 5 and #c(0 1) is #c(5 1)
nil


Customer: "Yes, that's better!"

;)


Pascal
From: Stephen J. Bevan
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3oew4kkyd.fsf@dino.dnsalias.com>
Pascal Costanza <········@web.de> writes:
> Customer: "No, no, no, I didn't mean only numbers, I mean arbitrary
> expressions!"
> 
> Welcome to Macintosh Common Lisp Version 5.0!
> ? (defun example ()
>      (format t "~&Enter two numbers~%")
>      (let ((x (eval (read))) (y (eval (read))))
>        (format t "~&The sum of ~A and ~A is ~A" x y (+ x y))))
> example
> ? (example)
> Enter two numbers
> 5
> (sqrt -1)
> The sum of 5 and #c(0 1) is #c(5 1)
> nil
> ? (setf i -1)
> -1
> ? (example)
> Enter two numbers
> 5
> (sqrt i)
> The sum of 5 and #c(0 1) is #c(5 1)
> nil
> 
> 
> Customer: "Yes, that's better!"

Customer: "No, no, no, I meant arbitrary Mathematic expressions."
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnh0dg$gmo$1@newsreader2.netcologne.de>
Stephen J. Bevan wrote:

> Customer: "No, no, no, I meant arbitrary Mathematic expressions."

?!?


Pascal
From: Stephen J. Bevan
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3k76rlxex.fsf@dino.dnsalias.com>
Pascal Costanza <········@web.de> writes:
> Stephen J. Bevan wrote:
> 
> > Customer: "No, no, no, I meant arbitrary Mathematic expressions."
> 
> ?!?

"Mathematica"
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnh7fu$s9f$2@newsreader2.netcologne.de>
Stephen J. Bevan wrote:

> Pascal Costanza <········@web.de> writes:
> 
>>Stephen J. Bevan wrote:
>>
>>
>>>Customer: "No, no, no, I meant arbitrary Mathematic expressions."
>>
>>?!?
> 
> 
> "Mathematica"

:-)
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <qtht61-p8f.ln1@ID-7776.user.dfncis.de>
Stephen J. Bevan <·······@dino.dnsalias.com> wrote:
> Pascal Costanza <········@web.de> writes:
>> Customer: "No, no, no, I didn't mean only numbers, I mean arbitrary
>> expressions!"
>> 

>> ? (defun example ()
>>      (format t "~&Enter two numbers~%")
>>      (let ((x (eval (read))) (y (eval (read))))
>>        (format t "~&The sum of ~A and ~A is ~A" x y (+ x y))))

>> Customer: "Yes, that's better!"

> Customer: "No, no, no, I meant arbitrary Mathematic expressions."

Customer: "You mean every user can now execute arbitrary code, and
gain access to functions that usually require administrator rights for
my program? Are you insane? Turn that off immediately! And, BTW, I
only want the user to enter Gaussian integers (complex numbers over
integers) and integers. Everything else should produce an error."

Seriously again: There is no 'eval' function in OCaml, Hugs, or GHC,
and I am glad there is none. The danger of taking a shortcut and
opening up huge security holes is just too great. And this is not a
matter of static typing, either, just a design decision. All three
compilers have an interactive mode, where you can add new code to the
program already in memory. It wouldn't be too hard to add a function
that is only available in interactive mode and that operates on
strings in the same way the interactive mode operates on console
input.

- Dirk
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnhg9b$dfv$1@newsreader2.netcologne.de>
Dirk Thierbach wrote:

> Stephen J. Bevan <·······@dino.dnsalias.com> wrote:
> 
>>Pascal Costanza <········@web.de> writes:
>>
>>>Customer: "No, no, no, I didn't mean only numbers, I mean arbitrary
>>>expressions!"
>>>
> 
> 
>>>? (defun example ()
>>>     (format t "~&Enter two numbers~%")
>>>     (let ((x (eval (read))) (y (eval (read))))
>>>       (format t "~&The sum of ~A and ~A is ~A" x y (+ x y))))
> 
> 
>>>Customer: "Yes, that's better!"
> 
> 
>>Customer: "No, no, no, I meant arbitrary Mathematic expressions."
> 
> 
> Customer: "You mean every user can now execute arbitrary code, and
> gain access to functions that usually require administrator rights for
> my program? Are you insane? Turn that off immediately! And, BTW, I
> only want the user to enter gaussian integers (complex numbers over
> integers) and integers. Everything else should produce an error."

(defun myread ()
   (let ((*read-eval* nil))
     (read)))

(defun myeval (form)
   (check-admissibility form)
   (apply (car form) (mapcar #'myeval (cdr form))))

(defun example ()
   (format t "~&Enter two numbers~%")
   (let ((x (myeval (myread))) (y (myeval (myread))))
     (format t "~&The sum of ~A and ~A is ~A" x y (+ x y))))


The details of CHECK-ADMISSIBILITY are left as an exercise to the reader. ;)
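For comparison, here is a rough Python analogue of the same whitelist idea (the operator table and helper names are illustrative, not part of the thread):

```python
import operator

# Additive whitelist: only these operators are admissible; anything
# else is rejected outright, as CHECK-ADMISSIBILITY is meant to do.
ALLOWED = {
    "+": operator.add, "-": operator.sub,
    "*": operator.mul, "/": operator.truediv,
}

def my_eval(form):
    """Evaluate a nested (op, arg, ...) tuple using only ALLOWED ops."""
    if isinstance(form, (int, float)):
        return form
    op, *args = form
    if op not in ALLOWED:
        raise ValueError("form not admissible: %r" % (op,))
    vals = [my_eval(a) for a in args]
    result = vals[0]
    for v in vals[1:]:
        result = ALLOWED[op](result, v)
    return result

print(my_eval(("+", 5, ("*", 2, 3))))  # 11
```

Because evaluation only dispatches through the table, user input can never reach arbitrary functions.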


Pascal
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <956v61-8v.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> Dirk Thierbach wrote:

> The details of CHECK-ADMISSIBILITY [to restrict eval] are left as an
> exercise to the reader. ;)

But they are the hard part. Would you risk writing it this way if you
were personally accountable for any errors in it that cause a loss of
security, by paying, say, a billion Euro in that case?

If you want to be sure that only a small number of options are allowed,
you list all of them. You don't allow everything and then disallow those
things you think could be harmful. You just cannot be sure you didn't
forget anything.

Never use eval on user-supplied values if security is important. Don't
even think about it. The risk is just too great.

- Dirk
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnj4tb$ln8$2@f1node01.rhrz.uni-bonn.de>
Dirk Thierbach wrote:
> Pascal Costanza <········@web.de> wrote:
> 
>>Dirk Thierbach wrote:
> 
> 
>>The details of CHECK-ADMISSIBILITY [to restrict eval] are left as an
>>exercise to the reader. ;)
> 
> 
> But they are the hard part. Would you risk to write it this way if you
> personally would be accountable for any errors in it that may cause
> loss of security, by paying, say, a billion Euro in that case?
> 
> If you want to be sure that only a small number of options are allowed,
> you list all of them. You don't allow everything and then disallow those
> things you think could be harmful. You just cannot be sure you didn't
> forget anything.

I am sorry, I don't get your point.

(defun check-admissibility (form)
   (or (symbolp form)
       (member (car form) '(cons car cdr + - * /))))

Or check whether they are the public symbols of a package that you have defined.

What's the problem?!?

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Alexander Schmolck
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <yfsbrs2u5ee.fsf@black132.ex.ac.uk>
Pascal Costanza <········@web.de> writes:

> I am sorry, I don't get your point.
> 
> (defun check-admissibility (form)
>    (or (symbolp form)
>        (member (car form) '(cons car cdr + - * /))))
> 
> Or check, if they are the public symbols of a package that you have defined.
> 
> What's the problem?!?

Maybe something like this?

 (* 10000000000000000000000000000000000000000000000000000000
 10000000000000000000000000000000000000000000000 etc.)

'as
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnjb53$v4c$3@f1node01.rhrz.uni-bonn.de>
Alexander Schmolck wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>I am sorry, I don't get your point.
>>
>>(defun check-admissibility (form)
>>   (or (symbolp form)
>>       (member (car form) '(cons car cdr + - * /))))
>>
>>Or check, if they are the public symbols of a package that you have defined.
>>
>>What's the problem?!?
> 
> 
> Maybe something like this?
> 
>  (* 10000000000000000000000000000000000000000000000000000000
>  10000000000000000000000000000000000000000000000 etc.)

I don't get the point.

CL-USER 1 > (* 10000000000000000000000000000000000000000000000000000000
  10000000000000000000000000000000000000000000000 etc.)

Error: The variable ETC. is unbound.
   1 (continue) Try evaluating ETC. again.
   2 Specify a value to use this time instead of evaluating ETC..
   3 Specify a value to set ETC. to.
   4 (abort) Return to level 0.
   5 Return to top loop level 0.

Type :b for backtrace, :c <option number> to proceed,  or :? for other 
options

CL-USER 2 : 1 > :c 5

CL-USER 3 > (* 10000000000000000000000000000000000000000000000000000000
  10000000000000000000000000000000000000000000000)
100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Alexander Schmolck
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <yfs3cdetyfr.fsf@black132.ex.ac.uk>
Pascal Costanza <········@web.de> writes:

> >>What's the problem?!?
> > Maybe something like this?
> >  (* 10000000000000000000000000000000000000000000000000000000
> >  10000000000000000000000000000000000000000000000 etc.)
> 
> I don't get the point.

Sorry for being unclear. I was just asking whether something along the lines
of your safe eval wouldn't still be vulnerable to a DoS attack.

'as
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnjqe3$bni$1@newsreader2.netcologne.de>
Alexander Schmolck wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>>>What's the problem?!?
>>>
>>>Maybe something like this?
>>> (* 10000000000000000000000000000000000000000000000000000000
>>> 10000000000000000000000000000000000000000000000 etc.)
>>
>>I don't get the point.
> 
> 
> Sorry for being unclear. I was just asking whether something along the lines
> of your safe eval wouldn't still be vulnerable to a DoS attack.

Welcome to Macintosh Common Lisp Version 5.0!
? (defun fac (x)
     (if (= x 0) 1
         (* x (fac (- x 1)))))
fac
? (fac 100000)
 > Error: Stack overflow on control stack.
 >        To globally increase stack space,
 >        increase *minimum-stack-overflow-size*
 > While executing: "Unknown"
 > Type Command-/ to continue, Command-. to abort.
 > If continued: Continue with a larger stack
See the Restarts… menu item for further choices.
1 >


...now add an exception handler around calls to myeval that handles 
stack overflow, and don't allow the user code access to functions and 
variables that can manipulate these settings. (With an appropriate 
exception handler, the user wouldn't see the error as printed above and 
wouldn't be able to issue the possible restarts.)

I know this doesn't completely answer your question, but it might give 
you a clue about what is possible in modern Common Lisp implementations, 
and how one could approach these things.
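The same trap-the-overflow idea, sketched in Python (the recursion limit and the error string are arbitrary choices):

```python
import sys

def fac(x):
    # Deliberately non-tail-recursive, like the MCL example above.
    return 1 if x == 0 else x * fac(x - 1)

def guarded(thunk):
    """Run thunk, turning a blown control stack into an error value
    instead of dropping the user into a debugger."""
    try:
        return thunk()
    except RecursionError:
        return "error: computation exhausted the control stack"

sys.setrecursionlimit(1000)
print(guarded(lambda: fac(5)))       # 120
print(guarded(lambda: fac(100000)))  # error: computation exhausted the control stack
```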


Pascal
From: Alexander Schmolck
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <yfsu15us353.fsf@black132.ex.ac.uk>
Pascal Costanza <········@web.de> writes:

> Alexander Schmolck wrote:
> 
> > Pascal Costanza <········@web.de> writes:
> >
> >>>>What's the problem?!?
> >>>
> >>>Maybe something like this?
> >>> (* 10000000000000000000000000000000000000000000000000000000
> >>> 10000000000000000000000000000000000000000000000 etc.)
> >>
> >>I don't get the point.
> > Sorry for being unclear. I was just asking whether something along the
> > lines of your safe eval wouldn't still be vulnerable to a DoS attack.
> 
> Welcome to Macintosh Common Lisp Version 5.0!
> ? (defun fac (x)
>      (if (= x 0) 1
>          (* x (fac (- x 1)))))
> fac
> ? (fac 100000)
>  > Error: Stack overflow on control stack.
>  >        To globally increase stack space,
>  >        increase *minimum-stack-overflow-size*
>  > While executing: "Unknown"
>  > Type Command-/ to continue, Command-. to abort.
>  > If continued: Continue with a larger stack
> See the Restarts… menu item for further choices.
> 1 >
> 
> 
> ...now add an exception handler around calls to myeval that handles stack
> overflow, and don't allow the user code access to functions and variables that
> can manipulate these settings. (With an appropriate exception handler, the
> user wouldn't see the error as printed above and wouldn't be able to issue the
> possible restarts.)
> 
> I know this doesn't completely answer your question, but it might give you a
> clue about what is possible in modern Common Lisp implementations, and how
> one could approach these things.

I was just nitpicking. 

While I'm at it, are you sure this is "how one could approach these things"?

Isn't the stack overflow just due to the function being coded/compiled
poorly (viz. no tail call optimization)? If so, given 'properly'
implemented functions, wouldn't the system have already ground to a halt
before any exceptions were raised, unless you carefully code something
to estimate/prevent computational resource usage (customized versions of
potentially resource-hungry functions, or eval'ing in a separate,
monitored thread)?

'as
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnkm3o$p9s$1@newsreader2.netcologne.de>
Alexander Schmolck wrote:

> I was just nitpicking. 

OK. :)

> While I'm at it, are you sure this is "how one could approach these things"?
> 
> Isn't the stack overflow just due to the function being coded/compiled poorly
> (viz. no tail call optimization), if so, given 'properly' implemented
> functions, wouldn't the system have already ground to a halt before any
> exceptions would be raised unless you either carefully code something to
> estimate/prevent computational ressource usage (customized versions of
> potentially ressource hungry functions, or eval'ing in a separate, monitored
> thread?)?

Hmm, I have trouble parsing your statements here. However, I would 
say that it should be easy to restrict resources in a way so that you 
can get the right exceptions and then deal with them.

(Yes, in the example given above, a "proper" tail-recursive version 
would exhibit different behavior, but I think that's beside the point.)

Pascal
From: Espen Vestre
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <kwhe1tlo32.fsf@merced.netfonds.no>
Pascal Costanza <········@web.de> writes:

> I know this doesn't completely answer your question, but it might give
> you a clue about what is possible in modern Common Lisp
> implementations, and how one could approach these things.

Sure. But

CL-USER 14 > (mp:process-run-function "junk" nil (lambda()(time (integer-length (expt 10 1000000)))))

makes my lispworks feel like wading through a tar pit for a minute or 
two :-)

In an advanced server setting, you may want to put in a lot of checks
to prevent single requests from spawning overly CPU-consuming threads on
the server. But I guess most server applications out there on the web
are vulnerable to "cpu-grabbing" DoS attacks anyway...
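One portable way to add such a check, sketched here in Python: charge the evaluator a fuel budget per request, so no single request can burn unbounded CPU (the budget size and all names are made up for illustration):

```python
class OutOfFuel(Exception):
    """Raised when a request exceeds its evaluation budget."""

def eval_budgeted(form, fuel):
    """Evaluate nested ("+"/"*", arg, ...) tuples, charging one unit
    of fuel per reduction so a hostile request cannot hog the CPU."""
    remaining = [fuel]
    def go(f):
        remaining[0] -= 1
        if remaining[0] < 0:
            raise OutOfFuel("request exceeded its CPU budget")
        if isinstance(f, int):
            return f
        op, *args = f
        vals = [go(a) for a in args]
        result = vals[0]
        for v in vals[1:]:
            result = result + v if op == "+" else result * v
        return result
    return go(form)

print(eval_budgeted(("+", 1, ("*", 2, 3)), 100))  # 7
```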
-- 
  (espen)
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnmqes$fqu$5@newsreader2.netcologne.de>
Espen Vestre wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>I know this doesn't completely answer your question, but it might give
>>you a clue about what is possible in modern Common Lisp
>>implementations, and how one could approach these things.
> 
> 
> Sure. But
> 
> CL-USER 14 > (mp:process-run-function "junk" nil (lambda()(time (integer-length (expt 10 1000000)))))
> 
> makes my lispworks feel like wading through a tar pit for a minute or 
> two :-)

Hmm, just don't let mp:process-run-function go through the admissibility 
check? ;)

In some situations, it's probably best to define the set of admissible 
forms in an additive way, not in a subtractive way. (I.e., don't say 
what is not allowed and permit everything else, but rather say what is 
allowed and don't permit everything else.)


Pascal
From: Espen Vestre
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <kwfzheohlr.fsf@merced.netfonds.no>
Alexander Schmolck <··········@gmx.net> writes:

> Maybe something like this?
> 
>  (* 10000000000000000000000000000000000000000000000000000000
>  10000000000000000000000000000000000000000000000 etc.)

What, except for _possibly_ a DoS-attack, would you be able to accomplish 
by that?
-- 
  (espen)
From: Alexander Schmolck
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <yfs65iatynj.fsf@black132.ex.ac.uk>
Espen Vestre <·····@*do-not-spam-me*.vestre.net> writes:

> Alexander Schmolck <··········@gmx.net> writes:
> 
> > Maybe something like this?
> > 
> >  (* 10000000000000000000000000000000000000000000000000000000
> >  10000000000000000000000000000000000000000000000 etc.)
> 
> What, except for _possibly_ a DoS-attack, would you be able to accomplish 
> by that?

DoS attack was what I had in mind.

'as
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bm4071-d4g.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> Dirk Thierbach wrote:

[restricting eval]

> What's the problem?!?

I am sorry, I didn't look carefully enough. My fault. I thought you
were still calling eval, but you are not, you are simulating it.
That's good, because then it is easy to replace check-admissibility
with something like lookup-function-by-name, and it will happily
run in a statically typed language without eval. 

(Effectively, this is a mini-interpreter on its own, and this is how
it should be.)

- Dirk
From: Gareth McCaughan
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87vfqaeaz1.fsf@g.mccaughan.ntlworld.com>
Pascal Costanza wrote:

> >>Dirk Thierbach wrote:
> >
> >>The details of CHECK-ADMISSIBILITY [to restrict eval] are left as an
> >>exercise to the reader. ;)
> > But they are the hard part. Would you risk to write it this way if
> > you
> > personally would be accountable for any errors in it that may cause
> > loss of security, by paying, say, a billion Euro in that case?
> > If you want to be sure that only a small number of options are
> > allowed,
> > you list all of them. You don't allow everything and then disallow those
> > things you think could be harmful. You just cannot be sure you didn't
> > forget anything.
> 
> I am sorry, I don't get your point.
> 
> (defun check-admissibility (form)
>    (or (symbolp form)
>        (member (car form) '(cons car cdr + - * /))))
> 
> Or check, if they are the public symbols of a package that you have defined.
> 
> What's the problem?!?

I'm afraid you just illustrated it.

(+ 123
   (format-my-hard-disk)
   (send-pornographic-emails)
   (insult-boss)
   (launch-missiles :override t)
   (loop (summon-nasal-demons)))

It's *not* that hard to write a safe CHECK-ADMISSIBILITY,
but you can't write secure software with an attitude that
says "What's the problem?!?".

Oh, and you forgot to set *READ-EVAL* to nil. Probably
some other things too.

-- 
Gareth McCaughan
.sig under construc
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnkepa$f2h$1@newsreader2.netcologne.de>
Gareth McCaughan wrote:

> Oh, and you forgot to set *READ-EVAL* to nil. Probably
> some other things too.

I have set *read-eval* to nil. Check the code again.

This just proves that you haven't actually read it. (which by the way 
indeed has a bug, but a different one - I just haven't checked it, I 
only posted it to illustrate an idea)


Pascal
From: Gareth McCaughan
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87r80xeqoe.fsf@g.mccaughan.ntlworld.com>
Pascal Costanza <········@web.de> writes:

> Gareth McCaughan wrote:
> 
> > Oh, and you forgot to set *READ-EVAL* to nil. Probably
> > some other things too.
> 
> I have set *read-eval* to nil. Check the code again.
> 
> This just proves that you haven't actually read it.

Ahem. You're right. :-) So here's another bug: literals
other than symbols -- such as, e.g., numbers -- won't
get through. But that's probably what you are referring
to when you say

>                                                     (which by the way
> indeed has a bug, but a different one - I just haven't checked it, I
> only posted it to illustrate an idea)

Sorry about that.

-- 
Gareth McCaughan
.sig under construc
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnmq80$fqu$3@newsreader2.netcologne.de>
Gareth McCaughan wrote:

> Sorry about that.

OK, no problem. ;)


Pascal
From: Matthew Danish
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031025231041.GF1454@mapcar.org>
On Sat, Oct 25, 2003 at 02:59:12PM +0000, Marshall Spight wrote:
> I tend to agree. But I'd like to see if I can't nail
> this down further, which is why I'm trying to
> see if anyone can come up with a program
> that is small, useful, and not provably typesafe.
> Ideally this would be expressed in a statically
> typed language, with a program that would not
> compile, but that if the typechecker were somehow
> turned off, would run safely and usefully.

- val Y = (fn h => (fn x => h (x x)) (fn x => h (x x)));
stdIn:15.10-15.15 Error: operator is not a function [circularity]
  operator: 'Z
  in expression:
    x x

I know the reason why it doesn't work, but it does satisfy your requirements.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Daniel C. Wang
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uu15wbx74.fsf@hotmail.com>
Just to add some more gasoline  to the fire... 

Dependent Types Ensure Partial Correctness of Theorem Provers, by Andrew
W. Appel and Amy P. Felty. Accepted for publication, Journal of Functional
Programming, 2002.

http://www.cs.princeton.edu/~appel/papers/prover/prover.pdf

Abstract

Static type systems in programming languages allow many errors to be
detected at compile time that wouldn't be detected until runtime
otherwise. Dependent types are more expressive than the type systems in
most programming languages, so languages that have them should allow
programmers to detect more errors earlier. In this paper, using the Twelf
system, we show that dependent types in the logic programming setting can
be used to ensure partial correctness of programs which implement theorem
provers, and thus avoid runtime errors in proof search and proof
construction. We present two examples: a tactic-style interactive theorem
prover and a union-find decision procedure.

BTW with termination and coverage checking in the newer versions of Twelf,
you can establish total correctness for quite a large set of interesting
programs. 

I'm also eagerly awaiting an official release of Delphin.

http://cs-www.cs.yale.edu/homes/carsten/papers/delphin.pdf

I have some problems with their treatment of HOAS these days and prefer the
approach taken by languages such as FreshML. However, I suspect that in a
decade or so we'll have a kick-ass programming language and type system in
which, when your program type checks, you get a 100% guarantee that your
favorite non-trivial property is actually correct. Who knows, maybe we can
even convince all those mathematicians to start programming in some
successor to Twelf.

BTW I'm sure an ancient Roman engineer would be in complete awe of the Golden
Gate bridge and similar modern civil engineering marvels. I will be very
disappointed if, 3000 years from now, software "engineering" still looks
very much like it does today.

Arguing about whether or not static checking is better than runtime checking
*today* is nearsighted. Static checking of software is the future
of software engineering!

I believe this because I understand that the technology is on the way and
getting easier and cheaper to use, and that our society is becoming more
reliant on software. It's simply a question of when the demand curve for
better software is high enough and the cost of delivering formally verified
systems is low enough. At that point, formal verification will become a
customer requirement for everything. I personally hope I will see this
happen within my lifetime, and that it takes less than 3000 years.

Until then... we are stuck with the disappointing reality.
 http://www.eff.org/Legal/ISP_liability/20031016_eff_pr.php

I'm waiting for the day that the US congress passes a resolution that all
software used for tallying votes must come with some formally checkable
assurance of an unforgeable audit trail. I'm quietly hoping that these machines
get deployed and someone rigs an election... (and gets caught, of course)
then maybe people will start to care about static checking. :)
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <QzEmb.30611$e01.59276@attbi_s02>
"Matthew Danish" <·······@andrew.cmu.edu> wrote in message ··························@mapcar.org...
>
> - val Y = (fn h => (fn x => h (x x)) (fn x => h (x x)));
> stdIn:15.10-15.15 Error: operator is not a function [circularity]
>   operator: 'Z
>   in expression:
>     x x
>
> I know the reason why it doesn't work, but it does satisfy your requirements.

Uh, what does it do?


Marshall
From: Matthew Danish
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031026013608.GG1454@mapcar.org>
On Sun, Oct 26, 2003 at 12:29:36AM +0000, Marshall Spight wrote:
> "Matthew Danish" <·······@andrew.cmu.edu> wrote in message ··························@mapcar.org...
> >
> > - val Y = (fn h => (fn x => h (x x)) (fn x => h (x x)));
> > stdIn:15.10-15.15 Error: operator is not a function [circularity]
> >   operator: 'Z
> >   in expression:
> >     x x
> >
> > I know the reason why it doesn't work, but it does satisfy your requirements.
> 
> Uh, what does it do?

It is the fix-point combinator.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Neelakantan Krishnaswami
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <slrnbpm8ao.jjq.neelk@gs3106.sp.cs.cmu.edu>
In article <·····················@mapcar.org>, Matthew Danish wrote:
> On Sat, Oct 25, 2003 at 02:59:12PM +0000, Marshall Spight wrote:
>> I tend to agree. But I'd like to see if I can't nail this down
>> further, which is why I'm trying to see if anyone can come up with
>> a program that is small, useful, and not provably typesafe.
>> Ideally this would be expressed in a statically typed language,
>> with a program that would not compile, but that if the typechecker
>> were somehow turned off, would run safely and usefully.
> 
> - val Y = (fn h => (fn x => h (x x)) (fn x => h (x x)));
> stdIn:15.10-15.15 Error: operator is not a function [circularity]
>   operator: 'Z
>   in expression:
>     x x
> 
> I know the reason why it doesn't work, but it does satisfy your
> requirements.

This function won't do anything very useful in CL or Scheme, either!
You've written the call-by-name Y combinator, and it will loop until
the stack overflows in a strict language. Try the call-by-value Y
combinator instead:

  $ ocaml -rectypes
  	  Objective Caml version 3.06
  
  # let y f = (fun x -> f (fun a -> x x a))
              (fun x -> f (fun a -> x x a));;
  val y : (('a -> 'b) -> 'a -> 'b) -> 'a -> 'b

Incidentally, I caught this bug because of, er, type-checking. Your
code has the inferred type:

  # let y h = (fun x -> h (x x)) (fun x -> h (x x));;
  val y : ('a -> 'a) -> 'a

This surprised me since I expected 'a to have been a function type
rather than an arbitrary type variable.
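The same eta-expansion trick transliterates to Python, another strict language: wrapping x(x) in a lambda delays it, which is what keeps the call-by-value combinator from looping.

```python
def y(f):
    """Call-by-value Y combinator; (lambda a: x(x)(a)) delays x(x)."""
    return (lambda x: f(lambda a: x(x)(a)))(lambda x: f(lambda a: x(x)(a)))

# Factorial defined without explicit self-reference.
fact = y(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(fact(10))  # 3628800
```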

-- 
Neel Krishnaswami
·····@cs.cmu.edu
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnel0e$i8g$1@news.oberberg.net>
Marshall Spight wrote:

> "Joachim Durchholz" <·················@web.de> wrote:
> 
>>If you can't argue that the program is correct,
>>you shouldn't release it for production use.
> 
> I tend to agree. But I'd like to see if I can't nail
> this down further, which is why I'm trying to
> see if anyone can come up with a program
> that is small, useful, and not provably typesafe.

Matthias' and my position is that such a program doesn't exist (and 
cannot even exist), but all mental progress comes from things not 
considered possible before, so I'm feeling ready for some mental 
progress :-)

> It seems to me that the amount of effort required
> to produce such an example will be telling.

That depends.
It once took me a full year to find a good example to showcase a subtle 
inconsistency in Eiffel's semantics. Once I had found such an example, 
it's very easy to understand and follow, and producing more examples 
requires little if any effort.
It may require a similar a-ha experience in this case :-)
(Of course, a-ha experiences are rare and precious, so I win both ways: 
either no example shows up, which proves my point, or a good example 
indeed shows up, in which case I have learned something substantial *g*)

Regards,
Jo
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <Y5Cmb.14943$mZ5.77489@attbi_s54>
"Joachim Durchholz" <·················@web.de> wrote in message ·················@news.oberberg.net...
> Marshall Spight wrote:
>
> > I tend to agree. But I'd like to see if I can't nail
> > this down further, which is why I'm trying to
> > see if anyone can come up with a program
> > that is small, useful, and not provably typesafe.
>
> Matthias' and my position is that such a program doesn't exist (and
> cannot even exist),

Woo hoo!


> but all mental progress comes as things not
> considered possible before, so I'm feeling ready for some mental
> progress :-)

Right on!


> > It seems to me that the amount of effort required
> > to produce such an example will be telling.
>
> That depends.
> It once took me a full year to find a good example to showcase a subtle
> inconsistency in Eiffel's semantics. Once I had found such an example,
> it's very easy to understand and follow, and producing more examples
> requires little if any effort.

Was it covariant argument types?

And in any event, the fact that it took a year still makes for
a different situation than if it had taken ten minutes.


> (Of course, a-ha experiences are rare and precious, so I win both ways:
> either no example shows up, which proves my point, or a good example
> indeed shows up, in which case I have learned something substantial *g*)

That's right where I'm coming from!


Marshall
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnh2tb$rsj$1@news.oberberg.net>
Marshall Spight wrote:

> "Joachim Durchholz" <·················@web.de> wrote:
> 
>>It once took me a full year to find a good example to showcase a subtle
>>inconsistency in Eiffel's semantics. Once I had found such an example,
>>it's very easy to understand and follow, and producing more examples
>>requires little if any effort.
> 
> Was it covariant argument types?

No, that's quite a standard staple now.
It was about postconditions and replicated inheritance.

> And in any event, the fact that it took a year still makes for
> a different situation than if it had taken ten minutes.

Actually, I could reproduce hordes of problems in that area in a minute now.
It's been an a-ha experience that enabled me to see some things clearly 
that were obscure (and hence difficult) before.

Regards,
Jo
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <PvVmb.27992$9E1.95136@attbi_s52>
"Joachim Durchholz" <·················@web.de> wrote in message ·················@news.oberberg.net...
> Marshall Spight wrote:
>
> > Was it covariant argument types?
>
> No, that's quite standard staples now.
> It was about postconditions and replicated inheritance.

I'm dying to hear.


> > And in any event, the fact that it took a year still makes for
> > a different situation than if it had taken ten minutes.
>
> Actually, I could reproduce hordes of problems in that area in a minute now.
> It's been an a-ha experience that enabled me to see some things clearly
> that were obscure (and hence difficult) before.

I meant for the first one. I get your point that once you've figured
it out the first time, successive related problems are easy to
generate.


Marshall
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnhnai$5fe$1@news.oberberg.net>
Marshall Spight wrote:

> "Joachim Durchholz" <·················@web.de> wrote:
> 
>>Marshall Spight wrote:
>>
>>>Was it covariant argument types?
>>
>>No, that's quite standard staples now.
>>It was about postconditions and replicated inheritance.
> 
> I'm dying to hear.

Awww... got me. I didn't think it would be interesting to anybody here.

It's something that not a single language or run-time framework that I 
know of gets right (the sole potential exception that I know of is the 
COM+ framework from Microsoft, which seems to have gotten it right by 
accident - if that's true, it's embarrassing anyway...)

Anyway, here goes for anybody who's still interested:

The base case was "diamond inheritance".
No, no, don't run away, it turned out to be a case of "diamond 
subtyping", and subtypes are a very real part of many functional 
languages. It boiled down to the question whether there is any case of a 
real "diamond subtyping", i.e. a subtype relationship like
     A
    / \
   B   C
    \ /
     D
where B and C are doing fundamentally different things in their 
interpretation of A, so that D would be forced to "be-an A" in two 
fundamentally different ways.

(The postcondition stuff turned out to be rather irrelevant at this 
point, I had some contradictions with postconditions and replicated 
inheritance and wanted to know the true roots of the confusion. And I 
had to prove to the language designer that there's a serious problem at 
the heart of the issue, not just misunderstandings on my side.)

Now, where does one find a D type that is-an A type in several, 
fundamentally different ways?
I had a nagging feeling that there should be one, but all examples that 
I could come up with were hellishly contrived and wouldn't convince 
anybody but myself. Heck, I even started to believe I was chasing wild 
geese and that the semantics involved might be unsound, but that 
problems would never occur because all occurrences would be as contrived 
as the examples I was producing.

Until I hit groups and arithmetic. Assuming you're doing a very 
mathematical subtype relationship for integer, you see that the Integer 
type is a subtype of the (more abstract) Group type, but in two 
different ways: integers form both an additive and a multiplicative group.
In other words: if the abstract Group type defines operations like
   neutral:: -> Group
   op:: Group -> Group -> Group
then Integer has two neutral elements (namely "one" and "zero"), and it 
has two operations (namely + and *). The Integer type is also a Group 
type, but twice and in different ways. I don't even need the 
intermediate B and C types.
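A sketch of the usual workaround in Python (the language under discussion; the "Group" protocol here is invented for illustration, and inverses are omitted, so strictly these are monoid structures): since a type can't be a subtype of Group in two ways at once, each reading is reified as a separate dictionary of operations.

```python
# The integer type satisfies the same abstract "Group-like" interface
# twice, in different ways.  Each reading gets its own dictionary:
additive = {"neutral": 0, "op": lambda a, b: a + b}
multiplicative = {"neutral": 1, "op": lambda a, b: a * b}

def combine(group, values):
    """Fold values together using the given group structure."""
    acc = group["neutral"]
    for v in values:
        acc = group["op"](acc, v)
    return acc

print(combine(additive, [1, 2, 3, 4]))        # 10
print(combine(multiplicative, [1, 2, 3, 4]))  # 24
```

Each dictionary is one "interface" on the same underlying data - two distinct instantiations of one abstract type, without any diamond in the type hierarchy.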

The almost immediate second example that dawned on me was sort order. 
Even in commercial programming, which is very, very far from theoretical 
considerations like "diamond subtyping", most objects have several sort 
orders. Customers may be sorted by name, status, zip code, and a 
gazillion other properties, for example. Isn't a Customer record an 
Ordered item, exactly once for each sort order that applies to it?

True, in a functional context, sort orders can easily be defined in an ad 
hoc fashion, so this is less of a problem for functional languages than 
for object-oriented languages where you usually don't have a way of 
constructing and passing around ad-hoc sort functions... but a 
programmer in a functional language will have difficulties getting the 
sort function checked against the Ordered type, so the inability to cope 
with diamond subtyping misses type checking opportunities.
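Python, incidentally, supports exactly the ad-hoc style described: sort orders are first-class key functions, one per "Ordered" reading of the record. A minimal sketch (the Customer fields are invented for illustration):

```python
from dataclasses import dataclass
from operator import attrgetter

@dataclass
class Customer:
    name: str
    status: int
    zip_code: str

customers = [
    Customer("Zgoda", 1, "00950"),
    Customer("Adams", 2, "30332"),
]

# One piece of data, several sort orders: each key function is one
# ad-hoc instantiation of "Ordered" for Customer.
by_name = sorted(customers, key=attrgetter("name"))
by_status = sorted(customers, key=attrgetter("status"))

print([c.name for c in by_name])    # ['Adams', 'Zgoda']
print([c.name for c in by_status])  # ['Zgoda', 'Adams']
```

Note that nothing checks the key function against an Ordered interface - which is precisely the missed type-checking opportunity being complained about.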

The concept that really solves all this is the following: every piece of 
data has multiple interfaces. If you have a strict hierarchical subtype 
relationship, the interfaces form a monotonic series of subsets; 
however, in the case of diamond subtyping, the data has the same 
interface several times, for different aspects of the data.

In other words: each "type" is not a set but a bag of "interfaces".

>>>And in any event, the fact that it took a year still makes for
>>>a different situation than if it had taken ten minutes.
>>
>>Actually, I could reproduce hordes of problems in that area in a minute now.
>>It's been an a-ha experience that enabled me to see some things clearly
>>that were obscure (and hence difficult) before.
> 
> I meant for the first one. I get your point that once you've figured
> it out the first time, successive related problems are easy to
> generate.

That's what I wanted to say :-)

Regards,
Jo
From: Lex Spoon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3oew4af40.fsf@logrus.dnsalias.net>
Joachim Durchholz <·················@web.de> writes:

> Marshall Spight wrote:
>> Is not Godel's Theorem proof that there exist correct programs
>> which cannot be proven correct? I say this even though I
>> am still a strong advocate of static typing.
>
> One can translate his proof into this wording, yes.
> Not that real-life programs are of any interest. If you can't argue
> that the program is correct, you shouldn't release it for production
> use.

While this is true, do keep in mind that neither the reasoning nor the
spec needs to be something amenable to formal logic.  Formal logic is
powerful stuff, but it doesn't handle everything.

Consider specs like these:

     1. The AI is fun to play against.

     2. The transition is pleasant to look at.

     3. The interface is usable.

     4. The program does what I felt like it should do.
     

Even when you have a spec that can be formalized profitably, the
reasoning may not be simple at all.  Have you tried proving many
programs correct?


-Lex
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bna04d$4d0$1@news.oberberg.net>
Pascal Costanza wrote:

> Matthias Blume wrote:
> 
>> [Snip] I think what I am
>> asking here is fairly modest.
> 
> No, you are asking for more. You are asking for the proof to be 
> automatically executable.

A programmer who writes code that's too complicated for automated 
reasoning will, in 99% of all cases, have written code that's too 
complicated for others to understand. And, for that matter, it will also 
be too complicated for himself to understand.
(I have seen such code. Some of it was written by myself.)

In practice, whenever code is written, the programmer should always be 
able to explain why and how his code works. The reasoning used in such 
explanations is generally simple (unless the programmer just invented a 
new algorithm - something that isn't done very often nowadays). The 
reasoning is in fact so simple that even an automatic inference engine 
should be able to reproduce it without help from the programmer.
(How many of your loops and recursions go beyond iterating over a 
precomputed collection? Not many, I'd guess, unless you're routinely 
using some /very/ unusual patterns - and iterating over a collection 
would be easy enough to program as a heuristic into any theorem prover.)

Regards,
Jo
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnap2b$v7a$1@f1node01.rhrz.uni-bonn.de>
Joachim Durchholz wrote:
> Pascal Costanza wrote:
> 
>> Matthias Blume wrote:
>>
>>> [Snip] I think what I am
>>> asking here is fairly modest.
>>
>>
>> No, you are asking for more. You are asking for the proof to be 
>> automatically executable.
> 
> 
> A programmer who writes code that's too complicated for automated 
> reasoning will, in 99% of all cases, have written code that's too 
> complicated for others to understand. And, for that matter, it will also 
> be too complicated for himself to understand.
> (I have seen such code. Some of it was written by myself.)

99% is a statistical measure. Where do you get your numbers from?

> In practice, whenever code is written, the programmer should always be 
> able to explain why and how his code works.

That's not automated reasoning.

> The reasoning used in such 
> explanations is generally simple (unless the programmer just invented a 
> new algorithm - something that isn't done very often nowadays). 

What's your sample set?


And even if this indeed didn't happen very often, it would still happen. 
So there are situations in which automated reasoning can be a hindrance. 
That's all I am trying to say.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bndta6$7st$1@news.oberberg.net>
Pascal Costanza wrote:
> Joachim Durchholz wrote:
> 
>> Pascal Costanza wrote:
>>
>>> Matthias Blume wrote:
>>>
>>>> [Snip] I think what I am
>>>> asking here is fairly modest.
>>>
>>> No, you are asking for more. You are asking for the proof to be 
>>> automatically executable.
>>
>> A programmer who writes code that's too complicated for automated 
>> reasoning will, in 99% of all cases, have written code that's too 
>> complicated for others to understand. And, for that matter, it will 
>> also be too complicated for himself to understand.
>> (I have seen such code. Some of it was written by myself.)
> 
> 99% is a statistical measure. Where do you get your numbers from?

Please read the statement and don't nitpick the wording.

>> In practice, whenever code is written, the programmer should always be 
>> able to explain why and how his code works.
> 
> That's not automated reasoning.

It's just a small step away from formal reasoning.
Besides, in case you overlooked that: Matthias and I aren't advocating 
automated reasoning, we're advocating automated checking. That's a /far/ 
easier task, and (in my personal opinion) something that can be made to 
work even for average programmers if you're careful to keep the 
high-brow terminology out. (In other words: say "list" instead of 
"monad", say "compiler-checked source code annotations, similar to and 
encompassing type declarations" instead of "theorem checker", etc. etc.)

A good checker will still do a lot of inference on its own - it would 
take too much time to write down every elementary reasoning step. But 
type inference has shown that this can go a long way, with little 
compile-time overhead.

>> The reasoning used in such explanations is generally simple (unless 
>> the programmer just invented a new algorithm - something that isn't 
>> done very often nowadays). 
> 
> What's your sample set?

Don't you have one yourself?
Then shut up; not having such a sample set is a clear indication that 
you don't have enough experience to contribute useful observations.
If, on the other hand, you do have a sample set, then come forth with 
your personal observations; just attacking other people's data instead 
of presenting your own data (and, subsequently, making yourself 
attackable) is just an attempt at rhetorical outmaneuvering. And not the 
kind of discussion I'm willing to participate in.

> And even if this indeed didn't happen very often, it would still happen. 
> So there are situations in which automated reasoning can be a hindrance. 
> That's all I am trying to say.

And it's nonsense.
If automated reasoning doesn't help, work with the traditional "dynamic" 
(i.e. run-time) test methods.
All I'm trying to say is that static methods can take a lot of the work 
out of run-time testing. Making the static methods workable is something 
that was long disregarded; type inference is an important first step 
towards that goal, and I see a lot of things that can be done on top of 
that.
Dismissing static verification (in whatever form) just because it isn't 
applicable in every situation seems rather shortsighted to me.

Regards,
Jo
From: Don Geddis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87fzhi5t7x.fsf@sidious.geddis.org>
Joachim Durchholz <·················@web.de> writes:
> A programmer who writes code that's too complicated for automated reasoning
> will, in 99% of all cases, have written code that's too complicated for
> others to understand.

You're wildly over-optimistic about how competent automated reasoning is.
Human beings deal with complexity that is many, many orders of magnitude
greater than anything an automated prover could handle.

> And, for that matter, it will also be too complicated
> for himself to understand.

Humans are incredibly better than machines at understanding things.  From the
way you write, one would think that AI has already arrived.  Sadly, it hasn't.
Almost any software of interest is understood far better by human programmers
than by any automated methods.

> In practice, whenever code is written, the programmer should always be able
> to explain why and how his code works.

I don't object to this piece.  But that's got very little to do with whether
an automated reasoning engine could generate the proof (or even just check
it!).

> The reasoning used in such explanations is generally simple (unless the
> programmer just invented a new algorithm - something that isn't done very
> often nowadays).

You appear not to understand automated reasoning at all.  Whether an algorithm
is new or old has very little to do with whether it is hard to check with an
automated prover or not.

> The reasoning is in fact so simple that even an automatic inference engine
> should be able to reproduce it without help from the programmer.

You're so wrong about this, I don't know where to begin.  Let's just start
from the fact that the customer really cares about impact on the real world,
and often much of the concern is whether the specifications match the real
world problem.  Automated approaches only check internal consistency, and
really don't apply to verifying that the program does what the customer wants.

But even skipping that, no automated system comes anywhere near the reasoning
power of a human being.  Deductions that humans can make easily are far, far
beyond state-of-the-art inference engines.

So: a programmer may be able to explain to another programmer how and why a
certain piece of code works, without an automated system being able to prove
anything useful about that same piece of code.

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
Sheep haiku:
sheepskin seatcovers / winter warm and summer cool / little lambs no more
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bndud5$8bg$1@news.oberberg.net>
Don Geddis wrote:

> Joachim Durchholz <·················@web.de> writes:
> 
>>A programmer who writes code that's too complicated for automated reasoning
>>will, in 99% of all cases, have written code that's too complicated for
>>others to understand.
> 
> You're wildly over-optimistic about how competent automated reasoning is.
> Human beings deal with complexity that is many, many orders of magnitude
> greater than anything an automated prover could handle.

Then take a look at what type inference can do. The kind of inference 
done there is absolutely amazing (unless you take a look at how they 
work, which takes a lot of the amazement out *g*).

Or take a look at the code that's written by most programmers. (Most 
software is still written in the business area, where the tasks are not 
about inventing new algorithms but about sticking existing algorithms 
and modules together. The inference in these areas is usually just a 
simple series of modus ponens, i.e. "A->B and A implies we can assume 
B", and any automated prover worth its disk space can follow such a chain.)
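The "simple series of modus ponens" reading can be pictured with annotated glue code: each function is an implication between types, each call a modus ponens step, and a static checker only has to follow the chain. A hedged sketch (the business functions below are invented for illustration):

```python
# Each annotated function is an implication: OrderId -> Amount, etc.
# Composing them is a chain of modus ponens steps that a checker can
# follow mechanically - no inventive reasoning required.

def fetch_amount(order_id: int) -> float:   # OrderId -> Amount
    return order_id * 10.0

def apply_tax(amount: float) -> float:      # Amount -> Taxed
    return amount * 1.19

def format_invoice(total: float) -> str:    # Taxed -> Invoice
    return f"total: {total:.2f}"

def process(order_id: int) -> str:
    # A -> B, B -> C, C -> D, and A;  therefore D.
    return format_invoice(apply_tax(fetch_amount(order_id)))

print(process(7))  # total: 83.30
```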

>>And, for that matter, it will also be too complicated
>>for himself to understand.
> 
> Humans are incredibly better than machines at understanding things.  From the
> way you write, one would think that AI has already arrived.  Sadly, it hasn't.
> Almost any software of interest is understood far better by human programmers
> than by any automated methods.

You're asking for more than is needed.
I agree that AI is very far from understanding any program. Yet it's 
possible to have theorem checkers look at selected program properties 
and verify that they hold, or show the exact source code position that 
destroys some property.
Improving program property checkers is, of course, an area that can take 
up an infinite amount of research and development time since there's 
always room for improvement - but the available experience with type 
inference (and a little automated reasoning background from my 
university time) have led me to believe that the available techniques 
are already good enough to be massively helpful.

>>In practice, whenever code is written, the programmer should always be able
>>to explain why and how his code works.
> 
> I don't object to this piece.  But that's got very little to do with whether
> an automated reasoning engine could generate the proof (or even just check
> it!).

Annotate a program with loop invariants and their equivalent in 
recursive situations.
After that, an inference engine will be able to infer whatever property 
you want checked: loop invariants are enough to make the problem 
decidable. (In theory, this may not be enough since the reasoning might 
still take exponential time; type inference experience has shown that 
this is rarely if ever a practical problem.)
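A runtime sketch of the idea in Python (hedged: a real checker would discharge the invariant statically rather than assert it on every iteration): once the loop invariant is written down, the postcondition follows mechanically.

```python
def sum_to(n):
    """Sum 0..n, with its loop invariant made explicit."""
    acc, i = 0, 0
    while i <= n:
        # Invariant: acc == 0 + 1 + ... + (i - 1) == i*(i-1)//2.
        # Given this annotation, acc == n*(n+1)//2 at loop exit follows
        # by a purely mechanical argument - nothing left to invent.
        assert acc == i * (i - 1) // 2
        acc += i
        i += 1
    return acc

print(sum_to(100))  # 5050
```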

>>The reasoning used in such explanations is generally simple (unless the
>>programmer just invented a new algorithm - something that isn't done very
>>often nowadays).
> 
> You appear not to understand automated reasoning at all.  Whether an algorithm
> is new or old has very little to do with whether it is hard to check with an
> automated prover or not.

You didn't understand what I was trying to say.

Inference for code that just uses existing algorithms is rather 
straightforward. It's equivalent to constructive logic, which is 
actually decidable. (Actually, that was the grounds for my thesis, so I 
do have some knowledge about what I'm talking about here *g*)

Inferring the correctness of the quicksort algorithm proper, or any new 
algorithm in general that the inference algorithm doesn't know about or 
(equivalently) doesn't have the loop invariants for - /that/ is 
undecidable in general, and not something I'd advocate to automate.

>>The reasoning is in fact so simple that even an automatic inference engine
>>should be able to reproduce it without help from the programmer.
> 
> You're so wrong about this, I don't know where to begin.  Let's just start
> from the fact that the customer really cares about impact on the real world,
> and often much of the concern is whether the specifications match the real
> world problem.  Automated approaches only check internal consistency, and
> really don't apply to verifying that the program does what the customer wants.

Right. Selecting the properties to check can be a challenge.
However, currently, the situation is usually that you have a lot of 
well-defined properties that you'd like to check (such as a guarantee 
that it won't segfault) and that can't be checked other than by testing.

See, I'm much more modest than you made me out to be ;-)

> But even skipping that, no automated system comes anywhere near the reasoning
> power of a human being.  Deductions that humans can make easily are far, far
> beyond state-of-the-art inference engines.

Right.
My point is that most reasoning over software doesn't need the full 
power of the human mind. Most of any software is about moving the right 
data into the right position, and /that/ is really simple to prove.

> So: a programmer may be able to explain to another programmer how and why a
> certain piece of code works, without an automated system being able to prove
> anything useful about that same piece of code.

Unlikely.
Most code is very simple-minded, in my experience.
Code that isn't simple-minded is "clever", which (by my book) translates 
to "unmaintainable unless annotated with so many comments and 
explanations that writing a formal proof would have been less work".

Actually, the explanations that describe why some particularly clever 
code works already /are/ the proof. All that needs to be done is 
transforming these words into a series of assertions on program state at 
various key points in the code. (Any programmer who can write clever 
code should be able to write the assertions that make a less ingenious 
programmer understand why it works, right?)
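For instance (a hedged illustration, not from the thread): a "clever" bit trick whose explanatory comment is transcribed directly into an assertion on program state.

```python
def count_bits(n):
    """Count set bits using the 'clever' trick n & (n - 1)."""
    assert n >= 0
    count = 0
    while n:
        # Key fact (the "proof in words"): n & (n - 1) clears exactly
        # the lowest set bit of n, so each iteration removes one set
        # bit - hence the count is right and the loop terminates.
        stripped = n & (n - 1)
        assert bin(stripped).count("1") == bin(n).count("1") - 1
        n = stripped
        count += 1
    return count

print(count_bits(0b101101))  # 4
```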

Regards,
Jo
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <llr9e69w.fsf@comcast.net>
Joachim Durchholz <·················@web.de> writes:

> Unlikely.
> Most code is very simple-minded, in my experience.
> Code that isn't simple-minded is "clever", which (by my book)
> translates to "unmaintainable unless annotated with so many comments
> and explanations that writing a formal proof would have been less
> work".

(defun kernel (s i)
  "S must be a list of three elements.

   Element one is a boolean.
   Elements two and three are possibly empty lists.

   This function returns a new list of the same type,
   where the boolean has been negated and the third
   list has been extended by three elements.

   The second list will be extended by one element
   if the input boolean is true, otherwise it is
   left alone."
  (list (not (car s))
	(if (car s)
	    (cadr s)
	  (cons i (cadr s)))
	(cons 'y (cons i (cons 'z (caddr s))))))

(defconstant k0 '(t () (x)))

(defun mystery (list)
  "This function, when given a list, starts with
   K0 and applies KERNEL to it as many times as there are
   elements in LIST.

   The resulting 3-element list is examined.
   If the second element is NULL, this function returns,
   if the first element is T, we iterate on the second
   element, otherwise we iterate on the third." 
  (let ((result (reduce #'kernel list :initial-value k0)))
    (cond ((null (cadr result)))
	  ((car result) (mystery (cadr result)))
	  (t (mystery (caddr result))))))

Is this `clever'?  Will it type check?  Is it correct?
Can you prove it?
From: Brian McNamara!
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bne4pd$5g0$1@news-int.gatech.edu>
·············@comcast.net once said:
>(defun kernel (s i)
...
>(defun mystery (list)
...
>Is this `clever'?  Will it type check?  Is it correct?
>Can you prove it?

I wasn't following much of this portion of the sub-thread, but I can
answer "will it typecheck" with "yes".

Here's a C++ version, which uses the FC++ library (for lists and
"foldl") and the Boost library (for tuples).  I think I have transcribed
it from LISP correctly.  I added some instrumentation so that it
produces output.

----------------------------------------------------------------------
#include <iostream>
using std::cout; using std::endl; using std::ostream;

#include "prelude.hpp"
#include "stream_ops.hpp"
using namespace boost::fcpp;

#include "boost/tuple/tuple.hpp"
using boost::tuple; using boost::get;

typedef char Element;   // since the LISP was consing 'y and 'z
typedef list<Element> List;
typedef tuple<bool,List,List> Tuple;

// pretty-print tuples
ostream& operator<<( ostream& o, const Tuple& t ) {
   o << "<" << get<0>(t) << ","
            << get<1>(t) << ","
            << get<2>(t) << ">";
   return o;
}

Tuple kernel( Tuple s, Element i ) {
   cout << "Calling kernel with " << s << " and " << i << endl;
   using boost::fcpp::cons;
   List tmp;
   if( get<0>(s) )
      tmp = get<1>(s);
   else
      tmp = cons( i, get<1>(s) );
   return Tuple( !get<0>(s),
                 tmp,
                 cons( 'y', cons( i, cons( 'z', get<2>(s) ) ) ) );
}

Tuple k0( true, NIL, cons('x',NIL) );

void mystery( List l ) {
   cout << "Calling mystery with " << l << endl;
   Tuple result = foldl( ptr_to_fun(&kernel), k0, l );
   if( null( get<1>(result) ) )
      return;
   else if( get<0>(result) )
      mystery( get<1>(result) );
   else
      mystery( get<2>(result) );
}

int main() {
   List l = cons('a',cons('b',NIL));
   mystery(l);
}
----------------------------------------------------------------------

When I run that, I get

----------------------------------------------------------------------
Calling mystery with [a,b]
Calling kernel with <1,[],[x]> and a
Calling kernel with <0,[],[y,a,z,x]> and b
Calling mystery with [b]
Calling kernel with <1,[],[x]> and b
----------------------------------------------------------------------

-- 
 Brian M. McNamara   ······@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <r811cjke.fsf@comcast.net>
·······@prism.gatech.edu (Brian McNamara!) writes:

> ·············@comcast.net once said:
>>(defun kernel (s i)
> ...
>>(defun mystery (list)
> ...
>>Is this `clever'?  Will it type check?  Is it correct?
>>Can you prove it?
>
> I wasn't following much of this portion of the sub-thread, but I can
> answer "will it typecheck" with "yes".

Curious.  How does it determine the return type of MYSTERY without
knowing if MYSTERY returns?
From: Brian McNamara!
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bned5e$8g2$1@news-int.gatech.edu>
·············@comcast.net once said:
>·······@prism.gatech.edu (Brian McNamara!) writes:
>> I wasn't following much of this portion of the sub-thread, but I can
>> answer "will it typecheck" with "yes".
>
>Curious.  How does it determine the return type of MYSTERY without
>knowing if MYSTERY returns?

I just made mystery have return type "void" in the C++ code.  This was
based on looking at

   (defun mystery (list)
     (let ((result (reduce #'kernel list :initial-value k0)))
       (cond ((null (cadr result)))
             ((car result) (mystery (cadr result)))
             (t (mystery (caddr result))))))

and deciding that in each call to mystery, either
 - we return nothing (the first arm of the "cond")
or
 - we return whatever a recursive call returns (the 2nd/3rd arms)

Perhaps it should have instead returned a boolean; I don't know the
precise semantics of LISP's
   (cond (boolCondition))
versus
   (cond (boolCondition value))
In the first case, what is returned?


In any case, note that

   // Assume "Int" is an infinite-precision "int" type (BigNum)
   Int f( Int x ) {
      if( x == 0 )
         return 0;
      else
         return f(x+1);
   }

is well-typed despite the fact that inputs greater than zero cause the
function to go recursing off into infinity.  Indeed, the function type
is inferrable from its body.

-- 
 Brian M. McNamara   ······@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnek6j$i04$1@news.oberberg.net>
·············@comcast.net wrote:
> Joachim Durchholz <·················@web.de> writes:
> 
>>Unlikely.
>>Most code is very simple-minded, in my experience.
>>Code that isn't simple-minded is "clever", which (by my book)
>>translates to "unmaintainable unless annotated with so many comments
>>and explanations that writing a formal proof would have been less
>>work".
> 
> (defun kernel (s i) [...]
> 
> (defconstant k0 '(t () (x)))
> 
> (defun mystery (list) [...]
> 
> Is this `clever'?

I don't know. It's certainly convoluted, and it's unclear what it's 
intended to do.

 > Will it type check?

Given a good type system, most certainly yes.
It might require some rearranging - which may not be a bad thing after 
all, making things simple enough that a type checker can handle them will 
also make them more palatable for humans.

 > Is it correct?

This depends entirely on what it's intended to do, and I'm totally 
stymied about it.
(I have to admit that I didn't spend more than half a minute with it. If 
forced to guess at gun point, I'd say it's somehow related to the COND 
function by construction, but that's about all I can say.)

> Can you prove it?

Lacking a specification, of course not.

I could prove other properties: that it has no side effects, for 
example. (That might be interesting to a compiler, and the proof is 
easy: it doesn't use any functions with side effects, assuming that 
"reduce" is side-effect-free.)
Or that it doesn't use names from any lexical closure that it might be 
called from.
If such code were part of a program I was examining, I'd also ask for a 
termination proof, and for proofs about bounds in memory and time usage, 
all of this possibly even before I'm interested in a correctness proof 
(not all maintenance is about program errors, sometimes I might just be 
hunting a memory leak or a CPU hog and couldn't care less what the 
function does or is intended to do).

Regards,
Jo
From: Don Geddis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87brs452f1.fsf@sidious.geddis.org>
> > Joachim Durchholz <·················@web.de> writes:
> >>A programmer who writes code that's too complicated for automated reasoning
> >>will, in 99% of all cases, have written code that's too complicated for
> >>others to understand.

I wrote:
> > You're wildly over-optimistic about how competent automated reasoning is.
> > Human beings deal with complexity that is many, many orders of magnitude
> > greater than anything an automated prover could handle.

Joachim Durchholz <·················@web.de> writes:
> Then take a look at what type inference can do. The kind of inference done
> there is absolutely amazing (unless you take a look at how they work, which
> takes a lot of the amazement out *g*).

I'm not amazed by type inference.  It's a hugely simpler problem than logical
inference.

We were talking about proving programs correct.  That requires a whole lot more
than a mere type inference engine.

> Or take a look at the code that's written by most programmers. (Most software
> is still written in the business area, where the tasks are not about
> inventing new algorithms but about sticking existing algorithms and modules
> together. The inference in these areas is usually just a simple series of
> modus ponens, i.e. "A->B and A implies we can assume B", and any automated
> prover worth its disk space can follow such a chain.)

I don't know why you keep bringing up "new" vs. "existing" algorithms.  I don't
understand how you think that impacts complexity at all.

But in any case, I disagree strongly that typical code can be translated to
trivial logic.  Just as a quick example, running computer programs maintain
a lot of state, and computer operations modify that state.  Expressing state
change over time can be extremely subtle in logic.  The AI folks addressed this
as "the frame problem", and there are no easy solutions (in logic).

> I agree that AI is very far from understanding any program. Yet it's
> possible to have theorem checkers look at selected program properties and
> verify that they hold, or show the exact source code position that destroys
> some property.

Well of course this is trivially true.  If you restrict your "selected program
properties" to things that theorem provers can check, then they'll work just
fine.

Now you have to show that the tiny trivial subset of properties you're willing
to look at is an interesting one, and can solve any useful problem.  This is
a long, long way from talking about what the human programmer knows about his
code, and might be able to communicate with another human.

In particular, most informal proofs of program correctness would _not_ fall
into the set of things easily checkable by an automated prover.

> Improving program property checkers is, of course, an area that can take up
> an infinite amount of research and development time since there's always room
> for improvement - but the available experience with type inference (and a
> little automated reasoning background from my university time) have led me to
> believe that the available techniques are already good enough to be massively
> helpful.

The more ambitious you make their goals, the more work the programmer has to
do to hand-feed them the proof steps.  Whether this additional programmer
effort is worth the verified conclusions you wind up with is still very much
open to debate.  In this thread, we're already getting arguments about the
effort required to use simple type checkers.  Surely you agree that as you
want your automated prover to take on more tasks (like program correctness),
that even more programmers will find the additional effort not worth the
cost?

> Annotate a program with loop invariants and their equivalent in recursive
> situations.  After that, an inference engine will be able to infer whatever
> property you want checked: loop invariants are enough to make the problem
> decidable.

Hmm, let me see.  Let's say I write an inference engine, just to pick one
domain.  And it seems to work just fine.  This is one of the rare domains that
it's possible to describe exactly in a formal sense.

You honestly believe you could prove my inference engine correct (i.e. sound
& complete) using automated methods?  Just by writing down loop invariants?

Not a chance.

> Inference for code that just uses existing algorithms is rather
> straightforward. It's equivalent to constructive logic, which is actually
> decidable. (Actually, that was the grounds for my thesis, so I do have some
> knowledge about what I'm talking about here *g*)

I really have no idea what you're talking about.  Why do you think that
new vs. existing algorithms has anything to do with the complexity of the
problem?  Plenty of well-known algorithms are very, very complex.  And a new
algorithm might be very simple.

Given the halting problem, proving much of anything about algorithms in general
basically requires you to simulate their execution.  I.e., it isn't decidable.
I don't see how you think you can claim that "existing algorithms" are
equivalent to a decidable problem.

> Inferring the correctness of the quicksort algorithm proper, or any new
> algorithm in general that the inference algorithm doesn't know about or
> (equivalently) doesn't have the loop invariants for - /that/ is undecidable
> in general, and not something I'd advocate to automate.

Quicksort isn't a new algorithm.  Since quicksort is already an existing
algorithm, but you agree that automated inference on it is infeasible, I
no longer see how you're distinguishing between code that your approach will
work on, and code that it won't.  It seems that you're saying nothing more
interesting than "some code is simple enough that automated deduction could
prove it correct; other code is too hard."  That's probably true, but useless!

> Right. Selecting the properties to check can be a challenge.
> However, currently, the situation is usually that you have a lot of
> well-defined properties that you'd like to check (such as a guarantee that it
> won't segfault) and that can't be checked other than by testing.

Sometimes the language itself (like Lisp) can guarantee this for you for all
programs in the language.  So you don't need to prove it about each individual
program.

Surely that's a better approach than yours?  Simply choose a more appropriate
language.
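
The "more appropriate language" point can be made concrete: in a memory-safe
language, an out-of-bounds access raises a catchable error in every program,
by construction, rather than corrupting memory. A small Python illustration
(hypothetical helper):

```python
def safe_get(xs, i, default=None):
    # A memory-safe language turns an out-of-range index into an
    # exception instead of a read of arbitrary memory, so "it won't
    # segfault" needs no per-program proof.
    try:
        return xs[i]
    except IndexError:
        return default
```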

> My point is that most reasoning over software doesn't need the full power of
> the human mind. Most of any software is about moving the right data into the
> right position, and /that/ is really simple to prove.

I disagree with you.  I don't think that most of the interesting properties
about real software are easy to prove.

> Most code is very simple-minded, in my experience.
> Code that isn't simple-minded is "clever", which (by my book) translates to
> "unmaintainable unless annotated with so many comments and explanations that
> writing a formal proof would have been less work".

The gap between clear comments (even for complex algorithms) and formal
proofs (especially ones amenable to automated methods) is huge.  You're
completely mis-characterizing it to suggest that it's a similar amount of
effort.

> Actually, the explanations that describe why some particularly clever code
> works already /are/ the proof. All that needs to be done is transforming
> these words into a series of assertions on program state at various key
> points in the code.

"All that needs to be done..."  :-)

Most of the time, that transformation will be more effort than writing the
entire program was in the first place.

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
I realize that I'm generalizing here, but as is often the case when I
generalize, I don't care.  -- Dave Barry
From: Don Geddis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87llra5tlf.fsf@sidious.geddis.org>
> ···@famine.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
> > please share your amazing system that allows you to prove arbitrary
> > properties of your code, and to specify what "correct" means

Matthias Blume <····@my.address.elsewhere> writes:
> The system: mathematics in general, logic in particular.
> Correctness: Certain statements (depending on the problem domain) which
>    I want to hold true for my programs.

Surely you're joking.  Three obvious problems with your approach:

1. Logic only connects axioms to conclusions.  How do you get confidence that
   your axioms are correct in the first place?  How do you know whether your
   program specification actually matches the real world needs?

2. Logic is not decidable, so even in theory this approach doesn't solve all
   problems.

3. Even if logic might theoretically work, you've been asking for automated
   proofs.  The state of the art in automated theorem proving is far, far
   below the complexity of real-world software.  If you restrict the software
   you produce to code that can be proven using an inference engine, you'll
   simply be unable to produce much interesting code.

Yours is not a programming approach that will succeed for anyone who needs to
deliver real code to real customers, even if they were sympathetic to your
points.

> And in case you ask: No, this does not let me prove "arbitrary"
> properties.  But, obviously, the ones I can't prove I can't claim my
> code to possess.

So, as it turns out, you can't prove any properties of your code that your
customers care about.  Since your approach only applied to irrelevant trivial
details of the code, why is this argument taking place?  Who cares?

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
((12 + 144 + 20 + (3 * 4^1/2)) / 7) + (5 * 11) = 9^2 + 0
A Dozen, a Gross and a Score,
plus three times the square root of four,
divided by seven,
plus five times eleven,
equals nine squared and not a bit more.
From: Kenny Tilton
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <8DXlb.30373$pT1.23906@twister.nyc.rr.com>
Thomas F. Burdick wrote:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> 
>>Thomas Lindgren <···········@*****.***> writes:
>>
>>
>>>Matthias Blume <····@my.address.elsewhere> writes:
>>>
>>>
>>>>Every programmer who writes a program ought to have a proof that the
>>>>program is correct in her mind. (If not, fire her.)
>>>
>>>Don't forget to fire the specification writer afterwards. Then the
>>>requirements guy. Then the customer.
>>
>>Unfortunately, I am aware of "the Real World".  In any case, is this
>>really any excuse for shipping code of which we don't know will always
>>work, written by programmers who we didn't fire even though they
>>didn't know what they were doing, writing to specifications that were
>>inconsistent, driven by requirements that were unreasonable to begin
>>with, asked for by customers who were clueless?

<heh-heh> That is why we like Lisp. Makes all those things (a rather 
accurate description of my career in software development) manageable. 
It is easier to work with a slow-setting glue, and Lisp code is 
veritably non-setting. It just stops changing once it is right, tho it 
is ready to start changing again if the spec moves.

I guess I keep my job, because I always have a proof in mind: the code 
is correct when it stops changing.

> What a jackass! 

Down, Oakland! Down!

:)

-- 
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
  http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
From: Paul Wallich
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn98n9$859$1@reader1.panix.com>
Matthias Blume wrote:

> Thomas Lindgren <···········@*****.***> writes:
> 
> 
>>Matthias Blume <····@my.address.elsewhere> writes:
>>
>>
>>>Every programmer who writes a program ought to have a proof that the
>>>program is correct in her mind. (If not, fire her.)
>>
>>Don't forget to fire the specification writer afterwards. Then the
>>requirements guy. Then the customer.
> 
> 
> Unfortunately, I am aware of "the Real World".  In any case, is this
> really any excuse for shipping code of which we don't know will always
> work, written by programmers who we didn't fire even though they
> didn't know what they were doing, writing to specifications that were
> inconsistent, driven by requirements that were unreasonable to begin
> with, asked for by customers who were clueless?

Without a solid definition of "the program is correct" all of this is 
really posturing, and not even interesting posturing at that. Among the 
choices:

the program will do what the customer wanted (ha)
the program will do what the customer asked for (maybe)
the program will do what the req/spec people asked for
the program will conform to the written spec
the program will do what the programmer intended
the program will do what the programmer documented
the program will fail only in certain relatively harmless ways
and a bunch of others

feasible formal proofs apply only to some of those definitions and not 
even in any monotonic fashion

paul
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn9on7$994$1@newsreader2.netcologne.de>
Paul Wallich wrote:

> Without a solid definition of "the program is correct" all of this is 
> really posturing, and not even interesting posturing at that. Among the 
> choices:
> 
> the program will do what the customer wanted (ha)
> the program will do what the customer asked for (maybe)
> the program will do what the req/spec people asked for
> the program will conform to the written spec
> the program will do what the programmer intended
> the program will do what the programmer documented
> the program will fail only in certain relatively harmless ways
> and a bunch of others
> 
> feasible formal proofs apply only to some of those definitions and not 
> even in any monotonic fashion

All these "choices" have a common theme - you know beforehand what you 
want. (Apart from that, the assumption that a customer is always involved 
doesn't always hold, either.)

What about the following choice:

the program will support the customer in ways they didn't even dream about

How would you formalize that?


Pascal
From: Paul Wallich
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bna7bg$jfu$1@reader1.panix.com>
Pascal Costanza wrote:

> Paul Wallich wrote:
> 
>> Without a solid definition of "the program is correct" all of this is 
>> really posturing, and not even interesting posturing at that. Among 
>> the choices:
>>
>> the program will do what the customer wanted (ha)
>> the program will do what the customer asked for (maybe)
>> the program will do what the req/spec people asked for
>> the program will conform to the written spec
>> the program will do what the programmer intended
>> the program will do what the programmer documented
>> the program will fail only in certain relatively harmless ways
>> and a bunch of others
>>
>> feasible formal proofs apply only to some of those definitions and not 
>> even in any monotonic fashion
> 
> 
> All these "choices" have a common theme - you know beforehand what you 
> want. (Apart from that, the assumption that a customer is always involved 
> doesn't always hold, either.)
> 
> What about the following choice:
> 
> the program will support the customer in ways they didn't even dream about
> 
> How would you formalize that?

That's in the first one. The customer doesn't have to know they want it 
until they see it. (In fact, if you look at a lot of software currently
being sold or distributed, the customer doesn't know it does what they 
want until they have been convinced painfully and at great length. But 
perhaps that's not what you intended.)

In some ways it's easier to prove your version of correctness, because 
it doesn't require nearly as rigorous a semantics of what the customer
believes they want before the program is written...

Practically speaking, of course, I'm on the side of very limited 
definitions of "correct" that entail a clear understanding that 
"correct" is not always or even often a useful descriptor.

paul
From: Thomas Lindgren
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3oew6agxy.fsf@localhost.localdomain>
Matthias Blume <····@my.address.elsewhere> writes:

> Thomas Lindgren <···········@*****.***> writes:
> 
> > Matthias Blume <····@my.address.elsewhere> writes:
> > 
> > > Every programmer who writes a program ought to have a proof that the
> > > program is correct in her mind. (If not, fire her.)
> > 
> > Don't forget to fire the specification writer afterwards. Then the
> > requirements guy. Then the customer.
> 
> Unfortunately, I am aware of "the Real World".  In any case, is this
> really any excuse for shipping code of which we don't know will always
> work, written by programmers who we didn't fire even though they
> didn't know what they were doing, writing to specifications that were
> inconsistent, driven by requirements that were unreasonable to begin
> with, asked for by customers who were clueless?

Requirements and specifications can be 'reasonable' and 'consistent'
in an everyday sense of the word, yet not mathematical enough to
provide a basis for a correctness proof. Indeed, this is normally the
case. 

In the vast majority of cases, customers furthermore prefer a
deliverable that does what they want to one that does something
provably correct. I'd call that shrewd rather than clueless, actually.

In short, firing the programmer for not providing a correctness
proof doesn't seem very constructive.

Best,
                        Thomas
-- 
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin
 
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1wuaun0sg.fsf@tti5.uchicago.edu>
Thomas Lindgren <···········@*****.***> writes:

> In short, firing the programmer for not providing a correctness
> proof doesn't seem very constructive.

Read what I actually wrote.  I never suggested such a thing.

(I said: fire the programmer who doesn't have a correctness proof --
even just an informal one -- in her mind when she writes her code.  I
also believe that such a programmer does not exist.  That was actually
my point, but it seems to be completely lost on some participants in
this discussion.  I am amazed that this is even controversial at all.)

Matthias
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnbetp$p74$3@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:

> (I said: fire the programmer who doesn't have a correctness proof --
> even just an informal one -- in her mind when she writes her code.  I
> also believe that such a programmer does not exist.  That was actually
> my point, but it seems to be completely lost on some participants in
> this discussion.  I am amazed that this is even controversial at all.)

This is why we are having this discussion. As I said before, there are 
different programming styles.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8t14$p1o$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:
> ·············@comcast.net writes:
> 
> 
>>Matthias Blume <····@my.address.elsewhere> writes:
>>
>>
>>>Pascal Costanza <········@web.de> writes:
>>>
>>>
>>>>The set of programs that are useful but cannot be checked by a static
>>>>type system is by definition bigger than the set of useful programs
>>>>that can be statically checked.
>>>
>>>By whose definition?  What *is* your definition of "useful"?  It is
>>>clear to me that static typing improves maintainability, scalability,
>>>and helps with the overall design of software.  (At least that's my
>>>personal experience, and as others can attest, I do have reasonably
>>>extensive experience either way.)
>>
>>The opposing point is to assert that *no* program that cannot be
>>statically checked is useful.  Are you really asserting that?
> 
> 
> Actually, viewed from a certain angle, yes.  Every programmer who
> writes a program ought to have a proof that the program is correct in
> her mind.  (If not, fire her.)  It ought to be possible to formalize
> that proof and to statically check it.

You are thinking about a certain set of programs and a distinct 
programming style that seems to work well for you. Other people may 
prefer a different programming style and care about a different set of 
programs.

> (Now, I am not saying that current type systems that are in practical
> use let you do that.  But they go some of the way.)

Please inform me as soon as they go all the way, because I might 
reconsider my point of view then. Until then I use what works best for me.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1znfsne9e.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> > Actually, viewed from a certain angle, yes.  Every programmer who
> 
> > writes a program ought to have a proof that the program is correct in
> > her mind.  (If not, fire her.)  It ought to be possible to formalize
> > that proof and to statically check it.
> 
> You are thinking about a certain set of programs and a distinct
> programming style that seems to work well for you. Other people may
> prefer a different programming style and care about a different set of
> programs.

Do you mean that I talk about programs written by programmers who know
what they are doing while you are talking about a different set of
programs?

The problem really is that you often say "is correct" and "cannot be
statically checked" about the same set of problems.  But how can *you*
yourself possibly know the first to be true given that you think the
second is true?

Matthias
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn90hd$p26$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>>Actually, viewed from a certain angle, yes.  Every programmer who
>>
>>>writes a program ought to have a proof that the program is correct in
>>>her mind.  (If not, fire her.)  It ought to be possible to formalize
>>>that proof and to statically check it.
>>
>>You are thinking about a certain set of programs and a distinct
>>programming style that seems to work well for you. Other people may
>>prefer a different programming style and care about a different set of
>>programs.
> 
> Do you mean that I talk about programs written by programmers who know
> what they are doing while you are talking about a different set of
> programs?

The cool thing about dynamically typed languages is that you don't need 
to know what you are doing when you start to write a program. You gain 
an understanding of the problem you try to solve during development, by 
just trying things out and seeing if they work.

Of course, in the end I should have gained a fairly deep understanding 
of the problem, otherwise I have failed. But you seem to suggest that I 
shouldn't even start programming before I have gained a complete 
understanding. And in my view this is a waste of resources.

The process that you undergo when you try to figure out a problem 
consists of automatable and non-automatable elements. I prefer to let 
the automatable elements be executed by my computer from the very 
beginning. I see the computer as a tool that supports my reasoning 
process here.

At the end of the day, when I have finished my understanding process I 
have also come up with a solution to the problem as a working program. 
At that stage, if you want to be really sure that certain conditions are 
always met and can never be violated it makes sense to _add_ static 
checks that you can even tailor to the concrete problem you have already 
solved.

The difference between a specification and a program is that I can test 
the program. ;)
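The workflow described above - prototype first, then add tailored checks once 
the problem is understood - might look like this in Python (a made-up example, 
not from the thread):

```python
def average(xs):
    # First draft, written while still exploring the problem:
    # no checks at all, just the idea.
    return sum(xs) / len(xs)

def average_checked(xs):
    # Later, once the solution has settled, add the checks that
    # fit the problem actually solved.
    assert len(xs) > 0, "average of an empty sequence is undefined"
    assert all(isinstance(x, (int, float)) for x in xs)
    return sum(xs) / len(xs)
```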

> The problem really is that you often say "is correct" and "cannot be
> statically checked" about the same set of problems.  But how can *you*
> yourself possibly know the first to be true given that you think the
> second is true?

For example I cannot check whether my program will always be able to 
successfully connect to the internet or not. I can still know that my 
program is "correct". This is a very simple example but it is 
nonetheless one that illustrates that dynamic checking can be much 
better than static checking.

A similar situation arises when you don't know what code your program will 
actually run in a dynamically extensible system. Just another example.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Andreas Rossberg
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3F981373.3010606@ps.uni-sb.de>
Pascal Costanza wrote:
> 
> The cool thing about dynamically typed languages is that you don't need 
> to know what you are doing when you start to write a program. You gain 
> an understanding of the problem you try to solve during development, by 
> just trying things out and seeing if they work.
> 
> Of course, in the end I should have gained a fairly deep understanding 
> of the problem, otherwise I have failed. But you seem to suggest that I 
> shouldn't even start programming before I have gained a complete 
> understanding. And in my view this is a waste of resources.

Even if you prefer this approach to programming - which definitely is 
not suitable for all problem domains - a type system can be very useful 
guidance for gaining understanding, so it might actually save resources.

	- Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
  as kids, we would all be running around in darkened rooms, munching
  magic pills, and listening to repetitive electronic music."
  - Kristian Wilson, Nintendo Inc.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn9og9$8p3$2@newsreader2.netcologne.de>
Andreas Rossberg wrote:

> Pascal Costanza wrote:
> 
>>
>> The cool thing about dynamically typed languages is that you don't 
>> need to know what you are doing when you start to write a program. You 
>> gain an understanding of the problem you try to solve during 
>> development, by just trying things out and seeing if they work.
>>
>> Of course, in the end I should have gained a fairly deep understanding 
>> of the problem, otherwise I have failed. But you seem to suggest that 
>> I shouldn't even start programming before I have gained a complete 
>> understanding. And in my view this is a waste of resources.
> 
> 
> Even if you prefer this approach to programming - which definitely is 
> not suitable for all problem domains - a type system can be very useful 
> guidance for gaining understanding, so it might actually save resources.

Yes, _can_ be.

In my case, I feel distracted by a tool that complains about things that 
are not relevant to my flow of thinking. I have experienced this very 
often. If you feel that a static type system supports your flow of 
thinking that's great for you. Just go ahead and use it to your advantage.


Pascal
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn9vp9$46h$2@news.oberberg.net>
Pascal Costanza wrote:

> In my case, I feel distracted by a tool that complains about things that 
> are not relevant to my flow of thinking. I have experienced this very 
> often.

Which tools were that?

Regards,
Jo
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <MVdmb.18789$Tr4.39684@attbi_s03>
"Pascal Costanza" <········@web.de> wrote in message ·················@newsreader2.netcologne.de...
>
> In my case, I feel distracted by a tool that complains about things that
> are not relevant to my flow of thinking. I have experienced this very
> often. If you feel that a static type system supports your flow of
> thinking that's great for you. Just go ahead and use it to your advantage.

With respect, this suggests to me that you have not learned
how to effectively use static typing.

Which I find interesting, since I never really thought of
static typing as something that one needs to learn how
to use. Hmmm...


Marshall
From: Lex Spoon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3ad7qow2x.fsf@logrus.dnsalias.net>
"Marshall Spight" <·······@dnai.com> writes:

> "Pascal Costanza" <········@web.de> wrote in message ·················@newsreader2.netcologne.de...
>>
>> In my case, I feel distracted by a tool that complains about things that
>> are not relevant to my flow of thinking. I have experienced this very
>> often. If you feel that a static type system supports your flow of
>> thinking that's great for you. Just go ahead and use it to your advantage.
>
> With respect, this suggests to me that you have not learned
> how to effectively use static typing.
>
> Which I find interesting, since I never really thought of
> static typing as something that one needs to learn how
> to use. Hmmm...
>


I've had such troubles with SML's module system.  I've been killed by
the eqtype-ness of types, for example, to the point that I simply didn't
use eqtypes any longer. I also had trouble trying to implement a
compiler that could use different backends by changing the contents of
one file, e.g. "structure Backend = MIPSBackend".  I got type errors
everywhere even though the code was simple and obviously correct.  I
could have worked around all this by using explicit HOFs, but it
should have been cake for the module system.  Actually, I'm sure it
is cake, if you know the module system well.  But the point is that
the static type checker didn't understand what I was doing and I
didn't know how to change things to placate the type checker.

The HOL people have also had trouble with ML modules.  They very much
want a thm type that is private to *two* modules: the central Thm
module, and a separate module that loads and saves theorems to disk.
They have clearly put a lot of thought into how to work around this.

Aside from modules, I only occasionally bump into a problem and I
can't remember being unable to work around something.  My main
dissatisfaction was wanting to be able to slide in a whole subclass as
a parameter to an existing module; it would require a rewrite to the
module to allow the extra flexibility, which is very odd to someone
used to OO languages.  Maybe O'Caml fixes this up.

In summary, static types definitely add some overhead for new and
semi-new programmers.  There's a tradeoff between features gained and
features lost, as well as between cognitive processes automated and
those that are forced to be manual.  Further, on the last issue in
particular, I have always had the intuition that some people are very
good at extracting extra guarantees from the type system and others
just trudge along.  A static type checker does much more, I've always
thought, in some people's hands than in others'.


-Lex
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <BBqmb.22832$Fm2.11187@attbi_s04>
"Lex Spoon" <···@cc.gatech.edu> wrote in message ···················@logrus.dnsalias.net...
> A static type checker does much more, I've always
> thought, in some people's hands than in others'.

Yeah, I think that's my take-away from this thread.
Static type systems have always felt like my first
best tool for correctness, but perhaps that's just
me, and others like me.


Marshall
From: Russell Wallace
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9c9f9e.53676165@news.eircom.net>
On Sat, 25 Oct 2003 08:35:45 GMT, "Marshall Spight" <·······@dnai.com>
wrote:

>Yeah, I think that's my take-away from this thread.
>Static type systems have always felt like my first
>best tool for correctness, but perhaps that's just
>me, and others like me.

I rather like having static type checking, not so much for correctness
(I find the bugs it picks up show up on even cursory testing) but for
refactoring; change one thing, let the compiler show me the other
places my code needs to change to accommodate that.

But, sigh, this advantage evaporates to the extent my type
declarations become "I don't know" :P

-- 
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
http://www.esatclear.ie/~rwallace
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <egismbnnea.fsf@sefirot.ii.uib.no>
················@eircom.net (Russell Wallace) writes:

> But, sigh, this advantage evaporates to the extent my type
> declarations become "I don't know" :P

Sounds like you would want type inference, then?

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Russell Wallace
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9d2067.86649257@news.eircom.net>
On 27 Oct 2003 08:35:41 +0100, ··········@ii.uib.no wrote:

>················@eircom.net (Russell Wallace) writes:
>
>> But, sigh, this advantage evaporates to the extent my type
>> declarations become "I don't know" :P
>
>Sounds like you would want type inference, then?

Type inference is a worthwhile invention, and I'm surprised more of
the people who use static typing aren't demanding it, but it doesn't
solve my problem, which is that there are a lot of places in the
program where types _really can be_ any of several possibilities
depending on what else is going on.

Dynamic typing is like garbage collection; the advantage of GC isn't
that it lets you eliminate the free() calls from your code, it's that
it lets you program in a style in which there wouldn't be anywhere to
put them in the first place.

(Mind you, _partial_ type inference (infer and check types where
determinable, keep quiet about it where not) would be a lovely thing
to have, and I'd happily buy a product that implemented them
properly.)
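
Something in the spirit of "check where declared, keep quiet where not" can
even be emulated in a dynamically typed language; a hypothetical Python
decorator (using function annotations) that enforces only the declarations
actually present:

```python
import inspect

def checked(fn):
    # Enforce only the parameter annotations that are present;
    # unannotated parameters are simply left alone.
    sig = inspect.signature(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            ann = sig.parameters[name].annotation
            if ann is not inspect.Parameter.empty:
                assert isinstance(value, ann), name
        return fn(*args, **kwargs)
    return wrapper

@checked
def scale(x: int, factor):  # 'factor' deliberately carries no declaration
    return x * factor
```

The checked parameter fails loudly on a bad argument, while the undeclared
one stays as flexible as before.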

-- 
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
http://www.esatclear.ie/~rwallace
From: Nikodemus Siivola
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnjeim$k9e$1@nyytiset.pp.htv.fi>
In comp.lang.lisp Russell Wallace <················@eircom.net> wrote:

> (Mind you, _partial_ type inference (infer and check types where
> determinable, keep quiet about it where not) would be a lovely thing
> to have, and I'd happily buy a product that implemented them
> properly.)

Well, then you can help yourself to a Common Lisp of your liking. ;)

I'm sure CMUCL and SBCL developers are happy to accept donations from
you, but if you insist on buying instead of supporting, there's always
SCL.

Others (ACL, MCL, OpenMCL, LispWorks, etc) may do inference as well,
but I wouldn't know.

Cheers,

 -- Nikodemus
From: Russell Wallace
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9d62f9.103694409@news.eircom.net>
On Mon, 27 Oct 2003 15:45:27 +0000 (UTC), Nikodemus Siivola
<······@random-state.net> wrote:

>In comp.lang.lisp Russell Wallace <················@eircom.net> wrote:
>
>> (Mind you, _partial_ type inference (infer and check types where
>> determinable, keep quiet about it where not) would be a lovely thing
>> to have, and I'd happily buy a product that implemented them
>> properly.)
>
>Well, then you can help yourself to a Common Lisp of your liking. ;)

Yes indeed, I think Common Lisp gets it more or less right.

As it happens, my current plan is to use Scheme for my next project,
since it also gets it close enough to right and Scheme has
implementations that target the Java platform whereas Common Lisp
doesn't; but if and when I start another Windows project, I'll likely
go with Corman Lisp.

-- 
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
http://www.esatclear.ie/~rwallace
From: Edi Weitz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87brs2edcq.fsf@bird.agharta.de>
On Mon, 27 Oct 2003 18:28:32 GMT, ················@eircom.net (Russell Wallace) wrote:

> As it happens, my current plan is to use Scheme for my next project,
> since it also gets it close enough to right and Scheme has
> implementations that target the Java platform whereas Common Lisp
> doesn't

Check out "Armed Bear Lisp" <http://www.cliki.net/Armed%20Bear%20Lisp>
for a Common Lisp that runs on a JVM.

Edi.
From: Russell Wallace
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9d8509.112415677@news.eircom.net>
On 27 Oct 2003 19:37:25 +0100, Edi Weitz <···@agharta.de> wrote:

>Check out "Armed Bear Lisp" <http://www.cliki.net/Armed%20Bear%20Lisp>
>for a Common Lisp that runs on a JVM.

Thanks!

-- 
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
http://www.esatclear.ie/~rwallace
From: Ed Avis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <l1brrwe4ps.fsf@budvar.future-i.net>
················@eircom.net (Russell Wallace) writes:

>Mind you, _partial_ type inference (infer and check types where
>determinable, keep quiet about it where not) would be a lovely thing
>to have,

This is exactly what Hindley-Milner typing does.

-- 
Ed Avis <··@membled.com>
From: Simon Helsen
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <Pine.SOL.4.44.0311011515360.8965-100000@crete.uwaterloo.ca>
On 1 Nov 2003, Ed Avis wrote:

>Date: 01 Nov 2003 10:57:35 +0000
>From: Ed Avis <··@membled.com>
>Newsgroups: comp.lang.lisp, comp.lang.functional
>Subject: Re: Python from Wise Guy's Viewpoint
>
>················@eircom.net (Russell Wallace) writes:
>
>>Mind you, _partial_ type inference (infer and check types where
>>determinable, keep quiet about it where not) would be a lovely thing
>>to have,
>
>This is exactly what Hindley-Milner typing does.

no, of course not. Soft Typing does that. In the following paper done for
Scheme:

http://citeseer.nj.nec.com/9622.html

	Simon
From: Darius
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031101160555.00004110.ddarius@hotpop.com>
On Sat, 1 Nov 2003 15:16:44 -0500
Simon Helsen <·······@computer.org> wrote:

> On 1 Nov 2003, Ed Avis wrote:
> 
> >Date: 01 Nov 2003 10:57:35 +0000
> >From: Ed Avis <··@membled.com>
> >Newsgroups: comp.lang.lisp, comp.lang.functional
> >Subject: Re: Python from Wise Guy's Viewpoint
> 
> >················@eircom.net (Russell Wallace) writes:
> 
> >>Mind you, _partial_ type inference (infer and check types where
> >>determinable, keep quiet about it where not) would be a lovely thing
> >>to have,
> 
> >This is exactly what Hindley-Milner typing does.
> 
> no, of course not. Soft Typing does that. In the following paper done
> for Scheme:
> 
> http://citeseer.nj.nec.com/9622.html
> 
> 	Simon

I take your soft typing and raise you Complete Type Inference:
http://citeseer.nj.nec.com/widera01sketch.html
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <znffn5up.fsf@comcast.net>
Darius <·······@hotpop.com> writes:

> On Sat, 1 Nov 2003 15:16:44 -0500
> Simon Helsen <·······@computer.org> wrote:
>
>> On 1 Nov 2003, Ed Avis wrote:
>> 
>> >Date: 01 Nov 2003 10:57:35 +0000
>> >From: Ed Avis <··@membled.com>
>> >Newsgroups: comp.lang.lisp, comp.lang.functional
>> >Subject: Re: Python from Wise Guy's Viewpoint
>> 
>> >················@eircom.net (Russell Wallace) writes:
>> 
>> >>Mind you, _partial_ type inference (infer and check types where
>> >>determinable, keep quiet about it where not) would be a lovely thing
>> >>to have,
>> 
>> >This is exactly what Hindley-Milner typing does.
>> 
>> no, of course not. Soft Typing does that. In the following paper done
>> for Scheme:
>> 
>> http://citeseer.nj.nec.com/9622.html
>> 
>> 	Simon
>
> I take your soft typing and raise you Complete Type Inference:
> http://citeseer.nj.nec.com/widera01sketch.html

I like it.  It complains about programs that will provably throw an
exception no matter what the input, but it won't reject a program that
it cannot prove incorrect.  Of course, it isn't a `sound' type system.
From: Russell Wallace
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9c99be.52171072@news.eircom.net>
On 23 Oct 2003 10:51:41 -0500, Matthias Blume
<····@my.address.elsewhere> wrote:

>The problem really is that you often say "is correct" and "cannot be
>statically checked" about the same set of problems.  But how can *you*
>yourself possibly know the first to be true given that you think the
>second is true?

Personally I find that as the requirements for my programs get more
complex, so increases the percentage of places in the code where the
correct type declaration would be "It could be any of the following,
depending...". I then have to either greenspun my own dynamic type
system or just use a dynamically typed language. At this point I can't
really think of any program anyone would pay me to write for which the
latter wouldn't be the best solution.

Of course, the relationship between the various clauses in the
"depending..." could in principle be statically checked - but that
would be a job for a theorem prover, not a type checker. (If anyone
knows of a theorem prover that can read a wodge of Scheme code and
start proving useful theorems about it, let me know! ^.^)

-- 
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
http://www.esatclear.ie/~rwallace
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9f84fd$1@news.unimelb.edu.au>
················@eircom.net (Russell Wallace) writes:

>Personally I find that as the requirements for my programs get more
>complex, so increases the percentage of places in the code where the
>correct type declaration would be "It could be any of the following,
>depending...".

So use a discriminated union type, or abstract it away using a type class
(in Haskell/Clean/Mercury -- in other languages use similar features,
e.g. interfaces in Java).

>I then have to either greenspun my own dynamic type
>system or just use a dynamically typed language.

That seems a bit extreme.  What was the difficulty with just
using a discriminated union type or a type class?

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Russell Wallace
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9ff63d.114792195@news.eircom.net>
On Wed, 29 Oct 2003 09:14:45 GMT, Fergus Henderson <···@cs.mu.oz.au>
wrote:

>················@eircom.net (Russell Wallace) writes:
>
>>Personally I find that as the requirements for my programs get more
>>complex, so increases the percentage of places in the code where the
>>correct type declaration would be "It could be any of the following,
>>depending...".
>
>So use a discriminated union type, or abstract it away using a type class
>(in Haskell/Clean/Mercury -- in other languages use similar features,
>e.g. interfaces in Java).

Yes, that's what I've done in my current project (a discriminated
union type in C++).

>>I then have to either greenspun my own dynamic type
>>system or just use a dynamically typed language.
>
>That seems a bit extreme.  What was the difficulty with just
>using a discriminated union type or a type class?

Well, that's what I meant - basically I've ended up using my
discriminated union type so extensively, I've come to the conclusion
it would have simplified matters overall to have just used a language
based around dynamic typing in the first place.

-- 
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
http://www.esatclear.ie/~rwallace
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <egn0bjw3o2.fsf@vipe.ii.uib.no>
················@eircom.net (Russell Wallace) writes:

> On Wed, 29 Oct 2003 09:14:45 GMT, Fergus Henderson <···@cs.mu.oz.au>
> wrote:
> 
>> ················@eircom.net (Russell Wallace) writes:

>>> Personally I find that as the requirements for my programs get more
>>> complex, so increases the percentage of places in the code where the
>>> correct type declaration would be "It could be any of the following,
>>> depending...".

>> So use a discriminated union type, or abstract it away using a type class
>> (in Haskell/Clean/Mercury -- in other languages use similar features,
>> e.g. interfaces in Java).

> Yes, that's what I've done in my current project (a discriminated
> union type in C++).

>> That seems a bit extreme.  What was the difficulty with just
>> using a discriminated union type or a type class?

AFAICS, a function that can take "any of the following types,
depending..." will either look like

(not valid code, but should give the idea)

  foo x = ... 
        case (typeof x) of Integer -> bar x y
                           String  -> zot x z

or

  foo x = ... f x ... g x ..

where f and g must also be able to handle any of the types that x can
have.   While it's not clear cut, the former maps most closely to a
discriminated union type (you need to list the types explicitly
anyway), the latter to a type class solution. 

I.e. (hopefully valid Haskell code)

   data T = I Integer | S String

   foo x = ...
        case x of I i -> bar i y
                  S s -> zot s z

and

   class C a where
        f :: ...
        g :: ...

   instance C Integer where
        f i = ...
        g i = ...

   instance C String where
        f s = ...
        g s = ...

   foo x = ... f x ... g x ...

The function will now accept any type of class C, and you can add any
type to this class by specifying how f and g works for it.  The type
inference will also often work out the "depending..." part.
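For concreteness, here is a complete, compilable rendering of the two sketches, with an illustrative function "describe" standing in for the elided bodies of f and g (the names and bodies are mine, not from the sketches above):

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- Approach 1: a discriminated union (algebraic data type).
-- The set of admissible types is listed once, in the data declaration.
data T = I Integer | S String

foo :: T -> String
foo x = case x of
          I i -> "integer: " ++ show i
          S s -> "string: " ++ s

-- Approach 2: a type class.  Any type can later join class C by
-- supplying its own definition of describe; no central list needed.
class C a where
  describe :: a -> String

instance C Integer where
  describe i = "integer: " ++ show i

instance C String where
  describe s = "string: " ++ s

main :: IO ()
main = do
  putStrLn (foo (I 42))        -- "integer: 42"
  putStrLn (describe "hello")  -- "string: hello"
```

Note the trade-off: the union type is closed (adding a variant means touching the data declaration and every case expression), while the class is open to extension but fixes the set of operations up front.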

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn84cn$bor$1@newsreader2.netcologne.de>
Matthias Blume wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>The set of programs that are useful but cannot be checked by a static
>>type system is by definition bigger than the set of useful programs
>>that can be statically checked.
> 
> 
> By whose definition?  What *is* your definition of "useful"?  It is
> clear to me that static typing improves maintainability, scalability,
> and helps with the overall design of software.  (At least that's my
> personal experience, and as others can attest, I do have reasonably
> extensive experience either way.)
> 
> A 100,000 line program in an untyped language is useless to me if I am
> trying to make modifications -- unless it is written in a highly
> stylized way which is extensively documented (and which usually means
> that you could have captured this style in static types).  So under
> this definition of "useful" it may very well be that there are fewer
> programs which are useful under dynamic typing than there are under
> (modern) static typing.

A statically typed program is useless if one tries to make modifications 
_at runtime_. There are software systems out there that make use of 
dynamic modifications, and they have a strong advantage in specific 
areas because of this.

If you can come up with a static type system for an unrestricted runtime 
metaobject protocol, then I am fine with static typing.

>>So dynamically typed languages allow
>>me to express more useful programs than statically typed languages.
> 
> 
> There are also programs which I cannot express at all in a purely
> dynamically typed language.  (By "program" I mean not only the executable
> code itself but also the things that I know about this code.)
> Those are the programs which are protected against certain bad things
> from happening without having to do dynamic tests to that effect
> themselves.

This is a circular argument. You are already suggesting the solution in 
your problem description.

> (Some of these "bad things" are, in fact, not dynamically
> testable at all.)

For example?

>>I don't question that. If this works well for you, keep it up. ;)
> 
> 
> Don't fear.  I will.
> 
> 

...and BTW, please let me keep up using dynamically typed languages, 
because this works well for me!

(That's the whole of my answer to the original question, why one would 
want to give up static typing.)

>>>(And where are _your_ empirical studies which show that "working around
>>>language restrictions increases the potential for bugs"?)
>>
>>I don't need a study for that statement because it's a simple
>>argument: if the language doesn't allow me to express something in a
>>direct way, but requires me to write considerably more code then I
>>have considerably more opportunities for making mistakes.
> 
> 
> This assumes that there is a monotone function which maps token count
> to error-proneness and that the latter depends on nothing else.  This
> is a highly dubious assumption.  In many cases the few extra tokens
> you write are exactly the ones that let the compiler verify that your
> thinking process was accurate (to the degree that this fact is
> captured by types).  If you get them wrong *or* if you got the
> original code wrong, then the compiler can tell you.  Without the
> extra tokens, the compiler is helpless in this regard.

See the example of downcasts in Java.

> To make a (not so far-fetched, btw :) analogy: Consider logical
> statements and formal proofs. Making a logical statement is easy and
> can be very short.  It is also easy to make mistakes without noticing;
> after all saying something that is false while still believing it to
> be true is extremely easy.  Just by looking at the statement it is
> also often hard to tell whether the statement is right.  In fact,
> computers have a hard time with this task, too.  Theorem-proving is
> hard.
> On the other hand, writing down the statement with a formal proof is
> impossible to get wrong without anyone noticing because checking the
> proof for validity is trivial compared to coming up with it in the
> first place.  So even though writing the statement with a proof seems
> harder, once you have done it and it passes the proof checker you can
> rest assured that you got it right.  The longer "program" will have fewer
> "bugs" on average.

Yes, but then you have a proof that is tailored to the statement you 
have made. The claim of people who favor static type systems is that 
static type systems are _generally_ helpful.


Pascal
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m18yncot3j.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> > There are also programs which I cannot express at all in a purely
> 
> > dynamically typed language.  (By "program" I mean not only the executable
> > code itself but also the things that I know about this code.)
> > Those are the programs which are protected against certain bad things
> > from happening without having to do dynamic tests to that effect
> > themselves.
> 
> This is a circular argument. You are already suggesting the solution
> in your problem description.

Is it?  Am I?  Is it too much to ask to know that the invariants that
my code relies on will, in fact, hold when it gets to execute?
Actually, if you think that this problem description already contains
the solution which is static typing, then we are basically on the same
page here.

> ...and BTW, please let me keep up using dynamically typed languages,
> because this works well for me!

Since I have no power over what you do, I am forced to grant you this
wish.  (Lucky you!)

> See the example of downcasts in Java.

You had to dig out the poorest example you could think of, didn't you?
Make a note of it: When I talk about the power of static typing, I am
*not* thinking of Java!

> > To make a (not so far-fetched, btw :) analogy: Consider logical
> > statements and formal proofs. Making a logical statement is easy and
> > can be very short.  It is also easy to make mistakes without noticing;
> > after all saying something that is false while still believing it to
> > be true is extremely easy.  Just by looking at the statement it is
> > also often hard to tell whether the statement is right.  In fact,
> > computers have a hard time with this task, too.  Theorem-proving is
> > hard.
> > On the other hand, writing down the statement with a formal proof is
> > impossible to get wrong without anyone noticing because checking the
> > proof for validity is trivial compared to coming up with it in the
> > first place.  So even though writing the statement with a proof seems
> > harder, once you have done it and it passes the proof checker you can
> > rest assured that you got it right.  The longer "program" will have fewer
> > "bugs" on average.
> 
> Yes, but then you have a proof that is tailored to the statement you
> have made. The claim of people who favor static type systems is that
> static type systems are _generally_ helpful.

I am not sure you "got" it: Yes, the proof is tailored to the
statement (how else could it be?!), but the axioms and rules of its
underlying proof system are not.  Just like not every program has the
same type even though the type system is fixed.

Matthias
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8tv1$p1s$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>>There are also programs which I cannot express at all in a purely
>>
>>>dynamically typed language.  (By "program" I mean not only the executable
>>>code itself but also the things that I know about this code.)
>>>Those are the programs which are protected against certain bad things
>>>from happening without having to do dynamic tests to that effect
>>>themselves.
>>
>>This is a circular argument. You are already suggesting the solution
>>in your problem description.
> 
> 
> Is it?  Am I?  Is it too much to ask to know that the invariants that
> my code relies on will, in fact, hold when it gets to execute?

Yes, because the need might arise to change the invariants at runtime, 
and you might not want to stop the program and restart it in order just 
to change it.

> Actually, if you think that this problem description already contains
> the solution which is static typing, then we are basically on the same
> page here.
> 
> 
>>...and BTW, please let me keep up using dynamically typed languages,
>>because this works well for me!
> 
> 
> Since I have no power over what you do, I am forced to grant you this
> wish.  (Lucky you!)

:-)

>>See the example of downcasts in Java.
> 
> 
> You had to dig out the poorest example you could think of, didn't you?
> Make a note of it: When I talk about the power of static typing, I am
> *not* thinking of Java!

OK, sorry, this was my mistake. I have picked this example because it 
has been mentioned in another branch of this thread.

>>>To make a (not so far-fetched, btw :) analogy: Consider logical
>>>statements and formal proofs. Making a logical statement is easy and
>>>can be very short.  It is also easy to make mistakes without noticing;
>>>after all saying something that is false while still believing it to
>>>be true is extremely easy.  Just by looking at the statement it is
>>>also often hard to tell whether the statement is right.  In fact,
>>>computers have a hard time with this task, too.  Theorem-proving is
>>>hard.
>>>On the other hand, writing down the statement with a formal proof is
>>>impossible to get wrong without anyone noticing because checking the
>>>proof for validity is trivial compared to coming up with it in the
>>>first place.  So even though writing the statement with a proof seems
>>>harder, once you have done it and it passes the proof checker you can
>>>rest assured that you got it right.  The longer "program" will have fewer
>>>"bugs" on average.
>>
>>Yes, but then you have a proof that is tailored to the statement you
>>have made. The claim of people who favor static type systems is that
>>static type systems are _generally_ helpful.
> 
> 
> I am not sure you "got" it: Yes, the proof is tailored to the
> statement (how else could it be?!), but the axioms and rules of its
> underlying proof system are not.  Just like not every program has the
> same type even though the type system is fixed.

Yes, but you have much more freedom when you write an arbitrary proof 
than when you need to make a type system happy.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn9vkq$46h$1@news.oberberg.net>
Pascal Costanza wrote:

> Matthias Blume wrote:
> 
>> Pascal Costanza <········@web.de> writes:
>>
>>
>>>> There are also programs which I cannot express at all in a purely
>>>> dynamically typed language.  (By "program" I mean not only the 
>>>> executable
>>>> code itself but also the things that I know about this code.)
>>>> Those are the programs which are protected against certain bad things
>>>> from happening without having to do dynamic tests to that effect
>>>> themselves.
>>>
>>> This is a circular argument. You are already suggesting the solution
>>> in your problem description.
>>
>> Is it?  Am I?  Is it too much to ask to know that the invariants that
>> my code relies on will, in fact, hold when it gets to execute?
> 
> Yes, because the need might arise to change the invariants at runtime, 
> and you might not want to stop the program and restart it in order just 
> to change it.

Then it's not an invariant.
Or the invariant is something like "foo implies invariant_1 and not foo 
implies invariant_2", where "foo" is the condition that changes over the 
lifetime of the object.

Invariants are, by definition, the properties of an object that will 
always hold.


Or are you talking about system evolution and maintenance?
That would be an entirely new aspect in the discussion, and you should 
properly forewarn us so that we know for sure what you're talking about.


Regards,
Jo
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnaoka$v78$1@f1node01.rhrz.uni-bonn.de>
Joachim Durchholz wrote:

> Or are you talking about system evolution and maintenance?
> That would be an entirely new aspect in the discussion, and you should 
> properly forewarn us so that we know for sure what you're talking about.

Did I forget to mention this in the specifications? Sorry. ;)

Yes, I want my software to be adaptable to unexpected circumstances.

(I can't give you a better specification, by definition.)

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8fa6$eh0$1@news.oberberg.net>
Pascal Costanza wrote:
> See the example of downcasts in Java.

Please do /not/ draw your examples from Java, C++, or Eiffel. Modern 
static type systems are far more flexible and powerful, and far less 
obtrusive than the type systems used in these languages.

A modern type system has the following characteristics:

1. It's safe: Code that type checks cannot assign type-incorrect values 
(as opposed to Eiffel).

2. It is expressive: There's no need to write type casts (as opposed to 
C++ and Java). (The only exceptions where type casts are necessary are 
those where it is logically unavoidable: e.g. when importing binary data 
from an untyped source.)

3. It is unobtrusive: The compiler can infer most if not all types by 
itself. Modifying some code so that it is slightly more general will 
thus automatically acquire the appropriate slightly more general type.

4. It is powerful: any type may have other types as parameters. Not only 
for container types such as Array <Integer>, but also for other 
purposes. Advanced type systems can even express mutually recursive 
types - an (admittedly silly) example: trees that have alternating node 
types on paths from root to leaves. (And all that without type casts, 
Mum! *g*)
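Points 3 and 4 can be seen in a few lines of Haskell (my example, not Jo's; the names are illustrative). The function carries no annotation yet gets a fully general inferred type, and the tree type statically enforces alternating node types from root to leaf:

```haskell
-- 3. Unobtrusive: no annotation needed; the compiler infers the
--    fully general type  swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)

-- 4. Powerful: a tree whose node types alternate on every path
--    from root to leaf, enforced by the type checker alone.
--    The two type parameters trade places at each level.
data AltTree a b = Leaf | Node a (AltTree b a) (AltTree b a)

depth :: AltTree a b -> Int
depth Leaf         = 0
depth (Node _ l r) = 1 + max (depth l) (depth r)

main :: IO ()
main = print (depth (Node 'x' (Node True Leaf Leaf) Leaf))
```

A `Node 'x' (Node 'y' Leaf Leaf) Leaf` would be rejected at compile time: the child of a Char node must be a Bool node (or whatever the second parameter is), with no casts anywhere.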

Regards,
Jo
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8mbc$l00$1@f1node01.rhrz.uni-bonn.de>
Joachim Durchholz wrote:
> Pascal Costanza wrote:
> 
>> See the example of downcasts in Java.
> 
> 
> Please do /not/ draw your examples from Java, C++, or Eiffel. Modern 
> static type systems are far more flexible and powerful, and far less 
> obtrusive than the type systems used in these languages.

This was just one obvious example in which you need a workaround to make 
the type system happy. There exist others.

> A modern type system has the following characteristics:

I know what modern type systems do.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn9vcg$42i$2@news.oberberg.net>
Pascal Costanza wrote:

> Joachim Durchholz wrote:
> 
>> Pascal Costanza wrote:
>>
>>> See the example of downcasts in Java.
>>
>> Please do /not/ draw your examples from Java, C++, or Eiffel. Modern 
>> static type systems are far more flexible and powerful, and far less 
>> obtrusive than the type systems used in these languages.
> 
> This was just one obvious example in which you need a workaround to make 
> the type system happy. There exist others.

Then give these examples, instead of presenting us with strawman examples.

>> A modern type system has the following characteristics:
> 
> I know what modern type systems do.

Then I don't understand your point of view.

Regards,
Jo
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <hPdmb.18663$Fm2.9880@attbi_s04>
"Pascal Costanza" <········@web.de> wrote in message ·················@newsreader2.netcologne.de...
>
> See the example of downcasts in Java.

Downcasts in Java are not a source of problems.
They may well be indicative of a theoretical
hole (in fact I'm pretty sure they are,) but they
are not something that actually causes
problems in the real world.


Marshall
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bndsch$764$1@news.oberberg.net>
Marshall Spight wrote:

> "Pascal Costanza" <········@web.de> wrote:
> 
>>See the example of downcasts in Java.
> 
> Downcasts in Java are not a source of problems.

Huh?
Alone the need to downcast whenever I take something out of a container 
would suffice to term it as a "serious problem".
Unless you meant: it's not the downcasts that are the problem, it's the 
many language mechanisms that require downcasts that are.

Regards,
Jo
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <c4wmb.25002$Tr4.49479@attbi_s03>
"Joachim Durchholz" <·················@web.de> wrote in message ·················@news.oberberg.net...
> Marshall Spight wrote:
>
> > Downcasts in Java are not a source of problems.
>
> Huh?
> Alone the need to downcast whenever I take something out of a container
> would suffice to term it as a "serious problem".

Why? Here, you've assumed your conclusion. You've not given
me any reason. I will not accept any a priori criticisms of
downcasting except this one: downcasting requires the programmer
to type more characters.

Downcasts are not a source of problems in Java because
empirically, no problems result from them. (It's hard to
prove the absence of something, eh?) Years pass; hundreds
of thousands of lines of code are written, with no errors
arising from downcasting the result of ArrayList.get().
That has been my experience.

Does the existence of downcasts point out a place where
Java is lame? Absolutely. Does extra effort result from
this lameness? Certainly. Does this extra effort cause
bugs? Nope.

(Anyway, the situation is much better with Java generics,
available now in prerelease form; mainstream in the next
major version.)


Marshall
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnejhq$hlv$1@news.oberberg.net>
Marshall Spight wrote:
> 
> Does the existence of downcasts point out a place where
> Java is lame? Absolutely. Does extra effort result from
> this lameness? Certainly. Does this extra effort cause
> bugs? Nope.

Bugs are not the only thing that I would term a "serious problem". 
Actually, I'd call everything a "serious problem" that eats up 
productivity. Of course, some problems are more serious than others; the 
absence of a good way to catch exceptions in C++ constructors (and, 
hence, that an exception thrown somewhere within a C++ constructor will 
almost inevitably leak memory) is worse than Java's ubiquitous need for 
downcasts.
But I also consider the unavailability of type inference in Java a 
"serious problem". Actually I fear that my preferences have changed from 
"static typing > run-time typing" to "type inference > run-time typing > 
explicit static typing".

> (Anyway, the situation is much better with Java generics,
> available now in prerelease form; mainstream in the next
> major version.)

Good thing, that.
Let's just hope that having to wait a decade for Java generics was worth 
it... Java is probably the inevitable next step in my career. Well, 
there are worse languages, that's what I always tell myself (and it 
almost suffices to calm me down...)

Regards,
Jo
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <I%Bmb.15505$9E1.69499@attbi_s52>
"Joachim Durchholz" <·················@web.de> wrote in message ·················@news.oberberg.net...
> the
> absence of a good way to catch exceptions in C++ constructors (and,
> hence, that an exception thrown somewhere within a C++ constructor will
> almost inevitably leak memory)

Java nails this one.


> is worse than Java's ubiquitous need for
> downcasts.
> But I also consider the unavailability of type inference in Java a
> "serious problem".

Me, too, sigh.


> Java is probably the inevitable next step in my career. Well,
> there are worse languages, that's what I always tell myself (and it
> almost suffices to calm me down...)

I got assigned to a Java project in 1996 and went kicking and
screaming. Within 3 months, I thought it was great. YMMV.


Marshall
From: Rainer Deyke
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <TeCmb.26070$Fm2.12858@attbi_s04>
Joachim Durchholz wrote:
> Of course, some problems are more serious than others;
> the absence of a good way to catch exceptions in C++ constructors
> (and, hence, that an exception thrown somewhere within a C++
> constructor will almost inevitably leak memory)

This is simply not the case.  There is no problem with catching exceptions
in constructors in C++, and 99% of the time it isn't necessary anyway.


-- 
Rainer Deyke - ·······@eldwood.com - http://eldwood.com
From: Nikodemus Siivola
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8hn7$2pu$1@nyytiset.pp.htv.fi>
In comp.lang.lisp Matthias Blume <····@my.address.elsewhere> wrote:

Apologies for the out-of-context snippage:

> A 100,000 line program in an untyped language is useless to me if I am
                               ^^^^^^^

Your choice of word here makes me suspect that you _may_ understand
something quite different than most of the residents of cll and clp by
dynamic typing:

 dynamic typing is *not* the same as untyped!

Of course, maybe it was just an unfortunate choice of words.

Cheers,

 -- Nikodemus
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m11xt4q9gv.fsf@tti5.uchicago.edu>
Nikodemus Siivola <······@random-state.net> writes:

> In comp.lang.lisp Matthias Blume <····@my.address.elsewhere> wrote:
> 
> Apologies for the out-of-context snippage:
> 
> > A 100,000 line program in an untyped language is useless to me if I am
>                                ^^^^^^^
> 
> Your choice of word here makes me suspect that you _may_ understand
> something quite different than most of the residents of cll and clp by
> dynamic typing:
> 
>  dynamic typing is *not* the same as untyped!

Ah, are we quibbling about *that* again?   Words, words, words...

If you want to know how much I know about the difference between typed
and untyped (or "statically typed" vs. "dynamically typed" as you
prefer), look up my track record on implementing languages in either
part of the PL world.

Yes, "dynamically typed" programs are "typed", but the word "type"
here means something quite different from what it means when it is
used with the qualifier "static".  I prefer the latter use, and from
that point of view there is only one (static) type in dynamically
typed programs, hence my use of the word "untyped".  (If you have only
one (static) type, you might as well not even think about that fact.)

Anyway, unfortunate or not, we are both thinking about the same class
of languages.  That shall suffice.

Matthias

PS: When I say "untyped" I mean it as in "the _untyped_ lambda
calculus".
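Matthias's sense of "untyped" can be made concrete with a short Python sketch of the untyped lambda calculus itself: every value is a one-argument function, so there is effectively a single (static) type. The Church-numeral encoding below is a standard construction, not anything from the thread.

```python
# The untyped lambda calculus embedded in Python: every term is a
# one-argument function.  Church numerals encode n as "apply f n times".

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications of an increment."""
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
five  = to_int(add(two)(three))   # 5
```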
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8s6a$sn0$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:

> PS: When I say "untyped" I mean it as in "the _untyped_ lambda
> calculus".

What terms would you use to describe the difference between dynamically 
and weakly typed languages, then?

For example, Smalltalk is clearly "more" typed than C is. Describing 
both as "untyped" seems a little bit unfair to me.

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1d6cootlf.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> Matthias Blume wrote:
> 
> > PS: When I say "untyped" I mean it as in "the _untyped_ lambda
> > calculus".
> 
> What terms would you use to describe the difference between
> dynamically and weakly typed languages, then?
> 
> 
> For example, Smalltalk is clearly "more" typed than C is. Describing
> both as "untyped" seems a little bit unfair to me.

Safe and unsafe.

BTW, C is typed, Smalltalk is untyped.  C's type system just happens
to be unsound (in the sense that, as you observed, well-typed programs
can still be unsafe).

Matthias
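The safe/unsafe distinction can be illustrated in Python (standing in here for Smalltalk as a safe-but-statically-untyped language): an ill-typed operation is trapped cleanly at run time, rather than producing undefined behaviour as it might in unsound-but-typed C. A minimal sketch:

```python
# Safe but (statically) untyped: Python tags every value at run time,
# so an ill-typed operation raises a catchable error instead of
# silently corrupting memory.

def safe_add(a, b):
    try:
        return a + b
    except TypeError as e:
        return f"trapped: {e}"

ok  = safe_add(1, 2)       # evaluates normally
bad = safe_add(1, "bar")   # trapped at run time, not undefined behaviour
```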
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8tco$qim$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>Matthias Blume wrote:
>>
>>
>>>PS: When I say "untyped" I mean it as in "the _untyped_ lambda
>>>calculus".
>>
>>What terms would you use to describe the difference between
>>dynamically and weakly typed languages, then?
>>
>>
>>For example, Smalltalk is clearly "more" typed than C is. Describing
>>both as "untyped" seems a little bit unfair to me.
> 
> 
> Safe and unsafe.
> 
> BTW, C is typed, Smalltalk is untyped.  C's type system just happens
> to be unsound (in the sense that, as you observed, well-typed programs
> can still be unsafe).

Can you give me a reference to a paper, or some other literature, that 
defines the terminology that you use?

I have tried to find a consistent set of terms for this topic, and have 
only found the paper "Type Systems" by Luca Cardelli 
(http://www.luca.demon.co.uk/Bibliography.htm#Type systems )

He uses the terms of static vs. dynamic typing and strong vs. weak 
typing, and these are described as orthogonal classifications. I find 
this terminology very clear, consistent and useful. But I am open to a 
different terminology.

Pascal

From: Andreas Rossberg
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3F980363.9020407@ps.uni-sb.de>
Pascal Costanza wrote:
> Matthias Blume wrote:
> 
>> Pascal Costanza <········@web.de> writes:
>>
>>
>>> Matthias Blume wrote:
>>>
>>>
>>>> PS: When I say "untyped" I mean it as in "the _untyped_ lambda
>>>> calculus".
>>>
>>>
>>> What terms would you use to describe the difference between
>>> dynamically and weakly typed languages, then?
>>>
>>>
>>> For example, Smalltalk is clearly "more" typed than C is. Describing
>>> both as "untyped" seems a little bit unfair to me.
>>
>>
>>
>> Safe and unsafe.
>>
>> BTW, C is typed, Smalltalk is untyped.  C's type system just happens
>> to be unsound (in the sense that, as you observed, well-typed programs
>> can still be unsafe).
> 
> 
> Can you give me a reference to a paper, or some other literature, that 
> defines the terminology that you use?
> 
> I have tried to find a consistent set of terms for this topic, and have 
> only found the paper "Type Systems" by Luca Cardelli 
> (http://www.luca.demon.co.uk/Bibliography.htm#Type systems )
> 
> He uses the terms of static vs. dynamic typing and strong vs. weak 
> typing, and these are described as orthogonal classifications. I find 
> this terminology very clear, consistent and useful. But I am open to a 
> different terminology.

My copy,

   http://research.microsoft.com/Users/luca/Papers/TypeSystems.A4.pdf

on page 3 defines safety as orthogonal to typing in the way Matthias 
suggested.

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
  as kids, we would all be running around in darkened rooms, munching
  magic pills, and listening to repetitive electronic music."
  - Kristian Wilson, Nintendo Inc.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn92c5$uu4$2@f1node01.rhrz.uni-bonn.de>
Andreas Rossberg wrote:

>> Can you give me a reference to a paper, or some other literature, that 
>> defines the terminology that you use?
>>
>> I have tried to find a consistent set of terms for this topic, and 
>> have only found the paper "Type Systems" by Luca Cardelli 
>> (http://www.luca.demon.co.uk/Bibliography.htm#Type systems )
>>
>> He uses the terms of static vs. dynamic typing and strong vs. weak 
>> typing, and these are described as orthogonal classifications. I find 
>> this terminology very clear, consistent and useful. But I am open to a 
>> different terminology.
> 
> 
> My copy,
> 
>   http://research.microsoft.com/Users/luca/Papers/TypeSystems.A4.pdf
> 
> on page 3 defines safety as orthogonal to typing in the way Matthias 
> suggested.

Yes, but it says dynamically typed vs statically typed where Matthias 
says untyped vs typed.


Pascal

From: Andreas Rossberg
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3F981231.90404@ps.uni-sb.de>
Pascal Costanza wrote:
>>
>> My copy,
>>
>>   http://research.microsoft.com/Users/luca/Papers/TypeSystems.A4.pdf
>>
>> on page 3 defines safety as orthogonal to typing in the way Matthias 
>> suggested.
> 
> Yes, but it says dynamically typed vs statically typed where Matthias 
> says untyped vs typed.

Huh? On page 2 Cardelli defines typed vs. untyped. Table 1 on page 5 
clearly identifies Lisp as an untyped (but safe) language. He also 
speaks of statical vs. dynamical _checking_ wrt safety, but where do you 
find a definition of dynamic typing?

	- Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn9qtq$c9f$1@newsreader2.netcologne.de>
Andreas Rossberg wrote:

> Pascal Costanza wrote:
> 
>>>
>>> My copy,
>>>
>>>   http://research.microsoft.com/Users/luca/Papers/TypeSystems.A4.pdf
>>>
>>> on page 3 defines safety as orthogonal to typing in the way Matthias 
>>> suggested.
>>
>>
>> Yes, but it says dynamically typed vs statically typed where Matthias 
>> says untyped vs typed.
> 
> 
> Huh? On page 2 Cardelli defines typed vs. untyped. Table 1 on page 5 
> clearly identifies Lisp as an untyped (but safe) language. He also 
> speaks of statical vs. dynamical _checking_ wrt safety, but where do you 
> find a definition of dynamic typing?

Hmm, maybe I was wrong. I will need to check that again - it was some 
time ago that I have read the paper. Oh dear, I am getting old. ;)

Thanks for pointing this out.


Pascal
From: Kenny Tilton
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <DfRlb.28762$pT1.4127@twister.nyc.rr.com>
Matthias Blume wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>The set of programs that are useful but cannot be checked by a static
>>type system is by definition bigger than the set of useful programs
>>that can be statically checked.
> 
> 
> By whose definition?  What *is* your definition of "useful"?  It is
> clear to me that static typing improves maintainability, scalability,
> and helps with the overall design of software. 

That sounds right. When I divided a large app into half a dozen sensible 
packages, several violations of clean design were revealed. But just a 
few, and there was a ton of code.

I did a little C++ and Java once, porting Cells to those languages. This 
was existing code, so I did not have to explore as I coded. It was a 
total pain, but then it was pretty easy to get working because so many 
casual goofs got caught by the compiler.

I just would never want to write original code this way, because then I 
am working fast and loose, doing this, doing that, leaving all sorts of 
code in limbo which would have to be straightened out to satisfy a compiler.

The other problem with static typing is that it does not address the 
real problem with scaling, viz, the exponential explosion of state 
interdependencies. A compiler cannot check the code I neglect to write, 
leaving state change unpropagated to dependent other state, nor can it 
check the sequence of correctly typed statements to make sure state used 
in calculation X is updated before I use that state.

kenny

-- 
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
  http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
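Kenny's point — that a compiler cannot check the propagation code you neglect to write — can be sketched with a toy example (a hypothetical spreadsheet-style class, not his actual Cells library):

```python
# Well-typed, yet wrong: 'total' is derived state that must be
# recomputed whenever 'price' or 'qty' changes.  One setter remembers
# to propagate, the other forgets -- and no type checker objects.

class Sheet:
    def __init__(self, price, qty):
        self.price = price
        self.qty = qty
        self.total = price * qty            # derived state

    def set_price(self, price):
        self.price = price
        self.total = self.price * self.qty  # propagation remembered

    def set_qty(self, qty):
        self.qty = qty                      # propagation forgotten

s = Sheet(price=10, qty=2)
s.set_qty(5)
stale = s.total    # still 20, though 50 was intended
```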
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1wuawounu.fsf@tti5.uchicago.edu>
Kenny Tilton <·······@nyc.rr.com> writes:

> The other problem with static typing is that it does not address the
> real problem with scaling, viz, the exponential explosion of state
> interdependencies. A compiler cannot check the code I neglect to
> write, leaving state change unpropagated to dependent other state, nor
> can it check the sequence of correctly typed statements to make sure
> state used in calculation X is updated before I use that state.

Yes, the usefulness of static types seems to be inversely proportional
to the imperativeness of one's programming style (Haskell, Miranda).
Static types *really* shine in purely functional settings.  In mostly
functional settings (SML, OCaml) they lose some of their expressive
"punch" if you start playing with mutable data structures.  In
languages that heavily rely on imperative features (mutable state,
object identity, imperative I/O, exceptions) their usefulness goes
increasingly down the drain.

Matthias
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1ismgou09.fsf@tti5.uchicago.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> Yes, the usefulness of static types seems to be inversely proportional
> to the imperativeness of one's programming style (Haskell, Miranda).
> Static types *really* shine in purely functional settings (****).
[...]

Obviously, the parenthetical remark "(Haskell, Miranda)" should be where
the (****) is.

Matthias
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87k76v9531.fsf@thalassa.informatimago.com>
Matthias Blume <····@my.address.elsewhere> writes:
> A 100,000 line program in an untyped language is useless to me if I am
> trying to make modifications -- unless it is written in a highly
> stylized way which is extensively documented (and which usually means
> that you could have captured this style in static types). 

The  only untyped  languages I  know are  assemblers. (ISTR  that even
intercal can't be labelled "untyped" per se).

Are we speaking about assembler here?

-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1ismfokx2.fsf@tti5.uchicago.edu>
Pascal Bourguignon <····@thalassa.informatimago.com> writes:

> The  only untyped  languages I  know are  assemblers. (ISTR  that even
> intercal can't be labelled "untyped" per se).
> 
> Are we speaking about assembler here?

No, we are speaking different definitions of "typed" and "untyped"
here.  Even assembler is typed if you look at it the right way.

As I said before, I mean "untyped" as in "The Untyped Lambda
Calculus" which is a well-established term.

Matthias
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <tGfmb.19042$Tr4.39782@attbi_s03>
"Pascal Bourguignon" <····@thalassa.informatimago.com> wrote in message ···················@thalassa.informatimago.com...
>
> The  only untyped  languages I  know are  assemblers. (ISTR  that even
> intercal can't be labelled "untyped" per se).
>
> Are we speaking about assembler here?

BCPL!


Marshall
From: Andrew Dalke
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spIlb.1548$I04.1454@newsread4.news.pas.earthlink.net>
Pascal Costanza:
> The set of programs that are useful but cannot be checked by a static
> type system is by definition bigger than the set of useful programs that
> can be statically checked. So dynamically typed languages allow me to
> express more useful programs than statically typed languages.

Ummm, both are infinite and both are countably infinite, so those sets
are the same size.  You're falling for Hilbert's Paradox.

Also, while I don't know a proof, I'm pretty sure that type inferencing
can do addition (and theorem proving) so is equal in power to
programming.

> I don't need a study for that statement because it's a simple argument:
> if the language doesn't allow me to express something in a direct way,
> but requires me to write considerably more code then I have considerably
> more opportunities for making mistakes.

The size comparisons I've seen (like the great programming language
shootout) suggest that Ocaml and Scheme require about the same amount
of code to solve small problems.  Yet last I saw, Ocaml is strongly typed
at compile time.  How do you assume then that strongly&statically typed
languages require "considerably more code"?

                    Andrew
                    ·····@dalkescientific.com
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn84l4$c6r$1@newsreader2.netcologne.de>
Andrew Dalke wrote:

> Pascal Costanza:
> 
>>The set of programs that are useful but cannot be checked by a static
>>type system is by definition bigger than the set of useful programs that
>>can be statically checked. So dynamically typed languages allow me to
>>express more useful programs than statically typed languages.
> 
> 
> Ummm, both are infinite and both are countably infinite, so those sets
> are the same size.  You're falling for Hilbert's Paradox.
> 
> Also, while I don't know a proof, I'm pretty sure that type inferencing
> can do addition (and theorem proving) so is equal in power to
> programming.

Just give me a static type system for CLOS + MOP.

>>I don't need a study for that statement because it's a simple argument:
>>if the language doesn't allow me to express something in a direct way,
>>but requires me to write considerably more code then I have considerably
>>more opportunities for making mistakes.
> 
> 
> The size comparisons I've seen (like the great programming language
> shootout) suggest that Ocaml and Scheme require about the same amount
> of code to solve small problems.  Yet last I saw, Ocaml is strongly typed
> at compile time.  How do you assume then that strongly&statically typed
> languages require "considerable more code"?

_small_ problems?


Pascal
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <k76wf14h.fsf@ccs.neu.edu>
"Andrew Dalke" <······@mindspring.com> writes:

> Pascal Costanza:
>> The set of programs that are useful but cannot be checked by a static
>> type system is by definition bigger than the set of useful programs that
>> can be statically checked. So dynamically typed languages allow me to
>> express more useful programs than statically typed languages.
>
> Ummm, both are infinite and both are countably infinite, so those sets
> are the same size.  You're falling for Hilbert's Paradox.

They aren't the same size if you limit the length of the program.  This
is a reasonable restriction if you are interested in programs that might
be realizable within your lifetime.

> Also, while I don't know a proof, I'm pretty sure that type inferencing
> can do addition (and theorem proving) so is equal in power to
> programming.

Yes, this is true.  But it is also the case that a powerful enough
static type checker cannot be proven to halt or produce an answer in a
time less than that required to run the program being checked.  It
makes little difference if the type checker produces the answer or the
program produces the answer if they both take about the same time to
run.  Of course, it is generally more difficult to program in the type
metalanguage than in the target language.
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8dal$dma$1@news.oberberg.net>
Andrew Dalke wrote:
> Pascal Costanza:
> 
>>The set of programs that are useful but cannot be checked by a static
>>type system is by definition bigger than the set of useful programs that
>>can be statically checked. So dynamically typed languages allow me to
>>express more useful programs than statically typed languages.
> 
> Ummm, both are infinite and both are countably infinite, so those sets
> are the same size.  You're falling for Hilbert's Paradox.

The sets in question are not /all/ dynamically/statically typed 
programs, they are all dynamically/statically typed programs that fit 
any item in the set of specifications in existence. Which is a very 
finite set.

> Also, while I don't know a proof, I'm pretty sure that type inferencing
> can do addition (and theorem proving) so is equal in power to
> programming.

Nope. It depends on the type system used: some are decidable, some are 
undecidable, and for some, decidability is unknown.

Actually, for decidable type inference systems, there's also the 
distinction between exponential, polynomial, O (N log N), and linear 
behaviour; for some systems, the worst-case behaviour is unknown but 
benevolent in practice.

The vast majority of practical programming languages use a type 
inference system where the behavior is known to be O (N log N) or better :-)
(meaning that the other type systems and associated inference algorithms 
are research subjects and/or research tools)

Regards,
Jo
From: Andreas Rossberg
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3F97C3BD.7020408@ps.uni-sb.de>
Joachim Durchholz wrote:
> 
> The vast majority of practical programming languages use a type 
> inference system where the behavior is known to be O (N log N) or better 

Not true, unfortunately. Type inference for almost all FP languages is a 
derivative from the original Hindley/Milner algorithm for ML, which is 
known to have exponential worst-case behaviour. Interestingly, such 
cases never show up in practice; most realistic programs can be checked
in subquadratic time and space. For that reason even the inventors of 
the algorithm originally believed it was polynomial, until somebody 
found a counterexample.

The good news is that, for similar reasons, undecidable type checking 
need not be a hindrance in practice.

	- Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

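The classic counterexample Andreas alludes to is a chain of lets, `let x1 = (x0, x0) in let x2 = (x1, x1) in ...`, whose principal type doubles in size at each step. A Python sketch (modeling types as nested tuples and merely measuring their growth — no actual inference is performed):

```python
# Why Hindley/Milner inference is exponential in the worst case:
# each 'let xi = (x(i-1), x(i-1))' doubles the size of the inferred
# type.  Types are modeled here as nested pairs; we just count leaves.

def leaves(ty):
    """Count the type variables in a type modeled as nested tuples."""
    if isinstance(ty, tuple):
        return sum(leaves(t) for t in ty)
    return 1

ty = "a"                  # x0 : 'a
sizes = []
for _ in range(10):       # x1 = (x0, x0); x2 = (x1, x1); ...
    ty = (ty, ty)
    sizes.append(leaves(ty))
# sizes grows 2, 4, 8, ... -- exponential in the length of the program
```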
From: William Lovas
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <slrnbpdj1a.nle.wlovas@force.stwing.upenn.edu>
In article <············@f1node01.rhrz.uni-bonn.de>, Pascal Costanza wrote:
> Marshall Spight wrote:
>> But why should more regression testing mean less static type checking?
>> Both are useful. Both catch bugs. Why ditch one for the other?
> 
> ...because static type systems work by reducing the expressive power of 
> a language. It can't be any different for a strict static type system. 
> You can't solve the halting problem in a general-purpose language.

What do you mean by "reducing the expressive power of the language"?  There
are many general purpose statically typed programming languages that are
Turing complete, so it's not a theoretical consideration, as you allude.

> This means that eventually you might need to work around language 
> restrictions, and this introduces new potential sources for bugs.
> 
> (Now you could argue that current sophisticated type systems cover 90% 
> of all cases and that this is good enough, but then I would ask you for 
> empirical studies that back this claim. ;)

Empirically, i write a lot of O'Caml code, and i never have to write
something in a non-intuitive manner to work around the type system.  On the
contrary, every type error the compiler catches in my code indicates code
that *doesn't make sense*.  I'd hate to imagine code that doesn't make
sense passing into regression testing.  What if i forget to test a
non-sensical condition?

On the flip-side of the coin, i've also written large chunks of Scheme
code, and I *did* find myself making lots of nonsense errors that weren't
caught until run time, which significantly increased development time
and difficulty.

Furthermore, thinking about types during the development process keeps me
honest: i'm much more likely to write code that works if i've spent some
time understanding the problem and the types involved.  This sort of
pre-development thinking helps to *eliminate* potential sources for bugs,
not introduce them.  Even Scheme advocates encourage this (as in Essentials
of Programming Languages by Friedman, Wand, and Haynes).

> I think soft typing is a good compromise, because it is a mere add-on to 
> an otherwise dynamically typed language, and it allows programmers to 
> override the decisions of the static type system when they know better.

When do programmers know better?  An int is an int and a string is a
string, and nary the twain shall be treated the same.  I would rather
``1 + "bar"'' signal an error at compile time than at run time.

Personally, i don't understand all this bally-hoo about "dynamic languages"
being the next great leap.  Static typing is a luxury!

William
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1brs9qc3z.fsf@tti5.uchicago.edu>
William Lovas <······@force.stwing.upenn.edu> writes:

> [...] Static typing is a luxury!

Very well put!
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn774d$qj3$1@newsreader2.netcologne.de>
William Lovas wrote:

> In article <············@f1node01.rhrz.uni-bonn.de>, Pascal Costanza wrote:
> 
>>Marshall Spight wrote:
>>
>>>But why should more regression testing mean less static type checking?
>>>Both are useful. Both catch bugs. Why ditch one for the other?
>>
>>...because static type systems work by reducing the expressive power of 
>>a language. It can't be any different for a strict static type system. 
>>You can't solve the halting problem in a general-purpose language.
> 
> What do you mean by "reducing the expressive power of the language"?  There
> are many general purpose statically typed programming languages that are
> Turing complete, so it's not a theoretical consideration, as you allude.

For example, static type systems are incompatible with dynamic 
metaprogramming. This is objectively a reduction of expressive power, 
because programs that don't allow for dynamic metaprogramming can't be 
extended in certain ways at runtime, by definition.
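The kind of dynamic metaprogramming meant here can be shown in a few lines of Python (a minimal illustrative sketch): a class is extended at run time, and the new method works even on instances created before the extension — a call site a static checker would have had to reject.

```python
# Dynamic metaprogramming: patching a method into a live class.
# 'area' does not exist when Circle is defined, so a static checker
# could not have approved the call below at compile time.

class Circle:
    def __init__(self, r):
        self.r = r

c = Circle(2.0)

# Later -- e.g. from a plugin loaded at run time -- a method is added:
Circle.area = lambda self: 3.14159 * self.r ** 2

patched = c.area()   # works on the pre-existing instance
```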

>>This means that eventually you might need to work around language 
>>restrictions, and this introduces new potential sources for bugs.
>>
>>(Now you could argue that current sophisticated type systems cover 90% 
>>of all cases and that this is good enough, but then I would ask you for 
>>empirical studies that back this claim. ;)
> 
> Empirically, i write a lot of O'Caml code, and i never have to write
> something in a non-intuitive manner to work around the type system.  On the
> contrary, every type error the compiler catches in my code indicates code
> that *doesn't make sense*.  I'd hate to imagine code that doesn't make
> sense passing into regression testing.  What if i forget to test a
> non-sensical condition?

You need some testing discipline, which is supported well by unit 
testing frameworks.

> On the flip-side of the coin, i've also written large chunks of Scheme
> code, and I *did* find myself making lots of nonsense errors that weren't
> caught until run time, which significantly increased development time
> and difficulty.
> 
> Furthermore, thinking about types during the development process keeps me
> honest: i'm much more likely to write code that works if i've spent some
> time understanding the problem and the types involved.  This sort of
> pre-development thinking helps to *eliminate* potential sources for bugs,
> not introduce them.  Even Scheme advocates encourage this (as in Essentials
> of Programming Languages by Friedman, Wand, and Haynes).

Yes, thinking about a problem to understand it better occasionally helps 
to write better code. This has nothing to do with static typing. This 
could also be achieved by placing some other arbitrary restrictions on 
your coding style.

>>I think soft typing is a good compromise, because it is a mere add-on to 
>>an otherwise dynamically typed language, and it allows programmers to 
>>override the decisions of the static type system when they know better.
> 
> When do programmers know better?  An int is an int and a string is a
> string, and nary the twain shall be treated the same.  I would rather
> ``1 + "bar"'' signal an error at compile time than at run time.

Such code would easily be caught very soon in your unit tests.


Pascal
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <7YIlb.1697$ao4.6695@attbi_s51>
"Pascal Costanza" <········@web.de> wrote in message ·················@newsreader2.netcologne.de...
> >
> > When do programmers know better?  An int is an int and a string is a
> > string, and nary the twain shall be treated the same.  I would rather
> > ``1 + "bar"'' signal an error at compile time than at run time.
>
> Such code would easily be caught very soon in your unit tests.

Provided you think to write such a test, and expend the effort
to do so. Contrast to what happens in a statically typed language,
where this is done for you automatically.

Unit tests are great; I heartily endorse them. But they *cannot*
do everything that static type checking can do. Likewise,
static type checking *cannot* do everything unit testing
can do.

So again I ask, why is it either/or? Why not both? I've had
*great* success building systems with comprehensive unit
test suites in statically typed languages. The unit tests catch
some bugs, and the static type checking catches other bugs.


Marshall
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn83ph$akr$1@newsreader2.netcologne.de>
Marshall Spight wrote:

> "Pascal Costanza" <········@web.de> wrote in message ·················@newsreader2.netcologne.de...
> 
>>>When do programmers know better?  An int is an int and a string is a
>>>string, and nary the twain shall be treated the same.  I would rather
>>>``1 + "bar"'' signal an error at compile time than at run time.
>>
>>Such code would easily be caught very soon in your unit tests.
> 
> 
> Provided you think to write such a test, and expend the effort
> to do so. Contrast to what happens in a statically typed language,
> where this is done for you automatically.

There are other things that are done automatically for me in dynamically 
typed languages that I care more about than such static checks. I don't 
recall ever writing 1 + "bar". (Yes, this is a rhetorical statement. ;)

> Unit tests are great; I heartily endorse them. But they *cannot*
> do everything that static type checking can do. Likewise,
> static type checking *cannot* do everything unit testing
> can do.

Right.

> So again I ask, why is it either/or? Why not both? I've had
> *great* success building systems with comprehensive unit
> test suites in statically typed languages. The unit tests catch
> some bugs, and the static type checking catches other bugs.

That's great for you, and if it works for you, just keep it up.

But I have given reasons why one would not want to have static type 
checking by default. All I am trying to say is that this depends on the 
context. Static type systems are definitely not _generally_ better than 
dynamic type systems.


Pascal
From: Dirk Thierbach
Subject: Static typing (was: Python from Wise Guy's Viewpoint)
Date: 
Message-ID: <90fk61-j71.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> You need some testing discipline, which is supported well by unit 
> testing frameworks.

IMHO it helps to think about static typing as a special kind of unit
tests. Like unit tests, they verify that for some input values, the
function in question will produce the correct output values. Unlike
unit tests, they do this for a class of values, instead of testing
statistically by example. And unlike unit tests, they are pervasive:
Every execution path will be automatically tested; you don't have
to invest brain power to make sure you don't forget one.

Type inference will automatically write unit tests for you (besides
other uses like hinting that a routine may be more general than you
thought). But since the computer is not very smart, they will test
only more or less trivial things. But that's still good, because then
you don't have to write the trivial unit tests, and only have to care
about the non-trivial ones.

Type annotations are an assertion language that you use to write down
that kind of unit tests. 

> Static type systems are claimed to generally improve your code. I
> don't see that.

They do it for the same reason that unit tests do:

* They are executable documentation.

* By writing them down first, you focus on what you want to do.

* They help with refactoring.

etc.

Of course you can replace the benefits of static typing by enough unit
tests. But they are different verification tools: For some kind of
problems, one is better, for other kinds, the other. There's no reason
not to use both.

- Dirk
From: Pascal Costanza
Subject: Re: Static typing
Date: 
Message-ID: <bn8d2h$u4o$1@f1node01.rhrz.uni-bonn.de>
Dirk Thierbach wrote:
> Pascal Costanza <········@web.de> wrote:
> 
>>You need some testing discipline, which is supported well by unit 
>>testing frameworks.
> 
> IMHO it helps to think about static typing as a special kind of unit
> tests. Like unit tests, they verify that for some input values, the
> function in question will produce the correct output values. Unlike
> unit tests, they do this for a class of values, instead of testing
> statistically by example. And unlike unit tests, they are pervasive:
> Every execution path will be automatically tested; you don't have
> to invest brain power to make sure you don't forget one.

This is clear.

> Type inference will automatically write unit tests for you (besides
> other uses like hinting that a routine may be more general than you
> thought). But since the computer is not very smart, they will test
> only more or less trivial things. But that's still good, because then
> you don't have to write the trivial unit tests, and only have to care
> about the non-trivial ones.

Unless the static type system takes away the expressive power that I need.

> Type annotations are an assertion language that you use to write down
> that kind of unit tests. 

Yep.

> Of course you can replace the benefits of static typing by enough unit
> tests. But they are different verification tools: For some kind of
> problems, one is better, for other kinds, the other. There's no reason
> not to use both.

I have given reasons when not to use a static type system in this 
thread. Please take a look at the Smalltalk MOP or the CLOS MOP and tell 
me what a static type system should look like for these languages!


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Dirk Thierbach
Subject: Re: Static typing
Date: 
Message-ID: <bq5l61-7a6.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> Unless the static type system takes away the expressive power that I need.

Even within a static type system, you can always revert to "dynamic
typing" by introducing a sufficiently universal datatype (say,
s-expressions).

Usually the need for real runtime flexibility is quite localized (but
of course this depends on the application). Unless you really need runtime
flexibility nearly everywhere (and I cannot think of an example where
this is the case), the universal datatype approach works quite well
(though you lose the advantages of static typing in these places, of
course, and you have to compensate with more unit tests).
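The universal-datatype move can be sketched in a few lines (a hypothetical example; the tag names and helpers are mine). In ML or Haskell this would be a variant type such as `data Sexp = Atom String | List [Sexp]`; the point is that one datatype covers every case and all dispatch happens at runtime:

```python
# "Dynamic typing" inside a single universal datatype: every value carries
# an explicit tag, and operations dispatch on the tag at runtime.
# (Helper names are illustrative, not from the thread.)

def atom(value):
    return ("atom", value)

def slist(*items):
    return ("list", list(items))

def to_string(sexp):
    tag, payload = sexp
    if tag == "atom":
        return str(payload)
    elif tag == "list":
        return "(" + " ".join(to_string(x) for x in payload) + ")"
    else:
        # In the statically typed version this branch cannot happen;
        # here it plays the role of the runtime type error.
        raise ValueError(f"unknown tag: {tag!r}")
```

Within a static type system the compiler would check that this dispatch is exhaustive; in such "dynamic" corners you give that up and compensate with more unit tests.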

> I have given reasons when not to use a static type system in this 
> thread. 

Nobody forces you to use a static type system. Languages, with their
associated type systems, are *tools*, and not religions. You use
what is best for the job.

But it's a bit stupid to frown upon everything else but one's favorite
way of doing things. There are other ways. They may work a bit
differently, and it might be not obvious how to do it if you're used
to doing it differently, but that doesn't mean other ways are
completely stupid. And you might actually learn something once
you know how to do it both ways :-)

> Please take a look at the Smalltalk MOP or the CLOS MOP and tell 
> me what a static type system should look like for these languages!

You cannot take an arbitrary language and attach a good static type
system to it. Type inference will be much too difficult, for example.
There's a fine balance between language design and a good type system
that works well with it.

If you want to use Smalltalk or CLOS with dynamic typing and unit
tests, use them. If you want to use Haskell or OCaml with static typing
and type inference, use them. None is really "better" than the other.
Both have their advantages and disadvantages. But don't dismiss
one of them just because you don't know better.

- Dirk
From: Pascal Costanza
Subject: Re: Static typing
Date: 
Message-ID: <bn9o9v$8p3$1@newsreader2.netcologne.de>
Dirk Thierbach wrote:

> Pascal Costanza <········@web.de> wrote:
> 

>>I have given reasons when not to use a static type system in this 
>>thread. 
> 
> 
> Nobody forces you to use a static type system. Languages, with their
> associated type systems, are *tools*, and not religions. You use
> what is best for the job.

_exactly!_

That's all I have been trying to say in this whole thread.

Marshall Spight asked 
http://groups.google.com/groups?selm=MoEkb.821534%24YN5.832338%40sccrnsc01 
why one would not want to use a static type system, and I have tried to 
give some reasons.

I am not trying to force anyone to use a dynamically checked language. I 
am not even trying to convince anyone. I am just trying to say that 
someone might have very good reasons if they didn't want to use a static 
type system.

>>Please take a look at the Smalltalk MOP or the CLOS MOP and tell 
>>me what a static type system should look like for these languages!
> 
> 
> You cannot take an arbitrary language and attach a good static type
> system to it. Type inference will be much too difficult, for example.
> There's a fine balance between language design and a good type system
> that works well with it.

Right. As I said before, you need to reduce the expressive power of the 
language.

> If you want to use Smalltalk or CLOS with dynamic typing and unit
> tests, use them. If you want to use Haskell or OCaml with static typing
> and type inference, use them. None is really "better" than the other.
> Both have their advantages and disadvantages. But don't dismiss
> one of them just because you don't know better.

Ditto.

Thank you for rephrasing this in a way that is probably easier to understand.

Pascal
From: Dirk Thierbach
Subject: Re: Static typing
Date: 
Message-ID: <gc8n61-321.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> Dirk Thierbach wrote:
>> You cannot take an arbitrary language and attach a good static type
>> system to it. Type inference will be much too difficult, for example.
>> There's a fine balance between language design and a good type system
>> that works well with it.

> Right. As I said before, you need to reduce the expressive power of the 
> language.

Maybe that's where the problem is. One doesn't need to reduce the
"expressive power". I don't know your particular application, but what
you seem to need is the ability to dynamically change the program
execution. There's more than one way to do that. And MOPs (like
macros) are a powerful tool and sometimes quite handy, but it's also
easy to shoot yourself severely in the foot with MOPs if you're
not careful, and often there are better solutions than using MOPs (for
example, appropriate flexible datatypes).

I may be wrong, but I somehow have the impression that it is difficult
to see other ways to solve a problem if you haven't done it in that
way at least once. So you see that with different tools, you cannot do
it in exactly the same way as with the old tools, and immediately you
start complaining that the new tools have "less expressive power",
just because you don't see that you have to use them in a different
way.  The "I can do lot of things with macros in Lisp that are
impossible to do in other languages" claim seems to have a similar
background.

I could complain that Lisp or Smalltalk have "less expressive power"
because I cannot declare algebraic datatypes properly, I don't have
pattern matching to use them efficiently, and there is no automatic
test generation (i.e., type checking) for my datatypes. But there
are ways to work around this, so when programming in Lisp or Smalltalk,
I do it in the natural way that is appropriate for these languages,
instead of wasting my time with silly complaints. 
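The kind of workaround meant here can be sketched (the class names are mine, and this is only a rough analogue of ML datatypes): a small tagged hierarchy plus hand-written case analysis recovers the shape of an algebraic datatype, minus the compiler's exhaustiveness check.

```python
from dataclasses import dataclass

# A hand-rolled algebraic datatype: Shape = Circle radius | Rect width height
@dataclass
class Circle:
    radius: float

@dataclass
class Rect:
    width: float
    height: float

def area(shape):
    # Case analysis by hand. An ML compiler would verify these cases are
    # exhaustive; here, a forgotten case only surfaces at runtime.
    if isinstance(shape, Circle):
        return 3.14159 * shape.radius ** 2
    if isinstance(shape, Rect):
        return shape.width * shape.height
    raise TypeError(f"unhandled case: {shape!r}")
```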

The only way out is IMHO to learn as many languages as possible, and
to learn as many alternative styles of solving problems as possible.
Then pick the one that is appropriate, and don't say "this way has
most expressive power, all others have less". In general, this will
be just wrong.

- Dirk
From: Nikodemus Siivola
Subject: Re: Static typing
Date: 
Message-ID: <bnbeef$nuc$1@nyytiset.pp.htv.fi>
In comp.lang.lisp Dirk Thierbach <··········@gmx.de> wrote:

> I could complain that Lisp or Smalltalk have "less expressive power"
> because I cannot declare algebraic datatypes properly, I don't have
> pattern matching to use them efficiently, and there is no automatic
> test generation (i.e., type checking) for my datatypes.

Would Qi apply here?

 http://www.simulys.com/guideto.htm

Cheers,

 -- Nikodemus
From: Dirk Thierbach
Subject: Re: Static typing
Date: 
Message-ID: <014o61-2h2.ln1@ID-7776.user.dfncis.de>
Nikodemus Siivola <······@random-state.net> wrote:
> In comp.lang.lisp Dirk Thierbach <··········@gmx.de> wrote:

>> I could complain that Lisp or Smalltalk have "less expressive power"
>> because I cannot declare algebraic datatypes properly, I don't have
>> pattern matching to use them efficiently, and there is no automatic
>> test generation (i.e., type checking) for my datatypes.

> Would Qi apply here?
> 
> http://www.simulys.com/guideto.htm

Qi is certainly interesting. It looks very ML-ish, so I suppose the
answer to "How do I add static typing to Lisp?" is "You implement
ML in Lisp" :-)

The type system is very flexible, and you can encode a lot into types
because it has a complete theorem prover, but you probably pay for
that with severe speed penalties.

Most important: Since Qi tries to make static typing optional, and
it also has to deal with the impure features of Lisp, there is
no type inference:

  Qi is an explicitly typed language; this means that all defined
  functions must be accompanied by their intended type. Failure to
  supply a type will produce an error message.

So no automatic tests. 

- Dirk
From: Pascal Costanza
Subject: Re: Static typing
Date: 
Message-ID: <bnbds3$uui$1@f1node01.rhrz.uni-bonn.de>
Dirk Thierbach wrote:
> Pascal Costanza <········@web.de> wrote:
> 
>>Dirk Thierbach wrote:
>>
>>>You cannot take an arbitrary language and attach a good static type
>>>system to it. Type inference will be much too difficult, for example.
>>>There's a fine balance between language design and a good type system
>>>that works well with it.
> 
> 
>>Right. As I said before, you need to reduce the expressive power of the 
>>language.
> 
> 
> Maybe that's where the problem is. One doesn't need to reduce the
> "expressive power". I don't know your particular application, but what
> you seem to need is the ability to dynamically change the program
> execution. There's more than one way to do that. 

Of course there is more than one way to do anything. You can do 
everything in assembler. The important point is: what are the convenient 
ways to do these things? (And convenience is a subjective matter.)

Expressive power is not Turing equivalence.

> I may be wrong, but I somehow have the impression that it is difficult
> to see other ways to solve a problem if you haven't done it in that
> way at least once.

No, you need several attempts to get used to a certain programming 
style. These things don't fall from the sky. When you write your first 
program in a new language, it is very likely that a) you try to imitate 
what you have done in other languages you knew before, and b) you 
don't know the standard idioms of the new language.

Mastering a programming language is a very long process.

> So you see that with different tools, you cannot do
> it in exactly the same way as with the old tools, and immediately you
> start complaining that the new tools have "less expressive power",
> just because you don't see that you have to use them in a different
> way.  The "I can do a lot of things with macros in Lisp that are
> impossible to do in other languages" claim seems to have a similar
> background.

No, you definitely can do a lot of things with macros in Lisp that are 
impossible to do in other languages. There are papers that show this 
convincingly. Try 
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-453.pdf for a 
start. Then continue, for example, with some articles on Paul Graham's 
website, or download and read his book "On Lisp".

> I could complain that Lisp or Smalltalk have "less expressive power"
> because I cannot declare algebraic datatypes properly, 

I don't see why this shouldn't be possible, but I don't know.

> I don't have
> pattern matching to use them efficiently, 

http://www.cliki.net/fare-matcher

> and there is no automatic
> test generation (i.e., type checking) for my datatypes.

http://www.plt-scheme.org/software/mrflow/

> The only way out is IMHO to learn as many languages as possible, and
> to learn as many alternative styles of solving problems as possible.

Right.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: John Atwood
Subject: Re: Static typing
Date: 
Message-ID: <bnbm2k$ijh$1@cvpjaws03.dhcp.cv.hp.com>
Pascal Costanza  <········@web.de> wrote:

>Mastering a programming language is a very long process.
>
>> So you see that with different tools, you cannot do
>> it in exactly the same way as with the old tools, and immediately you
>> start complaining that the new tools have "less expressive power",
>> just because you don't see that you have to use them in a different
>> way.  The "I can do a lot of things with macros in Lisp that are
>> impossible to do in other languages" claim seems to have a similar
>> background.
>
>No, you definitely can do a lot of things with macros in Lisp that are 
>impossible to do in other languages. There are papers that show this 
>convincingly. Try 
>ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-453.pdf for a 
>start. Then continue, for example, with some articles on Paul Graham's 
>website, or download and read his book "On Lisp".

That's a great paper; however, see Steele's later work:
	http://citeseer.nj.nec.com/steele94building.html


John
From: Pascal Costanza
Subject: Re: Static typing
Date: 
Message-ID: <bnjbg6$v4e$1@f1node01.rhrz.uni-bonn.de>
John Atwood wrote:
> Pascal Costanza  <········@web.de> wrote:

>>No, you definitely can do a lot of things with macros in Lisp that are 
>>impossible to do in other languages. There are papers that show this 
>>convincingly. Try 
>>ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-453.pdf for a 
>>start. Then continue, for example, with some articles on Paul Graham's 
>>website, or download and read his book "On Lisp".
> 
> That's a great paper; however, see Steele's later work:
> 	http://citeseer.nj.nec.com/steele94building.html

Yes, I have read that paper. If you want to work with monads, you 
probably want a static type system.

(And I think he still likes Scheme and Lisp. ;)

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: John Atwood
Subject: Re: Static typing
Date: 
Message-ID: <bnjn3h$24i$1@cvpjaws03.dhcp.cv.hp.com>
Pascal Costanza  <········@web.de> wrote:
>John Atwood wrote:
>> Pascal Costanza  <········@web.de> wrote:
>>>No, you definitely can do a lot of things with macros in Lisp that are 
>>>impossible to do in other languages. There are papers that show this 
>>>convincingly. Try 
>>>ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-453.pdf for a 
>>>start. Then continue, for example, with some articles on Paul Graham's 
>>>website, or download and read his book "On Lisp".
>> 
>> That's a great paper; however, see Steele's later work:
>> 	http://citeseer.nj.nec.com/steele94building.html
>
>Yes, I have read that paper. If you want to work with monads, you 
>probably want a static type system.
>
>(And I think he still likes Scheme and Lisp. ;)


Perhaps, but the paper shows convincingly that statically typed languages 
can do a lot of things that Lispers use macros for. Work in 
meta-programming, aka multi-stage programming, shows how to more finely 
control the power of macros, reflection, etc. See, e.g.:
	http://citeseer.nj.nec.com/sheard00accomplishments.html
	http://citeseer.nj.nec.com/taha99multistage.html

John
From: Pascal Costanza
Subject: Re: Static typing
Date: 
Message-ID: <bnjrh3$dsd$1@newsreader2.netcologne.de>
John Atwood wrote:

> Pascal Costanza  <········@web.de> wrote:
> 
>>John Atwood wrote:
>>
>>>Pascal Costanza  <········@web.de> wrote:
>>>
>>>>No, you definitely can do a lot of things with macros in Lisp that are 
>>>>impossible to do in other languages. There are papers that show this 
>>>>convincingly. Try 
>>>>ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-453.pdf for a 
>>>>start. Then continue, for example, with some articles on Paul Graham's 
>>>>website, or download and read his book "On Lisp".
>>>
>>>That's a great paper; however, see Steele's later work:
>>>	http://citeseer.nj.nec.com/steele94building.html
>>
>>Yes, I have read that paper. If you want to work with monads, you 
>>probably want a static type system.
>>
>>(And I think he still likes Scheme and Lisp. ;)
> 
> Perhaps, but the paper shows convincingly that statically typed languages 
> can do a lot of things that Lispers use macros for.

Right. I have no problems with that. Monads are pretty interesting, and 
monads and static typing go very well together.

> Work in 
> meta-programming, aka multi-stage programming, shows how to more finely 
> control the power of macros, reflection, etc. See, e.g.:
> 	http://citeseer.nj.nec.com/sheard00accomplishments.html
> 	http://citeseer.nj.nec.com/taha99multistage.html

Wait: Meta-programming and multi-stage programming are not the same 
thing. The latter is only a subset of the former.

The metaprogramming facility that doesn't let you call the meta program
from the base program and vice versa in the same environment, o
grasshopper, is not the true metaprogramming facility.

Pascal
From: Marshall Spight
Subject: Re: Static typing
Date: 
Message-ID: <5Cdmb.18630$Fm2.9443@attbi_s04>
"Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
>
> Expressive power is not Turing equivalence.

Agreed.

So, does anyone have a formal definition of "expressive power?"
Metrics? Examples? Theoretical foundations?

It seems like a hard concept to pin down. "Make it possible
to write programs that contain as few characters as possible"
strikes me as a really bad definition; it suggests that
bzip2-encoded C++ would be really expressive.


Marshall
From: Joe Marshall
Subject: Re: Static typing
Date: 
Message-ID: <u15ysaf8.fsf@ccs.neu.edu>
"Marshall Spight" <·······@dnai.com> writes:

> "Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
>>
>> Expressive power is not Turing equivalence.
>
> Agreed.
>
> So, does anyone have a formal definition of "expressive power?"
> Metrics? Examples? Theoretical foundations?

http://citeseer.nj.nec.com/felleisen90expressive.html

It's a start.
From: Dirk Thierbach
Subject: Re: Static typing
Date: 
Message-ID: <b81o61-3i1.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> Dirk Thierbach wrote:

> Of course there is more than one way to do anything. You can do 
> everything in assembler. The important point is: what are the convenient 
> ways to do these things? (And convenience is a subjective matter.)

Yes. The point is: It may be as convenient to do in one language as in
the other language. You just need a different approach.

> No, you definitely can do a lot of things with macros in Lisp that are 
> impossible to do in other languages. 

We just had this discussion here, and I am not going to repeat it.
I know Paul Graham's website, and I know many examples of what you
can do with macros. Macros are a wonderful tool, but you really can
get most of what you can do with macros by using HOFs. There are
some things that won't work, the most important of which is that
you cannot force calculation at compile time, and you have to hope
that the compiler does it for you (GHC actually does it sometimes).
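A minimal illustration of that trade (the example is mine): a Lisp `unless` macro rewrites the call site so its branches are not evaluated eagerly, while a higher-order function gets the same effect by making the caller pass thunks explicitly.

```python
def unless(condition, then_thunk, else_thunk=lambda: None):
    """A control construct as a plain higher-order function.

    Where a Lisp macro would rewrite the call site so the branches are
    not evaluated eagerly, here the caller must wrap each branch in a
    zero-argument function (a thunk) by hand.
    """
    if not condition:
        return then_thunk()
    return else_thunk()

# Only the selected thunk runs; the other one's body is never evaluated.
result = unless(2 + 2 == 5,
                lambda: "arithmetic still works",
                lambda: "arithmetic is broken")
```

The cost relative to the macro is purely syntactic (the caller writes the lambdas); the one thing the function genuinely cannot do is force computation at compile time.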

- Dirk
From: Dirk Thierbach
Subject: Re: Static typing
Date: 
Message-ID: <e55o61-co2.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:

>> I don't have pattern matching to use them efficiently, 

> http://www.cliki.net/fare-matcher

Certainly an improvement, but no way to declare datatypes (i.e.,
pattern constructors) yet:

  There also needs be improvements to the infrastructure to build
  pattern constructors, so that you may build pattern constructors and
  destructors at the same time (much like you do when you define ML
  types).

The following might also be a show-stopper (I didn't test it,
but it doesn't look good):

  ; FIXME: several branches of an "or" pattern can't share variables;
  ; variables from all branches are visible in guards and in the body,
  ; and previous branches may have bound variables before failing.
  ; This is rather bad.

The following comment is also interesting:

  Nobody reported using the matcher -- ML/Erlang style pattern
  matching seemingly isn't popular with LISP hackers.

Again, the way to get the benefits of "more expressive languages"
like ML in Lisp seems to be to implement part of them on top of Lisp :-)

>> and there is no automatic test generation (i.e., type checking) for
>> my datatypes.

> http://www.plt-scheme.org/software/mrflow/

I couldn't find any details on this page (it says "coming soon"), but
the name suggests a dataflow analyzer. As I have already said, the
problem with attaching static typing and inference to an arbitrary
language is that it is difficult to get it working without changing
the language design. Pure functional features make type inference
easy, imperative features make them hard. Full dataflow analysis might
help, but I'd have to look more closely to see if it works out.

- Dirk
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8b8i$cl4$2@news.oberberg.net>
Pascal Costanza wrote:

> For example, static type systems are incompatible with dynamic 
> metaprogramming. This is objectively a reduction of expressive power, 
> because programs that don't allow for dynamic metaprogramming can't be 
> extended in certain ways at runtime, by definition.

What is dynamic metaprogramming?

Regards,
Jo
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8des$u4q$1@f1node01.rhrz.uni-bonn.de>
Joachim Durchholz wrote:
> Pascal Costanza wrote:
> 
>> For example, static type systems are incompatible with dynamic 
>> metaprogramming. This is objectively a reduction of expressive power, 
>> because programs that don't allow for dynamic metaprogramming can't be 
>> extended in certain ways at runtime, by definition.
> 
> What is dynamic metaprogramming?

Writing programs that inspect and change themselves at runtime.
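A small, hedged illustration of what that can mean in practice (the class and names are mine): the running program inspects one of its own classes and replaces a method, and instances created before the change pick up the new behavior.

```python
class Account:
    def __init__(self, balance):
        self.balance = balance

    def describe(self):
        return f"balance: {self.balance}"

acct = Account(100)
before = acct.describe()          # "balance: 100"

# Inspect the class at runtime...
assert "describe" in dir(acct)

# ...and redefine one of its methods while the program runs.
# The existing instance sees the new behavior immediately.
def describe_verbosely(self):
    return f"Account holding {self.balance} units"

Account.describe = describe_verbosely
after = acct.describe()
```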


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Ken Rose
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3F97FA6C.2020704@tfb.com>
Pascal Costanza wrote:
> Joachim Durchholz wrote:
> 
>> Pascal Costanza wrote:
>>
>>> For example, static type systems are incompatible with dynamic 
>>> metaprogramming. This is objectively a reduction of expressive power, 
>>> because programs that don't allow for dynamic metaprogramming can't 
>>> be extended in certain ways at runtime, by definition.
>>
>>
>> What is dynamic metaprogramming?
> 
> 
> Writing programs that inspect and change themselves at runtime.

Ah.  I used to do that in assembler.  I always felt like I was aiming a 
shotgun between my toes.

When did self-modifying code get rehabilitated?

  - ken
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8ujk$qiu$1@f1node01.rhrz.uni-bonn.de>
Ken Rose wrote:
> Pascal Costanza wrote:
> 
>> Joachim Durchholz wrote:
>>
>>> Pascal Costanza wrote:
>>>
>>>> For example, static type systems are incompatible with dynamic 
>>>> metaprogramming. This is objectively a reduction of expressive 
>>>> power, because programs that don't allow for dynamic metaprogramming 
>>>> can't be extended in certain ways at runtime, by definition.
>>>
>>> What is dynamic metaprogramming?
>>
>> Writing programs that inspect and change themselves at runtime.
> 
> Ah.  I used to do that in assembler.  I always felt like I was aiming a 
> shotgun between my toes.
> 
> When did self-modifying code get rehabilitated?

I think this was in the late 70's.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Ken Rose
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3F9807C8.1010800@tfb.com>
Pascal Costanza wrote:
> Ken Rose wrote:
> 
>> Pascal Costanza wrote:
>>
>>> Joachim Durchholz wrote:
>>>
>>>> Pascal Costanza wrote:
>>>>
>>>>> For example, static type systems are incompatible with dynamic 
>>>>> metaprogramming. This is objectively a reduction of expressive 
>>>>> power, because programs that don't allow for dynamic 
>>>>> metaprogramming can't be extended in certain ways at runtime, by 
>>>>> definition.
>>>>
>>>>
>>>> What is dynamic metaprogramming?
>>>
>>>
>>> Writing programs that inspect and change themselves at runtime.
>>
>>
>> Ah.  I used to do that in assembler.  I always felt like I was aiming 
>> a shotgun between my toes.
>>
>> When did self-modifying code get rehabilitated?
> 
> 
> I think this was in the late 70's.

Have you got a good reference for the uninitiated?

Thanks

   - ken
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn926t$uu4$1@f1node01.rhrz.uni-bonn.de>
Ken Rose wrote:
> Pascal Costanza wrote:
> 
>> Ken Rose wrote:
>>
>>> Pascal Costanza wrote:
>>>
>>>> Joachim Durchholz wrote:
>>>>
>>>>> Pascal Costanza wrote:
>>>>>
>>>>>> For example, static type systems are incompatible with dynamic 
>>>>>> metaprogramming. This is objectively a reduction of expressive 
>>>>>> power, because programs that don't allow for dynamic 
>>>>>> metaprogramming can't be extended in certain ways at runtime, by 
>>>>>> definition.
>>>>>
>>>>>
>>>>>
>>>>> What is dynamic metaprogramming?
>>>>
>>>>
>>>>
>>>> Writing programs that inspect and change themselves at runtime.
>>>
>>>
>>>
>>> Ah.  I used to do that in assembler.  I always felt like I was aiming 
>>> a shotgun between my toes.
>>>
>>> When did self-modifying code get rehabilitated?
>>
>>
>>
>> I think this was in the late 70's.
> 
> 
> Have you got a good reference for the uninitiated?

http://www.laputan.org/ref89/ref89.html and 
http://www.laputan.org/brant/brant.html are probably good starting 
points. http://www-db.stanford.edu/~paepcke/shared-documents/mopintro.ps 
is an excellent paper, but not for the faint of heart. ;)


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Joachim Durchholz
Subject: MOPs (warning: LONG)
Date: 
Message-ID: <bn9vae$42i$1@news.oberberg.net>
Pascal Costanza wrote:

> Joachim Durchholz wrote:
> 
>> Pascal Costanza wrote:
>>
>>> For example, static type systems are incompatible with dynamic 
>>> metaprogramming. This is objectively a reduction of expressive power, 
>>> because programs that don't allow for dynamic metaprogramming can't 
>>> be extended in certain ways at runtime, by definition.
>>
>> What is dynamic metaprogramming?
> 
> Writing programs that inspect and change themselves at runtime.

That's just the first part of the answer, so I have to make the second 
part of the question explicit:

What is dynamic metaprogramming good for?

I looked into the papers that you gave the URLs on later, but I'm still 
missing a compelling reason to use MOP. As far as I can see from the 
papers, MOP is a bit like pointers: very powerful, very dangerous, and 
it's difficult to envision a system that does the same without the power 
and danger, but such systems do indeed exist.


(For a summary, scroll to the end of this post.)


Just to enumerate the possibilities in the various URLs given:

- Prioritized forwarding to components
(I think that's a non-recommended technique, as it makes the 
compound object highly dependent on the details of its constituents, 
particularly if a message is understood by many constituents - but 
anyway, here goes:) Any language that has good support for higher-order 
functions can do this directly.
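For instance, the sketch below (names mine) does prioritized forwarding with nothing but first-class functions: walk the components in priority order and delegate to the first one that understands the message.

```python
def forward(components, message, *args):
    """Delegate `message` to the first component, in priority order,
    that implements it."""
    for component in components:
        handler = getattr(component, message, None)
        if handler is not None:
            return handler(*args)
    raise AttributeError(f"no component handles {message!r}")

class Logger:
    def log(self, text):
        return f"logged: {text}"

class Store:
    def save(self, text):
        return f"saved: {text}"

# The compound object is just an ordered list of its constituents.
compound = [Logger(), Store()]
```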

- Dynamic fields
Frankly, I don't understand why on earth one would want to have objects 
with a variant set of fields. I could do the same easily by adding a 
dictionary to the objects, and be done with it (and get the additional 
benefit that the dictionary entries will never collide with a field name).
Conflating the name spaces of field names and dictionary keys might 
offer some syntactic advantages (callers don't need to differentiate 
between static and dynamic fields), but I fail to imagine any good use 
for this all... (which may, of course, be lack of imagination on my 
side, so I'd be happy to see anybody explain a scenario that needs 
exactly this - and then I'll try to see how this can be done without MOP 
*g*).
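The dictionary alternative is indeed a one-liner (a sketch; the names are mine): the dynamic "fields" live in their own dict next to the statically known ones, so they can never collide with a real field name.

```python
class Node:
    def __init__(self, name):
        self.name = name          # a fixed, statically known field
        self.props = {}           # dynamic "fields" in their own namespace

n = Node("root")
n.props["color"] = "red"          # added at runtime, per instance
```

An absent dynamic field is just `n.props.get("weight")` returning None, and no dictionary entry can ever shadow `n.name`.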

- Dynamic protection (based on sender's class/type)
This is a special case of "multiple views" (implement protection by 
handing out a view with a restricted subset of functions to those 
classes - other research areas have called this "capability-based 
programming").

- Multiple views
Again, in a language with proper handling for higher-order functions 
(HOFs), this is easy: a view is just a record of accessor functions, and 
a hidden reference to the record for which the view holds. (If you 
really need that.)
Note that in a language with good HOF support, calls that go through 
such records are syntactically indistinguishable from normal function 
calls. (Such languages do exist; I know for sure that this works with 
Haskell.)
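As a sketch of such a record of accessor functions (names mine): the view is a bundle of closures over the hidden object, so its holder can observe but not mutate it - the capability-style protection mentioned above.

```python
class BankAccount:
    def __init__(self, balance):
        self._balance = balance

    def deposit(self, amount):
        self._balance += amount

    def balance(self):
        return self._balance

def read_only_view(account):
    """A view as a record of accessor functions.

    The record closes over the hidden account and exposes only what the
    holder is allowed to do -- here, just reading the balance.
    """
    return {"balance": account.balance}

acct = BankAccount(50)
view = read_only_view(acct)
```

Calling `view["balance"]()` reads the balance, but there is no way to reach `deposit` through the view.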

- Protocol matching
I simply don't understand the point of this: yes, of course this 
can be done using MOP, but where's the problem that's being simplified 
with that approach?

- Collection of performance data
That's nonportable anyway, so it can be built right into the runtime, 
and with less gotchas (if measurement mechanisms are integrated into the 
runtime, they will rather break than produce bogus data - and I prefer a 
broken instrument to one that will silently give me nonsense readings, 
thank you).

- Result caching
Languages with good HOF support usually have a "memo" or "memoize" 
function that does exactly this.

- Coercion
Well, of all things, this really doesn't need MOP to work well.

- Persistency
(and, as the original author forgot: network proxies - the issues are 
similar)
Now here's a thing that indeed cannot be retrofitted to a language 
without MOP.
(Well, performance counting can't be retrofitted as well, but that's 
just a programmer's tool that I'd /expect/ to be part of the development 
system. I have no qualms about MOP in the developer system, but IMHO it 
should not be part of production code, and persistence and proxying for 
remote objects are needed for running productive systems.)


For the first paper, this leaves me with a single valid application for 
a MOP. At which point I can say that I can require that "any decent 
language should have this built in": not in the sense that every 
run-time system should include a working TCP/IP stack, but that every 
run-time system should include mechanisms for marshalling and 
unmarshalling objects (and quite many do).


On to the second paper (Brant/Foote/Johnson/Roberts).

- Image stripping
I.e. finding out which functions might be called by a given application.
While this isn't Smalltalk-specific, it's specific to dynamic languages, 
so this doesn't count: finding the set of called functions is /trivial/ 
in a static language, since statically-typed languages don't usually 
offer ways to construct function calls from lexical elements as typical 
dynamic languages do.

- Class collaboration, interaction diagrams
Useful and interesting tools.
Of course, if the compiler is properly modularized, it's easy to write 
them based on the string representation, instead of using reflective 
capabilities.

- Synchronized methods, pre/postcondition checking
Here, the sole advantage of having an implementation in source code 
instead of in the run-time system seems to be that no recompilation is 
necessary if one wishes to change the status (method is synchronized or 
not, assertions are checked or not).
Interestingly, this is not a difference between MOP and no MOP, it's a 
difference between static and dynamic languages.
Even that isn't too interesting. For example, I have worked with Eiffel 
compilers, and at least two of them do not require any recompilation if 
you want to enable or disable assertion checking (plus, at least for one 
compiler, it's possible to switch checking on and off on a per-program, 
per-class, or even per-function basis), so this isn't the exclusive 
domain of dynamic languages.
Of course, such things are easier to add as an afterthought if the 
system is dynamic and such changes can be done with user code - but 
since language and run-time system design are as much about giving power 
as guarantees to the developer, and giving guarantees necessarily 
entails restricting what a developer can do, I'm entirely unconvinced 
that a dynamic language is the better way to do that.

- Multimethods
Well, I don't see much value in them anyway...


... On to Andreas Paepcke's paper.
I found it more interesting than the other two because it clearly spells 
out what MOPs are intended to be good for.

One of the main purposes, in Paepcke's view, is making it easier to 
write tools. In fact reflective systems make this easier, because all 
the tricky details of converting source code into an internal data 
object have already been handled by the compiler.
On the other hand, I don't quite see why this should be more difficult 
for a static language.
Of course, if the language designer "just wanted to get it to compile", 
anybody who wants to write tools for the language has to rewrite the 
parser and decorator, simply because the original tools are not built 
for separating these phases (to phrase it in a polite manner). However, 
in the languages where it's easy to "get it to compile" without 
compromising modularity, I have seen lots of user-written tools, too. I 
think the main difference is that when designing a run-time system for 
introspection, designers are forced to do a very modular compiler design 
- which is a Good Thing, but you can do a good design for a 
non-introspective language just as well :-)

In other words, I don't think that writing tools provides enough reason 
for introspection: the goals can be attained in other ways, too.


The other main purpose in his book is the ability to /extend/ the 
language (and, as should go without saying, without affecting code that 
doesn't use the extensions).
He claims it's good for experimentation (with which I agree, though I 
wouldn't want or need language-experimentation machinery in production code).

Oh, I see that's already enough reasons in his book... not in mine.



Summary:
========

Most reasons given for the usefulness of a MOP are irrelevant. The 
categories here are (in no particular order):
* Unneeded in a language without introspection (the argument becomes 
circular)
* Easily replaced by good higher-order function support
* Programmer tools (dynamic languages tend to be better here, but that's 
more of a historical accident: languages with a MOP are usually highly 
dynamic, so a good compiler interface is a must - but nothing prevents 
the designers of static languages from building their compilers with a 
good interface, and in fact some static languages have rich tool 
cultures just like the dynamic ones)

A few points have remained open, either because I misunderstood what the 
respective author meant, or because I don't see any problem in handling 
the issues statically, or because I don't see any useful application of 
the mechanism. The uses include:
* Dynamic fields
* Protocol matching
* Coercion

And, finally, there's the list of things that can be done using MOP, but 
where I think that they are better handled as part of the run-time system:
* (Un-)Marshalling
* Synchronization
* Multimethods

For (un-)marshalling, I think that this should be closed off and hidden 
from the programmer's powers because it opens up all the implementation 
details of all the objects. Anybody inspecting source code will have to 
check the entire sources to be sure that a private field in a record is 
truly private, and not accessed via the mechanisms that make user-level 
implementation of (un-)marshalling possible.
Actually, all you need is a builtin pair of functions that convert some 
data object from and to a byte stream; user-level code can then still 
implement all the networking protocol layers, connection semantics etc.
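Python's pickle module (listed under THE GOOD at the top of this thread) is exactly such a built-in pair of functions:

```python
import pickle

record = {"name": "example", "values": [1, 2, 3]}
wire = pickle.dumps(record)    # object -> byte stream
restored = pickle.loads(wire)  # byte stream -> object
print(restored == record)      # True
```

User-level code can then layer connection handling and protocol logic on top of those two calls.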

For synchronization, guarantees are more important than flexibility. To 
be sure that a system has no race conditions, I must be sure that the 
locking mechanism in place (whatever it is) will work across all 
modules, regardless of author. Making libraries interoperate that use 
different locking strategies sounds like a nightmare to me - and if 
everybody must use the same locking strategy, it should be part of the 
language, not part of a user-written MOP library.
However, that's just a preliminary view; I'd be interested in hearing 
reports from people who actually encountered such a situation (I 
haven't, so I may be seeing problems where there aren't any).

For multimethods, I don't see that they should be part of a language 
anyway - but that's a discussion for another thread that I don't wish to 
repeat now (and this post is too long already).


Rambling mode OFF.

Regards,
Jo
From: Craig Brozefsky
Subject: Re: MOPs (warning: LONG)
Date: 
Message-ID: <87znfrtgol.fsf@piracy.red-bean.com>
Joachim Durchholz <·················@web.de> writes:

> And, finally, there's the list of things that can be done using MOP,
> but where I think that they are better handled as part of the run-time
> system:
> * (Un-)Marshalling
> * Synchronization
> * Multimethods

The MOP is an interface to the run-time system for common object
services.  I do not understand your position that these would be
better handled by the run-time.

> For (un-)marshalling, I think that this should be closed off and
> hidden from the programmer's powers because it opens up all the
> implementation details of all the objects.

What if I want to (un-)marshall from/to something besides a byte
stream, such as an SQL database?  I don't want one of the object
services my system depends on to be so opaque because a peer thought I
would be better off that way.  Then again, I have never understood the
desire to hide things in programming languages.

> Anybody inspecting source code will have to check the entire sources
> to be sure that a private field in a record is truly private, and
> not accessed via the mechanisms that make user-level implementation
> of (un-)marshalling possible.

If you look at the MOP in CLOS, you can use the slot-value-using-class
method to ensure that getting/setting the slot thru any interface will
trigger the appropriate code.  It does not matter whether the slot is 
private or public, or whether they use SLOT-VALUE or an accessor.  This 
is also useful for
transaction mgmt.
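A loose Python analogue of that hook (my own illustration, not the CLOS MOP itself -- Python's data model is far less complete, but it can intercept every slot write):

```python
class Audited:
    """Every attribute write, through any interface, runs __setattr__ --
    loosely comparable to slot-value-using-class for writes."""
    def __init__(self):
        object.__setattr__(self, "log", [])   # bypass the hook here

    def __setattr__(self, name, value):
        self.log.append((name, value))         # record the write
        object.__setattr__(self, name, value)  # then perform it

a = Audited()
a.x = 1
a.x = 2
print(a.log)  # [('x', 1), ('x', 2)]
```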

The MOP is an interface to the run-time's object services.



-- 
Sincerely, Craig Brozefsky <·····@red-bean.com>
No war! No racist scapegoating! No attacks on civil liberties!
Chicago Coalition Against War & Racism: www.chicagoantiwar.org
From: Andrew Dalke
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <kZAlb.1060$I04.795@newsread4.news.pas.earthlink.net>
Pascal Costanza:
> ...because static type systems work by reducing the expressive power of
> a language. It can't be any different for a strict static type system.
> You can't solve the halting problem in a general-purpose language.
>
> This means that eventually you might need to work around language
> restrictions, and this introduces new potential sources for bugs.

Given what I know of embedded systems, I can effectively
guarantee you that all the code on the rocket was proven
to halt in not only a finite amount of time but a fixed amount of
time.

So while what you say may be true for a general purpose
language, that appeal to the halting problem doesn't apply given
a hard real time constraint.

                    Andrew
                    ·····@dalkescientific.com
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn77a3$qj3$2@newsreader2.netcologne.de>
Andrew Dalke wrote:

> Pascal Costanza:
> 
>>...because static type systems work by reducing the expressive power of
>>a language. It can't be any different for a strict static type system.
>>You can't solve the halting problem in a general-purpose language.
>>
>>This means that eventually you might need to work around language
>>restrictions, and this introduces new potential sources for bugs.
> 
> 
> Given what I know of embedded systems, I can effectively
> guarantee you that all the code on the rocket was proven
> to halt in not only a finite amount of time but a fixed amount of
> time.

Yes, this is a useful restriction for a certain scenario. I don't have 
anything against restrictions put on code, provided these restrictions 
are justified.

Static type systems are claimed to generally improve your code. I don't 
see that.


Pascal
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f972315$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>Marshall Spight wrote:
>> But why should more regression testing mean less static type checking?
>> Both are useful. Both catch bugs. Why ditch one for the other?
>
>...because static type systems work by reducing the expressive power of 
>a language. It can't be any different for a strict static type system. 
>You can't solve the halting problem in a general-purpose language.

Most modern "statically typed" languages (e.g. Mercury, Glasgow Haskell,
OCaml, C++, Java, C#, etc.) aren't *strictly* statically typed anyway. 
They generally have some support for *optional* dynamic typing.

This is IMHO a good trade-off.  Most of the time, you want static typing;
it helps in the design process, with documentation, error checking, and
efficiency.  Sometimes you need a bit more flexibility than the
static type system allows, and then in those few cases, you can make use
of dynamic typing ("univ" in Mercury, "Dynamic" in ghc,
"System.Object" in C#, etc.).  The need to do this is not uncommon
in languages like C# and Java that don't support parametric polymorphism,
but pretty rare in languages that do.

>I think soft typing is a good compromise, because it is a mere add-on to 
>an otherwise dynamically typed language, and it allows programmers to 
>override the decisions of the static type system when they know better.

Soft typing systems give you dynamic typing unless you explicitly ask
for static typing.  That is the wrong default, IMHO.  It works much
better to add dynamic typing to a statically typed language than the
other way around.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Matthew Danish
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031023015234.GU1454@mapcar.org>
On Thu, Oct 23, 2003 at 12:38:50AM +0000, Fergus Henderson wrote:
> Pascal Costanza <········@web.de> writes:
> >Marshall Spight wrote:
> >> But why should more regression testing mean less static type checking?
> >> Both are useful. Both catch bugs. Why ditch one for the other?
> >
> >...because static type systems work by reducing the expressive power of 
> >a language. It can't be any different for a strict static type system. 
> >You can't solve the halting problem in a general-purpose language.
> 
> Most modern "statically typed" languages (e.g. Mercury, Glasgow Haskell,
> OCaml, C++, Java, C#, etc.) aren't *strictly* statically typed anyway. 
> They generally have some support for *optional* dynamic typing.
> 
> This is IMHO a good trade-off.  Most of the time, you want static typing;
> it helps in the design process, with documentation, error checking, and
> efficiency.  Sometimes you need a bit more flexibility than the
> static type system allows, and then in those few cases, you can make use
> of dynamic typing ("univ" in Mercury, "Dynamic" in ghc,
> "System.Object" in C#, etc.).  The need to do this is not uncommon
> in languages like C# and Java that don't support parametric polymorphism,
> but pretty rare in languages that do.

The trouble with these `dynamic' extensions is that they are `dynamic
type systems' from a statically typed viewpoint.  A person who uses
truly dynamically typed languages would not consider them to be the
same thing.

In SML, for example, such an extension might be implemented using a sum
type, even using an `exn' type so that it can be extended in separate
places.  The moment this system fails (and when a true dynamic system
carries on) is when such a type is redefined.  The reason is because the
new type is not considered to be the same as the old type, due to
generativity of type names, and old code requires recompilation.
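Python shows a mild form of the same generativity wrinkle, incidentally: redefining a class binds the name to a brand-new type object, and existing instances still belong to the old one (my own illustration):

```python
class Thing:
    pass

old = Thing()    # an instance of the *first* Thing
OldThing = Thing

class Thing:     # "redefining" Thing creates a fresh type object
    pass

print(isinstance(old, Thing))     # False: old belongs to the old type
print(isinstance(old, OldThing))  # True
```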

I'm told Haskell has extensions that will work around even this, but the
last time I tried to play with those, it failed miserably because
Haskell doesn't really support an interactive REPL so there was no way
to test it.  (Maybe this was ghc's fault?)

As for Java/C#, downcasting is more of an example of static type systems
getting in the way of OOP rather than of a dynamic type system.  (It's
because those languages are the result of an unholy union between the
totally dynamic Smalltalk and the awkwardly static C++).

> >I think soft typing is a good compromise, because it is a mere add-on to 
> >an otherwise dynamically typed language, and it allows programmers to 
> >override the decisions of the static type system when they know better.
> 
> Soft typing systems give you dynamic typing unless you explicitly ask
> for static typing.  That is the wrong default, IMHO.  It works much
> better to add dynamic typing to a statically typed language than the
> other way around.

I view static typing as an added analysis stage.  In that light, it
makes no sense to `add' dynamic typing to it.  Also, I think that static
typing should be part of a more comprehensive static analysis phase
which itself is part of a greater suite of tests.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Matthew Danish
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031023075825.GV1454@mapcar.org>
Here's a link to a relevant system that may be worthwhile to check out:
http://www.simulys.com/guideto.htm

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn7a3p$1h6$1@newsreader2.netcologne.de>
Fergus Henderson wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>Marshall Spight wrote:
>>
>>>But why should more regression testing mean less static type checking?
>>>Both are useful. Both catch bugs. Why ditch one for the other?
>>
>>...because static type systems work by reducing the expressive power of 
>>a language. It can't be any different for a strict static type system. 
>>You can't solve the halting problem in a general-purpose language.
> 
> 
> Most modern "statically typed" languages (e.g. Mercury, Glasgow Haskell,
> OCaml, C++, Java, C#, etc.) aren't *strictly* statically typed anyway. 
> They generally have some support for *optional* dynamic typing.
> 
> This is IMHO a good trade-off.  Most of the time, you want static typing;
> it helps in the design process, with documentation, error checking, and
> efficiency.

+ Design process: There are clear indications that processes like 
extreme programming work better than processes that require some kind of 
specification stage. Dynamic typing works better with XP than static 
typing because with dynamic typing you can write unit tests without 
having the need to immediately write appropriate target code.

+ Documentation: Comments are usually better for handling documentation. 
;) If you want your "comments" checked, you can add assertions.

+ Error checking: I can only guess what you mean by this. If you mean 
something like Java's checked exceptions, there are clear signs that 
this is a very bad feature.

+ Efficiency: As Paul Graham puts it, efficiency comes from profiling. 
In order to achieve efficiency, you need to identify the bottle-necks of 
your program. No amount of static checks can identify bottle-necks, you 
have to actually run the program to determine them.
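The "checked comments" under the Documentation point might look like this in Python (my own illustration):

```python
def scale(values, factor):
    """Multiply each element of values by factor.

    The assertions are 'checked comments': they document, and verify at
    run time, what a static type declaration would state up front.
    """
    assert isinstance(factor, (int, float)), "factor must be numeric"
    assert all(isinstance(v, (int, float)) for v in values)
    return [v * factor for v in values]

print(scale([1, 2, 3], 2.0))  # [2.0, 4.0, 6.0]
```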

> Sometimes you need a bit more flexibility than the
> static type system allows, and then in those few cases, you can make use
> of dynamic typing ("univ" in Mercury, "Dynamic" in ghc,
> "System.Object" in C#, etc.).  The need to do this is not uncommon
> in languages like C# and Java that don't support parametric polymorphism,
> but pretty rare in languages that do.

I wouldn't count the use of java.lang.Object as a case of dynamic 
typing. You need to explicitly cast objects of this type to some class 
in order to make useful method calls. You only do this to satisfy the 
static type system. (BTW, this is one of the sources for potential bugs 
that you don't have in a decent dynamically typed language.)
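By contrast, pulling values out of a Python container involves no declared element type and no cast at all (my own illustration):

```python
# No declared element type, no casts: just ask for what you need.
items = [1, "two", [3]]
lengths = [len(item) for item in items if hasattr(item, "__len__")]
print(lengths)  # [3, 1]
```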

>>I think soft typing is a good compromise, because it is a mere add-on to 
>>an otherwise dynamically typed language, and it allows programmers to 
>>override the decisions of the static type system when they know better.
> 
> Soft typing systems give you dynamic typing unless you explicitly ask
> for static typing.  That is the wrong default, IMHO.  It works much
> better to add dynamic typing to a statically typed language than the
> other way around.

I don't think so.


Pascal
From: Ralph Becket
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3638acfd.0310230039.306b14f@posting.google.com>
Pascal Costanza <········@web.de> wrote in message news:<············@newsreader2.netcologne.de>...
> 
> + Design process: There are clear indications that processes like 
> extreme programming work better than processes that require some kind of 
> specification stage. Dynamic typing works better with XP than static 
> typing because with dynamic typing you can write unit tests without 
> having the need to immediately write appropriate target code.

This is utterly bogus.  If you write unit tests beforehand, you are 
already pre-specifying the interface that the code to be tested will 
present.

I fail to see how dynamic typing can confer any kind of advantage here.

> + Documentation: Comments are usually better for handling documentation. 
> ;) If you want your "comments" checked, you can add assertions.

Are you seriously claiming that concise, *automatically checked* 
documentation (which is one function served by explicit type 
declarations) is inferior to unchecked, ad hoc commenting?

For one thing, type declarations *cannot* become out-of-date (as
comments can and often do) because a discrepancy between type
declaration and definition will be immediately flagged by the compiler.

> + Error checking: I can only guess what you mean by this. If you mean 
> something like Java's checked exceptions, there are clear signs that 
> this is a very bad feature.

I think Fergus was referring to static error checking, but (and forgive 
me if I'm wrong here) that's a feature you seem to insist has little or
no practical value - indeed, you seem to claim it is even an impediment
to productive programming.  I'll leave this point as one of violent 
disagreement...

> + Efficiency: As Paul Graham puts it, efficiency comes from profiling. 
> In order to achieve efficiency, you need to identify the bottle-necks of 
> your program. No amount of static checks can identify bottle-necks, you 
> have to actually run the program to determine them.

I don't think you understand much about language implementation.
A strong, expressive, static type system provides for optimisations
that cannot be done any other way.  These optimizations alone can be
expected to make a program several times faster.  For example:
- no run-time type checks need be performed;
- data representation is automatically optimised by the compiler 
  (e.g. by pointer tagging);
- polymorphic code can be inlined and/or specialised according to each
  application;
- if the language does not support dynamic typing then values need not
  carry their own type identifiers around with them, thereby saving 
  space;
- if the language does support explicit dynamic typing, then only 
  those places using that facility need plumb in the type identifiers 
  (something done automatically by the compiler.)

On top of all that, you can still run your code through the profiler, 
although the need for hand-tuned optimization (and consequent code
obfuscation) may be completely obviated by the speed advantage 
conferred by the compiler exploiting a statically checked type system.

> I wouldn't count the use of java.lang.Object as a case of dynamic 
> typing. You need to explicitly cast objects of this type to some class 
> in order to make useful method calls. You only do this to satisfy the 
> static type system. (BTW, this is one of the sources for potential bugs 
> that you don't have in a decent dynamically typed language.)

No!  A thousand times, no!

Let me put it like this.  Say I have a statically, expressively, strongly 
typed language L.  And I have another language L' that is identical to
L except it lacks the type system.  Now, any program in L that has the
type declarations removed is also a program in L'.  The difference is
that a program P rejected by the compiler for L can be converted to a
program P' in L' which *may even appear to run fine for most cases*.  
However, and this is the really important point, P' is *still* a 
*broken* program.  Simply ignoring the type problems does not make 
them go away: P' still contains all the bugs that program P did.

> > Soft typing systems give you dynamic typing unless you explicitly ask
> > for static typing.  That is the wrong default, IMHO.  It works much
> > better to add dynamic typing to a statically typed language than the
> > other way around.
> 
> I don't think so.

Yes, but your arguments are unconvincing.  I should point out that 
most of the people on comp.lang.functional (a) probably used weakly/
dynamically typed languages for many years, and at an expert level,
before discovering statically typed (declarative) programming and 
(b) probably still do use such languages on a regular basis.  
Expressive, static typing is not a message shouted from ivory towers 
by people lacking real-world experience.

Why not make the argument more concrete?  Present a problem 
specification for an every-day programming task that you think 
seriously benefits from dynamic typing.  Then we can discuss the 
pros and cons of different approaches.

-- Ralph
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <myfirstname.mylastname-2310030857350001@192.168.1.51>
In article <···························@posting.google.com>,
····@cs.mu.oz.au (Ralph Becket) wrote:

> Let me put it like this.  Say I have a statically, expressively, strongly 
> typed language L.  And I have another language L' that is identical to
> L except it lacks the type system.  Now, any program in L that has the
> type declarations removed is also a program in L'.  The difference is
> that a program P rejected by the compiler for L can be converted to a
> program P' in L' which *may even appear to run fine for most cases*.  
> However, and this is the really important point, P' is *still* a 
> *broken* program.  Simply ignoring the type problems does not make 
> them go away: P' still contains all the bugs that program P did.

No.  The fallacy in this reasoning is that you assume that "type error"
and "bug" are the same thing.  They are not.  Some bugs are not type
errors, and some type errors are not bugs.  In the latter circumstance
simply ignoring them can be exactly the right thing to do.

(On the other hand, many, perhaps most, type errors are bugs, and so
having a type system provide warnings can be a very useful thing IMO.)

E.
From: Ralph Becket
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3638acfd.0310231647.16db77b4@posting.google.com>
······················@jpl.nasa.gov (Erann Gat) wrote in message news:<·······································@192.168.1.51>...
> 
> No.  The fallacy in this reasoning is that you assume that "type error"
> and "bug" are the same thing.  They are not.  Some bugs are not type
> errors, and some type errors are not bugs.  In the latter circumstance
> simply ignoring them can be exactly the right thing to do.

Just to be clear, I do not believe "bug" => "type error".  However, I do
claim that "type error" (in reachable code) => "bug".  If at some point
a program P' (in L') may eventually abort with an exception due to an
ill typed function application then I would insist that P' is buggy.

Here's the way I see it:
(1) type errors are extremely common;
(2) an expressive, statically checked type system (ESCTS) will identify 
  almost all of these errors at compile time;
(3) type errors flagged by a compiler for an ESCTS can pinpoint the source
  of the problem whereas ad hoc assertions in code will only identify a
  symptom of a type error;
(4) the programmer does not have to litter type assertions in a program
  written in a language with an ESCTS;
(5) an ESCTS provides optimization opportunities that would otherwise
  be unavailable to the compiler;
(6) there will be cases where the ESCTS requires one to code around a
  constraint that is hard/impossible to express in the ESCTS (the more
  expressive the type system, the smaller the set of such cases will be.)

The question is whether the benefits of (2), (3), (4) and (5) outweigh 
the occasional costs of (6).

-- Ralph
From: Paul F. Dietz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <KrKdnVmOxJrQfgWiRVn-vg@dls.net>
Ralph Becket wrote:

> Here's the way I see it:
> (1) type errors are extremely common;
> (2) an expressive, statically checked type system (ESCTS) will identify 
>   almost all of these errors at compile time;
> (3) type errors flagged by a compiler for an ESCTS can pinpoint the source
>   of the problem whereas ad hoc assertions in code will only identify a
>   symptom of a type error;
> (4) the programmer does not have to litter type assertions in a program
>   written in a language with an ESCTS;
> (5) an ESCTS provides optimization opportunities that would otherwise
>   be unavailable to the compiler;
> (6) there will be cases where the ESCTS requires one to code around a
>   constraint that is hard/impossible to express in the ESCTS (the more
>   expressive the type system, the smaller the set of such cases will be.)

However,

(7) Developing reliable software also requires extensive testing to
   detect bugs other than type errors, and
(8) These tests will usually detect most of the bugs that static
   type checking would have detected.

So the *marginal* benefit of static type checking is reduced, unless you
weren't otherwise planning to test your code very well.

BTW, is (3) really justified?  My (admittedly old) experience with ML
was that type errors can be rather hard to track back to their sources.

	Paul
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <BTgmb.19555$e01.36761@attbi_s02>
"Paul F. Dietz" <·····@dls.net> wrote in message ···························@dls.net...
>
> (7) Developing reliable software also requires extensive testing to
>    detect bugs other than type errors, and
> (8) These tests will usually detect most of the bugs that static
>    type checking would have detected.

Whether and to what degree (8) is true is the big open
question in this debate. Does anyone have any objective
metrics? Anyone know how to get them?

This strikes me as a very hard thing to quantify.


Marshall
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9d5adc$1@news.unimelb.edu.au>
"Paul F. Dietz" <·····@dls.net> writes:

>Ralph Becket wrote:
>
>> Here's the way I see it:
...
>> (3) type errors flagged by a compiler for an ESCTS can pinpoint the source
>>   of the problem whereas ad hoc assertions in code will only identify a
>>   symptom of a type error;
...
>BTW, is (3) really justified?  My (admittedly old) experience with ML
>was that type errors can be rather hard to track back to their sources.

It depends on whether you declare the types of functions or not.
If you leave it up to the compiler to infer the types of all the
functions, then compilers have a difficult job of pinpointing errors,
because sometimes your incorrectly-implemented functions will be
type-correct, just with a different type than you expected, and this
will then lead to type errors further up the call tree.

But declaring the intended types of functions improves things dramatically.
If you get a type error, and you can't immediately figure out what is wrong,
declaring the intended types of the functions involved and recompiling
will allow you to quickly pinpoint the problem.

Of course, the type checker's error messages won't tell you _exactly_
where the error is; they can only point out inconsistencies, e.g. between
the code for a function and its type declaration, or between the code
for a function and one or more of the type declarations for the functions
that it calls.  But such inconsistencies should be easy to resolve;
the programmer should be able to tell which of the contradictory parts
are wrong.  (The exception is when the inconsistency actually reveals
a design error; in the worst case, a major redesign may be required.)

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2410030836220001@192.168.1.51>
In article <····························@posting.google.com>,
····@cs.mu.oz.au (Ralph Becket) wrote:

> ······················@jpl.nasa.gov (Erann Gat) wrote in message
news:<·······································@192.168.1.51>...
> > 
> > No.  The fallacy in this reasoning is that you assume that "type error"
> > and "bug" are the same thing.  They are not.  Some bugs are not type
> > errors, and some type errors are not bugs.  In the latter circumstance
> > simply ignoring them can be exactly the right thing to do.
> 
> Just to be clear, I do not believe "bug" => "type error".  However, I do
> claim that "type error" (in reachable code) => "bug".

But that just begs the question of what you consider a type error.  Does
the following code contain a type error?

(defun rsq (a b)
  "Return the square root of the sum of the squares of a and b"
  (sqrt (+ (* a a) (* b b))))

How about this one?

(defun rsq1 (a b)
  (or (ignore-errors (rsq a b)) 'FOO))

or:

(defun rsq2 (a b)
  (or (ignore-errors (rsq a b)) (error "Foo")))


> Here's the way I see it:
> (1) type errors are extremely common;

In my experience they are quite rare.

> (2) an expressive, statically checked type system (ESCTS) will identify 
>   almost all of these errors at compile time;

And then some.  That's the problem.

> (3) type errors flagged by a compiler for an ESCTS can pinpoint the source
>   of the problem whereas ad hoc assertions in code will only identify a
>   symptom of a type error;

Really?  If there's a type mismatch how does the type system know if the
problem is in the caller or the callee?

> (4) the programmer does not have to litter type assertions in a program
>   written in a language with an ESCTS;

But he doesn't have to litter type assertions in a program written in a
language without an ESCTS either.

> (5) an ESCTS provides optimization opportunities that would otherwise
>   be unavailable to the compiler;

That is true.  Whether this benefit outweighs the drawbacks is arguable.

> (6) there will be cases where the ESCTS requires one to code around a
>   constraint that is hard/impossible to express in the ESCTS (the more
>   expressive the type system, the smaller the set of such cases will be.)
> 
> The question is whether the benefits of (2), (3), (4) and (5) outweigh 
> the occasional costs of (6).

Yes, that's what it comes down to.  There are both costs and benefits. 
The balance probably tips one way in some circumstances, the other way in
others.

E.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn86o9$gfh$1@newsreader2.netcologne.de>
Ralph Becket wrote:

> Pascal Costanza <········@web.de> wrote in message news:<············@newsreader2.netcologne.de>...
> 
>>+ Design process: There are clear indications that processes like 
>>extreme programming work better than processes that require some kind of 
>>specification stage. Dynamic typing works better with XP than static 
>>typing because with dynamic typing you can write unit tests without 
>>having the need to immediately write appropriate target code.
> 
> 
> This is utterly bogus.  If you write unit tests beforehand, you are 
> already pre-specifying the interface that the code to be tested will 
> present.
> 
> I fail to see how dynamic typing can confer any kind of advantage here.

Read the literature on XP.

>>+ Documentation: Comments are usually better for handling documentation. 
>>;) If you want your "comments" checked, you can add assertions.
> 
> 
> Are you seriously claiming that concise, *automatically checked* 
> documentation (which is one function served by explicit type 
> declarations) is inferior to unchecked, ad hoc commenting?

I am sorry, but in my book, assertions are automatically checked.

> For one thing, type declarations *cannot* become out-of-date (as
> comments can and often do) because a discrepancy between type
> declaration and definition will be immediately flagged by the compiler.

The same holds for assertions as soon as they are run by the test suite.

>>+ Error checking: I can only guess what you mean by this. If you mean 
>>something like Java's checked exceptions, there are clear signs that 
>>this is a very bad feature.
> 
> 
> I think Fergus was referring to static error checking, but (and forgive 
> me if I'm wrong here) that's a feature you seem to insist has little or
> no practical value - indeed, you seem to claim it is even an impediment
> to productive programming.  I'll leave this point as one of violent 
> disagreement...

It has value for certain cases, but not in general.

>>+ Efficiency: As Paul Graham puts it, efficiency comes from profiling. 
>>In order to achieve efficiency, you need to identify the bottle-necks of 
>>your program. No amount of static checks can identify bottle-necks, you 
>>have to actually run the program to determine them.
> 
> 
> I don't think you understand much about language implementation.

...and I don't think you understand much about dynamic compilation. Have 
you ever checked some not-so-recent-anymore work about, say, the HotSpot 
virtual machine?

> A strong, expressive, static type system provides for optimisations
> that cannot be done any other way.  These optimizations alone can be
> expected to make a program several times faster.  For example:
> - no run-time type checks need be performed;
> - data representation is automatically optimised by the compiler 
>   (e.g. by pointer tagging);
> - polymorphic code can be inlined and/or specialised according to each
>   application;
> - if the language does not support dynamic typing then values need not
>   carry their own type identifiers around with them, thereby saving 
>   space;
> - if the language does support explicit dynamic typing, then only 
>   those places using that facility need plumb in the type identifiers 
>   (something done automatically by the compiler.)

You are only talking about micro-efficiency here. I don't care about 
that, my machine is fast enough for a decent dynamically typed language.

> On top of all that, you can still run your code through the profiler, 
> although the need for hand-tuned optimization (and consequent code
> obfuscation) may be completely obviated by the speed advantage 
> conferred by the compiler exploiting a statically checked type system.

Have you checked this?

>>I wouldn't count the use of java.lang.Object as a case of dynamic 
>>typing. You need to explicitly cast objects of this type to some class 
>>in order to make useful method calls. You only do this to satisfy the 
>>static type system. (BTW, this is one of the sources for potential bugs 
>>that you don't have in a decent dynamically typed language.)
> 
> 
> No!  A thousand times, no!
> 
> Let me put it like this.  Say I have a statically, expressively, strongly 
> typed language L.  And I have another language L' that is identical to
> L except it lacks the type system.  Now, any program in L that has the
> type declarations removed is also a program in L'.  The difference is
> that a program P rejected by the compiler for L can be converted to a
> program P' in L' which *may even appear to run fine for most cases*.  
> However, and this is the really important point, P' is *still* a 
> *broken* program.  Simply ignoring the type problems does not make 
> them go away: P' still contains all the bugs that program P did.

You are making several mistakes here. I don't argue for languages that 
don't have a type system, I argue for languages that are dynamically 
typed. We are not debating strong typing.

Furthermore, a program P that is rejected by L is not necessarily broken.

>>>Soft typing systems give you dynamic typing unless you explicitly ask
>>>for static typing.  That is the wrong default, IMHO.  It works much
>>>better to add dynamic typing to a statically typed language than the
>>>other way around.
>>
>>I don't think so.
> 
> 
> Yes, but your arguments are unconvincing.  I should point out that 
> most of the people on comp.lang.functional (a) probably used weakly/
> dynamically typed languages for many years, and at an expert level,
> before discovering statically typed (declarative) programming and 

Weak typing and dynamic typing are not the same thing.

> (b) probably still do use such languages on a regular basis.  
> Expressive, static typing is not a message shouted from ivory towers 
> by people lacking real-world experience.

> Why not make the argument more concrete?  Present a problem 
> specification for an every-day programming task that you think 
> seriously benefits from dynamic typing.  Then we can discuss the 
> pros and cons of different approaches.

No. The original question asked in this thread was along the lines of 
why abandon static type systems and why not use them always. I don't 
need to convince you that a proposed general solution doesn't always 
work; you have to convince me that it always works.

Otherwise I could come up with some other arbitrary restriction and 
claim that it is a general solution for writing better programs, and ask 
you to give counter-examples as well. This is not a reasonable approach 
IMHO.

There are excellent programs out there that have been written with 
static type systems, and there are also excellent programs out there 
that have been written without static type systems. This is a clear 
indication that static type systems are not a necessary condition for 
writing excellent programs.

Furthermore, there are crap programs out there that have been written 
with static type systems, so a static type system is also not a 
sufficient condition for writing good software.

The burden of proof is on the one who proposes a solution.


Pascal
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8etq$e8o$1@news.oberberg.net>
Pascal Costanza wrote:

> Ralph Becket wrote:
>> I fail to see how dynamic typing can confer any kind of advantage here.
> 
> Read the literature on XP.

Note that most literature contrasts dynamic typing with the static type 
systems of C++ and/or Java. Good type systems are /far/ better.

>> Are you seriously claiming that concise, *automatically checked* 
>> documentation (which is one function served by explicit type 
>> declarations) is inferior to unchecked, ad hoc commenting?
> 
> I am sorry, but in my book, assertions are automatically checked.

But only at runtime, where a logic flaw may or may not trigger the 
assertion.
(Assertions are still useful: if they are active, they prove that the 
errors checked by them didn't occur in a given program run. This can 
still be useful. But then, production code usually runs with assertion 
checking off - which is exactly the point where knowing that some bug 
occurred would be more important...)

>> For one thing, type declarations *cannot* become out-of-date (as
>> comments can and often do) because a discrepancy between type
>> declaration and definition will be immediately flagged by the compiler.
> 
> The same holds for assertions as soon as they are run by the test suite.

A test suite can never catch all permutations of data that may occur (on 
a modern processor, you can't even check the increment-by-one operation 
with that, the universe will end before the CPU has counted even half of 
the full range).

>>> + Efficiency: As Paul Graham puts it, efficiency comes from 
>>> profiling. In order to achieve efficiency, you need to identify the 
>>> bottle-necks of your program. No amount of static checks can identify 
>>> bottle-necks, you have to actually run the program to determine them.
>>
>> I don't think you understand much about language implementation.
> 
> 
> ....and I don't think you understand much about dynamic compilation. 
> Have you ever checked some not-so-recent-anymore work about, say, the 
> HotSpot virtual machine?

Well, I did - and the results were, ahem, unimpressive.
Besides, HotSpot is for Java, which is statically typed, so I don't 
really see your point here... unless we're talking about different VMs.

And, yes, VMs got pretty fast these days (and that actually happened 
several years ago).
It's only that compiled languages still have a good speed advantage - 
making a VM fast requires just that extra amount of effort which, when 
invested into a compiler, will make the compiled code still run faster 
than the VM code.
Also, I have seen several cases where VM code just plain sucked 
performance-wise until it was carefully hand-optimized. (A concrete 
example: the all-new, great graphics subsystem for Squeak that could do 
wonders like rendering fonts with all sorts of funky effects, do 3D 
transformations on the fly, and whatnot... I left Squeak before those 
optimizations became mainstream, but I'm pretty sure that Squeak got 
even faster. Yet Squeak is still a bit sluggish... only marginally so, 
and certainly no more sluggish than the bloatware that's around and that 
commercial programmers are forced to write, but efficiency is simply 
more of a concern and a manpower hog than with a compiled language.)

> There are excellent programs out there that have been written with 
> static type systems, and there are also excellent programs out there 
> that have been written without static type systems. This is a clear 
> indication that static type systems are not a necessary condition for 
> writing excellent programs.

Hey, there are also excellent programs written in assembly. By your 
argument, using a higher-level language is not a necessary condition for 
writing excellent programs.

The question is: what effort goes into an excellent program? Is static 
typing a help or a hindrance?

One thing I do accept: that non-inferring static type systems like those 
of C++ and Java are a PITA. Changing a type in some interface tends to 
cost a day or more, chasing all the consequences in callers, subclasses, 
and whatnot, and I don't need that (though it does tell me all the 
places where I should take a look to check if the change didn't break 
anything, so this isn't entirely wasted time).
I'm still unconvinced that an inferring type system is worse than 
run-time type checking. (Except for that "dynamic metaprogramming" thing 
I'd like to know more about. In my book, things that are overly powerful 
are also overly uncontrollable, but that may be an exception.)

Regards,
Jo
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8nhq$l04$1@f1node01.rhrz.uni-bonn.de>
Joachim Durchholz wrote:
> Pascal Costanza wrote:
> 
>> Ralph Becket wrote:
>>
>>> I fail to see how dynamic typing can confer any kind of advantage here.
>>
>> Read the literature on XP.
> 
> Note that most literature contrasts dynamic typing with the static type 
> systems of C++ and/or Java. Good type systems are /far/ better.

You are changing topics here.

In a statically typed language, when I write a test case that calls a 
specific method, I need to write at least one class that implements at 
least that method, otherwise the code won't compile.

In a dynamically typed language I can concentrate on writing the test 
cases first and don't need to write dummy code to make some arbitrary 
static checker happy.

>>> Are you seriously claiming that concise, *automatically checked* 
>>> documentation (which is one function served by explicit type 
>>> declarations) is inferior to unchecked, ad hoc commenting?
>>
>> I am sorry, but in my book, assertions are automatically checked.
> 
> But only at runtime, where a logic flaw may or may not trigger the 
> assertion.

I don't care about that difference. My development environment is 
flexible enough to make execution of test suites a breeze. I don't need 
a separate compilation and linking stage to make this work.

> (Assertions are still useful: if they are active, they prove that the 
> errors checked by them didn't occur in a given program run. This can 
> still be useful. But then, production code usually runs with assertion 
> checking off - which is exactly the point where knowing that some bug 
> occurred would be more important...)

Don't let your production code run with assertion checking off then.

>>> For one thing, type declarations *cannot* become out-of-date (as
>>> comments can and often do) because a discrepancy between type
>>> declaration and definition will be immediately flagged by the compiler.
>>
>> The same holds for assertions as soon as they are run by the test suite.
> 
> A test suite can never catch all permutations of data that may occur (on 
> a modern processor, you can't even check the increment-by-one operation 
> with that, the universe will end before the CPU has counted even half of 
> the full range).

I hear that in the worst case scenarios, static type checking in modern 
type systems needs exponential time, but for most practical cases this 
doesn't matter. Maybe it also doesn't matter for most practical cases 
that you can't check all permutations of data in a test suite.

>>>> + Efficiency: As Paul Graham puts it, efficiency comes from 
>>>> profiling. In order to achieve efficiency, you need to identify the 
>>>> bottle-necks of your program. No amount of static checks can 
>>>> identify bottle-necks, you have to actually run the program to 
>>>> determine them.
>>>
>>> I don't think you understand much about language implementation.
>>
>> ....and I don't think you understand much about dynamic compilation. 
>> Have you ever checked some not-so-recent-anymore work about, say, the 
>> HotSpot virtual machine?
> 
> Well, I did - and the results were, ahem, unimpressive.

The results that are reported in the papers I have read are very 
impressive. Can you give me the references to the papers you have read?

> Besides, HotSpot is for Java, which is statically typed, so I don't 
> really see your point here... unless we're talking about different VMs.

Oh, so you haven't read the literature? And above you said you did.

Well, the research that ultimately led to the HotSpot Virtual Machine 
originated in virtual machines for Smalltalk and for Self. Especially 
Self is an "extremely" dynamic language, but they still managed to make 
it execute reasonably fast.

When all you wanted to say is that Java is not fast, that's not quite 
true. The showstopper for Java is the Swing library. Java itself is very 
fast.

In certain cases it's even faster than C++ because the HotSpot VM can 
make optimizations that a static compiler cannot make. (For example, 
inline virtual methods that are known not to be overridden in currently 
loaded classes.)

> And, yes, VMs got pretty fast these days (and that actually happened 
> several years ago).
> It's only that compiled languages still have a good speed advantage - 
> making a VM fast requires just that extra amount of effort which, when 
> invested into a compiler, will make the compiled code still run faster 
> than the VM code.
> Also, I have seen several cases where VM code just plain sucked 
> performance-wise until it was carefully hand-optimized. (A concrete 
> example: the all-new, great graphics subsystem for Squeak that could do 
> wonders like rendering fonts with all sorts of funky effects, do 3D 
> transformations on the fly, and whatnot... I left Squeak before those 
> optimizations became mainstream, but I'm pretty sure that Squeak got 
> even faster. Yet Squeak is still a bit sluggish... only marginally so, 
> and certainly no more sluggish than the bloatware that's around and that 
> commercial programmers are forced to write, but efficiency is simply 
> more of a concern and a manpower hog than with a compiled language.)

I know sluggish software written in statically typed languages.

>> There are excellent programs out there that have been written with 
>> static type systems, and there are also excellent programs out there 
>> that have been written without static type systems. This is a clear 
>> indication that static type systems are not a necessary condition for 
>> writing excellent programs.
> 
> Hey, there are also excellent programs written in assembly. By your 
> argument, using a higher-level language is not a necessary condition for 
> writing excellent programs.

Right.

> The question is: what effort goes into an excellent program? Is static 
> typing a help or a hindrance?

Right, that's the question.

> I'm still unconvinced that an inferring type system is worse than 
> run-time type checking. (Except for that "dynamic metaprogramming" thing 
> I'd like to know more about. In my book, things that are overly powerful 
> are also overly uncontrollable, but that may be an exception.)

Check the literature about metaobject protocols.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Remi Vanicat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87ekx4xaym.dlv@wanadoo.fr>
Pascal Costanza <········@web.de> writes:

> Joachim Durchholz wrote:
>> Pascal Costanza wrote:
>>
>>> Ralph Becket wrote:
>>>
>>>> I fail to see how dynamic typing can confer any kind of advantage here.
>>>
>>> Read the literature on XP.
>> Note that most literature contrasts dynamic typing with the static
>> type systems of C++ and/or Java. Good type systems are /far/ better.
>
> You are changing topics here.
>
> In a statically typed language, when I write a test case that calls a
> specific method, I need to write at least one class that implements at
> least that method, otherwise the code won't compile.

Not in ocaml.
ocaml is statically typed.

-- 
Rémi Vanicat
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8qjl$smo$1@f1node01.rhrz.uni-bonn.de>
Remi Vanicat wrote:
> Pascal Costanza <········@web.de> writes:

>>In a statically typed language, when I write a test case that calls a
>>specific method, I need to write at least one class that implements at
>>least that method, otherwise the code won't compile.
> 
> Not in ocaml.
> ocaml is statically typed.

How does ocaml make sure that you don't get a message-not-understood 
exception at runtime then?


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Remi Vanicat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87znfsvuqh.dlv@wanadoo.fr>
Pascal Costanza <········@web.de> writes:

> Remi Vanicat wrote:
>> Pascal Costanza <········@web.de> writes:
>
>>>In a statically typed language, when I write a test case that calls a
>>>specific method, I need to write at least one class that implements at
>>>least that method, otherwise the code won't compile.
>> Not in ocaml.
>> ocaml is statically typed.
>
> How does ocaml make sure that you don't get a message-not-understood
> exception at runtime then?

It makes the verification when you call the test. Let me explain:

you could define :

let f x = x #foo

which is a function taking an object x and calling its method
foo, even if there is no class having such a method.

When sometime later you do:

f bar

then, and only then, does the compiler verify that the bar object has a foo
method.

By the way, it might give you some headache when you have made a
spelling error in a method name (because the error is not seen by the
compiler where it happens, but later, where the function using the
wrong method is used).



-- 
Rémi Vanicat
From: Simon Helsen
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <Pine.SOL.4.44.0310231146060.10061-100000@crete.uwaterloo.ca>
On Thu, 23 Oct 2003, Remi Vanicat wrote:

>> How does ocaml make sure that you don't get a message-not-understood
>> exception at runtime then?
>
>It makes the verification when you call the test. Let me explain:
>
>you could define :
>
>let f x = x #foo
>
>which is a function taking an object x and calling its method
>foo, even if there is no class having such a method.
>
>When sometime later you do:
>
>f bar
>
>then, and only then, does the compiler verify that the bar object has a foo
>method.

you might want to mention that this is possible because of 'extensible
record types'. Well, there is a good chance the python/lisp community will
not understand this, but it illustrates that a lot of the arguments
(probably on both sides in fact) are based on ignorance.

One more thing I remembered from a heavy cross-group fight between
comp.lang.smalltalk and c.l.f. quite a while ago, is that so-called
'dynamically typed' languages are useful because they allow you to
incrementally develop ill-typed programs into better-typed programs (the
XP-way), where the ill-typed programs already (partially) work. OTOH, with
a static type system, you have to think more in advance to get the types
right. XP-people consider this a hindrance and that is what people mean
with 'the type system getting in the way'. With a Haskell-style or even
Ocaml-style type system, you cannot seriously argue that you can write a
program which cannot be easily(!) converted into one that fits such type
systems. By program, I mean 'a finished, production-ready piece of
software', not a 'snapshot' in the development cycle.

The arguments from the smalltalk people are arguably defendable and this
is why this kind of discussion will pop up again and again. Using either
static or dynamic (Blume: untyped) type systems is not the point at all.
What actually matters is your development style/philosophy and this is
more an issue of software engineering really.

Ok, I am phasing out again.

	Regards,

	Simon
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8v4e$p20$1@f1node01.rhrz.uni-bonn.de>
Simon Helsen wrote:
> On Thu, 23 Oct 2003, Remi Vanicat wrote:
> 
> 
>>>How does ocaml make sure that you don't get a message-not-understood
>>>exception at runtime then?
>>
>>It makes the verification when you call the test. Let me explain:
>>
>>you could define :
>>
>>let f x = x #foo
>>
>>which is a function taking an object x and calling its method
>>foo, even if there is no class having such a method.
>>
>>When sometime later you do:
>>
>>f bar
>>
>>then, and only then, does the compiler verify that the bar object has a foo
>>method.
> 
> 
> you might want to mention that this is possible because of 'extensible
> record types'. Well, there is a good chance the python/lisp community will
> not understand this, but it illustrates that a lot of the arguments
> (probably on both sides in fact) are based on ignorance.

Do you have a reference for extensible record types? Google comes up, 
among other things, with Modula-3, and I am pretty sure that's not what 
you mean.

> One more thing I remembered from a heavy cross-group fight between
> comp.lang.smalltalk and c.l.f. quite a while ago, is that so-called
> 'dynamically typed' languages are useful because they allow you to
> incrementally develop ill-typed programs into better-typed programs (the
> XP-way), where the ill-typed programs already (partially) work.

Sometimes the ill-typed program is all I need because it helps me to 
solve a problem that is covered by that program nonetheless.

> OTOH, with
> a static type system, you have to think more in advance to get the types
> right. XP-people consider this a hindrance and that is what people mean
> with 'the type system getting in the way'. With a Haskell-style or even
> Ocaml-style type system, you cannot seriously argue that you can write a
> program which cannot be easily(!) converted into one that fits such type
> systems. By program, I mean 'a finished, production-ready piece of
> software', not a 'snapshot' in the development cycle.
> 
> The arguments from the smalltalk people are arguably defendable and this
> is why this kind of discussion will pop up again and again. Using either
> static or dynamic (Blume: untyped) type systems is not the point at all.
> What actually matters is your development style/philosophy and this is
> more an issue of software engineering really.

Exactly. Very well put!

Thanks,
Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9d571f$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>Do you have a reference for extensible record types? Google comes up, 
>among other things, with Modula-3, and I am pretty sure that's not what 
>you mean.

The following are at least a bit closer to what is meant:

[1]	Mark P Jones and Simon Peyton Jones. Lightweight Extensible
	Records for Haskell. In Haskell Workshop, Paris, September 1999.
	<http://citeseer.nj.nec.com/jones99lightweight.html>

[2]	B. R. Gaster and M. P. Jones. A polymorphic type
	system for extensible records and variants. Technical
	report NOTTCS-TR-96-3, Department of Computer Science,
	University of Nottingham, UK, 1996. Available from URL
	<http://www.cs.nott.ac.uk/Department/Staff/~mpj/polyrec.html>.
	<http://citeseer.nj.nec.com/gaster96polymorphic.html>.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnkhfu$it1$1@newsreader2.netcologne.de>
Fergus Henderson wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>Do you have a reference for extensible record types? Google comes up, 
>>among other things, with Modula-3, and I am pretty sure that's not what 
>>you mean.
> 
> 
> The following are at least a bit closer to what is meant:
> 
> [1]	Mark P Jones and Simon Peyton Jones. Lightweight Extensible
> 	Records for Haskell. In Haskell Workshop, Paris, September 1999.
> 	<http://citeseer.nj.nec.com/jones99lightweight.html>
> 
> [2]	B. R. Gaster and M. P. Jones. A polymorphic type
> 	system for extensible records and variants. Technical
> 	report NOTTCS-TR-96-3, Department of Computer Science,
> 	University of Nottingham, UK, 1996. Available from URL
> 	<http://www.cs.nott.ac.uk/Department/Staff/~mpj/polyrec.html>.
> 	<http://citeseer.nj.nec.com/gaster96polymorphic.html>.

Thanks.


Pascal
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8ti4$p1q$1@f1node01.rhrz.uni-bonn.de>
Remi Vanicat wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>Remi Vanicat wrote:
>>
>>>Pascal Costanza <········@web.de> writes:
>>
>>>>In a statically typed language, when I write a test case that calls a
>>>>specific method, I need to write at least one class that implements at
>>>>least that method, otherwise the code won't compile.
>>>
>>>Not in ocaml.
>>>ocaml is statically typed.
>>
>>How does ocaml make sure that you don't get a message-not-understood
>>exception at runtime then?
> 
> 
> It makes the verification when you call the test. Let me explain:
> 
> you could define :
> 
> let f x = x #foo
> 
> which is a function taking an object x and calling its method
> foo, even if there is no class having such a method.
> 
> When sometime later you do:
> 
> f bar
> 
> then, and only then, does the compiler verify that the bar object has a foo
> method.

Doesn't this mean that the occurrence of such compile-time errors is only 
delayed, in the sense that when the test suite grows the compiler starts 
to issue type errors?

Anyway, that's an interesting case that I haven't known about before. 
Thanks.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Dirk Thierbach
Subject: Test cases and static typing (was: Python from Wise Guy's Viewpoint)
Date: 
Message-ID: <6ogn61-f61.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> Remi Vanicat wrote:

>>>>> In a statically typed language, when I write a test case that
>>>>> calls a specific method, I need to write at least one class that
>>>>> implements at least that method, otherwise the code won't
>>>>> compile.

>>>>Not in ocaml.
>>>>ocaml is statically typed.

>> It makes the verification when you call the test. Let me explain:
>> let f x = x #foo
>> 
>> which is a function taking an object x and calling its method
>> foo, even if there is no class having such a method.
>> 
>> When sometime later you do:
>> 
>> f bar
>> 
>> then, and only then, does the compiler verify that the bar object has a foo
>> method.

BTW, the same thing is true for any language with type inference.  In
Haskell, there are no methods and objects. But to test a function, you
can write

test_func f = if (f 1 == 1) && (f 2 == 42) then "ok" else "fail"

The compiler will infer that test_func has type

test_func :: (Integer -> Integer) -> String

(I am cheating a bit, because actually it will infer a more general type),
so you can use it to test any function of type Integer->Integer, whether
you have written it already or not.

> Doesn't this mean that the occurrence of such compile-time errors is only 
> delayed, in the sense that when the test suite grows the compiler starts 
> to issue type errors?

As long as you parameterize over the functions (or objects) you want to
test, there'll be no compile-time errors. That's what functional 
programming and type inference are good for: You can abstract everything
away just by making it an argument. And you should know that, since
you say that you know what modern type-systems can do.


But the whole case is moot anyway, IMHO: You write the tests because
you want them to fail until you have written the correct code that
makes them pass, and it is not acceptable (especially if you're doing
XP) to continue as long as you have failing tests. You have to do the
minimal edit to make all the tests pass *right now*, not later on.

It's the same with compile-time type errors. The only difference is
that they happen at compile-time, not at test-suite run-time, but the
necessary reaction is the same: Fix your code so that all tests (or
the compiler-generated type "tests") pass. Then continue with the next
step. 

I really don't see why one should be annoying to you while you strongly
prefer the other. They're really just the same thing. Just imagine
that you run your test suite automatically when you compile your
program.

- Dirk
From: Pascal Costanza
Subject: Re: Test cases and static typing
Date: 
Message-ID: <bnb9ut$uu4$1@f1node01.rhrz.uni-bonn.de>
Dirk Thierbach wrote:
> Pascal Costanza <········@web.de> wrote:
> 
>>Remi Vanicat wrote:
> 
> 
>>>>>>In a statically typed language, when I write a test case that
>>>>>>calls a specific method, I need to write at least one class that
>>>>>>implements at least that method, otherwise the code won't
>>>>>>compile.
> 
> 
>>>>>Not in ocaml.
>>>>>ocaml is statically typed.
> 
> 
>>>It makes the verification when you call the test. Let me explain:
>>>let f x = x #foo
>>>
>>>which is a function taking an object x and calling its method
>>>foo, even if there is no class having such a method.
>>>
>>>When sometime later you do:
>>>
>>>f bar
>>>
>>>then, and only then, does the compiler verify that the bar object has a foo
>>>method.
> 
> 
> BTW, the same thing is true for any language with type inference.  In
> Haskell, there are no methods and objects. But to test a function, you
> can write
> 
> test_func f = if (f 1 == 1) && (f 2 == 42) then "ok" else "fail"
> 
> The compiler will infer that test_func has type
> 
> test_func :: (Integer -> Integer) -> String
> 
> (I am cheating a bit, because actually it will infer a more general type),
> so you can use it to test any function of type Integer->Integer, whether
> you have written it already or not.

OK, I have got it. No, that's not what I want. What I want is:

testxyz obj = (concretemethod obj == 42)

Does the code compile as long as concretemethod doesn't exist?

>>Doesn't this mean that the occurrence of such compile-time errors is only 
>>delayed, in the sense that when the test suite grows the compiler starts 
>>to issue type errors?
> 
> 
> As long as you parameterize over the functions (or objects) you want to
> test, there'll be no compile-time errors. That's what functional 
> programming and type inference are good for: You can abstract everything
> away just by making it an argument. And you should know that, since
> you say that you know what modern type-systems can do.

Yes, I know that. I have misunderstood the claim. Does the code I 
propose above work?

> But the whole case is moot anyway, IMHO: You write the tests because
> you want them to fail until you have written the correct code that
> makes them pass, and it is not acceptable (especially if you're doing
> XP) to continue as long as you have failing tests. You have to do the
> minimal edit to make all the tests pass *right now*, not later on.
> 
> It's the same with compile-time type errors. The only difference is
> that they happen at compile-time, not at test-suite run-time, but the
> necessary reaction is the same: Fix your code so that all tests (or
> the compiler-generated type "tests") pass. Then continue with the next
> step. 

The type system might test too many cases.

> I really don't see why one should be annoying to you, and you strongly
> prefer the other. They're really just the same thing. Just imagine
> that you run your test suite automatically when you compile your
> program.

I don't compile my programs. Not as a distinct conscious step during 
development. I write pieces of code and execute them immediately. It's 
much faster to run the code than to explicitly compile and/or run a type 
checker.

This is a completely different style of developing code.

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Dirk Thierbach
Subject: Re: Test cases and static typing
Date: 
Message-ID: <b12o61-3i1.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> Dirk Thierbach wrote:

> OK, I have got it. No, that's not what I want. What I want is:
> 
> testxyz obj = (concretemethod obj == 42)
> 
> Does the code compile as long as concretemethod doesn't exist?

No. Does your test pass as long as concretemethod doesn't exist? It doesn't,
for the same reason.

>> It's the same with compile-time type errors. The only difference is
>> that they happen at compile-time, not at test-suite run-time, but the
>> necessary reaction is the same: Fix your code so that all tests (or
>> the compiler-generated type "tests") pass. Then continue with the next
>> step. 

> The type system might test too many cases.

I have never experienced that, because every expression that is valid
code will have a proper type.

Can you think of an example (not in C++ or Java etc.) where the type
system may check too many cases?

> I don't compile my programs. Not as a distinct conscious step during 
> development. I write pieces of code and execute them immediately. 

I know. I sometimes do the same with Haskell: I use ghc in interactive
mode, write a piece of code and execute it immediately (which means it
gets compiled and type checked). When it works, I paste it into
the file. If there was a better IDE, I wouldn't have to do that,
but even in this primitive way it works quite well.

> It's much faster to run the code than to explicitly compile and/or
> run a type checker.

Unless your modules get very large, or you're in the middle of some
big refactoring, compiling or running the type checker is quite fast.

> This is a completely different style of developing code.

I have known this style of developing code for quite some time :-)

- Dirk
From: Pascal Costanza
Subject: Re: Test cases and static typing
Date: 
Message-ID: <bnbp46$uji$1@f1node01.rhrz.uni-bonn.de>
Dirk Thierbach wrote:
> Pascal Costanza <········@web.de> wrote:
> 
>>Dirk Thierbach wrote:
> 
>>OK, I have got it. No, that's not what I want. What I want is:
>>
>>testxyz obj = (concretemethod obj == 42)
>>
>>Does the code compile as long as concretemethod doesn't exist?
> 
> No. Does your test pass as long as concretemethod doesn't exist? It doesn't,
> for the same reason.

As long as I am writing only tests, I don't care. When I am in the mood 
of writing tests, I want to write as many tests as possible, without 
having to think about whether my code is acceptable for the static type 
checker or not.

>>>It's the same with compile-time type errors. The only difference is
>>>that they happen at compile-time, not at test-suite run-time, but the
>>>necessary reaction is the same: Fix your code so that all tests (or
>>>the compiler-generated type "tests") pass. Then continue with the next
>>>step. 
> 
>>The type system might test too many cases.
> 
> I have never experienced that, because every expression that is valid
> code will have a proper type.
> 
> Can you think of an example (not in C++ or Java etc.) where the type
> system may check too many cases?

Here is one:

(defun f (x)
   (unless (< x 200)
     (cerror "Type another number"
             "You have typed a wrong number")
     (f (read)))
   (* x 2))

Look up 
http://www.lispworks.com/reference/HyperSpec/Body/f_cerror.htm#cerror 
before complaining.
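For readers without a Lisp handy, here is a rough Python approximation of
what the code above does (the helper and the explicit read/handler parameters
are mine; the real cerror signals a continuable condition that Lisp's
interactive debugger lets the user resume):

```python
# Rough stand-in for Lisp's cerror: signal an error that a handler may
# choose to continue past; an unhandled signal aborts, as in the debugger.
def cerror(continue_msg, error_msg, handler):
    if not handler(continue_msg, error_msg):  # True means "continue"
        raise ValueError(error_msg)

def f(x, read, handler):
    # Mirrors the Lisp control flow: signal if x is out of range, re-read
    # a number recursively, then fall through to (* x 2). As in the
    # original, the value of the recursive call is discarded.
    if not (x < 200):
        cerror("Type another number", "You have typed a wrong number", handler)
        f(read(), read, handler)
    return x * 2

continue_always = lambda cmsg, emsg: True
print(f(10, read=None, handler=continue_always))         # in range: 20
print(f(300, read=lambda: 50, handler=continue_always))  # continued: 600
```

The point being illustrated: whether the computation proceeds past the error
is decided dynamically, at the signal site, by whoever handles it.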



Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Dirk Thierbach
Subject: Re: Test cases and static typing
Date: 
Message-ID: <f98o61-gl3.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:

> As long as I am writing only tests, I don't care. When I am in the mood 
> of writing tests, I want to write as many tests as possible, without 
> having to think about whether my code is acceptable for the static type 
> checker or not.

Then just do it. Write as many tests as you care to; when you're ready,
compile them and run the type checker.

Where's the problem? One thing that is certainly missing for Haskell or
OCaml is a good IDE that supports as many styles of programming
as possible, but even with the primitive command line versions at the
moment you can do what you want in this case.

And anyway, this would be a question of the IDE, not of static typing.

>>>The type system might test too many cases.

>> I have never experienced that, because every expression that is valid
>> code will have a proper type.

>> Can you think of an example (not in C++ or Java etc.) where the type
>> system may check too many cases?

> Here is one:
> 
> (defun f (x)
>   (unless (< x 200)
>     (cerror "Type another number"
>             "You have typed a wrong number")
>     (f (read)))
>   (* x 2))

> Look up 
> http://www.lispworks.com/reference/HyperSpec/Body/f_cerror.htm#cerror 
> before complaining.

Done. But I still miss the point. On the one hand, Lisp certainly doesn't
check types, so I don't see how a Lisp program can be an example of the
type system checking too many cases. On the other hand, this example is
tied to a specific Lisp debugger feature. I think it would be better to
give an example in a statically typed language.

I could probably rewrite the code with an approximation to cerror
(with the restriction that non-local control structures don't 
translate one to one), but even then I don't see why the type system
would test too many cases for this example.

- Dirk
From: Pascal Costanza
Subject: Re: Test cases and static typing
Date: 
Message-ID: <bnc8dj$pvi$1@f1node01.rhrz.uni-bonn.de>
Dirk Thierbach wrote:
> Pascal Costanza <········@web.de> wrote:
> 
> 
>>As long as I am writing only tests, I don't care. When I am in the mood 
>>of writing tests, I want to write as many tests as possible, without 
>>having to think about whether my code is acceptable for the static type 
>>checker or not.
> 
> 
> Then just do it. Write as many tests as you care to; when you're ready,
> compile them and run the type checker.
> 
> Where's the problem? One thing that is certainly missing for Haskell or
> OCaml is a good IDE that supports as many styles of programming
> as possible, but even with the primitive command line versions at the
> moment you can do what you want in this case.
> 
> And anyway, this would be a question of the IDE, not of static typing.

A flexible and useful IDE must treat static type checking as a separate 
tool. It needs to be able to do useful things with code that isn't 
correct yet. Modern IDEs even give you parameter lists, etc., when the 
code isn't actually completely parsable.

And that's all I wanted from the very beginning - static typing as an 
additional tool, not as one that I don't have any other choice than use 
by default.

>>>>The type system might test too many cases.
> 
> 
>>>I have never experienced that, because every expression that is valid
>>>code will have a proper type.
> 
> 
>>>Can you think of an example (not in C++ or Java etc.) where the type
>>>system may check too many cases?
> 
> 
>>Here is one:
>>
>>(defun f (x)
>>  (unless (< x 200)
>>    (cerror "Type another number"
>>            "You have typed a wrong number")
>>    (f (read)))
>>  (* x 2))
> 
> 
>>Look up 
>>http://www.lispworks.com/reference/HyperSpec/Body/f_cerror.htm#cerror 
>>before complaining.
> 
> Done. But I still miss the point. On the one hand, Lisp certainly doesn't
> check types, so I don't see how a Lisp program can be an example of the
> type system checking too many cases. On the other hand, this example is
> tied to a specific Lisp debugger feature. I think it would be better to
> give an example in a statically typed language.

No, it's not better to give an example in a different language. The 
whole point of my argument is that the code above cannot be statically 
type-checked. And the feature presented is extremely helpful, even in 
end user applications.

> I could probably rewrite the code with an approximation to cerror
> (with the restriction that non-local control structures don't 
> translate one to one), but even then I don't see why the type system
> would test too many cases for this example.

I don't want an "approximation of cerror". I want cerror!



Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Marshall Spight
Subject: Re: Test cases and static typing
Date: 
Message-ID: <YBhmb.19492$Tr4.40240@attbi_s03>
"Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
>
> And that's all I wanted from the very beginning - static typing as an
> additional tool, not as one that I don't have any other choice than use
> by default.

I can get behind that idea! It strikes me as being better
than what one has now with either statically typed
languages or dynamically typed languages.


Marshall
From: Raffael Cavallaro
Subject: Re: Test cases and static typing
Date: 
Message-ID: <aeb7ff58.0310292040.5b76c47f@posting.google.com>
"Marshall Spight" <·······@dnai.com> wrote in message news:<·····················@attbi_s03>...
> "Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
> >
> > And that's all I wanted from the very beginning - static typing as an
> > additional tool, not as one that I don't have any other choice than use
> > by default.
> 
> I can get behind that idea! It strikes me as being better
> than what one has now with either statically typed
> languages or dynamically typed languages.


Then the addition of parameterized types to goo might interest you.

<http://www.csail.mit.edu/research/abstracts/abstracts03/dynamic-languages/03knight.pdf>

Also see the main goo page:
<http://www.ai.mit.edu/~jrb/goo/>
From: Dirk Thierbach
Subject: Re: Test cases and static typing
Date: 
Message-ID: <fjro61-7r7.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:

> A flexible and useful IDE must treat static type checking as a separate 
> tool. It needs to be able to do useful things with code that isn't 
> correct yet. 

I don't agree with the "must", but type checking is a separate phase
in the compiler. It should be possible to make an IDE that treats
it that way. But I doubt that particular point is high on the priority
list of any potential programmer of an IDE. 

> And that's all I wanted from the very beginning - static typing as an 
> additional tool, not as one that I don't have any other choice than use 
> by default.

And that's fine, but it is not an issue of static typing.

>>>>>The type system might test too many cases.

> No, it's not better to give an example in a different language. The 
> whole point of my argument is that the code above cannot be statically 
> type-checked. 

You can look now at two examples of code like this that can be
statically type-checked. 

And I had the impression that you wanted to explain why a "type system
might test too many cases". I still don't understand this argument. I
don't see any "case" that the type system will test in the above
program, let alone "too many".

>> I could probably rewrite the code with an approximation to cerror
>> (with the restriction that non-local control structures don't 
>> translate one to one), but even then I don't see why the type system
>> would test too many cases for this example.

> I don't want an "approximation of cerror". I want cerror!

Then use Lisp and cerror. Nobody forces you to use anything else. The
problem is again that you want to do it only in exactly the same way
as you are used to doing it. You don't see how to do it in another
language, and then you say "it cannot be done". And that's just wrong.

So can we settle on "you like to do it your way, but it is possible
to do everything you want in a statically typed language if
you're a little bit more flexible"? (That means of course that if
you refuse to be a bit more flexible, it cannot be done in exactly
the same way -- after all, they are different languages.)

As long as you say "this cannot be done" you'll get answers showing
you that it can indeed be done, only in a way that is a little bit
different. Then you say "yes, but that's not how I want it. You're
trying to force me to use something I don't want!".

It gets a bit silly after some iterations. Let's stop it.

- Dirk
From: Pascal Costanza
Subject: Re: Test cases and static typing
Date: 
Message-ID: <bncpn0$bp5$1@newsreader2.netcologne.de>
Dirk Thierbach wrote:

> Pascal Costanza <········@web.de> wrote:
> 
> 
>>A flexible and useful IDE must treat static type checking as a separate 
>>tool. It needs to be able to do useful things with code that isn't 
>>correct yet. 
> 
> I don't agree with the "must", but type checking is a separate phase
> in the compiler. It should be possible to make an IDE that treats
> it that way. But I doubt that particular point is high on the priority
> list of any potential programmer of an IDE. 

No, it's only high on the priority list of actual programmers of IDEs. 
For example, you could check what Eclipse has to offer for Java in that 
regard.

>>And that's all I wanted from the very beginning - static typing as an 
>>additional tool, not as one that I don't have any other choice than use 
>>by default.
> 
> And that's fine, but it is not an issue of static typing.
> 
>>>>>>The type system might test too many cases.
> 
>>No, it's not better to give an example in a different language. The 
>>whole point of my argument is that the code above cannot be statically 
>>type-checked. 
> 
> You can look now at two examples of code like this that can be
> statically type-checked.

I am not convinced, but let's play that game for a while: OK, we have 
three programs that have the same behavior, and in particular they all 
behave well. Only two of them can be statically type-checked.

This completes my proof that static type systems reduce expressive power.

> And I had the impression that you wanted to explain why a "type system
> might test too many cases". I still don't understand this argument. I
> don't see any "case" that the type system will test in the above
> program, let alone "too many".
> 
>>>I could probably rewrite the code with an approximation to cerror
>>>(with the restriction that non-local control structures don't 
>>>translate one to one), but even then I don't see why the type system
>>>would test too many cases for this example.
> 
>>I don't want an "approximation of cerror". I want cerror!
> 
> Then use Lisp and cerror. Nobody forces you to use anything else. The
> problem is again that you want to do it only in exactly the same way
> as you are used to doing it. You don't see how to do it in another
> language, and then you say "it cannot be done". And that's just wrong.

I don't say it cannot be done. Don't put words into my mouth.

In this case, you actually need to write additional code to simulate 
dynamic checks in a statically typed language that a dynamically typed 
language gives you for free. I am sorry, but I don't see any advantage 
in such an approach.

> So can we settle on "you like to do it your way, but it is possible
> to do everything you want in a statically typed language if
> you're a little bit more flexible"? (That means of course that if
> you refuse to be a bit more flexible, it cannot be done in exactly
> the same way -- after all, they are different languages.)

Well, in my book the computer should adapt to what I want, and not the 
other way around.

> As long as you say "this cannot be done" you'll get answers showing
> you that it can indeed be done, only in a way that is a little bit
> different. Then you say "yes, but that's not how I want it. You're
> trying to force me to use something I don't want!".

No, I haven't said it cannot be done. I have talked about expressive 
power, and that's something different.

> It gets a bit silly after some iterations.

Indeed.


Pascal
From: Henrik Motakef
Subject: Re: Test cases and static typing
Date: 
Message-ID: <861xt2utek.fsf@pokey.internal.henrik-motakef.de>
Dirk Thierbach <··········@gmx.de> writes:

> OTOH, Lisp certainly doesn't check types,

Ho hum. I realize that this is probably not what you meant, but given
the existence of usenet archives, I have to oppose this statement
anyway ;-)

Lisp implementations are not /required/ to check types at compile time
(they are at run time, when they encounter a CHECK-TYPE form), but
that doesn't necessarily mean they don't.

Even if the CL type system isn't as friendly for such things as the
ones of Ocaml or Haskell may be (try proving whether a value is of
type (satisfies (lambda (x) (= x (get-universal-time))))[1] for a
start), some implementations really do honor your optional type
declarations, and even do some significant type inferencing. They
won't just abort compilation if they think your program could be
improved type-wise, but they will issue a warning, which is just as
good IMHO.

> OTOH, this example is tied to a specific Lisp debugger feature.

The Lisp debugger is a standardized part of the language, just like
the type system is.


[1] Yes, yes, you'll have to create a named function for that lambda
    expression in reality since SATISFIES doesn't actually accept
    lambdas for some reason, but you get the point.
From: ··········@ii.uib.no
Subject: Re: Test cases and static typing
Date: 
Message-ID: <egoew6pdsv.fsf@sefirot.ii.uib.no>
Pascal Costanza <········@web.de> writes:

> Dirk Thierbach wrote:

>>> testxyz obj = (concretemethod obj == 42)

>> Does the code compile as long as concretemethod doesn't exist?
>> No. Does your test pass as long as concretemethod doesn't exist? It
>> doesn't, for the same reason.

> As long as I am writing only tests, I don't care. When I am in the
> mood of writing tests, I want to write as many tests as possible,
> without having to think about whether my code is acceptable for the
> static type checker or not.

Uh... the type system will let you *write* what you want; it will just
stop you from *running* those tests.  Which are obviously going to
fail anyway.  Okay, so perhaps you for some reason need to write the
tests for nonexistent code, and then run something else that you keep
in the same file.  You then need to add

        concretemethod = undefined

which just goes to show you how useless static typing is.

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Pascal Costanza
Subject: Re: Test cases and static typing
Date: 
Message-ID: <bnc3rv$r6q$1@f1node01.rhrz.uni-bonn.de>
··········@ii.uib.no wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>Dirk Thierbach wrote:
> 
> 
>>>>testxyz obj = (concretemethod obj == 42)
> 
> 
>>>Does the code compile as long as concretemethod doesn't exist?
>>>No. Does your test pass as long as concretemethod doesn't exist? It
>>>doesn't, for the same reason.
> 
> 
>>As long as I am writing only tests, I don't care. When I am in the
>>mood of writing tests, I want to write as many tests as possible,
>>without having to think about whether my code is acceptable for the
>>static type checker or not.
> 
> 
> Uh...the type system will let you *write* what you want, it will just
> stop you from *running* those tests.  Which are obviously going to
> fail anyway.  Okay, so perhaps you for some reason need to write the
> tests for nonexistent code, and then run something else that you keep
> in the same file.  

+ not "for some reason". Writing tests for nonexistent code is in fact 
one of the key ideas of extreme programming.

+ No, I don't want to run code that happens to be in the same file. My 
development environment can already do some very useful things with the 
test code even if it isn't statically type-checkable yet.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Fergus Henderson
Subject: Re: Test cases and static typing
Date: 
Message-ID: <3f9d45cb$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>What I want is:
>
>testxyz obj = (concretemethod obj == 42)
>
>Does the code compile as long as concretemethod doesn't exist?

No, because the likelihood of that being a typo (e.g. for `concrete_method')
is too high.

I recently added support to the Mercury compiler for an option
"--allow-stubs".  For the equivalent code in Mercury, if this option
is enabled, then it will compile if there is a _declaration_ for
concretemethod, even if there is no _definition_.  The compiler will
issue a warning and automatically generate a stub definition which just
throws an exception if it is ever called.

It would be fairly straightforward to also add support for allowing
code like that to compile even if there was no declaration, but that
does not seem like a good idea to me -- it would make it easier for
typos to go unnoticed, with insufficient compensating benefit.

I'm sure it would also be easy for developers of other statically typed
languages to implement what you want, if they thought it was a good idea.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: Test cases and static typing
Date: 
Message-ID: <bnjhms$fri$1@f1node01.rhrz.uni-bonn.de>
Fergus Henderson wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>What I want is:
>>
>>testxyz obj = (concretemethod obj == 42)
>>
>>Does the code compile as long as concretemethod doesn't exist?
> 
> No, because the likelihood of that being a typo (e.g. for `concrete_method')
> is too high.

This proves that a static type system requires you to write more code 
than strictly necessary. (Please think twice before you react. This is 
not meant as a pejorative remark, even if it strongly sounds like one. 
It's just an objective truth. Even if you think that this is how it 
should be, it doesn't make my statement wrong.)

> I recently added support to the Mercury compiler for an option
> "--allow-stubs".  For the equivalent code in Mercury, if this option
> is enabled, then it will compile if there is a _declaration_ for
> concretemethod, even if there is no _definition_.  The compiler will
> issue a warning and automatically generate a stub definition which just
> throws an exception if it is ever called.
> 
> It would be fairly straight-forward to also add support for allowing
> code like that to compile even if there was no declaration, but that
> does not seem like a good idea to me -- it would make it easier for
> typos to go unnoticed, with insufficient compensating benefit.

A good development environment gives you immediate feedback on such 
kinds of typos. A good compiler for a dynamically typed language issues a 
warning. So these typos don't go unnoticed. The only difference is that 
a dynamically typed language trusts the programmer by default, whereas a 
statically typed language doesn't trust the programmer. (To rephrase 
it: A statically typed language gives you stricter support, while the 
dynamically typed language gives you weaker support. But that's 
actually more or less the same statement.)

> I'm sure it would also be easy for developers of other statically typed
> languages to implement what you want, if they thought it was a good idea.

Of course.

It might be interesting to note that dynamically typed languages are 
probably a really bad idea when you don't have a good IDE. The features 
that fans of statically typed languages care about are usually regarded 
as part of the development environment's job. This is only to indicate 
that programming in a dynamically typed language is not as horrible as 
you might think when seen in the right context.

And here is one of the probable replies: Statically typed languages 
don't require a sophisticated IDE in order to do useful work. This might 
be an advantage in some scenarios.

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Fergus Henderson
Subject: Re: Test cases and static typing
Date: 
Message-ID: <3f9f6ecd$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:
>Fergus Henderson wrote:
>> Pascal Costanza <········@web.de> writes:
>>>What I want is:
>>>
>>>testxyz obj = (concretemethod obj == 42)
>>>
>>>Does the code compile as long as concretemethod doesn't exist?
>> 
>> No, because the likelihood of that being a typo (e.g. for `concrete_method')
>> is too high.
>
>This proves that a static type system requires you to write more code 
>than strictly necessary.

True in the sense that "there exists a static type system that requires ...".
That is true for several static type systems that I know --
but the amount of extra code required is very very small.
E.g. in Haskell it is usually sufficient to just write

	concretemethod = error "stub"

However, your statement would be false if you tried to generalize it to
all languages and language implementations that have static type systems.
As I said, it would be easy to modify a statically typed system to
optionally allow references to undefined functions, and indeed there
are some systems which do that.  (For example I think ObjectCenter,
an interpretive environment for C/C++ programs, did that.)

If a couple of potential users of Mercury were to ask for it, I would
go ahead and add support for this to the Mercury implementation.  But
so far, to the best of my knowledge, no Mercury user has ever asked for it.

>> It would be fairly straight-forward to also add support for allowing
>> code like that to compile even if there was no declaration, but that
>> does not seems like a good idea to me - it would make it easier for
>> typos to go unnoticed, with insufficient compensating benefit.
>
>A good development environment gives you immediate feedback on such 
>kinds of typos. A good compiler for a dynamically type language issues a 
>warning. So these typos don't go unnoticed.

My experience is that compiler warnings are too easily ignored.

As for error highlighting in IDEs, well... Firstly, as you yourself
mentioned, not everyone wants to use a complicated IDE.

Secondly, even in such an IDE, I think errors could still slip through
unnoticed.  For example, consider the following scenario.  You might
write a call to a new function, which will get highlighted.  But you
expect that, since the function is not yet defined, so you ignore the
highlighting.  Then you write another call to the function, which also
gets highlighted, and again you ignore it, since you expected that.
Finally you write the definition of the function, and run the program.
The compiler reports a few dozen warnings, which you ignore, since they
all seem to be referring to some undefined stubs in part of the program
that one of your colleagues is responsible for.  Then you run a few tests,
which all work, so you check your change in to the shared repository and
go home for the weekend.

Your colleague, who is working all weekend to get things ready for an
important demo, does a cvs update and incorporates your change. But when
he runs _his_ test case, it now crashes with an error about
"undefined function 'tingamajig_handler'"!  He greps the source
for "tingamajig", but the only occurrence is the one single place
where it is called, which happens to have absolutely no comments about
what it is supposed to do.  In desperation, he tries calling you,
but your mobile phone's battery has run flat.  He tries to implement
"tingamajig_handler" himself, but has no idea of what invariants it
is supposed to enforce, and eventually gives up.  The demo on Monday
morning is a complete failure.

On Monday afternoon, he finally catches up with you, and tells you of his
woes.  You see immediately that the problem was just a simple typo --
the function was named "thingamajig_handler", not "tingamajig_handler".


A far-fetched hypothetical?  Possibly.  But if you tell us that
"typos don't go unnoticed", please forgive me if I am a little skeptical ;-)

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Andreas Rossberg
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3F98010A.5000804@ps.uni-sb.de>
Pascal Costanza wrote:
>>
>> But only at runtime, where a logic flaw may or may not trigger the 
>> assertion.
> 
> I don't care about that difference. My development environment is 
> flexible enough to make execution of test suites a breeze. I don't need 
> a separate compilation and linking stage to make this work.
> 
>> (Assertions are still useful: if they are active, they prove that the 
>> errors checked by them didn't occur in a given program run. This can 
>> still be useful. But then, production code usually runs with assertion 
>> checking off - which is exactly the point where knowing that some bug 
>> occurred would be more important...)
> 
> Don't let your production code run with assertion checking off then.

You don't seem to see the fundamental difference, which has been stated 
as "Static typing shows the absence of [certain classes of] errors, 
while testing [with assertions] can only show the presence of errors." 
When you actively use a type system as a tool and turn it to your 
advantage that "certain class" can be pretty large, btw.

> I hear that in the worst case scenarios, static type checking in modern 
> type systems needs exponential time, but for most practical cases this 
> doesn't matter. Maybe it also doesn't matter for most practical cases 
> that you can't check all permutations of data in a test suite.

Come on, you're comparing apples and wieners. The implications are 
completely different.

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
  as kids, we would all be running around in darkened rooms, munching
  magic pills, and listening to repetitive electronic music."
  - Kristian Wilson, Nintendo Inc.
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <udgmb.19422$e01.35877@attbi_s02>
"Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
>
> In a statically typed language, when I write a test case that calls a
> specific method, I need to write at least one class that implements at
> least that method, otherwise the code won't compile.
>
> In a dynamically typed language I can concentrate on writing the test
> cases first and don't need to write dummy code to make some arbitrary
> static checker happy.

This is a non-issue. In both cases, you need the implementing code
if you want to be able to run the testcase, and you don't need the
implementing code if you don't.


Marshall
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnc4bg$r6s$1@f1node01.rhrz.uni-bonn.de>
Marshall Spight wrote:
> "Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
> 
>>In a statically typed language, when I write a test case that calls a
>>specific method, I need to write at least one class that implements at
>>least that method, otherwise the code won't compile.
>>
>>In a dynamically typed language I can concentrate on writing the test
>>cases first and don't need to write dummy code to make some arbitrary
>>static checker happy.
> 
> 
> This is a non-issue. In both cases, you need the implementing code
> if you want to be able to run the testcase, and you don't need the
> implementing code if you don't.

No, in a dynamically typed language, I don't need the implementation to 
be able to run the testcase.

Among other things:

- the test cases can serve as a kind of todo-list. I run the testsuite, 
and it gives me an exception. This shows what portion of code I can work 
on next.

- when a test case gives me an exception, I can inspect the runtime 
environment and analyze how far the test case got, what it already 
successfully did, what is missing, and maybe even why it is missing. 
With a statically typed language, I wouldn't be able to get that far.

Furthermore, when I am still in the exceptional situation, I can change 
variable settings, define a function on the fly, return some value from 
a yet undefined method by hand to see if it can make the rest of the 
code work, and so on.
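A tiny Python sketch of that workflow ('frobnicate' is a hypothetical, deliberately unwritten function -- the exception it raises is the todo-list entry):

```python
def test_frobnicate():
    # 'frobnicate' does not exist yet; running the test anyway tells
    # us what to work on next instead of refusing to run at all.
    try:
        assert frobnicate(21) == 42
        return "done"
    except NameError as e:
        return "todo: " + str(e)

print(test_frobnicate())   # -> todo: name 'frobnicate' is not defined
```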


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <spammers_must_die-2410031501530001@k-137-79-50-101.jpl.nasa.gov>
In article <············@f1node01.rhrz.uni-bonn.de>, Pascal Costanza
<········@web.de> wrote:

> Marshall Spight wrote:
> > "Pascal Costanza" <········@web.de> wrote in message
·················@f1node01.rhrz.uni-bonn.de...
> > 
> >>In a statically typed language, when I write a test case that calls a
> >>specific method, I need to write at least one class that implements at
> >>least that method, otherwise the code won't compile.
> >>
> >>In a dynamically typed language I can concentrate on writing the test
> >>cases first and don't need to write dummy code to make some arbitrary
> >>static checker happy.
> > 
> > 
> > This is a non-issue. In both cases, you need the implementing code
> > if you want to be able to run the testcase, and you don't need the
> > implementing code if you don't.
> 
> No, in a dynamically typed language, I don't need the implementation to 
> be able to run the testcase.
> 
> Among other things:
> 
> - the test cases can serve as a kind of todo-list. I run the testsuite, 
> and it gives me an exception. This shows what portion of code I can work 
> on next.
> 
> - when a test case gives me an exception, I can inspect the runtime 
> environment and analyze how far the test case got, what it already 
> successfully did, what is missing, and maybe even why it is missing. 
> With a statically typed language, I wouldn't be able to get that far.

To be fair, you can do both of those things in statically typed languages
too, except that you get the error at compile time rather than run time.

> Furthermore, when I am still in the exceptional situation, I can change 
> variable settings, define a function on the fly, return some value from 
> a yet undefined method by hand to see if it can make the rest of the 
> code work, and so on.

This is a good point.

E.
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <7shmb.19700$e01.37396@attbi_s02>
"Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
> Marshall Spight wrote:
> >
> >>In a statically typed language, when I write a test case that calls a
> >>specific method, I need to write at least one class that implements at
> >>least that method, otherwise the code won't compile.
> >>
> >>In a dynamically typed language I can concentrate on writing the test
> >>cases first and don't need to write dummy code to make some arbitrary
> >>static checker happy.
> >
> > This is a non-issue. In both cases, you need the implementing code
> > if you want to be able to run the testcase, and you don't need the
> > implementing code if you don't.
>
> No, in a dynamically typed language, I don't need the implementation to
> be able to run the testcase.

You need it to be able to run the testcase and have it succeed.
If you just want to fail with undefined method, that's exactly
the same as the compile error.


> Among other things:
>
> - the test cases can serve as a kind of todo-list. I run the testsuite,
> and it gives me an exception. This shows what portion of code I can work
> on next.

The compile errors also serve as a kind of todo list. I run the compiler,
and it gives me an error. This shows what portion of the code
I have to write next.


> - when a test case gives me an exception, I can inspect the runtime
> environment and analyze how far the test case got, what it already
> successfully did, what is missing, and maybe even why it is missing.
> With a statically typed language, I wouldn't be able to get that far.

Okay, so if you want to write testcases for two methods without
writing either, you have to stub in *both* methods and write
one before you can execute the testcases for one successfully.
You'd have to do this eventually anyway; the static compiler
will impose the requirement that you write stubs for the second
one before you execute the first. So I'd admit that the statically
typed language would put a tiny ordering on trivial tasks that
wouldn't otherwise be there.

(Aren't you supposed to write the method right after you
write the testcases, though?)

All those other things you've mentioned are also possible
for statically typed languages as well. (Inspect the runtime
environment, analyze how far, etc.)


> Furthermore, when I am still in the exceptional situation, I can change
> variable settings, define a function on the fly, return some value from
> a yet undefined method by hand to see if it can make the rest of the
> code work, and so on.

I'll acknowledge dynamic languages have an advantage in interactive
execution, which may be considerable.


Marshall
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnc8kr$pvi$2@f1node01.rhrz.uni-bonn.de>
Marshall Spight wrote:

> (Aren't you supposed to write the method right after you
> write the testcases, though?)

No.

>>Furthermore, when I am still in the exceptional situation, I can change
>>variable settings, define a function on the fly, return some value from
>>a yet undefined method by hand to see if it can make the rest of the
>>code work, and so on.
> 
> 
> I'll acknowledge dynamic languages have an advantage in interactive
> execution, which may be considerable.

Thank you.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: John Atwood
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bncgki$mji$1@cvpjaws03.dhcp.cv.hp.com>
Pascal Costanza  <········@web.de> wrote:

>- when a test case gives me an exception, I can inspect the runtime 
>environment and analyze how far the test case got, what it already 
>successfully did, what is missing, and maybe even why it is missing. 
>With a statically typed language, I wouldn't be able to get that far.
>
>Furthermore, when I am still in the exceptional situation, I can change 
>variable settings, define a function on the fly, return some value from 
>a yet undefined method by hand to see if it can make the rest of the 
>code work, and so on.

That's because you're in an interpreted environment, not because you're 
using a dynamically typed language.  Interpreters for statically typed 
languages allow the same.

John
From: Matthew Danish
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031025112411.GE1454@mapcar.org>
On Sat, Oct 25, 2003 at 12:37:38AM +0000, John Atwood wrote:
> Pascal Costanza  <········@web.de> wrote:
> 
> >- when a test case gives me an exception, I can inspect the runtime 
> >environment and analyze how far the test case got, what it already 
> >successfully did, what is missing, and maybe even why it is missing. 
> >With a statically typed language, I wouldn't be able to get that far.
> >
> >Furthermore, when I am still in the exceptional situation, I can change 
> >variable settings, define a function on the fly, return some value from 
> >a yet undefined method by hand to see if it can make the rest of the 
> >code work, and so on.
> 
> That's because you're in an interpreted environment, not because you're 
> using a dynamically typed language.  Interpreters for statically typed 
> languages allow the same.

Wrong on all counts.

* Most Common Lisp environments compile to native code, even when
  working interactively.
  SBCL, for example, has no interpreter whatsoever.  The interpreter is
  simulated by calling the compiler and evaluating the resulting
  function immediately.
* There exist statically typed language implementations which do the
  same (SML/NJ)
* The behavior of redefinition in a statically typed environment
  is far different from the behavior in a dynamically typed environment.
  For one thing, generativity of names kicks in, which makes it
  basically impossible to redefine types and functions without
  recompiling all uses (and thus restarting your program), in a static
  environment.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: John Atwood
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnmdso$p78$1@cvpjaws03.dhcp.cv.hp.com>
Matthew Danish  <·······@andrew.cmu.edu> wrote:
>On Sat, Oct 25, 2003 at 12:37:38AM +0000, John Atwood wrote:
>> Pascal Costanza  <········@web.de> wrote:
>> 
>> >- when a test case gives me an exception, I can inspect the runtime 
>> >environment and analyze how far the test case got, what it already 
>> >successfully did, what is missing, and maybe even why it is missing. 
>> >With a statically typed language, I wouldn't be able to get that far.
>> >
>> >Furthermore, when I am still in the exceptional situation, I can change 
>> >variable settings, define a function on the fly, return some value from 
>> >a yet undefined method by hand to see if it can make the rest of the 
>> >code work, and so on.
>> 
>> That's because you're in an interpreted environment, not because you're 
>> using a dynamically typed language.  Interpreters for statically typed 
>> languages allow the same.
>
>Wrong on all counts.
>
>* Most Common Lisp environments compile to native code, even when
>  working interactively.
>  SBCL, for example, has no interpreter whatsoever.  The interpreter is
>  simulated by calling the compiler and evaluating the resulting
>  function immediately.

If the code is executed in the environment, and one can execute
arbitrary snippets of code, it's an interpreted environment, 
regardless of whether the code executed is native, bytecode,
or other.

>* There exist statically typed language implementations which do the
>  same (SML/NJ)

Yes, these are among those I have in mind when I say "Interpreters for 
statically typed languages allow the same."

>* The behavior of redefinition in a statically typed environment
>  is far different from the behavior in a dynamically typed environment.
>  For one thing, generativity of names kicks in, which makes it
>  basically impossible to redefine types and functions without
>  recompiling all uses (and thus restarting your program), in a static
>  environment.

Yes, and that's a good thing. It prevents the program from getting into an 
unreachable/inconsistent state, and secondly, in an FP, especially a pure 
FP, with explicit state, one need not run the program from the start to 
test whatever code is of interest at the moment, because the state can be 
created via test cases.


John
From: Matthew Danish
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031028203458.GT1454@mapcar.org>
On Tue, Oct 28, 2003 at 06:52:08PM +0000, John Atwood wrote:
> Matthew Danish  <·······@andrew.cmu.edu> wrote:
> >* Most Common Lisp environments compile to native code, even when
> >  working interactively.
> >  SBCL, for example, has no interpreter whatsoever.  The interpreter is
> >  simulated by calling the compiler and evaluating the resulting
> >  function immediately.
> 
> If the code is executed in the environment, and one can execute
> arbitrary snippets of code, it's an interpreted environment, 
> regardless of whether the code executed is native, bytecode,
> or other.

So long as you are clear on the meaning.  For most people, calling
something an interpreted environment implies lack of compiler.

> >* The behavior of redefinition in a statically typed environment
> >  is far different from the behavior in a dynamically typed environment.
> >  For one thing, generativity of names kicks in, which makes it
> >  basically impossible to redefine types and functions without
> >  recompiling all uses (and thus restarting your program), in a static
> >  environment.
> 
> Yes, and that's a good thing. It prevents the program from getting into an 
> unreachable/inconsistent state, and secondly, in an FP, especially a pure 
> FP, with explicit state, one need not run the program from the start to 
> test whatever code is of interest at the moment, because the state can be 
> created via test cases.

And it also gets in the way of the flexibility traditionally associated
with dynamically typed languages.  It gets in the way of development and
debugging as well.  And as for pure FP, can you recreate network
connection state at will?  So that the other end doesn't even know
something went wrong (besides the delay)?

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnmsi6$l3u$1@newsreader2.netcologne.de>
John Atwood wrote:

>>* The behavior of redefinition in a statically typed environment
>> is far different from the behavior in a dynamically typed environment.
>> For one thing, generativity of names kicks in, which makes it
>> basically impossible to redefine types and functions without
>> recompiling all uses (and thus restarting your program), in a static
>> environment.
> 
> 
> Yes, and that's a good thing. It prevents the program from getting into an 
> unreachable/inconsistent state,

Oh dear, that argument again.

No, to repeat this for the nth time, that's not _generally_ a good 
thing. It also prevents the program from getting in a certain class of 
states that would still be reachable/consistent. So, in some situations 
it might be a _bad_ thing.


Pascal
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9d556c$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>In a statically typed language, when I write a test case that calls a 
>specific method, I need to write at least one class that implements at 
>least that method, otherwise the code won't compile.

No -- you don't need to implement the method.  You only need to declare it.

Even the need to declare it is really just a property of implementations,
not languages.

>Well, the research that ultimately led to the HotSpot Virtual Machine 
>originated in virtual machines for Smalltalk and for Self. Especially 
>Self is an "extremely" dynamic language, but they still managed to make 
>it execute reasonably fast.

Please correct me if I'm wrong, but as I understand it, iterating over a
collection of values is still going to require keeping some representation
of the type of each element around at runtime, and testing the type for
each element accessed, in case it is not the expected type.  AFAIK neither
HotSpot nor the Self compiler do the kind of optimizations which would
be needed to avoid that.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnjqr5$cbp$1@newsreader2.netcologne.de>
Fergus Henderson wrote:

> Pascal Costanza <········@web.de> writes:

>>Well, the research that ultimately led to the HotSpot Virtual Machine 
>>originated in virtual machines for Smalltalk and for Self. Especially 
>>Self is an "extremely" dynamic language, but they still managed to make 
>>it execute reasonably fast.
> 
> 
> Please correct me if I'm wrong, but as I understand it, iterating over a
> collection of values is still going to require keeping some representation
> of the type of each element around at runtime, and testing the type for
> each element accessed, in case it is not the expected type.  AFAIK neither
> HotSpot nor the Self compiler do the kind of optimizations which would
> be needed to avoid that.

You don't need to check the type on each access. If you only copy a 
value from one place to another, and both places are untyped, you don't 
need any check at all.

Furthermore, if I remember correctly, dynamically compiled systems use 
type inferencing at runtime to reduce the number of type checks.


Pascal
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9f8f49$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>Fergus Henderson wrote:
>
>> Pascal Costanza <········@web.de> writes:
>
>>>Well, the research that ultimately led to the HotSpot Virtual Machine 
>>>originated in virtual machines for Smalltalk and for Self. Especially 
>>>Self is an "extremely" dynamic language, but they still managed to make 
>>>it execute reasonably fast.
>> 
>> Please correct me if I'm wrong, but as I understand it, iterating over a
>> collection of values is still going to require keeping some representation
>> of the type of each element around at runtime, and testing the type for
>> each element accessed, in case it is not the expected type.  AFAIK neither
>> HotSpot nor the Self compiler do the kind of optimizations which would
>> be needed to avoid that.
>
>You don't need to check the type on each access. If you only copy a 
>value from one place to another, and both places are untyped, you don't 
>need any check at all.

Great.  I feel so much better now.  Now my type errors are free to
propagate throughout my program's data structures, so that when they
are finally detected, it may be far from the true source of the problem.

But the example that I was thinking of did actually want to access the
value, not just copy it.

>Furthermore, if I remember correctly, dynamically compiled systems use 
>type inferencing at runtime to reduce the number of type checks.

In cases such as the one described above, they may reduce the number of
times that the type of the _collection_ is checked, but they won't be
able to avoid checking the element type at every element access.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnojre$iqq$1@f1node01.rhrz.uni-bonn.de>
Fergus Henderson wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>Fergus Henderson wrote:
>>
>>
>>>Pascal Costanza <········@web.de> writes:
>>
>>>>Well, the research that ultimately led to the HotSpot Virtual Machine 
>>>>originated in virtual machines for Smalltalk and for Self. Especially 
>>>>Self is an "extremely" dynamic language, but they still managed to make 
>>>>it execute reasonably fast.
>>>
>>>Please correct me if I'm wrong, but as I understand it, iterating over a
>>>collection of values is still going to require keeping some representation
>>>of the type of each element around at runtime, and testing the type for
>>>each element accessed, in case it is not the expected type.  AFAIK neither
>>>HotSpot nor the Self compiler do the kind of optimizations which would
>>>be needed to avoid that.
>>
>>You don't need to check the type on each access. If you only copy a 
>>value from one place to another, and both places are untyped, you don't 
>>need any check at all.
> 
> 
> Great.  I feel so much better now.  Now my type errors are free to
> propagate throughout my program's data structures, so that when they
> are finally detected, it may be far from the true source of the problem.

Now, you have changed the topic from optimization to catching errors 
again. Could you please focus on what you want to talk about?

And guess what, "in 99% of all cases, such type errors don't occur in 
practice, at least not in my experience". ;-P (Sorry, couldn't resist. I 
sincerely hope you read this as a joke, and not as an attack. ;)

> But the example that I was thinking of did actually want to access the
> value, not just copy it.
> 
>>Furthermore, if I remember correctly, dynamically compiled systems use 
>>type inferencing at runtime to reduce the number of type checks.
> 
> In cases such as the one described above, they may reduce the number of
> times that the type of the _collection_ is checked, but they won't be
> able to avoid checking the element type at every element access.

Why? If the collection happens to contain only elements of a single type 
(or this type at most), you only need to check write accesses if they 
violate this condition. As long as they don't, you don't need to check 
read accesses.
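A minimal Python illustration of this write-checked scheme: the standard-library `array` type enforces the element type when values go *in*, so every value coming *out* is known to be a float without a per-read test.

```python
from array import array

xs = array('d', [1.0, 2.0, 3.0])   # homogeneous: every element is a float
xs.append(4.0)                     # write access: element type checked here

total = sum(xs)                    # reads need no per-element type check

try:
    xs.append("oops")              # a violating write is caught at the boundary
    rejected = False
except TypeError:
    rejected = True
```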

"In 99% of all cases, write accesses occur rarely, at least ..." - well, 
you know the game. ;)


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9fe9b1$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:
>Fergus Henderson wrote:
>>Pascal Costanza <········@web.de> writes:
>>>Fergus Henderson wrote:
>>>>Pascal Costanza <········@web.de> writes:
>>>Furthermore, if I remember correctly, dynamically compiled systems use 
>>>type inferencing at runtime to reduce the number of type checks.
>> 
>> In cases such as the one described above, they may reduce the number of
>> times that the type of the _collection_ is checked, but they won't be
>> able to avoid checking the element type at every element access.
>
>Why? If the collection happens to contain only elements of a single type 
>(or this type at most), you only need to check write accesses if they 
>violate this condition. As long as they don't, you don't need to check 
>read accesses.

So which, if any, implementations of dynamic languages actually perform such
optimizations?

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Lex Spoon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m38yn2guvc.fsf@logrus.dnsalias.net>
Fergus Henderson <···@cs.mu.oz.au> writes:
>>Why? If the collection happens to contain only elements of a single type 
>>(or this type at most), you only need to check write accesses if they 
>>violate this condition. As long as they don't, you don't need to check 
>>read accesses.
>
> So which, if any, implementations of dynamic languages actually perform such
> optimizations?


I'm sure every implementation does this "optimization", because it is
simply less code.  The only time you get a dynamic type error are:

       1. You try to call a method, but the object has no such method.

       2. You call a primitive function or method, and the primitive
       balks.  (This would include trying to write a string into an
       array-of-byte.)


I suppose you could add this one, which also applies to statically
typed languages:

      3. The code explicitly checks for type information.


If you are simply doing "x := y" then there is no checking required.


Regarding your earlier question, though, the great trick in Self was
to remember the result of a check and thus avoid doing it again
whenever possible.  If you do "y := x + 1", and you determine that "x"
is a floating point number, then you know that "y" will also be a
floating point number immediately afterwards.


This points to a general observation.  Dealing with the #1 style
dynamic type errors is a subset of dealing with dynamic dispatch in
general.  If you try to execute "x + 1" or "(foo data-structure)", you
will need to locate which "+" method or which branch of foo's case
statement to execute.  A dynamic type error means that you decide
to use method "typeError" or branch "type error".  Furthermore, any
optimizations that get rid of these dynamic lookups, will also get
rid of type checks just by their nature.  If "x + 1" always uses the
floating-point "+", then clearly it cannot ever use the "typeError" 
version of "+".
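A toy one-entry "inline cache" in the spirit of the Self trick described above -- remember the receiver class last seen at a call site and skip the generic lookup while it stays the same (a sketch, not how any particular VM implements it):

```python
def make_call_site(name):
    # One-entry cache: remembers the last receiver class and its method.
    cache_cls = cache_meth = None
    def call(obj, *args):
        nonlocal cache_cls, cache_meth
        if type(obj) is not cache_cls:              # slow path: generic lookup
            cache_cls = type(obj)
            cache_meth = getattr(cache_cls, name)   # AttributeError here is the "typeError" branch
        return cache_meth(obj, *args)               # fast path while the class repeats
    return call

plus = make_call_site("__add__")
print(plus(1.0, 1.0))   # 2.0; the second float call reuses the cached method
print(plus(2.0, 3.0))
```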


-Lex
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa26092$1@news.unimelb.edu.au>
Lex Spoon <···@cc.gatech.edu> writes:
>Fergus Henderson <···@cs.mu.oz.au> writes:
>>[someone wrote:]
>>>Why? If the collection happens to contain only elements of a single type 
>>>(or this type at most), you only need to check write accesses if they 
>>>violate this condition. As long as they don't, you don't need to check 
>>>read accesses.
>>
>> So which, if any, implementations of dynamic languages actually perform such
>> optimizations?
>
>I'm sure every implementation does this "optimization", because it is
>simply less code.

You're wrong.  I think you misunderstood what optimization I'm talking about.

>The only time you get a dynamic type error are:
>
>       1. You try to call a method, but the object has no such method.

Calling a method is a read access.  We were discussing optimizations that
ensure that you *don't* need to do a dynamic check for each read access.

>If you are simply doing "x := y" then there is no checking required.

Yes, we covered that already.  But that's not what is happening in
the scenario that I was describing.  The scenario that I'm describing is

	Collection c;

	...
	foreach x in c do
		use(x);

where use(x) might be a method call, a field access, or similar.
For example, perhaps the collection is a set of integers, and you
are computing their sum, so use(x) would be "sum += x".

I think that in these situations, dynamically typed language
implementations will do O(N) dynamic type checks, where N is the number
of elements in "c".  In theory it is possible to optimize these away,
but I don't know of any such implementations that do, and I would be
suprised if there are any (except perhaps in limited cases, e.g. when
the collection is an array or is implemented using an array).
the collection is an array or is implemented using an array).
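To make the scenario concrete, here is a rough Python rendering of the loop; the explicit isinstance() test below is just a stand-in for the per-element tag check that a dynamic implementation performs implicitly inside "sum += x" — one check per element, O(N) in total.

```python
# Sketch only: the explicit check makes visible the hidden per-element
# dynamic type test; a dynamic implementation does this implicitly.

def checked_sum(c):
    total = 0
    checks = 0
    for x in c:
        checks += 1                 # stands in for the hidden tag test
        if not isinstance(x, int):  # the dynamic "type error" branch
            raise TypeError("expected an int, got " + type(x).__name__)
        total += x
    return total, checks

total, n_checks = checked_sum([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
print(total, n_checks)  # 55 10
```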

>Regarding your earlier question, though, the great trick in Self was
>to remember the result of a check and thus avoid doing it again
>whenever possible.  If you do "y := x + 1", and you determine that "x"
>is a floating point number, then you know that "y" will also be a
>floating point number immediately afterwards.

Sure.  That helps an implementation avoid checking the type of the collection
"c" at every element access.  But it doesn't help the implementation avoid
checking the type of the element "x" at each iteration of the loop.

>This points to a general observation.  Dealing with the #1 style
>dynamic type errors is a subset of dealing with dynamic dispatch in
>general.  [....] any optimizations that get rid of these dynamic lookups,
>will also get rid of type checks just by their nature.

That's true.  But the difference between dynamically typed languages and
statically typed languages is that in dynamically typed languages, *every*
data access (other than just copying data around) involves a dynamic dispatch.
Sure, implementations can optimize a lot of them away.  But generally you're
still left with lots that your implementation can't optimize away, but which
would not be present in a statically typed language, such as the O(N)
dynamic type checks in the example above.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ism5ndam.fsf@comcast.net>
Fergus Henderson <···@cs.mu.oz.au> writes:

> But the difference between dynamically typed languages and
> statically typed languages is that in dynamically typed languages, *every*
> data access (other than just copying data around) involves a dynamic dispatch.
> Sure, implementations can optimize a lot of them away.  But generally you're
> still left lots that your implementation can't optimize away, but which
> would not be present in a statically typed language, such as the O(N)
> dynamic type checks in the example above.

That's what the type-checking hardware is for.
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa3b8a4$1@news.unimelb.edu.au>
·············@comcast.net writes:

>Fergus Henderson <···@cs.mu.oz.au> writes:
>
>> But the difference between dynamically typed languages and
>> statically typed languages is that in dynamically typed languages, *every*
>> data access (other than just copying data around) involves a dynamic dispatch.
>> Sure, implementations can optimize a lot of them away.  But generally you're
>> still left with lots that your implementation can't optimize away, but which
>> would not be present in a statically typed language, such as the O(N)
>> dynamic type checks in the example above.
>
>That's what the type-checking hardware is for.

Did you forget a smiley?

In case not: type-checking hardware has been tried already, and failed.

(Anyway, type-checking hardware would only solve part of the problem, I think.
Dynamic type checking imposes two costs: one is the time cost of performing
the checks, and the other is the locality cost due to the code size increase.
Type-checking hardware avoids the code size increases, but I don't think it
helps with the time cost of performing the checks.)

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <k76kp1o9.fsf@comcast.net>
Fergus Henderson <···@cs.mu.oz.au> writes:

> ·············@comcast.net writes:
>
>>Fergus Henderson <···@cs.mu.oz.au> writes:
>>
>>> But the difference between dynamically typed languages and
>>> statically typed languages is that in dynamically typed languages, *every*
>>> data access (other than just copying data around) involves a dynamic dispatch.
>>> Sure, implementations can optimize a lot of them away.  But generally you're
>>> still left with lots that your implementation can't optimize away, but which
>>> would not be present in a statically typed language, such as the O(N)
>>> dynamic type checks in the example above.
>>
>>That's what the type-checking hardware is for.
>
> Did you forget a smiley?

No, I never use smileys.

> In case not:  type-checking hardware has been tried already, and failed.

News to me.  I've used type checking hardware and it works like a charm.

> (Anyway, type-checking hardware would only solve part of the problem, I think.
> Dynamic type checking imposes two costs: one is the time cost of performing
> the checks, and the other is the locality cost due to the code size increase.
> Type-checking hardware avoids the code size increases, but I don't think it
> helps with the time cost of performing the checks.)

Actually, it works quite well at performing the checks.  In general,
type checking is much quicker than computation, and it can usually
be performed in parallel with the computation (you simply discard the
bogus result if the check fails).  You don't need very much hardware, either.
From: Greg Menke
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3ad7gowld.fsf@europa.pienet>
·············@comcast.net writes:

> Fergus Henderson <···@cs.mu.oz.au> writes:
> 
> > ·············@comcast.net writes:
> >
> >>Fergus Henderson <···@cs.mu.oz.au> writes:
> >>
> > (Anyway, type-checking hardware would only solve part of the problem, I think.
> > Dynamic type checking imposes two costs: one is the time cost of performing
> > the checks, and the other is the locality cost due to the code size increase.
> > Type-checking hardware avoids the code size increases, but I don't think it
> > helps with the time cost of performing the checks.)
> 
> Actually it works quite well with performing the checks.  In general,
> type checking is much quicker than computation, and in general it can
> be performed in parallel with computation (you simply discard the
> bogus result if it fails).  You don't need very much hardware, either.

As is also amply demonstrated by ECC hardware that operates right
alongside the memory; it's considerably easier than doing it in software.

Gregm
From: Adam Warner
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <pan.2003.10.31.14.29.00.223078@consulting.net.nz>
Hi Fergus Henderson,

> Yes, we covered that already.  But that's not what is happening in
> the scenario that I was describing.  The scenario that I'm describing is
> 
> 	Collection c;
> 
> 	...
> 	foreach x in c do
> 		use(x);
> 
> where use(x) might be a method call, a field access, or similar.
> For example, perhaps the collection is a set of integers, and you
> are computing their sum, so use(x) would be "sum += x".
> 
> I think that in these situations, dynamically typed language
> implementations will do O(N) dynamic type checks, where N is the number
> of elements in "c".  In theory it is possible to optimize these away,
> but I don't know of any such implementations that do, and I would be
> surprised if there are any (except perhaps in limited cases, e.g. when
> the collection is an array or is implemented using an array).

I've implemented a collection of integers as a list:

* (disassemble
    (compile nil
      (lambda ()
        (declare (optimize (safety 0)))
        (let ((c '(1 2 3 4 5 6 7 8 9 10)))
          (loop for x of-type (integer 1 10) in c
                sum x of-type fixnum)))))

; Compiling lambda nil:
; Compiling Top-Level Form:
 
482C1160:       .entry "lambda nil"()        ; (function nil fixnum)
      78:       pop     dword ptr [ebp-8]
      7B:       lea     esp, [ebp-32]
      7E:       xor     eax, eax             ; No-arg-parsing entry point
      80:       mov     ecx, [#x482C1158]    ; '(1 2 3 4 ...)
      86:       xor     edx, edx
      88:       jmp     L1
      8A: L0:   mov     eax, [ecx-3]
      8D:       mov     ecx, [ecx+1]
      90:       add     edx, eax
      92: L1:   cmp     ecx, #x2800000B      ; nil
      98:       jne     L0
      9A:       mov     ecx, [ebp-8]
      9D:       mov     eax, [ebp-4]
      A0:       add     ecx, 2
      A3:       mov     esp, ebp
      A5:       mov     ebp, eax
      A7:       jmp     ecx

No type checks, and 32-bit assembly arithmetic (though the type tag bits
are still present; most-positive-fixnum is 536,870,911).

>>Regarding your earlier question, though, the great trick in Self was
>>to remember the result of a check and thus avoid doing it again
>>whenever possible.  If you do "y := x + 1", and you determine that "x"
>>is a floating point number, then you know that "y" will also be a
>>floating point number immediately afterwards.
> 
> Sure.  That helps an implementation avoid checking the type of the collection
> "c" at every element access.  But it doesn't help the implementation avoid
> checking the type of the element "x" at each iteration of the loop.

Lisp provides standard ways to supply declarations. Some implementations
will trust those declarations in order to optimise the code at compile
time.

>>This points to a general observation.  Dealing with the #1 style
>>dynamic type errors is a subset of dealing with dynamic dispatch in
>>general.  [....] any optimizations that get rid of these dynamic lookups,
>>will also get rid of type checks just by their nature.
> 
> That's true.  But the difference between dynamically typed languages and
> statically typed languages is that in dynamically typed languages, *every*
> data access (other than just copying data around) involves a dynamic dispatch.
> Sure, implementations can optimize a lot of them away.  But generally you're
> still left lots that your implementation can't optimize away, but which
> would not be present in a statically typed language, such as the O(N)
> dynamic type checks in the example above.

Again, Lisp provides standard ways to supply declarations. Some
implementations will trust those declarations in order to optimise the
code at compile time. And use the declarations for type inference.

Followup-To is set to comp.lang.lisp.

Regards,
Adam
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa3b48a$1@news.unimelb.edu.au>
Adam Warner <······@consulting.net.nz> writes:

 >Hi Fergus Henderson,
 >
 >> Yes, we covered that already.  But that's not what is happening in
 >> the scenario that I was describing.  The scenario that I'm describing is
 >> 
 >> 	Collection c;
 >> 
 >> 	...
 >> 	foreach x in c do
 >> 		use(x);
 >> 
 >> where use(x) might be a method call, a field access, or similar.
 >> For example, perhaps the collection is a set of integers, and you
 >> are computing their sum, so use(x) would be "sum += x".
 >> 
 >> I think that in these situations, dynamically typed language
 >> implementations will do O(N) dynamic type checks, where N is the number
 >> of elements in "c".  In theory it is possible to optimize these away,
 >> but I don't know of any such implementations that do, and I would be
 >> surprised if there are any (except perhaps in limited cases, e.g. when
 >> the collection is an array or is implemented using an array).
 >
 >I've implemented a collection of integers as a list:
 >
 >* (disassemble
 >    (compile nil
 >      (lambda ()
 >        (declare (optimize (safety 0)))
 >        (let ((c '(1 2 3 4 5 6 7 8 9 10)))
 >          (loop for x of-type (integer 1 10) in c
 >                sum x of-type fixnum)))))
...
 >No type checks.
...
 >Lisp provides standard ways to supply declarations. Some implementations
 >will trust those declarations in order to optimise the code at compile
 >time.

That's completely different to the optimization that was being discussed.
Trusting the programmer's type declarations without actually checking them
properly, i.e. at the expense of safety, is a desperate measure.  It is not
at all the same thing as proving at compile time that the checks are not
needed.

I want my programs to run fast.  But I *don't* want this to have to come
at the expense of losing safety.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Adam Warner
Subject: Those who don't know Lisp are doomed to underestimate it [was Re: Python from Wise Guy's Viewpoint]
Date: 
Message-ID: <pan.2003.11.02.01.27.27.246542@consulting.net.nz>
Hi Fergus Henderson,

> Adam Warner <······@consulting.net.nz> writes:
> 
>  >Hi Fergus Henderson,
>  >
>  >> Yes, we covered that already.  But that's not what is happening in
>  >> the scenario that I was describing.  The scenario that I'm describing is
>  >> 
>  >> 	Collection c;
>  >> 
>  >> 	...
>  >> 	foreach x in c do
>  >> 		use(x);
>  >> 
>  >> where use(x) might be a method call, a field access, or similar.
>  >> For example, perhaps the collection is a set of integers, and you
>  >> are computing their sum, so use(x) would be "sum += x".
>  >> 
>  >> I think that in these situations, dynamically typed language
>  >> implementations will do O(N) dynamic type checks, where N is the number
>  >> of elements in "c".  In theory it is possible to optimize these away,
>  >> but I don't know of any such implementations that do, and I would be
>  >> surprised if there are any (except perhaps in limited cases, e.g. when
>  >> the collection is an array or is implemented using an array).
>  >
>  >I've implemented a collection of integers as a list:
>  >
>  >* (disassemble
>  >    (compile nil
>  >      (lambda ()
>  >        (declare (optimize (safety 0)))
>  >        (let ((c '(1 2 3 4 5 6 7 8 9 10)))
>  >          (loop for x of-type (integer 1 10) in c
>  >                sum x of-type fixnum)))))
> ...
>  >No type checks.
> ...
>  >Lisp provides standard ways to supply declarations. Some implementations
>  >will trust those declarations in order to optimise the code at compile
>  >time.
> 
> That's completely different to the optimization that was being discussed.
> Trusting the programmer's type declarations without actually checking them
> properly, i.e. at the expense of safety, is a desperate measure.  It is not
> at all the same thing as proving at compile time that the checks are not
> needed.
> 
> I want my programs to run fast.  But I *don't* want this to have to come
> at the expense of losing safety.

Fine. We will keep the collection of integers as a list, since that's a
little more ambitious than simply declaring an array of a particular type.
I'll replace the 10 in the list with 11 so that it violates a declaration
that every element is an integer between 1 and 10.

A little bit of macrology first because I don't want to explicitly specify
the integer type:[0]

(defmacro integer-list (&rest list)
  `(list ,@(loop for element in list collect `(the (integer 1 10) ,element))))

Now let's compile code similar to before:

* (compile nil (lambda ()
                 (declare (optimize (safety 0)))
                   (let ((c (integer-list 1 2 3 4 5 6 7 8 9 11)))
                     (loop for x in c sum x))))

; In: lambda nil

;   (integer-list 1 2 3 4 ...)
; ==>
;   (list (the # 1) (the # 2) (the # 3) (the # 4) ...)
; Note: The tenth argument never returns a value.
; 
; Compiling lambda nil: 

; In: lambda nil

;   (integer-list 1 2 3 4 ...)
; --> list 
; ==>
;   (the (integer 1 10) 11)
; Warning: This is not a (values (integer 1 10) &rest t):
;   11
; 
; Compiling Top-Level Form: 

; Compilation unit finished.
;   1 warning
;   1 note


#<Function "lambda nil" {48263539}>

The compiler detected that a type declaration was violated. We are
yet to run the program. Let's fix the program first and take a look
at its disassembly:

* (compile nil (lambda ()
                 (declare (optimize (safety 0)))
                   (let ((c (integer-list 1 2 3 4 5 6 7 8 9 10)))
                     (loop for x in c sum x))))
; Compiling lambda nil: 
; Compiling Top-Level Form: 

#<Function "lambda nil" {48281D61}>

This time it compiles without any warning messages. The disassembly
includes this call:

     E87:       call    #x100001C8           ; #x100001C8: generic-+

The compiler is not intelligent enough to deduce that integer
arithmetic could have been inlined (the length of the list is known at
compile time so a really intelligent compiler could infer that the sum
of the integers in the list must be an integer between 10 and 100).
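The range inference mentioned in parentheses is simple interval arithmetic; here is a tiny Python sketch of it, purely for illustration (the real work below is done in Lisp macro expansion): the sum of n values each in [lower, upper] must lie in [n*lower, n*upper], so a compiler can pick a machine integer type for the accumulator up front.

```python
# Illustrative sketch of compile-time range inference for a summation:
# n values, each within [lower, upper], sum to within [n*lower, n*upper].

def sum_range(lower, upper, n):
    """Interval containing the sum of n values, each in [lower, upper]."""
    return (lower * n, upper * n)

lo, hi = sum_range(1, 10, 10)
print(lo, hi)        # 10 100
print(hi < 2 ** 29)  # True -- fits comfortably in a CMUCL fixnum
```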

Let's assist the compiler by creating a macro that takes an integer
range declaration, a list of integer elements and sums those integer
elements at run time. We will first generalise integer-list so it can
take an arbitrary integer range.

(defmacro integer-list ((lower upper) &rest list)
  `(list ,@(loop for element in list
                 collect `(the (integer ,lower ,upper) ,element))))

We can now write our integer list as:
(integer-list (1 10) 1 2 3 4 5 6 7 8 9 10)

Here's a macro to sum those integer elements:

(defmacro sum-integer-list ((lower upper) &rest list)
  `(loop for element of-type (integer ,lower ,upper) in (integer-list (,lower ,upper) ,@list)
         sum element of-type (integer ,(* lower (length list)) ,(* upper (length list)))))

Let's look at the disassembly of a lambda expression containing
(sum-integer-list (1 10) 1 2 3 4 5 6 7 8 9 10) at zero safety:

* (disassemble
    (compile nil
      (lambda ()
        (declare (optimize (safety 0)))
        (sum-integer-list (1 10) 1 2 3 4 5 6 7 8 9 10))))
; Compiling lambda nil: 
; Compiling Top-Level Form: 

48934298:       .entry "lambda nil"()        ; (function nil (or # #))
     2B0:       pop     dword ptr [ebp-8]
     2B3:       lea     esp, [ebp-64]
     2B6:       mov     edx, 4               ; No-arg-parsing entry point
     2BB:       mov     edi, 8
     2C0:       mov     esi, 12
     2C5:       mov     dword ptr [ebp-16], 16
     2CC:       mov     dword ptr [ebp-20], 20
     2D3:       mov     dword ptr [ebp-24], 24
     2DA:       mov     dword ptr [ebp-28], 28
     2E1:       mov     dword ptr [ebp-32], 32
     2E8:       mov     dword ptr [ebp-36], 36
     2EF:       mov     dword ptr [ebp-40], 40
     2F6:       mov     byte ptr [#x28000204], 0 ; lisp::*pseudo-atomic-interrupted*
     2FD:       mov     byte ptr [#x280001EC], 4 ; lisp::*pseudo-atomic-atomic*
     304:       mov     ebx, 80
     309:       add     ebx, [#x28000534]    ; x86::*current-region-free-pointer*
     30F:       cmp     ebx, [#x2800054C]    ; x86::*current-region-end-addr*
     315:       jbe     L0
     317:       call    #xB0000018           ; #xB0000018: alloc_overflow_ebx
     31C: L0:   xchg    ebx, [#x28000534]    ; x86::*current-region-free-pointer*
     322:       lea     ebx, [ebx+3]
     325:       mov     eax, ebx
     327:       mov     [eax-3], edx
     32A:       add     eax, 8
     32D:       mov     [eax-7], eax
     330:       mov     [eax-3], edi
     333:       add     eax, 8
     336:       mov     [eax-7], eax
     339:       mov     [eax-3], esi
     33C:       add     eax, 8
     33F:       mov     [eax-7], eax
     342:       mov     ecx, [ebp-16]
     345:       mov     [eax-3], ecx
     348:       add     eax, 8
     34B:       mov     [eax-7], eax
     34E:       mov     ecx, [ebp-20]
     351:       mov     [eax-3], ecx
     354:       add     eax, 8
     357:       mov     [eax-7], eax
     35A:       mov     ecx, [ebp-24]
     35D:       mov     [eax-3], ecx
     360:       add     eax, 8
     363:       mov     [eax-7], eax
     366:       mov     ecx, [ebp-28]
     369:       mov     [eax-3], ecx
     36C:       add     eax, 8
     36F:       mov     [eax-7], eax
     372:       mov     ecx, [ebp-32]
     375:       mov     [eax-3], ecx
     378:       add     eax, 8
     37B:       mov     [eax-7], eax
     37E:       mov     ecx, [ebp-36]
     381:       mov     [eax-3], ecx
     384:       add     eax, 8
     387:       mov     [eax-7], eax
     38A:       mov     ecx, [ebp-40]
     38D:       mov     [eax-3], ecx
     390:       mov     dword ptr [eax+1], #x2800000B ; nil
     397:       mov     byte ptr [#x280001EC], 0 ; lisp::*pseudo-atomic-atomic*
     39E:       cmp     byte ptr [#x28000204], 0 ; lisp::*pseudo-atomic-interrupted*
     3A5:       jeq     L1
     3A7:       break   9                    ; Pending interrupt trap
     3A9: L1:   xor     eax, eax
     3AB:       mov     ecx, ebx
     3AD:       xor     edx, edx
     3AF:       jmp     L3
     3B1: L2:   mov     eax, [ecx-3]
     3B4:       mov     ecx, [ecx+1]
     3B7:       add     edx, eax
     3B9: L3:   cmp     ecx, #x2800000B      ; nil
     3BF:       jne     L2
     3C1:       mov     ecx, [ebp-8]
     3C4:       mov     eax, [ebp-4]
     3C7:       add     ecx, 2
     3CA:       mov     esp, ebp
     3CC:       mov     ebp, eax
     3CE:       jmp     ecx

The loop has been unrolled. There are nine integer assembly adds
needed to add up ten integers.

This has not come at the expense of safety, since the compiler will warn
if a declaration has been violated. Moreover, the compiler will generate
appropriate code for integers of arbitrary size. Sure, at a safety of
zero we can force a run-time error with incorrect declarations, but the
warning is available _at compile time_:

* (funcall (compile nil (lambda ()
                          (declare (optimize (safety 0)))
                          (sum-integer-list (0 10) 1 9999999999))))

; In: lambda nil

;   (sum-integer-list (0 10) 1 9999999999)
; --> loop block let integer-list 
; ==>
;   (list (the # 1) (the # 9999999999))
; Note: The second argument never returns a value.
; 
; Compiling lambda nil: 

; In: lambda nil

;   (sum-integer-list (0 10) 1 9999999999)
; --> loop block let integer-list list 
; ==>
;   (the (integer 0 10) 9999999999)
; Warning: This is not a (values (mod 11) &rest t):
;   9999999999
; 
; Compiling Top-Level Form: 

; Compilation unit finished.
;   1 warning
;   1 note


(#<Unknown Immediate Object, lowtag=#b10, type=#x2 {2}> . 0)


Let's fix the code:
* (funcall (compile nil (lambda ()
                          (declare (optimize (safety 0)))
                          (sum-integer-list (0 10000000000) 1 9999999999))))
; Compiling lambda nil: 
; Compiling Top-Level Form: 

10000000000

If we disassemble this code we'll find it performs generic arithmetic. 

Regards,
Adam

[0] Here's the macroexpansion of (integer-list 1 2 3 4 5 6 7 8 9 11):

(list (the (integer 1 10) 1)
      (the (integer 1 10) 2)
      (the (integer 1 10) 3)
      (the (integer 1 10) 4)
      (the (integer 1 10) 5)
      (the (integer 1 10) 6)
      (the (integer 1 10) 7)
      (the (integer 1 10) 8)
      (the (integer 1 10) 9)
      (the (integer 1 10) 11))
From: Fergus Henderson
Subject: Re: Those who don't know Lisp are doomed to underestimate it [was Re: Python from Wise Guy's Viewpoint]
Date: 
Message-ID: <3fa4737c$1@news.unimelb.edu.au>
Adam Warner <······@consulting.net.nz> writes:

>                   (let ((c (integer-list 1 2 3 4 5 6 7 8 9 10)))
>                     (loop for x in c sum x))))

This is not a realistic benchmark, because the entire input is visible to
the compiler -- there is really nothing which is unknown at compile time,
so the entire computation can be evaluated at compile time.  The Mercury
compiler can optimize the equivalent code into just returning 55.
A compiler which generates more than a couple of instructions for that
code is doing a bad job.

To make it a more realistic benchmark, the collection should be passed in as
a parameter.  The code which is being compiled should not assume anything
about the number of elements in the collection.  It should be free to assume
that the elements and their sum will fit in a reasonable-sized range
(say up to 10,000,000), but it should not make any _unchecked_ assumptions
about the collection elements if they would compromise safety.

...
>This time it compiles without any warning messages. The disassembly
>includes this call:
>
>     E87:       call    #x100001C8           ; #x100001C8: generic-+

And inside that call, there is a dynamic type check.  Just as I said.

>Let's assist the compiler by creating a macro that takes an integer
>range declaration, a list of integer elements and sums those integer
>elements at run time.

Oh, so it turns out that if we want our code to run fast, we need _more_
type declarations in a dynamically typed language than we need in a
statically typed language?

>Let's look at the disassembly of a lambda expression containing
>(sum-integer-list (1 10) 1 2 3 4 5 6 7 8 9 10) at zero safety:

I'm not convinced.  Again, the compiler ought to be able to evaluate
that completely at compile time.  Pick an example where the collection
is passed in as a parameter (which in turn is read from the user
or from a file).

>The loop has been unrolled. There are nine integer assembly adds
>needed to add up ten integers.

There are also a lot of other unnecessary instructions there.

>This has not come at the expense of safety since the compiler will warn
>if a declaration has been violated.

Let me see if I understand this.  Is it guaranteed that the compiler
will *always* warn if a declaration might be violated?

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Adam Warner
Subject: Re: Those who don't know Lisp are doomed to underestimate it [was Re: Python from Wise Guy's Viewpoint]
Date: 
Message-ID: <pan.2003.11.02.04.15.56.808733@consulting.net.nz>
Hi Fergus Henderson,

> Adam Warner <······@consulting.net.nz> writes:
> 
>>                   (let ((c (integer-list 1 2 3 4 5 6 7 8 9 10)))
>>                     (loop for x in c sum x))))
> 
> This is not a realistic benchmark, because the entire input is visible to
> the compiler -- there is really nothing which is unknown at compile time,
> so the entire computation can be evaluated at compile time.  The Mercury
> compiler can optimize the equivalent code into just returning 55.
> A compiler which generates more than a couple of instructions for that
> code is doing a bad job.

Whatever. I was responding to "Trusting the programmer's type declarations
without actually checking them properly, i.e. at the expense of safety, is
a desperate measure.  It is not at all the same thing as proving at
compile time that the checks are not needed."

> To make it a more realistic benchmark, the collection should be passed in as
> a parameter.  The code which is being compiled should not assume anything
> about the number of elements in the collection.  It should be free to assume
> that the elements and their sum will fit in a reasonable-sized range
> (say up to 10,000,000), but it should not make any _unchecked_ assumptions
> about the collection elements if they would compromise safety.

No. Let's see your code which takes a list of integers where "there is
really nothing which is unknown at compile time", where the compiler checks
that every integer in this list is between 1 and 10 and then proceeds to
sum the integers using the appropriate type for the summation.

You cannot assume that the sum will be smaller than any finite sized
integer because the list in the source code is of arbitrary length.

You cannot leave checking that each integer is between 1 and 10 to run
time because this is known compile-time type information.

(big snip)

> Pick an example where the collection is passed in as a parameter (which
> in turn is read from the user or from a file).

If the integers are entered at run time, how can a compile-time check be
made that they are all between 1 and 10? Before you redefine the problem,
let's see your code.

(snip)

>>This has not come at the expense of safety since the compiler will warn
>>if a declaration has been violated.
> 
> Let me see if I understand this.  Is it guaranteed that the compiler
> will *always* warn if a declaration might be violated?

It's implementation dependent.

Regards,
Adam
From: Fergus Henderson
Subject: Re: Those who don't know Lisp are doomed to underestimate it [was Re: Python from Wise Guy's Viewpoint]
Date: 
Message-ID: <3fa51991$1@news.unimelb.edu.au>
Adam Warner <······@consulting.net.nz> writes:

>No. Let's see your code which takes a list of integers where "there is
>really nothing which is unknown at compile time", the compiler checks that
>every integer in this list is between 1 and 10 and then proceeds to sum
>the integers using the appropriate type for the summation.

You mean my Mercury version of the unrealistic benchmark where all the
input is visible?

	:- module x.
	:- interface.
	:- import_module io.
	:- pred main(io::di, io::uo) is det.

	:- implementation.
	:- import_module io, list, int.

	sum_list(List) = foldl(func(X,A) = X + A, List, 0).

	main -->

If you compile with the appropriate optimization options, the Mercury
compiler evaluates the call to sum_list at compile time, generating
the same code for that as for

	main --> print(55), nl.

Or did you mean a Mercury version of a more realistic benchmark?
If so, here it is:
	
	main -->
		read(L),
		(if { L = ok(List) } then
			print(sum_list(List)), nl
		else
		 	print("invalid input\n")
		).

Notice that other than the boilerplate declaration for "main",
there are no explicit type declarations needed at all.

>You cannot assume that the sum will be smaller than any finite sized
>integer because the list in the source code is of arbitrary length.

That's not correct.  The code runs on a machine with finite address space.
If the inputs are 32-bit, then I can be sure that their sum will fit in
64 bits, for example.

However, that said, my code is making a potentially unsafe assumption
about it being OK to calculate the sum mod 2^32.  But I can live with that.
I'm willing to assume that the sum of the inputs will fit in 32 bits.
If I didn't want to make that assumption, I could use a 64-bit integer
type instead.  But in practice, this is the sort of assumption that I
am often willing to make.
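For what it's worth, the worst case is easy to bound with a quick arithmetic check (Python here purely as a calculator): even a full 32-bit address space holds at most 2^32 elements, each at most 2^32 - 1, so their sum is below 2^64 and fits in a 64-bit integer.

```python
# Sanity check of the bound: 2^32 elements * (2^32 - 1) max value < 2^64.

MAX_ELEMS = 2 ** 32          # upper bound on list length (address space)
MAX_VALUE = 2 ** 32 - 1      # largest unsigned 32-bit input
worst_case_sum = MAX_ELEMS * MAX_VALUE
print(worst_case_sum < 2 ** 64)  # True
```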

>You cannot leave checking that each integer is between 1 and 10 to run
>time because this is known compile-time type information.

I don't know what you're talking about here.
I'm not trying to check that integers are between 1 and 10.
They might be much larger than that -- the program needs to be
able to handle individual inputs up to at least say 10 million,
and whose total sum might be up to say 100 million.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Adam Warner
Subject: Re: Those who don't know Lisp are doomed to underestimate it [was Re: Python from Wise Guy's Viewpoint]
Date: 
Message-ID: <pan.2003.11.02.23.21.37.762485@consulting.net.nz>
Hi Fergus Henderson,

> Adam Warner <······@consulting.net.nz> writes:
> 
>>No. Let's see your code which takes a list of integers where "there is
>>really nothing which is unknown at compile time", the compiler checks that
>>every integer in this list is between 1 and 10 and then proceeds to sum
>>the integers using the appropriate type for the summation.
> 
> You mean my Mercury version of the unrealistic benchmark where all the
> input is visible?
> 
> 	:- module x.
> 	:- interface.
> 	:- import_module io.
> 	:- pred main(io::di, io::uo) is det.
> 
> 	:- implementation.
> 	:- import_module io, list, int.
> 
> 	sum_list(List) = foldl(func(X,A) = X + A, List, 0).
> 
> 	main --> print(sum_list([1,2,3,4,5,6,7,8,9,10])), nl.
> 
> If you compile with the appropriate optimization options, the Mercury
> compiler evaluates the call to sum_list at compile time, generating
> the same code for that as for
> 
> 	main --> print(55), nl.

Whatever. Lisp can do anything before run time and I consciously made the
program perform its computations at run time. If I wrote this:

(let ((list '(1 2 3 4 5 6 7 8 9 10)))
  `(print ,(loop for element in list sum element)))

Then the compiler would only see the form (print 55).

It wasn't a benchmark. It was a demonstration of compile-time type checking
that you are still unable to replicate. The disassemblies were necessary
to show when run time type checks were avoided.

> Or did you mean a Mercury version of a more realistic benchmark?
> If so, here it is:
> 	
> 	main -->
> 		read(L),
> 		(if { L = ok(List) } then
> 			print(sum_list(List)), nl
> 		else
> 		 	print("invalid input\n")
> 		).
> 
> Notice that other than the boilerplate declaration for "main",
> there are no explicit type declarations needed at all.
> 
>>You cannot assume that the sum will be smaller than any finite sized
>>integer because the list in the source code is of arbitrary length.
> 
> That's not correct.  The code runs on a machine with finite address space.
> If the inputs are 32-bit, then I can be sure that their sum will fit in
> 64 bits, for example.
> 
> However, that said, my code is making a potentially unsafe assumption
> about it being OK to calculate the sum mod 2^32.  But I can live with that.
> I'm willing to assume that the sum of the inputs will fit in 32 bits.
> If I didn't want to make that assumption, I could use a 64-bit integer
> type instead.  But in practice, this is the sort of assumption that I
> am often willing to make.

I created code that checked a specification for every input element in the
list at compile time and appropriate code to sum it regardless of the bit
size of the underlying architecture. So far your static language has
wimped out by making unsafe assumptions and not checking the entire
specification at compile time.

>>You cannot leave checking that each integer is between 1 and 10 to run
>>time because this is known compile-time type information.
> 
> I don't know what you're talking about here.
> I'm not trying to check that integers are between 1 and 10.
> They might be much larger than that -- the program needs to be
> able to handle individual inputs up to at least say 10 million,
> and whose total sum might be up to say 100 million.

In my original code that removed run time type checks I promised the
compiler that every integer in the summation would be between 1 and 10
inclusive. This was part of my specification (a range with some kind of
real world significance). You rightly challenged my potentially unsafe
assumption so I added type checks to every data element.

Real world data will not nicely correspond with the size of a machine
integer. If I code a list of the number of days people work per week
then I should be able to tell the compiler to check that every element in
that list is an integer between 0 and 7. Checking that every element in
the list is a signed or unsigned 32-bit integer is virtually worthless.
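At run time, that per-element check is a few lines in any dynamic
language; here is a Python sketch (the function name is made up):

```python
def check_days_per_week(values):
    # Every element must be an integer between 0 and 7 inclusive,
    # i.e. a plausible number of days worked per week.
    for v in values:
        if not (isinstance(v, int) and 0 <= v <= 7):
            raise TypeError(f"{v!r} is not an integer between 0 and 7")
    return values
```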

Now in most cases I'd be happy to leave this checking to run time. But in
this case I decided to perform the type checking at compile time. Why
can't you?

Regards,
Adam
From: Adam Warner
Subject: Re: Those who don't know Mercury are doomed to underestimate it
Date: 
Message-ID: <pan.2003.11.04.03.10.46.318170@consulting.net.nz>
Hi Fergus Henderson,

         ``Invent a witty saying, and be eternally famous.''
                                                ---Anonymous

> But I can!  I just didn't realize what you were asking for.  If that is
> all you want, you just need to add a single declaration to the program
> given above:
> 
> 	:- mode sum_list(in(list_skel(bound(0;1;2;3;4;5;6;7)))) = out.
> 
> The Mercury compiler will then verify that the argument passed to sum_list
> is known statically to be a list containing only ints in the range 0-7,
> and will report a compile error if it is not.  In this case, since the
> list contains the elements 8,9,10, it will report an error.  If you remove
> those elements, it will compile and execute fine (with no dynamic type tests).
> 
> P.S. In practice, you'd probably want to use a bit more abstraction,
> e.g.
> 
> 	:- inst day_range ---> 0;1;2;3;4;5;6;7.
> 	:- mode sum_list(in(list_skel(day_range))) = out.
> 
> and probably also doing it by defining a separate sum_day_list function
> rather than putting this range constraint on sum_list.  You might also
> want to have a separate type for day_range.

Wonderful! Am I right in guessing that you didn't need to enumerate every
integer (e.g. what's the syntax for any integer between 1 and 1,000,000)?
Is compile time/memory usage constant if the permitted integer range is
larger?

Here's a little more complicated separate type. The list must contain 53
elements. The first 52 elements are integers between 0 and 7 inclusive.
The 53rd element is an integer between 0 and 1 inclusive if there are 365
days in the year or 0 and 2 inclusive if there are 366 days in the year.
The number of days in the year is supplied as part of the separate type,
and only 365 or 366 is valid.

Here's my implementation of that type definition in Common Lisp:

(deftype days-week (days-year)
  (labels ((recursive-cons (days-year i)
             (if (zerop i)
                 `(cons ,(if (= days-year 365)
                             '(integer 0 1)
                             '(integer 0 2)) null)
                 `(cons (integer 0 7) ,(recursive-cons days-year (1- i))))))
    (if (not (and (integerp days-year)
                  (or (= days-year 365) (= days-year 366))))
        (error "Incorrect days of the year.")
        (recursive-cons days-year 52))))
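The same constraint can be expressed as a run-time predicate in Python
(a sketch with made-up names; unlike the DEFTYPE above, nothing here
happens at compile time):

```python
def make_days_week_check(days_year):
    # Build a checker for a 53-element list: 52 elements in 0..7, plus
    # a final element in 0..1 (365-day year) or 0..2 (366-day year).
    if days_year not in (365, 366):
        raise ValueError("Incorrect days of the year.")
    last_max = 1 if days_year == 365 else 2
    def check(values):
        return (len(values) == 53
                and all(isinstance(v, int) and 0 <= v <= 7
                        for v in values[:52])
                and isinstance(values[52], int)
                and 0 <= values[52] <= last_max)
    return check
```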

SBCL aggressively compiles everything (it can't do anything else :-):

* (defun foo () (the (days-week 366) '(5 7 7 0 0 3 5 4 3 4 1 5 7 7 2 6 6
 0 4 2 2 2 6 5 4 2 5 6 7 2 2 7 6 0 6 6 0 0 6 2 2 5 2 7 6 3 7 4 7 2 7 3 2)))

* (foo)

(5 7 7 0 0 3 5 4 3 4 1 5 7 7 2 6 6 0 4 2 2 2 6 5 4 2 5 6 7 2 2 7 6 0 6 6 0 0 6
 2 2 5 2 7 6 3 7 4 7 2 7 3 2)

* (defun foo () (the (days-week 364) '(5 7 7 0 0 3 5 4 3 4 1 5 7 7 2 6 6
 0 4 2 2 2 6 5 4 2 5 6 7 2 2 7 6 0 6 6 0 0 6 2 2 5 2 7 6 3 7 4 7 2 7 3 2)))
; in: lambda nil
;     (THE (DAYS-WEEK 364) '(5 7 7 0 0 3 5 4 3 4 ...))
; 
; caught error:
;   Incorrect days of the year.
; compilation unit finished
;   caught 1 ERROR condition

* (defun foo () (the (days-week 365) '(5 7 7 0 0 3 5 4 3 4 1 5 7 7 2 6 6
 0 4 2 2 2 6 5 4 2 5 6 7 2 2 7 6 0 6 6 0 0 6 2 2 5 2 7 6 3 7 4 7 2 7 3 2)))
; in: lambda nil
;     (THE (DAYS-WEEK 365) '(5 7 7 0 0 3 5 4 3 4 ...))
; 
; note: deleting unreachable code
; compilation unit finished
;   printed 1 note

This compile-time message could use improvement. We only know some code
was deleted within the function (because the final element in the list was
not an integer between 0 and 1 inclusive).

* (defun foo () (the (days-week 366) '(5 7 7 0 0 3 5 4 3 4 5 7 7 2 6 6
 0 4 2 2 2 6 5 4 2 5 6 7 2 2 7 6 0 6 6 0 0 6 2 2 5 2 7 6 3 7 4 7 2 7 3 2)))
; in: lambda nil
;     (THE (DAYS-WEEK 366) '(5 7 7 0 0 3 5 4 3 4 ...))
; 
; note: deleting unreachable code
; compilation unit finished
;   printed 1 note

Again not as informative as it could be. The list is missing an integer.
We've only received a hint that there will be a run time error:

* (foo)

debugger invoked on condition of type TYPE-ERROR in thread 1173:
  The value (5 7 7 0 0 3 5 ...)
  is not of type
    (CONS (UNSIGNED-BYTE 3) (CONS (UNSIGNED-BYTE 3) (CONS # #))).

Within the debugger, you can type HELP for help. At any command prompt (within
the debugger or not) you can type (SB-EXT:QUIT) to terminate the SBCL
executable. The condition which caused the debugger to be entered is bound to
*DEBUG-CONDITION*. You can suppress this message by clearing
*DEBUG-BEGINNER-HELP-P*.

restarts (invokable by number or by possibly-abbreviated name):
  0: [ABORT   ] Reduce debugger level (leaving debugger, returning to toplevel).
  1: [TOPLEVEL] Restart at toplevel READ/EVAL/PRINT loop.
(FOO)
source: (THE (DAYS-WEEK 366) '(5 7 7 0 0 3 5 ...))


Now a polemic:

It is very likely that no Common Lisp implementation has sufficiently
intelligent compile-time type checking (the compiler would ideally 
produce an error in all three cases). Also type inference is somewhat
lacking. But this is an issue of resources, not what is possible.

In many areas Lisp is still the state of the art. Yet far more resources
are devoted to fields of research that build upon clearly inferior
concepts. There are infamous cases where Lisp research has been abandoned
to work on C++. Funding agencies may require these types of decisions.

It is likely there is more institutional support for many other languages
than is currently available for improving Common Lisp design and
implementation. And I would be OK with this so long as all the researchers
of other languages (and I am _not_ singling out Mercury) were working on
areas of research that have the potential to improve the state of the art.

Trying to improve the state of the art is a perilous journey that is
likely to fail. Computer science research is far more likely to succeed,
and researchers far more likely to be rewarded, if they can dress up
solved problems in new clothes. One popular way to do this is by devoting
significant resources researching how to overcome design deficiencies in
the de facto languages of the day. Locally the state of the art improves
because the languages acquire new features and extensions. But science
doesn't progress. To a similar extent designing new languages appears to
be an excessively popular activity.

Regards,
Adam
From: Raffael Cavallaro
Subject: Re: Those who don't know Mercury are doomed to underestimate it
Date: 
Message-ID: <aeb7ff58.0311041444.76975c14@posting.google.com>
Adam Warner <······@consulting.net.nz> wrote in message news:<······························@consulting.net.nz>...

> Trying to improve the state of the art is a perilous journey that is
> likely to fail. Computer science research is far more likely to succeed,
> and researchers far more likely to be rewarded, if they can dress up
> solved problems in new clothes. One popular way to do this is by devoting
> significant resources researching how to overcome design deficiencies in
> the de facto languages of the day. Locally the state of the art improves
> because the languages acquire new features and extensions. But science
> doesn't progress. To a similar extent designing new languages appears to
> be an excessively popular activity.

Just to provide some feedback lest you think you are alone in this
thought, I find your analysis very insightful. A bit depressing, but
on the money nevertheless.

I guess a corollary of your analysis is that truly brave grad students
who really want to advance the field should be working on extensions
to common lisp.

Raf

P.S. on a totally unrelated note, why do a disproportionate number of
lispers/dynamic language advocates seem to hail from New Zealand?
From: Adam Warner
Subject: Re: Those who don't know Lisp are doomed to underestimate it [was Re: Python from Wise Guy's Viewpoint]
Date: 
Message-ID: <pan.2003.11.02.01.46.34.962080@consulting.net.nz>
> The compiler is not intelligent enough to deduce that integer arithmetic
> could have been inlined (the length of the list is known at compile time
> so a really intelligent compiler could infer that the sum of the
> integers in the list must be an integer between 10 and 100).

By the way, the compiler could not have inferred this without
whole-program analysis because I omitted creating a literal list object! A
non-literal list (created using LIST) can be destructively modified at any
time.

Regards,
Adam
From: Lex Spoon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3znfdgdu3.fsf@logrus.dnsalias.net>
Fergus Henderson <···@cs.mu.oz.au> writes:
>>If you are simply doing "x := y" then there is no checking required.
>
> Yes, we covered that already.  But that's not what is happening in
> the scenario that I was describing.  The scenario that I'm describing is
>
> 	Collection c;
>
> 	...
> 	foreach x in c do
> 		use(x);
>
> where use(x) might be a method call, a field access, or similar.
> For example, perhaps the collection is a set of integers, and you
> are computing their sum, so use(x) would be "sum += x".

I see.  "sum += x" would indeed tend to cause a lot of checks, but
then again the checks might well end up costing 0 overall CPU cycles.
The general technique of optimizing methods for common types, plus the
likelihood that a CPU will have multiple functional units, can make a
big difference.

Also, keep in mind that if this is a performance critical blotch of
code, then the programmer has the option of making "c" be a
specialized array or matrix type.


-Lex
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa76d99$1@news.unimelb.edu.au>
Lex Spoon <···@cc.gatech.edu> writes:

>Fergus Henderson <···@cs.mu.oz.au> writes:
>>>If you are simply doing "x := y" then there is no checking required.
>>
>> Yes, we covered that already.  But that's not what is happening in
>> the scenario that I was describing.  The scenario that I'm describing is
>>
>> 	Collection c;
>>
>> 	...
>> 	foreach x in c do
>> 		use(x);
>>
>> where use(x) might be a method call, a field access, or similar.
>> For example, perhaps the collection is a set of integers, and you
>> are computing their sum, so use(x) would be "sum += x".
>
>I see.  "sum += x" would indeed tend to cause a lot of checks, but
>then again the checks might well end up costing 0 overall CPU cycles.

Occasionally, perhaps.  But I think that would be rare.

>The general technique of optimizing methods for common types, plus the
>likelihood that a CPU will have multiple functional units, can make a
>big difference.

Even with an unlimited number of functional units, there's still the
extra CPU cycles to wait for instruction cache misses or page faults
caused by the greater code size.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <%9gmb.19137$Tr4.40139@attbi_s03>
"Joachim Durchholz" <·················@web.de> wrote in message ·················@news.oberberg.net...
>
> A test suite can never catch all permutations of data that may occur (on
> a modern processor, you can't even check the increment-by-one operation
> with that, the universe will end before the CPU has counted even half of
> the full range).

Just to be pedantic, there are some circumstances where this is
possible. For example, it is quite possible to construct a test suite
that will exhaustively test the boolean "or" operator. There are
exactly four test cases, so that's not too bad.

It's worth mentioning this because it points out what you
have to do for a unit test suite to provide the degree of
coverage that any theorem-proving based system does:
you have to check the entire set of inputs for the function.
Sometimes I run into unit test boosters who feel that
they're provably correct when they have a test case
for every code path. But you'd have every code path
tested with just one test case for the "or" example,
whereas you need fully 4 test cases before you're
provably correct.
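That exhaustive suite is small enough to write out in full (a Python
sketch; my_or is a stand-in for the implementation under test):

```python
from itertools import product

def my_or(a, b):
    # Stand-in implementation under test.
    return a or b

# A single call would already cover every code path, but provable
# correctness over booleans needs all four input combinations.
for a, b in product([False, True], repeat=2):
    assert my_or(a, b) == (a | b)
```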

Hmmm. Should my test suite for "or" include
passing it strings and ints and checking to be
sure it gives an exception?


Marshall
From: Ralph Becket
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3638acfd.0310232246.64fa09ed@posting.google.com>
Pascal Costanza <········@web.de> wrote in message news:<············@newsreader2.netcologne.de>...
> Ralph Becket wrote:
> > This is utterly bogus.  If you write unit tests beforehand, you are 
> > already pre-specifying the interface that the code to be tested will 
> > present.
> > 
> > I fail to see how dynamic typing can confer any kind of advantage here.
> 
> Read the literature on XP.

What, all of it?

Why not just enlighten me as to the error you see in my contention 
about writing unit tests beforehand?

> > Are you seriously claiming that concise, *automatically checked* 
> > documentation (which is one function served by explicit type 
> > declarations) is inferior to unchecked, ad hoc commenting?
> 
> I am sorry, but in my book, assertions are automatically checked.

*But* they are not required.
*And* if they are present, they can only flag a problem at runtime.
*And* then at only a single site.

> > For one thing, type declarations *cannot* become out-of-date (as
> > comments can and often do) because a discrepancy between type
> > declaration and definition will be immediately flagged by the compiler.
> 
> The same holds for assertions as soon as they are run by the test suite.

That is not true unless your test suite is bit-wise exhaustive.

> > I don't think you understand much about language implementation.
> 
> ...and I don't think you understand much about dynamic compilation. Have 
> you ever checked some not-so-recent-anymore work about, say, the HotSpot 
> virtual machine?

Feedback directed optimisation and dynamic FDO (if that is what you
are suggesting is an advantage of HotSpot) are an implementation
technology and hence orthogonal to the language being compiled.

On the other hand, if you are not referring to FDO, it's not clear
to me what relevance HotSpot has to the point under discussion.

> > A strong, expressive, static type system provides for optimisations
> > that cannot be done any other way.  These optimizations alone can be
> > expected to make a program several times faster.  For example:
> 
> You are only talking about micro-efficiency here. I don't care about 
> that, my machine is fast enough for a decent dynamically typed language.

Speedups (and resource consumption reduction in general) by (in many 
cases) a factor of two or more constitute "micro-efficiency"?

> > On top of all that, you can still run your code through the profiler, 
> > although the need for hand-tuned optimization (and consequent code
> > obfuscation) may be completely obviated by the speed advantage 
> > conferred by the compiler exploiting a statically checked type system.
> 
> Have you checked this?

Do you mean have I used a profiler to search for bottlenecks in programs
in a statically type checked language?  Then the answer is yes.

Or do you mean have I observed a significant speedup when porting from
C# or Python to Mercury?  Again the answer is yes.

> Weak and dynamic typing is not the same thing.

Let us try to draw some lines and see if we can agree on *something*.

UNTYPED: values in the language are just bit patterns and all
operations, primitive or otherwise, simply twiddle the bits
that come their way.

DYNAMICALLY TYPED: values in the language carry type identifiers, but
any value can be passed to any function.  Some built-in functions will
raise an exception if the type identifiers attached to their arguments 
are of the wrong sort.  Such errors can only be identified at runtime.

STATICALLY TYPED: the compiler carries out a proof that no value of the
wrong type will ever be passed to a function expecting a different type,
anywhere in the program.  (Note that with the addition of a universal
type and a checked runtime dynamic cast operator, one can add dynamically
typed facilities to a statically typed language.)
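That checked runtime dynamic cast can be sketched in Python, where every
value already carries a type tag (the function name is made up):

```python
def checked_cast(value, expected_type):
    # Runtime-checked downcast: succeed if the carried type tag
    # matches, raise otherwise -- the dynamic escape hatch that can be
    # bolted onto a statically typed language via a universal type.
    if not isinstance(value, expected_type):
        raise TypeError(
            f"expected {expected_type.__name__}, "
            f"got {type(value).__name__}")
    return value
```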

The difference between an untyped program that doesn't work (it produces
the wrong answer) and a dynamically typed program with a type bug (it
may throw an exception) is so marginal that I'm tempted to lump them both
in the same boat.

> No. The original question asked in this thread was along the lines of 
> why abandon static type systems and why not use them always. I don't 
> need to convince you that a proposed general solution doesn't always 
> work, you have to convince me that it always works.

Done: just add a universal type.  See Mercury for example.

> [...]
> The burden of proof is on the one who proposes a solution.

What?  You're the one claiming that productivity (presumably in the 
sense of leading to a working, efficient, reliable, maintainable 
piece of code) is enhanced by using languages that *do not tell you 
at compile time when you've made a mistake*!

-- Ralph
From: Kenny Tilton
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <Zn7mb.35483$pT1.33385@twister.nyc.rr.com>
Ralph Becket wrote:
> STATICALLY TYPED: the compiler carries out a proof that no value of the
> wrong type will ever be passed to a function expecting a different type,
> anywhere in the program. 

Big deal. From Robert C. Martin:

http://www.artima.com/weblogs/viewpost.jsp?thread=4639

"I've been a statically typed bigot for quite a few years....I scoffed 
at the smalltalkers who whined about the loss of flexibility. Safety, 
after all, was far more important than flexibility -- and besides, we 
can keep our software flexible AND statically typed, if we just follow 
good dependency management principles.

"Four years ago I got involved with Extreme Programming. ...

"About two years ago I noticed something. I was depending less and less 
on the type system for safety. My unit tests were preventing me from 
making type errors. The more I depended upon the unit tests, the less I 
depended upon the type safety of Java or C++ (my languages of choice).

"I thought an experiment was in order. So I tried writing some 
applications in Python, and then Ruby (well known dynamically typed 
languages). I was not entirely surprised when I found that type issues 
simply never arose. My unit tests kept my code on the straight and 
narrow. I simply didn't need the static type checking that I had 
depended upon for so many years.

"I also realized that the flexibility of dynamically typed langauges 
makes writing code significantly easier. Modules are easier to write, 
and easier to change. There are no build time issues at all. Life in a 
dynamically typed world is fundamentally simpler.

"Now I am back programming in Java because the projects I'm working on 
call for it. But I can't deny that I feel the tug of the dynamically 
typed languages. I wish I was programming in Ruby or Python, or even 
Smalltalk.

"Does anybody else feel like this? As more and more people adopt test 
driven development (something I consider to be inevitable) will they 
feel the same way I do. Will we all be programming in a dynamically 
typed language in 2010? "

Lights out for static typing.

kenny

-- 
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
  http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <2tin61-h91.ln1@ID-7776.user.dfncis.de>
Kenny Tilton <·······@nyc.rr.com> wrote:
> Big deal. From Robert C. Martin:
> 
> http://www.artima.com/weblogs/viewpost.jsp?thread=4639
> 
> "I've been a statically typed bigot for quite a few years....I scoffed 
> at the smalltalkers who whined about the loss of flexibility. Safety, 
> after all, was far more important than flexibility -- and besides, we 
> can keep our software flexible AND statically typed, if we just follow 
> good dependency management principles.
> 
> "Four years ago I got involved with Extreme Programming. ...
> 
> "About two years ago I noticed something. I was depending less and less 
> on the type system for safety. My unit tests were preventing me from 
> making type errors. The more I depended upon the unit tests, the less I 
> depended upon the type safety of Java or C++ (my languages of choice).

Note that he is speaking about languages with a very bad type system.
As has been said in this thread a few times, there are statically
typed languages and there are statically typed languages. Those two
can differ substantially from each other.

Here's a posting from Richard MacDonald in comp.software.extreme-programming,
MID <····································@204.127.36.1>:

: Eliot, I work with a bunch of excellent programmers who came from AI to 
: Smalltalk to Java. We despise Java. We love Smalltalk. Some months ago we 
: took a vote and decided that we were now more productive in Java than we 
: had ever been in Smalltalk. The reason is the Eclipse IDE. It more than 
: makes up for the lousy, verbose syntax of Java. We find that we can get 
: Eclipse to write much of our code for us anyway.
: 
: Smalltalk is superior in getting something to work fast. But refactoring 
: takes a toll on a dynamically typed language because it doesn't provide 
: as much information to the IDE as does a statically-typed language (even 
: a bad one). Let's face it. If you *always* check callers and implementors 
: in Smalltalk, you can catch most of the changes. But sometimes you 
: forget. With Eclipse, you can skip this step and it still lights up every 
: problem with a big X and helps you refactor to fix it.
: 
: In Smalltalk, I *needed* unit tests because Smalltalk allowed me to be 
: sloppy. In Eclipse, I can get away without writing unit tests and my code 
: miraculously often works the first time I get all those Xs eliminated.
: 
: Ok, I realize I have not addressed your question yet...
: 
: No question but that a "crappy statically typed" (*) language can get you 
: into a corner where you're faced with lousy alternatives. But say I 
: figure out a massive refactoring step that gets me out of it. In 
: Smalltalk, I would probably fail without a bank of unit tests behind me. 
: In Eclipse, I could probably make that refactoring step in less time and 
: with far greater certainty that it is correct. I've done it before without 
: the safety net of tests and been successful. No way I would ever have 
: been able to do that as efficiently in Smalltalk. (I once refactored my 
: entire Smalltalk app in 3 days and needed every test I had ever written. 
: I have not done the equivalent in Java, but I have complete confidence I 
: could do it just as well if not much better.)
: 
: As far as productivity, we still write unit tests. But unit test 
: maintenance takes a lot of time. In Smalltalk, I would spend 30% of my 
: time coding within the tests. I tested at all levels, i.e., low-level, 
: medium, and integration, since it paid off when searching for bugs. But 
: 30% is too much. With Eclipse, we're able to write good code with just a 
: handful of high-level tests. Often we simply write the answer as a test 
: and do the entire app with this one test. The reason is once again that 
: the IDE is visually showing us right where we broke my code and we don't 
: have to run tests to see it.
: 
: (*) I suggest we use 3 categories: (1) dynamically typed, (2) statically 
: typed, (3) lousy statically typed. Into the latter category, toss Java 
: and C++. Into (2), toss some of the functional languages; they're pretty 
: slick. Much of the classic typing wars are between dynamic-typists 
: criticizing (3) vs. static-typists working with (2).
: 
: P.S. I used to be one of those rabid dynamic defenders. I'm a little 
: chastened and wiser now that I have a fantastic IDE in my toolkit.

- Dirk
From: Adrian Hey
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnbi11$qnj$1$8302bc10@news.demon.co.uk>
Kenny Tilton wrote:
> Ralph Becket wrote:
>> STATICALLY TYPED: the compiler carries out a proof that no value of the
>> wrong type will ever be passed to a function expecting a different type,
>> anywhere in the program.
> 
> Big deal.

Yes it is a very big deal. I suspect from your choice of words
you have a closed mind on this issue, so there's no point in me
wasting my time trying to explain why.

<snip quote from someone who doesn't understand static typing at
all if the references to Java and C++ are anything to go by>

> Lights out for static typing.

That's complete bollocks. There are more than enough sufficiently
enlightened people to keep static typing alive and well, thank you
very much. If you choose not to take advantage of it, that's your loss.

Regards
--
Adrian Hey
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <egfzhipdg7.fsf@sefirot.ii.uib.no>
Adrian Hey <····@NoSpicedHam.iee.org> writes:

> Kenny Tilton wrote:
>> Ralph Becket wrote:

>>> STATICALLY TYPED: the compiler carries out a proof that no value of the
>>> wrong type will ever be passed to a function expecting a different type,
>>> anywhere in the program.

>> Big deal.

> Yes it is a very big deal. 

While Mr. Martin probably should get out more, I must admit that I
have a nagging feeling about typing and object orientation.  Somebody
else correlated typing with imperativity, and I suspect dynamic typing
is a better match for OO than static typing.  But I'm probably making
the common error of comparing with the rather pedestrian type systems
of C++ and Java, perhaps O'Haskell and OCaml have systems that work
better? 

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Adrian Hey
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnci76$5ic$1$8300dec7@news.demon.co.uk>
··········@ii.uib.no wrote:
> While Mr. Martin probably should get out more, I must admit that I
> have a nagging feeling about typing and object orientation.  Somebody
> else correlated typing with imperativity, and I suspect dynamic typing
> is a better match for OO than static typing.  But I'm probably making
> the common error of comparing with the rather pedestrian type systems
> of C++ and Java, perhaps O'Haskell and OCaml have systems that work
> better?

I have my own pet theories to explain the current excitement about
dynamically typed languages. Here they are..

1- Most of this buzz comes from OO folk, many of whom will only have
   (bad) experience of static typing from C/C++/Java.

2- Development of static type systems (and type inferencers/checkers)
   which are strong enough to offer cast iron *guarantees* but at the
   same time are flexible enough to allow useful programs involves
   some tricky theory that few really understand (I won't pretend I do).
   But some language developers don't want to get too bogged down with
   all that difficult and boring theory stuff for however many months
   or years it takes. They want to make their language cooler than the
   competition right now, so have to rely exclusively on the expensive
   run time checks they call "dynamic typing".

3- Given that once this design decision (hack) has been made it is
   irreversible for all practical purposes, enthusiasts/advocates of
   these languages need to "make a virtue of necessity" by advertising
   all the advantages that dynamic typing brings (allegedly) and
   spreading FUD about all the things you can't do with statically typed
   languages (allegedly). It is likely they will cite C++ in their
   evidence. :-)

Regards
--
Adrian Hey
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <nblmb.21294$e01.43585@attbi_s02>
"Adrian Hey" <····@NoSpicedHam.iee.org> wrote in message ··························@news.demon.co.uk...
>
> I have my own pet theories to explain the current excitement about
> dynamically typed languages. Here they are...

Nice analysis. I particularly liked:

>    But some language developers don't want to get too bogged down with
>    all that difficult and boring theory stuff for however many months
>    or years it takes.


Your ideas are probably biased, but your biases match mine, so
there you are.


Marshall
From: Lulu of the Lotus-Eaters
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <mailman.93.1067109829.702.python-list@python.org>
Adrian Hey <····@NoSpicedHam.iee.org> wrote previously:
|2- Development of static type systems (and type inferencers/checkers)
|   which are strong enough to offer cast iron *guarantees* but at the
|   same time are flexible enough to allow useful programs involves
|   some tricky theory that few really understand (I won't pretend I do).
|   But some language developers don't want to get too bogged down with
|   all that difficult and boring theory stuff for however many months

There's more to it than that.  For example, not many Python programmers
understand C3 method resolution order, and why it was adopted in 2.3
over the earlier algorithms.  And probably only about three Python
programmers understand some of the regex optimizations added to the SRE
engine.  And only ONE person in the universe fully understands the magic
hacks that the Timbot introduced into the latest sorting algorithms :-).
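(To make the first of those examples concrete, here is a minimal sketch: the C3 order is observable on any diamond hierarchy via `__mro__`, without the programmer ever needing the linearization algorithm itself.)

```python
# New-style classes (Python 2.3+) are linearized with the C3 algorithm;
# the computed order is visible via __mro__ without any user effort.
class A(object): pass
class B(A): pass
class C(A): pass
class D(B, C): pass

print([cls.__name__ for cls in D.__mro__])  # -> ['D', 'B', 'C', 'A', 'object']
```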

But understanding those theoretical issues makes hardly any difference
to regular programmers.  Python does a good job of hiding what you don't
need to know from you (but letting you get at it if you really need to).

With a pure, lazy functional language like Haskell, there is much less
you can JUST DO without understanding a lot of theory first.  And even
then, you need to program in functional styles, and use these weird IO
monads when you want to interface with the outside world.  For your
average Joe (or Jane), having at least the option to freely play with
mutable values, and imperative flow, makes programming a WHOLE LOT
easier to reason about.  It may well be that for rockets, nuclear
reactors, and pacemakers, type safety is more important than easy
conceptualization--but for a lot of things, a low conceptual burden is
far more important than theoretical, provable correctness.

The bottom line IMO is that languages which can implement a strong and
static type system carry with them a lot of concomitant baggage.  You
pretty much need to be purely functional and side-effect free to do it
right.  Maybe OCaml walks the line between the sides, but still without
being as easy to use as Python, Ruby, TCL, even Perl--or maybe than
Lisp, with a nod to its enthusiasts.

Yours, Lulu...

--
---[ to our friends at TLAs (spread the word) ]--------------------------
Echelon North Korea Nazi cracking spy smuggle Columbia fissionable Stego
White Water strategic Clinton Delta Force militia TEMPEST Libya Mossad
---[ Postmodern Enterprises <·····@gnosis.cx> ]--------------------------
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <s0Dmb.20916$ao4.32759@attbi_s51>
"Lulu of the Lotus-Eaters" <·····@gnosis.cx> wrote in message ··········································@python.org...
> For your
> average Joe (or Jane), having at least the option to freely play with
> mutable values, and imperative flow, makes programming a WHOLE LOT
> easier to reason about.

I used to think that, but I'm not so sure anymore. Oh, I
suppose it's true for Joe or Jane where they're already
used to variables and imperative flow, but if they're not?
How would we know whether recursion or loops are
easier to learn? I don't think we would know anything
unless we did a big test on naive subjects.


Marshall
From: David Mertz
Subject: What static typing makes difficult
Date: 
Message-ID: <mailman.95.1067110478.702.python-list@python.org>
In this thread, a number of folks have claimed (falsely) that static
typing never gets in the way.  An example from my own Gnosis Utilities
comes to my mind as a place where I benefit greatly from dynamic
(strong) typing.

The package gnosis.xml.objectify takes an XML source, and turns it into
a "native" Python object.  The function make_instance() can accept a
wide range of different things that might sensibly relate to XML:  a DOM
object, a filename, an XML string, any object with a .read() method.
Just one function deals happily with whatever you throw at it--without
any deep commitments about what type of thing it is (i.e. some novel
file-like object, or some new DOM implementation works without any
problem).  The code is simple (the above function is actually a proxy to
a class):

  class XML_Objectify:
      def __init__(self, xml_src=None, parser=EXPAT):
          self._parser = parser
          if parser==DOM and hasattr(xml_src,'documentElement'): ...
          elif type(xml_src) in (StringType, UnicodeType):
              if xml_src[0]=='<': ...    # looks like XML
              else:   ...                # looks like filename
          elif hasattr(xml_src,'read'): ...
          else:
              raise ValueError, ...

I would challenge any enthusiast of Haskell, SML, Mozart, Clean, or the
like to come up with something similarly direct.  I know, of course that
the task is *possible*, but I bet you'll need a WHOLE LOT of extra
scaffolding to make something work.
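For reference, the dispatch idiom above boils down to something like the following runnable sketch; the classification labels are illustrative stand-ins for the parsing branches, not gnosis.xml.objectify's actual API:

```python
# Capability-based dispatch in the style of XML_Objectify.__init__:
# inspect what the argument can do, not what type it is declared as.
def classify_xml_source(xml_src):
    if hasattr(xml_src, 'documentElement'):   # any DOM-like object
        return 'dom'
    elif isinstance(xml_src, str):
        # a string is either literal XML or a filename
        return 'xml-string' if xml_src.lstrip().startswith('<') else 'filename'
    elif hasattr(xml_src, 'read'):            # any file-like object
        return 'file-like'
    else:
        raise ValueError('unrecognized XML source: %r' % (xml_src,))
```

Any novel file-like or DOM-like object passes the same checks with no registration step, which is exactly the property the challenge is about.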

Actually, my strong hunch is that Lisp will also not make things quite
as easy either... but obviously, I know similar ad hoc capability
checking is possible there.

Yours, David...

--
Keeping medicines from the bloodstreams of the sick; food from the bellies
of the hungry; books from the hands of the uneducated; technology from the
underdeveloped; and putting advocates of freedom in prisons.  Intellectual
property is to the 21st century what the slave trade was to the 16th.
From: Brian McNamara!
Subject: Re: What static typing makes difficult
Date: 
Message-ID: <bnemq7$j5g$1@news-int2.gatech.edu>
·····@gnosis.cx once said:
>The package gnosis.xml.objectify takes an XML source, and turns it into
>a "native" Python object.  The function make_instance() can accept a
>wide range of different things that might sensibly relate to XML:  a DOM
>object, a filename, an XML string, any object with a .read() method.
>Just one function deals happily with whatever you throw at it--without
>any deep commitments about what type of thing it is (i.e. some novel
>file-like object, or some new DOM implementation work without any
>problem).

I have no chance at a full implementation, but here is a sketch in
Haskell.  I know that a mere sketch is never as good as a working
implementation, so I hope someone else will take up the challenge.

Anyway:

   type XMLRep = ...  -- "internal" representation of XML objects
   
   class ConvertibleToXML a where
      convertToXML :: a -> Maybe XMLRep
   
   instance ConvertibleToXML DomObject where
      convertToXML :: DomObject -> Maybe XMLRep
      convertToXML aDomObj = ...
   
   instance ConvertibleToXML String where
      convertToXML :: String -> Maybe XMLRep
      convertToXML s = if head s == '<'
                       then XMLStringToXML s -- assume an XML string
                       else readXMLFromFileNamed s
                         -- yes, we'd need to be in the IO monad here
   
   -- Later in the program
   someFunc x y = 
      ...
      let xml = convertToXML x in ...
      -- which will infer the constraint "ConvertibleToXML x"

As far as I can tell, the only "extra scaffolding" is the type class
ConvertibleToXML.  Each time some new data type comes along which can be
converted to XML, we add a new instance declaration which shows how.

-- 
 Brian M. McNamara   ······@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
From: Alex Martelli
Subject: Re: What static typing makes difficult
Date: 
Message-ID: <fBBmb.46990$e5.1680642@news1.tin.it>
Brian McNamara! wrote:

> ·····@gnosis.cx once said:
>>The package gnosis.xml.objectify takes an XML source, and turns it into
>>a "native" Python object.  The function make_instance() can accept a
>>wide range of different things that might sensibly relate to XML:  a DOM
>>object, a filename, an XML string, any object with a .read() method.
>>Just one function deals happily with whatever you throw at it--without
>>any deep commitments about what type of thing it is (i.e. some novel
>>file-like object, or some new DOM implementation work without any
>>problem).

...but since Python doesn't (yet) have protocol adaptation machinery,
you don't really have a good systematic way to extend make_instance
(while respecting the open/closed principle) for another completely
new type of "XML source" that doesn't naturally conform (though it
might be adapted) to any of the required protocols.

I.e., if a third-party library XX generates instances of XX.YY and
feeds them into a callable which happens to be that make_instance,
you have no way of providing adapters of XX.YY instances to the
requirements of make_instance which is not "invasive" in some way
(and, presumably, similar problems with adapting the results of
make_instance to whatever protocol third-party library XX requires).

One gets by, mind you, by being "somewhat" invasive and wrapping
the daylights out of something or other.  But, it's definitely not
a point of strength or elegance -- it feels like those plumbing
repairs made with duct tape, and as such it's somewhat of a let-down
compared with many other areas of strength of Python.

Now, compare with Haskell's approach...:

>    class ConvertibleToXML a where
>       convertToXML :: a -> Maybe XMLRep
>    
>    instance ConvertibleToXML DomObject where
>       convertToXML :: DomObject -> Maybe XMLRep
>       convertToXML aDomObj = ...
   ...
> As far as I can tell, the only "extra scaffolding" is the type class
> ConvertibleToXML.  Each time some new data type comes along which can be
> converted to XML, we add a new instance declaration which shows how.

Amen.  In other terms, WITHOUT modifying some third-party library (nor
in fact this one), you can supply a *PROTOCOL ADAPTER* that is able
to convert whatever novel protocol the 3rd party library supports to
whatever protocol make_instance requires -- the "typeclass", which is
QUITE A BIT more powerful than just an "interface", represents the
protocol, and the instance represents the adapter from some type or
typeclass into the required protocol.

Now, Haskell does it all at compile-time, but that's just because
that's the way Haskell works; just the same thing could be done at
runtime by explicitly registering protocols and adapters.  But of
course, having the tools available would not be very useful unless
protocol adaptation was REQUESTED and/or OFFERED by a reasonably
vast subset of standard and third-party libraries -- if you have to
do it all yourself by hand you might as well code more down-to-earth
wrappers as above mentioned.

The best current implementation of the protocol adaptation ideas in
PEP 246, to the best of my knowledge, is PyProtocols,
http://peak.telecommunity.com/PyProtocols.html .

No, it ain't Haskell, and in particular PyProtocols' interfaces
are NOT as powerful as Haskell's typeclasses (we'd need a LITTLE
extra semantic support from the Python compiler to get those).  But,
as the above ConvertibleToXML example shows, you don't ALWAYS need
the full power of typeclasses -- pretty often, using them as little more
than interfaces is sufficient.  Moreover, PyProtocols supports
adaptation (including transitive adaptation) even on "alien" interfaces
(which it sees as "opaque protocols", if you wish -- of course you'll
have to code the adapters WITH knowledge of the alien interfaces'
semantics, PyProtocols cannot do that for you, but it can support you
with registry and dispatching to suitable adapters), NOT just its own.
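As a rough illustration of "explicitly registering protocols and adapters" at runtime, here is a minimal registry sketch; the names register_adapter and adapt are illustrative only and are NOT PyProtocols' actual API:

```python
# Minimal runtime adaptation registry in the spirit of PEP 246.
_adapters = {}   # (source class, protocol) -> adapter callable

def register_adapter(from_type, protocol, adapter):
    _adapters[(from_type, protocol)] = adapter

def adapt(obj, protocol):
    if isinstance(obj, protocol):        # already conforms: no-op
        return obj
    for klass in type(obj).__mro__:      # most-specific adapter wins
        adapter = _adapters.get((klass, protocol))
        if adapter is not None:
            return adapter(obj)
    raise TypeError('cannot adapt %r to %s' % (obj, protocol.__name__))
```

A third-party type can then be adapted non-invasively: register one adapter and every call site using adapt() picks it up, without touching either library.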


Alex
From: Remi Vanicat
Subject: Re: What static typing makes difficult
Date: 
Message-ID: <87ekx1av29.dlv@wanadoo.fr>
·······@prism.gatech.edu (Brian McNamara!) writes:

> ·····@gnosis.cx once said:
>>The package gnosis.xml.objectify takes an XML source, and turns it into
>>a "native" Python object.  The function make_instance() can accept a
>>wide range of different things that might sensibly relate to XML:  a DOM
>>object, a filename, an XML string, any object with a .read() method.
>>Just one function deals happily with whatever you throw at it--without
>>any deep commitments about what type of thing it is (i.e. some novel
>>file-like object, or some new DOM implementation work without any
>>problem).
>
> I have no chance at a full implementation, but here is a sketch in
> Haskell.  I know that a mere sketch is never as good as a working
> implementation, so I hope someone else will take up the challenge.
>
> Anyway:
>
>    type XMLRep = ...  -- "internal" representation of XML objects
>    
>    class ConvertibleToXML a where
>       convertToXML :: a -> Maybe XMLRep
>    
>    instance ConvertibleToXML DomObject where
>       convertToXML :: DomObject -> Maybe XMLRep
>       convertToXML aDomObj = ...
>    
>    instance ConvertibleToXML String where
>       convertToXML :: String -> Maybe XMLRep
>       convertToXML s = if head s == '<'
>                        then XMLStringToXML s -- assume an XML string
>                        else readXMLFromFileNamed s
>                          -- yes, we'd need to be in the IO monad here
>    
>    -- Later in the program
>    someFunc x y = 
>       ...
>       let xml = convertToXML x in ...
>       -- which will infer the constraint "ConvertibleToXML x"
>
> As far as I can tell, the only "extra scaffolding" is the type class
> ConvertibleToXML.  Each time some new data type comes along which can be
> converted to XML, we add a new instance declaration which shows how.

By the way, this system has one big advantage over David Mertz's
example: anyone can add a new type to the ConvertibleToXML class,
whereas with David's example one has to change the class XML_Objectify
to get the same effect.

-- 
Rémi Vanicat
From: james anderson
Subject: Re: What static typing makes difficult
Date: 
Message-ID: <3F9AEE42.CEE5E347@setf.de>
"Brian McNamara!" wrote:
> 
> ·····@gnosis.cx once said:
> >The package gnosis.xml.objectify takes an XML source, and turns it into
> >a "native" Python object....
> > ...
> >           if parser==DOM and hasattr(xml_src,'documentElement'): ...
> > ...
>
> I have no chance at a full implementation, but here is a sketch in
> Haskell.  I know that a mere sketch is never as good as a working
> implementation, so I hope someone else will take up the challenge.
> 
> ...
> 
> As far as I can tell, the only "extra scaffolding" is the type class
> ConvertibleToXML.  Each time some new data type comes along which can be
> converted to XML, we add a new instance declaration which shows how.

? would haskell suggest to handle the attribute-based specialization in a
manner similar to the original post, or does it offer some other mechanism?

...
From: Brian McNamara!
Subject: Re: What static typing makes difficult
Date: 
Message-ID: <bnergv$ebv$1@news-int.gatech.edu>
··············@setf.de once said:
>"Brian McNamara!" wrote:
>> ·····@gnosis.cx once said:
>> >The package gnosis.xml.objectify takes an XML source, and turns it into
>> >a "native" Python object....
>> > ...
>> >           if parser==DOM and hasattr(xml_src,'documentElement'): ...
>> > ...
>>
>> I have no chance at a full implementation, but here is a sketch in
>> Haskell.  I know that a mere sketch is never as good as a working
>> implementation, so I hope someone else will take up the challenge.
>> 
>> ...
>> 
>> As far as I can tell, the only "extra scaffolding" is the type class
>> ConvertibleToXML.  Each time some new data type comes along which can be
>> converted to XML, we add a new instance declaration which shows how.
>
>? would haskell suggest to handle the attribute-based specialization in a
>manner similar to the original post, or does it offer some other mechanism?

There are two ways to go.  If it's reasonable to know statically about
this attribute, then you could encode it in the type system (e.g. a type
like DOMWithDocumentElement).  My hunch (based on my very limited
knowledge of the domain) is that this is a dynamic attribute.  So
instead you'd 'fail' if the attribute didn't match at run-time.  Note
that my Haskell code returned "Maybe XMLRep"s, so the corresponding code
would look something like

   -- has type Maybe XMLRep
   if (hasAttr domObj "documentElement")
   then Just ...  -- do the conversion of the domObj
   else Nothing

The same strategy would be used to cope with other 'dynamic failures'
in the other branches, such as a malformed XML string, or a
non-existent file when converting from a filename.
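The Python analogue of returning a "Maybe XMLRep" is to return None on dynamic failure rather than raising; a toy sketch, where the returned string is only a placeholder for a real converted representation:

```python
# Mirror of the Haskell Maybe pattern: None plays the role of Nothing,
# a normal return value plays the role of Just.
def dom_to_xml(dom_obj):
    root = getattr(dom_obj, 'documentElement', None)
    if root is None:
        return None                # Nothing: attribute check failed
    return '<%s/>' % root          # Just: conversion placeholder
```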

-- 
 Brian M. McNamara   ······@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
From: james anderson
Subject: Re: What static typing makes difficult
Date: 
Message-ID: <3F9AFD34.A1DA907F@setf.de>
"Brian McNamara!" wrote:
> 
> ··············@setf.de once said:
> ...
> >
> >? would haskell suggest to handle the attribute-based specialization in a
> >manner similar to the original post, or does it offer some other mechanism?
> 
> There are two ways to go.  If it's reasonable to know statically about
> this attribute, then you could encode it in the type system (e.g. a type
> like DOMWithDocumentElement).  My hunch (based on my very limited
> knowledge of the domain) is that this is a dynamic attribute.  So
> instead you'd 'fail' if the attribute didn't match at run-time.  Note
> that my Haskell code returned "Maybe XMLRep"s, so the corresponding code
> would look something like
> 
>    -- has type Maybe XMLRep
>    if (hasAttr domObj "documentElement")
>    then Just ...  -- do the conversion of the domObj
>    else Nothing
> 
> The same strategy would be used to cope with other 'dynamic failures'
> in the other branches, such as a malformed XML string, or a
> non-existent file when converting from a filename.

one favor which clos brings to the party is a fairly direct means to express
type constraints with arbitrary valence. generic functions turn the original
__init__ sort of "inside-out" and suggest an expression more like:

(defclass XML_Objectify () ())

(defgeneric __init__ (what from)
  (:method ((instance XML_Objectify) (source string))
           (case (char source 0)
             (#\< (with-input-from-string (stream source)
                    (__init__ instance stream)))
             (t (__init__ instance (intern-uri source)))))
  (:method ((instance XML_Objectify) (source http-uri))
           (with-open-uri (stream source :method :get)
             (__init__ instance stream)))
  (:method ((instance XML_Objectify) (source pathname))
           (with-open-file (stream source :direction :input)
             (__init__ instance stream)))
  (:method ((instance XML_Objectify) (source stream))
           (__init__ instance (parse-document source)))
  (:method ((instance XML_Objectify) (source xml-document))
           (dolist (element (children (root source)))
             (setf (slot-value instance (name-symbol (name element)))
                   (read-from-string (value element))))))

if one needed additional specialization, one could add additional parameters,
as in

(defgeneric __init__ (what from how) ...)

from some perspectives this is "easy". from others perhaps not. 
...
From: james anderson
Subject: Re: What static typing makes difficult
Date: 
Message-ID: <3F9ADA5E.F98D4B69@setf.de>
David Mertz wrote:
> 
> ...
> 
> Actually, my strong hunch is that Lisp will also not make things quite
> as easy either... but obviously, I know similar ad hoc capability
> checking is possible there.

generic functions serve quite nicely to define type / value constrained
data-flow graphs. if one needs  "hasattr" specificity, one needs an extra
parameter, but that's all.

...
From: Joachim Durchholz
Subject: Re: What static typing makes difficult
Date: 
Message-ID: <bnh5ft$sum$2@news.oberberg.net>
David Mertz wrote:
> I would challenge any enthusiast of Haskell, SML, Mozart, Clean, or the
> like to come up with something similarly direct.

Take Mozart out: it uses run-time typing.

Regards,
Jo
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <nOgmb.19259$Tr4.40138@attbi_s03>
"Kenny Tilton" <·······@nyc.rr.com> wrote in message ··························@twister.nyc.rr.com...
>
> Lights out for static typing.

That kind of statement reminds me a lot of the people
who were saying in 1985 that CISC computing was
dead.

Anyway, I've argued elsewhere with Uncle Bob about
the issues raised in the quoted blog, and one thing
that came out of those arguments was that much of
what he was talking about was true for C++ vs.
Python, but not true for C++ vs. Java. In particular,
it's not clear to me that what he's referring to is
anything besides just rapid turnaround in getting
from a file edit to a running program. Java and
Python are both good about that, with Python
probably having the edge, but C++ can take a
*long* time to compile.

I'll probably get flamed for that statement, eh?


Marshall
From: Thomas Lindgren
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3ad7p33e7.fsf@localhost.localdomain>
"Marshall Spight" <·······@dnai.com> writes:

> "Kenny Tilton" <·······@nyc.rr.com> wrote in message ··························@twister.nyc.rr.com...
> >
> > Lights out for static typing.
> 
> That kind of statement reminds me a lot of the people
> who were saying in 1985 that CISC computing was
> dead.

Because Intel ultimately triumphed over the pundits advocating another
solution?

Because differences in instruction set architecture due to
implementation advances ultimately became irrelevant for
high-performance computers? (Meaning all non-embedded ones, that is.)

Because a big, somewhat worse standard (x86) beat a squabbling horde of
somewhat better contenders (RISCs)?

Because the marketplace moved on from "workstations", stranding the
high-cost, high-performance systems in favour of low-cost,
nearly-same-performance systems? Which then overtook the former
champions.

Something else?

Best,
                        Thomas
-- 
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin
 
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <nsrmb.23573$Tr4.49031@attbi_s03>
"Thomas Lindgren" <···········@*****.***> wrote in message ···················@localhost.localdomain...
>
> "Marshall Spight" <·······@dnai.com> writes:
>
> > > Lights out for static typing.
> >
> > That kind of statement reminds me a lot of the people
> > who were saying in 1985 that CISC computing was
> > dead.
>
> Because [...] ?

Because at no time did CISC machines ever fall below 95%
market share.


> Because a big, somewhat worse standard (x86) beat a squabbling horde of
> somewhat better contenders (RISCs)?

Hey, x86 is a *lot* worse! :-)


Marshall
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnarks$v7k$1@f1node01.rhrz.uni-bonn.de>
Ralph Becket wrote:
> Pascal Costanza <········@web.de> wrote in message news:<············@newsreader2.netcologne.de>...
> 
>>Ralph Becket wrote:
>>
>>>This is utterly bogus.  If you write unit tests beforehand, you are 
>>>already pre-specifying the interface that the code to be tested will 
>>>present.
>>>
>>>I fail to see how dynamic typing can confer any kind of advantage here.
>>
>>Read the literature on XP.
> 
> What, all of it?
> 
> Why not just enlighten me as to the error you see in my contention 
> about writing unit tests beforehand?

Maybe we are talking at cross-purposes here. I didn't know about ocaml 
not requiring target code to be present in order to have a test suite 
acceptable by the compiler. I will need to take a closer look at this.

>>>For one thing, type declarations *cannot* become out-of-date (as
>>>comments can and often do) because a discrepancy between type
>>>declaration and definition will be immediately flagged by the compiler.
>>
>>The same holds for assertions as soon as they are run by the test suite.
> 
> That is not true unless your test suite is bit-wise exhaustive.

Assertions cannot become out-of-date. If an assertion doesn't hold 
anymore, it will be flagged by the test suite.

>>>I don't think you understand much about language implementation.
>>
>>...and I don't think you understand much about dynamic compilation. Have 
>>you ever checked some not-so-recent-anymore work about, say, the HotSpot 
>>virtual machine?
> 
> Feedback directed optimisation and dynamic FDO (if that is what you
> are suggesting is an advantage of HotSpot) are an implementation
> technology and hence orthogonal to the language being compiled.
> 
> On the other hand, if you are not referring to FDO, it's not clear
> to me what relevance HotSpot has to the point under discussion.

Maybe we both understand language implementation, and it is irrelevant?

>>>A strong, expressive, static type system provides for optimisations
>>>that cannot be done any other way.  These optimizations alone can be
>>>expected to make a program several times faster.  For example:
>>
>>You are only talking about micro-efficiency here. I don't care about 
>>that, my machine is fast enough for a decent dynamically typed language.
> 
> Speedups (and resource consumption reduction in general) by (in many 
> cases) a factor of two or more constitute "micro-efficiency"?

Yes. Since this kind of efficiency is just one of many factors when 
developing software, it might not be the most important one and might be 
outweighed by advantages a certain loss of efficiency buys you elsewhere.

> The difference between an untyped program that doesn't work (it produces
> the wrong answer) and a dynamically typed program with a type bug (it
> may throw an exception) is so marginal that I'm tempted to lump them both
> in the same boat.

Well, but that's a wrong perspective. The one that throws an exception 
can be corrected and then continued exactly at the point of the 
execution path when the exception was thrown.

>>[...]
>>The burden of proof is on the one who proposes a solution.
> 
> What?  You're the one claiming that productivity (presumably in the 
> sense of leading to a working, efficient, reliable, maintainable 
> piece of code) is enhanced by using languages that *do not tell you 
> at compile time when you've made a mistake*!

No, other people are claiming that one should _always_ use static type 
systems, and my claim is that there are situations in which a dynamic 
type system is better.

If you claim that something (anything) is _always_ better, you better 
have a convincing argument that _always_ holds.

I have never claimed that dynamic type systems are _always_ better.

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <65fn61-b61.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> No, other people are claiming that one should _always_ use static type 
> systems, and my claim is that there are situations in which a dynamic 
> type system is better.
> 
> If you claim that something (anything) is _always_ better, you better 
> have a convincing argument that _always_ holds.
> 
> I have never claimed that dynamic type systems are _always_ better.

To me, it certainly looked like you did in the beginning. Maybe your
impression that other people say that one should always use static
type systems is a similar misinterpretation? 

Anyway, formulations like "A has less expressive power than B" are very
close to "B is always better than A". It's probably a good idea to
avoid such formulations if this is not what you mean.

- Dirk
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnbafk$uu6$1@f1node01.rhrz.uni-bonn.de>
Dirk Thierbach wrote:
> Pascal Costanza <········@web.de> wrote:
> 
>>No, other people are claiming that one should _always_ use static type 
>>systems, and my claim is that there are situations in which a dynamic 
>>type system is better.
>>
>>If you claim that something (anything) is _always_ better, you better 
>>have a convincing argument that _always_ holds.
>>
>>I have never claimed that dynamic type systems are _always_ better.
> 
> To me, it certainly looked like you did in the beginning. Maybe your
> impression that other people say that one should always use static
> type systems is a similar misinterpretation? 

Please recheck my original response to the OP of this subthread. (How 
much more "in the beginning" can one go?)

> Anyway, formulations like "A has less expressive power than B" are very
> close to "B is always better than A". It's probably a good idea to
> avoid such formulations if this is not what you mean.

"less expressive power" means that there exist programs that work but 
that cannot be statically typechecked. These programs objectively exist. 
By definition, I cannot express them in a statically typed language.

On the other hand, you can clearly write programs in a dynamically typed 
language that can still be statically checked if one wants to do that. 
So the set of programs that can be expressed with a dynamically typed 
language is objectively larger than the set of programs that can be 
expressed with a statically typed language.

It's definitely a trade off - you take away some expressive power and 
you get some level of safety in return. Sometimes expressive power is 
more important than safety, and vice versa.

It's not my problem that you interpret some arbitrary other claim into 
this statement.

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Andreas Rossberg
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3F994597.7090807@ps.uni-sb.de>
Pascal Costanza wrote:
> 
> "less expressive power" means that there exist programs that work but 
> that cannot be statically typechecked. These programs objectively exist. 
> By definition, I cannot express them in a statically typed language.
> 
> On the other hand, you can clearly write programs in a dynamically typed 
> language that can still be statically checked if one wants to do that. 
> So the set of programs that can be expressed with a dynamically typed 
> language is objectively larger than the set of programs that can be 
> expressed with a statically typed language.

Well, "can be expressed" is a very vague concept, as you noted yourself.
To rationalize the discussion on expressiveness, there is a nice
paper by Felleisen, "On the Expressive Power of Programming Languages" 
which makes this terminology precise.

Anyway, you are right of course that any type system will take away some 
expressive power (particularly the power to express bogus programs :-) 
but also some sane ones, which is a debatable trade-off).

But you completely ignore the fact that it also adds expressive power at 
another end! For one thing, by allowing you to encode certain invariants 
in the types that you cannot express in another way. Furthermore, by 
giving more knowledge to the compiler and hence allowing the language to 
automate certain tedious things. Overloading is one obvious example 
that increases expressive power in certain ways and crucially relies on 
static typing.

So there is no inclusion, the "expressiveness" relation is unordered wrt 
static vs dynamic typing.

	- Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
  as kids, we would all be running around in darkened rooms, munching
  magic pills, and listening to repetitive electronic music."
  - Kristian Wilson, Nintendo Inc.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnbn1g$l4m$1@f1node01.rhrz.uni-bonn.de>
Andreas Rossberg wrote:
> Pascal Costanza wrote:
> 
>>
>> "less expressive power" means that there exist programs that work but 
>> that cannot be statically typechecked. These programs objectively 
>> exist. By definition, I cannot express them in a statically typed 
>> language.
>>
>> On the other hand, you can clearly write programs in a dynamically 
>> typed language that can still be statically checked if one wants to do 
>> that. So the set of programs that can be expressed with a dynamically 
>> typed language is objectively larger than the set of programs that can 
>> be expressed with a statically typed language.
> 
> Well, "can be expressed" is a very vague concept, as you noted yourself. 
>   To rationalize the discussion on expressiveness, there is a nice paper 
> by Felleisen, "On the Expressive Power of Programming Languages" which 
> makes this terminology precise.

I have skimmed through that paper. It states the following in the 
conclusion section:

"The most important criterion for comparing programming languages showed 
that an increase in expressive power may destroy semantic properties of 
the core language that programmers may have become accustomed to 
(Theorem 3.14). Among other things, this invalidation of operational 
laws through language extensions implies that there are now more 
distinctions to be considered for semantic analyses of expressions in 
the core language. On the other hand, the use of more expressive 
languages seems to facilitate the programming process by making programs 
more concise and abstract (Conciseness Conjecture). Put together, this 
result says that

* an increase in expressive power is related to a decrease of the set of 
``natural'' (mathematically appealing) operational equivalences."

This seems to be compatible with my point of view. (However, I am not 
really sure.)

> Anyway, you are right of course that any type system will take away some 
> expressive power (particularly the power to express bogus programs :-), 
> but also some sane ones), which is a debatable trade-off.

Thanks. ;)

> But you completely ignore the fact that it also adds expressive power at 
> another end! For one thing, by allowing you to encode certain invariants 
> in the types that you cannot express in another way. Furthermore, by 
> giving more knowledge to the compiler and hence allowing the language to 
> automate certain tedious things.

I think you are confusing things here. It gets much clearer when you 
separate compilation/interpretation from type checking, and see a static 
type checker as a distinct tool.

The invariants that you write, or that are inferred by the type checker, 
are expressions in a domain-specific language for static program 
analysis. You can only increase the expressive power of that 
domain-specific language by adding a more elaborate static type system. 
You cannot increase the expressive power of the language that it reasons 
about.

An increase of expressive power of the static type checker decreases the 
expressive power of the target language, and vice versa.

As a sidenote, here is where Lisp comes into the game: Since Lisp 
programs can easily reason about other Lisp programs, because there is 
no distinction between programs and data in Lisp, it should be pretty 
straightforward to write a static type checker for Lisp programs, and 
include them in your toolset.

It should also be relatively straightforward to make this a flexible 
type checker for which you can increase/decrease the level of 
required conformance to the (a?) type system.

This would mean that you could have the benefits of both worlds: when 
you need static type checking, you can add it. You can even enforce it 
in a project, if the requirements are strict in this regard in a certain 
setting. If the requirements are not so strict, you can relax the static 
type soundness requirements, or maybe even go back to dynamic type checking.

In fact, such systems already seem to exist. I guess that's what soft 
typing is good for, for example (see MrFlow). Other examples that come 
to mind are Qi and ACL2.

Why would one want to switch languages for a single feature?

Note that this is just brainstorming. I don't know whether such an 
approach can really work in practice. There are probably some nasty 
details that are hard to solve.

> Overloading is one obvious example 
> that increases expressive power in certain ways and crucially relies on 
> static typing.

Overloading relies on static typing? This is news to me. What do you mean?

> So there is no inclusion, the "expressiveness" relation is unordered wrt 
> static vs dynamic typing.

No, I don't think so.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Andreas Rossberg
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnch6j$534$1@grizzly.ps.uni-sb.de>
"Pascal Costanza" <········@web.de> wrote:
>
> > But you completely ignore the fact that it also adds expressive power at
> > another end! For one thing, by allowing you to encode certain invariants
> > in the types that you cannot express in another way. Furthermore, by
> > giving more knowledge to the compiler and hence allowing the language to
> > automate certain tedious things.
>
> I think you are confusing things here. It gets much clearer when you
> separate compilation/interpretation from type checking, and see a static
> type checker as a distinct tool.
>
> The invariants that you write, or that are inferred by the type checker,
> are expressions in a domain-specific language for static program
> analysis. You can only increase the expressive power of that
> domain-specific language by adding a more elaborate static type system.
> You cannot increase the expressive power of the language that it reasons
> about.

Sorry, but that reply of yours somewhat indicates that you haven't really
used modern type systems seriously.

All decent type systems allow you to define your own types. You can express
any domain-specific abstraction you want in types. Hence the type language
gives you additional expressive power wrt the problem domain.

> An increase of expressive power of the static type checker decreases the
> expressive power of the target language, and vice versa.

That's a contradiction, because the type system is part of the "target"
language. You cannot separate them, because the type system is more than
just a static analysis phase - you can program it.

> As a sidenote, here is where Lisp comes into the game: Since Lisp
> programs can easily reason about other Lisp programs, because there is
> no distinction between programs and data in Lisp, it should be pretty
> straightforward to write a static type checker for Lisp programs, and
> include them in your toolset.

It is not, because Lisp hasn't been designed with types in mind. It is
pretty much folklore that retrofitting a type system onto an arbitrary
language will not work properly. For example, Lisp makes no distinction
between tuples and lists, which is crucial for type inference.

> It should also be relatively straightforward to make this a relatively
> flexible type checker for which you can increase/decrease the level of
> required conformance to the (a?) type system.
>
> This would mean that you could have the benefits of both worlds: when
> you need static type checking, you can add it. You can even enforce it
> in a project, if the requirements are strict in this regard in a certain
> setting. If the requirements are not so strict, you can relax the static
> type soundness requirements, or maybe even go back to dynamic type
> checking.

I don't believe in soft typing, since it cannot give you the same guarantees
as strong typing. If you want to mix static and dynamic typing, having
static typing as the default and *explicit* escapes to dynamic typing is the
only sensible route, IMNSHO. Otherwise, all the invariants and guarantees
typing gives you are lost.

> > Overloading is one obvious example
> > that increases expressive power in certain ways and crucially relies on
> > static typing.
>
> Overloading relies on static typing? This is news to me. What do you mean?

If you want to have extensible overloading then static types are the only
way I know for resolving it. Witness Haskell for example. It has a very
powerful overloading mechanism (for which the term 'overloading' actually is
an understatement). It could not possibly work without static typing, which
is obvious from the fact that Haskell does not even have an untyped
semantics.

> > So there is no inclusion, the "expressiveness" relation is unordered wrt
> > static vs dynamic typing.
>
> No, I don't think so.

Erasing type information from a program that uses type abstraction to
guarantee certain post conditions will invalidate those post conditions. So
you get a program with a different meaning. It expresses something
different, so the types it contained obviously had some expressive power.

Erasing type information from a program that uses overloading simply makes
it ambiguous, i.e. takes away any meaning at all. So the types definitely
expressed something relevant.

    - Andreas
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3cdif734.fsf@comcast.net>
"Andreas Rossberg" <········@ps.uni-sb.de> writes:

> Sorry, but that reply of yours somewhat indicates that you haven't really
> used modern type systems seriously.
>
> All decent type systems allow you to define your own types. You can express
> any domain-specific abstraction you want in types. Hence the type language
> gives you additional expressive power wrt the problem domain.

Cool!  So I can declare `Euclidean rings' as a type and ensure that I
never pass a non-Euclidean ring to a function?
From: Jay O'Connor
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <pan.2003.10.25.02.29.50.208249.26376@cybermesa.com>
On Fri, 24 Oct 2003 18:18:25 -0700, prunesquallor wrote:

> "Andreas Rossberg" <········@ps.uni-sb.de> writes:
> 
>> Sorry, but that reply of yours somewhat indicates that you haven't
>> really used modern type systems seriously.
>>
>> All decent type systems allow you to define your own types. You can
>> express any domain-specific abstraction you want in types. Hence the
>> type language gives you additional expressive power wrt the problem
>> domain.
> 
> Cool!  So I can declare `Euclidean rings' as a type and ensure that I
> never pass a non-Euclidean ring to a function?
 

In Ada, you can

-- 
Jay O'Connor

http://www.deskofsolomon.com
 - Online organizational software for teachers
From: Jay O'Connor
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <pan.2003.10.25.02.31.51.408248.26376@cybermesa.com>
On Fri, 24 Oct 2003 19:29:56 -0700, Jay O'Connor wrote:

> On Fri, 24 Oct 2003 18:18:25 -0700, prunesquallor wrote:
> 
>> "Andreas Rossberg" <········@ps.uni-sb.de> writes:
>> 
>>> Sorry, but that reply of yours somewhat indicates that you haven't
>>> really used modern type systems seriously.
>>>
>>> All decent type systems allow you to define your own types. You can
>>> express any domain-specific abstraction you want in types. Hence the
>>> type language gives you additional expressive power wrt the problem
>>> domain.
>> 
>> Cool!  So I can declare `Euclidean rings' as a type and ensure that I
>> never pass a non-Euclidean ring to a function?
>  
>  
> In Ada, you can

Ooops...
 

-- 
Jay O'Connor

http://www.deskofsolomon.com
 - Online organizational software for teachers
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <y8vadqrs.fsf@comcast.net>
Jay O'Connor <········@cybermesa.com> writes:

> On Fri, 24 Oct 2003 19:29:56 -0700, Jay O'Connor wrote:
>
>> On Fri, 24 Oct 2003 18:18:25 -0700, prunesquallor wrote:
>> 
>>> "Andreas Rossberg" <········@ps.uni-sb.de> writes:
>>> 
>>>> Sorry, but that reply of yours somewhat indicates that you haven't
>>>> really used modern type systems seriously.
>>>>
>>>> All decent type systems allow you to define your own types. You can
>>>> express any domain-specific abstraction you want in types. Hence the
>>>> type language gives you additional expressive power wrt the problem
>>>> domain.
>>> 
>>> Cool!  So I can declare `Euclidean rings' as a type and ensure that I
>>> never pass a non-Euclidean ring to a function?
>>  
>>  
>> In Ada, you can
>
> Ooops...

Yeah, my innocent-sounding questions often hide some nasty pitfalls.
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9d5e79$1@news.unimelb.edu.au>
·············@comcast.net writes:

>"Andreas Rossberg" <········@ps.uni-sb.de> writes:
>
>> Sorry, but that reply of yours somewhat indicates that you haven't really
>> used modern type systems seriously.
>>
>> All decent type systems allow you to define your own types. You can express
>> any domain-specific abstraction you want in types. Hence the type language
>> gives you additional expressive power wrt the problem domain.
>
>Cool!  So I can declare `Euclidean rings' as a type and ensure that I
>never pass a non-Euclidean ring to a function?

You can easily declare "EuclideanRings" as a type class
(in Haskell/Clean/Mercury, or similar constructs in other languages),
and ensure that you only pass this function values whose type has been
declared to be an instance of that type class.

Generally the type system won't be able to enforce that the
"EuclideanRings" type class really represents Euclidean rings
that conform to all the appropriate axioms.  However, declaring
an abstract type like this is nevertheless very useful as documentation:
it makes it clear where the proof obligation lies.  _If_ you prove that
every declared instance of the type class is really a Euclidean ring,
then that, together with the type system, will be enough to guarantee
that the function's argument is always a Euclidean ring.
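To make that concrete, here is a rough Haskell sketch. The class name and
its operations are invented for illustration (they are not from any
standard library), only `Integer` is shown as an instance, and the ring
axioms themselves remain exactly the unproven obligation described above:

```haskell
-- Hypothetical class capturing the Euclidean-ring signature.
-- The laws (e.g. that the norm decreases under division) cannot be
-- stated in the type; proving them per instance is the obligation.
class EuclideanRing a where
  ezero   :: a
  enorm   :: a -> Integer          -- the Euclidean "size" function
  edivMod :: a -> a -> (a, a)      -- quotient and remainder

instance EuclideanRing Integer where
  ezero   = 0
  enorm   = abs
  edivMod = divMod

-- This function can only be applied to declared instances; passing a
-- value of any other type is a compile-time error.
egcd :: EuclideanRing a => a -> a -> a
egcd a b
  | enorm b == 0 = a
  | otherwise    = egcd b (snd (edivMod a b))
```

Calling `egcd (12 :: Integer) 8` is fine, while `egcd "x" "y"` is rejected
by the type checker, since `String` was never declared an instance.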

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <k76qfs25.fsf@ccs.neu.edu>
Fergus Henderson <···@cs.mu.oz.au> writes:

> ·············@comcast.net writes:
>>
>>Cool!  So I can declare `Euclidean rings' as a type and ensure that I
>>never pass a non-Euclidean ring to a function?
>
> _If_ you prove that
> every declared instance of the type class is really a Euclidean ring,
> then that, together with the type system, will be enough to guarantee
> that the function's argument is always a Euclidean ring.

Yeah, but *that's* the easy part.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bncqe3$cuo$1@newsreader2.netcologne.de>
Andreas Rossberg wrote:

>>An increase of expressive power of the static type checker decreases the
>>expressive power of the target language, and vice versa.
> 
> 
> That's a contradiction, because the type system is part of the "target"
> language. You cannot separate them, because the type system is more than
> just a static analysis phase - you can program it.

For christ's sake, the only interesting question here is: do statically 
typed languages increase or decrease the set of programs that behave 
well at runtime?


Pascal
From: Neelakantan Krishnaswami
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <slrnbpk1ro.gd8.neelk@gs3106.sp.cs.cmu.edu>
In article <············@newsreader2.netcologne.de>, Pascal Costanza wrote:
> Andreas Rossberg wrote:
> 
>>> An increase of expressive power of the static type checker
>>> decreases the expressive power of the target language, and vice
>>> versa.
>> 
>> That's a contradiction, because the type system is part of the
>> "target" language. You cannot separate them, because the type
>> system is more than just a static analysis phase - you can program
>> it.
> 
> For christ's sake, the only interesting question here is: do
> statically typed languages increase or decrease the set of programs
> that behave well at runtime?

Ah, if that's all you wish to know: statically typed languages enlarge
the set of programs you can write. If you wonder how, observe that all
Scheme values (for example) can be encoded in a single ML type like:

  type schemeval =
    | Exact of int
    | Inexact of float
    | Bool of bool
    | String of string
    | Symbol of string
    | Vector of schemeval Array.t
    | Nil 
    | Cons of schemeval * schemeval
    | Lambda of schemeval list -> schemeval

On top of this, you can add all of the other possible ML types, so
that the universe of programs you can write is clearly bigger, since
you have more value domains available for the functions that you write
in ML than in a dynamically typed language. This may seem like a
perverse argument to you (in fact, I would be mildly surprised if you
didn't think it was perverse), but it's really not. There are roughly
two main ways of thinking about types.

The first way is to think of types in a type-assignment way (the
so-called Curry interpretation of types). This says that a programming
language has a single universe of values, and that types define
predicates that partition those values in potentially interesting
ways. That is, you assign types to a pre-existing universe of values.
This corresponds to the point of view you've been arguing, but it's
not the only way.

The second way is the "ontological" way (or Church style) of thinking
about types. In this view, the type structure of your language defines
the domains of values. There are no programs without types, any more
than there are valid programs that don't match the language's grammar. The
idea is that you can't possibly understand what a value means without
knowing what its type is. And indeed, at the machine level you just
have patterns of bits, which can't be given a meaning without having
some pre-existing notion of what you're looking at -- that is, without
knowing its type. When Andreas says that the type system is part of
the language, this is one of the things he means. In a very important
sense the type system *defines* what the language is, because it's how
you describe what sorts of values exist in the language.

So depending on how you choose to look at things, ML lets you write
more well-behaved programs than Scheme does, or vice versa.

(Also, you may wonder if it's possible to combine both notions of type
in the same language. The answer is yes, and it's both a great idea
and an active research area.)
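For concreteness, roughly the same embedding can be transcribed into
Haskell, with the runtime tag-checks that a Scheme implementation performs
written out explicitly (the names here are mine, not from any library):

```haskell
-- One static type whose values carry their "dynamic" tag.
data SchemeVal
  = Exact Integer
  | Inexact Double
  | BoolV Bool
  | StringV String
  | Symbol String
  | NilV
  | Cons SchemeVal SchemeVal
  | Lambda ([SchemeVal] -> SchemeVal)

-- Scheme-style '+': the type check happens at runtime, by pattern
-- matching on the tag, exactly as in a dynamically typed implementation.
plusS :: SchemeVal -> SchemeVal -> SchemeVal
plusS (Exact a) (Exact b) = Exact (a + b)
plusS _         _         = error "+: wrong type argument"

-- Scheme-style application, with its own runtime check.
applyS :: SchemeVal -> [SchemeVal] -> SchemeVal
applyS (Lambda f) args = f args
applyS _          _    = error "not a procedure"
```

Here `applyS (Lambda (\[a, b] -> plusS a b)) [Exact 1, Exact 2]` evaluates
to `Exact 3`, and an ill-tagged call fails at runtime rather than at
compile time, which is the point of the encoding.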

-- 
Neel Krishnaswami
·····@cs.cmu.edu
From: Matthew Danish
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031025105853.GC1454@mapcar.org>
On Sat, Oct 25, 2003 at 05:17:44AM +0000, Neelakantan Krishnaswami wrote:
> Ah, if that's all you wish to know: statically typed languages enlarge
> the set of programs you can write. If you wonder how, observe that all
> Scheme values (for example) can be encoded in a single ML type like:
> 
>   type schemeval =
>     | Exact of int
>     | Inexact of float
>     | Bool of bool
>     | String of string
>     | Symbol of string
>     | Vector of schemeval Array.t
>     | Nil 
>     | Cons of schemeval * schemeval
>     | Lambda of schemeval list -> schemeval

Scheme, perhaps, but not CL, which is more interesting to us, because it
has an extensible type system.  The above definition of a `dynamic type
system' fails when you have an extensible type system with useful
semantics for redefinition.

> On top of this, you can add all of the other possible ML types, so
> that the universe of programs you can write is clearly bigger, since
> you have more value domains available for the functions that you write
> in ML than in a dynamically typed language. This may seem like a
> perverse argument to you (in fact, I would be mildly surprised if you
> didn't think it was perverse), but it's really not. There are roughly
> two main ways of thinking about types.

It is a bit perverse in another way: how does said definition give you
more value domains?  It is just another way of expressing the ones you
had before.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9f828a$1@news.unimelb.edu.au>
Matthew Danish <·······@andrew.cmu.edu> writes:

>On Sat, Oct 25, 2003 at 05:17:44AM +0000, Neelakantan Krishnaswami wrote:
>> Ah, if that's all you wish to know: statically typed languages enlarge
>> the set of programs you can write. If you wonder how, observe that all
>> Scheme values (for example) can be encoded in a single ML type like:
>> 
>>   type schemeval =
>>     | Exact of int
>>     | Inexact of float
>>     | Bool of bool
>>     | String of string
>>     | Symbol of string
>>     | Vector of schemeval Array.t
>>     | Nil 
>>     | Cons of schemeval * schemeval
>>     | Lambda of schemeval list -> schemeval
>
>Scheme, perhaps, but not CL, which is more interesting to us, because it
>has an extensible type system.  The above definition of a `dynamic type
>system' fails when you have an extensible type system with useful
>semantics for redefinition.

I don't see the problem.  Could you explain in more detail?

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Matthew Danish
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031029093036.GW1454@mapcar.org>
On Wed, Oct 29, 2003 at 09:04:15AM +0000, Fergus Henderson wrote:
> Matthew Danish <·······@andrew.cmu.edu> writes:
> 
> >On Sat, Oct 25, 2003 at 05:17:44AM +0000, Neelakantan Krishnaswami wrote:
> >> Ah, if that's all you wish to know: statically typed languages enlarge
> >> the set of programs you can write. If you wonder how, observe that all
> >> Scheme values (for example) can be encoded in a single ML type like:
> >> 
> >>   type schemeval =
> >>     | Exact of int
> >>     | Inexact of float
> >>     | Bool of bool
> >>     | String of string
> >>     | Symbol of string
> >>     | Vector of schemeval Array.t
> >>     | Nil 
> >>     | Cons of schemeval * schemeval
> >>     | Lambda of schemeval list -> schemeval
> >
> >Scheme, perhaps, but not CL, which is more interesting to us, because it
> >has an extensible type system.  The above definition of a `dynamic type
> >system' fails when you have an extensible type system with useful
> >semantics for redefinition.
> 
> I don't see the problem.  Could you explain in more detail?

(defclass some-class () (foo bar))

(defmethod some-predicate ((obj some-class))
  ...)

(defun some-computation ()
  ... use some-predicate on objects of some-class ...)

* (some-computation)
Error! .... some bug in some-predicate ....
-> (defclass some-class () (foo bar baz))  ;; we happen to need a new slot
#<STANDARD-CLASS SOME-CLASS>
-> (defmethod some-predicate ((obj some-class)) ... fix bug ...)
#<STANDARD-METHOD SOME-PREDICATE>
-> return nil  ;; return value from this frame, and continue
               ;; all objects of some-class are automatically updated
               ;; with new definition.

Final result: ...

*

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Matthew Danish
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031025111330.GD1454@mapcar.org>
On Sat, Oct 25, 2003 at 02:47:28AM +0200, Andreas Rossberg wrote:
> It is not, because Lisp hasn't been designed with types in mind. It is
> pretty much folklore that retrofitting a type system onto an arbitrary
> language will not work properly. For example, Lisp makes no distinction
> between tuples and lists, which is crucial for type inference.

Tell that to the hackers who worked on the "Python" compiler found in
CMUCL, SBCL, and some others.  It does extensive type inference for both
efficiency and correctness.  But it doesn't get in the way (except in
certain rare cases), it just (noisily) informs you what it thinks.

Tell that to the people who wrote Chapter 4 of the Common Lisp standard.
http://www.lispworks.com/reference/HyperSpec/Body/04_.htm

> If you want to have extensible overloading then static types are the only
> way I know for resolving it. Witness Haskell for example. It has a very
> powerful overloading mechanism (for which the term 'overloading' actually is
> an understatement). It could not possibly work without static typing, which
> is obvious from the fact that Haskell does not even have an untyped
> semantics.

Correction: it could not work without typing--dynamic typing does not
imply a lack of typing.  I could be wrong, but it seems you would rule
out generic functions in the CLOS (and dynamic dispatch in general) with
the above statement.

> Erasing type information from a program that uses type abstraction to
> guarantee certain post conditions will invalidate those post conditions. So
> you get a program with a different meaning. It expresses something
> different, so the types it contained obviously had some expressive power.

This doesn't sound right: erasing type information should not invalidate
the post conditions; it should simply make it more difficult
(impossible?) to check the validity of the post conditions.

This program should still work, even if you fail to type-check it, if
said type-checking would have passed successfully.

> Erasing type information from a program that uses overloading simply makes
> it ambiguous, i.e. takes away any meaning at all. So the types definitely
> expressed something relevant.

This statement is irrelevant because dynamic typing does not eliminate
type information.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Andreas Rossberg
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnh01q$6u7$1@grizzly.ps.uni-sb.de>
"Matthew Danish" <·······@andrew.cmu.edu> wrote:
> > It is not, because Lisp hasn't been designed with types in mind. It is
> > pretty much folklore that retrofitting a type system onto an arbitrary
> > language will not work properly. For example, Lisp makes no distinction
> > between tuples and lists, which is crucial for type inference.
>
> Tell that to the hackers who worked on the "Python" compiler found in
> CMUCL, SBCL, and some others.  It does extensive type inference for both
> efficiency and correctness.  But it doesn't get in the way (except in
> certain rare cases), it just (noisily) informs you what it thinks.

Clarification: I was talking about strong typing, i.e. full and precise
inference. As I wrote in another posting, I don't believe in soft typing,
since it has exactly the weaknesses that seem to make proponents of dynamic
typing judge type systems as having marginal utility: its only use is to flag
more or less trivial errors, and unreliably so.

I would be very, very surprised to see a strong type system put on top of an
untyped language successfully, i.e. without changing or restricting the
language severely.

> > If you want to have extensible overloading then static types are the
> > only way I know for resolving it. Witness Haskell for example. It has a
> > very powerful overloading mechanism (for which the term 'overloading'
> > actually is an understatement). It could not possibly work without
> > static typing, which is obvious from the fact that Haskell does not even
> > have an untyped semantics.
>
> Correction: it could not work without typing--dynamic typing does not
> imply a lack of typing.  I could be wrong, but it seems you would rule
> out generic functions in the CLOS (and dynamic dispatch in general) with
> the above statement.

See my example below.

> > Erasing type information from a program that uses type abstraction to
> > guarantee certain post conditions will invalidate those post conditions.
> > So you get a program with a different meaning. It expresses something
> > different, so the types it contained obviously had some expressive power.
>
> This doesn't sound right: erasing type information should not invalidate
> the post conditions; it should simply make it more difficult
> (impossible?) to check the validity of the post conditions.

It does, because it also invalidates the pre conditions, on which the post
conditions depend.

> This program should still work, even if you fail to type-check it, if
> said type-checking would have passed successfully.

This is only a meaningful observation if you look at a closed program. If
you simply erase types from some given piece of code it may or may not
continue to work, depending on how friendly client code will act. In that
sense it definitely changes meaning. You had to protect against that using
other, incomparable features. So there is a difference in expressiveness.

And making modular composition more reliable is exactly the main point about
the expressiveness types add!

> > Erasing type information from a program that uses overloading simply
> > makes it ambiguous, i.e. takes away any meaning at all. So the types
> > definitely expressed something relevant.
>
> This statement is irrelevant because dynamic typing does not eliminate
> type information.

Yes it does. With "dynamic typing" in all its incarnations I'm aware of,
type information is always bound to values. Proper type systems are not
restricted to this, they can express free-standing or relational type
information.

As an example of the kind of "overloading" (or type dispatch, if you want)
you cannot express in dynamically typed languages: in Haskell, you can
naturally define functions which are overloaded on their return type, i.e.
you don't need any value at all at runtime to do the "dispatch". For
example, consider an overloaded function fromString, that produces values of
potentially arbitrary types from strings.
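A sketch of what I mean (`fromString` and its instances are invented for
illustration here; Haskell's standard Read class behaves the same way):

```haskell
-- Overloading resolved purely by the *expected* type at the call site.
class FromString a where
  fromString :: String -> a

instance FromString Int where
  fromString = read           -- reuse the standard numeric parser

instance FromString Bool where
  fromString s = s == "true"  -- a toy parser for illustration

-- No runtime value carries the type Int or Bool here; only the type
-- annotation (or the inferred context) selects the instance.
n :: Int
n = fromString "42"

b :: Bool
b = fromString "true"
```

Erase the type annotations and a call like `fromString "42"` becomes
ambiguous: there is simply no value left to dispatch on.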

 - Andreas
From: Lex Spoon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3fzhf26tm.fsf@logrus.dnsalias.net>
"Andreas Rossberg" <········@ps.uni-sb.de> writes:
> Clarification: I was talking about strong typing, i.e. full and precise
> inference. 

Please let us be careful with the terminology.  Strong typing means
that type checking happens at some point, and that an ill-typed operation is
never allowed to complete (and thus do something nonsensical).
Inference is a completely independent axis -- any type system may or
may not have type inference.  If you happen to like strong
*dynamically* typed languages then it is extremely irritating when
people assume strong typing can only happen at compile time.

I don't know the exact word you are looking for, unfortunately.
"complete" doesn't seem right, because that seems to be a property of
an inferencer: an inferencer is complete if it finds a type assignment
for any program that can be type checked.  "sound" doesn't seem right,
either.  A sound type system produces *correct* types, but those types
may not be tight enough to predict an absence of type errors.

Maybe you should just say "static checking".  People will assume you
probably mean that programs get rejected if they have type errors.



> As an example of the kind of "overloading" (or type dispatch, if you want)
> you cannot express in dynamically typed languages: in Haskell, you can
> naturally define functions which are overloaded on their return type, i.e.
> you don't need any value at all at runtime to do the "dispatch". For
> example, consider an overloaded function fromString, that produces values of
> potentially arbitrary types from strings.


You can do this kind of thing in dynamically typed languages, too,
though unfortunately it is not very common.  Do a web search on
"object-oriented" and "roles" and a lot comes up.  This is the same
thing: depending on the role that a requestor sees an object under,
the object can respond differently even to the same messages.  For
example, someone may respond to #requestPurchase differently depending
on whether the request treats them like a father or treats them like
the president of a company.  And it works for functional languages,
too, as is clearly exhibited by your fromString() overloaded function.
It could be viewed as acting differently depending on whether it is in
an int-producer role or a float-producer role.


(Incidentally, a lot of the results that turn up regard languages for
databases.  I was surprised at how much interesting language research
happens in the database world.  They take the view that long-lived
data is important.  This is common to Smalltalk people, and perhaps to
people who like Lisp machines, but even Scheme and Lisp people don't
seem to think a whole lot about this idea.  Database people certainly
do, however.  You can't just repopulate a database all the time!)

Anyway, depending on the role that the requestor is talking to, a
responder (be it a function or an object) can act differently.  In
some cases, the expected role can be figured out automatically,
instead of needing to be written explicitly.  This is somewhat similar
to the behavior of C++ when you invoke a non-virtual method: the type
of the *variable* decides what will happen.


Getting aside from theory and research, Matlab and Perl both allow
this kind of context sensitivity.  When you call a Matlab function,
the function knows how many return values you are expecting.  If you
call a function in Perl, it can tell whether you want a scalar or a
vector back, and it can act accordingly.  (At least, the built-in Perl
functions can do this.)



-Lex
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnhero$bd5$1@newsreader2.netcologne.de>
Andreas Rossberg wrote:

> As an example of the kind of "overloading" (or type dispatch, if you want)
> you cannot express in dynamically typed languages: in Haskell, you can
> naturally define functions which are overloaded on their return type, i.e.
> you don't need any value at all at runtime to do the "dispatch". For
> example, consider an overloaded function fromString, that produces values of
> potentially arbitrary types from strings.

Wrong.

(defmethod from-string-expansion (to-type string)
   (if (or (subtypep to-type 'sequence)
           (subtypep to-type 'character)
           (eq to-type t))
     `(coerce ,string ',to-type)
     `(coerce (read-from-string ,string) ',to-type)))

(defmacro set-from-string (x string &environment env)
   (multiple-value-bind
     (bound localp declarations)
     (variable-information x env)
     (declare (ignore bound localp))
     (let ((type (or (cdr (assoc 'type declarations)) t)))
       `(setf ,x ,(from-string-expansion type string)))))


Session transcript:


? (let (x)
     (declare (integer x))
     (set-from-string x "4711")
     (print x))

4711

? (let (x)
     (declare (string x))
     (set-from-string x "0815")
     (print x))

"0815"

? (defmethod from-string-expansion ((to-type (eql 'symbol)) string)
     `(intern ,string))
#<standard-method from-string-expansion ((eql symbol) t)>

? (let (x)
     (declare (symbol x))
     (set-from-string x "TEST")
     (print x))

test


The macro delegates the decision which conversion function to use to the 
generic function FROM-STRING-EXPANSION, but this is all executed at 
compile time (as required for a compiled implementation of Common Lisp).

Pascal


P.S.: This is not ANSI Common Lisp, but uses a feature as defined in Guy 
Steele's book "Common Lisp, The Language - 2nd Edition" (-> 
VARIABLE-INFORMATION). The code above works in Macintosh Common Lisp.
From: Andreas Rossberg
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3F9E711E.106@ps.uni-sb.de>
Pascal Costanza wrote:
> 
>> As an example of the kind of "overloading" (or type dispatch, if you 
>> want)
>> you cannot express in dynamically typed languages: in Haskell, you can
>> naturally define functions which are overloaded on their return type, 
>> i.e.
>> you don't need any value at all at runtime to do the "dispatch". For
>> example, consider an overloaded function fromString, that produces 
>> values of
>> potentially arbitrary types from strings.
> 
> Wrong.

Not sure how the "wrong" relates to the quoted paragraph, but I guess 
you mean my conjecture that you cannot dispatch on return types with 
dynamic typing.

> (defmethod from-string-expansion (to-type string)
>   (if (or (subtypep to-type 'sequence)
>           (subtypep to-type 'character)
>           (eq to-type t))
>     `(coerce ,string ',to-type)
>     `(coerce (read-from-string ,string) ',to-type)))
> 
> (defmacro set-from-string (x string &environment env)
>   (multiple-value-bind
>     (bound localp declarations)
>     (variable-information x env)
>     (declare (ignore bound localp))
>     (let ((type (or (cdr (assoc 'type declarations)) t)))
>       `(setf ,x ,(from-string-expansion type string)))))

Interesting example, thanks for showing it. I'm not fluent enough in 
Lisp to understand how this actually works but it does not seem to be 
extensible in a compositional way (you have to insert all type cases by 
hand). And does the resolution work transitively? I.e. if I write some 
complex function f using fromString somewhere, performing arbitrary 
calculations on its return value of type t, and returning something of a 
type containing t, is all this code parametric in t such that I can call 
f expecting arbitrary result types? All this would be automatic in Haskell.

Also note that your transcript shows that your approach indeed requires 
*explicit* type annotations, while you would rarely need them when doing 
the same thing in a language like Haskell.

Anyway, your code is complicated enough to make my point that the static 
type system gives you similar expressiveness with less fuss.

	- Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
  as kids, we would all be running around in darkened rooms, munching
  magic pills, and listening to repetitive electronic music."
  - Kristian Wilson, Nintendo Inc.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnlv14$vha$1@f1node01.rhrz.uni-bonn.de>
Andreas Rossberg wrote:
> Pascal Costanza wrote:
> 
>>
>>> As an example of the kind of "overloading" (or type dispatch, if you 
>>> want)
>>> you cannot express in dynamically typed languages: in Haskell, you can
>>> naturally define functions which are overloaded on their return type, 
>>> i.e.
>>> you don't need any value at all at runtime to do the "dispatch". For
>>> example, consider an overloaded function fromString, that produces 
>>> values of
>>> potentially arbitrary types from strings.
>>
>>
>> Wrong.
> 
> 
> Not sure how the "wrong" relates to the quoted paragraph, but I guess 
> you mean my conjecture that you cannot dispatch on return types with 
> dynamic typing.
> 
>> (defmethod from-string-expansion (to-type string)
>>   (if (or (subtypep to-type 'sequence)
>>           (subtypep to-type 'character)
>>           (eq to-type t))
>>     `(coerce ,string ',to-type)
>>     `(coerce (read-from-string ,string) ',to-type)))
>>
>> (defmacro set-from-string (x string &environment env)
>>   (multiple-value-bind
>>     (bound localp declarations)
>>     (variable-information x env)
>>     (declare (ignore bound localp))
>>     (let ((type (or (cdr (assoc 'type declarations)) t)))
>>       `(setf ,x ,(from-string-expansion type string)))))
> 
> 
> Interesting example, thanks for showing it. I'm not fluent enough in 
> Lisp to understand how this actually works but it does not seem to be 
> extensible in a compositional way (you have to insert all type cases by 
> hand).

No, at least not in the default method for from-string-expansion shown
above. That method only covers the default cases, and needs to distinguish
the different ways in which Common Lisp handles predefined types.

What you would need to do for your own types is to write your own 
specialized versions of from-string-expansion:

(defmethod from-string-expansion ((to-type (eql 'foo)) string)
   `(somehow-convert-string-to-foo ,string))

You don't need to modify the method given above. (You need to write 
conversion routines for your own types in any language.)

> And does the resolution work transitively? I.e. if I write some 
> complex function f using fromString somewhere, performing arbitrary 
> calculations on its return value of type t, and returning something of a 
> type containing t, is all this code parametric in t such that I can call 
> f expecting arbitrary result types? All this would be automatic in Haskell.

It should be possible in principle. The macro shown above is expanded at 
compile time (this is not 100% correct, but it is sufficiently accurate 
for the sake of our discussion). The &environment keyword captures the lexical 
environment that is current at macro expansion time. 
VARIABLE-INFORMATION looks up type information about a variable in that 
environment, before the code is actually translated into its final form.

The original idea for such environment objects was that you could not 
only look up standardized information about entities of the compilation 
environment, but also stuff in your own information. So in 
principle, it should be possible to use this as a basis for a type 
inferencing mechanism.

The downside of all this is that ANSI Common Lisp doesn't define the 
needed functions to do all this. It defines such environment objects 
only in very rudimentary ways that are actually not powerful enough. 
CLtL2 had this stuff, but apparently it had some glitches that are hard 
to get right, and it was decided to leave this stuff out of the ANSI 
standard.

There are Common Lisp implementations that implement this functionality 
to a certain degree, and apparently Duane Rettig of Franz Inc. is 
currently working on an implementation that has sorted out the edge 
cases. They seem to consider making this available as open source.

It's probably possible to do similar things with the ANSI-defined 
DEFINE-COMPILER-MACRO, or with proprietary extensions like DEFTRANSFORM 
in LispWorks.

> Also note that your transcript shows that your approach indeed requires 
> *explicit* type annotations, while you would rarely need them when doing 
> the same thing in a language like Haskell.

I think it should be possible in principle to add type inferencing to 
Common Lisp as a sophisticated macro package, even without proper 
environment objects.

It would probably take serious effort to implement such a beast, and it 
would be difficult to solve interferences with "third-party" Lisp code, 
but at least you would not need to change the base compiler to add this.

> Anyway, your code is complicated enough to make my point that the static 
> type system gives you similar expressiveness with less fuss.

Yes, you're right. If you wanted or needed static type checking by 
default, it wouldn't make much sense to do this in a dynamically typed 
language that treats static type checking as an exceptional case. And 
vice versa.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3ismdz12u.fsf@rigel.goldenthreadtech.com>
"Andreas Rossberg" <········@ps.uni-sb.de> writes:

> It is not, because Lisp hasn't been designed with types in mind. It is
> pretty much folklore that retrofitting a type system onto an arbitrary
> language will not work properly. For example, Lisp makes no distinction
> between tuples and lists, which is crucial for type inference.

Your credibility just took a nose dive.

/Jon
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <1j5o61-co2.ln1@ID-7776.user.dfncis.de>
Andreas Rossberg <········@ps.uni-sb.de> wrote:
> Pascal Costanza wrote:

> Anyway, you are right of course that any type system will take away some 
> expressive power (particularly the power to express bogus programs :-) 
> but also some sane ones, which is a debatable trade-off).

Yep. It turns out that you take away lots of bogus programs, and the
sane programs that are taken away are in most cases at least questionable
(they will be mostly of the sort: There is a type error in some execution
branch, but this branch will never be reached), and can usually be 
expressed as equivalent programs that will pass.

"Taking away possible programs" is not the same as "decreasing expressive
power".

> So there is no inclusion, the "expressiveness" relation is unordered wrt 
> static vs dynamic typing.

That's the important point.

- Dirk
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnbrk2$lmc$1@f1node01.rhrz.uni-bonn.de>
Dirk Thierbach wrote:
> Andreas Rossberg <········@ps.uni-sb.de> wrote:
> 
>>Pascal Costanza wrote:
> 
> 
>>Anyway, you are right of course that any type system will take away some 
>>expressive power (particularly the power to express bogus programs :-) 
>>but also some sane ones, which is a debatable trade-off).
> 
> 
> Yep. It turns out that you take away lots of bogus programs, and the
> sane programs that are taken away are in most cases at least questionable
> (they will be mostly of the sort: There is a type error in some execution
> branch, but this branch will never be reached)

No. Maybe you believe me when I quote Ralf Hinze, one of the designers 
of Haskell:

"However, type systems are always conservative: they must necessarily 
reject programs that behave well at run time."

found at 
http://web.comlab.ox.ac.uk/oucl/research/areas/ap/ssgp/slides/hinze.pdf

Could you _please_ just accept that statement? That's all I am asking for!


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3mo61-p16.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> Dirk Thierbach wrote:

>> Yep. It turns out that you take away lots of bogus programs, and the
>> sane programs that are taken away are in most cases at least questionable
>> (they will be mostly of the sort: There is a type error in some execution
>> branch, but this branch will never be reached)
> 
> No. Maybe you believe me when I quote Ralf Hinze, one of the designers 
> of Haskell:
> 
> "However, type systems are always conservative: they must necessarily 
> reject programs that behave well at run time."

I don't see any contradiction. It's true that type systems must necessarily
reject programs that behave well at run time, nobody is disputing that.
These are the programs that were "taken away". Now why does a type
system reject a program? Because there's a type mismatch in some branch
of the program. Why is the program still well behaved? Very probably 
because this branch never gets executed, or it only executes with 
values where the type mismatch for some reason doesn't matter. There
may be other reasons, but at the moment I cannot think of any.

I don't have statistical evidence, but it would be easy to 
enumerate all terms of a simple language (say, simply typed lambda
calculus with a few constants and ground types) up to a certain length,
and then pick out those that are not well typed but still well behaved.

My guess is that most will be of the form

  (if true then 42 else "abc") + 1

It may not be decidable whether such a condition in an if-statement
is always true, but in this case I would consider such a program also
pretty bogus :-)

If you have an example that is not of this form, I would be interested
to see it.
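For comparison, such a program can usually be rewritten into an
equivalent one that passes the checker by tagging the heterogeneous
branch explicitly; a sketch using the standard Either type:

```haskell
-- (if true then 42 else "abc") + 1, rewritten so the type checker
-- can see both branches: the heterogeneous result is tagged.
branch :: Bool -> Either Int String
branch b = if b then Left 42 else Right "abc"

-- The "+ 1" is now only applied on the Int side, and the checker
-- forces us to say what happens on the String side:
result :: Int
result = case branch True of
           Left n  -> n + 1
           Right _ -> error "unreachable: the condition is always true"
```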

> Could you _please_ just accept that statement? That's all I am
> asking for!

I have no trouble accepting that statement. As I have said, nobody is
disputing it. What I don't accept is your conclusion that because
one has to reject certain programs, one loses "expressive power".

- Dirk
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <brs6fbce.fsf@comcast.net>
Dirk Thierbach <··········@gmx.de> writes:

> I don't see any contradiction. It's true that type systems must necessarily
> reject programs that behave well at run time, nobody is disputing that.
> These are the programs that were "taken away". Now why does a type
> system reject a program? Because there's a type mismatch in some branch
> of the program. 

*or* because the type system was unable to prove that there *isn't* a
type mismatch in *all* branches.
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <9eso61-mu7.ln1@ID-7776.user.dfncis.de>
·············@comcast.net wrote:
> Dirk Thierbach <··········@gmx.de> writes:

>> Now why does a type system reject a program? Because there's a type
>> mismatch in some branch of the program.

> *or* because the type system was unable to prove that there *isn't* a
> type mismatch in *all* branches.

I am not sure if I read this correctly, but it seems equivalent to what
I say.

  \exists branch. mismatch-in (branch)

should be the same as

  \not \forall branch. \not mismatch-in (branch)

Anyway, I don't understand your point.

(Hindley-Milner typechecking works by traversing the expression tree
in postfix order, matching types on binary application nodes. Typing
fails if and only if such a match fails (ignoring constraints or
similar extensions for the moment). If a match fails, there's a problem
in this branch.)

- Dirk
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ptgle760.fsf@comcast.net>
Dirk Thierbach <··········@gmx.de> writes:

> ·············@comcast.net wrote:
>> Dirk Thierbach <··········@gmx.de> writes:
>
>>> Now why does a type system reject a program? Because there's a type
>>> mismatch in some branch of the program.
>
>> *or* because the type system was unable to prove that there *isn't* a
>> type mismatch in *all* branches.
>
> I am not sure if I read this correctly, but it seems equivalent to what
> I say.
>
>   \exists branch. mismatch-in (branch)
>
> should be the same as
>
>   \not \forall branch. \not mismatch-in (branch)
>
> Anyway, I don't understand your point.

Only if you assume binary logic. If there are three values that
can arise --- provable-mismatch, provable-non-mismatch, and undecided
--- then you cannot assume that ~provable-mismatch = provable-non-mismatch.

My point is that type systems can reject valid programs.
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uAwmb.26403$HS4.96459@attbi_s01>
<·············@comcast.net> wrote in message ·················@comcast.net...
> Dirk Thierbach <··········@gmx.de> writes:
>
> My point is that type systems can reject valid programs.

Agreed. But: does it matter? One thing that would help
in figuring out if it matters or not would be seeing a
small, useful program that cannot be proven typesafe.

If these programs are "all around us," and writing equivalent
typesafe programs is somewhat harder, then it matters.
If these programs are hard to come across, and writing
equivalent typesafe programs is easy, then this doesn't
matter.


Marshall
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <znfpck07.fsf@comcast.net>
"Marshall Spight" <·······@dnai.com> writes:

> <·············@comcast.net> wrote in message ·················@comcast.net...
>> Dirk Thierbach <··········@gmx.de> writes:
>>
>> My point is that type systems can reject valid programs.
>
> Agreed. But: does it matter? One thing that would help
> in figuring out if it matters or not would be seeing a
> small, useful program that cannot be proven typesafe.

(defun lookup (item table if-found if-missing)
  (cond ((null table) (funcall if-missing))
        ((eq item (entry-key (first-entry table)))
         (funcall if-found (entry-value (first-entry table))))
        (t (lookup item (remaining-entries table)
                   if-found
                   if-missing))))

(defun lookup-default (item local-table default-table if-found if-not-found)
  (lookup item local-table 
          if-found
          (lambda () 
            (lookup item default-table if-found if-not-found))))

(defun transform-list (list local-table default-table if-ok if-fail)
  (if (null list)
      (funcall if-ok '())
      (lookup-default (car list) local-table default-table
        (lambda (result)
          (transform-list (cdr list) local-table default-table
            (lambda (remainder)
              (funcall if-ok (cons result remainder)))
            if-fail))
        (lambda () (funcall if-fail (car list))))))

I know that simple static type checkers will be lost with this.
I do not know if the smarter ones will.
From: Stephen J. Bevan
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3smlhkv9t.fsf@dino.dnsalias.com>
·············@comcast.net writes:
> (defun lookup (item table if-found if-missing)
>   (cond ((null table) (funcall if-missing))
>         ((eq item (entry-key (first-entry table)))
>          (funcall if-found (entry-value (first-entry table))))
>         (t (lookup item (remaining-entries table)
>                    if-found
>                    if-missing))))
> 
> (defun lookup-default (item local-table default-table if-found if-not-found)
>   (lookup item local-table 
>           if-found
>           (lambda () 
>             (lookup item default-table if-found if-not-found))))
> 
> (defun transform-list (list local-table default-table if-ok if-fail)
>   (if (null list)
>       (funcall if-ok '())
>       (lookup-default (car list) local-table default-table
>         (lambda (result)
>           (transform-list (cdr list) local-table default-table
>             (lambda (remainder)
>               (funcall if-ok (cons result remainder)))
>             if-fail))
>         (lambda () (funcall if-fail (car list))))))
> 
> I know that simple static type checkers will be lost with this.
> I do not know if the smarter ones will.

There are a few undefined functions in the above (entry-key,
first-entry, remaining-entries) which means it would not get past most
static type-checkers.  Assuming that typing programs that have
undefined functions wasn't part of the test, then using a simple
association list to represent the table it all type-checks under SML :-

  $ cat prune.sml
  fun lookup (item, table, ifFound, ifMissing) =
    case table of
      [] => ifMissing ()
    | (k,v)::r => 
        if item = k
        then ifFound v
        else lookup (item, r, ifFound, ifMissing)

  fun lookupDefault (item, localTable, defaultTable, ifFound, ifNotFound) =
    lookup (item, localTable, ifFound, 
             fn _ => lookup (item, defaultTable, ifFound, ifNotFound))

  fun transformList (list, localTable, defaultTable, ifOk, ifFail) =
    case list of
      [] => ifOk []
    | (h::t) => lookupDefault (h, localTable, defaultTable,
                                fn result =>
                                  transformList (t, localTable, defaultTable,
                                                  fn remainder =>
                                                    ifOk (result::remainder),
                                                  ifFail),
                                fn _ => ifFail t)
  $ ~/opt/smlnj-110.41/bin/sml 
  Standard ML of New Jersey v110.41 [FLINT v1.5], July 05, 2002
  - use "prune.sml";
  [opening prune.sml]
  prune.sml:5.15 Warning: calling polyEqual
  val lookup = fn : ''a * (''a * 'b) list * ('b -> 'c) * (unit -> 'c) -> 'c
  val lookupDefault = fn
    : ''a * (''a * 'b) list * (''a * 'b) list * ('b -> 'c) * (unit -> 'c) -> 'c
  val transformList = fn
    : ''a list * (''a * 'b) list * (''a * 'b) list * ('b list -> 'c)
      * (''a list -> 'c)
      -> 'c
  val it = () : unit
  - 
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <d6clcf1m.fsf@comcast.net>
·······@dino.dnsalias.com (Stephen J. Bevan) writes:

> There are a few undefined functions in the above (entry-key,
> first-entry, remaining-entries) which means it would not get past most
> static type-checkers.  Assuming that typing programs that have
> undefined functions wasn't part of the test, then using a simple
> association list to represent the table it all type-checks under SML :-

I don't expect type checkers to figure out missing functions.

>   $ cat prune.sml

[code snipped]

>   val lookup = fn : ''a * (''a * 'b) list * ('b -> 'c) * (unit -> 'c) -> 'c
>   val lookupDefault = fn
>     : ''a * (''a * 'b) list * (''a * 'b) list * ('b -> 'c) * (unit -> 'c) -> 'c
>   val transformList = fn
>     : ''a list * (''a * 'b) list * (''a * 'b) list * ('b list -> 'c)
>       * (''a list -> 'c)
>       -> 'c
>   val it = () : unit
>   - 

Cool!
From: Brian McNamara!
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bneht0$adh$1@news-int.gatech.edu>
·············@comcast.net once said:
>(defun lookup (item table if-found if-missing)
>  (cond ((null table) (funcall if-missing))
>        ((eq item (entry-key (first-entry table)))
>         (funcall if-found (entry-value (first-entry table))))
>        (t (lookup item (remaining-entries table)
>                   if-found
>                   if-missing))))
>
>(defun lookup-default (item local-table default-table if-found if-not-found)
>  (lookup item local-table 
>          if-found
>          (lambda () 
>            (lookup item default-table if-found if-not-found))))
>
>(defun transform-list (list local-table default-table if-ok if-fail)
>  (if (null list)
>      (funcall if-ok '())
>      (lookup-default (car list) local-table default-table
>        (lambda (result)
>          (transform-list (cdr list) local-table default-table
>            (lambda (remainder)
>              (funcall if-ok (cons result remainder)))
>            if-fail))
>        (lambda () (funcall if-fail (car list))))))
>
>I know that simple static type checkers will be lost with this.
>I do not know if the smarter ones will.

If I have read that correctly, it looks like it admits these Haskell
types:

   lookup :: 
      (Eq k)=> k -> Map k v -> (v -> a) -> a -> a
   lookupDefault :: 
      (Eq k)=> k -> Map k v -> Map k v -> (v -> a) -> a -> a
   transformList :: 
      (Eq k)=> k -> Map k v -> Map k v -> ([a] -> b) -> (k -> a) -> [b]

except that in transformList, types "a" and "[b]" must be the same, due
to the awkward[*] way the exception-handling works with if-fail.

[*] Awkward from the static-typing point-of-view, of course.  I think a 
Haskell programmer would rewrite transformList with this type instead:

      (Eq k)=> k -> Map k v -> Map k v -> ([a] -> b) -> (k -> c) 
               -> Either c b

using the "Either" type to handle the exceptional case.  The
implementation would then be just

   transformList l locTable defTable ifOk ifFail =
      if (null l) 
         then Right (ifOk l)
         else lookupDefault (head l) locTable defTable
            (\r -> transformList (tail l) locTable defTable 
               (\remain -> ifOk (r : remain)) ifFail)
            (Left (ifFail (head l)))

Thus, if one of the lookup fails, we'll get back a "Left" Either,
describing where the problematic input was, whereas if they all succeed,
we'll get back a "Right" Either, with the expected kind of result.

Actually, rewriting all of these functions using the Error monad would
make it all more elegant.

(Of course, the usual "I'm doing this all by hand" disclaimer applies.
Any type errors are my own.)

-- 
 Brian M. McNamara   ······@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <8yn9ce2k.fsf@comcast.net>
·······@prism.gatech.edu (Brian McNamara!) writes:

> If I have read that correctly, it looks like it admits these Haskell
> types:
>
>    lookup :: 
>       (Eq k)=> k -> Map k v -> (v -> a) -> a -> a
>    lookupDefault :: 
>       (Eq k)=> k -> Map k v -> Map k v -> (v -> a) -> a -> a
>    transformList :: 
>       (Eq k)=> k -> Map k v -> Map k v -> ([a] -> b) -> (k -> a) -> [b]
>
> Actually, rewriting all of these functions using the Error monad would
> make it all more elegant.

True.  

Monads are relative newcomers to the Lisp world.  You won't see them
very often.

The other problem is that monads don't compose.
From: Brian McNamara!
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnejm0$hm8$1@news-int2.gatech.edu>
·············@comcast.net once said:
>The other problem is that monads don't compose.

I disagree.  See the discussion of "monad transformers" in part 3 of

   http://www.nomaware.com/monads/html/index.html

I have even implemented a few of the monad transformers in C++.
Composing monads is terrific!

-- 
 Brian M. McNamara   ······@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <4qxxyqeq.fsf@comcast.net>
·······@prism.gatech.edu (Brian McNamara!) writes:

> ·············@comcast.net once said:
>>The other problem is that monads don't compose.
>
> I disagree.  See the discussion of "monad transformers" in part 3 of
>
>    http://www.nomaware.com/monads/html/index.html
>
> I have even implemented a few of the monad transformers in C++.
> Composing monads is terrific!

The problem is when you combine several monads in a stack.  You have
to decide on an order, even if the monads are orthogonal.  Then you
have to create lifting code to get at the monad you want.  If you
re-order the stack, you have to re-order the lifting code, too.

I think monads are pretty cool, but they still have their problems.
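To illustrate the stacking and lifting issue concretely, here is a
sketch of a two-layer stack (assuming an mtl-style Control.Monad.State;
the names Stack, tick, and checked are made up for the example):

```haskell
import Control.Monad.State   -- assumes the mtl-style library

-- State stacked over an error monad. The order is a real decision:
-- StateT Int (Either String) a  ~~  Int -> Either String (a, Int),
-- so a failure discards the state; stacking the other way would not.
type Stack a = StateT Int (Either String) a

tick :: Stack Int
tick = do
  n <- get
  put (n + 1)
  return n

checked :: Stack Int
checked = do
  n <- tick
  if n > 1
    then lift (Left "overflow")   -- must lift to reach the inner monad
    else return n

-- runStateT checked 0 == Right (0, 1)
```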
From: Darius
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031025220926.000037d8.ddarius@hotpop.com>
On Sat, 25 Oct 2003 21:10:40 GMT
·············@comcast.net wrote:

> ·······@prism.gatech.edu (Brian McNamara!) writes:
> 
> > ·············@comcast.net once said:
> >>The other problem is that monads don't compose.
> 
> > I disagree.  See the discussion of "monad transformers" in part 3 of
> 
> >    http://www.nomaware.com/monads/html/index.html
> 
> > I have even implemented a few of the monad transformers in C++.
> > Composing monads is terrific!
> 
> The problem is when you combine several monads in a stack.  You have
> to decide on an order, even if the monads are orthogonal. Then you
> have to create lifting code to get at the monad you want.  If you
> re-order the stack, you have to re-order the lifting code, too.

Haskell and the current Control.Monad.* library handle these problems
nicely.  First, for each "monad" there is a type class that represents
its interface.  Next, there is a MonadTrans type class that represents
a monad transformer with a single method 'lift'.  The end result for the
user of the library is that you rarely need -any- lift and that when you
do you only need one.  In my experience, I've needed multiple "lifts"
once and only because I was breaking the abstraction of one of the
monads.  However, I just stuffed that knowledge into an instance of
a type class and it's the only line of code that depends on the order.

The only way to need multiple lifts is if you stack the same "monad"
over itself, e.g. StateT String (State Int) a.  I've virtually never
done this.  Further, the deepest stack of monad transformers I've ever
had is four or five layers deep.  Anyway, even in this case it isn't
that much of a problem; if my code is any indicator, it's quite typical
(and presumably highly advisable in this case) to work with the monad
indirectly.  So, for instance, I may be using State StdGen to hold a
random number generator seed, but I'm not going to write
    do seed <- get
       let (n, seed') = random seed
       put seed'
       return n
every time I need a new random number; I'm going to have a
function nextRandom that does it.
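The same discipline reads naturally in Python: hide the generator state
behind one helper so call sites never thread a seed by hand
(next_random below is a hypothetical analogue of the nextRandom helper
described above):

```python
import random

# Sketch: keep the RNG state behind one helper instead of threading the
# seed through every call site (next_random is a hypothetical analogue
# of the nextRandom helper described above).
rng = random.Random(42)   # the hidden counterpart of State StdGen

def next_random():
    """Callers get a fresh number; the seed never appears at call sites."""
    return rng.random()

x, y = next_random(), next_random()
print(0.0 <= x < 1.0 and 0.0 <= y < 1.0 and x != y)  # True
```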

> I think monads are pretty cool, but they still have their problems.

Yes, overall composability is an issue.  Implementing and extending
the library is O(n^2) in the number of (basic) transformers and requires
a kind of closed-world assumption as every transformer needs to know how
to lift through every other transformer.  Luckily, there don't seem to
be all that many interesting general-purpose types of monad
transformers.  There are quite a few other approaches but so far monad
transformers seem the simplest and most popular.
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnen1b$j3i$2@news.oberberg.net>
·············@comcast.net wrote:
> The other problem is that monads don't compose.

Which is why Haskell has monad transformer libraries (and I'd assume 
that other languages with monad libraries have transformers as well).

Regards,
Jo
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <r810yk29.fsf@comcast.net>
I think I have a stumper.  I'll be impressed if a type checker can
validate this.  The same-fringe function takes a pair of arbitrary
trees and lazily determines if they have the same leaf elements in the
same order.  Computation stops as soon as a difference is found so
that if one of the trees is infinite, it won't cause divergence.

(defun lazy-sequence (x y)
  (if (null x)
      (funcall y)
      (funcall x
        (lambda (xh xt)
          (lambda (z)
            (funcall z xh
              (lambda ()
                (lazy-sequence
                 (funcall xt)
                 y))))))))

(defun lazy-fringe (x)
  (cond ((null x) '())
        ((consp x) (lazy-sequence (lazy-fringe (car x)) 
                                  (lambda () (lazy-fringe (cdr x)))))
        (t (lambda (z)
             (funcall z x (lambda () ()))))))

(defun lazy-same (s1 s2)
  (cond ((null s1) (null s2))
        ((null s2) nil)
        (t (funcall s1 
             (lambda (h1 t1)
               (funcall s2
                 (lambda (h2 t2)
                    (if (eql h1 h2)
                        (lazy-same (funcall t1) (funcall t2))
                        nil))))))))

(defun same-fringe (l1 l2)
  (lazy-same (lazy-fringe l1) (lazy-fringe l2)))

(defun testit ()
  (assert (same-fringe '((1 2) (3 (4))) '(1 (2 3) (4))))
  (let ((circular-list (list 3)))
    (setf (cdr circular-list) circular-list)
    (assert (not (same-fringe `((1 2) (3 ,circular-list)) `(1 (2 3) (3 4)))))))
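For comparison, a dynamically typed rendering of the same-fringe
problem is short in Python too, using generators instead of explicit
closures; nested lists stand in for cons trees.  This is a sketch under
those assumptions, not a transliteration of the Lisp above:

```python
from itertools import zip_longest

def fringe(tree):
    """Lazily yield the leaves of a nested-list tree, left to right."""
    for node in tree:
        if isinstance(node, list):
            yield from fringe(node)
        else:
            yield node

def same_fringe(t1, t2):
    """Compare leaf sequences lazily; stops at the first mismatch."""
    sentinel = object()
    for a, b in zip_longest(fringe(t1), fringe(t2), fillvalue=sentinel):
        if a != b:
            return False
    return True

print(same_fringe([[1, 2], [3, [4]]], [1, [2, 3], [4]]))   # True
print(same_fringe([1, 2, 3], [1, 2, 4]))                   # False

circ = [3]
circ.append(circ)   # a circular (infinite-fringe) tree, as in testit
print(same_fringe([[1, 2], [3, circ]], [1, [2, 3], [3, 4]]))  # False
```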
From: Darius
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031025214729.000013ba.ddarius@hotpop.com>
On Sat, 25 Oct 2003 23:27:46 GMT
·············@comcast.net wrote:

> I think I have a stumper.  I'll be impressed if a type checker can
> validate this.  The same-fringe function takes a pair of arbitrary
> trees and lazily determines if they have the same leaf elements in the
> same order.  Computation stops as soon as a difference is found so
> that if one of the trees is infinite, it won't cause divergence.

-- this is all sooo pointless in Haskell
data LazyList a
  = Nil 
  | Cons (forall r. ((a,() -> LazyList a) -> r) -> r)

data Tree a = Empty | Leaf a | Fork (Tree a) (Tree a)

-- lazySequence :: LazyList a -> (() -> LazyList a) -> LazyList a
lazySequence Nil      y = y ()
lazySequence (Cons x) y 
    = x (\(xh,xt) -> 
            Cons (\k -> k (xh,(\() -> lazySequence (xt ()) y))))

-- lazyFringe :: Tree a -> LazyList a
lazyFringe Empty      = Nil
lazyFringe (Leaf a)   = Cons (\k -> k (a,\() -> Nil))
lazyFringe (Fork l r) = lazySequence (lazyFringe l)
                                     (\() -> lazyFringe r)

-- lazySame :: Eq a => LazyList a -> LazyList a -> Bool
lazySame Nil Nil = True
lazySame Nil _   = False
lazySame (Cons s1) (Cons s2) 
    = s1 (\(h1,t1) -> 
        s2 (\(h2,t2) -> h1 == h2 && lazySame (t1 ()) (t2 ())))

-- sameFringe :: Eq a => Tree a -> Tree a -> Bool
sameFringe t1 t2 = lazySame (lazyFringe t1) (lazyFringe t2)

testit = test1 && not test2
    where test1 = sameFringe (Fork (Fork (Leaf 1) (Leaf 2)) 
                                   (Fork (Leaf 3) (Fork (Leaf 4) 
                                                        Empty)))
                             (Fork (Leaf 1) 
                                   (Fork (Fork (Leaf 2) (Leaf 3)) 
                                         (Fork (Leaf 4) Empty)))
          test2 = sameFringe (Fork (Fork (Leaf 1) (Leaf 2)) 
                                   (Fork (Leaf 3) cl))
                             (Fork (Leaf 1) 
                                   (Fork (Fork (Leaf 2) (Leaf 3)) 
                                         (Fork (Leaf 3) (Leaf 4))))
          cl = Fork (Leaf 3) cl

testit == True, quite unsurprisingly.  The example trees aren't -exactly-
the same as the ones in the Lisp version.  This requires one popular
extension to Haskell 98 (rank-2 types).  Of course, all these
shenanigans are unnecessary in Haskell.

I already have a bunch of Church encoded data structures as well if
that's next.
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3cdgy6wz.fsf@comcast.net>
Darius <·······@hotpop.com> writes:

> On Sat, 25 Oct 2003 23:27:46 GMT
> ·············@comcast.net wrote:
>
>> I think I have a stumper.  I'll be impressed if a type checker can
>> validate this.  The same-fringe function takes a pair of arbitrary
>> trees and lazily determines if they have the same leaf elements in the
>> same order.  Computation stops as soon as a difference is found so
>> that if one of the trees is infinite, it won't cause divergence.
>
> -- this is all sooo pointless in Haskell

Well, yeah.

> data LazyList a
>   = Nil 
>   | Cons (forall r. ((a,() -> LazyList a) -> r) -> r)

I specifically didn't define the LazyList type.  I don't want to write
type annotations.  I want the type inference engine to deduce them.

I should have been more specific: I *know* that all these things can
be done in a statically typed language.  What I don't know is whether
these can be done as *easily*.  Will this type check without giving
the type checker hints about the type of 
(lambda (x y) (lambda (z) (z x y)))?  Will it type check if you
don't tell it that the `cons' in lazySequence is the same `cons'
in lazySame and lazyFringe?  

> I already have a bunch of Church encoded data structures as well if
> that's next.

I'm not just trying to come up with convoluted examples for the sake
of quizzing you.  I'm trying to find the point where the type checker
gets confused.  I'm not writing it in Haskell because I don't know
Haskell, so I'm writing in Lisp.  It will always be possible to
*recast* the problem in your favorite language, but can you
transliterate it and let the type checker infer that the program is
correct?
From: Darius
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031026014531.00007d11.ddarius@hotpop.com>
On Sun, 26 Oct 2003 04:11:44 GMT
·············@comcast.net wrote:

> Darius <·······@hotpop.com> writes:
> 
> > data LazyList a
> >   = Nil 
> >   | Cons (forall r. ((a,() -> LazyList a) -> r) -> r)
> 
> I specifically didn't define the LazyList type.  I don't want to write
> type annotations.  I want the type inference engine to deduce them.

This is a data declaration, -not- a type declaration.  For example,
Bool is data Bool = True | False and lists would be
data [a] = [] | a : [a].  If I were to use Java, would I not be allowed
to use 'class'?  class introduces a type too.

Furthermore, 'data' doesn't just give you a type.  It also gives you
constructors, destructors (kind of), and recognition.

data Either a b = Left a | Right b
The idiomatic Lisp functions that would correspond to what this
declaration provides would be something like: left, right, leftp,
rightp, eitherp, get-left, get-right.

Also, that type distinction lets me add LazyList to typeclasses like
Show, Read, Eq, Ord, Monad, Num, etc.  Perhaps I don't like my lazy
lists displaying as #<closure> or my trees as ((3 2) (4)) when what I'd
want is ((3 . (2 . ())) . ((4 . ()) . ())).  Perhaps I don't want
(equal <the lazy list 1 2 3> <the lazy list 1 2 3>) to return nil because
they aren't the same procedure.
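The equality point holds in any dynamically typed language:
closure-encoded lazy lists compare by identity, while a declared value
type can compare by structure.  A small Python illustration (the
encoding is ad hoc):

```python
# Closure-encoded lazy lists compare by identity, not by value; a
# declared data type can define structural equality instead.

mk = lambda: (1, lambda: (2, lambda: (3, None)))   # "lazy list" 1,2,3
a, b = mk(), mk()
print(a == b)   # False: the tail closures are distinct objects

def force(cell):
    """Expand a closure-encoded list into a plain Python list."""
    out = []
    while cell is not None:
        head, tail = cell
        out.append(head)
        cell = tail() if callable(tail) else tail
    return out

print(force(a) == force(b))   # True once reduced to a value type
```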

Also, ignoring the testit function, my code with those four laborious
"type annotations" is 16 lines compared to 29 lines of lisp. (Including
it, it's still shorter.)

If all you want to do is show an example in Lisp that won't type check
in most statically typed languages without any alterations, you could
simply write (list t 3) or (if t 'a 10).

Finally, I don't want to write some insane continuation passing
sequence.  I want the language to not overevaluate. I guess lazy Lisp or
dynamic Haskell beats us both.
From: Brian McNamara!
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnf42i$oq2$1@news-int2.gatech.edu>
·············@comcast.net once said:
>I think I have a stumper.  I'll be impressed if a type checker can
>validate this.  The same-fringe function takes a pair of arbitrary
>trees and lazily determines if they have the same leaf elements in the
>same order.  Computation stops as soon as a difference is found so
>that if one of the trees is infinite, it won't cause divergence.

Laziness is pretty orthogonal to static typing.

This one is so easy, we do it in C++ just for spite.  :)
I'm using the FC++ library as a helper.

----------------------------------------------------------------------
#include <iostream>
using std::ostream;
using std::cout;
using std::endl;

#include "prelude.hpp"
using namespace boost::fcpp;

struct LispCons;
typedef either<int,LispCons*> LispValue;

struct LispCons {
   LispValue car;
   LispValue cdr;
   LispCons( LispValue x, LispValue y ) : car(x), cdr(y) {}
};

LispValue make( int x )       { return inl(x); }
LispValue make( LispCons* x ) { return inr(x); }

// "l" prefix for "Lisp"
LispValue lnil = make( (LispCons*)0 );
LispValue lcons( LispValue x, LispValue y ) {
   return make( new LispCons( x, y ) );
}

ostream& operator<<( ostream& o, LispValue l ) {
   if( l.is_left() )
      o << l.left();
   else {
      LispCons* p = l.right();
      if( !p )
         o << "()";
      else
         o << "(" << p->car << "." << p->cdr << ")";
   }
   return o;
}

struct Fringe : public c_fun_type<LispValue,list<int> > {
   list<int> operator()( LispValue lv ) const {
      if( lv.is_left() )
         return cons( lv.left(), NIL );
      else {
         LispCons* lc = lv.right();
         if( lc==0 )
            return NIL;
         else
            return cat( Fringe()(lc->car),
                        thunk1(Fringe(),lc->cdr) );
      }
   }
} fringe;

int main() {
   LispValue one = make(1), two = make(2), 
             three = make(3), four = make(4);
   // tree1 = '((1 2) (3 (4))) 
   LispValue tree1 = lcons(lcons(one,lcons(two,lnil)), 
                        lcons(three,lcons(lcons(four,lnil),lnil)));
   cout << "tree1 is " << tree1 << endl;

   // tree2 = '(1 (2 3) (4))))
   LispValue tree2 = lcons(one,lcons(lcons(two,lcons(three,lnil)), 
                        lcons(lcons(four,lnil),lnil)));
   cout << "tree2 is " << tree2 << endl;

   cout << "fringe(tree1) is " << fringe(tree1) << endl;
   cout << "fringe(tree2) is " << fringe(tree2) << endl;

   LispCons* tmp = new LispCons(three,lnil);
   tmp->cdr = make(tmp);
   LispValue circle = make(tmp);
   cout << "first 10 of fringe(circle) is " 
        << take(10,fringe(circle)) << endl;

   // tree3 = '(1 (2 3) (<circle>))))
   LispValue tree3 = lcons(one,lcons(lcons(two,lcons(three,lnil)), 
                        lcons(lcons(circle,lnil),lnil)));
   cout << "first 10 of fringe(tree3) is " 
        << take(10,fringe(tree3)) << endl;
   cout << "tree2 = tree3? " << (fringe(tree2) == fringe(tree3)) << endl;
   
   return 0;
}
----------------------------------------------------------------------

The output:
----------------------------------------------------------------------
tree1 is ((1.(2.())).(3.((4.()).())))
tree2 is (1.((2.(3.())).((4.()).())))
fringe(tree1) is [1,2,3,4]
fringe(tree2) is [1,2,3,4]
first 10 of fringe(circle) is [3,3,3,3,3,3,3,3,3,3]
first 10 of fringe(tree3) is [1,2,3,3,3,3,3,3,3,3]
tree2 = tree3? 0
----------------------------------------------------------------------

-- 
 Brian M. McNamara   ······@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <y8v8ws28.fsf@comcast.net>
·······@prism.gatech.edu (Brian McNamara!) writes:

> ·············@comcast.net once said:
>>I think I have a stumper.  I'll be impressed if a type checker can
>>validate this.  The same-fringe function takes a pair of arbitrary
>>trees and lazily determines if they have the same leaf elements in the
>>same order.  Computation stops as soon as a difference is found so
>>that if one of the trees is infinite, it won't cause divergence.
>
> Laziness is pretty orthogonal to static typing.
>
> This one is so easy, we do it in C++ just for spite.  :)
> I'm using the FC++ library as a helper.
>
[snip]

Yuck!  Type declarations *everywhere*.  Where's this famous inference?

What's this LispCons type?  I don't have that in my source.  My code
is built of lambda abstractions.  I don't have a lispvalue type either.
And it isn't limited to integers in the list.  Where did my functions go?

And how can you compare a LispCons to an integer?

This works, but it doesn't faithfully represent what I wrote.
In addition, with all the type decorations, it only serves to reinforce
the idea that static type checking means a lot of extra keystrokes.
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bchs61-h45.ln1@ID-7776.user.dfncis.de>
·············@comcast.net wrote:
> ·······@prism.gatech.edu (Brian McNamara!) writes:

>> This one is so easy, we do it in C++ just for spite.  :)
>> I'm using the FC++ library as a helper.

> Yuck!  Type declarations *everywhere*.  Where's this famous inference?

You *did* notice that it is C++, which has no type inference (and
a lousy type system? :-)  So what Brian is saying is that this is so
easy he can even do it with his hands tied behind his back while
standing on his head.

> In addition, with all the type decorations, it only serves to reinforce
> the idea that static type checking means a lot of extra keystrokes.

Yes. C++ and Java are to blame for that, and it's a completely
justified observation. Static typechecking in C++ and Java just
sucks, and gets in the way all the time. If I had the choice between one
of them and a dynamically typed language, I would of course use the
dynamically typed one. 

But there are other statically typechecked languages where you don't
have this kind of trouble. I don't know if this thread turns out to do
something useful for anyone, but if it does, I would hope it gets at
least this idea across.

- Dirk
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <llr790f9.fsf@comcast.net>
Dirk Thierbach <··········@gmx.de> writes:

> ·············@comcast.net wrote:
>> ·······@prism.gatech.edu (Brian McNamara!) writes:
>
>>> This one is so easy, we do it in C++ just for spite.  :)
>>> I'm using the FC++ library as a helper.
>
>> Yuck!  Type declarations *everywhere*.  Where's this famous inference?
>
> You *did* notice that it is C++, which hasn't type inference (and
> a lousy typesystem? :-) So what Brian is saying that this is so easy
> he can even to it with hands tied on his back and standing on his
> head.

I did in fact notice that.  It's *supposed* to be easy.  It's
*supposed* to be something simple and useful.

As I mentioned earlier in this thread, the two things I object to in a
static type system are these:

    1)  The rejection of code I know to be useful, yet the system is
        unable to prove correct.

    2)  The requirement that I decorate my program with type
        information to give the type checker hints to enable it 
        to check things that are, to me, obviously correct.

These are personal preferences, but they are shared by many of my
fellow lisp hackers.  To my knowledge, not one of my fellow lisp
hackers would mind a static type checker that noticed code fragments
that could be proven to never produce anything but an error provided
that said type checker never need hints and is never wrong.  Many lisp
systems already have a weak form of this:  try compiling a program that
invokes CAR on 2 arguments or attempts to use a literal string as a
function.

Being unable to prove code correct is the same thing as being able to
prove it incorrect.  This makes three classes of code:

  1) provably type safe for all inputs
  2) provably erroneous for all inputs
  3) neither of the above

Both dynamic and static type systems behave the same way for classes 1
and 2 except that static systems reject the code in section 2 upon
compilation while dynamic systems reject it upon execution.  It is
section 3 that is under debate.

We can categorize class 3 by partitioning the inputs.  Some inputs
can be shown to always produce a type error, some can be shown to
*never* produce a type error, and some are undecidable.  All the class
3 programs contain inputs in at least two of these partitions.

The question remains over what to do with class 3 programs.  Of course
what we do may depend on how often a class 3 program is encountered,
what the risk might be of running one, and possibly the compile time
cost of detecting one.

Those of us that are suspicious of static typing believe that there
are a large enough number of class 3 programs that they will be
regularly or frequently encountered.  We believe that a large number
of these class 3 programs will have inputs that cannot be decided by a
static type checker but are nonetheless `obviously' correct.  (By
obvious I mean fairly apparent with perhaps a little thought, but
certainly nothing so convoluted as to require serious reasoning.)

Static type check aficionados seem to believe that the number of class
3 programs is vanishingly small and they are encountered rarely.  They
appear to believe that any program that is `obviously' correct can be
shown to be correct by a static type checking system.  Conversely,
they seem to believe that programs that a static type checker find
undecidable will be extraordinarily convoluted and contrived, or
simply useless curiosities.

Some people in the static type check camp are making blanket
statements like:

  Matthias Blume 
     Typical "Gödel" statements tend to be pretty contrived.

  Dirk Thierbach <··········@gmx.de> writes: 
    ...the sane programs that are taken away are in most cases at
    least questionable (they will be mostly of the sort: There is a
    type error in some execution branch, but this branch will never be
    reached)

  Joachim Durchholz <·················@web.de> writes:
    ... given a good type system, there are few if any practical
    programs that would be wrongly rejected.

  Matthias Blume
    `...humans do not write such programs, at least not
     intentionally...' 


No one has offered any empirical evidence one way or another, but the
static type people have said `if class 3 programs are so ubiquitous,
then it should be easy to demonstrate one'.  I'll buy that.  Some of
the people in here have offered programs that take user input as an
example.  No sane person claims to read the future, so any statically
type-checked program would have to perform a runtime check.

But this is shooting fish in a barrel, so I'll give you a pass on it.

I (and Don Geddis) happen to believe that there are a huge number of
perfectly normal, boring, pedestrian programs that will stymie a
static type checker.  I've been offering a few that I hope are small
enough to illustrate the problem and not so contrived as to fall into
the `pointless' class.

The first program I wrote (a CPS lookup routine) generally confuses
wimp-ass static type checking like in Java.

Stephan Bevan showed that SML/NJ had no problem showing it was type
safe without giving a single type declaration.  I find this to be
very cool.

I offered my `black hole' program and got these responses:

  Remi Vanicat <···············@labri.fr>
    `I don't see how this function can be useful...'

  Jesse Tov <···@eecs.harvREMOVEard.edu>
    `we don't need functions with those types.'

  Dirk Thierbach <··········@gmx.de>
    `recursive types are very often a result of a real typing error.'

  "Marshall Spight" <·······@dnai.com>
    `I don't believe the feature this function illustrates could be
     useful.' 

Will this be the response for any program that does, in fact, fool a
number of static type checkers?

  Marshall Spight wrote:
   `I'm trying to see if anyone can come up with a program
    that is small, useful, and not provably typesafe.'

  to which Joachim Durchholz <·················@web.de> replied:
   ``Matthias' and my position is that such a program doesn't exist
     (and cannot even exist)''


This appears to be getting very close to arguing that static type
checkers never reject useful programs because, by definition, a
program rejected by a static type checker is useless.

Another one was a CPS routine that traverses two trees in a lazy
manner and determines if they have the same fringe.  The point was not
to find out if this could be done in SML or Haskell.  I know it can.
The point was to find out if the static type checker could determine
the type safety of the program.

There is a deliberate reason I did not create a `lazy cons'
abstraction.  I wanted to see if the type checker, with no help
whatsoever, could determine the type of the lambda abstractions
that are *implementing* the lazy consing.

Re-writing the program in C++ with huge amounts of type decorations
and eliminating the lambdas by introducing parameterized classes does
not demonstrate the power of static type checking.

Re-writing the program in a lazy language also does nothing to
demonstrate the power of static type checking. 
From: Ed Avis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <l14qxvd6wr.fsf@budvar.future-i.net>
·············@comcast.net writes:

>As I mentioned earlier in this thread, the two things I object to in a
>static type system are these:
>
>    1)  The rejection of code I know to be useful, yet the system is
>        unable to prove correct.

This has never happened to me with Haskell, and I don't think it has
ever happened with C.  With C++ I have often felt as though the type
system was being picky and getting in the way, but this was usually a
case of me finding it difficult to make my intentions clear to the
compiler - which is also a disadvantage of static typing, of course.
Languages like Haskell and ML don't tend to suffer that problem,
because the type system is a bit more understandable than C++ and will
infer things it's not told explicitly.

>    2)  The requirement that I decorate my program with type
>        information to give the type checker hints to enable it 
>        to check things that are, to me, obviously correct.

This is why type inference as used in the above-mentioned languages
and many others is so useful.  You can still add explicit type
declarations where you want to, either to help find out where you have
made a mistake (analogously to putting in extra assertions to narrow
down the point where something goes wrong at run time) or to act as
documentation (which, unlike comments, is automatically checked by the
compiler to make sure it's up to date).
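Python's optional annotations illustrate the same trade-off: the
program runs with or without them, but a static checker such as mypy
can verify that the documented intent stays accurate (lookup below is a
made-up example):

```python
# Optional annotations as checked documentation: the code runs with or
# without them, and a static checker (e.g. mypy) can verify them.

def lookup(table: dict[str, int], key: str, default: int = 0) -> int:
    """The signature documents intent; a checker keeps it honest."""
    return table.get(key, default)

print(lookup({"a": 1}, "a"))   # 1
print(lookup({"a": 1}, "b"))   # 0
```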

>To my knowledge, not one of my fellow lisp hackers would mind a
>static type checker that noticed code fragments that could be proven
>to never produce anything but an error provided that said type
>checker never need hints and is never wrong.

I believe this is what you get with standard Haskell (though some
language extensions may require explicit type annotations in some
places).  Certainly the type checker is never wrong: it never flags a
type error in a correct program, and never gives the all-clear for a
program containing a type error.

But of course, the checker is checking Haskell programs, not Lisp
programs.  Nobody is suggesting to use some form of full compile-time
type checking on Common Lisp, nor could such a thing exist.

[later]

>This appears to be getting very close to arguing that static type
>checkers never reject useful programs because, by definition, a
>program rejected by a static type checker is useless.

Absolutely true - in a statically typed language!  In Lisp or a
dynamically typed language, obviously not true - except that the
statement doesn't really make any sense for Lisp, because one cannot
write a full static type checker for that language.

-- 
Ed Avis <··@membled.com>
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnhllk$4mm$1@news.oberberg.net>
·············@comcast.net wrote:
> As I mentioned earlier in this thread, the two things I object to in a
> static type system are these:
> 
>     1)  The rejection of code I know to be useful, yet the system is
>         unable to prove correct.

Er... for static type systems, it's just "type-correct". General 
correctness isn't very well-defined and not what current main-stream 
type systems are designed for anyway.

>     2)  The requirement that I decorate my program with type
>         information to give the type checker hints to enable it 
>         to check things that are, to me, obviously correct.

I'd agree with these points, and restate that both are addressed by 
Hindley-Milner type systems.

1) Most useful code is type-correct. The code that isn't uses quite 
advanced types, but these are borderline cases like "black-hole" above. 
Besides, I don't know of any borderline case that cannot be written 
using different techniques.

2) Decorating programs with type annotations is an absolute exception. 
The only case where types are declared are those where you define a new 
type (i.e. record types).

> These are personal preferences, but they are shared by many of my
> fellow lisp hackers.  To my knowledge, not one of my fellow lisp
> hackers would mind a static type checker that noticed code fragments
> that could be proven to never produce anything but an error provided
> that said type checker never need hints and is never wrong.

A HM checker flags some usages as "wrong" that would work in Lisp - 
black-hole is an example.
I'm not sure that such usage serves a good purpose.

 > Many lisp
> systems already have a weak form of this:  try compiling a program that
> invokes CAR on 2 arguments or attempts to use a literal string as a
> function.

That's exceedingly weak indeed.
Distinguishing square from nonsquare matrices goes a bit beyond the 
standard usage of HM type systems, but it is possible.

> Being unable to prove code correct is the same thing as being able to
> prove it incorrect.

This is true for HM typing but not true for inference systems in general. 
But probably you wanted to say something else anyway.

 >  This makes three classes of code:
> 
>   1) provably type safe for all inputs
>   2) provably erroneous for all inputs
>   3) neither of the above
 >
> Both dynamic and static type systems behave the same way for classes 1
> and 2 except that static systems reject the code in section 2 upon
> compilation while dynamic systems reject it upon execution.  It is
> section 3 that is under debate.

Agreed.
Programs in class 2 are rarely encountered and quickly fixed, so I'm not 
sure that this category is very interesting.

> We can categorize class 3 by partitioning the inputs.  Some inputs
> can be shown to always produce a type error, some can be shown to
> *never* produce a type error, and some are undecidable.  All the class
> 3 programs contain inputs in at least two of these partitions.
> 
> The question remains over what to do with class 3 programs.  Of course
> what we do may depend on how often a class 3 program is encountered,
> what the risk might be of running one, and possibly the compile time
> cost of detecting one.

There's also the question how to decide whether a program is in class 1 
or class 3.

> Those of us that are suspicious of static typing believe that there
> are a large enough number of class 3 programs that they will be
> regularly or frequently encountered.  We believe that a large number
> of these class 3 programs will have inputs that cannot be decided by a
> static type checker but are nonetheless `obviously' correct.

This belief is ill-founded. Just take a dynamically-typed program and 
annotate every parameter with all the types that you expect it to take 
on. Exclude possibilities if you hit a contradiction. Get suspicious if, 
for some parameter, there is /no/ type left :-)
You'll find that most programs will type readily.
(HM typing indeed starts with allowing all types for all names, then 
excluding all types that lead to a contradiction somewhere. The actual 
wording of the algorithm is somewhat different, but that's the gist of it.)
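The "exclude types that lead to a contradiction" idea can be sketched
as first-order unification of type terms.  This is a toy Python version
(lowercase strings act as type variables, the occurs check is omitted;
it is very far from a full Hindley-Milner checker):

```python
# Toy unification of simple type terms.  Variables are lowercase
# strings; tuples like ('->', arg, result) encode function types.
# No occurs check: a deliberately tiny sketch, not a real HM engine.

def unify(t1, t2, subst=None):
    subst = dict(subst or {})
    def walk(t):
        while isinstance(t, str) and t.islower() and t in subst:
            t = subst[t]
        return t
    t1, t2 = walk(t1), walk(t2)
    if t1 == t2:
        return subst
    if isinstance(t1, str) and t1.islower():
        subst[t1] = t2
        return subst
    if isinstance(t2, str) and t2.islower():
        subst[t2] = t1
        return subst
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None   # contradiction: no type left for some variable

print(unify(('->', 'a', 'a'), ('->', 'Int', 'Int')))   # {'a': 'Int'}
print(unify(('->', 'a', 'a'), ('->', 'Int', 'Bool')))  # None
```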

 > (By
> obvious I mean fairly apparent with perhaps a little thought, but
> certainly nothing so convoluted as to require serious reasoning.)
> 
> Static type check aficionados seem to believe that the number of class
> 3 programs is vanishingly small and they are encountered rarely.  They
> appear to believe that any program that is `obviously' correct can be
> shown to be correct by a static type checking system.  Conversely,
> they seem to believe that programs that a static type checker find
> undecidable will be extraordinarily convoluted and contrived, or
> simply useless curiosities.
> 
> Some people in the static type check camp are making blanket
> statements like:
> 
>   Matthias Blume 
>      Typical "Gödel" statements tend to be pretty contrived.
> 
>   Dirk Thierbach <··········@gmx.de> writes: 
>     ...the sane programs that are taken away are in most cases at
>     least questionable (they will be mostly of the sort: There is a
>     type error in some execution branch, but this branch will never be
>     reached)
> 
>   Joachim Durchholz <·················@web.de> writes:
>     ... given a good type system, there are few if any practical
>     programs that would be wrongly rejected.
> 
>   Matthias Blume
>     `...humans do not write such programs, at least not
>      intentionally...' 
> 
> No one has offered any empirical evidence one way or another, but the
> static type people have said `if class 3 programs are so ubiquitous,
> then it should be easy to demonstrate one'.  I'll buy that.  Some of
> the people in here have offered programs that take user input as an
> example.  No sane person claims to read the future, so any statically
> typed check program would have to perform a runtime check.
> 
> But this is shooting fish in a barrel, so I'll give you a pass on it.
> 
> I (and Don Geddis) happen to believe that there are a huge number of
> perfectly normal, boring, pedestrian programs that will stymie a
> static type checker.  I've been offering a few that I hope are small
> enough to illustrate the problem and not so contrived as to fall into
> the `pointless' class.
> 
> The first program I wrote (a CPS lookup routine) generally confuses
> wimp-ass static type checking like in Java.
> 
> Stephan Bevan showed that SML/NJ had no problem showing it was type
> safe without giving a single type declaration.  I find this to be
> very cool.
> 
> I offered my `black hole' program and got these responses:
> 
>   Remi Vanicat <···············@labri.fr>
>     `I don't see how this function can be useful...'
> 
>   Jesse Tov <···@eecs.harvREMOVEard.edu>
>     `we don't need functions with those types.'
> 
>   Dirk Thierbach <··········@gmx.de>
>     `recursive types are very often a result of a real typing error.'
> 
>   "Marshall Spight" <·······@dnai.com>
>     `I don't believe the feature this function illustrates could be
>      useful.' 
> 
> Will this be the response for any program that does, in fact, fool a
> number of static type checkers?
> 
>   Marshall Spight wrote:
>    `I'm trying to see if anyone can come up with a program
>     that is small, useful, and not provably typesafe.'
> 
>   to which Joachim Durchholz <·················@web.de> replied:
>    ``Matthias' and my position is that such a program doesn't exist
>      (and cannot even exist)''

Sorry, you're mixing up topics here - that was about correctness in 
general, not about type correctness (which is a subset of general 
correctness for the vast majority of type systems).

> This appears to be getting very close to arguing that static type
> checkers never reject useful programs because, by definition, a
> program rejected by a static type checker is useless.

That's a misunderstanding, partly due to conflation of correctness and 
type correctness, partly because people using a good static type checker 
don't feel constrained by that type checking because "you just don't do 
that" (similarly to "you just don't do some special things in Lisp", 
like abusing MOP mechanisms etc.)

Regards,
Jo
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3cdf8sog.fsf@comcast.net>
Joachim Durchholz <·················@web.de> writes:

> ·············@comcast.net wrote:
>
>> Being unable to prove code correct is the same thing as being able to
>> prove it incorrect.
>
> This true for HM typing but not true for inference systems in
> general. But probably you wanted to say something else anyway.

Um, yeah.  I did:

  Being unable to prove code correct is *not* the same thing as being
  able to prove it incorrect.
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <78bv61-h75.ln1@ID-7776.user.dfncis.de>
·············@comcast.net wrote:
> Dirk Thierbach <··········@gmx.de> writes:

> As I mentioned earlier in this thread, the two things I object to in a
> static type system are these:
> 
>    1)  The rejection of code I know to be useful, yet the system is
>        unable to prove correct.
> 
>    2)  The requirement that I decorate my program with type
>        information to give the type checker hints to enable it 
>        to check things that are, to me, obviously correct.

Yes. I have also said it earlier, but (1) is nearly never going to
happen (if you substitute "correct" with "type correct", and for a
suitable definition of useful), and you don't have to do (2) if there
is type inference.

> Being unable to prove code correct is the same thing as being able to
> prove it incorrect.  This makes three classes of code:
> 
>  1) provably type safe for all inputs
>  2) provably erroneous for all inputs
>  3) neither of the above
>
> Both dynamic and static type systems behave the same way for classes 1
> and 2 except that static systems reject the code in section 2 upon
> compilation while dynamic systems reject it upon execution.  It is
> section 3 that is under debate.

> Those of us that are suspicious of static typing believe that there
> are a large enough number of class 3 programs that they will be
> regularly or frequently encountered. 

Yep. The static typers believe (from experience) that there is a very
small number of class 3 programs, and usually all programs you
normally write fall into class 1. They also know that this is not true
for all static type systems. For languages with a lousy static type
system, there is a large number of class 3 programs, where you have to
cheat to convince the type checker that they are really type safe. The
static typers also believe that with a good static type system, all
useful class 3 programs can be changed just a little bit so they fall
into class 1. In this process, they will become better in almost all
cases (for a suitable definition of "better"). 

> We believe that a large number of these class 3 programs will have
> inputs that cannot be decided by a static type checker but are
> nonetheless `obviously' correct.  (By obvious I mean fairly apparent
> with perhaps a little thought, but certainly nothing so convoluted
> as to require serious reasoning.)

Why do you believe this? A HM type checker doesn't do anything but
automate this "obvious" reasoning (in a restricted field), and points
out any errors you might have made (after all, humans make errors).

> Static type check aficionados seem to believe that the number of class
> 3 programs is vanishingly small and they are encountered rarely.  They
> appear to believe that any program that is `obviously' correct can be
> shown to be correct by a static type checking system.  Conversely,
> they seem to believe that programs that a static type checker find
> undecidable will be extraordinarily convoluted and contrived, or
> simply useless curiosities.

Yes.

> No one has offered any empirical evidence one way or another, but the
> static type people have said `if class 3 programs are so ubiquitous,
> then it should be easy to demonstrate one'.  I'll buy that.  Some of
> the people in here have offered programs that take user input as an
> example.  No sane person claims to read the future, so any statically
> typed check program would have to perform a runtime check.

Yes, and that's what it does. No problem. 

> I (and Don Geddis) happen to believe that there are a huge number of
> perfectly normal, boring, pedestrian programs that will stymie a
> static type checker.  I've been offering a few that I hope are small
> enough to illustrate the problem and not so contrived as to fall into
> the `pointless' class.
> 
> The first program I wrote (a CPS lookup routine) generally confuses
> wimp-ass static type checking like in Java.
> 
> Stephan Bevan showed that SML/NJ had no problem showing it was type
> safe without giving a single type declaration.  I find this to be
> very cool.

Good.

> I offered my `black hole' program and got these responses:
[...]
> Will this be the response for any program that does, in fact, fool a
> number of static type checkers?

No. You have also been shown that it can be checked statically in both
Haskell and OCaml. (I cannot think of any way to make it pass
in C++ or Java, but that's to be expected.)

> This appears to be getting very close to arguing that static type
> checkers never reject useful programs because, by definition, a
> program rejected by a static type checker is useless.

It may not be useless, but you will definitely need to think about
it. And if it is useful, in nearly all cases you can make a little
change or express it a little differently, and it will check.
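[A sketch of the kind of "little change" meant here, added for concreteness (the names are invented): a function whose branches yield different types is rejected, but tagging the alternatives in a sum type makes it check.]

```haskell
-- Rejected form (would not compile):  f b = if b then 1 else "one"
-- The "little change": tag the alternatives, so both branches share
-- the single type IntOrString.
data IntOrString = I Int | S String
  deriving (Eq, Show)

f :: Bool -> IntOrString
f b = if b then I 1 else S "one"

main :: IO ()
main = print (f True)  -- prints: I 1
```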

> Another one was a CPS routine that traverses two trees in a lazy
> manner and determines if they have the same fringe.  The point was not
> to find out if this could be done in SML or Haskell.  I know it can.
> The point was to find out if the static type checker could determine
> the type safety of the program.
> There is a deliberate reason I did not create a `lazy cons'
> abstraction.  I wanted to see if the type checker, with no help
> whatsoever, could determine the types of the lambda abstractions
> that are *implementing* the lazy consing.

See below for "lazy cons".

> Re-writing the program in C++ with huge amounts of type decorations
> and eliminating the lambdas by introducing parameterized classes does
> not demonstrate the power of static type checking.

No, it doesn't. That was just Brian saying "it's so easy, I'll
do it the hard way and demonstrate that it can even be done with
a lousy static type system as in C++".

> Re-writing the program in a lazy language also does nothing to
> demonstrate the power of static type checking. 

May I suggest that you try it yourself? It might be a good way to
get some experience with OCaml, though for a beginner it might take
some time to get used to the syntax etc. Pure lambda terms are the
same in every functional language, after all. 

The only issue is that one should use datatypes for recursive
structures, i.e. one would normally use a 'lazy cons' abstraction
(which would be the natural thing to do anyway). In OCaml, you can get
around this with -rectypes (someone has already demonstrated how to do
lazy lists in this way). In Haskell you would need them, but since
Haskell is already lazy, that's probably not the point.

So yes, for recursive types you need to give the type checker "a
little hint", if you want to put it that way. But that's done on
purpose, because it is the natural way to do it, and it is not done
because it would be impossible to statically check such terms.
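[For readers who want the "lazy cons" spelled out, a sketch added here (the names are invented): in Haskell the datatype declaration is exactly the "little hint" in question, and the laziness comes for free.]

```haskell
-- The datatype declaration is the "little hint": it names the
-- recursive structure, so the checker needn't infer an anonymous
-- recursive type.
data LazyList a = Nil | Cons a (LazyList a)

-- An infinite list; fine, since Haskell is lazy anyway.
ones :: LazyList Int
ones = Cons 1 ones

takeL :: Int -> LazyList a -> [a]
takeL 0 _           = []
takeL _ Nil         = []
takeL n (Cons x xs) = x : takeL (n - 1) xs

main :: IO ()
main = print (takeL 3 ones)  -- prints: [1,1,1]
```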

So if you are extremely picky, and if you refuse to do the "lazy cons"
initially and consider command line options cheating, that's one of
the programs of class 3 that can be put into class 1 by making a
little change, where this change turns out to make your program better
to understand in the first place. Is this so bad that you want to stay
clear of static typing at any price?

(I bet someone will now comment on that ignoring the "extremely picky"
part :-( .)

- Dirk
From: Andreas Rossberg
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3F9E454C.4000003@ps.uni-sb.de>
·············@comcast.net wrote:
> 
> I offered my `black hole' program and got these responses:
> 
>   Remi Vanicat <···············@labri.fr>
>     `I don't see how this function can be useful...'
> 
>   Jesse Tov <···@eecs.harvREMOVEard.edu>
>     `we don't need functions with those types.'
> 
>   Dirk Thierbach <··········@gmx.de>
>     `recursive types are very often a result of a real typing error.'
> 
>   "Marshall Spight" <·······@dnai.com>
>     `I don't believe the feature this function illustrates could be
>      useful.' 
> 
> Will this be the response for any program that does, in fact, fool a
> number of static type checkers?

You missed one important point here. It is, in fact, trivial to extend 
Hindley-Milner type inference in such a way that it can deal with your 
function. That's what OCaml does when given the -rectypes option. 
However, it is a conscious, deliberate decision not to do so, at least 
not by default!

Why is that? Well, Dirk's quote gives the reason. But let me elaborate.

The design of a type system is by no means canonical. In fact, it is 
based on a set of pragmatic decisions and trade-offs. Idealized, you start 
with the trivial type system, which has only one type. Then you refine 
it incrementally by distinguishing certain classes of values through 
introduction of new types and typing rules. Introduction of typing rules 
is based on the following criteria:

- Do they catch a lot of errors?
- Are these errors serious?
- Are they hard to localize otherwise?
- Does the refinement rule out useful constructions?
- Are such constructions common?
- Are they expressible by other means?
- Are the rules intuitive?
- Do they interact nicely with other rules?

And probably more. There are never any definite answers to any of these 
questions. The trade-off depends on many factors, such as the problem 
domain the language is used for. Usually the line is drawn based on 
experience with other languages and known problem domains. In the case 
of arbitrary recursive types, experience with languages that allowed 
them has clearly shown that they caused much more grief than joy.

BTW, almost the same criteria as above apply when you as a programmer 
use the type system as a tool and program it by introducing your own 
types. It can be tremendously useful if you make the right strategic 
decisions. OTOH, especially if you are inexperienced, type abstractions 
might also turn out to be counterproductive.

	- Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
  as kids, we would all be running around in darkened rooms, munching
  magic pills, and listening to repetitive electronic music."
  - Kristian Wilson, Nintendo Inc.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnmqp6$fqu$7@newsreader2.netcologne.de>
Andreas Rossberg wrote:

> The design of a type system is by no means canonical. In fact, it is 
> based on a set of pragmatic decisions and trade-offs. Idealized, you start 
> with the trivial type system, which has only one type. Then you refine 
> it incrementally by distinguishing certain classes of values through 
> introduction of new types and typing rules. Introduction of typing rules 
> is based on the following criteria:
> 
> - Do they catch a lot of errors?
> - Are these errors serious?
> - Are they hard to localize otherwise?
> - Does the refinement rule out useful constructions?
> - Are such constructions common?
> - Are they expressible by other means?
> - Are the rules intuitive?
> - Do they interact nicely with other rules?
> 
> And probably more. There are never any definite answers to any of these 
> questions. The trade-off depends on many factors, such as the problem 
> domain the language is used for. Usually the line is drawn based on 
> experience with other languages and known problem domains. In the case 
> of arbitrary recursive types, experience with languages that allowed 
> them has clearly shown that they caused much more grief than joy.

Hmm, could a kind of "meta type system protocol" be feasible? I.e., a 
language environment in which you could tweak the type system to your 
concrete needs, without having to change the language completely?


Pascal
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9f9a7e$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>Hmm, could a kind of "meta type system protocol" be feasible? I.e., a 
>language environment in which you could tweak the type system to your 
>concrete needs, without having to change the language completely?

Yes.  But that is still a research issue at this stage.
For some work in this area, see the following references:

[1]	Martin Sulzmann, "A General Type Inference Framework for
	Hindley/Milner Style Systems."  In 5th International Symposium
	on Functional and Logic Programming (FLOPS), Tokyo, Japan,
	March 2001.

[2]	Sandra Alves and Mario Florido, "Type Inference using
	Constraint Handling Rules", Electronic Notes in Theoretical
	Computer Science volume 64, 2002.

[3]	Kevin Glynn, Martin Sulzmann, Peter J. Stuckey,
	"Type Classes and Constraint Handling Rules",
	University of Melbourne Department of Computer Science
	and Software Engineering Technical Report 2000/7, June 2000. 

Also the work on dependent type systems, e.g. the language Cayenne,
could be considered to fit in this category.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnok3h$iqq$2@f1node01.rhrz.uni-bonn.de>
Fergus Henderson wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>Hmm, could a kind of "meta type system protocol" be feasible? I.e., a 
>>language environment in which you could tweak the type system to your 
>>concrete needs, without having to change the language completely?
> 
> 
> Yes.  But that is still a research issue at this stage.
> For some work in this area, see the following references:
> 
> [1]	Martin Sulzmann, "A General Type Inference Framework for
> 	Hindley/Milner Style Systems."  In 5th International Symposium
> 	on Functional and Logic Programming (FLOPS), Tokyo, Japan,
> 	March 2001.
> 
> [2]	Sandra Alves and Mario Florido, "Type Inference using
> 	Constraint Handling Rules", Electronic Notes in Theoretical
> 	Computer Science volume 64, 2002.
> 
> [3]	Kevin Glynn, Martin Sulzmann, Peter J. Stuckey,
> 	"Type Classes and Constraint Handling Rules",
> 	University of Melbourne Department of Computer Science
> 	and Software Engineering Technical Report 2000/7, June 2000. 
> 
> Also the work on dependent type systems, e.g. the language Cayenne,
> could be considered to fit in this category.

Thanks a lot for the references!


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Andreas Rossberg
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3F9FAC15.2000508@ps.uni-sb.de>
Pascal Costanza wrote:
> 
> Hmm, could a kind of "meta type system protocol" be feasible? I.e., a 
> language environment in which you could tweak the type system to your 
> concrete needs, without having to change the language completely?

Well, such "protocols" usually come in the form of compiler switches or 
pragmas. ;-)

	- Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
  as kids, we would all be running around in darkened rooms, munching
  magic pills, and listening to repetitive electronic music."
  - Kristian Wilson, Nintendo Inc.
From: Tomasz Zielonka
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <slrnbpsmra.12u.t.zielonka@zodiac.mimuw.edu.pl>
·············@comcast.net wrote:
> 
> I offered my `black hole' program and got these responses:

It can be written in Haskell using laziness. I am quite sure you will
object in some way ;)

blackHole :: a
blackHole = error "black-hole"

*BH> :t blackHole 1 2 3 'a' "ho" (blackHole, 1.2)
blackHole 1 2 3 'a' "ho" (blackHole, 1.2) :: forall t. t

*BH> blackHole 1 2 3 'a' "ho" (blackHole, 1.2)
*** Exception: black-hole

*BH> let _ = blackHole 1 2 3 'a' "ho" (blackHole, 1.2) in "abcdef"
"abcdef"

Best regards,
Tom

-- 
.signature: Too many levels of symbolic links
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnh53t$sum$1@news.oberberg.net>
·············@comcast.net wrote:
> 
> Yuck!  Type declarations *everywhere*.  Where's this famous inference?

Hey, what do you expect of C++ code?

No type declarations in Haskell.

Regards,
Jo
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <6dnq61-kl.ln1@ID-7776.user.dfncis.de>
·············@comcast.net wrote:
> Dirk Thierbach <··········@gmx.de> writes:

>> ·············@comcast.net wrote:
>>> Dirk Thierbach <··········@gmx.de> writes:

>>>> Now why does a type system reject a program? Because there's a type
>>>> mismatch in some branch of the program.

>>> *or* because the type system was unable to prove that there *isn't* a
>>> type mismatch in *all* branches.

>> I am not sure if I read this correctly, but it seems equivalent to what
>> I say.

>>   \exists branch. mismatch-in (branch)
>>
>> should be the same as
>>
>>   \not \forall branch. \not mismatch-in (branch)
>>
>> Anyway, I don't understand your point.

> Only if you assume binary logic. 

Yes, of course.

> If there are three values that can arise --- provable-mismatch,
> provable-non-mismatch, and undecided --- then you cannot assume that
> ~provable-mismatch = provable-non-mismatch.

Hindley-Milner type inference always terminates. The result is either
a provable mismatch, or a provable-non-mismatch.

> My point is that type systems can reject valid programs.

That depends on what you understand by "valid". A provable mismatch
means that there is an execution branch that will crash if you ever
get there. If for some reason this branch will never get executed,
either because it's (non-provably) dead code, or because you have
an implicit restriction for possible arguments to this expression
the type system doesn't know about, then you could call it a "valid
program", but it will still be rejected, yes.

I am still not sure I get your point. (Maybe we always agreed; I just
don't know).

- Dirk
From: Don Geddis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87oew33h8t.fsf@sidious.geddis.org>
> ·············@comcast.net reasonably noted:
> > If there are three values that can arise --- provable-mismatch,
> > provable-non-mismatch, and undecided --- then you cannot assume that
> > ~provable-mismatch = provable-non-mismatch.

Dirk Thierbach <··········@gmx.de> writes:
> Hindley-Milner type inference always terminates. The result is either
> a provable mismatch, or a provable-non-mismatch.

You're completely wrong, which can be easily demonstrated.

The fact that it terminates isn't the interesting part.  Any inference
procedure can also "always terminate" simply by having a timeout, and reporting
"no proof" if it can't find one in time.

So what's interesting is whether the conclusions are correct.

Let's take as our ideal what a dynamic type system (say, a program in Lisp)
would report upon executing the program.  The question is, can your type
inference system make exactly the same conclusions at compile time, and predict
all (and only!) the type errors that the dynamic type system would report at
run time?

The answer is no.

> A provable mismatch means that there is an execution branch that will crash
> if you ever get there. If for some reason this branch will never get
> executed, either because it's (non-provably) dead code

That's one obvious case, so even you know that your claim of a "provable
mismatch" is incorrect.  There are programs that will never have run-time
errors, but your static type inference will claim a type error.

> or because you have an implicit restriction for possible arguments to this
> expression the type system doesn't know about, than you could call it a
> "valid program", but it will still be rejected, yes.

So haven't you just contradicted yourself?  Perhaps you think this "implicit
restriction" is unfair, because you've kept information from the system.
But the real problem is that all the information might be there, but the
system isn't capable of making sufficient inferences to figure it out.

For example:
        (defun foo (x)
          (check-type x (integer 0 10))
          (+ 1 x) )
        (defun fib (n)
          (check-type n (integer 0 *))
          (if (< n 2)
              1
              (+ (fib (- n 1)) (fib (- n 2))) ))
        (print (foo (fib 5)))

This program prints "9", and causes no run-time type errors.  Will it be
successfully type-checked at compile time by a static system?  Almost certainly
the answer is no.

In case the code isn't clear; FOO is a function that increments a number by
one.  Its domain is [0,10], and its range is [1,11].  FIB is the Fibonacci
sequence, with domain [0,infinity] and range [1,infinity].

Are you allowed to call FOO with the output of FIB?  In general, no, because
the range of FIB is much greater than the domain of FOO.

However, in the particular case of the particular code in this program, it
turns out that only (FIB 5) is called, which happens to compute to 8, well
within the domain of FOO.  Hence, no run-time type error.  Unfortunately, the
only way to figure this out is to actually compute the fifth Fibonacci number,
which surely no static type inference system is going to do.  (And if you do
happen to find one, I'm sure I can come up with a version of the halting
problem that will break that system too.)
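[For comparison, a sketch added here of how the same program is typically written in a statically typed language: the [0,10] domain of FOO stays a runtime check, just as CHECK-TYPE is, and the static checker only verifies the Int plumbing — it does not try to compute (FIB 5).]

```haskell
-- The [0,10] domain of foo remains a runtime check, exactly like the
-- CHECK-TYPE form above; the type checker verifies only that Ints
-- flow into Ints.
foo :: Int -> Int
foo x
  | 0 <= x && x <= 10 = x + 1
  | otherwise         = error "foo: argument outside [0,10]"

fib :: Int -> Int
fib n | n < 2     = 1
      | otherwise = fib (n - 1) + fib (n - 2)

main :: IO ()
main = print (foo (fib 5))  -- prints 9, and no check fires
```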

Do you now accept that your static type inference systems do NOT partition
all programs into "either a provable [type] mismatch, or a provable [type]
non-mismatch"?

Finally, to get back to the point of the dynamic typing fans: realizing that
type inference is not perfect, we're annoyed to be restricted to writing only
programs that can be successfully type checked at compile time.  Nobody
objects to doing compile-time type inference (e.g. as the CMUCL compiler for
Common Lisp does) -- especially just to generate warnings -- but many people
object to refusing to compile programs that can not be proven type-safe at
compile time (by your limited type inference systems).

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
From: Ed Avis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <l17k2rd7fb.fsf@budvar.future-i.net>
Don Geddis <···@geddis.org> writes:

>Let's take as our ideal what a dynamic type system (say, a program in
>Lisp) would report upon executing the program.  The question is, can
>your type inference system make exactly the same conclusions at
>compile time, and predict all (and only!) the type errors that the
>dynamic type system would report at run time?
>
>The answer is no.

Indeed not.  But if you were using a statically typed language you'd
write the program in a slightly different style, one that does let
type errors be found at compile time.  For example instead of a
function that takes 'either an integer, or a string' with that checked
at run time, you'd have to be more explicit:

    data T = TI Integer | TS String

    f :: T -> Bool
    f (TI i) = ...
    f (TS s) = ...

The question is whether having to change your program in this way is
so horribly cramping that it negates the advantages from having errors
found earlier.  In my experience I haven't found that to be the case.
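[The sketch above elides the function bodies; filled in here with invented predicates purely so it compiles — the particular bodies are arbitrary.]

```haskell
data T = TI Integer | TS String

f :: T -> Bool
f (TI i) = i > 0          -- body invented for the example
f (TS s) = not (null s)   -- likewise

main :: IO ()
main = print (map f [TI 3, TI 0, TS "", TS "x"])  -- prints: [True,False,False,True]
```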

- Hmm, I wonder if I might make an analogy between type checking and
syntax checking.  In the language Tcl almost everything, including
code, is a string, and the 'if' statement is just something that takes
three strings and evaluates the first, followed by either the second
or third.  The syntax of these strings is not checked at compile time.
You can write code like

    if $cond { puts "hello" ( } else { puts "goodbye" }

and the syntax error in the 'hello' branch won't be found unless you
run a test case setting $cond true.

The question is, can a static syntax checking system make exactly the
same conclusions at compile time, and predict all and only the syntax
errors that Tcl would report at run time?

The answer is no.

Should we then conclude that compile-time syntax checking is not worth
having?

-- 
Ed Avis <··@membled.com>
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnhhe1$f59$1@newsreader2.netcologne.de>
Ed Avis wrote:

> - Hmm, I wonder if I might make an analogy between type checking and
> syntax checking.  In the language Tcl almost everything, including
> code, is a string, and the 'if' statement is just something that takes
> three strings and evaluates the first, followed by either the second
> or third.  The syntax of these strings is not checked at compile time.
> You can write code like
> 
>     if $cond { puts "hello" ( } else { puts "goodbye" }
> 
> and the syntax error in the 'hello' branch won't be found unless you
> run a test case setting $cond true.
> 
> The question is, can a static syntax checking system make exactly the
> same conclusions at compile time, and predict all and only the syntax
> errors that Tcl would report at run time?
> 
> The answer is no.
> 
> Should we then conclude that compile-time syntax checking is not worth
> having?

No. Syntax errors make the program fail, regardless of whether this is 
checked at compile-time or at runtime.

A type "error" detected at compile-time doesn't imply that the program 
will fail.


Pascal
From: Ed Avis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <l1znfm7rz2.fsf@budvar.future-i.net>
Pascal Costanza <········@web.de> writes:

>>Should we then conclude that compile-time syntax checking is not
>>worth having?
>
>No. Syntax errors make the program fail, regardless of whether this is
>checked at compile-time or at runtime.
>
>A type "error" detected at compile-time doesn't imply that the
>program will fail.

Actually it does, in a statically typed language.  If you write a
function which expects a Boolean and you pass it a string instead,
it's going to fail one way or another.

OK, the bad call of that function might never be reachable in actual
execution, but equally the syntax error in Tcl code might not be
reached.  I'd rather find out about both kinds of mistake sooner
rather than later.

(I do mean a type error and not a type 'error' - obviously if you have
some mechanism to catch exceptions caused by passing the wrong type,
you wouldn't want this to be checked at compile time.)

-- 
Ed Avis <··@membled.com>
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m2ekwyiy8l.fsf@hanabi-air.shimizu.blume>
Ed Avis <··@membled.com> writes:

> Pascal Costanza <········@web.de> writes:
> 
> >>Should we then conclude that compile-time syntax checking is not
> >>worth having?
> >
> >No. Syntax errors make the program fail, regardless of whether this is
> >checked at compile-time or at runtime.
> >
> >A type "error" detected at compile-time doesn't imply that the
> >program will fail.
> 
> Actually it does, in a statically typed language.

Nitpick: Neither syntactic nor statically checked type errors make
programs fail. Instead, their presence simply implies the absence of a
program.  No program, no program failing...

Matthias
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <vfqarboa.fsf@ccs.neu.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> Ed Avis <··@membled.com> writes:
>
>> Pascal Costanza <········@web.de> writes:
>> 
>> >>Should we then conclude that compile-time syntax checking is not
>> >>worth having?
>> >
>> >No. Syntax errors make the program fail, regardless of whether this is
>> >checked at compile-time or at runtime.
>> >
>> >A type "error" detected at compile-time doesn't imply that the
>> >program will fail.
>> 
>> Actually it does, in a statically typed language.
>
> Nitpick: Neither syntactic nor statically checked type errors make
> programs fail. Instead, their presence simply implies the absence of a
> program.  No program, no program failing...

Nitpick:  You are defining as a program that which passes a static type
checker.  What would you like to call those constructs that make sense
to a human and would run correctly despite failing a static type
check?  These are the ones that are interesting to debate.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1fzhe3aof.fsf@tti5.uchicago.edu>
Joe Marshall <···@ccs.neu.edu> writes:

> >
> > Nitpick: Neither syntactic nor statically checked type errors make
> > programs fail. Instead, their presence simply implies the absence of a
> > program.  No program, no program failing...
> 
> Nitpick:  You are defining as a program that which passes a static type
> checker.

Yes, that's how it is usually done with statically typed languages.

> What would you like to call those constructs that make sense
> to a human and would run correctly despite failing a static type
> check?

I don't know.  What would you *like* to call them?  (They might be
called "programs" -- just programs in another language.)

> These are the ones that are interesting to debate.

Right, I am not disputing this.  (I was simply nitpicking on
terminology.)

Matthias
From: Ed Avis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <l1he1ue07x.fsf@budvar.future-i.net>
Joe Marshall <···@ccs.neu.edu> writes:

>Nitpick: You are defining as a program that which passes a static
>type checker.  What would you like to call those constructs that make
>sense to a human and would run correctly despite failing a static
>type check?

In a language such as Haskell, there are no constructs that 'would run
correctly' despite being ill-typed.  If you define

    f :: Int -> Int

then there's no way that f 'would run' if you somehow passed it a
String instead.  I should point out, however, that typically a
function will be defined over a whole class of values, eg

    f :: Eq a => a -> a

so that the function f works with any type that has equality defined.

In Lisp too, there are no programs that 'would run correctly' after
failing type-checking, because a type checker for Lisp would have to
be more liberal (and so potentially less helpful) than one for a
language like Haskell or ML.

The job of a type checker, just like a syntax checker, is to eliminate
programs which do not work.  That's all there is to it.

By narrowing the set of programs which work - that is, introducing
some redundancy into the programming language by increasing the number
of programs which are 'wrong' - you can often save the human
programmer some effort by catching mistakes.

-- 
Ed Avis <··@membled.com>
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <8yn6gmic.fsf@comcast.net>
Ed Avis <··@membled.com> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>
>>Nitpick: You are defining as a program that which passes a static
>>type checker.  What would you like to call those constructs that make
>>sense to a human and would run correctly despite failing a static
>>type check?
>
> In a language such as Haskell, there are no constructs that 'would run
> correctly' despite being ill-typed.  If you define
>
>     f :: Int -> Int
>
> then there's no way that f 'would run' if you somehow passed it a
> String instead.  I should point out, however, that typically a
> function will be defined over a whole class of values, eg
>
>     f :: Eq a => a -> a
>
> so that the function f works with any type that has equality defined.

What would you like to call those `syntactic constructs' that do not
currently type check in Haskell, yet *may* belong to a class of
syntactic constructs that *could* conceivably be type checked in a
successor to Haskell that has a better type inferencing engine?

Did I close all the loopholes?

Look, early versions of Haskell did not have as powerful a type
inferencing mechanism as the recent versions.  What do you call those
syntactic constructs that weren't programs before but are now?  What
do you call the syntactic constructs that aren't programs now, but
might be tomorrow?  *THOSE ARE THE THINGS WE ARE TALKING ABOUT*
From: Ed Avis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <l18yn0e495.fsf@budvar.future-i.net>
·············@comcast.net writes:

>What would you like to call those `syntactic constructs' that do not
>currently type check in Haskell, yet *may* belong to a class of
>syntactic constructs that *could* conceivably be type checked in a
>successor to Haskell that has a better type inferencing engine?

Well of course, if you program in a different language you'd need a
different type checking system.

-- 
Ed Avis <··@membled.com>
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <1xssqozs.fsf@comcast.net>
Ed Avis <··@membled.com> writes:

> ·············@comcast.net writes:
>
>>What would you like to call those `syntactic constructs' that do not
>>currently type check in Haskell, yet *may* belong to a class of
>>syntactic constructs that *could* conceivably be type checked in a
>>successor to Haskell that has a better type inferencing engine?
>
> Well of course, if you program in a different language you'd need a
> different type checking system.

Obviously.

But let us suppose that someone improved the type system of Haskell
such that some useful complicated constructs that did not pass the
type checker were now able to be verified as correct.  Didn't this
happen when Haskell was extended with second-order polymorphic types?
(when the restriction on forall was lifted?)

You could say that lifting this restriction created a `new' language
and refuse to admit the notion that the two are related, or you might take
the viewpoint that some programs that were invalid before are now
valid.  The former point becomes rather strained if you have some sort
of switch on the implementation.  

For instance, OCaml allows recursive function types if you specify
that you want them.  Most people seem to view this as 

`OCaml with recursive function types'

instead of

`completely new language unrelated to OCaml, but sharing the exact
same syntax in all places except for declaration of recursive function
types.'

And most people seem to think that my `black hole' `syntactic
construct', which does not type check under OCaml without the flag,
but *does* under OCaml with the flag, can be unambiguously called a
`program'.
From: Ed Avis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <l1brrw1d3e.fsf@budvar.future-i.net>
·············@comcast.net writes:

>But let us suppose that someone improved the type system of Haskell
>such that some useful complicated constructs that did not pass the
>type checker were now able to be verified as correct.

Wouldn't you need to define the semantics for these constructs too?
And perhaps extend the compiler to generate code for them?

My original point was that the type-checker won't reject programs
which are valid Haskell, so it makes no sense to talk about the
checker being too strict or not allowing enough flexibility.  A
type-checker for some other language such as Lisp would obviously have
to not flag errors for any legal Lisp program.  (That would probably
mean not checking anything at all, with the programmer having to
explicitly state 'yes I don't want to wait until runtime to catch this
error'.)

-- 
Ed Avis <··@membled.com>
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ptgcp8ay.fsf@comcast.net>
Ed Avis <··@membled.com> writes:

> ·············@comcast.net writes:
>
>>But let us suppose that someone improved the type system of Haskell
>>such that some useful complicated constructs that did not pass the
>>type checker were now able to be verified as correct.
>
> Wouldn't you need to define the semantics for these constructs too?
> And perhaps extend the compiler to generate code for them?
>
> My original point was that the type-checker won't reject programs
> which are valid Haskell, so it makes no sense to talk about the
> checker being too strict or not allowing enough flexibility.  

So any program that currently runs on Haskell will run on the very
first version of Haskell?  No?  But Haskell won't reject programs that
are valid Haskell, so is the later version wrong?
From: Ed Avis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <l18yn014jv.fsf@budvar.future-i.net>
·············@comcast.net writes:

>>My original point was that the type-checker won't reject programs
>>which are valid Haskell, so it makes no sense to talk about the
>>checker being too strict or not allowing enough flexibility.
>
>So any program that currently runs on Haskell will run on the very
>first version of Haskell?

I should have said 'Haskell 98' and 'a type-checker in a working
Haskell 98 implementation'.

The same applies to any language, of course - Haskell 98 is just an
example.  What I mean is don't shoot the messenger if the type checker
tells you your program is invalid.  You might however want to change
to a different language (such as a later version of Haskell, or one
with nonstandard extensions) where your program is valid (and of
course the typechecker for that implementation will be happy).

-- 
Ed Avis <··@membled.com>
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bo0d9m$88p$1@news.oberberg.net>
·············@comcast.net wrote:
> 
> And most people seem to think that my `black hole' `syntactic
> construct', which does not type check under OCaml without the flag,
> but *does* under OCaml with the flag, can be unambiguously called a
> `program'.

I'm still hoping for an explanation on the practical relevance of that 
black hole device.
I don't insist that the black hole function as given does anything 
useful; if the black-hole function was just a condensed example of a 
general usage pattern, I'd like to know what that pattern is. It would 
help me to find out where in the expressivity spectrum the various 
languages lie.

Regards,
Jo
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <fzh8p1k4.fsf@comcast.net>
Joachim Durchholz <·················@web.de> writes:

> ·············@comcast.net wrote:
>> And most people seem to think that my `black hole' `syntactic
>> construct', which does not type check under OCaml without the flag,
>> but *does* under OCaml with the flag, can be unambiguously called a
>> `program'.
>
> I'm still hoping for an explanation on the practical relevance of that
> black hole device.

Neel Krishnaswami had a wonderful explanation in article
 <····················@gs3106.sp.cs.cmu.edu>
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bo2uq4$ajh$1@news.oberberg.net>
·············@comcast.net wrote:
> Neel Krishnaswami had a wonderful explanation in article
>  <····················@gs3106.sp.cs.cmu.edu>

Sorry, that link doesn't work for me, I don't know the proper syntax for 
news: links, and I couldn't type one in even if I knew it.

Anybody got a full reference? This thread is too long for searching...

Regards,
Jo
From: Emile van Sebille
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bo395a$17qsu5$1@ID-11957.news.uni-berlin.de>
"Joachim Durchholz" asks:
> ·············@comcast.net wrote:
> > Neel Krishnaswami had a wonderful explanation in article
> >  <····················@gs3106.sp.cs.cmu.edu>
>
> Sorry, that link doesn't work for me, I don't know the proper syntax for
> news: links, and I couldn't type one in even if I knew it.
>

I prepend:

http://groups.google.com/groups?selm=


in this case yielding

·························································@gs3106.sp.cs.cmu.edu


Emile van Sebille
·····@fenx.com
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bo3u6v$s2$1@news.oberberg.net>
Emile van Sebille wrote:
> "Joachim Durchholz" asks:
> 
>>·············@comcast.net wrote:
>>
>>>Neel Krishnaswami had a wonderful explanation in article
>>> <····················@gs3106.sp.cs.cmu.edu>
>>
>>Sorry, that link doesn't work for me, I don't know the proper syntax for
>>news: links, and I couldn't type one in even if I knew it.
>>
> 
> 
> I prepend:
> 
> http://groups.google.com/groups?selm=

Got it - thanks!

Regards,
Jo
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <u2ph71-h94.ln1@ID-7776.user.dfncis.de>
·············@comcast.net wrote:
> Joachim Durchholz <·················@web.de> writes:

>> I'm still hoping for an explanation on the practical relevance of that
>> black hole device.

> Neel Krishnaswami had a wonderful explanation in article
> <····················@gs3106.sp.cs.cmu.edu>

And note that his example for practical relevance includes a datatype.

Haskell doesn't support recursive types directly, but a recursive
datatype for lists is easy:

data List a = Nil | Cons a (List a) 

Since Haskell is lazy, that's already a lazy list, but even if Haskell
was eager, you could write a similar datatype for lazy lists.
(And of course Haskell lists are essentially just given by the above
datatype, together with some syntactic sugar).

It's very natural for a relevant application of recursive types to
be coupled with data. 
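For comparison, here is a rough Python sketch of the same idea (purely illustrative; the names are made up, not from any library): a recursive cons-cell list whose tail is a thunk, so the list is forced lazily just like the Haskell datatype above.

```python
# A minimal lazy cons list in Python, mirroring
#   data List a = Nil | Cons a (List a)
# Laziness is simulated with a zero-argument function (thunk) for the tail.

NIL = None  # plays the role of Nil

def cons(head, tail_thunk):
    """Cons cell: tail_thunk is a callable returning the rest of the list."""
    return (head, tail_thunk)

def ones():
    """The infinite lazy list 1 : 1 : 1 : ..."""
    return cons(1, ones)

def take(n, lst):
    """Force the first n elements into an ordinary Python list."""
    out = []
    while n > 0 and lst is not NIL:
        head, tail_thunk = lst
        out.append(head)
        lst = tail_thunk()
        n -= 1
    return out

print(take(5, ones()))  # -> [1, 1, 1, 1, 1]
```

Only as much of the infinite list is built as `take` demands, which is the point of the recursive-datatype-plus-laziness combination.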

- Dirk
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnjavv$v4c$2@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:
> Ed Avis <··@membled.com> writes:
> 
> 
>>Pascal Costanza <········@web.de> writes:
>>
>>
>>>>Should we then conclude that compile-time syntax checking is not
>>>>worth having?
>>>
>>>No. Syntax errors make the program fail, regardless whether this is
>>>checked at compile-time or at runtime.
>>>
>>>A type "error" detected at compile-time doesn't imply that the
>>>program will fail.
>>
>>Actually it does, in a statically typed language.
> 
> 
> Nitpick: Neither syntactic nor statically checked type errors make
> programs fail. Instead, their presence simply implies the absence of a
> program.

Yes, the absence of a program that might not have failed, had it not been 
rejected by the static type system.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1brs23afm.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> Matthias Blume wrote:
> > Ed Avis <··@membled.com> writes:
> > 
> >>Pascal Costanza <········@web.de> writes:
> >>
> >>
> >>>>Should we then conclude that compile-time syntax checking is not
> >>>>worth having?
> >>>
> >>>No. Syntax errors make the program fail, regardless whether this is
> >>>checked at compile-time or at runtime.
> >>>
> >>>A type "error" detected at compile-time doesn't imply that the
> >>>program will fail.
> >>
> >>Actually it does, in a statically typed language.
> > Nitpick: Neither syntactic nor statically checked type errors make
> 
> > programs fail. Instead, their presence simply implies the absence of a
> > program.
> 
> Yes, the absence of a program that might not have failed, had it not been
> rejected by the static type system.

"would" "if"  bah

You first have to define what the meaning of a phrase is going to be
if you let it slip past the type checker even though it is not
well-typed.  As Andreas Rossberg pointed out, it is quite often the
case that the type is essential for understanding the semantics.
Simply ignoring types does not necessarily make sense under such
circumstances.  So it all depends on how you re-interpret the language
after getting rid of static types.

Matthias
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnji1s$fri$2@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:

> You first have to define what the meaning of a phrase is going to be
> if you let it slip past the type checker even though it is not
> well-typed.  As Andreas Rossberg pointed out, it is quite often the
> case that the type is essential for understanding the semantics.
> Simply ignoring types does not necessarily make sense under such
> circumstances.  So it all depends on how you re-interpret the language
> after getting rid of static types.

Can you show me an example of a program that doesn't make sense anymore 
when you strip off the static type information?


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Andreas Rossberg
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3F9D5D21.2060408@ps.uni-sb.de>
Pascal Costanza wrote:
> 
> Can you show me an example of a program that doesn't make sense anymore 
> when you strip off the static type information?

Here is a very trivial example, in SML:

	20 * 30

Multiplication, as well as literals, are overloaded. Depending on 
whether you type this expression as Int8.int (8-bit integers) or 
IntInf.int (infinite precision integer) the result is either 600 or an 
overflow exception.

So the program does not make sense without type information, because it 
does not have an unambiguous (i.e. no) semantics.

I'm ready to admit that it may be a dubious example of a typing feature. 
But it is simple, and clearly sufficient to disprove your repeated claim 
that static types don't add expressiveness to a language. If you did not 
have them for the example above, you would need some other feature to 
express the disambiguation.
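A rough Python analogue of that point (illustrative only; `Int8` is a made-up wrapper class, since Python has no static types and the disambiguation must be expressed as an explicit runtime construct):

```python
# The "same" expression 20 * 30 means different things depending on
# which numeric type it is read at. In SML the static type picks one
# reading; in Python we must pick explicitly with a wrapper type.

class Int8:
    """Illustrative 8-bit signed integer that raises on overflow."""
    def __init__(self, n):
        if not -128 <= n <= 127:
            raise OverflowError(n)
        self.n = n
    def __mul__(self, other):
        return Int8(self.n * other.n)

print(20 * 30)              # arbitrary-precision reading: 600
try:
    Int8(20) * Int8(30)     # 8-bit reading: raises OverflowError
except OverflowError:
    print("overflow")
```

The expression alone does not determine the result; something outside it (a static type, or here an explicit constructor) has to.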

	- Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
  as kids, we would all be running around in darkened rooms, munching
  magic pills, and listening to repetitive electronic music."
  - Kristian Wilson, Nintendo Inc.
From: Matthew Danish
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031027184024.GO1454@mapcar.org>
On Mon, Oct 27, 2003 at 07:00:01PM +0100, Andreas Rossberg wrote:
> Pascal Costanza wrote:
> >
> >Can you show me an example of a program that doesn't make sense anymore 
> >when you strip off the static type information?
> 
> Here is a very trivial example, in SML:
> 
> 	20 * 30
> 
> Multiplication, as well as literals, are overloaded. Depending on 
> whether you type this expression as Int8.int (8-bit integers) or 
> IntInf.int (infinite precision integer) the result is either 600 or an 
> overflow exception.

May I point out that the correct answer is 600, not overflow?

Something that annoys me about many statically-typed languages is the
insistence that arithmetic operations should return the same type as the
operands.  2 / 4 is 1/2, not 0.  Arithmetically, 1 * 1.0 is
well-defined, so why can I not write this in an SML program?

I do believe Haskell does it right, though, with its numeric tower
derived from Lisp.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <pan.2003.10.27.20.13.36.633554@knm.org.pl>
On Mon, 27 Oct 2003 13:40:24 -0500, Matthew Danish wrote:

> Something that annoys me about many statically-typed languages is the
> insistence that arithmetic operations should return the same type as the
> operands.  2 / 4 is 1/2, not 0.  Arithmetically, 1 * 1.0 is
> well-defined, so why can I not write this in an SML program?

Confusing integer division with rational division is not a consequence
of static typing, except that with static typing it's not as dangerous as
with dynamic typing (because a function declared as taking floating point
arguments and performing / on them will do the same even if you pass
integers to it, which in most languages will be automatically converted).

But in Haskell / is defined only on types in class Fractional, which
doesn't include integers. Integer division is `div` and `quot` (they
differ for negative numbers). In Pascal 2/4 is 0.5, in SML / is only
for floats, in both languages integer division is spelled div. Most
languages don't support rational numbers, so / on integers can only
return a floating point number or be an error.

Yes, many languages inherited the confusion of divisions from C. This
includes some dynamically typed languages, e.g. Python (where a fix is
planned) and Ruby.

Mixed-type arithmetic is a different story. I'm talking only about 1/2
being equal to 0 in some languages - this doesn't coincide with static
typing.
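For the record, modern Python (3.x, and 2.2+ with the `__future__` import) spells the two divisions differently, much as Haskell separates `/` from `div`:

```python
# Python 3 separates the two operations that C conflated:
#   /   true division   (never truncates)
#   //  floor division  (the div-like integer form)
assert 1 / 2 == 0.5
assert 1 // 2 == 0
assert -7 // 2 == -4     # floors toward negative infinity, like Haskell's div
assert 7.0 // 2 == 3.0   # // also works on floats, still flooring
```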

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnkhqe$jbo$1@newsreader2.netcologne.de>
Marcin 'Qrczak' Kowalczyk wrote:

> On Mon, 27 Oct 2003 13:40:24 -0500, Matthew Danish wrote:
> 
> 
>>Something that annoys me about many statically-typed languages is the
>>insistence that arithmetic operations should return the same type as the
>>operands.  2 / 4 is 1/2, not 0.  Arithmetically, 1 * 1.0 is
>>well-defined, so why can I not write this in an SML program?
> 
> 
> Confusing integer division with rational division is not a consequence
> of static typing, except that with static typing it's not as dangerous as
> with dynamic typing (because a function declared as taking floating point
> arguments and performing / on them will do the same even if you pass
> integers to it, which in most languages will be automatically converted).

Sorry, I don't get this. Why should it be more dangerous with dynamic 
typing? Common Lisp definitely gets this right, and most probably some 
other dynamically typed languages do, too.

> Mixed-type arithmetic is a different story. I'm talking only about 1/2
> being equal to 0 in some languages - this doesn't coincide with static
> typing.

Yes, dynamic vs static typing seems to be irrelevant here. (Although I 
wonder why you should need to distinguish between / and div...)


Pascal
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <pan.2003.10.28.09.55.54.285334@knm.org.pl>
On Tue, 28 Oct 2003 02:46:54 +0100, Pascal Costanza wrote:

> Sorry, I don't get this. Why should it be more dangerous with dynamic 
> typing? Common Lisp definitely gets this right, and most probably some 
> other dynamically typed languages.

Common Lisp, Scheme and Perl get it right: the result of / on integers
is a rational or floating point number, and integer division is spelled
differently.

Python and Ruby get it wrong: the result is an integer (truncated
division), very different from the result of / on the same numbers
represented in a different type.

If a statically typed language gets this "wrong", it doesn't hurt, except
for newbies who write 1/n. For example, consider this:

   double foo(double *a, int len)
   {
      ... Some computation involving a[i]/a[j] which is supposed
      ... to produce a true real quotient.
   }

You make some array of doubles, set a[i] = exp(i) (a double) and it works.
Then you set a[i] = 2*i (an integer) and it still works, because the
integer has been converted to double during assignment. An integer can be
used in place of a double with the same value.

Now in a dynamically typed language the analogous code would set some
array elements to integers without conversion. If division on them means
a different thing, an integer can no longer be used in place of a floating
point number with the same value. And division is the only operation which
breaks this.

Dynamically typed languages shouldn't use / for both real and truncating
division. For statically typed languages it's only a matter of taste (I
prefer to use different symbols), but for dynamically typed languages it's
important to prevent bugs.
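Under that design the property Marcin describes does hold: an integer really is substitutable for a float of the same value. A quick check in today's Python 3 (where `/` is always true division):

```python
# With a single true-division operator, filling an array with ints
# instead of equal-valued floats cannot change the result of a[i]/a[j].
ints   = [2 * i for i in range(1, 5)]           # [2, 4, 6, 8]
floats = [float(2 * i) for i in range(1, 5)]    # [2.0, 4.0, 6.0, 8.0]
assert all(ints[i] / ints[j] == floats[i] / floats[j]
           for i in range(4) for j in range(4) if i != j)
```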

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Don Geddis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87ptghwb9d.fsf@sidious.geddis.org>
Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> writes:
> You make some array of doubles, set a[i] = exp(i) (a double) and it works.
> Then you set a[i] = 2*i (an integer) and it still works, because the
> integer has been converted to double during assignment. An integer can be
> used in place of a double with the same value.

Surely in a "proper" statically typed language, you can't assign an integer
to a variable declared (floating point) double.  Shouldn't that be a type
error?  (Unless you do an explicit cast, of course, but then the programmer
made the decision, not the language.)

> Now in a dynamically typed language the analogous code would set some
> array elements to integers without conversion. If division on them means
> a different thing, an integer can no longer be used in place of a floating
> point number with the same value. And division is the only operation which
> breaks this.

Why would division on integers mean something different than division on
(floating point) doubles?

> Dynamically typed languages shouldn't use / for both real and truncating
> division. For statically typed languages it's only a matter of taste (I
> prefer to use different symbols), but for dynamically typed languages it's
> important to prevent bugs.

In Lisp, "/" always means the same thing.  It's never a truncating operation.

This doesn't seem to be related to static vs. dynamic typing.

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
Football combines two of the worst things about American life.
It is violence punctuated by committee meetings.
	-- George Will
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <pan.2003.10.28.21.24.52.466171@knm.org.pl>
On Tue, 28 Oct 2003 08:57:34 -0800, Don Geddis wrote:

> Why would division on integers mean something different than division on
> (floating point) doubles?

Because C did it this way and many languages are copying its conventions.
And C did it because it used / for integer division before it had floating
point types.

I was responding to "Something that annoys me about many statically-typed
languages is the insistence that arithmetic operations should return the
same type as the operands.  2 / 4 is 1/2, not 0."

While it is true that many statically typed languages do that, it's not
a consequence of static typing (others don't). The only fault of static
typing here is that it makes this choice relatively harmless, compared to
dynamically typed languages where it's a serious problem if this choice
is made.

I was going to respond "making 2/4 equal to 0 is unrelated to static or
dynamic typing", but it *is* related, only in a subtle way.

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Alexander Schmolck
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <yfsllr5s6a8.fsf@black132.ex.ac.uk>
Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> writes:
 
> Python and Ruby get it wrong: the result is an integer (truncated
> division), very different from the result of / on the same numbers
> represented in a different type.

It's been fixed since Python >= 2.2 (sort of):

>>> from __future__ import division
>>> 1/3
0.33333333333333331
>>> 5//2
2

'as
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnk4uu$e80$1@news.oberberg.net>
Matthew Danish wrote:

> On Mon, Oct 27, 2003 at 07:00:01PM +0100, Andreas Rossberg wrote:
> 
>>Pascal Costanza wrote:
>>
>>>Can you show me an example of a program that doesn't make sense anymore 
>>>when you strip off the static type information?
>>
>>Here is a very trivial example, in SML:
>>
>>	20 * 30
>>
>>Multiplication, as well as literals, are overloaded. Depending on 
>>whether you type this expression as Int8.int (8-bit integers) or 
>>IntInf.int (infinite precision integer) the result is either 600 or an 
>>overflow exception.
> 
> May I point out that the correct answer is 600, not overflow?

No, the correct answer isn't 600 in all cases.
If it's infinite-precision arithmetic, the correct answer is indeed 600.
If it's 8-bit arithmetic with overflow, there is no correct answer.
If it's 8-bit arithmetic with wrap-around, the correct answer is 88.
If it's 8-bit arithmetic with saturation, the correct answer is 255.
(The result happens to be independent of whether the arithmetic is 
signed or unsigned.)
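The finite-width readings can be checked mechanically. A small Python sketch (Python's own ints are unbounded, so the 8-bit behaviours are modelled explicitly; the helper names are made up for illustration):

```python
def mul8_wrap(a, b):
    """8-bit wrap-around multiply (unsigned view): keep the low 8 bits."""
    return (a * b) % 256

def mul8_saturate(a, b, lo=0, hi=255):
    """8-bit saturating multiply: clamp the true product into [lo, hi]."""
    return max(lo, min(hi, a * b))

assert 20 * 30 == 600               # infinite precision
assert mul8_wrap(20, 30) == 88      # 600 mod 256
assert mul8_saturate(20, 30) == 255 # clamped to the top of the range
```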

Using infinite-precision internally for all calculations isn't always 
practicable, and in some algorithms this will even give you wrong 
results (e.g. when multiplying or dividing using negative numbers, or 
with saturating arithmetic).

Of course, this doesn't say much about the distinction between static 
and dynamic typing, actually the issues and unlucky fringe cases seem 
very similar to me. (In particular, overloading and type inference don't 
tell us whether the multiplication should be done in 8-bit, 16-bit, 
machine-precision, infinite-precision, wrap-around, or saturated 
arithmetic, and run-time types don't give us any useful answer either.)

Regards,
Jo
From: Matthew Danish
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031028093713.GQ1454@mapcar.org>
On Mon, Oct 27, 2003 at 10:08:15PM +0100, Joachim Durchholz wrote:
> Matthew Danish wrote:
> 
> >On Mon, Oct 27, 2003 at 07:00:01PM +0100, Andreas Rossberg wrote:
> >
> >>Pascal Costanza wrote:
> >>
> >>>Can you show me an example of a program that doesn't make sense anymore 
> >>>when you strip off the static type information?
> >>
> >>Here is a very trivial example, in SML:
> >>
> >>	20 * 30
> >>
> >>Multiplication, as well as literals, are overloaded. Depending on 
> >>whether you type this expression as Int8.int (8-bit integers) or 
> >>IntInf.int (infinite precision integer) the result is either 600 or an 
> >>overflow exception.
> >
> >May I point out that the correct answer is 600, not overflow?
> 
> No, the correct answer isn't 600 in all cases.
> If it's infinite-precision arithmetic, the correct answer is indeed 600.
> If it's 8-bit arithmetic with overflow, there is no correct answer.
> If it's 8-bit arithmetic with wrap-around, the correct answer is 88.
> If it's 8-bit arithmetic with saturation, the correct answer is 255.
> (The result happens to be independent of whether the arithmetic is 
> signed or unsigned.)

What is this stuff?  I am talking about integers here!  You know, the
infinite set with the same cardinality as natural numbers.  Why can't
the implementation figure out how to represent them most efficiently?

> Using infinite-precision internally for all calculations isn't always 
> practicable, and in some algorithms this will even give you wrong 
> results (e.g. when multiplying or dividing using negative numbers, or 
> with saturating arithmetic).

If it gives you the wrong results, I'm not interested.  How hard is it
to get the arithmetically correct result of 20 * 30?  Surely 20 * -30 is
not that difficult either.  It is a pet peeve I have with many
programming languages, especially the ones that are so big on proofs and
correctness.

Arithmetic as it exists in ML is a good example of the difference
between correctness and type-safety.  It is also a good example of the
difference between correctness and proof.

Of course, you may claim, that the definition of div, or / or whatever
it is called, is "correct" in that it conforms to the specification.
All that says to me is that the specification is wrong.  Garbage can be
proven, in some system.  Doesn't mean I'm interested.

> Of course, this doesn't say much about the distinction between static 
> and dynamic typing, actually the issues and unlucky fringe cases seem 
> very similar to me. (In particular, overloading and type inference don't 
> tell us whether the multiplication should be done in 8-bit, 16-bit, 
> machine-precision, infinite-precision, wrap-around, or saturated 
> arithmetic, and run-time types don't give us any useful answer either.)

Lisp gets exact rational arithmetic right, why don't ML or Haskell?
Clever compilers like CMUCL can even emit efficient code when it can
figure out the specific types.  Surely a statically typed language would
find this even easier!

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1vfq92tl1.fsf@tti5.uchicago.edu>
Matthew Danish <·······@andrew.cmu.edu> writes:

> On Mon, Oct 27, 2003 at 10:08:15PM +0100, Joachim Durchholz wrote:
> > Matthew Danish wrote:
> > 
> > >On Mon, Oct 27, 2003 at 07:00:01PM +0100, Andreas Rossberg wrote:
> > >
> > >>Pascal Costanza wrote:
> > >>
> > >>>Can you show me an example of a program that does't make sense anymore 
> > >>>when you strip off the static type information?
> > >>
> > >>Here is a very trivial example, in SML:
> > >>
> > >>	20 * 30
> > >>
> > >>Multiplication, as well as literals, are overloaded. Depending on 
> > >>whether you type this expression as Int8.int (8-bit integers) or 
> > >>IntInf.int (infinite precision integer) the result is either 600 or an 
> > >>overflow exception.
> > >
> > >May I point out that the correct answer is 600, not overflow?
> > 
> > No, the correct answer isn't 600 in all cases.
> > If it's infinite-precision arithmetic, the correct answer is indeed 600.
> > If it's 8-bit arithmetic with overflow, there is no correct answer.
> > If it's 8-bit arithmetic with wrap-around, the correct answer is 88.
> > If it's 8-bit arithmetic with saturation, the correct answer is 255.
> > (The result happens to be independent of whether the arithmetic is 
> > signed or unsigned.)
> 
> What is this stuff?  I am talking about integers here!  You know, the
> infinite set with the same cardinality as natural numbers.

I didn't see you say that.  If you write 20 * 30 in SML, then you are
*not* talking about the infinite set of integers but rather about a set
[-2^N..2^N-1] where N is something like 31 or 32.  If you had written

   20 * 30 : IntInf.int

or something like that, then you would have talked about the infinite
set of integers, and you would have gotten your "correct" result of
600.  [I still have no idea why that is more "correct" than, say, 18.
That's the result in Z_{97}.]
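(For concreteness, the various candidate "correct" answers proposed in this thread are easy to tabulate; a quick Python check:)

```python
product = 20 * 30

assert product == 600              # infinite-precision arithmetic
assert product % 256 == 88         # 8-bit wrap-around
assert min(product, 255) == 255    # 8-bit saturation
assert product % 97 == 18          # the result in Z_{97}
```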

> Lisp gets exact rational arithmetic right, why don't ML or Haskell?
> Clever compilers like CMUCL can even emit efficient code when it can
> figure out the specific types.  Surely a statically typed language would
> find this even easier!

Sure, it is definitely not any harder than it is in Lisp.  The problem
is that for many algorithms people want to be sure that the compiler
represents their values in machine words.  Infinite precision is
needed sometimes, but in the majority of cases it is overkill.  If you
need infinite precision, specify the type (IntInf.int in SML's case).
A clever compiler might optimize that like a Lisp compiler does.  In
most other cases, why take any chances?

Matthias
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnmr51$i83$1@newsreader2.netcologne.de>
Matthias Blume wrote:

> The problem
> is that for many algorithms people want to be sure that the compiler
> represents their values in machine words.  Infinite precision is
> needed sometimes, but in the majority of cases it is overkill.  If you
> need infinite precision, specify the type (IntInf.int in SML's case).
> A clever compiler might optimize that like a Lisp compiler does.  In
> most other cases, why take any chances?

I disagree strongly here. I am convinced that in most algorithms, 
machine words don't matter at all. Have you ever seen in books on 
algorithms that they actually _need_ to restrict them to values that are 
representable in machine word sizes?

"Here is a proof for the correctness of the Quicksort algorithm, but 
only for arrays with a maximum length of 65535." Ha! Nonsense! ;-)


Computers are fast enough and have enough memory nowadays. You are 
talking about micro efficiency. That's not interesting anymore.


(but I have no problems if we agree to disagree here. we both don't have 
the necessary empirical data to back our claims)

Pascal
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1brs12bbv.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> Computers are fast enough and have enough memory nowadays. You are
> talking about micro efficiency. That's not interesting anymore.

I have worked on projects where people worried about *every cycle*.
(Most of the time I agree with you, though.  Still, using infinite
precision by default is, IMO, a mistake.  Having it around and at your
fingertips, though, is nice.  That's why I added the long-missing
compiler support for IntInf to SML/NJ recently.)

> (but I have no problems if we agree to disagree here.

Good.

> we both don't have the necessary empirical data to back our claims)

Speak for yourself.

Matthias
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnmuja$ovf$1@newsreader2.netcologne.de>
Matthias Blume wrote:

>>we both don't have the necessary empirical data to back our claims)
> 
> 
> Speak for yourself.

You too.


This:

 > I have worked on projects where people worried about *every cycle*.
  ^^^^^^^^

 > (Most of the time I agree with you, though.  Still, using infinite
 > precision by default is, IMO, a mistake.  Having it around and at your
                           ^^^^^^

 > fingertips, though, is nice.  That's why I added the long-missing
 > compiler support for IntInf to SML/NJ recently.)

is by any scientific standard not enough evidence to back this:

> The problem
> is that for many algorithms people want to be sure that the compiler
> represents their values in machine words.  Infinite precision is
> needed sometimes, but in the majority of cases it is overkill.  If you
> need infinite precision, specify the type (IntInf.int in SML's case).
> A clever compiler might optimize that like a Lisp compiler does.  In
> most other cases, why take any chances?


Pascal
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m265i84qsh.fsf@hanabi-air.shimizu.blume>
Pascal Costanza <········@web.de> writes:

> Matthias Blume wrote:
> 
> >>we both don't have the necessary empirical data to back our claims)
> > Speak for yourself.
> 
> You too.

I am.

> 
> 
> This:
> 
>  > I have worked on projects where people worried about *every cycle*.
>   ^^^^^^^^
> 
>  > (Most of the time I agree with you, though.  Still, using infinite
>  > precision by default is, IMO, a mistake.  Having it around and at your
>                            ^^^^^^
> 
>  > fingertips, though, is nice.  That's why I added the long-missing
>  > compiler support for IntInf to SML/NJ recently.)
> 
> is by any scientific standard not enough evidence to back this:

It is more than enough evidence that there are programs which require
cycle-level efficiency and therefore cannot afford to use infinite
precision arithmetic.

It is, in my opinion, ridiculous to claim that most programs should
use infinite precision integers in almost all places (i.e., by
default).  Infinite precision is rarely necessary and almost always
wasteful overkill.

And in any case, it is also completely beside the point in the
discussion of static checking vs. everything dynamic.  (It just came
up as a side discussion after Andreas gave an example of ML code which
exhibits different semantics in different typing contexts -- a
discussion which seems to circle around the extremely questionable
idea that the only correct result of 20 * 30 has to be 600.)

Matthias
From: Matthew Danish
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031029085604.GU1454@mapcar.org>
On Wed, Oct 29, 2003 at 04:20:01AM +0000, Matthias Blume wrote:
> It is more than enough evidence that there are programs which require
> cycle-level efficiency and therefore cannot afford to use infinite
> precision arithmetic.

Said programs will then use specialized data types.

> It is, in my opinion, ridiculous to claim that most programs should
> use infinite precision integers in almost all places (i.e., by
> default).  Infinite precision is rarely necessary and almost always
> wasteful overkill.

How is it wasteful?  Numbers can still be represented by the most
efficient representation available, while retaining infinite precision
when operated upon.  Do you realize that Common Lisp implementations
represent integers in the range of a fixnum as machine words?  And
automatically convert them to bignum objects when they overflow?
Declarations can take this further, such that a compiler as smart as
CMUCL can manipulate raw (unsigned-byte 32) values, for example.

Are the vast majority of your programs the type which behave properly
within machine-word integers?  Do they take into account the possibility
of overflow?  Should someone have to worry about issues like overflow
when they just want to write a simple arithmetic program, in a high
level language?
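In a high-level language the answer should be no.  For instance, this naive factorial, sketched in Python, never overflows, because the implementation promotes to big integers behind the scenes:

```python
def factorial(n):
    # No overflow to worry about: small results stay in machine words,
    # large ones are promoted to big integers automatically.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# 100! is a 158-digit number, yet the division below is still exact.
ratio = factorial(100) // factorial(99)
```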

> idea that the only correct result of 20 * 30 has to be 600.)

(20 * 30) mod 256 is, of course, a completely different expression.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m2y8v4w2qj.fsf@hanabi-air.shimizu.blume>
Matthew Danish <·······@andrew.cmu.edu> writes:

> Declarations can take this further, such that a compiler as smart as
> CMUCL can manipulate raw (unsigned-byte 32) values, for example.

Oh, so you're saying you want static declarations, checked and
enforced by the compiler?  Hmm, I've read this point of view in this
thread somewhere.

> Are the vast majority of your programs the type which behave properly
> within machine-word integers?

> > idea that the only correct result of 20 * 30 has to be 600.)
> 
> (20 * 30) mod 256 is, of course, a completely different expression.

Undoubtedly, it is a different expression.  But it might mean the
same, given a correspondingly chosen domain for 20 and 30, together
with a certain * operation.

Matthias
From: Espen Vestre
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <kw3cdcf706.fsf@merced.netfonds.no>
Matthias Blume <····@my.address.elsewhere> writes:

> > Declarations can take this further, such that a compiler as smart as
> > CMUCL can manipulate raw (unsigned-byte 32) values, for example.
> 
> Oh, so you're saying you want static declarations, checked and
> enforced by the compiler?  

CL declarations are nothing but hints to the compiler that it is
allowed to optimize. Sometimes this is useful. 

> Hmm, I've read this point of view in this thread somewhere.

Now you're conflating two readings of "want declarations" (i.e.  "want
them whenever they're convenient for optimizing" vs. "want them
everywhere and always")
-- 
  (espen)
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnoo24$p1b$1@news.oberberg.net>
Espen Vestre wrote:
> Now you're conflating two readings of "want declarations" (i.e.  "want
> them whenever they're convenient for optimizing" vs. "want them
> everywhere and always")

Type inference is about "as much static checking as possible with as 
few annotations as absolutely necessary".
HM typing is extremely far on the "few declarations" side: a handful 
even in large systems.

It sounds unbelievable, but it really works.

Oh, there's one catch: Most functional programs have heaps of type 
definitions similar to this one:
   data Tree a = Leaf a
               | Node (Tree a) (Tree a)
               | Empty
However, these definitions don't only declare the type, they also 
introduce constructors, which also serve as inspectors for pattern matching.

In other words, the above code is all that's needed to allow me to 
construct arbitrary values of the new type (gratuitous angle brackets 
inserted to make the code easier to recognize with a C++ background), 
like this:
   Leaf 5 -- Creates a Leaf <Integer> object that contains value 5
   Node Empty Empty -- Creates a node that doesn't have leaves
     -- Type is Tree <anything>, i.e. we can insert this object into
     -- a tree of any type, since there's no way that this can ever
     -- lead to type errors.
   Node (Leaf 5) (Leaf 6) -- Creates a node with two leaves

It also allows me to use the constructor names as tags for pattern 
matching. Note that every one of the following three definitions 
consists of the name of the function being defined, a pattern that the 
arguments must follow to select this particular definition, and the 
actual function body (which is just an expression here). Calling a 
function with parameters that match neither pattern is considered an 
error (which is usually caught at compile time but not in all cases - 
not everything is static even in functional languages).
   mapTree f (Leaf foo) = Leaf (f foo)
     -- If mapTree is given a "something" called f,
     -- and some piece of data that was constructed using Leaf foo,
     -- then the result will be obtained by applying f as a function
     -- to that foo, and making the result a Leaf.
     -- The occurrence of f in a function position on the right side
     -- makes type inference recognize it as a function.
     -- The Leaf pattern (and, incidentally, the use of the Leaf
     -- constructor on the right side) will make type inference
     -- recognize that the second parameter of mapTree must be
     -- of Tree <anything> type. Usage details will also make
     -- type inference conclude that f must be a function from
     -- one "anything" type to another, potentially different
     -- "anything" type.
     -- In other words, the type of mapTree is known to be
     --   (a -> b) -> Tree a -> Tree b
     -- without a single type declaration in mapTree's code!
   mapTree f (Node left right) = Node (mapTree f left) (mapTree f right)
     -- This code is structured in exact analogy to the Leaf case.
     -- The only difference is that it uses recursion to descend
     -- into the subtrees.
     -- Incidentally, this clause could come before or after the other
     -- two, since the patterns don't overlap.
   mapTree f Empty = Empty
     -- The null value doesn't need to be mapped, it will look the
     -- same on output.
     -- Note that this definition of mapTree doesn't restrict the
     -- type of f in the least.
     -- In HM typing, you usually don't specify the types, every
     -- usage of some object adds further restrictions what that
     -- type can be. If the set of types that a name can have becomes
     -- empty, you have contradictory type usage and hence a type error.
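For contrast, here is a dynamically typed rendering of the same tree and mapTree (a sketch in Python; small classes play the role of the constructor tags, and the type mismatch that HM would reject at compile time only surfaces at run time):

```python
from dataclasses import dataclass
from typing import Any

# The three constructors of the Tree type become three small classes.
@dataclass
class Leaf:
    value: Any

@dataclass
class Node:
    left: Any
    right: Any

@dataclass
class Empty:
    pass

def map_tree(f, tree):
    # Dispatch on the constructor, mirroring the three pattern-matching clauses.
    if isinstance(tree, Leaf):
        return Leaf(f(tree.value))
    if isinstance(tree, Node):
        return Node(map_tree(f, tree.left), map_tree(f, tree.right))
    if isinstance(tree, Empty):
        return tree
    raise TypeError("not a Tree")  # caught at run time, not at compile time
```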

Hope this helps
Jo
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <r80v7wk7.fsf@ccs.neu.edu>
Joachim Durchholz <·················@web.de> writes:

> Espen Vestre wrote:
>> Now you're conflating two readings of "want declarations" (i.e.  "want
>> them whenever they're convenient for optimizing" vs. "want them
>> everywhere and always")
>
> Type inference is about "as much static checking as possible with as
> few annotations as absolutely necessary".
> HM typing is extremely far on the "few declarations" side: a handful
> even in large systems.

I certainly don't mind as much static checking as possible.  I get a
little put off by *any* annotations that are *absolutely necessary*,
though.  My preference is that all `lexically correct' code be
compilable, even if the object code ends up being the single
instruction `jmp error-handler'.  Of course I'd like to get a
compilation warning in this case.

>
> It sounds unbelievable, but it really works.
>

I believe you.  I have trouble swallowing claims like `It is never
wrong, always completes, and the resulting code never has a run-time
error.' or `You will never need to run the kind of code it doesn't allow.'
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <p9n571-853.ln1@ID-7776.user.dfncis.de>
Joe Marshall <···@ccs.neu.edu> wrote:
> I have trouble swallowing claims like `It is never wrong, always
> completes, and the resulting code never has a run-time error.'

I can understand this. Fortunately, you don't have to swallow it,
you can verify it for yourself.

A comprehensive book about the lambda calculus should contain the
Hindley-Milner type inference algorithm. The HM-algorithm is fairly
old, so I don't know if there are any papers on the web that explain
it in detail. Unification of types and the HM-algorithm together are
maybe two pages.

* Termination is easy to verify; the algorithm works by induction on
  the structure of the lambda term.

* For "it never has a runtime error" look at the typing rules of
  the lambda calculus and convince yourself that they express the
  invariant that any function (including "constants", i.e. built-in
  functions) will only be applied to types that match its own
  type signature. Hence, no runtime errors.

  A good book will also contain the proof (or a sketch of it) that
  if the HM-algorithm succeeds, the term in question can indeed be
  typed by the typing rules.

* For the other case (i.e., there is a mismatch during unification; I
  guess that's what you mean by "it is never wrong"), try to assign to
  every variable a value of its type under the current type
  environment, and reduce along every possible reduction path of the
  subterm. One of those reductions will fail with a type error (though
  this reduction may never happen if execution never reaches this part
  of the subterm on the path that the evaluation strategy of your
  language chooses).

  Maybe it's best to do this for a few examples.
  
> or `You will never need to run the kind of code it doesn't allow.'

The last point should show that such code is at least problematic,
unless you can somehow make sure that the part that contains the
type error is never executed. In that case, this part is useless,
so the code should probably be rewritten. At least I cannot think
of a good reason why you would "need" such kind of code.

- Dirk
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnpt47$9o7$1@news.oberberg.net>
Joe Marshall wrote:

> Joachim Durchholz <·················@web.de> writes:
> 
>>Type inference is about "as much static checking as possible with as
>>few annotations as absolutely necessary".
>>HM typing is extremely far on the "few declarations" side: a handful
>>even in large systems.
> 
> I certainly don't mind as much static checking as possible.  I get a
> little put off by *any* annotations that are *absolutely necessary*,
> though.  My preference is that all `lexically correct' code be
> compilable, even if the object code ends up being the single
> instruction `jmp error-handler'.  Of course I'd like to get a
> compilation warning in this case.

Then static typing is probably not for you.
Mainstream FPLs tend to require an occasional type declaration. And 
you'll have to know about the type machinery to interpret the type 
errors that the compiler is going to spit at you - never mind that these 
errors will always indicate a bug (unless one of those rare explicit 
type annotations is involved, in which case it could be a bug or a 
defective type annotation).

>>It sounds unbelievable, but it really works.
> 
> I believe you.  I have trouble swallowing claims like `It is never
> wrong, always completes, and the resulting code never has a run-time
> error.' or `You will never need to run the kind of code it doesn't allow.'

This kind of claim is usually just a misunderstanding.
For example, the above claim indeed holds for HM typing - for the right 
definitions of "never wrong" and "never has an error".

HM typing "is never wrong and never has a run-time error" in the 
following sense: the algorithm will never allow an ill-typed program to 
pass, and there will never be a type error at run-time. However, people 
tend to overlook the "type" bit in the "type error" term, at which point 
the discussion quickly degenerates into discourses of general correctness.
Adding to the confusion is the often-reported experience of functional 
programmers, that annotating your code with static type declarations can 
be a very efficient way of finding design errors early.
The type correctness claims are backed by hard theory; the design 
improvement claims are of a social nature and cannot be proved (they 
could be verified by field studies at best).
From: Lex Spoon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3llqzg4ky.fsf@logrus.dnsalias.net>
>>>It sounds unbelievable, but it really works.
>> I believe you.  I have trouble swallowing claims like `It is never
>> wrong, always completes, and the resulting code never has a run-time
>> error.' or `You will never need to run the kind of code it doesn't allow.'
>
> This kind of claim is usually just a misunderstanding.
> For example, the above claim indeed holds for HM typing - for the
> right definitions of "never wrong" and "never has an error".
>
> HM typing "is never wrong and never has a run-time error" in the
> following sense: the algorithm will never allow an ill-typed program
> to pass, and there will never be a type error at run-time. However,
> people tend to overlook the "type" bit in the "type error" term, at
> which point the discussion quickly degenerates into discourses of
> general correctness.


It is misleading to make this claim without a lot of qualification.
It requires careful, technical definitions of "type" and "type error"
that are different from what an unprepared audience will expect.


For example, it is not considered a type error if you get the wrong
branch of a datatype.  So if you define:

   datatype sexp = Atom of string | Cons of sexp * sexp
  
   fun car (Cons (a,b)) = a

then the following would not be considered a "type error" :

   car (Atom "hello")



To add to the situation, HM flags extra errors, too, that many people
would not consider "type errors" but which count as such for HM's purposes.  For
example, it is considered a type error if two branches of an "if" do
not match, even if one branch is impossible or if the later code can
remember which branch was followed.  For example:

   val myint : int  = 
     if   true
     then 0
     else "hello"


or more interestingly:

   val (tag, thingie) =
     if (whatever)
     then  (0, 1)
     else  (1, 1.0)

   val myotherstuff = 
     if tag = 0
     then (tofloat thingie) + 1.5
     else thingie + 1.5
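(Transliterated into a dynamically typed language, the second example runs without incident; a Python sketch, with `first_stuff`/`other_stuff` as hypothetical names:)

```python
def first_stuff(whatever):
    # The two branches return tuples of different types;
    # HM rejects this, but dynamically it is unproblematic.
    return (0, 1) if whatever else (1, 1.0)

def other_stuff(tag, thingie):
    # The tag remembers which branch was taken.
    return float(thingie) + 1.5 if tag == 0 else thingie + 1.5
```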


In common parlance, as opposed to the formal definitions of "type
error", HM both overlooks some type errors and adds some others.  It
is extremely misleading to claim, in a non-technical discussion, that
HM rejects precisely those programs that have a type error.  The
statement is actually false if you use the expected meanings of "type
error" and "type".


All this said, I agree that HM type inference is a beautiful thing and
that it has significant benefits.  But the benefit of removing type
errors is a red herring--both in practice and, as described in this
post, in theory as well.

-Lex
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <vd3f71-461.ln1@ID-7776.user.dfncis.de>
Lex Spoon <···@cc.gatech.edu> wrote:

> To add to the situation, HM flags extra errors, too, that many people
> would not consider "type errors" but which count as such for HM's purposes.  For
> example, it is considered a type error if two branches of an "if" do
> not match, even if one branch is impossible or if the later code can
> remember which branch was followed.
[...]
>   val (tag, thingie) =
>     if (whatever)
>     then  (0, 1)
>     else  (1, 1.0)
> 
>   val myotherstuff = 
>     if tag = 0
>     then (tofloat thingie) + 1.5
>     else thingie + 1.5

The point here is of course that you "glue together" the tag
and the value, with the additional side effect that this documents
your intention. So you would write in this case

data Thingie = Tag0 Integer | Tag1 Float

and then you can write

myfirststuff whatever = if whatever then Tag0 1 else Tag1 1.0

myotherstuff (Tag0 thingie) = (fromInteger thingie) + 1.5
myotherstuff (Tag1 thingie) = thingie + 1.5

Then the type checker will happily infer that

myfirststuff :: Bool -> Thingie   and
myotherstuff :: Thingie -> Float

So you indeed have to express your tags a bit differently. Is this
asking too much? Is that so inconvenient, especially when you get 
good documentation of your intentions for free?

- Dirk
From: Mario S. Mommer
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <fzznfk6rfe.fsf@cupid.igpm.rwth-aachen.de>
Matthias Blume <····@my.address.elsewhere> writes:
> Matthew Danish <·······@andrew.cmu.edu> writes:
> 
> > Declarations can take this further, such that a compiler as smart as
> > CMUCL can manipulate raw (unsigned-byte 32) values, for example.
> 
> Oh, so you're saying you want static declarations, checked and
> enforced by the compiler?  Hmm, I've read this point of view in this
> thread somewhere.

The point is that you can use static typing when you want. It doesn't
stand in the way when you don't need it, which is most of the time.

> > Are the vast majority of your programs the type which behave properly
> > within machine-word integers?
> 
> > > idea that the only correct result of 20 * 30 has to be 600.)
> > 
> > (20 * 30) mod 256 is, of course, a completely different expression.
> 
> Undoubtedly, it is a different expression.  But it might mean the
> same, given a correspondingly chosen domain for 20 and 30, together
> with a certain * operation.

Indeed. It could well be 42. Or 3.141592. Or maybe "hum". Who knows,
who knows.

Just change the type declarations and - voilà! - popcorn.
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87y8v480yb.fsf@thalassa.informatimago.com>
Matthias Blume <····@my.address.elsewhere> writes:

> Pascal Costanza <········@web.de> writes:
> 
> > Computers are fast enough and have enough memory nowadays. You are
> > talking about micro efficiency. That's not interesting anymore.
> 
> I have worked on projects where people worried about *every cycle*.
> (Most of the time I agree with you, though.  Still, using infinite
> precision by default is, IMO, a mistake. 

What  are you  writing about?  Figments  of your  imagination or  real
concrete systems?


[20]> (typep (fact 100)  'fixnum)
NIL
[21]> (typep (fact 100)  'bignum)
T
[22]> (typep (/ (fact 100) (fact 99)) 'fixnum)
T
[23]> (typep (/ (fact 100) (fact 99)) 'bignum)
NIL
[24]> (/ 1 3)
1/3
[25]> (/ 1.0 3)
0.33333334

Where do you see "infinite precision by default"?


-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa09f73$1@news.unimelb.edu.au>
Pascal Bourguignon <····@thalassa.informatimago.com> writes:

>Matthias Blume <····@my.address.elsewhere> writes:
>
>> Pascal Costanza <········@web.de> writes:
>> 
>> > Computers are fast enough and have enough memory nowadays. You are
>> > talking about micro efficiency. That's not interesting anymore.
>> 
>> I have worked on projects where people worried about *every cycle*.
>> (Most of the time I agree with you, though.  Still, using infinite
>> precision by default is, IMO, a mistake. 
>
>What  are you  writing about?  Figments  of your  imagination or  real
>concrete systems?

He is talking about real concrete systems, e.g. Haskell or Lisp.

>Where do you see "infinite precision by default" [in Lisp]?

We know that implementations of dynamically typed languages such as Lisp
represent small integers efficiently.  So do good implementations of
statically typed languages in which arbitrary precision arithmetic is
the default.  But in both cases, these implementations pay a price --
compared to statically typed languages using fixed precision arithmetic --
because of the possibility that adding two unknown word-sized values
may generate a result which no longer fits in a single word.  The compiler
needs to generate extra code to cater for that possibility.  Then in turn,
each subsequent operation needs to cater for the possibility that the input
will not fit in a word.  The extra tests slow the code down, and the
extra code size reduces locality.

In a dynamically typed language, this price is normally not considered
to be much of an issue, because it has already been paid for up-front:
the cost is just the same cost as you would normally pay for *every*
data access in a dynamically typed language.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Daniel C. Wang
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uad7knbnt.fsf@hotmail.com>
Pascal Costanza <········@web.de> writes:

> Matthias Blume wrote:
> 
> > The problem
> > is that for many algorithms people want to be sure that the compiler
> > represents their values in machine words.  Infinite precision is
> > needed sometimes, but in the majority of cases it is overkill.  If you
> > need infinite precision, specify the type (IntInf.int in SML's case).
> > A clever compiler might optimize that like a Lisp compiler does.  In
> > most other cases, why take any chances?
> 
> I disagree strongly here. I am convinced that in most algorithms,
> machine words don't matter at all. Have you ever seen in books on
> algorithms that they actually _need_ to restrict them to values that
> are representable in machine word sizes?
> 

Hmmm... let's see... AES and MD5, and most if not all DSP/signal-processing
algorithms. There are quite a few. 
From: Jens Axel Søgaard
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9f0a12$0$69926$edfadb0f@dread12.news.tele.dk>
Daniel C. Wang wrote:
> Pascal Costanza <········@web.de> writes:

>>I disagree strongly here. I am convinced that in most algorithms,
>>machine words don't matter at all. Have you ever seen in books on
>>algorithms that they actually _need_ to restrict them to values that
>>are representable in machine word sizes?

> Hmmm.. lets see... AES and  MD5 most if not all DSP/ signal processing
> algorithms. There are quite a few. 

It's true that they are specified such that particularly efficient
implementations exist on machines with the proper word size. This
holds for many cryptographic algorithms.

This doesn't rule out other implementations though.

<Shameless self plug

   <http://www.scheme.dk/md5/md5.html>

/>
-- 
Jens Axel Søgaard
From: Stephen J. Bevan
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3y8v4k9mt.fsf@dino.dnsalias.com>
Pascal Costanza <········@web.de> writes:
> > is that for many algorithms people want to be sure that the compiler
> > represents their values in machine words.  Infinite precision is
> > needed sometimes, but in the majority of cases it is overkill.  If you
> > need infinite precision, specify the type (IntInf.int in SML's case).
> > A clever compiler might optimize that like a Lisp compiler does.  In
> > most other cases, why take any chances?
> 
> I disagree strongly here. I am convinced that in most algorithms,
> machine words don't matter at all. Have you ever seen in books on
> algorithms that they actually _need_ to restrict them to values that
> are representable in machine word sizes?
> [snip]
> Computers are fast enough and have enough memory nowadays. You are
> talking about micro efficiency. That's not interesting anymore.

Implement something like MD5, SHA-1, AES, ... etc. in your favourite
language and use the fastest compiler available to you to calculate
how many MB/s it can process.  If it can get, say, within a factor of 2 of
C code then IMHO you'll have proved your point.  If not, then either
your point stands but your favourite language doesn't have
sufficiently good compilers available yet, or exact sizes are required
in order to get good performance in this case.
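The measurement being proposed can be sketched concretely. The following is an illustrative Python harness (the choice of language and the helper name are assumptions, not anything specified above); the standard library's C-backed `hashlib.md5` stands in for the C baseline, and a pure-Python MD5 would be timed the same way:

```python
# Illustrative sketch of the proposed MB/s measurement (assumption:
# Python; no particular language is prescribed in the post above).
import hashlib
import time

def mb_per_sec(digest_fn, total_mb=16):
    """Feed `total_mb` megabytes through digest_fn and report MB/s."""
    block = b"\x00" * (1 << 20)          # 1 MB of zero bytes
    start = time.perf_counter()
    h = digest_fn()
    for _ in range(total_mb):
        h.update(block)
    h.hexdigest()                        # force the final digest
    elapsed = time.perf_counter() - start
    return total_mb / elapsed

print(f"{mb_per_sec(hashlib.md5):.0f} MB/s")
```

Comparing this number for the C-backed digest against the same harness run over an implementation written in the language under test gives the factor-of-2 comparison described above.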
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bno15p$gdl$2@newsreader2.netcologne.de>
Stephen J. Bevan wrote:

> Pascal Costanza <········@web.de> writes:
> 
>>>is that for many algorithms people want to be sure that the compiler
>>>represents their values in machine words.  Infinite precision is
>>>needed sometimes, but in the majority of cases it is overkill.  If you
>>>need infinite precision, specify the type (IntInf.int in SML's case).
>>>A clever compiler might optimize that like a Lisp compiler does.  In
>>>most other cases, why take any chances?
>>
>>I disagree strongly here. I am convinced that in most algorithms,
>>machine words don't matter at all. Have you ever seen in books on
>>algorithms that they actually _need_ to restrict them to values that
>>are representable in machine word sizes?
>>[snip]
>>Computers are fast enough and have enough memory nowadays. You are
>>talking about micro efficiency. That's not interesting anymore.
> 
> 
> Implement something like MD5, SHA-1, AES, ... etc. in your favourite
> language and use the fastest compiler available to you to calculate
> how many MB/s it can process.  If it can get, say, within a factor of 2 of
> C code then IMHO you'll have proved your point.  If not, then either
> your point stands but your favourite language doesn't have
> sufficiently good compilers available yet, or exact sizes are required
> in order to get good performance in this case.

Are these algorithms reason enough to have machine word sized numerical 
data types as the default for a _general purpose_ language?


Pascal
From: Stephen J. Bevan
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3llr3jupi.fsf@dino.dnsalias.com>
Pascal Costanza <········@web.de> writes:
> Are these algorithms reason enough to have machine word sized
> numerical data types as the default for a _general purpose_ language?

I've no idea, I don't care that much what the default is since I
prefer to specify what the type/size should be if the compiler fails
to infer the one I wanted :-)
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnqpms$399$2@newsreader2.netcologne.de>
Stephen J. Bevan wrote:

> Pascal Costanza <········@web.de> writes:
> 
>>Are these algorithms reason enough to have machine word sized
>>numerical data types as the default for a _general purpose_ language?
> 
> 
> I've no idea, I don't care that much what the default is since I
> prefer to specify what the type/size should be if the compiler fails
> to infer the one I wanted :-)

:-)


Pascal
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <egekwxwtmq.fsf@sefirot.ii.uib.no>
Matthew Danish <·······@andrew.cmu.edu> writes:

> On Mon, Oct 27, 2003 at 10:08:15PM +0100, Joachim Durchholz wrote:
>> Matthew Danish wrote:

>>>On Mon, Oct 27, 2003 at 07:00:01PM +0100, Andreas Rossberg wrote:

>>>>Pascal Costanza wrote:

>>>>> Can you show me an example of a program that doesn't make sense anymore 
>>>>> when you strip off the static type information?

>>>> Here is a very trivial example, in SML:
>>>>
>>>>	20 * 30

>> No, the correct answer isn't 600 in all cases.

> What is this stuff?  I am talking about integers here!

But the SML program isn't.  Or it may be, and maybe not.  So it's
ambiguous without type information.

> Why can't the implementation figure out how to represent them most
> efficiently? 

Because it needs a type annotation or inference to decide that the
numbers are indeed integers, and not a different set with different
arithmetic properties.

> Lisp gets exact rational arithmetic right, why don't ML or Haskell?

Could you point to a case where they don't?  I don't understand your 
criticism at all.  Is the ability to do modulo arithmetic "wrong"?

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Matthew Danish
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031028105048.GR1454@mapcar.org>
On Tue, Oct 28, 2003 at 11:20:45AM +0100, ··········@ii.uib.no wrote:
> Matthew Danish <·······@andrew.cmu.edu> writes:
> > What is this stuff?  I am talking about integers here!
> 
> But the SML program isn't.  Or it may be, and maybe not.  So it's
> ambiguous without type information.
> 
> > Why can't the implementation figure out how to represent them most
> > efficiently? 
> 
> Because it needs a type annotation or inference to decide that the
> numbers are indeed integers, and not a different set with different
> arithmetic properties.

1 is an integer.  Simple type inference.  In Lisp, it's also a fixnum,
it's also an (unsigned-byte 23), it's also an (integer 1 (2)), etc.

> > Lisp gets exact rational arithmetic right, why don't ML or Haskell?
> 
> Could you point to a case where they don't?  I don't understand your 
> criticism at all.  Is the ability to do modulo arithmetic "wrong"?

- fun fact 0 = 1 | fact n = n * fact (n - 1);
val fact = fn : int -> int
- fact 10;
val it = 3628800 : int
- fact 15;
val it = 1307674368000 : int  (* ideally *)

- 1 * 1.0;
val it = 1.0 : float  (* ideally *)

- 2 / 4;
val it = 1/2 : ratio  (* ideally *)

- 2 truncate 4;
val it = 0 : int
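The behaviour marked "(* ideally *)" above is close to what Python, the language in the thread's title, already provides; a minimal sketch (with `fractions.Fraction` standing in for the hypothetical `ratio` type, since Python has no exact-rational literal):

```python
# Roughly the "(* ideally *)" semantics above, in Python (assumption:
# Fraction stands in for the "ratio" type of the transcript).
import math
from fractions import Fraction

assert math.factorial(15) == 1307674368000   # no fixnum overflow
assert 1 * 1.0 == 1.0                        # int silently widens to float
assert Fraction(2, 4) == Fraction(1, 2)      # exact rational arithmetic
assert 2 // 4 == 0                           # truncating integer division
```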

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1r80x2tfd.fsf@tti5.uchicago.edu>
Matthew Danish <·······@andrew.cmu.edu> writes:

> On Tue, Oct 28, 2003 at 11:20:45AM +0100, ··········@ii.uib.no wrote:
> > Matthew Danish <·······@andrew.cmu.edu> writes:
> > > What is this stuff?  I am talking about integers here!
> > 
> > But the SML program isn't.  Or it may be, and maybe not.  So it's
> > ambigous without type information.
> > 
> > > Why can't the implementation figure out how to represent them most
> > > efficiently? 
> > 
> > Because it needs a type annotation or inference to decide that the
> > numbers are indeed integers, and not a different set with different
> > arithmetic properties.
> 
> 1 is an integer.  Simple type inference.  In Lisp, it's also a fixnum,
> it's also an (unsigned-byte 23), it's also an (integer 1 (2)), etc.
> 
> > > Lisp gets exact rational arithmetic right, why don't ML or Haskell?
> > 
> > Could you point to a case where they don't?  I don't understand your 
> > criticism at all.  Is the ability to do modulo arithmetic "wrong"?
> 
> - fun fact 0 = 1 | fact n = n * fact (n - 1);
> val fact = fn : int -> int
> - fact 10;
> val it = 3628800 : int
> - fact 15;
> val it = 1307674368000 : int  (* ideally *)

$ sml
Standard ML of New Jersey v110.43.3 [FLINT v1.5], September 26, 2003
- fun fact 0 = 1 : IntInf.int
=   | fact n = n * fact (n - 1);
[autoloading]
[autoloading done]
val fact = fn : IntInf.int -> IntInf.int
- fact 15;
val it = 1307674368000 : IntInf.int
- 

> - 1 * 1.0;
> val it = 1.0 : float  (* ideally *)

That's not "ideal" at all, to me.  I find the automatic conversions in
C a pain in the b*tt because it is not obvious at all where they
happen in a given expression.  How hard is it to write

   real 1 * 1.0

thereby making things explicit, unambiguous, and non-surprising?

Matthias
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <egvfq8laij.fsf@sefirot.ii.uib.no>
Matthias Blume <····@my.address.elsewhere> writes:

>> - 1 * 1.0;
>> val it = 1.0 : float  (* ideally *)

> That's not "ideal" at all, to me.  I find the automatic conversions in
> C a pain in the b*tt

You're making the mistake of comparing to C :-)  And it's much worse
in C++, when conversions in conjunction with overloading can impact
which function gets called in non-obvious ways.

Anyway, I think Haskell makes this work better.  Still no automatic
conversions, but more flexibility in the use of numeric literals.
Occasionally, you need to sprinkle "fromIntegral"s around in
expressions, which I agree aren't terribly pretty.

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <egu15tvcdw.fsf@sefirot.ii.uib.no>
Matthew Danish <·······@andrew.cmu.edu> writes:

> 1 is an integer.  Simple type inference.

Okay, I'm not familiar enough with ML, and I didn't know that it won't
let you use 1 to represent 1.0 or 1/1 or whatever.

>>> Lisp gets exact rational arithmetic right, why don't ML or Haskell?

>> Could you point to a case where they don't?  I don't understand your 
>> criticism at all.  Is the ability to do modulo arithmetic "wrong"?

> - fun fact 0 = 1 | fact n = n * fact (n - 1);
> val fact = fn : int -> int
> - fact 10;
> val it = 3628800 : int
> - fact 15;
> val it = 1307674368000 : int  (* ideally *)

Here's Haskell as provided by GHCi:

    Prelude> let fact n = if n == 0 then 1 else n * fact (n-1)
    Prelude> fact 10
    3628800
    Prelude> :i it
    -- it is a variable
    it :: Integer
    Prelude> fact 15
    1307674368000

So '10' defaults to Integer, which is infinite precision.
You can of course also do:

    Prelude> fact 10.0
    3628800.0

> - 1 * 1.0;
> val it = 1.0 : float  (* ideally *)

    Prelude> 1*1.0
    1.0

The 1 is treated as a Float, since that is what * requires.

> - 2 / 4;
> val it = 1/2 : ratio  (* ideally *)

    Prelude> 2/4
    0.5
    Prelude> (2/4  :: Rational)
    1 % 2

The default is Float, but Rational is there if that's what you want.

> - 2 truncate 4;
> val it = 0 : int

I assume this means:

    Prelude> 2 `div` 4
    0
    Prelude> :i it
    -- it is a variable
    it :: Integer

If I understood you correctly, Haskell doesn't get it all that wrong?

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Matthew Danish
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031028125643.GS1454@mapcar.org>
You are right about Haskell; as I wrote in my initial post on this, it
has a numeric tower derived from Lisp (though I think it is not enabled
by default, or something?).  Mostly I was annoyed that 20 * 30 being
possibly an overflow was brought up as an example of static typing being
expressive.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnlpka$9f7$1@news.oberberg.net>
Matthew Danish wrote:

> On Mon, Oct 27, 2003 at 10:08:15PM +0100, Joachim Durchholz wrote:
> 
>>Matthew Danish wrote:
>>
>>>On Mon, Oct 27, 2003 at 07:00:01PM +0100, Andreas Rossberg wrote:
>>>
>>>>Pascal Costanza wrote:
>>>>
>>>>>Can you show me an example of a program that doesn't make sense anymore 
>>>>>when you strip off the static type information?
>>>>
>>>>Here is a very trivial example, in SML:
>>>>
>>>>	20 * 30
>>>>
>>>>Multiplication, as well as literals, are overloaded. Depending on 
>>>>whether you type this expression as Int8.int (8-bit integers) or 
>>>>IntInf.int (infinite precision integer) the result is either 600 or an 
>>>>overflow exception.
>>>
>>>May I point out that the correct answer is 600, not overflow?
>>
>>No, the correct answer isn't 600 in all cases.
>>If it's infinite-precision arithmetic, the correct answer is indeed 600.
>>If it's 8-bit arithmetic with overflow, there is no correct answer.
>>If it's 8-bit arithmetic with wrap-around, the correct answer is 88.
>>If it's 8-bit arithmetic with saturation, the correct answer is 255.
>>(The result happens to be independent of whether the arithmetic is 
>>signed or unsigned.)
> 
> What is this stuff?  I am talking about integers here!  You know, the
> infinite set with the same cardinality as natural numbers.  Why can't
> the implementation figure out how to represent them most efficiently?

Hey, that 20*30 expression given above didn't say what type of integer 
arithmetic was meant, or did it?

>>Of course, this doesn't say much about the distinction between static 
>>and dynamic typing, actually the issues and unlucky fringe cases seem 
>>very similar to me. (In particular, overloading and type inference don't 
>>tell us whether the multiplication should be done in 8-bit, 16-bit, 
>>machine-precision, infinite-precision, wrap-around, or saturated 
>>arithmetic, and run-time types don't give us any useful answer either.)
> 
> Lisp gets exact rational arithmetic right, why don't ML or Haskell?

They do as well - infinite-precision integers are standard issue in 
these languages.

It's just that Andreas's answer to the challenge of presenting a 
type-dependent expression was simpler than anybody would have expected. 
Of course, if your best response is that this all is ill-defined when it 
isn't - then, I fear, you'll simply have to stay on your side of the fence.

Regards,
Jo
From: Andreas Rossberg
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3F9E5D1C.3050103@ps.uni-sb.de>
Matthew Danish wrote:
>>
>>Here is a very trivial example, in SML:
>>
>>	20 * 30
>>
>>Multiplication, as well as literals, are overloaded. Depending on 
>>whether you type this expression as Int8.int (8-bit integers) or 
>>IntInf.int (infinite precision integer) the result is either 600 or an 
>>overflow exception.
> 
> May I point out that the correct answer is 600, not overflow?

No, it's not. If I choose type Int8 then I do so precisely for the 
reason that I want to be signalled if some computation overflows the 
intended value domain. Otherwise I would have chosen IntInf or something. You 
see, this is exactly the reason why I say that the type system gives you 
expressiveness.

> Something that annoys me about many statically-typed languages is the
> insistence that arithmetic operations should return the same type as the
> operands.

I should note in this context that static types usually express 
different things than dynamic ones, especially when it comes to number 
types. In Lisp, the runtime tag of a number will usually describe the 
representation of the number. This may well change between operations. 
But static typing, at least in high-level languages, is about semantics. 
If I choose a certain integer type I do so because it has the desired 
domain, which I want to have checked - I'm not at all interested in its 
representation. In fact, values of IntInf are likely to have multiple 
representations depending on their size, but the type is invariant, 
abstracting away from such low-level representation details.

Actually, I think dynamic typing should abstract from this as well, but 
unfortunately this does not seem to happen.

> 2 / 4 is 1/2, not 0.

Integer division is not real division (and should not use the same name).

> Arithmetically, 1 * 1.0 is
> well-defined, so why can I not write this in an SML program?

Because the designers decided (rightly so, IMHO) that it is best to 
avoid implicit conversions, since they might introduce subtle bugs and 
do not coexist well with type inference. But anyway, this has nothing 
to do with static vs dynamic typing - C allows the above, and there 
might well be dynamic languages that raise a runtime error if you try it.

> I do believe Haskell does it right, though, with its numeric tower
> derived from Lisp.

Yes, in Haskell you can write the above, but for slightly different 
reasons (integer literals are overloaded for floating types).

	- Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
  as kids, we would all be running around in darkened rooms, munching
  magic pills, and listening to repetitive electronic music."
  - Kristian Wilson, Nintendo Inc.
From: Don Geddis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <873cddw642.fsf@sidious.geddis.org>
Andreas Rossberg <········@ps.uni-sb.de> writes:
> I should note in this context that static types usually express different
> things than dynamic ones, especially when it comes to number types. In Lisp,
> the runtime tag of a number will usually describe the representation of the
> number. This may well change between operations. But static typing, at least
> in high-level languages, is about semantics. If I choose a certain integer
> type I do so because it has the desired domain, which I want to have checked
> - I'm not at all interested in its representation.

Types don't have to be disjoint.  In Lisp, if a data object is a FIXNUM,
at the same time it's also a NUMBER.  And perhaps an (INTEGER 0 16) too.

Yes, at least one of the types defines the representation.  But there are
semantic types as well.

As to "change between operations": it doesn't matter what your type system is.
Any function call has the potential to "change types".  It would be a silly
system that requires the (type) domain and range of every function to always
be identical.

> In fact, values of IntInf are likely to have multiple representations
> depending on their size, but the type is invariant, abstracting away from
> such low-level representation details.

And Lisp's NUMBER type also has multiple representations.  And the SQRT
function takes a NUMBER and returns a NUMBER.  But also, at the same time,
it takes INTEGERs and returns FLOATs, and takes negative INTEGERs and returns
COMPLEX NUMBERs.  Semantics and representation, all at the same time!

> Actually, I think dynamic typing should abstract from this as well, but
> unfortunately this does not seem to happen.

But it does.

> Because the designers decided (rightly so, IMHO) that it is best to avoid
> implicit conversions, since they might introduce subtle bugs and do not
> coexist well with type inference.

Sounds like a tradeoff.  Perhaps, for some programmers, the freedom you get
with implicit type conversions is more valuable than the benefits of type
inference?

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnjt4u$gbf$1@newsreader2.netcologne.de>
Andreas Rossberg wrote:

> Pascal Costanza wrote:
> 
>>
>> Can you show me an example of a program that doesn't make sense anymore 
>> when you strip off the static type information?
> 
> 
> Here is a very trivial example, in SML:
> 
>     20 * 30
> 
> Multiplication, as well as literals, are overloaded. Depending on 
> whether you type this expression as Int8.int (8-bit integers) or 
> IntInf.int (infinite precision integer) the result is either 600 or an 
> overflow exception.
> 
> So the program does not make sense without type information, because it 
> does not have an unambiguous (i.e. no) semantics.
> 
> I'm ready to admit that it may be a dubious example of a typing feature. 
> But it is simple, and clearly sufficient to disprove your repeated claim 
> that static types don't add expressiveness to a language. If you did not 
> have them for the example above, you needed some other feature to 
> express the disambiguation.

Sorry, do you really want to say that I can't make my program throw an 
exception when some variables are not inside a specified range?

(assert (typep (* 20 30) '(integer 0 255)))
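A dynamic-checking analogue of the Lisp assertion above can be sketched in Python (illustrative only; the function name is invented, and the range test is spelled out since Python has no typep):

```python
# Dynamic analogue of (assert (typep (* 20 30) '(integer 0 255))):
# the range check runs at runtime, not at compile time.
def checked_mul(x, y, lo=0, hi=255):
    result = x * y
    if not (isinstance(result, int) and lo <= result <= hi):
        raise OverflowError(f"{result} outside [{lo}, {hi}]")
    return result

checked_mul(20, 12)        # 240, within range
# checked_mul(20, 30)      # would raise OverflowError: 600 outside [0, 255]
```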


Pascal
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1znfm2z7j.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> Andreas Rossberg wrote:
> 
> > Pascal Costanza wrote:
> > 
> >>
> >> Can you show me an example of a program that doesn't make sense
> >> anymore when you strip off the static type information?
> 
> > Here is a very trivial example, in SML:
> 
> >     20 * 30
> 
> > Multiplication, as well as literals, are overloaded. Depending on
> > whether you type this expression as Int8.int (8-bit integers) or
> > IntInf.int (infinite precision integer) the result is either 600 or
> > an overflow exception.
> 
> > So the program does not make sense without type information, because
> > it does not have an unambiguous (i.e. no) semantics.
> 
> > I'm ready to admit that it may be a dubious example of a typing
> > feature. But it is simple, and clearly sufficient to disprove your
> > repeated claim that static types don't add expressiveness to a
> > language. If you did not have them for the example above, you needed
> > some other feature to express the disambiguation.
> 
> 
> Sorry, do you really want to say that I can't make my program throw an
> exception when some variables are not inside a specified range?

No.  Where did you get that from?

His point was that without the type information you don't know whether
the above "program" should be transliterated into this:

> (assert (typep (* 20 30) '(integer 0 255)))

or simply this:

   (* 20 30)

Matthias
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnki82$jrj$1@newsreader2.netcologne.de>
Matthias Blume wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>Andreas Rossberg wrote:
>>
>>
>>>Pascal Costanza wrote:
>>>
>>>
>>>>Can you show me an example of a program that doesn't make sense
>>>>anymore when you strip off the static type information?
>>
>>>Here is a very trivial example, in SML:
>>
>>>    20 * 30
>>
>>>Multiplication, as well as literals, are overloaded. Depending on
>>>whether you type this expression as Int8.int (8-bit integers) or
>>>IntInf.int (infinite precision integer) the result is either 600 or
>>>an overflow exception.
>>
>>>So the program does not make sense without type information, because
>>>it does not have an unambiguous (i.e. no) semantics.
>>
>>>I'm ready to admit that it may be a dubious example of a typing
>>>feature. But it is simple, and clearly sufficient to disprove your
>>>repeated claim that static types don't add expressiveness to a
>>>language. If you did not have them for the example above, you needed
>>>some other feature to express the disambiguation.
>>
>>
>>Sorry, do you really want to say that I can't make my program throw an
>>exception when some variables are not inside a specified range?
> 
> 
> No.  Where did you get that from?
> 
> His point was that without the type information you don't know whether
> the above "program" should be transliterated into this:
> 
> 
>>(assert (typep (* 20 30) '(integer 0 255)))
> 
> 
> or simply this:
> 
>    (* 20 30)

Well, what's the obvious meaning?

Meta comment: I think we are already side-stepping too much here. I 
don't think this is a useful example to illustrate serious advantages of 
either static or dynamic typing. In all languages, you could simply 
define a default meaning for the version that doesn't have explicit type 
annotations, and then force the programmer to use explicit annotations 
for the other ones. (Andreas already said it is a dubious example.)

Can you give a better example of a program that would be rendered 
meaningless without type annotations?

Pascal
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m2u15ucacs.fsf@hanabi-air.shimizu.blume>
Pascal Costanza <········@web.de> writes:

> Can you give a better example of a program that would be rendered
> meaningless without type annotations?

   fun f A = "A"
     | f B = "B"

This could be the function that always returns "A" for arbitrary
arguments, it could be the function that can only be legally applied
to the value (constructed by) A where it returns "A", it could be the
function that can be applied precisely to values A and B, it could
accept more than those two and raise an exception if the argument is
neither A nor B, it could be the function that can be applied to A
where it returns "A" and also many other values where it always
returns "B", ...

Which of these versions you get depends in part on the typing
environment that is in effect when the above code is encountered by
the compiler.

Matthias

PS: Just in case you wonder, here are examples of type definitions
which would trigger the results outlined above.  I go in the same
order:

1.   (* no type definition in effect *)
2.   datatype t = A
3.   datatype t = A | B
4.   datatype t = A | B | C
5.   datatype t = A | C | D of int | E of real | F of t -> t

Notice that the compiler will probably warn about a redundant match in
cases 1. and 2. and about a non-exhaustive match in case 4.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnmphh$fep$1@newsreader2.netcologne.de>
Matthias Blume wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>Can you give a better example of a program that would be rendered
>>meaningless without type annotations?
> 
> 
>    fun f A = "A"
>      | f B = "B"
> 

I don't find this convincing. This is similar to the 20 * 30 example.

The resolution in both cases would be to define a default meaning if no 
explicit type annotation exists. Done.

These examples don't allow me to write one single meaningful program 
more than in a dynamically typed language.

My claim that dynamically typed languages have more expressive power 
means that they allow you to write more programs that show useful 
behavior at runtime. The strongest example I have is programs that 
allow arbitrary changes to the definitions that are embedded inside of 
them. That's the essence of metacircularity (runtime metaobject 
protocols, etc). You cannot statically type check such programs by 
definition.
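The kind of runtime redefinition being described can be sketched in Python (an illustrative example, not one from the thread): a class is patched while the program runs, in a way no static check of the original source could have anticipated.

```python
# Minimal runtime-redefinition sketch (assumption: Python; the class
# and method names are invented for illustration).
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()
assert g.greet() == "hello"

# Redefine the method at runtime; existing instances pick up the change.
Greeter.greet = lambda self: "bonjour"
assert g.greet() == "bonjour"
```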

I think that Neelakantan has better examples for programs that are 
possible to write with a statically typed language, but not with 
dynamically typed ones. (Not 100% sure yet, though.)


Pascal
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1fzhd2e73.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> Matthias Blume wrote:
> > Pascal Costanza <········@web.de> writes:
> > 
> >>Can you give a better example of a program that would be rendered
> >>meaningless without type annotations?
> >    fun f A = "A"
> 
> >      | f B = "B"
> > 
> 
> I don't find this convincing. This is similar to the 20 * 30 example.
> 
> The resolution in both cases would be to define a default meaning if
> no explicit type annotation exists. Done.

Of course, you can always define a default meaning for the case that
no explicit (type) information is available.  But what you are doing
is essentially providing such information: no explicit type annotation
amounts to having an implicit annotation.

By the way, this is how, e.g., SML does it anyway: If you write just
20*30 and nothing else is known, then the type is resolved to be
Int.int.

This does not invalidate the claim that you know the semantics of the
phrase only if you know the type.  It just so happens that you always
know the type.

> I think that Neelakantan has better examples for programs that are
> possible to write with a statically typed language, but not with
> dynamically typed ones. (Not 100% sure yet, though.)

There are no such programs, obviously.  You can always translate a
statically typed program into a dynamically typed one (and vice
versa).

The advantage (as far as I am concerned) of the statically typed
program is in the guarantees that it provides:  If I write

   fun foo x = ...

and x is declared or inferred to be of type t, then I never again have
to worry about what happens should someone pass a non-t to foo
because that simply cannot happen.  This all by itself is useful, but
it gets even more useful if t is an abstract type so that I have full
control over how and where t values are generated.

This sort of thing is most useful when designing libraries because in
this case you don't know yet who will call foo and how (and you might
in fact never know). But you do know that whoever it is and however he
does it, he must pass a value of type t.

Matthias
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnmtvo$nrh$1@newsreader2.netcologne.de>
Matthias Blume wrote:

> Pascal Costanza <········@web.de> writes:

>>I think that Neelakantan has better examples for programs that are
>>possible to write with a statically typed language, but not with
>>dynamically typed ones. (Not 100% sure yet, though.)
> 
> There are no such programs, obviously.  You can always translate a
> statically typed program into a dynamically typed one (and vice
> versa).

No, for christ's sake! There are dynamically typed programs that you 
cannot translate into statically typed ones!

As soon as you add full-fledged runtime metaprogramming to a language, 
and write a program that uses it, you cannot statically type check this 
anymore, by definition, because you cannot statically determine anymore 
in what ways such a program would change during its lifetime!

And don't tell me that "in 99% of all cases, you don't need this". This 
just isn't true. And even if it were true, it wouldn't matter!

If there were even _one_ useful program on earth that someone cared 
about that made use of runtime metaprogramming, it would make my 
statement true that static typing decreases expressive power in possibly 
serious ways. And there are definitely lots of them out there!

> The advantage (as far as I am concerned) of the statically typed
> program is in the guarantees that it provides:  If I write
> 
>    fun foo x = ...
> 
> and x is declared or inferred to be of type t, then I never again have
> to worry about what happens should someone pass a non-t to foo
> because that simply cannot happen.  This all by itself is useful, but
> it gets even more useful if t is an abstract type so that I have full
> control over how and where t values are generated.
> 
> This sort of thing is most useful when designing libraries because in
> this case you don't know yet (who will call foo and how (and you might
> in fact never know). But you do know that whoever it is and however he
> does it, he must pass a value of type t.

Yes, these things are all obvious. But these are not examples for an 
increase of expressive power! These are only examples of _restricting_ 
the set of potentially useful programs! How hard is this to understand?

You _want_ this restriction. Why don't you _admit_ that it is a restriction?


Pascal
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m11xsx2aiu.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> Matthias Blume wrote:
> 
> > Pascal Costanza <········@web.de> writes:
> 
> >>I think that Neelakantan has better examples for programs that are
> >>possible to write with a statically typed language, but not with
> >>dynamically typed ones. (Not 100% sure yet, though.)
> > There are no such programs, obviously.  You can always translate a
> 
> > statically typed program into a dynamically typed one (and vice
> > versa).
> 
> No, for christ's sake! There are dynamically typed programs that you
> cannot translate into statically typed ones!

Yes you can.  (In the worst case scenario you lose all the benefits of
static typing.  But a translation is *always* possible. After all,
dynamically typed programs are already statically typed in the trivial
"one type fits all" sense.)

> Yes, these things are all obvious. But these are not examples for an
> increase of expressive power! These are only examples of _restricting_
> the set of potentially useful programs! How hard is this to
> understand?
>
> You _want_ this restriction. Why don't you _admit_ that it is a restriction?

Who said that I don't admit that I want restrictions.  That's the
whole point.  Static typing increases my expressive power because I
can now *express restrictions* which I couldn't express before.
That's the whole point.  Being able to express more programs is not
the issue, being able to express more restrictions and being told
early about their violations is!
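
The kind of restriction in question can be sketched like this (a hypothetical Python example with a static checker such as mypy in mind; all names invented for illustration):

```python
from typing import NewType

UserId = NewType("UserId", int)

def parse_user_id(raw: str) -> UserId:
    # The only sanctioned way to make a UserId: validation happens here.
    n = int(raw)
    if n <= 0:
        raise ValueError("user ids are positive")
    return UserId(n)

def lookup(uid: UserId) -> str:
    # Under static checking, uid is guaranteed to have gone through
    # parse_user_id -- the restriction is expressed in the type.
    return f"user-{uid}"

print(lookup(parse_user_id("42")))  # user-42
# lookup(7) would be rejected by the static checker:
#   Argument 1 to "lookup" has incompatible type "int"; expected "UserId"
```

The point is not that more programs become writable, but that the restriction ("lookup only ever sees validated ids") is now stated and checked.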

[This whole discussion is entirely due to a mismatch of our notions of
what constitutes expressive power.]

Matthias
From: Raffael Cavallaro
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <aeb7ff58.0310282039.5cb9e5a6@posting.google.com>
Matthias Blume <····@my.address.elsewhere> wrote in message news:<··············@tti5.uchicago.edu>...


> > > Pascal Costanza <········@web.de> writes:
> > No, for christ's sake! There are dynamically typed programs that you
> > cannot translate into statically typed ones!
> 
> Yes you can.  (In the worst case scenario you lose all the benefits of
> static typing.  But a translation is *always* possible. After all,
> dynamically typed programs are already statically typed in the trivial
> "one type fits all" sense.)

This is sophistry at its worst. If you "translate" a dynamically typed
program into a statically typed language by eliminating all the static
type checking, then WTF is the point of the static type checking?

It's also possible to "translate" any program into a Turing machine
tape, so we should all start coding that way!

Introducing TuringTape(TM), the ultimate bondage and discipline
language!
From: Adrian Hey
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnnpjk$hbj$3$8300dec7@news.demon.co.uk>
Raffael Cavallaro wrote:

> Matthias Blume <····@my.address.elsewhere> wrote in message
> news:<··············@tti5.uchicago.edu>...
> 
> 
>> > > Pascal Costanza <········@web.de> writes:
>> > No, for christ's sake! There are dynamically typed programs that you
>> > cannot translate into statically typed ones!
>> 
>> Yes you can.  (In the worst case scenario you lose all the benefits of
>> static typing.  But a translation is *always* possible. After all,
>> dynamically typed programs are already statically typed in the trivial
>> "one type fits all" sense.)
> 
> This is sophistry at its worst. If you "translate" a dynamically typed
> program into a statically typed language by eliminating all the static
> type checking, then WTF is the point of the static type checking?

Read what he wrote... in particular the words *WORST CASE SCENARIO*.

Regards
--
Adrian Hey
From: Raffael Cavallaro
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <aeb7ff58.0310282048.17f4f21e@posting.google.com>
Matthias Blume <····@my.address.elsewhere> wrote in message news:<··············@tti5.uchicago.edu>...
> [This whole discussion is entirely due to a mismatch of our notions of
> what constitutes expressive power.]

No, it is due to your desire to be type constrained inappropriately
early in the development process. Lispers know that early on, you
don't care about type constraints because you haven't settled on your
final data representations yet. So why waste time placating a
type-checking compiler before you have to?

With lisp, you only add as much type checking as you need, *when* you
need it.

With a statically typed language, you have to do the compiler type
check dance from the very first minute, even though you don't need to
solve the type constraint problems yet.
From: Andreas Rossberg
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3F9FB432.4040806@ps.uni-sb.de>
Raffael Cavallaro wrote:
> Matthias Blume <····@my.address.elsewhere> wrote in message news:<··············@tti5.uchicago.edu>...
> 
>>[This whole discussion is entirely due to a mismatch of our notions of
>>what constitutes expressive power.]
> 
> No, it is due to your desire to be type constrained inappropriately
> early in the development process.

Oh my. How is this related?

I think Matthias is absolutely right. The mismatch here is that some 
fail to understand that - obviously, one should hope - the ability to 
express restrictions is an ability to express something, i.e. expressive 
power. Otherwise assertions, pre/post conditions, probably exceptions 
and similar stuff wouldn't carry any expressive power either. Which of 
course is nonsense.

> Lispers know that early on, you
> don't care about type constraints because you haven't settled on your
> final data representations yet.

Another repeated misunderstanding. When I use types in early coding 
phases in ML for example, these types are mostly abstract. They don't 
say anything about representations. All checking takes place against 
abstract types, whose representation is fully exchangeable.

> With lisp, you only add as much type checking as you need, *when* you
> need it.

Yes, and you also lose most of the benefits of typing...

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
  as kids, we would all be running around in darkened rooms, munching
  magic pills, and listening to repetitive electronic music."
  - Kristian Wilson, Nintendo Inc.
From: Garry Hodgson
Subject: Re: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <2003102913501067453434@k2.sage.att.com>
·······@mediaone.net (Raffael Cavallaro) wrote:

> With lisp, you only add as much type checking as you need, *when* you
> need it.

if you knew how much you needed and when, you wouldn't need it.

----
Garry Hodgson, Technology Consultant, AT&T Labs

Be happy for this moment.
This moment is your life.
From: Kees van Reeuwijk
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <1g3l4va.1s6hgrk1u6oix0N%C.vanReeuwijk@twi.tudelft.nl>
Raffael Cavallaro <·······@mediaone.net> wrote:

> Matthias Blume <····@my.address.elsewhere> wrote in message
> news:<··············@tti5.uchicago.edu>...
> > [This whole discussion is entirely due to a mismatch of our notions of
> > what constitutes expressive power.]
> 
> No, it is due to your desire to be type constrained inappropriately
> early in the development process. Lispers know that early on,

That's very arrogant. You presume to know what is appropriate for *him*.

And I could retort with ``Static typers know that even early in the
development process it is appropriate to have the additional safety that
static typing brings to a program'', but I won't, since I'm not that
arrogant :-).
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnn1l3$t1l$1@newsreader2.netcologne.de>
Matthias Blume wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>Matthias Blume wrote:
>>
>>
>>>Pascal Costanza <········@web.de> writes:
>>
>>>>I think that Neelakantan has better examples for programs that are
>>>>possible to write with a statically typed language, but not with
>>>>dynamically typed ones. (Not 100% sure yet, though.)
>>>
>>>There are no such programs, obviously.  You can always translate a
>>
>>>statically typed program into a dynamically typed one (and vice
>>>versa).
>>
>>No, for christ's sake! There are dynamically typed programs that you
>>cannot translate into statically typed ones!
> 
> 
> Yes you can.  (In the worst case scenario you lose all the benefits of
> static typing.  But a translation is *always* possible. After all,
> dynamically typed programs are already statically typed in the trivial
> "one type fits all" sense.)

No, that's not all you need to do. Essentially you would need to write a 
complete interpreter/compiler for the dynamically typed language on top 
of the statically typed one, _if_ you want runtime metaprogramming. 
That's not what I would call a straightforward translation.

But this is really getting tiring.

> [This whole discussion is entirely due to a mismatch of our notions of
> what constitutes expressive power.]

No, it's not. There's a class of programs that exhibit a certain 
behavior at runtime that you cannot write in a statically typed language 
_directly in the language itself_. There's no program that exhibits a 
certain behavior at runtime that you can only write in a statically 
typed language. [1]

That's a fact that you simply don't want to admit. But you're 
objectively wrong here.

However, the horse is beaten to death by now.


Good bye.

Pascal

[1] Except perhaps for a class of programs that would change their 
runtime and/or space complexity, provided they would need lots of 
dynamic type checks. But I haven't sorted out yet whether this class 
really exists.
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m21xsw4p5z.fsf@hanabi-air.shimizu.blume>
Pascal Costanza <········@web.de> writes:

[ on my claim that every dynamically typed program has a statically typed
  translation: ]
> No, that's not all you need to do. Essentially you would need to write
> a complete interpreter/compiler for the dynamically typed language on
> top of the statically typed one, _if_ you want runtime
> metaprogramming. That's not what I would call a straightforward
> translation.

Actually, I didn't say what I need, so how can you contradict me on
this point?  Anyway, having to have a compiler/interpreter around is
only true if the original program contained a complete
compiler/interpreter, in which case I'd say this is just par for the
course.

> > [This whole discussion is entirely due to a mismatch of our notions of
> > what constitutes expressive power.]
> 
> No, it's not. There's a class of programs that exhibit a certain
> behavior at runtime that you cannot write in a statically typed
> language _directly in the language itself_.

This is simply not true.  See above.

> There's no program that
> exhibits a certain behavior at runtime that you can only write in a
> statically typed language. [1]

I do not dispute this fact.

> [1] Except perhaps for a class of programs that would change their
> runtime and/or space complexity, provided they would need lots of
> dynamic type checks.

This comment makes me /very/ uneasy.  Namely, the funny thing here
is that you seem to question a statement that you make /yourself/ even
though it is undoubtedly true.  *Of course* you can write everything
that can be written in a statically typed language in a dynamically
typed language in such a way that runtime behavior is the same
(including complexity).  Translating a well-typed ML program into,
say, Lisp is completely straightforward.  (Early ML implementations
were done that way.)

The proof of type-safety for the original ML program also shows that
no additional runtime checks are needed for the result of the
translation.  You don't even need the runtime checks that Lisp does on
its own because it doesn't know any better. (This is unless you turn
these checks off explicitly -- which in general would make the code
unsafe but in this specific case does not.)
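
The transliteration Matthias describes can be sketched with a hypothetical example (Python standing in for Lisp as the dynamic target; a cons cell encoded as a pair):

```python
# ML:  fun length [] = 0 | length (_ :: t) = 1 + length t
# Direct transliteration into a dynamic language: no residual type
# tests are needed, because the ML type checker has already proved
# that every call site passes a well-formed list.
def length(lst):
    if not lst:                  # nil
        return 0
    _head, tail = lst            # cons cell encoded as a pair (head, tail)
    return 1 + length(tail)

xs = (1, (2, (3, None)))         # the list [1, 2, 3]
print(length(xs))                # 3
```

The translated code runs with the same asymptotic behavior as the original; nothing about the target language forces extra checks in.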

So rest assured that the answer to your question:

> But I haven't sorted out yet whether this class
> really exists.

is "no".  But that was not my point anyway.

Matthias

PS: You don't need to try and convince me of the virtues of dynamic
typing.  I *come* from this world -- having implemented at least three
(if not four -- depending on how you count) interpreters/compilers for
dynamically typed languages.  Because of this I also already know all
the arguments that you are undoubtedly thinking of bringing forward,
having either used them myself or once marveled at them when I heard
them from other, then like-minded folks.  But the long-term effect of
working with and on such systems was that I now prefer to avoid them.
"Run-time metaprogramming" you say?  The emperor has no clothes.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnoiip$iqg$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:

>>No, it's not. There's a class of programs that exhibit a certain
>>behavior at runtime that you cannot write in a statically typed
>>language _directly in the language itself_.
> 
> This is simply not true.  See above.

OK, let's try to distill this to some simple questions.

Assume you have a compiler ML->CL that translates an arbitrary ML 
program with a main function into Common Lisp. The main function is a 
distinguished function that starts the program (similar to main in C). 
The result is a Common Lisp program that behaves exactly like its ML 
counterpart, including the fact that it doesn't throw any type errors at 
runtime.

Assume furthermore that ML->CL retains the explicit type annotations in 
the result of the translation in the form of comments, so that another 
compiler CL->ML can fully reconstruct the original ML program without 
manual help.

Now we can modify the result of ML->CL for any ML program as follows. We 
add a new function that is defined as follows:

(defun new-main ()
   (loop (print (eval (read)))))

(We assume that NEW-MAIN is a name that isn't defined in the rest of the 
original program. Otherwise, it's easy to automatically generate a 
different unique name.)

Note that we haven't written an interpreter/compiler by ourselves here, 
we just use what the language offers by default.

Furthermore, we add the following to the program: We write a function 
RUN (again a unique name) that spawns two threads. The first thread 
starts the original main function, the second thread opens a console 
window and starts NEW-MAIN.

Now, RUN is a function that executes the original ML program (as 
translated by ML->CL, with the same semantics, including the fact that 
it doesn't throw any runtime type errors in its form as generated by 
ML->CL), but furthermore executes a read-eval-print-loop that allows 
modification of the internals of that original program in arbitrary 
ways. For example, the console allows you to use DEFUN to redefine an 
arbitrary function of the original program that runs in the first 
thread, so that the original definition is not visible anymore and all 
calls to the original definition within the first thread use the new 
definition after the redefinition is completed. [1]
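
The construction can be miniaturized as follows (a hypothetical Python sketch; single-threaded, to keep it deterministic, with exec playing the role of the read-eval-print loop):

```python
def greet():
    return "hello"

def run_program():
    # Calls resolve greet through the module namespace at call time,
    # so later redefinitions are picked up (late binding).
    return greet()

def repl_step(form):
    # Stand-in for (loop (print (eval (read)))): evaluate one
    # "definition" typed at the console against the live program.
    exec(form, globals())

print(run_program())          # hello
repl_step('def greet():\n    return "bonjour"')
print(run_program())          # bonjour
```

The second "thread" redefines a function of the running program, and the running program observes the change -- which is exactly the behavior the back-translation to ML would have to preserve.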

Now here come the questions.

Is it possible to modify CL->ML in a way that any program originally 
written in ML, translated with ML->CL, and then modified as sketched 
above (including NEW-MAIN and RUN) can be translated back to ML? For the 
sake of simplicity we can assume an implementation of ML that already 
offers multithreading. Again, for the sake of simplicity, it's 
acceptable that the result of CL->ML accepts ML as an input language for 
the read-eval-print-loop in RUN instead of Common Lisp. The important 
thing here is that redefinitions issued in the second thread should 
affect the internals of the program running in the first thread, as 
described above.

To ask the question in more detail:

a) Is it possible to write CL->ML in a way that the result is both still 
statically type checkable and not considerably larger than the original 
program that was given to ML->CL? Especially, is it possible to do this 
without implementing a new interpreter/compiler on top of ML and then 
letting the program run in that language, but to keep the program 
executable in ML itself?

(So, ideally, it should be roughly only as many lines longer as the 
modified CL version is compared to the unmodified CL version.)

b) Otherwise, is there a way to extend ML's type system in a way that it 
is still statically checkable and can still handle such programs?

c) If you respond with yes to either a or b, what does your sketch of an 
informal proof in your head look like that convincingly shows that this 
can actually work?

d) If you respond with no to both a and b, would you still disagree with 
the assessment that there is a class of programs that can be implemented 
with dynamically typed languages but not with statically typed ones? If 
so, why?

>>[1] Except perhaps for a class of programs that would change their
>>runtime and/or space complexity, provided they would need lots of
>>dynamic type checks.
> 
> 
> This comment makes me /very/ uneasy.  Namely, the funny thing here
> is that you seem to question a statement that you make /yourself/ even
> though it is undoubtedly true.  *Of course* you can write everything
> that can be written in a statically typed language in a dynamically
> typed language in such a way that runtime behavior is the same
> (including complexity).  Translating a well-typed ML program into,
> say, Lisp is completely straightforward.  (Early ML implementations
> were done that way.)

Well, nothing in this thread makes me uneasy at all. I am not trying to 
defend dynamic typing out of pure personal preference. I want to find 
out if we can objectively classify the programs that are expressible in 
either statically typed languages or dynamically typed languages. Until 
recently, I have thought that the class of programs you can write with a 
dynamically typed language is a strict superset of the programs you can 
write with a statically typed language. It is especially the class of 
programs that allows full-fledged runtime metaprogramming, as sketched 
above, that statically typed languages cannot implement by definition 
without resorting to implementing a full interpreter/compiler for a 
dynamically typed language.

If it turns out that statically typed languages can indeed express a 
class of programs that exhibit a certain behavior that you cannot write 
in a dynamically typed language without implementing a full 
interpreter/compiler for a statically typed language on top, this 
wouldn't make me feel "uneasy". To the contrary, I would be happy 
because I would finally understand what all the fuss about static typing 
is about.

If it is your only concern that you defend your pet programming style, 
well, that's not my problem. I am interested in insights.

> PS: You don't need to try and convince me of the virtues of dynamic
> typing.  I *come* from this world -- having implemented at least three
> (if not four -- depending on how you count) interpreters/compilers for
> dynamically typed languages.  Because of this I also already know all
> the arguments that you are undoubtedly thinking of bringing forward,
> having either used them myself or once marveled at them when I heard
> them from other, then like-minded folks.  But the long-term effect of
> working with and on such systems was that I now prefer to avoid them.
> "Run-time metaprogramming" you say?  The emperor has no clothes.

I don't care whether you have made bad experiences in this regard or 
not, to more or less the same degree that you probably don't care 
whether I have made bad experiences with static type systems. (And 
that's fine.)

I am only asking questions that we can objectively answer. Can static 
typing and runtime metaprogramming be reconciled, yes or no?


Pascal

[1] Yes, with all the dangers that this may incur! There are ways to 
handle the potential dangers of such an approach. That's what dynamic 
metaobject protocols are designed for. However, this doesn't matter for 
the sake of this discussion.

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fb097fe$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>OK, let's try to distill this to some simple questions.
>
>Assume you have a compiler ML->CL that translates an arbitrary ML 
>program with a main function into Common Lisp. The main function is a 
>distinguished function that starts the program (similar to main in C). 
>The result is a Common Lisp program that behaves exactly like its ML 
>counterpart, including the fact that it doesn't throw any type errors at 
>runtime.
>
>Assume furthermore that ML->CL retains the explicit type annotations in 
>the result of the translation in the form of comments, so that another 
>compiler CL->ML can fully reconstruct the original ML program without 
>manual help.
>
>Now we can modify the result of ML->CL for any ML program as follows. We 
>add a new function that is defined as follows:
>
>(defun new-main ()
>   (loop (print (eval (read)))))
>
>(We assume that NEW-MAIN is a name that isn't defined in the rest of the 
>original program. Otherwise, it's easy to automatically generate a 
>different unique name.)
>
>Note that we haven't written an interpreter/compiler by ourselves here, 
>we just use what the language offers by default.
>
>Furthermore, we add the following to the program: We write a function 
>RUN (again a unique name) that spawns two threads. The first thread 
>starts the original main function, the second thread opens a console 
>window and starts NEW-MAIN.
>
>Now, RUN is a function that executes the original ML program (as 
>translated by ML->CL, with the same semantics, including the fact that 
>it doesn't throw any runtime type errors in its form as generated by 
>ML->CL), but furthermore executes a read-eval-print-loop that allows 
>modification of the internals of that original program in arbitrary 
>ways. For example, the console allows you to use DEFUN to redefine an 
>arbitrary function of the original program that runs in the first 
>thread, so that the original definition is not visible anymore and all 
>calls to the original definition within the first thread use the new 
>definition after the redefinition is completed. [1]
>
>Now here come the questions.
>
>Is it possible to modify CL->ML in a way that any program originally 
>written in ML, translated with ML->CL, and then modified as sketched 
>above (including NEW-MAIN and RUN) can be translated back to ML?

Yes, it is possible.  For example, have CL->ML ignore the type annotation
comments, and translate CL values into a single ML discriminated union type
that can represent all CL values.  Or (better, but not quite the same semantics)
translate those CL values with ML type annotations back to corresponding ML
types, and insert conversions to convert between the generic CL type and
other specific types where appropriate.

Suppose the original ML program defines the following functions

	foo : int -> int
	bar : string -> string
	...

We can add dynamic typing like this:

	datatype Generic = Int of int
			 | String of string
			 | Atom of string
			 | Cons of Generic * Generic
			 | Apply of Generic * Generic
			 | ...

	fun foo_wrapper (Int x) = (Int (foo x))

	fun bar_wrapper (String x) = (String (bar x))

and then dynamic binding like this

	val foo_binding = ref foo_wrapper

	val bar_binding = ref bar_wrapper

For dynamic binding, function calls will have to be translated into
calls that do an indirection.  So a call

	val y = foo x

will become

	val y = !foo_binding x

or, if x and y are used in a way that requires that they have type int,
then

	val (Int y) = !foo_binding (Int x)

We can then simulate eval using an explicit symbol table.

	fun lookup "foo" = !foo_binding
	  | lookup "bar" = !bar_binding
	  | lookup "define" = !define_binding
	  ...

	fun define "foo" f = foo_binding := f
	  | define "bar" f = bar_binding := f
	  | define "define" f = define_binding := f
	  ...

	fun define_wrapper (Cons (Atom name, body)) =
		let val () = define name body in Atom "()" end
	val define_binding = ref define_wrapper

	fun eval (Apply (func, arg)) =
		(case eval func of
		     Atom funcname => (lookup funcname) (eval arg))
	  | eval x = x

Note that our symbol table includes an entry for the function "define",
so that eval can be used to modify the dynamic bindings.

The rest (e.g. read) is straightforward.
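
The scheme above can be rendered as a rough, hypothetical Python sketch: tagged tuples play the role of the Generic type, a dict plays the role of the mutable bindings, and a tiny eval dispatches through the table.

```python
def foo(x):                      # stands in for the original foo : int -> int
    return x + 1

# The analogue of `val foo_binding = ref foo_wrapper`: each wrapper
# maps Generic -> Generic, here tagged tuples like ("Int", 41).
bindings = {"foo": lambda v: ("Int", foo(v[1]))}

def define(name, wrapper):
    # The "define" entry: redefinition mutates the bindings table.
    bindings[name] = wrapper

def ev(form):
    # ("Apply", name, arg) applies a dynamically bound function;
    # everything else is self-evaluating, like `eval x = x` above.
    if isinstance(form, tuple) and form[0] == "Apply":
        return bindings[form[1]](ev(form[2]))
    return form

print(ev(("Apply", "foo", ("Int", 41))))     # ('Int', 42)
define("foo", lambda v: ("Int", v[1] * 2))   # redefine foo through the table
print(ev(("Apply", "foo", ("Int", 41))))     # ('Int', 82)
```

As in the ML version, a type error in the original program reappears here as a data error (a wrongly tagged tuple), not as a statically caught one.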

>To ask the question in more detail:
>
>a) Is it possible to write CL->ML in a way that the result is both still 
>statically type checkable

Yes, but only in a weak sense.  Since we are allowing dynamic binding,
and we want to be able to dynamically bind symbols to a different type,
we're definitely going to have the possibility of dynamic type errors.
The way this is resolved is that things which would have been type errors
in the original ML program may become data errors (invalid constructor
in an algebraic type Generic) in the final ML program.

>and not considerably larger than the original program that was given to
>ML->CL.

There's a little bit of overhead for the wrapper functions of type
"Generic -> Generic", and for the variables of type "ref (Generic -> Generic)"
which store the dynamic bindings.  It's about two lines of code per
function in the original program.  I don't think this is excessive.

>Especially, is it possible to do this 
>without implementing a new interpreter/compiler on top of ML and then 
>letting the program run in that language, but keep the program 
>executable in ML itself?

The program includes a definition for "eval", and "eval" is an
interpreter.  So in that sense, we have added a new interpreter.
But the bulk of the program is written in ML, without making use of eval.

>c) If you respond with yes to either a or b, what does your sketch of an 
>informal proof in your head look like that convincingly shows that this 
>can actually work?

See above.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <boqboc$4p2$1@f1node01.rhrz.uni-bonn.de>
Fergus Henderson wrote:

> Suppose the original ML program defines the following functions
> 
> 	foo : int -> int
> 	bar : string -> string
> 	...
> 
> We can add dynamic typing like this:
> 
> 	datatype Generic = Int of int
> 			 | String of string
> 			 | Atom of string
> 			 | Cons of Generic * Generic
> 			 | Apply of Generic * Generic
> 			 | ...
                            ^^^
How many do you need of those, especially when you want to allow that 
type to be extended in the running program?

> Note that our symbol table includes an entry for the function "define",
> so that eval can be used to modify the dynamic bindings.

DEFUN is just one example. What about DEFTYPE, DEFCLASS, DEFPACKAGE, and 
so forth...

> The program includes a definition for "eval", and "eval" is an
> interpreter.  So in that sense, we have added a new interpreter.

That's the whole point of my argument.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fb0bf48$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>> The program includes a definition for "eval", and "eval" is an
>> interpreter.  So in that sense, we have added a new interpreter.
>
>That's the whole point of my argument.

Then it's a pretty silly argument. 

You ask if we can implement eval, which is an interpreter,
without including an interpreter? 
I don't see what you could usefully conclude from the answer.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <boqlen$13mi$1@f1node01.rhrz.uni-bonn.de>
Fergus Henderson wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>>The program includes a definition for "eval", and "eval" is an
>>>interpreter.  So in that sense, we have added a new interpreter.
>>
>>That's the whole point of my argument.
> 
> 
> Then it's a pretty silly argument. 
> 
> You ask if we can implement eval, which is an interpreter,
> without including an interpreter? 

Right.

> I don't see what you could usefully conclude from the answer.

...that you can't statically type check your code as soon as you 
incorporate an interpreter/compiler into your program that can interact 
with and change your program at runtime in arbitrary ways.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fb0c2cd$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>Fergus Henderson wrote:
>
>> Suppose the original ML program defines the following functions
>> 
>> 	foo : int -> int
>> 	bar : string -> string
>> 	...
>> 
>> We can add dynamic typing like this:
>> 
>> 	datatype Generic = Int of int
>> 			 | String of string
>> 			 | Atom of string
>> 			 | Cons of Generic * Generic
>> 			 | Apply of Generic * Generic
>> 			 | ...
>                            ^^^
>How many do you need of those, especially when you want to allow that 
>type to be extended in the running program?

In Haskell you would just use "Dynamic" instead of the above, and be done
with it.  In Mercury, you'd just use "univ".

In SML, I don't know.  Probably none.  In fact, the first two entries in
the type above are not really necessary, you could just represent ints
and strings as Atoms.  Even Apply could be represented using just Cons
and Atom: instead of Apply x y, we could use Cons (Atom "apply") (Cons x y).

>> Note that our symbol table includes an entry for the function "define",
>> so that eval can be used to modify the dynamic bindings.
>
>DEFUN is just one example. What about DEFTYPE, DEFCLASS, DEFPACKAGE, and 
>so forth...

Well, DEFTYPE in lisp really just defines a function, doesn't it?
So that's not much different than DEFUN.  Likewise, DEFCLASS just
defines a collection of functions, doesn't it?  OK, maybe these
also record some extra information in some global symbol tables.
That is easily emulated if you really want.

I don't know exactly what DEFPACKAGE does, but if the others are any
guide, it's probably not too hard either.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3islqq59i.fsf@rigel.goldenthreadtech.com>
Fergus Henderson <···@cs.mu.oz.au> writes:

> Well, DEFTYPE in lisp really just defines a function, doesn't it?
> So that's not much different than DEFUN.  Likewise, DEFCLASS just
> defines a collection of functions, doesn't it?  OK, maybe these
> also record some extra information in some global symbol tables.
> That is easily emulated if you really want.
> 
> I don't know exactly what DEFPACKAGE does, but if the others are any
> guide, it's probably not too hard either.

Yes, if you reimplement Lisp you can achieve what was asked.  However,
I don't think "Turing equivalence" or Greenspun's Tenth Rule was the point...

/Jon
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <boqod5$13ms$1@f1node01.rhrz.uni-bonn.de>
Fergus Henderson wrote:
> Pascal Costanza <········@web.de> writes:

>>DEFUN is just one example. What about DEFTYPE, DEFCLASS, DEFPACKAGE, and 
>>so forth...
> 
> Well, DEFTYPE in lisp really just defines a function, doesn't it?
> So that's not much different than DEFUN.  Likewise, DEFCLASS just
> defines a collection of functions, doesn't it?  OK, maybe these
> also record some extra information in some global symbol tables.
> That is easily emulated if you really want.
> 
> I don't know exactly what DEFPACKAGE does, but if the others are any
> guide, it's probably not too hard either.

I am not sure if I understand you correctly, but are you actually 
suggesting that it is better to reimplement Common Lisp on your own than 
to just use one of the various Common Lisp implementations?

All I am trying to get across is that, in case you need the flexibility 
a dynamic language provides by default, and only occasionally need to 
restrict that flexibility, it's better to use a dynamic language from 
the outset. The price to pay is that you cannot make use of a 100% 
strict static type system anymore, but on the other hand you can pick 
one of the stable Common Lisp implementations with language features 
that have proven to work well during the past few decades.

Sure, if you don't need the flexibility of a dynamic language then you 
can think about using a static language that might buy you some 
advantages with respect to static checkability. But I highly doubt that 
this is a rational choice because I don't know of any empirical studies 
that show that the problems that static languages intend to solve are 
in fact problems that occur in practice.

Let's inspect the list of those problems again:

a) performance

Good Lisp/Scheme implementations don't have problems in this regard.

b) documentation

Documentation can be handled well with comments and well-chosen names.

c) absence of a certain class of bugs

It's not clear whether this class of bugs really occurs in practice. 
There are also indications that these relatively trivial bugs are 
covered by test suites once the suites contain a reasonable number of 
test cases.

d) unbreakable abstraction boundaries

Such boundaries seem to be as tedious to implement in dynamic languages 
as dynamic features are in static languages.

My conclusions would be as follows:

a) and b) are relatively uninteresting. c) needs convincing studies from 
the proponents of static type systems. As long as they don't provide 
them, it's just guesswork that these bugs are really important.

d) is the hard part. Proponents of static languages say that it's 
important to be able to express such boundaries because it increases 
their expressive power. (That's one thing I have learned from this 
discussion: It is arguable that this can also be an important kind of 
expressive power.) Proponents of dynamic languages say that it's more 
important to be able to work around restrictions, no matter whether they 
are intentional or not; it's better not to be able to paint yourself 
into a corner.

With regard to d, both "groups" don't have enough objective empirical 
evidence beyond their own subjective experiences. Static programmers 
don't have enough objective empirical evidence that their languages 
objectively increase the quality of their software, and dynamic 
programmers don't have enough objective empirical evidence that painting 
oneself into corners is a problem that occurs regularly.

So both views essentially boil down to no more than subjective belief 
systems. And IMHO that's not so far from the "truth": I am convinced 
that these issues depend mostly on personal programming style and 
preferences and not so much on technological issues. Software quality is 
a social construct, and social problems can hardly be solved with 
technical means. If you have a bunch of good programmers and let them 
choose their preferred tools, it's more likely that they produce good 
software than when you have a bunch of average programmers and tell them 
what tools they must use.

(It's very important in this regard that we are not talking about 
braindead languages. There are both extremely stupid as well as 
excellent exemplars of static and dynamic languages.)


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bor3ks$s7e$1@f1node01.rhrz.uni-bonn.de>
Fergus Henderson wrote:

> Your article that I was responding to was suggesting that there might
> be some things which could not be done in statically typed languages,
> and in particular that this sort of eval(read()) loop might be one of them.
> As I hope I've demonstrated, it is not.

And you're right in this regard. My statement was too strong. Of course 
it is always possible to reimplement a dynamic language on top of a 
static one and by this get the full expressive power of a dynamic 
language. But I didn't have Turing equivalence in mind when I made that 
statement. The real question is how hard it is to reimplement a dynamic 
language, and wouldn't it be a better idea to use a dynamic language 
when the requirements are of the sort that, when in doubt, flexibility 
turns out to be more important than stacity.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Neelakantan Krishnaswami
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <slrnbr2gip.nn2.neelk@gs3106.sp.cs.cmu.edu>
In article <············@f1node01.rhrz.uni-bonn.de>, Pascal Costanza wrote:
> Fergus Henderson wrote:
> 
>> Suppose the original ML program defines the following functions
>> 
>> 	foo : int -> int
>> 	bar : string -> string
>> 	...
>> 
>> We can add dynamic typing like this:
>> 
>> 	datatype Generic = Int of int
>> 			 | String of string
>> 			 | Atom of string
>> 			 | Cons Generic Generic
>> 			 | Apply Generic Generic
>> 			 | ...
>                             ^^^
> How many do you need of those, especially when you want to allow that 
> type to be extended in the running program?

Two more.

  datatype generic = ...
                   | Class of generic ref
                   | Object of generic array ref

Then you can write a typeof function that looks at tags to decide what
to do.

This shouldn't be that hard to see: an implementation of Scheme or
Lisp doesn't require an infinite family of tags in the lower-level
implementation.


-- 
Neel Krishnaswami
·····@cs.cmu.edu
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <n0bj7vwn.fsf@ccs.neu.edu>
Pascal Costanza <········@web.de> writes:

> No, for christ's sake!  There are dynamically typed programs that you
> cannot translate into statically typed ones!

You are really going to confuse the static typers here.  Certainly
there is no program expressible in a dynamically typed language such
as Lisp that is not also expressible in a statically typed language
such as SML.

But it *is* the case that there are programs for which safe execution
*must* depend upon checks (type checks or pattern matching) that are
performed at run time.  Static analysis will not remove the need for
these.
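To illustrate the first case, here is a minimal Python sketch (the function name is made up for the example): whether the program runs safely depends on data that exists only at run time, so no static analysis can remove the check.

```python
def parse_age(raw):
    """Validate data arriving from outside the program.

    The check here is unavoidable: whether it succeeds depends on a
    value that only exists at run time, so static analysis cannot
    remove it."""
    value = int(raw)          # raises ValueError for non-numeric input
    if value < 0:
        raise ValueError("age must be non-negative")
    return value

print(parse_age("42"))   # -> 42
```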

It is *also* the case that there are programs for which safe execution
requires *no* runtime checking, yet static analysis cannot prove that
this is the case.

A static analyzer that neither inserts the necessary run-time checks
nor requires the user to do so will either fail to compile some correct
programs, or fail to compile some programs correctly.

I think the static typers will agree with (but probably not be happy
about) this statement:  There exist programs that may dynamically admit
a correct solution for which static analyzers are unable to prove that
a correct solution exists.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnp2ao$v48$1@f1node01.rhrz.uni-bonn.de>
Joe Marshall wrote:
> Pascal Costanza <········@web.de> writes:
> 
>>No, for christ's sake!  There are dynamically typed programs that you
>>cannot translate into statically typed ones!
> 
> You are really going to confuse the static typers here.  Certainly
> there is no program expressible in a dynamically typed language such
> as Lisp that is not also expressible in a statically typed language
> such as SML.

Yes, of course. Bad wording on my side.

Thanks for clarification.

I am not interested in Turing equivalence in the static vs. dynamic 
typing debate. It's taken for granted that for every program written in 
either kind of language you can write an equivalent program in the other 
kind of language. I seriously don't intend to suggest that dynamically 
typed languages beat Turing computability. ;-)

I am not interested in _what_ programs can be implemented, but in _how_ 
programs can be implemented.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: John Thingstad
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <oprxtiyvaxxfnb1n@news.chello.no>
On Wed, 29 Oct 2003 19:53:12 +0100, Pascal Costanza <········@web.de> 
wrote:

> Joe Marshall wrote:
>> Pascal Costanza <········@web.de> writes:
>>
>
> Pascal
>

Do you ever do any real work? Or do you spend all your time constructing 
replies here...

-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
From: Henrik Motakef
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <863cdb96q6.fsf@pokey.internal.henrik-motakef.de>
John Thingstad <··············@chello.no> writes:

> Do you ever do any real work? Or do you spend all your time
> constructing replies here...

Proves the point about programmer efficiency, eh? Use Lisp, and you
too can spend most of the day posting to Usenet! ;-)
From: Daniel C. Wang
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <uvfq7l78z.fsf@hotmail.com>
Joe Marshall <···@ccs.neu.edu> writes:


> I think the static typers will be agree (but probably not be happy
> with) this statement:  There exist programs that may dynamically admit
> a correct solution for which static analyzers are unable to prove that
> a correct solution exists.

Agreed. However, if you allow the programmer to explicitly guide the
static analyzer with hints, I think the set of correct programs that
cannot be proven correct, even with a static analyzer and explicit
programmer hints, is very small.

Type inference and type checking are different things. Inference will always
be incomplete or undecidable in ways that are probably quite annoying. Type
checking may be incomplete, but no more incomplete than modern
mathematics.
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9f8c43$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>Can you show me an example of a program that doesn't make sense anymore 
>when you strip off the static type information?

Here's a C++ example:

	x << y

Depending on the types of x and y, this might mean left shift, I/O,
or something entirely different.

Here's another C++ example:

	#include "foo.h"
	main() {
		auto Foo x;	// constructor has side-effects
	}

If you strip away the static type information here, i.e. change "auto Foo x;"
to just "auto x;", then there's no way to know which constructor to call!

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnojdj$iqm$1@f1node01.rhrz.uni-bonn.de>
Fergus Henderson wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>Can you show me an example of a program that doesn't make sense anymore 
>>when you strip off the static type information?
> 
> 
> Here's a C++ example:
> 
> 	x << y
> 
> Depending on the types of x and y, this might mean left shift, I/O,
> or something entirely different.

And depending on the runtime types, this can be correctly dispatched at 
runtime.
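A minimal Python sketch of such runtime dispatch (the Out class is a made-up example): the same expression x << y means "left shift" for integers or "write" for a stream-like object, resolved purely by the runtime type of x.

```python
class Out:
    """A stream-like class that reuses << for output, as C++ iostreams do."""
    def __init__(self):
        self.parts = []
    def __lshift__(self, item):
        self.parts.append(str(item))
        return self          # returning self allows chaining: out << a << b

def shift_or_write(x, y):
    # The same expression means "left shift" or "write" depending on
    # the runtime type of x -- no static type information required.
    return x << y

print(shift_or_write(1, 3))                  # integer left shift -> 8
print(shift_or_write(Out(), "hello").parts)  # stream-style output
```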

> Here's another C++ example:
> 
> 	#include "foo.h"
> 	main() {
> 		auto Foo x;	// constructor has side-effects
> 	}
> 
> If you strip away the static type information here, i.e. change "auto Foo x;"
> to just "auto x;", then there's no way to know which constructor to call!

But that's not type information, that's just a messy way to implicitly 
call a function. (C++ confuses types and classes here.)


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Neelakantan Krishnaswami
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <slrnbpqh91.3ig.neelk@gs3106.sp.cs.cmu.edu>
In article <············@f1node01.rhrz.uni-bonn.de>, Pascal Costanza wrote:
> Matthias Blume wrote:
>> 
>> Nitpick: Neither syntactic nor statically checked type errors make
>> programs fail. Instead, their presence simply implies the absence of a
>> program.
> 
> Yes, the absence of a program that might not have failed if it hadn't 
> been rejected by the static type system.

That sentence reflects a misunderstanding of what something like ML's
type system *means*. You're trying to hammer ML into the Lisp mold,
which is leading you to incorrect conclusions.

A value in any programming language is some pattern of bits
interpreted under a type. In Scheme or Lisp, there is a single
universe (a single type) that all values belong to, which is why
it's legal to pass any value to a function. But in ML, there are
multiple universes of values, one for each type.

This means that the same bit pattern can represent different values,
which is not true in a dynamically typed language. To make this
concrete, consider the following Ocaml code:

  type foo = A | B
  
  type baz = C | D
  
  let f1 x = 
    match x with
    | A -> C    
    | B -> D
  
  let f2 x = 
    match x with
    | C -> 0 
    | D -> 1

Some apparently-similar Scheme code would look like:

  (define (f1 x)
     (case x
       ((A) 0)
       ((B) 1)))

  (define (f2 x)
     (case x
        ((C) 0) 
        ((D) 1)))

The difference between these two programs gets revealed when you look
at the assembly code that the Ocaml compiler produces for f1 and f2,
side by side[*]:

f1:                             f2:                      
.L101:                          .L103:                   
        cmpl    $1, %eax                cmpl    $1, %eax 
        je      .L100                   je      .L102    
        movl    $3, %eax                movl    $3, %eax 
        ret                             ret              
.L100:                          .L102:                   
        movl    $1, %eax                movl    $1, %eax 
        ret                             ret              

The code generated for each function is identical, modulo label names.
This means that the bit patterns for the data constructors A/C and B/D
are identical, and the program only makes sense because A and C,
and B and D are interpreted at different types. (In fact, notice that
the bit-representation of the integer zero and the constructors A
and C are the same, too.) In contrast, this would not be a valid
compilation of the Scheme program, because A, B, C, and D would all
have to have distinct bit patterns.

So eliminating the types from an ML program means you no longer have a
program, because you can no longer consistently interpret bit patterns
as ML values -- there *isn't* any universal domain that all ML values
really belong to, as you are supposing.

[*] I cleaned up some extraneous alignment stuff, to make what's going
on clearer. But all that was identical too.

-- 
Neel Krishnaswami
·····@cs.cmu.edu
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnjimu$frk$1@f1node01.rhrz.uni-bonn.de>
Neelakantan Krishnaswami wrote:

> This means that the same bit pattern can represent different values,
> which is not true in a dynamically typed language. To make this
> concrete, consider the following Ocaml code:
> 
>   type foo = A | B
>   
>   type baz = C | D
>   
>   let f1 x = 
>     match x with
>     | A -> C    
>     | B -> D
>   
>   let f2 x = 
>     match x with
>     | C -> 0 
>     | D -> 1
> 
> Some apparently-similar Scheme code would look like:
> 
>   (define (f1 x)
>      (case x
>        ((A) 0)
>        ((B) 1)))
> 
>   (define (f2 x)
>      (case x
>         ((C) 0) 
>         ((D) 1)))
> 
> The difference between these two programs gets revealed when you look
> at the assembly code that the Ocaml compiler produces for f1 and f2,
> side by side[*]:
[...]

> The code generated for each function is identical, modulo label names.
> This means that the bit patterns for the data constructors A/C and B/D
> are identical, and the program only makes sense because A and C,
> and B and D are interpreted at different types. (In fact, notice that
> the bit-representation of the integer zero and the constructors A
> and C are the same, too.) In contrast, this would not be a valid
> compilation of the Scheme program, because A, B, C, and D would all
> have to have distinct bit patterns.

But for correct inputs, the observable _behavior_ of the OCaml and 
Scheme functions would be the same, wouldn't it?

So what's the point here, really?  It seems to me that you are only 
talking about code optimization now, and not about well-behaved programs.

It's clearly possible to transliterate the OCaml code into a dynamically 
typed language, without the type annotations, and thereby produce code 
that behaves the same. It would only make the necessary checks at 
runtime, that's all.
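For instance, the earlier OCaml f1/f2 example might be transliterated into Python like this (constructors represented as plain strings for brevity), with the tag checks performed at run time instead of compile time:

```python
# Strings stand in for the OCaml constructors; the static types become
# explicit runtime tag checks in each function.
def f1(x):
    if x == "A":
        return "C"
    if x == "B":
        return "D"
    raise TypeError("f1 expects A or B")   # the runtime check

def f2(x):
    if x == "C":
        return 0
    if x == "D":
        return 1
    raise TypeError("f2 expects C or D")

print(f2(f1("A")))   # -> 0, same observable behavior as the OCaml version
```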

(The concrete Scheme code above maps to different results than the 
OCaml code, but that's beside the point.)


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Neelakantan Krishnaswami
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <slrnbpr380.3vq.neelk@gs3106.sp.cs.cmu.edu>
In article <············@f1node01.rhrz.uni-bonn.de>, Pascal Costanza wrote:
> Neelakantan Krishnaswami wrote:
> 
>> This means that the same bit pattern can represent different values,
>> which is not true in a dynamically typed language. To make this
>> concrete, consider the following Ocaml code:
>> 
>>   type foo = A | B
>>   
>>   type baz = C | D
>>   
>>   let f1 x = 
>>     match x with
>>     | A -> C    
>>     | B -> D
>>   
>>   let f2 x = 
>>     match x with
>>     | C -> 0 
>>     | D -> 1
>> The difference between these two programs gets revealed when you look
>> at the assembly code that the Ocaml compiler produces for f1 and f2,
>> side by side[*]:
> [...]
> 
>> The code generated for each function is identical, modulo label names.
>> This means that the bit patterns for the data constructors A/C and B/D
>> are identical, and the program only makes sense because A and C,
>> and B and D are interpreted at different types. (In fact, notice that
>> the bit-representation of the integer zero and the constructors A
>> and C are the same, too.) In contrast, this would not be a valid
>> compilation of the Scheme program, because A, B, C, and D would all
>> have to have distinct bit patterns.
> 
> So what's the point here, really?  It seems to me that you are only 
> talking about code optimization now, and not about well-behaved programs.

No, I'm not talking about optimization -- Ocaml and best-of-breed
Scheme compilers generate code that runs at about the same speed.

What I'm trying (apparently badly) to make clear is that what a
machine word means -- what value it has -- depends on the type it's
interpreted under. This is the point that I hoped the fact that three
distinct, distinguishable Ocaml values have the exact same
representation would make clear.

I think my mistake was to leave out the parallel case and examine how
this relates to Scheme. I'll try this below:

> But for correct inputs, the observable _behavior_ of the OCaml and 
> Scheme functions would be the same, wouldn't it?

No, because the Scheme function has a different type! f1 and f2 in
Scheme have the function type schemevalue -> schemevalue. f1 in Ocaml
has the type foo -> baz, and f2 has the type baz -> int.

If you wrote the following ML code:

type schemevalue = 
  | Symbol of string
  | Exact of int
  | Lambda of (schemevalue -> schemevalue)
  | ... (* and so on *)

and then wrote two functions like this:

  let f1 x =
    match x with
    | Symbol "A" -> Exact 0
    | Symbol "B" -> Exact 1

  let f2 x =
    match x with
    | Symbol "C" -> Exact 0
    | Symbol "D" -> Exact 1

then you would find that the compiled Ocaml code is something very
much like what a Scheme compiler produces -- and the types of f1 and
f2 would be schemevalue -> schemevalue. (For the code to actually be
identical, you'd have to change the Symbol representation to intern
the string values and return an integer index you can compare, but
that's a detail not worth putting into a Usenet post.)

In fact, you can pretty much mechanically turn any ML compiler into a
Scheme compiler, by defining a universal datatype for Scheme values,
and then shadowing all of the standard library functions so that they
consume and produce Scheme values. (You'll also want a preprocessor to
give it an s-exp notation, too, of course.)

> It's clearly possible to transliterate the OCaml code into a dynamically 
> typed language, without the type annotations, and thereby produce code 
> that behaves the same. It would only make the necessary checks at 
> runtime, that's all.

These checks aren't the same sort of type checks that the Ocaml
compiler is doing at compile time. They correspond instead to tag
checks on data constructors during pattern matching. For example,
notice that you can have an ML type like int -> int -> int, but the
closest approximation to that in Scheme is the procedure? function.

This is why I'm pretty dubious about your claim: while it's true that
you can come up with a mapping from ML expressions into the universal
Scheme datatype, I don't see the practical relevance. You're basically
telling me that my programs will always typecheck if I use a single
type to represent everything.

Gee, I knew that already.

I use multiple types for a reason: I can use the type system to write
down many of the invariants about my program, and automatically track
violations for me. This frees my attention so that I can spend more
time thinking about the higher level issues. This is precisely the
sort of benefit that unit tests give you, except that I don't need to
write down any tests. (Or more precisely, I can spend the same time
writing tests for the really complicated invariants, secure in the
knowledge that the simple ones are already taken care of.)

-- 
Neel Krishnaswami
·····@cs.cmu.edu
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnklo5$op6$1@newsreader2.netcologne.de>
Neelakantan Krishnaswami wrote:

>>It's clearly possible to transliterate the OCaml code into a dynamically 
>>typed language, without the type annotations, and thereby produce code 
>>that behaves the same. It would only make the necessary checks at 
>>runtime, that's all.
> 
> 
> These checks aren't the same sort of type checks that the Ocaml
> compiler is doing at compile time. They correspond instead to tag
> checks on data constructors during pattern matching. For example,
> notice that you can have an ML type like int -> int -> int, but the
> closest approximation to that in Scheme is the procedure? function.

Hmm, I would still consider the things you have said about data type 
representation irrelevant. Adding type tags to perform runtime type 
checking of (non-function) values is cheap, so I don't see any real 
increase in expressive power for a static type system.

However, your remark about function types got me thinking. Let me try to 
rephrase what you said, perhaps in a way friendlier to a "dynamically 
typed mindset". ;)

Of course, the type of a function is usually checked as soon as it is 
called. I don't know about Scheme implementations in this regard, but 
there are surely Common Lisp implementations out there that check 
parameters and return values against arbitrary types under the right 
safety settings. (Again, I don't know the details because I have never 
needed this.)

However, assume you want to check the type of a function, for example as 
passed as a value from somewhere, against a function type without 
actually calling it. One could imagine ways to implement this, by 
encoding function type descriptions as lists, storing them as meta 
information for functions, and checking a function's type description 
against a required type description for compatibility recursively. 
However, it would not be possible to make this execute efficiently in 
the general case. (And this is probably one of the reasons why ANSI 
Common Lisp doesn't allow for dynamic checks for function types via 
TYPEP and CHECK-TYPE.)
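Such an encoding might be sketched in Python as follows (the tuple format is a made-up convention, not any existing library's); note that the check has to walk the whole descriptor, and that subtyping and variance rules are omitted for brevity:

```python
# A hypothetical encoding of function types as nested tuples:
# ("->", arg_type, result_type), with "int", "str", ... as leaves.
# A higher-order type like (int -> int) -> int nests one level deeper.
def compatible(required, provided):
    """Recursively check a provided type descriptor against a required one.

    The cost grows with the size of the descriptors -- exactly the work
    a static checker pays once, at compile time, instead of per check."""
    if isinstance(required, tuple):
        return (isinstance(provided, tuple)
                and len(required) == len(provided)
                and all(compatible(r, p) for r, p in zip(required, provided)))
    return required == provided

square_type = ("->", "int", "int")
print(compatible(("->", "int", "int"), square_type))   # -> True
print(compatible(("->", "str", "int"), square_type))   # -> False
```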

So here, a static type system indeed improves expressive power because 
it eliminates a serious overhead of runtime type checking for functions 
(provided you would otherwise actually want/need to "regularly" check 
function types in a concrete dynamically typed setting).

Is this close to what you mean?

Is this something that a "statically typed mindset" finds so obvious 
that it gets hard to express? ;)

Pascal
From: Neelakantan Krishnaswami
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <slrnbpt8i9.6me.neelk@gs3106.sp.cs.cmu.edu>
In article <············@newsreader2.netcologne.de>, Pascal Costanza wrote:
> Neelakantan Krishnaswami wrote:
> 
>>>It's clearly possible to transliterate the OCaml code into a dynamically 
>>>typed language, without the type annotations, and thereby produce code 
>>>that behaves the same. It would only make the necessary checks at 
>>>runtime, that's all.
>> 
>> These checks aren't the same sort of type checks that the Ocaml
>> compiler is doing at compile time. They correspond instead to tag
>> checks on data constructors during pattern matching. For example,
>> notice that you can have an ML type like int -> int -> int, but the
>> closest approximation to that in Scheme is the procedure? function.
> 
> Hmm, I would still consider the things you have said about data type 
> representation irrelevant. Adding type tags to perform runtime type 
> checking of (non-function) values is cheap, so I don't see any real 
> increase in expressive power for a static type system.

This applies to any parameterized type, IMO. For example, when I
program in Python, one of the biggest sources of errors in my code is
when I use hash tables: I create a dictionary that I intend to have a
type like Dictionary[String, Int], and then put a bad key/value pair
into it. The problem I have is that when the runtime error is raised,
it manifests at the site that *accesses* the dictionary, rather than
the program point that put the bad value into it.
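A small Python example of this failure mode: the bad pair goes in silently, and the TypeError only surfaces later, at the access site.

```python
ages = {}                      # intended type: Dictionary[String, Int]
ages["alice"] = 30
ages[42] = "thirty"            # the bad pair is inserted silently here

total = 0
try:
    for name, age in ages.items():
        total += age           # the TypeError surfaces only here, at the
                               # access site, far from the bad insertion
except TypeError as err:
    print("error at access site:", err)
```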

If you rewrite function types as having a type like Function[a,b]
rather than a -> b, you can see that a function arrow is a type
parameterized over its argument and return values, so it's one
instance of a more general case.

> Of course, the type of a function is usually checked as soon as it is 
> called. I don't know about Scheme implementations in this regard, but 
> there are surely Common Lisp implementations out there that check 
> parameters and return values against arbitrary types under the right 
> safety settings. (Again, I don't know the details because I have never 
> needed this.)

Dylan lets you check argument and return types on each call, and
DrScheme now has a nice system for doing design-by-contract with
higher-order functions.
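In Python, a contract in this spirit can be sketched as a decorator (a simplified, made-up helper, not Dylan's or DrScheme's actual mechanism) that checks argument and return types on every call:

```python
def checked(arg_types, return_type):
    """Contract-style decorator: check argument and return types per call."""
    def wrap(f):
        def call(*args):
            # Check each argument against its declared type on every call.
            for a, t in zip(args, arg_types):
                if not isinstance(a, t):
                    raise TypeError("expected %s, got %r" % (t.__name__, a))
            result = f(*args)
            # Check the return value too (note: in general this defeats
            # tail-call optimization, as discussed below).
            if not isinstance(result, return_type):
                raise TypeError("bad return value: %r" % (result,))
            return result
        return call
    return wrap

@checked((int, int), int)
def add(x, y):
    return x + y

print(add(2, 3))       # -> 5
# add(2, "3") would raise TypeError at the call boundary
```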

> However, assume you want to check the type of a function, for example as 
> passed as a value from somewhere, against a function type without 
> actually calling it. One could imagine ways to implement this, by 
> encoding function type descriptions as lists, storing them as meta 
> information for functions, and checking a function's type description 
> against a required type description for compatibility recursively. 
> However, it would not be possible to make this execute efficiently in 
> the general case. (And this is probably one of the reasons why ANSI 
> Common Lisp doesn't allow for dynamic checks for function types via 
> TYPEP and CHECK-TYPE.)
>
> So here, a static type system indeed improves expressive power
> because it eliminates a serious overhead of runtime type checking
> for functions (provided you would otherwise actually want/need to
> "regularly" check function types in a concrete dynamically typed
> setting).  Is this close to what you mean?

Well, I don't really care about the performance overhead very much,
with one exception.[*] It's a red herring, imo.

The real issue is that having a static type system with polymorphic
types lets me push the work of checking the consistency of all
polymorphic instantiations onto the compiler. That's something I just
don't have to think about anymore -- I don't have to write tests for
it, and I don't have to worry about getting it wrong, ever.

This, in turn, lets me write code in a much more generic style. In
particular, I shifted to using statically typed languages when I
started writing a lot of higher-order functional code. Once you start
using things like combinators and partial application, having that
consistency check becomes extremely helpful, because the dynamic
errors that arise in erroneous programs can happen far from their real
point of origin in the source code.

I find non-polymorphic type systems to definitely be a straitjacket,
and for simple first-order code a polymorphic type system doesn't help
much (though it doesn't hurt either). So for things like routine
scripting I use Python without too many problems. (I'd use Scheme or
ML, except that Python has a lot more libraries for random utility
stuff like file globbing and pathname operations.)

[*] The exception is that checking return values can lead to loss of
tail recursion. Since this can potentially alter the complexity class
of the space usage, I do care about that.

-- 
Neel Krishnaswami
·····@cs.cmu.edu
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnms5s$k27$1@newsreader2.netcologne.de>
Neelakantan Krishnaswami wrote:

> In article <············@newsreader2.netcologne.de>, Pascal Costanza wrote:

>>Hmm, I would still consider the things you have said about data type 
>>representation irrelevant. Adding type tags to perform runtime type 
>>checking of (non-function) values is cheap, so I don't see any real 
>>increase in expressive power for a static type system.
> 
> This applies to any parameterized type, IMO. For example, when I
> program in Python, one of the biggest sources of errors in my code is
> when I use hash tables: I create a dictionary that I intend to have a
> type like Dictionary[String, Int], and then put a bad key/value pair
> into it. The problem I have is that when the runtime error is raised,
> it manifests at the site that *accesses* the dictionary, rather than
> the program point that put the bad value into it.

I don't find this convincing. Just define setter functions that make the 
necessary checks. This is just a similar "annotation" and guarantees 
correctness.
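Such a checked setter might look like this in Python (the class is a made-up illustration): the invariant is enforced at insertion time, so the error surfaces where the bad pair is created, not where it is read.

```python
class StringIntDict:
    """A dictionary whose setter enforces the intended
    Dictionary[String, Int] invariant at insertion time."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        # The check runs where the pair is created, so a violation
        # is reported at the insertion site rather than at a later read.
        if not isinstance(key, str) or not isinstance(value, int):
            raise TypeError("bad pair: %r -> %r" % (key, value))
        self._data[key] = value
    def get(self, key):
        return self._data[key]

ages = StringIntDict()
ages.put("alice", 30)
print(ages.get("alice"))   # -> 30
# ages.put(42, "thirty") raises TypeError at the insertion site
```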

>>However, assume you want to check the type of a function, for example as 
>>passed as a value from somewhere, against a function type without 
>>actually calling it. One could imagine ways to implement this, by 
>>encoding function type descriptions as lists, storing them as meta 
>>information for functions, and checking a function's type description 
>>against a required type description for compatibility recursively. 
>>However, it would not be possible to make this execute efficiently in 
>>the general case. (And this is probably one of the reasons why ANSI 
>>Common Lisp doesn't allow for dynamic checks for function types via 
>>TYPEP and CHECK-TYPE.)
>>
>>So here, a static type system indeed improves expressive power
>>because it eliminates a serious overhead of runtime type checking
>>for functions (provided you would otherwise actually want/need to
>>"regularly" check function types in a concrete dynamically typed
>>setting).  Is this close to what you mean?
> 
> Well, I don't really care about the performance overhead very much,
> with one exception.[*] It's a red herring, imo.

No, in this case I tend to disagree (funnily, in favor of statically 
typed languages). The typical overhead for checking values against an 
expected type is at most an additive constant. The overhead for 
checking functions against an expected type, as described above, would 
be at least proportional to the size of the type descriptors. This is 
a much more serious, conceptual overhead.

In such a case, I would agree that this is related to expressive power. 
A statically typed language would enable me to write a program similar 
in behavior, but with _considerably_ less performance overhead, under 
the assumption that I need to do those checks regularly.

> The real issue is that having a static type system with polymorphic
> types lets me push the work of checking the consistency of all
> polymorphic instantiations onto the compiler. That's something I just
> don't have to think about anymore -- I don't have to write tests for
> it, and I don't have to worry about getting it wrong, ever.
> 
> This, in turn, lets me write code in a much more generic style. In
> particular, I shifted to using statically typed languages when I
> started writing a lot of higher-order functional code. Once you start
> using things like combinators and partial application, having that
> consistency check becomes extremely helpful, because the dynamic
> errors that arise in erroneous programs can happen far from their real
> point of origin in the source code.

Yes, I think these are examples for which a static type system is really 
helpful, regardless of expressive power.

> I find non-polymorphic type systems to definitely be a straitjacket,
> and for simple first-order code a polymorphic type system doesn't help
> much (though it doesn't hurt either).

Again, I disagree. Static type systems don't allow me to do full-fledged 
dynamic metaprogramming. This can hurt in certain scenarios.

> So for things like routine
> scripting I use Python without too many problems. (I'd use Scheme or
> ML, except that Python has a lot more libraries for random utility
> stuff like file globbing and pathname operations.)
> 
> [*] The exception is that checking return values can lead to loss of
> tail recursion. Since this can potentially alter the complexity class
> of the space usage, I do care about that.

Yes, another indication that a static type system may increase 
expressive power.

So, perhaps a good summary could be that static type systems increase 
expressive power when you want to reason about higher-order functions 
(and not just call them). Agreed?


Pascal
From: Jesse Tov
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <slrnbpubnh.83r.tov@tov.student.harvard.edu>
Pascal Costanza <········@web.de>:
> Again, I disagree. Static type systems don't allow me to do full-fledged 
> dynamic metaprogramming. This can hurt in certain scenarios.

This sounds like a worthy area for research.  Surely it's not
_impossible_, even if we don't know how to do it.

Jesse
-- 
"A hungry man is not a free man."      --Adlai Stevenson
From: Adrian Hey
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnnpjh$hbj$2$8300dec7@news.demon.co.uk>
Jesse Tov wrote:

> Pascal Costanza <········@web.de>:
>> Again, I disagree. Static type systems don't allow me to do full-fledged
>> dynamic metaprogramming. This can hurt in certain scenarios.
> 
> This sounds like a worthy area for research.  Surely it's not
> _impossible_, even if we don't know how to do it.

I believe people who know better than I do say it is impossible
(something to do with Gödel's theorem, I gather), but it's a lame
argument against any form of static typing. It just means that
sometimes you do need dynamic typing too, AFAICS.

Regards
--
Adrian Hey
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bno11r$gdl$1@newsreader2.netcologne.de>
Jesse Tov wrote:

> Pascal Costanza <········@web.de>:
> 
>>Again, I disagree. Static type systems don't allow me to do full-fledged 
>>dynamic metaprogramming. This can hurt in certain scenarios.
> 
> 
> This sounds like a worthy area for research.  Surely it's not
> _impossible_, even if we don't know how to do it.


No, I think it's impossible. Dynamic metaprogramming means that I can 
potentially alter any definition (data, types, functions, etc.) that was 
defined for the currently running program. But I am happy to be proven 
wrong.

Until then, I use the tools that work _right now_.


Pascal
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnj9in$v2o$1@f1node01.rhrz.uni-bonn.de>
Ed Avis wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>>Should we then conclude that compile-time syntax checking is not
>>>worth having?
>>
>>No. Syntax errors make the program fail, regardless whether this is
>>checked at compile-time or at runtime.
>>
>>A type "error" detected at compile-time doesn't imply that the
>>program will fail.
> 
> Actually it does, in a statically typed language.

No, it doesn't.

> If you write a
> function which expects a Boolean and you pass it a string instead,
> it's going to fail one way or another.

Yes, that's one example. This doesn't mean that this implication always 
holds. What part of "doesn't imply" is the one you don't understand?

> OK, the bad call of that function might never be reachable in actual
> execution, but equally the syntax error in Tcl code might not be
> reached.  I'd rather find out about both kinds of mistake sooner
> rather than later.

I don't care about unreachable code in this specific context. A part of 
the program that passes a value of type "don't know" to a variable of 
type "don't know" might be unacceptable to a static type system, but 
might still not throw any exceptions at all at runtime. Or, in other 
scenarios, it might throw only exceptions that you can correct at 
runtime, which might still be useful, or even the preferred behavior.

> (I do mean a type error and not a type 'error' - obviously if you have
> some mechanism to catch exceptions caused by passing the wrong type,
> you wouldn't want this to be checked at compile time.)

Exactly.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Ed Avis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <l1ekwse4ws.fsf@budvar.future-i.net>
Pascal Costanza <········@web.de> writes:

>>>A type "error" detected at compile-time doesn't imply that the
>>>program will fail.
>> 
>>Actually it does, in a statically typed language.
>
>No, it doesn't.

Well, it's rather a meaningless question since all statically-typed
languages define their semantics only for well-typed expressions, and
so something like

   f :: Int -> Int
   x :: String = "hello"
   f "hello"

has no semantics, and so cannot be said to 'fail' or 'not fail'.  But
it seems pretty clear that if you did bypass the type checking and
compile such code, it would go wrong as soon as it was run.

>>If you write a function which expects a Boolean and you pass it a
>>string instead, it's going to fail one way or another.
>
>Yes, that's one example. This doesn't mean that this implication
>always holds. What part of "doesn't imply" is the one you don't
>understand?

I just gave one example - 'Boolean' and 'string' in the above are just
examples of one possible type mismatch.   But the same holds for any
other type mismatch.  You cannot call a function defined for type X
and pass a value of type Y, where Y is not an instance of X.

>I don't care for unreachable code in this specific context. A part of
>the program that passes a value of type "don't know" to a variable of
>type "don't know" might be unacceptable to a static type sytem,

Actually, an expressive static type system will allow this:

   f :: a -> a

f takes a parameter 'don't know' and returns a result of the same
type.  Or you can have the even more general a -> b, any type to any
type.  Such a function isn't especially useful however, since if you
know nothing at all about what the type supports (eg, not even
equality might be defined) then you can't promise much about the
return value.
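
In Python terms, the closest analogue to `f :: a -> a` is a function
annotated with a type variable; a minimal sketch using the (much later)
`typing` module, where the annotations are checked by external tools
such as mypy rather than at runtime:

```python
from typing import TypeVar

A = TypeVar("A")

def identity(x: A) -> A:
    # With nothing known about A (not even equality), the only total
    # function of type a -> a we can honestly promise is the identity.
    return x
```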

>but might still not throw any exceptions at all at runtime.

>>(I do mean a type error and not a type 'error' - obviously if you
>>have some mechanism to catch exceptions caused by passing the wrong
>>type, you wouldn't want this to be checked at compile time.)
>
>Exactly.

Some statically-typed languages do support this, for example Haskell's
Dynamic library, but you have to ask for it explicitly.  For me, the
common case of a type error is when I've simply made a mistake, and I
would like as much help as possible from the computer to catch the
mistake as early as possible.  But one might want to suppress the
checking occasionally.

(However, with a more expressive type system you don't so often feel
the need to suppress it altogether.)

-- 
Ed Avis <··@membled.com>
From: Jacques Garrigue
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <l2vfq6eshh.fsf@suiren.i-did-not-set--mail-host-address--so-shoot-me>
Matthias Blume <····@my.address.elsewhere> writes:

> > (Contrary to Matthias I'm a purely static guy, but I've always been
> > attracted by those fancy dynamic development environments.)
> 
> Do you mean that you don't have any dynamically typed skeletons in
> your closet?  My excuse is that I have been attracted by the static
> side of the force all along, but for a long time I didn't understand
> that this was the case... :-)

To be honest, I was for a long time a fan of Prolog. If I have to
choose an untyped language, I prefer it (pseudo-)intelligent! And you
can also do plenty of fun stuff with meta-programming in Prolog.

Maybe the switch came when I was (as an undergrad) assigned a project
to write a lazy Prolog interpreter in ML. This was so easy that I
didn't see the point of using Prolog afterwards...
Not that I pretend that good compilers for untyped languages are easy
to write. But at least type inference (even trivial) is a bit harder
than writing interpreters, which gives you that warm feeling that
you're doing some real work. 

---------------------------------------------------------------------------
Jacques Garrigue      Kyoto University     garrigue at kurims.kyoto-u.ac.jp
		<A HREF=http://wwwfun.kurims.kyoto-u.ac.jp/~garrigue/>JG</A>
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnhm1v$4uv$1@news.oberberg.net>
Don Geddis wrote:

> Dirk Thierbach <··········@gmx.de> writes:
> 
>>Hindley-Milner type inference always terminates. The result is either
>>a provable mismatch, or a provable-non-mismatch.
> 
> You're completely wrong, which can be easily demonstrated.
> 
> The fact that it terminates isn't the interesting part.  Any inference
> procedure can also "always terminate" simply by having a timeout, and reporting
> "no proof" if it can't find one in time.

Now you are completely wrong.
Of course you can make any type checker terminate by such draconian 
measures, but such a type system would be near-useless: code may 
suddenly become incorrect if compiled on a smaller machine.

There are better ways of doing this, like cutting down on the size of 
some intermediate result during type checking (such as C++, where 
template nesting depth or something similar is cut off at a relatively 
small, fixed number IIRC).
Standard type systems don't have, need or want such cut-offs though :-)

> So what's interesting is whether the conclusions are correct.
> 
> Let's take as our ideal what a dynamic type system (say, a program in Lisp)
> would report upon executing the program.  The question is, can your type
> inference system make exactly the same conclusions at compile time, and predict
> all (and only!) the type errors that the dynamic type system would report at
> run time?
> 
> The answer is no.

Not precisely.
The next question, however, is whether the programs where the answers 
differ are interesting.
There's also a narrow and a broad sense here: obviously, it's not 
possible to type check all Lisp idioms, but are we allowed to present 
alternative idioms that do type check and serve the same purpose?

[Snipping the parts that are getting ad hominem-ish :-( ]

Regards,
Jo
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <y8v77doi.fsf@comcast.net>
Joachim Durchholz <·················@web.de> writes:

> There's also a narrow and a broad sense here: obviously, it's not
> possible to type check all Lisp idioms, but are we allowed to present
> alternative idioms that do type check and serve the same purpose?

I don't have a problem with this, but I don't want to split hairs on
what constitutes an `idiom' vs. what constitutes a complete rewrite.

Presumably, an alternative idiom would involve only *local* changes,
not global ones, and could be performed incrementally, i.e., each
use of an idiom could be independently replaced, thus reducing the
type checking errors.

If a change involves pervasive edits, say, for instance, editing all
callers of some function to pass an extra argument, or wrapping a
conditional branch around all uses of an object, that would not be
an alternative idiom.
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnip9c$klc$1@news.oberberg.net>
·············@comcast.net wrote:
> Joachim Durchholz <·················@web.de> writes:
> 
>>There's also a narrow and a broad sense here: obviously, it's not
>>possible to type check all Lisp idioms, but are we allowed to present
>>alternative idioms that do type check and serve the same purpose?
> 
> I don't have a problem with this, but I don't want to split hairs on
> what constitutes an `idiom' vs. what constitutes a complete rewrite.

When it comes to comparing whether static types are "getting in the 
way", even a complete rewrite would be OK in my book.
Sticking to the original code will, of course, make a more compelling 
case - not because it is better, but because it's better understood.

> If a change involves pervasive edits, say, for instance, editing all
> callers of some function to pass an extra argument, or wrapping a
> conditional branch around all uses of an object, that would not be
> an alternative idiom.

If you mean that Lisp has a point if it can process an arbitrary number 
of parameters in one line of code, while a non-Lisp would need an extra 
handler for each parameter: then I agree.
Though that isn't usually needed for currying languages. (I know of a 
single instance in the standard Haskell libraries where it's needed - 
and in that case, it's about taking apart tuples of various arities, not 
about taking apart parameter lists which is usually a snap.)
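
The Lisp-side claim (one definition covering any number of parameters
in one line) can be illustrated in Python for comparison; a toy example
of mine:

```python
def sum_all(*args):
    # A single variadic definition handles any arity, where a language
    # without variadics would need a separate handler per parameter count.
    total = 0
    for x in args:
        total += x
    return total
```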

Regards,
Jo
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnhphc$reg$1@newsreader2.netcologne.de>
Joachim Durchholz wrote:

> The next question, however, is whether the programs where the answers 
> differ are interesting.
> There's also a narrow and a broad sense here: obviously, it's not 
> possible to type check all Lisp idioms, but are we allowed to present 
> alternative idioms that do type check and serve the same purpose?

No, and that's exactly the point. "We" can write completely idiomatic 
and well-behaved Lisp code that works, can be understood and maintained 
by other Lispers, and provides various useful additional behavior at 
runtime.

Why should we rewrite it just to make a static type checker happy? 
That's redundant work with no obvious advantages.

Why should we, on the other hand, accept statically type-checked code 
that does less than our straightforward solutions?

We could _additionally_, if we wanted to, write our code in a style that 
is acceptable to a static type checker without switching the language. 
We could add a full-blown static type checker to our language and ask 
for a different coding style for those parts of the code that would use 
it. See Qi and ACL2 for two real-world examples.

This is the gist of unrestricted expressive power: we objectively have 
more options! A language that forces your code to adhere to a static 
type system has fewer options!

You may not care about the loss of options. You might even think it's 
an advantage to have fewer options. But you objectively have fewer 
options for writing well-behaved code!


Pascal
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <278v61-6k4.ln1@ID-7776.user.dfncis.de>
Don Geddis <···@geddis.org> wrote:
>> ·············@comcast.net reasonably noted:
>> > If there are three values that can arise --- provable-mismatch,
>> > provable-non-mismatch, and undecided --- then you cannot assume that
>> > ~provable-mismatch = provable-non-mismatch.
> 
> Dirk Thierbach <··········@gmx.de> writes:
>> Hindley-Milner type inference always terminates. The result is either
>> a provable mismatch, or a provable-non-mismatch.
> 
> You're completely wrong, which can be easily demonstrated.

Unfortunately, I can *prove* that the HM-type inference always terminates,
without any timeout, by induction on the length of the expression.

> The fact that it terminates isn't the interesting part.  Any
> inference procedure can also "always terminate" simply by having a
> timeout, and reporting "no proof" if it can't find one in time.

Yes, you can do that. But it's not done during HM-type inference.

> Let's take as our ideal what a dynamic type system (say, a program
> in Lisp) would report upon executing the program.  The question is,
> can your type inference system make exactly the same conclusions at
> compile time, and predict all (and only!) the type errors that the
> dynamic type system would report at run time?

> The answer is no.

Right. It cannot, because that question is not decidable. 

> That's one obvious case, so even you know that your claim of a "provable
> mismatch" is incorrect.  

It's still a provable mismatch. Only that part of the code never gets
executed, so you don't have a dynamic type error. I would consider a
program that has a branch containing an error, but fortunately never
executes that branch, pretty bogus. I don't see any advantage
in admitting such a program. It's a bad program, and you should either
correct the error in the dead branch, or remove the dead branch 
completely if it isn't going to be executed anyway.

> There are programs that will never have run-time
> errors, but your static type inference will claim a type error.

Yes. But these programs will have problems, even without a run-time error. 
Rewrite them to increase the quality of your software.

>> or because you have an implicit restriction for possible arguments to this
>> expression the type system doesn't know about, than you could call it a
>> "valid program", but it will still be rejected, yes.

> For example:
>        (defun foo (x)
>          (check-type x (integer 0 10))
>          (+ 1 x) )
>        (defun fib (n)
>          (check-type n (integer 0 *))
>          (if (< n 2)
>              1
>              (+ (fib (- n 1)) (fib (- n 2))) ))
>        (print (foo (fib 5)))

> This program prints "9", and causes no run-time type errors.  Will
> it be successfully type-checked at compile time by a static system?
> Almost certainly the answer is no.

It will be successfully type checked, because the static type system
does not allow you to express assumptions about value ranges of types.
These things have to be checked dynamically, as in your program. So the
type system "doesn't get in the way": It admits the program.

> Unfortunately, the only way to figure this out is to actually
> compute the fifth Fibonacci number, which surely no static type
> inference system is going to do.

Yes. That's why you cannot express such restrictions statically.

> Do you now accept that your static type inference systems do NOT partition
> all programs into "either a provable [type] mismatch, or a provable [type]
> non-mismatch"?

No. But you probably don't understand what I mean by that. A provable
type mismatch means that the program contains a location where it can
be statically verified that executing the location will cause trouble.
The type system cannot statically check that this location will indeed
be executed, so it might (wrongly) reject the program. But this
rejection is acceptable, because the program is bogus. On the other
hand, if there is no static type mismatch, that doesn't mean that the
program will not crash because of runtime errors (division by zero, or
dynamically checked restrictions).

> Finally, to get back to the point of the dynamic typing fans:
> realizing that type inference is not perfect, we're annoyed to be
> restricted to writing only programs that can be successfully type
> checked at compile time.

You may not believe it, but I perfectly understand that :-) The
problem is that this is just not true (with a good static type
system): It is not a real restriction. If your program is a good
program, it will type check. If it doesn't type check, then there is
something wrong with it.

I am annoyed in the very same way if I have to write programs in a
language with a bad type system (say, C++). I cannot express myself as
abstractly as I would want to, I have to write down lots of
unnecessary type annotations, and I have to invent tricks to please the
type checker and let it allow me to do what I want to do. Really
horrible.

Again, try thinking of the static type system as an automatic testing
tool. Saying that you want to write programs that will be rejected by
the static typing is like saying that you want to write programs that
will be rejected by your unit tests. It just doesn't make any sense;
the unit tests are there to guarantee that you write good quality
software, so why would you want to ignore them? The only case when you
want to do that is if they are bad tests; when they point to problems
that are not really there. But with a good static type system, that
doesn't happen.

- Dirk
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <r80yrb1b.fsf@ccs.neu.edu>
> Don Geddis <···@geddis.org> wrote:
>
>> For example:
>>        (defun foo (x)
>>          (check-type x (integer 0 10))
>>          (+ 1 x) )
>>        (defun fib (n)
>>          (check-type n (integer 0 *))
>>          (if (< n 2)
>>              1
>>              (+ (fib (- n 1)) (fib (- n 2))) ))
>>        (print (foo (fib 5)))
>
>> This program prints "9", and causes no run-time type errors.  Will
>> it be successfully type-checked at compile time by a static system?
>> Almost certainly the answer is no.

Dirk Thierbach <··········@gmx.de> writes:
> It will be successfully type checked, because the static type system
> does not allow you to express assumptions about value ranges of types.

I was working on the assumption that the type language *would* allow
one to express arbitrary types.  Certainly one can create a
sufficiently weak static type system that terminates under all
conditions and produces correct answers within the system.  Lisp has
one:  everything is a subtype of type t and all programs pass.  The
inference is trivial.

But I surely wouldn't be impressed by a type checker that allowed this
to pass:

(defun byte-increment (x)
  (check-type x (integer 0 (256)))
  (+ x 1))

(defun fib (n)
  (if (< n 2)
      1
      (+ (fib (- n 1)) (fib (- n 2))))) 

(print (byte-increment (fib 13)))

> On the other hand, if there is no static type mismatch, that doesn't
> mean that the program will not crash because of runtime errors
> (division by zero, or dynamically checked restrictions).

I think most people here were assuming that passing an integer greater
than 255 to a routine expecting a single 8-bit byte is a type error,
and something that could cause a crash.
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <817071-bsg.ln1@ID-7776.user.dfncis.de>
Joe Marshall <···@ccs.neu.edu> wrote:
> Dirk Thierbach <··········@gmx.de> writes:
>> It will be successfully type checked, because the static type system
>> does not allow you to express assumptions about value ranges of types.

> I was working on the assumption that the type language *would* allow
> one to express arbitrary types. 

It doesn't. (There are languages with type languages that do allow
one to express arbitrary types, but the consequence is that typing
is no longer decidable at compile-time, so your compilation may
not terminate. For many people, this is not acceptable.)

> Certainly one can create a sufficiently weak static type system that
> terminates under all conditions and produces correct answers within
> the system.

Yes. The trick is that there is a fine line between "too weak" and
"not decidable". As I repeatedly said, it helps to think of the
static type system as a tool that automatically writes tests in
a limited area, but these tests are more powerful than unit tests
because they work on all possible execution paths. But this tool
clearly won't write all the tests you need, so for the rest, you
insert checks or write the unit tests by hand.

> But I surely wouldn't be impressed by a type checker that allowed this
> to pass:

[more stuff with value ranges]

It's not a question whether it will pass or fail. You cannot express 
this in the type language.

Maybe the misunderstanding is that you and others think that a static
type checker must now check everything at compile time that is checked
at runtime with dynamic typing. It doesn't, and in those cases where
it doesn't, you of course write the same tests in a statically typed
language as you do in a dynamically typed language.

But that doesn't mean static typechecking is not useful; it *will*
help you in a lot of cases.

> I think most people here were assuming that passing an integer greater
> than 255 to a routine expecting a single 8-bit byte is a type error,
> and something that could cause a crash.

But that's not how it works. What you can use the static type system
for is to make two different types, say Byte and Integer, and then
write a conversion routine from Integer to Byte that does a range check,
and the type checker will make sure that this conversion routine
is called everywhere where it is necessary. So you can use the type
checker to remind you when you forget that. (That's quite helpful,
because it is easy to forget adding a manual type check.)

(And, BTW, that's quite different from C where the static typechecker
allows you to assign values of different ranges without ever asking
for a range check.)
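
A runtime-only sketch of this Byte/Integer scheme in Python (names are
mine; note that without a static checker, forgetting to call the
conversion routine is only caught when the code actually runs):

```python
class Byte:
    """A distinct type wrapping an int, obtained only through a
    range-checked conversion from Integer."""

    def __init__(self, n):
        if not isinstance(n, int) or not (0 <= n <= 255):
            raise ValueError("%r is out of range for a Byte" % (n,))
        self.value = n

def byte_of_int(n):
    # The single conversion point. A static type checker would force
    # every Integer-to-Byte crossing to go through here; in Python the
    # range check still runs, but nothing reminds you to call it.
    return Byte(n)
```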

- Dirk
From: Don Geddis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87u15uxzuw.fsf@sidious.geddis.org>
> > Dirk Thierbach <··········@gmx.de> writes:
> >> Hindley-Milner type inference always terminates. The result is either
> >> a provable mismatch, or a provable-non-mismatch.

I wrote:
> > You're completely wrong, which can be easily demonstrated.

Dirk Thierbach <··········@gmx.de> writes:
> Unfortunately, I can *prove* that the HM-type inference always terminates,
> without any timeout, by induction on the length of the expression.

It was obvious from later in my posting that I was concerned about your
"the result is either..." claim.  I wasn't objecting to whether HM terminates,
as I stated explicitly elsewhere.

I wrote:
> > For example:
> >        (defun foo (x)
> >          (check-type x (integer 0 10))
> >          (+ 1 x) )
> >        (defun fib (n)
> >          (check-type n (integer 0 *))
> >          (if (< n 2)
> >              1
> >              (+ (fib (- n 1)) (fib (- n 2))) ))
> >        (print (foo (fib 5)))
> > This program prints "9", and causes no run-time type errors.  Will
> > it be successfully type-checked at compile time by a static system?
> > Almost certainly the answer is no.

Dirk Thierbach <··········@gmx.de> writes:
> It will be successfully type checked, because the static type system
> does not allow you to express assumptions about value ranges of types.

Now hang on a minute!  So far we've been talking about "static type systems"
in general.  And when dynamic fans bring up Java or C++, static fans reply
that "yes, admittedly those are static type systems -- but they aren't good
ones!  We're only talking about good ones."  And usually ML/Haskell/Ocaml is
mentioned.  Fine.

But this conversation was _never_ about a particular static type system.  How
can you possibly claim that "THE static type system" doesn't allow something?
Which, one, exactly, are you referring to?  I was referring to the abstract
theory of compile-time checking of type errors.  This whole thread began with
people making a claim that static type checking in general was a good idea,
and that any future modern language would necessarily need to have it.  (And,
not so subtly, that languages without it like Lisp were outdated.)

So where do you get off saying that no static type system could possibly allow
one to express value ranges of types?

> These things have to be checked dynamically, as in your program. So the
> type system "doesn't get in the way": It admits the program.

Well, if so, then now this whole argument just falls apart.  Perhaps nobody is
disagreeing about anything.  Let's see what we all might agree on:

1. Compile-time detection of source code bugs is a good idea (syntax errors,
   type errors, etc.)
2. Some kinds of inference can help find more cases of compile-time errors.
3. Some errors can be caught at compile time, while others can't be found
   until run time.

Seems pretty non-controversial.  And in this sense, there's basically no
difference between Lisp and Haskell.  Both do some checks at compile time,
both do type inference at compile time (esp. in a Lisp implementation like
CMUCL), both do many type checks at runtime, and both have lots of other
compile-time and run-time error checking that doesn't involve types.

So let me turn this back on you: what _exactly_ do you object to about Lisp
code?  As far as I can tell, given your increasingly weak claims, it belongs
in the same class of languages as your favorites (Haskell, Ocaml).

Surely I've misunderstood.  Can you state a sharp objection about something
you think is done poorly in Lisp (on the topic of types) that is done well
in Haskell/Ocaml?

> A provable type mismatch means that the program contains a location where
> it can be statically verified that executing the location will cause
> trouble.  It cannot statically check that this location will be indeed
> executed, so it might (wrongly) reject the program. But this rejection is
> acceptable, because the program is bogus. On the other hand, if there is no
> static type mismatch, that doesn't mean that the program will not crash
> because of runtime errors (division by zero, or dynamically checked
> restrictions).

You need to be much more precise about this.  It appears that you're simply
saying that some errors will get caught at compile time, and others at run
time.  And you can't even claim that all "type" errors will get caught at
compile time.  At least, for any general definition of "type".

And don't try to get around this by saying "types are exactly those things
definable by ML/Haskell/Ocaml at compile time".  That's a circular definition.
Those are implementations, and possible solutions to a type problem.  I want
the freedom to propose different solutions.  Tell me what you think a type is,
without referencing existing implementations.

> Again, try thinking of the static type systems as an automatic testing
> tool.

Look: If I'm not required to add any type annotations, and if the compiler
only reports provable type errors (vs. "unprovable type safety"), I don't
think anyone objects.  But in that case, Common Lisp is already a fine
language to use as a base.  Such an "automatic testing tool" can easily be
added on top as an option, as shown by the CMUCL implementation.

Yet many people on this thread seem to claim that static typing _must_ be a
primitive part of the language definition, and it would be "impossible" to
back-fit it onto a language that wasn't designed for it from the beginning.

Something doesn't add up here.

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
There's so much comedy on television.  Does that cause comedy in the streets?
	-- Dick Cavett, mocking the TV-violence debate
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <4kl071-94k.ln1@ID-7776.user.dfncis.de>
Don Geddis <···@geddis.org> wrote:

> It was obvious from later in my posting that I was concerned about
> your "the result is either..." claim.  I wasn't objecting to whether
> HM terminates, as I stated explicitly elsewhere.

Since you made the comment about stopping with a timeout, I assumed
that was your point.

> Now hang on a minute!  So far we've been talking about "static type
> systems" in general.  And when dynamic fans bring up Java or C++,
> static fans reply that "yes, admittedly those are static type
> systems -- but they aren't good ones!  We're only talking about good
> ones."  And usually ML/Haskell/Ocaml is mentioned.  Fine.

> But this conversation was _never_ about a particular static type system. 

For me, it was about a particular kind of static type system. The C++
or Java type systems are a lot weaker and more complicated. They don't
have type inference, either. I don't mind discussing them, but
they clearly belong to different categories.

Maybe let's call the one "functional static typing" and the other
"ad-hoc static typing". I am open to better names. I want to talk
about functional static typing.

> How can you possibly claim that "THE static type system" doesn't
> allow something?

I meant functional static typing.

> I was referring to the abstract theory of compile-time checking of
> type errors.

I have never seen an abstract theory of compile-time checking of type
errors. Any pointers?

> This whole thread began with people making a claim that static type
> checking in general was a good idea,

So far I agree.

> and that any future modern language would necessarily need to have it. 

That's nonsense. 

> (And, not so subtly, that languages without it like Lisp were
> outdated.)

That's also nonsense.

> So where do you get off saying that no static type system could
> possibly allow one to express value ranges of types?

Show me one that can verify at compile time that these value ranges
will always be respected. I don't think that is possible. One
can do dataflow analysis and get a very conservative estimate of
what ranges the values can be in, but that's not the same thing.

> Well, if so, then now this whole argument just falls apart. Perhaps
> nobody is disagreeing about anything.

I hope so. It sure would simplify this discussion :-)

> Let's see what we all might agree on:

> 1. Compile-time detection of source code bugs is a good idea (syntax errors,
>   type errors, etc.)
> 2. Some kinds of inference can help find more cases of compile-time errors.
> 3. Some errors can be caught at compile time, while others can't be found
>   until run time.

> Seems pretty non-controversial.  

Yep.

> So let me turn this back on you: what _exactly_ do you object to
> about Lisp code?

I don't object to Lisp, and I never did (and I have repeatedly said
so). Lisp is a fine language. What I object to is claims that Lisp
is "better" because it is dynamically typed. (I have also repeatedly
said this.)

> Surely I've misunderstood.

Looks like it.

> Look: If I'm not required to add any type annotations, and if the
> compiler only reports provable type errors (vs. "unprovable type
> safety"), I don't think anyone objects.

The problem is that you cannot do *complete* type checking in Lisp.
That makes type checking in Lisp a lot less useful. Therefore, the
static type checking extensions of Lisp that I know of make static
type checking optional, and give only hints if they can. They also
require more type annotations.

In a language where static type checking is mandatory, it is a lot
more useful.

> Yet many people on this thread seem to claim that static typing
> _must_ be a primitive part of the language definition, and it would
> be "impossible" to back-fit it onto a language that wasn't designed
> for it from the beginning.

It can be back-fitted to some degree, but it will never be as powerful
as in a language where it has been designed in right from the start.

(If you think about it, this observation also holds in other
areas that have nothing to do with computers. It's not really
surprising, after all.)

If you drop macros, MOP and imperative features from Lisp, static
type checking will become a lot stronger, but I am sure people will 
object to that :-) (and they're right, of course).

> Something doesn't add up here.

Maybe what's missing is the experience of how mandatory static typing
really helps in developing programs. You won't get this experience by
using Lisp, even together with the type checking aids. I can only
recommend again, if you have some spare time, to try and play with
some other languages. It's never a mistake to learn something new.

- Dirk
From: Don Geddis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87k76pw9ez.fsf@sidious.geddis.org>
I wrote:
> > Now hang on a minute!  So far we've been talking about "static type
> > systems" in general.
> > But this conversation was _never_ about a particular static type system. 

Dirk Thierbach <··········@gmx.de> writes:
> For me, it was about a particular kind of static type system. The C++
> or Java type systems are a lot weaker and more complicated. They don't
> have type inference, either. I don't mind discussing them, but
> they clearly belong to different categories.
> Maybe let's call the one "functional static typing" and the other
> "ad-hoc static typing". I am open to better names. I want to talk
> about functional static typing.

Sure, C++/Java types are weaker, less interesting, and don't have type
inference.  I grant you that ML/Haskell is a more interesting type system.

But those aren't the only choices.  "Static type inference" could easily
refer to some new system even better and more expressive than ML.

> > How can you possibly claim that "THE static type system" doesn't
> > allow something?
> 
> I meant functional static typing.

I still don't think you see the big picture.  You appear to be talking about
a particular static type system (ML?  Haskell/Ocaml?).  Hence your use of
phrases like "THE static type system doesn't allow that."

I _do_ mean to talk about compile-time static type systems, with extensive
type inference.  But you at least need to leave open the possibility that this
might be something other than ML.

> > So where do you get off saying that no static type system could
> > possibly allow one to express value ranges of types?
> 
> Show me one that can verify at compile time that these value ranges
> will always be respected. I don't think that is possible. One
> can do dataflow analysis and get a very conservative estimation
> what ranges the values can be in, but that's not the same thing.

You've answered your own question.  Of course your system does the best
inference it can in a reasonable amount of time, and that results in
conservative bounds on what the data values might possibly be.  Since you're
not executing the code, you can't determine the exact bounds; but you can
often quickly compute a reasonable estimate.

That's much, much better than simply giving up and refusing to use value
ranges at all.

(And by the way: the CMUCL implementation of Common Lisp is an example system
that does type inference with value ranges.)

> > Look: If I'm not required to add any type annotations, and if the
> > compiler only reports provable type errors (vs. "unprovable type
> > safety"), I don't think anyone objects.
> 
> The problem is that you cannot do *complete* type checking in Lisp.

Can you define this term?  What does "complete" type checking mean?

You've already agreed that compile-time analysis can't predict exactly
those type errors that will occur at run-time (with a dynamic type system).
You've also abandoned computing ranges on values.

What could you possibly mean by "complete", in a theoretical sense?  Again,
I fear you simply mean that it is different from ML, which is true but not
interesting.

> That makes type checking in Lisp a lot less useful. Therefore, AFAIK
> the static type checking extensions of Lisps I know of make static
> type checking optional, and give only hints if they can.

This part is true, and intentional.  The whole point (from the dynamic camp)
is to not require the programmer to provide type annotations if they don't
wish to.  This is _precisely_ what dynamic type programmers object to in
static typing languages: having the language insist on additional specification
that isn't strictly necessary to run the code.  Why not leave it up to the
(experienced) programmer to decide what information he chooses to supply?

> They also require more type annotations.

That, I doubt.  Surely, given the same information, a type inference system in
Lisp can make exactly the same deductions as one in Haskell.  What is your
evidence that static type checking in Lisp requires more annotations?

> In a language where static type checking is mandatory, it is a lot
> more useful.

Only by forcing the programmer to provide type information even when he isn't
interested.

Surely you agree that a superior language leaves this choice up to the
programmer?  Why not admit both styles of programming?  For those who want
extensive compile-time type checking, let them add sufficient annotations
for a "complete" check (whatever you meant by that word earlier).  But in
an ideal language, this would be optional, and dynamic typing fans can still
program in their preferred style (resulting in extensive run-time type checks).

Why do you think it's a better language when you prohibit the dynamic/run-time
style of programming, and only allow the static typing version?  Why do you
object to leaving it up to the programmer to decide how much static type
inference they care about in different parts of their code?

> If you drop macros, MOP and imperative features from Lisp, static
> type checking will become a lot stronger, but I am sure people will 
> object to that :-) (and they're right, of course).

Ah, but now you're clearly getting into tradeoffs.  Even if I agree
hypothetically that compile-time static type checks might be a useful tool
for a programmer, we now have to compare the value of that tool with the
value of these other programmer tools (macros, MOP, etc.) that you're telling
me I'll need to give up.

The case for ("complete") static type checking seems to be getting more
dubious...

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <4qa471-vt.ln1@ID-7776.user.dfncis.de>
Don Geddis <···@geddis.org> wrote:
> But those aren't the only choices.  "Static type inference" could easily
> refer to some new system even better and more expressive than ML.

Sure. But I don't know anything about those systems, so I can't really
talk about them.

> I still don't think you see the big picture.  You appear to be talking about
> a particular static type system (ML?  Haskell/Ocaml?).  Hence your use of
> phrases like "THE static type system doesn't allow that."

Yes.

> I _do_ mean to talk about compile-time static type systems, with
> extensive type inference.  But you at least need to leave open the
> possibility that this might be something other than ML.

Ok. In that case, you're completely right: An arbitrary static type
system might well choose to include things like bounds estimation.
But it would be a very stupid idea to reject programs based on that 
analysis, because you will indeed end up with a very large class
of programs that will work correctly at runtime, but will still be
rejected by the type checker. That's a situation that should be avoided
at all costs.

My point was that with "the" static type system of "functionally
static typing", such things can never happen. Hindley-Milner-style
type inference always terminates, with exactly two possible outcomes: either
everything works, or there is a type mismatch. In the latter case, one
can construct examples of values that will provoke this type mismatch
at the specific location(s). Now the compiled code for such a location
doesn't include any dynamic type checks: All type information is gone
at run-time, the only thing that remains are bit patterns. So one such
example of values will cause the compiled code to interpret the same
bit pattern in different ways, for example the bit pattern 0xdeadbeef
might be interpreted in one place as an integer, and in another place
as a function call. If it was supposed to be an integer in the call,
this will make your program crash.

So a program with a type error is not "meaningful". It has a severe
problem built in. The only way to make it "well-behaved" is to make
sure that this particular location never gets executed. 
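A minimal sketch of the two outcomes (the rejected definition is a
made-up example, shown commented out because it does not compile):

```haskell
-- Hindley-Milner inference assigns a most general type with no
-- annotations; the signature below is what GHC infers, and is optional:
double :: Num a => a -> a
double x = x + x

-- An ill-typed definition is rejected at compile time, because no single
-- type can be assigned to both branches:
-- bad = if True then (1 :: Int) else "one"

main :: IO ()
main = print (double 21)   -- prints 42
```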

Clearer now?

> Since you're not executing the code, you can't determine the exact
> bounds; but you can often quickly compute a reasonable estimate.

> That's much, much better than simply giving up and refusing to use value
> ranges at all.

Yep. The point is that range estimation does not provide such critical
information as described above, hence it is not included in "the"
static type-system. It is a nice thing to have, and it would be
certainly interesting as an additional tool. But the only way you can
enforce such checks is dynamically at run-time, if you don't want to
falsely reject a lot of programs.

Maybe one problem in this discussion is that both camps have a different
idea what "type-checking" constitutes.

> (And by the way: the CMUCL implementation of Common Lisp is an
> example system that does type inference with value ranges.)

I know. 

>> The problem is that you cannot do *complete* type checking in Lisp.

> Can you define this term?  What does "complete" type checking mean?

Without any type annotations, you can assign a type to every subterm.

> You've already agreed that compile-time analysis can't predict exactly
> those type errors that will occur at run-time (with a dynamic type system).

No, but it can predict those errors where your program will crash unless
you include dynamic type information. 

> This part is true, and intensional.  The whole point (from the dynamic camp)
> is to not require the programmer to provide type annotations if they don't
> wish to. 

As Hindley-Milner type inference does.

> This is _precisely_ what dynamic type programmers object to in
> static typing languages: having the language insist on additional
> specification that isn't strictly necessary to run the code.

And the point is that this doesn't apply to "functionally statically typed"
languages.

> Why not leave it up to the (experienced) programmer to decide what
> information he choses to supply?

Exactly. OTOH, practical experience suggests that it is often a plus
to annotate your functions, for the same reasons that you write unit
tests first. But you don't have to.

>> They also require more type annotations.

> That, I doubt.  

It is not possible to do type inference without any type annotations
once you include imperative features, unless you are very careful
how these features are formulated. It's easy to prove me wrong on
this point: Just show me an algorithm to do type inference on a 
sufficiently large sublanguage of Lisp that has the following properties:

* it includes setq with the usual Lisp semantics;
* it doesn't have any type annotations;
* it guarantees that any well-typed program will never encounter a 
  type error at runtime (those that arise from using Haskell-like 
  datatypes don't count as type errors in this sense, but you
  can ignore such datatypes; they're not important here);
* it doesn't reject a large number of programs where I can prove 
  the above by hand, with the side conditions:
  - the type errors don't occur in dead code;
  - I am allowed to add arbitrary calls of functions to the program
    (as long as they are well typed).
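To illustrate why mutation interacts badly with inference, here is a
minimal Haskell sketch, with an IORef standing in for a setq-style
mutable variable (ML resolves the same tension with its value
restriction); this is an analogy, not a Lisp counterexample:

```haskell
import Data.IORef

main :: IO ()
main = do
  r <- newIORef (0 :: Int)   -- the cell's element type is now fixed
  writeIORef r 42            -- fine: another Int
  -- writeIORef r "hello"    -- rejected at compile time: r :: IORef Int
  readIORef r >>= print      -- prints 42
```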

- Dirk
From: Adrian Hey
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnoafq$i2g$1$830fa79f@news.demon.co.uk>
Dirk Thierbach wrote:

> Don Geddis <···@geddis.org> wrote:

>> This part is true, and intensional.  The whole point (from the dynamic
>> camp) is to not require the programmer to provide type annotations if
>> they don't wish to.
> 
> As Hindley-Milner type inference does.

This is a point I hoped I had demonstrated reasonably well with the
phonecodes code I posted a couple of days ago: a fully working,
moderately complex (though not particularly useful) strongly typed
Haskell program with no type annotations, none at all.

I might also observe that during development of this program the type
system detected a serious bug, one that would still have been a bug
in an untyped or dynamically typed language (it wasn't just being
awkward :-).

Regards
--
Adrian Hey 
From: Pekka P. Pirinen
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ixbrrxe2j0.fsf@ocoee.cam.harlequin.co.uk>
Don Geddis <···@geddis.org> writes:
> > They also require more type annotations.
> 
> That, I doubt.  Surely, given the same information, a type inference
> system in Lisp can make exactly the same deductions as one in
> Haskell.

But they never are given the same information, because the Lisp system
gets Lisp code and the Haskell system gets Haskell code.  So there's
the issue of side effects mentioned by Dirk Thierbach.  There's
Haskell's list type with all the elements constrained to be the same
type; in Lisp, you have to add type annotations to get this.  No doubt
there are other relevant differences, and these differences are not
accidental: Haskell is that way, so that it can support type inference
with minimal annotations; Lisp is another way, because the flexibility
doesn't cost anything - unless type inference is one of the goals.
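A minimal sketch of the list point (the rejected list is a made-up
example, shown commented out because it does not compile):

```haskell
-- All elements of a Haskell list share one type, so inference
-- propagates information across the whole list with no annotations:
xs :: [Int]          -- what GHC infers (modulo defaulting); optional
xs = [1, 2, 3]

-- ys = [1, "two"]   -- rejected at compile time: the elements disagree

main :: IO ()
main = print (sum xs)   -- prints 6
```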
-- 
Pekka P. Pirinen
Heard at OOPSLA 2000: We didn't pick the best solution [for the architecture],
because it would have required C++ programmers who understand the language.
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <egekwygcvg.fsf@vipe.ii.uib.no>
Don Geddis <···@geddis.org> writes:

> So where do you get off saying that no static type system could
> possibly allow one to express value ranges of types?

One problem with this is that any numeric value would belong to an
infinite number of types.  I suspect this would complicate type
inference somewhat :-)

(Completely arbitrary types are of course undecidable.  E.g. the type
consisting of strings that represent terminating programs :-)

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Don Geddis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <878yn5w7cn.fsf@sidious.geddis.org>
> Don Geddis <···@geddis.org> writes:
> > So where do you get off saying that no static type system could
> > possibly allow one to express value ranges of types?

··········@ii.uib.no writes:
> One problem with this, is that any numeric value would belong to an
> infinite number of types.  I suspect this would complicate type
> inference somewhat :-)
> (Completely arbitrary types are of course undecidable.  E.g. the type
> consisting of strings that represent terminating programs :-)

I agree with you.  That's why I find the (strawman) claim of "ML type
inference is so great everyone should use it!" to be kind of silly.

We should instead explore the space of compile-time inference on source code
to generate interesting annotations for the programmer, especially without
requiring the programmer to add additional information.

Syntax checking, type inference, dead code identification, etc.  Lots of
interesting topics there.  The ML style of "add exactly this additional
data, and get exactly these conclusions" seems to be an awfully specific and
not especially interesting point on the spectrum.

And just for the record: undecidable problems do not mean they need to be
completely avoided.  AI is full of heuristic approaches that return some
useful information even in the face of undecidability.

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
Dysfunction:  The only consistent feature of all your dissatisfying
relationships is you.  -- Despair.com
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <egn0bkl9wm.fsf@sefirot.ii.uib.no>
Don Geddis <···@geddis.org> writes:

> I agree with you.  That's why I find the (strawman) claim of "ML type
> inference is so great everyone should use it!" to be kind of silly.

> We should instead explore the space of compile-time inference on
> source code to generate interesting annotations for the programmer,
> especially without requiring the programmer to add additional
> information.

(I don't know ML well, but I have used Haskell a bit.)  I think the
Haskell type system is an interesting and useful step on the way; it
is very rare that the programmer *needs* to add type annotations
(although it's regarded as good practice).

It could of course go a lot further, but I suspect this is what we can
do easily with current technology, that is mature enough for everyday
use. 

> And just for the record: undecidable problems does not mean they
> need to be completely avoided.

Right.  It's always nice to have the theory in order, but you can
often get away with more pragmatic approaches (C++ template recursion limits,
anyone?) 

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9f93ae$1@news.unimelb.edu.au>
··········@ii.uib.no writes:

>Don Geddis <···@geddis.org> writes:
>
>> So where do you get off saying that no static type system could
>> possibly allow one to express value ranges of types?
>
>One problem with this, is that any numeric value would belong to an
>infinite number of types.  I suspect this would complicate type
>inference somewhat :-)

Probably not as much as you think.  In Haskell, any numeric literal
already belongs to an unbounded and possibly infinite number of types.

For example, if you declare

	instance (Num a, Num b) => Num (a -> b) where ...

then the value 1 belongs to the types Int, Int -> Int, Int -> Int -> Int,
and so on ad infinitum.
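A runnable sketch of such an instance (the pointwise definitions are
one arbitrary choice, not the only sensible one):

```haskell
-- Functions into a numeric type form a Num instance, defined pointwise.
-- With this in scope, the literal 1 also has the types Int -> Int,
-- Int -> Int -> Int, and so on: fromInteger makes a constant function.
instance (Num a, Num b) => Num (a -> b) where
  fromInteger n = const (fromInteger n)
  f + g         = \x -> f x + g x
  f * g         = \x -> f x * g x
  f - g         = \x -> f x - g x
  abs f         = abs . f
  signum f      = signum . f
  negate f      = negate . f

main :: IO ()
main = print ((1 :: Int -> Int) 99)   -- prints 1: a constant function
```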

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <egvfq8jnix.fsf@sefirot.ii.uib.no>
Fergus Henderson <···@cs.mu.oz.au> writes:

> Probably not as much as you think.  In Haskell, any numeric literal
> already belongs to an unbounded and possibly infinite number of
> types.

I realize this, but I think every *value* in the program has one and
only one type.  For numeric literals, this is decided by type
inference, or if that fails to disambiguate, by defaulting rules.  (Or
by type annotations, of course.)

The suggested system would attach an infinity of types to any value,
and I don't think it would be possible to capture that statically in a
meaningful way.

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa5cba1$1@news.unimelb.edu.au>
··········@ii.uib.no writes:

>Fergus Henderson <···@cs.mu.oz.au> writes:
>
>> Probably not as much as you think.  In Haskell, any numeric literal
>> already belongs to an unbounded and possibly infinite number of
>> types.
>
>I realize this, but I think every *value* in the program has one and
>only one type.

Nope.  For example, the value "[]" has types "[Int]", "[[Int]]", "[[[Int]]]",
etc.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: ··········@ii.uib.no
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <egr80p7uo4.fsf@sefirot.ii.uib.no>
Fergus Henderson <···@cs.mu.oz.au> writes:

>> I realize this, but I think every *value* in the program has one and
>> only one type.

> Nope.  For example, the value "[]" has types "[Int]", "[[Int]]", "[[[Int]]]",
> etc.

Okay, I guess this becomes a matter of definition.  But I think the []
overloading is much the same as the numerical literal overloading.
[]::[Int] is not the same as []::[[Int]]; to wit:

  Prelude> let intlist = [] :: [Int]
  Prelude> intlist
  []
  Prelude> intlist : []
  [[]]
  Prelude> intlist : intlist

  <interactive>:1
     Couldn't match `[Int]' against `Int'
        Expected type: [[Int]]
        Inferred type: [Int]
     In the second argument of `(:)', namely `intlist'
     In the definition of `it': it = intlist : intlist

So every value in the program seems to belong to one particular type,
and the system needs to determine the type uniquely before it can be
used.

But to return to the original question: if this isn't it, then I would
like to know what the (possible) definition(s) of a type is.

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Jesse Tov
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <slrnbqdf7q.2pg.tov@tov.student.harvard.edu>
··········@ii.uib.no <··········@ii.uib.no>:
> So every value in the program seems to belong to one particular type,
> and the system needs to determine the type uniquely before it can be
> used.

Not exactly:

    nil = []
    foo = 5 : nil
    bar = 'a' : nil
    ...
    Ok, modules loaded: Test.
    Test> :type foo
    foo :: [Integer]
    Test> :type bar
    bar :: [Char]
    Test> :type nil
    nil :: forall a. [a]

nil has one particular type, its principal type: forall a. [a].  It can
be used at more than one concrete type at the same time, and that
doesn't affect its principal type.

Of course, lots of values have polymorphic principal types, for example:
  (+) :: forall a. Num a => a -> a -> a
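(A small sketch: the same nil can even appear at two concrete types
inside one expression, and the program still type checks.)

```haskell
nil :: [a]   -- the principal type; GHC infers it without the signature
nil = []

-- nil is used at [Integer] and at [Char] in the same expression:
main :: IO ()
main = print (length (5 : nil) + length ('a' : nil))   -- prints 2
```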

> But to return to the original question, if this isn't it, then I would
> like to know what the (possible) definition(s) of a type is.

I like to think of types as sets.  I'm not terribly formal about it.

Jesse
-- 
"A hungry man is not a free man."      --Adlai Stevenson
From: Jacques Garrigue
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <l2ekwvflhq.fsf@suiren.i-did-not-set--mail-host-address--so-shoot-me>
Pascal Costanza <········@web.de> writes:

> Matthias Blume wrote:
> 
> >>No, it's not. There's a class of programs that exhibit a certain
> >>behavior at runtime that you cannot write in a statically typed
> >>language _directly in the language itself_.
> > 
> > This is simply not true.  See above.
> 
> OK, let's try to distill this to some simple questions.
> 
> Assume you have a compiler ML->CL that translates an arbitrary ML 
> program with a main function into Common Lisp. The main function is a 
> distinguished function that starts the program (similar to main in C). 
> The result is a Common Lisp program that behaves exactly like its ML 
> counterpart, including the fact that it doesn't throw any type errors at 
> runtime.
> 
> Assume furthermore that ML->CL retains the explicit type annotations in 
> the result of the translation in the form of comments, so that another 
> compiler CL->ML can fully reconstruct the original ML program without 
> manual help.
> 
> Now we can modify the result of ML->CL for any ML program as follows. We 
> add a new function that is defined as follows:
> 
> (defun new-main ()
>    (loop (print (eval (read)))))
> 
> (We assume that NEW-MAIN is a name that isn't defined in the rest of the 
> original program. Otherwise, it's easy to automatically generate a 
> different unique name.)
> 
> Note that we haven't written an interpreter/compiler by ourselves here, 
> we just use what the language offers by default.
> 
> Furthermore, we add the following to the program: We write a function 
> RUN (again a unique name) that spawns two threads. The first thread 
> starts the original main function, the second thread opens a console 
> window and starts NEW-MAIN.
> 
> Now, RUN is a function that executes the original ML program (as 
> translated by ML->CL, with the same semantics, including the fact that 
> it doesn't throw any runtime type errors in its form as generated by 
> ML->CL), but furthermore executes a read-eval-print-loop that allows 
> modification of the internals of that original program in arbitrary 
> ways. For example, the console allows you to use DEFUN to redefine an 
> arbitrary function of the original program that runs in the first 
> thread, so that the original definition is not visible anymore and all 
> calls to the original definition within the first thread use the new 
> definition after the redefinition is completed. [1]
> 
> Now here come the questions.
> 
> Is it possible to modify CL->ML in a way that any program originally 
> written in ML, translated with ML->CL, and then modified as sketched 
> above (including NEW-MAIN and RUN) can be translated back to ML? For the 
> sake of simplicity we can assume an implementation of ML that already 
> offers multithreading. Again, for the sake of simplicity, it's 
> acceptable that the result of CL->ML accepts ML as an input language for 
> the read-eval-print-loop in RUN instead of Common Lisp. The important 
> thing here is that redefinitions issued in the second thread should 
> affect the internals of the program running in the first thread, as 
> described above.

You have an interesting point here, but it is only partly related to
static typing. It is more about static binding vs. dynamic binding.
That is, whether you are able to dynamically change the definition of
any identifier in the language.
One must recognize that dynamically typed systems, in particular Lisp
and Smalltalk, also give you full dynamic binding, and that this is an
incredibly powerful feature, at least for developers.

Now, would it be possible to do it in a statically typed language?
I don't see why not. Some parts are relatively easy: allowing you to
change function definitions, as long as you don't change the type.  I
say only relatively, because polymorphism may get in your way: you
would have to replace any polymorphic function by a function at least
as polymorphic as the original, whether this polymorphism is really
needed by the rest of the program or not.

Some parts are much more difficult: how to change the data
representation dynamically. This is already difficult in dynamically
typed languages (you must introduce lots of kludges to convert the data
on the fly, probably on an as-needed basis), but in a statically typed
language this must be expressed at the level of types. Matthias'
argument would be that you anyway need to do some formal reasoning to
prove that, knowing the dataflow of your program, you have indeed
introduced all the needed conversions and kludges in your dynamically
typed program, and that this reasoning could be converted to some
kind of types, but this is going to be very hard. The most I know
about this is some work on data versioning, but it doesn't consider
modifications inside a running program.

I'm willing to concede you the point: there may be applications where
you want this ability to dynamically modify the internals of your
program, and, while knowing this is just going to be damned dangerous,
a fully dynamic (both types and binding) language is your only way to
be sure that you will be able to make each and every possible change.
But these applications strike me as being of the high availability
kind, so that the very fact this is so dangerous may be a major
concern.

On the other hand, in the huge majority of cases, this feature is only
used during program development, and once you're done you compile and
optimize, and optimizing actually means losing most of the dynamic
binding to allow inlining.
In those cases, clever statically typed languages like ML compensate
for their staticness in various ways, for instance by allowing
efficient separate compilation as long as interfaces do not change.
that. You may have to restart your program, but do not lose much time
that. And since more bugs are caught by the type system, the need to
correct them is less frequent. You are also provided with an
interactive toplevel, which lets you change some of the definitions at
runtime, at least those functions and variables you have explicitly
declared as mutable. Static typing does not prevent you from running
and modifying interactively a GUI application, it just restricts the
extent of the modifications you can do.

(Contrary to Matthias I'm a purely static guy, but I've always been
attracted by those fancy dynamic development environments.)

---------------------------------------------------------------------------
Jacques Garrigue      Kyoto University     garrigue at kurims.kyoto-u.ac.jp
		<A HREF=http://wwwfun.kurims.kyoto-u.ac.jp/~garrigue/>JG</A>
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m24qxr7491.fsf@hanabi-air.shimizu.blume>
Jacques Garrigue <···@my.signature> writes:

[ ... ]

Thanks for the detailed reply, Jacques!  (I was contemplating a reply
of my own, but I think I have to start behaving again and cut down on
the time I waste on netnews. :-)

> (Contrary to Matthias I'm a purely static guy, but I've always been
> attracted by those fancy dynamic development environments.)

Do you mean that you don't have any dynamically typed skeletons in
your closet?  My excuse is that I have been attracted by the static
side of the force all along, but for a long time I didn't understand
that this was the case... :-)

Matthias
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnems9$j3i$1@news.oberberg.net>
·············@comcast.net wrote:
> My point is that type systems can reject valid programs.

And the point of the guys with FPL experience is that, given a good type 
system [*], there are few if any practical programs that would be wrongly 
rejected.

[*] With "good type system", I mean a type system that has both 
parametric polymorphism (i.e. templates) and type inference (i.e. you 
don't have to write down any types, they are automatically determined by 
the compiler from usage).
Each of these facilities is mildly useful in a static-type context, but 
when they are combined in, say, Hindley-Milner (HM) fashion, the effect 
is awesome. Some of my awe came from the realization that the "how do 
you typecheck this?" challenges can actually be typechecked (though the 
technical details of the solutions were often wrong - nothing is perfect 
in this world.)

Regards,
Jo
From: Joe Marshall
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <n0bmrao0.fsf@ccs.neu.edu>
Joachim Durchholz <·················@web.de> writes:

> ·············@comcast.net wrote:
>> My point is that type systems can reject valid programs.
>
> And the point of the guys with FPL experience is that, given a good
> type system [*], there are few if any practical programs that would be
> wrongly rejected.

We're stating a pretty straightforward, objective, testable hypothesis:  

  ``There exist valid programs that cannot be statically checked by
    such-and-such a system.''

and we get back
 
   `yes, but...'
   `in our experience'
   `*I've* never seen it'
   `if the type system is any good'
   `few programs'
   `no practical programs'
   `no useful programs'
   `isolated case'
   `99% of the time'
   `most commercial programs'
   `most real-world programs'
   `only contrived examples'
   `nothing's perfect'
   `in almost every case'

Excuse us if we are skeptical.
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnjm7p$7p8$1@news.oberberg.net>
Joe Marshall wrote:

> Joachim Durchholz <·················@web.de> writes:
> 
>>·············@comcast.net wrote:
>>
>>>My point is that type systems can reject valid programs.
>>
>>And the point of the guys with FPL experience is that, given a good
>>type system [*], there are few if any practical programs that would be
>>wrongly rejected.
> 
> We're stating a pretty straightforward, objective, testable hypothesis:  
> 
>   ``There exist valid programs that cannot be statically checked by
>     such-and-such a system.''
> 
> and we get back
>  
>    `yes, but...'
>    `in our experience'
>    `*I've* never seen it'
>    `if the type system is any good'
>    `few programs'
>    `no practical programs'
>    `no useful programs'
>    `isolated case'
>    `99% of the time'
>    `most commercial programs'
>    `most real-world programs'
>    `only contrived examples'
>    `nothing's perfect'
>    `in almost every case'
> 
> Excuse us if we are skeptical.

Then be sceptical if you like.
Actually, your hypothesis is ill-worded anyway: it says "valid" without 
specifying what validity means (which has already led us into many 
irrelevant meanderings about what correctness and validity mean, and 
about the claim that they cannot be expressed if you don't have 
specifications). It ignores that some programs need restructuring before 
any useful static types can be associated with their items (you can 
always put a Lisp-style type system on top of a statically-typed 
language, with little effort). It ignores the practical experience of 
people who indeed say "static typing with type inference is not a 
straitjacket but a ladder: supports where it's helpful, and extending my 
reach in directions that were unthinkable before they were invented".
However, all I hear is the question "how do you type this": whining for 
the known and trusted idioms, instead of even thinking about the idioms 
that static typing might provide.

Just take the testimony and accept it. Just as I take the testimony that 
one can write large programs in Lisp. Stay sceptical if you like - I 
certainly do.

I'm just tired of the Costanza-style constant nitpicking, topic evasion, 
and not answering to critical questions (such as his experience with 
modern type systems - he claims it but shows little evidence for that, 
something that makes me suspicious about his credibility in general).

Technical newsgroups are a place for exchanging new ideas, and for 
helping people along. This thread has already served that purpose to the 
utmost extent imaginable, what's left is squabbling.

Good night and Over and Out for me (except for direct answers to my 
posts, where appropriate).

Regards,
Jo
From: Matthew Danish
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031027182817.GN1454@mapcar.org>
On Mon, Oct 27, 2003 at 05:57:00PM +0100, Joachim Durchholz wrote:
> (you can always put a Lisp-style type system on top of a
> statically-typed language, with little effort). 

This is not true, as I've pointed out on several occasions.  Such
systems do not behave like a Lisp-style type system when dealing with
redefinition.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnk5li$egb$2@news.oberberg.net>
Matthew Danish wrote:

> On Mon, Oct 27, 2003 at 05:57:00PM +0100, Joachim Durchholz wrote:
> 
>>(you can always put a Lisp-style type system on top of a
>>statically-typed language, with little effort). 
> 
> This is not true, as I've pointed out on several occasions.  Such
> systems do not behave like a Lisp-style type system when dealing with
> redefinition.

Add a State monad and they will do even that.

(I thought we were talking about differences between compile-time and 
run-time typing, not about specifics of Lisp.)

Regards,
Jo
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <1xt2gt7c.fsf@comcast.net>
Dirk Thierbach <··········@gmx.de> writes:

> Yep. It turns out that you take away lots of bogus programs, and the
> sane programs that are taken away are in most cases at least questionable
> (they will be mostly of the sort: There is a type error in some execution
> branch, but this branch will never be reached), and can usually be 
> expressed as equivalent programs that will pass.

I don't understand why you think that most of them will be `dead code'.

I don't understand why a smart type checker would complain about dead
code.
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <5fqo61-dg7.ln1@ID-7776.user.dfncis.de>
·············@comcast.net wrote:
> I don't understand why you think that most of them will be `dead code'.

Because otherwise the code will be executed, and this will result
in a crash -- there must be a reason why the type checker complains.

> I don't understand why a smart type checker would complain about dead
> code.

Because in general, it is not decidable if some part of the code will
be executed or not (halting problem).

- Dirk
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <7k2uf77d.fsf@comcast.net>
Dirk Thierbach <··········@gmx.de> writes:

> ·············@comcast.net wrote:
>> I don't understand why you think that most of them will be `dead code'.
>
> Because otherwise the code will be executed, and this will result
> in a crash -- there must be a reason why the type checker complains.

Allow me to rephrase:  I don't understand why you think *most* of them
will be dead code.  Aren't there other cases where the type checker
would complain?


>
>> I don't understand why a smart type checker would complain about dead
>> code.
>
> Because in general, it is not decidable if some part of the code will
> be executed or not (halting problem).
>
> - Dirk
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <xDgmb.20581$HS4.72060@attbi_s01>
"Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
> ... there exist programs that work but
> that cannot be statically typechecked. These programs objectively exist.
> By definition, I cannot express them in a statically typed language.

I agree these programs exist.

It would be really interesting to see a small but useful example
of a program that will not pass a statically typed language.
It seems to me that how easy it is to generate such programs
will be an interesting metric.

Anyone? (Sorry, I'm a static typing guy, so my brain is
warped away from such programs. :-)


Marshall
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <wuaufe52.fsf@comcast.net>
"Marshall Spight" <·······@dnai.com> writes:

> It would be really interesting to see a small but useful example
> of a program that will not pass a statically typed language.
> It seems to me that how easy it is to generate such programs
> will be an interesting metric.

Would this count?

(defun noisy-apply (f arglist)
  (format t "I am now about to apply ~s to ~s" f arglist)
  (apply f arglist))
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ZVqmb.23522$e01.47903@attbi_s02>
<·············@comcast.net> wrote in message ·················@comcast.net...
> "Marshall Spight" <·······@dnai.com> writes:
>
> > It would be really interesting to see a small but useful example
> > of a program that will not pass a statically typed language.
> > It seems to me that how easy it is to generate such programs
> > will be an interesting metric.
>
> Would this count?
>
> (defun noisy-apply (f arglist)
>   (format t "I am now about to apply ~s to ~s" f arglist)
>   (apply f arglist))

Interesting, interesting. Thanks for taking me seriously!

I'm trying to map this program into Java, and it's possible
but there are enough different ways to go about it that
I'm having a hard time reasoning about the result.

For one thing, what would f be? Probably an instance of
a class that implements a specific interface. But then,
implementing a specific interface is like saying we
know what type f is. Is it a function that takes a
single argument of type Object? If we concede
all those points, then this is fairly easy to map
into Java. If we say that f can take any single argument
then we can do it with reflection if we are willing
to add 15 lines of code, which is certainly not pretty.
If it can take any number of arguments then it's
looking triply awful now.

It's starting to feel like this is merely a demonstration
of Java's weakness in generic programming, and
not something hooked into Goedel.

Anyone have any comments?


Marshall
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3r811z42i.fsf@rigel.goldenthreadtech.com>
"Marshall Spight" <·······@dnai.com> writes:

> <·············@comcast.net> wrote in message ·················@comcast.net...
> > "Marshall Spight" <·······@dnai.com> writes:
> >
> > > It would be really interesting to see a small but useful example
> > > of a program that will not pass a statically typed language.
> > > It seems to me that how easy it is to generate such programs
> > > will be an interesting metric.
> >
> > Would this count?
> >
> > (defun noisy-apply (f arglist)
> >   (format t "I am now about to apply ~s to ~s" f arglist)
> >   (apply f arglist))
> 
> Interesting, interesting. Thanks for taking me seriously!
> 
> I'm trying to map this program into Java, and it's possible

Isn't Java a particularly bad thing to try here?

> but there are enough different ways to go about it that
> I'm having a hard time reasoning about the result.
> 
> For one thing, what would f be? Probably an instance of

As written, f can be any object of any type.  arglist is any list of
arguments with arity 0 to <hardware limit>.  It will do something
useful in all cases, and something more useful if f is a function.

/Jon
From: Brian McNamara!
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bne24o$4e9$1@news-int.gatech.edu>
"Marshall Spight" <·······@dnai.com> once said:
><·············@comcast.net> wrote in message ·················@comcast.net...
>> "Marshall Spight" <·······@dnai.com> writes:
>> > It would be really interesting to see a small but useful example
>> > of a program that will not pass a statically typed language.
>> > It seems to me that how easy it is to generate such programs
>> > will be an interesting metric.
>>
>> Would this count?
>>
>> (defun noisy-apply (f arglist)
>>   (format t "I am now about to apply ~s to ~s" f arglist)
>>   (apply f arglist))
>
>Interesting, interesting. Thanks for taking me seriously!
>
>I'm trying to map this program into Java, and it's possible
...
>Anyone have any comments?

Well, in C++ you could say

   template <class F, class A>
   typename result_of<F(A)>::type
   noisy_apply( const F& f, const A& a ) {
      cout << "I am now about to apply " << f << " to " << a << endl;
      return f(a);
   }

These assume that both "f" and "a" work with the out-streaming operator
(<<).  This is just an ad-hoc version of what would be "class Show" in
Haskell.  In C++ practice, most functions aren't "showable", but many
common data types are.  So the most useful version of the function would
probably be

   // This version works for all "f" and all Showable "a"
   template <class F, class A>
   typename result_of<F(A)>::type
   noisy_apply( const F& f, const A& a ) {
      cout << "I am now about to apply a function with type " 
           << typeid(f).name() << " to the value " << a << endl;
      return f(a);
   }

Again, provided that we have some notion of "class Show" in our
statically-typed language, then examples like these are easy to type.
(What dynamically-typed languages typically buy you is that every object
in the system provides some basic methods like toString(), which
eliminates the Show-able constraint that the statically-typed version
needs.)

-- 
 Brian M. McNamara   ······@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <n0bpcje0.fsf@comcast.net>
·······@prism.gatech.edu (Brian McNamara!) writes:

> "Marshall Spight" <·······@dnai.com> once said:
>><·············@comcast.net> wrote in message ·················@comcast.net...
>>> "Marshall Spight" <·······@dnai.com> writes:
>>> > It would be really interesting to see a small but useful example
>>> > of a program that will not pass a statically typed language.
>>> > It seems to me that how easy it is to generate such programs
>>> > will be an interesting metric.
>>>
>>> Would this count?
>>>
>>> (defun noisy-apply (f arglist)
>>>   (format t "I am now about to apply ~s to ~s" f arglist)
>>>   (apply f arglist))
>>
>>Interesting, interesting. Thanks for taking me seriously!
>>
>>I'm trying to map this program into Java, and it's possible
> ...
>>Anyone have any comments?
>
> Well, in C++ you could say
>
>    template <class F, class A>
>    typename result_of<F(A)>::type
>    noisy_apply( const F& f, const A& a ) {
>       cout << "I am now about to apply " << f << " to " << a << endl;
>       return f(a);
>    }
>

I don't mean to nitpick, but APPLY takes an arbitrary list of arguments.
How do you parameterize over that without enumerating the power set
of potential types?

What if F `returns' void?
From: Brian McNamara!
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bneebh$fh4$1@news-int2.gatech.edu>
·············@comcast.net once said:
>·······@prism.gatech.edu (Brian McNamara!) writes:
>> Well, in C++ you could say
>>
>>    template <class F, class A>
>>    typename result_of<F(A)>::type
>>    noisy_apply( const F& f, const A& a ) {
>>       cout << "I am now about to apply " << f << " to " << a << endl;
>>       return f(a);
>>    }
>>
>
>I don't mean to nitpick, but APPLY takes an arbitrary list of arguments.
>How do you parameterize over that without enumerating the power set
>of potential types?

This isn't really possible for normal C++ functions.

You can always program in a style where every function takes exactly
one argument, which is an N-ary tuple, and use boost::mpl and
boost::tuple to then generalize things.  (Indeed, using such libraries,
you can simulate "apply" rather convincingly.  But somewhere under the
hood, someone has to have written N different overloads for 0-arg,
1-arg, ... N-arg, up to some fixed ("large enough") N.)

So C++ can only mimic "noisy_apply" so well.  I expect that Haskell can
mimic it better in this respect.

>What if F `returns' void?

It still works.  (You are allowed to say "return f(a)" inside a template
function returning void, provided f(a) "returns" void as well.)

-- 
 Brian M. McNamara   ······@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3ekx1yvqg.fsf@rigel.goldenthreadtech.com>
·······@prism.gatech.edu (Brian McNamara!) writes:

> boost::tuple to then generalize things.  (Indeed, using such libraries,
> you can simulate "apply" rather convincingly.  But somewhere under the
> hood, someone has to have written N different overloads for 0-arg,
> 1-arg, ... N-arg, up to some fixed ("large enough") N.)

That's not actually good enough.  You also have to have overloads for
all the possible types for 1-arg, ..., N-arg.  Actually it's worse
than that - the set of types is not closed, so even in principle this
won't work.

/Jon
From: Brian McNamara!
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnei85$aj0$1@news-int.gatech.edu>
·········@rcn.com (Jon S. Anthony) once said:
>·······@prism.gatech.edu (Brian McNamara!) writes:
>
>> boost::tuple to then generalize things.  (Indeed, using such libraries,
>> you can simulate "apply" rather convincingly.  But somewhere under the
>> hood, someone has to have written N different overloads for 0-arg,
>> 1-arg, ... N-arg, up to some fixed ("large enough") N.)
>
>That's not actually good enough.  You also have to have overloads for
>all the possible types for 1-arg, ..., N-arg.  Actually it's worse
>than that - the set of types is not closed, so even in principle this
>won't work.

I'm not sure I understand you, but if I do, then "templates" take care
of this.  That is, we'd write (e.g. for the 3-arg case):

   template <class A, class B, class C>
   Result someFunc( A a, B b, C c ); // fudging "Result" for simplicity

which means that someFunc works "forall" types A, B, and C.

-- 
 Brian M. McNamara   ······@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
From: Jon S. Anthony
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m31xt1ymko.fsf@rigel.goldenthreadtech.com>
·······@prism.gatech.edu (Brian McNamara!) writes:

> ·········@rcn.com (Jon S. Anthony) once said:
> >·······@prism.gatech.edu (Brian McNamara!) writes:
> >
> >> boost::tuple to then generalize things.  (Indeed, using such libraries,
> >> you can simulate "apply" rather convincingly.  But somewhere under the
> >> hood, someone has to have written N different overloads for 0-arg,
> >> 1-arg, ... N-arg, up to some fixed ("large enough") N.)
> >
> >That's not actually good enough.  You also have to have overloads for
> >all the possible types for 1-arg, ..., N-arg.  Actually it's worse
> >than that - the set of types is not closed, so even in principle this
> >won't work.
> 
> I'm not sure I understand you, but if I do, then "templates" take care
> of this.  That is, we'd write (e.g. for the 3-arg case):
> 
>    template <class A, class B, class C>
>    Result someFunc( A a, B b, C c ); // fudging "Result" for simplicity
> 
> which means that someFunc works "forall" types A, B, and C.

No, it means that for any _instantiations_ of types A, B, and C, this
will (probably) give a runnable function back.

/Jon
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnen82$jaq$1@news.oberberg.net>
Brian McNamara! wrote:
> So C++ can only mimic "noisy_apply" so well.  I expect that Haskell
> can mimic it better in this respect.

Definitely.
Haskell uses currying, which mimics multi-parameter functions using
single-parameter ones.

Regards,
Jo
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bne3id$agh$2@news.oberberg.net>
Marshall Spight wrote:

> <·············@comcast.net> wrote in message ·················@comcast.net...
> 
>>"Marshall Spight" <·······@dnai.com> writes:
>>
>>(defun noisy-apply (f arglist)
>>  (format t "I am now about to apply ~s to ~s" f arglist)
>>  (apply f arglist))
> 
> It's starting to feel like this is merely a demonstration
> of Java's weakness in generic programming, and
> not something hooked into Goedel.
> 
> Anyone have any comments?

You're dead right: Java has insufficient support for typing this.

C++ would allow it, but the result isn't pretty either... which says a 
lot about C++'s qualities for higher-order programming.

Regards,
Jo
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bne3e4$agh$1@news.oberberg.net>
·············@comcast.net wrote:

> "Marshall Spight" <·······@dnai.com> writes:
> 
> 
>>It would be really interesting to see a small but useful example
>>of a program that will not pass a statically typed language.
>>It seems to me that how easy it is to generate such programs
>>will be an interesting metric.
>
> Would this count?
> 
> (defun noisy-apply (f arglist)
>   (format t "I am now about to apply ~s to ~s" f arglist)
>   (apply f arglist))

It wouldn't typecheck in Haskell because you don't restrict the elements 
of the arglist to be of the "Show" type class, which is the group of 
types that have a printable representation.

Other than that, the declaration of this function would be (in a 
Pascal-inspired notation)

   noisy_apply (f: function (x: a): b, x: a): b

Note that a and b are type parameters here: if noisy_apply is given a 
function with input type a and output type b, it will /demand/ that its 
second parameter is of type a, and its result type will be b.

I.e. in C++, you'd write something like

   template <typename A, typename B>
   B noisy_apply( B (*f)(A), A x );

(taking f as a plain function pointer, for simplicity).

For completeness, here's the Haskell type:

   (a -> b) -> a -> b

(I hope I got this one right.)
The first "a -> b" translates as "a function that takes any type a and 
returns any type b".
The x -> y -> z notation translates as "a function taking input values 
of types x and y and returning a value of type z".
If the same type letter occurs more than once, it must be the same type 
in all calls.
(Yes, the function is polymorphic, though in a different way than OO 
polymorphism: most OO languages don't allow expressing the restriction 
that the two "a" parameters must be of the same type but the caller is 
free to choose any type.)
To sum it all up: the above specifications are all intended to say the 
same, namely "noisy_apply is a function that takes an arbitrary 
function, and another parameter that must be of the same type as the 
input parameter for the supplied function, and that will return a value 
of the same type as the supplied function will return".
Modern static type systems can express such types :-)


You might ask "where's the argument list"?
The answer is that I'm assuming a "currying" language. All functions in 
such a language have a single argument; functions with multiple 
arguments are written as a function that takes an argument and returns 
another function which expects the next argument.

I.e.
   add (3, 4)
is first evaluated as
   [***] (4)
where [***] is an internally-created function that adds 3 to its single 
argument.
(That's just the theory of how the operation is defined. Currying 
languages will usually execute this in the obvious manner, unless the 
code takes specific advantage of currying.)


HTH
Jo
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ismdcj0d.fsf@comcast.net>
Joachim Durchholz <·················@web.de> writes:

> ·············@comcast.net wrote:
>
>> "Marshall Spight" <·······@dnai.com> writes:
>>
>>>It would be really interesting to see a small but useful example
>>>of a program that will not pass a statically typed language.
>>>It seems to me that how easy it is to generate such programs
>>>will be an interesting metric.
>>
>> Would this count?
>> (defun noisy-apply (f arglist)
>>   (format t "I am now about to apply ~s to ~s" f arglist)
>>   (apply f arglist))
>
> To sum it all up: the above specifications are all intended to say the
> same, namely "noisy_apply is a function that takes an arbitrary
> function, and another parameter that must be of the same type as the
> input parameter for the supplied function, and that will return a
> value of the same type as the supplied function will return".
> Modern static type systems can express such types :-)

Are they happy with something like this?

(defun black-hole (x)
  #'black-hole)

(for non lispers, the funny #' is a namespace operator.
 The black-hole function gobbles an argument and returns
 the black-hole function.)
From: Brian McNamara!
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnefil$9ef$1@news-int.gatech.edu>
·············@comcast.net once said:
>Are they happy with something like this?
>
>(defun black-hole (x)
>  #'black-hole)
>
>(for non lispers, the funny #' is a namespace operator.
> The black-hole function gobbles an argument and returns
> the black-hole function.)

Finally, an example that I don't think you can type in Haskell.  
You score a point for that.  :)

If we have a static type system which admits infinite types, then we
can assign black-hole a type.  So it's still typeable, just not in any
common language I can name offhand.  :)

-- 
 Brian M. McNamara   ······@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
From: Remi Vanicat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87llr9b1cj.dlv@wanadoo.fr>
·······@prism.gatech.edu (Brian McNamara!) writes:

> ·············@comcast.net once said:
>>Are they happy with something like this?
>>
>>(defun black-hole (x)
>>  #'black-hole)
>>
>>(for non lispers, the funny #' is a namespace operator.
>> The black-hole function gobbles an argument and returns
>> the black-hole function.)
>
> Finally, an example that I don't think you can type in Haskell.  
> You score a point for that.  :)
>
> If we have a static type system which admits infinite types, then we
> can assign black-hole a type.  So it's still typeable, just not in any
> common language I can name offhand.  :)

$ ocaml -rectypes
        Objective Caml version 3.07+2
 
# let rec f x = f;;
val f : 'b -> 'a as 'a = <fun>

By the way, I don't see how this function can be useful...

-- 
Rémi Vanicat
From: Jesse Tov
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <slrnbpln58.pj5.tov@tov.student.harvard.edu>
Remi Vanicat <···············@labri.fr>:
> $ ocaml -rectypes
>         Objective Caml version 3.07+2
>  
> # let rec f x = f;;
> val f : 'b -> 'a as 'a = <fun>

Eww.  Something like this would be much nicer:

    f : mu (\'b. 'a -> 'b)

Still, I didn't know Ocaml did that--cool.

Maybe the point here should be that we know how to make a type system that
allows that, but typically it's not worth it because we don't need
functions with those types.  (The ML solution--an implicit fix-point in
data constructors--makes it work pretty much wherever I'd care.)
Actually, I'd be interested to see a function that needs a recursive
type that couldn't be trivially rewritten to make it type in ML/Haskell.

Jesse
-- 
"A hungry man is not a free man."         --Adlai Stevenson
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <08ds61-7i.ln1@ID-7776.user.dfncis.de>
Brian McNamara! <·······@prism.gatech.edu> wrote:
> ·············@comcast.net once said:

>>(defun black-hole (x)
>>  #'black-hole)

>>(for non lispers, the funny #' is a namespace operator.
>> The black-hole function gobbles an argument and returns
>> the black-hole function.)

> Finally, an example that I don't think you can type in Haskell.  

It's a bit tricky. As Remi has shown, you need recursive types.
Recursive types in Haskell always need an intervening data type
constructor. That's a conscious design decision, because recursive
types are very often a result of a real typing error. (IIRC, that's
why OCaml made recursive typing an option after having it
enabled by default for some time, but Remi should know that better
than I do.)

We also need an existential type in this constructor, because the
argument can be of different type for each application of the black hole.

> data BlackHole = BH (forall a. a -> BlackHole)

Now we can write the black hole function itself:

> black_hole :: BlackHole
> black_hole = BH (\_ -> black_hole)

That's it. However, we cannot apply it directly. We have to "unfold"
it explicitly by taking it out of the data constructor. We define an
auxiliary infix function to take care of that.

> infixl 0 $$

> ($$) :: BlackHole -> a -> BlackHole
> (BH f) $$ x = f x

Now we can write a function like

> f = black_hole $$ "12" $$ 5 $$ True

which will nicely typecheck.

That's the first time one actually has to add non-obvious stuff to
"please" the type checker. OTOH, the black hole function is pretty
bogus, so one can argue that this is a realistic price to pay to say
that you really really want this strange function. I would be very
surprised if this function is of any use in a real-world program.

And there is nothing else you can do with arguments of arbitrary type
but silently discard them, so I guess there are not many other examples
like this.

- Dirk
From: Tomasz Zielonka
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <slrnbppn95.nt.t.zielonka@zodiac.mimuw.edu.pl>
Brian McNamara! wrote:
> ·············@comcast.net once said:
>>Are they happy with something like this?
>>
>>(defun black-hole (x)
>>  #'black-hole)
>>
>>(for non lispers, the funny #' is a namespace operator.
>> The black-hole function gobbles an argument and returns
>> the black-hole function.)
> 
> Finally, an example that I don't think you can type in Haskell.  
> You score a point for that.  :)
> 
> If we have a static type system which admits infinite types, then we
> can assign black-hole a type.  So it's still typeable, just not in any
> common language I can name offhand.  :)

You are making things a bit too complicated. I think you can write
blackHole in Haskell:

blackHole :: a
blackHole = error "black-hole"

*BH> :t blackHole 1 2 3 'a' "ho" (blackHole, 1.2)
blackHole 1 2 3 'a' "ho" (blackHole, 1.2) :: forall t. t

*BH> blackHole 1 2 3 'a' "ho" (blackHole, 1.2)
*** Exception: black-hole

*BH> let _ = blackHole 1 2 3 'a' "ho" (blackHole, 1.2) in "abcdef"
"abcdef"

Best regards,
Tom

-- 
.signature: Too many levels of symbolic links
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9d5f91$1@news.unimelb.edu.au>
·······@prism.gatech.edu (Brian McNamara!) writes:

>·············@comcast.net once said:
>>(defun black-hole (x)
>>  #'black-hole)
>>
>>(for non lispers, the funny #' is a namespace operator.
>> The black-hole function gobbles an argument and returns
>> the black-hole function.)
>
>Finally, an example that I don't think you can type in Haskell.  
>You score a point for that.  :)

I don't think it deserves any points, because as another poster said,
in Haskell that is equivalent to

	black_hole _ = undefined

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <0Izmb.25575$Fm2.12537@attbi_s04>
<·············@comcast.net> wrote in message ·················@comcast.net...
>
> Are they happy with something like this?
>
> (defun black-hole (x)
>   #'black-hole)
>
> (for non lispers, the funny #' is a namespace operator.
>  The black-hole function gobbles an argument and returns
>  the black-hole function.)

Ha!

Although this doesn't get me any closer to my goal of
simple, useful, correct program that cannot be proven
typesafe. I don't believe the feature this function
illustrates could be useful; you have to have a handle
on black-hole before you can invoke it, so getting
it back as a return value doesn't get me anything.
But it's a nice example.


Marshall
From: Neelakantan Krishnaswami
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <slrnbplpv6.ikr.neelk@gs3106.sp.cs.cmu.edu>
In article <·····················@attbi_s04>, Marshall Spight wrote:
><·············@comcast.net> wrote in message ·················@comcast.net...
>>
>> Are they happy with something like this?
>>
>> (defun black-hole (x)
>>   #'black-hole)
>>
>> (for non lispers, the funny #' is a namespace operator.
>>  The black-hole function gobbles an argument and returns
>>  the black-hole function.)
> 
> Ha!
> 
> Although this doesn't get me any closer to my goal of simple,
> useful, correct program that cannot be proven typesafe. I don't
> believe the feature this function illustrates could be useful; you
> have to have a handle on black-hole before you can invoke it, so
> getting it back as a return value doesn't get me anything.  But it's
> a nice example.

The feature this program demonstrates is useful! It's a function of
recursive type; in Ocaml (invoked with the -rectypes option) it would
type as:

  # let rec blackhole x = blackhole;;
  val blackhole : 'b -> 'a as 'a

With a little bit more work, you can use recursive types to represent
(for example) infinite streams:

  type 'a stream = unit -> 'a * 'b as 'b

  let head stream = fst(stream())
  let tail stream = snd(stream())
  let cons h t = fun() -> h, t

  let rec unfold head tail cons seed =
    let rec f() =
      let h = head seed in
      let t = tail seed in
      cons h (unfold head tail cons t)
    in f

So now you can write the infinite stream of ones as

  let id x = x

  let ones = unfold id id cons 1

and the natural numbers as

  let nats = unfold id ((+) 1) cons 0

You can write a function that gives you every even or odd element of a
stream as:

  let ($) f g x = g(f x)

  let evens s = unfold head (tail $ tail) cons s
  
  let odds s = unfold head (tail $ tail) cons (tail s) 


-- 
Neel Krishnaswami
·····@cs.cmu.edu
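For readers following the thread's nominal subject: the same closure-encoded streams can be sketched in Python, where the recursive type simply never comes up. This is my own transliteration of the OCaml above (names invented here); a stream is a thunk returning a (head, tail) pair:

```python
def cons(h, t):
    # a stream is a thunk returning (head, tail-stream)
    return lambda: (h, t)

def head(s):
    return s()[0]

def tail(s):
    return s()[1]

def unfold(head_f, tail_f, seed):
    # lazily unfold a seed into an infinite stream
    return lambda: (head_f(seed), unfold(head_f, tail_f, tail_f(seed)))

def take(n, s):
    # collect the first n elements of a stream
    out = []
    for _ in range(n):
        h, s = s()
        out.append(h)
    return out

identity = lambda x: x
ones = unfold(identity, identity, 1)
nats = unfold(identity, lambda n: n + 1, 0)
```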
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnf688$esd$1@newsreader2.netcologne.de>
Marshall Spight wrote:
> <·············@comcast.net> wrote in message ·················@comcast.net...
> 
>>Are they happy with something like this?
>>
>>(defun black-hole (x)
>>  #'black-hole)
>>
>>(for non lispers, the funny #' is a namespace operator.
>> The black-hole function gobbles an argument and returns
>> the black-hole function.)
> 
> 
> Ha!
> 
> Although this doesn't get me any closer to my goal of
> simple, useful, correct program that cannot be proven
> typesafe. 

OK, here we go! :)



(defvar *default-company* 'costanza-inc)

(defclass employed ()
   ((original-class :initarg :original-class)
    (company :accessor company :initarg :company)
    (salary :accessor salary :initarg :salary)))

(defun hire (someone salary &key (company *default-company*))
   (let* ((original-class (class-name (class-of someone)))
          (employed-class
           (intern (format nil "~A-~A" 'employed original-class))))
     (eval `(defclass ,employed-class (employed ,original-class) ()))
     (change-class someone employed-class
                   :original-class original-class
                   :company company
                   :salary salary)))

(defun fire (someone)
   (when (member
          (find-class 'employed)
          (class-precedence-list (class-of someone)))
     (change-class someone (slot-value someone 'original-class))))


(defun test-employed ()
   (let ((person (make-symbol "PERSON")))
     (eval `(defclass ,person ()
              ((name :accessor name :initarg :name))))
     (let ((joe (make-instance person :name "joe")))
       (format t "-> hire joe~%")
       (hire joe 60000)
       (format t "name: ~A~%" (name joe))
       (format t "current class: ~A~%" (class-name (class-of joe)))
       (format t "original class: ~A~%" (slot-value joe 'original-class))
       (format t "company: ~A~%" (company joe))
       (format t "salary: ~A~%" (salary joe))
       (format t "-> fire joe~%")
       (fire joe)
       (if (member (find-class 'employed)
                   (class-precedence-list (class-of joe)))
           (format t "joe is still employed.~%")
         (format t "joe is not employed anymore.~%")))))



And here is a sample session:

CL-USER 1 > (test-employed)
-> hire joe
name: joe
current class: EMPLOYED-PERSON
original class: PERSON
company: COSTANZA-INC
salary: 60000
-> fire joe
joe is not employed anymore.
NIL



Some minor comments:

- This is all standard ANSI Common Lisp, except for 
CLASS-PRECEDENCE-LIST which is part of the semi-standard MOP.

- This codes runs without any changes and without any additional 
libraries on both LispWorks 4.3 and Macintosh Common Lisp 5.0, and 
probably also on many other Common Lisp implementations. (This fulfils 
the requirement that this is indeed a relatively short example.)

- I have used EVAL to define classes at runtime, because although the 
MOP defines ENSURE-CLASS for that purpose the latter is not defined in 
Macintosh Common Lisp. The use of EVAL is not the reason for this code 
not being acceptable for a static type checker.

- The important thing here is that the EMPLOYED mixin works on any class, 
even one that is added later on to a running program. So even if you 
want to hire martians some time in the future you can still do this.

- As an interesting sidenote, both Common Lisp compilers I have used 
emit a warning that there is no definition for the function NAME that is 
called in TEST-EMPLOYED. I don't care - the code is elegant, relatively 
straightforward to understand, useful and correct. And I can safely 
ignore the warning - it works as expected.



Pascal


P.S., to Joe Marshall: I hope you don't mind that I hire and fire you 
within a split second. ;-)
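Since the thread is nominally about Python, here is a rough Python sketch of Pascal's mixin trick; Python permits reassigning an instance's __class__ at runtime much like CHANGE-CLASS, and the three-argument form of type() plays the role of the EVAL'd DEFCLASS. The class and attribute names below are my own invention:

```python
class Employed:
    pass

def hire(someone, salary, company="costanza-inc"):
    original_class = type(someone)
    # build an EmployedX subclass on the fly, like the EVAL'd DEFCLASS
    employed_class = type("Employed" + original_class.__name__,
                          (Employed, original_class), {})
    someone.__class__ = employed_class
    someone.original_class = original_class
    someone.company = company
    someone.salary = salary

def fire(someone):
    if isinstance(someone, Employed):
        someone.__class__ = someone.original_class

class Person:
    def __init__(self, name):
        self.name = name

joe = Person("joe")
hire(joe, 60000)
# joe is now an EmployedPerson with company and salary attributes
fire(joe)
# joe is a plain Person again
```

As in the Lisp version, the mixin works on any (heap) class, including ones defined after the program has started running.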
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <brs4yazr.fsf@comcast.net>
Pascal Costanza <········@web.de> writes:

> P.S., to Joe Marshall: I hope you don't mind that I hire and fire you
> within a split second. ;-)

I'm getting used to it.
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa630f6$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>- The important thing here is that the EMPLOYED mixin works on any class, 
>even one that is added later on to a running program. So even if you 
>want to hire martians some time in the future you can still do this.

What happens if the existing class defines a slot named "salary" or "company",
but with a different meaning?  Are slot names global, or is there some sort
of namespace control to prevent this kind of accidental name capture?

Anyway, regarding how to write this example in a statically typed
language: you can do this in a quite straight-forward manner,
by just keeping a separate table of employees.
For example, here it is in Java.  

	import java.util.*;
	public class Employed {
		static String default_company = "costanza-inc";

		static class Employee {
			public Object		obj;
			public String		company;
			public int		salary;
			public Employee(Object o, String c, int s) {
				company = c;
				salary = s;
				obj = o;
			}
		}

		static Hashtable employees = new Hashtable();

		static void hire(Object obj, int salary) {
			hire(obj, salary, default_company);
		}
		static void hire(Object obj, int salary, String company) {
			employees.put(obj, new Employee(obj, company, salary));
		}
		static void fire(Object obj) {
			employees.remove(obj);
		}

		static void test_employed() {
			class Person {
				public String name;
				Person(String n) { name = n; }
			};
			Person joe = new Person("joe");
			System.out.println("-> hire joe");
			hire(joe, 60000);
			System.out.println("name: " + joe.name);
			System.out.println("class: "
					+ joe.getClass().getName());
			Employee e = (Employee) employees.get(joe);
			System.out.println("employed: " +
					(e != null ? "yes" : "no"));
			System.out.println("company: " + e.company);
			System.out.println("salary: " + e.salary);
			System.out.println("-> fire joe");
			fire(joe);
			if (employees.containsKey(joe)) {
				System.out.println("joe is still employed.");
			} else {
				System.out.println(
					"joe is not employed anymore.");
			}
		}

		public static void main(String args[]) {
			test_employed();
		}
	}

As you can see, there's no need here for dynamically changing the types of
objects at runtime or for creating classes at runtime.  But you can employ
Martians or any other object.

This example makes use of one dynamic cast; that's because the Java
type system doesn't support generics / parametric polymorphism.  It would
be a little nicer to do this in a language which supported generics, then
we could use `Hashtable<Object, Employee>' rather than just `Hashtable',
and there wouldn't be any need for the dynamic cast to `(Employee)'.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
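For what it's worth, the separate-table approach Fergus describes is also the natural way to write this in Python: a dict plays the role of the Hashtable. A sketch of my own (names invented here), keyed by id() to mirror Java's identity-based hashing:

```python
employees = {}  # maps object identity -> employment record
default_company = "costanza-inc"

def hire(obj, salary, company=default_company):
    employees[id(obj)] = {"obj": obj, "company": company, "salary": salary}

def fire(obj):
    employees.pop(id(obj), None)

def employed(obj):
    return id(obj) in employees

def company(obj):
    return employees[id(obj)]["company"]

def salary(obj):
    return employees[id(obj)]["salary"]
```

Keying by id() keeps the record alive only as long as the caller holds the object; real code might key on the object itself when it is hashable.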
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa65759$1@news.unimelb.edu.au>
Fergus Henderson <···@cs.mu.oz.au> writes:

>Anyway, regarding how to write this example in a statically typed
>language: you can do this in a quite straight-forward manner,
>by just keeping a separate table of employees.
>For example, here it is in Java.  

And in case you didn't like the type declarations and downcast that you
need in Java, here it is in Mercury.  

	:- module employed.
	:- interface.
	:- import_module io.

	:- pred main(io::di, io::uo) is det.

	:- implementation.
	:- import_module map, string, std_util.

	default_company = "costanza-inc".

	:- type employee ---> some [Obj]
		employee(object::Obj, salary::int, company::string).

	hire(Obj, Salary, !Employees) :-
		hire(Obj, Salary, default_company, !Employees).
	hire(Obj, Salary, Company, !Employees) :-
		set(!.Employees, Obj, 'new employee'(Obj, Salary, Company),
			!:Employees).
	fire(Obj, !Employees) :-
		delete(!.Employees, Obj, !:Employees).

	:- type person ---> person(name::string).

	test_employed(!.Employees) -->
			{ Joe = person("joe") },
			print("-> hire joe"), nl,
			{ hire(Joe, 60000, !Employees) },
			print("name: " ++ Joe^name), nl,
			print("class: " ++ type_name(type_of(Joe))), nl,
			print("employed: " ++ (if !.Employees `contains` Joe
				then "yes" else "no")), nl,
			print("company: " ++
				!.Employees^det_elem(Joe)^company), nl,
			print("salary: "),
			print(!.Employees^det_elem(Joe)^salary), nl,
			print("-> fire joe"), nl,
			{ fire(Joe, !Employees) },
			(if {!.Employees `contains` Joe} then
				print("joe is still employed."), nl
			else
				print("joe is not employed anymore."), nl
			).

	main --> { init(Employees) }, test_employed(Employees).

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bo5m9q$bbg$1@newsreader2.netcologne.de>
Fergus Henderson wrote:

> Fergus Henderson <···@cs.mu.oz.au> writes:
> 
> 
>>Anyway, regarding how to write this example in a statically typed
>>language: you can do this in a quite straight-forward manner,
>>by just keeping a separate table of employees.
>>For example, here it is in Java.  
> 
> 
> And in case you didn't like the type declarations and downcast that you
> need in Java, here it is in Mercury.  

Thanks for the variations on this theme. However, your abstraction is 
still leaking.

> 			print("employed: " ++ (if !.Employees `contains` Joe
                                                   ^^^^^^^^^^^^^^^^^^^^^^
> 				then "yes" else "no")), nl,
> 			print("company: " ++
> 				!.Employees^det_elem(Joe)^company), nl,
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^
> 			print("salary: "),
> 			print(!.Employees^det_elem(Joe)^salary), nl,
                               ^^^^^^^^^^^^^^^^^^^^^^^^^

Pascal
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bo5hgl$3pk$1@newsreader2.netcologne.de>
Fergus Henderson wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>- The important thing here is that the EMPLOYED mixin works on any class, 
>>even one that is added later on to a running program. So even if you 
>>want to hire martians some time in the future you can still do this.
> 
> 
> What happens if the existing class defines a slot named "salary" or "company",
> but with a different meaning?  Are slot names global, or is there some sort
> of namespace control to prevent this kind of accidental name capture?

Sure, that's what Common Lisp's packages are there for. Define the 
EMPLOYED mixin in its own package - done! You won't ever need to worry 
about name clashes.

> Anyway, regarding how to write this example in a statically typed
> language: you can do this in a quite straight-forward manner,
> by just keeping a separate table of employees.

Yuck.

> 		static void test_employed() {
> 			class Person {
> 				public String name;
> 				Person(String n) { name = n; }
> 			};
> 			Person joe = new Person("joe");
> 			System.out.println("-> hire joe");
> 			hire(joe, 60000);
> 			System.out.println("name: " + joe.name);
> 			System.out.println("class: "
> 					+ joe.getClass().getName());
> 			Employee e = (Employee) employees.get(joe);
						^^^^^^^^^^^^^^^^^^
This part is not domain-specific, but shows that your abstraction leaks. 
I.e. the client of your interface has to remember how the employee 
abstraction is implemented in order to use it correctly.

In my original example, I was able to just call (company joe) and 
(salary joe) (or, in Java syntax, this would be joe.salary and 
joe.company). I.e., I don't have to know anything about the internal 
implementation.

You can't implement unanticipated optional features in a statically 
typed language that doesn't involve leaking abstractions.

> As you can see, there's no need here for dynamically changing the types of
> objects at runtime or for creating classes at runtime.  But you can employ
> Martians or any other object.

Sure. You also don't need functions and parameter passing. You also 
don't need GOSUB and RETURN. So why don't we just program in assembler 
again?

> This example makes use of one dynamic cast; that's because the Java
> type system doesn't support generics / parametric polymorphism.  It would
> be a little nicer to do this in a language which supported generics, then
> we could use `Hashtable<Object, Employee>' rather than just `Hashtable',
> and there wouldn't be any need for the dynamic cast to `(Employee)'.

I would still need to remember what features happen to be kept external 
from my objects and what not.


Pascal

P.S.: Your implementation of default_company doesn't match mine. Yours 
is not dynamically scoped. But maybe that's nitpicking...
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3fa67918$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>Fergus Henderson wrote:
>
>> 		static void test_employed() {
>> 			class Person {
>> 				public String name;
>> 				Person(String n) { name = n; }
>> 			};
>> 			Person joe = new Person("joe");
>> 			System.out.println("-> hire joe");
>> 			hire(joe, 60000);
>> 			System.out.println("name: " + joe.name);
>> 			System.out.println("class: "
>> 					+ joe.getClass().getName());
>> 			Employee e = (Employee) employees.get(joe);
>						^^^^^^^^^^^^^^^^^^
>This part is not domain-specific, but shows that your abstraction leaks. 
>I.e. the client of your interface has to remember how the employee 
>abstraction is implemented in order to use it correctly.
>
>In my original example, I was able to just call (company joe) and 
>(salary joe) (or, in Java syntax, this would be joe.salary and 
>joe.company). I.e., I don't have to know anything about the internal 
>implementation.

Well, you have a point, I didn't encapsulate the use of the Hashtable.
I can do that quite easily:

		static Employee employee(Object obj) {
			return (Employee) employees.get(obj);
		}

Then the code there could become as follows.

		System.out.println("employed: " +
			(employee(joe) != null ? "yes" : "no"));
		System.out.println("company: " + employee(joe).company);
		System.out.println("salary: " + employee(joe).salary);

If you prefer, you can make it simpler still,

		System.out.println("employed: " +
			(employed(joe) ? "yes" : "no"));
		System.out.println("company: " + company(joe));
		System.out.println("salary: " + salary(joe));

by defining suitable methods:

		static boolean employed(Object obj) {
			return employee(obj) != null;
		}
		static String company(Object obj) {
			return employee(obj).company;
		}
		static int salary(Object obj) {
			return employee(obj).salary;
		}

Now, my guess is that you're still going to be complaining about the
abstraction leaking, but I don't think such complaints are valid.
Yes, the syntax is different than a field access, but that's not
important.  The client is going to need to know the names of the
attributes or methods they want to use anyway; as long as they need to
know whether the name of the entity is "wage" or "salary", it doesn't make
a significant difference that they also need to know whether the interface
to that entity is a field, a member function, or a static function.

And of course, this syntax issue is language-specific.  In Mercury
and Haskell, the same syntax is used for field access, method call,
and function call, so the issue doesn't arise!

>You can't implement unanticipated optional features in a statically 
>typed language that doesn't involve leaking abstractions.

Not true.  See above.

>I would still need to remember what features happen to be kept external 
>from my objects and what not.

No, that's not what you need to remember.  You just need to remember the
names of the features, and for each feature what kind of feature it is:
whether it is a field, an instance method, or a static method.
If it is a field, you know it is kept inside the object, but in
the other two cases the implementation is encapsulated -- the user
can't tell where the data is stored.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bneo3g$jkv$1@news.oberberg.net>
·············@comcast.net wrote:
> Are they happy with something like this?
> 
> (defun black-hole (x)
>   #'black-hole)
> 
> (for non lispers, the funny #' is a namespace operator.
>  The black-hole function gobbles an argument and returns
>  the black-hole function.)

Now *that* is a real challenge, and Haskell indeed doesn't allow this 
(it says "black_hole has an infinite type", which is not a surprise: the 
literal transliteration of the above function would be
   black_hole _ = black_hole
and the only solution to the above equation would require that 
black_hole has a countably infinite number of _ parameters).

However, what purpose would the function serve? I'm pretty sure that 
there's an equivalent idiom in Haskell, but I can't tell unless I know 
what black_hole is good for.

Regards,
Jo
From: Andreas Rossberg
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3F9E688A.7010206@ps.uni-sb.de>
·············@comcast.net wrote:
> "Marshall Spight" <·······@dnai.com> writes:
> 
>>It would be really interesting to see a small but useful example
>>of a program that will not pass a statically typed language.
>>It seems to me that how easy it is to generate such programs
>>will be an interesting metric.
> 
> Would this count?
> 
> (defun noisy-apply (f arglist)
>   (format t "I am now about to apply ~s to ~s" f arglist)
>   (apply f arglist))

Moscow ML version 2.00 (June 2000)
Enter `quit();' to quit.
- fun noisyApply f x =
   (print "I am about to apply "; printVal f;
    print " to "; printVal x; print "\n";
    f x);
 > val ('a, 'b) noisyApply = fn : ('a -> 'b) -> 'a -> 'b
- noisyApply Math.sin Math.pi;
I am about to apply fn to 3.14159265359
 > val it = 1.22460635382E~16 : real


In this example, printVal requires some runtime type information. But 
that does in no way preclude static type checking.

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
  as kids, we would all be running around in darkened rooms, munching
  magic pills, and listening to repetitive electronic music."
  - Kristian Wilson, Nintendo Inc.
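The noisy-apply example is equally direct in Python, where the check on f happens dynamically at the call site rather than at compile time. A transliteration of the Lisp original (my own, not from the thread):

```python
def noisy_apply(f, arglist):
    # announce the call, then apply f to the argument list
    print("I am now about to apply %r to %r" % (f, arglist))
    return f(*arglist)

result = noisy_apply(max, [3, 1, 2])
```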
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnc57b$pv6$1@f1node01.rhrz.uni-bonn.de>
Marshall Spight wrote:
> "Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
> 
>>... there exist programs that work but
>>that cannot be statically typechecked. These programs objectively exist.
>>By definition, I cannot express them in a statically typed language.
> 
> 
> I agree these programs exist.
> 
> It would be really interesting to see a small but useful example
> of a program that will not pass a statically typed language.
> It seems to me that how easy it is to generate such programs
> will be an interesting metric.
> 
> Anyone? (Sorry, I'm a static typing guy, so my brain is
> warped away from such programs. :-)

Have you ever used a program that has required you to enter a number?

The check whether you have really typed a number is a dynamic check, right?


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Brian McNamara!
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnc682$j4t$1@news-int2.gatech.edu>
Pascal Costanza <········@web.de> once said:
>Marshall Spight wrote:
>> I agree these programs exist.
>> 
>> It would be really interesting to see a small but useful example
>> of a program that will not pass a statically typed language.
>> It seems to me that how easy it is to generate such programs
>> will be an interesting metric.
>> 
>> Anyone? (Sorry, I'm a static typing guy, so my brain is
>> warped away from such programs. :-)
>
>Have you ever used a program that has required you to enter a number?
>
>The check whether you have really typed a number is a dynamic check, right?

Yes, but this doesn't imply we can't write a statically-typed program to
handle the situation...

I can imagine Haskell code like

   y = do x <- myread "34"
          return (x * 2)
   z = do x <- myread "foo"
          return (x * 2)

where

   myread :: Read a => String -> Maybe a
   y, z :: Maybe Int

and "y" ends up with the value "Just 68" whereas "z" is "Nothing".

-- 
 Brian M. McNamara   ······@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
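The Python analogue of Brian's Maybe-style sketch uses None where Haskell uses Nothing, pushing the dynamic check into one small parsing function. This is my own rendering (names invented here):

```python
def myread(s):
    # parse an int, returning None on failure (the Maybe analogue)
    try:
        return int(s)
    except ValueError:
        return None

def times_two(maybe_x):
    # propagate None, like binding in the Maybe monad
    return None if maybe_x is None else maybe_x * 2

y = times_two(myread("34"))   # 68
z = times_two(myread("foo"))  # None
```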
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnc7n1$pvg$1@f1node01.rhrz.uni-bonn.de>
Brian McNamara! wrote:
> Pascal Costanza <········@web.de> once said:
> 
>>Marshall Spight wrote:
>>
>>>I agree these programs exist.
>>>
>>>It would be really interesting to see a small but useful example
>>>of a program that will not pass a statically typed language.
>>>It seems to me that how easy it is to generate such programs
>>>will be an interesting metric.
>>>
>>>Anyone? (Sorry, I'm a static typing guy, so my brain is
>>>warped away from such programs. :-)
>>
>>Have you ever used a program that has required you to enter a number?
>>
>>The check whether you have really typed a number is a dynamic check, right?
> 
> 
> Yes, but this doesn't imply we can't write a statically-typed program to
> handle the situation...
> 
> I can imagine Haskell code like
> 
>    y = do x <- myread "34"
>           return x * 2
>    z = do x <- myread "foo"
>           return x * 2
> 
> where
> 
>    myread :: String -> Maybe a
>    y, z :: Maybe Int
> 
> and "y" ends up with the value "Just 68" whereas "z" is "Nothing".

The code you have given above doesn't give the user any feedback, right? 
Do you really think that this is useful?


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Brian McNamara!
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bncb5n$h5a$1@news-int.gatech.edu>
Pascal Costanza <········@web.de> once said:
>Brian McNamara! wrote:
>> I can imagine Haskell code like
>> 
>>    y = do x <- myread "34"
>>           return x * 2
>>    z = do x <- myread "foo"
>>           return x * 2
>> 
>> where
>> 
>>    myread :: String -> Maybe a
>>    y, z :: Maybe Int
>> 
>> and "y" ends up with the value "Just 68" whereas "z" is "Nothing".
>
>The code you have given above doesn't give the user any feedback, right? 
>Do you really think that this is useful?

It is certainly useful if the strings are coming from a file read over
the network by a batch process that runs nightly on a machine sitting in
a closet.

But I suppose you really want this example

>>(defun f (x)
>>  (unless (< x 200)
>>    (cerror "Type another number"
>>            "You have typed a wrong number")
>>    (f (read)))
>>  (* x 2))

statically typed, huh?  Ok, I'll try.  If I come up short, I expect it's
because I'm fluent in neither Haskell nor Lisp, not because it can't be
done.

   readInt :: IO (Maybe Int)

   cerror :: String -> String -> IO (Maybe a) -> IO (Maybe a)
   cerror optmsg errmsg v =
      do print errmsg
         print ("1: " ++ optmsg)
         print "2: Fail"
         mx <- readInt
         if maybe False (== 1) mx
            then v
            else return Nothing

   f :: Int -> IO (Maybe Int)
   f x = if x < 200
            then return (Just (x * 2))
            else cerror "Type another number"
                        "You have typed a wrong number"
                        (do mx <- readInt
                            case mx of
                               Just x' -> f x'
                               Nothing -> return Nothing)

I think that maybe works.  Perhaps someone who really knows Haskell and
has a Haskell compiler can check it and/or tidy it up a little if
necessary.

-- 
 Brian M. McNamara   ······@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <mbbt61-lgd.ln1@ID-7776.user.dfncis.de>
Brian McNamara! <·······@prism.gatech.edu> wrote:
> Pascal Costanza <········@web.de> once said:
>>Brian McNamara! wrote:

> But I suppose you really want this example

>>>(defun f (x)
>>>  (unless (< x 200)
>>>    (cerror "Type another number"
>>>            "You have typed a wrong number")
>>>    (f (read)))
>>>  (* x 2))

> statically typed, huh? 

After quite some time, I think I have finally figured out what Pascal
meant with

>> The type system might test too many cases.

I have the impression that he is confusing dynamic type errors and
runtime errors. In Lisp and Smalltalk they are more or less the same,
and since dynamic type errors map to static type errors, he may think
by analogy that other runtime errors must necessarily also map to
compile errors somehow involved with static typing. Of course this is
nonsense; those two are completely different things. The "too many
cases" referred to some cases of runtime errors he didn't want to be
checked at compile time. As you cannot get "too many test cases" from
type annotations in static typing (which was the context where he made
this comment), just as you cannot get "too many test cases" by writing
too many of them by hand, I really had a hard time figuring this out.

To sum it up: Unit tests (some, not all!) correspond to type
annotations, static type checking is the same as running the test
suite, dynamic types correspond to data types, runtime errors
correspond to runtime errors (surprise :-).
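
(To make the correspondence concrete, a minimal, hypothetical
illustration -- the annotation plays the role a unit test would play in
a dynamically typed language:)

```haskell
-- Hypothetical example: the annotation on 'circleArea' states the
-- contract once; an ill-typed call such as  circleArea "three"  is
-- then rejected at compile time, where a dynamically typed language
-- would need a unit test (or a crash) to catch it.
circleArea :: Double -> Double
circleArea r = pi * r * r
```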

> Ok, I'll try.  If I come up short, I expect it's because I'm fluent
> in neither Haskell nor Lisp, not because it can't be done.

You don't really need runtime errors for the above example,
but here's a similar version to yours that throws an error in
'cerror' to return to the toplevel. No Maybe types.

cerror :: String -> String -> IO a -> IO a
cerror optmsg errmsg cont = do
  print errmsg 
  print ("1: " ++ optmsg)
  print ("2: Fail")
  s <- getLine
  x <- readIO s
  let choice 1 = cont
      choice 2 = ioError (userError errmsg)
  choice x
  
f :: Integer -> IO (Integer)
f x =
  if (x < 200) 
    then return (x * 2)
    else cerror 
      "Type another number"
      "You have typed a wrong number" 
      (getLine >>= readIO >>= f)

>> I don't want an "approximation of cerror". I want cerror!

And you got it, exactly as you wanted. Perfectly typeable.
(Please don't say now "but I want to use the Lisp code, and
it should run exactly as it is, without any changes in syntax").
You could even assign the very same types in Lisp if any of the 
extensions of Lisp support that. (I didn't try.)

- Dirk
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnh78o$s9f$1@newsreader2.netcologne.de>
Dirk Thierbach wrote:

> Brian McNamara! <·······@prism.gatech.edu> wrote:
> 
>>Pascal Costanza <········@web.de> once said:
>>
>>>Brian McNamara! wrote:
> 
> 
>>But I suppose you really want this example
> 
> 
>>>>(defun f (x)
>>>> (unless (< x 200)
>>>>   (cerror "Type another number"
>>>>           "You have typed a wrong number")
>>>>   (f (read)))
>>>> (* x 2))
> 
> 
>>statically typed, huh? 

Sidenote: The code above has a bug. Here is the correct version:

(defun f (x)
   (if (< x 200)
     (* x 2)
     (progn
       (cerror "Type another number"
               "You have typed a wrong number")
       (print '>)
       (f (read)))))

(No one spotted this bug before. All transliterations I have seen so far 
have silently corrected this bug. It is interesting to note that it 
probably wasn't the type systems that corrected it.)

> After quite some time, I think I have finally figured out what Pascal
> meant with
> 
> 
>>>The type system might test too many cases.
> 
> 
> I have the impression that he is confusing dynamic type errors and
> runtime errors. In Lisp and Smalltalk they are more or less the same,
> and since dynamic type errors map to static type errors, he may think
> by analogy that other runtime errors must necessarily also map to
> compile errors somehow involved with static typing. Of course this is
> nonsense; those two are completely different things. The "too many
> cases" referred to some cases of runtime errors he didn't want to be
> checked at compile time. As you cannot get "too many test cases" from
> type annotations in static typing (which was the context where he made
> this comment), just as you cannot get "too many test cases" by writing
> too many of them by hand, I really had a hard time figuring this out.

I think you have a restricted view of what "type" means. Here is the 
same program written in a "type-friendly" way. (Again, in standard ANSI 
Common Lisp. Note that the type is defined inside of f and not visible 
to the outside.)

(defun f (x)
   (check-type x (real * 200))
   (* x 2))

CL-USER 1 > (f 5)
10

CL-USER 2 > (f 666)

Error: The value 666 of X is not of type (REAL * 200).
   1 (continue) Supply a new value of X.
   2 (abort) Return to level 0.
   3 Return to top loop level 0.

Type :b for backtrace, :c <option number> to proceed,  or :? for other 
options

CL-USER 3 : 1 > :c 1

Enter a form to be evaluated: 66
132

>>Ok, I'll try.  If I come up short, I expect it's because I'm fluent
>>in neither Haskell nor Lisp, not because it can't be done.
> 
> 
> You don't really need runtime errors for the above example,
> but here's a similar version to yours that throws an error in
> 'cerror' to return to the toplevel. No Maybe types.
> 
> cerror :: String -> String -> IO a -> IO a
> cerror optmsg errmsg cont = do
>   print errmsg 
>   print ("1: " ++ optmsg)
>   print ("2: Fail")
>   s <- getLine
>   x <- readIO s
>   let choice 1 = cont
>       choice 2 = ioError (userError errmsg)
>   choice x
>   
> f :: Integer -> IO (Integer)
> f x =
>   if (x < 200) 
>     then return (x * 2)
>     else cerror 
>       "Type another number"
>       "You have typed a wrong number" 
>       (getLine >>= readIO >>= f)
> 
> 
>>>I don't want an "approximation of cerror". I want cerror!
> 
> 
> And you got it, exactly as you wanted. Perfectly typeable.

Nice.

> (Please don't say now "but I want to use the Lisp code, and
> it should run exactly as it is, without any changes in syntax").

No, I don't care about syntax.

You have used CPS - that's a nice solution. However, you obviously 
needed to make the type declaration for f :: Integer -> IO (Integer)

Note that my code is not restricted to integers. (But I don't know 
whether this is actually a serious restriction in Haskell. I am not a 
Haskell programmer.)

> You could even assign the very same types in Lisp if any of the 
> extensions of Lisp support that. (I didn't try.)

What you actually did is: you have assigned a "broad" type statically, 
and then you revert to a manual dynamic check to make the fine-grained 
distinction.

In both Lisp versions, you have exactly one place where you check the 
(single) type. A static type system cannot provide this kind of 
granularity. It has to make either more or fewer distinctions.


I regard the distinction between dynamic type errors and runtime errors 
to be an artificial one, and in fact a red herring. I would rather call 
the former type _exceptions_. Exceptions are situations that occur at 
runtime and that I have a chance to control and correct in one way or 
the other. Errors are beyond control.

This might be terminological nitpicking, but I think it is a serious 
confusion, and it probably stems from weakly typed languages that allow 
for arbitrary memory access and jumps to arbitrary machine addresses. 
The niche in which such languages are still useful is getting smaller 
all the time. (Languages like Java, Python, Ruby, etc., did a fine job in 
this regard, i.e. to promote writing safe programs, rather than using 
languages for high-level purposes that are actually designed for writing 
hardware drivers.)

Anything below real runtime errors (core dumps) has the chance to be 
controlled and corrected at runtime. (Including endless loops - they 
should never be unbreakable in the first place.)

This can give you a real advantage for long-running systems or for very 
large applications that you don't want to, or cannot stop and restart on 
a regular basis.

Both (correct) code snippets I have provided are considerably smaller 
than your solution and both are more general. And my second solution 
still provides a way to correct a wrong parameter at runtime at no 
additional cost. I regard this a feature, not a problem.

Assume that you need to write a program that has only vague 
specifications and requires a high level of such flexibility in its 
specs. _I_ would fire the programmer who would insist on using a language 
that requires him to program all this flexibility by hand, and 
especially wants to see a specification for "unforeseen problems", 
whether formal or not.

See http://www.dilbert.com/comics/dilbert/archive/dilbert-20031025.html ;)


Pascal

P.S.: Please always remember the original setting of this thread. The 
original question was something along the lines of "why on 
earth would one not want a static type system?" I don't want to prove 
the general superiority of dynamic type checking over static type 
checking. Heck, there isn't even a general superiority of typing over 
no-typing-at-all. All these approaches have their respective advantages 
and disadvantages, and should be used in the right circumstances.

Yes, you can clearly tell from my postings that I prefer dynamic typing 
over static typing. This in turn means that I am probably not a good fit 
for projects that require a strong static approach. And everyone who 
prefers static typing is probably not a good fit for projects that 
require a strong dynamic approach. So what? All of us who participate in 
this discussion are probably not good fits for writing hardware drivers. 
   ;)

Everyone should do what they are best at, and everyone should use the 
tools that fit their style most.

But why on earth should anyone want to prove that their own preferred 
_personal_ style is generally superior to all others?



"If I have seen farther than others, it is because I was
standing on the shoulders of giants."

--- Isaac Newton


"In computer science, we stand on each other's feet."

--- Brian K. Reed


P.P.S.: And I still think that soft typing is the best compromise. It's 
the only approach I know of that has the potential to switch styles 
during the course without the need to completely start from scratch.
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <a9st61-0nh.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> Dirk Thierbach wrote:

> Sidenote: The code above has a bug. Here is the correct version:
> 
> (defun f (x)
>   (if (< x 200)
>     (* x 2)
>     (progn
>       (cerror "Type another number"
>               "You have typed a wrong number")
>       (print '>)
>       (f (read)))))
> 
> (No one spotted this bug before. All transliterations I have seen so far 
> have silently corrected this bug. 

Because all people who did transliterations guessed what the program
does, instead of testing it directly. I don't have a Lisp system here,
so I couldn't try it out.

> It is interesting to note that it probably wasn't the type systems
> that corrected it.)

While doing the transliteration, I actually had quite a few attempts
that didn't work, and all of them had type errors. Once the typing
was right, everything worked. (And I had to put the continuation in,
because, as I said, non-local control transfers don't translate
one-to-one.)

>>>>The type system might test too many cases.

> I think you have a restricted view of what "type" means. 

Maybe. It sure would help if you'd tell me your view instead of having
me guess :-) For me, a type is a certain class of values, and a static
type is given by a limited language describing such classes. A
dynamic type is a tag that is associated with a value. Arbitrary
classes of values (like "all reals less than 200") are not a type.

> Here is the same program written in a "type-friendly" way.

I'm not sure what's "type-friendly" about it. It uses a dynamic check
with a mix of dynamic type checking and value checking, yes. (I guess
"(real * 200)" means "a real number below 200", does it?)

> (Again, in standard ANSI Common Lisp. Note that the type is defined
> inside of f and not visible to the outside.)

You do not define any type, you do a dynamic check.

> (defun f (x)
>   (check-type x (real * 200))
>   (* x 2))

[...]
> You have used CPS - that's a nice solution. However, you obviously 
> needed to make the type declaration for f :: Integer -> IO (Integer)

In this case, you're right, but I am not sure if you know the reason
for it :-) (I usually make type annotations for the same reason you
write unit tests, i.e. almost everywhere unless the function is
trivial) So why do I have to make it here? Is the type annotation for
cerror also necessary?

> What you actually did is: you have assigned a "broad" type statically, 
> and then you revert to a manual dynamic check to make the fine-grained 
> distinction.

If you mean by "broad" type the static integer type, yes. I have to
assign some type. If I don't want to restrict it to integers, I assign
a more general type (like the Number type I used in the other example).

Dynamic type checks either can be dropped because static type
checking makes them unnecessary, or they translate to similar dynamic
checks in the other language.

Since you cannot statically verify that a user-supplied value is less
than 200, this check becomes a dynamic check (what else should it
translate to?)
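
(As an illustration of that translation, here is a minimal Haskell
sketch; the wrapper type `Below200` and its constructor are made up for
the example. The range test from `(real * 200)` becomes a dynamic check
inside a smart constructor, while the wrapper type itself is all the
static checker ever sees.)

```haskell
-- Hypothetical wrapper: values can only be built through the
-- runtime-checked constructor below.
newtype Below200 = Below200 Integer deriving Show

-- The value test "less than 200" happens dynamically, just as in the
-- Lisp CHECK-TYPE version; the static type system merely tracks that
-- a Below200 was successfully constructed.
mkBelow200 :: Integer -> Maybe Below200
mkBelow200 x
  | x < 200   = Just (Below200 x)
  | otherwise = Nothing

double :: Below200 -> Integer
double (Below200 x) = x * 2
```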

> In both Lisp versions, you have exactly one place where you check the 
> (single) type. 

You have one place where you do the dynamic check, I have one place where
I do the dynamic check. The static type is only there because there
has to be a type. It really doesn't matter which one I use statically.
The static type check does not play any role for the dynamic check.

> A static type system cannot provide this kind of granularity. 

I am sorry, but this is nonsense (and it isn't really useful either).

This is like saying "I don't write unit tests in Lisp, because I have
a single place inside my program where I can check everything that
can go wrong."

> I regard the distinction between dynamic type errors and runtime errors 
> to be an artificial one, and in fact a red herring. I would rather call 
> the former type _exceptions_. 

Fine with me. I would go further and say that without dynamic
type checking, there are in fact only exceptions.

> Both (correct) code snippets I have provided are considerably smaller 
> than your solution and both are more general. And my second solution 
> still provides a way to correct a wrong parameter at runtime at no 
> additional cost. I regard this a feature, not a problem.

I would debate both points, but this is about static typing, not about
comparisons a la "in my language I can write programs that are
two lines shorter than yours."

> P.S.: Please always remember the original setting of this thread. The 
> original question was something along the lines of "why on 
> earth would one not want a static type system?" 

The answer to this is of course "You use what you like best. End of
story."

What I wanted to do was to show that the claim "There are things that
are easy to do in a dynamically typed language, but one cannot do them
conveniently in a statically typed language, because they won't
typecheck. Hence, statically typed languages are less expressive" is
wrong in all but very very rare situations. Of course the amount of
convenience with which you can do something varies from language to
language, and with the available libraries, but static typing (if done
properly) is never a show-stopper.

> Yes, you can clearly tell from my postings that I prefer dynamic typing 
> over static typing. 

And that's fine with me. What I am a bit allergic to is making
unjustified general claims about static typing (or any other things,
for that matter) that are just not true.

> But why on earth should anyone want to prove that their own preferred 
> _personal_ style is generally superior to all others?

I don't know. As far as I am concerned, this thread was never about
"superiority", and I said so several times.

- Dirk
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnhkqm$kd8$1@newsreader2.netcologne.de>
Dirk Thierbach wrote:

>>I think you have a restricted view of what "type" means. 
> 
> 
> Maybe. It sure would help if you'd tell me your view instead of having
> me guess :-) For me, a type is a certain class of values, and a static
> type is given by a limited language describing such classes. A
> dynamic type is a tag that is associated with a value. Arbitrary
> classes of values (like "all reals less than 200") are not a type.

What are "non-arbitrary classes of values"?

According to your criteria, (real * 200) is
- a certain class of values
- given in a limited language describing that class

It would be interesting to see a convincing definition of the term "type" 
that precludes (real * 200), and similar type specifications.

Most of your other comments depend on this, so I don't comment on them 
in detail.

(Maybe it helps to explain that CHECK-TYPE doesn't check a value per se. 
(check-type 5 (real * 200)) doesn't work. CHECK-TYPE checks a property 
of the variable it is being passed.)

>>P.S.: Please always remember the original setting of this thread. The
>>original question was something along the lines of "why on 
>>earth would one not want a static type system?" 
> 
> The answer to this is of course "You use what you like best. End of
> story."

Fine. (Really.)

> What I wanted to do was to show that the claim "There are things that
> are easy to do in a dynamically typed language, but one cannot do them
> conveniently in a statically typed language, because they won't
> typecheck. Hence, statically typed languages are less expressive" is
> wrong in all but very very rare situations. Of course the amount of
> convenience with which you can do something varies from language to
> language, and with the available libraries, but static typing (if done
> properly) is never a show-stopper.

"very rare situations"
"never a show-stopper"

How do you know?

If you are only talking about your personal experiences, that's fine, 
but you should say so.

>>Yes, you can clearly tell from my postings that I prefer dynamic typing 
>>over static typing. 
> 
> And that's fine with me. What I am a bit allergic to is making
> unjustified general claims about static typing (or any other things,
> for that matter) that are just not true.

The only claim I make is that static type systems need to reject 
well-behaved programs. That's an objective truth.

This implies that there is a trade-off involved. That's also an 
objective truth.

You choose to downplay the importance of dynamic type checking. All I 
hear is that you (and many others) say that the disadvantages of static 
typing are negligible. However, I haven't found any convincing arguments 
for that claim. This claim is simply repeated again and again and again, 
ad infinitum. But how do you actually _justify_ that claim?

"It doesn't matter in practice." is not a valid response! Why do think 
it doesn't matter in practice? "That's my personal experience." is not a 
valid response either. The claim suggests to be valid for a much broader 
scale than just your personal experience. Why do you think your personal 
experience translates well (or should translate well) to other people?

If it's a personal, subjective choice, that's fine with me. Great! Go 
on, use what helps you most.

But I am interested in the question why you (or others) think that 
almost all software should be developed like that. This is a very strong 
claim, and definitely deserves more justification than "well, I guess 
that's better".

I have chosen to illustrate examples in which a dynamic approach might 
be considerably better. I am decidedly not trying to downplay static 
typing. It can be a rational choice to use a statically typed language 
in specific cases. But the claim that static typing is almost always the 
better choice is irrational if it is not based on empirical evidence.

Again, to make this absolutely clear, it is my personal experience that 
dynamic type checking is in many situations superior to static type 
checking. But I don't ask anyone to unconditionally use dynamically 
typed languages.


Pascal
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <30dv61-ni5.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> Dirk Thierbach wrote:

>> Maybe. It sure would help if you'd tell me your view instead of having
>> me guess :-) For me, a type is a certain class of values, and a static
>> type is given by a limited language describing such classes. A
>> dynamic type is a tag that is associated with a value. Arbitrary
>> classes of values (like "all reals less than 200") are not a type.
> 
> What are "non-arbitrary classes of values"?

Those that can be described by the language available for static types
e.g. in Haskell or OCaml.

> According to your criteria, (real * 200) is
> - a certain class of values
> - given in a limited language describing that class

Yes, but you will have a hard time developing a static type checker
that will work with such a language.

> It would be interesting to see a convincing definition of the term "type" 
> that precludes (real * 200), and similar type specifications.

Look at the definition for type in Haskell or OCaml, for example.
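
(For illustration, here is roughly what that definition language looks
like in Haskell; the names are invented for the example. It describes
sets of values by their structure, and a value range such as
`(real * 200)` is simply not expressible in it.)

```haskell
-- Haskell's type language builds types from structure:
-- sums, products, type parameters ...
data Shape = Circle Double | Rectangle Double Double

-- ... and wrappers around existing types.
newtype Name = Name String

-- But there is no way to state "all reals less than 200" as a type;
-- such a constraint can only live in a runtime check.
area :: Shape -> Double
area (Circle r)      = pi * r * r
area (Rectangle w h) = w * h
```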

> (Maybe it helps to explain that CHECK-TYPE doesn't check a value per se. 
> (check-type 5 (real * 200)) doesn't work. CHECK-TYPE checks a property 
> of the variable it is being passed.)

Yes, I know. What does that explain?

> "very rare situations"
> "never a show-stopper"
> 
> How do you know?
> 
> If you are only talking about your personal experiences, that's fine, 
> but you should say so.

Of course I am talking from personal experience, like everyone does.
There is no other way. But in this case, I think my experience is 
sufficient to say that.

> The only claim I make is that static type systems need to reject 
> well-behaved programs. That's an objective truth.

This depends on the definition of "well-behaved". The claim I make is
that for a suitable definition of "well-behaved", it is not an
objective truth. And even if you stick to your definition of
"well-behaved", it doesn't really matter in practice.

> All I hear is that you (and many others) say that the disadvantages
> of static typing are negligible. However, I haven't found any
> convincing arguments for that claim.

What kind of arguments would you like to have? I have tried to
show with a few examples that even programs that you think should
be rejected with static typing will be accepted (if you allow for
the fact that they are written in a different language).

What else is there I could possibly do? The simplest way is probably
that you just sit down and give it a try. No amount of talking can
replace personal experience. Get Haskell or OCaml and do a few simple
examples. You will find that they have many things that you won't
like, e.g. no good IDE, no "eval", and (as every language) they
require a slightly different mindset compared to what you're used
to. They might also not have the library functions that you are used
to. But it might give you a good idea what programs are statically
typeable and what are not.

> But I am interested in the question why you (or others) think that 
> almost all software should be developed like that. 

I didn't say that. Please do not put up a strawman. In fact, I 
explicitly said "you use whatever tool you like best".

> I have chosen to illustrate examples in which a dynamic approach might 
> be considerably better. 

And you didn't convince me; all your examples can be statically
typed.

> Again, to make this absolutely clear, it is my personal experience
> that dynamic type checking is in many situations superior to static
> type checking.

That's maybe the important point. HOW DO YOU KNOW IF YOU HAVE NEVER
TRIED IT? (In a language with good static typing, not in a language
with lousy static typing). And obviously you haven't tried it,
otherwise you wouldn't give examples that can be easily statically
typed, or confuse exceptions or dynamic checks with static type checks.
So it cannot come from your personal experience.

> But I don't ask anyone to unconditionally use dynamically typed
> languages.

But you insist that dynamically typed languages are "better" or
"superior to" statically typed, because you claim you cannot do things
in a statically typed language that you can do in a dynamically typed
one. That's the point where I disagree. I don't ask you to use a
statically typed language, I just want you to admit that both
are equally good in this respect, or at least you should sit down and
verify that yourself before saying it.

- Dirk
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnj8uq$v2m$1@f1node01.rhrz.uni-bonn.de>
Dirk Thierbach wrote:
> Pascal Costanza <········@web.de> wrote:
> 
>>Dirk Thierbach wrote:
> 
> 
>>>Maybe. It sure would help if you'd tell me your view instead of having
>>>me guess :-) For me, a type is a certain class of values, and a static
>>>type is given by a limited language describing such classes. A
>>>dynamic type is a tag that is associated with a value. Arbitrary
>>>classes of values (like "all reals less than 200") are not a type.
>>
>>What are "non-arbitrary classes of values"?
> 
> Those that can be described by the language available for static types
> e.g. in Haskell or OCaml.

This sounds like a circular definition.

>>According to your criteria, (real * 200) is
>>- a certain class of values
>>- given in a limited language describing that class
> 
> Yes, but you will have a hard time developing a static type checker
> that will work with such a language.

I am not asking for a definition of the term "static type", but for a 
definition of the term "type".

>>It would be interesting to see a convincing definition of the term "type" 
>>that precludes (real * 200), and similar type specifications.
> 
> Look at the definition for type in Haskell or OCaml, for example.

Haskell: "An expression evaluates to a value and has a static type." 
(http://www.haskell.org/onlinereport/intro.html#sect1.3 )

Where is the definition for "type"? (without "static"?)

I haven't found a definition in http://caml.inria.fr/ocaml/htmlman/index.html

>>(Maybe it helps to explain that CHECK-TYPE doesn't check a value per se. 
>>(check-type 5 (real * 200)) doesn't work. CHECK-TYPE checks a property 
>>of the variable it is being passed.)
> 
> Yes, I know. What does that explain?

Let's first get our terminology right.

>>The only claim I make is that static type systems need to reject 
>>well-behaved programs. That's an objective truth.
> 
> This depends on the definition of "well-behaved". The claim I make is
> that for a suitable definition of "well-behaved", it is not an
> objective truth. And even if you stick to your definition of
> "well-behaved", it doesn't really matter in practice.

"well-behaved" means "doesn't show malformed behavior at runtime", i.e. 
especially "doesn't core dump".

"Behavior" is a term that describes dynamic processes. I am only 
interested in dynamic behavior here.

I don't mind if you want to change that terminology. Let's just rephrase 
it: Static type systems need to reject programs that wouldn't 
necessarily fail in serious ways at runtime.

>>All I hear is that you (and many others) say that the disadvantages
>>of static typing are negligible. However, I haven't found any
>>convincing arguments for that claim.
> 
> What kind of arguments would you like to have? I have tried to
> show with a few examples that even programs that you think should
> be rejected with static typing will be accepted (if you allow for
> the fact that they are written in a different language).

Yes, for some of them.

>>But I am interested in the question why you (or others) think that 
>>almost all software should be developed like that. 
> 
> I didn't say that. Please do not put up a strawman. In fact, I 
> explicitly said "you use whatever tool you like best".

But that was the original question that initiated this thread. If we 
have an agreement here, that's perfect!

>>I have chosen to illustrate examples in which a dynamic approach might 
>>be considerably better. 
> 
> And you didn't convince me; all your examples can be statically
> typed.

What about the example in 
http://groups.google.com/groups?selm=bnf688%24esd%241%40newsreader2.netcologne.de 
?

I don't think this can be done without a serious rewrite.

>>Again, to make this absolutely clear, it is my personal experience
>>that dynamic type checking is in many situations superior to static
>>type checking.
> 
> That's maybe the important point. HOW DO YOU KNOW IF YOU HAVE NEVER
> TRIED IT? (In a language with good static typing, not in a language
> with lousy static typing). And obviously you haven't tried it,
> otherwise you wouldn't give examples that can be easily statically
> typed, or confuse exceptions or dynamic checks with static type checks.
> So it cannot come from your personal experience.

Right, it comes from a more principled consideration: You can't have 
metacircularity in a statically typed language. You might be able to have 
metacircularity if you strictly separate the stages, but as soon as you 
want to be able to at least occasionally call base code from meta code 
and vice versa, then you lose.

Metacircularity gives me the guarantee that I can always code around 
any unforeseeable limitations that might come up, without having to 
start from scratch.

So, yes, I am interested in having the opportunity to change invariants 
during the runtime of a program. This might sound self-contradictory, 
but in practice it isn't. Remember, "One man's constant is another man's 
variable." (see http://www-2.cs.cmu.edu/afs/cs.cmu.edu/Web/csd/perlis.html )

>>But I don't ask anyone to unconditionally use dynamically typed
>>languages.
> 
> But you insist that dynamically typed languages are "better" or
> "superior to" statically typed, because you claim you cannot do things
> in a statically typed language that you can do in a dynamically typed
> one. That's the point where I disagree. I don't ask you to use a
> statically typed language, I just want you to admit that both
> are equally good in this respect, or at least you should sit down and
> verify that yourself before saying it.

I haven't said that I can do more things in a dynamically typed 
language. I have said that statically typed languages need to reject 
well-behaved programs. That's a different claim. We are not talking 
about Turing equivalence.

If a base program calls its meta program and changes types, you can't 
type check such a program by definition.

For example:

(defun check (x)
   (integerp x))

(defun example-1 ()
   (let ((x 5))
     (assert (check x))
     (print 'succeeded)
     (eval '(defun check (x)
              (stringp x)))))

Now, this might seem nonsensical, but consider this:

(defun example-2 ()
   (eval '(defun check (x)
            (realp x)))
   (example-1))



Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20tv61-ike.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> Dirk Thierbach wrote:

>>>According to your criteria, (real * 200) is
>>>- a certain class of values
>>>- given in a limited language describing that class

>> Yes, but you will have a hard time developing a static type checker
>> that will work with such a language.

> I am not asking for a definition of the term "static type", but for a 
> definition of the term "type".

I am happy with a definition of "type" that allows arbitrary sets of
values, but how does this apply to static typing? Or dynamic type
checking? I was talking all the time about a definition of a "static
type", because that is what is relevant here.

Philosophically, there are a lot more sensible definitions for type,
but how does one such definition relate to our discussion?

> Haskell: "An expression evaluates to a value and has a static type." 
> (http://www.haskell.org/onlinereport/intro.html#sect1.3 )
> 
> Where is the definiton for "type"? (without "static"?)

There is none, because that is not relevant.

> Let's first get our terminology right.

Maybe we should also agree on what we want to use the terminology for.

> I don't mind if you want to change that terminology. Let's just
> rephrase it: Static type systems need to reject programs that
> wouldn't necessarily fail in serious ways at runtime.

I think I already agreed with that several times, didn't I?

But then you also have to add: "even if they won't necessarily fail,
nearly all of them won't be well-behaved".

>>>But I am interested in the question why you (or others) think that 
>>>almost all software should be developed like that. 

>> I didn't say that. Please do not put up a strawman. In fact, I 
>> explicitly said "you use whatever tool you like best".

> But that was the original question that initiated this thread. If we 
> have an agreement here, that's perfect!

Finally. *Sigh*. Why do I have to repeat that multiple times?

> What about the example in 
> http://groups.google.com/groups?selm=bnf688%24esd%241%40newsreader2.netcologne.de 
> ?

> I don't think this can be done without a serious rewrite.

The problem here is that the object system of CLOS and OCaml is
quite different, and Haskell has no objects at all. So you cannot
directly transfer that example to those languages. Not because they
are statically typed, but because they are different. It probably 
wouldn't be possible to do exactly the same in another arbitrary 
dynamically typed language, either.

But you can trivially simulate the *effects* of your program once
you accept that such a simulation need not use classes.

Please don't mix up differences that are due to static vs. dynamic
typing with differences that are due to other language features.

> Right, it comes from a more principled consideration: You can't have 
> metacircularity in a statically typed language. You might be able to have 
> metacircularity if you strictly separate the stages, but as soon as you 
> want to be able to at least occasionally call base code from meta code 
> and vice versa, then you lose.

But you don't need metacircularity, because then you simply solve
your problem in a different way. 

> Metacircularity gives me the guarantee that I can always code around 
> any unforeseeable limitations that might come up, without having to 
> start from scratch.

You can also create very subtle bugs that are difficult to find.

> I haven't said that I can do more things in a dynamically typed
> language. I have said that statically typed languages need to reject 
> well-behaved programs. That's a different claim. We are not talking 
> about Turing equivalence.

Neither am I talking about Turing equivalence. 

But if you can agree that it is not harder to express something in a
(properly) statically typed language than to express it in a
dynamically typed language, then we can stop the discussion here.

What I want is for you to give up the point of view that dynamically
typed languages have an *advantage* because they are dynamically typed.

- Dirk
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9f87a6$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>You can't have metacircularity in a statically typed language.

Could you explain exactly what you mean by "metacircularity"?

Anyway, I'm skeptical of this claim.  At the very least, it should be possible
to have a language which is mostly statically typed (i.e. statically
typed by default), even if on some occasions you have to fall back to
dynamic typing. 

Whether or not any existing statically typed language implementations
support this sort of thing is another question...

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Dirk Thierbach
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <2npo61-9c7.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> Brian McNamara! wrote:

>>> Have you ever used a program that has required you to enter a
>>> number?  The check whether you have really typed a number is a
>>> dynamic check, right?

This dynamic check doesn't necessarily translate to a type check.
Not everything boils down to types just because you're using "numbers".
As Brian has shown, you can use a type that says "maybe it is a
number, maybe it isn't". And that is statically safe, even if we
cannot decide at compile-time if it will be a number or not.

>> Yes, but this doesn't imply we can't write a statically-typed program to
>> handle the situation...

> The code you have given above doesn't give the user any feedback, right? 
> Do you really think that this is useful?

There's no problem with adding feedback, other than you have now to
package up the whole thing into the IO monad, because you have side
effects. The important point is the "Maybe" type. With a function like

multiread :: IO (Maybe Int)

you can write code as in

f = do z <- multiread
       let y = do x <- z 
                  return (x * 2) 
       return y

The outer "do" handles the IO monad, the inner "do" will ignore the
calculation if there is no value available. Here's a (very quickly
written, and very bad; I am sure it can be done with more elegance)
implementation of multiread:

myread :: String -> IO (Maybe Int) -> IO (Maybe Int)
myread "" _       = return Nothing
myread s failcont = result (reads s) where
  result [(x, "")] = return (Just x)
  result _         = failcont

multiread :: IO (Maybe Int)
multiread = do
  s <- getLine
  myread s (print "Reenter number, or press Enter to abort" >> multiread)


- Dirk
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <zvhmb.20799$HS4.73333@attbi_s01>
"Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
> Marshall Spight wrote:
> > "Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
> >
> >>... there exist programs that work but
> >>that cannot be statically typechecked. These programs objectively exist.
> >>By definition, I cannot express them in a statically typed language.
> >
> > I agree these programs exist.
> >
> > It would be really interesting to see a small but useful example
> > of a program that will not pass a statically typed language.
> > It seems to me that how easy it is to generate such programs
> > will be an interesting metric.
> >
> > Anyone? (Sorry, I'm a static typing guy, so my brain is
> > warped away from such programs. :-)
>
> Have you ever used a program that has required you to enter a number?
>
> The check whether you have really typed a number is a dynamic check, right?

This is not an example of what I requested. I can easily write
a statically typed program that inputs a string, and converts
it to a number, possibly failing if the string does not parse to a number.
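For what it's worth, the conversion described here is only a few lines
in Haskell. A minimal illustrative sketch (the name parseInt is made up;
only the standard Prelude's reads is used); the possibility of failure
lives in the type as a Nothing, rather than as a runtime type error:

```haskell
-- Parse a string to an Int. The possibility of failure is in the
-- result type: Maybe Int. reads comes from the standard Prelude.
parseInt :: String -> Maybe Int
parseInt s = case reads s of
               [(n, "")] -> Just n    -- entire string parsed as a number
               _         -> Nothing   -- empty, trailing junk, or no number
```

So parseInt "42" gives Just 42, while parseInt "42x" gives Nothing.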

I was asking for a small, useful program that *cannot* be written
in a statically compiled language (i.e., that cannot statically
be proven type-correct.) I'd be very interested to see such
a thing.


Marshall
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnc8rk$pvk$1@f1node01.rhrz.uni-bonn.de>
Marshall Spight wrote:

>>>It would be really interesting to see a small but useful example
>>>of a program that will not pass a statically typed language.
>>>It seems to me that how easy it is to generate such programs
>>>will be an interesting metric.
>>>
>>>Anyone? (Sorry, I'm a static typing guy, so my brain is
>>>warped away from such programs. :-)
>>
>>Have you ever used a program that has required you to enter a number?
>>
>>The check whether you have really typed a number is a dynamic check, right?
> 
> 
> This is not an example of what I requested. I can easily write
> a statically typed program that inputs a string, and converts
> it to a number, possibly failing if the string does not parse to a number.

...and what does it do when it fails?

> I was asking for a small, useful program that *cannot* be written
> in a statically compiled language (i.e., that cannot statically
> be proven type-correct.) I'd be very interested to see such
> a thing.

I have given this example in another post. Please bear in mind that 
expressive power is not the same thing as Turing equivalence.

I have given an example of a program that behaves well and cannot be 
statically typechecked. I don't need any more evidence than that for my 
point. If you ask for more then you haven't gotten my point.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <S1imb.21014$HS4.73578@attbi_s01>
"Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
> Marshall Spight wrote:
>
> >>>It would be really interesting to see a small but useful example
> >>>of a program that will not pass a statically typed language.
> >>>It seems to me that how easy it is to generate such programs
> >>>will be an interesting metric.
> >>>
> >>>Anyone? (Sorry, I'm a static typing guy, so my brain is
> >>>warped away from such programs. :-)
> >>
> >>Have you ever used a program that has required you to enter a number?
> >>
> >>The check whether you have really typed a number is a dynamic check, right?
> >
> >
> > This is not an example of what I requested. I can easily write
> > a statically typed program that inputs a string, and converts
> > it to a number, possibly failing if the string does not parse to a number.
>
> ...and what does it do when it fails?

What it does when it fails is irrelevant to my request
for someone to come up with a small, useful program
that cannot be written in a statically typed language,
since the program you describe can easily be written
in any statically typed language I'm aware of.

I'm perfectly aware of the fact that statically typed languages
have some runtime checks. This is a feature that static
and dynamic languages have in common, so I don't
see what you might be trying to get at.


> > I was asking for a small, useful program that *cannot* be written
> > in a statically compiled language (i.e., that cannot statically
> > be proven type-correct.) I'd be very interested to see such
> > a thing.
>
> I have given this example in another post.

I'm very sorry, but I didn't see it. Could you help me find it?


> Please bear in mind that
> expressive power is not the same thing as Turing equivalence.

No prob.


> I have given an example of a program that behaves well and cannot be
> statically typechecked. I don't need any more evidence than that for my
> point. If you ask for more then you haven't gotten my point.

I'm not asking for more; I'm asking to see the program you're referring to.

Also, I was under the impression that this subthread is about
*my* point, which was a request for a small, useful program
that cannot be written in a statically typed language.


Marshall
From: ·············@comcast.net
Subject: More static type fun.
Date: 
Message-ID: <r80z79yk.fsf_-_@comcast.net>
"Marshall Spight" <·······@dnai.com> writes:

> It would be really interesting to see a small but useful example
> of a program that will not pass a statically typed language.
> It seems to me that how easy it is to generate such programs
> will be an interesting metric.

(defun foo (f)
  (funcall (funcall f #'+) 
           (funcall f 3)
           (funcall f 2)))

(defun test1 ()
  (foo (lambda (thing)
         (format t "~&--> ~s" thing)
         thing)))

(defun test2 ()
  (foo (lambda (thing)
         (if (eq thing #'+)
             #'*
             thing))))

(defun transpose-tensor (tensor)
  (apply #'mapcar #'mapcar (list #'list #'list) tensor))

(defun test3 ()
  (transpose-tensor '(((1 2 3)
                       (4 5 6))
                      ((a b c)
                       (d e f)))))
From: Stephen J. Bevan
Subject: Re: More static type fun.
Date: 
Message-ID: <m3fzhfl8k8.fsf@dino.dnsalias.com>
·············@comcast.net writes:
> "Marshall Spight" <·······@dnai.com> writes:
> > It would be really interesting to see a small but useful example
> > of a program that will not pass a statically typed language.
> > It seems to me that how easy it is to generate such programs
> > will be an interesting metric.
> 
> (defun foo (f)
>   (funcall (funcall f #'+) 
>            (funcall f 3)
>            (funcall f 2)))
> 
> (defun test1 ()
>   (foo (lambda (thing)
>          (format t "~&--> ~s" thing)
>          thing)))
> 
> (defun test2 ()
>   (foo (lambda (thing)
>          (if (eq thing #'+)
>              #'*
>              thing))))

test2 relies on some kind of equality being defined over functions.
Some (statically typed) languages do not support that (for reasons
other than static typing).
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <ismaraju.fsf@ccs.neu.edu>
·······@dino.dnsalias.com (Stephen J. Bevan) writes:

> ·············@comcast.net writes:
>> "Marshall Spight" <·······@dnai.com> writes:
>> > It would be really interesting to see a small but useful example
>> > of a program that will not pass a statically typed language.
>> > It seems to me that how easy it is to generate such programs
>> > will be an interesting metric.
>> 
>> (defun foo (f)
>>   (funcall (funcall f #'+) 
>>            (funcall f 3)
>>            (funcall f 2)))
>> 
>> (defun test1 ()
>>   (foo (lambda (thing)
>>          (format t "~&--> ~s" thing)
>>          thing)))
>> 
>> (defun test2 ()
>>   (foo (lambda (thing)
>>          (if (eq thing #'+)
>>              #'*
>>              thing))))
>
> test2 relies on some kind of equality being defined over functions.
> Some (statically typed) languages do not support that (for reasons
> other than static typing).

Ok, so create a wrapper that has identity and can be invoked.
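One possible shape for such a wrapper, sketched in Haskell (the names
Tagged and applyTagged are invented for illustration): carry an explicit
tag for identity, and compare only the tags, never the functions.

```haskell
-- Hypothetical wrapper: a function paired with an integer tag.
-- Identity is the tag; the wrapped function is never compared.
data Tagged a b = Tagged Int (a -> b)

-- Equality of wrappers is equality of tags.
instance Eq (Tagged a b) where
  Tagged i _ == Tagged j _ = i == j

-- The wrapper can still be invoked.
applyTagged :: Tagged a b -> a -> b
applyTagged (Tagged _ f) = f
```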
From: Stephen J. Bevan
Subject: Re: More static type fun.
Date: 
Message-ID: <m37k2qkl2u.fsf@dino.dnsalias.com>
Joe Marshall <···@ccs.neu.edu> writes:
> > test2 relies on some kind of equality being defined over functions.
> > Some (statically typed) languages do not support that (for reasons
> > other than static typing).
> 
> Ok, so create a wrapper that has identity and can be invoked.

Doing that requires defining some kind of datatype that identity
can be defined over, and in a previous example where a datatype was
defined you were less enthusiastic about that approach
(cf. <············@comcast.net>).
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <ism77v4j.fsf@ccs.neu.edu>
·······@dino.dnsalias.com (Stephen J. Bevan) writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>> > test2 relies on some kind of equality being defined over functions.
>> > Some (statically typed) languages do not support that (for reasons
>> > other than static typing).
>> 
>> Ok, so create a wrapper that has identity and can be invoked.
>
> Doing that requires defining some kind of datatype that identity
> can be defined over, and in a previous example where a datatype was
> defined you were less enthusiastic about that approach
> (cf. <············@comcast.net>).

I can't (for some reason) locate that message with google.

The point of test2 was to demonstrate that F needed to be a polytype
function.  It was used with an int argument and with an int->int
argument.  In addition, when presented with a particular int->int 
argument it needed to return a *different* int->int argument (this was
to throw a monkey wrench in for all those compilers that might
have noticed that F(x) => x for all x).

The nature of the test didn't require that F was exactly int->int,
but rather that F was used in two different ways, so creating a wrapper
object that could be projected onto int->int would not materially
change the test.
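In Haskell terms this is a rank-2 situation: FOO's argument must itself
be polymorphic, because it is applied to (+) as well as to 3 and 2. A
sketch using GHC's RankNTypes extension, which types the pass-through
use; test2's dispatch on whether the argument is #'+ is exactly what the
type forall a. a -> a cannot express, which is where the wrapper or
projection comes in:

```haskell
{-# LANGUAGE RankNTypes #-}

-- f is used at several types at once: applied to (+), to 3, and to 2,
-- so it must itself be polymorphic (a rank-2 type).
foo :: (forall a. a -> a) -> Int
foo f = f (+) (f 3) (f 2)

-- The pass-through case of test1 (minus the printing, which would
-- need IO) collapses to the identity:
test1 :: Int
test1 = foo id   -- evaluates (+) 3 2
```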

(It is a bit disturbing to see the words `some kind of datatype that
identity can be defined over', however.  I would assume that identity
need not be defined *over* any datatype, it ought to work for any
expressable value.)
From: Mark Carroll
Subject: Re: More static type fun.
Date: 
Message-ID: <4pF*4Fa6p@news.chiark.greenend.org.uk>
In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
>·······@dino.dnsalias.com (Stephen J. Bevan) writes:
(snip)
>> (cf. <············@comcast.net>).
>
>I can't (for some reason) locate that message with google.

http://www.google.com/groups?selm=3cdgy6wz.fsf%40comcast.net

HTH.

-- Mark
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <7k2neok1.fsf@ccs.neu.edu>
Mark Carroll <·····@chiark.greenend.org.uk> writes:

> In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
>>·······@dino.dnsalias.com (Stephen J. Bevan) writes:
> (snip)
>>> (cf. <············@comcast.net>).
>>
>>I can't (for some reason) locate that message with google.
>
> http://www.google.com/groups?selm=3cdgy6wz.fsf%40comcast.net
>
> HTH.

Thanks.
From: Stephen J. Bevan
Subject: Re: More static type fun.
Date: 
Message-ID: <m3he1rju45.fsf@dino.dnsalias.com>
Joe Marshall <···@ccs.neu.edu> writes:
> The point of test2 was to demonstrate that F needed to be a polytype
> function.  It was used with an int argument and with an int->int
> argument.  In addition, when presented with a particular int->int 
> argument it needed to return a *different* int->int argument (this was
> to throw a monkey wrench in for all those compilers that might
> have noticed that F(x) => x for all x).

I understood the point of the test; I just pointed out that since
equality isn't defined over functions in some languages,
*some* kind of change would have to be made to allow for that.
Since you didn't proffer any modified Common Lisp that used a wrapper
it wasn't clear what kind of wrapper would be acceptable.


> (It is a bit disturbing to see the words `some kind of datatype that
> identity can be defined over', however.  I would assume that identity
> need not be defined *over* any datatype, it ought to work for any
> expressable value.)

Perhaps we have a terminology problem.  If I define a structure (or
record, call it what you will) in a language that doesn't have any
kind of implicit "pointer/address equality" over values then equality
over that structure is either hard-wired in the language (say
recursive structural equivalence on each field) or the language allows
the user to define equality on a per structure basis (again in terms
of the fields).
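Both options exist in Haskell, for what it's worth (the types here are
invented for illustration): deriving Eq gives the hard-wired recursive
structural equivalence, and a hand-written instance gives equality
defined on a per-structure basis.

```haskell
-- Option 1: hard-wired recursive structural equality on each field.
data Point = Point Int Int
  deriving (Eq, Show)

-- Option 2: user-defined equality on a per-structure basis;
-- here two values count as "equal" when their names match.
data Named = Named String Int

instance Eq Named where
  Named n1 _ == Named n2 _ = n1 == n2
```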
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <7k2mr4a2.fsf@ccs.neu.edu>
·······@dino.dnsalias.com (Stephen J. Bevan) writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>>
>> (It is a bit disturbing to see the words `some kind of datatype that
>> identity can be defined over', however.  I would assume that identity
>> need not be defined *over* any datatype, it ought to work for any
>> expressable value.)
>
> Perhaps we have a terminology problem.  If I define a structure (or
> record, call it what you will) in a language that doesn't have any
> kind of implicit "pointer/address equality" over values then equality
> over that structure is either hard-wired in the language (say
> recursive structural equivalence on each field) or the language allows
> the user to define equality on a per structure basis (again in terms
> of the fields).

Equality, sure, but identity?
From: Jesse Tov
Subject: Re: More static type fun.
Date: 
Message-ID: <slrnbq2n0k.fvl.tov@tov.student.harvard.edu>
Joe Marshall <···@ccs.neu.edu>:
> Equality, sure, but identity?

Well, in a pure and referentially transparent language such as Haskell,
identity is sort of a vacant concept.  If I write:
   a = [1, 2, 3]
   b = [1, 2, 3]
There's no program I could write [1] that can tell me whether those
share all, some, or none of their memory.  Likewise, there's no reason
why I'd ever care.

I am assuming this is what you mean by identity.  If you mean something
else, please correct me.

Jesse

[1] Actually, if I'm using the FFI, I might be able to tell and might
care, but that's because I'm talking to functions that aren't written in
Haskell and they might care.
-- 
"A hungry man is not a free man."      --Adlai Stevenson
From: Adam Warner
Subject: Object Identity [was Re: More static type fun.]
Date: 
Message-ID: <pan.2003.10.31.03.34.34.251851@consulting.net.nz>
Hi Jesse Tov,

Please don't snip all the context. I've added it back in:

>>> (It is a bit disturbing to see the words `some kind of datatype that
>>> identity can be defined over', however.  I would assume that identity
>>> need not be defined *over* any datatype, it ought to work for any
>>> expressable value.)

>> Perhaps we have a terminology problem.  If I define a structure (or
>> record, call it what you will) in a language that doesn't have any kind
>> of implicit "pointer/address equality" over values then equality over
>> that structure is either hard-wired in the language (say recursive
>> structural equivalence on each field) or the language allows the user
>> to define equality on a per structure basis (again in terms of the
>> fields).

> Joe Marshall <···@ccs.neu.edu>:
>> Equality, sure, but identity?
> 
> Well, in a pure and referentially transparent language such as Haskell,
> identity is sort of a vacant concept.  If I write:
>    a = [1, 2, 3]
>    b = [1, 2, 3]
> There's no program I could write [1] that can tell me whether those share
> all, some, or none of their memory.  Likewise, there's no reason why I'd
> ever care.
> 
> I am assuming this is what you mean by identity.  If you mean something
> else, please correct me.

Is the identity of a and b, where:

   a = [1, 2, 3]
   b = a

also a vacant concept in Haskell? Identical objects in Lisp are
conceptually the same object (most do point to the same memory location
but this is not the deciding issue because numbers and characters can be
copied at any time while still being conceptually identical). EQL is the
conceptual identity test. Do not confuse identity with equality (an
identical object will be equal but an equal object is not necessarily
identical).

If your example was translated to Lisp, A and B would probably not be
identical:

(let* ((a #(1 2 3))
       (b #(1 2 3)))
  (eql a b)) => probably nil

But if B is bound to the same object as A they _must be identical_:

(let* ((a #(1 2 3))
       (b a))
  (eql a b)) => t

Identity is not defined in relation to a data type. It is defined by the
relationship of one object to another, i.e. whether an object IS
conceptually the same object. It doesn't matter how complicated the object
is. If another object is bound or set to the same object the new object is
identical, e.g.:

(let ((a (make-hash-table))
      (b nil)
      (c nil))
  (setf a b)
  (setf b c)
  (eql a c)) => t

Clearly the concept of pervasive identity affects compiler implementation.
But even if we imagine an implementation that copied all objects at any
time it would still be possible to conceptually track identity.

Any other language could support a rigorous concept of identity. If
Haskell doesn't support object identity then you simply have to find
another way to compare function objects when translating prunesquallor's
code.

Regards,
Adam
From: Jesse Tov
Subject: Re: Object Identity [was Re: More static type fun.]
Date: 
Message-ID: <slrnbq3tvo.uq2.tov@tov.student.harvard.edu>
Adam Warner <······@consulting.net.nz>:
> Hi Jesse Tov,
> 
> Please don't snip all the context. I've added it back in:

Okay...

>>>> (It is a bit disturbing to see the words `some kind of datatype that
>>>> identity can be defined over', however.  I would assume that identity
>>>> need not be defined *over* any datatype, it ought to work for any
>>>> expressable value.)
> 
>>> Perhaps we have a terminology problem.  If I define a structure (or
>>> record, call it what you will) in a language that doesn't have any kind
>>> of implicit "pointer/address equality" over values then equality over
>>> that structure is either hard-wired in the language (say recursive
>>> structural equivalence on each field) or the language allows the user
>>> to define equality on a per structure basis (again in terms of the
>>> fields).
> 
>> Joe Marshall <···@ccs.neu.edu>:
>>> Equality, sure, but identity?
>> 
>> Well, in a pure and referentially transparent language such as Haskell,
>> identity is sort of a vacant concept.  If I write:
>>    a = [1, 2, 3]
>>    b = [1, 2, 3]
>> There's no program I could write [1] that can tell me whether those share
>> all, some, or none of their memory.  Likewise, there's no reason why I'd
>> ever care.
>> 
>> I am assuming this is what you mean by identity.  If you mean something
>> else, please correct me.
> 
> Is the identity of a and b, where:
> 
>    a = [1, 2, 3]
>    b = a
> 
> also a vacant concept in Haskell?

Yes!

My point above was that you can't tell whether I do

a = [1, 2, 3]
b = a

or

a = [1, 2, 3]
b = [1, 2, 3]

It's irrelevant, since you can't mutate them.  The compiler is free to
copy or alias in the first case, or to alias in the second case, and it
won't ever make any difference.

(Of course, it's slightly relevant in terms of memory usage, but that's
not a part of the semantics.)

> Identical objects in Lisp are
> conceptually the same object (most do point to the same memory location
> but this is not the deciding issue because numbers and characters can be
> copied at any time while still being conceptually identical). EQL is the
> conceptual identity test. Do not confuse identity with equality (an
> identical object will be equal but an equal object is not necessarily
> identical).

In impure languages, sure.  (I think it's rather unfortunate that we use the
terms "pure"/"impure"; I'd prefer something more value-neutral, wouldn't you?)

> If your example was translated to Lisp, A and B would probably not be
> identical:
> 
> (let* ((a #(1 2 3))
>        (b #(1 2 3)))
>   (eql a b)) => probably nil
>
> But if B is bound to the same object as A they _must be identical_:
> 
> (let* ((a #(1 2 3))
>        (b a))
>   (eql a b)) => t

Ah, but:
   ___         ___ _
  / _ \ /\  /\/ __(_)
 / /_\// /_/ / /  | |      GHC Interactive, version 6.0.1, for Haskell 98.
/ /_\\/ __  / /___| |      http://www.haskell.org/ghc/
\____/\/ /_/\____/|_|      Type :? for help.

Loading package base ... linking ... done.
Prelude> let a = [2, 3] 
Prelude> let b = 1 : a
Prelude> let c = 1 : a
Prelude> let d = c
Prelude> let e = [1, 2, 3]
Prelude> [b == c, c == d, b == e, c == d, c == e, d == e]
[True,True,True,True,True,True]
Prelude> 

The above will always be the case.

> Clearly the concept of pervasive identity affects compiler implementation.
> But even if we imagine an implementation that copied all objects at any
> time it would still be possible to conceptually track identity.

Hm... but if state is immutable and safe to share, why should it ever copy
anything?

> Any other language could support a rigorous concept of identity. If
> Haskell doesn't support object identity then you simply have to find
> another way to compare function objects when translating prunesquallor's
> code.

Not every language can "support a rigorous concept of identity".  It's pretty
much directly in conflict with a rigorous concept of observational equivalence.

Jesse
-- 
"A hungry man is not a free man."      --Adlai Stevenson
From: Adam Warner
Subject: Re: Object Identity [was Re: More static type fun.]
Date: 
Message-ID: <pan.2003.10.31.07.55.31.400010@consulting.net.nz>
Hi Jesse Tov,

>> Identical objects in Lisp are
>> conceptually the same object (most do point to the same memory location
>> but this is not the deciding issue because numbers and characters can be
>> copied at any time while still being conceptually identical). EQL is the
>> conceptual identity test. Do not confuse identity with equality (an
>> identical object will be equal but an equal object is not necessarily
>> identical).
> 
> In impure languages, sure.  (I think it's rather unfortunate that we use the
> terms "pure"/"impure"; I'd prefer something more value-neutral, wouldn't you?)
> 
>> If your example was translated to Lisp, A and B would probably not be
>> identical:
>> 
>> (let* ((a #(1 2 3))
>>        (b #(1 2 3)))
>>   (eql a b)) => probably nil
>>
>> But if B is bound to the same object as A they _must be identical_:
>> 
>> (let* ((a #(1 2 3))
>>        (b a))
>>   (eql a b)) => t
> 
> Ah, but:
>    ___         ___ _
>   / _ \ /\  /\/ __(_)
>  / /_\// /_/ / /  | |      GHC Interactive, version 6.0.1, for Haskell 98.
> / /_\\/ __  / /___| |      http://www.haskell.org/ghc/
> \____/\/ /_/\____/|_|      Type :? for help.
> 
> Loading package base ... linking ... done.
> Prelude> let a = [2, 3] 
> Prelude> let b = 1 : a
> Prelude> let c = 1 : a
> Prelude> let d = c
> Prelude> let e = [1, 2, 3]
> Prelude> [b == c, c == d, b == e, c == d, c == e, d == e]
> [True,True,True,True,True,True]
> Prelude> 
> 
> The above will always be the case.

(Warning: I don't know Haskell and I've only just looked at The Haskell 98
Report)

Since == appears to be an equivalence operator this isn't surprising.
These expressions would all be EQUAL in Common Lisp since you are testing
list equality, not identity.

The issue arose because it was claimed that many languages have no concept
of function identity so prunesquallor's code couldn't be directly
translated.

How would you do this in Haskell:
(defparameter *functions* (list #'+ #'* #'/))
(dolist (fn *functions*)
  (when (eql fn #'*)
    (format t "Function ~S is identical to #'*.~%" fn)))

=> Function #<Function * {10079FE9}> is identical to #'*.

*functions* is a list of three function objects. Implementations have no
readable representation for the objects but they may print something like
this (CMUCL):
(#<Function + {100CB641}> #<Function * {10079FE9}> #<Function / {10487711}>)
or this (CLISP):
(#<system-function +> #<system-function *> #<system-function />)

I iterate over the list of objects comparing their identity with the
standard multiply function.

Every object can be tested for identity, even the hairiest objects
for which writing an equivalence test would be a nightmare.
 
>> Clearly the concept of pervasive identity affects compiler
>> implementation. But even if we imagine an implementation that copied
>> all objects at any time it would still be possible to conceptually
>> track identity.
> 
> Hm... but if state is immutable and safe to share, why should it ever
> copy anything?
> 
>> Any other language could support a rigorous concept of identity. If
>> Haskell doesn't support object identity then you simply have to find
>> another way to compare function objects when translating
>> prunesquallor's code.
> 
> Not any language can "support a rigorous concept of identity".  It's
> pretty much directly in conflict with a rigorous concept of
> observational equivalence.

I don't understand this claim since identity is a subset of equivalence.
Identity has consistent semantics. In a worst-case scenario you just have
to simulate Lisp object binding in your language of choice. All identical
objects are observationally equivalent and all equal objects are
observationally equivalent. Where's the conflict?

If you can't implement examples that can determine if, say, function
objects or hash tables are identical (_and_ you don't have any operator to
test their equivalence) then you're missing functionality.

Regards,
Adam
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Object Identity [was Re: More static type fun.]
Date: 
Message-ID: <pan.2003.10.31.08.23.29.688540@knm.org.pl>
On Fri, 31 Oct 2003 20:55:42 +1300, Adam Warner wrote:

> How would you do this in Haskell:
> (defparameter *functions* (list #'+ #'* #'/))
> (dolist (fn *functions*)
>   (when (eql fn #'*)
>     (format t "Function ~S is identical to #'*.~%" fn)))

You can't compare functions in Haskell.

> If you can't implement examples that can determine if, say, function
> objects or hash tables are identical (_and_ you don't have any operator to
> test their equivalence) then you're missing functionality.

Of course you can determine if mutable objects are identical.
== on mutable object compares identity. But there is no concept of
identity of immutable objects.

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Pascal Costanza
Subject: Re: Object Identity [was Re: More static type fun.]
Date: 
Message-ID: <bnumph$n7i$2@newsreader2.netcologne.de>
Marcin 'Qrczak' Kowalczyk wrote:

> On Fri, 31 Oct 2003 20:55:42 +1300, Adam Warner wrote:
> 
> 
>>How would you do this in Haskell:
>>(defparameter *functions* (list #'+ #'* #'/))
>>(dolist (fn *functions*)
>>  (when (eql fn #'*)
>>    (format t "Function ~S is identical to #'*.~%" fn)))
> 
> 
> You can't compare functions in Haskell.
> 
> 
>>If you can't implement examples that can determine if, say, function
>>objects or hash tables are identical (_and_ you don't have any operator to
>>test their equivalence) then you're missing functionality.
> 
> 
> Of course you can determine if mutable objects are identical.
> == on mutable object compares identity. But there is no concept of
> identity of immutable objects.

Why not? This can be very useful...


Pascal
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Object Identity [was Re: More static type fun.]
Date: 
Message-ID: <pan.2003.10.31.22.42.40.637802@knm.org.pl>
On Fri, 31 Oct 2003 23:13:06 +0100, Pascal Costanza wrote:

>> Of course you can determine if mutable objects are identical.
>> == on mutable object compares identity. But there is no concept of
>> identity of immutable objects.
> 
> Why not? This can be very useful...

Why? For all other operations the objects are equivalent. Common Lisp
doesn't have the concept of identity for integers and characters and eq is
meaningless for them. Haskell has many more immutable types.

Actually I do see one application: observing sharing for serialization to
yield more compressed output. By losing sharing the result is correct but
inefficient.

Haskell has only one equality operator. It compares values of immutable
objects and identities of mutable objects (which are rare).
http://citeseer.nj.nec.com/baker93equal.html

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: ·············@comcast.net
Subject: Re: Object Identity
Date: 
Message-ID: <65i5ouj8.fsf@comcast.net>
Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> writes:

> Why?  For all other operations the objects are equivalent.  Common Lisp
> doesn't have the concept of identity for integers and characters and eq is
> meaningless for them. 

Not true.  First of all, the IDENTITY function is defined over any
Common Lisp object and it returns the object.  Second of all, the EQL
predicate is defined over all objects and works in the appropriate way
for integers and characters.
From: Pascal Costanza
Subject: Re: Object Identity [was Re: More static type fun.]
Date: 
Message-ID: <bnuqhp$t3v$1@newsreader2.netcologne.de>
Marcin 'Qrczak' Kowalczyk wrote:

> On Fri, 31 Oct 2003 23:13:06 +0100, Pascal Costanza wrote:
> 
> 
>>>Of course you can determine if mutable objects are identical.
>>>== on mutable object compares identity. But there is no concept of
>>>identity of immutable objects.
>>
>>Why not? This can be very useful...
> 
> 
> Why? For all other operations the objects are equivalent. Common Lisp
> doesn't have the concept of identity for integers and characters and eq is
> meaningless for them. Haskell has many more immutable types.
> 
> Actually I do see one application: observing sharing for serialization to
> yield more compressed output. By losing sharing the result is correct but
> inefficient.
> 
> Haskell has only one equality operator. It compares values of immutable
> objects and identities of mutable objects (which are rare).
> http://citeseer.nj.nec.com/baker93equal.html

Erann has given an excellent example why you might want object identity 
for immutables at 
http://groups.google.com/groups?selm=gat-0309032237340001%40192.168.1.52

(At least, that's how I understand the example.)


Pascal
From: Erann Gat
Subject: Re: Object Identity [was Re: More static type fun.]
Date: 
Message-ID: <gat-3110031627310001@k-137-79-50-101.jpl.nasa.gov>
In article <············@newsreader2.netcologne.de>, Pascal Costanza
<········@web.de> wrote:

> Marcin 'Qrczak' Kowalczyk wrote:
> 
> > On Fri, 31 Oct 2003 23:13:06 +0100, Pascal Costanza wrote:
> > 
> > 
> >>>Of course you can determine if mutable objects are identical.
> >>>== on mutable object compares identity. But there is no concept of
> >>>identity of immutable objects.
> >>
> >>Why not? This can be very useful...
> > 
> > 
> > Why? For all other operations the objects are equivalent. Common Lisp
> > doesn't have the concept of identity for integers and characters and eq is
> > meaningless for them. Haskell has many more immutable types.
> > 
> > Actually I do see one application: observing sharing for serialization to
> > yield more compressed output. By losing sharing the result is correct but
> > inefficient.
> > 
> > Haskell has only one equality operator. It compares values of immutable
> > objects and identities of mutable objects (which are rare).
> > http://citeseer.nj.nec.com/baker93equal.html
> 
> Erann has given an excellent example why you might want object identity 
> for immutables at 
> http://groups.google.com/groups?selm=gat-0309032237340001%40192.168.1.52
> 
> (At least, that's how I understand the example.)

Here's a more succinct example:

x = make-object()  // An immutable object
y = make-object()  // A structurally identical immutable object

m = make-associative-map()

m[x] = 1
m[y] = 2
print(m[x])

This example can, of course, be rendered into a purely functional form as well.

Allowing the system to distinguish between structurally identical
immutable objects leaves open the possibility that the program will print
1 instead of 2.
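A rough Python sketch of the same example: dict keys are compared by equality,
so structurally identical immutable keys collide, while keying on identity
(here via the built-in `id`) keeps the bindings apart. (`tuple([1, 2, 3])` is
used to force a genuinely fresh object, since CPython may share identical
tuple constants.)

```python
x = (1, 2, 3)             # an immutable object
y = tuple([1, 2, 3])      # a structurally identical but fresh immutable object
assert x == y and x is not y

m = {}                    # dicts compare keys by equality...
m[x] = 1
m[y] = 2
print(m[x])               # 2: as keys, x and y are indistinguishable

im = {}                   # ...but keying on id() distinguishes them
im[id(x)] = 1
im[id(y)] = 2
print(im[id(x)])          # 1: identity keeps the first binding intact
```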

E.
From: Jesse Tov
Subject: Re: Object Identity [was Re: More static type fun.]
Date: 
Message-ID: <slrnbqbb9a.ea6.tov@tov.student.harvard.edu>
Erann Gat <···@jpl.nasa.gov>:
> x = make-object()  // An immutable object
> y = make-object()  // A structurally identical immutable object
> 
> m = make-associative-map()
> 
> m[x] = 1
> m[y] = 2
> print(m[x])
> 
> This example can, of course, be rendered into a purely functional form as well.

The problem is, it can't.  If you call make-object twice with the same
arguments, it must return _indistinguishable_ results.  That's what
"purely functional" means!  So if there's a function that can tell x and
y apart (say, an identity test), then it's not purely functional.

If you want a "function" make-object in Haskell that returns
distinguishable objects for indistinguishable calls, you might do it in
a monad that keeps track of an implicit state (like a counter) that can
be attached to each object.  In other words, you can simulate impurity
if you want to, but the objects can't be structurally equal if you want
to be able to index your associative map with them.

Jesse
-- 
"A hungry man is not a free man."      --Adlai Stevenson
From: Adam Warner
Subject: Re: Object Identity [was Re: More static type fun.]
Date: 
Message-ID: <pan.2003.10.31.23.33.44.460992@consulting.net.nz>
Hi Marcin 'Qrczak' Kowalczyk,

> On Fri, 31 Oct 2003 23:13:06 +0100, Pascal Costanza wrote:
> 
>>> Of course you can determine if mutable objects are identical. == on
>>> mutable object compares identity. But there is no concept of identity
>>> of immutable objects.
>> 
>> Why not? This can be very useful...
> 
> Why? For all other operations the objects are equivalent. Common Lisp
> doesn't have the concept of identity for integers and characters and eq
> is meaningless for them. Haskell has many more immutable types.

EQ exposes implementation issues and raising it is a cheap shot. Common
Lisp has defined conceptual identity for all objects. The predicate test
is called EQL. If EQ offends you then you don't have to use it for any
predicate test. People use it simply for the efficiency advantage. If
Common Lisp had many more immutable types then EQ would have many more
exceptions. But EQL would still give you conceptual identity for all
objects.

What's Haskell's test for conceptual identity of _any_ object? The correct
answer is, of course: Common Lisp doesn't have the concept of identity for
integers and characters and look at the bright shiny object to your left.

Regards,
Adam
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Object Identity [was Re: More static type fun.]
Date: 
Message-ID: <pan.2003.10.31.23.56.46.390254@knm.org.pl>
On Sat, 01 Nov 2003 12:33:46 +1300, Adam Warner wrote:

> Common Lisp has defined conceptual identity for all objects.
> The predicate test is called EQL.

Ok, you call "conceptual identity" what I called "equality".

> What's Haskell's test for conceptual identity of _any_ object?

==, the only standard equality operator / function. It is defined for
almost all types, but not for function types.

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Matthias Blume
Subject: Re: Object Identity [was Re: More static type fun.]
Date: 
Message-ID: <m2vfq57xxh.fsf@hanabi-air.shimizu.blume>
Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> writes:

> On Sat, 01 Nov 2003 12:33:46 +1300, Adam Warner wrote:
> 
> > Common Lisp has defined conceptual identity for all objects.
> > The predicate test is called EQL.
> 
> Ok, you call "conceptual identity" what I called "equality".

There is a well-established notion of equality that is used in math:
Leibniz's law.  It basically says that things are equal if there is
nothing which can distinguish between them.  So if you have some
predicate E that you think tests equality but also some predicate I
such that

     x I y  =>   x E y

and also such that there are some x0 and y0 with

     x0 E y0 & ~(x0 I y0)

then E is, in fact, not an equality predicate.

Of course, in the presence of EQ, only EQ can count as equality...

Matthias
From: Frode Vatvedt Fjeld
Subject: Re: Object Identity
Date: 
Message-ID: <2had7gnxs2.fsf@vserver.cs.uit.no>
Matthias Blume <····@my.address.elsewhere> writes:

> There is a well-established notion of equality that is used in math:
> Leibniz's law.  It basically says that things are equal if there is
> nothing which can distinguish between them.  So if you have some
> predicate E that you think tests equality but also some predicate I
> such that
>
>      x I y  =>   x E y
>
> and also such that there are some x0 and y0 with
>
>      x0 E y0 & ~(x0 I y0)
>
> then E is, in fact, not an equality predicate.

I think this is an interesting perspective to use as a starting
point. Computer Science is all about abstractions. In mathematics, I
believe one is dealing with transcendental objects exclusively. Hence
there is no ambiguity about what e.g. equality means, because
everything lives at the same abstraction level.

But obviously, even if 1=1, one apple is not one orange. One apple is
not even identical to another apple. Still, in everyday life we have
no trouble juggling both uses of equality. If you get an apple and I
get an apple, we're rewarded the same even if we didn't get the same
apple.

So..

> Of course, in the presence of EQ, only EQ can count as equality...

What I find to be one of the most intriguing aspects of programming
languages is how they try to bridge the barrier between the real world
and the metaphysical mathematical world. The programmer is presented
with a linguistic abstraction that is a blend of concepts from the two
worlds: the real world for obvious necessity, and the mathematical
world because it's how humanity has learned and prefers to build
abstractions for engineering.

Now, different programming languages pick and chose differently from
the two. Functional programming is all about clinging as closely to
the mathematical side as possible. In the metaphysical world, there is
no concept of side-effects. Or time, for that matter. There is one
name-space for everything. And so on.

Common Lisp takes an approach that you can have different name-spaces
for functions and variables, because people can deal with this
separation much like they can deal with separating verbs and subjects
etc. in natural communication. And it's OK to expose the fact that
objects exist at different levels of abstraction, as EQ and EQL
do. (After all, the EQ identity is just another abstraction, namely
the one that is presented to you by the CPU.) And there is time, and
so side-effects. And so on.


This, btw, is not intended to be polemic, just some random thoughts
that you triggered with the Leibniz identity.

-- 
Frode Vatvedt Fjeld
From: Adam Warner
Subject: Re: Object Identity [was Re: More static type fun.]
Date: 
Message-ID: <pan.2003.11.01.00.43.54.385738@consulting.net.nz>
Hi Marcin 'Qrczak' Kowalczyk,

> On Sat, 01 Nov 2003 12:33:46 +1300, Adam Warner wrote:
> 
>> Common Lisp has defined conceptual identity for all objects. The
>> predicate test is called EQL.
> 
> Ok, you call "conceptual identity" what I called "equality".

I doubt it. Does this look like a useful test for equality:
(eql (list 1 2 3) (list 1 2 3)) => nil

I could use the term "conceptually the same" in contrast to
"implementationally identical", which is how the ANSI Common Lisp
HyperSpec describes the difference between EQL and EQ.

This time let's supply the predicate with conceptually the same
object:
(eql #1='(1 2 3) #1#) => t

EQUAL is closer to what a typical person would consider an
appropriate equality test for lists:
(equal (list 1 2 3) (list 1 2 3)) => t

More information about why this is ultimately arbitrary:
<http://www.nhplace.com/kent/PS/EQUAL.html>
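Python makes the same cut with `is` (identity, roughly EQL's role here) and
`==` (structural equality, like EQUAL); an illustrative sketch:

```python
a = [1, 2, 3]
b = [1, 2, 3]

print(a is b)   # False: two distinct list objects, like (eql (list 1 2 3) (list 1 2 3))
print(a == b)   # True:  structurally equal, like (equal (list 1 2 3) (list 1 2 3))
print(a is a)   # True:  the same object is identical to itself
```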

Regards,
Adam
From: Jesse Tov
Subject: Re: Object Identity [was Re: More static type fun.]
Date: 
Message-ID: <slrnbq4v95.h2a.tov@tov.student.harvard.edu>
Adam Warner <······@consulting.net.nz>:
> Since == appears to be an equivalence operator this isn't surprising.
> These expressions would all be EQUAL in Common Lisp since you are testing
> list equality, not identity.

Correct.  I cannot do the other.

> The issue arose because it was claimed that many languages have no concept
> of function identity so prunesquallor's code couldn't be directly
> translated.
> 
> How would you do this in Haskell:
> (defparameter *functions* (list #'+ #'* #'/))
> (dolist (fn *functions*)
>   (when (eql fn #'*)
>     (format t "Function ~S is identical to #'*.~%" fn)))

I wouldn't do it in Haskell.  Can you give a useful, _non-trivial_
example where comparing functions is useful?  It can probably be done as
easily another way.

> Every object can be tested for identity. Even the most hairy objects
> where writing an equivalence test would be a nightmare.

It's never a nightmare in Haskell.  If equivalent objects have
equivalent representation, the compiler can derive it; if not, one can
supply a method.

>> Not every language can "support a rigorous concept of identity".  It's
>> pretty much directly in conflict with a rigorous concept of
>> observational equivalence.
> 
> I don't understand this claim since identity is a subset of equivalence.
> Identity has consistent semantics. In a worse case scenario you just have
> to simulate Lisp object binding in your language of choice. All identical
> objects are observationally equivalent and all equal objects are
> observationally equivalent. Where's the conflict?

What I'm saying amounts to two claims:

 (1) Identity is meaningless for immutable objects.
 (2) Identity breaks observational equivalence.

The first claim should be fairly easy to understand.  The reason we
care about identity, in general, is that mutating an object also mutates
all its aliases.  If we can't mutate objects, this is a non-issue.

The second claim is a bit harder.  By observational equivalence, I mean
that names can always be replaced by the expression to which they are
bound, and that identical expressions can always be abstracted away into
names, without changing semantics.  This means that the five code
fragments below are semantically indistinguishable:

    -- (a)
    foo = [1, 2, 3]
    bar = somefunction foo foo
<=>
    -- (b)
    foo = [1, 2, 3]
    bar = somefunction [1, 2, 3] foo
<=>
    -- (c)
    foo = [1, 2, 3]
    bar = somefunction foo [1, 2, 3]
<=>
    -- (d)
    foo = [1, 2, 3]
    bar = somefunction [1, 2, 3] [1, 2, 3]
<=>
    -- (e)
    baz = [2, 3]
    foo = [1, 2, 3]
    bar = somefunction [1, 2, 3] (1 : baz)

Now, suppose that somefunction were the identity function.  What would
it return in each case?

By observational equivalence, bar must in each case be the same.  It
can't matter whether we construct the list once and then use it, or
construct it twice.  It would be incoherent if somefunction could
distinguish between any of these cases.

Now, it's the same with functions:

    foo = \x -> \y -> 2 * x + y
    bar = somefunction foo foo
<=>
    foo = \x -> \y -> 2 * x + y
    bar = somefunction foo (\x -> \y -> 2 * x + y)
<=>
    foo = \x -> \y -> 2 * x + y
    bar = somefunction (\x -> \y -> 2 * x + y) (\x -> \y -> 2 * x + y)
<=>
    foo = \x -> \y -> 2 * x + y
    bar = somefunction (\a -> \b -> 2 * a + b) (\zip -> \zap -> 2 * zip + zap)

Again, bar must have the same value in each case.  (Note for the last
case: renaming of bound variables can't change the semantics.)  There is
no function _somefunction_ that can distinguish between defining the
function once and "sharing" or defining it twice.  So we have no
function identity.

May we admit function _equality_?  Only trivially, and only if functions
have some comparable representation.  There are different ways we could
define _somefunction_ (as equality) that would give different answers
above, and I have no idea which is "best".  There's probably no
implementation barrier to doing this, but it's probably more misleading
than it is useful.
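For contrast, an impure language makes exactly this distinction observable.
In Python, for instance, each evaluation of a lambda expression yields a
distinct function object, so `is` can tell "defined once and shared" from
"defined twice" (a sketch, not from the post):

```python
foo = lambda x, y: 2 * x + y
bar = lambda x, y: 2 * x + y    # textually identical, but a second object
baz = foo                       # shared, not redefined

print(foo is bar)               # False: two definitions, two objects
print(foo is baz)               # True:  one shared object
print(foo(3, 4) == bar(3, 4))   # True:  extensionally they agree on this input
```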

> If you can't implement examples that can determine if, say, function
> objects or hash tables are identical (_and_ you don't have any operator to
> test their equivalence) then you're missing functionality.

As another poster wrote, _mutable_ objects admit identity.  If I want
it, though, I might have to cook it up myself.

    -- MyHashTable exports the same interface as GHC's Data.HashTable,
    -- except that it also provides an instance of Eq that implements
    -- identity.
    module MyHashTable (MyHashTable, delete, insert, toList, lookup,
        longestChain, prime, hashString, hashInt, new, fromList) where 

    import qualified Data.HashTable as H
    import qualified Data.Unique    as U
    import Prelude ((.), (==), Eq, return)

    -- MyHashTable is a hash table with a unique tag.
    data MyHashTable key val = HTCon { tag :: U.Unique, table :: H.HashTable key val }

    -- to compare MyHashTables, compare tags
    instance Eq (MyHashTable k v) where
        a == b = tag a == tag b

    -- new and fromList 
    new cmp hash       = do t <- U.newUnique
                            h <- H.new cmp hash
                            return (HTCon t h)
    fromList hash list = do t <- U.newUnique
                            h <- H.fromList hash list
                            return (HTCon t h)

    -- boilerplate so we can keep MyHashTable opaque.
    delete          = H.delete       . table
    insert          = H.insert       . table
    toList          = H.toList       . table
    lookup          = H.lookup       . table
    longestChain    = H.longestChain . table
    prime           = H.prime
    hashString      = H.hashString
    hashInt         = H.hashInt

Then I can run:

    Ok, modules loaded: MyHashTable.
    *MyHashTable> a <- fromList hashString [("a", 0), ("b", 1)]
    *MyHashTable> b <- fromList hashString [("a", 0), ("b", 1)]
    *MyHashTable> a == b
    False
    *MyHashTable> a == a
    True
    *MyHashTable> 

This is your "worst case scenario" from above.  I'd be very surprised if
anyone here has encountered it in a case where object identity wasn't
itself in the problem domain, but I'd like to hear about it.

Jesse
-- 
"A hungry man is not a free man."      --Adlai Stevenson
From: ·············@comcast.net
Subject: Re: Object Identity
Date: 
Message-ID: <1xstost2.fsf@comcast.net>
Jesse Tov <···@eecs.harvREMOVEard.edu> writes:

> I wouldn't do it in Haskell.  Can you give a useful, _non-trivial_
> example where comparing functions is useful?  

Certainly.  If a function is passed around as a first-class object,
then it is useful to be able to talk about its identity.  It may be
undecidable if two non-shared blocks of code compute the same value,
but it is trivial to determine if two entry points are the same.  A
debugger might wish to keep a list of functions that have been called,
or a compiler might wish to see if you passed in a particular
primitive as a first-class function.

> It can probably be done as easily another way.

No doubt, all you need do is attach a unique label to the object.
This is true of any object, so you don't need equal to be defined on
anything but the unique labels if you want to go through the effort of
attaching labels to everything.
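A minimal Python sketch of that labelling scheme (the `Tagged` class here is
illustrative, not from any library): draw a fresh tag from a counter for each
object and define equality on the tags alone, so equality becomes identity.

```python
import itertools

_tags = itertools.count()           # global source of fresh labels

class Tagged:
    """Wrap a value with a unique label; equality on labels is identity."""
    def __init__(self, value):
        self.tag = next(_tags)
        self.value = value

    def __eq__(self, other):
        return isinstance(other, Tagged) and self.tag == other.tag

    def __hash__(self):             # hash on the tag, so dict keys work too
        return hash(self.tag)

a = Tagged([1, 2])
b = Tagged([1, 2])                  # same contents, different label
print(a == b)                       # False
print(a == a)                       # True
```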

> What I'm saying amounts to two claims:
>
>  (1) Identity is meaningless for immutable objects.

Not true.  I may, for instance, wish to place a first-class function 
in a set or see if it is a member of a set. 

*Copying* may not be meaningful, but identity is.

>  (2) Identity breaks observational equivalence.

Not true.  It depends on the equivalence predicate.  Again, if the
predicate can detect *copying* of an immutable object, then you can
lose observational equivalence.  However, if the predicate is
insensitive to copying of immutable objects, you have no problems.

> Now, it's the same with functions:
>
>     foo = \x -> \y -> 2 * x + y
>     bar = somefunction foo foo
> <=>
>     foo = \x -> \y -> 2 * x + y
>     bar = somefunction foo (\x -> \y -> 2 * x + y)
> <=>
>     foo = \x -> \y -> 2 * x + y
>     bar = somefunction (\x -> \y -> 2 * x + y) (\x -> \y -> 2 * x + y)
> <=>
>     foo = \x -> \y -> 2 * x + y
>     bar = somefunction (\a -> \b -> 2 * a + b) (\zip -> \zap -> 2 * zip + zap)
>
> Again, bar must have the same value in each case.  (Note for the last
> case: renaming of bound variables can't change the semantics.)  There is
> no function _somefunction_ that can distinguish between defining the
> function once and "sharing" or defining it twice.  So we have no
> function identity.

This is `intensional' identity (as opposed to `extensional'
equality).  In any case, whether two functions can be externally
distinguished is undecidable in general, but the notion of `objects
are equal to themselves' is easily decidable.

> May we admit function _equality_?  Only trivially, and only if functions
> have some comparable representation.  

Since I'm only interested in trivial equality, and since I'm only
interested in functions I can represent, this is sufficient.

> There's probably no implementation barrier to doing this, but it's
> probably more misleading than it is useful.

Depends.  You won't mislead lisp hackers.

> As another poster wrote, _mutable_ objects admit identity.  If I want
> it, though, I might have to cook it up myself.

[code snipped]

>
> Then I can run:
>
>     Ok, modules loaded: MyHashTable.
>     *MyHashTable> a <- fromList hashString [("a", 0), ("b", 1)]
>     *MyHashTable> b <- fromList hashString [("a", 0), ("b", 1)]
>     *MyHashTable> a == b
>     False
>     *MyHashTable> a == a
>     True
>     *MyHashTable> 
>
> This is your "worst case scenario" from above.

Works for me.  

Suppose I encounter a hash table named C and I want to know if it may
be used in place of A.  I can test C==A and if it returns true, then I
am sure that there is no difference between using C and A.

This is a weaker condition than enumerating the contents, but it may
be enough for my purposes.
From: Joachim Durchholz
Subject: Re: Object Identity
Date: 
Message-ID: <bnvajd$oc2$1@news.oberberg.net>
·············@comcast.net wrote:

> Jesse Tov <···@eecs.harvREMOVEard.edu> writes:
> 
>>I wouldn't do it in Haskell.  Can you give a useful, _non-trivial_
>>example where comparing functions is useful?  
> 
> Certainly.  If a function is passed around as a first-class object,
> then it is useful to be able to talk about its identity.  It may be
> undecidable if two non-shared blocks of code compute the same value,
> but it is trivial to determine if two entry points are the same.  A
> debugger might wish to keep a list of functions that have been called,
> or a compiler might wish to see if you passed in a particular
> primitive as a first-class function.

This notion doesn't work too well when testing whether a function that 
was sent over the network is the same as one loaded from a local file.

This is actually a practically relevant problem: data structures in 
functional languages tend to contain unevaluated function calls, and 
it's important what code will be executed when the calls are evaluated.

(Omitting response on your ideas of equality here - I know that Jesse's 
points are valid, and I don't understand your arguments to the contrary.)

Regards,
Jo
From: ·············@comcast.net
Subject: Re: Object Identity
Date: 
Message-ID: <ism4n2aa.fsf@comcast.net>
Joachim Durchholz <·················@web.de> writes:

> ·············@comcast.net wrote:
>
>> Jesse Tov <···@eecs.harvREMOVEard.edu> writes:
>>
>>>I wouldn't do it in Haskell.  Can you give a useful, _non-trivial_
>>> example where comparing functions is useful?
>> Certainly.  If a function is passed around as a first-class object,
>> then it is useful to be able to talk about its identity.  It may be
>> undecidable if two non-shared blocks of code compute the same value,
>> but it is trivial to determine if two entry points are the same.  A
>> debugger might wish to keep a list of functions that have been called,
>> or a compiler might wish to see if you passed in a particular
>> primitive as a first-class function.
>
> This notion doesn't work too well when testing whether a function that
> was sent over the network is the same as one loaded from a local file.

It works just fine.  I'm only interested in weak equivalence, not
strong.  If you have two things claiming to be different functions,
simply compare their entry points.  If they are the same, then it is
the same function.  

I'm not claiming that if the representations differ then the functions differ, only
that if they are *identical* in representation (as in absolutely
identical: same bits, same chunk of memory) then they are necessarily
identical in the abstract.

> This is actually a practically relevant problem: data structures in
> functional languages tend to contain unevaluated function calls, and
> it's important what code will be executed when the calls are evaluated.

Yes....  So if object A contains a function that starts at
location 0x55EA, and I copy the reference to object B, the exact same
code is executed and therefore it is the exact same function.

> (Omitting response on your ideas of equality here - I know that
> Jesse's points are valid, and I don't understand your arguments to the
> contrary.)

I'm not arguing that his points are invalid, I'm arguing that they are
irrelevant to the kind of weak identity that I am seeking.
From: Joachim Durchholz
Subject: Re: Object Identity
Date: 
Message-ID: <bo08iv$66k$1@news.oberberg.net>
·············@comcast.net wrote:
> I'm not arguing that his points are invalid, I'm arguing that they are
> irrelevant to the kind of weak identity that I am seeking.

Well, that kind of equality is insufficient for deciding, for example, 
whether a function should be sent across the network to enable another 
machine to execute some function calls that were sent to it earlier.

You may decide that this issue doesn't interest you - but you'll be 
unable to combine networking and higher-order functions.

Actually networking is just a special case of marshalling, and you need 
strong(er) equality than pointer equality for all kinds of marshalling, 
whether it's over the network, to a file, or to a database.
Marshalling is important. I assume Lisp does it using macrology, but I 
think having a good equality is a simpler solution, and one that's less 
prone to borderline cases.

Regards,
Jo
From: Jesse Tov
Subject: Re: Object Identity
Date: 
Message-ID: <slrnbqbr6r.kbj.tov@tov.student.harvard.edu>
·············@comcast.net <·············@comcast.net>:
> *Copying* may not be meaningful, but identity is.
> 
>>  (2) Identity breaks observational equivalence.
> 
> Not true.  It depends on the equivalence predicate.  Again, if the
> predicate can detect *copying* of an immutable object, then you can
> lose observational equivalence.  However, if the predicate is
> insensitive to copying of immutable objects, you have no problems.

Suppose that === is your identity predicate:

  (a)   result = (\x -> x) === (\x -> x)

  (b)   result = f === f where f = \x -> x

  (c)   result = f === g where f = \x -> x
                               g = f

What should be the result in each of the above cases?

>> As another poster wrote, _mutable_ objects admit identity.  If I want
>> it, though, I might have to cook it up myself.
> 
> [code snipped]
> 
>> Then I can run:
>>
>>     Ok, modules loaded: MyHashTable.
>>     *MyHashTable> a <- fromList hashString [("a", 0), ("b", 1)]
>>     *MyHashTable> b <- fromList hashString [("a", 0), ("b", 1)]
>>     *MyHashTable> a == b
>>     False
>>     *MyHashTable> a == a
>>     True
>>     *MyHashTable> 
>>
>> This is your "worst case scenario" from above.
> 
> Works for me.  
> 
> Suppose I encounter a hash table named C and I want to know if it may
> be used in place of A.  I can test C==A and if it returns true, then I
> am sure that there is no difference between using C and A.

Yes.

> This is a weaker condition than enumerating the contents, but it may
> be enough for my purposes.

It's a different condition.  In the above example, the contents of a are
the same as the contents of b, but they aren't interchangeable.

Jesse
-- 
"A hungry man is not a free man."      --Adlai Stevenson
From: ·············@comcast.net
Subject: Re: Object Identity
Date: 
Message-ID: <llqxzds4.fsf@comcast.net>
Jesse Tov <···@eecs.harvREMOVEard.edu> writes:

> ·············@comcast.net <·············@comcast.net>:
>> *Copying* may not be meaningful, but identity is.
>> 
>>>  (2) Identity breaks observational equivalence.
>> 
>> Not true.  It depends on the equivalence predicate.  Again, if the
>> predicate can detect *copying* of an immutable object, then you can
>> lose observational equivalence.  However, if the predicate is
>> insensitive to copying of immutable objects, you have no problems.
>
> Suppose that === is your identity predicate:
>
>   (a)   result = (\x -> x) === (\x -> x)
>
>   (b)   result = f === f where f = \x -> x
>
>   (c)   result = f === g where f = \x -> x
>                                g = f
>
> What should be the result in each of the above cases?

true, true, and true.  Of course, result (a) may take a really long
time...  
From: ·············@comcast.net
Subject: Re: Object Identity
Date: 
Message-ID: <ptg9xoyu.fsf@comcast.net>
·············@comcast.net writes:

> Jesse Tov <···@eecs.harvREMOVEard.edu> writes:
>
>> ·············@comcast.net <·············@comcast.net>:
>>> *Copying* may not be meaningful, but identity is.
>>> 
>>>>  (2) Identity breaks observational equivalence.
>>> 
>>> Not true.  It depends on the equivalence predicate.  Again, if the
>>> predicate can detect *copying* of an immutable object, then you can
>>> lose observational equivalence.  However, if the predicate is
>>> insensitive to copying of immutable objects, you have no problems.
>>
>> Suppose that === is your identity predicate:
>>
>>   (a)   result = (\x -> x) === (\x -> x)
>>
>>   (b)   result = f === f where f = \x -> x
>>
>>   (c)   result = f === g where f = \x -> x
>>                                g = f
>>
>> What should be the result in each of the above cases?
>
> true, true, and true.  Of course, result (a) may take a really long
> time...  

I changed my mind slightly.  The first result may be false, but the
second and third must be true.
From: Jesse Tov
Subject: Re: Object Identity
Date: 
Message-ID: <slrnbqdh5j.2pg.tov@tov.student.harvard.edu>
·············@comcast.net <·············@comcast.net>:
> ·············@comcast.net writes:
>> Jesse Tov <···@eecs.harvREMOVEard.edu> writes:
>>> Suppose that === is your identity predicate:
>>>
>>>   (a)   result = (\x -> x) === (\x -> x)
>>>
>>>   (b)   result = f === f where f = \x -> x
>>>
>>>   (c)   result = f === g where f = \x -> x
>>>                                g = f
>>>
>>> What should be the result in each of the above cases?
>>
>> true, true, and true.  Of course, result (a) may take a really long
>> time...  

Extensional equivalence?  Cute.

> I changed my mind slightly.  The first result may be false, but the
> second and third must be true.

How is that referentially transparent?

Or, what about:
  (a')   result = [5] === [5]

In Haskell, (b) to (a) must be a legal, semantics-preserving
transformation.  All three must return True, and they can't "take a
really long time".  What about this:

  (d)   result = f === g where f :: a -> a
                               f  = \x -> x
                               g :: Int -> Int
                               g  = \x -> x

  (e)   result = f === g where f :: a -> a
                               f  = \x -> x
                               g :: Int -> Int
                               g  = f

Jesse
-- 
"A hungry man is not a free man."      --Adlai Stevenson
From: Tim Sweeney
Subject: Re: Object Identity
Date: 
Message-ID: <9ef8dc7.0311031545.790b6ed0@posting.google.com>
> *Copying* may not be meaningful, but identity is.
> 
> >  (2) Identity breaks observational equivalence.
> 
> Not true.  It depends on the equivalence predicate.

By comparing functions "by pointer" (or, more generally, by their
internal representation), you can sometimes determine that two
functions *are* observationally equivalent.  But you can't generally
determine that two functions *aren't* observationally equivalent.  Two
functions might have different internal representations while still
representing the same function from an extensional equivalence point
of view.

In theory, this matters whenever a language's equality predicate
corresponds to observational equivalence, a.k.a. Leibniz equality.

In practice, this matters wherever a language runtime might perform
runtime code specialization (optimizing some occurrences of a function
but not others, thus changing some internal representations), might
serialize the function for sending across the network, etc.

One should be very careful with any piece of code that claims to
compare two functions for equality.  If in a Turing-complete language
such an equality predicate exists on functions, then there are cases
where it's going to either throw an exception or return false when in
theory it should return true.  Such an equality can be a useful
engineering tool, but it's not mathematical equality.
From: Jesse Tov
Subject: Re: Object Identity [was Re: More static type fun.]
Date: 
Message-ID: <slrnbq503a.h2a.tov@tov.student.harvard.edu>
Adam Warner <······@consulting.net.nz>:
> Since == appears to be an equivalence operator this isn't surprising.
> These expressions would all be EQUAL in Common Lisp since you are testing
> list equality, not identity.

Correct.  I cannot do the other.

> The issue arose because it was claimed that many languages have no concept
> of function identity so prunesquallor's code couldn't be directly
> translated.
> 
> How would you do this in Haskell:
> (defparameter *functions* (list #'+ #'* #'/))
> (dolist (fn *functions*)
>   (when (eql fn #'*)
>     (format t "Function ~S is identical to #'*.~%" fn)))

I wouldn't do it in Haskell.  Can you give a _non-trivial_ example where
comparing functions is useful?  It can probably be done just as easily
another way.

> Every object can be tested for identity. Even the most hairy objects
> where writing an equivalence test would be a nightmare.

It's never a nightmare in Haskell.  If equivalent objects have
equivalent representation, the compiler can derive it; if not, one can
supply a method.

>> Not any language can "support a rigorous concept of identity".  It's
>> pretty much directly in conflict with a rigorous concept of
>> observational equivalence.
> 
> I don't understand this claim since identity is a subset of equivalence.
> Identity has consistent semantics. In a worst-case scenario you just have
> to simulate Lisp object binding in your language of choice. All identical
> objects are observationally equivalent and all equal objects are
> observationally equivalent. Where's the conflict?

What I'm saying amounts to two claims:

 (1) Identity is meaningless for immutable objects.
 (2) Identity breaks observational equivalence.

The first claim should be fairly easy to understand.  The reason we
care about identity, in general, is that mutating an object also mutates
all its aliases.  If we can't mutate objects, this is a non-issue.

The second claim is a bit harder.  By observational equivalence, I mean
that names can always be replaced by the expression to which they are
bound, and that identical expressions can always be abstracted away into
names, without changing semantics.  This means that the five code
fragments below are semantically indistinguishable:

    -- (a)
    foo = [1, 2, 3]
    bar = somefunction foo foo
<=>
    -- (b)
    foo = [1, 2, 3]
    bar = somefunction [1, 2, 3] foo
<=>
    -- (c)
    foo = [1, 2, 3]
    bar = somefunction foo [1, 2, 3]
<=>
    -- (d)
    foo = [1, 2, 3]
    bar = somefunction [1, 2, 3] [1, 2, 3]
<=>
    -- (e)
    baz = [2, 3]
    foo = [1, 2, 3]
    bar = somefunction [1, 2, 3] (1 : baz)

Now, suppose that somefunction were the identity function.  What would
it return in each case?

By observational equivalence, bar must in each case be the same.  It
can't matter whether we construct the list once and then use it, or
construct it twice.  It would be incoherent if somefunction could
distinguish between any of these cases.
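Python, incidentally, is a language where these rewrites are *not*
semantics-preserving, because identity is observable on lists.  A minimal
sketch (CPython semantics assumed; somefunction here is just an identity
test, not any particular library function):

```python
def somefunction(a, b):
    # observable identity: True only when a and b are the same object
    return a is b

foo = [1, 2, 3]
print(somefunction(foo, foo))        # case (a): True, the one list is shared
print(somefunction(foo, [1, 2, 3]))  # case (c): False, a fresh list each time
print(foo == [1, 2, 3])              # value equality holds in every case
```

So cases (a) through (e) are distinguishable there, which is exactly the
property Haskell gives up in exchange for free rewriting.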

Now, it's the same with functions:

    foo = \x -> \y -> 2 * x + y
    bar = somefunction foo foo
<=>
    foo = \x -> \y -> 2 * x + y
    bar = somefunction foo (\x -> \y -> 2 * x + y)
<=>
    foo = \x -> \y -> 2 * x + y
    bar = somefunction (\x -> \y -> 2 * x + y) (\x -> \y -> 2 * x + y)
<=>
    foo = \x -> \y -> 2 * x + y
    bar = somefunction (\a -> \b -> 2 * a + b) (\zip -> \zap -> 2 * zip + zap)

Again, bar must have the same value in each case.  (Note for the last
case: renaming of bound variables can't change the semantics.)  There is
no function _somefunction_ that can distinguish between defining the
function once and "sharing" or defining it twice.  So we have no
function identity.

May we admit function _equality_?  Only trivially, and only if functions
have some comparable representation.  There are different ways we could
define _somefunction_ (as equality) that would give different answers
above, and I have no idea which is "best".  There's probably no
implementation barrier to doing this, but it's probably more misleading
than it is useful.

> If you can't implement examples that can determine if, say, function
> objects or hash tables are identical (_and_ you don't have any operator to
> test their equivalence) then you're missing functionality.

As another poster wrote, _mutable_ objects admit identity.  If I want
it, though, I might have to cook it up myself.

    -- MyHashTable exports the same interface as GHC's Data.HashTable,
    -- except that it also provides an instance of Eq that implements
    -- identity.
    module MyHashTable (MyHashTable, delete, insert, toList, lookup,
        longestChain, prime, hashString, hashInt, new, fromList) where 

    import qualified Data.HashTable as H
    import qualified Data.Unique    as U
    import Prelude ((.), (==), Eq, return)

    -- MyHashTable is a hash table with a unique tag.
    data MyHashTable key val = HTCon { tag :: U.Unique, table :: H.HashTable key val }

    -- to compare MyHashTables, compare tags
    instance Eq (MyHashTable k v) where
        a == b = tag a == tag b

    -- new and fromList 
    new cmp hash       = do t <- U.newUnique
                            h <- H.new cmp hash
                            return (HTCon t h)
    fromList hash list = do t <- U.newUnique
                            h <- H.fromList hash list
                            return (HTCon t h)

    -- boilerplate so we can keep MyHashTable opaque.
    delete          = H.delete       . table
    insert          = H.insert       . table
    toList          = H.toList       . table
    lookup          = H.lookup       . table
    longestChain    = H.longestChain . table
    prime           = H.prime
    hashString      = H.hashString
    hashInt         = H.hashInt

Then I can run:

    Ok, modules loaded: MyHashTable.
    *MyHashTable> a <- fromList hashString [("a", 0), ("b", 1)]
    *MyHashTable> b <- fromList hashString [("a", 0), ("b", 1)]
    *MyHashTable> a == b
    False
    *MyHashTable> a == a
    True
    *MyHashTable> 

This is your "worst case scenario" from above.  I'd be very surprised if
anyone here has encountered it in a case where object identity wasn't
itself in the problem domain, but I'd like to hear about it.
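For comparison, Python's dictionaries (its built-in hash tables) are
mutable and therefore get both predicates out of the box: == compares
contents and `is` compares identity.  A small sketch:

```python
a = {"a": 0, "b": 1}
b = {"a": 0, "b": 1}

print(a == b)   # True: same contents
print(a is b)   # False: two distinct tables
print(a is a)   # True

a["c"] = 2      # mutating a leaves b alone; this is why identity matters
print(b)        # {'a': 0, 'b': 1}
```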

Jesse
-- 
"A hungry man is not a free man."      --Adlai Stevenson
From: Stephen J. Bevan
Subject: Re: More static type fun.
Date: 
Message-ID: <m3smladq5u.fsf@dino.dnsalias.com>
Joe Marshall <···@ccs.neu.edu> writes:
> > Perhaps we have a terminology problem.  If I define a structure (or
> > record, call it what you will) in a language that doesn't have any
> > kind of implicit "pointer/address equality" over values then equality
> > over that structure is either hard-wired in the language (say
> > recursive structural equivalence on each field) or the language allows
> > the user to define equality on a per structure basis (again in terms
> > of the fields).
> 
> Equality, sure, but identity?

I originally wrote "equality" and in your followup you used "identity"
so in my next response I wrote "identity" on the assumption you
considered "equality" and "identity" to be sloppy synonyms in this
context.  This is because, as Jesse Tov has already mentioned,
identity (at least as I understand it in Common Lisp) doesn't have an
analog in some (statically typed) languages and can only be
approximated via an equality/equivalence function.
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <4qxpmlrd.fsf@ccs.neu.edu>
·······@dino.dnsalias.com (Stephen J. Bevan) writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>> > Perhaps we have a terminology problem.  If I define a structure (or
>> > record, call it what you will) in a language that doesn't have any
>> > kind of implicit "pointer/address equality" over values then equality
>> > over that structure is either hard-wired in the language (say
>> > recursive structural equivalence on each field) or the language allows
>> > the user to define equality on a per structure basis (again in terms
>> > of the fields).
>> 
>> Equality, sure, but identity?
>
> I originally wrote "equality" and in your followup you used "identity"
> so in my next response I wrote "identity" on the assumption you
> considered "equality" and "identity" to be sloppy synonyms in this
> context.  This is because, as Jesse Tov has already mentioned,
> identity (at least as I understand it in Common Lisp) doesn't have an
> analog in some (statically typed) languages and can only be
> approximated via an equality/equivalence function.

Ok, I see the source of confusion.

My original function said something like

  (lambda (x)
    (if (eq x #'+)
        #'*
        x))

What I was saying is that the reason I wrote something like that was
because I was concerned that simply writing (lambda (x) x) would be
recognized as a universal identity and that it could be elided from
the type checking algorithm.  The fact that I'm using an equivalence
operator on the addition operator is just incidental.  I'd be just as
happy to do something like (lambda (x) (if (eq x 42) 32 x))
(assuming that there exists a polytype equality function)
From: Jesse Tov
Subject: Re: More static type fun.
Date: 
Message-ID: <slrnbq5dil.m5q.tov@tov.student.harvard.edu>
Joe Marshall <···@ccs.neu.edu>:
> Ok, I see the source of confusion.
> 
> My original function said something like
> 
>   (lambda (x)
>     (if (eq x #'+)
>         #'*
>         x))
> 
> What I was saying is that the reason I wrote something like that was
> because I was concerned that simply writing (lambda (x) x) would be
> recognized as a universal identity and that it could be elided from
> the type checking algorithm.  The fact that I'm using an equivalence
> operator on the addition operator is just incidental.  I'd be just as
> happy to do something like (lambda (x) (if (eq x 42) 32 x))
> (assuming that there exists a polytype equality function)

It depends on the language, of course.  In Standard ML, = is polymorphic
as a special case; it has type  ''a * ''a -> bool.  Type variables
starting with '' are "equality types", meaning that = is defined over
those types.  = compares by value (by structural recursion), and the
programmer can't redefine it for types where the abstraction indicates a
different kind of equality [1].  I think this whole thing is kind of a
kluge, but it works fairly well in practice.

I don't know what the situation is in Ocaml, but I've heard it's kind of
unpleasant.  A cursory check tells me that both = and == have type 'a ->
'a -> bool.  (If you don't consider Lisp's treatment of equality
unpleasant, you'll probably like Ocaml's fine, too.)

Haskell has type classes.  There's a type class Eq with methods (==) and
(/=); types are made instances of Eq to define equality on them.  When
defining a new algebraic datatype, the programmer can tell the compiler
to derive Eq, in which case it's recursive value equality like in SML;
or it's possible to provide your own method for (==) if the default is
inappropriate.
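Python's rough analogue of supplying your own (==) method is defining
__eq__ on a class; the Point class below is purely illustrative, not
anything from SML or GHC:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        # recursive value equality, akin to a derived Eq instance
        return (isinstance(other, Point)
                and (self.x, self.y) == (other.x, other.y))

p, q = Point(1, 2), Point(1, 2)
print(p == q)   # True: equal by value
print(p is q)   # False: still two distinct objects
```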

Jesse

[1] The exception is that SML uses pointer equality/identity for mutable
references.  It's trivial to define value equality on refs if you want
it:

Standard ML of New Jersey, Version 110.0.3, January 30, 1998 [CM;
autoload enabled]
- val a = ref 5;        (* binds _a_ to a new, mutable cell containing 5 *)
- val b = ref 5;
- val c = ref 6;
- a = b;
val it = false : bool;
- a = a;
val it = true : bool;
- infix ==;
infix ==
- fun a == b = !a = !b;   (* !a is the value stored in the cell a *)
val == = fn : ''a ref * ''a ref -> bool
- a == a;
val it = true : bool;
- a == b;
val it = true : bool;
- a == c;
val it = false : bool;
-- 
"A hungry man is not a free man."      --Adlai Stevenson
From: Adam Warner
Subject: Re: More static type fun.
Date: 
Message-ID: <pan.2003.10.27.04.10.57.330406@consulting.net.nz>
Hi Stephen J. Bevan,

> ·············@comcast.net writes:
>> "Marshall Spight" <·······@dnai.com> writes:
>> > It would be really interesting to see a small but useful example of a
>> > program that will not pass a statically typed language. It seems to me
>> > that how easy it is to generate such programs will be an interesting
>> > metric.
>> 
>> (defun foo (f)
>>   (funcall (funcall f #'+)
>>            (funcall f 3)
>>            (funcall f 2)))
>> 
>> (defun test1 ()
>>   (foo (lambda (thing)
>>          (format t "~&--> ~s" thing)
>>          thing)))
>> 
>> (defun test2 ()
>>   (foo (lambda (thing)
>>          (if (eq thing #'+)
>>              #'*
>>              thing))))
> 
> test2 relies on some kind of equality being defined over functions. Some
> (statically typed) languages do not support that (for reasons other than
> static typing).

Since I can't find a straight answer in the archives or the HyperSpec can
someone please explain whether an EQ test for function identity is
conforming? It appears to be unspecified.

Implementation-wise functions should be EQ if they have the same pointer.
But can this be relied upon in conforming ANSI Common Lisp programs?

Here's some useful discussion that I have found so far:
<http://groups.google.co.nz/groups?selm=F8VJ9.8899%24K5.6432%40fe01>

   Notice that the second test (below) did not return #<PROCEDURE 
   combinator-false>. However, in the third test, (which called the same
   result with arguments to see which one it would return) we see that the
   result behaves like combinator-false. Explanation: This anomaly is due
   to LISP's inability to determine function equality. Functions are
   considered equal in LISP if and only if the pointers to them are the
   same. In this case, we have two functions that are semantically the
   same, but are syntactically different.

Kent M Pitman versus Bruno Haible:
<http://groups.google.co.nz/groups?selm=sfw1zybze88.fsf%40world.std.com>

   What is "readable" is partly caught up in the notion of object
   identity. Is EQness of the original function required?  In that case,
   then only named functions can probably win.  (Even then, there's a
   minor philosophical question about which version of CAR (or whatever)
   you should get back if you've printed the function to a file, then
   loaded it back into an environment that has since "redefined" (e.g.,
   patched a bug) the function.

<http://groups.google.co.nz/groups?selm=69jg7j%24ojk%241%40nz12.rz.uni-karlsruhe.de>

   Now about the EQness issue: Testing function identity for EQ is always
   a bad thing because then your code stops working when you switch on the
   tracer or profiler. The EQness of the lexical environment, if some
   implementation decides to handle these functions too, cannot be
   guaranteed. Just the same way as when printing an uninterned symbol,
   its READ result will certainly be different from the different symbol.
   And we don't really worry about this latter case. So why should we
   worry about the EQness of functions' lexical environment?

Kent, in considering the limits of function identity, remarks that it can
probably only be preserved when reading named functions. But he does
discuss how EQness can be a requirement which indicates that it must be
something that can be relied upon in particular circumstances. Can we rely
upon it when comparing the identity of anonymous functions?

In other words we must confirm that this is always true:
(eq #1=#.(lambda ()) #1#) => t

And it must be impermissible for an implementation to replace or inline
the functions within the predicate with a function of identical semantics.

Regards,
Adam
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <4ptgjay2v.fsf@beta.franz.com>
[cross-posting removed]

Adam Warner <······@consulting.net.nz> writes:

> Hi Stephen J. Bevan,
> 
> > ·············@comcast.net writes:
> >> "Marshall Spight" <·······@dnai.com> writes:
> >> > It would be really interesting to see a small but useful example of a
> >> > program that will not pass a statically typed language. It seems to me
> >> > that how easy it is to generate such programs will be an interesting
> >> > metric.
> >> 
> >> (defun foo (f)
> >>   (funcall (funcall f #'+)
> >>            (funcall f 3)
> >>            (funcall f 2)))
> >> 
> >> (defun test1 ()
> >>   (foo (lambda (thing)
> >>          (format t "~&--> ~s" thing)
> >>          thing)))
> >> 
> >> (defun test2 ()
> >>   (foo (lambda (thing)
> >>          (if (eq thing #'+)
> >>              #'*
> >>              thing))))
> > 
> > test2 relies on some kind of equality being defined over functions. Some
> > (statically typed) languages do not support that (for reasons other than
> > static typing).
> 
> Since I can't find a straight answer in the archives or the HyperSpec can
> someone please explain whether an EQ test for function identity is
> conforming? It appears to be unspecified.

I think if you reword your question you can find the answer easily.
On the surface, it is not clear what you mean by "function identity".
But from the examples and from the further discussion below it appears
to me that you are really interested in what the FUNCTION operator
(i.e. #' ) does.  And if you look up FUNCTION in the hyperspec, it
clearly defines when the assumption of EQ cannot be made.  Of course,
there are others as well; for example, the following sequence might
not return true:

(setq x #'foo)
(setf (symbol-function 'foo) (symbol-function 'bar))
(setq y #'foo)
(eq x y)

> Implementation-wise functions should be EQ if they have the same pointer.
> But can this be relied upon in conforming ANSI Common Lisp programs?

Again, the question is not clear.  Obviously, objects which have the
same pointer are EQ, by definition, but it is not clear if the "this"
in your question is referring to the predicate of the "if", nor is it
clear what you mean by "implementation-wise".  Perhaps some clarification
is in order.

> Here's some useful discussion that I have found so far:
> <http://groups.google.co.nz/groups?selm=F8VJ9.8899%24K5.6432%40fe01>
> 
>    Notice that the second test (below) did not return #<PROCEDURE 
>    combinator-false>. However, in the third test, (which called the same
>    result with arguments to see which one it would return) we see that the
>    result behaves like combinator-false. Explanation: This anomaly is due
>    to LISP's inability to determine function equality. Functions are
>    considered equal in LISP if and only if the pointers to them are the
>    same. In this case, we have two functions that are semantically the
>    same, but are syntactically different.

Note that the discussion immediately above has nothing to do with Common
Lisp, as your preceding preface and question implies - though being
generally true in CL for lambda forms, it is a discussion about Scheme
specifically.

> Kent M Pitman versus Bruno Haible:
> <http://groups.google.co.nz/groups?selm=sfw1zybze88.fsf%40world.std.com>
> 
>    What is "readable" is partly caught up in the notion of object
>    identity. Is EQness of the original function required?  In that case,
>    then only named functions can probably win.  (Even then, there's a
>    minor philosophical question about which version of CAR (or whatever)
>    you should get back if you've printed the function to a file, then
>    loaded it back into an environment that has since "redefined" (e.g.,
>    patched a bug) the function.

This is of course true, and is a feature.  If identity were completely
immutable over time, then one of the major features of Common Lisp, namely
dynamic function redefinition, would be lost.  Note that this applies to
all CL objects, not just function objects, as Kent also points out in
that same article by reference to his article on equality. 

> <http://groups.google.co.nz/groups?selm=69jg7j%24ojk%241%40nz12.rz.uni-karlsruhe.de>
> 
>    Now about the EQness issue: Testing function identity for EQ is always
>    a bad thing because then your code stops working when you switch on the
>    tracer or profiler. The EQness of the lexical environment, if some
>    implementation decides to handle these functions too, cannot be
>    guaranteed.

Bruno's observation here is unfortunately very true.  However, in the
many years since we included ourselves in the implementation of the
"encapsulation" style of tracing, we've always had trouble with the
notion that Eq-ness could be compromised by no other means than to trace
the function; it has larger ramifications than just the EQ test (for
example, any implementation with an internal generic-function-p predicate
which would return true might start returning nil when the gf is traced).
So we implemented the "fwrap" concept, which performs sort of an
encapsulation of a function object without losing its identity (see
http://www.franz.com/support/documentation/6.2/doc/fwrappers-and-advice.htm)
so that this case could be removed from the mix.

>    Just the same way as when printing an uninterned symbol,
>    its READ result will certainly be different from the different symbol.
>    And we don't really worry about this latter case. So why should we
>    worry about the EQness of functions' lexical environment?
> 
> Kent, in considering the limits of function identity, remarks that it can
> probably only be preserved when reading named functions. But he does
> discuss how EQness can be a requirement which indicates that it must be
> something that can be relied upon in particular circumstances. Can we rely
> upon it when comparing the identity of anonymous functions?
> 
> In other words we must confirm that this is always true:
> (eq #1=#.(lambda ()) #1#) => t

Contrast this with the following:

(eq #1=(lambda ()) #1#) => nil

Note that in CL (lambda ()) refers to a macro and it expands to #'(lambda ())
or in long form (function (lambda ())).  Thus, according to the definition
of FUNCTION, the two lambda forms are not guaranteed to return EQ closures.
The original form should indeed be true, barring any tracing "bugs" as
previously mentioned, since the evaluation has already been done at read
time, and since function objects are first class they should be comparable
to themselves.

> And it must be impermissible for an implementation to replace or inline
> the functions within the predicate with a function of identical semantics.

Yes, but such a rule can be generalized to include all usages of the
FUNCTION operator, as the above forms do.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Adam Warner
Subject: Re: More static type fun.
Date: 
Message-ID: <pan.2003.10.27.09.53.59.557335@consulting.net.nz>
Hi Duane Rettig,

Thanks for the comprehensive reply Duane.

>> In other words we must confirm that this is always true: 
>> (eq #1=#.(lambda ()) #1#) => t
> 
> Contrast this with the following:
> 
> (eq #1=(lambda ()) #1#) => nil
> 
> Note that in CL (lambda ()) refers to a macro and it expands to
> #'(lambda ()) or in long form (function (lambda ())).  Thus, according
> to the definition of FUNCTION, the two lambda forms are not guaranteed
> to return EQ closures. The original form should indeed be true, barring
> any tracing "bugs" as previously mentioned, since the evaluation has
> already been done at read time, and since function objects are first
> class they should be comparable to themselves.

It's the final point I was missing: "since function objects are first
class they should be comparable to themselves." What objects are first
class in Lisp? Before now I only knew symbols could be reliably
compared using EQ. Now I understand function objects can be. Is there
anything else?

Thanks,
Adam
From: Nikodemus Siivola
Subject: Re: More static type fun.
Date: 
Message-ID: <bnigo9$2h9$1@nyytiset.pp.htv.fi>
In comp.lang.lisp Adam Warner <······@consulting.net.nz> wrote:

1.

>> test2 relies on some kind of equality being defined over functions. Some
>> (statically typed) languages do not support that (for reasons other than
>> static typing).

2.

> Since I can't find a straight answer in the archives or the HyperSpec can
> someone please explain whether an EQ test for function identity is
> conforming? It appears to be unspecified.

Simple solution to point 2: use '+ and '* instead. Doubt it helps with
no 1, though.

Cheers,

 -- Nikodemus
From: Joachim Durchholz
Subject: Re: More static type fun.
Date: 
Message-ID: <bnis29$lrs$1@news.oberberg.net>
Adam Warner wrote [a lot of things about comparing functions for equality]:

I agree to the problems, and since comparing by-value is undecidable, 
some approximation is necessary.
Comparing by representation is the roughest approximation imaginable.
I think structural comparison would give the best results: if a function 
is builtin, use pointer equality, otherwise compare structures and check 
whether the structure elements are (recursively) equal.

It will not be able to detect the equality of
   CAR
and
   (LAMBDA X (CAR X))
but I think most people would be satisfied with the result anyway :-)
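In Python terms, the idea might look like the sketch below (CPython
only: the __code__ attributes are an implementation detail, and
structurally_similar is a made-up helper name):

```python
def structurally_similar(f, g):
    # crude structural check: same bytecode and same constant pool.
    # A heuristic, not extensional equality: it would still miss
    # pairs like CAR vs (LAMBDA X (CAR X)) that differ textually.
    return (f.__code__.co_code == g.__code__.co_code
            and f.__code__.co_consts == g.__code__.co_consts)

f = lambda x: x + 1
g = lambda x: x + 1   # same structure, different object
h = lambda x: x + 2   # different constant

print(structurally_similar(f, g))   # True
print(structurally_similar(f, h))   # False
print(f is g)                       # False: pointer equality says no
```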

Comparing functions for equality across version changes is an extremely 
tricky problem. If the version change was a bug fix, one would like to 
compare the old and new function to be equal, if it was an incompatible 
change, they should compare inequal (but some applications might want to 
have them compare equal anyway, if they don't depend on the aspects of 
the function that have changed).
Vendor information on such changes is usually incomplete, so it's not 
reliable enough to base fundamental things like equality on it...

Regards,
Jo
From: Matthew Danish
Subject: Re: More static type fun.
Date: 
Message-ID: <20031027105236.GJ1454@mapcar.org>
On Mon, Oct 27, 2003 at 10:30:18AM +0100, Joachim Durchholz wrote:
> Adam Warner wrote [a lot of things about comparing functions for equality]:
> 
> I agree to the problems, and since comparing by-value is undecidable, 
> some approximation is necessary.
> Comparing by representation is the roughest approximation imaginable.
> I think structural comparison would give the best results: if a function 
> is builtin, use pointer equality, otherwise compare structures and check 
> whether the structure elements are (recursively) equal.

Just to clear up a few confusions on the part of the non-Lispers:

EQ tests object identity.  Nothing more, nothing less.  Objects are only
identical to themselves.  EQ is a bit low-level in that it exposes
certain implementation decisions regarding numbers and characters, and
for the meaning of those types of objects it is not usefully defined.

EQL tests object identity except for numbers and characters, for which
it compares the actual value.  This is because object identity for
numbers and characters is allowed to be violated for purposes of
efficiency.  No other objects are like that.  EQL reflects the
preservation of object identity that is central to Common Lisp, and is
therefore the default equality predicate in every situation I can think
of.

For example, CL takes the trouble to ensure that symbols of the same
name (from the same package) are interned and all uses are identical
under EQ.  Binding, function application, etc do NOT copy objects
(except for numbers/characters).  Copies of objects are not EQ to the
originals.

EQUAL compares structural identity for a number of built-in types such
as lists and strings.

EQUALP goes further than EQUAL, comparing vectors, structs, and strings
case-insensitively.

= compares values arithmetically, and is defined only for numbers.

* (= 0.0 0)
T
* (eql 0.0 0)
NIL

There are a number of other equality predicates, more specialized.
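A rough Python analogy may help non-Lispers place these (an analogy
only, not an exact mapping of the CL predicates):

```python
# Python's "is" compares object identity (roughly EQ); "==" compares
# values (roughly EQUAL, or = for numbers). This is an analogy only.
a = [1, 2, 3]
b = [1, 2, 3]
assert a == b           # structurally equal, like EQUAL
assert a is not b       # two distinct objects, so not "identical"
assert 0.0 == 0         # cross-type numeric comparison, like CL's =
```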

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Joachim Durchholz
Subject: Re: More static type fun.
Date: 
Message-ID: <bnium1$n8c$1@news.oberberg.net>
Matthew Danish wrote:
> Just to clear up a few confusions on the part of the non-Lispers:
> 
> EQ tests object identity.  Nothing more, nothing less.

I.e. reference equality by my book (for some suitable definition of 
"reference", which would be whatever the system uses to identify the two 
objects to be compared, be it memory addresses or indexes into a hash 
table).

 > Objects are only identical to themselves.

That's a circular definition (unfortunately).

One can make it non-circular by applying fixed-point theory, with an 
addition rephrased informally as "among the many relationships that 
satisfy the above, use the one that gives the most differences".

> For example, CL takes the trouble to ensure that symbols of the same
> name (from the same package) are interned and all uses are identical
> under EQ.

Then EQ equality is name equality.
Which is fine by me (though those integer and character specialties of 
EQ are awful warts IMHO).

(Additional equality operators snipped - this already-overlong thread 
doesn't need yet another flamewar on Lisp's idea of equality, which 
would almost certainly ensue if I brought up my opinion about it.)

Regards,
Jo
From: Matthew Danish
Subject: Re: More static type fun.
Date: 
Message-ID: <20031027131136.GK1454@mapcar.org>
On Mon, Oct 27, 2003 at 11:14:58AM +0100, Joachim Durchholz wrote:
> Then EQ equality is name equality.

No, it isn't.  It is a comparison of object identity.  Every object has
a unique identity.  Symbols can be interned in packages (this is not
required) and every successive call of INTERN with the same
(case-sensitive) name string will return the same object as it did the
first time.

(eq (intern "a") (intern "a")) => T

MAKE-SYMBOL doesn't intern the symbol.

(eq (make-symbol "a") (make-symbol "a")) => NIL
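Python's string interning offers a loose parallel, if that helps (an
analogy, not an equivalence; relies on CPython behavior):

```python
import sys

# Two freshly built, equal strings are distinct objects, like two
# MAKE-SYMBOL results; interning canonicalizes them, like INTERN.
a = "".join(["my", "-symbol"])
b = "".join(["my", "-symbol"])
assert a == b and a is not b
assert sys.intern(a) is sys.intern(b)
```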

> Which is fine by me (though those integer and character specialties of 
> EQ are awful warts IMHO).

Do you realize the reason for those exceptions?  If they weren't
present, all bignum objects would have to be interned like symbols.
Common Lisp may be a high-level language, but it does try to be
reasonably practical.

When dealing with objects that may be of any type, you should use EQL.
That is why the default equality operator is EQL, when the TEST argument
is available for standard functions, as I mentioned before.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Espen Vestre
Subject: Re: More static type fun.
Date: 
Message-ID: <kw3cdepzey.fsf@merced.netfonds.no>
Matthew Danish <·······@andrew.cmu.edu> writes:

> On Mon, Oct 27, 2003 at 11:14:58AM +0100, Joachim Durchholz wrote:
> > Then EQ equality is name equality.
> 
> No, it isn't.  It is a comparison of object identity.  

Isn't that approximately what the FP people mean when they say 
"name equality"?
-- 
  (espen)
From: Joachim Durchholz
Subject: Re: More static type fun.
Date: 
Message-ID: <bnjn2r$83h$1@news.oberberg.net>
Matthew Danish wrote:

> On Mon, Oct 27, 2003 at 11:14:58AM +0100, Joachim Durchholz wrote:
> 
>>Then EQ equality is name equality.
> 
> No, it isn't.  It is a comparison of object identity.  Every object has
> a unique identity.  Symbols can be interned in packages (this is not
> required) and every successive call of INTERN with the same
> (case-sensitive) name string will return the same object as it did the
> first time.
> 
> (eq (intern "a") (intern "a")) => T
> 
> MAKE-SYMBOL doesn't intern the symbol.
> 
> (eq (make-symbol "a") (make-symbol "a")) => NIL

Ah, right. I forgot about these intricacies - "intern" and "make-symbol" 
didn't exist when I did Interlisp, but now I remember having wrestled 
with similar functions then.

Lisp is unique in making equality a complicated thing that only experts 
can understand.
Equality certainly /is/ more complicated than meets the eye, but I think 
Lisp introduces complications in the wrong places. (See below for 
alternatives.)

>>Which is fine by me (though those integer and character specialties of 
>>EQ are awful warts IMHO).
> 
> 
> Do you realize the reason for those exceptions?  If they weren't
> present, all bignum objects would have to be interned like symbols.
> Common Lisp may be a high-level language, but it does try to be
> reasonably practical.

It wouldn't be /that/ difficult to do. Just keep all bignums in a hash 
table, for example - that would be an overhead of just a few bytes per 
bignum, which isn't much given that bignums already have memory 
management overhead.
It might even save some space, if the same bignum is used more than once.
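This scheme is essentially hash-consing. A minimal sketch, in Python
for neutrality (the weak-reference bookkeeping a real implementation
would need is omitted):

```python
# Hash-consing sketch: route every new value through a table so that
# equal values share a single representative object. After that,
# by-reference comparison is enough.
_interned = {}

def hashcons(v):
    return _interned.setdefault(v, v)

# int(...) builds a fresh bignum at runtime on each call
x = hashcons(int("7" * 30))
y = hashcons(int("7" * 30))
assert x == y and x is y
```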

Another solution would be to make bignums immutable, just like integers 
and characters. If you can't mutate a value, there's no meaningful 
distinction between reference and value equality anymore, so you can 
compare by-reference, and try by-value equality if by-reference equality 
doesn't work (just as for EQUAL).


I know that neither solution is going to be accepted for any mainstream 
Lisp dialect: there's too much legacy code around that uses the 
complicated equality idioms.
Which is just as well: it's unlikely that I'll ever program in Lisp. 
(And there are languages which are infinitely worse than Lisp. Paying me 
good money might actually convince me to program in it - there are 
languages you'd have to force me at gun point to program in, and I'm not 
talking about assembly here.)

Regards,
Jo
From: Matthew Danish
Subject: Re: More static type fun.
Date: 
Message-ID: <20031027191229.GP1454@mapcar.org>
On Mon, Oct 27, 2003 at 06:11:27PM +0100, Joachim Durchholz wrote:
> Matthew Danish wrote:
> 
> >On Mon, Oct 27, 2003 at 11:14:58AM +0100, Joachim Durchholz wrote:
> >
> >>Then EQ equality is name equality.
> >
> >No, it isn't.  It is a comparison of object identity.  Every object has
> >a unique identity.  Symbols can be interned in packages (this is not
> >required) and every successive call of INTERN with the same
> >(case-sensitive) name string will return the same object as it did the
> >first time.
> >
> >(eq (intern "a") (intern "a")) => T
> >
> >MAKE-SYMBOL doesn't intern the symbol.
> >
> >(eq (make-symbol "a") (make-symbol "a")) => NIL
> 
> Ah, right. I forgot about these intricacies - "intern" and "make-symbol" 
> didn't exist when I did Interlisp, but now I remember having wrestled 
> with similar functions then.

Well, I don't know Interlisp, but symbol interning has been around for a
long time.  Don't see what's so terribly intricate about it either.
What if it was a module that had the following functions:

Symbol.make
Symbol.intern
Symbol.unintern

Is that intricate?

> Lisp is unique in making equality a complicated thing that only experts 
> can understand.

I'm not sure what you find so complicated about it.  I think the problem
is your model of the Lisp world.  It is somewhat different than the
typical functional programming language.  In Lisp, you go about creating
objects, binding variables to them, supplying them as arguments to
functions, maybe mutating slots in them, etc.  But when an object is
created, there is the concept that it has some kind of unique identity
which can be compared against the identity of other objects.  The
predicate EQ tests the identity of all of its arguments to see if they
are the same.  But because of the exception for numbers and characters,
and the decision to not define EQ on them, EQL was introduced.  If you
are a beginner, you should not bother with EQ.  There is no need for it.
I only mentioned it because it was used in an example.

> Equality certainly /is/ more complicated than meets the eye, but I think 
> Lisp introduces complications in the wrong places. (See below for 
> alternatives.)
> 
> >>Which is fine by me (though those integer and character specialties of 
> >>EQ are awful warts IMHO).
> >
> >
> >Do you realize the reason for those exceptions?  If they weren't
> >present, all bignum objects would have to be interned like symbols.
> >Common Lisp may be a high-level language, but it does try to be
> >reasonably practical.
> 
> It wouldn't be /that/ difficult to do. Just keep all bignums in a hash 
> table, for example - that would be an overhead of just a few bytes per 
> bignum, which isn't much given that bignums already have memory 
> management overhead.
> It might even save some space, if the same bignum is used more than once.

So every time you perform a calculation on bignums, you need to look it
up in a hash-table?  One of the reasons why the symbols, from which a
program is made, don't have overhead is that the look-up is performed
during compile-time.  Bignums wouldn't have the same advantage.  Not to
mention, this might have to happen for intermediate results too.

> Another solution would be to make bignums immutable, just like integers 
> and characters. If you can't mutate a value, there's no meaningful 
> distinction between reference and value equality anymore, so you can 
> compare by-reference, and try by-value equality if by-reference equality 
> doesn't work (just as for EQUAL).

This is precisely what Lisp implementations do, and what EQL does.  =)
(btw, integers are a superset of bignums, in Lisp (and in math))

And there is a meaningful distinction between reference and value
equality, even for immutable values.  Just the fact that one may fail
while the other doesn't gives it a distinction.

Remember, in the Lisp world, every object created has a unique identity.
A lot of Lisp programming hinges around the use of identity to
distinguish objects.  For example, symbolic programming.  Let me give an
example outside of that:

I have a class of objects called ABSTRACT-SPILL-SLOT.  Every time the
register allocator needs to spill, it creates an object of this class.
These objects are accumulated and then operated upon by various set
operations in order to determine how the spill slots should be
allocated.  The class has no slots, the only distinguishing mark between
the abstract spill slots is identity.  At the end, I have a set of
abstract spill slots and I can go through it and replace them with real
spill slots.  When outputting asm, I can test for the presence of one
of the abstract spill slots by identity, and substitute the appropriate
real spill slot instead.

Of course, the way to do this without the concept of object identity
would be to simulate it by including a unique integer in a slot, and
then writing an equality predicate to test that.
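The pattern itself might be sketched in Python, which also hashes plain
objects by identity (names invented for illustration):

```python
# Each object() carries no data; its only distinguishing mark is
# identity, like a slotless ABSTRACT-SPILL-SLOT instance.
abstract_slots = [object() for _ in range(3)]

# Later, assign each abstract slot a real frame offset, keyed by
# identity (plain objects hash by identity in Python).
real_slots = {s: i * 8 for i, s in enumerate(abstract_slots)}

def resolve(slot):
    return real_slots[slot]

assert resolve(abstract_slots[2]) == 16
```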

> I know that neither solution is going to be accepted for any mainstream 
> Lisp dialect: there's too much legacy code around that uses the 
> complicated equality idioms.

Too bad one is already in use ;)

What's so complicated about the equality idioms?  The naming scheme is
kinda silly, but it makes some kind of sense: the longer the name, the
more structure it checks.  You know very well that equality has many
definitions.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Joachim Durchholz
Subject: Re: More static type fun.
Date: 
Message-ID: <bnk6fv$euv$1@news.oberberg.net>
Matthew Danish wrote:

> On Mon, Oct 27, 2003 at 06:11:27PM +0100, Joachim Durchholz wrote:
> 
>>Matthew Danish wrote:
>>
>>>On Mon, Oct 27, 2003 at 11:14:58AM +0100, Joachim Durchholz wrote:
>>>
>>>
>>>>Then EQ equality is name equality.
>>>
>>>No, it isn't.  It is a comparison of object identity.  Every object has
>>>a unique identity.  Symbols can be interned in packages (this is not
>>>required) and every successive call of INTERN with the same
>>>(case-sensitive) name string will return the same object as it did the
>>>first time.
>>>
>>>(eq (intern "a") (intern "a")) => T
>>>
>>>MAKE-SYMBOL doesn't intern the symbol.
>>>
>>>(eq (make-symbol "a") (make-symbol "a")) => NIL
>>
>>Ah, right. I forgot about these intricacies - "intern" and "make-symbol" 
>>didn't exist when I did Interlisp, but now I remember having wrestled 
>>with similar functions then.
> 
> 
> Well, I don't know Interlisp, but symbol interning has been around for a
> long time.  Don't see what's so terribly intricate about it either.
> What if it was a module that had the following functions:
> 
> Symbol.make
> Symbol.intern
> Symbol.unintern
> 
> Is that intricate?

The intricacy is not in the interning mechanism, it's in the fact that I 
have to think about it when reasoning about equality.
Take away mutability and it's not an issue (note that this is an 
entirely different area than static vs dynamic typing).

>>Lisp is unique in making equality a complicated thing that only experts 
>>can understand.
> 
> I'm not sure what you find so complicated about it.  I think the problem
> is your model of the Lisp world.  It is somewhat different than the
> typical functional programming language.  In Lisp, you go about creating
> objects, binding variables to them, supplying them as arguments to
> functions, maybe mutating slots in them, etc.  But when an object is
> created, there is the concept that it has some kind of unique identity
> which can be compared against the identity of other objects.

What I find complicated is the ramifications of having mutable objects 
everywhere.
Lisp is quite unique in both having good higher-order function support 
and encouraging programmers to use side effects. I find this combination 
dangerous and unfortunate, though I know enough about Lisp's history to 
accept that it was unavoidable given its evolution.

>>>>Which is fine by me (though those integer and character specialties of 
>>>>EQ are awful warts IMHO).
>>>
>>>
>>>Do you realize the reason for those exceptions?  If they weren't
>>>present, all bignum objects would have to be interned like symbols.
>>>Common Lisp may be a high-level language, but it does try to be
>>>reasonably practical.
>>
>>It wouldn't be /that/ difficult to do. Just keep all bignums in a hash 
>>table, for example - that would be an overhead of just a few bytes per 
>>bignum, which isn't much given that bignums already have memory 
>>management overhead.
>>It might even save some space, if the same bignum is used more than once.
> 
> So every time you perform a calculation on bignums, you need to look it
> up in a hash-table?

You pay that price anyway - or what do you think the system does 
when it looks for a free block to store the new bignum in???

 > One of the reasons why the symbols, from which a
> program is made, don't have overhead is that the look-up is performed
> during compile-time.  Bignums wouldn't have the same advantage.  Not to
> mention, this might have to happen for intermediate results too.

However, bignum calculations incur allocations (or reallocations). It's 
not that clear whether having true value semantics would incur any 
noticeable overhead - I bet some programs will suffer and others will 
profit, depending on how much data sharing is possible.

>>Another solution would be to make bignums immutable, just like integers 
>>and characters. If you can't mutate a value, there's no meaningful 
>>distinction between reference and value equality anymore, so you can 
>>compare by-reference, and try by-value equality if by-reference equality 
>>doesn't work (just as for EQUAL).
> 
> This is precisely what Lisp implementations do, and what EQL does.  =)
> (btw, integers are a superset of bignums, in Lisp (and in math))

Yes, right - but then even EQ could do the same.
See, under immutability, it's not even useful information to know 
whether two values share their memory location or not - any predicate 
that you could throw at the values will always return the same result, 
so they are, for all intents and purposes, truly equal.
If the values can mutate, distinguishing reference equality from value 
equality begins to make sense: changing one of the values will also 
change all its aliases, so it's relevant whether the values share a 
memory location or not.
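Concretely, in Python terms (any language with mutable aliasing shows
the same):

```python
# With mutable values, identity is observable: mutation through one
# alias is visible through the other, but not through a mere copy.
a = [1, 2]
b = a          # alias: the same object as a
c = [1, 2]     # equal value, distinct object
a.append(3)
assert b == [1, 2, 3]   # the alias sees the change
assert c == [1, 2]      # the equal-but-distinct value does not
```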

> And there is a meaningful distinction between reference and value
> equality, even for immutable values.  Just the fact that one may fail
> while the other doesn't gives it a distinction.

Not at all.

> Remember, in the Lisp world, every object created has a unique identity.
> A lot of Lisp programming hinges around the use of identity to
> distinguish objects.

It's irrelevant for immutable objects. You don't /need/ to distinguish 
immutable objects with the same value. This gives a much simpler 
semantics, which is easier to reason about (both informally inside one's 
head and formally).

 > For example, symbolic programming.  Let me give an
> example outside of that:
> 
> I have a class of objects called ABSTRACT-SPILL-SLOT.  Every time the
> register allocator needs to spill, it creates an object of this class.
> These objects are accumulated and then operated upon by various set
> operations in order to determine how the spill slots should be
> allocated.  The class has no slots, the only distinguishing mark between
> the abstract spill slots is identity.  At the end, I have a set of
> abstract spill slots and I can go through it and replace them with real
> spill slots.  When outputting asm, I can test for the presence of one
> of the abstract spill slots by identity, and substitute the appropriate
> real spill slot instead.

> Of course, the way to do this without the concept of object identity
> would be to simulate it by including a unique integer in a slot, and
> then writing an equality predicate to test that.

Exactly.
The point is that you can't get rid of that Lispish identity concept 
when it doesn't help you. In other words, Lisp objects are more 
complicated than they need to be.

>>I know that neither solution is going to be accepted for any mainstream 
>>Lisp dialect: there's too much legacy code around that uses the 
>>complicated equality idioms.
> 
> Too bad one is already in use ;)
> 
> What's so complicated about the equality idioms?  The naming scheme is
> kinda silly, but it makes some kind of sense: the longer the name, the
> more structure it checks.  You know very well that equality has many
> definitions.

*shrug* yes, equality has many definitions.
It's just that Lisp has many more definitions than needed. And that 
these definitions depend on implementation details, not on interfaces. 
To me, this all feels quite backwards.

Regards,
Jo
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <41xsy6u2y.fsf@beta.franz.com>
Joachim Durchholz <·················@web.de> writes:

> Matthew Danish wrote:

> > I'm not sure what you find so complicated about it.  I think the
> > problem is your model of the Lisp world.  It is somewhat different than the
> > typical functional programming language.  In Lisp, you go about creating
> > objects, binding variables to them, supplying them as arguments to
> > functions, maybe mutating slots in them, etc.  But when an object is
> > created, there is the concept that it has some kind of unique identity
> > which can be compared against the identity of other objects.
> 
> What I find complicated is the ramifications of having mutable objects
> everywhere.
> 
> Lisp is quite unique in both having good higher-order function support
> and encouraging programmers to use side effects. I find this
> combination dangerous and unfortunate, though I know enough about
> Lisp's history to accept that it was unavoidable given its evolution.

I find this combination powerful and fortunate.

> >>It might even save some space, if the same bignum is used more than once.
> > So every time you perform a calculation on bignums, you need to look
> > it up in a hash-table?

(but of course, the hash-table would need to be a weak one
in order to really save space for the long term, since
bignums in intermediate calculations are generally ephemeral)

> You pay that price anyway - or what do you think the system does
> when it looks for a free block to store the new bignum in???

Heh, your implementation naivete is showing.  The system merely has to
increment a pointer and perform a single test against a limit.

Think GC allocator, not best-fit malloc algorithm.
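Sketched schematically (illustrative code only, not any actual Lisp
runtime's allocator):

```python
# In a nursery managed by a copying GC, allocation is just a pointer
# increment plus a limit check; no free-list search is involved.
class Nursery:
    def __init__(self, size):
        self.ptr = 0
        self.limit = size

    def alloc(self, nbytes):
        if self.ptr + nbytes > self.limit:
            raise MemoryError("nursery full: a real system would GC here")
        addr = self.ptr
        self.ptr += nbytes
        return addr

n = Nursery(64)
assert n.alloc(16) == 0
assert n.alloc(16) == 16
```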

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Joachim Durchholz
Subject: Re: More static type fun.
Date: 
Message-ID: <bnlnlj$8kj$1@news.oberberg.net>
Duane Rettig wrote:

> Joachim Durchholz <·················@web.de> writes:
>>
>>Lisp is quite unique in both having good higher-order function support
>>and encouraging programmers to use side effects. I find this
>>combination dangerous and unfortunate, though I know enough about
>>Lisp's history to accept that it was unavoidable given its evolution.
> 
> I find this combination powerful and fortunate.

I agree it's powerful.

>>>>It might even save some space, if the same bignum is used more than once.
>>>
>>>So every time you perform a calculation on bignums, you need to look
>>>it up in a hash-table?
> 
> (but of course, the hash-table would need to be a weak one
> in order to really save space for the long term, since
> bignums in intermediate calculations are generally ephemeral)

Agreed.

>>You pay that price anyway - or what do you think the system does
>>when it looks for a free block to store the new bignum in???
> 
> Heh, your implementation naivete is showing.

Oh, we're starting to call names again.
Bye.
(Sorry, but other people in this thread have used up all my goodwill, 
and I'm tired of staying polite and on-topic in the presence of name 
calling.)

Regards,
Jo
From: Jon S. Anthony
Subject: Re: More static type fun.
Date: 
Message-ID: <m3znflwfc0.fsf@rigel.goldenthreadtech.com>
Joachim Durchholz <·················@web.de> writes:

> Duane Rettig wrote:
> 
> > Joachim Durchholz <·················@web.de> writes:
> >>
> >>You pay that price anyway - or what do you think the system does
> >>when it looks for a free block to store the new bignum in???
> > Heh, your implementation naivete is showing.
> 
> Oh, we're starting to call names again.
> Bye.
> (Sorry, but other people in this thread have used up all my goodwill,
> and I'm tired of staying polite and on-topic in the presence of name
> calling.)

What is it about the truth that bothers you so much?

/Jon
From: Espen Vestre
Subject: Re: More static type fun.
Date: 
Message-ID: <kw65i9jyz3.fsf@merced.netfonds.no>
Joachim Durchholz <·················@web.de> writes:

> Oh, we're starting to call names again.
> Bye.
> (Sorry, but other people in this thread have used up all my goodwill,
> and I'm tired of staying polite and on-topic in the presence of name
> calling.)

I think you should contemplate why you end up in this situation
all the time. 

You have shown a surprising (well, at least for someone apparently
genuinely interested in programming languages) lack of knowledge of
lisp, combined with some unfair judgements based on guesswork from
this limited knowledge. Even the little interlisp you once knew must
have been long gone from your memory, judging from some of your recent
posts (if you'd still known any interlisp, you would have asked:
"funcall?  the same as apply*?")

You ignited _my_ flamethrower by - when all other arguments were gone
- resorting to the extremely irritating but-the-parens-suck argument.
Extremely irritating because it only comes from those who do not _want_
to learn lisp - I've never encountered any pupil unexposed to
programming languages or any hacker _wanting_ to learn lisp that has
had a real problem with lisp syntax. And yes, I _have_, for many
years, worked in environments where the lisp hackers were a very small
minority.

-- 
  (espen)
From: Joachim Durchholz
Subject: Re: More static type fun.
Date: 
Message-ID: <bnm9jf$h0v$1@news.oberberg.net>
Espen,

Sorry if I have irritated you with some of my arguments - rest 
assured that many of the arguments from the Lisp side have been 
similarly irritating for the ML/Haskell/static typing side of this 
discussion (not specifically you, though I fear most of us have 
contributed to the flame-warish aspects of this thread).

Anyway. Personal behaviour is only marginally relevant for a technical 
newsgroup, so I'm leaving this subthread now.

I regret that this thread has become more of a challenge contest than a 
collaboration to find out differences and similarities. Too much noise, 
unfortunately :-(

Regards,
Jo
From: Espen Vestre
Subject: Re: More static type fun.
Date: 
Message-ID: <kw65i8gtex.fsf@merced.netfonds.no>
Joachim Durchholz <·················@web.de> writes:

> Sorry if I have irritated you with some of my arguments - rest
> assured that many of the arguments from the Lisp side have been
> similarly irritating for the ML/Haskell/static typing side of this

I think it's a cultural clash between lisp engineers and FP scientists
:-). If you guys want to impress the CL hacker community, you should
probably come up with real life stories on how a project using one of
these languages improved quality, saved man-years or money or, at
least, made the programmers have a whole lot of fun.

> discussion (not specifically you, though I fear most of us have
> contributed to the flame-warish aspects of this thread).

I tried not to claim anything about these languages since I
haven't really used them (though I have been exposed to quite a few
seminar lectures on them and their underlying theory during my days
doing mathematical logic at the university).
-- 
  (espen)
From: Joachim Durchholz
Subject: Re: More static type fun.
Date: 
Message-ID: <bnooid$p1b$2@news.oberberg.net>
Espen Vestre wrote:

> Joachim Durchholz <·················@web.de> writes:
> 
>>Sorry if I have irritated you with some of my arguments - rest
>>assured that many of the arguments from the Lisp side have been
>>similarly irritating for the ML/Haskell/static typing side of this
> 
> I think it's a cultural clash between lisp engineers and FP scientists
> :-). If you guys want to impress the CL hacker community, you should
> probably come up with real life stories on how a project using one of
> these languages improved quality, saved man-years or money or, at
> least, made the programmers have a whole lot of fun.

I can't give any accounts of the former, but having fun is indeed 
possible with FPLs. You just don't play with the read-eval-print loop 
and macros, you play with the type system - and writing monadic 
combinators is just as challenging, intellectually satisfying and full 
of a-ha effects as writing a new set of macros.
Or at least that's what I've been made to believe :-)

Regards,
Jo
From: Nick Name
Subject: Re: More static type fun.
Date: 
Message-ID: <qYRnb.64038$e5.2347017@news1.tin.it>
Joachim Durchholz wrote:

>  and writing monadic
> combinators is just as challenging, intellectually satisfying and full
> of a-ha effects as writing a new set of macros.

not to speak about arrows! But all this is really just having fun with
category theory anyway... ;)

V.
From: Frode Vatvedt Fjeld
Subject: Re: More static type fun.
Date: 
Message-ID: <2had7mtftk.fsf@vserver.cs.uit.no>
Joachim Durchholz <·················@web.de> writes:

> The intricacy is not in the interning mechanism, it's in the fact
> that I have to think about it when reasoning about equality.

This is positively false for any value of "I" that is reasonably
comfortable with Common Lisp.

> Lisp is quite unique in both having good higher-order function
> support and encouraging programmers to use side effects.

I find this statement ridiculous. Common Lisp does not encourage
programmers to use side-effects. However, the lisp mindset is that
your programming environment is a persistent, interactive, and
evolving thing. It takes the concept of time into account---not just
as some abstract unit, but also the actual, real-world time where
everything changes from one instance to the next, from day to day and
week to week, and so obviously every aspect of the lisp environment
must be able to change with it. This in contrast to many other
programming environments I know, which are merely able to take a
static snapshot of the world, and whose only support for evolution is
the ultimate side-effect: Thrash everything and start from scratch, by
another edit-recompile-run cycle. You may feel clean and functional
from not ever having written a side-effecting expression, but you are
missing the bigger picture.

>> So every time you perform a calculation on bignums, you need to
>> look it up in a hash-table?
>
> You pay that price anyway - or what do you think the system does
> when it looks for a free block to store the new bignum in???

What are you going to look up in the hash-table? It's going to have to
be a bignum, isn't it? So you need to allocate the bignum somehow,
regardless. And why would you expect many bignums to be alive more
than once? Seems very improbable to me. I mean, the range of bignums
you can fit in an address-space today isn't very impressive (not to
mention actual available RAM). And if you don't expect many hits in
your hash-table, you'll end up optimizing for the uncommon case at the
cost of the common case.

> It's just that Lisp has many more definitions than needed. And that
> these definitions depend on implementation details, not on
> interfaces. To me, this all feels quite backwards.

I already explained this.

  (eq x y) =approx= (eql (the (not (or character number)) x)
                         (the (not (or character number)) y))

..and the reason for having this rather peculiar function is as you
say due to implementation details. But those details happen to be
common to every known machine architecture, and have quite serious 
performance implications. But that's just the real world knocking on
the door, once again.

-- 
Frode Vatvedt Fjeld
From: Joachim Durchholz
Subject: Re: More static type fun.
Date: 
Message-ID: <bnlnd4$8c9$1@news.oberberg.net>
Frode Vatvedt Fjeld wrote:
> Joachim Durchholz <·················@web.de> writes:
> 
>>The intricacy is not in the interning mechanism, it's in the fact
>>that I have to think about it when reasoning about equality.
> 
> This is positively false for any value of "I" that is reasonably
> comfortable with Common Lisp.
> 
>>Lisp is quite unique in both having good higher-order function
>>support and encouraging programmers to use side effects.
> 
> 
> I find this statement ridiculous.

Uh, well, likewise. I find it ridiculous to conflate the 
"take-a-snapshot-of-the-world" paradigm with the edit-compile-run cycle: 
static typing and interactivity can go together (and indeed do, as the 
existence of Haskell interpreters proves).

Another subthread that I can go away from - calling names isn't my 
favorite pastime.

-Jo
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <spammers_must_die-2810030720040001@192.168.1.51>
In article <············@news.oberberg.net>, Joachim Durchholz
<·················@web.de> wrote:

> static typing and interactivity can go together (and indeed do, as the 
> existence of Haskell interpreters proves).

I don't know if this is typical (given what I know about Haskell I
strongly suspect it is, because I can't see how it could work any other
way) but the Hugs interpreter does not allow you to enter new definitions
at the command prompt.  (In fact, it constrains everything you type at the
interpreter to fit on a single line.)  Definitions must go in files.  This
rather undermines my notion of "interactivity".  On this view, C is
interactive too given dlopen().

E.
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <3F9E9812.8060303@ps.uni-sb.de>
Erann Gat wrote:
> 
> I don't know if this is typical (given what I know about Haskell I
> strongly suspect it it because I can't see how it could work any other
> way) but the Hugs interpreter does not allow you to enter new definitions
> at the command prompt.

GHCi does allow it. And ML systems have always been fully interactive 
(that is, for 25 years). Actually, one of the reasons type inference was 
invented in the first place was to support interactive work.

	- Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
  as kids, we would all be running around in darkened rooms, munching
  magic pills, and listening to repetitive electronic music."
  - Kristian Wilson, Nintendo Inc.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bnm6js$u0u$1@f1node01.rhrz.uni-bonn.de>
Andreas Rossberg wrote:
> Erann Gat wrote:
> 
>>
>> I don't know if this is typical (given what I know about Haskell I
>> strongly suspect it it because I can't see how it could work any other
>> way) but the Hugs interpreter does not allow you to enter new definitions
>> at the command prompt.
> 
> 
> GHCi does allow it. And ML systems have always been fully interactive 
> (that is, for 25 years). Actually, one of the reasons type inference was 
> invented in the first place was to support interactive work.

It seems to me that a revision of a definition that some other place of 
your program depends on might break static type correctness of the 
overall program. How is this addressed in such systems? Do you get a 
list of errors and a chance to adapt the affected places?

Or do I have a wrong view of how these things are handled?


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: More static type fun.
Date: 
Message-ID: <pan.2003.10.28.16.58.46.64845@knm.org.pl>
On Tue, 28 Oct 2003 17:47:56 +0100, Pascal Costanza wrote:

>> GHCi does allow it. And ML systems have always been fully interactive 
>> (that is, for 25 years). Actually, one of the reasons type inference was 
>> invented in the first place was to support interactive work.
> 
> It seems to me that a revision of a definiton that some other place of 
> your program depends on might break static type correctness of the 
> overall program. How is this addressed in such systems? Do you get a 
> list of errors and a chance to adapt the affected places?

If you define a value (maybe function) of the same name as some value
before, other functions which use the old value don't switch to the new
value.
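[Editor's note: Marcin's point can be sketched in OCaml; any ML-family
toplevel behaves similarly, and the names here are purely illustrative.]

```ocaml
(* A later binding of f shadows the earlier one, but g, which was
   typechecked and compiled against the old f, keeps using it. *)
let f x = x + 1
let g x = f x * 10      (* g captures this f *)
let f x = x + 100       (* shadows f; g is unaffected *)

let () =
  assert (f 1 = 101);   (* the new f *)
  assert (g 1 = 20)     (* g still calls the old f: (1 + 1) * 10 *)
```

Redefinition thus creates a new binding rather than updating the old
one, which is how these systems sidestep the retypechecking problem.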

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bnmr5r$i83$2@newsreader2.netcologne.de>
Marcin 'Qrczak' Kowalczyk wrote:

> On Tue, 28 Oct 2003 17:47:56 +0100, Pascal Costanza wrote:
> 
> 
>>>GHCi does allow it. And ML systems have always been fully interactive 
>>>(that is, for 25 years). Actually, one of the reasons type inference was 
>>>invented in the first place was to support interactive work.
>>
>>It seems to me that a revision of a definiton that some other place of 
>>your program depends on might break static type correctness of the 
>>overall program. How is this addressed in such systems? Do you get a 
>>list of errors and a chance to adapt the affected places?
> 
> 
> If you define a value (maybe function) of the same name as some value
> before, other functions which use the old value don't switch to the new
> value.

Yuck!


Pascal
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3f9fa10b$1@news.unimelb.edu.au>
·················@jpl.nasa.gov (Erann Gat) writes:

>Joachim Durchholz <·················@web.de> wrote:
>
>> static typing and interactivity can go together (and indeed do, as the 
>> existence of Haskell interpreters proves).
>
>I don't know if this is typical (given what I know about Haskell I
>strongly suspect it it because I can't see how it could work any other
>way) but the Hugs interpreter does not allow you to enter new definitions
>at the command prompt.  (In fact, it constrains everything you type at the
>interpreter to fit on a single line.)  Definitions must go in files.  This
>rather undermines my notion of "interactivity".

Why?

You may find the following shell function helps:

	hugsi() {
		touch hugs.hs
		hugs hugs.hs
		rm hugs.hs
	}

Then, when you want to enter a function definition,
instead of typing "let" (as in ghci) or "defun", just type ":e".
Hugs will then let you enter one or more multi-line definitions.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-2910031115440001@k-137-79-50-101.jpl.nasa.gov>
In article <··········@news.unimelb.edu.au>, Fergus Henderson
<···@cs.mu.oz.au> wrote:

> ·················@jpl.nasa.gov (Erann Gat) writes:
> 
> >Joachim Durchholz <·················@web.de> wrote:
> >
> >> static typing and interactivity can go together (and indeed do, as the 
> >> existence of Haskell interpreters proves).
> >
> >I don't know if this is typical (given what I know about Haskell I
> >strongly suspect it it because I can't see how it could work any other
> >way) but the Hugs interpreter does not allow you to enter new definitions
> >at the command prompt.  (In fact, it constrains everything you type at the
> >interpreter to fit on a single line.)  Definitions must go in files.  This
> >rather undermines my notion of "interactivity".
> 
> Why?

Because I've used Lisp and Python.

> 
> You may find the following shell function helps:
> 
>         hugsi() {
>                 touch hugs.hs
>                 hugs hugs.hs
>                 rm hugs.hs
>         }
> 
> Then, when you want to enter a function definition,
> instead of typing "let" (as in ghci) or "defun", just type ":e".
> Hugs will then let you enter one or more multi-line definitions.

Nope.

Prelude> :e
ERROR - Hugs is not configured to use an editor


There is no excuse for this in a system that is supposed to be optimized
for teaching running on unix.  The EDITOR environment variable convention
is universal.

That's strike three for Hugs.  I may give ghci a try some day, but for now
my Haskell time allotment has expired.

E.
From: Simon Helsen
Subject: Re: More static type fun.
Date: 
Message-ID: <Pine.SOL.4.44.0310291634300.4865-100000@crete.uwaterloo.ca>
On Wed, 29 Oct 2003, Erann Gat wrote:

>Nope.
>
>Prelude> :e
>ERROR - Hugs is not configured to use an editor
>
>
>There is no excuse for this in a system that is supposed to be optimized
>for teaching running on unix.  The EDITOR environment variable convention
>is universal.
>
>That's strike three for Hugs.  I may give ghci a try some day, but for now
>my Haskell time allotment has expired.

I guess that is your open mindedness, eh? If you judge the capacity of
modern statically typed languages based on 'one' implementation of 'one'
particular language... Anyways, next time you have a time slot, take a
look at www.ocaml.org, the language, its applications, its implementation.

	S
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-2910031417280001@k-137-79-50-101.jpl.nasa.gov>
In article <·······································@crete.uwaterloo.ca>,
Simon Helsen <·······@computer.org> wrote:

> On Wed, 29 Oct 2003, Erann Gat wrote:
> 
> >Nope.
> >
> >Prelude> :e
> >ERROR - Hugs is not configured to use an editor
> >
> >
> >There is no excuse for this in a system that is supposed to be optimized
> >for teaching running on unix.  The EDITOR environment variable convention
> >is universal.
> >
> >That's strike three for Hugs.  I may give ghci a try some day, but for now
> >my Haskell time allotment has expired.
> 
> I guess that is your open mindedness, he? If you judge the capacity of
> modern statically typed languages based on 'one' implementation of 'one'
> particular language... 

How do you get from "strike three for Hugs" to any sort of judgement about
"the capacity of modern statically typed languages"?

> Anyways, next time you have a time slot, take a
> look at www.ocaml.org, the language, its applications, its implementation.

I am in fact just finishing up the last stages of building OCaml from
source even as I write this.  So far it's gone without a hitch.

E.
From: Erann Gat
Subject: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <gat-2910031459090001@k-137-79-50-101.jpl.nasa.gov>
In article <····················@k-137-79-50-101.jpl.nasa.gov>,
···@jpl.nasa.gov (Erann Gat) wrote:

> I am in fact just finishing up the last stages of building OCaml from
> source even as I write this.  So far it's gone without a hitch.

I regret to report that OCaml immediately exhibits what I consider to be a
fatal flaw:

# let rec fact n =
  if n<=1 then 1 else n*fact(n-1);;
val fact : int -> int = <fun>
# fact 5;;
- : int = 120
# fact 30;;
- : int = -738197504
# fact 40;;
- : int = 0

Silently returning the wrong answer is never acceptable IMHO.

E.
From: Simon Taylor
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <3fa05d1e@news.unimelb.edu.au>
In article <····················@k-137-79-50-101.jpl.nasa.gov>, Erann Gat wrote:
> In article <····················@k-137-79-50-101.jpl.nasa.gov>,
> ···@jpl.nasa.gov (Erann Gat) wrote:
> 
>> I am in fact just finishing up the last stages of building OCaml from
>> source even as I write this.  So far it's gone without a hitch.
> 
> I regret to report that OCaml immediately exhibits what I consider to be a
> fatal flaw:
> 
> # let rec fact n =
>   if n<=1 then 1 else n*fact(n-1);;
> val fact : int -> int = <fun>
> # fact 5;;
> - : int = 120
> # fact 30;;
> - : int = -738197504
> # fact 40;;
> - : int = 0
> 
> Silently returning the wrong answer is never acceptable IMHO.

If you use modular arithmetic operations, don't act
surprised when they don't trap overflow.  I do think
modular arithmetic is a poor default.

The Ocaml standard library provides a type Big_int
which will give the answers you expect.
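[Editor's note: a hedged sketch of the Big_int route. Big_int comes with
OCaml's "num" library, and the function names below are from that
library's interface as of this era.]

```ocaml
(* Factorial over arbitrary-precision integers: no silent wrap-around. *)
open Big_int

let rec big_fact n =
  if n <= 1 then unit_big_int
  else mult_int_big_int n (big_fact (n - 1))

let () =
  print_endline (string_of_big_int (big_fact 40))
```

Compile with the nums library linked in, e.g. `ocamlc nums.cma fact.ml`.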

Simon.
From: Erann Gat
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <gat-2910032052490001@192.168.1.51>
In article <········@news.unimelb.edu.au>, Simon Taylor
<·····@cs.mu.oz.au> wrote:

> In article <····················@k-137-79-50-101.jpl.nasa.gov>, Erann
Gat wrote:
> > In article <····················@k-137-79-50-101.jpl.nasa.gov>,
> > ···@jpl.nasa.gov (Erann Gat) wrote:
> > 
> >> I am in fact just finishing up the last stages of building OCaml from
> >> source even as I write this.  So far it's gone without a hitch.
> > 
> > I regret to report that OCaml immediately exhibits what I consider to be a
> > fatal flaw:
> > 
> > # let rec fact n =
> >   if n<=1 then 1 else n*fact(n-1);;
> > val fact : int -> int = <fun>
> > # fact 5;;
> > - : int = 120
> > # fact 30;;
> > - : int = -738197504
> > # fact 40;;
> > - : int = 0
> > 
> > Silently returning the wrong answer is never acceptable IMHO.
> 
> If you use modular arithmetic operations, don't act
> surprised when they don't trap overflow.

That's not what surprises me.  What surprises me is that +,*,/ and - are
modular arithmetic operators defined only on ints mod 2^32 (apparently):

        Objective Caml version 3.07+2

# 1.0+1.0;;
This expression has type float but is here used with type int


That is just pathetic.  What happened to polymorphism?  Even C, which is
one of the more brain damaged languages known to man, knows how to add
floats!


> The Ocaml standard library provides a type Big_int
> which will give the answers you expect.

Well, that's dandy, but I think I'm going to stick with Lisp.  The
designers of OCaml seem to have completly missed the point of what a high
level programming language is for.

E.
From: Simon Taylor
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <3fa0c797$1@news.unimelb.edu.au>
In article <····················@192.168.1.51>, Erann Gat wrote:
> In article <········@news.unimelb.edu.au>, Simon Taylor
><·····@cs.mu.oz.au> wrote:
> 
>> In article <····················@k-137-79-50-101.jpl.nasa.gov>, Erann
> Gat wrote:
>> > In article <····················@k-137-79-50-101.jpl.nasa.gov>,
>> > ···@jpl.nasa.gov (Erann Gat) wrote:
>> > 
>> >> I am in fact just finishing up the last stages of building OCaml from
>> >> source even as I write this.  So far it's gone without a hitch.
>> > 
>> > I regret to report that OCaml immediately exhibits what I consider to be a
>> > fatal flaw:
>> > 
>> > # let rec fact n =
>> >   if n<=1 then 1 else n*fact(n-1);;
>> > val fact : int -> int = <fun>
>> > # fact 5;;
>> > - : int = 120
>> > # fact 30;;
>> > - : int = -738197504
>> > # fact 40;;
>> > - : int = 0
>> > 
>> > Silently returning the wrong answer is never acceptable IMHO.
>> 
>> If you use modular arithmetic operations, don't act
>> surprised when they don't trap overflow.
> 
> That's not what surprises me.  What surprises me is that +,*,/ and - are
> modular arithmetic operator defined only on ints mod 2^32 (apparently):

mod 2^(word_size-1).

That's an ugly choice for a default.  But if you want better behaviour,
it's right there in the standard library.

> # 1.0+1.0;;
> This expression has type float but is here used with type int
> 
> That is just pathetic.  What happened to polymorphism?  Even C, which is
> one of the more brain damaged languages known to man, knows how to add
> floats!

# 1.0 +. 1.0 ;;
- : float = 2.

It's completely reasonable for integer and floating point addition
to have different names.  They are different operations with very
different semantics.
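[Editor's note: the split Simon describes, in a small self-contained
sketch.]

```ocaml
(* (+) is int -> int -> int; (+.) is float -> float -> float.
   Each operator is monomorphic, so mixing the two is a compile-time
   type error rather than an implicit conversion. *)
let isum = 1 + 2        (* integer addition *)
let fsum = 1.0 +. 2.0   (* floating-point addition *)
(* let bad = 1.0 + 1.0 *)   (* rejected: float where int expected *)

let () =
  assert (isum = 3);
  assert (fsum = 3.0)
```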

>> The Ocaml standard library provides a type Big_int
>> which will give the answers you expect.
> 
> Well, that's dandy, but I think I'm going to stick with Lisp.  The
> designers of OCaml seem to have completly missed the point of what a high
> level programming language is for.

You haven't bothered to read the manual, and you've become upset when
the first thing you tried didn't behave the same as in Lisp.  You're
guaranteed not to like anything other than Lisp.

Simon.
From: Erann Gat
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <gat-3010030737060001@192.168.1.51>
In article <··········@news.unimelb.edu.au>, Simon Taylor
<·····@cs.mu.oz.au> wrote:

> It's completely reasonable for integer and floating point addition
> to have different names.

Not in a language with a polymorphic type system IMHO.  What is the point
of polymorphism if not to allow you to use the same + operator to add
different kinds of things?

> They are different operations with very different semantics.

You and I have very different ideas about what the phrase "very different"
means.

E.
From: Matthias Blume
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <m1ism61yvw.fsf@tti5.uchicago.edu>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··········@news.unimelb.edu.au>, Simon Taylor
> <·····@cs.mu.oz.au> wrote:
> 
> > It's completely reasonable for integer and floating point addition
> > to have different names.
> 
> Not in a language with a polymorphic type system IMHO.  What is the point
> of polymorphism if not to allow you to use the same + operator to add
> different kinds of things?

There are different kinds of polymorphism.  When we talk about the
polymorphism in the ML family of languages, we usually talk about
/parametric/ polymorphism (although other forms tend to exist, too).
An operation that is parametric must not care what the actual type of
the argument is.  In particular this means that it can't use
floating-point add for one and integer add for another.
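[Editor's note: a minimal illustration of the parametricity constraint;
the names here are just for the example.]

```ocaml
(* double : ('a -> 'a) -> 'a -> 'a.  It cannot inspect 'a, so the one
   definition works at int and at float alike -- and for the same
   reason it could never itself choose between integer add and
   floating-point add. *)
let double f x = f (f x)

let () =
  assert (double (fun n -> n + 1) 0 = 2);        (* instantiated at int *)
  assert (double (fun x -> x +. 1.0) 0.0 = 2.0)  (* instantiated at float *)
```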

> > They are different operations with very different semantics.
> 
> You and I have very different ideas about what the phrase "very different"
> means.

You will have to admit that they correspond to different machine
instructions.  They also have different
rounding/truncating/wrap-around/overflow properties.  They have
different characteristics as far as precision is concerned.
How much more "different" than that can you get?

SML, by the way, overloads the common arithmetic operators for all
integral and floating point types.  Personally I am not sure whether
this is the right choice.  The usual argument in favor of this is to
prevent having discussions like this one on completely marginal issues
(which are, nonetheless, actually quite tricky to deal with:
overloading makes type inference a lot less smooth than it would
normally be -- which is why OCaml takes the route it takes).

Besides, I am a bit disappointed with your reaction.  Although I also
do not like OCaml's particular design choices in a few areas, these
have to be /minor/ annoyances at best.  If you can't look past these
and see the big picture, then there is really no basis for a
meaningful discussion.

(I am always left shaking my head when people tell me how great C++ is
as it lets you define your own overloaded operators...)

Matthias
From: Erann Gat
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <gat-3010031505400001@k-137-79-50-101.jpl.nasa.gov>
I'm going to respond to your comments out of order in hopes of getting
this discussion back on track.

In article <··············@tti5.uchicago.edu>, Matthias Blume
<····@my.address.elsewhere> wrote:

> I am a bit disappointed with your reaction.  Although I also
> do not like OCaml's particular design choices in a few areas, these
> have to be /minor/ annoyances at best.  If you can't look past these
> and see the big picture, then there is really no basis for a
> meaningful discussion.

We seem to have different ideas of what the "big picture" is.

You seem to consider the "big picture" to be some kind of a tradeoff
between the effort required to *use* a programming language versus the
effort required to implement it.  For example:

> SML, by the way, overloads the common arithmetic operators for all
> integral and floating point types.  Personally I am not sure whether
> this is the right choice.  The usual argument in favor of this is to
> prevent having discussions like this one on completely marginal issues
> (which are, nontheless, actually quite tricky to deal with:
> overloading makes type inference a lot less smooth than it would
> normally be -- which is why OCaml takes the route it takes).

IMO the whole point of having high-level languages is to make programming
easier.  All else being equal, simpler is better, but when it comes to a
tradeoff between implementation effort and user experience I believe that
the balance must be heavily, if not exclusively, biased towards the user. 
(I think the differences in our positions might be the result of the fact
that I'm in an industrial setting while you are in an academic one.  In an
academic setting the implementor and the user of a language is often the
same person, so it makes sense to balance the interests of both.)

> There are different kinds of polymorphism.  When we talk about the
> polymorphism in the ML family of languages, we usually talk about
> /parametric/ polymorphism (although other forms tend to exist, too).
> An operation that is parametric must not care what the actual type of
> the argument is.  In particular this means that it can't use
> floating-point add for one and integer add for another.

So parametric polymorphism adheres to the Liskov Substitution Principle, right?

This choice, it seems to me, benefits the implementor at the expense of
the user, so is in my view a poor one.  But I can see how someone using a
different quality metric might not think so.


> > > They are different operations with very different semantics.
> > 
> > You and I have very different ideas about what the phrase "very different"
> > means.
> 
> You will have to admit that they correspond to different machine
> instructions.

First, that depends on the machine.  And second, what does that have to do
with anything?  If I cared about the machine I'd program in C.  Does ML
distinguish between signed and unsigned operations?  Those correspond to
different machine instructions too.

> They also have different
> rounding/truncating/wrap-around/overflow properties.  They have
> different characteristics as far as precision is concerned.
> How much more "different" than that can you get?

A lot.  Adding integers and floats (and rationals and complexes and
arrays) are very different in terms of bits, but very similar (for better
or worse) in terms of the mental models that humans bring to bear on
them.  Again, based on my premise that one should optimize for the user,
it is better to optimize the impedance match between the language
constructs and the mental models than the language constructs and the
machine instruction set.


> (I am always left shaking my head when people tell me how great C++ is
> as it lets you define your own overloaded operators...)

You must be shaking your head at the Haskell people too then.  (And you
must really hate Common Lisp multimethods.  Non-parametric polymorphism
*and* overloaded operators in one language!  Oh, the horror!)

E.
From: Matthias Blume
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <m2fzhayz3j.fsf@hanabi-air.shimizu.blume>
···@jpl.nasa.gov (Erann Gat) writes:

> I'm going to respond to your comments out of order in hopes of getting
> this discussion back on track.
> 
> In article <··············@tti5.uchicago.edu>, Matthias Blume
> <····@my.address.elsewhere> wrote:
> 
> > I am a bit disappointed with your reaction.  Although I also
> > do not like OCaml's particular design choices in a few areas, these
> > have to be /minor/ annoyances at best.  If you can't look past these
> > and see the big picture, then there is really no basis for a
> > meaningful discussion.
> 
> We seem to have different ideas of what the "big picture" is.
> 
> You seem to consider the "big picture" to be some kind of a tradeoff
> between the effort required to *use* a programming language versus the
> effort required to implement it.  [ ... ]

No.  But to me using a different symbol for floating point addition is
*no effort at all*.

> [ ... ]  Does ML
> distinguish between signed and unsigned operations?  Those correspond to
> different machine instructions too.

SML does.

> > (I am always left shaking my head when people tell me how great C++ is
> > as it lets you define your own overloaded operators...)
> 
> You must be shaking your head at the Haskell people too then.  (And you
> must really hate Common Lisp multimethods.  Non-parametric polymorphism
> *and* overloaded operators in one language!  Oh, the horror!)

You misunderstood.  Although I don't have a big thing for overloading,
I don't hate it.  What gets me is that to so many people having it seems to
be such a big deal.

Matthias
From: Erann Gat
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <gat-3010032054220001@192.168.1.51>
In article <··············@hanabi-air.shimizu.blume>, Matthias Blume
<····@my.address.elsewhere> wrote:

> What gets me is that to so many people having it seems to
> be such a big deal.

Suppose I write a big hairy piece of code full of integer adds.  Then I
decide I want to change some of those integers to a float so that half of
the integer adds become float adds.  If + is overloaded I don't have to do
any work.  If it isn't, I have a big, tedious job ahead of me.

That why it's a big deal to me.

E.
From: Matthias Blume
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <m2ptgehte1.fsf@hanabi-air.shimizu.blume>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@hanabi-air.shimizu.blume>, Matthias Blume
> <····@my.address.elsewhere> wrote:
> 
> > What gets me is that to so many people having it seems to
> > be such a big deal.
> 
> Suppose I write a big hairy piece of code full of integer adds.  Then I
> decide I want to change some of those integers to a float so that half of
> the integer adds become float adds.  If + is overloaded I don't have to do
> any work.  If it isn't, I have a big, tedious job ahead of me.

If it is a big hairy piece of code, then you *better* do that big,
tedious job!  I have actually worked with such code in a language that
does overload operators -- which did not make me very confident about
the correctness of the code.

And don't make it sound like it is such a huge deal: If you change
some of the types to floating point in OCaml, then the compiler will
happily point out to you where you need to write "+." instead of "+".
I find it very helpful to have the compiler force me to go through
each case because it also forces me to spend at least a second or two
on each one thinking about whether using floating-point-add was really
what I wanted there.

Matthias
From: Erann Gat
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <gat-3010032210150001@192.168.1.51>
In article <··············@hanabi-air.shimizu.blume>, Matthias Blume
<····@my.address.elsewhere> wrote:

> And don't make it sound like it is such a huge deal: If you change
> some of the types to floating point in OCaml, then the compiler will
> happily point out to you where you need to write "+." instead of "+".

If the compiler can point them out, why can't the compiler go ahead and
make the changes for me?  Computers are supposed to make life easier, not
harder.

> I find it very helpful to have the compiler force me to go through
> each case because it also forces me to spend at least a seconds or two
> on each one thinking about whether using floating-point-add was really
> what I wanted there.

We'll just have to agree to disagree on this.

E.
From: Jerzy Karczmarczuk
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <3FA21BFF.4040909@info.unicaen.fr>
Erann Gat wrote:
> Matthias Blume wrote:
> 
>>And don't make it sound like it is such a huge deal: If you change
>>some of the types to floating point in OCaml, then the compiler will
>>happily point out to you where you need to write "+." instead of "+".
> 
> 
> If the compiler can point them out, why can't the compiler go ahead and
> make the changes for me?  Computers are supposed to make life easier, not
> harder.

When you started to ask for a "loud condemnation" of Hugs, and wrote your
hmmmm... peculiar logic about the relation between bugs in a program and the
soundness of a type theory, I suspected that you were trolling. When you
began to "condemn" the (+) vs. (+.) issue, I thought that perhaps you were
shooting blindly, without even trying to understand the differences between
various type systems.

Now I am sure of both. Stop wasting your time.
Anyway, I remain an optimist. Either you will learn enough to keep you from
such a provocative attitude, or you will give up and walk away from functional
programming, which will make everybody happy.



J. Karczmarczuk
From: Matthias Blume
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <m2ekwtpm7j.fsf@hanabi-air.shimizu.blume>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@hanabi-air.shimizu.blume>, Matthias Blume
> <····@my.address.elsewhere> wrote:
> 
> > And don't make it sound like it is such a huge deal: If you change
> > some of the types to floating point in OCaml, then the compiler will
> > happily point out to you where you need to write "+." instead of "+".
> 
> If the compiler can point them out, why can't the compiler go ahead and
> make the changes for me?  Computers are supposed to make life easier, not
> harder.

Didn't what I wrote (see below) imply the answer to this one?  I DON'T
WANT THE COMPUTER TO GO AND CHANGE IT WITHOUT ASKING.

> > I find it very helpful to have the compiler force me to go through
> > each case because it also forces me to spend at least a seconds or two
> > on each one thinking about whether using floating-point-add was really
> > what I wanted there.
> 
> We'll just have to agree to disagree on this.

Looks like it.
From: Joe Marshall
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <8yn1mm59.fsf@ccs.neu.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> ···@jpl.nasa.gov (Erann Gat) writes:
>
>> In article <··············@hanabi-air.shimizu.blume>, Matthias Blume
>> <····@my.address.elsewhere> wrote:
>> 
>> > What gets me is that to so many people having it seems to
>> > be such a big deal.
>> 
>> Suppose I write a big hairy piece of code full of integer adds.  Then I
>> decide I want to change some of those integers to a float so that half of
>> the integer adds become float adds.  If + is overloaded I don't have to do
>> any work.  If it isn't, I have a big, tedious job ahead of me.
>
> If it is a big hairy pice of code, then you *better* do that big,
> tedious job!

There seems to be a bent towards the `Protestant Work Ethic' here.

I've noticed it in the `Lisp vs. Python' thread, too.  They want small
changes to the language to be a big production.
From: Erann Gat
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <gat-3110030909050001@192.168.1.51>
In article <············@ccs.neu.edu>, Joe Marshall <···@ccs.neu.edu> wrote:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > ···@jpl.nasa.gov (Erann Gat) writes:
> >
> >> In article <··············@hanabi-air.shimizu.blume>, Matthias Blume
> >> <····@my.address.elsewhere> wrote:
> >> 
> >> > What gets me is that to so many people having it seems to
> >> > be such a big deal.
> >> 
> >> Suppose I write a big hairy piece of code full of integer adds.  Then I
> >> decide I want to change some of those integers to a float so that half of
> >> the integer adds become float adds.  If + is overloaded I don't have to do
> >> any work.  If it isn't, I have a big, tedious job ahead of me.
> >
> > If it is a big hairy pice of code, then you *better* do that big,
> > tedious job!
> 
> There seems to be a bent towards the `Protestant Work Ethic' here.
> 
> I've noticed it in the `Lisp vs. Python' thread, too.  They want small
> changes to the language to be a big production.

Yes, isn't that interesting?  I suppose I should not be as surprised as I
am.  I first encountered this phenomenon when many years ago I tried to
sell people on the idea of writing tools in Lisp to make their jobs
easier.  I found, much to my shock at the time, that there was a
significant population of people who did not want their jobs made any
easier (and I'm not referring to programmers here).  They actually enjoyed
the tedium of doing the same things over and over again.  To this day I
still find this bizarre, but I suppose there's no accounting for taste.

E.
From: David Golden
Subject: OT: "Dopamine addicts" was Re: OCaml: I am not impressed
Date: 
Message-ID: <235b265c.0310311808.69a5062@posting.google.com>
···@jpl.nasa.gov (Erann Gat) wrote in message news:<····················@192.168.1.51>...
>  I found, much to my shock at the time, that there was a
> significant population of people who did not want their jobs made any
> easier (and I'm not referring to programmers here).  They actually enjoyed
> the tedium of doing the same things over and over again.  To this day I
> still find this bizarre, but I suppose there's no accounting for taste.
> 

There's a wacky just-so-story theory about that here:

http://www.reciprocality.org/Reciprocality/r1/intro.html

Probably not entirely true, but it makes for fun reading and could
perhaps be formed into a somewhat testable hypothesis...
From: Sebastian Stern
Subject: Re: OT: "Dopamine addicts" was Re: OCaml: I am not impressed
Date: 
Message-ID: <ad7d32de.0311010118.7d48c511@posting.google.com>
············@oceanfree.net (David Golden) wrote:
> ···@jpl.nasa.gov (Erann Gat) wrote:
> >  I found, much to my shock at the time, that there was a
> > significant population of people who did not want their jobs made any
> > easier (and I'm not referring to programmers here).  They actually enjoyed
> > the tedium of doing the same things over and over again.  To this day I
> > still find this bizarre, but I suppose there's no accounting for taste.
> > 
> 
> There's a wacky just-so-story theory about that here:
> 
> http://www.reciprocality.org/Reciprocality/r1/intro.html
> 
> Probably not entirely true, but makes for fun reading and could
> perhaps be formed into a somewhat  testable hypothesis...

Interesting.

Also, Milan Kundera wrote: "And therein lies the whole of man's
plight. Human time does not turn in a circle; it runs ahead in a
straight line. That is why man cannot be happy: happiness is the
longing for repetition."

I cannot say I completely agree with him, however. Mihaly
Csikszentmihalyi has written an entire book on this subject, in which
he states that people feel optimal (are in 'flow') when their skills
match the challenges they face. Given this, I think most people 'like'
routine because, for most people, routine _is_ exercising the few
skills they have.

C|        /
h| Fear  /
a|      /w
l|     /o
l|    /l
e|   /F
n|  /
g| / Boredom
e|/
s+---------
     Skills

Sebastian Stern
"Freedom is the freedom to say (= (+ 2 2) 4). If that is granted, all
else follows."
From: TLOlczyk
Subject: Re: OT: "Dopamine addicts" was Re: OCaml: I am not impressed
Date: 
Message-ID: <9t97qvk6mfvb80sd8ue65fg97nofp87aub@4ax.com>
On 31 Oct 2003 18:08:34 -0800, ············@oceanfree.net (David
Golden) wrote:

>There's a wacky just-so-story theory about that here:
>
>http://www.reciprocality.org/Reciprocality/r1/intro.html
>
>Probably not entirely true, but makes for fun reading and could
>perhaps be formed into a somewhat  testable hypothesis...
There are several stupid things about the article.
1) Sometimes hunters have to sit and wait patiently, and let their
    prey come to them.

2) The goal of meditation is to sit doing nothing, but not let any 
     thoughts pass through your mind. It is achievable, so the boredom
     part is wrong.
From: Tom Breton
Subject: Re: OT: "Dopamine addicts" was Re: OCaml: I am not impressed
Date: 
Message-ID: <m3fzh8j6g2.fsf@panix.com>
TLOlczyk <··········@yahoo.com> writes:

> On 31 Oct 2003 18:08:34 -0800, ············@oceanfree.net (David
> Golden) wrote:
> 
> >There's a wacky just-so-story theory about that here:
> >
> >http://www.reciprocality.org/Reciprocality/r1/intro.html
> >
> >Probably not entirely true, but makes for fun reading and could
> >perhaps be formed into a somewhat  testable hypothesis...
> There are several stupid things about the article.
> 1) Sometimes hunters have to sit and wait patiently, and let their
>     prey come to them.
> 
> 2) The goal of meditiation is to sit doing nothing, but not let any 
>      thoughts pass your mind. It is acheivable, so the boredom part
>      is wrong.

 3) The brain chemical that "made monkeys wait in trees until the lion
    went away" would more likely be serotonin, associated with anxiety
    and a longer mental future-horizon.  Not dopamine.  This basically
    kills the article's premise.


-- 
Tom Breton at panix.com, username tehom.  http://www.panix.com/~tehom
From: Simon Taylor
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <3fa1c55b$1@news.unimelb.edu.au>
In article <····················@k-137-79-50-101.jpl.nasa.gov>, Erann Gat wrote:
> In article <··············@tti5.uchicago.edu>, Matthias Blume
><····@my.address.elsewhere> wrote:
 
>> > > They are different operations with very different semantics.
>> > 
>> > You and I have very different ideas about what the phrase "very different"
>> > means.
>
>> They also have different
>> rounding/truncating/wrap-around/overflow properties.  They have
>> different characteristics as far as precision is concerned.
>> How much more "different" than that can you get?
> 
> A lot.  Adding integers and floats (and rationals and complexes and
> arrays) are very different in terms of bits, but very similar (for better
> or worse) in terms of the mental models that humans bring to bear on
> them.  Again, based on my premise that one should optimize for the user,
> it is better to optimize the impedance match between the language
> constructs and the mental models than the language constructs and the
> machine instruction set.

If you program floating point computations using the same mental
model as for integer calculations you will get wrong answers, just
the same as if your integer calculation overflows because you were
assuming arbitrary precision arithmetic.  This is why numerical
analysis is hard.

In my opinion, arbitrary precision (or exception on overflow),
modular and floating point arithmetic should all use different
sets of operators.  The mental models required to use each of
them correctly are quite different.  Conflating them leads to
difficult to find bugs.
 
Simon.
From: Simon Helsen
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <Pine.SOL.4.44.0310301208500.27347-100000@crete.uwaterloo.ca>
On Thu, 30 Oct 2003, Erann Gat wrote:

>> It's completely reasonable for integer and floating point addition
>> to have different names.
>
>Not in a language with a polymorphic type system IMHO.  What is the point
>of polymorphism if not to allow you to use the same + operator to add
>different kinds of things?

because these are very, very different kinds of polymorphism! Maybe you
should catch on to the difference between parametric and ad-hoc
polymorphism. In fact, only Haskell type classes marry those two
properly. Since Ocaml does not have type classes, ad-hoc polymorphism
is thrown out for arithmetic operators.
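[Editor's sketch, not part of the original post: a minimal OCaml fragment
showing the consequence of dropping ad-hoc polymorphism for arithmetic.]

```ocaml
(* +  has type int -> int -> int      (ints only)
   +. has type float -> float -> float (floats only)
   Mixing the two requires an explicit conversion. *)
let int_sum   = 1 + 2
let float_sum = 1.0 +. 2.0
let mixed     = float_of_int 1 +. 2.0   (* explicit cast bridges the types *)

let () =
  assert (int_sum = 3);
  assert (float_sum = 3.0);
  assert (mixed = 3.0)
```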

>> They are different operations with very different semantics.
>
>You and I have very different ideas about what the phrase "very different"
>means.

? Last time I looked, floating point operations and integer operations
behaved rather differently.

	Simon
From: Simon Helsen
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <Pine.SOL.4.44.0310301150140.27347-100000@crete.uwaterloo.ca>
On Thu, 30 Oct 2003, Simon Taylor wrote:

>You haven't bothered to read the manual, and you've become upset when
>the first thing you tried didn't behave the same as in Lisp.  You're
>guaranteed not to like anything other than Lisp.
>

This is the 'open-mindedness' I was referring to. You cannot battle
religious extremism, no matter from which side (that does not just apply
to Al Qaeda or Bush). Likewise, I do not understand the people who
dismiss dynamic typing entirely. But what I find most incredible is the
quote by Erann "The designers of OCaml seem to have completly missed the
point of what a high level programming language is for" without even
understanding why certain decisions were made. There is almost nothing
in Ocaml that happened by accident.

I stop here.

	Simon
From: Rayiner Hashem
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <a3995c0d.0310301244.294d0072@posting.google.com>
> That is just pathetic.  What happened to polymorphism?  Even C, which is
> one of the more brain damaged languages known to man, knows how to add
> floats!
The C++ operators are not polymorphic. The operator+ for floats and
ints is written the same way, but that's just a syntactic convenience.
The behavior doesn't change based on the true types of the variables.
Thus, if you have an int* that really points to a float, the compiler
will still emit code for integer addition. The Ocaml operators are not
polymorphic either, but they require you to write + and +. separately
because it helps during type inference.

Ocaml is *not* Lisp. It's not a dynamic language at all. It has
polymorphic constructs, but they are very different from what you'd
consider polymorphism in Lisp. Ocaml is based on a mathematically
formal type system. In that type system, functions map specific types
to other specific types. It's all very static and rigid, and is so by
design. Ocaml's type system is fundamentally different, so don't be
surprised that it's not the same as Lisp's.
From: Joachim Durchholz
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <bnr689$sb9$1@news.oberberg.net>
Erann Gat wrote:
>         Objective Caml version 3.07+2
> 
> # 1.0+1.0;;
> This expression has type float but is here used with type int
> 
> That is just pathetic.  What happened to polymorphism?  Even C, which is
> one of the more brain damaged languages known to man, knows how to add
> floats!

That's a conscious decision. If you allow the same operator names for 
int and float, you'll run into inconsistencies elsewhere (mostly for 
division, though other types of inconsistency are possible as well).

It's mostly a question of what inconsistencies you want. OCaml opted for 
disappointing the expectation of 1.0 + 1.0 being a valid expression, and 
having less problems elsewhere.
One may find that decision fortunate or unfortunate, but it's certainly 
not brain-damaged (certainly no more than having to write (Foo params) 
instead of foo (params) in Lisp *gg*).

Regards,
Jo
From: Erann Gat
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <gat-3010031356170001@k-137-79-50-101.jpl.nasa.gov>
In article <············@news.oberberg.net>, Joachim Durchholz
<·················@web.de> wrote:

> Erann Gat wrote:
> >         Objective Caml version 3.07+2
> > 
> > # 1.0+1.0;;
> > This expression has type float but is here used with type int
> > 
> > That is just pathetic.  What happened to polymorphism?  Even C, which is
> > one of the more brain damaged languages known to man, knows how to add
> > floats!
> 
> That's a conscious decision. If you allow the same operator names for 
> int and float, you'll run into inconsistencies elsewhere (mostly for 
> division, though other types of inconsistency are possible as well).

Every language I know seems to get along just fine overloading + for ints
and floats (and others in many cases) and I've never encountered, nor ever
even heard of, the kinds of problems you allude to here.

Could you elaborate?  Perhaps give an example?

E.
From: Basile STARYNKEVITCH
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <q5rznfineyz.fsf@hector.lesours>
>>>>> "Erann" == Erann Gat <···@jpl.nasa.gov> writes:

    Erann> In article <············@news.oberberg.net>, Joachim
    Erann> Durchholz
    Erann> <·················@web.de> wrote:

    >> Erann Gat wrote: > Objective Caml version 3.07+2
    >> > 
    >> > # 1.0+1.0;; 
    >> > This expression has type float but is here used
    >> with type int
    >> > 
    >> > That is just pathetic.  What happened to polymorphism? [...]
    >> 
    >> That's a conscious decision. If you allow the same operator
    >> names for int and float, you'll run into inconsistencies [...]

    Erann> Every language I know seems to get along just fine
    Erann> overloading + for ints and floats (and others in many
    Erann> cases) and I've never encountered, nor ever even heard of,
    Erann> the kinds of problems you allude to here.

    Erann> Could you elaborate?  Perhaps give an example?

What would be the type of
   let double x = x + x 

(IIRC, SML handles it a bit specially by having it typed as int -> int)

The main force of Ocaml is its type inference system (and also its
module system).

Erann did not mention any type-inferring language .... Most of them
don't really overload + (with a possible exception for Haskell, which I
don't know much about - I suppose it has a hierarchy of number type classes)
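[Editor's sketch, not part of the original post: what OCaml's inference
actually does with this definition, shown with a hypothetical float
counterpart.]

```ocaml
(* Because + has type int -> int -> int, inference pins double down
   to int -> int; there is no ambiguity for the checker to resolve. *)
let double x = x + x          (* inferred: int -> int *)

(* A float version must be written separately, using +. *)
let double_float x = x +. x   (* inferred: float -> float *)

let () =
  assert (double 21 = 42);
  assert (double_float 1.5 = 3.0)
```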

-- 

Basile STARYNKEVITCH         http://starynkevitch.net/Basile/ 
email: basile<at>starynkevitch<dot>net 
aliases: basile<at>tunes<dot>org = bstarynk<at>nerim<dot>net
8, rue de la Faïencerie, 92340 Bourg La Reine, France
From: Kaz Kylheku
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <cf333042.0310311427.5612fd34@posting.google.com>
Basile STARYNKEVITCH <···········@starynkevitch.net> wrote in message news:<···············@hector.lesours>...
> >>>>> "Erann" == Erann Gat <···@jpl.nasa.gov> writes:
>     Erann> Every language I know seems to get along just fine
>     Erann> overloading + for ints and floats (and others in many
>     Erann> cases) and I've never encountered, nor ever even heard of,
>     Erann> the kinds of problems you allude to here.
> 
>     Erann> Could you elaborate?  Perhaps give an example?
> 
> What would be the type of
>    let double x = x + x 

Since the declaration asserts that the type of x is double, that's
what it would be. A better question would be: what will the *value* of
x be, assuming that this is even well-defined.
From: Simon Helsen
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <Pine.SOL.4.44.0310311757350.21040-100000@crete.uwaterloo.ca>
On 31 Oct 2003, Kaz Kylheku wrote:

>> What would be the type of
>>    let double x = x + x
>
>Since the declaration asserts that the type of x is double, that's
>what it would be. A better question would be: what will the *value* of
>x be, assuming that this is even well-defined.

double is the name of the function, not a type. So, the question is 'what
is the type of this function'?
From: Nils Goesche
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <87ism4r4pe.fsf@darkstar.cartan>
Simon Helsen <·······@computer.org> writes:

> On 31 Oct 2003, Kaz Kylheku wrote:
> 
> >> What would be the type of
> >>    let double x = x + x
> >
> >Since the declaration asserts that the type of x is double,
> >that's what it would be. A better question would be: what will
> >the *value* of x be, assuming that this is even well-defined.
> 
> double is the name of the function, not a type. So, the
> question is 'what is the type of this function'?

It's a function that maps ints to ints, which the compiler can
cleverly infer from the fact that + is used, which can only be
applied to ints.  So, if you want to be able to double floats, too,
you'll have to define another function:

let double_float x = x +. x

and if we want to be able to double other numbers, too, we'll
simply define a few more:

let double_num x = Num.add_num x x

The following addition library functions are called `add', rather
than `add_num', because of a deep theorem in modern type theory that
only about ten people in the world are able to understand.  Let's
just say it has something to do with category theory.

let double_int32 x = Int32.add x x

let double_int64 x = Int64.add x x

let double_native x = Nativeint.add x x

Now isn't this easy?  And suppose we want a function that adds 42
to its argument.  That's easy, too:

let add42 x = x + 42

let add42_float x = x +. 42.0

Now observe how we make convenient use of the immense power of
currying, to simplify even further:

let add42_num = Num.add_num (Int 42)

let add42_int32 = Int32.add (Int32.of_int 42)

let add42_int64 = Int64.add (Int64.of_int 42)

let add42_native = Nativeint.add (Nativeint.of_int 42)

Life is really wonderful, these days, thanks to our hard working
comrades in our universities' Progressive Computer Science
departments.

Forwards!
-- 
Nils Gösche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID #xD26EF2A0
From: Kaz Kylheku
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <cf333042.0311011353.3342f80d@posting.google.com>
Simon Helsen <·······@computer.org> wrote in message news:<········································@crete.uwaterloo.ca>...
> On 31 Oct 2003, Kaz Kylheku wrote:
> 
> >> What would be the type of
> >>    let double x = x + x
> >
> >Since the declaration asserts that the type of x is double, that's
> >what it would be. A better question would be: what will the *value* of
> >x be, assuming that this is even well-defined.
> 
> double is the name of the function, not a type. So, the question is 'what
> is the type of this function'?

Ah okay. So I suppose that there must be an answer: the function must
have exactly one type. Therefore we must accept some braindamaged
programming language design which allows us to have that answer. It is
not conceivable, for instance, that a function have a set of
permissible types.
From: Darius
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <20031101170445.0000706e.ddarius@hotpop.com>
On 1 Nov 2003 13:53:54 -0800
···@ashi.footprints.net (Kaz Kylheku) wrote:

> Simon Helsen <·······@computer.org> wrote in message
> news:<········································@crete.uwaterloo.ca>...
> > On 31 Oct 2003, Kaz Kylheku wrote:
> > 
> > >> What would be the type of
> > >>    let double x = x + x
> > >
> > >Since the declaration asserts that the type of x is double, that's
> > >what it would be. A better question would be: what will the *value*
> > >of x be, assuming that this is even well-defined.
> > 
> > double is the name of the function, not a type. So, the question is
> > 'what is the type of this function'?
> 
> Ah okay. So I suppose that there must be an answer: the function must
> have exactly one type. Therefore we must accept some braindamaged
> programming language design which allows us to have that answer. It is
> not conceivable, for instance, that a function have a set of
> permissible types.

Actually, in Haskell, a set of permissible types is -exactly- what is
inferred for the above (though it would just be 'double x = x + x' in
Haskell).
From: Rob Warnock
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <QRKdnY0yjq6S9z-iXTWc-w@speakeasy.net>
Erann Gat <···@jpl.nasa.gov> wrote:
+---------------
| Every language I know seems to get along just fine overloading + for ints
| and floats (and others in many cases) ...
+---------------

Just as an aside -- for historical trivia purposes, not that it
really applies to this thread -- in the BLISS language data is
completely *untyped*; it is instead the *operators* which are typed,
exactly as it is in assembler languages. Thus BLISS's "+" is typed
(int x int) -> int, while BLISS's FADR (Floating ADd and Round) operator
is typed (float x float) -> float. The following code is a legal BLISS
expression[1], though probably not something anyone would want to do very
often:

	begin local a, b;
	  a := 1.0 fadr 1.0;		! that is, 2.0
	  b := .a + 1;
	  .b fsbr .a			! fsbr is floating subtraction
	end

On a machine with IEEE floating point, that block should yield a
value of roughly 2.38e-07...  ;-}

Though maybe this *does* have some applicability to this thread after all.
Oddly enough, practical experience in BLISS showed[2] that "type errors"
were one of the *least* common sources of programmer error in BLISS code.
Much more common were misplaced/missing/extra dots (the "contents-of"
operator) and semicolons (which in BLISS are expression *separators*,
not statement terminators).


-Rob

[1] Note: "." is "contents of" operator. I have also taken the liberty
    of using ":=" to represent the BLISS assignment operator, since that
    was originally the ASR-33 back-arrow character, which codepoint ASCII
    replaced with underscore. ["b := .a + 1" is easier to read than
    "b_.a+1", yes?]

[2] I *think* the following paper may be where this was reported,
    but I'm not completely sure:

	Wulf, W. A., et al., "Reflections on a Systems Programming
	Language," Proceedings of the SIGPLAN Symposium on System
	Implementation Languages, Purdue University, October 1971.

    It may have been here:

	Wulf, W. A., "Systems for Systems Implementors: Some Experiences
	from Bliss," Proceedings of the FJCC, November 1972. 

    Or somewhere else entirely (such as a paper called "Why the dot?",
    which I can't find a reference to at the moment)...

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Joachim Durchholz
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <bnv9uv$o1s$1@news.oberberg.net>
Erann Gat wrote:
> Every language I know seems to get along just fine overloading + for ints
> and floats (and others in many cases) and I've never encountered, nor ever
> even heard of, the kinds of problems you allude to here.

You never have been surprised that 3 / 2 == 1??
The real problem here isn't your surprise. It's that 3.0 / 2.0 != 3 / 2. 
In other words, that reals and integers are quite different beasts. 
(Which implies type annotations.)

OCaml decided that it's not the types that make the difference, it's the 
operations. Which is why OCaml has / (for integers) and /. (for 
floating-point).
I don't agree with all details of that decision, but I agree that 
there's an issue there.
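[Editor's sketch, not part of the original post: the two division
operators side by side.]

```ocaml
(* /  is integer division and truncates toward zero;
   /. is floating-point division, a separate operator. *)
let q_int   = 3 / 2        (* = 1 *)
let q_float = 3.0 /. 2.0   (* = 1.5 *)

let () =
  assert (q_int = 1);
  assert (q_float = 1.5)
```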

Regards,
Jo
From: Erann Gat
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <gat-3110032026190001@192.168.1.51>
In article <············@news.oberberg.net>, Joachim Durchholz
<·················@web.de> wrote:

> Erann Gat wrote:
> > Every language I know seems to get along just fine overloading + for ints
> > and floats (and others in many cases) and I've never encountered, nor ever
> > even heard of, the kinds of problems you allude to here.
> 
> You never have been surprised that 3 / 2 == 1??

No, because I use Lisp, where (/ 3 2) --> 3/2, just as I would expect.

> The real problem here isn't your surprise. It's that 3.0 / 2.0 != 3 / 2. 

That's right, it's 1.5.  Lisp gets this one right too.

BTW, this facility is not unique to Lisp.  Many languages get this right,
including (some implementations of) Scheme, and C++ when using Bruno
Haible's CLN package.

> In other words, that reals and integers are quite different beasts.

But the reals are not under discussion here.  The matter at hand concerns
integers, floats, rationals and (maybe) complexes, tensors, quaternions,
etc.

> (Which implies type annotations.)

Lisp is an existence proof that this is incorrect.

> OCaml decided that it's not the types that make the difference, it's the 
> operations. Which is why OCaml has / (for integers) and /. (for 
> floating-point).
> I don't agree with all details of that decision, but I agree that 
> there's an issue there.

Sorry, I don't see it.  Furthermore I point to the Common Lisp numeric
system and CLN as proof that whatever issues there might be are long since
resolved.

E.
From: Pascal Costanza
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <bo0f2a$g7l$1@newsreader2.netcologne.de>
Joachim Durchholz wrote:
> Erann Gat wrote:
> 
>> Every language I know seems to get along just fine overloading + for ints
>> and floats (and others in many cases) and I've never encountered, nor 
>> ever
>> even heard of, the kinds of problems you allude to here.
> 
> 
> You never have been surprised that 3 / 2 == 1??
> The real problem here isn't your surprise. It's that 3.0 / 2.0 != 3 / 2. 
> In other words, that reals and integers are quite different beasts. 

What do you mean?!?

Welcome to Macintosh Common Lisp Version 5.0!
? (/ 3 2)
3/2
? (/ 3.0 2.0)
1.5
? (= (/ 3 2) (/ 3.0 2.0))
t



Pascal
From: Michael Manti
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <011120031016262859%mmanti@mac.com>
In article <············@newsreader2.netcologne.de>, Pascal Costanza
<········@web.de> wrote:

> Joachim Durchholz wrote:
> > Erann Gat wrote:
> > 
> >> Every language I know seems to get along just fine overloading + for ints
> >> and floats (and others in many cases) and I've never encountered, nor 
> >> ever
> >> even heard of, the kinds of problems you allude to here.
> > 
> > 
> > You never have been surprised that 3 / 2 == 1??
> > The real problem here isn't your surprise. It's that 3.0 / 2.0 != 3 / 2. 
> > In other words, that reals and integers are quite different beasts. 
> 
> What do you mean?!?
> 
> Welcome to Macintosh Common Lisp Version 5.0!
> ? (/ 3 2)
> 3/2
> ? (/ 3.0 2.0)
> 1.5
> ? (= (/ 3 2) (/ 3.0 2.0))
> t
> 
> 
> 
> Pascal
> 

Welcome to the demo version of Macintosh Common Lisp Version 5.0!
? (= (/ 22 7) (/ 22.0 7.0))
NIL
?
From: Edi Weitz
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <87r80s83r0.fsf@bird.agharta.de>
On Sat, 01 Nov 2003 10:16:26 -0500, Michael Manti <······@mac.com> wrote:

> In article <············@newsreader2.netcologne.de>, Pascal Costanza
> <········@web.de> wrote:
>
>> Welcome to Macintosh Common Lisp Version 5.0!
>> ? (= (/ 3 2) (/ 3.0 2.0))
>> t
>
> Welcome to the demo version of Macintosh Common Lisp Version 5.0!
> ? (= (/ 22 7) (/ 22.0 7.0))
> NIL
> ?

As far as I understand it both answers are correct. Read the section
about "numbers" of the CLHS[1]. In 12.1.4.1 you'll find: "When
rationals and floats are compared by a numerical function, the
function RATIONAL is effectively called to convert the float to a
rational and then an exact comparison is performed." And the
dictionary entry about RATIONAL says: "RATIONAL assumes that the float
is completely accurate." (Isn't it nice to have an ANSI standard for
your language?)

It is very likely that MCL will be able to represent (/ 3 2)
accurately as a float, namely as 1.5. It is highly unlikely that any
current machine will be able to represent (/ 22 7) accurately as a
float with a finite number of bits.

Edi.

[1] <http://www.lispworks.com/reference/HyperSpec/Front/index.htm>
From: Michael Manti
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <011120031152206042%mmanti@mac.com>
In article <··············@bird.agharta.de>, Edi Weitz <···@agharta.de>
wrote:

> On Sat, 01 Nov 2003 10:16:26 -0500, Michael Manti <······@mac.com> wrote:
> 
> > In article <············@newsreader2.netcologne.de>, Pascal Costanza
> > <········@web.de> wrote:
> >
> >> Welcome to Macintosh Common Lisp Version 5.0!
> >> ? (= (/ 3 2) (/ 3.0 2.0))
> >> t
> >
> > Welcome to the demo version of Macintosh Common Lisp Version 5.0!
> > ? (= (/ 22 7) (/ 22.0 7.0))
> > NIL
> > ?
> 
> As far as I understand it both answers are correct. Read the section
> about "numbers" of the CLHS[1]. In 12.1.4.1 you'll find: "When
> rationals and floats are compared by a numerical function, the
> function RATIONAL is effectively called to convert the float to a
> rational and then an exact comparison is performed." And the
> dictionary entry about RATIONAL says: "RATIONAL assumes that the float
> is completely accurate." (Isn't it nice to have an ANSI standard for
> your language?)
> 
> It is very likely that MCL will be able to represent (/ 3 2)
> accurately as a float, namely as 1.5. It is highly unlikely that any
> current machine will be able to represent (/ 22 7) accurately as a
> float with a finite number of bits.
> 
> Edi.
> 
> [1] <http://www.lispworks.com/reference/HyperSpec/Front/index.htm>

The question isn't whether MCL complies with the standard. It's whether
implicit conversion of rationals to floats is what you want. Obviously,
what you want differs across different values of "you."

Michael
From: Michael Manti
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <011120031219525158%mmanti@mac.com>
In article <·························@mac.com>, Michael Manti
<······@mac.com> wrote:

> In article <··············@bird.agharta.de>, Edi Weitz <···@agharta.de>
> wrote:
> 
> > On Sat, 01 Nov 2003 10:16:26 -0500, Michael Manti <······@mac.com> wrote:
> > 
> > > In article <············@newsreader2.netcologne.de>, Pascal Costanza
> > > <········@web.de> wrote:
> > >
> > >> Welcome to Macintosh Common Lisp Version 5.0!
> > >> ? (= (/ 3 2) (/ 3.0 2.0))
> > >> t
> > >
> > > Welcome to the demo version of Macintosh Common Lisp Version 5.0!
> > > ? (= (/ 22 7) (/ 22.0 7.0))
> > > NIL
> > > ?
> > 
> > As far as I understand it both answers are correct. Read the section
> > about "numbers" of the CLHS[1]. In 12.1.4.1 you'll find: "When
> > rationals and floats are compared by a numerical function, the
> > function RATIONAL is effectively called to convert the float to a
> > rational and then an exact comparison is performed." And the
> > dictionary entry about RATIONAL says: "RATIONAL assumes that the float
> > is completely accurate." (Isn't it nice to have an ANSI standard for
> > your language?)
> > 
> > It is very likely that MCL will be able to represent (/ 3 2)
> > accurately as a float, namely as 1.5. It is highly unlikely that any
> > current machine will be able to represent (/ 22 7) accurately as a
> > float with a finite number of bits.
> > 
> > Edi.
> > 
> > [1] <http://www.lispworks.com/reference/HyperSpec/Front/index.htm>
> 
> The question isn't whether MCL complies with the standard. It's whether
> implicit conversion of rationals to floats is what you want. Obviously,
> what you want differs across different values of "you."
> 
> Michael
> 

I should have said "floats to rationals"--not rationals to
floats--above.

Interestingly, J (http://www.jsoftware.com) thinks that the rational
and floating point versions are the same:

   22r7 = 22.0 % 7.0
1

This holds even if you set the tolerance of the comparison to 0:

   22r7 (=!.0) 22.0 % 7.0
1

This works because J converts the rational to a float, and not the
other way around. [1] I mention this just to illustrate another
approach to the problem.

J doesn't have an ANSI standard, but it does have a rather thorough
Dictionary and Vocabulary that define the language.

Michael

[1] http://www.jsoftware.com/books/help/dictionary/dictg.htm
From: Simon Helsen
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <Pine.SOL.4.44.0311011525220.8965-100000@crete.uwaterloo.ca>
On Sat, 1 Nov 2003, Edi Weitz wrote:

>> Welcome to the demo version of Macintosh Common Lisp Version 5.0!
>> ? (= (/ 22 7) (/ 22.0 7.0))
>> NIL
>> ?
>
>As far as I understand it both answers are correct. Read the section
>about "numbers" of the CLHS[1]. In 12.1.4.1 you'll find: "When
>rationals and floats are compared by a numerical function, the
>function RATIONAL is effectively called to convert the float to a
>rational and then an exact comparison is performed." And the
>dictionary entry about RATIONAL says: "RATIONAL assumes that the float
>is completely accurate." (Isn't it nice to have an ANSI standard for
>your language?)
>
>It is very likely that MCL will be able to represent (/ 3 2)
>accurately as a float, namely as 1.5. It is highly unlikely that any
>current machine will be able to represent (/ 22 7) accurately as a
>float with a finite number of bits.

sounds like one of the wonders of LISP. The examples given above must be
a lovely and rather subtle source of unresolvable bugs. (Almost as good
as an uninitialized C pointer.) It would be nice if someone could explain
to me why this is better than having different operators for different
types and doing conversions explicitly...

	Simon
From: Robert E. Brown
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <873cd7mz3x.fsf@loki.bibliotech.com>
Simon Helsen <·······@computer.org> writes:

> It would be nice if someone can explain me why
> this is better than having different operators for different types and
> doing conversions explicitely...


Sometimes it is convenient to reuse the same code when computing with
rationals vs. floating point numbers.  For instance, suppose I have a
function called LEGENDRE, which creates a representation of the nth Legendre
polynomial:

    * (print-poly (legendre 7))
    + 429/16 X^7 - 693/16 X^5 + 315/16 X^3 - 35/16 X

The polynomial is represented as a list of rational coefficients.  I can
easily evaluate the polynomial at a floating point or rational location,
since operators such as + and * work on either.  First, we plug in a
floating point number:

    * (evaluate (legendre 7) 0.9491079123427585d0)
    4.440892098500626d-15

Next, we plug in the rational equivalent of the same number and see how the
result is different.  The rational result is exact and doesn't quite match
the floating point approximation:

    * (rationalize 0.9491079123427585d0)
    258049319/271886174

    * (evaluate (legendre 7) (rationalize 0.9491079123427585d0))
    658160486268599802359946028187261488555282319
    /1757235004839118119603654728748229105613465904687611692611584

    * (coerce * 'double-float)       ;; * means the last computed result
    3.7454323665084116d-16

The EVALUATE function uses + and * in the obvious ways.  Because they work
on both floats and rationals, I don't need to maintain two versions of
EVALUATE.  Here's what it looks like:

    (defun evaluate (poly x)
      (let ((sum 0))
        (loop for i from (1- (length poly)) downto 0
              for coeff in poly
              do (incf sum (* coeff (expt x i))))
        sum))
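A rough Python transcription of the EVALUATE idea above (the LEGENDRE generator is not reproduced; Fraction stands in for CL rationals, and a tiny (x - 1)^2 polynomial stands in for the Legendre coefficients):

```python
from fractions import Fraction

def evaluate(poly, x):
    """Evaluate a polynomial given as coefficients, highest power first.

    Like the Lisp EVALUATE above, this relies only on + and * working
    for whatever numeric type x has (Fraction, float, int, complex...).
    """
    total = 0
    for i, coeff in zip(range(len(poly) - 1, -1, -1), poly):
        total += coeff * x ** i
    return total

# (x - 1)^2 = x^2 - 2x + 1, with exact rational coefficients:
p = [Fraction(1), Fraction(-2), Fraction(1)]

print(evaluate(p, Fraction(3)))  # 4, computed exactly
print(evaluate(p, 0.5))          # 0.25, same code doing float arithmetic
```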
From: Matthias Blume
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <m2znff5svo.fsf@hanabi-air.shimizu.blume>
······@speakeasy.net (Robert E. Brown) writes:

> Simon Helsen <·······@computer.org> writes:
> 
> > It would be nice if someone could explain to me why
> > this is better than having different operators for different types and
> > doing conversions explicitly...
> 
> 
> Sometimes it is convenient to reuse the same code when computing with
> rationals vs. floating point numbers.

In such cases it would have been even better to parameterize the code
by the ring/field/whatever that is being used.

Matthias
From: Dirk Thierbach
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <d81f71-461.ln1@ID-7776.user.dfncis.de>
Matthias Blume <····@my.address.elsewhere> wrote:
> ······@speakeasy.net (Robert E. Brown) writes:

>> Sometimes it is convenient to reuse the same code when computing with
>> rationals vs. floating point numbers.

> In such cases it would have been even better to parameterize the code
> by the ring/field/whatever that is being used.

Which is of course exactly what e.g. Haskell does (with the help of
typeclasses).

There's a difference between "I want to mix different numeric types,
and convert them implicitly" and "I want to reuse the same code for
different numeric types, but without implicit conversion (because it
can cause trouble)".

- Dirk
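Matthias's "parameterize the code by the ring" suggestion, which typeclasses automate, can be sketched in Python by passing the operations around explicitly (a hand-rolled typeclass dictionary; the names are invented for illustration):

```python
import operator
from fractions import Fraction

def sum_of_squares(ring, xs):
    """Sum x*x over xs using only the supplied ring's zero element
    and operations; nothing is converted implicitly."""
    zero, add, mul = ring
    total = zero
    for x in xs:
        total = add(total, mul(x, x))
    return total

# Two "instances" of the same abstract interface:
rational_ring = (Fraction(0), operator.add, operator.mul)
float_ring = (0.0, operator.add, operator.mul)

print(sum_of_squares(rational_ring, [Fraction(1, 2), Fraction(1, 3)]))  # 13/36
print(sum_of_squares(float_ring, [0.5, 1.0 / 3.0]))                     # ~0.3611
```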
From: Erann Gat
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <gat-0111032324040001@192.168.1.51>
In article <·······································@crete.uwaterloo.ca>,
Simon Helsen <·······@computer.org> wrote:

> It would be nice if someone could explain to me why
> this is better than having different operators for different types and
> doing conversions explicitly...

Because if you really have a different operator for every operation you'll
end up with literally dozens of math operators.  You need operators for
add, subtract, multiply and the four different kinds of division (floor,
ceiling, exact, inexact) distributed over all the possible numerical types
(signed and unsigned ints of various lengths, floats (possibly of multiple
precisions), bignums, rationals, complexes, perhaps others).  It quickly
becomes unwieldy.

Nothing prevents you from doing your own type conversion if you don't
trust the system to get it right:

? (= (float (/ 22 7)) (/ 22.0 7.0))
T

E.
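The same explicit-conversion move in Python (a sketch; it works because IEEE division is correctly rounded, so rounding the exact rational to the nearest double yields the float quotient):

```python
from fractions import Fraction

# Analogous to (= (float (/ 22 7)) (/ 22.0 7.0)) => T: round the exact
# rational to the nearest double first, then compare float to float.
converted = float(Fraction(22, 7))
print(converted == 22 / 7)  # True
```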
From: Dirk Thierbach
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <e12f71-461.ln1@ID-7776.user.dfncis.de>
Erann Gat <···@jpl.nasa.gov> wrote:
> In article <·······································@crete.uwaterloo.ca>,
> Simon Helsen <·······@computer.org> wrote:

>> It would be nice if someone could explain to me why
>> this is better than having different operators for different types and
>> doing conversions explicitly...

> Nothing prevents you from doing your own type conversion if you don't
> trust the system to get it right:

> ? (= (float (/ 22 7)) (/ 22.0 7.0))
> T

That's again the question of opt-in vs. opt-out. It's easy to overlook
that this conversion is necessary, especially if you're only using
test cases where the trouble doesn't show up. 

The static type system is a tool that points out such problems. It
just says: "Look, here's a point in your program that might lead to
subtle bugs later on. If you want, do the conversion here, but tell
me to do so. Alternatively, you might consider doing the conversion
in a different place. Or you might even want to restructure your code.
You're the programmer, not me, so please make up your mind."

This sort of warning gets very annoying if the compiler complains
when there is in fact no reason to complain. Then static typing indeed
becomes a "straitjacket": you have to fight the compiler all the time.
(Or, as in C, even with static typing the compiler might silently
convert some of your values, leading to subtle bugs.)

So the idea is to have a type system that complains only if there is
at least some reason for it. If you want to mix numeric types and
convert them automatically, no problem. Use a datatype; I gave an
example of how to do this. You can even abstract over it, put the
necessary routines in a library, use typeclasses to overload the usual
+ - * / operations so they can also be used with this library, and be
done with it once and for all. (For some reason, no one so far has felt
the need to write such a library.)

But it should be up to the programmer to decide what he wants. The
compiler shouldn't do unsafe things without telling the programmer.

- Dirk
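The datatype-plus-overloading approach Dirk describes might look roughly like this in Python (a hypothetical sketch, not a real library; the Mixed name and its promotion rule are invented for illustration):

```python
from fractions import Fraction

class Mixed:
    """Opt-in wrapper for mixed arithmetic with an explicit, visible
    promotion rule: exact types stay exact unless a float appears."""

    def __init__(self, value):
        self.value = value

    def _promote(self, other):
        a = self.value
        b = other.value if isinstance(other, Mixed) else other
        if isinstance(a, float) or isinstance(b, float):
            return float(a), float(b)    # a float contaminates the result
        return Fraction(a), Fraction(b)  # otherwise stay exact

    def __add__(self, other):
        a, b = self._promote(other)
        return Mixed(a + b)

    def __truediv__(self, other):
        a, b = self._promote(other)
        return Mixed(a / b)

print((Mixed(22) / 7).value)    # Fraction(22, 7): division stays exact
print((Mixed(22) / 7.0).value)  # a float was given, so a float comes back
```

The point is that the promotion rule lives in one visible place the programmer opted into, rather than being built silently into every operator.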
From: Adam Warner
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <pan.2003.11.02.09.46.37.131140@consulting.net.nz>
Hi Erann Gat,

> Nothing prevents you from doing your own type conversion if you don't
> trust the system to get it right:
> 
> ? (= (float (/ 22 7)) (/ 22.0 7.0))
> T

Note that this still assumes that *read-default-float-format* is
single-float. I like coerce better because it makes the choice of float
format explicit. Alternatively, (= (float (/ 22 7)) (/ 22f0 7f0)) => T

Regards,
Adam
From: Pascal Bourguignon
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <87llr03rqi.fsf@thalassa.informatimago.com>
Joachim Durchholz <·················@web.de> writes:

> Erann Gat wrote:
> > Every language I know seems to get along just fine overloading + for ints
> > and floats (and others in many cases) and I've never encountered, nor ever
> > even heard of, the kinds of problems you allude to here.
> 
> You never have been surprised that 3 / 2 == 1??
> The real problem here isn't your surprise. It's that 3.0 / 2.0 != 3 / 2.

No, the real problem is a problem of notation. You're using /, which
usually denotes rational or real division, while : or ÷ is the usual
integer-division notation:

                           3 ÷ 2 = 1 (remainder 1)



-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Ian Zimmerman
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <873cd67ocf.fsf@newsguy.com>
Joachim> That's a conscious decision. If you allow the same operator
Joachim> names for int and float, you'll run into inconsistencies
Joachim> elsewhere (mostly for division, though other types of
Joachim> inconsistency are possible as well).

Erann> Every language I know seems to get along just fine overloading +
Erann> for ints and floats (and others in many cases) and I've never
Erann> encountered, nor ever even heard of, the kinds of problems you
Erann> allude to here.

Erann> Could you elaborate?  Perhaps give an example?

In a famous paper from around 1990, Andrew Appel evaluated the many
strengths and a few weaknesses of _Standard_ ML, which does have
overloading (not "polymorphism") of this type.  He ranked it squarely
among the weaknesses, and proposed a solution much like OCaml's.

-- 
"Rap music is our punishment for neglecting music education."
An anonymous teacher
From: Justin Pearson
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <v7evfq0nphg.fsf@typhoeus.it.uu.se>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <············@news.oberberg.net>, Joachim Durchholz
> <·················@web.de> wrote:
> 
> Every language I know seems to get along just fine overloading + for ints
> and floats (and others in many cases) and I've never encountered, nor ever
> even heard of, the kinds of problems you allude to here.
> 
> Could you elaborate?  Perhaps give an example?

See 
http://docs.sun.com/source/806-3568/ncg_goldberg.html

If you think floating point numbers have the same properties as
integers (associativity, etc.) then you should not be using them.


/Justin

> 
> E.

-- 
Justin Pearson - Uppsala Sweden http://www.docs.uu.se/~justin
From: Erann Gat
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <gat-0411030929430001@192.168.1.51>
In article <···············@typhoeus.it.uu.se>, Justin Pearson
<······@DoCS.UU.SE> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <············@news.oberberg.net>, Joachim Durchholz
> > <·················@web.de> wrote:
> > 
> > Every language I know seems to get along just fine overloading + for ints
> > and floats (and others in many cases) and I've never encountered, nor ever
> > even heard of, the kinds of problems you allude to here.
> > 
> > Could you elaborate?  Perhaps give an example?
> 
> See 
> http://docs.sun.com/source/806-3568/ncg_goldberg.html
> 
> I you think floating point numbers have the same properties as
> integers (associativity, etc.) then you should not be using them.

But these are just (well known) problems with floats, not problems with
overloading arithmetic operators.

E.
From: Kaz Kylheku
Subject: Re: OCaml: I am not impressed
Date: 
Message-ID: <cf333042.0310311316.22414992@posting.google.com>
Joachim Durchholz <·················@web.de> wrote in message news:<············@news.oberberg.net>...
> Erann Gat wrote:
> >         Objective Caml version 3.07+2
> > 
> > # 1.0+1.0;;
> > This expression has type float but is here used with type int
> > 
> > That is just pathetic.  What happened to polymorphism?  Even C, which is
> > one of the more brain damaged languages known to man, knows how to add
> > floats!

Speaking of C, its even more brain-damaged predecessor, a language
called B, had separate operators for floating point, ones prefixed
with a sharpsign: #+, #-, #* and so on. Ritchie had the good sense to
get rid of them.

> That's a conscious decision. If you allow the same operator names for 
> int and float, you'll run into inconsistencies elsewhere (mostly for

Such as what? Do these inconsistencies also hold if you use the same
operator name for accessing the length of a string, list, vector or
whatever else?

> division, though other types of inconsistency are possible as well).

What inconsistencies for division? There are different kinds of
division, like exact and truncating with remainder. The variety in
operator names should be used to express that variety, rather than
following the operand types. Just because the operands being divided
are integers doesn't mean that the programmer wants the truncating
division semantics.

> It's mostly a question of what inconsistencies you want. OCaml opted for 
> disappointing the expectation of 1.0 + 1.0 being a valid expression, and 
> having less problems elsewhere.
> One may find that decision fortunate or unfortunate, but it's certainly 
> not brain-damaged (certainly no more than having to write (Foo params) 
> instead of foo (params) in Lisp *gg*).

I would not put these two design decisions in the same category.
From: Mark Carroll
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <vee*P8b6p@news.chiark.greenend.org.uk>
In article <····················@k-137-79-50-101.jpl.nasa.gov>,
Erann Gat <···@jpl.nasa.gov> wrote:
(snip)
>Silently returning the wrong answer is never acceptable IMHO.

FWIW Haskell does this,

Prelude> let fact n = if n <= 1 then 1 else n * fact (n-1)
Prelude> fact 5
120
Prelude> fact 30
265252859812191058636308480000000
Prelude> fact 40
815915283247897734345611269596115894272000000000
Prelude>

However,

Prelude> fact 30 :: Int
1409286144

but then, by saying Int, you're asking for the number to be truncated.
I suppose you could define some other type that fits the Num and
Integral typeclasses that does explicit bounds checking and throws
errors - that wouldn't be much of a problem - but I'm not aware that
the standard libraries offer a shrinkwrapped one.

(And let me know if I should be trimming Newsgroups: to just
comp.lang.functional)

-- Mark
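The Int truncation above can be reproduced in Python, whose ints are bignums like Haskell's default Integer; masking to the low 32 bits plays the role of the fixed-width Int annotation:

```python
def fact(n):
    return 1 if n <= 1 else n * fact(n - 1)

print(fact(30))  # 265252859812191058636308480000000 (exact, as a bignum)

# Keeping only the low 32 bits mimics a 32-bit Int and reproduces the
# value Hugs printed for `fact 30 :: Int`:
print(fact(30) % 2**32)  # 1409286144
```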
From: Erann Gat
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <gat-2910032058220001@192.168.1.51>
In article <·········@news.chiark.greenend.org.uk>, Mark Carroll
<·····@chiark.greenend.org.uk> wrote:

> In article <····················@k-137-79-50-101.jpl.nasa.gov>,
> Erann Gat <···@jpl.nasa.gov> wrote:
> (snip)
> >Silently returning the wrong answer is never acceptable IMHO.
> 
> FWIW Haskell does this,

Yes, Haskell's numerics seem much saner than OCaml's -- not that that's
saying much.  I am a little disappointed that Haskell (apparently -- from
experimenting with Hugs) has bignums but not rationals or complexes. 
Still, better than nothing.  At least the basic arithmetic operators are
polymorphic across floats and integers.

> (And let me know if I should be trimming Newsgroups: to just
> comp.lang.functional)

This discussion branched off from a discussion of static vs. dynamic
typing that was cross-posted to comp.lang.lisp and no one seems to mind. 
I think Lispers are generally interested in this sort of thing.

E.
From: Brian McNamara!
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <bnq9le$hdj$1@news-int.gatech.edu>
···@jpl.nasa.gov (Erann Gat) once said:
>Yes, Haskell's numerics seem much saner than OCaml's -- not that that's
>saying much.  I am a little disappointed that Haskell (apparently -- from
>experimenting with Hugs) has bignums but not rationals or complexes. 

It looks like they're there, just in libraries you need to import.
Check out
   http://www.haskell.org/onlinereport/
and see, e.g., the link for "Ratio", which has some useful prose
near the top of that page.

-- 
 Brian M. McNamara   ······@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
From: ··········@ii.uib.no
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <eg1xsvw1qo.fsf@vipe.ii.uib.no>
···@jpl.nasa.gov (Erann Gat) writes:

> I am a little disappointed that Haskell (apparently -- from
> experimenting with Hugs) has bignums but not rationals or
> complexes. 

Rationals are written as '4 % 5', and perfectly standard.  I haven't
ever used complex numbers with Haskell, but I'd be surprised if they
aren't available from a library -- they're not very complicated to
implement, after all.

BTW, (early?) Hugs used Floats when constructing rationals in some cases,
leading to unexpected results.

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Erann Gat
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <gat-3010031437430001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@vipe.ii.uib.no>, ··········@ii.uib.no wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > I am a little disappointed that Haskell (apparently -- from
> > experimenting with Hugs) has bignums but not rationals or
> > complexes. 
> 
> Rationals are written as '4 % 5', and perfectly standard.

Ah, so they are.  Unorthodox syntax, but they seem to do the Right Thing.

E.
From: Jacques Garrigue
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <l2smlaerj3.fsf@suiren.i-did-not-set--mail-host-address--so-shoot-me>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <·········@news.chiark.greenend.org.uk>, Mark Carroll
> <·····@chiark.greenend.org.uk> wrote:
> 
> > In article <····················@k-137-79-50-101.jpl.nasa.gov>,
> > Erann Gat <···@jpl.nasa.gov> wrote:
> > (snip)
> > >Silently returning the wrong answer is never acceptable IMHO.
> > 
> > FWIW Haskell does this,
> 
> Yes, Haskell's numerics seem much saner than OCaml's -- not that that's
> saying much.  I am a little disappointed that Haskell (apparently -- from
> experimenting with Hugs) has bignums but not rationals or complexes. 
> Still, better than nothing.  At least the basic arithmetic operators are
> polymorphic across floats and integers.

Not trying to argue.

        Objective Caml version 3.07+2

# #load"nums.cma";;  (* A few lines of preparation *)
# open Num;;
# let (!/) = num_of_int;;
val ( !/ ) : int -> Num.num = <fun>
# let print_num ppf x = Format.fprintf ppf "%s" (string_of_num x);;
val print_num : Format.formatter -> Num.num -> unit = <fun>
# #install_printer print_num;;  (* Now we're ready *)

# let rec fact n = if n < 2 then !/1 else !/n */ fact (n-1);;
val fact : int -> Num.num = <fun>
# fact 40;;
- : Num.num = 815915283247897734345611269596115894272000000000
# fact 40 // !/106;;
- : Num.num = 407957641623948867172805634798057947136000000000/53

---------------------------------------------------------------------------
Jacques Garrigue      Kyoto University     garrigue at kurims.kyoto-u.ac.jp
		<A HREF=http://wwwfun.kurims.kyoto-u.ac.jp/~garrigue/>JG</A>
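Jacques's Num session translates almost line for line into Python, where exact integers and rationals are built in (a sketch for comparison, not a comment on OCaml):

```python
from fractions import Fraction

def fact(n):
    return 1 if n < 2 else n * fact(n - 1)

print(fact(40))
# 815915283247897734345611269596115894272000000000

# Mirrors `fact 40 // !/106`: the quotient is kept as an exact rational.
print(Fraction(fact(40), 106))
# 407957641623948867172805634798057947136000000000/53
```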
From: Fergus Henderson
Subject: Re: OCaml: I am not impressed (was: Re: More static type fun.)
Date: 
Message-ID: <3fa26c0b$1@news.unimelb.edu.au>
···@jpl.nasa.gov (Erann Gat) writes:

>I am a little disappointed that Haskell (apparently -- from
>experimenting with Hugs) has bignums but not rationals or complexes. 

Haskell has both rational numbers and complex numbers.  They are in the
"Ratio" and "Complex" modules in the Haskell 98 standard library.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Matthias Blume
Subject: Re: OCaml: I am not impressed  (was: Re: More static type fun.)
Date: 
Message-ID: <m2znfj5ov3.fsf@hanabi-air.shimizu.blume>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <····················@k-137-79-50-101.jpl.nasa.gov>,
> ···@jpl.nasa.gov (Erann Gat) wrote:
> 
> > I am in fact just finishing up the last stages of building OCaml from
> > source even as I write this.  So far it's gone without a hitch.
> 
> I regret to report that OCaml immediately exhibits what I consider to be a
> fatal flaw:
> 
> # let rec fact n =
>   if n<=1 then 1 else n*fact(n-1);;
> val fact : int -> int = <fun>
> # fact 5;;
> - : int = 120
> # fact 30;;
> - : int = -738197504
> # fact 40;;
> - : int = 0
> 
> Silently returning the wrong answer is never acceptable IMHO.

Well, you might want to look at SML (www.standardml.org) after all,
regardless of what Marcin told you.  SML has several independent
high-quality implementations.  Of course, I'm going to recommend
SML/NJ.  See www.smlnj.org.

Matthias

---------------

$ sml
Standard ML of New Jersey v110.43.3 [FLINT v1.5], September 26, 2003
- fun fact 0 = 1
=   | fact n = n * fact (n - 1);
val fact = fn : int -> int
- fact 10;
val it = 3628800 : int
- fact 50;

uncaught exception overflow

- fun fact 0 = 1 : IntInf.int
=   | fact n = n * fact (n - 1);
[autoloading]
[autoloading done]
val fact = fn : IntInf.int -> IntInf.int
- fact 10;
val it = 3628800 : IntInf.int
- fact 50;
val it = 30414093201713378043612608166064768844377641568960512000000000000
  : IntInf.int
- fact 1000;
val it =
  4023872600770937735437024339230039857193748642107146325437999104299385#
  : IntInf.int
- size (IntInf.toString it);
val it = 2568 : int
- 
From: Daniel C. Wang
Subject: Re: More static type fun.
Date: 
Message-ID: <ur80vl6mj.fsf@hotmail.com>
···@jpl.nasa.gov (Erann Gat) writes:

{stuff deleted}
> I am in fact just finishing up the last stages of building OCaml from
> source even as I write this.  So far it's gone without a hitch.

The last time I used OCaml, I could get it to core dump on a stack
overflow, at least on Windows. I don't know if they fixed it. Given
that they have to make it run portably on lots of OSes, I'm not too
surprised. SML/NJ, I know, never dumps on a stack overflow, and
handles memory exhaustion gracefully.
From: Ralf Muschall
Subject: Re: More static type fun.
Date: 
Message-ID: <848yn3my47.fsf@tecont.de>
···@jpl.nasa.gov (Erann Gat) writes:

> Prelude> :e
> ERROR - Hugs is not configured to use an editor

> There is no excuse for this in a system that is supposed to be optimized
> for teaching running on unix.  The EDITOR environment variable convention
> is universal.

Are you sure that your $EDITOR isn't empty?  Mine (on SuSE 7.2) is.
After (sinfully) saying export EDITOR=/bin/ed, :e started working.

Ralf
-- 
GS d->? s:++>+++ a+ C++++ UL+++ UH++ P++ L++ E+++ W- N++ o-- K- w--- !O M- V-
PS+>++ PE Y+>++ PGP+ !t !5 !X !R !tv  b+++ DI+++ D?  G+ e++++ h+ r? y?
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-3010030729500001@192.168.1.51>
In article <··············@tecont.de>, Ralf Muschall <····@tecont.de> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > Prelude> :e
> > ERROR - Hugs is not configured to use an editor
> 
> > There is no excuse for this in a system that is supposed to be optimized
> > for teaching running on unix.  The EDITOR environment variable convention
> > is universal.
> 
> Are you sure that your $EDITOR isn't empty?

Well, I thought it was set (I remember checking it and having it come up
"emacs") but I just checked it again and lo and behold it was indeed
empty, and when I fixed it Hugs's :e command worked.  So I apologise for
spreading this bit of FUD about Hugs.  Mea culpa.

E.
From: Donn Cave
Subject: Re: More static type fun.
Date: 
Message-ID: <1067502330.677551@yasure>
Quoth Ralf Muschall <····@tecont.de>:
| ···@jpl.nasa.gov (Erann Gat) writes:
|
|> Prelude> :e
|> ERROR - Hugs is not configured to use an editor
|
|> There is no excuse for this in a system that is supposed to be optimized
|> for teaching running on unix.  The EDITOR environment variable convention
|> is universal.
|
| Are you sure that your $EDITOR isn't empty?  Mine (on SuSE 7.2) is.
| After (sinfully) saying export EDITOR=/bin/ed, :e started working.

In the above described situation, I believe the effect of :e would
be to begin editing Prelude.hs.  You should see a bunch of comments
including "Under normal circumstances, you should not attempt to modify
this file in any way!"

Hugs is good because it runs on computers on which I can't build nhc98
or ghc.  For an expedient way to experiment with Haskell expressions,
edit disk files and run them with "runhugs"; start the disk file with
"module Main (main) where", and end it with "main = ...".  Comments
begin with "--".

  module Main (main) where
  import System (getArgs)
  main = do
      args <- getArgs
      putStrLn (show args)

for example.

As for whether there is or is not an excuse for hugs, I wonder if
after reading that again, the author might agree that after a
while this attitude could get a little tiresome.  I don't know
what originally made the Princes of Lisp want to visit us here in
comp.lang.functional, but as long as you're here, you might think
about it like a foreign country where you can just expect things
to be a little different, not something to come unglued about.
(And if you're going to stay much longer you might think about
finding some work to do.)

	Donn Cave, ····@drizzle.com
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fa2648e$1@news.unimelb.edu.au>
"Donn Cave" <····@drizzle.com> writes:

>Quoth Ralf Muschall <····@tecont.de>:
>| ···@jpl.nasa.gov (Erann Gat) writes:
>|
>|> Prelude> :e
...
>In the above described situation, I believe the effect of :e would
>be to begin editing Prelude.hs.

No, it would be to invoke the editor with no arguments.

>Hugs is good because it runs on computers on which I can't build nhc98
>or ghc.  For an expedient way to experiment with Haskell expressions,
>edit disk files and run them with "runhugs"; start the disk file with
>"module Main (main) where", and end it with "main = ...".  Comments
>begin with "--".
>
>  module Main (main) where
>  import System (getArgs)
>  main = do
>      args <- getArgs
>      putStrLn (show args)
>
>for example.

That's a little misleading: the line "module Main (main) where" is not
necessary at all.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fa0b7df$1@news.unimelb.edu.au>
···@jpl.nasa.gov (Erann Gat) writes:

>Fergus Henderson <···@cs.mu.oz.au> wrote:
>
>> when you want to enter a function definition,
>> instead of typing "let" (as in ghci) or "defun", just type ":e".
>> Hugs will then let you enter one or more multi-line definitions.
>
>Nope.
>
>Prelude> :e
>ERROR - Hugs is not configured to use an editor
>
>There is no excuse for this

I agree.  Where did you get such a crummy Hugs distribution?

Please let us know, so that we can notify the distributor of the problem,
and get it fixed.

FWIW, the version of Hugs on Debian Linux 3.0 does not have this problem.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-3010030724370001@192.168.1.51>
In article <··········@news.unimelb.edu.au>, Fergus Henderson
<···@cs.mu.oz.au> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> >Fergus Henderson <···@cs.mu.oz.au> wrote:
> >
> >> when you want to enter a function definition,
> >> instead of typing "let" (as in ghci) or "defun", just type ":e".
> >> Hugs will then let you enter one or more multi-line definitions.
> >
> >Nope.
> >
> >Prelude> :e
> >ERROR - Hugs is not configured to use an editor
> >
> >There is no excuse for this
> 
> I agree.  Where did you get such a crummy Hugs distribution?

www.haskell.org/hugs

I'm running it on OS X.

E.
From: Jon S. Anthony
Subject: Re: More static type fun.
Date: 
Message-ID: <m3u15qvfkl.fsf@rigel.goldenthreadtech.com>
Fergus Henderson <···@cs.mu.oz.au> writes:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> >Fergus Henderson <···@cs.mu.oz.au> wrote:
> >
> >> when you want to enter a function definition,
> >> instead of typing "let" (as in ghci) or "defun", just type ":e".
> >> Hugs will then let you enter one or more multi-line definitions.
> >
> >Nope.
> >
> >Prelude> :e
> >ERROR - Hugs is not configured to use an editor
> >
> >There is no excuse for this
> 
> I agree.  Where did you get such a crummy Hugs distribution?
> 
> Please let us know, so that we can notify the distributor of the problem,
> and get it fixed.

I guess they didn't prove the program was correct.  Or if so, they
didn't include the notion of useful in the result. :-|

/Jon
From: Darius
Subject: Re: More static type fun.
Date: 
Message-ID: <20031030113354.00002e9f.ddarius@hotpop.com>
On 30 Oct 2003 11:46:34 -0500
·········@rcn.com (Jon S. Anthony) wrote:

> Fergus Henderson <···@cs.mu.oz.au> writes:
> 
> > ···@jpl.nasa.gov (Erann Gat) writes:
> > 
> > >Fergus Henderson <···@cs.mu.oz.au> wrote:
> > >
> > >> when you want to enter a function definition,
> > >> instead of typing "let" (as in ghci) or "defun", just type ":e".
> > >> Hugs will then let you enter one or more multi-line definitions.
> > >
> > >Nope.
> > >
> > >Prelude> :e
> > >ERROR - Hugs is not configured to use an editor
> > >
> > >There is no excuse for this
> > 
> > I agree.  Where did you get such a crummy Hugs distribution?
> > 
> > Please let us know, so that we can notify the distributor of the
> > problem, and get it fixed.
> 
> I guess they didn't prove the program was correct.  Or if so, they
> didn't include the notion of useful in the result. :-|

Yes, it is difficult to prove that the user won't make a mistake.
From: Jon S. Anthony
Subject: Re: More static type fun.
Date: 
Message-ID: <m3he1qvdee.fsf@rigel.goldenthreadtech.com>
Darius <·······@hotpop.com> writes:

> On 30 Oct 2003 11:46:34 -0500
> ·········@rcn.com (Jon S. Anthony) wrote:
> 
> > I guess they didn't prove the program was correct.  Or if so, they
> > didn't include the notion of useful in the result. :-|
> 
> Yes, it is difficult to prove that the user won't make a mistake.

Absolutely.

/Jon
From: Frode Vatvedt Fjeld
Subject: Re: More static type fun.
Date: 
Message-ID: <2hsmldsbpo.fsf@vserver.cs.uit.no>
Frode Vatvedt Fjeld wrote:

>> I find this statement ridiculous.

Joachim Durchholz <·················@web.de> writes:

> Another subthread that I can go away from - calling names isn't my
> favorite pastime.

Somehow I'm not surprised you managed to read that as "calling names".

> I find it ridiculous to conflate the "take-a-snapshot-of-the-world"
> paradigm with the edit-compile-run cycle: static typing and
> interactivity can go together (and indeed do, as the existence of
> Haskell interpreters proves).

One of us is massively confused here. Why do you bring in static
typing, and how does it relate to either
"take-a-snapshot-of-the-world" or "the edit-compile-run cycle" or
both? My initial comment was about your statement about HOFs and
side-effects in lisp, not about static typing (despite the subject
line). How can you have a reasonably interactive programming
environment without side-effects?

-- 
Frode Vatvedt Fjeld
From: ·············@comcast.net
Subject: Re: More static type fun.
Date: 
Message-ID: <smlehpyp.fsf@comcast.net>
Joachim Durchholz <·················@web.de> writes:

> Then EQ equality is name equality.
> Which is fine by me (though those integer and character specialties of
> EQ are awful warts IMHO).

EQL doesn't have the `warts' that EQ has and is the `default' for
things that have implicit equality checks.

> (Additional equality operators snipped - this already-overlong thread
> doesn't need yet another flamewar on Lisp's idea of equality, which
> would almost certainly ensue if I brought up my opinion about it.)

`Equality' is a difficult concept to pin down.  Lisp at least provides
the *most* restrictive form of equality (essentially representational
equivalence).  It is easy to write less restrictive forms from the
more restrictive, but very difficult to go in the other direction.
From: Frode Vatvedt Fjeld
Subject: Re: More static type fun.
Date: 
Message-ID: <2h1xsyhlor.fsf@vserver.cs.uit.no>
Joachim Durchholz <·················@web.de> writes:

> Then EQ equality is name equality.  Which is fine by me (though
> those integer and character specialties of EQ are awful warts IMHO).

This is the best mental model of eq, IMHO:

  (eq x y) =approx= (eql (the (not (or number character)) x)
                         (the (not (or number character)) y))

In plain English, it's a special case of eql that you can use when you
know that at least one of the arguments is neither a number nor a
character. And this implies that eql is "the" identity comparator in
Common Lisp.

From a naive, abstract perspective, this is a rather ugly thing to
have in a language. From the perspective of a pragmatic, real-world
language, it's close to a necessity. However, it might be argued that
eq ideally should have had a more "don't use me so much" kind of
name, but this is where Lisp's history enters the picture, I suppose.

-- 
Frode Vatvedt Fjeld
From: Jesse Tov
Subject: Re: More static type fun.
Date: 
Message-ID: <slrnbpptjd.db5.tov@tov.student.harvard.edu>
Joachim Durchholz <·················@web.de>:
> It will not be able to detect the equality of
>    CAR
> and
>    (LAMBDA X (CAR X))
> but I think most people would be satisfied with the result anyway :-)

I'd rather function comparison be generative or disallowed, but if you
actually want to inspect them I want alpha equivalence:
   (LAMBDA X (CAR X))
and
   (LAMBDA Y (CAR Y))
should be equal.  You'd need some unification.

Jesse
From: Alain Picard
Subject: Re: More static type fun.
Date: 
Message-ID: <87y8v7t7um.fsf@memetrics.com>
Adam Warner <······@consulting.net.nz> writes:


> Since I can't find a straight answer in the archives or the HyperSpec can
> someone please explain whether an EQ test for function identity is
> conforming? It appears to be unspecified.

Hyperspec says EQ is defined on any "object".
Clicking on "object" yields:

  object n. 1. any Lisp datum. ``The function cons creates an object
  which refers to two other objects.'' 2. (immediately following the
  name of a type) an object which is of that type, used to emphasize
  that the object is not just a name for an object of that type but
  really an element of the type in cases where objects of that type
  (such as function or class) are commonly referred to by name. ``The
  function symbol-function takes a function name and returns a
  function object.''

Seems pretty clear cut.
From: Adam Warner
Subject: Re: More static type fun.
Date: 
Message-ID: <pan.2003.10.27.10.06.03.726603@consulting.net.nz>
Hi Alain Picard,

>> Since I can't find a straight answer in the archives or the HyperSpec
>> can someone please explain whether an EQ test for function identity is
>> conforming? It appears to be unspecified.
> 
> Hyperspec says EQ is defined on any "object". Clicking on "object"
> yields:
> 
>   object n. 1. any Lisp datum. ``The function cons creates an object
>   which refers to two other objects.'' 2. (immediately following the
>   name of a type) an object which is of that type, used to emphasize
>   that the object is not just a name for an object of that type but
>   really an element of the type in cases where objects of that type
>   (such as function or class) are commonly referred to by name. ``The
>   function symbol-function takes a function name and returns a function
>   object.''
> 
> Seems pretty clear cut.

If an object is any Lisp datum how is this clear cut? Many Lisp objects
cannot be compared in a conforming manner using EQ. Duane stated it's
because functions are first class objects. Is anything else?

I've just grepped the HyperSpec and the phrase "first class" doesn't
appear once. Xanalys reveals that CLOS classes are also first class:
http://www.lispworks.com/products/lisp-overview.html

Regards,
Adam
From: Matthew Danish
Subject: Re: More static type fun.
Date: 
Message-ID: <20031027102754.GI1454@mapcar.org>
On Mon, Oct 27, 2003 at 11:06:06PM +1300, Adam Warner wrote:
> >> Since I can't find a straight answer in the archives or the HyperSpec
> >> can someone please explain whether an EQ test for function identity is
> >> conforming? It appears to be unspecified.
> > 
> > Hyperspec says EQ is defined on any "object". Clicking on "object"
> > yields:
> > 
> >   object n. 1. any Lisp datum. ``The function cons creates an object
> >   which refers to two other objects.'' 2. (immediately following the
> >   name of a type) an object which is of that type, used to emphasize
> >   that the object is not just a name for an object of that type but
> >   really an element of the type in cases where objects of that type
> >   (such as function or class) are commonly referred to by name. ``The
> >   function symbol-function takes a function name and returns a function
> >   object.''
> > 
> > Seems pretty clear cut.
> 
> If an object is any Lisp datum how is this clear cut? Many Lisp objects
> cannot be compared in a conforming manner using EQ. Duane stated it's
> because functions are first class objects. Is anything else?

You are getting confused as to what EQ does, I think.  EQ is not a
structural equivalence operator.  EQ is defined to return T if and only
if all of the arguments are the same object.  Not a copy of the object,
but the exact same one.

(let* ((x (compute-some-value))
       (y x))
  (eq x y)) ==> T

X and Y are bound to the same object, therefore EQ returns T here.

There is an exception made for numbers and characters, because the
standard allows implementations to violate object identity on these
types of objects for purposes of efficiency.  All other objects must
have identity preserved in variable binding, function application, etc.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Adam Warner
Subject: Re: More static type fun.
Date: 
Message-ID: <pan.2003.10.27.12.11.43.993828@consulting.net.nz>
Hi Matthew Danish,

> You are getting confused as to what EQ does, I think.

Yes, I've been under a fundamental misconception this whole time. Thank
you for setting me straight! I had built up an idea that more complex
objects, especially those composed of numbers and characters, might not be
EQ even if they were created the same. What I didn't realise was that the
more complex objects _can't_ be secretly copied by the implementation.
That's why I was worried about the implementation making a copy of the
function object.

> EQ is not a structural equivalence operator.  EQ is defined to return T
> if and only if all of the arguments are the same object.  Not a copy of
> the object, but the exact same one.
> 
> (let* ((x (compute-some-value))
>        (y x))
>   (eq x y)) ==> T
> 
> X and Y are bound to the same object, therefore EQ returns T here.

Right. So (let* ((x 1) (y x)) (eq x y)) may not be T but
          (let* ((x #(1 1)) (y x)) (eq x y)) must be.

Similarly, (let* ((x #\a) (y x)) (eq x y)) may not be T but
           (let* ((x "ab") (y x)) (eq x y)) must be.
 
> There is an exception made for numbers and characters, because the
> standard allows implementations to violate object identity on these
> types of objects for purposes of efficiency.  All other objects must
> have identity preserved in variable binding, function application, etc.

Got it!

Many thanks,
Adam
From: Alain Picard
Subject: Re: More static type fun.
Date: 
Message-ID: <87n0bnt00p.fsf@memetrics.com>
Adam Warner <······@consulting.net.nz> writes:

> If an object is any Lisp datum how is this clear cut? 
> Many Lisp objects
> cannot be compared in a conforming manner using EQ. 

Yes, but those are enumerated on the same page describing EQ,
and FUNCTIONS are not included in the excluded types of objects.

So, again:
 * EQ applies to OBJECTS (with some exceptions)
 * functions are OBJECTS

Therefore, to me, the spec says I can expect that
(eq #'foo #'foo) will evaluate to T. 

> I've just grepped the HyperSpec and the phrase "first class" doesn't
> appear once. 

That's a red herring.  Functions are lisp datums (i.e. objects),
and that's sufficient for a clear reading of the spec to ensure
that they can meaningfully be compared with EQ.

-- 
It would be difficult to construe        Larry Wall, in  article
this as a feature.			 <·····················@netlabs.com>
From: Adam Warner
Subject: Re: More static type fun.
Date: 
Message-ID: <pan.2003.10.27.12.23.29.51468@consulting.net.nz>
Hi Alain Picard,

>> I've just grepped the HyperSpec and the phrase "first class" doesn't
>> appear once.
> 
> That's a red herring.  Functions are lisp datums (i.e. objects), and
> that's sufficient for a clear reading of the spec to ensure that they
> can meaningfully be compared with EQ.

Understood! By the way there are two references to "first-class" in the
Hyperspec, one praising the major contributions of Scheme.

Body/01_ab.htm:
"The major contributions of Scheme were lexical scoping, lexical closures,
first-class continuations, and simplified syntax (no separation of value
cells and function cells). Some of these contributions made a large impact
on the design of Common Lisp."

The other reference to "first-class" is in Issues/iss236_w.htm (Issue
MAKE-LOAD-FORM-CONFUSION Writeup).

My searches were case-insensitive.

Regards,
Adam
From: Adam Warner
Subject: Re: More static type fun.
Date: 
Message-ID: <pan.2003.10.27.12.49.25.976982@consulting.net.nz>
> Hi Stephen J. Bevan,
> 
>> ·············@comcast.net writes:
>>> "Marshall Spight" <·······@dnai.com> writes:
>>> > It would be really interesting to see a small but useful example of a
>>> > program that will not pass a statically typed language. It seems to me
>>> > that how easy it is to generate such programs will be an interesting
>>> > metric.
>>> 
>>> (defun foo (f)
>>>   (funcall (funcall f #'+)
>>>            (funcall f 3)
>>>            (funcall f 2)))
>>> 
>>> (defun test1 ()
>>>   (foo (lambda (thing)
>>>          (format t "~&--> ~s" thing)
>>>          thing)))
>>> 
>>> (defun test2 ()
>>>   (foo (lambda (thing)
>>>          (if (eq thing #'+)
>>>              #'*
>>>              thing))))
>> 
>> test2 relies on some kind of equality being defined over functions. Some
>> (statically typed) languages do not support that (for reasons other than
>> static typing).
> 
> Since I can't find a straight answer in the archives or the HyperSpec can
> someone please explain whether an EQ test for function identity is
> conforming? It appears to be unspecified.

Just to update comp.lang.functional readers (followup-to has been set to
comp.lang.lisp), an EQ test for function identity is definitely conforming
in Common Lisp.

#'+ unless rebound, etc. always returns the same object from the function
namespace, e.g.:

* #'+

#<Function + {100CB641}>
* #'+

#<Function + {100CB641}>

Only character and number objects can be copied at any time (in the
interests of implementation efficiency) and can't be compared using EQ in
a conforming manner. This exception goes away if the EQL test is used
instead.

Regards,
Adam
From: Alain Picard
Subject: Re: More static type fun.
Date: 
Message-ID: <87ekwytkwl.fsf@memetrics.com>
Adam Warner <······@consulting.net.nz> writes:

> * #'+
>
> #<Function + {100CB641}>
> * #'+
>
> #<Function + {100CB641}>
>

And just one last didactic note; the value you see
printed there "{100CB641}" is not part of the object,
and is liable to change at any time.  [Ok, for #'+, unlikely,
but for your own function #'FOO, it might].  So
it is possible you would get a transcript like this one:

USER: 17> x
==>  #<Function BLAH {100CB641}>

USER: 18> (setq y x)
==>  #<Function BLAH {100CB641}>

[do a lot more stuff... GC occurs...]

USER: 207> y
==>  #<Function BLAH {100C0000}>  ;; WTF?  But, no matter

USER: 208> x
==>  #<Function BLAH {100C0000}>

USER: 209> (eq x y)
==>  T

So, just to be clear, that address doesn't _define_ identity.
EQ defines identity.

Cheers,
From: Joachim Durchholz
Subject: Re: More static type fun.
Date: 
Message-ID: <bnird9$ljb$1@news.oberberg.net>
·············@comcast.net wrote:
> 
> (defun foo (f)
>   (funcall (funcall f #'+) 
>            (funcall f 3)
>            (funcall f 2)))

I don't know what funcall does, so I can't transliterate. 
(Unfortunately, the rest of your examples mostly depend on foo.)

I assume that #'+ is a reference to the add operator; is that true?

> (defun transpose-tensor (tensor)
>   (apply #'mapcar #'mapcar (list #'list #'list) tensor))
> 
> (defun test3 ()
>   (transpose-tensor '(((1 2 3)
>                        (4 5 6))
>                       ((a b c)
>                        (d e f)))))

I'm having difficulties getting the effects of those mapcar and list 
functions sorted out with confidence. Could you explain what's happening 
here?

What are the dynamic types of a-f? If a-f can be functions, that's not a 
problem, but then I need to know parameter and result types.

Combining arbitrary types in a list isn't allowed, you'd use some 
different idiom in an FPL. Knowing the type of a-f makes it easier to 
choose the appropriate one.
(I know zilch about tensors.)

Regards,
Jo
From: ·············@comcast.net
Subject: Re: More static type fun.
Date: 
Message-ID: <y8v7gdj3.fsf@comcast.net>
Joachim Durchholz <·················@web.de> writes:

> ·············@comcast.net wrote:
>> (defun foo (f)
>>   (funcall (funcall f #'+)            
>>            (funcall f 3)
>>            (funcall f 2)))
>
> I don't know what funcall does, so I can't
> transliterate. (Unfortunately, the rest of your examples mostly depend
> on foo.)

Sorry about that.  FUNCALL is a namespace operator; putting F directly
in the operator position would invoke the global function F, not the
argument passed in.  In Scheme it would be:

 (define (foo f)
   ((f +) (f 3) (f 2)))

> I assume that #'+ is a reference to the add operator; is that true?

Yes.  Again the #' is a namespace escape because we are not using
plus in the operator position.
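Python, like Scheme, has a single namespace, so a rough transliteration of
foo and a test2-style caller needs no FUNCALL or #' at all (this is only a
sketch; `operator.add` and `operator.mul` stand in for #'+ and #'*):

```python
import operator

def foo(f):
    # ((f +) (f 3) (f 2)) -- one namespace, so f's result is called directly
    return f(operator.add)(f(3), f(2))

def test1():
    def trace(thing):
        print("-->", thing)   # rough stand-in for (format t "~&--> ~s" thing)
        return thing
    return foo(trace)         # prints the three things, returns 3 + 2 = 5

def test2():
    # EQ on function objects becomes `is` on function objects
    return foo(lambda thing: operator.mul if thing is operator.add else thing)
```

Here test2() swaps addition for multiplication and returns 3 * 2 = 6.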

>> (defun transpose-tensor (tensor)
>>   (apply #'mapcar #'mapcar (list #'list #'list) tensor))
>>
>> (defun test3 ()
>>   (transpose-tensor '(((1 2 3)
>>                        (4 5 6))
>>                       ((a b c)
>>                        (d e f)))))
>
> I'm having difficulties getting the effects of those mapcar and list
> functions sorted out with confidence. Could you explain what's
> happening here?

List is an N-ARY function that returns a list of all its arguments:

   (list 2 'a 4 5)  =>  (2 a 4 5)

MAPCAR takes a function of arity N and N lists and returns a list
where the function has been applied in turn to the elements of 
the lists.  Easier demonstrated than explained:

(mapcar #'+ '(3 1 4) '(2 7 2)) =>  (5 8 6)

So the result is 3 added to 2, 1 added to 7, 4 added to 2.

In the example I've given, you won't have to worry about the
lists being the same length.


Finally, APPLY takes an N-ARY function and a list of arguments and
invokes the function on those arguments (it sort of un-tuples).
APPLY can also take some `already untupled' elements before the list.

So transpose-tensor evaluates in these steps:

(apply #'mapcar #'mapcar (list #'list #'list) '(((1 2 3) (4 5 6)) ((a b c) (d e f))))

(mapcar #'mapcar (list #'list #'list) '((1 2 3) (4 5 6)) '((a b c) (d e f)))

(list (mapcar #'list '(1 2 3) '(a b c))
      (mapcar #'list '(4 5 6) '(d e f)))

(list (list (list 1 'a) (list 2 'b) (list 3 'c))
      (list (list 4 'd) (list 5 'e) (list 6 'f)))

(((1 a) (2 b) (3 c)) ((4 d) (5 e) (6 f)))

> What are the dynamic types of a-f? If a-f can be functions, that's
> not a problem, but then I need to know parameter and result types.

A through F are simply symbols.
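For comparison, in Python the idiomatic transpose is zip(*...), so a sketch
of the same function is short; note that, unlike the two literal #'list
occurrences in the Lisp version, zip handles any number of matrices:

```python
def transpose_tensor(tensor):
    # zip(*tensor) pairs up the matrices; zip(*rows) then pairs up their rows
    return [[list(pair) for pair in zip(*rows)] for rows in zip(*tensor)]

t = [[[1, 2, 3], [4, 5, 6]],
     [["a", "b", "c"], ["d", "e", "f"]]]
# transpose_tensor(t) == [[[1, "a"], [2, "b"], [3, "c"]],
#                         [[4, "d"], [5, "e"], [6, "f"]]]
```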
From: Jens Axel Søgaard
Subject: Re: More static type fun.
Date: 
Message-ID: <3f9cf48e$0$69909$edfadb0f@dread12.news.tele.dk>
Joachim Durchholz wrote:
> ·············@comcast.net wrote:
> 
>>
>> (defun foo (f)
>>   (funcall (funcall f #'+)            (funcall f 3)
>>            (funcall f 2)))
> 
> 
> I don't know what funcall does, so I can't transliterate. 
> (Unfortunately, the rest of your examples mostly depend on foo.)

It's function application. In Scheme:

(define (foo f)
   ((f +) (f 3) (f 2)))

>> (defun transpose-tensor (tensor)
>>   (apply #'mapcar #'mapcar (list #'list #'list) tensor))
>>
>> (defun test3 ()
>>   (transpose-tensor '(((1 2 3)
>>                        (4 5 6))
>>                       ((a b c)
>>                        (d e f)))))
> 
> 
> I'm having difficulties getting the effects of those mapcar and list 
> functions sorted out with confidence. Could you explain what's happening 
> here?

That's tricky, but here is the output.

 > (define (transpose-tensor tensor)
     (apply map map (list list list) tensor))

(transpose-tensor '(((1 2 3)
                      (4 5 6))
                     ((a b c)
                      (d e f))))
 > (((1 a) (2 b) (3 c))
    ((4 d) (5 e) (6 f)))

-- 
Jens Axel Søgaard
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <s7qv61-9vd.ln1@ID-7776.user.dfncis.de>
·············@comcast.net wrote:

> (defun foo (f)
>  (funcall (funcall f #'+) 
>           (funcall f 3)
>           (funcall f 2)))
>
> (defun test1 ()
>  (foo (lambda (thing)
>         (format t "~&--> ~s" thing)
>         thing)))
> 
> (defun test2 ()
>  (foo (lambda (thing)
>         (if (eq thing #'+)
>             #'*
>             thing))))

You're using arguments of different type with the same function, and
you're then relying on library functions like "format" to sort out
the tags and act accordingly. The equivalent in Haskell is to
create a datatype that supplies those tags:

> data Foo a = Op (a -> a -> a) | Val a 

We have to tell the 'show' formatter how to deal with that datatype
(in Haskell, it is not possible to print a function, otherwise this
could be done in an easier way):

> instance Show a => Show (Foo a) where
>   show (Op f)  = "operator"
>   show (Val x) = show x

We define an auxiliary function to do the application. Since you
will check for an error dynamically, we do the same:

> foocall (Op p) (Val x) (Val y) = p x y
> foocall _      _       _       = error "dynamic type error"

Note that foo is a bogus function; there are values for f which will
cause the program to crash. We here say honestly and explicitly "we
don't care about such situations", whereas with pure dynamic typing,
you sort of brush it under the carpet and hope nobody will ever do
this. If someone now uses foo and isn't aware of that restriction,
and innocently passes a wrong argument in some obscure situation
that is not covered by a unit test, you're in trouble.

Foo is now easy:

> foo f = foocall (f (Op (+))) (f (Val 3)) (f (Val 2))

In test1, I cheat a bit because outside the I/O monad, there cannot
be side effects like printing, but for typechecking this is close enough:

> test1 = foo (\thing -> show thing `seq` thing)

In test2, I cannot test for function equality (that is a fishy concept
for reasons already mentioned), but I can test for the presence of
an operator:

> test2 = foo (\thing -> case thing of
>                          Op _ -> Op (*)
>                          _    -> thing)

It doesn't do exactly the same, but it's close, and after all, this
example doesn't do anything useful.

> (defun transpose-tensor (tensor)
>  (apply #'mapcar #'mapcar (list #'list #'list) tensor))

That's the most interesting example so far, because it really has an
application. However, whatever that function does, it doesn't transpose
an arbitrary tensor:

(transpose-tensor '( ((1 A) (2 B) (3 C)) ((4 D) (5 E) (6 F)) ) )

evaluates to

(((1 4) (A D)) ((2 5) (B E)))

dropping the last column. The culprit is the finite number of #'list
occurrences: there are only two, so you get only two columns.

Anyway, let's do mapcar in Haskell. To make the somewhat arbitrary
use of variable length argument lists simpler, I put all the arguments
in a single list argument:

> mapcar :: ([a] -> b) -> [[a]] -> [b]
> mapcar f m = map f (transpose m)

map and transpose are in the standard library and do what is expected.
Now it turns out that the #'list functions will be just the identity,
because the function f is already supposed to process a list as argument. 
We don't need a whole list of them either. So we can write a correct
transpose-tensor function, here called f:

> f = mapcar (mapcar id)

For simplicity, let's test only with integers:

> mat = [[[1,2,3],[4,5,6]],[[10,11,12],[13,14,15]]]

And then

Main> f mat
[[[1,10],[2,11],[3,12]],[[4,13],[5,14],[6,15]]]
Main> f (f mat)
[[[1,4],[10,13]],[[2,5],[11,14]],[[3,6],[12,15]]]

yields correct results. To simulate the bogus transpose-tensor function,
we have to allow for a list of functions in mapcar:

> mapcar2 :: (a -> [b] -> c) -> [a] -> [[b]] -> [c]
> mapcar2 f l m = zipWith f l (transpose m)

zipWith is just the Lisp mapcar with two lists to operate on. Then
we have 

> g = mapcar2 mapcar [id, id]

and indeed

Main> g mat
[[[1,10],[2,11],[3,12]],[[4,13],[5,14],[6,15]]]
Main> g (f mat)
[[[1,4],[10,13]],[[2,5],[11,14]]]

as the original bogus function does.

> (defun test3 ()
>  (transpose-tensor '(((1 2 3) (4 5 6)) ((a b c) (d e f)))))

If you want to mix integers and characters (or symbols, or whatever)
in a single matrix you again need a datatype:

> data IntChar = I Integer | C Char  deriving Show

Then you write down the tensor with appropriate tags and apply f or g.
(I think you should have seen the procedure to emulate dynamic typing
now often enough, so I'll leave this as an exercise).

IMHO, the Haskell version is clearer and easier to understand than the
Lisp version. It took me a long time to figure out what the Lisp
version does (especially, when it didn't work as I thought it
should).

- Dirk
From: Don Geddis
Subject: Re: More static type fun.
Date: 
Message-ID: <87oew2xyvt.fsf@sidious.geddis.org>
Dirk Thierbach <··········@gmx.de> writes:
> IMHO, the Haskell version is clearer and easier to understand than the
> Lisp version. It took me a long time to figure out what the Lisp
> version does (especially, when it didn't work as I thought it
> should).

Surely even you would agree that this is a matter of experience or personal
preference.  The original code was only a few lines of Lisp.  I doubt many
Lisp programmers would have had much trouble with it.  What are your objective
criteria for claiming that the Haskell version is "clearer and easier to
understand"?

> If you want to mix integers and characters (or symbols, or whatever)
> in a single matrix you again need a datatype:
> > data IntChar = I Integer | C Char  deriving Show

Note that in the original Lisp code, the transpose-tensor function worked
with lists of arbitrary objects, not limited to an explicit list of types.
In particular, long after the transpose-tensor function had been written,
some different programmer might create some new data types, construct a list
with those new types, and pass it to the existing transpose-tensor function.

How do you handle a function accepting lists of arbitrary objects as an
argument?

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
From: Aaron Denney
Subject: Re: More static type fun.
Date: 
Message-ID: <slrnbpr0m0.k7s.wnoise@ofb.net>
In article <··············@sidious.geddis.org>, Don Geddis wrote:
> Dirk Thierbach <··········@gmx.de> writes:
>> If you want to mix integers and characters (or symbols, or whatever)
>> in a single matrix you again need a datatype:
>> > data IntChar = I Integer | C Char  deriving Show
> 
> Note that in the original Lisp code, the transpose-tensor function worked
> with lists of arbitrary objects, not limited to an explicit list of types.
> In particular, long after the transpose-tensor function had been written,
> some different programmer might create some new data types, construct a list
> with those new types, and pass it to the existing transpose-tensor function.
> 
> How do you handle a function accepting lists of arbitrary objects as an
> argument?

The haskell approach would define the transpose-tensor function working
for any single object type, parametrically:

> transpose-tensor :: [[a]] -> [[a]]

Assuming I have the depth correct.

Then the user would define a sum type, as above, but for their types,
and use the transpose-tensor function just as it is.  (In fact, they
have to define the sum type in order to get mixed-lists in the first
place, so transpose-tensor adds no burden to its users.  Yes, it's a
small bit of extra machinery to write, compared with the dynamic-typing
approach.  Personally, I find the clarity worth it.)

-- 
Aaron Denney
-><-
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <iog071-dui.ln1@ID-7776.user.dfncis.de>
Don Geddis <···@geddis.org> wrote:
> Dirk Thierbach <··········@gmx.de> writes:
>> IMHO, the Haskell version is clearer and easier to understand than the
>> Lisp version. It took me a long time to figure out what the Lisp
>> version does (especially, when it didn't work as I thought it
>> should).

> Surely even you would agree that this is a matter of experience or personal
> preference.  The original code was only a few lines of Lisp. 

And the translation is also only a few lines, and the actual function
(existence of mapcar assumed) is a lot shorter.

> I doubt many Lisp programmers would have had much trouble with it.

I mostly had trouble because I didn't understand why there were two
'list'-operations in the definition. I had some really wild theories
why they should be necessary, but it turned out that they did work
in a completely different way, restricting the tensor to two columns.
I am not really sure if this was intended in the original code,
at least it did surprise me.

> What are your objective criteria for claiming that the Haskell
> version is "clearer and easier to understand"?

One criterion is that the transliterated function actually does what
it is supposed to do. Another criterion is that the correct function
is shorter: No apply, no (list #'list #'list).

> Note that in the original Lisp code, the transpose-tensor function worked
> with lists of arbitrary objects, not limited to an explicit list of types.

And so does the translated function -- it works with a value of arbitrary
types. However, there is no default type that contains both integers
and characters. If you want to have it, you have to make one.

If you want to have a type that contains nearly all possible objects
(say, s-expressions), you have to make a datatype for that. That has
already been done during this discussion.

If you really want to use arbitrary types everywhere, just put this
universal type in a library and use it everywhere. 

> In particular, long after the transpose-tensor function had been written,
> some different programmer might create some new data types, construct a list
> with those new types, and pass it to the existing transpose-tensor function.

You might have noticed that this is exactly the same way the transliteration
handles it. First you write the transpose-tensor function. It has type

transpose-tensor :: [[[a]]] -> [[[a]]]

i.e. it takes lists of lists of lists of some type a and returns the same.
Then, long after the function has been written, you can make arbitrary
other types, and the function will work on them.

> How do you handle a function accepting lists of arbitrary objects as an
> argument?

I think I have demonstrated this now quite a few times, but maybe
the construction did not come across. Which part is the difficult one?

- Dirk
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <aeb7ff58.0310272003.21215aee@posting.google.com>
Dirk Thierbach <··········@gmx.de> wrote in message news:<··············@ID-7776.user.dfncis.de>...

> If you want to have a type that contains nearly all possible objects
> (say, s-expressions), you have to make a datatype for that. That has been
> already done during this discussion. 
> 
> If you really want to use arbitrary types everywhere, just put this
> universal type in a library and use it everywhere. 

Then what would be the point of static type checking, since every
possible datum would satisfy this type requirement?

I think the dynamic typing argument is that if you want the
flexibility of a type that contains all possible objects, what's the
point of having to jump through the hoops of a static type checker?
From: ··········@ii.uib.no
Subject: Re: More static type fun.
Date: 
Message-ID: <egad7mgc8e.fsf@vipe.ii.uib.no>
·······@mediaone.net (Raffael Cavallaro) writes:

> Dirk Thierbach <··········@gmx.de> wrote in message news:<··············@ID-7776.user.dfncis.de>...

>> If you really want to use arbitrary types everywhere, just put this
>> universal type in a library and use it everywhere. 

> Then what would be the point of static type checking, since every
> possible datum would satisfy this type requirement?

Exactly.  It's just to show that you *can*; I don't think many people
actually do this.

> I think the dynamic typing argument is that if you want the
> flexibility of a type that contains all possible objects, what's the
> point of having to jump through the hoops of a static type checker?

I agree - if this is what you usually need, by all means, use dynamic
typing.  However, IME a tensor will typically be over some
well-defined domain.  YMMV.

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <h4n171-in.ln1@ID-7776.user.dfncis.de>
Raffael Cavallaro <·······@mediaone.net> wrote:
> Dirk Thierbach <··········@gmx.de> wrote in message news:<··············@ID-7776.user.dfncis.de>...
> 
>> If you want to have a type that contains nearly all possible objects
>> (say, s-expressions), you have to make a datatype for that. That has been
>> already done during this discussion. 
>> 
>> If you really want to use arbitrary types everywhere, just put this
>> universal type in a library and use it everywhere. 

> Then what would be the point of static type checking, since every
> possible datum would satisfy this type requirement?

I usually don't want to use every possible datum everywhere. That's
the point. Usually, I do know that I only want e.g. tensors with
real numbers, because I want to add and multiply them as well.
But that doesn't keep me from writing a function that transposes
a tensor, no matter what sort of types this tensor contains.

Most of the examples here are somewhat artificial and require mixing
different types, because people think that this won't work with static
typing.

> I think the dynamic typing argument is that if you want the
> flexibility of a type that contains all possible objects, what's the
> point of having to jump through the hoops of a static type checker?

On the one hand, you already have the flexibility of types that deals
with all possible objects -- type signatures contain variables,
and you can substitute any type for them. You just cannot use two
different types at the same time for the same variable.
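In Python terms, optional type hints can play the same role (only an
approximation of real parametric polymorphism, since the checker is
external and optional, but the shape of the signature is the same):

```python
from typing import List, TypeVar

T = TypeVar("T")

def transpose(m: List[List[T]]) -> List[List[T]]:
    # Generic in T: works for ints, strings, functions, anything --
    # but a checker rejects a call that binds T to two types at once.
    return [list(row) for row in zip(*m)]

assert transpose([[1, 2], [3, 4]]) == [[1, 3], [2, 4]]
```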

On the other hand, if you really want to have multiple types at the
same time (which isn't very often), it keeps you honest.

It says clearly: Here, at this point in the program, you should be
prepared to deal with values of such an such types. If you cannot
handle all of those, better be sure you really want it that way, or
do something different.

And your co-worker or the guy that has to work with your program after
you left the company also sees this.

- Dirk
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <aeb7ff58.0310281535.87f9bf5@posting.google.com>
Dirk Thierbach <··········@gmx.de> wrote in message news:<·············@ID-7776.user.dfncis.de>...

> Most of the examples here are somewhat artificial and require mixing
> different types, because people think that this won't work with static
> typing.

On the contrary. It is quite common for lisp programmers to use lists
whose elements are of different types, especially early in development
when designs are just being sketched out. Only later is a particular
representation chosen, be it a built-in type, a struct, or an object.

 
> On the one hand, you already have the flexibility of types that deals
> with all possible objects -- type signatures contain variables,
> and you can substitute any type for them.

like the Haskell cons operator? "You can cons anything onto anything
else, as long as they're all the same type." It's the Henry Ford of
programming languages (Ford is reputed to have said "They can have
any color they like, as long as it's black.")
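For what it's worth, the restriction is narrower than the jab suggests. In a statically typed sketch (TypeScript here as stand-in notation; the variable names are invented), a list is homogeneous only in its *declared* element type, and that element type may itself be a union, so "any color as long as it's black" becomes "any colors, as long as you name them up front":

```typescript
const nums: number[] = [1, 2, 3];                  // homogeneous; pushing "x" is rejected
const mixed: (number | string)[] = [1, "two", 3];  // mixed contents, declared up front

// The checker then insists both cases are handled at each use site.
const normalized = mixed.map(v =>
  typeof v === "string" ? v.length : v
);
```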

> On the other hand, if you really want to have multiple types at the
> same time (which isn't very often), it keeps you honest.

Honest? Why do I need a compiler to "keep me honest?" If I need a
compiler to "keep my honest," I'll tell it when, and how to do so.
Until then, it had best stay out of my way if it knows what's good for
it. ;^)

> 
> It says clearly: Here, at this point in the program, you should be
> prepared to deal with values of such and such types. If you cannot
> handle all of those, better be sure you really want it that way, or
> do something different.

You can do this, if, and when you want to in lisp. 
The important difference is, in lisp, you do it at the right time,
when you're ready, when you've actually chosen the right data
representation and algorithms, and not before.

Static typing is a form of premature optimization.

I think something that the static typing advocates don't know is that
the number of runtime type errors (as opposed to runtime program logic
errors) in real world projects in lisp or smalltalk is actually tiny.
Static typing is an optimization for an uncommon case (type errors not
discovered until runtime), at the expense of programmer time in the
early stages of development. This is a bad trade off.
From: ··········@ii.uib.no
Subject: Re: More static type fun.
Date: 
Message-ID: <egad7kl7l9.fsf@sefirot.ii.uib.no>
·······@mediaone.net (Raffael Cavallaro) writes:

> I think something that the static typing advocates don't know is that
> the number of runtime type errors (as opposed to runtime program logic
> errors) in real world projects in lisp or smalltalk is actually tiny.

I think that something the dynamic typing advocates don't realize is
that static typing helps avoid many logic errors.

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Paul F. Dietz
Subject: Re: More static type fun.
Date: 
Message-ID: <7KSdnYlbrYuCKwKiRVn-iQ@dls.net>
··········@ii.uib.no wrote:

> I think that something the dynamic typing advocates don't realize is
> that static typing helps avoid many logic errors.

The counterargument is that there are many logic errors that static
typing does not catch.  To catch *those*, you need to test your code
adequately, and that testing will also catch most of the errors
that static typing would have caught.

	Paul
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <3F9FD141.7000800@ps.uni-sb.de>
Paul F. Dietz wrote:
> ··········@ii.uib.no wrote:
> 
>> I think that something the dynamic typing advocates don't realize is
>> that static typing helps avoid many logic errors.
> 
> The counterargument is that there are many logic errors that static
> typing does not catch.  To catch *those*, you need to test your code
> adequately, and that testing will also catch most of the errors
> that static typing would have caught.

The counterargument to that old argument is: if you know how to turn the 
type system to your advantage it will cut down the number of error 
"dimensions" significantly. Since the difficulty of localizing errors 
increases exponentially with the number of dimensions, that is not at all 
a negligible advantage.

	- Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
  as kids, we would all be running around in darkened rooms, munching
  magic pills, and listening to repetitive electronic music."
  - Kristian Wilson, Nintendo Inc.
From: ··········@ii.uib.no
Subject: Re: More static type fun.
Date: 
Message-ID: <egekwvw35l.fsf@vipe.ii.uib.no>
Andreas Rossberg <········@ps.uni-sb.de> writes:

>> The counterargument is that there are many logic errors that static
>> typing does not catch.  To catch *those*, you need to test your code
>> adequately, and that testing will also catch most of the errors
>> that static typing would have caught.

> The counterargument to that old argument is: if you know how to turn
> the type system to your advantage it will cut down the number of error
> "dimensions" significantly.

And it will also make it easier to get the tests correct. :-)

I have certainly no objection to test-first and extensive testing, but
I think this doesn't really eliminate or even significantly diminish
the benefits of a good type system.

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Darius
Subject: Re: More static type fun.
Date: 
Message-ID: <20031030033324.00003c2a.ddarius@hotpop.com>
On 30 Oct 2003 09:17:10 +0100
··········@ii.uib.no wrote:

> Andreas Rossberg <········@ps.uni-sb.de> writes:
> 
> >> The counterargument is that there are many logic errors that static
> >> typing does not catch.  To catch *those*, you need to test your
> >> code adequately, and that testing will also catch most of the errors
> >> that static typing would have caught.
> 
> > The counterargument to that old argument is: if you know how to turn
> > the type system to your advantage it will cut down the number of
> > error "dimensions" significantly.
> 
> And it will also make it easier getting the tests correct. :-)
> 
> I have certainly no objection to test-first and extensive testing, but
> I think this doesn't really eliminate or even significantly diminish
> the benefits of a good type system.

What?!  You don't have unit tests for your unit tests?!
From: Jon S. Anthony
Subject: Re: More static type fun.
Date: 
Message-ID: <m3ekwwvsz9.fsf@rigel.goldenthreadtech.com>
··········@ii.uib.no writes:

> ·······@mediaone.net (Raffael Cavallaro) writes:
> 
> > I think something that the static typing advocates don't know is that
> > the number of runtime type errors (as opposed to runtime program logic
> > errors) in real world projects in lisp or smalltalk is actually tiny.
> 
> I think that something the dynamic typing advocates don't realize is
> that static typing helps avoid many logic errors.

Having done a lot of both, I disagree with this assertion.  In
practice, IME, Ketil's assertion has never held, and Raffael's
assertion is in exact alignment with my experience.  While YMMV, I
think this sort of experience is typical for the "dynamic" camp.

/Jon
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1smlb20gj.fsf@tti5.uchicago.edu>
·········@rcn.com (Jon S. Anthony) writes:

> ··········@ii.uib.no writes:
> 
> > I think that something the dynamic typing advocates don't realize is
> > that static typing helps avoid many logic errors.
> 
> Having done a lot of both, I disagree with this assertion.  In
> practice, IME, Ketil's assertion has never held, and Raffael's
> assertion is in exact alignment with my experience.  While YMMV, I
> think this sort of experience is typical for the "dynamic" camp.

I don't know about "typical" since I can only speak for myself.  But
having been in the "dynamic" camp myself for long enough, I can say
that having had exactly the experience that Ketil describes is what
made me leave it.

Can you give a short rundown on what exactly you have done with static
types?  Maybe knowing that would clear up why you haven't had the
experience that every other "static camper" seems to enjoy.  What I
learned over the years is that it takes quite a bit of experience to
effectively harness the power of static type checking for containing
logical errors.  This is not something that happens immediately, and it
is even harder if not outright impossible if one is not willing to
adjust one's programming style.  You can never really understand it if
you insist that you have to keep fighting it.  (That's true for many
things in life, I guess.)

Matthias
From: Simon Helsen
Subject: Re: More static type fun.
Date: 
Message-ID: <Pine.SOL.4.44.0310291642160.4865-100000@crete.uwaterloo.ca>
On 29 Oct 2003, Matthias Blume wrote:

>Can you give a short rundown on what exactly you have done with static
>types?  Maybe knowing that would clear up why you haven't had the
>experience that every other "static camper" seems to enjoy.  What I
>learned over the years is that it takes quite a bit of experience to
>effectively harness the power of static type checking for containing
>logical errors.  This is not something that happens immediately, and it
>is even harder if not outright impossible if one is not willing to
>adjust one's programming style.  You can never really understand it if
>you insist that you have to keep fighting it.  (That's true for many
>things in life, I guess.)

I do not always like Matthias's style of arguing ;-), but I must agree
here. I have seen some die-hard dynamic typing gurus code in an ML-style
type system and it is remarkable how they actually 'fight' the types. I
firmly believe that for a big class of problems, a powerful ML-style or
Haskell-style type system is good enough if you are willing to shape your
abstractions into the type system. There certainly are a few situations
where it is not entirely obvious how to combine some abstraction
mechanisms (e.g. OO-inheritance or others mentioned in this thread) with
static typing (which does not necessarily preclude it either). E.g. OCaml
combines ML-style typing with class-based OO, but it does not allow
downcasting because that is not safe with pure static typing (you need
some dynamic type checks). Whether that renders static types per se
entirely useless is a different matter IMO. Oh, and I should add that
static typing caused me some major nightmares when used to encode typed
program generator generation, but that, of course, is quite a
specialist area (and again, a more powerful weakly dependent type system
instead of just ML-typing would have solved most of the problems)

	S
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <aeb7ff58.0310292016.35628003@posting.google.com>
Simon Helsen <·······@computer.org> wrote in message news:<·······································@crete.uwaterloo.ca>...

> I firmly believe that for a big class of problems, a powerful ML-style or
> Haskell-style type system is good enough if you are willing to shape your
> abstractions into the type system.

Once you've decided to alter *your* abstractions to fit the demands of
a compiler, alarms should be going off very loudly. Computers, and
compilers, should serve programmers' needs, conform to *our*
abstractions, not the other way around.

To have to modify my choice of abstractions just to satisfy a dumb
compiler (clever about type inferencing, maybe,  but still *waaaay*
dumb) is a complete inversion of the purpose of computers; they are
tools, they should do what *we* want, not the other way around.

Lisp gives programmers the freedom to choose whatever abstractions
suit them.

The difference between a lisp compiler, and a haskell or Ocaml
compiler, is that the lisp compiler *knows* that it's not as
intelligent as I am, and a haskell or Ocaml compiler doesn't.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m2vfq75o33.fsf@hanabi-air.shimizu.blume>
·······@mediaone.net (Raffael Cavallaro) writes:

> Simon Helsen <·······@computer.org> wrote in message news:<·······································@crete.uwaterloo.ca>...
> 
> > I firmly believe that for a big class of problems, a powerful ML-style or
> > Haskell-style type system is good enough if you are willing to shape your
> > abstractions into the type system.
> 
> Once you've decided to alter *your* abstractions to fit the demands of
> a compiler, alarms should be going off very loudly. Computers, and
> compilers, should serve programmers' needs, conform to *our*
> abstractions, not the other way around.

Maybe Simon misphrased this, which is what confuses you.  You are not
supposed to shape your abstractions for the compiler, you are supposed
to *express* them to the compiler.  In other words, you have to learn
the language and become fluent in it.  That's not at all the same as
"serving the compiler's needs".  The compiler serves our needs, and in
order to be able to do so it offers a certain interface which is the
language.  That is not at all different than programming in Lisp.

What /is/ different, though, is the ease at which you can express
abstractions.  Dynamically typed languages tend to have very few
really good abstraction facilities that are worthy of that label.

Matthias
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <oevyrajg.fsf@ccs.neu.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> What /is/ different, though, is the ease at which you can express
> abstractions.  Dynamically typed languages tend to have very few
> really good abstraction facilities that are worthy of that label.

Oh, I dunno.  How many do you need?  LAMBDA does an awful lot.
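How much LAMBDA alone can carry is worth a sketch (TypeScript as stand-in notation for closures; `Pair`, `pair`, `fst`, and `snd` are invented names): a pair "data structure" built from nothing but functions, Church-style.

```typescript
// A pair is represented as a function awaiting a selector; the two
// components live only in the closure's captured variables.
type Pair<A, B> = <R>(sel: (a: A, b: B) => R) => R;

const pair = <A, B>(a: A, b: B): Pair<A, B> => sel => sel(a, b);
const fst = <A, B>(p: Pair<A, B>): A => p((a, _b) => a);
const snd = <A, B>(p: Pair<A, B>): B => p((_a, b) => b);

const p = pair(1, "one");
```

The same trick yields lists, booleans, and numerals, which is roughly what "LAMBDA does an awful lot" is getting at.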
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <aeb7ff58.0310302056.493e5a94@posting.google.com>
Matthias Blume <····@my.address.elsewhere> wrote in message news:<··············@hanabi-air.shimizu.blume>...

> What /is/ different, though, is the ease at which you can express
> abstractions.  Dynamically typed languages tend to have very few
> really good abstraction facilities that are worthy of that label.

This is pretty close to being a troll. Just in case you're serious:

lambda (already mentioned by Joe Marshall)
map & co. (mapcar, mapcon, mapcan, mapl, maplist, maphash, map-into)
apply 
reduce
remove, remove-if, remove-if-not
unwind-protect
subsetp
rassoc-if, rassoc-if-not

setf, psetf, shiftf (yes, there *are* abstractions that involve side
effects)
delete, delete-if, delete-if-not

the with- macros:
with-accessors
with-open-file 
with-open-stream 
with-output-to-string 
with-slots, etc.

CLOS:
generic functions
multimethods
method combination
the CLOS MOP

and I'm sure others can point out more.

You're kidding, right? If not, please take a good look at the
hyperspec before you claim that lisp doesn't have good abstraction
facilities. BTW, the functional abstractions beloved of the FP crowd
are really *not* the only useful kind of abstraction, you know.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m2u15qhtos.fsf@hanabi-air.shimizu.blume>
·······@mediaone.net (Raffael Cavallaro) writes:

> Matthias Blume <····@my.address.elsewhere> wrote in message news:<··············@hanabi-air.shimizu.blume>...
> 
> > What /is/ different, though, is the ease at which you can express
> > abstractions.  Dynamically typed languages tend to have very few
> > really good abstraction facilities that are worthy of that label.
> 
> This is pretty close to being a troll. Just in case you're serious:
> 
> lambda (already mentioned by Joe Marshall)
> map & co. (mapcar, mapcon, mapcan, mapl, maplist, maphash, map-into)
> apply 
> reduce
> remove, remove-if, remove-if-not
> unwind-protect
> subsetp
> rassoc-if, rassoc-if-not
> 
> setf, psetf, shiftf (yes, there *are* abstractions that involve side
> effects)
> delete, delete-if, delete-if-not
> 
> the with- macros:
> with-accessors
> with-open-file 
> with-open-stream 
> with-output-to-string 
> with-slots, etc.
> 
> CLOS:
> generic functions
> multimethods
> method combination
> the CLOS MOP
> 
> and I'm sure others can point out more.
> 
> You're kidding, right? If not, please take a good look at the
> hyperspec before you claim that lisp doesn't have good abstraction
> facilities.

I think you don't know what an abstraction is.  (None of the stuff
that you list -- except lambda (but that is taken by Joe already) --
has much at all to do with abstraction.)

> BTW, the functional abstractions beloved of the FP crowd
> are really *not* the only useful kind of abstraction, you know.

Indeed.  And that is precisely the problem.

Matthias
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <aeb7ff58.0310310704.797bb2b6@posting.google.com>
Matthias Blume <····@my.address.elsewhere> wrote in message news:<··············@hanabi-air.shimizu.blume>...
> I think you don't know what an abstraction is.

Quite the smug one, aren't we. But I think you don't know that the
English word abstraction has a much broader meaning than you realize,
even in the context of programming languages.

>  (None of the stuff
> that you list -- except lambda (but that is taken by Joe already) --
> has much at all to do with abstraction.)

Well, assuming now that you mean function abstraction *only*, then,
yes they do. Function abstraction is pretty useless if you can't apply
the functions you've abstracted, and all of these items:

map & co. (mapcar, mapcon, mapcan, mapl, maplist, maphash, map-into)
apply 
reduce
remove, remove-if, remove-if-not
rassoc-if, rassoc-if-not
delete, delete-if, delete-if-not

take functional arguments, which make them more specific, more
convenient, built-in uses of function abstraction and application with
lambda, or named functions. So, yes, lisp also has facilities for
function abstraction and application.

There are, however, other types of abstraction. Just because you think
everything should be done with functional abstractions, doesn't mean
that that is the best, or the most natural, or the most convenient
form of abstraction. Lisp provides multiple paradigms, which is why
lisp is more expressively powerful than a pure functional language. It
gives the programmer a complete toolkit, instead of insisting that he
do everything one way, even though a different paradigm would fit the
problem better.

Raf
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1wualz6ub.fsf@tti5.uchicago.edu>
·······@mediaone.net (Raffael Cavallaro) writes:

> Matthias Blume <····@my.address.elsewhere> wrote in message news:<··············@hanabi-air.shimizu.blume>...
> > I think you don't know what an abstraction is.
> 
> Quite the smug one, aren't we. But I think you don't know that the
> English word abstraction has a much broader meaning than you realize,
> even in the context of programming languages.

In the context of programming languages it has a fairly specific
meaning -- which I think I know well enough.  As far as the English
language is concerned, I guess I have to take your word for it since I
am not a native speaker.  But let's see... dict.org ...  Webster's
Revised Unabridged Dictionary ...  abstraction:

Abstraction \Ab*strac"tion\, n. [Cf. F. abstraction. See Abstract , a.]
     1. The act of abstracting, separating, or withdrawing, or the
        state of being withdrawn; withdrawal.
  
              A wrongful abstraction of wealth from certain
              members of the community.             --J. S. Mill.
  
     2. (Metaph.) The act process of leaving out of consideration
        one or more properties of a complex object so as to attend
        to others; analysis. Thus, when the mind considers the
        form of a tree by itself, or the color of the leaves as
        separate from their size or figure, the act is called
        abstraction. So, also, when it considers whiteness,
        softness, virtue, existence, as separate from any
        particular objects.
  
     Note: Abstraction is necessary to classification, by which
           things are arranged in genera and species. We separate
           in idea the qualities of certain objects, which are of
           the same kind, from others which are different, in
           each, and arrange the objects having the same
           properties in a class, or collected body.
  
                 Abstraction is no positive act: it is simply the
                 negative of attention.             --Sir W.
                                                    Hamilton.
  
     3. An idea or notion of an abstract, or theoretical nature;
        as, to fight for mere abstractions.
  
     4. A separation from worldly objects; a recluse life; as, a
        hermit's abstraction.
  
     5. Absence or absorption of mind; inattention to present
        objects.
  
     6. The taking surreptitiously for one's own use part of the
        property of another; purloining. [Modern]
  
     7. (Chem.) A separation of volatile parts by the act of
        distillation. --Nicholson. 

All of these seem to chime well with what I have in mind when I think
of "abstraction".  (I also think that the English word "abstraction"
and the German word "Abstraktion" are fairly close in meaning, so my
intuition even as a non-native speaker should be ok.)

> 
> >  (None of the stuff
> > that you list -- except lambda (but that is taken by Joe already) --
> > has much at all to do with abstraction.)
> 
> Well, assuming now that you mean function abstraction *only*, then,
> yes they do. Function abstraction is pretty useless if you can't apply
> the functions you've abstracted, and all of these items:
> 
> map & co. (mapcar, mapcon, mapcan, mapl, maplist, maphash, map-into)
> apply 
> reduce
> remove, remove-if, remove-if-not
> rassoc-if, rassoc-if-not
> delete, delete-if, delete-if-not

As Fergus already pointed out, these are all concrete functions (which
you can see as the outcome of some earlier abstraction process).  They
are not abstraction facilities, though.

Matthias
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <aeb7ff58.0310311952.32fc5281@posting.google.com>
Matthias Blume <····@my.address.elsewhere> wrote in message news:<··············@tti5.uchicago.edu>...
> ·······@mediaone.net (Raffael Cavallaro) writes:
> 
> > Matthias Blume <····@my.address.elsewhere> wrote in message news:<··············@hanabi-air.shimizu.blume>...

> In the context of programming languages it has a fairly specific
> meaning -- which I think I know well enough.

No, you are confusing the very limited, and specific notion of
_function_abstraction_, with the more general concept of abstraction.
You didn't write "_function_abstraction_ facilities," you wrote
"abstraction facilities." The first, clearly you are familiar with.
However, the concept of abstraction in programming languages refers to
any facilities for doing exactly what you posted, from a dictionary
definition you cut and pasted, but apparently didn't take the time to
read and understand:

"The act [or] process of leaving out of consideration one or more
properties of a complex object so as to attend to others."

This is a pretty good description of the purpose of macro facilities -
they allow the user of the macro to attend to a higher level
abstraction without having to consider lower level details. That's why
I listed the with- macros among lisp's facilities for abstraction.

It is also a good description of OO facilities - they allow the user
of a class or its objects to attend to the higher level abstraction of
the object's slots, and the methods defined on the object, without
having to deal with other properties, such as method implementation
details, or the memory layout of slots. That is why I listed CLOS
among lisp's facilities for abstraction.

> As Fergus already pointed out, these are all concrete functions (which
> you can see as the outcome of some earlier abstraction process).  They
> are not abstraction facilities, though.

Abstracted functions are useless unless you can call or apply them.
The functions I listed are facilities for applying/calling functions,
both named and anonymously abstracted, and so, are facilities for
abstraction.

Of course, we lispers don't need to limit ourselves to
_function_abstraction_ as our only form of abstraction. We have
macros, both as part of the standard, and which we can write, and
other abstraction facilities, such as CLOS, so we are not limited to a
single, narrow kind of abstraction.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m2r80s90t0.fsf@hanabi-air.shimizu.blume>
·······@mediaone.net (Raffael Cavallaro) writes:

> [ ... ] from a dictionary
> definition you cut and pasted, but apparently didn't take the time to
> read and understand:

Consider our discussion closed indefinitely.
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <raffaelcavallaro-9F13C6.23224901112003@netnews.attbi.com>
In article <··············@hanabi-air.shimizu.blume>,
 Matthias Blume <····@my.address.elsewhere> wrote:

> ·······@mediaone.net (Raffael Cavallaro) writes:
> 
> > [ ... ] from a dictionary
> > definition you cut and pasted, but apparently didn't take the time to
> > read and understand:
> 
> Consider our discussion closed indefinitely.

Only because what I wrote is true. You use the term "abstraction" to 
mean "function abstraction" only. That is wrong, even in the context 
of programming languages.

I gave several examples of types of abstraction that are *not* function 
abstraction, including macro facilities, and OO facilities.

There are other facilities for abstraction in programming languages 
besides function abstraction. Lisp gives us access to most all of them.
From: thomas
Subject: Re: More static type fun.
Date: 
Message-ID: <3c5586ca.0311030726.777a0770@posting.google.com>
Matthias Blume <····@my.address.elsewhere> wrote in message news:<··············@hanabi-air.shimizu.blume>...
> ·······@mediaone.net (Raffael Cavallaro) writes:
> 
> > Matthias Blume <····@my.address.elsewhere> wrote in message news:<··············@hanabi-air.shimizu.blume>...
> > 
> > > What /is/ different, though, is the ease at which you can express
> > > abstractions.  Dynamically typed languages tend to have very few
> > > really good abstraction facilities that are worthy of that label.
> > 
> > This is pretty close to being a troll. Just in case you're serious:
> > 
> > lambda (already mentioned by Joe Marshall)
> > map & co. (mapcar, mapcon, mapcan, mapl, maplist, maphash, map-into)
> > apply 
> > reduce
> > remove, remove-if, remove-if-not
> > unwind-protect
> > subsetp
> > rassoc-if, rassoc-if-not
> > 
> > setf, psetf, shiftf (yes, there *are* abstractions that involve side
> > effects)
> > delete, delete-if, delete-if-not
> > 
> > the with- macros:
> > with-accessors
> > with-open-file 
> > with-open-stream 
> > with-output-to-string 
> > with-slots, etc.
> > 
> > CLOS:
> > generic functions
> > multimethods
> > method combination
> > the CLOS MOP
> > 
> > and I'm sure others can point out more.
> > 
> > You're kidding, right? If not, please take a good look at the
> > hyperspec before you claim that lisp doesn't have good abstraction
> > facilities.
> 
> I think you don't know what an abstraction is.  (None of the stuff
> that you list -- except lambda (but that is taken by Joe already) --
> has much at all to do with abstraction.)
> 
> > BTW, the functional abstractions beloved of the FP crowd
> > are really *not* the only useful kind of abstraction, you know.
> 
> Indeed.  And that is precisely the problem.
> 
> Matthias


Then perhaps you would consider lisp's support for the concept of
object identity to be a valid abstraction facility?
(see the concurrent thread on this, particularly message
<··············@vserver.cs.uit.no> )

Type systems provide support for distinguishing between things that
are otherwise semantically equivalent, as does object identity.
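That parallel can be made concrete with a "branded type" sketch (TypeScript notation; `Meters`, `Feet`, and the helper names are invented): two types with the identical runtime representation, kept apart only by the checker, much as identity distinguishes two EQUAL but non-EQ lisp objects.

```typescript
// Both types are plain numbers at runtime; the phantom __brand field
// exists only in the type system, never in the running program.
type Meters = number & { readonly __brand: "m" };
type Feet = number & { readonly __brand: "ft" };

const meters = (n: number): Meters => n as Meters;
const feet = (n: number): Feet => n as Feet;

// Accepts Meters only: passing feet(3) here is a compile-time error,
// even though both arguments would be indistinguishable at runtime.
const doubleMeters = (m: Meters): Meters => meters(m * 2);
```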

Together with symbols, macros and so on, abstracting lispers seem to
get by...
Of course (being lisp) this is all a lot more dynamic and transitory
than the abstraction facilities provided by static type systems.

(I think this makes sense, but feel free to shred me if it doesn't.
Possibly we have strayed from abstraction into modelling.)

thomas
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fa26b49$1@news.unimelb.edu.au>
·······@mediaone.net (Raffael Cavallaro) writes:

>Matthias Blume <····@my.address.elsewhere> wrote in message news:<··············@hanabi-air.shimizu.blume>...
>
>> What /is/ different, though, is the ease at which you can express
>> abstractions.  Dynamically typed languages tend to have very few
>> really good abstraction facilities that are worthy of that label.
>
>This is pretty close to being a troll. Just in case you're serious:
>
>lambda (already mentioned by Joe Marshall)

That's an abstraction facility.

>map & co. (mapcar, mapcon, mapcan, mapl, maplist, maphash, map-into)
>apply 
>reduce
>remove, remove-if, remove-if-not
>unwind-protect
>subsetp
>rassoc-if, rassoc-if-not
>
>setf, psetf, shiftf (yes, there *are* abstractions that involve side
>effects)
>delete, delete-if, delete-if-not

As you indicate yourself, those are all individual abstractions, not
abstraction facilities.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Coby Beck
Subject: Re: More static type fun.
Date: 
Message-ID: <bo180n$g9c$1@otis.netspace.net.au>
"Fergus Henderson" <···@cs.mu.oz.au> wrote in message
···············@news.unimelb.edu.au...
> ·······@mediaone.net (Raffael Cavallaro) writes:
>
> >Matthias Blume <····@my.address.elsewhere> wrote in message
news:<··············@hanabi-air.shimizu.blume>...
> >
> >> What /is/ different, though, is the ease at which you can express
> >> abstractions.  Dynamically typed languages tend to have very few
> >> really good abstraction facilities that are worthy of that label.
> >
> >This is pretty close to being a troll. Just in case you're serious:
> >
> >lambda (already mentioned by Joe Marshall)
>
> That's an abstraction facility.
>
> >map & co. (mapcar, mapcon, mapcan, mapl, maplist, maphash, map-into)
> >apply
> >reduce
> >remove, remove-if, remove-if-not
> >unwind-protect
> >subsetp
> >rassoc-if, rassoc-if-not
> >
> >setf, psetf, shiftf (yes, there *are* abstractions that involve side
> >effects)
> >delete, delete-if, delete-if-not
>
> As you indicate yourself, those are all individual abstractions, not
> abstraction facilities.

I really don't know what you have in mind with the term abstraction
facility, and sorry if I should just STFW, but what about defmacro, defun,
defclass then?  If these are not abstraction facilities either then I would
be curious about your definition.  I would also be willing to concede
Matthias's point that Lisp has only one abstraction facility (according to
your preferred definition) but would no longer care ;)

-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fa64156$1@news.unimelb.edu.au>
"Coby Beck" <·····@mercury.bc.ca> writes:

>"Fergus Henderson" <···@cs.mu.oz.au> wrote:
>> As you indicate yourself, those are all individual abstractions, not
>> abstraction facilities.
>
>I really don't know what you have in mind with the term abstraction
>facility, and sorry if I should just STFW, but what about defmacro, defun,
>defclass then?

Sure, I would consider those to be abstraction facilities.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Gareth McCaughan
Subject: Re: More static type fun.
Date: 
Message-ID: <87znfgd8lz.fsf@g.mccaughan.ntlworld.com>
Matthias Blume wrote:

> What /is/ different, though, is the ease at which you can express
> abstractions.  Dynamically typed languages tend to have very few
> really good abstraction facilities that are worthy of that label.

What are your criteria for real goodness and for worthiness
of the label "abstraction facility"? Can you give some
examples of abstraction facilities in your favourite
statically typed languages that have no worthy counterparts
in dynamically typed languages?

-- 
Gareth McCaughan
.sig under construc
From: Gareth McCaughan
Subject: Re: More static type fun.
Date: 
Message-ID: <87k76e7emp.fsf@g.mccaughan.ntlworld.com>
I wrote:

> Matthias Blume wrote:
> 
> > What /is/ different, though, is the ease at which you can express
> > abstractions.  Dynamically typed languages tend to have very few
> > really good abstraction facilities that are worthy of that label.
> 
> What are your criteria for real goodness and for worthiness
> of the label "abstraction facility"? Can you give some
> examples of abstraction facilities in your favourite
> statically typed languages that have no worthy counterparts
> in dynamically typed languages?

... Hello? (The questions aren't intended as a trap
or anything like that, in case that's why Matthias
hasn't answered. I'm just curious.)

-- 
Gareth McCaughan
.sig under construc
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m11xsm5uf9.fsf@tti5.uchicago.edu>
Gareth McCaughan <·····@g.local> writes:

> I wrote:
> 
> > Matthias Blume wrote:
> > 
> > > What /is/ different, though, is the ease at which you can express
> > > abstractions.  Dynamically typed languages tend to have very few
> > > really good abstraction facilities that are worthy of that label.
> > 
> > What are your criteria for real goodness and for worthiness
> > of the label "abstraction facility"? Can you give some
> > examples of abstraction facilities in your favourite
> > statically typed languages that have no worthy counterparts
> > in dynamically typed languages?
> 
> ... Hello? (The questions aren't intended as a trap
> or anything like that, in case that's why Matthias
> hasn't answered. I'm just curious.)

To me, abstraction facilities give you a way of drawing a separating
line between parts of a program, with certain details being visible on
one side of the line and hidden on the other.

LAMBDA is an abstraction facility in this sense because it can hide
some details in a closure, providing access to these details to the
code of the LAMBDA itself while the rest of the program can do no more
than call the function.
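
To make the closure point concrete, here is a minimal sketch in Python
(chosen only because this thread began on comp.lang.python; make_counter
and the names inside it are invented for illustration):

```python
def make_counter(start=0):
    # 'count' lives only in this closure.  The rest of the program sees
    # nothing but the returned function: it can call it, but it cannot
    # read or rebind 'count' directly.
    count = start

    def increment():
        nonlocal count
        count += 1
        return count

    return increment

counter = make_counter()
```

Each call to the returned function advances the hidden state; the state
itself stays behind the function boundary.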

The ML module system is a better abstraction facility than simply
LAMBDA because it gives more freedom on how to draw the separating
line between the two program parts.  An abstraction can encompass
multiple (abstract) types and multiple operations on values of these
types. The values in the abstract types do not have to be functions
but can be represented by values of any other (concrete or abstract)
type.

Classes in many OO languages are somewhere in between: they let you
have more than one operation but do not let you have more than one
type.  To get the effect of having multiple types in the same
abstraction you need kludgy things like C++ "friends", etc.

I do not think of macros as abstraction facilities but rather as
"abbreviation facilities" (although Scheme's hygienic macros probably
can be made to go some of the remaining distance).  There is no
difference between an instance of a macro and its expansion: I could
have written the expansion by hand and would have gotten the exact
same program.  In other words, the "abstraction" is completely
transparent -- no details are really hidden (except from the
programmer's eye).
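
To see what "completely transparent" means here, consider a crude sketch
(Python; a textual code generator stands in for a macro expander, and the
names are invented):

```python
# The "expansion": source text produced by a code generator.
generated = "def square(x):\n    return x * x\n"

# The hand-written version: byte-for-byte the same source text.
handwritten = "def square(x):\n    return x * x\n"

namespace = {}
exec(generated, namespace)  # running either text yields the same program
square = namespace["square"]
```

Whoever can read the expansion has the whole story; nothing remains
hidden behind the generator itself.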

The confusion we are having probably arises from the fact that human
programmers can take abbreviation facilities and use them as if they
were abstraction facilities -- simply by being disciplined.  Over time
I have come to believe that such a requirement of discipline does not
scale very well unless there is linguistic support for enforcing it.

Anyway, flame away.  The real reason why I did not answer (and did not
intend to answer) was that I don't see this discussion leading
anywhere.

Matthias
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-0511031500180001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@tti5.uchicago.edu>, Matthias Blume
<····@my.address.elsewhere> wrote:

> There is no
> difference between an instance of a macro and its expansion: I could
> have written the expansion by hand and would have gotten the exact
> same program.

But a macro expansion in general is the result of running an (arbitrary)
program.  So your claim is equivalent to the claim that "There is no
difference between a program and the result of running that program.  I
could have written the result down by hand and gotten the exact same
result."

I hope it is self-evident how absurd that is.

E.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m2d6c6e06v.fsf@wireless-5-198-70.uchicago.edu>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@tti5.uchicago.edu>, Matthias Blume
> <····@my.address.elsewhere> wrote:
> 
> > There is no
> > difference between an instance of a macro and its expansion: I could
> > have written the expansion by hand and would have gotten the exact
> > same program.
> 
> But a macro expansion in general is the result of running an (arbitrary)
> program.  So your claim is equivalent to the claim that "There is no
> difference between a program and the result of running that program.  I
> could have written the result down by hand and gotten the exact same
> result."

That is right.  The end result is precisely the same.

> I hope it is self-evident how absurd that is.

Not to me.  Sure, code generators such as Lisp macros or tools like
yacc or lex are a useful and powerful idea.  They just don't happen to
be abstraction facilities in their own right.  When abstraction comes
into the picture, it usually does so through some other abstraction
facility which gets paired with the code generating mechanism.

Matthias
From: Thomas A. Russ
Subject: Re: More static type fun.
Date: 
Message-ID: <ymin0b9wch6.fsf@sevak.isi.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> 
> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <··············@tti5.uchicago.edu>, Matthias Blume
> > <····@my.address.elsewhere> wrote:
> > 
> > > There is no
> > > difference between an instance of a macro and its expansion: I could
> > > have written the expansion by hand and would have gotten the exact
> > > same program.

But by that criterion, one would have to rule out functions and lambdas
as abstraction mechanisms as well, since one could (by hand) write the
body of the function's code into place.  In fact, inlining of functions
does exactly that.

> > But a macro expansion in general is the result of running an (arbitrary)
> > program.  So your claim is equivalent to the claim that "There is no
> > difference between a program and the result of running that program.  I
> > could have written the result down by hand and gotten the exact same
> > result."
> 
> That is right.  The end result is precisely the same.
> 
> > I hope it is self-evident how absurd that is.
> 
> Not to me.  Sure, code generators such as Lisp macros or tools like
> yacc or lex are a useful and powerful idea.  They just don't happen to
> be abstraction facilities in their own right.  When abstraction comes
> into the picture, it usually does so through some other abstraction
> facility which gets paired with the code generating mechanism.
> 
> Matthias

This really is confusing to me, and to other Lispers, I would imagine.
Our take on the world is that macros provide a very powerful abstraction
tool.  The key insight is that we view the purpose of abstraction from
the point of view of the PROGRAMMER, not the COMPILER.

By using macros to develop specialized languages for solving problems,
we interpose a conceptual barrier between what the machine needs to
manipulate to get the result and the parts of the process that we, as
programmers need to think about when using that abstraction.  From the
point of view of the source code, you can't even tell the difference
between the use of a macro and the use of a function.  They look exactly
alike.  That is why we can't understand how one can maintain that
functions provide abstraction but macros do not.

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m24qxgeu9i.fsf@wireless-5-198-70.uchicago.edu>
···@sevak.isi.edu (Thomas A. Russ) writes:

> But by that criterion, one would have to rule out functions and lambdas
> as abstraction mechanisms as well, since one could (by hand) write the
> body of the function's code into place.  In fact, inlining of functions
> does exactly that.

You cannot write the body of every function that way.  Not if there is
local state involved, to name one example.

> The key insight is that we view the purpose of abstraction from
> the point of view of the PROGRAMMER, not the COMPILER.

So do I.

Matthias
From: Ingvar Mattsson
Subject: Re: More static type fun.
Date: 
Message-ID: <87islws443.fsf@gruk.tech.ensign.ftech.net>
Matthias Blume <····@my.address.elsewhere> writes:

> ···@sevak.isi.edu (Thomas A. Russ) writes:
> 
> > But by that criterion, one would have to rule out functions and lambdas
> > as abstraction mechanisms as well, since one could (by hand) write the
> > body of the function's code into place.  In fact, inlining of functions
> > does exactly that.
> 
> You cannot write the body of every function that way.  Not if there is
> local state involved, to name one example.

OK, I shall give you a small example of what I'd term "macros as
abstraction facility". The writing of this was, alas, inspired by what
looked like a homework question a while back (though I'd guess it would
be spotted as picked up from somewhere else if handed in).

This extends the language to have a (rudimentary) facility for
generating solutions to cryptarithmetic problems (it currently uses
brute force; one could probably extend it to do some clever reasoning,
but I leave that as an exercise for the interested) in base 10 (base
selectability left as an exercise for the interested) and code
introspection.

It's probably non-trivial to implement using HOFs, without introducing
an explicit parameter list somewhere.

First, here is a possible use:

(defun print-solutions ()
  (crypto-loop
     (= (+ (cryptoval s e n d) (cryptoval m o r e))
           (cryptoval m o n e y))))


Here is the code implementing the whole thing:

(defmacro crypto-loop (&body body)
  (let ((syms (crypto-extract-symbols body)))
    (loop for sym in syms
          for rv = `(dotimes (,sym 10)
                          (when (/= ,@syms)
                               (when (progn ,@body)
                                 (crypto-report ,syms))))
              then `(dotimes (,sym 10) ,rv)
          finally (return rv))))

(defun crypto-extract-symbols-1 (form)
  (cond ((null form) nil)
        ((and (consp form) (eql (car form) 'cryptoval)) (cdr form))
        ((consp form) (append (crypto-extract-symbols (car form))
                              (crypto-extract-symbols (cdr form))))
        (t nil)))

(defun crypto-extract-symbols (form)
  (crypto-make-distinct (crypto-extract-symbols-1 form) nil))

(defun crypto-make-distinct (list collection)
  (cond ((null list) collection)
        ((member (car list) collection) (crypto-make-distinct (cdr list) collection))
        (t (crypto-make-distinct (cdr list) (cons (car list) collection)))))

(defmacro cryptoval (&rest form)
  (reduce #'(lambda (a b) `(+ (* 10 ,a) ,b)) form))

(defmacro crypto-report (vars)
  `(progn
     (format t "Values:~%")
     ,@(loop for v in vars collect `(format t "  Value of ~s is ~s~%" ',v ,v))
     (format t "~%")))


> > The key insight is that we view the purpose of abstraction from
> > the point of view of the PROGRAMMER, not the COMPILER.
> 
> So do I.


-- 
When C++ is your hammer, everything looks like a thumb
	Latest seen from Steven M. Haflich, in c.l.l
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: More static type fun.
Date: 
Message-ID: <pan.2003.11.07.17.01.23.736882@knm.org.pl>
On Fri, 07 Nov 2003 13:25:00 +0000, Ingvar Mattsson wrote:

> It's probably non-trivial to implement using HOFs, without introducing
> an explicit parameter list somewhere.

It was not that hard. Here is a version in Ruby (perhaps some parts could
be simplified, I don't have much experience in Ruby):

def digitsForSyms(syms, vals, i, &proc)
  if i < syms.size
    for val in 0..9 do
      dupl = false
      for j in 0..(i-1) do
        if vals[syms[j]] == val then dupl = true; break end
      end
      next if dupl
      vals[syms[i]] = val
      digitsForSyms(syms, vals, i+1, &proc)
    end
  else
    proc.call(vals)
  end
end

def cryptoLoop(&proc)
  allSyms = ""
  proc.call(proc{|syms| allSyms << syms; 0})
  digitsForSyms(allSyms.split("").uniq, {}, 0) do |vals|
    getValue = proc do |syms|
      val = 0
      syms.each_byte {|sym| val = val*10 + vals[sym.chr]}
      val
    end
    if proc.call(getValue)
      puts "Values: " + vals.map{|sym,val| "#{sym}=#{val}"}.join(", ")
    end
  end
end

cryptoLoop do |val|
  val["send"] + val["more"] == val["money"]
end

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Ingvar Mattsson
Subject: Re: More static type fun.
Date: 
Message-ID: <87islway8k.fsf@gruk.tech.ensign.ftech.net>
Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> writes:

> On Fri, 07 Nov 2003 13:25:00 +0000, Ingvar Mattsson wrote:
> 
> > It's probably non-trivial to implement using HOFs, without introducing
> > an explicit parameter list somewhere.
> 
> It was not that hard. Here is a version in Ruby (perhaps some parts could
> be simplified, I don't have much experience in Ruby):

Erm, that uses strings instead of lexical variables. It's arguably
less trivial to compile.

//Ingvar
-- 
"No. Most Scandiwegians use the same algorithm as you Brits.
 "Ingvar is just a freak."
Stig Morten Valstad, in the Monastery
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: More static type fun.
Date: 
Message-ID: <pan.2003.11.07.20.53.08.986685@knm.org.pl>
On Fri, 07 Nov 2003 17:23:55 +0000, Ingvar Mattsson wrote:

>> It was not that hard. Here is a version in Ruby (perhaps some parts could
>> be simplified, I don't have much experience in Ruby):
> 
> Erm, that uses strings instead of lexical variables. It's arguably
> less trivial to compile.

But it's twice as short. I didn't think a Lisper would trade readability
for efficiency :-P

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Ingvar Mattsson
Subject: Re: More static type fun.
Date: 
Message-ID: <87r80ga031.fsf@gruk.tech.ensign.ftech.net>
Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> writes:

> On Fri, 07 Nov 2003 17:23:55 +0000, Ingvar Mattsson wrote:
> 
> >> It was not that hard. Here is a version in Ruby (perhaps some parts could
> >> be simplified, I don't have much experience in Ruby):
> > 
> > Erm, that uses strings instead of lexical variables. It's arguably
> > less trivial to compile.
> 
> But it's twice as short. I didn't think a Lisper would trade readability
> for efficiency :-P

*shrug* The macro solution is, btw, capable of taking any number of
forms (and can thus do funky things like "abort after first solution
found").

//ingvar
-- 
(defmacro fakelambda (args &body body) `(labels ((me ,args ,@body)) #'me))
(funcall (fakelambda (a b) (if (zerop (length a)) b (format nil "~a~a" 
 (aref a 0) (me b (subseq a 1))))) "Js nte iphce" "utaohrls akr")
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <boh8nh$640$1@newsreader2.netcologne.de>
Matthias Blume wrote:

> ···@sevak.isi.edu (Thomas A. Russ) writes:
> 
> 
>>But by that criterion, one would have to rule out functions and lambdas
>>as abstraction mechanisms as well, since one could (by hand) write the
>>body of the function's code into place.  In fact, inlining of functions
>>does exactly that.
> 
> 
> You cannot write the body of every function that way.  Not if there is
> local state involved, to name one example.

Why not, if you can push and pop stuff to and from the stack?


Pascal
From: Lauri Alanko
Subject: Re: More static type fun.
Date: 
Message-ID: <boc5du$vf$1@la.iki.fi>
Matthias Blume <····@my.address.elsewhere> virkkoi:
> I do not think of macros as abstraction facilities but rather as
> "abbreviation facilities" (although Scheme's hygienic macros probably
> can be made to go some of the remaining distance).

Indeed they can.

> There is no
> difference between an instance of a macro and its expansion: I could
> have written the expansion by hand and would have gotten the exact
> same program.  In other words, the "abstraction" is completely
> transparent -- no details are really hidden (except from the
> programmer's eye).

In eg. PLT Scheme, if a macro is defined in a module, the variables in
its body (and thus its expansions) refer to variables _inside_ that
module. And if those variables are not exported but the macro is, then
the macro is the only public interface to them, and there is no way the
programmer could access those module variables by writing the expansion
"by hand". This probably counts as an abstraction facility as you
define it.


Lauri Alanko
··@iki.fi
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m2he1ie0dj.fsf@wireless-5-198-70.uchicago.edu>
Lauri Alanko <··@iki.fi> writes:

> Matthias Blume <····@my.address.elsewhere> virkkoi:
> > I do not think of macros as abstraction facilities but rather as
> > "abbreviation facilities" (although Scheme's hygienic macros probably
> > can be made to go some of the remaining distance).
> 
> Indeed they can.
> 
> > There is no
> > difference between an instance of a macro and its expansion: I could
> > have written the expansion by hand and would have gotten the exact
> > same program.  In other words, the "abstraction" is completely
> > transparent -- no details are really hidden (except from the
> > programmer's eye).
> 
> In eg. PLT Scheme, if a macro is defined in a module, the variables in
> its body (and thus its expansions) refer to variables _inside_ that
> module. And if those variables are not exported but the macro is, then
> the macro is the only public interface to them, and there is no way the
> programmer could access those module variables by writing the expansion
> "by hand". This probably counts as an abstraction facility as you
> define it.

Indeed, it does.  And it is also precisely what I was alluding to with
my parenthetical remark.  (I made my own implementation of macros and
modules for Scheme, and they do the same thing.)  But notice that it
is actually the module that is doing the hiding here, not the macro.

Matthias
From: Jon S. Anthony
Subject: Re: More static type fun.
Date: 
Message-ID: <m37k2dqel6.fsf@rigel.goldenthreadtech.com>
Matthias Blume <····@my.address.elsewhere> writes:

> Lauri Alanko <··@iki.fi> writes:
> 
> > Matthias Blume <····@my.address.elsewhere> virkkoi:
> > > I do not think of macros as abstraction facilities but rather as
> > > "abbreviation facilities" (although Scheme's hygienic macros probably
> > > can be made to go some of the remaining distance).
> > 
> > Indeed they can.
> > 
> > > There is no
> > > difference between an instance of a macro and its expansion: I could
> > > have written the expansion by hand and would have gotten the exact
> > > same program.  In other words, the "abstraction" is completely
> > > transparent -- no details are really hidden (except from the
> > > programmer's eye).
> > 
> > In eg. PLT Scheme, if a macro is defined in a module, the variables in
> > its body (and thus its expansions) refer to variables _inside_ that
> > module. And if those variables are not exported but the macro is, then
> > the macro is the only public interface to them, and there is no way the
> > programmer could access those module variables by writing the expansion
> > "by hand". This probably counts as an abstraction facility as you
> > define it.
> 
> Indeed, it does.  And it is also precisely what I was alluding to with
> my parenthetical remark.  (I made my own implementation of macros and
> modules for Scheme, and they do the same thing.)  But notice that it
> is actually the module that is doing the hiding here, not the macro.

Since the description here is largely a name space issue, how is this
different from Common Lisp, where "package" is substituted for
"module"?

/Jon
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fa9c9e3$1@news.unimelb.edu.au>
Matthias Blume <····@my.address.elsewhere> writes:

>Gareth McCaughan <·····@g.local> writes:
>
>> Matthias Blume wrote:
>> 
>> > What /is/ different, though, is the ease at which you can express
>> > abstractions.  Dynamically typed languages tend to have very few
>> > really good abstraction facilities that are worthy of that label.
>> 
>> What are your criteria for real goodness and for worthiness
>> of the label "abstraction facility"?
>
>To me, abstraction facilities give you a way of drawing a separating
>line between parts of a program, with certain details being visible on
>one side of the line and hidden on the other.
...
>I do not think of macros as abstraction facilities but rather as
>"abbreviation facilities"

I would tend to use slightly different terminology.  In particular,
I would distinguish between "abstraction" and "encapsulation", and
I think that much of what you are calling "abstraction facilities"
is what I would call "encapsulation facilities".

However, because dynamically typed languages tend to lack any explicit
way of describing interfaces, as well as any means for enforcing such
descriptions, I would tend to consider them to be weak in both abstraction
facilities and encapsulation facilities.  IMHO having a language in which
to express the interfaces for one's abstractions is an important part of
support for abstraction.  It's not enough to just be able to express
the abstractions; in order to properly support the use of abstractions,
you need to be able to describe their interfaces, and there should be
a clear separation between interface and implementation.
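
As a rough sketch of that interface/implementation separation (Python,
so the separation here is only a convention, not the enforced kind being
argued for; Stack and ListStack are invented names):

```python
class Stack:
    """Interface: the only operations clients may rely on."""

    def push(self, item):
        raise NotImplementedError

    def pop(self):
        raise NotImplementedError


class ListStack(Stack):
    """Implementation: the backing list is a hidden representation choice."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()
```

A client written against Stack never mentions the list, so the
representation can change without touching client code -- though in a
dynamically typed language nothing stops a client from reaching for
_items anyway.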

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <boeak9$cco$1@newsreader2.netcologne.de>
Fergus Henderson wrote:

> However, because dynamically typed languages tend to lack any explicit
> way of describing interfaces, as well as any means for enforcing such
> descriptions, I would tend to consider them to be weak in both abstraction
> facilities and encapsulation facilities. 

Of course, you can always describe interfaces by just putting the 
description into comments. Your gut reaction is probably to reject 
such a way of describing interfaces, but you have already made 
the distinction between description and enforcement yourself.

There are also even more perspectives on this:

- Does enforcement mean that I definitely know what an interface 
supports, or does it mean that I also cannot add to an interface?

- Do you take IDE support into account here? In modern IDEs, it's 
perfectly possible to get an overview what features, say, a class 
supports. Does this count as a way to get a description of an interface? 
Or do you require that it be textually captured? If so, why?

> IMHO having a language in which
> to express the interfaces for one's abstractions is an important part of
> support for abstraction.  It's not enough to just be able to express
> the abstractions; in order to properly support the use of abstractions,
> you need to be able to describe their interfaces, and there should be
> a clear separation between interface and implementation.

What do you think of, say, Common Lisp's packages then? You could, for 
example, require that each CLOS class be put into its own package. Then you 
could only use the explicitly exported symbols by default.


Pascal
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fafac15$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>Fergus Henderson wrote:
>
>> However, because dynamically typed languages tend to lack any explicit
>> way of describing interfaces, as well as any means for enforcing such
>> descriptions, I would tend to consider them to be weak in both abstraction
>> facilities and encapsulation facilities. 
>
>Of course, you can always describe interfaces by just putting the 
>description into comments.

Sure.  Likewise, you can always write OOP programs in C.  However,
if you're writing a lot of programs in a particular style, it may be
better to use a language which actually _supports_ that style, rather
than one which just _allows_ it.

>There are also even more perspectives on this:
>
>- Does enforcement mean that I definitely know what an interface 
>supports, or does it mean that I also cannot add to an interface?

I think I would say yes to the former, and no to the latter.
But I'm not really sure what you're getting at here; maybe you
could elaborate.

>- Do you take IDE support into account here? In modern IDEs, it's 
>perfectly possible to get an overview what features, say, a class 
>supports. Does this count as a way to get a description of an interface? 
>Or do you require that it be textually captured? If so, why?

So long as it is easy for the client programmer to determine the
interface, and immediately clear to the library programmer whether
their changes to the source code will change the interface, then I think
that suffices.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <boobh8$ths$1@f1node01.rhrz.uni-bonn.de>
Fergus Henderson wrote:
> Pascal Costanza <········@web.de> writes:
> 
>>Fergus Henderson wrote:
>>
>>>However, because dynamically typed languages tend to lack any explicit
>>>way of describing interfaces, as well as any means for enforcing such
>>>descriptions, I would tend to consider them to be weak in both abstraction
>>>facilities and encapsulation facilities. 
>>
>>Of course, you can always describe interfaces by just putting the 
>>description into comments.
> 
> Sure.  Likewise, you can always write OOP programs in C.  However,
> if you're writing a lot of programs in a particular style, it may be
> better to use a language which actually _supports_ that style, rather
> than one which just _allows_ it.

Agreed - that's why I love macros. ;)

>>There are also even more perspectives on this:
>>
>>- Does enforcement mean that I definitely know what an interface 
>>supports, or does it mean that I also cannot add to an interface?
> 
> I think I would say yes to the former, and no to the latter.
> But I'm not really sure what you're getting at here; maybe you
> could elaborate.

see below

>>- Do you take IDE support into account here? In modern IDEs, it's 
>>perfectly possible to get an overview what features, say, a class 
>>supports. Does this count as a way to get a description of an interface? 
>>Or do you require that it be textually captured? If so, why?
> 
> So long as it is easy for the client programmer to determine the
> interface, and immediately clear to the library programmer whether
> their changes to the source code will change the interface, then I think
> that suffices.

Well, that's exactly what IDEs for dynamic languages do. They usually 
have various browsers for classes, functions, methods, packages, 
namespaces, whatever, and those browsers present views of the currently 
active definitions and are updated accordingly whenever you change 
definitions.

(I think that's one basic misunderstanding between static and dynamic 
"typers". It's not that dynamic typers don't value the advantages of 
static typing at all, it's just that they use different tools to get 
similar advantages. And vice versa.)


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Jesse Tov
Subject: Re: More static type fun.
Date: 
Message-ID: <slrnbqvn14.8mb.tov@tov.student.harvard.edu>
Pascal Costanza <········@web.de>:
> Well, that's exactly what IDEs for dynamic languages do. They usually 
> have various browsers for classes, functions, methods, packages, 
> namespaces, whatever, and those browsers present views of the currently 
> active definitions and are updated accordingly whenever you change 
> definitions.

How is this different from IDEs for "static" languages?

Jesse
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <booosg$nv5$1@newsreader2.netcologne.de>
Jesse Tov wrote:

> Pascal Costanza <········@web.de>:
> 
>>Well, that's exactly what IDEs for dynamic languages do. They usually 
>>have various browsers for classes, functions, methods, packages, 
>>namespaces, whatever, and those browsers present views of the currently 
>>active definitions and are updated accordingly whenever you change 
>>definitions.
> 
> 
> How is this different from IDEs for "static" languages?

It's harder to make these things work for static languages. During 
development, code is typically inconsistent most of the time. Dynamic 
languages allow more freedom to deal with inconsistent code from the 
start. For static languages, you have to make the language "softer" in 
order to make some features work.


Pascal
From: Jesse Tov
Subject: Re: More static type fun.
Date: 
Message-ID: <slrnbr2v5m.b7c.tov@tov.student.harvard.edu>
Pascal Costanza <········@web.de>:
> Jesse Tov wrote:
>> How is this different from IDEs for "static" languages?
> 
> It's harder to make these things work for static languages. During 
> development, code is typically inconsistent most of the time. Dynamic 
> languages allow more freedom to deal with inconsistent code from the 
> start. For static languages, you have to make the language "softer" in 
> order to make some features work.

I think your argument is dubious.  It may be harder, but IDEs for static
languages certainly exist.  I've heard very good things about Eclipse, and
I'd be very surprised if Microsoft's development tools didn't provide the
features you list.  Heck, even haskell-mode in Emacs (written in some silly
Lisp variant) does most of that stuff, and Helium does, too.

Maybe it's not that dynamic languages make IDEs possible but that dynamic
languages make IDEs necessary.

Jesse
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-1111031941480001@192.168.1.51>
In article <··················@tov.student.harvard.edu>, Jesse Tov
<···@eecs.harvREMOVEard.edu> wrote:

> Maybe it's not that dynamic languages make IDEs possible but that dynamic
> languages make IDEs necessary.

No, dynamic problems make dynamic languages (and IDEs) necessary.

E.
From: ··········@ii.uib.no
Subject: Re: More static type fun.
Date: 
Message-ID: <egr80eypka.fsf@sefirot.ii.uib.no>
Jesse Tov <···@eecs.harvREMOVEard.edu> writes:

> Pascal Costanza <········@web.de>:
>> Jesse Tov wrote:
>>> How is this different from IDEs for "static" languages?

>> It's harder to make these things work for static languages.

> I think your argument is dubious.  It may be harder, but IDEs for static
> languages certainly exist.  I've heard very good things about Eclipse, and
> I'd be very surprised if Microsoft's development tools didn't provide the
> features you list.

I used to work in Visual Studio.  The set of features would have been
nice, if it had only worked properly.  The way it was, you basically
needed to turn off all the extras to get acceptable performance.

You could step through code, halt execution, make changes, and
continue, to some extent.  Unfortunately, you had to be really careful
about what code you touched; make the wrong change (typically in a
.hpp file), and continuing meant a complete rebuild (taking about 20
minutes), and a restart.

> Heck, even haskell-mode in Emacs (written in some silly
> Lisp variant) does most of that stuff, and Helium does, too.

I much prefer having an interactive "REPL" like GHCi provides for
Haskell.  You can't "step through" code execution in the same sense,
but you can toy around with functions and values, which I think is
much more important.  (Linus Torvalds once made a good point about
debuggers only leading to people fixing the symptoms, I think this is
the essence of the difference -- interactive queries can help you
*understand* the code).  IMHO, anyway.
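
To illustrate with a made-up module (the functions are invented for the
example, not from any real project): with definitions like these loaded,
GHCi lets you probe each piece in isolation instead of stepping through
an execution.

```haskell
-- Toy module to load into GHCi; the point is that each definition
-- can be queried interactively, without a stepping debugger.

sortList :: Ord a => [a] -> [a]
sortList []     = []
sortList (p:xs) = sortList [x | x <- xs, x <= p]
                  ++ [p]
                  ++ sortList [x | x <- xs, x > p]

median :: [Double] -> Double
median xs
  | even n    = (sorted !! (n `div` 2 - 1) + sorted !! (n `div` 2)) / 2
  | otherwise = sorted !! (n `div` 2)
  where
    sorted = sortList xs
    n      = length sorted

-- In GHCi you now ask questions directly:
--   ghci> sortList [3,1,2]
--   [1,2,3]
--   ghci> median [1,2,3,4]
--   2.5
main :: IO ()
main = print (median [1, 2, 3, 4])
```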

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fb09e0b$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:
>Fergus Henderson wrote:
>>Pascal Costanza <········@web.de> writes:
>>>- Do you take IDE support into account here? In modern IDEs, it's 
>>>perfectly possible to get an overview of what features, say, a class 
>>>supports. Does this count as a way to get a description of an interface? 
>>>Or do you require that it be textually captured? If so, why?
>> 
>> So long as it is easy for the client programmer to determine the
>> interface, and immediately clear to the library programmer whether
>> their changes to the source code will change the interface, then I think
>> that suffices.
>
>Well, that's exactly what IDEs for dynamic languages do. They usually 
>have various browsers for classes, functions, methods, packages, 
>namespaces, whatever, and those browsers present views of the currently 
>active definitions and are updated accordingly whenever you change 
>definitions.

Do those IDEs clearly distinguish the interface from the implementation?
Do they show data type definitions and function types as part of
the interface?  What about for functions where the programmer did not
explicitly declare a type?

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <boqb5p$4ou$1@f1node01.rhrz.uni-bonn.de>
Fergus Henderson wrote:

I don't completely understand all your questions, but...

> Do those IDEs clearly distinguish the interface from the implementation?

...sure, it's possible to only see function/method headers.

> Do they show data type definitions and function types as part of
> the interface?  What about for functions where the programmer did not
> explicitly declare a type?

I guess that's what soft typing systems could do. I haven't tried one 
out yet, though.

However, it's clear that you won't get the 100% static typing 
"experience". This wouldn't make sense. The only important question is: 
Can the _problems_ that static type systems intend to solve be solved 
with different tools as well?

The problems that are addressed by static type systems seem to be:

- performance
- documentation
- absence of (a certain class of) bugs
- definition of unbreakable abstraction boundaries

The first three can be tackled by different means. The last one is the 
one you don't want in a dynamic language. _Unbreakable_ abstraction 
boundaries are regarded as a disadvantage.

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <boqff4$lfu$1@grizzly.ps.uni-sb.de>
Pascal Costanza wrote:
> 
> The problems that are addressed by static type systems seem to be:
> 
> - performance

This is just desirable added value; in fact, it is the least important one.

> - documentation

Yes.

> - absence of (a certain class of) bugs

Yes.

> - definition of unbreakable abstraction boundaries
> 
> The first three can be tackled by different means.

Not true, at least not on a comparable level. But that discussion is futile 
by now.

> The last one is the
> one you don't want in a dynamic language. _Unbreakable_ abstraction
> boundaries are regarded as a disadvantage.

You have your Lisp glasses on. There are other dynamic languages that surely 
provide unbreakable abstractions. One example would be Oz, which has a 
strong eye on distribution, where this is very important.

Of course, this implies that this particular problem can in fact be tackled 
by different means. ;-)

BTW, you forgot (at least):

- significant reduction of search space for non-type errors
- tool for program maintenance
- tool and guiding principle for program design
- tool and guiding principle for language design
- simplification of language implementations
...

        - Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Jon S. Anthony
Subject: Re: More static type fun.
Date: 
Message-ID: <m3u15borkp.fsf@rigel.goldenthreadtech.com>
Andreas Rossberg <········@ps.uni-sb.de> writes:

> > The first three can be tackled by different means.
> 
> Not true, at least not on a comparable level. But that discussion is futile 
> by now.

Indeed.  Especially as no good evidence has been evinced to the contrary.


> BTW, you forgot (at least):
> 
> - significant reduction of search space for non-type errors
> - tool for program maintenance
> - tool and guiding principle for program design
> - tool and guiding principle for language design

Perhaps he left these out on purpose.  Having been there and done that
in the ST camp, I would.


> - simplification of language implementations

OTOH, this is true, but irrelevant.

/Jon
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <anv681-nq2.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> The problems that are addressed by static type systems seem to be:
> 
> 1. performance
> 2. documentation
> 3. absence of (a certain class of) bugs
> 4. definition of unbreakable abstraction boundaries

Instead of 3. and 4., better ask for

5. a way to express invariants that can be 100% guaranteed at run-time,
   without having to check them dynamically.

Then 1. follows from it (because you can just drop the dynamic checks
and data tagging overhead).
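
A minimal Haskell sketch of point 5 (the type and all names are invented
for illustration): the "non-empty" invariant lives in the type itself,
so no dynamic check or tag is ever consulted after construction.

```haskell
-- Invariant in the type: a NonEmpty value has at least one element
-- by construction, so 'first' is total -- no run-time check needed.

data NonEmpty a = a :| [a]

first :: NonEmpty a -> a
first (x :| _) = x

-- The boundary from plain lists performs the check exactly once;
-- past it, the invariant is guaranteed statically everywhere.
fromList :: [a] -> Maybe (NonEmpty a)
fromList []       = Nothing
fromList (x : xs) = Just (x :| xs)

main :: IO ()
main = print (first (1 :| [2, 3]))
```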

> The first three can be tackled by different means. 

Yes. However, unless your language supports type inference, you need
to include annotations to get them, because even with dataflow
analysis you won't be able to find suitable types for all your
code. 

So you can get an approximation that may be good enough in some cases,
but not in all. After all, Lisp is a very nice language to write
programs in; you just have to be more careful and disciplined and
include more tests. Most people do that automatically.

> The last one is the one you don't want in a dynamic
> language. 

I don't see what that has to do with "dynamic".

> _Unbreakable_ abstraction boundaries are regarded as a disadvantage.

Either I have an abstraction boundary, or I don't. A "weak" abstraction
boundary doesn't really help. (As you said yourself, the abstraction
shouldn't "leak").

And "workarounds" like those seen in this thread to break an abstraction
are a really really bad idea. It's messy code, it will bite you later,
and it's unreadable as well.

Yes, from time to time it is necessary to break through the
abstraction boundary and do some things "inside". If so, do it in a
clean way. Change the source code, and refactor. Write a wrapper. But
don't mess with internals of existing code if you don't know what
you may break.

If you really try to tell me that it is an "advantage" to be able to
break abstractions in such a way, I only hope I have never to deal
with any code that you wrote :-) Maintainability of code quickly drops
to zero if you employ such "hacks" regularly. If you haven't had
this experience yet, you'll learn that probably sooner or later the
hard way (or at least those who have to use your code will learn it,
and they won't like it).

- Dirk
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bor66m$s7o$1@f1node01.rhrz.uni-bonn.de>
Dirk Thierbach wrote:

> Either I have an abstraction boundary, or I don't. A "weak" abstraction
> boundary doesn't really help. (As you said yourself, the abstraction
> shouldn't "leak").

No, I didn't say that. I said that macros allow you to write 
abstractions that don't leak, in the sense that you don't see at all how 
they are implemented. That's different for example from HOFs, because 
when you create an abstraction that relies on passing anonymous 
functions, you admit in the interface that you are using anonymous 
functions. This might be totally irrelevant to the domain that you try 
to model.

This is completely orthogonal to the question whether abstractions 
should be unbreakable or not.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <041881-e05.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> Dirk Thierbach wrote:

>> Either I have an abstraction boundary, or I don't. A "weak" abstraction
>> boundary doesn't really help. (As you said yourself, the abstraction
>> shouldn't "leak").

> No, I didn't say that. I said that macros allow you to write
> abstractions that don't leak, in the sense that you don't see at all
> how they are implemented.

And you don't see how things are implemented when you're using HOF's,
either (unless you look at the code).

> That's different for example from HOFs, because when you create an
> abstraction that relies on passing anonymous functions, you admit in
> the interface that you are using anonymous functions.

And you "admit" in the "interface" (i.e., the type) of every function
in a statically typed language what kind of values are suitable
for that parameter. I don't see any problem with it.

(And you don't need to use *anonymous* functions, either, you can
use named ones as well, of course).
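
A tiny example of what I mean (Haskell; the names are invented): the
type of a higher-order function is exactly the "interface" it admits,
and named and anonymous arguments are interchangeable.

```haskell
-- 'twice' admits any Int -> Int -- its type is its interface.

twice :: (Int -> Int) -> Int -> Int
twice f x = f (f x)

increment :: Int -> Int          -- a named function argument
increment n = n + 1

main :: IO ()
main = do
  print (twice increment 5)      -- named:     7
  print (twice (\n -> n * 2) 5)  -- anonymous: 20
```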

I completely fail to see your point.

> This might be totally irrelevant to the domain that you try to
> model.

When you try to model a domain, you express it somehow with the
features your language provides. Lispers model everything as 
s-expressions. In Haskell, you use HOFs and infix functions. Both
are fine, and both are completely irrelevant to the domain that
you try to model.

The Lisp version is a bit nicer with respect to the amount of syntactic
sugar you can put in, but that's it. Both are about equal in
convenience for the programmer.

- Dirk
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bornvn$ts$1@newsreader2.netcologne.de>
Dirk Thierbach wrote:

> The Lisp version is a bit nicer with respect to the amount of syntactic
> sugar you can put in, but that's it. Both are about equal in
> convenience for the programmer.

No, they're not. And we had this discussion before. This is boring.


Good bye.


Pascal
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bor7nv$u1u$1@f1node01.rhrz.uni-bonn.de>
Dirk Thierbach wrote:

> If you really try to tell me that it is an "advantage" to be able to
> break abstractions in such a way, I only hope I have never to deal
> with any code that you wrote :-) Maintainability of code quickly drops
> to zero if you employ such "hacks" regularly. If you haven't had
> this experience yet, you'll learn that probably sooner or later the
> hard way (or at least those who have to use your code will learn it,
> and they won't like it).

You can check an example of code that I have written in my paper on 
dynamically scoped functions at http://doi.acm.org/10.1145/944579.944587

That code relies in important ways on Common Lisp facilities for 
breaking abstraction boundaries. It works by rebinding function 
definitions within a running program, even in unanticipated cases. It 
does so in a well-behaved way by giving feedback to the programmer and 
requiring a confirmation in order to continue execution. Yet it still 
doesn't need an extra interpreter or compiler for a different language, 
but is completely embedded in Common Lisp and works well with basically 
unchanged Common Lisp programs. And you get all this in a one-page 
implementation.

I don't think it would have been possible to achieve the same degree of 
seamlessness in a static language. For example, take the approach of 
Lewis et al. for integrating a restricted form of dynamic scoping into 
Haskell. They needed to change the compiler for Haskell in order to 
implement their approach. See http://doi.acm.org/10.1145/325694.325708 . 
(And by looking at the source code, you can see that they changed the 
design of their approach later on, and again they needed to change the 
Haskell compiler in order to implement the new design, along with some 
cruft to continue support for the "deprecated" design.)

I am pretty sure that my code reliably does what it is intended to do. I 
didn't achieve that because of tools that enforce abstraction 
boundaries, but mainly because many people have looked at my code and 
made valuable suggestions for improvement.

Both writing programs and finding bugs require creativity. It's the 
human factor that counts, because only human beings are creative and 
computers aren't. Some of the static typers in this discussion have 
already admitted that by saying that one needs experience to turn a 
static type system to your advantage.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <9r3881-at5.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> Dirk Thierbach wrote:

>> If you really try to tell me that it is an "advantage" to be able to
>> break abstractions in such a way, I only hope I have never to deal
>> with any code that you wrote :-) Maintainability of code quickly drops
>> to zero if you employ such "hacks" regularly. If you haven't had
>> this experience yet, you'll learn that probably sooner or later the
>> hard way (or at least those who have to use your code will learn it,
>> and they won't like it).

> You can check an example of code that I have written in my paper on 
> dynamically scoped functions at http://doi.acm.org/10.1145/944579.944587

Without an ACM account I can't.

> That code relies in important ways on Common Lisp facilities for 
> breaking abstraction boundaries. It works by rebinding function 
> definitions within a running program, even in unanticipated cases.

You're confusing several things here. What you do (if I understood
it correctly) is a *language extension*. You're not breaking 
abstraction boundaries by this extension itself. 

The kind of abstraction boundary I would like to see maintained is
e.g. in ADTs. If I have a sorted heap, I want to inspect all my
functions operating on that heap structure with respect to the invariant
that they keep the heap sorted. Then I don't have to worry that this
invariant will be broken by someone using the ADT.

I might want to break this abstraction barrier if I want to write a new
ADT that partly reuses the old code. In that case, I don't write
extra functions that insert values in an uncontrolled way in the heap
and may cause the old functions to fail. I write new functions
that share some code with the old functions, and keep the old functions
(and the old abstraction barrier) intact.
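
Sketched in Haskell (a hypothetical, simplified list-based heap; in a
real program this would sit in its own module with the export list
`module Heap (Heap, empty, insert, toSortedList)`, which hides the
MkHeap constructor so client code cannot build an unsorted heap):

```haskell
newtype Heap a = MkHeap [a]   -- invariant: the list is ascending

empty :: Heap a
empty = MkHeap []

-- Every exported operation preserves the invariant, and with the
-- constructor hidden, these operations are the only way in.
insert :: Ord a => a -> Heap a -> Heap a
insert x (MkHeap xs) = MkHeap (smaller ++ x : rest)
  where (smaller, rest) = span (< x) xs

toSortedList :: Heap a -> [a]
toSortedList (MkHeap xs) = xs

main :: IO ()
main = print (toSortedList (insert 2 (insert 3 (insert 1 empty))))
```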

If there is a language extension that allows me to "change the function
without touching the source code" (which might be the idea behind your
extension, if I guessed that correctly), then that amounts to the same.
I could also do the same by writing a wrapper.

> Yet it still doesn't need an extra interpreter or compiler for a
> different language, but is completely embedded in Common Lisp and
> works well with basically unchanged Common Lisp programs. And you
> get all this in a one-page implementation.

Yes, yes, we all know that it is easy in Lisp to quickly hack language
extensions (since all is interpreted, anyway). That's certainly one of
the nice features of Lisp.

> For example, take the approach of Lewis et al. for integrating a
> restricted form of dynamic scoping into Haskell. They needed to
> change the compiler for Haskell in order to implement their
> approach. See http://doi.acm.org/10.1145/325694.325708 .

If I read the abstract correctly, this is about *implicit arguments*,
which has nothing to do with "dynamic scoping". And it's a new language
feature in a compiled language, so you change the compiler. It doesn't
matter whether the type system is dynamic or static. (In fact,
implicit arguments rely heavily on the type system, so there's no
way one could do that at runtime).
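
A small sketch of the feature in question, as it exists in GHC as the
ImplicitParams extension (the function and names here are mine): the
implicit parameter is bound at the call site, and its presence is
recorded in the type, which is why it needs compiler support.

```haskell
{-# LANGUAGE ImplicitParams #-}

-- '?prefix' is an implicit argument; the constraint in the type
-- says every caller must supply a binding for it.
logLine :: (?prefix :: String) => String -> String
logLine msg = ?prefix ++ msg

main :: IO ()
main = do
  let ?prefix = "debug: "        -- binds ?prefix for the calls below
  putStrLn (logLine "starting")
  let ?prefix = "error: "        -- an inner binding shadows the outer one
  putStrLn (logLine "oops")
```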

> I am pretty sure that my code reliably does what it is intended to
> do. I didn't achieve that because of tools that enforce abstraction
> boundaries,

Which would not have been helpful at all for this task, because you
are doing a language extension. I hope I gave an example where it's
obvious that abstraction boundaries help.

> but mainly because many people have looked at my code and 
> made valuable suggestions for improvement.

And that's of course also important.

> Both writing programs and finding bugs require creativity. 

Yes, of course.

> It's the human factor that counts, because only human beings are
> creative and computers aren't.

But computers can provide tools that help you being more creative, 
because they take away the dull work from you in areas where humans
easily make stupid mistakes. That's what static type systems are
for.

> Some of the static typers in this discussion have already admitted
> that by saying that one needs experience to turn a static type
> system to your advantage.

There's no need to "admit" this. *Everything* needs experience before
you can use it to your advantage. Lisp does, macros do, Monads do,
and so on.

But a good static type system doesn't "get in the way". If you
don't use it consciously, it will just point out bugs and typos.
Once you start thinking in a "typed" way, it provides a framework
that helps you thinking about programs. It's not a silver bullet,
but it helps. Bugs like the tensor-transpose function that only
works on tensors with two columns wouldn't have happened if you had
only *designed* the program with types in mind, even if there's no
static type checker to assist you.
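
To make the transpose example concrete (toy code of mine, not the
function from the earlier thread):

```haskell
-- A version written with only the two-column case in mind: it
-- type-checks, but is simply the wrong function for wider input.
transposeTwoCols :: [[a]] -> [[a]]
transposeTwoCols rows = [map head rows, map (head . tail) rows]

-- Reading the type [[a]] -> [[a]] as "any number of columns" leads
-- to the general recursion instead (rectangular input assumed;
-- Data.List.transpose also copes with ragged rows).
transposeGen :: [[a]] -> [[a]]
transposeGen rows
  | null rows || all null rows = []
  | otherwise = map head rows : transposeGen (map tail rows)

main :: IO ()
main = print (transposeGen [[1, 2, 3], [4, 5, 6]])
```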

I am sorry if I am impolite, but can you please, please stop trying to
preach to everyone that Lisp is the only religion, Lisp is the best
and only way to do it, all other languages are inferior to Lisp, and
so on? Lisp *is* a nice language. Many people in clf know Lisp,
but still think that a (proper) static type system is a good idea.
If you don't want ever to program in anything but Lisp, that is fine.
But please accept that there are people who don't share your religious
zeal, and who think that (depending on the application, of course)
other alternatives are also valid choices.

Is that asking too much?

- Dirk
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <wua6oamk.fsf@comcast.net>
Dirk Thierbach <··········@gmx.de> writes:

> Yes, yes, we all know that it is easy in Lisp to quickly hack language
> extensions (since all is interpreted, anyway). That's certainly one of
> the nice features of Lisp.

Yep.  All cons cells, too.  That's why it's so big.  And slow, too.

> I am sorry if I am impolite, but can you please, please stop trying to
> preach to everyone that Lisp is the only religion, Lisp is the best
> and only way to do it, all other languages are inferior to Lisp, and
> so on? 

You're cross posting to comp.lang.lisp

You were expecting Perl hackers?

-- 
~jrm
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <6te981-ul.ln1@ID-7776.user.dfncis.de>
Joe Marshall <·············@comcast.net> wrote:
> Dirk Thierbach <··········@gmx.de> writes:

>> Yes, yes, we all know that it is easy in Lisp to quickly hack language
>> extensions (since all is interpreted, anyway). That's certainly one of
>> the nice features of Lisp.

> Yep.  All cons cells, too.  That's why it's so big.  And slow, too.

I suppose that is meant to be sarcastic. (Yes, I know that you can
compile Lisp as well).

>> I am sorry if I am impolite, but can you please, please stop trying to
>> preach to everyone that Lisp is the only religion, Lisp is the best
>> and only way to do it, all other languages are inferior to Lisp, and
>> so on? 

> You're cross posting to comp.lang.lisp

I didn't start the cross-posting.

> You were expecting Perl hackers?

No. I was expecting that Lispers, like most people, can accept that there
is "more than one way to skin a cat" (as Cody said). If they can't
(but I don't believe that), then there is really no sense in any kind of
discussion.

- Dirk
From: Mario S. Mommer
Subject: Re: More static type fun.
Date: 
Message-ID: <fzbrrhzx4z.fsf@cupid.igpm.rwth-aachen.de>
Dirk Thierbach <··········@gmx.de> writes:
> No. I was expecting that Lispers, like most people, can accept that there
> is "more than one way to skin a cat" (as Cody said). If they can't
> (but I don't believe that), then there is really no sense in any kind of
> discussion.

Interestingly enough, the impression I had so far is that this was
precisely the point of the Lispers involved in this discussion. There
is static typing, and there is dynamic typing. Both have their uses;
that's why we have both. And goto. And classes, etc.

This whole discussion got me thinking that it probably would be an
interesting exercise to write a macro that goes like this:

(with-haskell-semantics
        ....)
        ^^---- prefixed haskell exprs.

If I only had the time!

Regards,
        Mario.
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <dp7a81-c98.ln1@ID-7776.user.dfncis.de>
Mario S. Mommer <········@yahoo.com> wrote:
> Interestingly enough, the impression I had so far is that this was
> precisely the point of the Lispers involved in this discussion. There
> is static typing, and there is dynamic typing. Both have their uses;
> that's why we have both. And goto. And classes, etc.

If that feeling is shared by everybody, I have no complaints.

> This whole discussion got me thinking that it probably would be an
> interesting exercise to write a macro that goes like this:
> 
> (with-haskell-semantics
>        ....)
>        ^^---- prefixed haskell exprs.

It would be a rather large macro (since you have to include the
type inference and typechecking). But it would certainly be interesting,
yes :-) Qi has already been mentioned in this thread, and it is
quite ML-ish.

- Dirk
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <borp5p$314$1@newsreader2.netcologne.de>
Dirk Thierbach wrote:

>>You can check an example of code that I have written in my paper on 
>>dynamically scoped functions at http://doi.acm.org/10.1145/944579.944587
> 
> 
> Without ACM account I can't.

There are other ways to get the paper as well.

>>That code relies in important ways on Common Lisp facilities for 
>>breaking abstraction boundaries. It works by rebinding function 
>>definitions within a running program, even in unanticipated cases.
> 
> You're confusing several things here. What you do (if I understood
> it correctly) is a *language extension*. You're not breaking 
> abstraction boundaries by this extension itself.

Yes, I do. I am changing the definition of a function within a macro. The 
intention of the programmer of the original function was to have that 
original function executed, and not something else.

Furthermore there is no real difference between writing language 
extensions and writing libraries. See 
http://research.sun.com/research/jtech/pubs/98-oopsla-growing.ps

> If there is a language extension that allows me to "change the function
> without touching the source code" (which might be the idea behind your
> extension, if I guessed that correctly), then that amounts to the same.
> I could also do the same by writing a wrapper.

No, you're missing the bit about dynamic scoping.

>>Yet it still doesn't need an extra interpreter or compiler for a
>>different language, but is completely embedded in Common Lisp and
>>works well with basically unchanged Common Lisp programs. And you
>>get all this in a one-page implementation.
> 
> Yes, yes, we all know that it is easy in Lisp to quickly hack language
> extensions (since all is interpreted, anyway). That's certainly one of
> the nice features of Lisp.

It's not interpreted, it's compiled.

>>For example, take the approach of Lewis et al. for integrating a
>>restricted form of dynamic scoping into Haskell. They needed to
>>change the compiler for Haskell in order to implement their
>>approach. See http://doi.acm.org/10.1145/325694.325708 .
> 
> If I read the abstract correctly, this is about *implicit arguments*,
> which has nothing to do with "dynamic scoping".

You haven't read it correctly. It is in fact about dynamic scoping.

> I am sorry if I am impolite, but can you please, please stop trying to
> preach to everyone that Lisp is the only religion, Lisp is the best
> and only way to do it, all other languages are inferior to Lisp, and
> so on?

I am not doing that. Please read my posts more carefully.


Pascal
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <boe8vn$a2f$1@newsreader2.netcologne.de>
Matthias Blume wrote:

> I do not think of macros as abstraction facilities but rather as
> "abbreviation facilities" (although Scheme's hygienic macros probably
> can be made to go some of the remaining distance). 

If you regard Scheme's macro hygiene as a facility to support 
abstractions, then you should acknowledge Common Lisp's support in 
that regard as well. Under the assumption that your understanding of 
abstraction is correct, you would have to say, in a strict sense, that 
GENSYM and MAKE-SYMBOL are abstraction facilities, because they allow 
you to write macros that hide information from the surrounding context 
in which they are used.

> There is no
> difference between an instance of a macro and its expansion: I could
> have written the expansion by hand and would have gotten the exact
> same program.  In other words, the "abstraction" is completely
> transparent -- no details are really hidden (except from the
> programmer's eye).

That's a very simplistic view of Lisp macros IMHO. Of course, there can 
be huge differences between what you would write by hand and what can be 
generated by macros.

It seems to me that your position is based on the assumption that a 
language feature should be supported by distinct language constructs 
that can be defined and analyzed in isolation. However, I think that it 
makes a lot more sense to rather look at how a number of language 
constructs can implement a feature in concert. From that angle, Common 
Lisp macros + GENSYM/MAKE-SYMBOL must definitely be acknowledged as 
providing abstraction support.

> The confusion we are having probably arises from the fact that human
> programmers can take abbreviation facilities and use them as if they
> were abstraction facilities -- simply by being disciplined.  Over time
> I have come to believe that such a requirement of discipline does not
> scale very well unless there is linguistic support for enforcing it.

I am convinced that enforcement of restrictions is a bad idea. If you 
build a wall, it's hard to tear it down later on when you discover that 
it shouldn't have been built in the first place.

I think it is a much better idea to incorporate support for abstraction 
boundaries that you can still always break intentionally. It should take 
explicit steps to work around an abstraction boundary, but it shouldn't 
be impossible. Otherwise you always risk that you drive yourself into a 
dead end.


Pascal
From: Jens Axel Søgaard
Subject: Re: More static type fun.
Date: 
Message-ID: <3faab23f$0$69920$edfadb0f@dread12.news.tele.dk>
> Matthias Blume wrote:
> 
>> I do not think of macros as abstraction facilities but rather as
>> "abbreviation facilities" (although Scheme's hygienic macros probably
>> can be made to go some of the remaining distance). 
> 
> 
> If you regard Scheme's macro hygiene as a facility to support 
> abstractions, then you should also acknowledge Common Lisp's support in 
> that regard as well. Under the assumption that your understanding of 
> abstraction is correct, you would have to say, in a strict sense, that 
> GENSYM and MAKE-SYMBOL are abstraction facilities, because they allow 
> you to write macros that hide information from the surrounding context 
> in which they are used.
> 
>> There is no
>> difference between an instance of a macro and its expansion: I could
>> have written the expansion by hand and would have gotten the exact
>> same program.  In other words, the "abstraction" is completely
>> transparent -- no details are really hidden (except from the
>> programmer's eye).

Just to play devil's advocate:

  [Assume this is a functional language]

  There is no
  difference between a *function call* and its *return value*: I could
  have written the *return value* by hand and would have gotten the exact
  same program.  In other words, the "abstraction" is completely
  transparent -- no details are really hidden (except from the
  programmer's eye).
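
In code, the analogy would read (toy example of mine): in a pure
language the call and the hand-written value really are
interchangeable.

```haskell
square :: Int -> Int
square n = n * n

viaCall :: Int
viaCall = square 7

viaValue :: Int
viaValue = 49          -- "the return value written by hand"

main :: IO ()
main = print (viaCall == viaValue)   -- True
```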

-- 
Jens Axel Søgaard
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1islx42x0.fsf@tti5.uchicago.edu>
Jens Axel Søgaard <······@jasoegaard.dk> writes:

> Just to play devil's advocate:
> 
>   [Assume this is a functional language]
> 
>   There is no
>   difference between a *function call* and its *return value*: I could
>   have written the *return value* by hand and would have gotten the exact
>   same program.

Nice try.  The problem with this retort is that it is not true.  As
has been discussed in a separate sub-thread, it is also not true for
certain macro systems that are paired with, e.g., a module system --
and, in fact, for precisely the same reasons: being able to access
things that cannot be named using surface-level syntax.  So let me be
a bit more careful: Macros that work purely at the level of the
surface language by expanding macro calls into other (source-)code
snippets are not (part of) abstraction facilities.(**)

I don't know how PLT macros work in detail, but I guess that macro
expansion takes place at the level of some "internal syntax" which is
richer than what can be written down directly at the level of the
surface language. (At least that's how my own implementation of this
feature works.)  This corresponds to functions operating at the value
level and not at the level of the language's syntax.

Matthias

(**) Ironically, macro systems such as PLT's or my own are able to do
what they do precisely because they maintain invariants by employing
an abstraction (call it "internal syntax") which hides some of its
details from direct programmer access.  This makes it possible for a
macro defined in module M but used outside of M to expand into code
which has direct access to variables x that are private to M.  There
is no surface syntax to do the same.
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-0611031502230001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@tti5.uchicago.edu>, Matthias Blume
<····@my.address.elsewhere> wrote:

> Jens Axel Søgaard <······@jasoegaard.dk> writes:
> 
> > Just to play devil's advocate:
> > 
> >   [Assume this is a functional language]
> > 
> >   There is no
> >   difference between a *function call* and its *return value*: I could
> >   have written the *return value* by hand and would have gotten the exact
> >   same program.
> 
> Nice try.  The problem with this retort is that it is not true.  As
> has been discussed in a separate sub-thread, it is also not true for
> certain macro systems that are paired with, e.g., a module system --
> and, in fact, for precisely the same reasons: being able to access
> things that cannot be named using surface-level syntax.  So let me be
> a bit more careful: Macros that work purely at the level of the
> surface language by expanding macro calls into other (source-)code
> snippets are not (part of) an abstraction facility.(**)
> 
> I don't know how PLT macros work in detail, but I guess that macro
> expansion takes place at the level of some "internal syntax" which is
> richer than what can be written down directly at the level of the
> surface language. (At least that's how my own implementation of this
> feature works.)  This corresponds to functions operating at the value
> level and not at the level of the language's syntax.
> 
> Matthias
> 
> (**) Ironically, macro systems such as PLT's or my own are able to do
> what they do precisely because they maintain invariants by employing
> an abstraction (call it "internal syntax") which hides some of its
> details from direct programmer access.  This makes it
> possible for a macro defined in module M but used outside of M to
> expand into code which has direct access to variables x that are
> private to M.  There is no surface syntax to do the same.

But you can hide things with macros too:

? (defmacro foo () `','#.(list nil))  ; Weehah!
FOO
? (foo)
(NIL)
? (setf (car (foo)) 1)
1
? (foo)
(1)
? 

There is now no surface syntax other than (foo) that will give you access
to what (foo) gives you access to.

(Pascal Costanza made a similar point earlier, saying (correctly) that you
can do this (hide things with macros) by using uninterned symbols.  This
example proves that uninterned symbols are not necessary.)
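
(For concreteness, the uninterned-symbol version might be sketched like
this; DEFINE-HIDDEN-COUNTER and BUMP are invented names, and this is a
sketch under standard CL semantics, not code from the thread:

  ;; Each use of DEFINE-HIDDEN-COUNTER makes a fresh uninterned symbol
  ;; at macroexpansion time.  DEFVAR proclaims it special, so the inner
  ;; macro's expansion can reference it even though no surface syntax
  ;; can name it afterwards.
  (defmacro define-hidden-counter (name)
    (let ((cell (gensym "CELL")))
      `(progn
         (defvar ,cell 0)
         (defmacro ,name () `(incf ,',cell)))))

  (define-hidden-counter bump)
  (bump)  ; => 1
  (bump)  ; => 2

Once the defining form has been processed, the gensym has no home
package, so (BUMP) is the only surface syntax that reaches the cell --
short of digging the symbol out of the expansion with MACROEXPAND.)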

E.
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-0611032051130001@192.168.1.51>
In article <··············@mycroft.actrix.gen.nz>, Paul Foley
<···@below.invalid> wrote:

> On Thu, 06 Nov 2003 15:02:23 -0800, Erann Gat wrote:
> 
> > But you can hide things with macros too:
> 
> > ? (defmacro foo () `','#.(list nil))  ; Weehah!
> > FOO
> > ? (foo)
> > (NIL)
> > ? (setf (car (foo)) 1)
> > 1
> > ? (foo)
> > (1)
> > ? 
> 
> Hm.  Modifying constant structure is "illegal" though; better not try
> that in code that's been through a file.

Where do you see constant structure?  That cell is freshly consed at read time.

E.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m2ekwkoilf.fsf@hanabi-air.shimizu.blume>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@mycroft.actrix.gen.nz>, Paul Foley
> <···@below.invalid> wrote:
> 
> > On Thu, 06 Nov 2003 15:02:23 -0800, Erann Gat wrote:
> > 
> > > But you can hide things with macros too:
> > 
> > > ? (defmacro foo () `','#.(list nil))  ; Weehah!
> > > FOO
> > > ? (foo)
> > > (NIL)
> > > ? (setf (car (foo)) 1)
> > > 1
> > > ? (foo)
> > > (1)
> > > ? 
> > 
> > Hm.  Modifying constant structure is "illegal" though; better not try
> > that in code that's been through a file.
> 
> Where do you see constant structure?  That cell is freshly consed at read time.

Isn't *every* list that's being read by the reader freshly consed at
read time?  I don't see how the above is different from simply saying

   (defmacro foo () `','(nil))

which in turn is probably equivalent to

   (defmacro foo () '(nil))

[CL experts: please correct me if I'm wrong.]
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <47k2c4iik.fsf@franz.com>
Matthias Blume <····@my.address.elsewhere> writes:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <··············@mycroft.actrix.gen.nz>, Paul Foley
> > <···@below.invalid> wrote:
> > 
> > > On Thu, 06 Nov 2003 15:02:23 -0800, Erann Gat wrote:
> > > 
> > > > But you can hide things with macros too:
> > > 
> > > > ? (defmacro foo () `','#.(list nil))  ; Weehah!
> > > > FOO
> > > > ? (foo)
> > > > (NIL)
> > > > ? (setf (car (foo)) 1)
> > > > 1
> > > > ? (foo)
> > > > (1)
> > > > ? 
> > > 
> > > Hm.  Modifying constant structure is "illegal" though; better not try
> > > that in code that's been through a file.
> > 
> > Where do you see constant structure?  That cell is freshly consed at read time.
> 
> Isn't *every* list that's being read by the reader freshly consed at
> read time?  I don't see how the above is different from simply saying
> 
>    (defmacro foo () `','(nil))
> 
> which in turn is probably equivalent to
> 
>    (defmacro foo () '(nil))
> 
> [CL experts: please correct me if I'm wrong.]

It seems that 3.7.1 of the CL spec gives clear direction wrt literal
objects, but it is not clear to me whether there is in fact a difference
between a literal object that has been read and a literal object that has
been manufactured in some way other than by the reader.  What is clear to
me as an implementor is that I would find it impossible to distinguish
two such objects, because at the time the compiler sees the object it has
already been read.  This is a similar problem to that of passing a literal
into a function and the function destructively modifying it - unless each
literal is actually marked (perhaps with an "immutable" bit) a list or
array coming into a function cannot be distinguished as either mutable
or immutable, and this is why the spec grants great freedom to
implementors not to have to perform such a check.

I decided to try it out on our own implementation, to see what it had
to say (not that it is definitive, but I was in fact curious as to what
compile-file would do with all the swearing above (i.e. `','#. :-)
I selected a lisp with a "pure" space (a .pll file in Allegro CL
terminology) and instead of lists, which we don't make "pure", I used
strings of various origins, based on the knowledge that we do purify
CL symbol names at least.  I did this on a 6.2 alisp on the sparc.
I also had to define a purep operator, based on a kernel function that
determines if the object is in purespace or not.  It probably would have
also worked to test each string as EQ with the symbol name of 'car,
but it probably wouldn't have the same impact. Note also my attempt
to modify the pure string at the end...

Here are the results:

CL-USER(1): (compile (defun purep (n) (excl::.primcall 'sys::purep n)))
PUREP
NIL
NIL
CL-USER(2): (purep (symbol-name 'car))
T
CL-USER(3): (purep "CAR")
NIL
CL-USER(4): (with-open-file (s "zzz.cl" :direction :output :if-exists :supersede)
               (format s "(in-package :user)
                          (defun foo ()
                            \"CAR\")
                         "))
NIL
CL-USER(5): :cl zzz
;;; Compiling file zzz.cl
;;; Writing fasl file zzz.fasl
;;; Fasl write complete
; Fast loading /net/gemini/home/duane/zzz.fasl
CL-USER(6): (foo)
"CAR"
CL-USER(7): (purep *)
T
CL-USER(8): (with-open-file (s "zzz.cl" :direction :output :if-exists :supersede)
               (format s "(in-package :user)
                          (defun foo ()
                            (make-array 3 :element-type 'character
                                          :initial-contents (list #\\C #\\A #\\R)))
                         "))
NIL
CL-USER(9): :cl zzz
;;; Compiling file zzz.cl
;;; Writing fasl file zzz.fasl
;;; Fasl write complete
; Fast loading /net/gemini/home/duane/zzz.fasl
CL-USER(10): (foo)
"CAR"
CL-USER(11): (purep *)
NIL
CL-USER(12): (with-open-file (s "zzz.cl" :direction :output :if-exists :supersede)
               (format s "(in-package :user)
                          (defun foo ()
                            #.(make-array 3 :element-type 'character
                                            :initial-contents (list #\\C #\\A #\\R)))
                         "))
NIL
CL-USER(13): :cl zzz
;;; Compiling file zzz.cl
;;; Writing fasl file zzz.fasl
;;; Fasl write complete
; Fast loading /net/gemini/home/duane/zzz.fasl
CL-USER(14): (foo)
"CAR"
CL-USER(15): (purep *)
T
CL-USER(16): (with-open-file (s "zzz.cl" :direction :output :if-exists :supersede)
               (format s "(in-package :user)
                          (defmacro bar ()
                            `','#.(make-array 3 :element-type 'character
                                            :initial-contents (list #\\C #\\A #\\R)))
                          (defun foo () (bar))
                         "))
NIL
CL-USER(17): :cl zzz
;;; Compiling file zzz.cl
;;; Writing fasl file zzz.fasl
;;; Fasl write complete
; Fast loading /net/gemini/home/duane/zzz.fasl
CL-USER(18): (foo)
"CAR"
CL-USER(19): (purep *)
T
CL-USER(20): (bar)
"CAR"
CL-USER(21): (purep *)
T
CL-USER(22): (setf (char ** 0) #\B)
Error: Attempt to store into purespace address #xfa307ae4.
  [condition type: SIMPLE-ERROR]

Restart actions (select using :continue):
 0: Return to Top Level (an "abort" restart).
 1: Abort entirely from this process.
[1] CL-USER(23): 

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-0711030817270001@192.168.1.51>
In article <··············@hanabi-air.shimizu.blume>, Matthias Blume
<····@my.address.elsewhere> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <··············@mycroft.actrix.gen.nz>, Paul Foley
> > <···@below.invalid> wrote:
> > 
> > > On Thu, 06 Nov 2003 15:02:23 -0800, Erann Gat wrote:
> > > 
> > > > But you can hide things with macros too:
> > > 
> > > > ? (defmacro foo () `','#.(list nil))  ; Weehah!
> > > > FOO
> > > > ? (foo)
> > > > (NIL)
> > > > ? (setf (car (foo)) 1)
> > > > 1
> > > > ? (foo)
> > > > (1)
> > > > ? 
> > > 
> > > Hm.  Modifying constant structure is "illegal" though; better not try
> > > that in code that's been through a file.
> > 
> > Where do you see constant structure?  That cell is freshly consed at
> > read time.
> 
> Isn't *every* list that's being read by the reader freshly consed at
> read time?

1) No, not necessarily. The reader can reuse/merge list structure, and 2)
(more important) that list isn't read by the reader, it's constructed by
an explicit call to LIST.

>  I don't see how the above is different from simply saying
> 
>    (defmacro foo () `','(nil))

The reader is not referentially transparent when #. is used.  Consider:

(defun foo ()
  (make-something-without-a-readable-surface-syntax-like-a-tcp-socket-or-a-file-handle-or-something-like-that))

(defmacro bar () `','#.(foo))


> which in turn is probably equivalent to
> 
>    (defmacro foo () '(nil))

No, because in this case the list *is* constructed by the reader, and you
have no guarantees about how the reader does this.  It might cons up a new
list, or it might reuse an old one that it had consed up before.

Common Lisp is not like Scheme.  Common Lisp programs are real lists, not
strings that happen to look like lists ;-)

E.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1znf82ini.fsf@tti5.uchicago.edu>
···@jpl.nasa.gov (Erann Gat) writes:

> > Isn't *every* list that's being read by the reader freshly consed at
> > read time?
> 
> 1) No, not necessarily. The reader can reuse/merge list structure,

Is that really true?

> and 2)
> (more important) that list isn't read by the reader, it's constructed by
> an explicit call to LIST.

I know that part.  But as Duane pointed out, how would the
implementation ever know where the CONS came from?

> No, because in this case the list *is* constructed by the reader, and you
> have no guarantees about how the reader does this.  It might cons up a new
> list, or it might reuse an old one that it had consed up before.

I might be wrong (not being a CL lawyer), but I somehow doubt that.
If the reader reads, e.g.,

   ((a b) . (a b))

then you say it might be that CAR and CDR of the result are EQ?
Is that what you are saying?

> Common Lisp is not like Scheme.  Common Lisp programs are real lists, not
> strings that happen to look like lists ;-)

I know.

Matthias
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-0711031919050001@192.168.1.51>
In article <··············@tti5.uchicago.edu>, Matthias Blume
<····@my.address.elsewhere> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > > Isn't *every* list that's being read by the reader freshly consed at
> > > read time?
> > 
> > 1) No, not necessarily. The reader can reuse/merge list structure,
> 
> Is that really true?

Well, it's a defensible position.  I quote from the scriptures:

<quote>
2.4.1 Left-Parenthesis 

The left-parenthesis initiates reading of a list.  Read is called
recursively to read successive objects until a right parenthesis is found
in the input stream. A list of the objects read is returned.
</quote>

Since the spec does not specify that a fresh list of the objects read is
returned I interpret this to mean that the reader may re-use a previously
consed-up list as long as they are "similar" as defined in section
3.2.4.2.

But I suppose a reasonable person could disagree, in which case I will
simply fall back on uninterned symbols to make the point that macros are
an abstraction mechanism.

> > and 2)
> > (more important) that list isn't read by the reader, it's constructed by
> > an explicit call to LIST.
> 
> I know that part.  But as Duane pointed out, how would the
> implementation ever know where the CONS came from?

That's the implementation's problem.  Most implementations don't bother to
keep track, and place the burden on the user not to get themselves into
trouble by modifying constant structure.  But that is all beside the
point.

> > No, because in this case the list *is* constructed by the reader, and you
> > have no guarantees about how the reader does this.  It might cons up a new
> > list, or it might reuse an old one that it had consed up before.
> 
> I might be wrong (not being a CL lawyer), but I somehow doubt that.
> If the reader reads, e.g.,
> 
>    ((a b) . (a b))
> 
> then you say it might be that CAR and CDR of the result are EQ?
> Is that what you are saying?

Not that they *are*, merely that they could be.

E.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m24qxe6wzx.fsf@hanabi-air.shimizu.blume>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@tti5.uchicago.edu>, Matthias Blume
> <····@my.address.elsewhere> wrote:
> 
> > ···@jpl.nasa.gov (Erann Gat) writes:
> > 
> > > > Isn't *every* list that's being read by the reader freshly consed at
> > > > read time?
> > > 
> > > 1) No, not necessarily. The reader can reuse/merge list structure,
> > 
> > Is that really true?
> 
> Well, it's a defensible position.  I quote from the scriptures:
> 
[ quote from language document snipped ... ]
>
> Since the spec does not specify that a fresh list of the objects read is
> returned I interpret this to mean that the reader may re-use a previously
> consed-up list as long as they are "similar" as defined in section
> 3.2.4.2.

I very highly doubt that this is the intention.  (I don't care enough
to investigate this in detail, though.)  After all, there is special
syntax for shared substructures in s-expressions.  Having such syntax
around seems to indicate that the default is to not share.

In any case, even if your code were legal, you still wouldn't have
abstraction.  Notice that even in PLTs macro system (where macros can
expand into something that you cannot write by hand) the macros
themselves do not establish the abstraction.  The rules of the macro
expander merely /maintain/ the abstraction, so macros can be useful
parts of an abstraction's interface.

In your code (assuming for a moment that mutating a quotation were
legal) the macro foo expands into the whole list.  Nothing is really
hidden.  As you demonstrated yourself, you can setf the car to
anything you like.  You would have an abstraction if, for example, you
managed to hide the list in such a way that setf-ing its car is
controlled by some single interface that cannot be circumvented.  Your
macro does not do that.  I suspect that many (if not all) examples
that make use of uninterned symbols suffer from the same defect.

In summary: The ability of a macro to expand into something that one
cannot write "by hand" alone does not make it an abstraction facility.
Macro systems such as PLT's are /part of/ an abstraction facility
because they "play by the rules" of the module system, i.e., they
cannot be used to break the abstraction.
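
(A minimal sketch of such a non-circumventable interface in Common
Lisp, using a lexical closure; the names are invented:

  ;; CELL lives only in a lexical environment.  The sole operations on
  ;; it are the two functions closed over it, and there is no
  ;; macroexpansion to inspect.
  (let ((cell 0))
    (defun counter-inc () (incf cell))
    (defun counter-dec () (decf cell)))

  (counter-inc)  ; => 1
  (counter-inc)  ; => 2
  (counter-dec)  ; => 1
)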

Matthias
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-0811032323510001@192.168.1.51>
In article <··············@hanabi-air.shimizu.blume>, Matthias Blume
<····@my.address.elsewhere> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <··············@tti5.uchicago.edu>, Matthias Blume
> > <····@my.address.elsewhere> wrote:
> > 
> > > ···@jpl.nasa.gov (Erann Gat) writes:
> > > 
> > > > > Isn't *every* list that's being read by the reader freshly consed at
> > > > > read time?
> > > > 
> > > > 1) No, not necessarily. The reader can reuse/merge list structure,
> > > 
> > > Is that really true?
> > 
> > Well, it's a defensible position.  I quote from the scriptures:
> > 
> [ quote from language document snipped ... ]
> >
> > Since the spec does not specify that a fresh list of the objects read is
> > returned I interpret this to mean that the reader may re-use a previously
> > consed-up list as long as they are "similar" as defined in section
> > 3.2.4.2.
> 
> I very highly doubt that this is the intention.  (I don't care enough
> to investigate this in detail, though.)  After all, there is special
> syntax for shared substructures in s-expressions.  Having such syntax
> around seems to indicate that the default is to not share.

Hm, that's a good point.

> You would have an abstraction if, for example, you
> managed to hide the list in such a way that setf-ing its car is
> controlled by some single interface that cannot be circumvented.

(defstruct cell value)

(defmacro foo (arg)
  `(ecase ',arg
     (inc (incf (cell-value ',#1=#.(make-cell :value 0))))
     (dec (decf (cell-value ',#1#)))))
E.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m265hs4tqo.fsf@hanabi-air.shimizu.blume>
···@jpl.nasa.gov (Erann Gat) writes:

> > You would have an abstraction if, for example, you
> > managed to hide the list in such a way that setf-ing its car is
> > controlled by some single interface that cannot be circumvented.
> 
> (defstruct cell value)
> 
> (defmacro foo (arg)
>   `(ecase ',arg
>      (inc (incf (cell-value ',#1=#.(make-cell :value 0))))
>      (dec (decf (cell-value ',#1#)))))

Ok, let me give this a shot...

[1]> (defstruct cell value)
CELL
[2]> (defmacro foo (arg)
  `(ecase ',arg
     (inc (incf (cell-value ',#1=#.(make-cell :value 0))))
     (dec (decf (cell-value ',#1#)))))
FOO
[3]> (foo inc)
1
[4]> (foo inc)
2
[5]> (foo dec)
1
;; So far so good.  Now let's see what we can do to break this...
[6]>    (setf (cell-value (cadadr (cadadr (caddr
                (caddr (macroexpand '(foo inc)))))))
         'you-lose)
YOU-LOSE
[7]> (foo inc)

*** - argument to + should be a number: YOU-LOSE
1. Break [8]> 


Anyway, maybe macros are abstractions and a macro facility is an
abstraction facility.  An encapsulation facility (as Fergus put it)
they certainly are not.  The above technique shows that quite clearly:
if a macro can expand into it, then one can get at it.

(By the way, I think that both your technique of writing a macro which
can (almost) hide something and my way of breaking it are right up
there with Olin's "Stylish Lisp Programming Techniques":
         http://www.ai.mit.edu/~shivers/newstyle.html)

Matthias
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-1011030853060001@192.168.1.51>
In article <··············@hanabi-air.shimizu.blume>, Matthias Blume
<····@my.address.elsewhere> wrote:

> ;; So far so good.  Now let's see what we can do to break this...
> [6]>    (setf (cell-value (cadadr (cadadr (caddr
>                 (caddr (macroexpand '(foo inc)))))))
>          'you-lose)

On this view, abstraction is not possible in a sufficiently powerful
interactive development environment.  Not even lambda and modules count as
abstractions in this case.  Here's how I can break a module abstraction:

(defun break-module-abstraction ()
  (with-open-file (f path-to-module-source :direction :io)
    (muck-with-source-code f))
  (load path-to-module-source))

Mucking with a macro's macroexpansion is no different from mucking with
the source file for a module.

With most Lisp debuggers I can go in and muck with a lexical frame
directly.  Some Lisps even allow this to be done programmatically (i.e.
non-interactively).

With sufficiently low-level hacking I can muck with the actual machine
code of a closure at run time.  Look up the FPC floating-point compiler
for Mac Common Lisp for an actual practical application of this technique.

There is absolutely no difference between macros and any other putative
abstraction mechanism in this regard.

> (By the way, I think that both your technique of writing a macro which
> can (almost) hide something and my way of breaking it are right up
> there with Olin's "Stylish Lisp Programming Techniques":
>          http://www.ai.mit.edu/~shivers/newstyle.html)

Yes.  And your point would be ....?

E.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m11xsg17a7.fsf@tti5.uchicago.edu>
···@jpl.nasa.gov (Erann Gat) writes:

> > (By the way, I think that both your technique of writing a macro which
> > can (almost) hide something and my way of breaking it are right up
> > there with Olin's "Stylish Lisp Programming Techniques":
> >          http://www.ai.mit.edu/~shivers/newstyle.html)
> 
> Yes.  And your point would be ....?

... that the "abstraction facility" (if we want to call it that) is
not a very good one.  (In fact, I'd say it is absolutely terrible.)

Matthias
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-1011031050190001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@tti5.uchicago.edu>, Matthias Blume
<····@my.address.elsewhere> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > > (By the way, I think that both your technique of writing a macro which
> > > can (almost) hide something and my way of breaking it are right up
> > > there with Olin's "Stylish Lisp Programming Techniques":
> > >          http://www.ai.mit.edu/~shivers/newstyle.html)
> > 
> > Yes.  And your point would be ....?
> 
> ... that the "abstraction facility" (if we want to call it that)

I don't care what you call it.  You made a claim that macros are
fundamentally different in some way from other language constructs, that
macros lack the "abstraction" property (your choice of words, not mine)
whereas lambda and modules have this property.  I am disputing that claim,
but I don't much care what terminology you use.  It's your claim.

> is not a very good one.  (In fact, I'd say it is absolutely terrible.)

Well, you are entitled to your opinion, but this is a completely different
argument.  To this new claim I will say that citing one example of a poor
use of a facility does not render the facility "absolutely terrible".  If it
did I could write one awful SML program and conclude that static typing is
"absolutely terrible."

E.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1u15cyprw.fsf@tti5.uchicago.edu>
···@jpl.nasa.gov (Erann Gat) writes:

> > is not a very good one.  (In fact, I'd say it is absolutely terrible.)
> 
> Well, you are entitled to your opinion, but this is a completely different
> argument.

No, it is not.  To refresh your memory, here is the remark that
originally sparked this sub-thread:

M> Dynamically typed languages tend to have very few
M> really good abstraction facilities that are worthy of that label.

Notice that I did not say in this sentence that DL languages have no
abstraction facilities, I merely said that they don't have really good
ones.

Matthias
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-1011031209210001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@tti5.uchicago.edu>, Matthias Blume
<····@my.address.elsewhere> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > > is not a very good one.  (In fact, I'd say it is absolutely terrible.)
> > 
> > Well, you are entitled to your opinion, but this is a completely different
> > argument.
> 
> No, it is not.  To refresh your memory, here is the remark that
> originally sparked this sub-thread:
> 
> M> Dynamically typed languages tend to have very few
> M> really good abstraction facilities that are worthy of that label.
> 
> Notice that I did not say in this sentence that DL languages have no
> abstraction facilities, I merely said that they don't have really good
> ones.

Yes, I get that, but there are nonetheless two different topics at hand
here: 1) whether macros are or are not an abstraction facility (at one
point you said they weren't) and 2) assuming they are, whether or not they
are good.  We have been focused on question 1, since settling it (and
answering it in the affirmative) is a prerequisite for having a meaningful
discussion of question 2.  But if you are willing to concede that the
answer to question 1 is "yes" we can move on if you like.  Or we could
just drop it.  That would be fine with me too.

E.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1d6bzzy90.fsf@tti5.uchicago.edu>
···@jpl.nasa.gov (Erann Gat) writes:

> [ ... ] Or we could just drop it.  That would be fine with me too.

Let's just drop it.
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <wua8gfab.fsf@ccs.neu.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> M> Dynamically typed languages tend to have very few
> M> really good abstraction facilities that are worthy of that label.
>
> Notice that I did not say in this sentence that DL languages have no
> abstraction facilities, I merely said that they don't have really good
> ones.

You said that they have very *few* good ones.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1ptg0yo6q.fsf@tti5.uchicago.edu>
Joe Marshall <···@ccs.neu.edu> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > M> Dynamically typed languages tend to have very few
> > M> really good abstraction facilities that are worthy of that label.
> >
> > Notice that I did not say in this sentence that DL languages have no
> > abstraction facilities, I merely said that they don't have really good
> > ones.
> 
> You said that they have very *few* good ones.

Yes.  That does not contradict what I wrote, so what is your point?
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <znf4ndz1.fsf@ccs.neu.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>
>> Matthias Blume <····@my.address.elsewhere> writes:
>> 
>> > M> Dynamically typed languages tend to have very few
>> > M> really good abstraction facilities that are worthy of that label.
>> >
>> > Notice that I did not say in this sentence that DL languages have no
>> > abstraction facilities, I merely said that they don't have really good
>> > ones.
>> 
>> You said that they have very *few* good ones.
>
> Yes.  That does not contradict what I wrote, so what is your point?

The first sentence sounds like you are asserting that DL languages
have a paucity of really good abstraction facilities.

The second sentence sounds like you are asserting that DL languages
have abstraction facilities, but that *none* of them are really good.

I think LAMBDA is a really good abstraction facility.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1he1bzybe.fsf@tti5.uchicago.edu>
Joe Marshall <···@ccs.neu.edu> writes:

> I think LAMBDA is a really good abstraction facility.

I agree.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bohbk8$cqn$1@newsreader2.netcologne.de>
Matthias Blume wrote:

>>(more important) that list isn't read by the reader, it's constructed by
>>an explicit call to LIST.
> 
> I know that part.  But as Duane pointed out, how would the
> implementation ever know where the CONS came from?

The implementation doesn't need to know. You need to know, or better yet,
document it.

I don't know why one would want to write such a macro, but if one wants 
to write it, the modification seems to be part of the interface to that 
macro and should be documented as such.

The difference between Erann's and my illustrations of how to write 
macros that "hide stuff" is that uninterned symbols are regularly used 
in Common Lisp code while I suspect that `','#.(list nil) rarely occurs. ;)

(Still, an "interesting" example. Or am I missing something?)


Pascal
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <4ekwj99a8.fsf@franz.com>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@hanabi-air.shimizu.blume>, Matthias Blume
> <····@my.address.elsewhere> wrote:
> 
> > ···@jpl.nasa.gov (Erann Gat) writes:
> > 
> > > In article <··············@mycroft.actrix.gen.nz>, Paul Foley
> > > <···@below.invalid> wrote:
> > > 
> > > > On Thu, 06 Nov 2003 15:02:23 -0800, Erann Gat wrote:
> > > > 
> > > > > But you can hide things with macros too:
> > > > 
> > > > > ? (defmacro foo () `','#.(list nil))  ; Weehah!
> > > > > FOO
> > > > > ? (foo)
> > > > > (NIL)
> > > > > ? (setf (car (foo)) 1)
> > > > > 1
> > > > > ? (foo)
> > > > > (1)
> > > > > ? 
> > > > 
> > > > Hm.  Modifying constant structure is "illegal" though; better not try
> > > > that in code that's been through a file.
> > > 
> > > Where do you see constant structure?  That cell is freshly consed at
> > > read time.
> > 
> > Isn't *every* list that's being read by the reader freshly consed at
> > read time?
> 
> 1) No, not necessarily. The reader can reuse/merge list structure, and 2)
> (more important) that list isn't read by the reader, it's constructed by
> an explicit call to LIST.

Paul's original warning was for "code that's been through a file"
which I assume he means is thus going through a compile-file
operation.  Your example is clearly not in a file, and so is exempt
from that warning.  The warning still applies, though, against assuming
too much about the portability of your example to a file - if you were
to place the defmacro into a file and compile it, then coalescence
rules for compile-file apply and you _may_ end up with a constant which
is EQ to a literal.  Note that compile-file semantics apply not only to
EQ objects, but to objects that might be coalesced (which for objects
other than symbols and packages might be coalesced if they are
"similar" ... see 3.2.4.4)

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Paul Foley
Subject: Re: More static type fun.
Date: 
Message-ID: <m2wuactmwg.fsf@mycroft.actrix.gen.nz>
On Fri, 07 Nov 2003 05:23:55 GMT, Matthias Blume wrote:

>> Where do you see constant structure?  That cell is freshly consed at read time.

> Isn't *every* list that's being read by the reader freshly consed at
> read time?

Exactly.  I can't imagine how you'd go about telling the difference
between Erann's list and one constructed with QUOTE once it's been
read (you could look at the source as a string of characters, of
course, but you can't tell from looking at the resulting Lisp object).

>             I don't see how the above is different from simply saying

>    (defmacro foo () `','(nil))

> which in turn is probably equivalent to

>    (defmacro foo () '(nil))

Almost; it's actually (defmacro foo () ''(nil))

[the comma just makes the following quote go away]

-- 
Cogito ergo I'm right and you're wrong.                 -- Blair Houghton

(setq reply-to
  (concatenate 'string "Paul Foley " "<mycroft" '(··@) "actrix.gen.nz>"))
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m18yms40vb.fsf@tti5.uchicago.edu>
Paul Foley <···@below.invalid> writes:

> On Fri, 07 Nov 2003 05:23:55 GMT, Matthias Blume wrote:
> 
> >> Where do you see constant structure?  That cell is freshly consed at read time.
> 
> > Isn't *every* list that's being read by the reader freshly consed at
> > read time?
> 
> Exactly.  I can't imagine how you'd go about telling the difference
> between Erann's list and one constructed with QUOTE once it's been
> read (you could look at the source as a string of characters, of
> course, but you can't tell from looking at the resulting Lisp object)
> 
> >             I don't see how the above is different from simply saying
> 
> >    (defmacro foo () `','(nil))
> 
> > which in turn is probably equivalent to
> 
> >    (defmacro foo () '(nil))
> 
> Almost; it's actually (defmacro foo () ''(nil))

Oops. Yes, of course.
From: Jens Axel Søgaard
Subject: Re: More static type fun.
Date: 
Message-ID: <3faabd04$0$69946$edfadb0f@dread12.news.tele.dk>
Matthias Blume wrote:
> Jens Axel Søgaard <······@jasoegaard.dk> writes:

>>Just to play devils advocate:
>>
>>  [Assume this is a functional language]
>>
>>  There is no
>>  difference between an *function call* and its *return value*: I could
>>  have written the *return value* by hand and would have gotten the exact
>>  same program.

> Nice try.  The problem with this retort is that it is not true.  As
> has been discussed in a separate sub-thread, it is also not true for
> certain macro systems that are paired with, e.g., a module system --
> and, in fact, for precisely the same reasons: being able to access
> things that cannot be named using surface-level syntax.  So let me be
> a bit more careful: Macros that work purely at the level of the
> surface language by expanding macro calls into other (source-)code
> snippets are not (part of) an abstraction facility.(**)

Ok - I can see the difference.

> I don't know how PLT macros work in detail, but I guess that macro
> expansion takes place at the level of some "internal syntax" which is
> richer than what can be written down directly at the level of the
> surface language. (At least that's how my own implementation of this
> feature works.)  This corresponds to functions operating at the value
> level and not at the level of the language's syntax.

You probably know it, but perhaps others would like to read it:

     "Composable and Compilable Macros: You Want it When?".
     Matthew Flatt
     International Conference on Functional Programming (ICFP'2002). 2002.
     <http://www.cs.utah.edu/plt/publications/macromod.pdf>

-- 
Jens Axel Søgaard
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m28ymseugz.fsf@wireless-5-198-70.uchicago.edu>
Jens Axel Søgaard <······@jasoegaard.dk> writes:

> You probably know it, but perhaps others would like to read it:
> 
>      "Composable and Compilable Macros: You Want it When?".
>      Matthew Flatt
>      International Conference on Functional Programming (ICFP'2002). 2002.
>      <http://www.cs.utah.edu/plt/publications/macromod.pdf>

How could I possibly not?!

Matthias
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <aeb7ff58.0311070722.2cea218@posting.google.com>
Pascal Costanza <········@web.de> wrote in message news:<············@newsreader2.netcologne.de>...

> I think it is a much better idea to incorporate support for abstraction 
> boundaries that you can still always break intentionally. It should take 
> explicit steps to work around an abstraction boundary, but it shouldn't 
> be impossible. Otherwise you always risk that you drive yourself into a 
> dead end.

I think you touch on the real reason that lisp, and dynamic languages
in general, do not satisfy Matthias's personal definition of
abstraction. To Matthias, the hiding of details created by an
abstraction should be unbreakable. According to his preference, one
shouldn't be able to violate an abstraction, and the language should
enforce this barrier at the language level - that is, one shouldn't
have to make special efforts using gensym, or uninterning symbols,
etc., to make the abstraction barrier impenetrable.
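
For a concrete picture of "breakable only by explicit steps", here is a
sketch in Python rather than Lisp (class and attribute names are
invented for illustration): double-underscore attributes are
name-mangled, not hidden, so a client can still reach inside -- but
only through a visibly deliberate access.

```python
class Account:
    """Hypothetical library class with a nominally private field."""
    def __init__(self):
        self.__balance = 0          # mangled to _Account__balance

acct = Account()

# The polite route fails: outside the class, the mangled name is hidden.
try:
    acct.__balance
except AttributeError:
    pass                            # the abstraction boundary holds

# The explicit, greppable violation still works when you decide you must.
acct._Account__balance = 100
print(acct._Account__balance)       # prints 100
```

The barrier is advisory: honest code never trips over it, and deliberate
violations announce themselves in the source.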

I think, with you, that this sort of enforcement is a fundamental
violation of the common lisp philosophy that the programmer should be
trusted with the power to do whatever it is that he might need to in a
given situation. Experience has shown (not just my experience, but
that of many programmers) that having the freedom to move in any
direction is the only way to avoid being painted into undesirable
corners by one's language. For this reason, lisp trusts programmers
with the power to violate abstractions when they think it is
absolutely necessary.

Matthias, I believe, finds this power to violate abstractions,
undesirable. He comes down on the side of trusting the language
designers' enforcement mechanisms more than the programmer's judgment,
and, for many programmers, who aren't actually more clever than the
language designers, this is a good call. This is also quite consistent
with his preference for statically typed languages - again, trusting
the language designers' enforcement mechanisms over the programmer's
judgment.

I think most lispers come down on the side of trusting programmers
with the power to overrule the language designers' enforcement
mechanisms when they feel they need to. This works well, but only for
better programmers. And our preference in this regard is also quite
consistent with our preference for dynamic typing.

Since both types of languages give one the power to move in the other
direction, the only issue is where you want to set the initial
default. Static typing advocates want to trust the language designers
and constrain programmers by default. Dynamic typing advocates, by
default, want to trust programmers to know what they are doing.

Raf
From: Thant Tessman
Subject: Re: More static type fun.
Date: 
Message-ID: <boggvu$bh5$1@terabinaries.xmission.com>
Raffael Cavallaro wrote:

[...]

> I think, with you, that this sort of enforcement is a fundamental
> violation of the common lisp philosophy that the programmer should be
> trusted with the power to do whatever it is that he might need to in a
> given situation. Experience has shown (not just my experience, but
> that of many programmers) that having the freedom to move in any
> direction is the only way to avoid being painted into undesirable
> corners by one's language. For this reason, lisp trusts programmers
> with the power to violate abstractions when they think it is
> absolutely necessary. [...]

You sound like someone defending C++ against, say, Lisp.

-thant
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bohcf5$e4e$1@newsreader2.netcologne.de>
Thant Tessman wrote:

> Raffael Cavallaro wrote:
> 
> [...]
> 
>> I think, with you, that this sort of enforcement is a fundamental
>> violation of the common lisp philosophy that the programmer should be
>> trusted with the power to do whatever it is that he might need to in a
>> given situation. Experience has shown (not just my experience, but
>> that of many programmers) that having the freedom to move in any
>> direction is the only way to avoid being painted into undesirable
>> corners by one's language. For this reason, lisp trusts programmers
>> with the power to violate abstractions when they think it is
>> absolutely necessary. [...]
> 
> 
> You sound like someone defending C++ against, say, Lisp.

Common Lisp and C++ are indeed similar in this regard. That is, these 
languages are based on the belief that programmers know better than the 
language designers. For this reason, both support ways to code around 
intentional or accidental restrictions.


Pascal
From: Isaac Gouy
Subject: Re: More static type fun.
Date: 
Message-ID: <ce7ef1c8.0311071057.1cd2dbfa@posting.google.com>
·······@mediaone.net (Raffael Cavallaro) wrote in message news:<···························@posting.google.com>...
> Pascal Costanza <········@web.de> wrote in message news:<············@newsreader2.netcologne.de>...
-SNIP- 
> default. Static typing advocates want to trust the language designers
> and constrain programmers by default. Dynamic typing advocates, by
> default, want to trust programmers to know what they are doing.

http://pauillac.inria.fr/~xleroy/talks/icfp99.ps.gz
Slide 40 but don't miss the other slides ;-)
From: Joel Ray Holveck
Subject: Re: More static type fun.
Date: 
Message-ID: <87znf724ou.fsf@thor.piquan.org>
> I think, with you, that this sort of enforcement is a fundamental
> violation of the common lisp philosophy that the programmer should be
> trusted with the power to do whatever it is that he might need to in a
> given situation.

Craig Brozefsky once said something along the lines of, "I know when I
need to break abstraction, and I don't need some damn compiler yapping
at me like a stressed chihuahua when I do it!"

I think that languages should be about conferring power, not limiting
it.  If I ever saw a justifiable need to muck about with the internal
structure of a cons cell, I'd want the language to support it.  (As it
happens, I don't, so don't care that it doesn't.)

Cheers,
joelh

-- 
Joel Ray Holveck - ·····@piquan.org
   Fourth law of programming:
   Anything that can go wrong wi
sendmail: segmentation violation - core dumped
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bohcvl$f2k$1@newsreader2.netcologne.de>
Joel Ray Holveck wrote:

>>I think, with you, that this sort of enforcement is a fundamental
>>violation of the common lisp philosophy that the programmer should be
>>trusted with the power to do whatever it is that he might need to in a
>>given situation.
> 
> 
> Craig Brozefsky once said something along the lines of, "I know when I
> need to break abstraction, and I don't need some damn compiler yapping
> at me like a stressed chihuahua when I do it!"
> 
> I think that languages should be about conferring power, not limiting
> it. 

One of my favorite quotes is by Guy Steele and Gerald Sussman, and it 
goes like this:

"No amount of language design can _force_ a programmer to write clear 
programs. If the programmer's conception of the problem is badly 
organized, then his program will also be badly organized. The extent to 
which a programming language can help a programmer to organize his 
problem is precisely the extent to which it provides features 
appropriate to his problem domain. The emphasis should not be on 
eliminating "bad" language constructs, but on discovering or inventing 
helpful ones."

(in "Lambda - The Ultimate Imperative", AIM 353)


Pascal
From: Christopher C. Stacy
Subject: Re: More static type fun.
Date: 
Message-ID: <uad77cvjk.fsf@dtpq.com>
>>>>> On Fri, 07 Nov 2003 22:29:34 GMT, Joel Ray Holveck ("Joel") writes:
 Joel> I think that languages should be about conferring power, not limiting
 Joel> it.  If I ever saw a justifiable need to muck about with the internal
 Joel> structure of a cons cell, I'd want the language to support it.

I think you're looking for either SYS:%STORE-TAG-AND-POINTER, or
perhaps SYS:%P-STORE-CDR-CODE (if you meant cons cells specifically).
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3faf799f$1@news.unimelb.edu.au>
·······@mediaone.net (Raffael Cavallaro) writes:

>I think, with you, that this sort of enforcement is a fundamental
>violation of the common lisp philosophy that the programmer should be
>trusted with the power to do whatever it is that he might need to in a
>given situation. Experience has shown (not just my experience, but
>that of many programmers) that having the freedom to move in any
>direction is the only way to avoid being painted into undesirable
>corners by one's language.

That seems bogus to me.  I don't see how any amount of experience could
_ever_ show that a particular way of doing things was the _only_ way of
achieving a set goal.  Are you saying that you, and "many programmers",
have tried _every_ possible language design, either current or future,
and found all of them lacking, save one (the one which gives programmers
"the freedom to move in any direction")???

Furthermore, your statement definitely does not fit with _my_ experience.

>Matthias, I believe, finds this power to violate abstractions,
>undesirable. He comes down on the side of trusting the language
>designers' enforcement mechanisms more than the programmer's judgment,

Not at all.  The question is whether to trust the library implementor's
judgement or the library user's judgement.

Note also that as long as the source code is available, the programmer
always retains full power to break their own abstractions whenever they
want.  Language enforcement of abstraction just means that to do it,
the programmer may need to modify the source code for the library which
implements that abstraction.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Mario S. Mommer
Subject: Re: More static type fun.
Date: 
Message-ID: <fzsmkwjse2.fsf@cupid.igpm.rwth-aachen.de>
Fergus Henderson <···@cs.mu.oz.au> writes:
> >Matthias, I believe, finds this power to violate abstractions,
> >undesirable. He comes down on the side of trusting the language
> >designers' enforcement mechanisms more than the programmer's judgment,
> 
> Not at all.  The question is whether to trust the library implementor's
> judgement or the library user's judgement.

As a user of libraries, I want the power to disagree with the
implementor's judgement, thank you very much. I've used libraries
where the restrictions imposed were there only because the implementor

        A] Hadn't thought about that particular use.

        B] Believed, just because some book said so, that making
        restrictions is always good.

Remember that libraries are not written by gods. Developers of
libraries also make errors and/or fall prey to bad judgement when
making design decisions.

> Note also that as long as the source code is available, the programmer
> always retains full power to break their own abstractions whenever they
> want.  Language enforcement of abstraction just means that to do it,
> the programmer may need to modify the source code for the library which
> implements that abstraction.

If a programmer decides to break an abstraction, why do you think it
is a good thing to put obstacles in his way?

Up with this I shall not put.
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3faf9461$1@news.unimelb.edu.au>
Mario S. Mommer <········@yahoo.com> writes:

>Fergus Henderson <···@cs.mu.oz.au> writes:
>> >Matthias, I believe, finds this power to violate abstractions,
>> >undesirable. He comes down on the side of trusting the language
>> >designers' enforcement mechanisms more than the programmer's judgment,
>> 
>> Not at all.  The question is whether to trust the library implementor's
>> judgement or the library user's judgement.
>
>As a user of libraries, I want the power to disagree with the
>implementor's judgement, thank you very much.

As a maintainer of code, I want the power to update the implementation
of an abstraction without having the world fall down because some lazy
hacker violated abstraction boundaries, thank you very much.

>I've used libraries where the restrictions imposed were there only because
>the implementor
>
>        A] Hadn't thought about that particular use.
>
>        B] Just because it was said in a book that making restrictions
>        is always good.

So modify the library source.

>> Note also that as long as the source code is available, the programmer
>> always retains full power to break their own abstractions whenever they
>> want.  Language enforcement of abstraction just means that to do it,
>> the programmer may need to modify the source code for the library which
>> implements that abstraction.
>
>If a programmer decides to break an abstraction, why do you think it
>is a good thing to put obstacles in his way?

Because it makes maintenance and debugging easier.  Programmers can rely
on abstractions keeping their invariants, which makes debugging easier.
They can also modify implementations of abstractions without having to
examine all the rest of the code to see if it will break.

Obviously if you have the library source code available, then breaking
the abstraction is trivial (e.g. s/private/public/g).  So essentially
this judgement comes down to ease of development *when using libraries
whose source you can't modify* versus ease of maintenance.  I know which
I'd choose!

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Lauri Alanko
Subject: Re: More static type fun.
Date: 
Message-ID: <boo4v3$5du$1@la.iki.fi>
Fergus Henderson <···@cs.mu.oz.au> virkkoi:
> Obviously if you have the library source code available, then breaking
> the abstraction is trivial (e.g. s/private/public/g).  So essentially
> this judgement comes down to ease of development *when using libraries
> whose source you can't modify* versus ease of maintenance.  I know which
> I'd choose!

And in those cases when the source _isn't_ available, the vendor of the
library probably doesn't want the users to know anything about its
innards anyway.


Lauri Alanko
··@iki.fi
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <boo6ma$uja$1@f1node01.rhrz.uni-bonn.de>
Fergus Henderson wrote:
> Mario S. Mommer <········@yahoo.com> writes:
> 
> 
>>Fergus Henderson <···@cs.mu.oz.au> writes:
>>
>>>>Matthias, I believe, finds this power to violate abstractions,
>>>>undesirable. He comes down on the side of trusting the language
>>>>designers' enforcement mechanisms more than the programmer's judgment,
>>>
>>>Not at all.  The question is whether to trust the library implementor's
>>>judgement or the library user's judgement.
>>
>>As a user of libraries, I want the power to disagree with the
>>implementor's judgement, thank you very much.
> 
> 
> As a maintainer of code, I want the power to update the implementation
> of an abstraction without having the world fall down because some lazy
> hacker violated abstraction boundaries, thank you very much.

Of course, you can still do that. Stick to the documented interface. If 
a client of your library breaks abstraction boundaries, it's the 
responsibility of the client's programmer to update his source code 
properly in the long run.

> Obviously if you have the library source code available, then breaking
> the abstraction is trivial (e.g. s/private/public/g). 

...but then it's even harder to get your code up and running again when 
you distribute a new version of your library, because all local changes 
to the library code need to be reapplied, even those that wouldn't have 
looked any different. When breaking abstraction boundaries is allowed, 
it's clear that only client code needs to change.

> So essentially
> this judgement comes down to ease of development *when using libraries
> whose source you can't modify* versus ease of maintenance.  I know which
> I'd choose!

Make sure that you don't dismiss the opportunity to learn from working 
client code that happens to break initial abstraction boundaries how a 
library can be improved.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fafa804$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:
>Fergus Henderson wrote:
>>Mario S. Mommer <········@yahoo.com> writes:
>>>Fergus Henderson <···@cs.mu.oz.au> writes:
>>>
>>>>The question is whether to trust the library implementor's
>>>>judgement or the library user's judgement.
>>>
>>>As a user of libraries, I want the power to disagree with the
>>>implementor's judgement, thank you very much.
>> 
>> As a maintainer of code, I want the power to update the implementation
>> of an abstraction without having the world fall down because some lazy
>> hacker violated abstraction boundaries, thank you very much.
>
>Of course, you can still do that. Stick to the documented interface. If 
>a client of your library breaks abstraction boundaries, it's the 
>responsibility of the client's programmer to update his source code 
>properly in the long run.

That's absolutely no help in the very common case when I am the
maintenance programmer for the client code too -- that is, when the
library in question is just a module within the application that I'm
maintaining.

Even in the case of a genuine third-party library, in practice this
approach doesn't work well.  My clients will complain that their programs
broke, and even though it is arguably their fault, I will get a lot of
the blame.

>> Obviously if you have the library source code available, then breaking
>> the abstraction is trivial (e.g. s/private/public/g). 
>
>...but then it's even harder to get your code up and running again in 
>case you distribute a new version of your library because all changes to 
>the library code need to be reapplied again, even those that wouldn't 
>have looked different.

That's what systems for version control and/or configuration management,
e.g. CVS, are for.

In the case where the library implementation of an abstraction does
change, it is probably much easier to get the code up and running in
systems that enforce encapsulation, since the version control system
will flag the conflict, so the programmer will have a clue as to what
has gone wrong.  In systems that don't enforce encapsulation, the program
will just misbehave when the library is upgraded, and the programmer
may have no idea what is causing it.

>> So essentially
>> this judgement comes down to ease of development *when using libraries
>> whose source you can't modify* versus ease of maintenance.  I know which
>> I'd choose!
>
>Make sure that you don't dismiss the opportunity to learn from working 
>client code that happens to break initial abstraction boundaries how a 
>library can be improved.

Of course.  That's why the programmer who modifies the source code to a
third-party library should send those changes back to the upstream source.
It's not an argument in favour of languages which don't enforce encapsulation.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <boobqc$thu$1@f1node01.rhrz.uni-bonn.de>
Fergus Henderson wrote:
> Pascal Costanza <········@web.de> writes:
> 
>>Fergus Henderson wrote:
>>
>>>Mario S. Mommer <········@yahoo.com> writes:
>>>
>>>>Fergus Henderson <···@cs.mu.oz.au> writes:
>>>>
>>>>
>>>>>The question is whether to trust the library implementor's
>>>>>judgement or the library user's judgement.
>>>>
>>>>As a user of libraries, I want the power to disagree with the
>>>>implementor's judgement, thank you very much.
>>>
>>>As a maintainer of code, I want the power to update the implementation
>>>of an abstraction without having the world fall down because some lazy
>>>hacker violated abstraction boundaries, thank you very much.
>>
>>Of course, you can still do that. Stick to the documented interface. If 
>>a client of your library breaks abstraction boundaries, it's the 
>>responsibility of the client's programmer to update his source code 
>>properly in the long run.
> 
> That's absolutely no help in the very common case when I am the
> maintenance programmer for the client code too -- that is, when the
> library in question is just a module within the application that I'm
> maintaining.

If something isn't helpful, don't use it. Breaking abstraction 
boundaries is just another option in your bag of options.

> Even in the case of a genuine third-party library, in practice this
> approach doesn't work well.  My clients will complain that their programs
> broke, and even though it is arguably their fault, I will get a lot of
> the blame.

No, because they at least got warnings and had to make a conscious 
decision to break abstraction boundaries.

>>>Obviously if you have the library source code available, then breaking
>>>the abstraction is trivial (e.g. s/private/public/g). 
>>
>>...but then it's even harder to get your code up and running again in 
>>case you distribute a new version of your library because all changes to 
>>the library code need to be reapplied again, even those that wouldn't 
>>have looked different.
> 
> That's what systems for version control and/or configuration management,
> e.g. CVS, are for.
> 
> In the case where the library implementation of an abstraction does
> change, it is probably much easier to get the code up and running in
> systems that enforce encapsulation, since the version control system
> will flag the conflict, so the programmer will have a clue as to what
> has gone wrong.  In systems that don't enforce encapsulation, the program
> will just misbehave when the library is upgraded, and the programmer
> may have no idea what is causing it.

No, you still get warnings.

>>>So essentially
>>>this judgement comes down to ease of development *when using libraries
>>>whose source you can't modify* versus ease of maintenance.  I know which
>>>I'd choose!
>>
>>Make sure that you don't dismiss the opportunity to learn from working 
>>client code that happens to break initial abstraction boundaries how a 
>>library can be improved.
> 
> Of course.  That's why the programmer who modifies the source code to a
> third-party library should send those changes back to the upstream source.
> It's not an argument in favour of languages which don't enforce encapsulation.

You're still dismissing the additional opportunity to learn from 
programmers who don't want to fiddle with your code.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fafc3a0$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>Fergus Henderson wrote:
>> Pascal Costanza <········@web.de> writes:
>> 
>>>Fergus Henderson wrote:
>>>
>>>>As a maintainer of code, I want the power to update the implementation
>>>>of an abstraction without having the world fall down because some lazy
>>>>hacker violated abstraction boundaries, thank you very much.
>>>
>>>Of course, you can still do that. Stick to the documented interface. If 
>>>a client of your library breaks abstraction boundaries, it's the 
>>>responsibility of the client's programmer to update his source code 
>>>properly in the long run.
>> 
>> That's absolutely no help in the very common case when I am the
>> maintenance programmer for the client code too -- that is, when the
>> library in question is just a module within the application that I'm
>> maintaining.
>
>If something isn't helpful, don't use it. Breaking abstraction 
>boundaries is just another option in your bag of options.

But it's not enough just for _me_ to not use it.  In order to avoid
the problem described above, I need to make sure that no lazy hacker in
my team ever used it, including the lazy hackers who were on the team
before I even joined it!

>> Even in the case of a genuine third-party library, in practice this
>> approach doesn't work well.  My clients will complain that their programs
>> broke, and even though it is arguably their fault, I will get a lot of
>> the blame.
>
>No, because they have gotten at least warnings and needed to make a 
>conscious decision to break abstraction boundaries.

That was months ago, and they've long since forgotten.
The lazy hacker in question may even have moved on to a different job,
and it may be some other hacker on the client's team who blames me.
Rest assured, if it worked with the old version of my software, and
breaks with the new version, at least some of them will blame me,
even if it's not my fault.

>> In systems that don't enforce encapsulation, the program
>> will just misbehave when the library is upgraded, and the programmer
>> may have no idea what is causing it.
>
>No, you still get warnings.

I guess that depends on exactly what kind of mechanism is used to break
abstraction, and on what warnings the compiler issues.  In the dynamically
typed languages that I'm familiar with, you wouldn't get any warnings.

If you have a language in which it is possible to violate encapsulation,
but you are guaranteed to get warnings in that case, that is a somewhat
different situation, especially if it is possible to turn those warnings
into errors.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <booln8$iuq$1@newsreader2.netcologne.de>
Fergus Henderson wrote:

> Pascal Costanza <········@web.de> writes:
> 
>>Fergus Henderson wrote:
>>
>>>Pascal Costanza <········@web.de> writes:

>>If something isn't helpful, don't use it. Breaking abstraction 
>>boundaries is just another option in your bag of options.
> 
> But it's not enough just for _me_ to not use it.  In order to avoid
> the problem described above, I need to make sure that no lazy hacker in
> my team ever used it, including the lazy hackers who were on the team
> before I even joined it!

Oh well - why is it that static typers tend to sound like control freaks? ;)

>>>Even in the case of a genuine third-party library, in practice this
>>>approach doesn't work well.  My clients will complain that their programs
>>>broke, and even though it is arguably their fault, I will get a lot of
>>>the blame.
>>
>>No, because they have gotten at least warnings and needed to make a 
>>conscious decision to break abstraction boundaries.
> 
> That was months ago, and they've long since forgotton. 
> The lazy hacker in question may even have moved on to a different job,
> and it may be some other hacker on the client's team who blames me.

Lazy hackers can do damage with any language.

>>>In systems that don't enforce encapsulation, the program
>>>will just misbehave when the library is upgraded, and the programmer
>>>may have no idea what is causing it.
>>
>>No, you still get warnings.
> 
> I guess that depends on exactly what kind of mechanism is used to break
> abstraction, and on what warnings the compiler issues.  In the dynamically
> typed languages that I'm familiar with, you wouldn't get any warnings.

Then check out the better ones.

> If you have a language in which it is possible to violate encapsulation,
> but you are guaranteed to get warnings in that case, that is a somewhat
> different situation, especially if it is possible to turn those warnings
> into errors.

Exactly. See for example 
http://www.lispworks.com/reference/HyperSpec/Body/v_break_.htm#STbreak-on-signalsST

You could, for example, require that in production code, 
*break-on-signals* is set to a type descriptor tailored to the needs of 
your project. This would not be totally unlike the requirement that in 
production code, abstraction boundaries are not allowed to be broken. 
The differences are a) that you can break such boundaries during 
development, or when you need to fix bugs in the last moment, or similar 
situations; and b) that you can adapt the policy to the actual needs of 
a specific project and don't need to either stick to a possibly 
inadequate policy devised by some language/library designer who doesn't 
know your project at all, or else completely switch language/libraries.
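
A minimal sketch of such a policy (ABSTRACTION-VIOLATION is an invented,
project-specific condition type here; a real project would define its own):

```lisp
;; Sketch only: ABSTRACTION-VIOLATION is a hypothetical condition type.
(define-condition abstraction-violation (warning) ())

;; Production policy: enter the debugger whenever such a condition is
;; signalled, before any handler gets a chance to run.
(setf *break-on-signals* 'abstraction-violation)

;; Code that breaks an abstraction boundary could announce itself with:
;;   (signal 'abstraction-violation)
```

With *BREAK-ON-SIGNALS* set this way, any SIGNAL or WARN of a matching
condition drops into the debugger instead of being handled silently, which
is the kind of project-tailored enforcement described above.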


Pascal
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fafebd3$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>Fergus Henderson wrote:
>
>> I guess that depends on exactly what kind of mechanism is used to break
>> abstraction, and on what warnings the compiler issues.  In the dynamically
>> typed languages that I'm familiar with, you wouldn't get any warnings.
>
>Then check out the better ones.

Perhaps you could give me an example of source code which violates
an abstraction, and the kind of warnings you get.

(Somehow I think googling for "better dynamically typed languages" is not
going to give me anything helpful ;-)

>> If you have a language in which it is possible to violate encapsulation,
>> but you are guaranteed to get warnings in that case, that is a somewhat
>> different situation, especially if it is possible to turn those warnings
>> into errors.
>
>Exactly. See for example 
>http://www.lispworks.com/reference/HyperSpec/Body/v_break_.htm#STbreak-on-signalsST

OK, so you can turn warnings into errors.  That's the easy bit.
Are you actually guaranteed to get the warnings in the first place?

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bop621$h1t$1@newsreader2.netcologne.de>
Fergus Henderson wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>Fergus Henderson wrote:
>>
>>
>>>I guess that depends on exactly what kind of mechanism is used to break
>>>abstraction, and on what warnings the compiler issues.  In the dynamically
>>>typed languages that I'm familiar with, you wouldn't get any warnings.
>>
>>Then check out the better ones.
> 
> 
> Perhaps you could give me an example of source code which violates
> an abstraction, and the kind of warnings you get.
> 
> (Somehow I think googling for "better dynamically typed languages" is not
> going to give me anything helpful ;-)

:-)

OK, here is a rough sketch that uses a different way to do it.

First, I define a package and a class that lives inside that package.

 >>>

(defpackage boundary
   (:use common-lisp)
   (:export person name)) ; <- the exported symbols

(in-package boundary)

(defclass person ()
   ((iname :accessor name :initarg :name)))

<<<

INAME is not exported, so this means that you can only access the INAME 
slot via the NAME accessor.

So here is a transcript of a sample session inside the package CL-USER.

 >>>

CL-USER 1 > (use-package 'boundary)
T

CL-USER 2 > (setf p (make-instance 'person :name "Pascal"))
#<PERSON 1009EF03>

CL-USER 3 > (name p)
"Pascal"

CL-USER 4 > (slot-value p 'iname)

Error: The slot INAME is missing from #<PERSON 10ECF9F7> (of class 
#<STANDARD-CLASS PERSON 10FC9C8F>), when reading the value.
   1 (abort) Return to level 0.
   2 Return to top loop level 0.

Type :b for backtrace, :c <option number> to proceed,  or :? for other 
options

CL-USER 5 : 1 > :c 2

CL-USER 6 > (slot-value p 'boundary::iname)
"Pascal"

CL-USER 7 > (shadow 'slot-value)
T

CL-USER 8 > (slot-value p 'boundary::iname)

Error: Undefined operator SLOT-VALUE in form (SLOT-VALUE P (QUOTE 
BOUNDARY::INAME)).
   1 (continue) Try invoking SLOT-VALUE again.
   2 Return some values from the form (SLOT-VALUE P (QUOTE 
BOUNDARY::INAME)).
   3 Try invoking COMMON-LISP:SLOT-VALUE with the same arguments.
   4 Set the symbol-function of SLOT-VALUE to the symbol-function of 
COMMON-LISP:SLOT-VALUE.
   5 Try invoking something other than SLOT-VALUE with the same arguments.
   6 Set the symbol-function of SLOT-VALUE to another function.
   7 Set the macro-function of SLOT-VALUE to another function.
   8 (abort) Return to level 0.
   9 Return to top loop level 0.

Type :b for backtrace, :c <option number> to proceed,  or :? for other 
options

CL-USER 9 : 1 > :c 9

CL-USER 10 >

<<<

Notes:

1 > tells the CL-USER package to import all the public symbols from 
BOUNDARY.

2 > creates an instance of class PERSON and stores it in variable P.

3 > calls the reader (accessor) NAME for the slot INAME.

4 > SLOT-VALUE is another way to access slots in objects, without going 
through an accessor method. Since INAME is not among the exported 
symbols of BOUNDARY, this attempt fails.

6 > BOUNDARY::INAME is a way to break the abstraction boundary of a 
package and access an internal symbol. Now we succeed in reading the slot 
without going through an accessor method.

7 > This line tells the CL-USER package to replace the SLOT-VALUE symbol 
inherited from COMMON-LISP with a fresh symbol SLOT-VALUE that now has 
CL-USER as its home package. (I don't explain the details of home 
packages here.)

I do this because I would like to tell the system not to allow direct 
slot accesses anymore. In other words, I would like to enforce my 
abstraction boundaries.

8 > Voila, I cannot access INAME anymore via SLOT-VALUE.


As I said above, this is a rough sketch. You would still be able to call 
(CL:SLOT-VALUE P 'BOUNDARY::INAME), so it's not 100% safe. But you can 
do more than that. For example, you could use an uninterned symbol 
instead of INAME, so no one would be able to access that slot anymore, etc.
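
A sketch of that uninterned-symbol variant (hypothetical; it redefines the
PERSON class from above with a slot name that outside code cannot write down):

```lisp
;; The slot name #:INAME is uninterned: it is a fresh symbol known only
;; to this DEFCLASS form, so no (slot-value p '...) expression written
;; elsewhere can name the slot. The exported accessor NAME remains the
;; only access path.
(defclass person ()
  ((#:iname :accessor name :initarg :name)))
```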


Pascal
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fb06533$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:
>>>Fergus Henderson wrote:
>>>
>>>>I guess that depends on exactly what kind of mechanism is used to break
>>>>abstraction, and on what warnings the compiler issues.  In the dynamically
>>>>typed languages that I'm familiar with, you wouldn't get any warnings.
>
>OK, here is a rough sketch that uses a different way to do it.
>
>First, I define a package and a class that lives inside that package.
...
>INAME is not exported, so this means that you can only access the INAME 
>slot via the NAME accessor.
...
>CL-USER 6 > (slot-value p 'boundary::iname)
>"Pascal"

Ah, so I don't get abstraction safety by default.  I have to explicitly
enable it.  Ugh!

>CL-USER 7 > (shadow 'slot-value)
>T

Now it's enabled, so I feel a bit better...

>You would still be able to call 
>(CL:SLOT-VALUE P 'BOUNDARY::INAME), so it's not 100% safe.

... but my hopes are again dashed!

Unless there is a very strong cultural taboo against using CL:SLOT-VALUE,
this "(shadow 'slot-value)" operation doesn't seem to have bought me
anything.

>But you can 
>do more than that. For example, you could use an uninterned symbol 
>instead of INAME, so no one would be able to access that slot anymore, etc.

But doing all that would be a lot of bother.  I'll bet almost no-one
does that in practice.  And probably you'll tell me that even if I go
to all that bother, it's still not going to be 100% safe.

I think that squarely falls in the category of "No really good encapsulation
facilities".

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Jon S. Anthony
Subject: Re: More static type fun.
Date: 
Message-ID: <m3n0b2q5kf.fsf@rigel.goldenthreadtech.com>
Fergus Henderson <···@cs.mu.oz.au> writes:

<...>

> >But you can 
> >do more than that. For example, you could use an uninterned symbol 
> >instead of INAME, so no one would be able to access that slot anymore, etc.
> 
> But doing all that would be a lot of bother.  I'll bet almost no-one
> does that in practice.  And probably you'll tell me that even if I go
> to all that bother, it's still not going to be 100% safe.

Come on.  Nothing is "100% safe".  You know that, I know that, and
most everyone else does as well.


> I think that squarely falls in the category of "No really good
> encapsulation facilities".

This (and some of your other comments) are the sort of _claims_ that
static type people like to make.  I should know, I used to be one of
them myself, making the same or nearly the same sorts of comments. I
gave this up when I started to keep track of what actually happens in
practice with my own work and those I have hired and worked with.
Ironically I originally started to do this as a way of getting some
good (albeit, small sampling, probably skewed sampling, etc) evidence
in _support_ of these claims.  Imagine my surprise when it turned out
that I had been wrong all along...

/Jon
From: Christopher C. Stacy
Subject: Re: More static type fun.
Date: 
Message-ID: <ubrri7nbt.fsf@dtpq.com>
More Features, Less Crime
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <boqct6$4p8$1@f1node01.rhrz.uni-bonn.de>
Fergus Henderson wrote:
> Pascal Costanza <········@web.de> writes:
> 
>>>>Fergus Henderson wrote:
>>>>
>>>>
>>>>>I guess that depends on exactly what kind of mechanism is used to break
>>>>>abstraction, and on what warnings the compiler issues.  In the dynamically
>>>>>typed languages that I'm familiar with, you wouldn't get any warnings.
>>
>>OK, here is a rough sketch that uses a different way to do it.
>>
>>First, I define a package and a class that lives inside that package.
> 
> ...
> 
>>INAME is not exported, so this means that you can only access the INAME 
>>slot via the NAME accessor.
> 
> ...
> 
>>CL-USER 6 > (slot-value p 'boundary::iname)
>>"Pascal"
> 
> 
> Ah, so I don't get abstraction safety by default.  I have to explicitly
> enable it.  Ugh!

That's wrong. I needed to export PERSON and NAME from package BOUNDARY 
in order to make them visible by default. The notation BOUNDARY::INAME 
clearly indicates to a Lisp programmer that an internal symbol is 
accessed. Normally, you shouldn't do this. That language feature (double 
colon notation) is explicitly provided only to allow for breaking of 
abstraction boundaries. It's not something a Lisp programmer uses by 
default.

Likewise, it's standard practice not to go through SLOT-VALUE to access 
fields but to use accessor methods. So in this example we actually have 
two indications that this is exceptional code.

> 
> 
>>CL-USER 7 > (shadow 'slot-value)
>>T
> 
> 
> Now it's enabled, so I feel a bit better...
> 
> 
>>You would still be able to call 
>>(CL:SLOT-VALUE P 'BOUNDARY::INAME), so it's not 100% safe.
> 
> 
> ... but my hopes are again dashed!
> 
> Unless there is a very strong cultural taboo against using CL:SLOT-VALUE,
> this "(shadow 'slot-value)" operation doesn't seem to have bought me
> anything.

My example didn't intend to illustrate how to _prevent_ someone from 
using low-level features to break abstraction boundaries. This doesn't 
fit the Lisp mindset. The example illustrates how to _detect_ the use of 
such features.

It's extremely unlikely that someone would do all three of a) use 
CL:SLOT-VALUE instead of SLOT-VALUE, b) use SLOT-VALUE at all and c) use 
a non-exported symbol to access a field.

Here is another, even simpler approach to check whether someone broke 
abstraction boundaries: just grep for "::" in the source code. ;)
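
The grep idea could even stay inside Lisp; here is a crude sketch (a purely
textual check, so it will also flag "::" occurring inside strings or comments):

```lisp
;; Report every line of a source file that uses the internal-symbol
;; syntax "::" -- the double-colon notation that marks a deliberate
;; break of a package's abstraction boundary.
(defun report-double-colons (pathname)
  (with-open-file (in pathname)
    (loop for line = (read-line in nil)
          for lineno from 1
          while line
          when (search "::" line)
            do (format t "~A:~D: ~A~%" pathname lineno line))))
```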

>>But you can 
>>do more than that. For example, you could use an uninterned symbol 
>>instead of INAME, so no one would be able to access that slot anymore, etc.
> 
> But doing all that would be a lot of bother.  I'll bet almost no-one
> does that in practice.

Yes, because no one is that paranoid. It doesn't cause problems. At 
least, I have never heard anyone say "gee, if we only had a way to 
enforce 100% safe abstraction boundaries, our software would have 
succeeded".

Don't try to fit the static typing mindset onto a dynamic language. 
Dynamic languages don't enforce abstraction boundaries by default for 
good reasons. People who use dynamic languages don't want 100% safe 
boundaries. I am only trying to illustrate how one could proceed if they 
wanted to make sure that abstraction boundaries aren't violated as a 
final step of a software development process. Until then, everyone 
should be free to use the appropriate tools.

If you want 100% safety from the start, don't use a dynamic language.

> And probably you'll tell me that even if I go
> to all that bother, it's still not going to be 100% safe.

ANSI Common Lisp doesn't provide a way to access uninterned symbols 
unless you already have a reference to them. Some Common Lisp 
implementations do, however, provide low-level access to uninterned symbols.

> I think that squarely falls in the category of "No really good encapsulation
> facilities".

Good for what purposes? The abstraction boundaries Common Lisp provides 
seem to work in practice. That's all that matters.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Coby Beck
Subject: Re: More static type fun.
Date: 
Message-ID: <bopmbu$2in1$1@otis.netspace.net.au>
"Fergus Henderson" <···@cs.mu.oz.au> wrote in message
···············@news.unimelb.edu.au...
> Pascal Costanza <········@web.de> writes:
> >If something isn't helpful, don't use it. Breaking abstraction
> >boundaries is just another option in your bag of options.
>
> But it's not enough just for _me_ to not use it.  In order to avoid
> the problem described above, I need to make sure that no lazy hacker in
> my team ever used it, including the lazy hackers who were on the team
> before I even joined it!

I submit that there exists no language feature or combination of language
features that can force "lazy hackers" to write secure, maintainable,
bug-free code.  In my view any case made for language feature 'X' that tries
to show how it forces you to write better code is not only unconvincing, but
is its own counter-argument.

Give me the damn rope, if I hang myself, I promise I will not come crying to
you!

Cheers,

-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")

PS.  (Unless it has a big red "Disable Feature" button, this in my view is
the best of both worlds)
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fb06eae$1@news.unimelb.edu.au>
"Coby Beck" <·····@mercury.bc.ca> writes:

>Give me the damn rope, if I hang myself, I promise I will not come crying to
>you!

If you are writing programs all alone, and no-one else will ever need to
maintain them, that may be a reasonable request.

But if you're writing programs in a team, your team mates may reasonably
complain, because

	(a) when you hang yourself, _they_ have to come clean up the mess

	(b) even if you don't hang yourself, they nevertheless will still
	    find that they keep tripping up on the rope.
	    (The mere _presence_ of features which break abstraction makes
	    maintenance more difficult, even if those features are not used!)

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Coby Beck
Subject: Re: More static type fun.
Date: 
Message-ID: <boqi58$c1u$1@otis.netspace.net.au>
"Fergus Henderson" <···@cs.mu.oz.au> wrote in message
···············@news.unimelb.edu.au...
> "Coby Beck" <·····@mercury.bc.ca> writes:
>
> >Give me the damn rope, if I hang myself, I promise I will not come crying
to
> >you!
>
> If you are writing programs all alone, and no-one else will ever need to
> maintain them, that may be a reasonable request.

You can not force me to write good code no matter how much protection you
build into your language.  Lack of comments, badly named variables, cut and
paste 10 times but fix the bug in only 8 instances and the poor design that
leads to this, scatter-brained spaghetti code, lack of exception handling,
misunderstood requirements, laziness, *these* are the enemies, not
flexibility, power and freedom.

> But if you're writing programs in a team, your team mates may reasonably
> complain, because


I see no difference between coding in a team and coding alone frankly, when
it comes to good coding practices.  I find going back to things I wrote two
months ago remarkably similar to maintaining someone else's code (someone of
remarkable talent of course ;)  -- (he says trying to lighten the tone...)

> (a) when you hang yourself, _they_ have to come clean up the mess

And I hope they bring an equally long rope.

> (b) even if you don't hang yourself, they nevertheless will still
>     find that they keep tripping up on the rope.

FUD.  (I love that acronym)

>     (The mere _presence_ of features which break abstraction makes
>     maintenance more difficult, even if those features are not used!)

This is only a feeling you (and many others) have.  I feel it is exactly the
opposite problem.  Besides, if *you* make a mess (it's not always my
fault!), I'll have a *much* easier time cleaning it up if there are no
artificial barriers put in my way by well meaning but self-important
language designers who assumed there was nothing more in heaven and earth
than was dreamt of in their philosophy.

I submit that there exists no language feature or combination of language
features that can force "lazy hackers" to write secure, maintainable,
bug-free code.  In my view any case made for language feature 'X' that
claims it forces you to write better code is not only unconvincing, but is
its own counter-argument.

-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
From: Tayss
Subject: Re: More static type fun.
Date: 
Message-ID: <5627c6fa.0311110816.533c1431@posting.google.com>
Out of curiosity, are there any sufficiently powerful static type
systems out there that allow one to specify something like "x belongs
to type will-not-result-in-static-type-error"?  This sounds like a
perfectly good type, but I am not familiar enough with current static
systems.

The big win is that it can then fit in well with lisp's philosophy of
programmer versatility, by making such a type default.
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bor3li$5e5$2@grizzly.ps.uni-sb.de>
Tayss wrote:
> Out of curiosity, are there any sufficiently powerful static type
> systems out there that allow one to specify something like "x belongs
> to type will-not-result-in-static-type-error"?  This sounds like a
> perfectly good type, but I am not familiar enough with current static
> systems.

Yes, because in a language with a static type system, all expressions would 
belong to such a type, by definition.

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Tayss
Subject: Re: More static type fun.
Date: 
Message-ID: <5627c6fa.0311111800.1e9423fc@posting.google.com>
Andreas Rossberg <········@ps.uni-sb.de> wrote in message news:<············@grizzly.ps.uni-sb.de>...
> Tayss wrote:
> > Out of curiosity, are there any sufficiently powerful static type
> > systems out there that allow one to specify something like "x belongs
> > to type will-not-result-in-static-type-error"?  This sounds like a
> > perfectly good type, but I am not familiar enough with current static
> > systems.
> 
> Yes, because in a language with a static type system, all expressions would 
> belong to such a type, by definition.

In that case, this can become a default type in certain languages
whose features otherwise tend to oppose static typing.  I don't then
see what all the argument is about.  (By choosing not to specify a
type, they specify type will-not-result-in-static-type-error, just
with less work.  Everyone wins.)

I wonder why this case hasn't been made to lispers.  Common lispers
appear to resist any paradigm that a) breaks with cl's tradition of
flexibility and b) locks users into its view of the world. 
Particularly, I think most lispers would ask, "Can this fit within a
language that normally allows me to defer decisions to when I think
most appropriate?  I need to juggle a million different techniques
which each promise security, reliability, etc; and one technique just
can't push the others out of its way."  If this question is answered
badly, it would just push away more people than if it hadn't been
answered at all.

But you will find that lispers are very receptive to great paradigms
that are willing to coexist naturally.
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <botb6h$kae$1@grizzly.ps.uni-sb.de>
Tayss wrote:
>> > Out of curiosity, are there any sufficiently powerful static type
>> > systems out there that allow one to specify something like "x belongs
>> > to type will-not-result-in-static-type-error"?  This sounds like a
>> > perfectly good type, but I am not familiar enough with current static
>> > systems.
>> 
>> Yes, because in a language with a static type system, all expressions
>> would belong to such a type, by definition.
> 
> In that case, this can become a default type in certain languages
> whose features otherwise tend to oppose static typing.

I think you missed the bit of irony in my answer. ;-)

What you want - at least the way you formulated it - exists in Lisp already. 
You see, Lisp is a statically typed language - it just happens to have only 
one universal type. If you want, you can call that type 
"will-not-result-in-static-type-error". Unfortunately, that does not buy 
you anything...

So let me assume you rather meant something like "Can ordinary static type 
systems express a universal type?", i.e. a type that fits everywhere? Yes, 
it is trivial, it would be the type "forall T.T".

OTOH, allowing the user to arbitrarily state this type would make the type 
system unsound, i.e. you'd lose all guarantees the type system can make 
and hence almost all of its advantages. As Pascal pointed out, something 
like that has been done (many times, in many different ways, in fact) and 
is usually called soft typing.

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Tayss
Subject: Re: More static type fun.
Date: 
Message-ID: <5627c6fa.0311121139.754e97fe@posting.google.com>
Andreas Rossberg <········@ps.uni-sb.de> wrote in message news:<············@grizzly.ps.uni-sb.de>...
> I think you missed the bit of irony in my answer. ;-)

No, you missed the sarcasm in mine. ;)  I noticed that most of the
discussion was getting extraordinarily repetitious, at least the parts
I sampled, and I wanted to hint that stalemate would remain until
static typing proponents at least appeared to accept that sane
multiparadigm languages would /never/ willingly accept a paradigm
which claims prominence over other great techniques.  At least not on
the basis of points raised in this thread.

All the points mentioned here could have been found in the first
chapter of Pierce's _Types and Programming Languages_.  In fact, you
claim:

> OTOH, allowing the user to arbitrarily state this type would make the type 
> system unsound, i.e. you'd lose all guarantees the type system can make 
> and hence almost all of its advantages.

and this confirms my belief that the static typing world has no
coherent vocabulary; different people have different definitions of
'unsound.'  In fact, as Pierce argues, "The term 'safe language' is,
unfortunately, even more contentious than 'type system.'  Although
people generally feel they know one when they see it, their notions of
exatly what constitutes language safety are strongly influenced by the
language community to which they belong."

 
> What you want - at least the way you formulated it - exists in Lisp already. 
> You see, Lisp is a statically typed language - it just happens to have only 
> one universal type. If you want, you can call that type 
> "will-not-result-in-static-type-error". Unfortunately, that does not buy 
> you anything...

Sure it buys me things.  Let me use alternate terms.  Lisp is a
dynamically checked language, and strongly typed.  There are primitive
and programmer-defined types; and there are no ways to subvert the
type system like one can with C buffer overflows.


> OTOH, allowing the user to arbitrarily state this type would make the type 
> system unsound, i.e. you'd lose all guarantees the type system can make 
> and hence almost all of its advantages. 

And none of its disadvantages.  Read Bird/Wadler's functional
programming book.  In section 1.3, they consistently call strong
typing a "Discipline."  They say, "Strong typing is important because
adherence to the discipline..."  Well, lispers can be quite
disciplined, but in things they find important.  You will not gain
converts by demanding they 'adhere' to your 'discipline.'

As a lisp programmer, you gain a large number of techniques to achieve
these goals of reliability, soundness, documentation, etc.  For
example, just today I used lisp's code-is-data nature to implement
reliability from the database world -- normalization, where my
variable names were not subject to update anomalies through any
carelessness.

I ask you, are your statically checked languages capable of more than
one technique to achieve the goal of expressive, sound software?  Lisp
can -- it is the best multiparadigm language I've seen so far, and I
suspect part of that success lies in trying not to let any given
paradigm be too bossy.

No doubt lisp will eventually improve its typing facilities, based on
the hard work of good static typers.  Just maybe not in the way you
think.
From: Tayss
Subject: Re: More static type fun.
Date: 
Message-ID: <5627c6fa.0311121831.18e84955@posting.google.com>
··········@yahoo.com (Tayss) wrote in message news:<····························@posting.google.com>...
> I ask you, are your statically checked languages capable of more than
> one technique to achieve the goal of expressive, sound software?  Lisp
> can -- it is the best multiparadigm language I've seen so far, and I
> suspect part of that success lies in trying not to let any given
> paradigm be too bossy.

By the way, before I sound dogmatic, interesting typing facilities
are... interesting.  After all, I occasionally write down
domains/ranges when designing, and it would clearly be fun to bring
them into the world of my code.  By the same token, I might like
public/private in lisp's oop system (oop appears to have a lot to do
with protocols) and definitely some way to tell the system to use tail
recursion.

But Python is beginning to demonstrate that it is quite possible to
have success on large projects with dynamic typing, by heavily
promoting such things as unit testing.  Things that must be done
anyway, even for emacs scripts.

More importantly, it is rather counterproductive to say that Your Way
is the only possible way to achieve good code, without exploring if
the host language already has resources to achieve these very same
ends.  As an example, oop is not the same thing in C++/Java as it is
in Common Lisp.  Therefore if I were to argue against oop, I usually
must specify which language.  If nothing else, you would have a more
nuanced and convincing argument.

Most of all, I disliked your sarcastic response to my innocent
question. ;)
From: Adrian Hey
Subject: Re: More static type fun.
Date: 
Message-ID: <bovf44$bem$1$8300dec7@news.demon.co.uk>
Tayss wrote:

> But Python is beginning to demonstrate that it is quite possible to
> have success on large projects with dynamic typing, by heavily
> promoting such things as unit testing.  Things that must be done
> anyway, even for emacs scripts.

But have they demonstrated that the absence of a good static type
system (Hindley-Milner or better) is advantageous in any way?

Nobody doubts the *possibility* of success on large projects with dynamic
typing (only).  We (static typers) merely doubt the probability and ease
of success.

Regards
--
Adrian Hey
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bovhrq$s3u$1@f1node01.rhrz.uni-bonn.de>
Adrian Hey wrote:
> Tayss wrote:
> 
> 
>>But Python is beginning to demonstrate that it is quite possible to
>>have success on large projects with dynamic typing, by heavily
>>promoting such things as unit testing.  Things that must be done
>>anyway, even for emacs scripts.
> 
> 
> But have they demonstrated that the absence of a good static type
> system (Hindley-Milner or better) is advantageous in any way?

Why should they? There are probably thousands of conceivable language 
features that programmers don't use. Should everybody always need to 
show that the absence of some arbitrary features is an advantage?

No, it's the job of those who promote a language feature to give 
convincing arguments beyond personal experience that it is really 
helpful. Otherwise, just let everyone use what they think is most 
appropriate.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Adrian Hey
Subject: Re: More static type fun.
Date: 
Message-ID: <bovvqc$ofe$1$8302bc10@news.demon.co.uk>
Pascal Costanza wrote:

> Adrian Hey wrote:
>> Tayss wrote:
>> 
>> 
>>>But Python is beginning to demonstrate that it is quite possible to
>>>have success on large projects with dynamic typing, by heavily
>>>promoting such things as unit testing.  Things that must be done
>>>anyway, even for emacs scripts.
>> 
>> 
>> But have they demonstrated that the absence of a good static type
>> system (Hindley-Milner or better) is advantageous in any way?
> 
> Why should they?

Because that is what is claimed by some Python (and/or dynamic typing
in general) advocates.

> There are probably thousands of conceivable language
> features that programmers don't use. Should everybody always need to
> show that the absence of some arbitrary features is an advantage?

No. Only those folk asserting that the absence of whatever feature is
an advantage need to do this.

> No, it's the job of those who promote a language feature to give
> convincing arguments beyond personal experience that it is really
> helpful.

Oh don't start that again. You have already been given "arguments
beyond personal experience" that static typing is helpful.
You can continue your belligerent denial that those arguments are
convincing enough if you like, but I don't want to go there again.

Furthermore, even the mere "personal experience" of an awful lot of people
who regularly use these and other languages carries a lot of weight in
my book. In contrast the supposed personal experience of most who
disparage all statically typed languages merely because they are
statically typed becomes scarcely credible when they write things like
this..

 The type system is always getting in the way
or..
 I don't like having to decorate programs with type annotation
or..    
 Bugs caused by type errors (or bugs detectable by a static
 type system to be more precise) are rare. 

These are a pretty clear indication that they have no personal
experience with modern static type systems.

Regards
--
Adrian Hey
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <frub81-u21.ln1@ID-7776.user.dfncis.de>
Adrian Hey <····@nospicedham.iee.org> wrote:

>> But Python is beginning to demonstrate that it is quite possible to
>> have success on large projects with dynamic typing, by heavily
>> promoting such things as unit testing.  Things that must be done
>> anyway, even for emacs scripts.

> Nobody doubts the *possibility* of success on large projects with dynamic
> typing (only).  We (static typers) merely doubt the probability and ease
> of success.

As someone who usually prefers static typing, I don't doubt the
probability -- with enough unit tests, you can certainly approximate
static type checking, and for practical use, it is "good enough".

"Ease" is to some degree a matter of taste, and a matter of what other
tools are available.
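
A minimal sketch in Python (hypothetical, with invented names) of that
approximation: the unit test catches the same mistake a type checker
would, but only when the test suite actually runs.

```python
# Hypothetical sketch, invented names: a unit test standing in for a
# static type check.
def area(width, height):
    """Multiply two numeric dimensions."""
    return width * height

def test_area_rejects_bad_types():
    # A static checker would reject this call before the program runs;
    # the test catches the same mistake, but only at test time.
    try:
        area("3", None)           # str * None has no meaning
    except TypeError:
        return True
    return False

assert area(3, 4) == 12
assert test_area_rejects_bad_types()
```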

- Dirk
From: Adrian Hey
Subject: Re: More static type fun.
Date: 
Message-ID: <bp08co$4o0$1$8302bc10@news.demon.co.uk>
Dirk Thierbach wrote:

> As someone who usually prefers static typing, I don't doubt the
> probability -- with enough unit tests, you can certainly approximate
> static type checking, and for practical use, it is "good enough".

But on a level playing field, i.e. an equal amount of time available
for unit tests and debugging, I'm still sceptical that removing
static typing would help me. If I imagine a (hypothetically)
"better" Haskell without static type checking (and presumably
no type classes or overloading either), I have a hard time seeing
how this is of any help at all. It just makes a bad problem
(producing reliable programs) even worse, AFAICS.

Maybe this newly untyped or dynamically typed Haskell could be
further improved to support dynamic megaprogramming :-)

But then, maybe existing statically typed Haskell could be
improved to support that anyway (with the occasional use of
dynamics of course).

Regards
--
Adrian Hey
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp095l$vrs$1@f1node01.rhrz.uni-bonn.de>
Adrian Hey wrote:
> Dirk Thierbach wrote:
> 
>>As someone who usually prefers static typing, I don't doubt the
>>probability -- with enough unit tests, you can certainly approximate
>>static type checking, and for practical use, it is "good enough".
> 
> But on a level playing field, i.e. an equal amount of time available
> for unit tests and debugging, I'm still sceptical that removing
> static typing would help me. If I imagine a (hypothetically)
> "better" Haskell without static type checking (and presumably
> no type classes or overloading either), I have a hard time seeing
> how this is of any help at all. It just makes a bad problem
> (producing reliable programs) even worse, AFAICS.

...and you accused the "dynamic typers" in this discussion of not 
having enough experience with advanced static type systems? Do you 
know what your statements sound like?

> Maybe this newly untyped or dynamically typed Haskell could be
> further improved to support dynamic megaprogramming :-)

Did you mean "metaprogramming"?

> But then, maybe existing statically typed Haskell could be
> improved to support that anyway (with the occasional use of
> dynamics of course).

Runtime metaprogramming and static type checking can't be reconciled.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m18ymkmeyx.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> Runtime metaprogramming and static type checking can't be reconciled.

Could you, please, prove this for the unconvinced among us?
From: Ingvar Mattsson
Subject: Re: More static type fun.
Date: 
Message-ID: <87k764kzta.fsf@gruk.tech.ensign.ftech.net>
Matthias Blume <····@my.address.elsewhere> writes:

> Pascal Costanza <········@web.de> writes:
> 
> > Runtime metaprogramming and static type checking can't be reconciled.
> 
> Could you, please, prove this for the unconvinced among us?

How do you reconcile "function used to take two integers in the range
3-123" and "function now takes two integers in the range 0-2^56 and an
optional third boolean signifying that the frobozz interpolation
should be used" with a static type system that was told at compile
time that the first was the type of the function? Admittedly, the
intervals of the two mandatory integers in the latter definition are
intentionally chosen so that they span the intervals of the prior
definition.

In general, "static type checking" (as I understand it) is done at
compile time and compile time only, and requires a whole-program
analysis. One could, possibly, do a complete re-compile at run time,
in the face of a later redefinition, but I think (this is an opinion)
that it'd be prohibitively expensive in sufficiently complex
systems. Not impossible, though, but it doesn't encourage, and I'd
say even discourages, modifying the program at run time. Especially
scary are things like changing the composition of a composed type (by
"composed type" I mean types like "is an integer or the symbol Foo").

Type-tagging of values that are checked at run time is costlier at
all times (though probably by quite close to a constant factor) and
doesn't necessarily require whole-program analysis in the face of
function redefinitions, but may (of course) throw type errors at
run time, if fed the wrong things.

It's a non-trivial balance to strike, between type-safety and
dynamically changing programs.
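
A hypothetical Python sketch of the scenario above (all names
invented): the function is rebound at run time with the widened
contract, and the range checks move from compile time to run time.

```python
# Hypothetical sketch, invented names: a run-time redefinition with a
# widened signature, checked by run-time assertions instead of a
# static type system.
def frob(a, b):
    # original contract: two integers in the range 3-123
    assert 3 <= a <= 123 and 3 <= b <= 123, "argument out of range"
    return a + b

def frob_v2(a, b, use_interpolation=False):
    # new contract: wider ranges, plus an optional third boolean
    assert 0 <= a < 2**56 and 0 <= b < 2**56, "argument out of range"
    return (a + b) * (2 if use_interpolation else 1)

assert frob(10, 20) == 30         # old contract still holds here
frob = frob_v2                    # the name is rebound at run time
assert frob(10, 20) == 30         # old call sites keep working...
assert frob(10, 20, use_interpolation=True) == 60   # ...new ones too
```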

//Ingvar
-- 
(defun m (a b) (cond ((or a b) (cons (car a) (m b (cdr a)))) (t ())))
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m14qx8mbt2.fsf@tti5.uchicago.edu>
Ingvar Mattsson <······@cathouse.bofh.se> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > Pascal Costanza <········@web.de> writes:
> > 
> > > Runtime metaprogramming and static type checking can't be reconciled.
> > 
> > Could you, please, prove this for the unconvinced among us?
> 
> How do you reconcile "function used to take two integers in the range
> 3-123" and "function now takes two integers in the range 0-2^56 and an
> optional third boolean signifying that the frobozz interpolation
> should be used" with a static type system that was told at compile
> time that the first was the type of the function? Admittedly, the
> intervals of the two mandatory integers in the latter definition are
> intentionally chosen so that they span the intervals of the prior
> definition.

I don't know.  But the fact that you and I don't know is hardly what I
call a proof.

> It's a non-trivial balance to strike, between type-safety and
> dynamically changing programs.

Nobody is disputing this.
From: Ingvar Mattsson
Subject: Re: More static type fun.
Date: 
Message-ID: <87vfpojh1q.fsf@gruk.tech.ensign.ftech.net>
Matthias Blume <····@my.address.elsewhere> writes:

> Ingvar Mattsson <······@cathouse.bofh.se> writes:
> 
> > Matthias Blume <····@my.address.elsewhere> writes:
> > 
> > > Pascal Costanza <········@web.de> writes:
> > > 
> > > > Runtime metaprogramming and static type checking can't be reconciled.
> > > 
> > > Could you, please, prove this for the unconvinced among us?
> > 
> > How do you reconcile "function used to take two integers in the range
> > 3-123" and "function now takes two integers in the range 0-2^56 and an
> > optional third boolean signifying that the frobozz interpolation
> > should be used" with a static type system that was told at compile
> > time that the first was the type of the function? Admittedly, the
> > intervals of the two mandatory integers in the latter definition are
> > intentionally chosen so that they span the intervals of the prior
> > definition.
> 
> I don't know.  But the fact that you and I don't know is hardly what I
> call a proof.

Indeed not. FWIW, I think it's doable, under some limitations. I
cannot see how it'd work with function signatures that "change
incompatibly with prior definitions" and still be regarded as a static
type system. One would have to have a notion of "change sets", where
all functions calling a "fundamentally changed" function are
redefined, instead of a time-window where they're just not called.

It definitely heads off to "quite hairy" rather fast, if one has to
keep a type-sound system at all times, instead of a system that is
partially type-sound only for the parts currently being accessed.

> > It's a non-trivial balance to strike, between type-safety and
> > dynamically changing programs.
> 
> Nobody is disputing this.

//Ingvar (type-sound? I like that word, I wonder what it means?)
-- 
((lambda (x) `(,x ',x)) '(lambda (x) `(,x ',x)))
	Probably KMP
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp0mlo$sdq$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:

> I don't know.  But the fact that you and I don't know is hardly what I
> call a proof.

The simple thought experiment behind these arguments is this: Static 
type checking is about enforcing unbreakable invariants. Dynamic 
metaprogramming is about being able to change anything, even what you 
thought of as invariants before.

Any flexibility that you can introduce in a statically typed language 
will be countered by examples of properties that you still can't change 
at runtime. Any example of dynamic changes to programs will be countered 
by a demonstration of how you can implement it in a static language.

As I said before, these things cannot be reconciled.

The important argument in favor of dynamic metaprogramming is that you 
no longer need to anticipate the changes that you might want to 
perform. Any illustration of how you can implement these things in a way 
that requires you to anticipate such changes misses the point entirely.
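
A hypothetical Python sketch of an unanticipated change (invented
names, not from this thread): the author of the class never marked
render as a variation point, yet it can be replaced at run time,
affecting even existing instances.

```python
# Hypothetical sketch, invented names: redefining behaviour that the
# original author never anticipated as a variation point.
class Report:
    def render(self):
        return "plain"

r = Report()
assert r.render() == "plain"

# Meta-level change, decided only at run time; nothing in the original
# class declared render to be replaceable.
Report.render = lambda self: "fancy"
assert r.render() == "fancy"      # even existing instances change
```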


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1r80ckp0j.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> Matthias Blume wrote:
> 
> > I don't know.  But the fact that you and I don't know is hardly what I
> > call a proof.
> 
> The simple thought experiment behind these arguments is this: Static
> type checking is about enforcing unbreakable invariants. Dynamic meta
> programming is about being able to change anything, even what you
> thought of as invariants before.

If the code that you don't change does not rely on the invariant in
question, then change it accordingly.  If it does, then you need to
change *it* also anyway.
From: Thant Tessman
Subject: Re: More static type fun.
Date: 
Message-ID: <bp0tsc$j1j$1@terabinaries.xmission.com>
Matthias Blume wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>Matthias Blume wrote:
>>
>>
>>>I don't know.  But the fact that you and I don't know is hardly what I
>>>call a proof.
>>
>>The simple thought experiment behind these arguments is this: Static
>>type checking is about enforcing unbreakable invariants. Dynamic meta
>>programming is about being able to change anything, even what you
>>thought of as invariants before.
> 
> 
> If the code that you don't change does not rely on the invariant in
> question, then change it accordingly.  If it does, then you need to
> change *it* also anyway.

One measure of a type system might be the degree with which it requires 
one to overspecify invariants. The equating of subtyping with 
inheritance by some languages seems to be an example of exactly this. To 
me it explains why so many OO advocates tend to favor latent typing to 
manifest typing.

-thant
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m2k763rhbc.fsf@hanabi.local>
Matthias Blume <····@my.address.elsewhere> writes:

> Pascal Costanza <········@web.de> writes:
> 
> > Matthias Blume wrote:
> > 
> > > I don't know.  But the fact that you and I don't know is hardly what I
> > > call a proof.
> > 
> > The simple thought experiment behind these arguments is this: Static
> > type checking is about enforcing unbreakable invariants. Dynamic meta
> > programming is about being able to change anything, even what you
> > thought of as invariants before.
> 
> If the code that you don't change does not rely on the invariant in
> question, then change it accordingly.  If it does, then you need to
                        ^^ the invariant
> change *it* also anyway.
          ^^ the code
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp1cgr$gol$2@newsreader3.netcologne.de>
Matthias Blume wrote:

> Pascal Costanza <········@web.de> writes:
> 
>>Matthias Blume wrote:
>>
>>>I don't know.  But the fact that you and I don't know is hardly what I
>>>call a proof.
>>
>>The simple thought experiment behind these arguments is this: Static
>>type checking is about enforcing unbreakable invariants. Dynamic meta
>>programming is about being able to change anything, even what you
>>thought of as invariants before.
> 
> If the code that you don't change does not rely on the invariant in
> question, then change it accordingly.  If it does, then you need to
> change *it* also anyway.

Maybe, maybe not. Depends on whether *it* will ever be called. What 
matters is the timing. That's the whole point of dynamicity.


Pascal
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp0bqi$vs4$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:
> Pascal Costanza <········@web.de> writes:
> 
>>Runtime metaprogramming and static type checking can't be reconciled.
> 
> Could you, please, prove this for the unconvinced among us?

Again?


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1znf0kx70.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> Matthias Blume wrote:
> > Pascal Costanza <········@web.de> writes:
> > 
> >>Runtime metaprogramming and static type checking can't be reconciled.
> > Could you, please, prove this for the unconvinced among us?
> 
> Again?

Yes.  I must have missed it the first time.  If you really have (or
think you have), then just post a pointer to the article.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp0m0a$sdo$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:
> Pascal Costanza <········@web.de> writes:
> 
>>Matthias Blume wrote:
>>
>>>Pascal Costanza <········@web.de> writes:
>>>
>>>
>>>>Runtime metaprogramming and static type checking can't be reconciled.
>>>
>>>Could you, please, prove this for the unconvinced among us?
>>
>>Again?
> 
> Yes.  I must have missed it the first time.  If you really have (or
> think you have), then just post a pointer to the article.

See 
http://groups.google.com/groups?selm=bnf688%24esd%241%40newsreader2.netcologne.de

You can only make this code statically type checkable by dropping the 
dynamic metaprogramming stuff. Stating that this is possible is beside 
the point.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1vfpokp72.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> See
> http://groups.google.com/groups?selm=bnf688%24esd%241%40newsreader2.netcologne.de
> 
> You can only make this code statically type checkable by dropping the
> dynamic metaprogramming stuff.

Proof?  (Just saying so doesn't make it so.)
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp1ccd$gol$1@newsreader3.netcologne.de>
Matthias Blume wrote:
> Pascal Costanza <········@web.de> writes:
> 
>>See
>>http://groups.google.com/groups?selm=bnf688%24esd%241%40newsreader2.netcologne.de
>>
>>You can only make this code statically type checkable by dropping the
>>dynamic metaprogramming stuff.
> 
> Proof?  (Just saying so doesn't make it so.)

There are two ways to deal with a conjecture: either you give a proof or 
you give a counter example. I have given a counter example.

Conjecture: static typing and dynamic metaprogramming can be reconciled.

Counter example: see link above.

Hint: Can you statically determine whether and when hire and fire will 
ever be called?

Got it?


Pascal
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m2brrfd3s5.fsf@hanabi-air.shimizu.blume>
Pascal Costanza <········@web.de> writes:

> Matthias Blume wrote:
> > Pascal Costanza <········@web.de> writes:
> > 
> >>See
> >>http://groups.google.com/groups?selm=bnf688%24esd%241%40newsreader2.netcologne.de
> >>
> >>You can only make this code statically type checkable by dropping the
> >>dynamic metaprogramming stuff.
> > Proof?  (Just saying so doesn't make it so.)
> 
> There are two ways to deal with a conjecture: either you give a proof
> or you give a counter example. I have given a counter example.

No.  You have to give a proof.  If you give a counterexample you
*disprove* it.

> Conjecture: static typing and dynamic metaprogramming can be reconciled.
> 
> Counter example: see link above.

It is not a counterexample until you prove that it is one.  You have
not done so.  You have not even defined what "metaprogramming" is, nor
what a "statically typed programming language" is, so how can you have
possibly given a "counterexample"?

Matthias
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m27k23d3cb.fsf@hanabi-air.shimizu.blume>
Matthias Blume <····@my.address.elsewhere> writes:

> Pascal Costanza <········@web.de> writes:
> 
> > Matthias Blume wrote:
> > > Pascal Costanza <········@web.de> writes:
> > > 
> > >>See
> > >>http://groups.google.com/groups?selm=bnf688%24esd%241%40newsreader2.netcologne.de
> > >>
> > >>You can only make this code statically type checkable by dropping the
> > >>dynamic metaprogramming stuff.
> > > Proof?  (Just saying so doesn't make it so.)
> > 
> > There are two ways to deal with a conjecture: either you give a proof
> > or you give a counter example. I have given a counter example.
> 
> No.  You have to give a proof.  If you give a counterexample you
> *disprove* it.

Correction: I see you are talking about the negation of the conjecture
(the conjecture of yours being that metaprogramming and static typing
/cannot/ be reconciled).

Anyway, you still have to show that your "counterexample" actually is
one.  By "show" I mean "show rigorously", which includes saying
/precisely/ what the claim is you are trying to disprove -- instead of
just appealing to everybody's intuition of what "static typing" or
"metaprogramming" might be.

Matthias
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp224q$q2s$1@newsreader3.netcologne.de>
Matthias Blume wrote:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
>>Pascal Costanza <········@web.de> writes:
>>
>>>Matthias Blume wrote:
>>>
>>>>Pascal Costanza <········@web.de> writes:
>>>>
>>>>
>>>>>See
>>>>>http://groups.google.com/groups?selm=bnf688%24esd%241%40newsreader2.netcologne.de
>>>>>
>>>>>You can only make this code statically type checkable by dropping the
>>>>>dynamic metaprogramming stuff.
>>>>
>>>>Proof?  (Just saying so doesn't make it so.)
>>>
>>>There are two ways to deal with a conjecture: either you give a proof
>>>or you give a counter example. I have given a counter example.
>>
>>No.  You have to give a proof.  If you give a counterexample you
>>*disprove* it.
> 
> Correction: I see you are talking about the negation of the conjecture
> (the conjecture of yours being that metaprogramming and static typing
> /cannot/ be reconciled).
> 
> Anyway, you still have to show that your "counterexample" actually is
> one.  By "show" I mean "show rigorously", which includes saying
> /precisely/ what the claim is you are trying to disprove -- instead of
> just appealing to everybody's intution of what "static typing" or
> "metaprogramming" might be.

Are you really asking me to do your homework?

You can work through my example and try to understand my point if you're 
interested, and maybe ask specific questions about it. Of course, you 
only need to do that if you are _really_ interested in understanding it. 
Instead you choose to make very broad claims about what you consider 
to be valid proofs and disproofs. That's your choice, but I am just not 
interested in such games.

Just another hint: How can you statically check a property to be always 
present whose presence actually depends on dynamic properties?

No, I am not going to give you a proof. I am not interested in 
convincing you to switch programming styles, so why should I?


Pascal
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1n0ayly0b.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> > Anyway, you still have to show that your "counterexample" actually
> > is
> > one.  By "show" I mean "show rigorously", which includes saying
> > /precisely/ what the claim is you are trying to disprove -- instead of
> > just appealing to everybody's intuition of what "static typing" or
> > "metaprogramming" might be.
> 
> Are you really asking me to do your homework?

No, I am asking you to do /your/ homework!

Now listen.  I have no idea how you come up with the admittedly very
entertaining idea that it is somehow /my/ responsibility to make
precise and possibly even prove your ill-conceived conjecture of
static typing and metaprogramming being irreconcilable in principle.
Notice that I did not claim you are wrong on this (although I believe
you are) -- I merely asked you for a proof of your statement.  The way
you phrased it indicated you think it to be a fact that everybody
simply should acknowledge.  Well, not all of us do, and you have given
no hard evidence for why we should.

> You can work through my example and try to understand my point if
> you're interested, and maybe ask specific questions about it.  Of
> course, you only need to do that if you are _really_ interested in
> understanding it.

I understand your example, thank you very much.

> Instead you choose to make very broad claims about what
> you consider to be valid proofs and disproofs. That's your choice, but
> I am just not interested in such games.

You are the one making very broad claims about things you refuse to
define properly so that we have a basis for discussion.

> Just another hint: How can you statically check a property to be
> always present whose presence actually depends on dynamic properties?

You can't because it won't.

> No, I am not going to give you a proof.

I didn't expect you to -- because I know you can't.  How would anyone be
able to give a proof of something that he cannot even /state/ properly?
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp33d8$ft8$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>>Anyway, you still have to show that your "counterexample" actually
>>>is
>>>one.  By "show" I mean "show rigorously", which includes saying
>>>/precisely/ what the claim is you are trying to disprove -- instead of
>>>just appealing to everybody's intuition of what "static typing" or
>>>"metaprogramming" might be.
>>
>>Are you really asking me to do your homework?
> 
> 
> No, I am asking you to do /your/ homework!

Final attempt:

: (defun do-it ()
:   (progn
:     (eval (read))
:     (f)))

Compilation in LispWorks (for example) produces this output:

: ;;; Safety = 3, Speed = 1, Space = 1, Float = 1, Interruptible = 0
: ;;; Compilation speed = 1, Debug = 2, Fixnum safety = 3, GC safety = 3
: ;;; Source level debugging is on
: ;;; Source file recording is  on
: ;;; Cross referencing is on
: ; DO-IT
:
: The following function is undefined:
: F which is referenced by DO-IT
:
: ---- Done ----

In spite of this "type error", DO-IT can still be executed:

: CL-USER 1 > (do-it)
: 5

: Error: Undefined function F called with arguments ().
:   1 (continue) Try invoking F again.
:   2 Return some values from the call to F.
:   3 Try invoking something other than F with the same arguments.
:   4 Set the symbol-function of F to another function.
:   5 (abort) Return to level 0.
:   6 Return to top loop level 0.

: Type :b for backtrace, :c <option number> to proceed,
: or :? for other options

: CL-USER 2 : 1 > :c 6

: CL-USER 3 > (do-it)
: (defun f () 5)
: 5

In the first step, evaluation of an input by the user hasn't provided a 
proper definition for F, therefore DO-IT has produced an error at 
runtime. In the third step, the user of the program has typed in a 
definition for F, and so the program can happily proceed without 
producing an error.

This is a simple example of dynamic metaprogramming: The meta-level (-> 
EVAL performing an arbitrary expression not known until runtime) affects 
the base program.

You cannot perform any reasonable static analysis for F that produces 
more information for the DO-IT program than a) that it is called, b) 
that it is called without arguments, and c) that it has no apparent 
definition at compile-time. It is still possible to make DO-IT behave 
well without any errors, as shown above.

Any attempt to reconcile static type checking and dynamic 
metaprogramming should be able to handle the case above without any 
restrictions to what EVAL can do at runtime [1], and still be an 
acceptable type system for static typers. This is not possible because 
EVAL turns the presence and absence of _any_ program element into a 
dynamic property of a program.

This is not a proof in a strict sense, but if you find a specific hole 
in my argumentation, I would be happy to hear about it. If you choose 
instead to sidestep the discussion again by discussing meta-stuff like 
what you do and don't consider to be a proof, or any such nonsense, you 
can regard this discussion as being closed.


As a final final note, I don't want to convince anyone to switch their 
personally preferred programming style in any way. You, Matthias Blume, 
have ignited this discussion by claiming that anyone should use a 
statically typed programming language because programmers should, in 
your opinion, have a sketch of a formal proof for the correctness of 
their programs inside their head. You have even made the claim that 
programmers who don't do this should be fired.

This discussion has shown a) that static type systems are neither a 
necessary nor a sufficient condition for program correctness and b) that 
there is a programming style that renders all features of a program 
dynamic, and therefore not amenable to static analysis. It's not up to 
you, and not up to anyone, to judge whether such a programming style can 
be helpful to some people and/or projects, as it is not up to me, and 
not up to anyone, to judge whether a statically typed system can be 
helpful to other people and/or projects.

If you would just drop your claim about what _any_ programmer should do, 
we would be just fine. Otherwise it's _your_ task to prove that your 
very broad claim makes sense. Everything else is not my concern.


Pascal

[1] including, for example, UNINTERN

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1brreltov.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> You, Mattias Blume, have ignited this discussion by claiming that
> anyone should use a statically typed programming language because
> programmers should, in your opinion, have a sketch of a formal proof
> for the correctness of their programs inside their head.

You clearly have not understood what I actually said, otherwise you
wouldn't misrepresent me so blatantly.

> This discussion has shown a) that static type systems are neither a
> necessary nor a sufficient condition for program correctness

It is not this discussion that has "shown" this rather well-known fact
which has been established (and is completely obvious for that matter)
a long time ago.

Having said this, I would like to point out that a) having a correct
program and b) knowing it to be correct are different things.  As far
as program correctness is concerned, I am (for very practical reasons)
only interested in b) -- and so should everybody else.  If we don't
know for sure that a program is correct, then we cannot trust it
(unless we are engaging in "faith-based" programming).

What is necessary to _know_ that a program is correct are two things:

  1. stating precisely what it means to be "correct"
  2. proving it

> If you would just drop your claim about what _any_ programmer should
> do, we would be just fine.

If you ask me to drop my claim that all programmers should convince
themselves that their code works, sorry, won't do it.
From: Christopher C. Stacy
Subject: Re: More static type fun.
Date: 
Message-ID: <ubrreaj1v.fsf@dtpq.com>
>>>>> On 14 Nov 2003 11:57:04 -0600, Matthias Blume ("Matthias") writes:
 Matthias> Having said this, I would like to point out that a) having
 Matthias> a correct program and b) knowing it to be correct are
 Matthias> different things.  As far as program correctness in
 Matthias> concerned,

I would like to point out that a) having a correct program and
b) knowing it to be type-consistent are different things.

This is why, outside of certain camps that engage in this
particular newspeak, the phrase "correct program" raises
a serious bullshit flag.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m17k22lr05.fsf@tti5.uchicago.edu>
······@dtpq.com (Christopher C. Stacy) writes:

> I would like to point out that a) having a correct program and
> b) knowing it to be type-consistent are different things.

So?  Why are you bringing type-consistency up here?
From: Steve Schafer
Subject: Re: More static type fun.
Date: 
Message-ID: <nc6arv07hmd2sphotcvbihgjmmt9u93km8@4ax.com>
On Fri, 14 Nov 2003 18:29:11 +0100, Pascal Costanza <········@web.de>
wrote:

>This is not a proof in a strict sense, but if you find a specific hole 
>in my argumentation, I would be happy to hear about it. 

The "hole" is that you assume that "static" means "before the program as
a whole has begun execution." Clearly, it's impossible for any kind of
type-checking system to type-check a code fragment that doesn't even
exist. But as soon as the code fragment _does_ exist (in your example,
as soon as you've typed it in), it _can_ be statically type-checked;
that is, it can be type-checked _before_ it executes. And, furthermore,
it can be type-checked against the remaining code (that is, the code
that _has_ existed since the beginning of program execution).

I know that this is possible because I've written interpreters in
statically typed programming languages, where the language recognized by
the interpreter is also statically typed. (And I'm obviously not the
only person to have done this.) There's no magic--it's all perfectly
straightforward, and also perfectly statically typed. Something like
GHCi (the interactive Haskell system from Glasgow) couldn't exist if it
weren't possible to statically type-check code that isn't known until
runtime, and yet it clearly does exist, and works just fine.

-Steve
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp375o$1198$1@f1node01.rhrz.uni-bonn.de>
Steve Schafer wrote:
> On Fri, 14 Nov 2003 18:29:11 +0100, Pascal Costanza <········@web.de>
> wrote:
> 
>>This is not a proof in a strict sense, but if you find a specific hole 
>>in my argumentation, I would be happy to hear about it. 
> 
> The "hole" is that you assume that "static" means "before the program as
> a whole has begun execution." Clearly, it's impossible for any kind of
> type-checking system to type-check a code fragment that doesn't even
> exist. But as soon as the code fragment _does_ exist (in your example,
> as soon as you've typed it in), it _can_ be statically type-checked;
> that is, it can be type-checked _before_ it executes.

I am concerned about the call site of that code, not the code itself.

Please go back to my example again: How can you statically type check 
DO-IT that includes a call of a function F, when the existence of F can 
only be determined at runtime?


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1y8uikbk0.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> Steve Schafer wrote:
> > On Fri, 14 Nov 2003 18:29:11 +0100, Pascal Costanza <········@web.de>
> > wrote:
> > 
> >> This is not a proof in a strict sense, but if you find a specific
> >> hole in my argumentation, I would be happy to hear about it.
> > The "hole" is that you assume that "static" means "before the
> > program as
> > a whole has begun execution." Clearly, it's impossible for any kind of
> > type-checking system to type-check a code fragment that doesn't even
> > exist. But as soon as the code fragment _does_ exist (in your example,
> > as soon as you've typed it in), it _can_ be statically type-checked;
> > that is, it can be type-checked _before_ it executes.
> 
> I am concerned about the call site of that code, not the code itself.
> 
> Please go back to my example again: How can you statically type check
> DO-IT that includes a call of a function F, when the existence of F
> can only be determined at runtime?

fun f(g,x) = g(x)

How can I typecheck f and therefore the call site of g before I apply
f to some g and some x, both of which may or may not exist at the time
I typecheck f?  My SML compiler is doing it.  How can that be?!?

(Notice that this is not meant to answer your particular question but
only to show that the form of reasoning you are using is invalid.
Just because you can't think of a way of type-checking something does
not mean that there isn't one.)
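Matthias's example generalizes beyond SML. A minimal sketch in TypeScript (used here only as a widely available statically typed stand-in, not anything from the thread): the call site g(x) inside f is type-checked once, against g's declared type, before any concrete g exists.

```typescript
// f is checked in isolation: whatever g is eventually supplied,
// it must accept a value of x's type, and its result becomes f's result.
function f<A, B>(g: (a: A) => B, x: A): B {
  return g(x); // this call site is verified here, with no concrete g in sight
}

// Concrete arguments arrive "later"; the compiler only has to check
// that they fit the already-verified signature of f.
const r = f((n: number) => n + 1, 41); // r is inferred to be number
```

Passing, say, a string-consuming function together with a number is rejected at compile time, which is exactly the sense in which the call site of g is checked before g exists.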
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp3c5r$hbu$1@newsreader3.netcologne.de>
Matthias Blume wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>Steve Schafer wrote:
>>
>>>On Fri, 14 Nov 2003 18:29:11 +0100, Pascal Costanza <········@web.de>
>>>wrote:
>>>
>>>
>>>>This is not a proof in a strict sense, but if you find a specific
>>>>hole in my argumentation, I would be happy to hear about it.
>>>
>>>The "hole" is that you assume that "static" means "before the
>>>program as
>>>a whole has begun execution." Clearly, it's impossible for any kind of
>>>type-checking system to type-check a code fragment that doesn't even
>>>exist. But as soon as the code fragment _does_ exist (in your example,
>>>as soon as you've typed it in), it _can_ be statically type-checked;
>>>that is, it can be type-checked _before_ it executes.
>>
>>I am concerned about the call site of that code, not the code itself.
>>
>>Please go back to my example again: How can you statically type check
>>DO-IT that includes a call of a function F, when the existence of F
>>can only be determined at runtime?
> 
> 
> fun f(g,x) = g(x)
> 
> How can I typecheck f and therefore the call site of g before I apply
> f to some g and some x, both of which may or may not exist at the time
> I typecheck f?  My SML compiler is doing it.  How can that be?!?

Because SML doesn't know keyword or optional parameters?

> (Notice that this is not meant to answer your particular question but
> only to show that the form of reasoning you are using is invalid.
> Just because you can't think of a way of type-checking something does
> not mean that there isn't one.)

Yes, it doesn't answer my particular question. You are again 
side-stepping the discussion.


Pascal
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1oevek8e4.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> Matthias Blume wrote:
> > fun f(g,x) = g(x)
> > How can I typecheck f and therefore the call site of g before I apply
> > f to some g and some x, both of which may or may not exist at the time
> > I typecheck f?  My SML compiler is doing it.  How can that be?!?
> 
> Because SML doesn't know keyword or optional parameters?

I must admit: you just left me speechless.
From: Feuer
Subject: Re: More static type fun.
Date: 
Message-ID: <3FB5A058.6925FAB5@his.com>
Matthias Blume wrote:
> 
> Pascal Costanza <········@web.de> writes:
> 
> > Matthias Blume wrote:
> > > fun f(g,x) = g(x)
> > > How can I typecheck f and therefore the call site of g before I apply
> > > f to some g and some x, both of which may or may not exist at the time
> > > I typecheck f?  My SML compiler is doing it.  How can that be?!?
> >
> > Because SML doesn't know keyword or optional parameters?
> 
> I must admit: you just left me speechless.

A bit stunning, isn't it?

David
From: Neelakantan Krishnaswami
Subject: Re: More static type fun.
Date: 
Message-ID: <slrnbraq7o.2e0.neelk@gs3106.sp.cs.cmu.edu>
In article <············@newsreader3.netcologne.de>, Pascal Costanza wrote:
> Matthias Blume wrote:
>> 
>> fun f(g,x) = g(x)
>> 
>> How can I typecheck f and therefore the call site of g before I
>> apply f to some g and some x, both of which may or may not exist at
>> the time I typecheck f?  My SML compiler is doing it.  How can that
>> be?!?
> 
> Because SML doesn't know keyword or optional parameters?

Ocaml has both keyword and optional parameters, and does type
inference.

-- 
Neel Krishnaswami
·····@cs.cmu.edu
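Neel's point, that keyword and optional parameters do not defeat static checking, can be illustrated in any language with statically typed optional arguments. A hedged TypeScript sketch (not OCaml, and not from the thread):

```typescript
// An optional "keyword-style" options object is part of the static type:
// its presence or absence, and its field names, are checked at compile time.
function greet(name: string, opts?: { loud?: boolean }): string {
  const s = `hello, ${name}`;
  return opts?.loud ? s.toUpperCase() : s;
}

const plain = greet("world");                 // omitting opts is allowed
const loud  = greet("world", { loud: true }); // supplying it is checked
// greet("world", { lodu: true });            // rejected: misspelled key
```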
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp527a$jno$1@newsreader3.netcologne.de>
Neelakantan Krishnaswami wrote:

> In article <············@newsreader3.netcologne.de>, Pascal Costanza wrote:
> 
>>Matthias Blume wrote:
>>
>>>fun f(g,x) = g(x)
>>>
>>>How can I typecheck f and therefore the call site of g before I
>>>apply f to some g and some x, both of which may or may not exist at
>>>the time I typecheck f?  My SML compiler is doing it.  How can that
>>>be?!?
>>
>>Because SML doesn't know keyword or optional parameters?
> 
> Ocaml has both keyword and optional parameters, and does type
> inference.

I should have put a smiley after my question. The example above is just 
side-stepping the important point again.


Pascal
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m21xs9g5qb.fsf@hanabi-air.shimizu.blume>
Pascal Costanza <········@web.de> writes:

> Neelakantan Krishnaswami wrote:
> 
> > In article <············@newsreader3.netcologne.de>, Pascal Costanza wrote:
> > 
> >>Matthias Blume wrote:
> >>
> >>>fun f(g,x) = g(x)
> >>>
> >>>How can I typecheck f and therefore the call site of g before I
> >>>apply f to some g and some x, both of which may or may not exist at
> >>>the time I typecheck f?  My SML compiler is doing it.  How can that
> >>>be?!?
> >>
> >>Because SML doesn't know keyword or optional parameters?
> > Ocaml has both keyword and optional parameters, and does type
> > inference.
> 
> I should have put a smiley after my question.

Your question shows a complete lack of understanding.  Do you really
think you could have hidden that with a smiley?

> The example above is just side-stepping the important point again.

It is not side-stepping anything.  I gave it to demonstrate that it is
very well possible to type-check a call site without knowing who the
callee is going to be -- a situation that according to you should be
impossible to handle for a static type system.

The fact of the matter is that you still have not given (and, of
course, cannot give) a proof for your rather bold claim that static
typing cannot be reconciled with whatever you consider
"metaprogramming".  It is not our problem to actually give a typing
for each and every one of your little examples.  Instead, it is upon
you to give a /proof/ that at least one of them is impossible to
handle for /every/ static type system in the world (including those
that have not been invented yet).

Good luck.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp5fae$e9q$1@newsreader3.netcologne.de>
Matthias Blume wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>Neelakantan Krishnaswami wrote:
>>
>>
>>>In article <············@newsreader3.netcologne.de>, Pascal Costanza wrote:
>>>
>>>
>>>>Matthias Blume wrote:
>>>>
>>>>
>>>>>fun f(g,x) = g(x)
>>>>>
>>>>>How can I typecheck f and therefore the call site of g before I
>>>>>apply f to some g and some x, both of which may or may not exist at
>>>>>the time I typecheck f?  My SML compiler is doing it.  How can that
>>>>>be?!?
>>>>
>>>>Because SML doesn't know keyword or optional parameters?
>>>
>>>Ocaml has both keyword and optional parameters, and does type
>>>inference.
>>
>>I should have put a smiley after my question.
> 
> Your question shows a complete lack of understanding.  Do you really
> think you could have hidden that with a smiley?

My question was an attempt to caricature your change maneuvers. 
Obviously, I have failed in this regard.

fun f(g,x) = g(x) doesn't have anything to do with my example. Yet, you 
still think this is a good response. Now tell me: who is the one who 
shows a lack of understanding?

>>The example above is just side-stepping the important point again.
> 
> It is not side-stepping anything.  I gave it to demonstrate that it is
> very well possible to type-check a call site without knowing who the
> callee is going to be -- a situation that according to you should be
> impossible to handle for a static type system.

What about...

(defun f (x) (funcall (intern (some-arbitrary-string-expression)) x))

...and...

(defun f (x) ...)

(defun g (x)
   (eval (read))
   (f x))

...when the user types in (unintern 'f) when the first line of g is 
executed?

etc. pp.?

> The fact of the matter is that you still have not given (and, of
> course, cannot give) a proof for your rather bold claim that static
> typing cannot be reconciled with whatever you consider
> "metaprogramming".  It is not our problem to actually give a typing
> for each and every one of your little examples. Instead, it is upon you to
> give a /proof/ that at least one of them is impossible to handle for
> /every/ static type system in the world (including those that have not
> been invented yet).

It becomes your problem when you start to require programmers to use 
static type systems - programmers who might want to do something along 
the lines of my "little" examples without waiting for some future 
static type system that happens to be able to deal with them.


Pascal


--
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1fzgpjjtk.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> Matthias Blume wrote:
> 
> > Pascal Costanza <········@web.de> writes:
> > 
> >>Neelakantan Krishnaswami wrote:
> >>
> >>
> >>>In article <············@newsreader3.netcologne.de>, Pascal Costanza wrote:
> >>>
> >>>
> >>>>Matthias Blume wrote:
> >>>>
> >>>>
> >>>>>fun f(g,x) = g(x)
> >>>>>
> >>>>>How can I typecheck f and therefore the call site of g before I
> >>>>>apply f to some g and some x, both of which may or may not exist at
> >>>>>the time I typecheck f?  My SML compiler is doing it.  How can that
> >>>>>be?!?
> >>>>
> >>>>Because SML doesn't know keyword or optional parameters?
> >>>
> >>>Ocaml has both keyword and optional parameters, and does type
> >>>inference.
> >>
> >>I should have put a smiley after my question.
> > Your question shows a complete lack of understanding.  Do you really
> > think you could have hidden that with a smiley?
> 
> My question was an attempt to caricature your change
> maneuvers. Obviously, I have failed in this regard.

You mean the change maneuvers that did not take place?

> fun f(g,x) = g(x) doesn't have anything to do with my example. Yet, you
> still think this is a good response. Now tell me: who is the one who
> shows a lack of understanding?

This is getting ridiculous.  Good bye.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp7ut9$hnk$1@newsreader3.netcologne.de>
Matthias Blume wrote:

> You mean the change maneuvers that did not take place?
> 
>>fun f(g,x) = g(x) doesn't have anything to do with my example. Yet, you
>>still think this is a good response. Now tell me: who is the one who
>>shows a lack of understanding?
> 
> This is getting ridiculous.  Good bye.

Here is what happened from my point of view. I have given an example 
along the following lines (switching to Scheme syntax):

(define (f x) (g x))

I have put in (eval (read)) to make the point that, in a certain 
programming style, anything is regarded as an optional feature by 
default. But that's not really important here. (Changes can be made 
anywhere, for example in a listener in another thread.)

You have changed the example to this:

(define (f g x) (g x))

That is, your proposed solution changes an optional feature - the 
presence or absence of g - to a mandatory feature: Now, g must be passed 
in as a parameter.

That's not a reconciliation of two programming styles, that's giving 
preference to one programming style over the other.


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
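The two shapes Pascal contrasts can be transcribed into a statically typed setting to see what each commits to. A TypeScript sketch (illustrative only, with hypothetical names): the free-variable version keeps g replaceable, but every replacement is itself checked against g's type.

```typescript
// (define (f x) (g x)) -- f refers to a free, replaceable binding g
let g: (x: number) => number = (x) => x + 1;
const fFree = (x: number): number => g(x);

// (define (f g x) (g x)) -- Matthias's version: g becomes a parameter
const fParam = (g2: (x: number) => number, x: number): number => g2(x);

g = (x) => x * 2;        // "redefining" the free g is allowed, but checked
const viaFree  = fFree(21);
const viaParam = fParam((x) => x + 1, 41);
```

The restriction, in line with Pascal's objection, is that any replacement must keep g's declared type; what this sketch cannot express is a binding that disappears entirely, the way (unintern 'f) removes one.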
From: Steve Schafer
Subject: Re: More static type fun.
Date: 
Message-ID: <j98arv05mvn3sue3sujuklarft7ncq75mr@4ax.com>
On Fri, 14 Nov 2003 19:33:26 +0100, Pascal Costanza <········@web.de>
wrote:

>I am concerned about the call site of that code, not the code itself.

No problem. The same arguments apply.

>Please go back to my example again: How can you statically type check 
>DO-IT that includes a call of a function F, when the existence of F can 
>only be determined at runtime?

Here are the things you can determine before F is defined:

1) You can determine the number and allowable type(s) of the
parameter(s) passed to F, including such things as optional or keyword
parameters, variable-length parameter lists, etc.

2) You can determine the allowable type(s) of the result returned by F,
including the possibility that, in some languages, F can return multiple
values.

Together, (1) and (2) are often referred to as the "type signature" of
F. Note that, depending on the context of F in DO-IT, the answers to (1)
or (2) might be "anything at all." That's not a problem, as we shall see
below.

That's all you can know before the program executes, but then again,
that's all you need to know. If you know that F is constrained by that
type signature, you know that the rest of the program + F is statically
type safe (with the obvious caveat that you can't know whether or not a
particular implementation of F is itself statically type safe until you
actually inspect that implementation). This level of type safety
includes the "anything at all" possibilities, because they simply mean
that the static type check has determined that even completely arbitrary
values for parameters passed and/or results returned won't cause a
type-related error at runtime.

Then, at the point during program execution when a definition for F
becomes available, you can statically type check its implementation
against the constraints listed above, along with any other constraints
that might be applicable, depending on its definition. For example, F
might access data or functions that are outside the scope of DO-IT, and
such access can also be statically type-checked.

Once the implementation of F has been statically type checked and
determined to be type-correct, you have a situation where the whole of
"the rest of the program + F" is known to be statically type safe, and
you can discard the caveat mentioned above. If at any point the
implementation of F changes, you just repeat the static type checking
process prior to executing the new F. (Being that this discussion is
taking place in comp.lang.functional, I should perhaps say "prior to
evaluating the new F" instead, but I think the point is clear.)

-Steve
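Steve's staged scheme can be sketched concretely. In this TypeScript illustration (a stand-in under stated assumptions, not his implementation), DO-IT is checked against only F's type signature; each implementation of F installed later is checked against that same signature before it can run.

```typescript
type F = (x: number) => number;   // all DO-IT ever needs to know about F

let f: F = (x) => x;              // a trivial, well-typed initial F

function doIt(x: number): number {
  return f(x);                    // checked once, against F's signature
}

// "Later", a new definition of F arrives; installing it re-checks it
// against F. A mismatched definition is rejected before it can execute:
f = (x) => x * 2;
// f = (s: string) => s;          // compile-time error: wrong signature
const out = doIt(21);
```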
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp3bva$gsg$1@newsreader3.netcologne.de>
Steve Schafer wrote:

> On Fri, 14 Nov 2003 19:33:26 +0100, Pascal Costanza <········@web.de>
> wrote:
> 
> 
>>I am concerned about the call site of that code, not the code itself.
> 
> 
> No problem. The same arguments apply.
> 
> 
>>Please go back to my example again: How can you statically type check 
>>DO-IT that includes a call of a function F, when the existence of F can 
>>only be determined at runtime?
> 
> 
> Here are the things you can determine before F is defined:
> 
> 1) You can determine the number and allowable type(s) of the
> parameter(s) passed to F, including such things as optional or keyword
> parameters, variable-length parameter lists, etc.
> 
> 2) You can determine the allowable type(s) of the result returned by F,
> including the possibility that, in some languages, F can return multiple
> values.
> 
> Together, (1) and (2) are often referred to as the "type signature" of
> F. Note that, depending on the context of F in DO-IT, the answers to (1)
> or (2) might be "anything at all." That's not a problem, as we shall see
> below.
> 
> That's all you can know before the program executes, but then again,
> that's all you need to know. If you know that F is constrained by that
> type signature, you know that the rest of the program + F is statically
> type safe (with the obvious caveat that you can't know whether or not a
> particular implementation of F is itself statically type safe until you
> actually inspect that implementation). This level of type safety
> includes the "anything at all" possibilities, because they simply mean
> that the static type check has determined that even completely arbitrary
> values for parameters passed and/or results returned won't cause a
> type-related error at runtime.
> 
> Then, at the point during program execution when a definition for F
> becomes available, you can statically type check its implementation
> against the constraints listed above, along with any other constraints
> that might be applicable, depending on its definition. For example, F
> might access data or functions that are outside the scope of DO-IT, and
> such access can also be statically type-checked.
> 
> Once the implementation of F has been statically type checked and
> determined to be type-correct, you have a situation where the whole of
> "the rest of the program + F" is known to be statically type safe, and
> you can discard the caveat mentioned above. If at any point the
> implementation of F changes, you just repeat the static type checking
> process prior to executing the new F. (Being that this discussion is
> taking place in comp.lang.functional, I should perhaps say "prior to
> evaluating the new F" instead, but I think the point is clear.)

Wouldn't this mean that such a static type system would have to accept 
any program written in any dynamically type-checked language? Any static 
type error would have to be flagged as something that might get 
corrected at runtime. What would be the point of calling this a static 
type system?

For example, this seems to be what CMUCL and SBCL are already doing. And 
Common Lisp is typically not regarded as a statically typed language.

Do you know other languages that are implemented like this? From what I 
have heard in this discussion, this wouldn't be an acceptable scheme for 
Haskell, right? At least one implementation mentioned doesn't allow you 
to change the definitions of functions that are already in use, but only 
to "stack" new definitions for new code. Are there Haskell 
implementations that are more flexible than this?


Pascal
From: Peter Seibel
Subject: Re: More static type fun.
Date: 
Message-ID: <m3y8uilngd.fsf@javamonkey.com>
Pascal Costanza <········@web.de> writes:

> Steve Schafer wrote:

> > Then, at the point during program execution when a definition for
> > F becomes available, you can statically type check its
> > implementation against the constraints listed above, along with
> > any other constraints that might be applicable, depending on its
> > definition. For example, F might access data or functions that are
> > outside the scope of DO-IT, and such access can also be statically
> > type-checked. Once the implementation of F has been statically
> > type checked and determined to be type-correct, you have a
> > situation where the whole of "the rest of the program + F" is
> > known to be statically type safe, and you can discard the caveat
> > mentioned above. If at any point the implementation of F changes,
> > you just repeat the static type checking process prior to
> > executing the new F. (Being that this discussion is taking place
> > in comp.lang.functional, I should perhaps say "prior to evaluating
> > the new F" instead, but I think the point is clear.)
> 
> Wouldn't this mean that such a static type system would have to
> accept any program written in any dynamically type-checked language?
> Any static type error would have to be flagged as something that
> might get corrected at runtime. What would be the point of calling
> this a static type system?

I'm a Lisper so I don't really know what I'm talking about re: static
typing but I don't think that follows. Imagine this:

  (defun foo (x y) (+ x y))

  (defun bar () (foo "a" "b"))

That's clearly a type error, as the code stands. Sure, it may be the
case that you'll dynamically change the definition of either FOO or
BAR before BAR is called, but at the moment it *is* a type error.

It seems that most Lispers would be happy to have Lisp complain about
the type error when BAR is defined or when it's compiled, as long as
it doesn't force them to do anything about it and didn't require any
explicit type annotation. Couldn't a Lisp do fancy Haskell-style type
inferencing in the background and use it to generate warnings whenever
the *current* set of definitions contains type errors? (I assume this
is the case because, as you know, some Lisp compilers already do
pretty fancy type inferencing for optimization purposes.)

Or if constantly being warned about things is too annoying, it
certainly doesn't seem like it would hurt the normal Lisp development
process to have a function LIST-TYPE-ERRORS that told you about all
the definitions in which the background type checker had detected type
errors. When one is done making dynamic changes (for a while) one
could call LIST-TYPE-ERRORS and make sure one hadn't forgotten to
change some definitions that needed to be changed.

On the other hand, I'm curious what a static type checker would make
of this program:

  (defun foo (x y) (+ x y))

  (defun bar () (foo "a" "b"))

  (defun baz () (setf (symbol-function 'bar) #'(lambda () (foo  1 2))))

  (defun quux () (baz) (bar))

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
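One hedged guess at an answer to Peter's closing question, transcribed with BAR as an explicit mutable function cell (TypeScript here, as a stand-in static checker): the original body of BAR is rejected outright, while the redefinition performed by BAZ type-checks, so the program only compiles once the ill-typed body is replaced.

```typescript
function foo(x: number, y: number): number { return x + y; }

// The body Peter wrote first is a static error and never compiles:
//   let bar: () => number = () => foo("a", "b");   // rejected

// With a well-typed placeholder, the rest of the program checks out,
// and the reassignment inside baz is itself verified against bar's type.
let bar: () => number = () => foo(0, 0);
function baz(): void { bar = () => foo(1, 2); }
function quux(): number { baz(); return bar(); }

const result = quux();
```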
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp3fon$nsj$1@newsreader3.netcologne.de>
Peter Seibel wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>Steve Schafer wrote:
> 
> 
>>>Then, at the point during program execution when a definition for
>>>F becomes available, you can statically type check its
>>>implementation against the constraints listed above, along with
>>>any other constraints that might be applicable, depending on its
>>>definition. For example, F might access data or functions that are
>>>outside the scope of DO-IT, and such access can also be statically
>>>type-checked. Once the implementation of F has been statically
>>>type checked and determined to be type-correct, you have a
>>>situation where the whole of "the rest of the program + F" is
>>>known to be statically type safe, and you can discard the caveat
>>>mentioned above. If at any point the implementation of F changes,
>>>you just repeat the static type checking process prior to
>>>executing the new F. (Being that this discussion is taking place
>>>in comp.lang.functional, I should perhaps say "prior to evaluating
>>>the new F" instead, but I think the point is clear.)
>>
>>Wouldn't this mean that such a static type system would have to
>>accept any program written in any dynamically type-checked language?
>>Any static type error would have to be flagged as something that
>>might get corrected at runtime. What would be the point of calling
>>this a static type system?
> 
> 
> I'm a Lisper so I don't really know what I'm talking about re: static
> typing but I don't think that follows. Imagine this:
> 
>   (defun foo (x y) (+ x y))
> 
>   (defun bar () (foo "a" "b"))
> 
> That's clearly a type error, as the code stands. Sure, it may be the
> case that you'll dynamically change the definition of either FOO or
> BAR before BAR is called, but at the moment it *is* a type error.

Yes, in this specific case it is a type error. But it may be that you 
have only picked an example that illustrates a needlessly inflexible 
part of Common Lisp.

Consider this:

(defmethod foo (x y) (+ x y))

(defmethod bar () (foo "a" "b"))

This doesn't look so clear-cut anymore, does it?

In fact, Jonathan Bachrach gives some good arguments why one might want 
to get rid of "pure" functions and instead opt for generic functions all 
the way down. See http://www.ai.mit.edu/~jrb/goo/


Pascal

P.S.: Yes, I know about Generic Haskell, but mentioning Generic Haskell 
would again miss some other important arguments in this discussion.
From: Peter Seibel
Subject: Re: More static type fun.
Date: 
Message-ID: <m3llqilj5c.fsf@javamonkey.com>
Pascal Costanza <········@web.de> writes:

> Peter Seibel wrote:
> 
> > Pascal Costanza <········@web.de> writes:
> >
> >>Steve Schafer wrote:
> >
> >>>Then, at the point during program execution when a definition for
> >>>F becomes available, you can statically type check its
> >>>implementation against the constraints listed above, along with
> >>>any other constraints that might be applicable, depending on its
> >>>definition. For example, F might access data or functions that are
> >>>outside the scope of DO-IT, and such access can also be statically
> >>>type-checked. Once the implementation of F has been statically
> >>>type checked and determined to be type-correct, you have a
> >>>situation where the whole of "the rest of the program + F" is
> >>>known to be statically type safe, and you can discard the caveat
> >>>mentioned above. If at any point the implementation of F changes,
> >>>you just repeat the static type checking process prior to
> >>>executing the new F. (Being that this discussion is taking place
> >>>in comp.lang.functional, I should perhaps say "prior to evaluating
> >>>the new F" instead, but I think the point is clear.)
> >>
> >>Wouldn't this mean that such a static type system would have to
> >>accept any program written in any dynamically type-checked language?
> >>Any static type error would have to be flagged as something that
> >>might get corrected at runtime. What would be the point of calling
> >>this a static type system?
> > I'm a Lisper so I don't really know what I'm talking about re: static
> > typing but I don't think that follows. Imagine this:
> >   (defun foo (x y) (+ x y))
> >   (defun bar () (foo "a" "b"))
> > That's clearly a type error, as the code stands. Sure, it may be the
> > case that you'll dynamically change the definition of either FOO or
> > BAR before BAR is called, but at the moment it *is* a type error.
> 
> Yes, in this specific case it is a type error. But it may be that you
> have only picked an example that illustrates a needlessly inflexible
> part of Common Lisp.
> 
> Consider this:
> 
> (defmethod foo (x y) (+ x y))
> 
> (defmethod bar () (foo "a" "b"))
> 
> This doesn't look so clear-cut anymore, does it?

Hmmm. I don't see this really as any different than my example.
Assuming these two definitions are the only ones we've made so far, I'd
say the call to FOO in BAR is still a type error. That it could be
fixed by defining a new method that specializes either X or Y is
really no different than that I could fix the type error in my
original example by redefining FOO. My notional background type
checker would presumably know what methods were defined on what
generic functions and, after inferring the types of "a" and "b", could
figure out what the effective method for FOO would be given the current
state of the world. From there it can see that the call to FOO from
BAR is going to end up trying to apply + to "a" and "b" which is a
type error. That we can dynamically change the state of the world just
means that the background type checker would have to update its list
of type errors.

In a way, your example makes it more obvious to me *why* it'd be cool
to have a background static type checker: because generic functions
make it that much harder (due to the extra level of indirection) to
see at a glance what code is going to be run in response to any given
call, it'd be nice to have something that made sure that I hadn't left
gaps (say by forgetting to define a method specialized on some
combination of types) where a type error could slip through.

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp3jn9$294$1@newsreader3.netcologne.de>
Peter Seibel wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>Peter Seibel wrote:

>>>I'm a Lisper so I don't really know what I'm talking about re: static
>>>typing but I don't think that follows. Imagine this:
>>>  (defun foo (x y) (+ x y))
>>>  (defun bar () (foo "a" "b"))
>>>That's clearly a type error, as the code stands. Sure, it may be the
>>>case that you'll dynamically change the definition of either FOO or
>>>BAR before BAR is called, but at the moment it *is* a type error.
>>
>>Yes, in this specific case it is a type error. But it may be that you
>>have only picked an example that illustrates a needlessly inflexible
>>part of Common Lisp.
>>
>>Consider this:
>>
>>(defmethod foo (x y) (+ x y))
>>
>>(defmethod bar () (foo "a" "b"))
>>
>>This doesn't look so clear-cut anymore, does it?
> 
> 
> Hmmm. I don't see this really as any different than my example.
> Assuming these two definitions are the only ones we've made so far, I'd
> say the call to FOO in BAR is still a type error. That it could be
> fixed by defining a new method that specializes either X or Y is
> really no different than that I could fix the type error in my
> original example by redefining FOO. My notional background type
> checker would presumably know what methods were defined on what
> generic functions and, after inferring the types of "a" and "b", could
> figure out what the effective method for FOO would be given the current
> state of the world. From there it can see that the call to FOO from
> BAR is going to end up trying to apply + to "a" and "b" which is a
> type error. That we can dynamically change the state of the world just
> means that the background type checker would have to update its list
> of type errors.

Hmm, what I was referring to is the fact that a "pure" function (-> 
DEFUN) is typically meant as the "final word" about what the function is 
supposed to do, whereas generic functions (-> DEFGENERIC/DEFMETHOD) are 
intentionally open to further specialization. The fact that the MOP 
allows you to define those specializations programmatically means that 
you cannot really tell what is going to happen at runtime. Again, that's 
the whole point of dynamic metaprogramming.

> In a way, your example makes it more obvious to me *why* it'd be cool
> to have a background static type checker: because generic functions
> make it that much harder (due to the extra level of indirection) to
> see at a glance what code is going to be run in response to any given
> call, it'd be nice to have something that made sure that I hadn't left
> gaps (say by forgetting to define a method specialized on some
> combination of types) where a type error could slip through.

Yes, this would definitely be cool. But please keep in mind that the 
standard notion of a static type system is that it has to be strict - 
i.e., if it can't statically check a program to be free of what it 
regards as type errors, it has to reject it and refuse to execute it at 
all. Any weakening of this strictness would move a type system towards 
soft typing, or towards what, for example, CMUCL and SBCL already seem 
to do, which is what I have already mentioned as a good compromise from 
the very beginning of this discussion.


Pascal
From: Peter Seibel
Subject: Re: More static type fun.
Date: 
Message-ID: <m37k22lfav.fsf@javamonkey.com>
Pascal Costanza <········@web.de> writes:

> Peter Seibel wrote:
> 
> > Pascal Costanza <········@web.de> writes:
> >
> >>Peter Seibel wrote:
> 
> >>>I'm a Lisper so I don't really know what I'm talking about re: static
> >>>typing but I don't think that follows. Imagine this:
> >>>  (defun foo (x y) (+ x y))
> >>>  (defun bar () (foo "a" "b"))
> >>>That's clearly a type error, as the code stands. Sure, it may be the
> >>>case that you'll dynamically change the definition of either FOO or
> >>>BAR before BAR is called, but at the moment it *is* a type error.
> >>
> >>Yes, in this specific case it is a type error. But it may be that you
> >>have only picked an example that illustrates a needlessly inflexible
> >>part of Common Lisp.
> >>
> >>Consider this:
> >>
> >>(defmethod foo (x y) (+ x y))
> >>
> >>(defmethod bar () (foo "a" "b"))
> >>
> >>This doesn't look so clear-cut anymore, does it?
> > Hmmm. I don't see this really as any different than my example.
> > Assuming these two definitions are the only ones we've made so far, I'd
> > say the call to FOO in BAR is still a type error. That it could be
> > fixed by defining a new method that specializes either X or Y is
> > really no different than that I could fix the type error in my
> > original example by redefining FOO. My notional background type
> > checker would presumably know what methods were defined on what
> > generic functions and, after inferring the types of "a" and "b", could
> > figure out what the effective method for FOO would be given the current
> > state of the world. From there it can see that the call to FOO from
> > BAR is going to end up trying to apply + to "a" and "b" which is a
> > type error. That we can dynamically change the state of the world just
> > means that the background type checker would have to update its list
> > of type errors.
> 
> Hmm, what I was referring to is the fact that a "pure" function (->
> DEFUN) is typically meant as the "final word" about what the function
> is supposed to do, whereas generic functions (-> DEFGENERIC/DEFMETHOD)
> are intentionally open to further specialization. The fact that the
> MOP allows you to define those specializations programmatically means
> that you cannot really tell what is going to happen at runtime. Again,
> that's the whole point of dynamic metaprogramming.

Right. So it seems to me that the reconciliation of static typing with
The Lisp Way would be to say: at any given moment in the lifetime of
a Lisp program the type checker can either prove that the
program--i.e. the current set of definitions--is free of type errors
or it is not. (I'm assuming based on what the static type advocates
have been saying in this thread that the fancy Haskell-style type
checkers can in fact do that; I don't know that myself.)

Hmmm. I thought I was going to go on and say something to disagree
with you but then I wrote the example that I thought was going to
prove my point and it proved your point instead. ;-) So now I pose
this question to the static typing experts:

Suppose I have this code:

  (defgeneric foo (a))

  (defun call-foo ()
    (let (arg method-defn)
      (format *query-io*
        "Please enter an argument for FOO: ")
      (setf arg (read *query-io*))
      (format *query-io* "
        Please enter a new method definition for FOO to handle ~a: " arg)
      (eval (read *query-io*))
      (foo arg)))

It seems that statically speaking, this program contains a type error
because in Common Lisp a generic function with no methods defined on
it can't safely be called with any type of argument. Thus there is no
type that ARG could have that would make the call to FOO typesafe.
However, assuming the user does their part and provides an appropriate
method definition, by the time FOO is called it will be typesafe.

Or is there some way to type this program so that it is considered
type safe before the new user-provided method has been added to the
system? (This may be just a more complex version of one of Pascal's
examples but his were compressed enough that *I* didn't quite get his
point until now; maybe this example and question will help someone
else.)
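
A hedged sketch of the conventional statically typed answer to this
pattern, in Haskell: the user's input is parsed into a closed datatype,
so only *data* crosses the runtime boundary, and the checker can verify
before execution that FOO handles every case. (The names and the parsing
strategy below are illustrative, not taken from any real library; note
that the user-supplied *method definition* of the Lisp version has no
direct analogue here, which is precisely what the question is probing.)

```haskell
-- Illustrative sketch: runtime input becomes data of a closed type,
-- so the checker knows, before the program runs, that foo covers
-- every case that can reach it.
data Arg = AInt Int | AStr String

foo :: Arg -> String
foo (AInt n) = "an integer: " ++ show n
foo (AStr s) = "a string: " ++ s

-- Classify the user's input at runtime; anything that doesn't read
-- as an integer is treated as a string.
parseArg :: String -> Arg
parseArg s = case reads s of
               [(n, "")] -> AInt n
               _         -> AStr s

callFoo :: IO ()
callFoo = do
  putStr "Please enter an argument for FOO: "
  line <- getLine
  putStrLn (foo (parseArg line))
```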

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: james anderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3FB569CC.E1987522@setf.de>
Peter Seibel wrote:
> 
> Suppose I have this code:
> 
>   (defgeneric foo (a))
> 
>   (defun call-foo ()
>     (let (arg method-defn)
>       (format *query-io*
>         "Please enter an argument for FOO: ")
>       (setf arg (read *query-io*))
>       (format *query-io* "
>         Please enter a new method definition for FOO to handle ~a: " arg)
>       (eval (read *query-io*))
>       (foo arg)))
> 
> It seems that statically speaking, this program contains a type error
> because in Common Lisp a generic function with no methods defined on
> it can't safely be called with any type of argument. Thus there is no
> type that ARG could have that would make the call to FOO typesafe.
> However, assuming the user does their part and provides an appropriate
> method definition, by the time FOO is called it will be typesafe.
> 

one somewhat less contrived instance of this situation would be a generic
implementation of data conversion, e.g.

(defGeneric convert-units (value from to) (:method ...)
(:generic-function-class adaptive-gf))

where the base implementation provided the building blocks and a method for
no-applicable-method specialized on adaptive-gf computed the implementation for
given arguments from the known building blocks, defined the method on-the-fly,
and delegated to the newly defined method for its result.


another example is dynamically generated functions which implement queries
against a clos data model. such things can be implemented as generic functions
which examine as-yet unknown data introspectively and generate composite access
methods on the fly.

both of these situations illustrate the same issue as the example above, with
the added aspect that the functions are typed not data* -> data but (data* X
function) -> (data X function), and the reconciliation happens after the
function is called.

...
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <h47h81-021.ln1@ID-7776.user.dfncis.de>
Peter Seibel <·····@javamonkey.com> wrote:
> Right. So it seems to me that the reconciliation of static typing with
> The Lisp Way, would be to say: at any given moment in the lifetime of
> a Lisp program the type checker can either prove that the
> program--i.e. the current set of definitions--is free of type errors
> or it is not. 

The problem with "The Lisp Way" is that as soon as you admit in one
place that type inference cannot figure out what the type should be,
you lose any advantage of static typing in all parts of your program
that somehow refer to that one place.

And the primitives of Lisp are made in such a way that you'll run
into this problem sooner or later.

So you either have to throw in type annotations to "convince" the
compiler, or you leave large parts of your program unchecked. The
latter means that the compiler has now to insert dynamic checks again
in those places (and the places may change non-locally according to how
you change your functions in a very different place of the program),
and that you again have to write low-level unit tests for all those
parts of your program that remain unchecked.

Lisp doesn't lend itself to static typing easily. The best you can
do in Lisp is to turn it into a supplementary tool as CMUCL does, keep it
optional, and mainly use it as a hint to the compiler to generate
better code if it can, or as an additional way to force dynamic checks
if the compiler cannot.


That's better than nothing, but not as good as you can do in languages
that are designed to support static typing from bottom up. But that
doesn't matter at all, because "The Lisp Way" has developed methods
(e.g. unit tests) to deal with it, so there's no need to worry about
anything, and no need to change Lisp in any way.


The main point I was trying to bring across in this thread is: There
are static type systems that don't get in the way like those of C and
Java do. You can develop as easily and flexibly with a static type
system as you can with a dynamic type system. So the kind of type
system itself is not to blame for anything; it's the particular
*language* (and the libraries) that decides what kind of things are
easy to do, and what are harder to do. And of course these things
differ from language to language, so you choose the right one for the
particular job you have to do.

- Dirk
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <fzgpziar.fsf@comcast.net>
Dirk Thierbach <··········@gmx.de> writes:

> The main point I was trying to bring across in this thread is:  There
> are static type systems that don't get in the way like those of C and
> Java do.  

No problem with this statement.

> You can develop as easily and flexibly with a static type system as
> you can with a dynamic type system.

It is this one that most lispers have problems with.  If it were
qualified with an `almost as easily and flexibly', you wouldn't have
as many complaints, either.

Earlier you said:

> The problem with "The Lisp Way" is that as soon as you admit in one
> place that type inference cannot figure out what the type should be,
> you lose any advantage of static typing in all parts of your program
> that somehow refer to that one place.
>
> And the primitives of Lisp are made in such a way that you'll run
> into this problem sooner or later.

Here's the problem:  Lispers *like* these primitives quite a bit.
Much of the flexibility of Lisp comes from these particular primitives.
If static typing involves giving these up, we feel that we have lost
important functionality.  Being assured that the alternative is just
about as good isn't enough.

-- 
~jrm
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <eeci81-0n9.ln1@ID-7776.user.dfncis.de>
Joe Marshall <·············@comcast.net> wrote:
> Dirk Thierbach <··········@gmx.de> writes:

>> The main point I was trying to bring across in this thread is:  There
>> are static type systems that don't get in the way like those of C and
>> Java do.  

> No problem with this statement.

Good. :-)

>> You can develop as easily and flexibly with a static type system as
>> you can with a dynamic type system.

> It is this one that most lispers have problems with.  If it were
> qualified with an `almost as easily and flexibly', you wouldn't have
> as many complaints, either.

"Ease" and "flexibility" are not very well defined. So I don't have
any problems throwing in an "almost" if that makes you happy :-)

In my experience, the differences caused by language and library
issues are much bigger than any difference caused by dynamic vs.
static typing.

>> And the primitives of Lisp are made in such a way that you'll run
>> into this problem sooner or later.

> Here's the problem:  Lispers *like* these primitives quite a bit.

Of course.

> Much of the flexibility of Lisp comes from these particular primitives.

But you don't have to change them drastically. The changes are
certainly bigger than between, say, Scheme and CL, but it's mostly
details that get "in the way" of type inference. One thing that is
not a detail is imperative features (setq and friends).

(Of course I may be wrong here -- I haven't looked at the details,
so it might in fact be a lot harder than it looks).

> If static typing involves giving these up, we feel that we have lost
> important functionality.

It would involve giving up Lisp as you know it; it wouldn't involve
giving up particular primitives (they will just change). But since
everyone knows and uses those primitives, it will hurt a lot.

And that's why I keep saying that trying to add (complete) static
typing to Lisp is not a good idea. And I certainly don't want to talk
anybody into using static typing at all costs. All I want to do is
fight against the prejudice that static typing is necessarily a
"straitjacket" that keeps you from being productive.

(And another thing one should keep in mind is that "soft typing" is
a lot weaker with respect to benefits than "hard typing". But if
"hard typing" is not an option (as in Lisp), soft typing is certainly
better than nothing.)

- Dirk
From: Feuer
Subject: Re: More static type fun.
Date: 
Message-ID: <3FB6530D.5C3A1FD5@his.com>
In a functional language, especially one with static types, you don't
want to go around changing a function's internals at runtime.  You want
to parametrize the function.  Or if you don't want to do that, you can
(in some languages) parametrize the module containing the function.

David
From: ··········@ii.uib.no
Subject: Re: More static type fun.
Date: 
Message-ID: <eg4qx62g9u.fsf@havengel.ii.uib.no>
Peter Seibel <·····@javamonkey.com> writes:

> On the other hand, I'm curious what a static type checker would make
> of this program:

Well, I'm no expert on type systems, just your average grunt
programmer, but my built-in type- and correctness checker would
probably write

>   (defun foo (x y) (+ x y))

foo x y = x + y

>   (defun bar () (foo "a" "b"))

-- I'm obviously not going to call bar with the foo mentioned above,
-- so add it as a parameter.

bar foo = foo "a" "b"

>   (defun baz () (setf (symbol-function 'bar) #'(lambda () (foo  1 2))))
>   (defun quux () (baz) (bar))

-- only change the definition of bar locally
quux = let bar = foo 1 2 in bar

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Steve Schafer
Subject: Re: More static type fun.
Date: 
Message-ID: <tnucrv8ctq3lbb4v538icbb90n3qlnomje@4ax.com>
On Fri, 14 Nov 2003 20:55:23 +0100, Pascal Costanza <········@web.de>
wrote:

>Wouldn't this mean that such a static type system would have to accept 
>any program written in any dynamically type-checked language? Any static 
>type error would have to be flagged as something that might get 
>corrected at runtime. What would be the point to call this a static type 
>system?

I have to confess that I have absolutely no idea how you come to this
conclusion. If your goal is to create a program that, at runtime, can
accept any arbitrary dynamically-typed code as input, and safely execute
it, then yes, obviously, you have to accommodate dynamically-typed code,
and you have to confine it to an appropriate "sandbox," with
well-defined interfaces to any statically-typed code that exists outside
of the sandbox. (In the U.S. at least, this sort of reasoning is usually
accompanied by "Duh!")

However, I feel confident in asserting that there are plenty of useful
statically-typed programs that _don't_ have that requirement. And I
would even go so far as to say that there exist statically-typed
programs that _don't_ have that requirement, which nevertheless have
equivalent functionality to any program that _does_ have that
requirement.

A few years back, I wrote a Scheme interpreter in Pascal. Everything
worked fine--I could type in arbitrary, dynamically-typed Scheme
expressions and successfully evaluate them inside a statically-typed
Pascal framework. I can hear your objections: But those Scheme
expressions don't directly interact with the Pascal code! To which I
would counter: With a Lisp interpreter, no matter what you type as input
to the interpreter, it won't have any effect on the low-level primitives
(typically written in assembler or C or some such thing) that run the
whole show.* It's exactly the same thing.

If I want to write a Scheme interpreter (in Pascal, for example) that
_does_ interact with the statically-typed "framework" code, I can, and I
can do it in a type-safe way. All I have to do is decide what the type
signatures of the interface functions are going to be, and a
type-related runtime error will never occur in the framework. Yes, I do
have to dynamically type-check the data and/or functions that are passed
from the Scheme side to the Pascal side (as part of the marshalling
process), but all of the dynamic type-checking is handled within the
sandbox in which the Scheme code lives. The statically-typed portions of
the code remain statically-typed.

More objections: But you can't write Scheme functions that _arbitrarily_
interact with the Pascal framework! To which I counter: Asking that the
Scheme functions be allowed to arbitrarily interact with the Pascal
framework is equivalent to asking that Lisp's eval be able to evaluate
_arbitrary_ sequences of characters (not just legal Lisp expressions) in
a meaningful way. The plain truth is that no matter how "dynamic" the
environment, there is going to be a boundary between "these are the
things you can do" and "these are the things you can't do." And the set
of "these are the things you can't do" is never going to be empty, no
matter how hard you try.

From the point of view of dynamic metaprogramming, a statically-typed
framework is certainly going to impose a different set of constraints on
the can do/can't do divide (I think I hear another "Duh!" off in the
distance), and so it will influence _how_ you go about getting something
done, but it won't affect _what_ you are able to do.

>Do you know other languages that are implemented like this? From what I 
>have heard in this discussion, this wouldn't be an acceptable scheme for 
>Haskell, right?

Yes and no. Yes, a typical off-the-shelf function in Haskell isn't going
to allow you to replace its implementation at runtime. But no, that
doesn't mean you can't write a Haskell function in such a way that its
implementation can be replaced at runtime. (You can.)

Once again, _how_ you go about accomplishing dynamic metaprogramming in
Haskell (or other statically-typed language) is different from how you
go about it in Lisp (or other dynamically-typed language), but you can
still do it just as well, and just as easily.**

-Steve

*With the possible exception of Lisp Machine types of systems, where
it's Lisp "all the way down." However, I suspect that even in those
environments there are going to be certain things that you simply cannot
do from the REPL.

**One thing that sets apart the Lisp family from most other programming
environments is the presence of the built-in eval, which certainly makes
certain kinds of things "easy." But there's nothing to prevent inclusion
of the same functionality in statically-typed environments, be they
Haskell or Pascal or whatever, and once it's there, then those tasks
that are made easy by Lisp's eval become just as easy in the
statically-typed environment.
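
The footnote's last point admits a tiny illustration (a deliberately toy
sketch, nothing like Lisp's full eval): an "eval" can live inside a
statically typed host when expressions are ordinary data, with eval's
type fixing in advance what it may return.

```haskell
-- Toy sketch: expressions are plain data, and eval's type fixes,
-- statically, what it can return. Extending the "language" means
-- extending the datatype, and the checker then flags any eval
-- that misses a case.
data Expr = Lit Int
          | Add Expr Expr
          | Mul Expr Expr

eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

-- eval (Add (Lit 1) (Mul (Lit 2) (Lit 3))) gives 7
```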
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <aeb7ff58.0311151510.46b8be92@posting.google.com>
Steve Schafer <···@reply.to.header> wrote in message news:<··································@4ax.com>...

> With a Lisp interpreter, no matter what you type as input
> to the interpreter, it won't have any effect on the low-level primitives
> (typically written in assembler or C or some such thing) that run the
> whole show.* It's exactly the same thing.

Common Lisp is compiled, so it's not the same thing.

[snip]

>The plain truth is that no matter how "dynamic" the
> environment, there is going to be a boundary between "these are the
> things you can do" and "these are the things you can't do." And the set
> of "these are the things you can't do" is never going to be empty, no
> matter how hard you try.

Since Common Lisp is compiled, and I can patch the running compiler
while it is running, I really don't see that there is any hard
boundary to how I can change the running environment. Certainly I will
be warned if I try to redefine kernel functions, but I can override
those warnings, so, no, there really is no hard boundary between
"things you can do" and "things you can't do." This is even more true
of the many Common Lisp implementations that allow one to include LAP
or lisp assembly code to define what machine code is generated at
compile time.
From: Steve Schafer
Subject: Re: More static type fun.
Date: 
Message-ID: <jutfrvk0vr1m4sor152c1kok8gd2vs8g80@4ax.com>
On 15 Nov 2003 15:10:57 -0800, ·······@mediaone.net (Raffael Cavallaro)
wrote:

>Common Lisp is compiled, so it's not the same thing.

It doesn't matter. For dynamic metaprogramming, you have to have some
mechanism that converts "data" into "code." Whether that's a compiler,
an interpreter, or a Vatican scribe, it's basically the same thing--an
implementation detail. And everything else stays the same.

>Since Common Lisp is compiled, and I can patch the running compiler
>while it is running, I really don't see that there is any hard
>boundary to how I can change the running environment. Certainly I will
>be warned if I try to redefine kernel functions, but I can override
>those warnings, so, no, there really is no hard boundary between
>"things you can do" and "things you can't do." This is even more true
>of the many Common Lisp implementations that allow one to include LAP
>or lisp assembly code to define what machine code is generated at
>compile time.

Well, I don't think the original idea of dynamic metaprogramming as
presented in this discussion was intended to descend to the level of,
say, converting a Lisp system to one that compiles and runs Visual Basic
programs, but once again, it doesn't matter--it's still an
implementation detail. Nothing about those aspects of the Common Lisp
system has anything to do with dynamic typing (or even with Lisp); they
could just as well be implemented in any other way you might want.

-Steve
From: William D Clinger
Subject: Re: More static type fun.
Date: 
Message-ID: <fb74251e.0311151527.b9034a5@posting.google.com>
Excuse me, but didn't the United States Senate just go over
all this in their recent 40-hour anti-filibuster filibuster?

Oh, I see.  They didn't settle anything.  Carry on, then.

Will
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <aeb7ff58.0311152056.5213dda6@posting.google.com>
··········@verizon.net (William D Clinger) wrote in message news:<···························@posting.google.com>...
> Excuse me, but didn't the United States Senate just go over
> all this in their recent 40-hour anti-filibuster filibuster?
> 
> Oh, I see.  They didn't settle anything.  Carry on, then.
> 
> Will

And I had such high hopes for the legislative fiat approach to solving
the halting problem.
;^)
From: Steve Schafer
Subject: Re: More static type fun.
Date: 
Message-ID: <jotfrvo31agf8ngtf9p8pbvhf573npd7g8@4ax.com>
On 15 Nov 2003 15:27:27 -0800, ··········@verizon.net (William D
Clinger) wrote:

>Excuse me, but didn't the United States Senate just go over
>all this in their recent 40-hour anti-filibuster filibuster?

The Senate debate covered only the strict/lazy issue to any depth. There
was a brief foray into static/dynamic typing, but it was peripheral to
the main discussion and didn't really get anywhere.

-Steve
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp6mm3$f0t$1@newsreader3.netcologne.de>
Steve Schafer wrote:

> On Fri, 14 Nov 2003 20:55:23 +0100, Pascal Costanza <········@web.de>
> wrote:
> 
>>Wouldn't this mean that such a static type system would have to accept 
>>any program written in any dynamically type-checked language? Any static 
>>type error would have to be flagged as something that might get 
>>corrected at runtime. What would be the point to call this a static type 
>>system?
> 
> I have to confess that I have absolutely no idea how you come to this
> conclusion. If your goal is to create a program that, at runtime, can
> accept any arbitrary dynamically-typed code as input, and safely execute
> it, then yes, obviously, you have to accommodate dynamically-typed code,
> and you have to confine it to an appropriate "sandbox," with
> well-defined interfaces to any statically-typed code that exists outside
> of the sandbox. (In the U.S. at least, this sort of reasoning is usually
> accompanied by "Duh!")

Misunderstanding - maybe I wasn't clear enough.

You have described a conceivable approach that doesn't signal a missing 
function as a type error, but instead automatically creates a signature 
as a "stub". Only later, when that stub gets "filled", does the static type 
checker proceed to check that function.

Now, if no missing definition is flagged as a type error anymore, would 
you still regard this as a static type system in the usually accepted sense?

Furthermore, assuming that even calls to functions with wrong parameter 
types could be corrected at runtime by providing more specializations 
for that function at runtime (think: generic functions), wouldn't this 
mean that such a "static" type system couldn't flag any static errors 
anymore, but could only warn about "potentially problematic" code?

How would this differ from the dynamic type systems we already have that 
in fact do generate such warnings?

(Please note that I haven't yet come to conclusions wrt your 
comments. I am only asking questions.)


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <4lsj81-sn.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> Steve Schafer wrote:

>> If your goal is to create a program that, at runtime, can accept
>> any arbitrary dynamically-typed code as input, and safely execute
>> it, then yes, obviously, you have to accommodate dynamically-typed
>> code, and you have to confine it to an appropriate "sandbox," with
>> well-defined interfaces to any statically-typed code that exists
>> outside of the sandbox. (In the U.S. at least, this sort of
>> reasoning is usually accompanied by "Duh!")

And that's exactly what you do in Lisp as well -- you write your own
eval function for security reasons. So no difference between dynamic
and static typing here.

I have already discussed this with Pascal ...

- Dirk
From: Steve Schafer
Subject: Re: More static type fun.
Date: 
Message-ID: <42tfrv4f9b3nl91a1e02bf4b8i8kss7baj@4ax.com>
On Sun, 16 Nov 2003 03:16:34 +0100, Pascal Costanza <········@web.de>
wrote:

>Now, if no missing definition is flagged as a type error anymore, would 
>you still regard this as a static type system in the usually accepted sense?

Yes. Absolutely. The point is that once the missing information is
provided, it can be statically type-checked before being executed.
That's all that "static type-checking" really means.

>Furthermore, assuming that even calls to functions with wrong parameter 
>types could be corrected at runtime by providing more specializations 
>for that function at runtime (think: generic functions), wouldn't this 
>mean that such a "static" type system couldn't flag any static errors 
>anymore, but could only warn about "potentially problematic" code?

No. You've gotten away from static type-checking and are now peering at
what you think is static type-checking, but you're doing it through
dynamic type-checking-colored glasses. 

The answer is that you wouldn't ever get into that situation, because it
doesn't make sense from the point of view of developing programs using a
statically type-checked environment. You want to be able to say, in
effect, "I'm writing a program that blatantly fails to satisfy static
type-checking, but at runtime, I promise to give it some additional
information that will indeed satisfy the static type checks."

The statically type-checked environment replies, "No way. How can I
trust you? If you don't fulfill your runtime promise, I'd have to throw
up all over your shoes."

The dynamically type-checked environment replies, "Okay, but if you
don't fulfill your runtime promise, I'm going to throw up all over your
shoes."

Is there really any difference?

[Aside: I am going to have to bow out of this discussion, at least for a
while. I'm going to be in meetings with various levels of the federal
bureaucracy for the next four days.]

-Steve
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpav63$eu9$1@newsreader3.netcologne.de>
Steve Schafer wrote:

>>Furthermore, assuming that even calls to functions with wrong parameter 
>>types could be corrected at runtime by providing more specializations 
>>for that function at runtime (think: generic functions), wouldn't this 
>>mean that such a "static" type system couldn't flag any static errors 
>>anymore, but could only warn about "potentially problematic" code?
> 
> 
> No. You've gotten away from static type-checking and are now peering at
> what you think is static type-checking, but you're doing it through
> dynamic type-checking-colored glasses. 

Of course. That's what I am after.

> The answer is that you wouldn't ever get into that situation, because it
> doesn't make sense from the point of view of developing programs using a
> statically type-checked environment. You want to be able to say, in
> effect, "I'm writing a program that blatantly fails to satisfy static
> type-checking, but at runtime, I promise to give it some additional
> information that will indeed satisfy the static type checks."
> 
> The statically type-checked environment replies, "No way. How can I
> trust you? If you don't fulfill your runtime promise, I'd have to throw
> up all over your shoes."
> 
> The dynamically type-checked environment replies, "Okay, but if you
> don't fulfill your runtime promise, I'm going to throw up all over your
> shoes."
> 
> Is there really any difference?

Yes. The dynamically type-checked environment trusts me, and supports me 
by giving me means to deal with the mess on my shoes. ;)


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Steve Schafer
Subject: Re: More static type fun.
Date: 
Message-ID: <pf7trv8toha5g0crbha6tt8ed4hk140fi2@4ax.com>
On Mon, 17 Nov 2003 18:06:10 +0100, Pascal Costanza <········@web.de>
wrote:

>Yes. The dynamically type-checked environment trusts me, and supports me 
>by giving me means to deal with the mess on my shoes. ;)

I sleep better at night knowing that the cleanliness of my shoes is one
less thing I have to worry about.

-Steve
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m2fzgohp6g.fsf@hanabi-air.shimizu.blume>
Steve Schafer <···@reply.to.header> writes:

> On Fri, 14 Nov 2003 20:55:23 +0100, Pascal Costanza <········@web.de>
> wrote:
> 
> >Wouldn't this mean that such a static type system would have to accept 
> >any program written in any dynamically type-checked language? Any static 
> >type error would have to be flagged as something that might get 
> >corrected at runtime. What would be the point to call this a static type 
> >system?
> 
> I have to confess that I have absolutely no idea how you come to this
> conclusion. If your goal is to create a program that, at runtime, can
> accept any arbitrary dynamically-typed code as input, and safely execute
> it, then yes, obviously, you have to accommodate dynamically-typed code,
> and you have to confine it to an appropriate "sandbox," with
> well-defined interfaces to any statically-typed code that exists outside
> of the sandbox. (In the U.S. at least, this sort of reasoning is usually
> accompanied by "Duh!")

Very nicely put. :-)

To dwell on this and other points a bit more...

----

I see two scenarios here (and they are quite different in nature from
each other):

1. The program contains expressions which in all seriousness could
   produce /any/ value whatsoever.  In this case there is really
   nothing for a static type system to pin down further -- the
   receiver of the value had better be prepared for /anything/ to
   come in.  Of course, the static type system can check _that_.

   The example often given is that of the Lisp procedure READ.  A
   static type system could very well make the result type of this
   function be "\exists \alpha . \alpha" (or whatever the favorite way
   of expressing the type of all values might be).  At the same time
   it can then check the remaining program for exhaustiveness --
   whether the receiver code is really ready to receive any value
   without restriction.

   Another possibility is to "overload" the READ procedure on its
   result type.  In effect, if the receiver of the read value is not
   prepared to handle anything but numbers, then the reader can be
   tailored (using the static information from the type checker) to
   restrict the accepted input language to that of numerical values.
   With Haskell's type classes something like this is easily coded up
   and works nicely even in the polymorphic case.
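   Concretely, Haskell's standard Prelude already provides exactly this
   overloading: read has type "Read a => String -> a", and the statically
   known result type selects the parser.  A minimal sketch (standard
   Haskell, nothing assumed beyond the Prelude):

```haskell
-- read is overloaded on its result type via the Read class:
--   read :: Read a => String -> a
-- The statically known context selects the parser, so only numeric
-- input is accepted here; bad input is a runtime parse error, never
-- a type error.
parseAndDouble :: String -> Int
parseAndDouble s = 2 * read s  -- read is forced to produce an Int

main :: IO ()
main = print (parseAndDouble "21")  -- prints 42
```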


2. The program is subject to "live updates" where existing pieces of
   functionality get augmented, ripped out, or replaced.  In this case
   I see nothing wrong with the idea of re-doing the typecheck.

   First of all, such live updates will be fairly rare.  As others have
   already described here, in situations where real live updates are
   done on mission-critical applications, people *already* do
   extensive tests of the updated code (usually on a spare system)
   before doing the same to the real thing.  Re-doing the static
   typecheck (just like with any other program that gets developed in
   a more traditional edit-compile-test cycle), if successful, is
   just another welcome piece of evidence that one is not doing
   something bad.  (Of course, as usual, it is not a substitute for
   testing!)

   If you find yourself doing live updates so frequently that
   typechecking in the manner described above becomes prohibitively
   expensive, then you are doing something wrong, i.e., your program
   is very badly designed.  In this case I'd say that it should be a
   good candidate for one final live update.  (You can guess which
   kind of "update" I mean... :-)

   The cost of re-doing the typechecking, although not that high on
   modern machines to begin with, can be reduced significantly by
   designing the type system carefully, thus taking advantage of
   modularity.  In this case the cost of typechecking at the time of
   the live update might be comparable to just the size of the code
   directly affected by the change as opposed to the size of the
   entire program.  (But as I said, even the latter would not be the
   end of the world.  The time it takes to typecheck even very
   substantial bodies of code is just a drop in the bucket if you also
   go through an extensive regression suite on your "spare" system.)

   For example, if the type system has what is called /principal
   typings/ (as opposed to just /principal types/ as you find them in
   many variants of the HM type system), then program pieces can be
   typed fairly independently.  (Such an approach has its
   disadvantages too, especially if you are *not* interested in
   runtime updates, because many type errors are delayed until the
   pieces of the program are combined.)  Trevor Jim's Ph.D. thesis is
   a good read on this topic.  Also see Appel and Shao's "Smartest
   Recompilation" paper.

[Notice that the first scenario can come up anywhere and has fairly
little to do with "runtime metaprogramming".]

---

Here is one more remark:

What is the meaning of a program fragment that contains references to
variables/functions/whatever that are not (yet) defined (i.e., which
occur /free/ in the fragment)?  Well, one way of thinking about them
is to /close over/ them by explicitly binding those free occurrences in
a lambda abstraction.  So if I look at the program fragment

      f(x)

without knowing what f and x are, then to make sense of this I assume
that this code is effectively parameterized over f and x, and that
concrete values/implementations will be supplied for them later.  This
leaves me with

      \ f . \x . f(x)

which is a closed term that has (at least in ML or Haskell) a
perfectly fine type, e.g.,

   \forall \alpha \forall \beta . (\alpha -> \beta) -> \alpha -> \beta

which captures precisely that f needs to be a function (without
pinning down the function's domain or range at all!), and that x needs
to be a value that is in the domain of f.
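This inference is precisely what ML and Haskell compilers perform; a
sketch of the closed-over fragment in Haskell:

```haskell
-- The closed term \f. \x. f x gets the fully polymorphic type
-- (a -> b) -> a -> b, inferred by the compiler without annotations.
apply :: (a -> b) -> a -> b
apply f x = f x

main :: IO ()
main = do
  print (apply (+ 1) (41 :: Int))   -- instantiated at Int -> Int
  putStrLn (apply reverse "olleh")  -- same term, different instantiation
```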

So, in other words, it is quite easy to imagine that a static
typechecker can deal with such incomplete program fragments
successfully.  Of course, in existing type systems there are a lot of
tricky issues which in effect might make this idea not work out as
planned.  My point is that just because a piece of code refers to some
free variables or functions does not automatically rule out static
type checking.

---

Ok, folks.  Knock yourselves out!  (I'm outta here.)
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp7u4f$g5b$1@newsreader3.netcologne.de>
Matthias Blume wrote:

> I see two scenarios here (and they are quite different in nature from
> each other):
> 
> 1. The program contains expressions which in all seriousness could
>    produce /any/ value whatsoever.  In this case there is really
>    nothing for a static type system to pin down further -- the
>    receiver of the value had better be prepared for /anything/ to
>    come in.  Of course, the static type system can check _that_.
> 
>    The example often given is that of the Lisp procedure READ.  A
>    static type system could very well make the result type of this
>    function be "\exists \alpha . \alpha" (or whatever the favorite way
>    of expressing the type of all values might be).  At the same time
>    it can then check the remaining program for exhaustiveness --
>    whether the receiver code is really ready to receive any value
>    without restriction.
> 
>    Another possibility is to "overload" the READ procedure on its
>    result type.  In effect, if the receiver of the read value is not
>    prepared to handle anything but numbers, then the reader can be
>    tailored (using the static information from the type checker) to
>    restrict the accepted input language to that of numerical values.
>    With Haskell's type classes something like this is easily coded up
>    and works nicely even in the polymorphic case.

Neither approach goes very well with exploratory programming, where 
you don't want to put any effort into satisfying a static type checker 
just to make your code run.

This might be compensated for by a development environment that allows 
you to enable an unsafe execution mode. This means, we're back to 
optional static typing again - the compromise I have suggested from the 
very beginning. The only difference being that some people prefer static 
typing as the default while others prefer dynamic typing as the default.

> 2. The program is subject to "live updates" where existing pieces of
>    functionality get augmented, ripped out, or replaced.  In this case
>    I see nothing wrong with the idea of re-doing the typecheck.
> 
>    First of all, such live updates will be fairly rare.  

This is a very strong assumption.

>    As others have
>    already described here, in situations where real live updates are
>    done on mission-critical applications, people *already* do
>    extensive tests of the updated code (usually on a spare system)
>    before doing the same to the real thing.  Re-doing the static
>    typecheck (just like with any other program that gets developed in
>    a more traditional edit-compile-test cycle), if successful, is
>    just another welcome piece of evidence that one is not doing
>    something bad.  (Of course, as usual, it is not a substitute for
>    testing!)

However, an unsuccessful static typecheck doesn't mean that you're 
doing something bad either. People have already described in this 
thread that they regularly and successfully do live updates without 
static type checking.

> [Notice that the first scenario can come up anywhere and has fairly
> little to do with "runtime metaprogramming".]

Sure, not in the way that you describe it. The idea of the examples that 
involve READ is that the user has the opportunity to affect the program, 
i.e. define or undefine existing functions, etc. That's definitely 
metaprogramming. (just in case you are referring to my examples)

> Here is one more remark:
> 
> What is the meaning of a program fragment that contains references to
> variables/functions/whatever that are not (yet) defined (i.e., which
> occur /free/ in the fragment)?  Well, one way of thinking about them
> is to /close over/ them by explicitly binding those free occurrences in
> a lambda abstraction.  So if I look at the program fragment
> 
>       f(x)
> 
> without knowing what f and x are, then to make sense of this I assume
> that this code is effectively parameterized over f and x, and that
> concrete values/implementations will be supplied for them later.  This
> leaves me with
> 
>       \ f . \x . f(x)
> 
> which is a closed term that has (at least in ML or Haskell) a
> perfectly fine type, e.g.,
> 
>    \forall \alpha \forall \beta . (\alpha -> \beta) -> \alpha -> \beta
> 
> which captures precisely that f needs to be a function (without
> pinning down the function's domain or range at all!), and that x needs
> to be a value that is in the domain of f.

...but then you can only make the program run by providing values for 
the variables f and x. In the following example...

(defun f ()
   (if *flag*
     (do-stuff-that-i-am-interested-in-right-now)
     (do-stuff-that-i-am-not-yet-interested-in)))

...there is a clear way to make this code run without needing to provide 
a definition for the stuff that I am not yet interested in. (And please 
keep in mind that this is a "Hello World" example of the stuff that 
Lispers want to do. Of course you can assign a static type to the "not 
yet" function, but that is beside the point. Real-world examples are 
more complicated than that.)
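For what it's worth, the "Hello World" case conceded above does go 
through directly in Haskell: a typed stub lets the uninteresting branch 
compile and remain unexecuted (whether this scales to the real-world 
cases is exactly what is in dispute). A sketch, with illustrative names:

```haskell
doInteresting :: String
doInteresting = "interesting stuff"

-- A stub for the branch not yet written: it typechecks, and only
-- raises an error if it is actually run.
notYetInteresting :: String
notYetInteresting = error "not implemented yet"

run :: Bool -> String
run flag = if flag then doInteresting else notYetInteresting

main :: IO ()
main = putStrLn (run True)  -- the stub is never forced
```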

> So, in other words, it is quite easy to imagine that a static
> typechecker can deal with such incomplete program fragments
> successfully.  Of course, in existing type systems there are a lot of
> tricky issues which in effect might make this idea not work out as
> planned.  My point is that just because a piece of code refers to some
> free variables or functions does not automatically rule out static
> type checking.

Another note: Your proposed approach implicitly creates statically 
scoped definitions for the missing bits. I might later on decide that I 
actually want dynamically scoped definitions. So you would need a way to 
make this work as well. (I am not saying that you can't.)

AFAICT, Common Lisp implementations tend to assume missing variable 
definitions to be special (i.e., dynamically scoped). I also don't feel 
comfortable with that solution.

Thanks for the references to related literature.


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fb8a757$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>Neither approach goes very well with exploratory programming, where 
>you don't want to put any effort into satisfying a static type checker 
>just to make your code run.
>
>This might be compensated for by a development environment that allows 
>you to enable an unsafe execution mode. This means, we're back to 
>optional static typing again - the compromise I have suggested from the 
>very beginning. The only difference being that some people prefer static 
>typing as the default while others prefer dynamic typing as the default.

I don't agree.  There is a very big difference between these two approaches.

If the language is statically typed, and there is a compiler or
IDE option to allow the code to be executed even if there were type errors,
then I can easily be sure that the code is free of type errors, by just
compiling without that option.

But with Lisp + soft typing, if I understand the approach correctly there
is no way to be sure that the code is free of run-time type errors.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpabs0$14eg$1@f1node01.rhrz.uni-bonn.de>
Fergus Henderson wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>Neither approach goes very well with exploratory programming, where 
>>you don't want to put any effort into satisfying a static type checker 
>>just to make your code run.
>>
>>This might be compensated for by a development environment that allows 
>>you to enable an unsafe execution mode. This means, we're back to 
>>optional static typing again - the compromise I have suggested from the 
>>very beginning. The only difference being that some people prefer static 
>>typing as the default while others prefer dynamic typing as the default.
> 
> 
> I don't agree.  There is a very big difference between these two approaches.
> 
> If the language is statically typed, and there is a compiler or
> IDE option to allow the code to be executed even if there were type errors,
> then I can easily be sure that the code is free of type errors, by just
> compiling without that option.
> 
> But with Lisp + soft typing, if I understand the approach correctly there
> is no way to be sure that the code is free of run-time type errors.

I would hope that a (probably future) soft typing approach is as 
complete as other static type systems. Some features in Lisp stand in 
the way of doing complete static analysis, but it should be feasible to 
restrict the language for, say, the code written in some module to make 
it statically checkable.

Add a refactoring tool that allows you to semi-automatically change some 
non-checkable code into a form suitable for a static type system, and 
you would have a good compromise that allows you to change your mind 
late in the game whether you want static type checking or not.

Of course, these things don't exist yet, so we are still left to choose 
to go one way or the other. But what in fact bothers me most about 
static type systems is not so much the approach they take, but rather 
the fact that they force you to make your code suitable for the type 
checker whether you actually need the added safety or not. And this 
doesn't make sense at all IMHO, because you still have the option to 
completely dump the language and choose a different one. And I think 
that this is counterproductive.

Why not have a single language framework and make static type checking 
an option, even if it requires parts of your program to be developed in 
a different style? Why do I need to learn a different syntax, learn 
different libraries, learn a new programming culture, start my own 
programs and libraries from scratch, and so on, just to have one single 
feature?


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fb9b6c4$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:
>Fergus Henderson wrote:
>> But with Lisp + soft typing, if I understand the approach correctly there
>> is no way to be sure that the code is free of run-time type errors.
>
>I would hope that a (probably future) soft typing approach is as 
>complete as other static type systems.

I find that very unlikely -- it goes against my understanding of what the
"soft" in "soft typing" means.

>Some features in Lisp stand in 
>the way of doing complete static analysis, but it should be feasible to 
>restrict the language for, say, the code written in some module to make 
>it statically checkable.

In that case, you'd have a statically typed subset of Lisp.

I think that such a beast would most likely be pretty awful
compared to languages which were designed with static type
checking in mind.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpcphr$s96$2@newsreader3.netcologne.de>
Fergus Henderson wrote:

>>Some features in Lisp stand in 
>>the way of doing complete static analysis, but it should be feasible to 
>>restrict the language for, say, the code written in some module to make 
>>it statically checkable.
> 
> 
> In that case, you'd have a statically typed subset of Lisp.

Right. But you wouldn't need to completely switch the language.

> I think that such a beast would most likely be pretty awful
> compared to languages which were designed with static type
> checking in mind.

Maybe.



Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Feuer
Subject: Re: More static type fun.
Date: 
Message-ID: <3FB59FBA.61E2B0A3@his.com>
Pascal Costanza wrote:

> In the first step, evaluation of an input by the user hasn't provided a
> proper definition for F, therefore DO-IT has produced an error at
> runtime. In the third step, the user of the program has typed in a
> definition for F, and so the program can happily proceed without
> producing an error.

Blah, blah, blah.

Have you ever used an interactive ML or Haskell system (SML/NJ, GHCi,
Hugs, MOSML)?  If you try one, you may note that they are capable of
dealing with statically typed objects in a dynamic way.  I think that
in the unlikely event that you care, you could learn something from
these.
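
One concrete example of what such systems offer is GHC's Data.Dynamic,
which wraps statically typed values for dynamic, checked downcasting.
A sketch (the helper names here are illustrative, not from any library):

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- A heterogeneous list of statically typed values:
bag :: [Dynamic]
bag = [toDyn "hello", toDyn (42 :: Int), toDyn (reverse :: String -> String)]

-- fromDynamic is a checked downcast: Just on a type match, Nothing otherwise.
firstInt :: [Dynamic] -> Maybe Int
firstInt ds = case [n | Just n <- map fromDynamic ds] of
  (n : _) -> Just n
  []      -> Nothing

main :: IO ()
main = print (firstInt bag)  -- prints Just 42
```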

David
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp52ia$keo$1@newsreader3.netcologne.de>
Feuer wrote:

> Pascal Costanza wrote:
> 
>>In the first step, evaluation of an input by the user hasn't provided a
>>proper definition for F, therefore DO-IT has produced an error at
>>runtime. In the third step, the user of the program has typed in a
>>definition for F, and so the program can happily proceed without
>>producing an error.
> 
> Blah, blah, blah.

Ah, that's indeed very insightful.

> Have you ever used an interactive ML or Haskell system (SML/NJ, GHCi,
> Hugs, MOSML)?  If you try one, you may note that they are capable of
> dealing with statically typed objects in a dynamic way.  I think that
> in the unlikely event that you care, you could learn something from
> these.

Another one who misses the point.



Pascal
From: Lauri Alanko
Subject: Re: More static type fun.
Date: 
Message-ID: <bp57sn$1mq$1@la.iki.fi>
Pascal Costanza <········@web.de> virkkoi:
> Another one who misses the point.

Your point seems to boil down to the fact that some erroneous situations
arise only as the result of certain kinds of input, and therefore cannot
be detected prior to execution. No one disputes this. Even statically
type correct programs can have runtime errors, they just can't have
runtime _type_ errors. What exactly counts as a type error depends on
the type system, of course.

As for your "counterexample":

: (defun do-it ()
:   (progn
:     (eval (read))
:     (f)))

Getting these kinds of programs to work in a statically typed language
is a problem that I've been very interested in for a long time now. It is
true that there are so far no very satisfactory solutions. However, I
see no theoretical reason to believe that such a solution couldn't be
found. I'll try to sketch something that might work.

Your particular example is something that I wouldn't want to pass a type
checker, though: personally, I think that if you are using a function,
you also ought to declare it in some form somewhere. This helps catch
typos.

But that's just a personal opinion. Let us allow for implicitly defined
functions: the presence of a call to "f" means that the compiler adds a
variable "f" to the outer environment and gives it some reasonable
default value that raises an error when used. From f's call site the
type check can infer that f is a nullary function, but we don't know
anything about its return type. Let's say that its
type is inferred as "f : ref (() -> exists x. x)", where "()" is the unit
type. We could get a program something like this:

f : ref (() -> exists x . x)
f = ref (\() -> error "variable f undefined")

do-it : () -> exists x . x
do-it = ref (\() -> eval current_env (read_expr ()); !f ())

The types of the system functions are:
eval : env -> expr -> exists x . (R(x),x)
read_expr : () -> expr
current_env : env

So eval takes a run-time representation of an environment, and a
representation of an (untyped) expression, which is parsed from the
input by read_expr. It type-checks and evaluates the expression, and
returns the value along with a representation of the inferred type.
Essentially, a "dynamic value". We ignore eval's return value here,
though. Here current_env is a bit of magic syntax that returns the
runtime representation of the top-level lexical environment where it
occurs.

The trick is that in addition to a mapping from identifiers to values,
the environment representation also includes its signature, ie. a
mapping from the identifiers to their types. Given this run-time type
information, eval can type-check the input expression, and it can throw
an error when appropriate. So if the evaled expression is:

f := (\a b -> a + b)

Then eval's type checker will raise an error in an environment where f's
type is "() -> exists x . x". But if the expression is:

f := (\() -> 5)

Then this will type-check ok, and look up the value of f, which is a
reference, and then update that to a new function.

There's a lot more to it, of course, but I just want to show that eval
is by no means impossible to handle in a statically typed language. It
just hasn't been done... yet.


Lauri Alanko
··@iki.fi
From: Daniel C. Wang
Subject: Re: More static type fun.
Date: 
Message-ID: <uekw9sgfx.fsf@hotmail.com>
Lauri Alanko <··@iki.fi> writes:
{stuff deleted}
> As for your "counterexample":
> 
> : (defun do-it ()
> :   (progn
> :     (eval (read))
> :     (f)))
> 
> Getting these kinds of programs to work in a statically typed language
> is a problem that I've been very interested in for a long time now. It is
> true that there are so far no very satisfactory solutions. However, I
> see no theoretical reason to believe that such a solution couldn't be
> found. I'll try to sketch something that might work.
> 

Below is an interface in SML for a type-safe eval for the simply-typed
lambda calculus.

signature INTERP =
  sig
    type intT
    type ('a,'b) arrowT

    type 'a exp
    type 'a value

    val i    : int -> intT exp
    val abs  : ('a exp -> 'b exp) -> (('a,'b) arrowT) exp
    val app  : ((('a,'b) arrowT) exp * 'a exp) -> 'b exp
    val if0  : (intT exp * 'a exp * 'a exp) -> 'a exp
    val fix  : ('a exp -> 'a exp) -> 'a exp
    val prim : (string * ('a value -> 'b value)) -> ('a,'b) arrowT exp

    val vint  : intT value -> int
    val vabs  : (('a,'b) arrowT) value -> ('a value -> 'b value)
    val vprim : (('a,'b) arrowT) value -> string option

    val eval : 'a exp -> 'a value
  end

One would have to change the definition of "read" above to be

 val read : string -> 'a exp option

Such an implementation of read will have to statically type-check its
input, and fail when the input is ill-typed.

I will have to leave that as an exercise for the reader to flesh out.
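
For comparison, a Haskell analogue of such a signature can be written
with a GADT, where ill-typed object terms cannot even be constructed,
so eval needs no runtime tags. A sketch covering part of the interface
(Lit plays the role of both i and prim here):

```haskell
{-# LANGUAGE GADTs #-}

-- Typed object language: the index of Exp tracks the object-level type.
data Exp a where
  Lit :: a -> Exp a                         -- injected host values
  Abs :: (Exp a -> Exp b) -> Exp (a -> b)   -- higher-order abstract syntax
  App :: Exp (a -> b) -> Exp a -> Exp b
  If0 :: Exp Int -> Exp a -> Exp a -> Exp a

-- A tagless evaluator: the host type checker guarantees each case
-- is well-typed, so no runtime type dispatch is needed.
eval :: Exp a -> a
eval (Lit v)     = v
eval (Abs f)     = \x -> eval (f (Lit x))
eval (App f a)   = eval f (eval a)
eval (If0 c t e) = if eval c == 0 then eval t else eval e

main :: IO ()
main = print (eval (App (Abs (\x -> If0 x (Lit 1) (Lit 0))) (Lit 0)))
```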
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp5eeg$cd2$1@newsreader3.netcologne.de>
Lauri Alanko wrote:

> Pascal Costanza <········@web.de> virkkoi:
> 
>>Another one who misses the point.
> 
> 
> Your point seems to boil down to the fact that some erroneous situations
> arise only as the result of certain kinds of input, and therefore cannot
> be detected prior to execution. No one disputes this. Even statically
> type correct programs can have runtime errors, they just can't have
> runtime _type_ errors. What exactly counts as a type error depends on
> the type system, of course.

More or less. A static type system that would accept anything I write 
wouldn't be accepted as a static type system in the static typing 
community. The essential feature of static type systems is that some 
programs are flagged as erroneous at compile-time while others are not.

If everything is subject to change at runtime, this can't be statically 
type checked anymore.

> As for your "counterexample":
> 
> : (defun do-it ()
> :   (progn
> :     (eval (read))
> :     (f)))
> 
> Getting these kinds of programs to work in a statically typed language
> is a problem that I've been very interested in for a long time now. It is
> true that there are so far no very satisfactory solutions. However, I
> see no theoretical reason to believe that such a solution couldn't be
> found. I'll try to sketch something that might work.
> 
> Your particular example is something that I wouldn't want to pass a type
> checker, though: personally, I think that if you are using a function,
> you also ought to declare it in some form somewhere. This helps catch
> typos.
> 
> But that's just a personal opinion. Let us allow for implicitly defined
> functions: the presence of a call to "f" means that the compiler adds a
> variable "f" to the outer environment and gives it some reasonable
> default value that raises an error when used.

I find it useful that the Common Lisp implementations I know flag the 
code above as problematic. I also find it useful that I can still run 
the program and see what happens.

Your solution throws the baby out with the bathwater because it 
doesn't warn me about the code anymore.

> From f's call site the
> type check can infer that f is a nullary function, but we don't know
> anything about its return type. Let's say that its
> type is inferred as "f : ref (() -> exists x. x)", where "()" is the unit
> type. We could get a program something like this:
> 
> f : ref (() -> exists x . x)
> f = ref (\() -> error "variable f undefined")
> 
> do-it : () -> exists x . x
> do-it = ref (\() -> eval current_env (read_expr ()); !f ())
[...]

> The trick is that in addition to a mapping from identifiers to values,
> the environment representation also includes its signature, ie. a
> mapping from the identifiers to their types. Given this run-time type
> information, eval can type-check the input expression, and it can throw
> an error when appropriate. So if the evaled expression is:
> 
> f := (\a b -> a + b)
> 
> Then eval's type checker will raise an error in an environment where f's
> type is "() -> exists x . x". But if the expression is:
> 
> f := (\() -> 5)
> 
> Then this will type-check ok, and look up the value of f, which is a
> reference, and then update that to a new function.

What do you mean by "raise an error"? Is this a correctable error or not?

CL-USER 5 > (do-it)
(defun f (x) x)
Error: F got 0 args, wanted at least 1.
   1 (abort) Return to level 0.
   2 Return to top loop level 0.

Type :b for backtrace, :c <option number> to proceed,  or :? for other 
options

CL-USER 6 : 1 > :?

:ed      Edit the function associated with the frame
:v       Print the current frame
:bq      Print a quick backtrace of interesting call frames
:b       Print a backtrace down from the current frame
:error   Print the error and how to continue
:n       Go down the stack
:p       Go up the stack
:top     Abort to top level
:a       Abort one level
:c       Continue from error
:ret     Return from frame
:res     Restart frame
:sres    Restart frame, stepping the function
:trap    Cause the debugger to be re-entered on exit from this frame
:<       Go to the top of the stack
:>       Go to the bottom of the stack
:cc      Get the current condition object
:all     Set the debugger options to show all call frames
:l       Print and return the value of a given variable in the current 
frame.
:bb      Print a full backtrace suitable for a bug report
:lambda  Show the lambda expression for an anonymous interepreted frame
:bug-form <subject> &key <filename>
          Print out a bug report form, optionally to a file.
:get <variable> <command identifier>
          Get a command from the history list and put it in a variable.
:help    Produce this list.
:his &optional <n1> <n2>
          List the command history, optionally the last n1 or range n1 
to n2.
:redo &optional <command identifier>
          Redo a previous command, identified by its number or a substring.
:use <new form> <old form> &optional <command identifier>
          Redo command after replacing old form with new form.

CL-USER 6 : 1 > :ret

Error: The variable X is unbound.
   1 (continue) Try evaluating X again.
   2 Return the value of :X instead.
   3 Specify a value to use this time instead of evaluating X.
   4 Specify a value to set X to.
   5 (abort) Return to level 0.
   6 Return to top loop level 0.

Type :b for backtrace, :c <option number> to proceed,  or :? for other 
options

CL-USER 7 : 1 > :c 3

Enter a form to be evaluated: 42
42


This is relatively close to what you want, right?


Pascal


--
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Feuer
Subject: Re: More static type fun.
Date: 
Message-ID: <3FB65614.A7195650@his.com>
Pascal Costanza wrote:

> If everything is subject to change at runtime, this can't be statically
> type checked anymore.

I don't think you've convinced everyone that you gain very much by
being able to change anything at runtime.  In functional programming,
the things you might want to change should generally be passed in as
arguments (or implicit arguments) to a function or module.

> I find it useful that the Common Lisp implementations I know flag the
> code above as problematic. I also find it useful that I can still run
> the program and see what happens.

Why do you find that sort of program useful?  Personally, I very much
prefer to edit code in my editor rather than try to work it out without
seeing the context when the program asks for it.

> Your solution means to throw the baby out with the bathwater because it
> doesn't warn me about the code anymore.

Where's the baby?

David
From: james anderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3FB66634.BB2A4C5@setf.de>
Feuer wrote:
> 
> Pascal Costanza wrote:
> 
> > If everything is subject to change at runtime, this can't be statically
> > type checked anymore.
> 
> I don't think you've convinced everyone that you gain very much by
> being able to change anything at runtime.  In functional programming,
> the things you might want to change should generally be passed in as
> arguments (or implicit arguments) to a function or module.
> 
> > I find it useful that the Common Lisp implementations I know flag the
> > code above as problematic. I also find it useful that I can still run
> > the program and see what happens.
> 
> Why do you find that sort of program useful?  Personally, I very much
> prefer to edit code in my editor rather than try to work it out without
> seeing the context when the program asks for it.
> 
> > Your solution means to throw the baby out with the bathwater because it
> > doesn't warn me about the code anymore.
> 
> Where's the baby?
>

the second pseudo-example to which i alluded was an implementation mechanism
for optimizing queries compiled against arbitrary documents; where the
documents, and thus the "types" of the document components, are not known when
the code is edited.

...
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp5qk5$9kv$1@newsreader3.netcologne.de>
Feuer wrote:

> 
> Pascal Costanza wrote:
> 
> 
>>If everything is subject to change at runtime, this can't be statically
>>type checked anymore.
> 
> 
> I don't think you've convinced everyone that you gain very much by
> being able to change anything at runtime.  In functional programming,
> the things you might want to change should generally be passed in as
> arguments (or implicit arguments) to a function or module.

I don't try to convince you to use my programming style, and you don't 
try to convince me to use yours. This includes not telling me what I 
"should" do. Deal?

>>I find it useful that the Common Lisp implementations I know flag the
>>code above as problematic. I also find it useful that I can still run
>>the program and see what happens.
> 
> Why do you find that sort of program useful?

This discussion has already seen numerous examples.


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Feuer
Subject: Re: More static type fun.
Date: 
Message-ID: <3FB71FE6.BEED9FEE@his.com>
Pascal Costanza wrote:

> > I don't think you've convinced everyone that you gain very much by
> > being able to change anything at runtime.  In functional programming,
> > the things you might want to change should generally be passed in as
> > arguments (or implicit arguments) to a function or module.
> 
> I don't try to convince you to use my programming style, and you don't
> try to convince me to use yours. This includes not telling me what I
> "should" do. Deal?

I'm not telling you what you should do in Lisp.  I'm telling you
how you might want to think about programming when using a language
more specifically designed for functional programming.  The Lisp
idiom does not translate very well to the currently popular
statically typed languages.  The same goal can, however, be
accomplished in a different and equally comprehensible way, at least
in most practical cases.

David
From: Lauri Alanko
Subject: Re: More static type fun.
Date: 
Message-ID: <bp5im8$4ju$1@la.iki.fi>
Pascal Costanza <········@web.de> virkkoi:
> More or less. A static type system that would accept anything I write 
> wouldn't be accepted as a static type system in the static typing 
> community. The essential feature of static type systems is that some 
> programs are flagged as erroneous at compile-time while others are not.
> 
> If everything is subject to change at runtime, this can't be statically 
> type checked anymore.

It can be statically type checked that things work correctly before
anything gets changed at runtime. It can be checked at runtime that no
changes are accepted that will break already existing things. Since a
type system can only give a conservative approximation of correctness,
this inevitably means that some things are rejected that might in fact
work properly. This is a tradeoff that proponents of static type
checking will gladly make, and dynamic typers won't. It's just a matter
of priorities.
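
The classic illustration of this conservatism (my sketch, not a quote from
anyone's code) is an `if` whose branches have different types: it is
rejected even when the offending branch is provably dead. The standard
statically typed workaround is to make the union of types explicit:

```haskell
-- Rejected by the type checker, although it would run fine dynamically:
--
--   untypeable = if True then (1 :: Int) else "one"
--
-- The conservative approximation: both branches must share one type.
-- The usual workaround is an explicit sum type:
data IntOrString = I Int | S String deriving (Show, Eq)

typeable :: Bool -> IntOrString
typeable b = if b then I 1 else S "one"

main :: IO ()
main = print (typeable True)  -- prints: I 1
```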

> I find it useful that the Common Lisp implementations I know flag the 
> code above as problematic. I also find it useful that I can still run 
> the program and see what happens.
> 
> Your solution means to throw the baby out with the bathwater because it 
> doesn't warn me about the code anymore.

What feature in it exactly is flagged as problematic? That a variable is
used without it being defined anywhere? This is quite orthogonal to any
typing issues, so it can be handled in a statically typed system as
well.

> > an error when appropriate. So if the evaled expression is:
> > 
> > f := (\a b -> a + b)
> > 
> > Then eval's type checker will raise an error in an environment where f's

> What do you mean by "raise an error"? Is this a correctable error or not?

Again, an orthogonal issue. Errors can be dealt with in diverse ways,
static typing or not. Your (admittedly nifty) debugging example doesn't
really conflict with static typing very much.

> CL-USER 7 : 1 > :c 3
> 
> Enter a form to be evaluated: 42
> 42

In a statically typed system, here the continuation you are invoking
would be typed, and the expression entered would have to be of the
proper type. Again, this is quite possible provided that we keep a
run-time representation of an environment's signature with us all the
time.

The only reason this kind of introspective debugging is so much more
advanced in dynamic languages like lisp and smalltalk is that it is just
tremendously _easier_ in them. But there is no fundamental reason why
statically typed languages couldn't catch up. You just wait.
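
A sketch of what "catching up" might look like (my illustration, not a
description of any existing debugger): GHC's Data.Dynamic keeps exactly
the kind of run-time type representation described above, so a
replacement value offered to a typed continuation can be checked before
it is accepted:

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- A hypothetical "resume from error" that only accepts a value of the
-- type the interrupted frame expects; fromDynamic performs the run-time
-- check against the stored type representation.
resumeWith :: Dynamic -> Maybe Int
resumeWith = fromDynamic

main :: IO ()
main = do
  print (resumeWith (toDyn (42 :: Int)))  -- accepted: Just 42
  print (resumeWith (toDyn "oops"))       -- rejected: Nothing
```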


Lauri Alanko
··@iki.fi
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <brrdzhp4.fsf@comcast.net>
Lauri Alanko <··@iki.fi> writes:

> Since a type system can only give a conservative approximation of
> correctness, this inevitably means that some things are rejected
> that might in fact work properly.  This is a tradeoff that proponents
> of static type checking will gladly make, and dynamic typers
> won't.  It's just a matter of priorities.

Exactly.

Although there seem to be static typists that assert

  a)`not being rejected' is a prior condition on `working properly',
     so the type system cannot be `conservative'.

  b) there are no interesting things that are rejected (or that the
     rejected things are so contrived as to be mere curiosities).

  c) since type errors are *defined* as statically checkable, no
     runtime error is a `type' error.  (This is often inflated to `no
     runtime exception is an error')
From: Ed Avis
Subject: Re: More static type fun.
Date: 
Message-ID: <l1he14tr28.fsf@budvar.future-i.net>
Joe Marshall <·············@comcast.net> writes:

>Although there seem to be static typists that assert
>
>  a)`not being rejected' is a prior condition on `working properly',

This is absolutely true, *in a statically typed language*.  It would
not be true for Lisp and some hypothetical type-checker that tried to
deal with Lisp as if it were ML.

>     so the type system cannot be `conservative'.

I would not want to use any type checker that was not conservative.
The type checker has no business rejecting programs that have a
defined semantics - its job is to complain about code that is not a
legal program at all.  Obviously in a language such as Lisp that
defers type checking until run time, the set of completely-wrong
programs is very small because any construct, even if it will give a
type exception at run time, has a defined meaning.  In a language like
Tcl even syntax errors cannot all be caught at compile time.

(Of course, I might be happy to use some lint-like program that gave
warning of likely problems ahead.)

>  b) there are no interesting things that are rejected (or that the
>     rejected things are so contrived as to be mere curiosities).

Again absolutely true *in a statically typed language*.  And true for
any type checker that is not broken.

The rejected things are bits of text that are not even programs in the
language concerned, they have no meaning, so of course they must be
rejected.  Asking otherwise is like asking for '))(5(.))) (' to be
accepted as a complete program by the Lisp syntax checker.

-- 
Ed Avis <··@membled.com>
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp5u46$kpd$1@newsreader3.netcologne.de>
Lauri Alanko wrote:

> The only reason this kind of introspective debugging is so much more
> advanced in dynamic languages like lisp and smalltalk is that it is just
> tremendously _easier_ in them. But there is no fundamental reason why
> statically typed languages couldn't catch up. You just wait.

OK, maybe I am missing something fundamentally important here. Please 
keep me informed when such a type system becomes available. Make sure 
that such a type system allows me to ignore any error that it flags, and 
that the language that it is based on gives me at least the same level 
of flexibility as Common Lisp.

Until then, I'll stick to what works best for me.


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-1511031340110001@192.168.1.51>
In article <············@newsreader3.netcologne.de>, Pascal Costanza
<········@web.de> wrote:

> Lauri Alanko wrote:
> 
> > The only reason this kind of introspective debugging is so much more
> > advanced in dynamic languages like lisp and smalltalk is that it is just
> > tremendously _easier_ in them. But there is no fundamental reason why
> > statically typed languages couldn't catch up. You just wait.
> 
> OK, maybe I am missing something fundamentally important here.

I think you're missing two things:

1.  The possibility of "sealing", that is, of segregating the language
into those facilities that support what you call "dynamic metaprogramming"
from those that don't, and allowing the programmer to ask the compiler to
perform compile-time type checking and optimization on the assumption that
the DMP facilities will no longer be used from here on out.  Note that
this "promise" can also be made implicitly by saying, in effect, "Compile
this source code on the assumption that no more changes will be made to
it."  Note also that this is a programming environment issue, not a
language issue.

2.  The possibility that someone will come up with a cool new idea.

That said, I have tried both ML and Haskell and find their type systems to
be overly burdensome for my tastes.  I'd much rather see a Lisp
environment augmented with a static type checking tool that can use or not
as I see fit.

E.
From: Feuer
Subject: Re: More static type fun.
Date: 
Message-ID: <3FB72342.49AFAE26@his.com>
Erann Gat wrote:

> That said, I have tried both ML and Haskell and find their type systems to
> be overly burdensome for my tastes.  I'd much rather see a Lisp
> environment augmented with a static type checking tool that can use or not
> as I see fit.

Lisp is very hard to type.  It's not going to get easier.  Lisp
just wasn't designed for type inference or type checking, and
there's no way around that.  A type checking tool for Lisp will
generally give incomplete, imprecise information, and find it very
difficult to explain to the programmer what it thinks.

David
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-1511032352060001@192.168.1.51>
In article <·················@his.com>, Feuer <·····@his.com> wrote:

> Erann Gat wrote:
> 
> > That said, I have tried both ML and Haskell and find their type systems to
> > be overly burdensome for my tastes.  I'd much rather see a Lisp
> > environment augmented with a static type checking tool that can use or not
> > as I see fit.
> 
> Lisp is very hard to type.  It's not going to get easier.  Lisp
> just wasn't designed for type inference or type checking, and
> there's no way around that.

Lisp may be hard to type, but it's not true that there's no way around
it.  It's very easy to embed new languages in Lisp.  One could easily
embed a Lisp-like language that was amenable to static type checking.  I
believe Drew McDermott's Nisp is an attempt to do exactly that.

E.
From: Henrik Motakef
Subject: Re: More static type fun.
Date: 
Message-ID: <86u154gxw5.fsf@pokey.internal.henrik-motakef.de>
···@jpl.nasa.gov (Erann Gat) writes:

>>> That said, I have tried both ML and Haskell and find their type systems to
>>> be overly burdensome for my tastes.  I'd much rather see a Lisp
>>> environment augmented with a static type checking tool that can use or not
>>> as I see fit.
>> 
>> Lisp is very hard to type.  It's not going to get easier.  Lisp
>> just wasn't designed for type inference or type checking, and
>> there's no way around that.
> 
> Lisp may be hard to type, but it's not true that there's no way around
> it.  It's very easy to embed new languages in Lisp.  One could easily
> embed a Lisp-like language that was amenable to static type checking.  I
> believe Drew McDermott's Nisp is an attempt to do exactly that.

But then you work in a statically-typed Lisp-like language, not Common
Lisp (for all practical purposes).

I can't see how a static checker for the native Common Lisp type
system would work (I won't complain if anybody writes one anyway,
however), simply because the CL type system itself is dynamic.

For one thing, statically checking for the absence of TYPE-ERROR
conditions fails because you can just signal them willy-nilly, even
without any relation to any value's type (you probably shouldn't, but
that's not the point). For example, the "(eval (read))" function
mentioned in this thread will raise a type error if the user enters
"(error 'type-error :datum "foo" :expected-type 'method-combination)".

Another issue is that it is not only possible to redefine types in a
running program, but the set of valid values of a type can depend on
the dynamic state of the system.

An example:

* (defun valid-foo-p (foo)
    (yes-or-no-p "Is ~S a valid value of type foo?" foo))
VALID-FOO-P

* (deftype foo () '(satisfies valid-foo-p))
FOO

* (defun do-something-with-a-foo-or-an-integer (x)
    (etypecase x
      (foo :foo)
      (integer :int)))
DO-SOMETHING-WITH-A-FOO-OR-AN-INTEGER

* (do-something-with-a-foo-or-an-integer "foo")
Is "foo" a valid value of type foo? (yes/no) yes
:foo

* (do-something-with-a-foo-or-an-integer "foo")
Is "foo" a valid value of type foo? (yes/no) no
=> type-error

Of course, this example is pretty stupid, but there are valid uses of
SATISFIES, some of them similarly impossible to check statically (and
a type checker should also work for stupid code, as long as it's
conforming). For example, the predicate could depend on some special
variable, the system time, or whatever.

So a static type checker for the CL-native type system would either
have to admit that it just doesn't know whether a call to
DO-SOMETHING-WITH-A-FOO-OR-AN-INTEGER will signal a type-error or not,
or somehow magically consider all possible code paths together with
the general state and behaviour of the outside world.

Or it could invent a new type system that isn't defined in terms of
TYPE-ERROR, DEFTYPE, CHECK-TYPE, TYPECASE and friends, but that
wouldn't be exactly trivial either, and having your type checker use a
different idea of types than your "real" environment seems to be a
great source of confusion.

Both approaches could probably yield useful tools, but they certainly
would be very different from the type checking in languages like
Haskell or ML.
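
For comparison, a rough Haskell analogue of a SATISFIES type (my sketch,
not something from the posts above): the nearest statically typed idiom
is an abstract type whose smart constructor runs the predicate at run
time. The checker then guarantees that every Foo value passed the check,
but it says nothing about what the predicate itself does:

```haskell
-- Sketch: an abstract Foo whose only entry point runs the predicate,
-- loosely analogous to (deftype foo () '(satisfies valid-foo-p)).
newtype Foo = Foo String deriving (Show, Eq)

mkFoo :: (String -> Bool) -> String -> Maybe Foo
mkFoo validFooP s
  | validFooP s = Just (Foo s)
  | otherwise   = Nothing

main :: IO ()
main = do
  print (mkFoo (== "foo") "foo")  -- Just (Foo "foo")
  print (mkFoo (== "foo") "bar")  -- Nothing
```

Of course, a predicate that consults the user or the system clock, as in
the example above, just moves the unpredictability into mkFoo; the type
system only tracks that the check happened.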
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <he11wt9f.fsf@ccs.neu.edu>
Feuer <·····@his.com> writes:

> Lisp is very hard to type.  

It's easier if you re-map the parenthesis to unshifted keys...

Oh wait, that's not what you meant.  Never mind.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp6liv$bhn$1@newsreader3.netcologne.de>
Erann Gat wrote:

> In article <············@newsreader3.netcologne.de>, Pascal Costanza
> <········@web.de> wrote:
> 
> 
>>Lauri Alanko wrote:
>>
>>
>>>The only reason this kind of introspective debugging is so much more
>>>advanced in dynamic languages like lisp and smalltalk is that it is just
>>>tremendously _easier_ in them. But there is no fundamental reason why
>>>statically typed languages couldn't catch up. You just wait.
>>
>>OK, maybe I am missing something fundamentally important here.
> 
> 
> I think you're missing two things:
> 
> 1.  The possibility of "sealing", that is, of segregating the language
> into those facilities that support what you call "dynamic metaprogramming"
> from those that don't, and allowing the programmer to ask the compiler to
> perform compile-time type checking and optimization on the assumption that
> the DMP facilities will no longer be used from here on out.  Note that
> this "promise" can also be made implicitly by saying, in effect, "Compile
> this source code on the assumption that no more changes will be made to
> it."  Note also that this is a programming environment issue, not a
> language issue.

But then we're back to the need to distinguish between expected and 
unexpected changes, right? (or better, unexpected changes that you can 
and cannot carry out)

> 2.  The possibility that someone will come up with a cool new idea.

That's indeed a good argument I haven't thought of.

I drop my claim wrt impossible reconciliation of static type systems and 
dynamic metaprogramming.

However, I still have strong doubts wrt the static checkability of 
dynamic program features, so I don't hold my breath.

> That said, I have tried both ML and Haskell and find their type systems to
> be overly burdensome for my tastes.  I'd much rather see a Lisp
> environment augmented with a static type checking tool that can use or not
> as I see fit.

Yep.


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-1511032348310001@192.168.1.51>
In article <············@newsreader3.netcologne.de>, Pascal Costanza
<········@web.de> wrote:

> But then we're back to the need to distinguish between expected and 
> unexpected changes, right? (or better, unexpected changes that you can 
> and cannot carry out)

I'm not sure I understand what you mean, but you can never make it
"impossible" to introduce a change.  The worst you can do is make it so
that introducing certain changes requires you to recompile the entire
system from scratch.  (The problem with most static typing systems today
is that they seem to run in this mode by default.)  And that can only
happen if you lied to the system when you promised to refrain from making
any more such changes in exchange for more compile-time safety assurances
and optimization.

E.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp7sf6$d7o$1@newsreader3.netcologne.de>
Erann Gat wrote:

> In article <············@newsreader3.netcologne.de>, Pascal Costanza
> <········@web.de> wrote:
> 
> 
>>But then we're back to the need to distinguish between expected and 
>>unexpected changes, right? (or better, unexpected changes that you can 
>>and cannot carry out)
> 
> 
> I'm not sure I understand what you mean, but you can never make it
> "impossible" to introduce a change. 

I mean, at runtime.

> The worst you can do is make it so
> that introducing certain changes requires you to recompile the entire
> system from scratch.  (The problem with most static typing systems today
> is that they seem to run in this mode by default.)  And that can only
> happen if you lied to the system when you promised to refrain from making
> any more such changes in exchange for more compile-time safety assurances
> and optimization.

...or you had the wrong idea which parts of the system are stable and 
which aren't.


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Lauri Alanko
Subject: Re: More static type fun.
Date: 
Message-ID: <bp60u0$7uj$1@la.iki.fi>
Pascal Costanza <········@web.de> virkkoi:
> OK, maybe I am missing something fundamentally important here. Please 
> keep me informed when such a type system becomes available. Make sure 
> that such a type system allows me to ignore any error that it flags,

Most typed language implementations provide unsafe functions to coerce
something into another type and thus bypass the type system, e.g.
Obj.magic in OCaml, or unsafeCoerce in Haskell implementations.

> and that the language that it is based on gives me at least the same
> level of flexibility as Common Lisp.

Common Lisp's flexibility gives you the power to write programs that
fail in many ways when they are run. This is, to my mind, not a
desirable goal. Much more desirable would be something that allowed you
to do all the _useful_ things that CL provides, yet still provided some
statically determined guarantees about the behavior of the program.

Of course there seems to be quite a wealth of opinions on what exactly
constitutes the useful portion of CL's functionality. Personally, I find
eval extremely valuable, whereas I don't much care that you can refer to
a variable that hasn't been defined, except temporarily as part of an
interactive development cycle.

There is one point that probably hasn't been brought up yet. The fact
is, most proponents of static typing probably wouldn't _want_ a type
system that would accept all programs that a human could show correct
(again, in a very limited sense of type correctness). The reason being,
if a program's correctness is very hard to prove, then it is also very
hard to maintain without breaking it.

A type system prevents you from doing wacky things, yes. But I consider
this a good thing, _provided_ that you can accomplish the same thing in
a non-wacky way. With modern type systems, this is _mostly_ possible.
Not completely, no, and there is definitely room for improvement. But
once a type system allows you to accomplish everything you want -- maybe
not exactly the _way_ you want it, but still in a simple enough way --
then I'd say it is good enough. 


Lauri Alanko
··@iki.fi
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <3ccpz7x4.fsf@comcast.net>
Lauri Alanko <··@iki.fi> writes:

> Pascal Costanza <········@web.de> virkkoi:
>> OK, maybe I am missing something fundamentally important here. Please 
>> keep me informed when such a type system becomes available. Make sure 
>> that such a type system allows me to ignore any error that it flags,
>
> Most typed language implementations provide unsafe functions to coerce
> something into another type and thus bypass the typesystem. Eg.
> Obj.magic in ocaml, or unsafeCoerce in Haskell implementations.

Obviously, the types `get in the way' at times.  At least often enough
that someone decided to implement a workaround rather than re-work his
code.

Languages like Lisp often don't provide a mechanism by which one
could, for example, re-tag a cons cell as a string.  (Although they
may provide fixnum<->address conversion as part of the GC or FFI, it
is non-standard).

> Common Lisp's flexibility gives you the power to write programs that
> fail in many ways when they are run.  This is, to my mind, not a
> desirable goal. 

Running programs is not a desirable goal?

Sure Common Lisp allows you to write programs that throw runtime
errors.  On the other hand, so do nearly every statically typed
language as well.

It's nice to be able to guarantee that certain errors won't occur, but
I can live without the guarantee, especially during development where
I am well aware that certain errors *will* occur.  I still want to run
the program.

> There is one point that probably hasn't been brought up yet. The fact
> is, most proponents of static typing probably wouldn't _want_ a type
> system that would accept all programs that a human could show correct
> (again, in a very limited sense of type correctness). The reason being,
> if a program's correctness is very hard to prove, then it is also very
> hard to maintain without breaking it.

Freedom!  Horrible Freedom!


-- 
~jrm
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <peji81-l5a.ln1@ID-7776.user.dfncis.de>
Joe Marshall <·············@comcast.net> wrote:
> Lauri Alanko <··@iki.fi> writes:
>> Most typed language implementations provide unsafe functions to coerce
>> something into another type and thus bypass the typesystem. Eg.
>> Obj.magic in ocaml, or unsafeCoerce in Haskell implementations.

> Obviously, the types `get in the way' at times.  At least often enough
> that someone decided to implement a workaround rather than re-work his
> code.

I would be really interested in examples where Obj.magic and
unsafeCoerce are actually used. Personally, I don't know any. 
The only reason I could think of is to interface with foreign
programs that require untyped data exchange.

I always understood them in the spirit of "if you really have do to
completely strange stuff for some reason, here there are, so you
can."

In pure Haskell or OCaml, using them doesn't really make sense --
either you convert between the same types, and then you don't have
to do the conversion, or you convert between different types, which
means your program will crash horribly as soon as the bit pattern
is interpreted in the wrong way.

- Dirk
From: Darius
Subject: Re: More static type fun.
Date: 
Message-ID: <20031115194146.00000bb1.ddarius@hotpop.com>
On Sat, 15 Nov 2003 22:46:33 +0100
Dirk Thierbach <··········@gmx.de> wrote:

> In pure Haskell or OCaml, using them doesn't really make sense --
> either you convert between the same types, and then you don't have
> to do the conversion, or you convert between different types, which
> means your program will crash horribly as soon as the bitpattern
> is now interpreted in a wrong way.
> 
> - Dirk

Not necessarily.

Why go to all that trouble pattern matching a list and building
something of type Maybe when you can just coerce them!  (I finally
actually found GHC's unsafeCoerce#)

Prelude> :m Maybe
Prelude Maybe> listToMaybe [] :: Maybe ()
Nothing
Prelude Maybe> listToMaybe [1]
Just 1
Prelude Maybe> listToMaybe [1,2]
Just 1
Prelude Maybe> :m GHC.Base
Prelude GHC.Base> :set -fglasgow-exts
Prelude GHC.Base> let fastListToMaybe :: [a] -> Maybe a; fastListToMaybe = unsafeCoerce#
Prelude GHC.Base> fastListToMaybe [] :: Maybe ()
Nothing
Prelude GHC.Base> fastListToMaybe [1]
Just 1
Prelude GHC.Base> fastListToMaybe [1,2]
Just 1

The x86 ASM and C programmer inside of me revels.
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <qdsj81-sn.ln1@ID-7776.user.dfncis.de>
Darius <·······@hotpop.com> wrote:

> Why go to all that trouble pattern matching a list and building
> something of type Maybe when you can just coerce them!  (I finally
> actually found GHC's unsafeCoerce#)

Now that's certainly an example of a usage that's (a) unportable
(what happens if you define the List datatype or the Maybe datatype
the other way round?) and (b) dangerous (what happens if the
compiler decides to do some optimizations behind your back that are
for example valid for Maybe, but not for lists?)

- Dirk
From: Darius
Subject: Re: More static type fun.
Date: 
Message-ID: <20031116102538.00004ea3.ddarius@hotpop.com>
On Sun, 16 Nov 2003 10:25:46 +0100
Dirk Thierbach <··········@gmx.de> wrote:

> Darius <·······@hotpop.com> wrote:
> 
> > Why go to all that trouble pattern matching a list and building
> > something of type Maybe when you can just coerce them!  (I finally
> > actually found GHC's unsafeCoerce#)
> 
> Now that's certainly an example of a usage that's (a) unportable
> (what happens if you define the List datatype or the Maybe datatype
> the other way round?) and (b) dangerous (what happens if the
> compiler decides to do some optimizations behind your back that are
> for example valid for Maybe, but not for lists?)
and (c) not serious.
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <2msk81-f02.ln1@ID-7776.user.dfncis.de>
Darius <·······@hotpop.com> wrote:
> and (c) not serious.

I am sorry if I missed the smiley. It didn't really look serious to me,
but this is Usenet, so there's a real danger of somebody thinking that
there might now be good examples for using unsafeCoerce, and that
therefore the "static typers" have no idea what they are talking about.

- Dirk
From: Mark Carroll
Subject: Re: More static type fun.
Date: 
Message-ID: <Jxj*FTA7p@news.chiark.greenend.org.uk>
In article <············@comcast.net>,
Joe Marshall  <·············@comcast.net> wrote:
>Lauri Alanko <··@iki.fi> writes:
(snip)
>> Most typed language implementations provide unsafe functions to coerce
>> something into another type and thus bypass the typesystem. Eg.
>> Obj.magic in ocaml, or unsafeCoerce in Haskell implementations.
>
>Obviously, the types `get in the way' at times.  At least often enough
>that someone decided to implement a workaround rather than re-work his
>code.
(snip)

Frequency of "unsafe" (anecdotal): 
Well, only for things like writing your own marshalling/unmarshalling
libraries or foreign-function interfaces or whatever, and I'm not even
sure you need it in Haskell any more for home-grown unmarshalling now
we have the Data.Generics stuff. Probably Haskell and Modula-3 are the
main two languages I've written plenty of code in that had a
clearly-separate collection of "unsafe functions". My point is that I
don't think any of the programs I've written in them (a wide variety
of stuff, from distributed computation managers to interpreters and
simulators for a device modeling language) have ever actually needed
any of those "unsafe" functions for anything, though maybe some of the
libraries I used were full of them for all I know.

How static types get in the way, and why that's okay for me: For me,
the main way that the types "get in the way" is more when I haven't
thought the program through in enough detail and I'm trying to run
programs anyway. Strong, static typing tends to make sure that I have
to figure more stuff out before I can start executing my code. I can
live with that - I think it's probably a good thing, in my case - but
I can understand that it might bug the hell out of others. For
actually-unfinished bits of code it's easy to put in simple stubs, as
has already been explored elsewhere in this behemoth of a discussion,
but Haskell does make me think through half-conceived things a bit
more before I test out those new bits. It's no problem: it really is
the case that when my program gets past the compiler, nearly all the
bugs have been found, and each complaint it had was such that it was
actually making a good point - something that would have had to be
fixed at some point anyway. I guess that this approach works for me
because I code by mostly-completing separate sections one-by-one in a
bottom-up fashion, which is why I love the modularity that purely
functional programming offers. Admittedly, for interoperating with
others' half-written code, I'll sometimes roll some vacuous stub that
also fits the API to that segment and can be used instead. Also, maybe
it works for me because I get happy about code more through staring at
it and convincing myself that it really does what I want, than by
testing it out.

Maybe if types didn't get in the way, something else would:
Admittedly, the way that types "get in the way" might be a bigger part
of the way your brain hurts when you learn Haskell than when you learn
C or whatever, but I'm hesitant to conclude that your brain hurts less
when you learn a language that's not as big on static typing if it's
still a language that's arranged much differently from what you already
knew. It's more about learning to express problems and solutions in a
way that fits well with the ontology of a language paradigm. Maybe some
are more flexibly expressive than others, but there's a tradeoff there
between flexibility and the degree to which programmers who also know
that language can actually read each other's code with ease.

-- Mark
From: Stephen J. Bevan
Subject: Re: More static type fun.
Date: 
Message-ID: <m3r809s05x.fsf@dino.dnsalias.com>
Joe Marshall <·············@comcast.net> writes:
> Lauri Alanko <··@iki.fi> writes:
> > Most typed language implementations provide unsafe functions to coerce
> > something into another type and thus bypass the typesystem. Eg.
> > Obj.magic in ocaml, or unsafeCoerce in Haskell implementations.
> 
> Obviously, the types `get in the way' at times.

From the perspective of a Forth programmer, Lisp types also `get in
the way' at times ...

> At least often enough that someone decided to implement a workaround
> rather than re-work his code.

the workaround being ...

> Languages like Lisp often don't provide a mechanism by which one
> could, for example, re-tag a cons cell as a string.  (Although they
> may provide fixnum<->address conversion as part of the GC or FFI, it
> is non-standard).

The situation is similar in Haskell: unsafeCoerce is not in the
standard, it is a property of some implementations.
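For readers who haven't met it, a minimal sketch of what this implementation-specific escape hatch looks like in GHC (assuming the module name Unsafe.Coerce, where current GHC versions export it; the names here are otherwise made up for illustration). Its only arguably defensible use is between two types with identical runtime representation, such as a newtype and the type it wraps:

```haskell
import Unsafe.Coerce (unsafeCoerce)

-- unsafeCoerce :: a -> b bypasses the type checker entirely.
-- Coercing between a newtype and its underlying type is representation-
-- preserving; coercing between unrelated types is undefined behaviour.
newtype Age = Age Int deriving Show

toAge :: Int -> Age
toAge = unsafeCoerce

main :: IO ()
main = print (toAge 42)
```

Anything beyond this kind of representation-preserving use is exactly the "not serious" territory discussed above.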
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <smkpxky0.fsf@comcast.net>
·······@dino.dnsalias.com (Stephen J. Bevan) writes:

> Joe Marshall <·············@comcast.net> writes:
>> Lauri Alanko <··@iki.fi> writes:
>> > Most typed language implementations provide unsafe functions to coerce
>> > something into another type and thus bypass the typesystem. Eg.
>> > Obj.magic in ocaml, or unsafeCoerce in Haskell implementations.
>> 
>> Obviously, the types `get in the way' at times.
>
> From the perspective of a Forth programmer, Lisp types also `get in
> the way' at times ...
>
>> At least often enough that someone decided to implement a workaround
>> rather than re-work his code.
>
> the workaround being ...

The unsafe coercion functions.

-- 
~jrm
From: Stephen J. Bevan
Subject: Re: More static type fun.
Date: 
Message-ID: <m3llqhrw4o.fsf@dino.dnsalias.com>
Joe Marshall <·············@comcast.net> writes:
> ·······@dino.dnsalias.com (Stephen J. Bevan) writes:
> > Joe Marshall <·············@comcast.net> writes:
> >> Lauri Alanko <··@iki.fi> writes:
> >> > Most typed language implementations provide unsafe functions to coerce
> >> > something into another type and thus bypass the typesystem. Eg.
> >> > Obj.magic in ocaml, or unsafeCoerce in Haskell implementations.
> >> 
> >> Obviously, the types `get in the way' at times.
> >
> > From the perspective of a Forth programmer, Lisp types also `get in
> > the way' at times ...
> >
> >> At least often enough that someone decided to implement a workaround
> >> rather than re-work his code.
> >
> > the workaround being ...
> 
> The unsafe coercion functions.

It wasn't a question, I was linking the preceding paragraph with the
next one where you explained why some Lisp implementations contain
functions that can subvert its type system.
From: Adrian Hey
Subject: Re: More static type fun.
Date: 
Message-ID: <bp6gkg$519$1$830fa79f@news.demon.co.uk>
Joe Marshall wrote:

> Lauri Alanko <··@iki.fi> writes:
>> Most typed language implementations provide unsafe functions to coerce
>> something into another type and thus bypass the typesystem. Eg.
>> Obj.magic in ocaml, or unsafeCoerce in Haskell implementations.
> 
> Obviously, the types `get in the way' at times.  At least often enough
> that someone decided to implement a workaround rather than re-work his
> code.

IIRC unsafeCoerce was a hack somebody came up with in connection with
GHC's dynamics at some point in the dim and distant past. Dunno if
it's still needed. I can't find it in my current GHC libs, and I suspect
most Haskellers are scarcely aware of its existence, and certainly
have never had any reason to use it (certainly those who have never
had any reason to use dynamics). So no, it's not obvious the types
get in the way at times. They never have for me in all my years of
Haskelling.

Regards
--
Adrian Hey
From: Christopher C. Stacy
Subject: Re: More static type fun.
Date: 
Message-ID: <un0axqpfu.fsf@dtpq.com>
>>>>> On Sat, 15 Nov 2003 20:05:20 +0000 (UTC), Lauri Alanko ("Lauri") writes:

 Lauri> Common Lisp's flexibility gives you the power to write
 Lauri> programs that fail in many ways when they are run.
 Lauri> This is, to my mind, not a desirable goal.

I think you've hit the nail on the head with this statement, 
which is about a clash of cultures about how to write 
programs.   Because that ability to fail (and, also the ability
to not necessarily fail when the relevant types change around in
the program), is exactly the characteristic that we Lispers desire.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp6ngs$hmt$1@newsreader3.netcologne.de>
Christopher C. Stacy wrote:

>>>>>>On Sat, 15 Nov 2003 20:05:20 +0000 (UTC), Lauri Alanko ("Lauri") writes:
> 
> 
>  Lauri> Common Lisp's flexibility gives you the power to write
>  Lauri> programs that fail in many ways when they are run.
>  Lauri> This is, to my mind, not a desirable goal.
> 
> I think you've hit the nail on the head with this statement, 
> which is about a clash of cultures about how to write 
> programs.   Because that ability to fail (and, also the ability
> to not necessarily fail when the relevant types change around in
> the program), is exactly the characteristic that we Lispers desire.

As an additional comment, for those who might find Christopher's 
statement to be very wild: In Common Lisp, a failure almost always comes 
with an opportunity to correct its cause, either programmatically or 
manually.


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Christopher C. Stacy
Subject: Re: More static type fun.
Date: 
Message-ID: <u8ymhq935.fsf@dtpq.com>
>>>>> On Sun, 16 Nov 2003 03:30:51 +0100, Pascal Costanza ("Pascal") writes:

 Pascal> Christopher C. Stacy wrote:
 >>>>>>> On Sat, 15 Nov 2003 20:05:20 +0000 (UTC), Lauri Alanko ("Lauri") writes:
 Lauri> Common Lisp's flexibility gives you the power to write
 Lauri> programs that fail in many ways when they are run.
 Lauri> This is, to my mind, not a desirable goal.
 >> I think you've hit the nail on the head with this statement, which
 >> is about a clash of cultures about how to write programs.
 >> Because that ability to fail (and, also the ability
 >> to not necessarily fail when the relevant types change around in
 >> the program), is exactly the characteristic that we Lispers desire.

 Pascal> As an additional comment, for those who might find Christopher's
 Pascal> statement to be very wild: In Common Lisp, a failure almost always
 Pascal> comes with an opportunity to correct its cause, either
 Pascal> programmatically or manually.

And of course "failures" are not core dumps, nor are they random
execution by incorrectly handling data; they are dynamic type exceptions.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp6n5r$gat$1@newsreader3.netcologne.de>
Lauri Alanko wrote:

> Of course there seems to be quite a wealth of opinions on what exactly
> constitutes the useful portion of CL's functionality. Personally, I find
> eval extremely valuable, whereas I don't much care that you can refer to
> a variable that hasn't been defined, except temporarily as part of an
> interactive development cycle.

Sidenote: The ability to refer to a variable that hasn't been (properly) 
defined yet typically needs to be simulated in other languages in the 
form of proxies. You could regard this feature of Common Lisp (and other 
dynamic languages) to be a built-in implementation of the proxy pattern. ;)


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fb8a315$1@news.unimelb.edu.au>
Lauri Alanko <··@iki.fi> writes:

>Pascal Costanza <········@web.de> virkkoi:
>> OK, maybe I am missing something fundamentally important here. Please 
>> keep me informed when such a type system becomes available. Make sure 
>> that such a type system allows me to ignore any error that it flags,
>
>Most typed language implementations provide unsafe functions to coerce
>something into another type and thus bypass the typesystem. Eg.
>Obj.magic in ocaml, or unsafeCoerce in Haskell implementations.

Furthermore many typed language implementations provide _safe_
(dynamically checked) functions to coerce something into another type
and thus bypass the static type system.  Examples include casts in Java
and C#, "dynamic_cast" in C++ and "std_util.dynamic_cast" in Mercury.

(Glasgow Haskell's "Dynamic.cast" also almost fits into this category.
The reason I say "almost" is that it is not 100% safe; if you try
hard enough, you can abuse it to break run-time type safety.  But
it is safe enough for all practical purposes other than executing
untrusted code.)
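By way of illustration, a sketch of this safe, dynamically checked kind of coercion in today's Haskell, using cast from Data.Typeable (the modern home of the functionality GHC's old Dynamic module provided; the describe function is made up for the example). Unlike unsafeCoerce, cast compares runtime type representations and yields Nothing on a mismatch instead of misbehaving:

```haskell
import Data.Typeable (Typeable, cast)

-- A checked coercion: cast :: (Typeable a, Typeable b) => a -> Maybe b
-- succeeds only when the runtime type representations agree.
describe :: Typeable a => a -> String
describe x = case cast x :: Maybe Int of
  Just n  -> "an Int: " ++ show n
  Nothing -> "something else"

main :: IO ()
main = do
  putStrLn (describe (3 :: Int))  -- the cast succeeds
  putStrLn (describe False)       -- the cast fails safely
```

This is the same flavour of check as a Java cast or C++ dynamic_cast: the type error becomes a recoverable runtime value rather than a compile-time rejection.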

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Feuer
Subject: Re: More static type fun.
Date: 
Message-ID: <3FB720D2.7C2C19DA@his.com>
Pascal Costanza wrote:

> OK, maybe I am missing something fundamentally important here. Please
> keep me informed when such a type system becomes available. Make sure
> that such a type system allows me to ignore any error that it flags, and
> that the language that it is based on gives me at least the same level
> of flexibility as Common Lisp.

I believe there are implementations of SML that allow you to run a
program that does not pass the type checker.  This is not really
possible in Haskell, because Haskell supports overloading.

David
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fb89b52$1@news.unimelb.edu.au>
Lauri Alanko <··@iki.fi> writes:

>I just want to show that eval is by no means impossible to handle in a
>statically typed language. It just hasn't been done... yet.

Are you sure?  I would be inclined to suspect there is probably at least
one language out there which is statically typed by default, but which
supports eval.

However, the main reason eval hasn't been done in most statically
typed languages is that it isn't very useful.  I could add support
for eval to Mercury without much difficulty, but so far I haven't seen
the point.

When using dynamically typed languages, I find that I do make use of
eval reasonably often.  But in statically typed languages, I do not
miss the presence of "eval", because there are other features (e.g.
higher-order functions) which I can easily use instead to achieve the
same effect.
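As a trivial sketch of that point (the function names here are invented for the example): where a dynamically typed program might construct an expression and eval it, in a statically typed language the "code to run later" is simply a first-class, typed value that can be stored, combined, and applied:

```haskell
-- Instead of building source text and eval'ing it, pass functions:
-- the deferred computation is an ordinary, statically typed value.
twice :: (a -> a) -> a -> a
twice f = f . f

-- Compose a whole list of deferred computations into one.
applyAll :: [a -> a] -> a -> a
applyAll = foldr (.) id

main :: IO ()
main = print (applyAll [twice (* 3), (+ 1)] (2 :: Int))
```

The type system checks the whole pipeline up front, which is why the eval-shaped hole is rarely felt in practice.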

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Lauri Alanko
Subject: Re: More static type fun.
Date: 
Message-ID: <bpa7er$1kf$1@la.iki.fi>
Fergus Henderson <···@cs.mu.oz.au> virkkoi:
> Lauri Alanko <··@iki.fi> writes:
> 
> >I just want to show that eval is by no means impossible to handle in a
> >statically typed language. It just hasn't been done... yet.
> 
> Are you sure?  I would be inclined to suspect there is probably at least
> one language out there which is statically typed by default, but which
> supports eval.

Depends on what you mean by "eval". It is of course quite
straightforward to write an interpreter for a language in the language
itself. The real issue is being able to interface the existing (perhaps
compiled) code with the dynamically evaluated expressions in a
type-safe manner. OCaml's toplevel library is a step in the right
direction, but it's woefully limited.
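A sketch of the easy half of this point, an interpreter for a tiny expression language written in the host language (the Expr type and eval function are invented for the example, and the input is pre-parsed for brevity). Note that it dodges the hard part described below, type-safe interfacing with arbitrary host data, by fixing the value type to Int:

```haskell
-- A tiny evaluator: runtime "code" is just data, and the host passes
-- in an environment of values the interpreted code may read.
data Expr = Lit Int | Var String | Add Expr Expr

eval :: [(String, Int)] -> Expr -> Maybe Int
eval _   (Lit n)   = Just n
eval env (Var v)   = lookup v env            -- unbound names fail safely
eval env (Add a b) = (+) <$> eval env a <*> eval env b

main :: IO ()
main = do
  print (eval [("x", 10)] (Add (Var "x") (Lit 5)))  -- Just 15
  print (eval [] (Var "y"))                         -- Nothing
</imports>
```

Generalizing this so that interpreted code can manipulate host values of arbitrary types, transparently and type-safely, is exactly the open problem being discussed.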

> When using dynamically typed languages, I find that I do make use of
> eval reasonably often.  But in statically typed languages, I do not
> miss the presence of "eval", because there are other features (e.g.
> higher-order functions) which I can easily use instead to achieve the
> same effect.

Then you are looking for other things than I am. To me the point is not
internal code generation or suchlike abstraction facilities. Lambda
suffices quite far, macros go still further, and ultimately there are
things like MetaML that allow you to essentially generate code in a
type-safe fashion. But what I want from eval is, well, _eval_: being
able to read, at runtime, code in a Turing-complete language and execute
it, _and_ allow it to transparently manipulate data in the host program
(in a controlled fashion). It's not essential that the host language and
the interpreted language are the same, but it makes interfacing much
simpler. What _is_ essential is that both of the languages are
statically typed. I don't know of any existing typed solution that has the
flexibility of, say, Scheme implementations with first-class
environments.


Lauri Alanko
··@iki.fi
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fb8da66$1@news.unimelb.edu.au>
Lauri Alanko <··@iki.fi> writes:

>what I want from eval is, well, _eval_: being
>able to read, at runtime, code in a turing-complete language and execute
>it, _and_ allow it to transparently manipulate data in the host program
>(in a controlled fashion).

Oh, you mean you want plug-ins?

I know a few systems written in statically typed languages that support
those.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpa71j$t2r$2@newsreader3.netcologne.de>
Fergus Henderson wrote:

> When using dynamically typed languages, I find that I do make use of
> eval reasonably often.  But in statically typed languages, I do not
> miss the presence of "eval", because there are other features (e.g.
> higher-order functions) which I can easily use instead to achieve the
> same effect.

You're right, in 95% of all the cases, you don't need eval.


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Lars Lundgren
Subject: Re: More static type fun.
Date: 
Message-ID: <Pine.GSO.4.10.10311141204170.2862-100000@licia.dtek.chalmers.se>
On Fri, 14 Nov 2003, Pascal Costanza wrote:
> 
> Just another hint: How can you statically check a property to be always 
> present whose presence actually depends on dynamic properties?
> 

Just a comment: Information only available at runtime cannot help *at
all* in establishing that a property is *always* present. Either it is
in theory deducible by static analysis, or the property does not always
hold. Runtime info only adds information about particular executions in
particular contexts.

/Lars L
From: Ed Avis
Subject: Re: More static type fun.
Date: 
Message-ID: <l1oevgt20f.fsf@budvar.future-i.net>
Pascal Costanza <········@web.de> writes:

>Runtime metaprogramming and static type checking can't be reconciled.

I don't see why not, in an interpreted language where the whole
program is loaded in for analysis.  You could change 'function taking
an integer' to 'function taking an integer and a string', and when you
do this the interpreter could check for callers to that function and
say hang on, you want this function to take a string as well but
you're not providing that argument in the following places.  Then
you'd have the choice of fixing those at the same time, or leaving
them alone and having them throw a type error at run time.  (Avoiding
type errors at run time is normally a goal of type systems, but
there's no reason not to allow them if the programmer explicitly
chooses it.  Of course your language must define a semantics for this
type error and how it can be caught.)

SQL is statically typed, but you can execute dynamically generated
code ('execute immediate'), redefine procedures and change the schema
at run time.  If you try to make a schema change which is incompatible
with the type of something elsewhere - for example, to drop a table
which is pointed to by a foreign key constraint - you get an error and
have to fix the other thing first.

Not that SQL database systems have anything like the expressive power
of Lisp for general-purpose computing, but they provide an obvious
example that strong, static typing is not incompatible with runtime
malleability of the system.  Certainly you do not have to take the
server down every time you change the type of a column.

-- 
Ed Avis <··@membled.com>
From: Adrian Hey
Subject: Re: More static type fun.
Date: 
Message-ID: <bp20db$igb$1$8300dec7@news.demon.co.uk>
Pascal Costanza wrote:

> ...and you accused the "dynamic typers" in this discussion that they
> don't have enough experience with advanced static type systems?

Some of them clearly haven't.

> Do you
> know what your statements sound like?

No. What do they sound like?

Regards
--
Adrian Hey
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp213h$o2c$1@newsreader3.netcologne.de>
Adrian Hey wrote:

> Pascal Costanza wrote:
> 
>>...and you accused the "dynamic typers" in this discussion that they
>>don't have enough experience with advanced static type systems?
> 
> Some of them clearly haven't.

And vice versa.


Pascal
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <u14d81-nq7.ln1@ID-7776.user.dfncis.de>
Adrian Hey <····@nospicedham.iee.org> wrote:
> Dirk Thierbach wrote:

>> As someone who usually prefers static typing, I don't doubt the
>> probability -- with enough unit tests, you can certainly approximate
>> static type checking, and for practical use, it is "good enough".

> But on a level playing field, i.e. an equal amount of time available
> for unit tests and debugging, I'm still sceptical that removing
> static typing would help me. 

The point is that you don't start out with static typing and then
remove it. You start with a different approach (i.e. dynamic typing).

> If I imagine a (hypothetically) "better" Haskell without static type
> checking (and presumably no type classes or overloading either) I
> have a hard time seeing how this is of any help at all.

It doesn't really make sense to imagine Haskell without static typing.
In the same way it doesn't make sense to imagine Lisp with a HM-style
static type system (because it won't work).

It's a trade-off. You (very roughly) pay with more effort in unit
tests and less certainty for more flexibility in notation and no need
to explicitly write (data)type tags. Both approaches work fine, and
what is "better" really depends on the application and lots of other
things.

> It just makes a bad problem (producing reliable programs) even worse
> AFAICS.

It's not as bad as you think. Unit tests and the "test first"
development style are a neat idea. Think of it as "specification by
example". This is not as good as "specification by logic", but OTOH it
is easily verifiable (without need for a proof checker or other fancy
things), and it's certainly better than no specification.

I think you really need to have it done yourself before you can judge
it (and that's maybe something both "camps" should keep in mind).

> Maybe this newly untyped or dynamically typed Haskell could be
> further improved to support dynamic metaprogramming :-)

> But then, maybe existing statically typed Haskell could be
> improved to support that anyway (with the occasional use of
> dynamics of course).

I still refuse to see this as a question of "my favorite language can
do it better than any other language". Both ways work. It's a matter
of taste which one you choose. Some are only happy with one, some
are only happy with the other, some can work with both. But it's a bit
ridiculous to say "this other way will never work" when there are
enough counterexamples. (Again, this holds for both "camps").

- Dirk
From: Christopher C. Stacy
Subject: Re: More static type fun.
Date: 
Message-ID: <ud6bwj8d7.fsf@dtpq.com>
>>>>> On Thu, 13 Nov 2003 20:53:02 +0100, Dirk Thierbach ("Dirk") writes:
 Dirk> It's not as bad as you think. Unit tests and the "test first"
 Dirk> development style are a neat idea. Think of it as
 Dirk> "specification by example". This is not as good as
 Dirk> "specification by logic", but OTOH it is easily verifiable
 Dirk> (without need for proof checker or other fancy things), and
 Dirk> it's certainly better than no specification.

I thought that when people say "verifiable", they mean exclusively 
"by proof", since even if you were to test for every possible input,
that is no guarantee of reproducible results.
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <hbdd81-8fa.ln1@ID-7776.user.dfncis.de>
Christopher C. Stacy <······@dtpq.com> wrote:
> On Thu, 13 Nov 2003 20:53:02 +0100, Dirk Thierbach ("Dirk") writes:
>> It's not as bad as you think. Unit tests and the "test first"
>> development style are a neat idea. Think of it as
>> "specification by example". This is not as good as
>> "specification by logic", but OTOH it is easily verifiable
>> (without need for proof checker or other fancy things), and
>> it's certainly better than no specification.

> I thought that when people say "verifiable", they mean exclusively 
> "by proof", 

If you "specify by example", then of course you only have to check
that concrete example to verify the (very particular) specification.
If I meant "proved", I would have said "proved" :-)

> since even if you were to test for every possible input,
> that is no guarantee of reproducible results.

But you don't check for every possible input, you just check for some
given ones. Of course this is much weaker than a general specification
that is verified by a proof, and that's exactly the drawback of
dynamic typing + unit tests vs. static typing. But with enough
examples, it can be "good enough" to catch a sufficient number of
bugs. And the cost is partly offset by the fact that at least some of
the tests will verify higher level behavior.

And after all, a static type system doesn't allow you to check against
arbitrary specifications. It just checks against specifications in the
"type language". That's good enough to avoid almost all low level and
many intermediate level "tests", but when it comes to higher level
behaviour (e.g., "is the result of my sort routine really sorted?")
you do the test as well (or at least you should). One could do a proof
for that (and it would be very nice if one could automate it), but for
normal programming, it is currently just too much effort to do it
manually every time.
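That sorting example, made concrete as a property check of the kind QuickCheck automates with random inputs; this hand-rolled sketch (the sorted predicate is invented for the example) just checks a few hand-picked cases:

```haskell
import Data.List (sort)

-- A property the type system cannot express: the output is ordered.
-- "Specification by example": check it over sample inputs.
sorted :: Ord a => [a] -> Bool
sorted xs = and (zipWith (<=) xs (drop 1 xs))

main :: IO ()
main = print (all (sorted . sort) [[3, 1, 2], [], [5, 5, 4 :: Int]])
```

A full proof that sort always produces ordered output would subsume every such test, but as the post says, for everyday programming the tests are the practical middle ground.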

- Dirk
From: Adrian Hey
Subject: Re: More static type fun.
Date: 
Message-ID: <bp20dg$igb$3$8300dec7@news.demon.co.uk>
Dirk Thierbach wrote:

> It's not as bad as you think. Unit tests and the "test first"
> development style are a neat idea.

Isn't that precisely what a static type system does for you?
It tests that your code is free of certain classes of error before
you even get to run it.

If that ain't first I don't know what is :-)

> Think of it as "specification by
> example". This is not as good as "specification by logic", but OTOH it
> is easily verifyable (without need for proof checker or other fancy
> things), and it's certainly better than no specification.
> 
> I think you really need to have it done yourself before you can judge
> it (and that's maybe something both "camps" should keep in mind).

Certainly I wouldn't dispute that this is a good idea, and I don't
think any static typer (Haskeller) would dispute this, and there
are some pretty cool tools like QuickCheck to help with this
development style even for Haskell.

But I'm still not convinced that a dynamic type system is any
advantage when pursuing this approach. Especially since my own
personal experience is that the static type system is not at all
"style cramping" and, in the overwhelming majority of cases, the
type checker only rejects programs/functions which couldn't possibly
work (I can only think of 1 occasion where this wasn't true).

Regards
--
Adrian Hey
  
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <fefe81-6o.ln1@ID-7776.user.dfncis.de>
Adrian Hey <····@nospicedham.iee.org> wrote:
> Dirk Thierbach wrote:

>> It's not as bad as you think. Unit tests and the "test first"
>> development style are a neat idea.

> Isn't that precisely what a static type system does for you?
> It tests that your code is free of certain classes of error before
> you even get to run it.

Yes. (And that's why I personally like a static type system better).

>> Think of it as "specification by
>> example". This is not as good as "specification by logic", but OTOH it
>> is easily verifyable (without need for proof checker or other fancy
>> things), and it's certainly better than no specification.

>> I think you really need to have it done yourself before you can judge
>> it (and that's maybe something both "camps" should keep in mind).

> Certainly I wouldn't dispute that this is a good idea, and I don't
> think any static typer (Haskeller) would dispute this, and there
> are some pretty cool tools like QuickCheck to help with this
> development style even for Haskell.

Yes. 

> But I'm still not convinced that a dynamic type system is any
> advantage when pursuing this approach. 

But "it's an advantage" and "it can't work" are different things.  If
there are other reasons to use a particular language, for example
libraries, language features that fit the application, or even just an
irrational dislike of static types (in other words, a psychological
problem), then you can take a language with a dynamic type system and
make it work.

And some things *are* easier to do in Lisp than in Haskell, for example
language extensions. (That doesn't mean that they are "impossible"
in a statically typed language, of course).

> Especially since my own personal experience is that the static type
> system is not at all "style cramping" and, in the overwhelming
> majority of cases, the type checker only rejects programs/functions
> which couldn't possibly work (I can only think of 1 occasion where
> this wasn't true).

And so is my experience. But if there are people who feel different
about that, why shouldn't they use the tools they personally prefer?

- Dirk
From: Adrian Hey
Subject: Re: More static type fun.
Date: 
Message-ID: <bp3chn$ju8$1$8300dec7@news.demon.co.uk>
Dirk Thierbach wrote:

> But "it's an advantage" and "it can't work" are different things.

They certainly are. Did you mean "it's not an advantage"?

If so I admit I did (and still do) claim "it's not an advantage".

But did I say it can't work? I thought I'd taken great care to
explicitely state that was not my opinion (or my experience)
several times already :-)

> But if there are people who feel different
> about that, why shouldn't they use the tools they personally prefer?

I can understand your anxiety to let this thread die, but please don't
misrepresent what I've written. Just because I chose to try to debunk
some common FUD about statically typed languages and the alleged (by
some people) superiority of dynamically typed languages does not mean
I'm Lisp bashing, Python bashing or evangelising on behalf of Haskell
(personally I'm not wildly enthusiastic about some of the features of
Haskell, but that is another story..)

Regards
--
Adrian Hey
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <qarf81-2b2.ln1@ID-7776.user.dfncis.de>
Adrian Hey <····@nospicedham.iee.org> wrote:
> Dirk Thierbach wrote:

> If so I admit I did (and still do) claim "it's not an advantage".

And that's certainly true. It's a trade-off, you exchange (good) 
static-typing for something else, for example more ease in language
extension, a simple and powerful way of doing reflection, and so
on.

Whether that is an advantage or a disadvantage depends on what sort of
application you want to write, and on your personal taste.

> Just because I chose to try to debunk some common FUD about
> statically typed languages and the alleged (by some people)
> superiority of dynamically typed languages 

That's also what I would like to do, and that's why I have showed some
examples of how easily you can do things in a (good) statically typed
language that are considered "impossible" by the "dynamic" camp.

> does not mean I'm Lisp bashing, Python bashing

I am afraid if you claim that "everyone should use static typing" then
that can be easily understood as "bashing". It's just not possible to
take, say, Lisp and put HM-style type inference on top. So for
Lispers, that statement basically means "don't use Lisp, use something
else".

> or evangelising on behalf of Haskell (personally I'm not wildly
> enthusiastic about some of the features of Haskell, but that is
> another story..)

So am I, but as you say, it's another story. All languages have
good points and weaknesses; and that's why you choose the right one
for the job.

- Dirk
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp39mi$91k$1@newsreader3.netcologne.de>
Dirk Thierbach wrote:
> Adrian Hey <····@nospicedham.iee.org> wrote:

>>Especially since my own personal experience is that the static type
>>system is not at all "style cramping" and, in the overwhelming
>>majority of cases, the type checker only rejects programs/functions
>>which couldn't possibly work (I can only think of 1 occasion where
>>this wasn't true).
> 
> 
> And so is my experience. But if there are people who feel different
> about that, why shouldn't they use the tools they personally prefer?

Although I usually take the latter statement for granted, I feel some 
kind of relief that you have stated it and seem to take it for granted 
as well.

Thank you. ;)


Pascal
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <3nqf81-2b2.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> Dirk Thierbach wrote:

>> And so is my experience. But if there are people who feel different
>> about that, why shouldn't they use the tools they personally prefer?
> 
> Although I usually take the latter statement for granted, I feel some 
> kind of relief that you have stated it and seem to take it for granted 
> as well.

> Thank you. ;)

Now you only have to be consistent and acknowledge that there are people
who prefer doing the kind of things you claim to be "impossible" in
a language *they* personally prefer.

It really works both ways, you know.

- Dirk
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp3go5$pmh$1@newsreader3.netcologne.de>
Dirk Thierbach wrote:

> Pascal Costanza <········@web.de> wrote:
> 
>>Dirk Thierbach wrote:
> 
> 
>>>And so is my experience. But if there are people who feel different
>>>about that, why shouldn't they use the tools they personally prefer?
>>
>>Although I usually take the latter statement for granted, I feel some 
>>kind of relief that you have stated it and seem to take it for granted 
>>as well.
> 
> 
>>Thank you. ;)
> 
> 
> Now you only have to be consistent and acknowledge that there are people
> who prefer doing the kind of things you claim to be "impossible" in
> a language *they* personally prefer.
> 
> It really works both ways, you know.

I think I have done this before, but I can surely do it again.

Yes, I have made the claim earlier in this discussion that it is not 
possible to implement some things with statically typed languages that 
are possible with dynamically typed languages. This is clearly a wrong 
statement, as it would question Turing equivalence, and the only defense 
I have is that I have used those words while having something else in mind.

In fact, I am not concerned about the what, but about the how; i.e. not 
about the problems to be solved, but about the programming style to be 
used to solve those problems.

I still think it is not possible to treat each and every definition in a 
program as optional and reconcile this with strict static type checking, 
at least not in non-trivial ways.


Pascal
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <q16h81-hm.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> I think I have done this before, but I can surely do it again.

And you still don't get it :-)

> Yes, I have made the claim earlier in this discussion that it is not 
> possible to implement some things with statically typed languages that 
> are possible with dynamically typed languages. This is clearly a wrong 
> statement, 

So far we agree.

> as it would question Turing equivalence, 

And that is not the point.

> In fact, I am not concerned about the what, but about the how; i.e. not 
> about the problems to be solved, but about the programming style to be 
> used to solve those problems.

The *programming style* is tied to a particular language. In Lisp, you
do it Lisp style. You cannot do it in any other language in the same
style. In Haskell, you do it Haskell style. This is different from the
Lisp style, but it is as "convenient" (which of course is a somewhat
vague qualification) as doing it in Lisp style. 

(And in C, for example, you do it C style, which is a lot more
inconvenient than both of the above.)

So with respect to programming style, your claim is trivial.
With respect to convenience, it is wrong, and saying "it is impossible
to write programs in any other language than Lisp as easily as one
can do it in Lisp" is on the same level as "it is impossible to
write reliable programs in any language without a static type
system".

> I still think it is not possible to treat each and every definition in a 
> program as optional and reconcile this with strict static type checking, 
> at least not in non-trivial ways.

So why on earth is it necessary to treat every definition as "optional"
to be able to write programs easily?

If that is your style of writing, and if for some reason you
desperately need it (though I have no idea why you should), fine. Then
use languages like Lisp that support this style of writing. If you
cannot write programs in any other style, because it is too difficult
for you to think in a different way, then don't do it.

But don't try to tell others that they now also must use this style,
because otherwise it would be impossible for them to write programs
as easily as you can do it in Lisp. And don't try to tell others
that this is all the fault of the static type system; because static
typing itself works fine, and it's the differences in the *languages*
you have to adapt to.

If you're still attempting this, you obviously cannot accept that
others might want to use the tools *they* personally prefer, while at
the same time you insist that *you* should be granted that
freedom. That's double standards.

- Dirk
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bp542c$mt3$1@newsreader3.netcologne.de>
Dirk Thierbach wrote:

> So with respect to programming style, your claim is trivial.

Thanks, that's all I am after.

> With respect to convenience, it is wrong, and saying "it is impossible
> to write programs in any other language than Lisp as easily as one
> can do it in Lisp" is on the same level as "it is impossible to
> write reliable programs in any language without a static type
> system".

I have never made such a claim.

>>I still think it is not possible to treat each and every definition in a 
>>program as optional and reconcile this with strict static type checking, 
>>at least not in non-trivial ways.
> 
> So why on earth is it necessary to treat every definition as "optional"
> to be able to write programs easily?

This is important when you care about being able to change a program in 
unexpected ways. (The important part here is "unexpected". I can't be 
any more specific than that. Any specific example of a change to a 
program can be implemented in a way that anticipates that change. But 
that's beside the point. I want to be able to change a program in ways 
that I haven't anticipated. I want to do so at runtime. Consequently, 
this means that you can't rely on almost any invariant.)

> But don't try to tell others that they now also must use this style,
> because otherwise it would be impossible for them to write programs
> as easily as you can do it in Lisp. And don't try to tell others
> that this is all the fault of the static type system; because static
> typing itself works fine, and it's the differences in the *languages*
> you have to adapt to.

I have never made such claims. Why do you think this is my position?

It was some of the static typers in this discussion who wanted to fire 
programmers who don't work their way!

> If you're still attempting this, you obviously cannot accept that
> others might want to use the tools *they* personally prefer, while at
> the same time you insist that *you* should be granted that
> freedom. That's double standards.

I have never attempted this.


Pascal

--
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <fokh81-3a4.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> Dirk Thierbach wrote:

>> So with respect to programming style, your claim is trivial.

> Thanks, that's all I am after.

Good. I have already said that two times.

>> With respect to convenience, it is wrong, and saying "it is impossible
>> to write programs in any other language than Lisp as easily as one
>> can do it in Lisp" 

> I have never made such a claim.

But it looked like you did, and that's the claim I have responded to
all the time. 

>> So why on earth is it necessary to treat every definition as "optional"
>> to be able to write programs easily?

> This is important when you care about being able to change a program in 
> unexpected ways. (The important part here is "unexpected". I can't be 
> any more specific than that. Any specific example of a change to a 
> program can be implemented in a way that anticipates that change. But 
> that's beside the point. I want to be able to change a program in ways 
> that I haven't anticipated. 

And so do I, and static typing doesn't keep me from doing that.

> I want to do so at runtime. 

So you're really saying you want to change programs at runtime, and
you think it is necessary to treat every definition as optional to
achieve this. This has *nothing* to do with writing programs easily.

> Consequently, this means that you can't rely on almost any
> invariant.)

You have to rely on the invariant that after you have made all your
changes at runtime, the changed functions still all work together.
If they don't, your program will end up with errors.

That means that if the complete new system after the change was
checked at "compile time", it would typecheck.

Now the statically typed languages I know of are implemented as
compilers, which means they don't offer any support for changing part
of the code at runtime. But it would be very easy to just keep the
type information and offer a "hook" for changing part of the code,
while at the same time verifying that all the changes still work
together. One could even couple this with versioning to keep around
some old version of the same code for parts of the system that are
still active.

So even for changing code at runtime, it is not *necessary* to
treat all definitions as optional. Yes, you need more support by
the runtime system, but you get extra help in integration testing
in return.

And changing code at runtime is only needed for systems that have to
be up and running all the time. Now while there are certainly enough
applications that need those systems, there are also enough applications
that don't. And for the latter changing code at runtime is absolutely
no issue.

- Dirk
From: Feuer
Subject: Re: More static type fun.
Date: 
Message-ID: <3FB65503.7509D115@his.com>
Dirk Thierbach wrote:

> Now the statically typed languages I know of are implemented as
> compilers, which means they don't offer any support for changing part
> of the code at runtime. But it would be very easy to just keep the
> type information and offer a "hook" for changing part of the code,
> while at the same time verifying that all the changes still work together.

I suspect this will not play well with type inference, especially in
Haskell (because of overloading).

David
From: Daniel C. Wang
Subject: Re: More static type fun.
Date: 
Message-ID: <uad6xsg5a.fsf@hotmail.com>
Feuer <·····@his.com> writes:

> Dirk Thierbach wrote:
> 
> > Now the statically typed languages I know of are implemented as
> > compilers, which means they don't offer any support for changing part
> > of the code at runtime. But it would be very easy to just keep the
> > type information and offer a "hook" for changing part of the code,
> > while at the same time verifying that all the changes still work together.
> 
> I suspect this will not play well with type inference, especially in
> Haskell (because of overloading).
> 

http://www.cis.upenn.edu/~mwh/flashed.html

Welcome to the FlashEd webserver. This is a version of the Flash webserver
developed by Vivek Pai, with the following differences:

    * It is written in Popcorn, a type-safe, C-like language that is
      compiled to Typed Assembly Language (TAL). This is a variant of
      Proof-carrying Code, and the implementation is for the Intel IA32
      instruction set.

    * It is dynamically updateable. That is, any part of the webserver can
      be changed at runtime without halting service. Because the webserver
      is written in TAL, updates are guaranteed to be type-correct
      internally, as well as with respect to the running program.
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fb892cf$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>I want to be able to change a program in ways that I haven't anticipated.
>I want to do so at runtime.

Why?

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpa5st$rht$1@newsreader3.netcologne.de>
Fergus Henderson wrote:

> Pascal Costanza <········@web.de> writes:
> 
>>I want to be able to change a program in ways that I haven't anticipated.
>>I want to do so at runtime.
> 
> Why?

Because I can. ;)


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpaaq1$14lu$1@f1node01.rhrz.uni-bonn.de>
Fergus Henderson wrote:
> Pascal Costanza <········@web.de> writes:
> 
>>I want to be able to change a program in ways that I haven't anticipated.
>>I want to do so at runtime.
> 
> Why?

For example, think large-scale web applications. They are expensive to 
shut down and start again.

What you can do in Common Lisp [1] is the following. You can create your 
own metaclass that seamlessly maps objects to external storage. This 
means that you can program in an object-oriented style without needing 
to see what happens behind the scenes in your database.

Next, you can add and remove slots (fields) to and from your classes at 
runtime. For example, you might want to add or remove information from 
classes representing customers, goods, and so on.

When you add a field, you may want to leave it unbound by default. This 
will generate a runtime exception in case it is being accessed. One way 
to programmatically deal with such an exception is to catch it, create a 
web dialog that requests the needed information - including the option 
to enter values for all other fields that are yet unbound, if you are so 
inclined - and then to seamlessly proceed from the location in the 
control flow where the exception occurred.

When you remove a field, an attempt to access it will generate another 
runtime exception. Again, one way to deal with it is to catch it, create 
a web dialog that informs the user that the information is not available 
anymore, and then resume operation in some graceful manner.

I don't need to distinguish between optional and mandatory features in 
classes in order to make this work, i.e. the object-oriented way of 
using message sending for everything doesn't leak. [2] And especially, I 
don't need to anticipate which slots are optional or not. This allows 
the customers for whom I develop the web application to change their 
minds very late in the game. To be precise, they can change their mind 
while the system is running.

I don't say that everyone should be using dynamically typed languages or 
object-oriented programming for all projects. But scenarios like the one 
above are very attractive, are nicely modelled with object-oriented 
concepts, and require more dynamicity than staticity by default. At 
least, it's more convenient to use an (advanced) dynamically typed 
language for such things. [3]


Pascal

[1] Just a side note: I am mostly using Common Lisp as an example because 
it is the most advanced dynamically typed language I know about. Using 
"lesser" dynamic languages as examples against dynamic typing is like 
dismissing static typing because Java sucks.

[2] Checking whether a feature is available in an object from the 
outside before using it is not especially object-oriented. The idea of 
OOP is that objects autonomously decide what they can do and how they do it.

[3] Typical object-oriented programs have trivial properties wrt type 
constraints. A static type system doesn't buy you much here IMHO.

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Michael Livshin
Subject: Re: More static type fun.
Date: 
Message-ID: <s3n0av9rv3.fsf@laredo.verisity.com.cmm>
Fergus Henderson <···@cs.mu.oz.au> writes:

> Pascal Costanza <········@web.de> writes:
>
>>I want to be able to change a program in ways that I haven't anticipated.
>>I want to do so at runtime.
>
> Why?

I'm not sure whether your question is a genuine "why?" as in "list
your specific reasons" or actually a "whoever would want such a
thing?".

but the funny thing is, you yourself are doing it all the time.  look
at the computer system in front of you: the operating environment it
is running is such a program.  the only thing about it that you
cannot change at runtime is the resident part of the OS kernel.

or consider all those web sites that need to be updated without
downtime.  they are such systems, even though most of their component
programs are not.

you may get away with not thinking about runtime when you are
focussing on any individual task at hand, but for that you should
thank whoever designed the overall system your program runs as part
of.

so really, when you say that you are not interested in changing stuff
at runtime, you are actually making a choice.  you are limiting the
applicability of your preferred language (or, more precisely, your
preferred programming style).  this choice is quite reasonable for the
vast majority of programming tasks out there, I guess, but your
"why?", if meant in the second sense, is uncalled for.

-- 
All ITS machines now have hardware for a new machine instruction --
BOT
Branch On Tree.
Please update your programs.
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fb8e199$1@news.unimelb.edu.au>
Michael Livshin <······@cmm.kakpryg.net> writes:

>Fergus Henderson <···@cs.mu.oz.au> writes:
>
>> Pascal Costanza <········@web.de> writes:
>>
>>>I want to be able to change a program in ways that I haven't anticipated.
>>>I want to do so at runtime.
>>
>> Why?
>
>I'm not sure whether your question is a genuine "why?" as in "list
>your specific reasons" or actually a "whoever would want such a
>thing?".

The former.  I realize that this is sometimes needed, but the need for
it seems to be very rare indeed.

>but the funny thing is, you yourself are doing it all the time.  look
>at the computer system in front of you: the operating environment it
>is running is such a program.  the only thing about it that you
>cannot change at runtime is the resident part of the OS kernel.

That operating environment is partitioned into units called "programs",
and I can't modify a program while it is running.

If a task has been divided into units, then I would say that replacing
one of those units with an updated version falls into the category of
anticipated change, not unanticipated change.  I don't think I would
have much difficulty doing those sorts of changes at run-time with a
statically typed language.

>so really, when you say that you are not interested in changing stuff
>at runtime, you are actually making a choice.  you are limiting the
>applicability of your preferred language

Perhaps, but not very much, because by designing my software
appropriately, I can ensure that all but a small kernel can be
upgraded without stopping the machine.  This does not require
the ability to make unanticipated changes to the code of a
running program.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fb98acc$1@news.unimelb.edu.au>
Michael Livshin <······@cmm.kakpryg.net> writes:

>Fergus Henderson <···@cs.mu.oz.au> writes:
>
>>>so really, when you say that you are not interested in changing
>>>stuff at runtime, you are actually making a choice.  you are
>>>limiting the applicability of your preferred language
>>
>> Perhaps, but not very much, because by designing my software
>> appropriately, I can ensure that all but a small kernel can be
>> upgraded without stopping the machine.  This does not require the
>> ability to make unanticipated changes to the code of a running
>> program.
>
>OK.  except in my case:
>
>(string= (translation-of "designing appropriately") 
>         "not engaging in compulsive static typing")
>   ==> T
>
>thanks for the clarification,

I agree that one should not engage in _compulsive_ static typing.
Situations do occur in which one should make use of run-time checking
rather than static typing.

But I have not found any really convincing arguments as to why you'd want
to use dynamic typing by default.  Support for dynamic code update has
been proposed as one such reason, but as discussed above we can achieve
that without giving up static typing by structuring the application into
components of some kind, and replacing whole components at a time.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpcoub$s96$1@newsreader3.netcologne.de>
Fergus Henderson wrote:

> I agree that one should not engage in _compulsive_ static typing.
> Situations do occur in which one should make use of run-time checking
> rather than static typing.
> 
> But I have not found any really convincing arguments as to why you'd want
> to use dynamic typing by default.  Support for dynamic code update has
> been proposed as one such reason, but as discussed above we can achieve
> that without giving up static typing by structuring the application into
> components of some kind, and replacing whole components at a time.

Yes, it can be done that way. You can do anything in assembler as well.

The important question is always: How much effort is it?


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <r80bnil1.fsf@ccs.neu.edu>
Adrian Hey <····@NoSpicedHam.iee.org> writes:

> Isn't that precisely what a static type system does for you.
> It tests your code is free of certain classes of error before
> you even get to run it.

Frequently I want to run code that I *know* has type errors.  This may
sound nonsensical, but perhaps I know that a fragment of code has a
path through it that I believe valid even though another path through
it is obviously bogus.  I might want to experiment with the valid part
without having to think hard about the invalid part until later.

I find that I do this quite a bit when I'm experimenting with new code.
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <78lf81-e01.ln1@ID-7776.user.dfncis.de>
Joe Marshall <···@ccs.neu.edu> wrote:
> Adrian Hey <····@NoSpicedHam.iee.org> writes:

>> Isn't that precisely what a static type system does for you.
>> It tests your code is free of certain classes of error before
>> you even get to run it.

> Frequently I want to run code that I *know* has type errors.  This may
> sound nonsensical, but perhaps I know that a fragment of code has a
> path through it that I believe valid even though another path through
> it is obviously bogus.  I might want to experiment with the valid part
> without having to think hard about the invalid part until later.

Then factor out the valid path and experiment with it on its own.  If
it's worth experimenting with, it's probably sufficiently self-contained 
to be worth factoring out. With HOFs it's easy to extract it,
and the let and where clauses in Haskell encourage this style
anyway.

Type inference means that this is no more effort than writing down
a name and doing a simple cut and paste in the editor.

Alternatively, in a language like Haskell that supports pattern
matching, you can often just comment out the alternatives that are
not interesting at the moment. Actually, I do this quite frequently.
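As a minimal sketch of that commenting-out style (the datatype and the
function names here are invented for illustration, not taken from the
thread):

```haskell
-- A hypothetical two-alternative function. The Reboot path is known to
-- be broken, so its clause is commented out; GHC still compiles the
-- module and merely warns about the non-exhaustive match, so the Ping
-- path can be experimented with right away.
data Cmd = Ping | Reboot

perform :: Cmd -> String
perform Ping = "pong"
-- perform Reboot = ...   -- bogus path, disabled for now
```

Evaluating `perform Reboot` then fails at runtime with a pattern-match
error, which is close to the "run it anyway" behaviour being asked for.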

- Dirk
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <8ymimzk9.fsf@ccs.neu.edu>
Dirk Thierbach <··········@gmx.de> writes:

> Joe Marshall <···@ccs.neu.edu> wrote:
>> Adrian Hey <····@NoSpicedHam.iee.org> writes:
>
>>> Isn't that precisely what a static type system does for you.
>>> It tests your code is free of certain classes of error before
>>> you even get to run it.
>
>> Frequently I want to run code that I *know* has type errors.  This may
>> sound nonsensical, but perhaps I know that a fragment of code has a
>> path through it that I believe valid even though another path through
>> it is obviously bogus.  I might want to experiment with the valid part
>> without having to think hard about the invalid part until later.
>
> Then factor out the valid path and experiment with it on its own.

Nope.  Don't want to take code apart and put it back together.

> Type inference means that this is no more effort than writing down
> a name and doing a simple cut and paste in the editor.
>
> Alternatively, in a language like Haskell that supports pattern
> matching, you can often just comment out the alternatives that are
> not interesting at the moment. Actually, I do this quite frequently.

Even temporary changes to the code are a potential source of bugs.
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <d75h81-hm.ln1@ID-7776.user.dfncis.de>
Joe Marshall <···@ccs.neu.edu> wrote:
> Dirk Thierbach <··········@gmx.de> writes:

>> Joe Marshall <···@ccs.neu.edu> wrote:

>>> Frequently I want to run code that I *know* has type errors.  This may
>>> sound nonsensical, but perhaps I know that a fragment of code has a
>>> path through it that I believe valid even though another path through
>>> it is obviously bogus.  I might want to experiment with the valid part
>>> without having to think hard about the invalid part until later.

>> Then factor out the valid path and experiment with it on its own.

> Nope.  Don't want to take code apart and put it back together.

In Lisp, I probably wouldn't want to, either. In e.g. Haskell, code is
normally not longer than a few lines, anyway. If it gets longer, you
give it a name, and "put it together" by HOFs. You frequently see for
example "pipe" constructs like

bigfun = third . second . first 
  where
    first  = ...
    second = ...
    third  = ...

where the steps are put together by function composition. It's quite
natural once you have seen it a few times, and the code becomes better
through it, so factoring out is something you want to do, anyway.
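For instance, one concrete (invented) filling-in of that skeleton:

```haskell
-- Invented example of the "pipe" style: each stage gets a name in the
-- where clause and the stages are glued together with (.).
bigfun :: String -> Int
bigfun = third . second . first
  where
    first  = words        -- split the input into tokens
    second = map length   -- measure each token
    third  = sum          -- total characters, spaces excluded
```

Here `bigfun "static type fun"` evaluates to 13 (6 + 4 + 3), and each
stage can be tested or experimented with on its own.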

Maybe it's easier to see this in a concrete example with some valid
code part.

> Even temporary changes to the code are a potential source of bugs.

IMHO it's a much bigger potential source of bugs if you leave wrong
code as it is and go on doing something else. If you get sidetracked,
it's easy to forget that you wanted to correct it. (Yes, that's
what unit tests are for, I know; but any higher-level test will
also show you if part of the code is missing.)

And since the part of code you're commenting out is wrong, you have
to modify it, anyway.

So it's next to impossible that this part becomes a source of bugs.

(In general, you're right, of course).

- Dirk
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <k761zj0h.fsf@comcast.net>
Dirk Thierbach <··········@gmx.de> writes:

> Joe Marshall <···@ccs.neu.edu> wrote:
>
>> Even temporary changes to the code are a potential source of bugs.
>
> And since the part of code you're commenting out is wrong, you have
> to modify it, anyway.

Actually, it isn't.

Suppose I'm testing some sort of `driver loop':


  (loop (commit (perform-command (parse-command (get-command)))
                (database-state)))

and further suppose that I've added some commands so that there are
more possible outputs from perform-command.  Commit can handle
some of the simple structures that come from perform-command but
since I have been extending the command set, there are return types
that commit cannot handle.

This usage, therefore, will not type check because the return type of
perform-command is a superset of the input type to commit.

But I still want to run the code right up to the point where the call
to commit fails (or perhaps beyond).  Why?  Perhaps I have a base test
suite that has been working and I've been adding commands and I want
to ensure that I didn't break the existing commands.  Perhaps commit
performs a validation step that is polymorphic in the return type of
perform-command, so I actually want to run part of commit.

I understand that there are a plethora of things I could comment out,
modify or refactor in order to make the code acceptable to the static
type checker, but I don't *want* to do that, I just want to try out
what I've got so far.  Besides, if I *do* comment out some code, then
the old test suites won't run, and I'll have to remember to
*un-comment* it later.  This is all extra work.

-- 
~jrm
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <c3bi81-om9.ln1@ID-7776.user.dfncis.de>
Joe Marshall <·············@comcast.net> wrote:
> Dirk Thierbach <··········@gmx.de> writes:

> Suppose I'm testing some sort of `driver loop':
> 
>  (loop (commit (perform-command (parse-command (get-command)))
>                (database-state)))
> 
> and further suppose that I've added some commands so that there are
> more possible outputs from perform-command.  Commit can handle
> some of the simple structures that come from perform-command but
> since I have been extending the command set, there are return types
> that commit cannot handle.

Since commit has to deal with different return "types", they will
be packaged up in a datatype, which makes them a single type as far
as the type checker is concerned.

So if you add new return types, you will extend the datatype
definition. That will give you warnings that you have not done
exhaustive pattern matching in commit, but it will compile and typecheck
(after all, you're checking the types "dynamically" in this case).

So in this example, it does typecheck, and you don't have to change
the source at all to deal with "incomplete" code. No extra work at
all. At the same time, you get the information that your code is still
incomplete, so you know you're not finished yet.
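A hedged sketch of what this could look like in Haskell (all the names
and the shape of the datatype are invented for illustration):

```haskell
-- The possible outputs of perform-command are wrapped in one datatype.
-- Adding the new Extended constructor typechecks everywhere; commit
-- merely triggers a non-exhaustive-pattern warning until it is taught
-- the new case.
data Result = Simple String
            | Extended [String]   -- newly added command result

commit :: Result -> String
commit (Simple s) = "committed: " ++ s
-- No clause for Extended yet: GHC warns but compiles, the old commands
-- still run, and an Extended value fails only when it reaches commit.
```

So the existing test suite keeps running unchanged, while the warning
records that the program is not finished yet.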

- Dirk
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <wua1xqb2.fsf@comcast.net>
Dirk Thierbach <··········@gmx.de> writes:

> Joe Marshall <·············@comcast.net> wrote:
>> Dirk Thierbach <··········@gmx.de> writes:
>
>> Suppose I'm testing some sort of `driver loop':
>> 
>>  (loop (commit (perform-command (parse-command (get-command)))
>>                (database-state)))
>> 
>> and further suppose that I've added some commands so that there are
>> more possible outputs from perform-command.  Commit can handle
>> some of the simple structures that come from perform-command but
>> since I have been extending the command set, there are return types
>> that commit cannot handle.
>
> Since commit has to deal with different return "types", they will
> be packaged up in a datatype, which makes them a single type as far
> as the type checker is concerned.

Oh?  What makes you say that?

-- 
~jrm
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <31pi81-7fd.ln1@ID-7776.user.dfncis.de>
Joe Marshall <·············@comcast.net> wrote:
> Dirk Thierbach <··········@gmx.de> writes:
>> Joe Marshall <·············@comcast.net> wrote:
>>> Dirk Thierbach <··········@gmx.de> writes:

>>> Suppose I'm testing some sort of `driver loop':
>>> 
>>>  (loop (commit (perform-command (parse-command (get-command)))
>>>                (database-state)))
>>> 
>>> and further suppose that I've added some commands so that there are
>>> more possible outputs from perform-command.  Commit can handle
>>> some of the simple structures that come from perform-command but
>>> since I have been extending the command set, there are return types
>>> that commit cannot handle.

>> Since commit has to deal with different return "types", they will
>> be packaged up in a datatype, which makes them a single type as far
>> as the type checker is concerned.

> Oh?  What makes you say that?

Because that's the way I would write it, and it's the simplest way to
make the program type check at all. Of course, it's just my best guess
without knowing the details.

What exactly is commit supposed to do with the different outputs
from perform-command? What would they look like, for example?

- Dirk
From: Thomas Lindgren
Subject: Re: More static type fun.
Date: 
Message-ID: <m34qx85d24.fsf@localhost.localdomain>
Adrian Hey <····@NoSpicedHam.iee.org> writes:

> Nobody doubts the *possibility* of success on large projects with dynamic
> typing (only).  We (static typers) merely doubt the probability and ease
> of success.

Why? There are plenty of industrial success stories for the dynamic
camp of functional programming. It's clearly a proven technology.

Best,
                        Thomas
-- 
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin
 
From: Adrian Hey
Subject: Re: More static type fun.
Date: 
Message-ID: <bp20dd$igb$2$8300dec7@news.demon.co.uk>
Thomas Lindgren wrote:

> 
> Adrian Hey <····@NoSpicedHam.iee.org> writes:
> 
>> Nobody doubts the *possibility* of success on large projects with dynamic
>> typing (only).  We (static typers) merely doubt the probability and ease
>> of success.
> 
> Why? There are plenty of industrial success stories for the dynamic
> camp of functional programming. It's clearly a proven technology.

Sure there are. I regularly use languages which are either untyped
or so pathetically typed they might just as well be untyped. Like
I said, I don't doubt the possibility of writing correct and 
robust programs in these languages. But this *despite* their
lack of static (or even dynamic) type security, not *because*
of their lack of static type security.

If you have 2 options..

1- Certain classes of common errors will be detected at compile
   time *for certain*.

2- Those same errors might be detected at run-time if you're
   lucky and/or your tests are thorough enough.

I still fail to see why anybody would choose option 2 in preference
to option 1, unless the restrictions imposed on programming style
to give option 1 were severe. Dynamic typers seem to believe this
is the case. Personally I believe that this view is simply the
result of inexperience with languages which have a decent static
type system.

Pascal's point about some perfectly reasonable things (such as dynamic
metaprogramming) being irreconcilable with static typing is reasonable
I think, but IMO overlooks the point that static typing is not
irreconcilable with dynamic typing where needed.

Regards
--
Adrian Hey
 

   
From: Mario S. Mommer
Subject: Re: More static type fun.
Date: 
Message-ID: <fzy8uj40dk.fsf@cupid.igpm.rwth-aachen.de>
Adrian Hey <····@NoSpicedHam.iee.org> writes:
> Like I said, I don't doubt the possibility of writing correct and
> robust programs in these languages. But this *despite* their lack of
> static (or even dynamic) type security, not *because* of their lack
> of static type security.

The issue is that you can write correct and robust programs in these
languages _so_damn_well_ "despite" their lack of type security, that
your "despite" looks just like a joke. The above sentence sounds like
"Well, I don't doubt the possibility of these people having fun on a
trip to London, *despite* their lack of a sound strategy to defend
against elephants".

> If you have 2 options..
> 
> 1- Certain classes of common errors will be detected at compile
>    time *for certain*.
> 
> 2- Those same errors might be detected at run-time if you're
>    lucky and/or your tests are thorough enough.
> 
> I still fail to see why anybody would choose option 2 in preference
> to option 1, unless the restrictions imposed on programming style
> to give option 1 were severe. Dynamic typers seem to believe this
> is the case.

Your pessimistic spin on point 2 makes me think you never write actual
programs.

> Personally I believe that this view is simply the result of
> inexperience with languages which have a decent static type system.

The thing is, for one, that for what you call static typing you
need to do things like using a different operator for floating-point
operations than for integers, or invoking category theory to
implement interactive behavior. Up with this I shall not put.

> Pascal's point about some perfectly reasonable things (such as dynamic
> metaprogramming) being irreconcilable with static typing is reasonable
> I think, but IMO overlooks the point that static typing is not
> irreconcilable with dynamic typing where needed.

Up to a point, I agree with you. If you ever happen to try cmucl or
sbcl you will see that when you add declarations, and often even
when you don't, static type inference is in fact used.
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bp2jhr$57h$1@grizzly.ps.uni-sb.de>
Mario S. Mommer wrote:
> 
> The thing is, for one, that for that what you call static typing you
> need to do things like using a different operator for floating point
> operations than for integers, or invoking category theory for
> implementing interactive behavior.

Attention: off-scale readings from FUD sensors!

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Adrian Hey
Subject: Re: More static type fun.
Date: 
Message-ID: <bp3chp$ju8$2$8300dec7@news.demon.co.uk>
Mario S. Mommer wrote:
> Adrian Hey <····@NoSpicedHam.iee.org> writes:
>> Like I said, I don't doubt the possibility of writing correct and
>> robust programs in these languages. But this *despite* their lack of
>> static (or even dynamic) type security, not *because* of their lack
>> of static type security.
> 
> The issue is that you can write correct and robust programs in these
> languages _so_damn_well_ "despite" their lack of type security, that
> your "despite" looks just like a joke.

I think we may be missing some context here. I was talking about the
untyped (assembler) or pathetically typed (c) languages *I* use regularly.
You seem to have interpreted this as an attack on Lisp, or Python maybe?

The implication being that if it's possible to produce reliable s/w
in these languages (and I know it is) then I certainly don't doubt
the possibility of doing the same in Lisp,Python,Erlang..  

I'm sure each of these languages has many other redeeming qualities
and useful features. All I'm saying is that I'm still extremely
sceptical about claims that lack of static type security is one
of them.

> The above sentence sounds like
> "Well, I don't doubt the possibility of these people having fun on a
> trip to London, *despite* their lack of a sound strategy to defend
> against elephants".

Perhaps people who have never used a decent statically typed language
are not aware of just how often they do get attacked by elephants.

To be more explicit, what I mean is that maybe the reason
some users of untyped or weakly typed languages are sceptical
of the value of static type checking is that a typical bug is not
recognised as a "type" error, but as a "value" error. Taken
to its logical extreme: in a completely untyped language (untyped
lambda calculus or assembler, say), "type errors" will never occur.
So should we conclude that any form of static type system is a waste of
time because it only protects us from errors which never occur in
practice?
 
>> If you have 2 options..
>> 
>> 1- Certain classes of common errors will be detected at compile
>>    time *for certain*.
>> 
>> 2- Those same errors might be detected at run-time if you're
>>    lucky and/or your tests are thorough enough.
>> 
>> I still fail to see why anybody would choose option 2 in preference
>> to option 1, unless the restrictions imposed on programming style
>> to give option 1 were severe. Dynamic typers seem to believe this
>> is the case.
> 
> Your pessimistic spin on point 2 makes me think you never write actual
> programs.

Yeah right. I've only been doing it 40 hours a week for the
last 20 years. The reality in all organisations I've worked in
is that time available for "testing it to death" is scarce and
expensive. I can't see any good reason to waste however many
hours or days of my precious test/debug time chasing down bugs
this way when many of them should (in principle at least) have
been detectable at compile time.

Of course, what's possible in principle (theoretically) and what's
actually possible in practice (using the less than perfect languages
forced on us by pragmatism, economics, contracts, availability...) are
rather different things.

Regards
--
Adrian Hey
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <aeb7ff58.0311152033.34f45d11@posting.google.com>
Adrian Hey <····@NoSpicedHam.iee.org> wrote in message news:<·····················@news.demon.co.uk>...

> Mario S. Mommer wrote:
> > The above sentence sounds like
> > "Well, I don't doubt the possibility of these people having fun on a
> > trip to London, *despite* their lack of a sound strategy to defend
> > against elephants".
> 
> Perhaps people who have never used a decent statically typed language
> are not aware of just how often they do get attacked by elephants.

In London?

Actually, I was on vacation in London earlier this year. I did see one
guy who was screaming about being attacked by elephants, but that was
right before he passed out. He'd just finished off a fifth of some
really nasty rotgut - I think it was called "Old Static Typing." ;^)

Point is, static typing is an effort to solve a problem that:

1. when solved, still doesn't guarantee program correctness. That is,
testing is still needed.
2. if not solved, results in runtime type errors which will be caught
by the tests you have to do anyway - see #1.
3. isn't really all that frequently seen in the wild. That is, runtime
type errors aren't really seen all that often in programs written in
lisp or smalltalk.
4. in order to solve, forces the programmer to do extra work if he
also wants the flexibility of dynamic typing. In particular, it causes
this extra work at the earliest stages of exploratory programming,
when one is trying to run a program that one knows has paths that
won't statically type check, because it isn't finished yet. This is a
particularly bad time to force the programmer to do book keeping
chores, as it breaks up the flow of the exploration of new ideas.

This makes static typing undesirable for precisely the same reason
that garbage collection and a top-level are desirable. Programmers can
do more exploratory programming faster if they aren't constantly
interrupted by book keeping chores.

This leads dynamic typing advocates to conclude that static typing as
a default with dynamic typing as the exception, through extra work, is
reversed. We prefer the default to be flexibility, with static type
guarantees to be the exception, through extra work.
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <dutj81-f31.ln1@ID-7776.user.dfncis.de>
Raffael Cavallaro <·······@mediaone.net> wrote:

> Point is, static typing is an effort to solve a problem that:

> 1. when solved, still doesn't guarantee program correctness. That is,
> testing is still needed.

But it will write a lot of the low level tests for you, so you don't
have to write them by hand. That saves a lot of time.

And, it is *better* than generating those tests by hand, because it will
guarantee you 100% code coverage, without any additional effort.

> 2. If not solved, results in runtime type errors which will be caught
> by the tests you have to do anyway - see #1.

Again, you need a lot fewer tests.

> 3. Isn't really all that frequently seen in the wild. That is, runtime
> type errors aren't really seen all that often in programs written in
> lisp or smalltalk.

Because they are caught by the tests (and you have to invest time
to write them).

> 4. In order to solve, forces the programmer to do extra work if he
> also wants the flexibility of dynamic typing. 

No, it doesn't. That's the misconception that comes from using crappy
static type systems like C and Java. With a polymorphic type system,
you are (almost, to keep Joe happy) as flexible as with dynamic
types. The only "extra work" that is involved is explicit notation of
type tags in cases where you want to go "dynamic". But that's usually
not a drawback, because it documents what you do, and you have to
do the same in a dynamically typed language if you want to inspect the
values, so it's not "extra".

> In particular, it causes this extra work at the earliest stages of
> exploratory programming,

No, it doesn't. Some time ago I tried to give a simple example of
what exploratory programming can look like.

> when one is trying to run a program that one knows has paths that
> won't statically type check, because it isn't finished yet. 

If you write the paths incrementally as cases in the pattern matching,
you don't have to worry about this at all. If I write a function that
I know is not correct, be it for syntactic or for static-type
reasons, I don't expect it to compile, and I am not able to use it
anyway.
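
A toy illustration of writing the paths incrementally (the Command
type and its cases are made up): stub the unfinished branch and the
program still compiles and runs.

```haskell
-- Hypothetical command type; only two of the three paths are written.
data Command = Parse String | Eval String | Optimize String

run :: Command -> String
run (Parse s)    = "parsed: " ++ s
run (Eval s)     = "evaluated: " ++ s
run (Optimize _) = error "TODO: not written yet"  -- placeholder; typechecks

-- The finished paths are usable right away; hitting the stub fails
-- at run time, much as calling an undefined function would in Lisp.
```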

> This is a particularly bad time to force the programmer to do book
> keeping chores, as it breaks up the flow of the exploration of new
> ideas.

HM-style typing never involves book-keeping chores (save the
declaration of datatypes, and that's not a "chore"), because you don't
have to write down type annotations.  You *can* write them down
however, to get a test/specification so you can check the program
you wrote against your expectation. 
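
For instance (a made-up function), the annotation below acts as a
machine-checked specification: if the body's inferred type disagrees
with it, the compiler points at the mismatch.

```haskell
-- Spec: count how many elements satisfy a predicate.
countIf :: (a -> Bool) -> [a] -> Int
countIf p = length . filter p

-- Had we absent-mindedly written just 'filter p', the inferred type
-- (a -> Bool) -> [a] -> [a] would clash with the declared Int result,
-- and compilation would fail right there, before any test runs.
```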

> This makes static typing undesirable for precisely the same reason
> that garbage collection and a top-level are desirable. 

Garbage collection is great, and a top-level is good for doing
experiments. Both are available.

> Programmers can do more exploratory programming faster if they
> aren't constantly interrupted by book keeping chores.

That's true; but the point is that they are not interrupted.

> This leads dynamic typing advocates to conclude that static typing as
> a default with dynamic typing as the exception, through extra work, is
> reversed. 

"Reversed" sounds suspiciously like something you could judge objectively.

> We prefer the default to be flexibility, with static type
> guarantees to be the exception, through extra work.

It's of course up to you what you prefer. *I* prefer static typing
because it keeps me from the extra work of writing low-level unit
tests and allows flexibility in design while at the same time
documenting my intentions. Additionally, it allows me to express
invariants in the "type language", which results in even fewer tests
and more confidence in my program.

- Dirk
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <aeb7ff58.0311160916.58c5001e@posting.google.com>
Dirk Thierbach <··········@gmx.de> wrote in message news:<··············@ID-7776.user.dfncis.de>...

> It's of course up to you what you prefer. *I* prefer static typing
> because it keeps me from the extra work of writing low-level unit
> tests and allows flexibility in design while at the same time
> documenting my intentions.

No question that this is a matter of programmer preference. I do
think, however, that what one prefers is greatly influenced by one's
perception of how common runtime type errors are. In other words, what
is the tradeoff between the time spent making sure that one's code is
acceptable to the type checker, and the number of would-be runtime
type errors caught at compile time that would not otherwise be
caught in testing anyway?

Clearly, you think that the number of otherwise undetected runtime
type errors would be large, so it is worth it to you to make the small
additional effort to satisfy the type checker in those rare instances
where you need that particular sort of dynamic typing.

The perception of lisp and smalltalk programmers is that this number
of otherwise undetected runtime type errors is small, and not worth
the extra effort of placating the compiler, especially when we're
iteratively writing and testing semi-finished code at the earliest
stages of exploratory programming. To a lisper, Haskell really does
require one to do more in the way of explicitly informing the compiler
than lisp - for example, having to explicitly tell the compiler that
you want to use lists with more than one type of element. This
interruption to one's work flow and thought process has to be weighed
against the number of would-be runtime type errors that would not
otherwise be spotted in testing - a number which dynamic typing
advocates perceive to be small.

I would have to agree with you, that for many, probably most programs,
you will never really need the extra dynamism, so it is tempting to
gain the added assurances of static type checking.

Unfortunately, we sometimes can't know in advance which is the program
that will need that extra dynamism - the ability to reshape the
program without having to largely rewrite it. This was Pascal
Costanza's point, BTW. Not being able to tell in advance what sort of
unanticipated circumstances will create this need, he, and other
lispers/smalltalkers feel safer knowing they have this ability should
they need it. Letting what seems to us to be a very small number of
runtime type errors slip through seems a small price to pay for this
flexibility. The thing about such flexibility is that you don't
usually need it. But when you do, it saves a great deal of time and
effort. Once one gets used to it of course, one comes to rely on it as
a matter of course, and so, one uses it more often.


> Additionally, it allows me to express
> invariants in the "type language", which results in even fewer tests
> and more confidence in my program.

Certainly this is an advantage if one is used to the particular
language's type system, just as lisp's dynamism seems invaluable to
those who are used to programming this way.
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <n6sk81-f02.ln1@ID-7776.user.dfncis.de>
Raffael Cavallaro <·······@mediaone.net> wrote:
> Dirk Thierbach <··········@gmx.de> wrote:

> No question that this is a matter of programmer preference. I do
> think, however, that what one prefers is greatly influenced by one's
> perception of how common runtime type errors are. 

I don't think so.

> In other words, what is the tradeoff between time spent making sure
> that one's code is acceptable to the type checker,

I think it really would help if you could accept that with static
types, you don't have to necessarily invest "useless" time to
"make the code acceptable". Yes, I know with C and Java, you often do.

Try to think of type checks in a polymorphic typed language as a
test-suite that is automatically executed at compile-time, because
passing those tests constitutes the minimum requirement that allows
your program to actually run.

Would you say that you have to "waste" time to make sure that your
code is "acceptable" to your unit tests? The time you spend during
this process is exactly the time that you need to develop your ideas
and shape the final program.

> Clearly, you think that the number of otherwise undetected runtime
> type errors would be large, 

No, I don't think they would. I think the amount of time I would have
to invest in unit tests to catch bugs and gotchas in my program would
be large. That's why you hear from static typers "as soon as my
program typechecks, it is usually correct". (Usually, not
always). That's not because in some magical way, the type checker has
proved them to be bug free, that's because at that stage, the program
has already passed a lot of tests, without the need to write down a
single one of them.

> The perception of lisp and smalltalk programmers is that this number
> of otherwise undetected runtime type errors is small, and not worth
> the extra effort of placating the compiler, 

Still the same problem: You think this is about undetected runtime
type errors, and about placating the compiler. It is not.

> To a lisper, Haskell really does require one to do more in the way
> of explicitly informing the compiler than lisp - for example, having
> to explicitly tell the compiler that you want to use lists with more
> than one type of element.

But the point is that you think differently. You think "will I use
these things uniformly in my program? If yes, it's a list with
a common type. Will I have to make decisions based on what type
there is in the list? If yes, make those decisions explicit (you
can defer this until you reach the part of the program where you
actually have to make the decisions; type inference will figure
it out). Is it not really a list with things I want to operate
uniformly on, but does it contain different things, and I never
intend to use them uniformly? If yes, use tuples."

Lispers do all three things with one datatype, namely a list. So
they have to get used to a new language feature. Once you have
absorbed the difference, it will never slow you down.
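
The three choices might look like this in Haskell (all names invented
for the sketch):

```haskell
-- 1. Elements used uniformly: a list with one common type.
temps :: [Double]
temps = [12.5, 13.0, 11.8]

-- 2. Mixed elements inspected case by case: tag each alternative.
data Field = I Int | S String   -- the explicit "heterogeneous list" tags

row :: [Field]
row = [I 42, S "hello"]

showField :: Field -> String
showField (I n) = show n
showField (S s) = s

-- 3. Fixed positions, never traversed uniformly: a tuple.
point :: (Double, Double, String)
point = (1.0, 2.0, "label")
```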

> This interruption to one's work flow and thought process

There is no interruption in work flow and thought process. It's an
integral part of the thought process.

> Unfortunately, we sometimes can't know in advance which is the program
> that will need that extra dynamism - the ability to reshape the
> program without having to largely rewrite it. 

Yes. And that's what polymorphic types are good for. The type checker
will figure out what degree of generality my functions have. (I have
run into examples where I figured out by looking at the type after I
wrote my function, that I could use the very same function in very
different situations as well).

Here's a simple example: The well-known map function has type

map :: (a -> b) -> [a] -> [b].

That means I can use map with a function that converts from type a to
type b on any list with elements of type a, and I will get a list of
type b. And I don't have to rewrite it, no matter what a and b will
be. It might even turn out that a is a datatype that mixes, say,
integers and floats, or functions themselves, or trees, any other
thing one can imagine.
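
Concretely, the same Prelude map can be instantiated at quite
different types without touching its definition (the bindings below
are only illustrative):

```haskell
doubled :: [Int]
doubled = map (* 2) [1, 2, 3]         -- a = Int,        b = Int

lengths :: [Int]
lengths = map length ["ab", "cde"]    -- a = String,     b = Int

applied :: [Int]
applied = map ($ 10) [(+ 1), (* 3)]   -- a = Int -> Int, b = Int
```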

> he, and other lispers/smalltalkers feel safer knowing they have this
> ability should they need it.

And I also feel very safe knowing I have the ability to change my
program in unexpected ways should I need it. I feel even safer,
because the automatic test-suite the type checker creates for me is
a big help in refactoring. So when it turns out that I have to change
a small part of my program, the type checker will flag all the places
where this affects the rest of my program, so I can adapt them one
after the other without any fear of forgetting any of them. (After
all, my hand-written unit tests might not have 100% code coverage,
because I was too lazy writing them).

Raffael, I know your arguments, and I have heard them often enough.
They are certainly true for characterizing the difference between Lisp
and statically typed languages like C and Java. But they just don't
apply to polymorphic HM-style type systems; these are very different
beasts.

- Dirk
From: Feuer
Subject: Re: More static type fun.
Date: 
Message-ID: <3FB84CAA.52479A12@his.com>
Dirk Thierbach wrote:

> No, I don't think they would. I think the amount of time I would have
> to invest in unit tests to catch bugs and gotchas in my program would
> be large. That's why you hear from static typers "as soon as my
> program typechecks, it is usually correct". (Usually, not
> always). That's not because in some magical way, the type checker has
> proved them to be bug free, that's because at that stage, the program
> has already passed a lot of tests, without the need to write down a
> single one of them.

There's another big advantage to static typing, which is that it
allows the programmer to declare types and have the compiler check
to make sure that the declared type matches the inferred type.
If I start writing down the types I want and find that I am having
trouble getting them to work out the way I want, I often find that
the problem is not an overly-fussy type checker but rather a
serious flaw in the way I have been thinking through the problem.
So getting code to pass the type checker is not a barrier to
exploratory programming, but rather a large part of it.

> Lispers do all three things with one datatype, namely a list. So
> they have to get used to a new language feature. Once you have
> absorbed the difference, it will never slow you down.

I would say it will slow you down -- just enough to make you realize
that you were framing the problem wrong in your mind.

David
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <raffaelcavallaro-ED64EF.23084816112003@netnews.attbi.com>
In article <··············@ID-7776.user.dfncis.de>,
 Dirk Thierbach <··········@gmx.de> wrote:

> But the point is that you think differently. You think "will I use
> these things uniformly in my program? If yes, it's a list with
> a common type. Will I have to make decisions based on what type
> there is in the list? If yes, make those decisions explicit (you
> can defer this until you reach the part of the program where you
> actually have to make the decisions; type inference will figure
> it out). Is it not really a list with things I want to operate
> uniformly on, but does it contain different things, and I never
> intend to use them uniformly? If yes, use tuples."

But it is precisely this requirement that the programmer think 
differently that constitutes an interruption to the flow of exploratory 
programming. In other words, having to choose specific data types at a 
stage in development when you know, with certainty, that your current 
data representations are not the ones you will use in the final program. 
Having to think about these issues at this stage is precisely what 
dynamic typing advocates don't want to do.

> Lispers do all three things with one datatype, namely a list. So
> they have to get used to a new language feature. Once you have
> absorbed the difference, it will never slow you down.

I have no doubt that any decent programmer (and I like to think I could 
be included in that number) could become proficient at thinking in the 
Haskell type system. But I prefer to shape the language to the task at 
hand, rather than shape my thinking to the language, or its type system.

> Raffael, I know your arguments, and I have heard them often enough.
> They are certainly true for characterizing the difference between Lisp
> and statically typed languages like C and Java. But they just don't
> apply to polymorphic HM-style type systems; these are very different
> beasts.

They are less ugly beasts than C and Java, but HM style type systems 
still impose a burden on the programmer to conform his thought to the 
type system. And the only benefit to show for this constraint on one's 
thinking is a suite of automatic tests for errors that are going to be 
flagged in other required testing anyway, with very few exceptions.
From: Christopher C. Stacy
Subject: Re: More static type fun.
Date: 
Message-ID: <ur8078vad.fsf@dtpq.com>
>>>>> On Mon, 17 Nov 2003 04:08:49 GMT, Raffael Cavallaro ("Raffael") writes:

 Raffael> In article <··············@ID-7776.user.dfncis.de>,
 Raffael>  Dirk Thierbach <··········@gmx.de> wrote:

 >> But the point is that you think differently. You think "will I use
 >> these things uniformly in my program? If yes, it's a list with
 >> a common type. Will I have to make decisions based on what type
 >> there is in the list? If yes, make those decisions explicit (you
 >> can defer this until you reach the part of the program where you
 >> actually have to make the decisions; type inference will figure
 >> it out). Is it not really a list with things I want to operate
 >> uniformly on, but does it contain different things, and I never
 >> intend to use them uniformly? If yes, use tuples."

 Raffael> But it is precisely this requirement that the programmer think 
 Raffael> differently that constitutes an interruption to the flow of exploratory 
 Raffael> programming. In other words, having to choose specific data types at a 
 Raffael> stage in development when you know, with certainty, that your current 
 Raffael> data representations are not the ones you will use in the final program. 
 Raffael> Having to think about these issues at this stage is precisely what 
 Raffael> dynamic typing advocates don't want to do.

And we don't generally object to adding them later on,
but only to facilitate certain compiler optimizations.

Barring that, we don't usually want to go back later on and commit 
to code the nailing down of variable types, because we think the
type errors are not very likely to occur at that stage of maturity.
(And it would preclude further freewheeling changes later on.)

But in Lisp, we do declare some data types (eg. method arglists).
And the good compilers do perform some inference-based type analysis.

 Raffael> HM style type systems still impose a burden on the
 Raffael> programmer to conform his thought to the type system.
 Raffael> And the only benefit to show for this constraint on one's
 Raffael> thinking is a suite of automatic tests for errors that are
 Raffael> going to be flagged in other required testing anyway,
 Raffael> with very few exceptions.

And we unabashedly call that "religion".

It's useful when we can learn something about each other's religions.
But trying to convert people through doctrine and hoping for epiphany, 
without motivating them with their own personal experiences, is unlikely
to be very successful.  It wastes time and annoys the pig,
who probably loves to wallow in the ball of mud.
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <67im81-9o1.ln1@ID-7776.user.dfncis.de>
Christopher C. Stacy <······@dtpq.com> wrote:
> Raffael Cavallaro ("Raffael") writes:
>>  Dirk Thierbach <··········@gmx.de> wrote:

[Using the language-appropriate features instead of a Lisp list]

>> But it is precisely this requirement that the programmer think
>> differently that constitutes an interruption to the flow of
>> exploratory programming.

No, it doesn't. That's a requirement that you adapt to the *language*
you are using. If I use Lisp, I have to think in Lisp. If I use
Smalltalk, I have to think in Smalltalk. I cannot write Lisp in
Smalltalk, or in Haskell. That's exactly the same mistake that Pascal
makes all the time.

>> In other words, having to choose specific data types at a 
>> stage in development when you know, with certainty, that your current 
>> data representations are not the ones you will use in the final program. 

But I know with certainty if I will use some data uniformly, or if I
won't. Smalltalkers have to do that, too: They know in advance if they use
a collection, or if they make a new class. And Smalltalk is dynamically
typed. Do you want to tell me that Smalltalk "slows you down" because
one may need to convert the contents of a collection into an object, or
vice-versa?

The *concrete* data representation stays polymorphic until I actually
implement it, just as in Lisp.

>> Having to think about these issues at this stage is precisely what 
>> dynamic typing advocates don't want to do.

Why? What you don't want to think about is the *concrete* representation
of your data. And I don't want to think about this either, until I
actually use it.

Please give me an example where you used some data, say, a list, uniformly, 
and later decided that in fact you wanted to use it positionally only.

> Barring that, we don't usually want to go back later on and commit 
> to code the nailing down of variables types, 

And I don't have to do that either. If it is polymorphic, I am not
nailing down anything. If it has tagged alternatives, I can add as many
alternatives later as I like (as dynamic typers can).

> But in Lisp, we do declare some data types (eg. method arglists).
> And the good compilers do perform some inference-based type analysis.

But it's very very weak.

>> HM style type systems still impose a burden on the programmer to
>> conform his thought to the type system.

No. The type system does not impose that burden, the *language* does.
And every language does that. If I program in Lisp, Lisp forces me
to adapt to The Lisp Way. You don't notice that any longer, because
it is natural for you.

>> And the only benefit to show for this constraint on one's thinking
>> is a suite of automatic tests for errors that are going to be
>> flagged in other required testing anyway, with very few exceptions.

The whole point of a static type system is to avoid some of the burden
of "required" testing, by using a method that is more clever than
having to write lots of tests by hand. So the "anyway" is not true.

> And we unabashedly call that "religion".

Yes. The problem with religions is that it's easy to repeat something
often enough it becomes a matter of belief instead of reason. I don't
want to stop your worshipping your god. I only would like you to acknowledge
that there are other gods who might be as powerful as your god. For 
myself, I am fairly agnostic, and I have no problem dealing with different
gods as the situation requires.

It's not true that "Lisp is The ONLY Way to achieve easy and
flexible development". And it's not true that "ALL static typing 
is evil". It's true that "Lisp is ONE way to achieve easy and
flexible development", and "SOME static typing is evil". Do you see
the difference? It's small, but very significant.

> But trying to convert people through doctrine and hoping for
> epiphany, without motivating them with their own personal
> experiences, is unlikely to be very successful.

I have made the personal experience that I can do very flexible
development in Lisp, and I have tried to share some of my experience
how I can do the same in HM-style statically typed languages. I have
also made the personal experience that static typed languages like
C or Java force me to placate the typechecker, to nail down data
types prematurely, etc. So two thirds of my experience match yours.
You're missing the last third.

> It wastes time and annoys the pig, who probably loves to wallow in
> the ball of mud.

I am not really sure what you mean by this.

- Dirk
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <raffaelcavallaro-89BD8E.18325717112003@netnews.attbi.com>
In article <··············@ID-7776.user.dfncis.de>,
 Dirk Thierbach <··········@gmx.de> wrote:

> No, it doesn't. That's a requirement that you adapt to the *language*
> you are using. If I use Lisp, I have to think in Lisp.

If I program in lisp, I must think in lisp - lisp syntax, lisp functions, 
etc. If I program in Haskell, I must think in Haskell - Haskell syntax, 
Haskell functions, etc.  *and* I must also think in the HM style type 
system. It is this *additional* requirement, that I see as pointless, 
because it buys me essentially nothing. Virtually all the errors that 
the type system catches will be caught by other required testing anyway.

You are selling a solution to a problem that I don't have. I have to 
test anyway - see Erann Gat's post entitled "Why I don't believe in 
static typing."




N.B. please realize that you've mixed quotes from both me and 
Christopher Stacy  without noting which of us wrote what.
From: ··········@ii.uib.no
Subject: Re: More static type fun.
Date: 
Message-ID: <eg1xs6rr18.fsf@sefirot.ii.uib.no>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:

> In article <··············@ID-7776.user.dfncis.de>,
>  Dirk Thierbach <··········@gmx.de> wrote:

>> No, it doesn't. That's a requirement that you adapt to the *language*
>> you are using. If I use Lisp, I have to think in Lisp.

> If I program in lisp, I must think in lisp - lisp syntax, lisp functions, 
> etc. If I program in Haskell, I must think in Haskell - Haskell syntax, 
> Haskell functions, etc.  *and* I must also think in the HM style type 
> system. 

This is obviously true if you are a compiler.  I like Haskell,
precisely because I have tools that reason about types for me, so that
I can focus on expressing the algorithm.  With a dynamically typed
language, I must do this reasoning myself.

> It is this *additional* requirement, that I see as pointless, 

Right.

> You are selling a solution to a problem that I don't have. I have to 
> test anyway - see Erann Gat's post entitled "Why I don't believe in 
> static typing."

a) It is about languages without good type systems; which we all agree
are inferior

b) The problem was caused by race conditions in threaded code.  Now
I'd be the first to agree that the typical C "share everything"
multithreading paradigm is a recipe for disaster, but it's quite
orthogonal to static typing.  And as others have pointed out,
threading is hard or impossible to cover adequately with tests.

I think it is sad when people make up their minds based on
examples like these.  Good cases can be made for dynamic typing, but I
don't think that was one of them. 

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Espen Vestre
Subject: Re: More static type fun.
Date: 
Message-ID: <kwbrra82gz.fsf@merced.netfonds.no>
··········@ii.uib.no writes:

> a) It is about languages without good type systems; which we all agree
> are inferior

...but typically the lone lisp programmer has to fight the 'java has
this great static type system' argument on two fronts (ignorant java
programmers and totally ignorant PHBs).

I guess if the unlikely event happened that the lone lisp programmer
was to meet the lone haskell programmer out there in the real, cruel
commercial world, they would be so happy that they would join forces
and learn from each other rather than fighting endless language wars
like this one (which starts to annoy me, there seems to be little will
to learn anything on both sides).
-- 
  (espen)
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <pe6p81-jq.ln1@ID-7776.user.dfncis.de>
Espen Vestre <·····@*do-not-spam-me*.vestre.net> wrote:
> ··········@ii.uib.no writes:

>> a) It is about languages without good type systems; which we all agree
>> are inferior
> 
> ...but typically the lone lisp programmer has to fight the 'java has
> this great static type system' argument on two fronts (ignorant java
> programmers and totally ignorant PHBs).
> 
> I guess if the unlikely event happened that the lone lisp programmer
> was to meet the lone haskell programmer out there in the real, cruel
> commercial world, they would be so happy that they would join forces
> and learn from each other 

Yes, that would be nice.

> rather than fighting endless language wars like this one (which
> starts to annoy me, there seems to be little will to learn anything
> on both sides).

I still haven't given up hope completely.

- Dirk
From: Tomasz Zielonka
Subject: Re: More static type fun.
Date: 
Message-ID: <slrnbrlgs8.66t.t.zielonka@zodiac.mimuw.edu.pl>
Espen Vestre wrote:
> ··········@ii.uib.no writes:
> 
>> a) It is about languages without good type systems; which we all agree
>> are inferior
> 
> ...but typically the lone lisp programmer has to fight the 'java has
> this great static type system' argument on two fronts (ignorant java
> programmers and totally ignorant PHBs).
> 
> I guess if the unlikely event happened that the lone lisp programmer
> was to meet the lone haskell programmer out there in the real, cruel
> commercial world, they would be so happy that they would join forces
> and learn from each other rather than fighting endless language wars
> like this one (which starts to annoy me, there seems to be little will
> to learn anything on both sides).

Funny... I was thinking exactly about this today :)

Best regards,
Tom

-- 
.signature: Too many levels of symbolic links
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <ba6p81-jq.ln1@ID-7776.user.dfncis.de>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> wrote:
> Dirk Thierbach <··········@gmx.de> wrote:

>> No, it doesn't. That's a requirement that you adapt to the *language*
>> you are using. If I use Lisp, I have to think in Lisp.

> If I program in lisp, I must think in lisp -lisp syntax, lisp functions, 
> etc. If I program in Haskell, I must think in Haskell - Haskell syntax, 
> Haskell functions, etc.  *and* I must also think in the HM style type 
> system. 

But you make the decision whether to use a list or a tuple independently
of any typing issues. You base this decision on the intended algorithmic
*use* of your values. 
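
Dirk's distinction can be sketched in Haskell (hypothetical names, for illustration only): a list is for uniform use, where every element plays the same role; a tuple is for positional use, where each slot has its own meaning and possibly its own type.

```haskell
-- Hypothetical sketch: uniform vs. positional data.
scores :: [Double]        -- uniform: any length, every element alike
scores = [70, 85, 90]

person :: (String, Int)   -- positional: exactly two slots, each with
person = ("Ada", 36)      -- its own meaning and its own type

mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)
```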

> I have to test anyway - see Erann Gat's post entitled "Why I don't
> believe in static typing."

See the answers there.

> N.B. please realize that you've mixed quotes from both me and 
> Christopher Stacy  without noting which of us wrote what.

Be assured that I noted which of you wrote it. I even included the
attributions at the beginning.

I am sometimes using the "general" you, and I am sometimes using
the personal you when I am directly answering the quoted text above.
I am sorry if this is confusing; I probably should have used names
in those cases.

- Dirk
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpcuko$tlc$1@f1node01.rhrz.uni-bonn.de>
Dirk Thierbach wrote:
> Christopher C. Stacy <······@dtpq.com> wrote:
> 
>>Raffael Cavallaro ("Raffael") writes:
>>
>>> Dirk Thierbach <··········@gmx.de> wrote:
> 
> [Using the language appropriate features instead of a Lisp list]
> 
>>>But it is precisely this requirement that the programmer think
>>>differently that constitutes an interruption to the flow of
>>>exploratory programming.
> 
> No, it doesn't. That's a requirement that you adapt to the *language*
> you are using. If I use Lisp, I have to think in Lisp. If I use
> Smalltalk, I have to think in Smalltalk. I cannot write Lisp in
> Smalltalk, or in Haskell. That's exactly the same mistake that Pascal
> makes all the time.

Maybe I have used a very bad wording in this regard.

I could equally ask the question why Haskell doesn't allow me to 
completely switch off static typing in order to use a more exploratory 
programming style, or for example, run a program that contains type 
errors in order to see what happens at runtime, and some other things 
that have been mentioned as advantages of dynamic type systems.

The question is not why I need to switch from Lisp to another language 
in order to make use of some of the advantages of static type systems, 
but rather why do I need to switch languages at all in order to switch 
between dynamic and static typing.

(For example, Strongtalk and Objective-C are relatively flexible in this 
regard - you can choose to use dynamic or static typing on a 
case-by-case basis. However, their static typed part uses explicit typing.)


Pascal

P.S.: Of course, I want s-expressions, but that's just a detail. ;-)

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpd20r$fu0$1@grizzly.ps.uni-sb.de>
Pascal Costanza wrote:
> 
> I could equally ask the question why Haskell doesn't allow me to
> completely switch off static typing in order to use a more exploratory
> programming style

We already told you: because it does not have an untyped semantics. 
Execution semantics of Haskell code is inherently dependent on static type 
information. (Remember the expressiveness discussion?)

Trying to turn Haskell into a dynamically typed language is equally 
pointless as trying to turn Lisp into a statically typed one. It won't work 
either way, because the design choices made in each language make it 
impractical.

        - Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpd76b$16vo$1@f1node01.rhrz.uni-bonn.de>
Andreas Rossberg wrote:

> Pascal Costanza wrote:
> 
>>I could equally ask the question why Haskell doesn't allow me to
>>completely switch off static typing in order to use a more exploratory
>>programming style
> 
> We already told you: because it does not have an untyped semantics. 
> Execution semantics of Haskell code is inherently dependent on static type 
> information. (Remember the expressiveness discussion?)
> 
> Trying to turn Haskell into a dynamically typed language is equally 
> pointless as trying to turn Lisp into a statically typed one. It won't work 
> either way, because the design choices made in each language make it 
> impractical.

Yes, I have got that. The question remains why I need to completely 
change the "language framework" just to switch from dynamic to static 
typing or vice versa.

It's not even clear to me why I need to switch from one statically typed 
language to another in order to explore the different variants of static 
typing.

Yes, I need to change my programming style in order to get the benefits 
of a particular type system, but it's not that hard to imagine being 
able to do that in a single language framework.

You are continuously reinventing wheels just to change the ashtray.

To put it differently, the idea behind Microsoft's .NET is not too bad. 
(If they would have only done it "right".)


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpd9s3$lqv$1@grizzly.ps.uni-sb.de>
Pascal Costanza wrote:
> 
> The question remains why I need to completely
> change the "language framework" just to switch from dynamic to static
> typing or vice versa.
> 
> It's not even clear to me why I need to switch from one statically typed
> language to another in order to explore the different variants of static
> typing.

Well, the type system is not the only difference between languages. It is 
not even the most important one. Different languages take different design 
choices, details of the type system being just one aspect.

Of course it would be desirable in principle to have that one universal 
language system, which can be configured to cover all imaginable design 
choices. But that is completely utopian. And it would be an ugly, incoherent 
mess. In practice, even comparatively tiny language extension switches for 
particular languages are already a pain to deal with.

> To put it differently, the idea behind Microsoft's .NET is not too bad.
> (If they would have only done it "right".)

I am convinced that the idea is flawed in principle. Semantics are just too 
different. If you want to cover a wide enough range of paradigms you will 
either end up with a huge and ugly non-design, or naturally favour certain 
paradigms while penalizing others.

        - Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Fergus Henderson
Subject: .NET and multilanguage programming (was: More static type fun.)
Date: 
Message-ID: <3fbc8242$1@news.unimelb.edu.au>
Andreas Rossberg <········@ps.uni-sb.de> writes:

>Pascal Costanza wrote:
>
>> To put it differently, the idea behind Microsoft's .NET is not too bad.
>> (If they would have only done it "right".)

What do you think they did wrong?

(I have my own opinions on that, but I'd be very interested to hear yours.)

>I am convinced that the idea is flawed in principle. Semantics are just too 
>different. If you want to cover a wide enough range of paradigms you will 
>either end up with a huge and ugly non-design, or naturally favour certain 
>paradigms while penalizing others.

I don't think the idea is fundamentally flawed.  I think you can cover
an extremely wide range of paradigms reasonably well without needing a
huge or ugly intermediate language.  Sure, you're going to favour certain
paradigms to some degree, but it's never going to be a completely level
playing-field; even hardware is not paradigm-neutral.  As hardware becomes
faster and raw performance becomes less of an issue, it matters less if
the mapping from a particular paradigm to the intermediate language is
not as direct as it could be.  So the only question is whether you can
do a better job of interoperability than native code, and I think it's
pretty clear that the answer to that is yes.

What _is_ fundamentally flawed is the idea that the .NET CLR or anything
like it would eliminate all the overheads from multi-language programming.
That, IMHO, was never the idea of .NET, but it is of course quite possible
that the message got lost in the marketing...

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: ············@yahoo.com
Subject: Re: .NET and multilanguage programming (was: More static type fun.)
Date: 
Message-ID: <914734f5.0311200903.5c9e9cb0@posting.google.com>
Fergus Henderson <···@cs.mu.oz.au> wrote in message news:<··········@news.unimelb.edu.au>...
> 
>  So the only question is whether you can
> do a better job of interoperability than native code, and I think it's
> pretty clear that the answer to that is yes.

Yes, but the Common Language Specification (CLS), which defines how
interop works on .NET, is too close to C#. This makes it difficult for
non-C# languages to effectively interop with each other in all but the
simplest (nearly C-like) ways. In addition, if the overhead of running
your language on the CLR makes it substantially slower than C#, then
one might consider whether your language is worth the performance
penalty. I would wager that few people (outside this newsgroup) hate
C# & Java that much.
From: Fergus Henderson
Subject: Re: .NET and multilanguage programming (was: More static type fun.)
Date: 
Message-ID: <3fc192e9$1@news.unimelb.edu.au>
············@yahoo.com writes:

>Fergus Henderson <···@cs.mu.oz.au> wrote:
>> 
>> So the only question is whether you can
>> do a better job of interoperability than native code, and I think it's
>> pretty clear that the answer to that is yes.
>
>Yes, but the Common Language Specification (CLS), which defines how
>interop works on .NET, is too close to C#. This makes it difficult for
>non-C# languages to effectively interop with each other in all but the
>simplest (nearly C-like) ways.

It's certainly no more difficult for such languages to interoperate
with each other using .NET than it is for them to interoperate with each
other when compiling to native code level.  So using .NET is not a barrier
to interoperation.  It's a leg up.

Of course it's not a complete solution either.  Languages which want
to communicate at a higher level than the CLS need to define their own
alternatives to the CLS, and to provide a standardized mapping from those
higher-level constructs to IL.  For example, Don Syme's ILX
provides higher-order function types, discriminated unions, and generics.

>In addition, if the overhead of running
>your language on the CLR makes it substantially slower than C#, then
>one might consider whether your language is worth the performance
>penalty. I would wager that few people (outside this newsgroup) hate
>C# & Java that much.

This is a more serious problem.  I don't think it is inherent in the idea
of a common intermediate language for interoperability between different
high-level languages.  But it is certainly a major issue for current
implementations of the CLR.

On the other hand, performance is becoming less important for most
applications as machines get faster.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: William D Clinger
Subject: Re: .NET and multilanguage programming (was: More static type fun.)
Date: 
Message-ID: <fb74251e.0311201236.385316e@posting.google.com>
Concerning the multi-language goal of .NET,
Fergus Henderson asked:
> What do you think they did wrong?

They copied too much of the JVM, which was never intended to
support multiple languages.

In particular, neither the JVM nor .NET type systems can express
a union of value and reference types.  This is a major problem
for languages in which every value is a first-class object.  The
problem is not just one of performance, but of interoperability.

To take just one example, consider languages that provide integer
arithmetic, as opposed to arithmetic modulo some power of two.
Integer arithmetic is usually implemented by representing integers
as a union of fixnum and bignum representations.  In Microsoft's
.NET terminology, you'd like for fixnums to be values and bignums
to be references.  The .NET CLS can't express that, however, so
the implementor has an unpleasant choice between representing the
integer type as a record type with both fixnum and bignum fields
(for example), or representing the integer type as a reference
type.

Either way, you'll have a change of representation when passing
an integer from one language to another.  The conversion would
be much simpler if unions of value and reference types were
expressible.  There's no good reason why they aren't, so far as
I can tell.  It looks to me like the Microsoft folks just copied
the Java type system on this, and marketed it as though it were
adequate for multilanguage programming.
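
Clinger's fixnum/bignum union can be sketched as an ordinary tagged sum type in Haskell (all names hypothetical); the point is that the union of a machine-word case and a heap-allocated arbitrary-precision case is just one type here, which is exactly what the .NET CLS cannot express.

```haskell
-- Hypothetical sketch of the fixnum/bignum union: one tagged type
-- covering a machine-word case and an arbitrary-precision case.
data LispInt = Fixnum Int      -- fits in a machine word ("value" case)
             | Bignum Integer  -- heap-allocated ("reference" case)

toInt :: LispInt -> Integer
toInt (Fixnum n) = fromIntegral n
toInt (Bignum n) = n

-- Normalizing addition: stay in the fixnum case while the result fits.
addLI :: LispInt -> LispInt -> LispInt
addLI x y
  | r >= lo && r <= hi = Fixnum (fromIntegral r)
  | otherwise          = Bignum r
  where
    r  = toInt x + toInt y
    lo = fromIntegral (minBound :: Int)
    hi = fromIntegral (maxBound :: Int)
```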

Will
From: Roger Corman
Subject: Re: .NET and multilanguage programming (was: More static type fun.)
Date: 
Message-ID: <bqbqrvgubqqrsp3tdfq4pcoo9u1sfk06t6@4ax.com>
On 20 Nov 2003 12:36:02 -0800, ··········@verizon.net (William D
Clinger) wrote:

>Concerning the multi-language goal of .NET,
>Fergus Henderson asked:
>> What do you think they did wrong?
>
>They copied too much of the JVM, which was never intended to
>support multiple languages.
>
>In particular, neither the JVM nor .NET type systems can express
>a union of value and reference types.  This is a major problem
>for languages in which every value is a first class object.  The
>problem is not just performance, but of interoperability.
>

I don't think the CLR was designed to be language neutral, and that's
why it's not (despite marketing claims). It was designed to support a
new better-than-java language (C#) and make it interop with VB (always
Microsoft's favorite language). In the process they figured out how to
make C++ work with it (by replacing the C++ object model with the CLR
much-more-limited object model). Of course you can still use the C++
object model, and that's great, but you can't interop with anybody or
call any system functions using that model, so you either live with a
very complex 2-object model world, or dump the C++ object model.

This is what the CLR supports, and then other languages only work if
they are equally modified to fit the CLR world view. VB.NET is a very
different language than VB. Managed C++ is quite a bit different than
C++, but fortunately a superset.

Supporting a Lisp-like language on the CLR will always cause the
language to be less-efficient, given the CLR limitations, and all
interoperability will have to be based on the .NET/CLR object model.
If you like CLOS, either you have to figure out how to make CLOS have
the same limitations .NET objects do (conform to the model) or you
just have to create an alternative object model as Managed C++ does
and give the users the option. 

All modern processors support functional and lisp-like languages quite
well, through flexibility of stacks, addressing methods, etc. All the
tools are there to do all kinds of creative languages. However, the
CLR byte-code (intermediate code, whatever) is very restrictive and is
not anywhere near as flexible as all modern processors. Processors are
language neutral (not really, but relatively) whereas the CLR is
better described as language-fascist.

Roger Corman
From: Pascal Costanza
Subject: Re: .NET and multilanguage programming
Date: 
Message-ID: <bpjl5h$1r1$1@newsreader3.netcologne.de>
Roger Corman wrote:

> All modern processors support functional and lisp-like languages quite
> well, through flexibility of stacks, addressing methods, etc. All the
> tools are there to do all kinds of creative languages. However, the
> CLR byte-code (intermediate code, whatever) is very restrictive and is
> not anywhere near as flexible as all modern processors. Processors are
> language neutral (not really, but relatively) whereas the CLR is
> better described as language-fascist.

Someone coined the term "skinnable language". I think this hits the nail 
on the head.


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Torben Ægidius Mogensen
Subject: Re: .NET and multilanguage programming (was: More static type fun.)
Date: 
Message-ID: <w5fzgit6cs.fsf@pc-032.diku.dk>
··········@verizon.net (William D Clinger) writes:

> Concerning the multi-language goal of .NET,
> Fergus Henderson asked:
> > What do you think they did wrong?
> 
> They copied too much of the JVM, which was never intended to
> support multiple languages.
> 
> In particular, neither the JVM nor .NET type systems can express
> a union of value and reference types.  This is a major problem
> for languages in which every value is a first class object.  The
> problem is not just performance, but of interoperability.

I think reading the papers about SML.NET is very instructive in
realising the problems that the restrictions in the CLR impose on
implementing languages that are not C#-like.  A few points from
memory:

 - CLR assumes all pointers can be null, so enforces a check at every
   deref.

 - Closures are difficult to implement efficiently.

 - The lack of tagged union makes ML-style datatypes difficult to
   implement.

 - No support for polymorphism (in the current version.  Polymorphism
   by run-time code replication will come in later CLR versions).

 - Exceptions are implemented _very_ inefficiently in CLR.  This is
   probably an implementation issue, though, rather than a language
   issue.

	Torben
From: Roger Corman
Subject: Re: .NET and multilanguage programming (was: More static type fun.)
Date: 
Message-ID: <e5isrvc31v3cn6i1qli034ka0tc699iqat@4ax.com>
On 21 Nov 2003 10:32:03 +0100, ·······@diku.dk (Torben Ægidius
Mogensen) wrote:

>··········@verizon.net (William D Clinger) writes:
>
> - Closures are difficult to implement efficiently.

BTW, have you seen the C# version 2.0 specification?

http://download.microsoft.com/download/8/1/6/81682478-4018-48fe-9e5e-f87a44af3db9/SpecificationVer2.doc

(unfortunately a MS Word doc)

From line 7 of the first page:

"	Anonymous methods allow code blocks to be written "in-line"
where delegate values are expected. Anonymous methods are similar to
lambda functions in the Lisp programming language. C# 2.0 supports the
creation of "closures" where anonymous methods access surrounding
local variables and parameters."

It's nice that Lisp gets credit at the beginning of the new spec. Wow,
lambdas and closures. What will they think of next?  (macros??)

I believe this requires a change to the CLR, which is one reason that
2.0 is not yet available. 

>
> - The lack of tagged union makes ML-style datatypes difficult to
>   implement.
Yes, this is a big deal.

>
> - No support for polymorphism (in the current version.  Polymorphism
>   by run-time code replication will come in later CLR versions).
>
> - Exceptions are implemented _very_ inefficiently in CLR.  This is
>   probably an implementation issue, though, rather than a language
>   issue.

Yes. I recently did some timing loops, and discovered I could throw
1,500,000 times a second in Corman Lisp, vs. 29,000 times a second in
the CLR using Managed C++ code (a factor of about 50x). I was quite
surprised (and disappointed) by the latter figure. When running
unmanaged C++, native code, I got 100,000 times a second. I could
imagine that CLR exceptions could be even faster than native, because
so much meta-data is available at run-time, whereas the C++ exception
mechanism has to do some analysis to figure things out.
From: William D Clinger
Subject: Re: .NET and multilanguage programming (was: More static type fun.)
Date: 
Message-ID: <fb74251e.0311220646.d6af1b4@posting.google.com>
Roger Corman <·····@corman.net> wrote:
> On 21 Nov 2003 10:32:03 +0100, ·······@diku.dk (Torben �gidius
> Mogensen) wrote:
> 
> >··········@verizon.net (William D Clinger) writes:
> >
> > - Closures are difficult to implement efficiently.

Just to clarify:  I didn't write that.

Will
From: Pascal Costanza
Subject: Re: .NET and multilanguage programming
Date: 
Message-ID: <bpie9o$r58$1@f1node01.rhrz.uni-bonn.de>
Fergus Henderson wrote:
> Andreas Rossberg <········@ps.uni-sb.de> writes:
> 
> 
>>Pascal Costanza wrote:
>>
>>
>>>To put it differently, the idea behind Microsoft's .NET is not too bad.
>>>(If they would have only done it "right".)
> 
> 
> What do you think they did wrong?

AFAICT, the .NET virtual machine is just a copy of the Java Virtual 
Machine with some minor "obvious" improvements. This means, for example, 
that it is class-centric, supports only single inheritance, imposes a 
static type system in which types and classes correspond to each other, 
and so on.

If your language is just another surface syntax for a Java/C# model, 
then this works quite well. If you want to go fundamentally beyond that 
model, it means a lot of work.

> (I have my own opinions on that, but I'd be very interested to hear yours.)

...and what are yours?


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Fergus Henderson
Subject: Re: .NET and multilanguage programming
Date: 
Message-ID: <3fc197ad$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>Fergus Henderson wrote:
>> (I have my own opinions on that, but I'd be very interested to hear yours.)
>
>...and what are yours?

Generally I agree with most of what the other posters have said.

This topic of how to better support non-C#-like languages in the .NET CLR
came up a couple of weeks ago on the ················@discuss.microsoft.com
mailing list.  Here is what I suggested on that list.


1.  Fix verifiability problem with tail calls and byref parameters
------------------------------------------------------------------
 
Currently, the bytecode verifier does not allow by-ref
arguments to a tailcall.  The verifier ought to keep track of
which byrefs might refer to locals or non-byref parameters in the
current procedure, and should allow tail calls with byrefs if none of
the byrefs being passed refer to locals or non-byref parameters.                


2.  Regarding the use of byrefs as fields of value classes
----------------------------------------------------------

Currently byrefs are limited by the following guidelines:
	1. They can be passed as parameters to methods
	2. They can be stored on the stack (as a local)
	3. They cannot be stored on the heap
	4. They cannot be returned from methods
	5. They cannot be passed by reference
           (you cannot have a byref to a byref).
These guidelines allow byrefs to be verified.

Currently, guideline 3 is implemented in .NET by restricting byrefs
so that you cannot use them as fields of a class.

However, value classes are generally stored on the stack, so they don't
automatically fail guideline 3.  It is quite reasonable to allow value classes
to contain byrefs, so long as you impose the same kind of restrictions that
apply to byrefs:
	1. Value classes containing byrefs cannot be placed on the heap.
		(this means no boxing or storing in heap objects)
	2. Value classes containing byrefs cannot be returned from methods
	3. Value classes containing byrefs cannot be passed by reference.

Actually, that last restriction is too harsh.  It is desirable to allow
value classes containing byrefs to be passed by reference in cases where
it is safe.  In particular, you should be able to take a reference to a
value class containing a byref, but it should not be possible to update
the byref via this reference.  That is, the "stind" operation should
not be permitted in the case where the field being updated is a byref.
This is sufficient to prevent you from using pass-by-reference to
return a byref to the caller after it has gone out of scope.

* Why would you want to support this?

Languages that have nested functions typically compile the nested functions
into a bunch of functions with a structure containing the shared variables.
The shared variables may include the input parameters of the main function
(as well as the locals).   The structure is passed as a parameter of each
of the nested functions when they are called from the main function.
The structure represents the environment of the parent function.

The best way to implement this in .NET would be to use a value class
for the shared variable structure, and to pass this value class by reference.
Since some of the shared variables may be byref parameters themselves,
it is necessary to be able to put byref fields in the value class.
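A rough C++ sketch of this lowering (illustrative only; the names and the lambda-based "nested function" are made up, not from any actual compiler): a reference member of a stack-allocated struct plays the role of a byref field in a value class, and the struct is passed by reference to the lifted helper.

```cpp
#include <vector>

// Direct style: a nested function (here, a lambda) captures the
// parent's parameter 'base' by reference.
int sumWithDirect(int base, const std::vector<int>& xs) {
    auto add = [&base](int y) { return base + y; };
    int total = 0;
    for (int x : xs) total += add(x);
    return total;
}

// Lowered form: the shared variables live in a stack-allocated struct
// whose field is a reference -- the analogue of a byref field in a
// .NET value class.  The environment is passed by reference to the
// lifted function and is never boxed onto the heap.
struct Env { int& base; };

int addLifted(const Env& env, int y) { return env.base + y; }

int sumWithLifted(int base, const std::vector<int>& xs) {
    Env env{base};
    int total = 0;
    for (int x : xs) total += addLifted(env, x);
    return total;
}
```

Both versions compute the same result; the lowered one is what a compiler for a nested-function language would actually emit.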

* What languages are affected by this?

Pascal implementations usually support nested functions.
Oberon has nested functions.
Mercury uses nested functions to implement non-determinism (backtracking
search such as in Prolog).
C/C++ can possibly make use of this to implement references to structs.

[Thanks to Don Syme for his feedback on an earlier version of this proposal.]


3.  Efficient support for discriminated unions
----------------------------------------------

Currently support for discriminated unions in .NET is not very efficient.
You can represent discriminated unions using a base class and derived classes,
e.g. the Mercury type

     :- type t ---> f(x1::int) ; g(y1::int, y2::string).

can be represented as

     abstract class T { public abstract int tag(); }
     final class T_F extends T {
	public int tag() { return 0; }
	public int x1;
     }
     final class T_G extends T {
	public int tag() { return 1; }
	public int y1; 
	public string y2; 
     }

but approaches like this are far from optimally efficient.

Firstly, in .NET, discriminated union alternatives always have to be
boxed, and to get the value for a particular alternative, you always need
to do an indirection.  In other systems, values which take up less than
the full word size can be represented using a tagged value, with only
a mask or subtraction needed to remove the tag, and no heap allocation
or indirections required.  In .NET, you need a pointer to an object, which
in turn contains a vtable and then the value.  Examples of data taking
less than a full word include small integers and word-aligned pointers
(on a 32-bit system, an object pointer really only uses 30 bits,
because the bottom two bits are guaranteed to be zero).
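The tagging scheme can be sketched like this (a hypothetical illustration of what other systems do, not something the CLR's type system permits): a small integer carries a 1 in its low bit, so it is distinguishable from a word-aligned object pointer without any heap cell, and untagging is a single shift.

```cpp
#include <cstdint>

// Tagged small integers: the low bit is the tag.  Object pointers are
// word-aligned, so their low bits are zero; a value with a 1 in the low
// bit must therefore be an immediate integer, not a pointer.
using Value = std::uintptr_t;

Value tagInt(std::uintptr_t n)   { return (n << 1) | 1; }
bool  isSmallInt(Value v)        { return (v & 1) != 0; }
std::uintptr_t untagInt(Value v) { return v >> 1; }
```

No allocation and no indirection is needed to represent or read such a value, which is exactly what the boxed .NET representation loses.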

Secondly, switching on a discriminated union is expensive.  Dynamic type
casts are themselves expensive, and even worse, if you implement a
discriminated union switch as a series of dynamic casts, this is linear
in the number of alternatives.  A better approach (if there are more than
a few alternatives) is to extract an integer tag value, do a switch on
that, and then do a single dynamic cast.  But even that is inefficient,
because extracting the integer tag value requires a virtual function call
(or wasting more heap space to store extra tag values in every object),
rather than just an indirect load, and furthermore the JIT probably won't
be able to optimize away the final dynamic cast, even though it always
succeeds.
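The tag-then-switch approach can be sketched by extending the earlier T_F/T_G encoding (hypothetical code, in C++ rather than the Java-like notation above): storing a cheap integer tag in the base object makes a switch cost one field load and one jump plus a single cast, instead of a chain of dynamic casts or a virtual call.

```cpp
#include <string>
#include <utility>

// The earlier encoding, but with the tag stored as a plain field in the
// base object rather than behind a virtual function.
struct T {
    int tag;                           // 0 = f, 1 = g
    explicit T(int t) : tag(t) {}
    virtual ~T() = default;
};
struct T_F : T {
    int x1;
    explicit T_F(int a) : T(0), x1(a) {}
};
struct T_G : T {
    int y1;
    std::string y2;
    T_G(int a, std::string b) : T(1), y1(a), y2(std::move(b)) {}
};

int firstField(const T& t) {
    switch (t.tag) {                   // plain field load, no virtual call
        case 0: return static_cast<const T_F&>(t).x1;
        case 1: return static_cast<const T_G&>(t).y1;
        default: return -1;
    }
}
```

Even here the casts cannot be optimized away by a verifying JIT, which is the residual inefficiency the text complains about.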

I don't have a specific proposal here yet, but something better than what
we currently have is needed.  Perhaps it could be done using special
attributes that specify details about the representation; that way,
it could be backwards compatible with .NET implementations that don't
know about this feature.  There are already some attributes dealing
with how fields are laid out within a class, so this could be a
natural extension of that.


4.  Efficient support for static data containing objects
--------------------------------------------------------

The CLR has support for static data, but static data can't contain
objects.  If you want static data containing objects, you need to
initialize it in the .cctor.  In theory no change is needed to the .NET
CLR, because in theory a good CLR implementation could interpret the
.cctor and generate the data once, either at pre-JIT time or at program
startup with the results being saved somewhere that will persist beyond
a single process invocation.  But I'm pretty sure current implementations
don't do it.  Maybe it would be easier if we gave them some hints, or
if there was some explicit construct for creating static data that
includes objects.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: ··········@ii.uib.no
Subject: Re: More static type fun.
Date: 
Message-ID: <egk75ztmg5.fsf@sefirot.ii.uib.no>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:

> They are less ugly beasts than C and Java, but HM style type systems 
> still impose a burden on the programmer to conform his thought to the 
> type system. 

Yes, in the sense that as you add expressions and definitions,
ill-typed ones will be caught automatically.  In a sense it is
surprising to me that you consider this bad; these errors either are
trivial typos, or indicate that I have misunderstood some crucial
part of the problem.  The former is often trivial to correct, but I'd
rather it was flagged earlier than later.  The latter is a great aid
when getting the design right -- ensuring that I have actually
understood the (part of the) problem I'm trying to solve.  

Static typing probably becomes more important when you program in
a functional style -- HOFs, partial applications, and so on.

> And the only benefit to show for this constraint on one's thinking
> is a suite of automatic tests for errors that are going to be
> flagged in other required testing anyway, with very few exceptions.

True enough.  But you need to write those tests, and run them.  You
must ensure the tests are complete.  You risk catching errors at a
later stage, which is generally considered more expensive. And you
need to get the tests right, too.

BTW, I think a warnings-only static type checker/inference engine
would be interesting, but I haven't seen anything that is remotely as
good as current HM type systems.  (This is not to say they don't exist,
only that my experience is limited).  And I'm not convinced that you
really ever would want to disregard a type error/warning, beyond
leaving something explicitly undefined.

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpa32v$mec$1@newsreader3.netcologne.de>
··········@ii.uib.no wrote:

> Static typing probably becomes more important when you program in
> a functional style -- HOFs, partial applications, and so on.

This is also my impression (as an "outsider", so to speak ;).

When you start to write functions that generate functions that generate 
functions that generate functions, the benefits of a static type system 
might become much more important. (and I hope you don't get the 
impression that I am trying to poke fun at such a programming style!)

It seems to me that Clean has a good way to reconcile pure functional 
programming with a limited form of imperative programming, without the
need to use monads. Can anyone of the static typers comment on this?


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fb8d319@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>It seems to me that Clean has a good way to reconcile pure functional 
>programming with a limited form of imperative programming, without the
>need to use monads. Can anyone of the static typers comment on this?

I like the Clean approach, but monads are not so bad as to be worth
avoiding.

Despite the awful name, you don't need to know any category theory in
order to do I/O in Haskell, or to understand how it works.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpamgf$u6e$1@f1node01.rhrz.uni-bonn.de>
Fergus Henderson wrote:
> Pascal Costanza <········@web.de> writes:
> 
>>It seems to me that Clean has a good way to reconcile pure functional 
>>programming with a limited form of imperative programming, without the
>>need to use monads. Can anyone of the static typers comment on this?
> 
> I like the Clean approach, but monads are not so bad as to be worth
> avoiding.
> 
> Despite the awful name, you don't need to know any category theory in
> order to do I/O in Haskell, or to understand how it works.

So what are the trade-offs between those languages?

From what I have seen by skimming through high-level descriptions,
OCaml seems to be quite nice because it is a multi-paradigm language, 
Clean has a nice integration of imperative features while remaining 
"pure", and Haskell has a full numerical tower. This makes it hard for 
me to decide which I would choose to take a closer look at, because I
would prefer to have all these features at once. ;)

Are there other considerations?


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Fergus Henderson
Subject: Clean vs Haskell (was: More static type fun.)
Date: 
Message-ID: <3fb99a7a$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:
>Fergus Henderson wrote:
>> I like the Clean approach, but monads are not so bad as to be worth
>> avoiding.
>> 
>> Despite the awful name, you don't need to know any category theory in
>> order to do I/O in Haskell, or to understand how it works.
>
>So what are the trade-offs between those languages?

Jerzy Karczmarczuk wrote a reasonably good explanation not so long ago.  See
<http://groups.google.com.au/groups?selm=3D2D52DE.93BEF1B1%40info.unicaen.fr>.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Tomasz Zielonka
Subject: Purely functional, dynamically typed GPPL?
Date: 
Message-ID: <slrnbrlg8m.66t.t.zielonka@zodiac.mimuw.edu.pl>
Darius wrote:
> 
>> Can anyone of the static typers comment on this?
> 
> Don't you mean Clean and Haskell users?  This is a purity issue.

By the way, is there any purely functional, dynamically typed general
purpose programming language?

When I say "general purpose", I think about things like operating system
interface, foreign function interface, networking, etc.

I don't consider Untyped Lambda Calculus to be general purpose in this
sense.
I think Pure Lisp doesn't qualify either.

The two purely functional languages I know (Haskell, Clean) rely on
static typing to introduce these (imperative) features. Is there any
other way?

Best regards,
Tom

-- 
.signature: Too many levels of symbolic links
From: Mark Carroll
Subject: Re: More static type fun.
Date: 
Message-ID: <Fic*qiJ7p@news.chiark.greenend.org.uk>
In article <······································@netnews.attbi.com>,
Raffael Cavallaro  <················@junk.mail.me.not.mac.com> wrote:
(snip)
>But it is precisely this requirement that the programmer think 
>differently that constitutes an interruption to the flow of exploratory 
>programming. In other words, having to choose specific data types at a 
>stage in development when you know, with certainty, that your current 
>data representations are not the ones you will use in the final program. 
>Having to think about these issues at this stage is precisely what 
>dynamic typing advocates don't want to do.

You're way overestimating the effort involved, at least in Haskell's
case. Things like "deriving" write useful helper functions for you
based on the current type definition and, by and large, Haskell code
that needs to be much rewritten when underlying data structures change
was badly written in the first place. People should be using
typeclasses, named accessor functions, HOF, etc. and then most of the
code can remain as it was. I test out a lot of my Haskell code with
data structures that are very unlike the ones it'll end up using,
largely with the help of type classes and type inference. For
instance, the genetic algorithm thing I previously gave the type
signature of wasn't particularly written to let people do I/O in the
code they provide for evaluating individuals' fitness, and with
Haskell being pure you'd think that'd be a big-deal change, but you'd
not actually have to change my code at all to allow that. Well-written
Haskell code should offer plenty of flexibility for easily changing
underlying stuff.

(snip)
>I have no doubt that any decent programmer (and I like to think I could 
>be included in that number) could become proficient at thinking in the 
>Haskell type system. But I prefer to shape the language to the task at 
>hand, rather than shape my thinking to the language, or its type system.
(snip)

I don't think there's usually much difficulty shaping Haskell to the
/task/ at hand once you're past the newbie stage: for a very large
fraction of the software applications that need to be written, both
Common Lisp and Haskell can easily do a good job of them. My bigger
problem with Haskell is that I'm aware of bleeding-edge really cool
stuff like Template Haskell and Functional Reactive Programming and
whatever so it's not so much a question of how to do something at all
in Haskell, so much as if I could do it in a much better way than I
could in most other languages by using some exciting bit of research.

In terms of shaping your thinking to the language, I suspect you
underestimate how much thinking-shaping is needed to become good with
Lisp if you've been thinking in BASIC or whatever previously. If
you're saying that Lisp just happened to fit the way you prefer to
think of things when you discovered it then that's great, but I'd
wonder how normal you are. (-: At least I suppose that Perl hackers
coming to Lisp might already be able to deal with
functions-as-arguments, anonymous functions, map, eval, etc., although
macros might be another thing entirely.

-- Mark
From: Brian McNamara!
Subject: Re: More static type fun.
Date: 
Message-ID: <bpash9$ju9$1@news-int2.gatech.edu>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> once said:
>But it is precisely this requirement that the programmer think 
>differently that constitutes an interruption to the flow of exploratory 
>programming. In other words, having to choose specific data types at a 
>stage in development when you know, with certainty, that your current 
>data representations are not the ones you will use in the final program. 
>Having to think about these issues at this stage is precisely what 
>dynamic typing advocates don't want to do.

I disagree that static typing commits you to a data representation, or
even that it forces you to think about one.

Here is an example of how I imagine dynamic-typers want to go about
doing exploratory programming.  (If this isn't what you mean by
"exploratory programming", or if I am missing its essence, let me know.)
I will illustrate it in C++ to emphasize the static typing and to force
type annotations (which may help in the exposition).  I presume the
same kind of example can be implemented in a language with type
inference, which would alleviate most of the "busy work".

Suppose I am writing the function doSomething().  I know it needs to
take in some info as a parameter, but I'm not sure exactly what kind of
data structure it will be, or what operations it will support yet.  Fine:

   struct StubRep {};
   typedef StubRep Info;

   void doSomething( Info i ) {}

The typedef provides a point of indirection so that we can always
change the representation of Info objects later.  I have written a
"stub" representation called StubRep only so that we end up with a
full, compilable program.

So we start writing doSomething(), and along the way we develop a feel
for the kind of data structure Info will have to be based on how it will
be used:

   void doSomething( Info i ) {
      ... foo(i) ... bar(i) ...
   }

I can stub out the methods to get the program in a compilable state:

   int  foo( Info ) { throw "Not implemented yet"; }
   bool bar( Info ) { throw "Not implemented yet"; }

Finally, at some later point, in the process of writing doSomething(), 
I have learned enough about Info to have an idea of what a good
representation for the data is.  I can now modify the program with
respect to that choice.  Perhaps I decide on

   typedef tuple<int,bool> Info;
   int  foo( Info i ) { return get<0>(i); }
   bool bar( Info i ) { return get<1>(i); }
   
or whatever.  Later on I might discover that I also need Info objects to
support "float qux(Info)", and so I end up making a modification:

   typedef tuple<int,bool,float> Info;
   float qux( Info i ) { return bar(i) ? get<2>(i) : 0.0f; }

The point I want to make is that the typedef (type alias) shields
clients (like "doSomething") from knowing the actual name/structure of
the representation type, and the interface methods I have discovered
(foo,bar,qux) similarly shield the client from the data representation,
by phrasing operations in terms of the problem domain (foo) rather than
the representation type (get<0>).

The point is, static typing does not force you to commit to a data
representation, and dynamic typing does not automatically shield you
from your data representation choices.  It is the _abstraction_done_by_
_the_programmer_ (by providing a layer of names (e.g. Info,foo,bar))
which provides the shielding and avoids premature commitments.


Note that the C++ code above is verbose in a number of "needless" ways.
In a language with type-inference, most/all of the type annotations
could presumably be removed.


Dynamic languages _do_ have the advantage that they give you stubs 
"for free"; even in Haskell, you'd have to write code like

   foo :: Info -> a
   foo _ = error "not implemented yet"

(or, if you're doing lots of stubbing, probably

   stub :: a -> b
   stub _ = error "not implemented yet"
  
   foo = stub
   bar = stub
   qux = stub

instead) to get the code "compilable" without having finished the
implementation yet.  

-- 
 Brian M. McNamara   ······@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
From: Feuer
Subject: Re: More static type fun.
Date: 
Message-ID: <3FB845EC.9907D757@his.com>
Dirk Thierbach wrote:

> > 3. Isn't really all that frequently seen in the wild. That is, runtime
> > type errors aren't really seen all that often in programs written in
> > lisp or smalltalk.
> 
> Because they are caught by the tests (and you have to invest time
> to write them).

Or because a Lisp programmer doesn't think of them as type errors
while a Haskell programmer (especially one using GHC) might.

> If you write the paths incrementally as cases in the pattern matching,
> you don't have to worry about this at all. If I write a function that
> I know is not correct, be it for syntactical or for static type
> reasons, I don't expect it compile, and I am not able to use it
> anyway.

Type correctness is a part of syntactic correctness.

David
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <ad6twstw.fsf@ccs.neu.edu>
Feuer <·····@his.com> writes:

> Type correctness is a part of syntactic correctness.

Only when types can be determined solely from syntax.
From: Paul Dietz
Subject: Re: More static type fun.
Date: 
Message-ID: <3FB8FAF2.D74403EB@motorola.com>
Dirk Thierbach wrote:

> > 2. If not solved, results in runtime type errors which will be caught
> > by the tests you have to do anyway - see #1.
> 
> Again, you need a lot less tests.


I question the assertion that one needs many additional tests to
test type correctness.  Type incorrect programs will, in practice,
very often fail on tests that have been written for other purposes.

	Paul
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <raffaelcavallaro-FF1246.18452317112003@netnews.attbi.com>
In article <·················@motorola.com>,
 Paul Dietz <············@motorola.com> wrote:

 
> I question the assertion that one needs many additional tests to
> test type correctness.  Type incorrect programs will, in practice,
> very often fail on tests that have been written for other purposes.
> 
> 	Paul

This was precisely my point, sorry if I didn't make it clear, and thank 
you for pointing this out. That's why I find static typing so useless as 
an automatic testing methodology - without writing any additional tests, 
I find essentially all the type errors the static type checker would 
have found anyway.

What it is useful for is optimization - i.e., allowing the compiler to 
produce better code. But there are lisp implementations that do type 
inference, and will tell you how declarations would allow better code 
generation (I'm thinking of CMUCL and SBCL here). So with Lisp, I can
have dynamic typing, and the optimization help of a smart,
type-inferencing compiler as well.
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <7n8o81-ba5.ln1@ID-7776.user.dfncis.de>
Jon S. Anthony <·········@rcn.com> wrote:
> Dirk Thierbach <··········@gmx.de> writes:
>> Paul Dietz <············@motorola.com> wrote:

>> > Type incorrect programs will, in practice, very often fail on tests
>> > that have been written for other purposes.

> That correlates to 3 9s in the logs on this stuff that I've kept.

So how many tests and what kind of tests do you use? Can you give
some typical examples?

- Dirk
From: Paul F. Dietz
Subject: Re: More static type fun.
Date: 
Message-ID: <756dnY0x5sPLWSeiRVn-iw@dls.net>
Dirk Thierbach wrote:

>>Type incorrect programs will, in practice, very often fail on tests
>>that have been written for other purposes.
> 
> They may, or they may not. To be only *reasonably* sure that they fail
> you need quite a large amount of tests, and you have to control somehow
> the amount of code coverage your tests have.
> 
> Anything else just means crossing your fingers, and trusting luck.

Yes, a large collection of unit tests is required in order to have reliable
software.   This set of unit tests can easily exceed the source code in
size.  Ditto for integration tests.

This is true regardless of whether the language is statically typed or not.

The claim I was addressing was whether lack of static type checking
causes this set of unit tests to be significantly larger.  I would claim
that it does not.  Type errors usually stick out dynamically like a sore
thumb.

Now, if your position is 'if you don't adequately test your code,
static type checking makes the software not suck quite as badly', then
I could agree with that.  However, if you are aiming for anything beyond
an amateur level of reliability, extensive testing *is* required, and
the easy errors would have been found anyway.

I'll add that in the Lisp code I work with, we measure the branch
coverage achieved by the unit tests and work to keep this figure near 100%.
Measuring branch coverage of tests in Lisp is straightforward, using
something like Waters' COVER package.

	Paul
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fb89046$1@news.unimelb.edu.au>
Mario S. Mommer <········@yahoo.com> writes:

>Adrian Hey <····@NoSpicedHam.iee.org> writes:
>> Personally I believe that this view is simply the result of
>> inexperience with languages which have a decent static type system.
>
>The thing is, for one, that for that what you call static typing you
>need to do things like using a different operator for floating point
>operations than for integers,

That's clearly not true, since there are _many_ statically typed
languages which use the same operator for both of those.

>or invoking category theory for implementing interactive behavior.

I presume this is a veiled reference to the use of monads in Haskell.
But that is a non sequitur, since the use of monads is of course due
to Haskell's purity, not due to it being statically typed.

>Up with this I shall not put.

I'm not going to put up with Lots of Idiotic Silly Parentheses either,
but I don't try to argue against the use of dynamic typing on the
grounds that Lisp's syntax sucks or that Prolog's impurity destroys
the advantages of logic programming.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Thomas Lindgren
Subject: Re: More static type fun.
Date: 
Message-ID: <m33ccqoevj.fsf@localhost.localdomain>
Adrian Hey <····@NoSpicedHam.iee.org> writes:

> >> Nobody doubts the *possibility* of success on large projects with dynamic
> >> typing (only).  We (static typers) merely doubt the probability and ease
> >> of success.
>
> > Why? There are plenty of industrial success stories for the dynamic
> > camp of functional programming. It's clearly a proven technology.
> 
> Sure there are. I regularly use languages which are either untyped
> or so pathetically typed they might just as well be untyped. 

I don't see the relevance to what I wrote?

> Like I said, I don't doubt the possibility of writing correct and
> robust programs in these languages. But this *despite* their lack of
> static (or even dynamic) type security, not *because* of their lack
         ^^^^^^^^^^^^^^^^^
(What type-unsafe dynamically typed functional languages are you
thinking of?)

> of static type security.

Yet industrial experience shows that dynamic functional programming
languages work very well indeed for real use by ordinary programmers.
My impression is that this is due to features such as garbage
collection; type safety; brevity and clarity (comparatively);
providing terms rather than memory words; comparative lack of
side-effects; a more forgiving programming environment; structured
exceptions; an interactive toploop, and so on. (As an aside, a number
of recent non-functional languages possess most of these properties
too, which may erode the competitive advantage to some extent.)

Successful industrial projects show that the doubts that you refer to
are so far unfounded, in fact *contrary to experience*.  The lack of
static typing seems not to outweigh the advantages, or normally even
to be perceived as a disadvantage (the latter at least as far as I'm
aware).

Even so, it might conceivably still be the case that industrial
projects using statically typed functional language would work even
better. (Then again, they might be about the same or worse; we don't
know at this time.) Showing that is up to you, however.

Best,
                        Thomas
-- 
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin
 
From: Adrian Hey
Subject: Re: More static type fun.
Date: 
Message-ID: <bp3i4p$o5u$1$830fa78d@news.demon.co.uk>
Thomas Lindgren wrote:

> Adrian Hey <····@NoSpicedHam.iee.org> writes:
>> > Why? There are plenty of industrial success stories for the dynamic
>> > camp of functional programming. It's clearly a proven technology.
>> 
>> Sure there are. I regularly use languages which are either untyped
>> or so pathetically typed they might just as well be untyped.
> 
> I don't see the relevance to what I wrote?
> 
>> Like I said, I don't doubt the possibility of writing correct and
>> robust programs in these languages. But this *despite* their lack of
>> static (or even dynamic) type security, not *because* of their lack
>          ^^^^^^^^^^^^^^^^^
> (What type-unsafe dynamically typed functional languages are you
> thinking of?)

Sorry, I wasn't talking about any FPL (dynamically typed or
statically typed), but I see you were. (Umm.. I guess that would
be Erlang:-) The languages I had in mind were assembler and C.

> languages work very well indeed for real use by ordinary programmers.
> My impression is that this is due to features such as garbage
> collection; type safety; brevity and clarity (comparatively);
> providing terms rather than memory words; comparative lack of
> side-effects; a more forgiving programming environment; structured
> exceptions; an interactive toploop, and so on. (As an aside, a number
> of recent non-functional languages possess most of these properties
> too, which may erode the competitive advantage to some extent.)
> 
> Successful industrial projects show that the doubts that you refer to
> are so far unfounded, in fact *contrary to experience*.

They are not contrary to my experience (which is what really counts
when shaping my opinion :-)

Since you appear to doubt the assertion that static type systems
trap many (IME *most*) bugs, the only way I can reconcile my
claimed experience with your claimed experience is to conclude
that folk using Erlang rarely get anything wrong. I guess they
all deserve a pay rise :-)

> The lack of static typing seems not to outweigh the advantages,

Could you remind me what the advantages of lack of static typing
actually are?

> Even so, it might conceivably still be the case that industrial
> projects using statically typed functional language would work even
> better.

I believe they would. Unfortunately static type systems are not
the only part of the story. Much as I like Haskell, I wouldn't
consider using it right now in a "lean and mean" high reliability
365/24/7 real time embedded application (this has nothing to do
with the type system btw). But then again, I wouldn't consider
Lisp or Python either for this kind of app (or even Erlang,
sorry to say). But for batch mode or interactive GUI apps running
on a PC or workstation (with prodigious quantities of memory
available if needed and no real time requirements) Haskell is
a fine language IMO. 

> (Then again, they might be about the same or worse; we don't
> know at this time.) Showing that is up to you, however.

Why is it up to me? I don't stand to gain anything one way or
the other :-) However, other potential users of current or future
statically typed languages might gain a lot if they actually tried
them (as and when they appear).

Regards
--
Adrian Hey
From: Thomas Lindgren
Subject: Re: More static type fun.
Date: 
Message-ID: <m34qwwsb86.fsf@localhost.localdomain>
Adrian Hey <····@NoSpicedHam.iee.org> writes:

> Sorry, I wasn't talking about any FPL (dynamically typed or
> statically typed), but I see you were. (Umm.. I guess that would
> be Erlang:-) The languages I had in mind were assembler and C.

Mainly, though Common Lisp has seen wider commercial use (in the
sense of more kinds of application areas).

> > Successful industrial projects show that the doubts that you refer to
> > are so far unfounded, in fact *contrary to experience*.
> 
> They are not contrary to my experience (which is what really counts
> when shaping my opinion :-)

Heh. Well, to each his own, I guess ... I'd suggest you add a
disclaimer to those doubts in the future, though :-)

Best,
                        Thomas
-- 
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin
 
From: Adrian Hey
Subject: Re: More static type fun.
Date: 
Message-ID: <bpnsv0$jc5$1$8300dec7@news.demon.co.uk>
Thomas Lindgren wrote:

> 
> Adrian Hey <····@NoSpicedHam.iee.org> writes:
> 
>> Sorry, I wasn't talking about any FPL (dynamically typed or
>> statically typed), but I see you were. (Umm.. I guess that would
>> be Erlang:-) The languages I had in mind were assembler and C.
> 
> Mainly, though Common Lisp has seen wider commercial use (in the
> sense of more kinds of application areas).
>
>> > Successful industrial projects show that the doubts that you refer to
>> > are so far unfounded, in fact *contrary to experience*.
>> 
>> They are not contrary to my experience (which is what really counts
>> when shaping my opinion :-)
> 
> Heh. Well, to each his own, I guess ... I'd suggest you add a
> disclaimer to those doubts in the future, though :-)

But you still haven't answered my main question. You stated that the
advantages of lack of static type security outweighed the disadvantages.
I asked what those advantages actually were.

The only answers we've seen so far on this thread either show a complete
lack of understanding of static type systems or consist of vague innuendo
about "extra work", "making thought about the problem conform to the
type system", etc. (and in the latter case nobody has demonstrated this
despite repeated invitations to do so).

I'll ask again.

What *are* the advantages of lack of static type security?
(and pleeeze.. don't mention C++ in your reply :-)

Regards
--
Adrian Hey
From: David Golden
Subject: Re: More static type fun.
Date: 
Message-ID: <9qNvb.1674$nm6.10529@news.indigo.ie>
Adrian Hey wrote:


> 
> What *are* the advantages of lack of static type security?
> (and pleeeze.. don't mention C++ in your reply :-)
> 

I personally don't think there is much to speak of, given sufficiently
powerful static type systems, though I don't think static type security is
"enough".

BUT I hope you can see that one could approach static-type-security starting
with the notion of dynamic-type as "primitive": imagine doing the "inverse
transform" in some way of the haskell "Dynamic" library, to "derive" static
types from dynamic rather than vice-versa...

When I last checked (admittedly a while back), the dynamic-typing ability of
haskell "dynamic" seemed rather weedy compared to common lisp (and led to
rather verbose haskell code), perhaps just as the static-typing abilities
of Common Lisp compilers are rather weedy compared to haskell and lead to
rather verbose lisp code.
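The "inverse transform" idea above can be sketched in Python (all names here are hypothetical, chosen only to mirror Haskell's toDyn/fromDynamic): a value is boxed together with a runtime type tag, and projecting it back at a given type either succeeds or fails explicitly, just as fromDynamic returns a Maybe.

```python
class Dyn:
    """A value boxed together with its runtime type tag."""
    def __init__(self, value):
        self.value = value
        self.tag = type(value)

def to_dyn(value):
    # injection: forget the static view of the value, keep a runtime tag
    return Dyn(value)

def from_dyn(expected_type, dyn):
    # projection: recover the value only if the tag matches, mirroring
    # Haskell's fromDynamic :: Typeable a => Dynamic -> Maybe a
    if dyn.tag is expected_type:
        return dyn.value
    return None

d = to_dyn(42)
print(from_dyn(int, d))   # tag matches, the value comes back
print(from_dyn(str, d))   # mismatch: an explicit None, not a crash
```

This is only a sketch of the idiom, not Haskell's actual machinery; the point is that the dynamic tag is the primitive notion, and the static discipline is recovered by checking it at the projection site.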

I dunno whether Haskell will get (or probably already has got) more powerful
dynamic types before common lisp gets more powerful static types, but it
will probably be haskell because I don't think most lispers care much -
after all, there are other reasons (some might consider them based largely
on personal whim, of course) to like lisp.
From: Adrian Hey
Subject: Re: More static type fun.
Date: 
Message-ID: <bptbqj$s2u$1$8300dec7@news.demon.co.uk>
David Golden wrote:

> When I last checked (admittedly a while back), the dynamic-typing ability
> of haskell "dynamic" seemed rather weedy compared to common lisp (and led
> to rather verbose haskell code), perhaps just as the static-typing
> abilities of Common Lisp compilers are rather weedy compared to haskell
> and lead to rather verbose lisp code.

The Haskell language doesn't have any support for dynamics.
ghc has a "cheap & cheerful" (quoting the docs) library to provide
limited support for dynamics, but it looks like a bit of a quick &
dirty hack to me (not that I've ever had any reason to use it,
so I may be wrong).

I think Clean might be a better language to look at to see how to
incorporate dynamics into a statically typed FPL.

Regards
--
Adrian Hey
From: Thomas Lindgren
Subject: Re: More static type fun.
Date: 
Message-ID: <m3zneoqby5.fsf@localhost.localdomain>
Adrian Hey <····@NoSpicedHam.iee.org> writes:

> But you still haven't answered my main question. You stated that the
> advantages of lack of static type security outweighed the disadvantages.
> I asked what those advantages actually were.

Ah. Here's what I meant: the advantages I enumerated have been quite
sufficient to outweigh the potential disadvantage of moving from a
conventional statically typed setting (e.g., Java or C++) to a
dynamically typed one.

Here's what I originally wrote, for reference:

>> The lack of static typing seems not to outweigh the advantages, or
>> normally even to be perceived as a disadvantage (the latter at
>> least as far as I'm aware).

I have no experience with large, industrial, statically typed
functional projects (nor has anyone else, as far as I know) so I can't
even begin to compare those one way or the other, as implied
before. *But* there seems to be no cause for doubting the probability
or ease of success when using a dynamic language, which was the claim
I replied to.

> The only answers we've seen so far on this thread either show a complete
> lack of understanding of static type systems or consist of vague innuendo
> about "extra work", "making thought about the problem conform to the
> type system", etc. (and in the latter case nobody has demonstrated this
> despite repeated invitations to do so).

My main issue was to address the use of dynamically typed languages in
industry; I believe I have tried to stay out of the static vs dynamic
discussion otherwise, so I won't start now, if you don't mind. The question as
posed seems far too nebulous, too: what static type system is being
discussed, how would one decide that something is extra work or not
(and compared to what?) etc? Sounds like a recipe for heat rather than
light, if you see what I mean :-)

My general experience, anyway, is that once you are at the level of
Prolog/Lisp/SML/Haskell/Erlang/Smalltalk etc, the choice of language
is decided by issues other than static/dynamic typing. (I would put
that distinction pretty far down the list.)

I think the periodic outbreaks of trench warfare about static vs
dynamic typing are much less interesting or relevant than getting
functional languages as a group used in industry. (Maybe because I've
already fought that battle a couple of times :-)

Best,
                        Thomas
-- 
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin
 
From: Adrian Hey
Subject: Re: More static type fun.
Date: 
Message-ID: <bptbql$s2u$2$8300dec7@news.demon.co.uk>
Thomas Lindgren wrote:

> Adrian Hey <····@NoSpicedHam.iee.org> writes:
> 
>> But you still haven't answered my main question. You stated that the
>> advantages of lack of static type security outweighed the disadvantages.
>> I asked what those advantages actually were.
> 
> Ah. Here's what I meant: the advantages I enumerated have been quite
> sufficient to outweigh the potential disadvantage of moving from a
> conventional statically typed setting (e.g., Java or C++) to a
> dynamically typed one.

Eeek! .. You mentioned C++ after all :-)  

> Here's what I originally wrote, for reference:
> 
>>> The lack of static typing seems not to outweigh the advantages, or
>>> normally even to be perceived as a disadvantage (the latter at
>>> least as far as I'm aware).
> 
> I have no experience with large, industrial, statically typed
> functional projects (nor has anyone else, as far as I know) so I can't
> even begin to compare those one way or the other, as implied
> before. *But* there seems to be no cause for doubting the probability
> or ease of success when using a dynamic language, which was the claim
> I replied to.

What I meant was that I think that for a given programming/test effort
it is less probable that you will get a working program (by working I
mean *zero defect*) without static type security, simply because a
static type system really does detect *and diagnose* sooo many errors
immediately, with no effort or extra work required from the programmer.
Of course for an application of any complexity it's pretty improbable
that you'll have zero defects even with static type security, but I
think it helps reduce the bug count a lot.

The trick is to achieve this without imposing the severe constraints
on language expressiveness that certain other languages (which may not
be named :-) do. These languages give static typing a bad name.

Regards
--
Adrian Hey

 
From: Tayss
Subject: Re: More static type fun.
Date: 
Message-ID: <5627c6fa.0311132137.498093ce@posting.google.com>
Adrian Hey <····@NoSpicedHam.iee.org> wrote in message news:<·····················@news.demon.co.uk>...
> Tayss wrote:
> > But Python is beginning to demonstrate that it is quite possible to
> > have success on large projects with dynamic typing, by heavily
> > promoting such things as unit testing.  Things that must be done
> > anyway, even for emacs scripts.
> 
> But have they demonstrated that the absence of a good static type
> system (Hindley-Milner or better) is advantageous in any way?

I definitely agree, such a claim is tricky to make, and I would not be
too quick to make it.  The nice thing about the benevolent dictators
in charge of python is that they are able to simply fiat decisions
based on intuition, letting others attempt a friendly language fork.

Though, there is the argument that putting in guardrails can actually
decrease safety; Joel Spolsky argues that studies have been done on
those curvy mountain roads demonstrating that guardrails actually give
people a false sense of safety, leading to a higher rate of accidents.
 I don't know if this applies here, but it is conceivable that people
would decrease unit testing in favor of relying on a static system. 
Which I suspect is a net loss.
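The unit-testing point above can be given a minimal sketch (function names are hypothetical): a test can exercise the same category of mistake that a static checker would reject before the program ever ran.

```python
def area(width, height):
    # no declared types: any mistake here surfaces only at run time
    return width * height

def test_area():
    assert area(3, 4) == 12

def test_area_rejects_nonsense():
    # a static type system would flag this call at compile time;
    # dynamically, we rely on a test to exercise the failure path
    try:
        area("3", None)
    except TypeError:
        return
    raise AssertionError("expected a TypeError")

test_area()
test_area_rejects_nonsense()
```

Whether shifting this burden from the compiler to the test suite is a net loss, as suspected above, is exactly the question under debate.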

I understand that you're not always annotating things with type
declarations, since a program can figure out that 1 is a number, and
any function that uses log() had better operate on numbers somewhere. 
I simply wish some people within the static typing camp (who don't
really represent everyone in it) took a more nuanced view.  I said the
same thing back when I felt some lispers did not respect the python
community enough when selling macros.

There does exist CL fundamentalism (I say this based on my vague
memory of recent minutes of the CL standardization committee saying
so), but part of that may be that CL has been fairly stagnant in
recent years.  Combine that with the fact that you can't take
seriously every yahoo on usenet with a PhD who claims he has a silver
bullet.
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <65hofi6i.fsf@ccs.neu.edu>
··········@yahoo.com (Tayss) writes:

> But Python is beginning to demonstrate that it is quite possible to
> have success on large projects with dynamic typing, by heavily
> promoting such things as unit testing.

Really?!  ``Python, the Al Gore of programming languages.''
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bovq73$8ob$1@grizzly.ps.uni-sb.de>
Tayss wrote:
>> I think you missed the bit of irony in my answer. ;-)
> 
> No, you missed the sarcasm in mine. ;)

Definitely.

> I noticed that most of the
> discussion was getting extraordinarily repetitious, at least the parts
> I sampled, and I wanted to hint that stalemate would remain until
> static typing proponents at least appeared to accept that sane
> multiparadigm languages would /never/ willingly accept a paradigm
> which claims prominence over other great techniques.

I don't think I claimed that. You quoted my claim below, and I stand by that 
claim, but it's a different one.

BTW, I wouldn't call static typing a paradigm.

> All the points mentioned here could have been found in the first
> chapter of Pierce's _Types and Programming Languages_.  In fact, you
> claim:
> 
>> OTOH, allowing the user to arbitrarily state this type would make the
>> type system unsound, i.e. you'd lose all guarantees the type system can
>> make and hence almost all of its advantages.
> 
> and this confirms my belief that the static typing world has no
> coherent vocabulary;

Sorry, but that is not true. If you think so then I think you have misread 
the respective Pierce chapter. It is rather that "other communities" tend 
to abuse technical terms and hence blur their meaning. "Dynamic typing" is 
a perfect example of this. With the standard meaning, there is no such 
thing as a "dynamic type system".

> different people have different definitions of
> 'unsound.'

No. Soundness is a fundamental property of a type system that has a pretty 
standard definition. However, it is always relative to what your language 
defines as runtime errors.

> In fact, as Pierce argues, "The term 'safe language' is,
> unfortunately, even more contentious than 'type system.'  Although
> people generally feel they know one when they see it, their notions of
> exactly what constitutes language safety are strongly influenced by the
> language community to which they belong."

Yes, safety is a rather general and hence ambiguous term. I prefer to use it 
in the sense defined by Cardelli in his "Type Systems" article, which is 
the definition that made most sense to me. He defines a language as "safe", 
if a program can never reach a corrupted (undefined) state - which is the 
case for Lisp. This is orthogonal to "typed". It's also not the same as 
"type safe": a type system establishes safety when it is sound.

Note that Pierce uses "safe" only in the context of types, i.e. he always 
means "type safe".

>> What you want - at least the way you formulated it - exists in Lisp
>> already. You see, Lisp is a statically typed language - it just happens
>> to have only one universal type. If you want, you can call that type
>> "will-not-result-in-static-type-error". Unfortunately, that does not buy
>> you anything...
> 
> Sure it buys me things.  Let me use alternate terms.  Lisp is a
> dynamically checked language, and strongly typed. There are primitive
> and programmer-defined types;

In the proper meaning of words, Lisp is only typed in the completely trivial 
sense I sketched above. Lisp may have something called types, but they 
aren't types in any standard formal sense.
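The "dynamically checked, strongly typed" characterization in the quoted text can be sketched as follows: the check happens at run time rather than at compile time, but it is strict, so values are never silently reinterpreted at the wrong type.

```python
def add(a, b):
    # no static rejection of ill-typed calls: the check is dynamic
    return a + b

print(add(1, 2))          # fine: both operands are numbers

try:
    add("1", 2)           # accepted statically, but no silent coercion:
except TypeError:         # strong typing traps the mismatch at run time
    print("trapped at run time")
```

This is only an illustration of the behavioural distinction the posters are arguing over, not an endorsement of either side's terminology.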

>> OTOH, allowing the user to arbitrarily state this type would make the
> type system unsound, i.e. you'd lose all guarantees the type system can
>> make and hence almost all of its advantages.
> 
> And none of its disadvantages.

You lose almost all of its advantages, and none of its disadvantages? Oh, 
then it's even worse than I thought. Why should anybody possibly want that? 
;-)

> Read Bird/Wadler's functional
> programming book.  In section 1.3, they consistently call strong
> typing a "Discipline."  They say, "Strong typing is important because
> adherence to the discipline..."  Well, lispers can be quite
> disciplined, but in things they find important.

So you agree that discipline is a good thing?

> You will not gain
> converts by demanding they 'adhere' to your 'discipline.'

I don't want to convert anybody to using it. I'm just trying to take the 
ignorance out of people who always think of it as a strait jacket, and 
nothing else.

Simultaneously, I have to disappoint those people who know better, but 
believe they could have their cake and eat it too, that is, having the 
benefits of static typing without adapting in some way.

> I ask you, are your statically checked languages capable of more than
> one technique to achieve the goal of expressive, sound software?

Sure, why shouldn't they?

> Lisp
> can -- it is the best multiparadigm language I've seen so far, and I
> suspect part of that success lies in trying not to let any given
> paradigm be too bossy.

Again, I don't see typing as a paradigm. It is rather orthogonal to the 
choice of programming paradigms. Admittedly however, some of them lend 
themselves better to typing than others. To me, it is an interesting 
challenge to improve the capabilities of type systems in such ways that 
they can cover more interesting paradigms in satisfactory ways - without 
giving up their advantages.

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <4vfpojl7h.fsf@franz.com>
Andreas Rossberg <········@ps.uni-sb.de> writes:

> Tayss wrote:
> >> What you want - at least the way you formulated it - exists in Lisp
> >> already. You see, Lisp is a statically typed language - it just happens
> >> to have only one universal type. If you want, you can call that type
> >> "will-not-result-in-static-type-error". Unfortunately, that does not buy
> >> you anything...
> > 
> > Sure it buys me things.  Let me use alternate terms.  Lisp is a
> > dynamically checked language, and strongly typed. There are primitive
> > and programmer-defined types;
> 
> In the proper meaning of words, Lisp is only typed in the completely trivial 
> sense I sketched above. Lisp may have something called types, but they 
> aren't types in any standard formal sense.

Be careful how you use your own words in their "proper meaning"; be sure
to set the context for those who are reading.  Your last phrase is in
fact incorrect from a Common Lisp point of view, because at least one
Lisp (Common Lisp) does indeed have a formal standard which defines
its types, and that is the Ansi Common Lisp specification.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bp2c7u$ee$2@grizzly.ps.uni-sb.de>
Duane Rettig wrote:
> 
>> In the proper meaning of words, Lisp is only typed in the completely
>> trivial sense I sketched above. Lisp may have something called types, but
>> they aren't types in any standard formal sense.
> 
> Be careful how you use your own words in their "proper meaning"; be sure
> to set the context for those who are reading.  Your last phrase is in
> fact incorrect from a Common Lisp point of view, because at least one
> Lisp (Common Lisp) does indeed have a formal standard which defines
> its types, and that is the Ansi Common Lisp specification.

I hope my reply to Joe clarifies what I meant by "standard sense".

        - Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <47k22bwwk.fsf@franz.com>
Andreas Rossberg <········@ps.uni-sb.de> writes:

> Duane Rettig wrote:
> > 
> >> In the proper meaning of words, Lisp is only typed in the completely
> >> trivial sense I sketched above. Lisp may have something called types, but
> >> they aren't types in any standard formal sense.
> > 
> > Be careful how you use your own words in their "proper meaning"; be sure
> > to set the context for those who are reading.  Your last phrase is in
> > fact incorrect from a Common Lisp point of view, because at least one
> > Lisp (Common Lisp) does indeed have a formal standard which defines
> > its types, and that is the Ansi Common Lisp specification.
> 
> I hope my reply to Joe clarifies what I meant by "standard sense".

Only in the sense that it solidifies my original guess as to where
you're coming from.

I had to go back to google to find your article; I had not saved it
in gnus because I had indeed blown it off as yet another "I have the
only true definition" kind of response.  Instead of reinstating it
in gnus or answering it from google, I will simply answer it from
here by reproducing Joe's reply and your entire response, responding
then to it:

===

> > This is ridiculous.  Lisp types can be as formalized as any other type
> > system.  How formal a type system is has nothing to do with whether
> > one can statically analyze code.
> 
> You missed the point. It's not that you cannot formalize what Lisp does, 
> it's just that it isn't a "type system" in a plausible sense.

A plausible sense?  If you really believe that Common Lisp's type
system is not plausible, then you truly don't know Common Lisp at
all.

> The notion of types comes from mathematical logics and predates the first 
> programming language by decades. Much later, the idea has been adopted for 
> programming languages. In the respective scientific communities the notion 
> of type system has a well-established meaning as a certain kind of logic 
> over syntactic phrases. Pierce even defines it in a broader, more pragmatic 
> way tailored for programming languages:
> 
>   "A type system is a tractable syntactic method for proving the absence of 
> certain program behaviour by classifying phrases according to the kinds of 
> values they compute."

My dictionary, not necessarily a complete one, has no less than 16 complete
definitions for the word "type" as a noun (as well as 4 as an adjective and
6 as a verb).  Of these, only one has anything to do with Mathematics.  And
even the definition for the Mathematics sense is split up into a pair of
sub-definitions, so it is not even a unified definition.  Also, the
Mathematics-based definition is not the first one on the list - the first
definition of type is the one most commonly used: 

 a kind, class, or group alike in some important way (ex: three types of
 local government; "Smallpox of the most malignant type" (Macaulay))
 [from World Book Dictionary, c. 1982]

> It is obvious that Lisp is not even remotely covered by this (reasonable) 
> definition. Expressions like "dynamic typing" are pretty much an abuse of 
> terminology.

Yes, of course that is the case.  Common Lisp's definition of type tends
to be more consistent with the #1 definition of type in my dictionary, not
the #9 mathematical ones.  Why do you state this fact pejoratively?  Who's
to say that it isn't the Mathematics version of the word which is the
confiscation and abuse of the original meaning of the word?

> Yes, I'm only nitpicking about terminology here.

And as such, you should be accurate in your nitpicking, by providing
the background context from which you are picking at those nits.

> Actually, I don't even mind 
> much if people call it "dynamic typing". I just made this point to disprove 
> Tayss' inappropriate claim that "the static typing world has no coherent 
> vocabulary".

I agree that such a claim is inappropriate.  However, physician, heal
thyself:

> In fact, it is the other way round: other "communities" have 
> absorbed words like "type" and "type system" without caring what they 
> really meant and changed their meaning almost to the point of 
> unrecognizability.

It may seem that I am arguing against your terminology.  I am not.
In fact, communication of terminology in the context in which it is
being discussed yields great understanding.  However, that context
must never be assumed, but must be made explicit.  You have, now,
to a large extent, stated the context from which you derive your
own definitions of type.  You must now understand that others do
not share your context, and that the context from which those others
talk about types and type systems are no less recognizable _in_
the _correct_ _context_ as your usage is in your context.  The
extent to which these terms become unrecognizable to you points
out a blind spot in your own knowledge of various contexts that
exist for these terms.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m13ccqlqej.fsf@tti5.uchicago.edu>
Duane Rettig <·····@franz.com> writes:

> >   "A type system is a tractable syntactic method for proving the absence of 
> > certain program behaviour by classifying phrases according to the kinds of 
> > values they compute."
> 
> My dictionary, not necessarily a complete one, has no less than 16 complete
> definitions for the word "type" as a noun (as well as 4 as an adjective and
> 6 as a verb).  Of these, only one has anything to do with Mathematics.  And
> even the definition for the Mathematics sense is split up into a pair of
> sub-definitions, so it is not even a unified definition.  Also, the
> Mathematics-based definition is not the first one on the list - the first
> definition of type is the one most commonly used: 
> 
>  a kind, class, or group alike in some important way (ex: three types of
>  local government; "Smallpox of the most malignant type" (Macaulay))
>  [from World Book Dictionary, c. 1982]

Notice that the object of discussion is "type system", not "type".
How many entries does your dictionary have for "type system", and what
do they say?
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <41xsabunj.fsf@franz.com>
Matthias Blume <····@my.address.elsewhere> writes:

> Duane Rettig <·····@franz.com> writes:
> 
> > >   "A type system is a tractable syntactic method for proving the absence of 
> > > certain program behaviour by classifying phrases according to the kinds of 
> > > values they compute."
> > 
> > My dictionary, not necessarily a complete one, has no less than 16 complete
> > definitions for the word "type" as a noun (as well as 4 as an adjective and
> > 6 as a verb).  Of these, only one has anything to do with Mathematics.  And
> > even the definition for the Mathematics sense is split up into a pair of
> > sub-definitions, so it is not even a unified definition.  Also, the
> > Mathematics-based definition is not the first one on the list - the first
> > definition of type is the one most commonly used: 
> > 
> >  a kind, class, or group alike in some important way (ex: three types of
> >  local government; "Smallpox of the most malignant type" (Macaulay))
> >  [from World Book Dictionary, c. 1982]
> 
> Notice that the object of discussion is "type system", not "type".

Incorrect.  The specific paragraph that Andreas wrote to which both
Joe and I originally responded didn't even mention "type system":

[Andreas:]
> In the proper meaning of words, Lisp is only typed in the completely trivial 
> sense I sketched above. Lisp may have something called types, but they 
> aren't types in any standard formal sense.

and even Andreas' response to Joe included the word "type" in his
complaint:

[Andreas:]
> In fact, it is the other way round: other "communities" have 
> absorbed words like "type" and "type system" without caring what they 
> really meant and changed their meaning almost to the point of 
> unrecognizability.

My whole point, as it has been with other terminology issues, is
that it is essential in order for groups of people to understand
each other that each group make the contexts of their discussions
explicit, and not assume that the other group has the same basis
for their terminologies, whether by definition or by construction.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1smkqk8vf.fsf@tti5.uchicago.edu>
Duane Rettig <·····@franz.com> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > Duane Rettig <·····@franz.com> writes:
> > 
> > > >   "A type system is a tractable syntactic method for proving the absence of 
> > > > certain program behaviour by classifying phrases according to the kinds of 
> > > > values they compute."
> > > 
> > > My dictionary, not necessarily a complete one, has no less than 16 complete
> > > definitions for the word "type" as a noun (as well as 4 as an adjective and
> > > 6 as a verb).  Of these, only one has anything to do with Mathematics.  And
> > > even the definition for the Mathematics sense is split up into a pair of
> > > sub-definitions, so it is not even a unified definition.  Also, the
> > > Mathematics-based definition is not the first one on the list - the first
> > > definition of type is the one most commonly used: 
> > > 
> > >  a kind, class, or group alike in some important way (ex: three types of
> > >  local government; "Smallpox of the most malignant type" (Macaulay))
> > >  [from World Book Dictionary, c. 1982]
> > 
> > Notice that the object of discussion is "type system", not "type".
> 
> Incorrect.  The specific paragraph that Andreas wrote to which both
> Joe and I originally responded didn't even mention "type system":

Maybe my newsreader is playing tricks on me, but the paragraph that
you replied to seems to be the one where Andreas quoted Benjamin
Pierce's book, in particular the sentence (still visible above) that
starts with "A type system ...".

This is the paragraph that you originally replied to with your
dictionary entry retort.

Anyway, rest assured that both Benjamin and Andreas, when they refer
to "types", refer to the types in "type systems".

As I have already tried to explain earlier in a different subthread,
any other notion of "type" which merely makes "type" a synonym for
"set of values" isn't very interesting, and it certainly is not the
object of study when one investigates type /systems/.  (The fact that
static types are sometimes interpreted as sets of values
notwithstanding.)

> My whole point, as it has been with other terminology issues, is
> that it is essential in order for groups of people to understand
> each other that each group make the contexts of their discussions
> explicit, and not assume that the other group has the same basis
> for their terminologies, whether by definition or by construction.

Sure, but you already knew in which sense Andreas is using the word
type, right?  (I'm pretty sure about that because this is not the first
time we have had this particular discussion.  I have too high an opinion of
you to think otherwise.)  But then, isn't it somewhat
counterproductive to a meaningful debate if one /intentionally/
misunderstands (or pretends to misunderstand) the other side?
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <4wua2ab9j.fsf@franz.com>
Matthias Blume <····@my.address.elsewhere> writes:

> Duane Rettig <·····@franz.com> writes:
> 
> > Matthias Blume <····@my.address.elsewhere> writes:
> > 
> > > Duane Rettig <·····@franz.com> writes:
> > > 
> > > > >   "A type system is a tractable syntactic method for proving the absence of 
> > > > > certain program behaviour by classifying phrases according to the kinds of 
> > > > > values they compute."
> > > > 
> > > > My dictionary, not necessarily a complete one, has no less than 16 complete
> > > > definitions for the word "type" as a noun (as well as 4 as an adjective and
> > > > 6 as a verb).  Of these, only one has anything to do with Mathematics.  And
> > > > even the definition for the Mathematics sense is split up into a pair of
> > > > sub-definitions, so it is not even a unified definition.  Also, the
> > > > Mathematics-based definition is not the first one on the list - the first
> > > > definition of type is the one most commonly used: 
> > > > 
> > > >  a kind, class, or group alike in some important way (ex: three types of
> > > >  local government; "Smallpox of the most malignant type" (Macaulay)
> > > >  [from World Book Dictionary, c. 1982]
> > > 
> > > Notice that the object of discussion is "type system", not "type".
> > 
> > Incorrect.  The specific paragraph that Andreas wrote to which both
> > Joe and I originally responded didn't even mention "type system":
> 
> Maybe my newsreader is playing tricks on me, but the paragraph that
> you replied to seems to be the one where Andreas quoted Benjamin
> Pierce's book, in particular the sentence (still visible above) that
> starts with "A type system ...".
> 
> This is the paragraph that you originally replied to with your
> dictionary entry retort.

My entry into this discussion did not start with a dictionary entry,
but with responses to the first paragraph I re-quoted for you.  In
no way was my reply to his definition with a definition an "original"
reply.

> Anyway, rest assured that both Benjamin and Andreas, when they refer
> to "types", refer to the types in "type systems".

Unless there is an explicit statement of such background (and indeed
of what "type system" in fact means), there can be no such rest.  I
did look it up, and indeed there is no formal definition for the
term "type system" in the Common Lisp spec.  However, to a Common Lisper,
and in the CL context, the term tends to be understood by construction
as the part of the CL system which implements types (where "type" is
indeed defined by CL as "a set of objects, usually with common structure,
behavior, or purpose"
(http://www.franz.com/support/documentation/6.2/ansicl/glossary/t.htm)).
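The "set of objects" reading above can be made concrete with a minimal sketch (an editorial illustration in Python, not from the thread; the closer CL analogue would be TYPEP with a SATISFIES type specifier, and `even_integer` is a hypothetical example type):

```python
# Editorial sketch (hypothetical, not from the thread): the "type as a
# set of objects" reading, modeled as a run-time membership predicate,
# loosely analogous to CL's TYPEP with a SATISFIES type specifier.

def typep(obj, type_predicate):
    """Return True when obj belongs to the 'type' (set) the predicate defines."""
    return type_predicate(obj)

# A hypothetical type: the set of all even integers.
def even_integer(x):
    return isinstance(x, int) and x % 2 == 0

print(typep(4, even_integer))    # True
print(typep(5, even_integer))    # False
print(typep("4", even_integer))  # False
```

On this view, any predicate carves out a type; whether such run-time-checkable sets deserve the name "type system" is exactly what the thread disputes.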

> As I have already tried to explain earlier in a different subthread,
> any other notion of "type" which merely makes "type" a synonym for
> "set of values" isn't very interesting, and it certainly is not the
> object of study when one investigates type /systems/.  (The fact that
> static types are sometimes interpreted as sets of values
> notwithstanding.)

Perhaps not interesting to you, but to others it may in fact be very
interesting.  To a Common Lisper, such a definition of type is in
fact extremely close to the CL definition of type (see above).

> > My whole point, as it has been with other terminology issues, is
> > that it is essential in order for groups of people to understand
> > each other that each group make the contexts of their discussions
> > explicit, and not assume that the other group has the same basis
> > for their terminologies, whether by definition or by construction.
> 
> Sure, but you already knew in which sense Andreas is using the word
> type, right?  (I'm pretty sure about that because this is not the first
> time we have had this particular discussion.  I have too high an opinion of
> you to think otherwise.)  But then, isn't it somewhat
> counterproductive to a meaningful debate if one /intentionally/
> misunderstands (or pretends to misunderstand) the other side?

Thanks for the compliment, but what's with this "other side" bit?
Do you even know what I think about static typing?  It may actually
surprise you to learn what I think of static typing and type
inferencing, but it would not be surprising if you think of my
role as an implementor of a dynamic language in a world where
performance is crucial for survival.

No, I have stayed out of most of this thread because I _do_
understand what people on both "sides" are saying, and I also
understand the importance of both styles of thought process.

If you go back to the origins of my entry into this thread,
you'll find evidence that allows you to conclude (correctly)
that I entered this thread not because of Andreas's use of
the terms he was using, but because of his preclusion of any
other valid use of the same terms in different contexts.  Specifically,
he had made a statement that Lisp's types "aren't types in any standard
formal sense", and that is simply an incorrect statement.  It
might have been correct to say "aren't types in the standard
formal sense defined by logic and logic programming", but to include
"any standard formal sense" in that statement makes it too broad and
easy to disprove by counterexample.  In a nutshell, I am disagreeing
with his exclusivism, not his definitions, per se.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <4qx6mzf8.fsf@ccs.neu.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> As I have already tried to explain earlier in a different subthread,
> any other notion of "type" which merely makes "type" a synonym for
> "set of values" isn't very interesting, 

To whom?  Why not?

> and it certainly is not the object of study when one investigates
> type /systems/.  (The fact that static types are sometimes
> interpreted as sets of values notwithstanding.)

Presumably a type system would investigate relations between types,
not the elements of the types.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1k762k61r.fsf@tti5.uchicago.edu>
Joe Marshall <···@ccs.neu.edu> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > As I have already tried to explain earlier in a different subthread,
> > any other notion of "type" which merely makes "type" a synonym for
> > "set of values" isn't very interesting, 
> 
> To whom?  Why not?

Because just re-labeling something for which there is a perfectly good
term isn't going to do any good.  I can say "sets of values".  Why
invent another word for that -- especially if that word is at odds
with other usages?

> > and it certainly is not the object of study when one investigates
> > type /systems/.  (The fact that static types are sometimes
> > interpreted as sets of values notwithstanding.)
> 
> Presumably a type system would investigate relations between types,
> not the elements of the types.

Indeed.
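The point both posters agree on can be sketched in a toy example (an editorial illustration, not from the thread): if types are modeled as sets of values, a type *system* concerns relations between the sets themselves, for instance subtyping read as subset inclusion.

```python
# Editorial sketch (not from the thread): toy finite sets standing in
# for types; a type system studies relations between the sets, e.g.
# subtyping as the subset relation, not the individual elements.

integers = {0, 1, 2, 3}   # hypothetical finite stand-ins for types
evens = {0, 2}

def subtype(t1, t2):
    """t1 is a subtype of t2 when every value of t1 is also a value of t2."""
    return t1 <= t2       # subset test on Python sets

print(subtype(evens, integers))   # True
print(subtype(integers, evens))   # False
```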
From: Tayss
Subject: Re: More static type fun.
Date: 
Message-ID: <5627c6fa.0311161257.36f1f669@posting.google.com>
Matthias Blume <····@my.address.elsewhere> wrote in message news:<··············@tti5.uchicago.edu>...
> Joe Marshall <···@ccs.neu.edu> writes:
> > Matthias Blume <····@my.address.elsewhere> writes:
> > > As I have already tried to explain earlier in a different subthread,
> > > any other notion of "type" which merely makes "type" a synonym for
> > > "set of values" isn't very interesting, 
> > To whom?  Why not?
> 
> Because just re-labeling something for which there is a perfectly good
> term isn't going to do any good.  I can say "sets of values".  Why
> invent another word for that -- especially if that word is at odds
> with other usages.

Are you sure you want to say "sets"?  How about collections?  I
wouldn't destroy vocabulary because some people happen to think a
concept is trivial.  (And one might say that when you name something,
of course you make it trivial.  So triviality shouldn't play a role.)

--
#\Tayssir     ;Tayssir's a character, WOCKA WOCKA!
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpaju6$8vl$1@grizzly.ps.uni-sb.de>
Duane Rettig wrote:
> 
>> You missed the point. It's not that you cannot formalize what Lisp does,
>> it's just that it isn't a "type system" in a plausible sense.
> 
> A plausible sense?  If you really believe that Common Lisp's type
> system is not plausible, then you truly don't know Common Lisp at
> all.

I didn't say it's not plausible either, I said it's not a type system (by a 
"plausible", i.e. comprehensive and established definition of that term).

In other words: I have yet to see a definition of "type system" that would 
include dynamic typing. It would definitely be non-standard, and that was 
about all I wanted to point out. In fact, most definitions by authorities 
in the field that I have seen (off my mind I remember at least Cardelli and 
Pierce, probably Mitchell) make clear that they consider languages with 
dynamic typing as untyped.

> My dictionary, not necessarily a complete one, has no less than 16
> complete definitions for the word "type" as a noun (as well as 4 as an
> adjective and
> 6 as a verb).

Come on, we all know that we are talking about types as in type systems.

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <4brrazzwi.fsf@franz.com>
Andreas Rossberg <········@ps.uni-sb.de> writes:

> Duane Rettig wrote:
> > 
> >> You missed the point. It's not that you cannot formalize what Lisp does,
> >> it's just that it isn't a "type system" in a plausible sense.
> > 
> > A plausible sense?  If you really believe that Common Lisp's type
> > system is not plausible, then you truly don't know Common Lisp at
> > all.
> 
> I didn't say it's not plausible either, I said it's not a type system (by a 
> "plausible", i.e. comprehensive and established definition of that term).

Only by the definitions you've chosen to adhere to, in a static typing
context.  It would be arrogant to assume that there are no others.

> In other words: I have yet to see a definition of "type system" that would 
> include dynamic typing. It would definitely be non-standard, and that was 
> about all I wanted to point out. In fact, most definitions by authorities 
> in the field that I have seen (off my mind I remember at least Cardelli and 
> Pierce, probably Mitchell) make clear that they consider languages with 
> dynamic typing as untyped.

These are all within the context of the static typing discipline.  So
I have no problem if you claim them to be authorities on the subject,
as long as you cite the context under which you are making the claim.
You have not been doing that, and that is the whole reason for my
responses.

And of course non-static languages should be considered untyped wrt static
typing, because that is definitional in the static typing sense.

> > My dictionary, not necessarily a complete one, has no less than 16
> > complete definitions for the word "type" as a noun (as well as 4 as an
> > adjective and
> > 6 as a verb).
> 
> Come on, we all know that we are talking about types as in type systems.

No, in fact, we are not, as is evidenced by the fact that we are
having this disagreement.  The fundamental difference is this:  Your claim
seems to be that there is only one definition of "type system", because it
happens to have been defined by the static typing community.  However, my
claim is that the phrase "type system" has an entirely different meaning,
by construction, in other communities, and that the definitions might clash.
For those communities who don't care, they may be willing to accept the
definitional meaning.  For the Lisp community, however, and more specifically
the Common Lisp branch which has an actual definition of type which does
in fact fit the dictionary definition, and which tends to assign meaning
to the phrase "type system" by construction, members of that community
cannot accept the static-typing community's definition, because it
conflicts with their own meaning.

This is not something you or the static-typing community will change,
unless you/they decide to change the definition so that it does not
overlap with or confiscate a generally constructable phrase of more
general meaning.  Likewise, the Common Lisp community is not likely to
suddenly change its mind and adhere to the static-typing community's
definition, unless it were always enclosed in "in the static-typing
sense" [1]. And I wouldn't expect any such changes to happen, because
such overloadings of terminology occur very frequently.  The important
bit of knowledge to keep in mind is that these overloadings of terminology
do often occur, and tend to be the cause of many arguments, many of
which could be avoided if all participants understood the contexts
and assumptions made by the parties involved.

Bottom line:  While you post to comp.lang.functional, the context in which
you are posting is implicit, and need not be stated.  However, when you
post to comp.lang.lisp, if you are talking about "types" or a "type system",
if you say it as a phrase: either "types (in the static-typing sense)"
or "type system (in the static-typing sense)", or if you make the context
explicit at the beginning of your article with a general explicit context
statement, then I wouldn't bother you about it again.

[1] Common Lisp, having grown over many years from different disciplines,
has its own such overloadings of terminology.  For example, we tend to
use the term "generic function" to mean either a function which handles
many kinds of (usually arithmetic) operands, or as defined by the standard
as meaning a CLOS function which has methods attached to it.  So one
might say, for example, that #'+ is "a generic function (in the generic
sense)", and that #'print-object is "a generic function (in the definitional
or CLOS sense)".  Such terminology overloadings are always a nuisance,
but never present a problem in understanding, as long as the term in
question is modified by its context.
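The second (CLOS) sense above has a rough Python analogue via functools.singledispatch (an editorial sketch with hypothetical names, not from the thread): one function object with type-specific methods attached. Note that CLOS dispatches on all arguments, while singledispatch dispatches only on the first, so this is only an approximation.

```python
# Editorial sketch (hypothetical names, not from the thread): a generic
# function "in the CLOS sense" -- one function object with methods
# attached per argument type -- approximated with functools.singledispatch.

from functools import singledispatch

@singledispatch
def describe(obj):
    return "object"          # default method

@describe.register
def _(obj: int):
    return "integer"         # method attached for integers

@describe.register
def _(obj: str):
    return "string"          # method attached for strings

print(describe(42))     # integer
print(describe("hi"))   # string
print(describe(3.14))   # object
```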

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m13ccmkj5n.fsf@tti5.uchicago.edu>
Duane Rettig <·····@franz.com> writes:

> > Come on, we all know that we are talking about types as in type systems.
> 
> No, in fact, we are not, as is evidenced by the fact that we are
> having this disagreement.

Oh, come on now!  You already admitted that you knew.

>  The fundamental difference is this:  Your claim
> seems to be that there is only one definition of "type system", because it
> happens to have been defined by the static typing community.

This is wrong.  The notion of type systems is much older than the
static typing community.

>  However, my
> claim is that the phrase "type system" has an entirely different meaning,
> by construction, in other communities, and that the definitions might clash.

What is the definition of "type system" in your community, if I may ask?

> For those communities who don't care, they may be willing to accept the
> definitional meaning.  For the Lisp community, however, and more specifically
> the Common Lisp branch which has an actual definition of type which does
> in fact fit the dictionary definition, and which tends to assign meaning
> to the phrase "type system" by construction, members of that community
> cannot accept the static-typing community's definition, because it
> conflicts with their own meaning.

Which "construction" are you talking about?  How do you get from
"type" to "type system" by "construction"?

> Bottom line:  While you post to comp.lang.functional, the context in which
> you are posting is implicit, and need not be stated.  However, when you
> post to comp.lang.lisp, if you are talking about "types" or a "type system",
> if you say it as a phrase: either "types (in the static-typing sense)"
> or "type system (in the static-typing sense)", or if you make the context
> explicit at the beginning of your article with a general explicit context
> statement, then I wouldn't bother you about it again.

Again, please clarify: What is a "type system" in the dynamic typing
sense?
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpcu3q$d8o$1@grizzly.ps.uni-sb.de>
Duane Rettig wrote:
> 
> Of course I "knew".  That is precisely my point.  I know that Andreas is
> talking about his own definition of types and type systems,

Once more: this is not *my* definition.

> and that
> others on this thread are _not_ talking about the same thing.  Note again,
> as I said earlier, that I am only in this thread to clear up definitional
> misunderstandings, not to argue about static vs dynamic typing (two
> disciplines each of which I find consistent within their own contexts).

Well, the joys of usenet. Actually, my entering this subthread was for 
pretty much the same reason, if you remember, and see where it went?

> The phrase "type system" is a pair of nouns, the first being a
> noun-modifier (similar to an adjective) and the second being the core noun
> of the phrase. Another way to express "a type system" in English is "a
> system of types".

Come on, we are talking about technical definitions here, not linguistics.

>> Again, please clarify: What is a "type system" in the dynamic typing
>> sense?
> 
> I can't speak for the whole dynamic typing community

Nobody expects that but I would appreciate at least some pointers. Just 
describing what your favorite language does is not insightful - you can 
always do that and it is arbitrary. The real issue is defining the term in 
a way that is *generic*, i.e. describes the general *characteristics* 
coherently, not the technicalities of some particular instance, with 
hand-waving reference to its implementation.

Can you point us to an appropriate reference? Simply arguing that the 
generic definition of the term I quoted and similar references I cited (and 
which you agreed were reasonable in their context) were done by static 
typing proponents is dodging the issue as long as you fail to come up with 
an alternative definition of comparable clarity and authority.

Even if some people prefer to think so, I'm not trolling around here. 
Actually, I would really be interested in such a reference, because I have 
unsuccessfully tried to find one for my own work (which, believe it or not, 
is about bridging between "static" and "dynamic" typing ;-) ).

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <4vfphva17.fsf@franz.com>
Andreas Rossberg <········@ps.uni-sb.de> writes:

> Duane Rettig wrote:
> > 
> > Of course I "knew".  That is precisely my point.  I know that Andreas is
> > talking about his own definition of types and type systems,
> 
> Once more: this is not *my* definition.

It is not your definition in the sense that you had created it, but
it is the definition you have accepted as what you believe is the
correct definition.  In that sense it is "yours".  It is the same
sense in which I consider Common Lisp to be "my language", even
though I did not invent CL, and even though I've programmed in many
other languages, CL is the language I currently make my living at,
and so I consider it to be "mine".

> > and that
> > others on this thread are _not_ talking about the same thing.  Note again,
> > as I said earlier, that I am only in this thread to clear up definitional
> > misunderstandings, not to argue about static vs dynamic typing (two
> > disciplines each of which I find consistent within their own contexts).
> 
> Well, the joys of usenet. Actually, my entering this subthread was for 
> pretty much the same reason, if you remember, and see where it went?

Yes, I remember you talking about picking a nit.  Didn't you realize
that nits are usually the most painful to pick?  :-)

> > The phrase "type system" is a pair of nouns, the first being a
> > noun-modifier (similar to an adjective) and the second being the core noun
> > of the phrase. Another way to express "a type system" in English is "a
> > system of types".
> 
> Come on, we are talking about technical definitions here, not linguistics.

You've said this several times, so I assume that you're not going
to accept my correction of your misconception, or my attempt to
lead you to the realization that technical definitions _are_
linguistic in nature, and that they generally obey the rules of
linguistics.  I've tried to teach you this indirectly, without
actually stating the case as I just did now, but instead tried to
lead you to the natural conclusion that you might make for
yourself.  But you haven't made the connection through this
indirect path, presumably because you believe technical definitions
to be special beasts, unfettered by the rules of linguistics.

I probably won't go much further on this, but I'll leave you with
this thought:

Language (i.e. of the linguistic kind) is essential for communication
with others about concepts.  And that communication is based on not
only what is written or spoken, but also upon the commonality of the
backgrounds which each of the speaker/writer and the listener/reader
have with which to interpret the explicit transfer.  To the
extent that their backgrounds (or even portions of background, or
contextual settings) are different, then even if what is written or spoken
is transmitted and received perfectly, the communication is incomplete,
because the interpretation is from a different background or context.
In order to perfect (or at least improve) the communication, the
assumptions must be made explicit, so that the reason for the
miscommunication can be understood.

I would even go so far as to say that over half (or more) of all
arguments between reasonable people are based on hidden assumptions
(e.g. unknown by one party, and the fact that it is unknown is not
realized by the other party), and that once these assumptions are
brought out into the open and understood by each party, the argument
ceases, even if the parties still disagree with each other - at least
they have finally understood each other.

> >> Again, please clarify: What is a "type system" in the dynamic typing
> >> sense?
> > 
> > I can't speak for the whole dynamic typing community
> 
> Nobody expects that but I would appreciate at least some pointers. Just 
> describing what your favorite language does is not insightful - you can 
> always do that and it is arbitrary. The real issue is defining the term in 
> a way that is *generic*, i.e. describes the general *characteristics* 
> coherently, not the technicalities of some particular instance, with 
> hand-waving reference to its implementation.

My reason for not speaking for the whole dynamic typing community is
because I don't believe that CL is a purely dynamic language.  It is
a hybrid, which has grown with the goal of practicality, to assimilate
portions of many disciplines.  So although I could speak for the CL
community (which apparently you are not interested in) I would not
want to speak for a pure dynamic-typing community, since I don't
consider myself to be a member of that community.

> Can you point us to an appropriate reference? Simply arguing that the 
> generic definition of the term I quoted and similar references I cited (and 
> which you agreed were reasonable in their context) were done by static 
> typing proponents is dodging the issue as long as you fail to come up with 
> an alternative definition of comparable clarity and authority.
> 
> Even if some people prefer to think so, I'm not trolling around here. 
> Actually, I would really be interested in such a reference, because I have 
> unsuccessfully tried to find one for my own work (which, believe it or not, 
> is about bridging between "static" and "dynamic" typing ;-) ).

You've already gotten a response from Pascal, and I'll answer your reply
to his response.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpcvm9$uv2$1@f1node01.rhrz.uni-bonn.de>
Andreas Rossberg wrote:

> Can you point us to an appropriate reference? Simply arguing that the 
> generic definition of the term I quoted and similar references I cited (and 
> which you agreed were reasonable in their context) were done by static 
> typing proponents is dodging the issue as long as you fail to come up with 
> an alternative definition of comparable clarity and authority.
> 
> Even if some people prefer to think so, I'm not trolling around here. 
> Actually, I would really be interested in such a reference, because I have 
> unsuccessfully tried to find one for my own work (which, believe it or not, 
> is about bridging between "static" and "dynamic" typing ;-) ).

The following might be helpful. (Click on the doi link.)

@article{114671,
  author = {Richard P. Gabriel and Jon L. White and Daniel G. Bobrow},
  title = {CLOS: integrating object-oriented and functional programming},
  journal = {Commun. ACM},
  volume = {34},
  number = {9},
  year = {1991},
  issn = {0001-0782},
  pages = {29--38},
  doi = {http://doi.acm.org/10.1145/114669.114671},
  publisher = {ACM Press},
  }


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpd698$j0e$1@grizzly.ps.uni-sb.de>
Pascal Costanza wrote:
> 
> The following might be helpful. (Click on the doi link.)
> 
> @article{114671,
>   author = {Richard P. Gabriel and Jon L. White and Daniel G. Bobrow},
>   title = {CLOS: integrating object-oriented and functional programming},
>   journal = {Commun. ACM},
>   volume = {34},
>   number = {9},
>   year = {1991},
>   issn = {0001-0782},
>   pages = {29--38},
>   doi = {http://doi.acm.org/10.1145/114669.114671},
>   publisher = {ACM Press},
>   }

Thanks for the pointer. The paper contains a high-level discussion of CLOS 
and some of the design space behind it, which certainly is interesting. In 
one section it discusses different pragmatic notions of "type" informally. 
But unless I missed something it does nowhere explain or even make precise 
what a type system actually is or does.

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <4oev9v95x.fsf@franz.com>
Andreas Rossberg <········@ps.uni-sb.de> writes:

> Pascal Costanza wrote:
> > 
> > The following might be helpful. (Click on the doi link.)
> > 
> > @article{114671,
> >   author = {Richard P. Gabriel and Jon L. White and Daniel G. Bobrow},
> >   title = {CLOS: integrating object-oriented and functional programming},
> >   journal = {Commun. ACM},
> >   volume = {34},
> >   number = {9},
> >   year = {1991},
> >   issn = {0001-0782},
> >   pages = {29--38},
> >   doi = {http://doi.acm.org/10.1145/114669.114671},
> >   publisher = {ACM Press},
> >   }
> 
> Thanks for the pointer. The paper contains a high-level discussion of CLOS 
> and some of the design space behind it, which certainly is interesting. In 
> one section it discusses different pragmatic notions of "type" informally. 

This is precisely the intention of such discussions, and is in harmony
with the Lisp notion of "late binding".

> But unless I missed something it does nowhere explain or even make precise 
> what a type system actually is or does.

If you'll only accept formal definitions, I guarantee you will be
frustrated, because one of the aspects of "the Lisp way" is that
definitions are bound as late as possible, to allow new aspects of
a concept which had otherwise not yet been explored to be incorporated
without having to change basic assumptions.  It is for this reason that
I object to your use of static-typing definitions within the context
of Common Lisp without qualifying the context.

Interestingly (to me, at least), many people have asked Lispers to
describe "the Lisp way", and several have tried to define it.  But
I believe all have failed (many have come up with their own
definitions or descriptions, but others have knocked those descriptions
down with counterexamples or additions).  My own personal belief is that
"the Lisp way" cannot be defined, because it is not "the Lisp way"
to do so.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpe2uu$8ol$1@grizzly.ps.uni-sb.de>
Duane Rettig <·····@franz.com> wrote:
> >
> > Thanks for the pointer. The paper contains a high-level discussion of CLOS
> > and some of the design space behind it, which certainly is interesting. In
> > one section it discusses different pragmatic notions of "type" informally.
>
> This is precisely the intention of such discussions, and is in harmony
> with the Lisp notion of "late binding".

> > But unless I missed something it does nowhere explain or even make precise
> > what a type system actually is or does.
>
> If you'll only accept formal definitions,

I'm not looking for a formal definition, just a reasonably comprehensive and
coherent one.

> I guarantee you will be
> frustrated, because one of the aspects of "the Lisp way" is that
> definitions are bound as late as possible, to allow new aspects of
> a concept which had otherwise not yet been explored to be incorporated
> without having to change basic assumptions.  It is for this reason that
> I object to your use of static-typing definitions within the context
> of Common Lisp without qualifying the context.
>
> Interestingly (to me, at least), many people have asked Lispers to
> describe "the Lisp way", and several have tried to define it.  But
> I believe all have failed (many have come up with their own
> definitions or descriptions, but others have knocked those descriptions
> down with cunterexamples or additions).  My own personal belief is that
> "the Lisp way" cannot be defined, because it is not "the Lisp way"
> to do so.


You cannot be serious! How can you possibly communicate if all language you
use is so fuzzy that anybody can interpret it his way? Or even vacuous, as
your description of "the Lisp way", such that you can practically retrofit
any meaning you want???

I'm sorry, but this not only counteracts several thousand years of rhetoric
and scientific discourse, it boils down to a mere question of belief, i.e.
pure religion. Or comedy.
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <4ptfpf9dk.fsf@franz.com>
"Andreas Rossberg" <········@ps.uni-sb.de> writes:

> Duane Rettig <·····@franz.com> wrote:
> > >
> > > Thanks for the pointer. The paper contains a high-level discussion of CLOS
> > > and some of the design space behind it, which certainly is interesting. In
> > > one section it discusses different pragmatic notions of "type" informally.
> >
> > This is precisely the intention of such discussions, and is in harmony
> > with the Lisp notion of "late binding".
> 
> > > But unless I missed something it does nowhere explain or even make precise
> > > what a type system actually is or does.
> >
> > If you'll only accept formal definitions,
> 
> I'm not looking for a formal definition, just a reasonably comprehensive
> and coherent one.

OK.  I was responding in part to your earlier request for "an
alternative definition of comparable clarity and authority.",
coupled with your request above for an explanation or "precise"-making
of what a type system actually is or does.

Perhaps this does not necessarily mean a formal definition.  But in
most contexts that I frequent, this combination in fact constitutes a
fairly good description of what is required for a formal definition.

> > I guarantee you will be
> > frustrated, because one of the aspects of "the Lisp way" is that
> > definitions are bound as late as possible, to allow new aspects of
> > a concept which had otherwise not yet been explored to be incorporated
> > without having to change basic assumptions.  It is for this reason that
> > I object to your use of static-typing definitions within the context
> > of Common Lisp without qualifying the context.
> >
> > Interestingly (to me, at least), many people have asked Lispers to
> > describe "the Lisp way", and several have tried to define it.  But
> > I believe all have failed (many have come up with their own
> > definitions or descriptions, but others have knocked those descriptions
> > down with counterexamples or additions).  My own personal belief is that
> > "the Lisp way" cannot be defined, because it is not "the Lisp way"
> > to do so.
> 
> You cannot be serious! How can you possibly communicate if all language you
> use is so fuzzy that anybody can interpret it his way? Or even vacuous, as
> your description of "the Lisp way", such that you can practically retrofit
> any meaning you want???

It's quite a leap to get from "the Lisp way" being undefinable to "all"
language I use being fuzzy.  Dude, you really need to chill.

> I'm sorry, but this not only counteracts several thousand years of rhetoric
> and scientific discourse, it boils down to a mere question of belief, i.e.
> pure religion. Or comedy.

Or experience with it.  Perhaps it is a mixture of all of these.
Perhaps it is a red herring, because there is no such thing as
"the Lisp way".  Such is the Lisp way...

In the same way, perhaps the mysterious differences between static
and dynamic types, and the definitions of type systems, don't matter
that much to Common Lispers, because these differences are operational
rather than definitional.

My advice, and my example (which apparently has hit a sore spot) was
based on your request of a few posts ago:

| Even if some people prefer to think so, I'm not trolling around here. 
| Actually, I would really be interested in such a reference, because I have 
| unsuccessfully tried to find one for my own work (which, believe it or not, 
| is about bridging between "static" and "dynamic" typing ;-) ).

I don't know what you mean here by "dynamic" typing, but I took your
use of the word "bridge" to mean that you want to tie the two
different concepts together (as opposed to just extending the work
that has been done on "dynamic" typing within the static type
community).  If this is the case, and you truly want to bridge
two communities, then I suggest that you'll be more successful
if you understand the other community.

It would not be necessary to understand the Lisp community in order
to understand dynamic typing.  However, if you consider Common Lisp
to be representative of dynamic typing, then I would think that in
order to build a bridge you would need to spend some time in Common
Lisp and in its community.  If you can't understand "the
Lisp way" (or, in fact, why it is hard to define), then you definitely
haven't spent enough time in Common Lisp.  I would highly recommend
it, in order for you to build a good, sturdy bridge.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: ··········@ii.uib.no
Subject: Re: More static type fun.
Date: 
Message-ID: <egn0aspurz.fsf@sefirot.ii.uib.no>
Duane Rettig <·····@franz.com> writes:

> Dude, you really need to chill.

Right on. :-)

I'm really impressed by the heat generated from the simple statement
that "Lisp doesn't have a type system".  I appreciate that the devout
feel this as a blatant attack on their favorite language, but by now
it should be pretty clear that we all know Lisp has types, that "type
system" means something else in the Lisp newsgroups than in Cardelli's
papers, and could we all please just move on?

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <raffaelcavallaro-9DC0C8.00233219112003@netnews.attbi.com>
In article <·············@franz.com>, Duane Rettig <·····@franz.com> 
wrote:

> My own personal belief is that
> "the Lisp way" cannot be defined, because it is not "the Lisp way"
> to do so.

Does that make Lao Tzu the first lisper?
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <4k75wnaoo.fsf@franz.com>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:

> In article <·············@franz.com>, Duane Rettig <·····@franz.com> 
> wrote:
> 
> > My own personal belief is that
> > "the Lisp way" cannot be defined, because it is not "the Lisp way"
> > to do so.
> 
> Does that make Lao Tzu the first lisper?

:-)

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpft7e$107k$1@f1node01.rhrz.uni-bonn.de>
Duane Rettig wrote:

> Interestingly (to me, at least), many people have asked Lispers to
> describe "the Lisp way", and several have tried to define it.  But
> I believe all have failed (many have come up with their own
> definitions or descriptions, but others have knocked those descriptions
> down with counterexamples or additions).  My own personal belief is that
> "the Lisp way" cannot be defined, because it is not "the Lisp way"
> to do so.

:)

So, does Lisp have the Zen nature?

Does it have the quality without a name?


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fbc7d4a@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>Andreas Rossberg wrote:
>
>> Can you point us to an appropriate reference? Simply arguing that the 
>> generic definition of the term I quoted and similar references I cited (and 
>> which you agreed were reasonable in their context) were done by static 
>> typing proponents is dodging the issue as long as you fail to come up with 
>> an alternative definition of comparable clarity and authority.
>> 
>> Even if some people prefer to think so, I'm not trolling around here. 
>> Actually, I would really be interested in such a reference, because I have 
>> unsuccessfully tried to find one for my own work (which, believe it or not, 
>> is about bridging between "static" and "dynamic" typing ;-) ).

I can give you a reference: Lee Naish's work on types in logic programming,
specifically his paper "Types and intended meaning" [1], and the NU-prolog
type system which he and others implemented and which is briefly described
in section 3.1.2 of [2].

Naish's paper [1] discusses a framework for type systems in logic programming,
using a definition that treats types as sets (not necessarily decidable),
and a notion of type systems for which it may not be possible to detect
all type errors at compile time.  His paper mainly uses the phrase
"type scheme" (or just the word "scheme"), which I think is intended
to be more general than "type system", in that a "type system" would
be an instance of a "type scheme".  For example, I think in Naish's
terminology the ML type system would be an instance of the type scheme
described by Milner [4].  But Naish does at one point use the phrase
"type system" in [1], and Thompson (who was working as Naish's RA) also
used that phrase when referring in [2] to Naish's implementation.

I found Naish's use of the word "type" and the phrase "type system"
surprising when I first encountered it, and considered it potentially
misleading.  Indeed, when I first encountered this usage, I might
have considered it an abuse of terminology.  However, I have since
been persuaded that this use is reasonable, and indeed have used the
terminology this way myself.  For example in 1999 I referred to "the
NU-Prolog type system" in an email [3] where I described its influence
on the Mercury type system.

In fact the original Mercury type system was very similar to the NU-Prolog
type system.  The main difference between them, in fact, was that Mercury
restricted types to a decidable set and thus guaranteed to report all type
errors at compile time.  But apart from that, they were very similar,
even using the same syntax, and so it was extremely natural to use the
phrase "type system" to refer to both.  To use completely different
terminology for these very similar systems would have been rather
counter-intuitive.

I don't think there is any need to restrict the meaning of "type system"
so that it refers only to systems in which type checking is done entirely
at compile time.  The phrase "static type system" covers that concept
quite nicely.  It's important to have _some_ phrase whose meaning includes
the type systems of both NU-Prolog and Mercury, and "type system" is the
obvious one.


References
----------

[1] Lee Naish. "Types and intended meaning."  In F. Pfenning, editor,
    Types in Logic Programming, pages 189-215. MIT Press, 1992.

[2] Bert Thompson, "A Guide To The NU-Prolog Debugging Environment",
    Technical Report 96/38, Department of Computer Science,
    The University of Melbourne, 1996.
    <http://citeseer.nj.nec.com/11193.html>.

[3] Fergus Henderson, email to the mercury-users mailing list, December 1999.
    <http://www.cs.mu.oz.au/mercury/mailing-lists/mercury-users/mercury-users.9911/0025.html>.

[4] Robin Milner. "A theory of type polymorphism in programming."
    Journal of Computer and System Sciences, volume 17, number 3,
    December 1978, pages 348-375.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpimpb$cds$1@grizzly.ps.uni-sb.de>
Fergus Henderson wrote:
>
>>> Actually, I would really be interested in such a reference, because I
>>> have unsuccessfully tried to find one for my own work (which, believe it
>>> or not, is about bridging between "static" and "dynamic" typing ;-) ).
> 
> I can give you a reference: Lee Naish's work on types in logic
> programming, specifically his paper "Types and intended meaning" [1], and
> the NU-prolog type system which he and others implemented and which is
> briefly described in section 3.1.2 of [2].

That's very interesting, thanks for the pointer! I will certainly have a 
look.

        - Andreas
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpctnq$cvv$1@grizzly.ps.uni-sb.de>
Joe Marshall wrote:
> 
>> In other words:  I have yet to see a definition of "type system" that
>> would include dynamic typing.
> 
> You need to get out more. These papers may be of interest:

Oh well, I was about to include "like in Lisp" in order to make it crystal 
clear what kind of dynamic typing I was talking about. I decided against it 
because I feared it would again be misinterpreted as Lisp bashing. And I 
thought it would be clear from the context of the thread anyway. Obviously, 
I was wrong.

> @inproceedings{ shields98dynamic,
>     author = "Mark Shields and Tim Sheard and Simon L. Peyton Jones",
>     title = "Dynamic Typing as Staged Type Inference",
>     booktitle = "Symposium on Principles of Programming Languages",
>     pages = "289-302",
>     year = "1998",
>     url = "citeseer.nj.nec.com/shields98dynamic.html" }
> 
> @article{ abadi91dynamic,
>     author = "Mart{\'\i}n Abadi and Luca Cardelli and Benjamin Pierce
>               and Gordon Plotkin",
>     title = "Dynamic Typing in a Statically Typed Language",
>     journal = "ACM Transactions on Programming Languages and Systems",
>     volume = "13",
>     number = "2",
>     month = "April",
>     publisher = "ACM Press",
>     pages = "237--268",
>     year = "1991",
>     url = "citeseer.nj.nec.com/abadi89dynamic.html" }
> 
> @inproceedings{ abadi92dynamic,
>     author = "M. Abadi and L. Cardelli and B. Pierce and G. Plotkin
>               and D. R{\`e}my",
>     title = "Dynamic Typing in Polymorphic Languages",
>     booktitle = "Proceedings of the {ACM} {SIGPLAN} Workshop on {ML}
>                  and its Applications",
>     month = "June",
>     address = "San Francisco",
>     year = "1992",
>     url = "citeseer.nj.nec.com/abadi92dynamic.html" }

Oh, I know these papers very well, thank you. If you look closer you will 
notice that they propose quite a different notion of dynamic typing, namely 
one that does *not* compromise the characteristics of (static) type 
systems.

That approach to dynamic typing has already been proposed several times in 
this thread, with the standard response from the Lisp camp that it was "the 
wrong default". <shrug>

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <znetvdjl.fsf@ccs.neu.edu>
Andreas Rossberg <········@ps.uni-sb.de> writes:

> Joe Marshall wrote:
>> 
>>> In other words:  I have yet to see a definition of "type system" that
>>> would include dynamic typing.
>> 
>> You need to get out more. These papers may be of interest:
>
> Oh well, I was about to include "like in Lisp" in order to make it crystal 
> clear what kind of dynamic typing I was talking about. I decided against it 
> because I feared it would again be misinterpreted as Lisp bashing. And I 
> thought it would be clear from the context of the thread anyway. Obviously, 
> I was wrong.

Given the amount of mis-interpretation this thread has seen, I'd be
very careful.

> Oh, I know these papers very well, thank you. If you look closer you will 
> notice that they propose quite a different notion of dynamic typing, namely 
> one that does *not* compromise the characteristics of (static) type 
> systems.

Which characteristics are those?  It certainly compromises the `static' part.

> That approach to dynamic typing has already been proposed several times in 
> this thread, with the standard response from the Lisp camp that it was "the 
> wrong default". <shrug>

The Lisp camp thinks that kowtowing to the type checker is the wrong
default, not that finding *actual* bugs as soon as possible is.
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpdkmi$t5$1@grizzly.ps.uni-sb.de>
Joe Marshall wrote:
> 
>> Oh, I know these papers very well, thank you. If you look closer you will
>> notice that they propose quite a different notion of dynamic typing,
>> namely one that does *not* compromise the characteristics of (static)
>> type systems.
> 
> Which characteristics are those?  It certainly compromises the `static'
> part.

No, that's the whole point. Everything in the program remains statically 
typesafe. The only operation that can fail is the typecase, if it is 
allowed to be non-exhaustive (which, of course, is statically detectable). 
Errors are kept local, except if you pass around dynamics, and then you 
explicitly ask for it. And even then you won't get failure at unexpected 
places.
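
A sketch of that isolation in TypeScript (a statically typed language
chosen purely for illustration; the type `Dyn`, its tags, and the
function `show` are invented here, not taken from the papers under
discussion):

```typescript
// A "dynamic" value: statically it can only be one of the listed tags.
type Dyn =
  | { tag: "int"; value: number }
  | { tag: "str"; value: string };

// An exhaustive typecase: every variant of Dyn is handled.
function show(d: Dyn): string {
  switch (d.tag) {
    case "int": return `int: ${d.value}`;
    case "str": return `str: ${d.value}`;
    default: {
      // If a new variant is ever added to Dyn, `d` no longer narrows
      // to `never` here and this assignment stops compiling: the
      // non-exhaustive typecase is detected statically.
      const unreachable: never = d;
      return unreachable;
    }
  }
}
```

Everything outside `show` keeps its precise static type; only the
typecase confronts the statically undetermined tag, and making it
non-exhaustive is a compile-time error rather than a surprise later.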

>> That approach to dynamic typing has already been proposed several times
>> in this thread, with the standard response from the Lisp camp that it was
>> "the wrong default". <shrug>
> 
> The Lisp camp thinks that kowtowing to the type checker is the wrong
> default, not that finding *actual* bugs as soon as possible.

The other "camp" thinks that it isn't "kowtowing" at all, and highly 
appreciates the type checker's constructive assistance in finding *actual* 
bugs.

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <znettu52.fsf@ccs.neu.edu>
Andreas Rossberg <········@ps.uni-sb.de> writes:

> Joe Marshall wrote:
>> 
>>> Oh, I know these papers very well, thank you. If you look closer you will
>>> notice that they propose quite a different notion of dynamic typing,
>>> namely one that does *not* compromise the characteristics of (static)
>>> type systems.
>> 
>> Which characteristics are those?  It certainly compromises the `static'
>> part.
>
> No, that's the whole point. Everything in the program remains statically 
> typesafe. The only operation that can fail is the typecase, if it is 
> allowed to be non-exhaustive (which, of course, is statically detectable). 

Um, yeah.  A non-exhaustive typecase *could* throw an error at
runtime.  Exactly how is this `static'?  Exactly how is this
`typesafe'?  Exactly how does this differ from Lisp throwing a runtime
error?
From: Mark Carroll
Subject: Re: More static type fun.
Date: 
Message-ID: <qwi*BbQ7p@news.chiark.greenend.org.uk>
In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
>Andreas Rossberg <········@ps.uni-sb.de> writes:
(snip)
>> No, that's the whole point. Everything in the program remains statically 
>> typesafe. The only operation that can fail is the typecase, if it is 
>> allowed to be non-exhaustive (which, of course, is statically detectable). 
>
>Um, yeah.  A non-exhaustive typecase *could* throw an error at
>runtime.  Exactly how is this `static'?  Exactly how is this
>`typesafe'?  Exactly how does this differ from Lisp throwing a runtime
>error?

Well, it's at least statically detectable: it differs from Lisp
throwing a runtime error because the compiler can guarantee to warn
you of the eventuality, and you don't have to happen to hit exactly
that case in your unit tests or whatever to get that warning. Of
course, you can still wrongly convince yourself that that omitted case
may never be exercised, but at least you had your attention drawn to
the issue.

Still, that's only one class of error. There's all manner of runtime
errors that can crop up in a Haskell program, of course, that you may
not have been warned by the compiler about. "f = head []" will compile
yet throw a runtime exception, for example. Static typing only goes so
far.
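
The `head` example can be transliterated into TypeScript (a
hypothetical helper written for illustration, not a library function)
to show the same gap:

```typescript
// Well-typed for every element type -- the type says nothing about length.
function head<T>(xs: T[]): T {
  if (xs.length === 0) {
    // The compiler happily accepted head([]); only the run time objects.
    throw new Error("head: empty list");
  }
  return xs[0];
}

const first = head([1, 2, 3]);  // fine
// head([]) compiles without complaint, yet throws at run time.
```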

-- Mark
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <llqds1g0.fsf@comcast.net>
Mark Carroll <·····@chiark.greenend.org.uk> writes:

> In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
>>Andreas Rossberg <········@ps.uni-sb.de> writes:
> (snip)
>>> No, that's the whole point. Everything in the program remains statically 
>>> typesafe. The only operation that can fail is the typecase, if it is 
>>> allowed to be non-exhaustive (which, of course, is statically detectable). 
>>
>>Um, yeah.  A non-exhaustive typecase *could* throw an error at
>>runtime.  Exactly how is this `static'?  Exactly how is this
>>`typesafe'?  Exactly how does this differ from Lisp throwing a runtime
>>error?
>
> Well, it's at least statically detectable: it differs from Lisp
> throwing a runtime error because the compiler can guarantee to warn
> you of the eventuality, and you don't have to happen to hit exactly
> that case in your unit tests or whatever to get that warning. 

It's statically detectable in Lisp, too.  But since there is usually
not enough information around to determine the type statically, it
would be rather nasty to issue a compile-time warning.

-- 
~jrm
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpe1k4$81a$1@grizzly.ps.uni-sb.de>
Joe Marshall <···@ccs.neu.edu> wrote:
> > No, that's the whole point. Everything in the program remains statically
> > typesafe. The only operation that can fail is the typecase, if it is
> > allowed to be non-exhaustive (which, of course, is statically
detectable).
>
> Um, yeah.  A non-exhaustive typecase *could* throw an error at
> runtime.

That construct is for dynamic typing, where you anticipate failure.

>  Exactly how is this `static'?  Exactly how is this
> `typesafe'?  Exactly how does this differ from Lisp throwing a runtime
> error?

In the way I described in the rest of the paragraph that you snipped. In the
way that whenever some variable has type, say, int, I still have the static
guarantee that it will always be an int. In the way that I know that only
values of type dynamic can have an unexpected and statically undetermined
actual type. In the way that I cannot accidentally pass such values to
arbitrary functions not expecting them. In the way that dynamic typing stays
isolated where requested and does not affect the rest of the language and
its type system in any way.
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <he11s151.fsf@comcast.net>
"Andreas Rossberg" <········@ps.uni-sb.de> writes:

> Joe Marshall <···@ccs.neu.edu> wrote:
>> > No, that's the whole point. Everything in the program remains statically
>> > typesafe. The only operation that can fail is the typecase, if it is
>> > allowed to be non-exhaustive (which, of course, is statically
> detectable).
>>
>> Um, yeah.  A non-exhaustive typecase *could* throw an error at
>> runtime.
>
> That construct is for dynamic typing, where you anticipate failure.
>
>>  Exactly how is this `static'?  Exactly how is this
>> `typesafe'?  Exactly how does this differ from Lisp throwing a runtime
>> error?
>
> In the way I described in the rest of the paragraph that you snipped. In the
> way that whenever some variable has type, say, int, I still have the static
> guarantee that it will always be an int. In the way that I know that only
> values of type dynamic can have an unexpected and statically undetermined
> actual type. In the way that I cannot accidentally pass such values to
> arbitrary functions not expecting them. In the way that dynamic typing stays
> isolated where requested and does not affect the rest of the language and
> its type system in any way.

All of these apply to Lisp as well.  The *big* difference here is
separate compilation and the `default' settings.  When you write a
function FOO that takes an argument X, the compiler has no idea where
you might use it.  If you don't tell it what X is, it assumes the
worst.

But many lisp systems can precisely infer the type of intermediate
expressions and make use of that information.  In such cases where the
compiler can prove the intermediate types cannot match, however, you
generally get a warning (the compiler isn't required to be failfast,
and it doesn't try to read your mind --- maybe the code was generated
from a macro-expansion and *ought* to generate a run-time error).

-- 
~jrm
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpfeo1$18h$1@grizzly.ps.uni-sb.de>
Joe Marshall wrote:
>>
>>>  Exactly how is this `static'?  Exactly how is this
>>> `typesafe'?  Exactly how does this differ from Lisp throwing a runtime
>>> error?
>>
>> In the way I described in the rest of the paragraph that you snipped. In
>> the way that whenever some variable has type, say, int, I still have the
>> static guarantee that it will always be an int. In the way that I know
>> that only values of type dynamic can have an unexpected and statically
>> undetermined actual type. In the way that I cannot accidentally pass such
>> values to arbitrary functions not expecting them. In the way that dynamic
>> typing stays isolated where requested and does not affect the rest of the
>> language and its type system in any way.
> 
> All of these apply to Lisp as well.

No, not at all, in general a Lisp compiler cannot give me static guarantees, 
and is not required to do so either.

(Sitting and waiting for the replies saying that "this is exactly what we 
want". Maybe, but that wasn't the point.)

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <islg4j8h.fsf@comcast.net>
Andreas Rossberg <········@ps.uni-sb.de> writes:

> Joe Marshall wrote:
>>>
>>>>  Exactly how is this `static'?  Exactly how is this
>>>> `typesafe'?  Exactly how does this differ from Lisp throwing a runtime
>>>> error?
>>>
>>> In the way I described in the rest of the paragraph that you snipped. In
>>> the way that whenever some variable has type, say, int, I still have the
>>> static guarantee that it will always be an int. In the way that I know
>>> that only values of type dynamic can have an unexpected and statically
>>> undetermined actual type. In the way that I cannot accidentally pass such
>>> values to arbitrary functions not expecting them. In the way that dynamic
>>> typing stays isolated where requested and does not affect the rest of the
>>> language and its type system in any way.
>> 
>> All of these apply to Lisp as well.
>
> No, not at all, in general a Lisp compiler cannot give me static guarantees, 
> and is not required to do so either.

It can, although it is not required to.  Note that in the usual case the
Lisp compiler has insufficient information to deduce anything non-trivial.

-- 
~jrm
From: Mark Carroll
Subject: Re: More static type fun.
Date: 
Message-ID: <GPg*J-P7p@news.chiark.greenend.org.uk>
In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
(snip)
>The Lisp camp thinks that kowtowing to the type checker is the wrong
>default, not that finding *actual* bugs as soon as possible.

Your phrasing is odd, but if you mean to suggest that the time spent
placating the typechecker isn't largely time spent in fixing actual
bugs, then IMHO you're mistaken. It normally does a nice job of
pinpointing and explaining them, too. I find the static typechecking
a very handy debugging aid. (We do write unit tests, etc. too!)

-- Mark
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <1xscfhr3.fsf@ccs.neu.edu>
Andreas Rossberg <········@ps.uni-sb.de> writes:

> In the proper meaning of words, Lisp is only typed in the completely trivial 
> sense I sketched above.  Lisp may have something called types, but they 
> aren't types in any standard formal sense.

This is ridiculous.  Lisp types can be as formalized as any other type
system.  How formal a type system is has nothing to do with whether
one can statically analyze code.

> Simultanously, I have to disappoint those people who know better, but 
> believe they could have their cake and eat it too, that is, having the 
> benefits of static typing without adapting in some way.

I'm happy to use any programming aid that doesn't require me to adapt.
From: Ed Avis
Subject: Re: More static type fun.
Date: 
Message-ID: <l1k764t16r.fsf@budvar.future-i.net>
Joe Marshall <···@ccs.neu.edu> writes:

>I'm happy to use any programming aid that doesn't require me to adapt.

Heh, this may sum up the argument.

Any really worthwhile programming tool does require one to adapt.
Your culture will adapt to serve the type checker.  Resistance is
futile.

Many people are happy to insert the implants and lurch forward, man
and machine combined, to attack the problem.  Yes, you may have to
make some changes.  But the payoff is worth it: the more help you can
give the computer, the more help it can give you.

-- 
Ed Avis <··@membled.com>
From: Darius
Subject: Re: More static type fun.
Date: 
Message-ID: <20031113163259.00003d38.ddarius@hotpop.com>
On 13 Nov 2003 21:21:00 +0000
Ed Avis <··@membled.com> wrote:

> the more help you can give the computer, the more help it can give
> you.

Hmm, I was thinking of saying exactly that a bit back.
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <1xsbondv.fsf@comcast.net>
Ed Avis <··@membled.com> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>
>>I'm happy to use any programming aid that doesn't require me to adapt.
>
> Heh, this may sum up the argument.
>
> Any really worthwhile programming tool does require one to adapt.

I absolutely disagree with this point of view!  The *best conceivable*
programming tool would just automatically do what I intended without
my input at all.  Obviously that's impossible, but the whole reason I
hack programming languages is to try to approximate that kind of
interaction as closely as possible.  My view is:

  ``No worthwhile programming tool requires one to adapt.''

As always, on usenet de gustibus semper disputandum.

-- 
~jrm
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bp2c7h$ee$1@grizzly.ps.uni-sb.de>
Joe Marshall wrote:
> 
>> In the proper meaning of words, Lisp is only typed in the completely
>> trivial
>> sense I sketched above.  Lisp may have something called types, but they
>> aren't types in any standard formal sense.
> 
> This is ridiculous.  Lisp types can be as formalized as any other type
> system.  How formal a type system is has nothing to do with whether
> one can statically analyze code.

You missed the point. It's not that you cannot formalize what Lisp does, 
it's just that it isn't a "type system" in a plausible sense.

The notion of types comes from mathematical logic and predates the first 
programming language by decades. Much later, the idea has been adopted for 
programming languages. In the respective scientific communities the notion 
of type system has a well-established meaning as a certain kind of logic 
over syntactic phrases. Pierce even defines it in a broader, more pragmatic 
way tailored for programming languages:

  "A type system is a tractable syntactic method for proving the absence of 
certain program behaviour by classifying phrases according to the kinds of 
values they compute."

It is obvious that Lisp is not even remotely covered by this (reasonable) 
definition. Expressions like "dynamic typing" are pretty much an abuse of 
terminology.

Yes, I'm only nitpicking about terminology here. Actually, I don't even mind 
much if people call it "dynamic typing". I just made this point to disprove 
Tayss' inappropriate claim that "the static typing world has no coherent 
vocabulary". In fact, it is the other way round: other "communities" have 
absorbed words like "type" and "type system" without caring what they 
really meant and changed their meaning almost to the point of 
unrecognizability.

        - Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <n0aznhcv.fsf@ccs.neu.edu>
Andreas Rossberg <········@ps.uni-sb.de> writes:

> Joe Marshall wrote:
>> 
>>> In the proper meaning of words, Lisp is only typed in the completely
>>> trivial
>>> sense I sketched above.  Lisp may have something called types, but they
>>> aren't types in any standard formal sense.
>> 
>> This is ridiculous.  Lisp types can be as formalized as any other type
>> system.  How formal a type system is has nothing to do with whether
>> one can statically analyze code.
>
> You missed the point.  It's not that you cannot formalize what Lisp does, 
> it's just that it isn't a "type system" in a plausible sense.
>
> The notion of types comes from mathematical logic and predates the first 
> programming language by decades.  Much later, the idea was adopted for 
> programming languages.  In the respective scientific communities the notion 
> of type system has a well-established meaning as a certain kind of logic 
> over syntactic phrases.  Pierce even defines it in a broader, more pragmatic 
> way tailored for programming languages:
>
>   "A type system is a tractable syntactic method for proving the absence of 
> certain program behaviour by classifying phrases according to the kinds of 
> values they compute."

Since Pierce published this statement in 2002, it hardly supports the
claim that the notion of types predates programming languages by
decades.

> It is obvious that Lisp is not even remotely covered by this (reasonable) 
> definition.  Expressions like "dynamic typing" are pretty much an abuse of 
> terminology.

It is not obvious to me.

Suppose the Lisp system throws a runtime error because you attempted,
say, to take the CAR of a number.

Is this ``a tractable, syntactic method for proving the absence of
certain program behavior by classifying phrases according to the
kinds of values they compute''?  Sure looks like it to me.

Is it tractable?  Certainly.  Checking a primitive runtime type is
always a bounded operation.

Is it syntactic?  Certainly.  Computers push symbols around.

Does it prove the absence of certain program behavior?  It certainly
will not let you take the CAR of a string.

Does it classify phrases according to the kinds of values they
compute?  That string came from somewhere!

What it *isn't* is statically analyzable, but I don't see that phrase
in Pierce's definition.  (Perhaps Pierce intended that.  If so, he's
wrong.)

> Other "communities" have absorbed words like "type" and "type
> system" without caring what they really meant and changed their
> meaning almost to the point of unrecognizability.

I'm a bit confused by what static typists mean by the word `type'.  It
often seems to be used as if it means `a property that is statically
analyzable'.

Is `integer' a type?  How about `positive integer'?  How about `even
positive integer'?  How about `real integer solutions to a^2 + b^2 =
c^2 where a and b are 3 and 4 respectively'?
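
Concretely (a minimal Python sketch of my question; the predicate names
are mine, not anything standard): at runtime each of these candidate
types is just a predicate, and all of them are checkable in exactly the
same way, however differently a static checker would have to treat them.

```python
# Each candidate "type" from the list above, rendered as a runtime
# predicate.  Dynamically they are all on equal footing.
def is_integer(x):
    # exclude bool, which Python treats as a subtype of int
    return isinstance(x, int) and not isinstance(x, bool)

def is_positive_integer(x):
    return is_integer(x) and x > 0

def is_even_positive_integer(x):
    return is_positive_integer(x) and x % 2 == 0

def is_pythagorean_c(x):
    # c such that a^2 + b^2 = c^2 with a = 3 and b = 4 fixed
    return is_positive_integer(x) and x * x == 3 * 3 + 4 * 4

assert is_even_positive_integer(4) and not is_even_positive_integer(3)
assert is_pythagorean_c(5) and not is_pythagorean_c(6)
```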
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bp2uq4$fmb$1@grizzly.ps.uni-sb.de>
[Re-added c.l.f, since I don't read c.l.l on a regular basis.]

Joe Marshall wrote:
> 
>>   "A type system is a tractable syntactic method for proving the absence
>>   of
>> certain program behaviour by classifying phrases according to the kinds
>> of values they compute."
> 
> Since Pierce published this statement in 2002, it hardly supports the
> claim that the notion of types predates programming languages by
> decades.

No, but it's not that he made this definition up all new by himself, of 
course.

Anyway, that "claim" is a simple fact. See e.g. Russell, early 1900s, or 
Church, 1930s for the origins of type systems.

>> It is obvious that Lisp is not even remotely covered by this (reasonable)
>> definition.  Expressions like "dynamic typing" are pretty much an abuse
>> of terminology.
> 
> It is not obvious to me.

Come on, this is getting silly.

> Suppose the Lisp system throws a runtime error because you attempted,
> say, to take the CAR of a number.
> 
> Is this ``a tractable, syntactic method for proving the absence of
> certain program behavior by classifying phrases according to the
> kinds of values they compute''?  Sure looks like it to me.
> 
> Is it tractable?  Certainly.  Checking a primitive runtime type is
> always a bounded operation.
> 
> Is it syntactic?  Certainly.  Computers push symbols around.

Be serious, "pushed symbols" are not syntax. A syntactic method is one 
that works by looking at the syntactic structure of the program. Dynamic 
typing only looks at values.

> Does it prove the absence of certain program behavior?  It certainly
> will not let you take the CAR of a string.

It does not prove its absence, it just detects its presence when it occurs.
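
A minimal Python sketch of the distinction (the code and names are
illustrative, not from either post): the dynamic check fires only when
the offending value actually arrives, and says nothing about code paths
that never run.

```python
# The runtime check in car() detects the error when it occurs; it
# proves nothing in advance about calls that are never executed.
def car(pair):
    if not isinstance(pair, tuple):
        raise TypeError("CAR of a non-pair")  # the dynamic "type check"
    return pair[0]

def f(take_bad_branch):
    if take_bad_branch:
        return car(42)        # latent type error, silently accepted...
    return car((1, 2))

assert f(False) == 1          # ...until the offending branch actually runs
try:
    f(True)
    caught = False
except TypeError:
    caught = True
assert caught
```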

> Does it classify phrases according to the kinds of values they
> compute?  That string came from somewhere!

What is that supposed to mean? How is it related to the classification of 
program phrases??

> What it *isn't* is statically analyzable, but I don't see that phrase
> in Pierce's definition.

It is implied by the other characteristics described (especially the 
classification of phrases part).

> (Perhaps Pierce intended that.  If so, he's wrong.)

Sure...

>> Other "communities" have absorbed words like "type" and "type
>> system" without caring what they really meant and changed their
>> meaning almost to the point of unrecognizability.
> 
> I'm a bit confused by what static typists mean by the word `type'.  It
> often seems to be used as if it means `a property that is statically
> analyzable'.

A type simply is what is expressible in a given type system (and a type 
system is always something static in its original meaning).

> Is `integer' a type?  How about `positive integer'?  How about `even
> positive integer'?  How about `real integer solutions to a^2 + b^2 =
> c^2 where a and b are 3 and 4 respectively'?

All of these are expressible as types in suitable type systems. Such type 
systems actually exist. Of course, you will hardly find them in today's 
programming languages, because they tend to fail hitting a good balance 
between expressiveness and complexity.

Anyway, let me recapitulate that this subthread is merely nitpicking on 
terminology, which I didn't start. I see no point in continuing it further.

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1islmlx6p.fsf@tti5.uchicago.edu>
Andreas Rossberg <········@ps.uni-sb.de> writes:

> > I'm a bit confused by what static typists mean by the word `type'.  It
> > often seems to be used as if it means `a property that is statically
> > analyzable'.
> 
> A type simply is what is expressible in a given type system (and a type 
> system is always something static in its original meaning).

IMO, the confusion arises from the following:

When we give semantic meaning to types (types in the original sense),
i.e., when we design /models/ of static type systems, then we
sometimes use /sets of values/ as the interpretation of types.  Now,
Lisp has sets of values (as does every other programming language).
By abuse of terminology, some of these sets end up being called
"types".  But the sets of values in question are not interpretations
of terms in some logic, so there is no type system.
From: Don Geddis
Subject: Re: More static type fun.
Date: 
Message-ID: <873ccq5klo.fsf@sidious.geddis.org>
> Andreas Rossberg <········@ps.uni-sb.de> writes:
> > A type simply is what is expressible in a given type system (and a type 
> > system is always something static in its original meaning).

Of course!  So now it's clear why Lisp can't have types: you've defined the
terms explicitly to exclude it!  A type is something expressible in a type
system, which (by definition) must be static.

But that's simply not an interesting definition.  When programming people
(in general) talk about types, they're talking about data representation within
computer programs.  Your "types" are a strict subset of this more interesting
concept.  Moreover, a lot of the work you do in your domain also applies to
the more general concept.

I can see no utility in artificially restricting the discussion to static
analysis of type logics.  It certainly doesn't help with a conversation about
the design of future programming languages.

Matthias Blume <····@my.address.elsewhere> writes:
> When we give semantic meaning to types (types in the original sense),
> i.e., when we design /models/ of static type systems, then we
> sometimes use /sets of values/ as the interpretation of types.  Now,
> Lisp has sets of values (as does every other programming language).
> By abuse of terminology, some of these sets end up being called
> "types".  But the sets of values in question are not interpretations
> of terms in some logic, so there is no type system.

But come on, your static types aren't really mathematical types either.
In the end, what matters is whether a subroutine can handle the inputs or
not.  And that's a question of data representation, not a question of
abstract platonic mathematical types.  A function that is well defined for
the integer 3 won't necessarily compute correctly if you pass in a float 3.0.

The issue of data representation _is_ the important one for computer type
systems, so I don't understand why you pretend that you're working with some
grander notion of mathematical types, and why you denigrate systems that do
inference over sets of values.

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fb89242$1@news.unimelb.edu.au>
Don Geddis <···@geddis.org> writes:

>When programming people (in general) talk about types, they're talking
>about data representation within computer programs.

For that sense of "type", it would be appropriate to say that Lisp has only
one type.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpa6v1$t2r$1@newsreader3.netcologne.de>
Fergus Henderson wrote:

> Don Geddis <···@geddis.org> writes:
> 
> 
>>When programming people (in general) talk about types, they're talking
>>about data representation within computer programs.
> 
> 
> For that sense of "type", it would be appropriate to say that Lisp has only
> one type.

When skimming through the ANSI Common Lisp standard, I see classes, 
structures, symbols, various number types (including bits, bytes, and 
intervals), characters (including support for character encodings), conses 
(lists), arrays/strings, hash tables, filenames, files, streams, and a 
type specification sublanguage.

When you add the MOP, you even get programmable data representation. For 
example, this allows you to seamlessly map classes to external storage, 
among other things.

Maybe this is too broad a characterization, and you need to shrink that 
list because of redundancies. But you definitely have many options for 
dealing with data representation in Lisp.


Pascal

P.S.: I am looking forward to a portable interface for lexical 
environment objects, currently being worked on at Franz AFAIK, so that 
you can add customizable static analysis to the dish.

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fb8d7f3$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>Fergus Henderson wrote:
>
>> Don Geddis <···@geddis.org> writes:
>> 
>>>When programming people (in general) talk about types, they're talking
>>>about data representation within computer programs.
>> 
>> For that sense of "type", it would be appropriate to say that Lisp has only
>> one type.
>
>When skimming through the ANSI Common Lisp standard, I see classes, 
>structures, symbols, various number types (including bits, bytes, and 
>intervals), characters (including support for character encodings), conses 
>(lists), arrays/strings, hash tables, filenames, files, streams,

Those all correspond to a single data representation which is the tagged
union of all of the different possibilities, i.e. a single "type" in the
sense mentioned above.

>and a type specification sublanguage.

Lisp's "type" specification sublanguage specifies subsets of values,
not data representations.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fb997dd$1@news.unimelb.edu.au>
Don Geddis <···@geddis.org> writes:

>Fergus Henderson <···@cs.mu.oz.au> writes:
>> All [Lisp types] correspond to a single data representation which is the
>> tagged union of all of the different possibilities, i.e. a single "type" in
>> the sense mentioned above.
>
>That's meaningless.

No, it's not.  

>If a function is defined on strings, and you pass it an
>integer, you'll get an error in Lisp.  This is what ordinary programmers
>mean by data type.

Not always.  A data type has a set of values and also determines a
representation for those values.  Here you are talking about the set
of values, but the representation is also important.  For example, it
affects performance in a number of ways (space usage, costs of conversions
between different representations, efficiency of particular operations).

>You're creating definitions that are almost deliberately
>designed to confuse.

No, I didn't create any definitions here.  You did!
I'm just applying _your_ definition to Lisp.
Let me quote it:

 |	When programming people (in general) talk about types,
 |	they're talking about data representation within computer programs.

I agree that people do often use the word in this sense.
This sense is however different to the Lisp sense.

>> Lisp's "type" specification sublanguage specifies subsets of values,
>> not data representations.
>
>Incorrect.

No, you just misunderstood what I meant.  When I said "subsets of values",
I meant subsets of the set of all lisp values.

>For that matter, you can even define types that require computation to
>determine membership.  From
>        http://www.lispworks.com/reference/HyperSpec/Body/m_deftp.htm
>I found:
>
>         (defun equidimensional (a)
>           (or (< (array-rank a) 2)
>               (apply #'= (array-dimensions a))))
>         (deftype square-matrix (&optional type size)
>           `(and (array ,type (,size ,size))
>                 (satisfies equidimensional)))
>
>(I'll also note in passing that "square-matrix" now becomes a legitimate
>type in the Lisp program, but there is no built-in data tag in the
>implementation which indicates which data objects are members of this type,
>and which aren't.)

It is a legitimate type in the Lisp sense.

But in the types as data representations sense, which _you_ introduced,
it is not a distinct type in its own right; it is merely a subtype of
the type of all Lisp terms.

This is an important notion, because it has some useful consequences.
For example, it implies that converting a list of square matrices
to a list of Lisp terms should be a no-op.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Iain Little
Subject: Re: More static type fun.
Date: 
Message-ID: <87r8062s9a.fsf@yahoo.com>
Fergus Henderson <···@cs.mu.oz.au> writes:

> Don Geddis <···@geddis.org> writes:
>>For that matter, you can even define types that require computation to
>>determine membership.  From
>>        http://www.lispworks.com/reference/HyperSpec/Body/m_deftp.htm
>>I found:
>>
>>         (defun equidimensional (a)
>>           (or (< (array-rank a) 2)
>>               (apply #'= (array-dimensions a))))
>>         (deftype square-matrix (&optional type size)
>>           `(and (array ,type (,size ,size))
>>                 (satisfies equidimensional)))
>>
>>(I'll also note in passing that "square-matrix" now becomes a legitimate
>>type in the Lisp program, but there is no built-in data tag in the
>>implementation which indicates which data objects are members of this type,
>>and which aren't.)
>
> It is a legitimate type in the Lisp sense.
>
> But in the types as data representations sense, which _you_ introduced,
> it is not a distinct type in its own right; it is merely a subtype of
> the type of all Lisp terms.
>
> This is an important notion, because it has some useful consequences.
> For example, it implies that converting a list of square matrices
> to a list of Lisp terms should be a no-op.

If you want this, then 'deftype' is the wrong way to define a new
type; 'defclass' or 'defstruct' would be more appropriate.

(It's a similar situation to defining new types in SML; depending on
how you do it, you'll either get a new type representation or you
won't...)


Iain
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fc1b637$1@news.unimelb.edu.au>
Iain Little <······@yahoo.com> writes:

>Fergus Henderson <···@cs.mu.oz.au> writes:
>
>> Don Geddis <···@geddis.org> writes:
>>>For that matter, you can even define types that require computation to
>>>determine membership.  From
>>>        http://www.lispworks.com/reference/HyperSpec/Body/m_deftp.htm
>>>I found:
>>>
>>>         (defun equidimensional (a)
>>>           (or (< (array-rank a) 2)
>>>               (apply #'= (array-dimensions a))))
>>>         (deftype square-matrix (&optional type size)
>>>           `(and (array ,type (,size ,size))
>>>                 (satisfies equidimensional)))
>>>
>>>(I'll also note in passing that "square-matrix" now becomes a legitimate
>>>type in the Lisp program, but there is no built-in data tag in the
>>>implementation which indicates which data objects are members of this type,
>>>and which aren't.)
>>
>> It is a legitimate type in the Lisp sense.
>>
>> But in the types as data representations sense, which _you_ introduced,
>> it is not a distinct type in its own right; it is merely a subtype of
>> the type of all Lisp terms.
>>
>> This is an important notion, because it has some useful consequences.
>> For example, it implies that converting a list of square matrices
>> to a list of Lisp terms should be a no-op.
>
>If you want this, then 'deftype' is the wrong way to define a new
>type; 'defclass' or 'defstruct' would be more appropriate.

Using defclass or defstruct just creates a new element of the discriminated
union which is the type of all Lisp terms.  It doesn't give you a way of
creating data with an untagged representation.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Adam Warner
Subject: Re: More static type fun.
Date: 
Message-ID: <pan.2003.11.18.08.38.11.614306@consulting.net.nz>
Hi Fergus Henderson,

>>If a function is defined on strings, and you pass it an integer, you'll
>>get an error in Lisp.  This is what ordinary programmers mean by data
>>type.
> 
> Not always.  A data type has a set of values and also determines a
> representation for those values.  Here you are talking about the set of
> values, but the representation is also important.  For example, it
> affects performance in a number of ways (space usage, costs of
> conversions between different representations, efficiency of particular
> operations).

Fergus this was your original response (to Don Geddis, quoted first):

   >When programming people (in general) talk about types, they're talking
   >about data representation within computer programs.

   For that sense of "type", it would be appropriate to say that Lisp has 
   only one type.

Here is a list of some of the different representations of types in the
CMUCL implementation of Common Lisp that affect performance in a number
of ways (space usage, costs of conversions between different
representations, efficiency of particular operations). After you have
viewed this list I hope you will realise that some Common Lisp
implementations provide for a wide variety of data representation types
(even untagged ones, which CMUCL calls non-descriptor representations).

5.10 Object Representation
<http://cvs2.cons.org/ftp-area/cmucl/doc/cmu-user/compiler-hint.html#htoc164>

5.10.2 	Structure Representation
One of the best ways of building complex data structures is to define
appropriate structure types using defstruct. In Python, access of
structure slots is always at least as fast as list or vector access, and
is usually faster. In comparison to a list representation of a tuple,
structures also have a space advantage.

5.11.6 	Word Integers
Python is unique in its efficient implementation of arithmetic on
full-word integers through non-descriptor representations and open coding.
Arithmetic on any subtype of these types:

(signed-byte 32)
(unsigned-byte 32)

is reasonably efficient, although subtypes of fixnum remain somewhat more
efficient.

5.11.8 	Specialized Arrays

Common Lisp supports specialized array element types through the
:element-type argument to make-array. When an array has a specialized
element type, only elements of that type can be stored in the array. From
this restriction comes two major efficiency advantages:

   * A specialized array can save space by packing multiple elements into
   a single word. For example, a base-char array can have 4 elements per
   word, and a bit array can have 32. This space-efficient representation
   is possible because it is not necessary to separately indicate the type
   of each element.

   * The elements in a specialized array can be given the same
   non-descriptor representation as the one used in registers and on the
   stack, eliminating the need for representation conversions when reading
   and writing array elements. For objects with pointer descriptor
   representations (such as floats and word integers) there is also a
   substantial consing reduction because it is not necessary to allocate a
   new object every time an array element is modified.

These are the specialized element types currently supported:

bit
(unsigned-byte 2)
(unsigned-byte 4)
(unsigned-byte 8)
(unsigned-byte 16)
(unsigned-byte 32)
(signed-byte 8)
(signed-byte 16)
(signed-byte 30)
(signed-byte 32)
base-character
single-float
double-float
(complex single-float)
(complex double-float)
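
As a rough cross-language analogy (Python the language here, not the
CMUCL compiler of the same name; this sketch is mine, not from the CMUCL
manual): the standard array module gives the same space advantage of a
specialized, untagged element representation over a vector of boxed
values.

```python
import array
import sys

xs = list(range(1000))         # a vector of tagged (boxed) integers
packed = array.array('i', xs)  # specialized element type: raw 32-bit ints

# Dropping per-element tags and packing elements makes the specialized
# representation smaller, even before counting the boxed integer objects
# that the list's pointers refer to.
assert sys.getsizeof(packed) < sys.getsizeof(xs)
assert packed[10] == xs[10]    # same values, different representation
```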


It is not appropriate to say that Lisp has only one type, especially when
responding to a comment on data representation within computer programs.

"Although Lisp's creator seemed to think that it was for LISt Processing,
the astute observer may have noticed that the chapter on list manipulation
makes up less than three percent of Common Lisp: The Language II. The
language has grown since Lisp 1.5---new data types supersede lists for
many purposes."

Hence my previous joke.

Regards,
Adam
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fc1bff2$1@news.unimelb.edu.au>
Adam Warner <······@consulting.net.nz> writes:

>Hi Fergus Henderson,
>
>>>If a function is defined on strings, and you pass it an integer, you'll
>>>get an error in Lisp.  This is what ordinary programmers mean by data
>>>type.
>> 
>> Not always.  A data type has a set of values and also determines a
>> representation for those values.  Here you are talking about the set of
>> values, but the representation is also important.  For example, it
>> affects performance in a number of ways (space usage, costs of
>> conversions between different representations, efficiency of particular
>> operations).
>
>Fergus this was your original response (to Don Geddis, quoted first):
>
>   >When programming people (in general) talk about types, they're talking
>   >about data representation within computer programs.
>
>   For that sense of "type", it would be appropriate to say that Lisp has 
>   only one type.
>
>Here is a list of some of the different representations of types in the
>CMUCL implementation of Common Lisp

I was talking about Lisp in general.  I agree that CMUCL is an exception,
because of its support for untagged ("non-descriptor") representations.
But AFAIK such support is not something that a Lisp programmer can portably
rely on when writing code which should run on multiple Lisp implementations.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <4ptfhu3ua.fsf@franz.com>
Fergus Henderson <···@cs.mu.oz.au> writes:

> Adam Warner <······@consulting.net.nz> writes:
> 
> >Hi Fergus Henderson,
> >
> >>>If a function is defined on strings, and you pass it an integer, you'll
> >>>get an error in Lisp.  This is what ordinary programmers mean by data
> >>>type.
> >> 
> >> Not always.  A data type has a set of values and also determines a
> >> representation for those values.  Here you are talking about the set of
> >> values, but the representation is also important.  For example, it
> >> affects performance in a number of ways (space usage, costs of
> >> conversions between different representations, efficiency of particular
> >> operations).
> >
> >Fergus this was your original response (to Don Geddis, quoted first):
> >
> >   >When programming people (in general) talk about types, they're talking
> >   >about data representation within computer programs.
> >
> >   For that sense of "type", it would be appropriate to say that Lisp has 
> >   only one type.
> >
> >Here is a list of some of the different representations of types in the
> >CMUCL implementation of Common Lisp
> 
> I was talking about Lisp in general.  I agree that CMUCL is an exception,
> because of its support for untagged ("non-descriptor") representations.

It sounds like you are talking about arrays of Specialized element-types (see:
http://www.franz.com/support/documentation/6.2/ansicl/subsecti/special0.htm)

The list which Adam showed is actually typical of most Common Lisps, so it
is not CMUCL which is the exception, but Common Lisp in general.

> But AFAIK such support is not something that a Lisp programmer can portably
> rely on when writing code which should run on multiple Lisp implementations.

Although Common Lisp only requires a few specific specializations (see
http://www.franz.com/support/documentation/6.2/ansicl/subsubse/required.htm)
most Common Lisp implementations contain a fairly rich set of array
specializations on signed and unsigned integers and floats, which usually
correspond to standard C types of a similar nature.  Allegro CL's specialized
type set is similar to CMUCL's, and is described in part in
http://www.franz.com/support/documentation/6.2/doc/implementation.htm#cl-make-array-2
and in addition on 64-bit versions there are (unsigned-byte 64) and
(signed-byte 64) as well.

The reason (I believe) why there is not more detail in the ANSI CL
spec is the usage of specialized arrays; they tend to be used
to either allow a close interface through a foreign-function interface
(an interface to non-CL code) or to pack data tightly for space or
consing considerations.  Since for the former the CL spec also doesn't
describe a foreign-function interface, it was not necessary to detail
the requirements of array specializations. [On the other hand, most CL
implementations have FFI APIs which are similar enough to be drawn
together into a macroized interface called UFFI].  And for the latter,
optimizations for space or consing are encouraged by competition rather
than specification.

So although your statement is true on the surface, in practice it is
not true, since most programmers count on such types when interfacing
to C/C++.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fc2c67f$1@news.unimelb.edu.au>
Duane Rettig <·····@franz.com> writes:

>Fergus Henderson <···@cs.mu.oz.au> writes:
>
>> I was talking about Lisp in general.  I agree that CMUCL is an exception,
>> because of its support for untagged ("non-descriptor") representations.
>
>It sounds like you are talking about arrays of Specialized element-types (see:
>http://www.franz.com/support/documentation/6.2/ansicl/subsecti/special0.htm)

Not really.  It was mainly CMUCL's support for structures containing
non-descriptor representations for their fields (slots) which convinced
me that it was an exception.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <48ym4vpzr.fsf@franz.com>
Fergus Henderson <···@cs.mu.oz.au> writes:

> Duane Rettig <·····@franz.com> writes:
> 
> >Fergus Henderson <···@cs.mu.oz.au> writes:
> >
> >> I was talking about Lisp in general.  I agree that CMUCL is an exception,
> >> because of its support for untagged ("non-descriptor") representations.
> >
> >It sounds like you are talking about arrays of Specialized element-types (see:
> >http://www.franz.com/support/documentation/6.2/ansicl/subsecti/special0.htm)
> 
> Not really.  It was mainly CMUCL's support for structures containing
> non-descriptor representations for their fields (slots) which convinced
> me that it was an exception.

"Descriptors" looks like internal CMUCL terminology to me, but as near
as I can tell, they are describing what I have called lispvals in my
own descriptions, and these are nothing more than tagged pointers.
We also tend to use the terms "boxed" and "unboxed", and many CL compilers
will do unboxed arithmetic if properly declared.  Allegro CL even
has an undocumented feature which allows some types of these unboxed values
to be passed to and returned from functions.

Also, Common Lisp has a standard defstruct option called the :type option,
which allows slots to be specialized.  In CMUCL:

* (defstruct (foo (:type (vector single-float))) a b)

FOO
* (make-foo :a 10.0 :b 20.0)

#(10.0 20.0)
* (type-of *)

(SIMPLE-ARRAY SINGLE-FLOAT (2))
* 

and note that this is not unique - in Allegro CL:

CL-USER(1): (defstruct (foo (:type (vector single-float))) a b)
FOO
CL-USER(2): (make-foo :a 10.0 :b 20.0)
#(10.0 20.0)
CL-USER(3): (type-of *)
(SIMPLE-ARRAY SINGLE-FLOAT (2))
CL-USER(4): 

I'm sure other CL's have the same capabilities.

Or how about a struct with a different type?

CL-USER(4): (defstruct (bar (:type (vector bit))) (x 1)  (y 0) (z 1))
BAR
CL-USER(5): (make-bar)
#*101
CL-USER(6): 

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Christophe Rhodes
Subject: Re: More static type fun.
Date: 
Message-ID: <sqekvuomx1.fsf@lambda.jcn.srcf.net>
[ probably isn't riveting for clf; note f'ups ]

Duane Rettig <·····@franz.com> writes:

> Fergus Henderson <···@cs.mu.oz.au> writes:
>
>> Duane Rettig <·····@franz.com> writes:
>> 
>> >Fergus Henderson <···@cs.mu.oz.au> writes:
>> >
>> >> I was talking about Lisp in general.  I agree that CMUCL is an exception,
>> >> because of its support for untagged ("non-descriptor") representations.
>> >
>> >It sounds like you are talking about arrays of Specialized element-types (see:
>> >http://www.franz.com/support/documentation/6.2/ansicl/subsecti/special0.htm)
>> 
>> Not really.  It was mainly CMUCL's support for structures containing
>> non-descriptor representations for their fields (slots) which convinced
>> me that it was an exception.
>
> Descriptors look like internal CMUCL documentation to me, but as near
> as I can tell, they are describing what I have called lispvals in my
> own descriptions, and these are nothing more than tagged pointers.
> We also tend to use the terms "boxed" and "unboxed", and many CL compilers
> will do unboxed-arithmetic if properly declared.  Allegro CL even
> has an undocmented feature which allows some types of these unboxed values
> to be passed to and returned from functions.

I think you're still missing the point; or at least, you're not
responding to Fergus'.

What he's describing is storing the untagged value in a heterogeneous
structure; for instance, in
  (defstruct foo ; not FOO :TYPE, so ordinary structure
    (a 0 :type integer)
    (b nil :type symbol)
    (c 0.0d0 :type double-float)
    (d 0 :type (unsigned-byte 32)))
the values stored in slots A and B will be as lispvals (your
terminology) but those stored in slots C and D will be untagged
machine representation.  This is enforced by type checking and type
inference for the slot accessors (though this can be evaded by
sufficient nastiness, most usually involving &aux BOA-constructors in
CMUCL; some of these holes have been blocked in SBCL).

Christophe
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <4n0aiwxet.fsf@franz.com>
Christophe Rhodes <·····@cam.ac.uk> writes:

> [ probably isn't riveting for clf; note f'ups ]
> 
> > Descriptors look like internal CMUCL documentation to me, but as near
> > as I can tell, they are describing what I have called lispvals in my
> > own descriptions, and these are nothing more than tagged pointers.
> > We also tend to use the terms "boxed" and "unboxed", and many CL compilers
> > will do unboxed-arithmetic if properly declared.  Allegro CL even
> > has an undocmented feature which allows some types of these unboxed values
> > to be passed to and returned from functions.
> 
> I think you're still missing the point; or at least, you're not
> responding to Fergus'.
> 
> What he's describing is storing the untagged value in a heterogeneous
> structure; for instance, in
>   (defstruct foo ; not FOO :TYPE, so ordinary structure
>     (a 0 :type integer)
>     (b nil :type symbol)
>     (c 0.0d0 :type double-float)
>     (d 0 :type (unsigned-byte 32)))
> the values stored in slots A and B will be as lispvals (your
> terminology) but those stored in slots C and D will be untagged
> machine representation.  This is enforced by type checking and type
> inference for the slot accessors (though this can be evaded by
> sufficient nastiness, most usually involving &aux BOA-constructors in
> CMUCL; some of these holes have been blocked in SBCL).

Ah, I see, now.  So then does the garbage collector take care to
avoid scanning these particular slots when forwarding?  Or does it
count on probabilities of the bit patterns for the float values'
bit representations not representing lispvals by mistake?

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Christophe Rhodes
Subject: Re: More static type fun.
Date: 
Message-ID: <sq1xrto9z4.fsf@lambda.jcn.srcf.net>
Duane Rettig <·····@franz.com> writes:

> Christophe Rhodes <·····@cam.ac.uk> writes:
>
>> What he's describing is storing the untagged value in a heterogeneous
>> structure; for instance, in
>>   (defstruct foo ; not FOO :TYPE, so ordinary structure
>>     (a 0 :type integer)
>>     (b nil :type symbol)
>>     (c 0.0d0 :type double-float)
>>     (d 0 :type (unsigned-byte 32)))
>> the values stored in slots A and B will be as lispvals (your
>> terminology) but those stored in slots C and D will be untagged
>> machine representation.  This is enforced by type checking and type
>> inference for the slot accessors (though this can be evaded by
>> sufficient nastiness, most usually involving &aux BOA-constructors in
>> CMUCL; some of these holes have been blocked in SBCL).
>
> Ah, I see, now.  So then does the garbage collector take care to
> avoid scanning these particular slots when forwarding?  Or does it
> count on probabilities of the bit patterns for the float values'
> bit representations not representing lispvals by mistake?

The former.  Without looking at the source, this is implemented by
rearranging so that all the "raw" slots are stored at the end of the
structure, and putting in the structure a "raw-index" beyond which it
is not scanned for pointers.  It had better be something like this,
since (unsigned-byte 32) and (signed-byte 32) are also types treated
in the same way, and I don't think that only even numbers[*] are ever
stored in such slots :-)

Christophe

[*] for those following along at home, in cmucl and derivatives bit
representations ending in 0 are immediates, and those ending in 1 are
pointers.
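
For those following along, the convention in that footnote can be
sketched in C (a minimal illustration with made-up names, not CMUCL's
actual definitions):

```c
#include <stdint.h>

/* Sketch of the tagging convention described in the footnote: a machine
   word whose low bit is 0 is an immediate (e.g. a fixnum stored shifted
   left), and one whose low bit is 1 is a pointer. */

typedef uintptr_t lispval;

static int is_immediate(lispval v) { return (v & 1) == 0; }
static int is_pointer(lispval v)   { return (v & 1) == 1; }

/* A fixnum is stored shifted left by one, leaving its low bit 0. */
static lispval make_fixnum(intptr_t n)  { return (lispval)n << 1; }
/* Arithmetic right shift of a negative value is implementation-defined
   in C, but behaves as expected on common ABIs. */
static intptr_t fixnum_value(lispval v) { return (intptr_t)v >> 1; }
```

The payoff is that fixnum arithmetic can often be done directly on the
shifted representation, while the collector can tell at a glance which
words it must treat as pointers.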
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <4smk8lv8s.fsf@franz.com>
Christophe Rhodes <·····@cam.ac.uk> writes:

> Duane Rettig <·····@franz.com> writes:
> 
> > Christophe Rhodes <·····@cam.ac.uk> writes:
> >
> >> What he's describing is storing the untagged value in a heterogeneous
> >> structure; for instance, in
> >>   (defstruct foo ; not FOO :TYPE, so ordinary structure
> >>     (a 0 :type integer)
> >>     (b nil :type symbol)
> >>     (c 0.0d0 :type double-float)
> >>     (d 0 :type (unsigned-byte 32)))
> >> the values stored in slots A and B will be as lispvals (your
> >> terminology) but those stored in slots C and D will be untagged
> >> machine representation.  This is enforced by type checking and type
> >> inference for the slot accessors (though this can be evaded by
> >> sufficient nastiness, most usually involving &aux BOA-constructors in
> >> CMUCL; some of these holes have been blocked in SBCL).
> >
> > Ah, I see, now.  So then does the garbage collector take care to
> > avoid scanning these particular slots when forwarding?  Or does it
> > count on probabilities of the bit patterns for the float values'
> > bit representations not representing lispvals by mistake?
> 
> The former.

That's good.

>  Without looking at the source, this is implemented by
> rearranging so that all the "raw" slots are stored at the end of the
> structure, and putting in the structure a "raw-index" beyond which it
> is not scanned for pointers.

That could be bad.  Does this also mean that one cannot use the
:include option to include such structs?  If one can, how do the accessors
determine the index of those structs doing the including?  (For example,
if struct foo (which has mostly lispval slots) includes struct bar
(which has all untagged slots), how do bar's accessors know how to
access the bar slots in a foo?  They can't take on the same index, as
they do in normal defstruct implementations...)

We have toyed with the idea of such untagged slots, because we have had
a couple of requests and the idea is intriguing.  However, we would
plan to do it instead with some kind of a bitmap within the structure
to note which slots are tagged and which are untagged.  I don't see
how one could allow inclusion of superclass structures without losing
the performance of the accessors in figuring out where the slots are.
(Of course, this could just be because I'm currently stuffed with
turkey and yams and pumpkin pie, and just can't think straight tonight
:-)
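
For concreteness, the bitmap strategy could be sketched in C roughly as
follows (a hypothetical layout; the names and details are made up here for
illustration and are not from Allegro CL or any other implementation):

```c
#include <stdint.h>
#include <stddef.h>

typedef uintptr_t lispval;

struct structure {
    uintptr_t tagged_bitmap;   /* bit i set => slot i holds a tagged word */
    size_t    nslots;
    uintptr_t slots[];         /* tagged and raw words, interleaved freely */
};

/* The collector's scan loop: visit (and optionally forward) only the
   slots whose bitmap bit says "tagged"; raw slots are skipped, so a
   float's bit pattern can never be mistaken for a pointer.  Returns
   the number of slots visited. */
static size_t scan_structure(struct structure *s,
                             void (*forward)(lispval *slot)) {
    size_t visited = 0;
    for (size_t i = 0; i < s->nslots; i++) {
        if (s->tagged_bitmap & ((uintptr_t)1 << i)) {
            if (forward)
                forward(&s->slots[i]);
            visited++;
        }
    }
    return visited;
}
```

Note that slot access itself stays a direct indexed load; only the
allocator and collector pay for consulting the bitmap, which is the
trade-off described above.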

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Christophe Rhodes
Subject: Re: More static type fun.
Date: 
Message-ID: <sq1xrsmd9k.fsf@lambda.jcn.srcf.net>
Duane Rettig <·····@franz.com> writes:

> Christophe Rhodes <·····@cam.ac.uk> writes:
>
>>  Without looking at the source, this is implemented by
>> rearranging so that all the "raw" slots are stored at the end of the
>> structure, and putting in the structure a "raw-index" beyond which it
>> is not scanned for pointers.
>
> That could be bad.  Does this also then mean that one cannot use the
> :include option to include such structs?  If so, how do the accessors
> determine the index of those structs doing the including?  (for example,
> if struct foo (which has mostly lispval slots) includes struct bar
> (which has all untagged slots) how do bar's accessors know how to
> access the bar-slots in a foo?  They can't take on the same index, as
> they do in normal defstruct implementations...

Right.  I'm sorry, I shouldn't have spoken without looking things up,
particularly since this is inherited functionality and not something
I've looked at (beyond occasionally commenting it out when making
other binary-incompatible modifications :-)

What actually happens is that structure types have a "raw-index" where
a vector specialized to (unsigned-byte 32) [*] is placed, of suitable
length; then accessors for these well-typed slots access raw bits of
this vector.  So we pay for the relative ease of implementation (I
presume; I don't know how long the CMUCL gods slaved away at this way
back in 198X :-) with an extra memory indirection.
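
In C terms, that scheme could look something like the following sketch,
using the FOO example from upthread (slots A and B tagged, C and D raw).
Everything here is hypothetical, layout and names included; it is not
CMUCL's actual source:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

typedef uintptr_t lispval;

struct raw_vector {
    size_t   length;      /* number of 32-bit cells */
    uint32_t data[];      /* raw bits; never scanned by the GC */
};

struct foo_instance {
    lispval a;                 /* INTEGER slot: tagged word, GC-scanned */
    lispval b;                 /* SYMBOL slot: likewise */
    struct raw_vector *raw;    /* DOUBLE-FLOAT and (UNSIGNED-BYTE 32)
                                  slots live here as raw bits */
};

/* Accessor for slot D, an (unsigned-byte 32) at raw cell 0. */
static uint32_t foo_d(const struct foo_instance *s) {
    return s->raw->data[0];
}

/* Accessor for slot C, a double-float occupying raw cells 1 and 2;
   note the extra indirection through s->raw mentioned above. */
static double foo_c(const struct foo_instance *s) {
    double d;
    memcpy(&d, &s->raw->data[1], sizeof d);
    return d;
}
```

The GC can then treat the raw vector as an ordinary specialized vector
(contents never scanned), at the cost of one extra memory load per
well-typed slot access.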

> We have toyed with the idea of such untagged slots, because we have had
> a couple of requests and the idea is intriguing.  However, we would
> plan to do it instead with some kind of a bitmap within the structure
> to note which slots are tagged and which are untagged.  I don't see
> how one allow inclusion of superclass structures without losing the
> performance of the accessors in figuring out where the slots are. 

A bitmask is the first implementation strategy that came to mind when
your objection was relayed to me over IRC earlier; is your final
sentence referring to the bitmask strategy or to what I mistakenly led
you to believe cmucl and derivatives were doing?

> (Of course, this could just be because I'm currently stuffed with
> turkey and yams and pumpkin pie, and just can't think straight tonight
> :-)

Gosh, and here I was thinking that food was good for me ;-)

Christophe

[*] one reason I've been commenting it out is that I have an embryonic
64-bit-capable sbcl, which needs a certain amount more love and
tenderness before it is suitable for the real world.
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <4wu9kw0m5.fsf@franz.com>
Christophe Rhodes <·····@cam.ac.uk> writes:

> Duane Rettig <·····@franz.com> writes:
> 
> > Christophe Rhodes <·····@cam.ac.uk> writes:
> >
> >>  Without looking at the source, this is implemented by
> >> rearranging so that all the "raw" slots are stored at the end of the
> >> structure, and putting in the structure a "raw-index" beyond which it
> >> is not scanned for pointers.
> >
> > That could be bad.  Does this also then mean that one cannot use the
> > :include option to include such structs?  If so, how do the accessors
> > determine the index of those structs doing the including?  (for example,
> > if struct foo (which has mostly lispval slots) includes struct bar
> > (which has all untagged slots) how do bar's accessors know how to
> > access the bar-slots in a foo?  They can't take on the same index, as
> > they do in normal defstruct implementations...
> 
> Right.  I'm sorry, I shouldn't have spoken without looking things up,
> particularly since this is inherited functionality and not something
> I've looked at (beyond occasionally commenting it out when making
> other binary-incompatible modifications :-)
> 
> What actually happens is that structure types have a "raw-index" where
> a vector specialized to (unsigned-byte 32) [*] is placed, of suitable
> length; then accessors for these well-typed slots access raw bits of
> this vector.  So we pay for the relative ease of implementation (I
> presume; I don't know how long the CMUCL gods slaved away at this way
> back in 198X :-) with an extra memory indirection.

OK, this seems more reasonable.  Slower than I would want to implement,
but at least semantically defensible.

> > We have toyed with the idea of such untagged slots, because we have had
> > a couple of requests and the idea is intriguing.  However, we would
> > plan to do it instead with some kind of a bitmap within the structure
> > to note which slots are tagged and which are untagged.  I don't see
> > how one allow inclusion of superclass structures without losing the
> > performance of the accessors in figuring out where the slots are. 
> 
> A bitmask is the first implementation strategy that came to mind when
> your objection was relayed to me over IRC earlier; is your final
> sentence referring to the bitmask strategy or to what I mistakenly led
> you to believe cmucl and derivatives were doing?

The latter; the bitmap strategy is much harder on the allocator and
garbage collector, but fastest (with no indirections).  What you
are describing costs one indirection.  Any further costs, and one
might as well create some specialized CLOS slots/accessors.  What you
had originally described seems unworkable, because it would have the
same slot-index variability that CLOS has (without extra efforts to
hold the indices steady); keeping the same indices would in fact be
impossible, thus leading to the need for some kind of lookup per
structure class...

> > (Of course, this could just be because I'm currently stuffed with
> > turkey and yams and pumpkin pie, and just can't think straight tonight
> > :-)
> 
> Gosh, and here I was thinking that food was good for me ;-)

Well, you know about "too much of a good thing?"  That's the tradition
we tend to hold to (even swearing beforehand that we won't overindulge)
when we go to family's house for our US Thanksgiving holiday.

> Christophe
> 
> [*] one reason I've been commenting it out is that I have an embryonic
> 64-bit-capable sbcl, which needs a certain amount more love and
> tenderness before it is suitable for the real world.

Hopefully you'll also be adding ((un)signed-byte 64) for those
ports...

http://www.franz.com/support/documentation/6.2/doc/implementation.htm#data-types-1

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Christophe Rhodes
Subject: Re: More static type fun.
Date: 
Message-ID: <sqsmk8kqm7.fsf@lambda.jcn.srcf.net>
Duane Rettig <·····@franz.com> writes:

> Christophe Rhodes <·····@cam.ac.uk> writes:
>
>> A bitmask is the first implementation strategy that came to mind when
>> your objection was relayed to me over IRC earlier; is your final
>> sentence referring to the bitmask strategy or to what I mistakenly led
>> you to believe cmucl and derivatives were doing?
>
> The latter; the bitmap strategy is much harder on the allocator and
> garbage collector, but fastest (with no indirections).  What you
> are describing costs one indirection.  Any further costs, and one
> might as well create some specialized CLOS slots/accessors.  What you
> had originally described seems unworkable, because in fact it would
> have the same slot-index-variability that CLOS has (without extra
> efforts to hold them steady) and in fact keeping the same indeces would
> in fact be impossible, thus leading to the need for some kind of
> lookup per structure class...

Right, this was the conclusion I'd come to; I was just worried that
you'd seen a problem with the bitmap strategy that I'd missed.  Thanks
for the clarification.

>> [*] one reason I've been commenting it out is that I have an embryonic
>> 64-bit-capable sbcl, which needs a certain amount more love and
>> tenderness before it is suitable for the real world.
>
> Hopefully you'll be additionally adding ((un)signed-byte 64) for those
> ports...
>
> http://www.franz.com/support/documentation/6.2/doc/implementation.htm#data-types-1

I hesitate to mention this, but you do know that you need
(unsigned-byte 63) too, right? (implementor's in-joke ;-) 

SBCL, additionally to the specialized types in your table, has a
specialization for FIXNUM [ and (AND FIXNUM UNSIGNED-BYTE) ] which
stores the lispvals directly.  In answer to the direct point, I
wouldn't consider a lisp 64-bit-capable if it didn't have at least
decently-sized fixnums and 64-bit type array specializations.  I can't
honestly say whether the 64-bit arrays are currently working in the
development branch (CVS tag is alpha64_2_branch, if anyone is
interested), but at least the code is trying to do the right thing.

Christophe
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: james anderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3FC5BEAD.1E217BD8@setf.de>
?
beyond the matter of non-uniformity, how does a structure with non-boxed typed
slots differ from an array with uniformly unboxed elements? why does the
inhomogeneity imply "not really"?

Christophe Rhodes wrote:
> 
> [ probably isn't riveting for clf; note f'ups ]
> 
> Duane Rettig <·····@franz.com> writes:
> 
> > Fergus Henderson <···@cs.mu.oz.au> writes:
> >
> >> Duane Rettig <·····@franz.com> writes:
> >>
> >> >Fergus Henderson <···@cs.mu.oz.au> writes:
> >> >
> >> >> I was talking about Lisp in general.  I agree that CMUCL is an exception,
> >> >> because of its support for untagged ("non-descriptor") representations.
> >> >
> >> >It sounds like you are talking about arrays of Specialized element-types (see:
> >> >http://www.franz.com/support/documentation/6.2/ansicl/subsecti/special0.htm)
> >>
> >> Not really.  It was mainly CMUCL's support for structures containing
> >> non-descriptor representations for their fields (slots) which convinced
> >> me that it was an exception.
> >
> > Descriptors look like internal CMUCL documentation to me, but as near
> > as I can tell, they are describing what I have called lispvals in my
> > own descriptions, and these are nothing more than tagged pointers.
> > We also tend to use the terms "boxed" and "unboxed", and many CL compilers
> > will do unboxed-arithmetic if properly declared.  Allegro CL even
> > has an undocmented feature which allows some types of these unboxed values
> > to be passed to and returned from functions.
> 
> I think you're still missing the point; or at least, you're not
> responding to Fergus'.
> 
> What he's describing is storing the untagged value in a heterogeneous
> structure; for instance, in
>   (defstruct foo ; not FOO :TYPE, so ordinary structure
>     (a 0 :type integer)
>     (b nil :type symbol)
>     (c 0.0d0 :type double-float)
>     (d 0 :type (unsigned-byte 32)))
> the values stored in slots A and B will be as lispvals (your
> terminology) but those stored in slots C and D will be untagged
> machine representation.  This is enforced by type checking and type
> inference for the slot accessors (though this can be evaded by
> sufficient nastiness, most usually involving &aux BOA-constructors in
> CMUCL; some of these holes have been blocked in SBCL).
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <llqet5b3.fsf@comcast.net>
Matthias Blume <····@my.address.elsewhere> writes:

> Joe Marshall <·············@comcast.net> writes:
>
>> Matthias Blume <····@my.address.elsewhere> writes:
>> 
>> > Don Geddis <···@geddis.org> writes:
>> >
>> >> Someone wrote:
>> >>
>> >> > Lisp's "type" specification sublanguage specifies subsets of values,
>> >> > not data representations.
>> >> 
>> >> Incorrect.  Yes, you can specify subsets, but you can also specify unions:
>> >
>> > So the union of two subsets is not a subset then?
>> 
>> If sa is a subset of A, and sb is a subset of B, where A and B are
>> disjoint, and usasb is the union of sa and sb, usasb is not
>> necessarily a subset of either A or B.
>> 
>> This is pretty clear, so you must have meant something else.
>
> Indeed.  Where did you get the idea that A and B be disjoint?  

Since Don Geddis said `you can specify subsets, but you can also
specify unions' I assumed that he was speaking informally and simply
wanted to illustrate that taking subsets wasn't the only operation one
could perform.  If A and B are disjoint (like for instance numbers and
cons cells) then it is clear that the union of A and B is neither a
subset of A nor of B.  If A and B are not disjoint (like for instance
positive integers and all integers), then it is not so clear.

> In the case under consideration they are both equal to the set of
> all values (and therefore to each other).  So the correct formal
> rendering is: if s \subseteq U and also s' \subseteq U, then (s \cup
> s') \subseteq U.

Sure.  Any type is a subset of the union of all types.  But this isn't
very interesting because there isn't a whole lot of information there.
Operations like union are rather trivial.  It is far more interesting
to take the union of selected subtypes.  

>> >> Unions, intersections, subsets, arbitrary computation, plus an extensive set
>> >> of built-in types ... surely that's about all you could ask from any type
>> >> definition system?
>> >
>> > No.  This is definitely not all (and AFAIC not even nearly so).
>> 
>> Can I ask what else you might want?  Can I ask how you expect to get
>> it if you cannot compute it?
>
> See Pierce's definition of "type system".  *That* is what I want.
> "Arbitrary computation" is precisely the thing that kills it.

So we're back at Pierce.

`There are more things in heaven and earth, Horatio, than are dreamt
of in your philosophy.'

-- 
~jrm
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1d6bp8vm8.fsf@tti5.uchicago.edu>
Joe Marshall <·············@comcast.net> writes:

> Since Don Geddis said `you can specify subsets, but you can also
> specify unions' I assumed that he was speaking informally and simply
> wanted to illustrate that taking subsets wasn't the only operation one
> could perform.

The original claim was quite explicitly talking about the fact that
Lisp's "types" are subsets of the set of all values.

> Sure.  Any type is a subset of the union of all types.  But this isn't
> very interesting because there isn't a whole lot of information there.

*Exactly*

> `There are more things on heaven and earth, Horatio, than are dreamt
> of in your philosophy.'

...but a Lisp "type system" ain't one of them. :-)
From: Nikodemus Siivola
Subject: Re: More static type fun.
Date: 
Message-ID: <bpdjq3$5ho$1@nyytiset.pp.htv.fi>
In comp.lang.lisp Matthias Blume <····@my.address.elsewhere> wrote:

> The original claim was quite explicitly talking about the fact that
> Lisp's "types" are subsets of the set of all values.

I've clearly misunderstood something.

From this discussion I was under the impression that several
statically typed functional languages provide an "Any" type (or
equivalent).

The existence of these union-of-all-types types was offered as an
example of how these languages provide the same level of convenience
as dynamically typed ones.

Is that not true, or how is that different from the Common Lisp type
"T"?

Cheers,

 -- Nikodemus
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1znet7fnd.fsf@tti5.uchicago.edu>
Nikodemus Siivola <······@random-state.net> writes:

> In comp.lang.lisp Matthias Blume <····@my.address.elsewhere> wrote:
> 
> > The original claim was quite explicitly talking about the fact that
> > Lisp's "types" are subsets of the set of all values.
> 
> I've clearly misunderstood something.
> 
> From this discussion I was under the impression that several
> statically typed functional languages provide "Any" type (or
> equivalent). 
> 
> The existence of these union-of-all-types-types was offered as an
> example of how these languages provide the same level of convenience
> as dynamically typed.
> 
> Is that not true, or how is that different from the Common Lisp type
> "T"?

The distinction is not in what types exist (i.e., which sets of values
are there that are [interpretations of] types).  It is in when
reasoning about such types takes place.  Again, see Pierce's
"definition" (from the back cover of "Types and Programming Languages"
by Benjamin C. Pierce, MIT Press, 0-262-16209-1):

  "A type system is a syntactic method for automatically checking the
   absence of certain erroneous behaviors by classifying program phrases
   according to the kinds of values they compute. [...]"

The emphasis is on /phrases/: In a type system, types are used to
classify program phrases rather than values (even though one can
usually find a semantic interpretation of types as sets of values --
which can then for example be used to discuss soundness and related
properties that may or may not hold for type systems).
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpfsbi$107g$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:

> The distinction is not in what types exist (i.e., which sets of values
> are there that are [interpetations of] types).  It is in when
> reasoning about such types takes place.  Again, see Pierce's
> "definition" (from the back cover of "Types and Programming Languages"
> by Benjamin C. Pierce, MIT Press, 0-262-16209-1):
> 
>   "A type system is a syntactic method for automatically checking the
>    absence of certain erroneous behaviors by classifying program phrases
>    according to the kinds of values they compute. [...]"
> 
> The emphasis is on /phrases/: In a type system, types are used to
> classify program phrases rather than values (even though one can
> usually find a semantic intepretation of types as sets of values --
> which can then for example be used to discuss soundness and related
> properties that may or may not hold for type systems).

Would it be a big problem for you static typers if you could leave out 
terms like "errors", "bugs", and so on from your descriptions?

What if the definition was like this:

  "A type system is a syntactic method for automatically checking the
   absence of certain behaviors by classifying program phrases
   according to the kinds of values they compute."

This would, from my point of view, much better address both sides of the 
coin.

To repeat: errors can be extremely useful at runtime because
they can drive the intended behavior.  In a strict sense, it doesn't
matter whether a language provides dynamic checks by default or whether
you have to simulate those dynamic checks.  But dynamic checks that
determine the absence of a property at runtime _are errors_.  This
doesn't say anything about whether these errors have a positive
or negative net effect on the intended behavior of a program.

So, in a sense, it is not at all clear that one doesn't want a program 
to be potentially erroneous.

That's probably the main point that concerns dynamic typers. A static 
type system doesn't say anything about the absence or presence of 
errors, nor about whether errors should be avoided or not in general.

Static type systems only provide a way to prove the absence of _some_ 
behavior - and if they are expressive enough that they allow you to 
express a certain class of errors that you happen not to want in a 
program, that's certainly a benefit. But nothing about this "method" is 
"automatic".


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpfufn$bjp$1@grizzly.ps.uni-sb.de>
Pascal Costanza wrote:
> 
> What if the definition was like this:
> 
>   "A type system is a syntactic method for automatically checking the
>    absence of certain behaviors by classifying program phrases
>    according to the kinds of values they compute."
> 
> This would, from my point of view, much better address both sides of the
> coin.

How is that? AFAICS, the only significant modifications you made are 
removing the requirement for tractability (which I don't follow), and 
replacing "proving" by "automatically checking", of which I don't see what 
it would change significantly.

The definition still clearly characterises something static.

> A static
> type system doesn't say anything about the absence or presence of
> errors

Sure it does say *something*. It does not say *everything*, though.

> Static type systems only provide a way to prove the absence of _some_
> behavior - and if they are expressive enough that they allow you to
> express a certain class of errors that you happen not to want in a
> program, that's certainly a benefit. But nothing about this "method" is
> "automatic".

I don't follow. The checking *is* fully automatic.

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Mario S. Mommer
Subject: Re: More static type fun.
Date: 
Message-ID: <fzzneso1yk.fsf@cupid.igpm.rwth-aachen.de>
Andreas Rossberg <········@ps.uni-sb.de> writes:
> I don't follow. The checking *is* fully automatic.

You have to add annotations, so it is not automatic.

I have the feeling that we are moving in circles here.
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpg0nq$emi$1@grizzly.ps.uni-sb.de>
Mario S. Mommer wrote:
>> I don't follow. The checking *is* fully automatic.
> 
> You have to add annotations, so it is not automatic.

Don't snip essential context. Pascal claimed *nothing* is automatic about 
it, which is clearly wrong.

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpg44j$107u$1@f1node01.rhrz.uni-bonn.de>
Andreas Rossberg wrote:
> Pascal Costanza wrote:
> 
>>What if the definition was like this:
>>
>>  "A type system is a syntactic method for automatically checking the
>>   absence of certain behaviors by classifying program phrases
>>   according to the kinds of values they compute."
>>
>>This would, from my point of view, much better address both sides of the
>>coin.
> 
> How is that? AFAICS, the only significant modifications you made are 
> removing the requirement for tractability (which I don't follow), and 
> replacing "proving" by "automatically checking", of which I don't see what 
> it would change significantly.

I didn't make those changes - I have made only one single change from 
the original paragraph.

> The definition still clearly characterises something static.

Yes, but it doesn't call the things that are proven absent "errors". At 
the moment, I think this is the important misunderstanding.

With a static type system you can prove the absence of certain 
behaviors. They might or might not be errors, and it might or might not 
be desirable to have them absent. Especially, it might be the case that 
you only want to prove the absence of a _part_ of the behavior that a 
static type system can prove absent. Static type systems can, by their 
very nature, only perform systematic checks. In general, they can't 
prove anything about individual cases, for example.

Under this light, the whole static vs. dynamic typing issue becomes a 
trade-off, which essentially goes like this: If most of the code you 
write needs to be proven absent of most behaviors that can be dealt with 
by static type systems, you should choose a language that has one. If most 
of the code you write needs to be as flexible as possible, you should 
choose a language that has a dynamic type system.

Note that, while "as flexible as possible" might sound like an overly 
positive statement about the things dynamic type systems provide, it 
isn't necessarily meant as an endorsement to use dynamic type systems 
for everything.

However, it is important to note that static typers merely _believe_ that 
most code needs to be correct. They can't know this. There is no 
empirical data. Equally, dynamic typers merely _believe_ that most code 
needs to be as flexible as possible. Again, they can't know this. Again, 
no empirical data.

What people of both camps probably have are _anecdotes_ and _personal_ 
experience of when their favorite type system/language really saved their 
ass, because it was exactly designed the way it was designed. But this 
still boils down to belief systems, and may simply mean that the people 
involved happened to be people whose working styles fit certain type 
systems/languages better.

An important point in this light is that no one can predict the future. 
So, in each single case when you start to write a program, it might turn 
out later on that you have used the wrong approach. A project started 
with a statically typed language might turn out, unexpectedly, to need 
much more flexibility, and vice versa, a project started with a 
dynamically typed language might turn out, unexpectedly, to need much 
more static checks. That's why I think a unified language framework 
where all these things are optional would be much much better.

However, we can't change the world as it is right now. Life is 
dangerous, programming is hard, we can all fail if we are unlucky, and 
that's the way it is. To base a decision on a belief system (whether 
static or dynamic ;) is most probably the best thing we can do.

Pretending to have a silver bullet is overestimating one's own 
subjective experience, and when someone says he has a solution that 
works in general, I simply don't believe him.

Unless, of course, he is proposing Common Lisp. ;-))

>>A static
>>type system doesn't say anything about the absence or presence of
>>errors
> 
> Sure it does say *something*. It does not say *everything*, though.

No, it only says something about the absence or presence of certain 
behaviors.

>>Static type systems only provide a way to prove the absence of _some_
>>behavior - and if they are expressive enough that they allow you to
>>express a certain class of errors that you happen not to want in a
>>program, that's certainly a benefit. But nothing about this "method" is
>>"automatic".
> 
> I don't follow. The checking *is* fully automatic.

No system can automatically check the absence of behavior. You can only 
get from a system what you feed it.

To put it like this: a type language allows you to instruct the compiler 
to perform certain checks on your code. Without such instructions, the 
compiler can't do anything useful.

And the fact that advanced static type systems are based on inference is 
irrelevant here. The compiler needs _something_ - it doesn't matter 
whether it's "only" the code or not. What matters is: What is the 
default behavior of a language in the case when no type annotations are 
present? Does it opt for staticity, or for dynamicity by default? And 
then, we are back to the trade-offs I have mentioned above. (Or, 
alternatively, can the language framework be tweaked to switch its 
default behavior?)


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Mario S. Mommer
Subject: Re: More static type fun.
Date: 
Message-ID: <fzad6sf8c2.fsf@cupid.igpm.rwth-aachen.de>
Andreas Rossberg <········@ps.uni-sb.de> writes:
> Fully agreed. But who did so? (Well, apart from those who claimed that Lisp 
> is that bullet because you could plug-in static typing if you want and have 
> all that eg MLs have? I'm deeply afraid they still actually believe that 
> claim.)

Locally at least it is indeed possible. Not only in principle, but
actually possible. It involves writing an ML compiler with a prefix
version of ML syntax. Then you can embed ML in Lisp code. Natively.

> I don't need to put "instructions" in my code (although I can if I want, in 
> the form of type declarations).

Well, can I just write, for integer n,

integer i = (n^3)/3 + n^2- n/3

?

Unless your language includes an algebra system, it has to barf at
this code. So I'll _have_ to add annotations to appease the type
system. Or else, I would have to write a different representation for
this formula, again only to appease the type system.

And please refrain from saying that n/3 is not defined. Of course it
is defined. It is called "n thirds", FYI.
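Concretely, the same computation can be sketched in Python (used here only as a convenient stand-in for the dynamic language Mommer describes; the function name is made up). The arithmetic passes through genuine rationals, yet the result is an integer for every integer n:

```python
from fractions import Fraction

def i_of(n: int) -> int:
    # Perform the arithmetic in exact rationals: (n^3)/3 and n/3 are
    # genuine fractions along the way.
    nf = Fraction(n)
    r = (nf ** 3) / 3 + nf ** 2 - nf / 3
    # The result is integral for every integer n, because
    # (n^3 - n)/3 = n(n-1)(n+1)/3 and one of any three consecutive
    # integers is divisible by 3 -- a fact no type checker is told.
    assert r.denominator == 1
    return int(r)
```

For example, i_of(5) evaluates to 65, an ordinary integer, even though rationals appear in the intermediate steps.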

> > And the fact that advanced static type systems are based on inference is
> > irrelevant here. The compiler needs _something_ - it doesn't matter
> > whether it's "only" the code or not.
> 
> Please, what kind of logic is that? Nobody can check anything if there isn't 
> anything to check, whether automatically or manually. That does say 
> absolutely nothing about the automation level of checks in the presence of 
> "something", by the basic laws of logic.

If it were completely automatic, one would not need annotations. It is
unreasonable to ask this from static type systems, but it is also
unreasonable to pretend they can do such things.
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpi3ib$ta0$1@grizzly.ps.uni-sb.de>
Mario S. Mommer <········@yahoo.com> wrote:
>
> > I don't need to put "instructions" in my code (although I can if I want,
in
> > the form of type declarations).
>
> Well, can I just write, for integer n,
>
> integer i = (n^3)/3 + n^2- n/3
>
> ?

You still don't get it, right? We've been over this several times, and still
you refuse to accept that this is an issue that is completely orthogonal to
static vs dynamic typing.

It is trivial to put up a static type system that allows that. It is
probably even easier than what most statically typed languages provide
(likewise, you could just as well build a dynamic language that raises an
error on mixed arithmetic). Thing is: most people don't want such implicit
conversions between arithmetic types, because they know that such implicit
conversions - whether statically typed or dynamically typed - are
potentially dangerous. YMMV, but that is a question completely independent
of static typing.

So what you wrote was beside the point (again).

    - Andreas
From: Mario S. Mommer
Subject: Re: More static type fun.
Date: 
Message-ID: <fzn0archo8.fsf@cupid.igpm.rwth-aachen.de>
"Andreas Rossberg" <········@ps.uni-sb.de> writes:
> Mario S. Mommer <········@yahoo.com> wrote:
> >
> > > I don't need to put "instructions" in my code (although I can if I want,
> in
> > > the form of type declarations).
> >
> > Well, can I just write, for integer n,
> >
> > integer i = (n^3)/3 + n^2- n/3
> >
> > ?
> 
> You still don't get it, right? We've been over this several times,
> and still you refuse to accept that this is an issue that is
> completely orthogonal to static vs dynamic typing.

The point was that you claimed that redundant annotations are not
needed. They are. The result of the above expression, *which includes
rationals* (!), is an integer. I have to state that explicitly,
although *I* already know this. In the dynamic language I use
I get an integer when I evaluate the above. It performs the arithmetic
in the correct numeric type (rational) and then returns the resulting
integer.

The point of this is not to show your language is broken, only to
point out that "no redundant annotations are needed" is wrong. Even in
extremely simple cases your type checker fails, unless you have a
broken language in which

1/2+1/2=0

as it seems to be the case in Clean (this is the impression I get from
other posts).
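The 1/2+1/2 pitfall can be made concrete in Python, where truncating integer division is spelled `//` and plays the role of the division Mommer is objecting to:

```python
from fractions import Fraction

# If '/' on integers means truncating integer division (Python's '//'),
# the halves silently vanish:
truncating = 1 // 2 + 1 // 2          # 0 + 0 == 0

# With exact rational arithmetic, the same expression is 1:
exact = Fraction(1, 2) + Fraction(1, 2)
```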

And don't come with "this is rare". It doesn't matter if it is rare
because it is simply a counterexample. And depending on the
application domain you are working in, this kind of stuff is actually
relatively frequent.

> It is trivial to put up a static type system that allows that.

Of course, by deciding that '/' performs 'integer division'. This is
IMO worse than having to add annotations everywhere.

> Thing is: most people don't want such implicit conversions between
> arithmetic types, because they know that such implicit conversions -
> whether statically typed or dynamically typed - are potentially
> dangerous.

Evidence?

> So what you wrote was beside the point (again).

No. You were claiming that the type inference is fully automatic. I
have shown a simple example where you need to manually help the type
system so that it can make sense of a perfectly correct program. So it
is not fully automatic.
From: Isaac Gouy
Subject: Re: More static type fun.
Date: 
Message-ID: <ce7ef1c8.0311201214.2c10bc47@posting.google.com>
Mario S. Mommer <········@yahoo.com> wrote in message news:<··············@cupid.igpm.rwth-aachen.de>...
-snip-
> In the dynamic language I use
> I get an integer when I evaluate the above. It performs the arithmetic
> in the correct numeric type (rational) and then returns the resulting
> integer.
> 
> The point of this is not to show your language is broken, only to
> point out that "no redundant annotations are needed" is wrong. Even in
> extremely simple cases your type checker fails, unless you have a
> broken language in which
> 
> 1/2+1/2=0
> 
> as it seems to be the case in Clean (this is the impression I get from
> other posts).

Clean - "a broken language" - because implementing Rationals is less
interesting to the language designers than building an OS?

or 

Clean - "a broken language" - because by-design "The arguments of an
arithmetic operator must both be integer or both be real. The
expression 1.5 + 2 is not accepted by the compiler"?

One of the silliest performance problems I've seen in Smalltalk was
caused by implicit runtime conversions between SmallIntegers and
Doubles. Easily fixed by explicit conversion of values to the same
class at the beginning of the algorithm.

best wishes, Isaac
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpgu3n$siq$1@newsreader3.netcologne.de>
Andreas Rossberg wrote:

>>>The definition still clearly characterises something static.
>>
>>Yes, but it doesn't call the things that are proven absent "errors". At
>>the moment, I think this is the important misunderstanding.
> 
> Er, the original definition you find in Pierce' book (and which I quoted 
> some time ago) didn't speak about errors either, but also said behaviour. 
> It was almost the same as yours.

Matthias claimed to have quoted the definition from the back cover of 
Pierce's book. I haven't actually checked whether the quote is correct.

Maybe the quote on Pierce's book cover is only the "marketing version" 
of his actual definition, but this would point to the very essential 
problem of this whole discussion.

Static type systems cannot prove the absence of "certain erroneous 
behaviors". It's the programmers that do so who make use of those static 
type systems. Static type systems can support programmers in doing so, 
but that's about all they can do. This is true for all technology that 
supports you in writing well-behaved programs.

If Pierce really doesn't talk about "erroneous behaviors" in his actual 
definition, then I'm fine. What I object to is the misconception that is 
created by narrowing such statements to talk about "bugs" and "errors", 
especially when it isn't even clear that those "bugs" and "errors" are 
undesirable in certain contexts.

>>Pretending to have a silver bullet is overestimating one's own
>>subjective experience, and when someone says he has a solution that
>>works in general, I simply don't believe him.
> 
> Fully agreed. But who did so? (Well, apart from those who claimed that Lisp 
> is that bullet because you could plug-in static typing if you want and have 
> all that eg MLs have? I'm deeply afraid they still actually believe that 
> claim.)

Well, I would even go so far as to claim that this is trivially true.

The essence of Lisp is that programs and data are the same. In Lisp, 
it's trivial to treat an s-expression as a piece of data in one moment 
and as an executable piece of code in the very next, and vice versa.

 From a very general perspective, all you would need to do in Common 
Lisp is to define a package that inherits from the COMMON-LISP package, 
and inside that package you shadow and/or redefine some of the Common 
Lisp definitions that stand in the way of doing advanced, 100% safe 
static type checking.

To that puzzle, add a code walker that performs the actual type checks, 
and you're basically done.
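A toy version of such a code walker can be sketched in a few lines; here in Python over s-expression-like nested lists (all names hypothetical, and a real pluggable type system would be vastly richer). It traverses the code without ever executing it:

```python
# Hypothetical toy "code walker" in the spirit of the sketch above: it
# walks an s-expression-like tree, consulting a type environment for
# variables, and checks a tiny language with integers and '+'.
def typecheck(expr, env):
    if isinstance(expr, int):
        return "integer"
    if isinstance(expr, str):            # a variable reference
        return env[expr]
    op, *args = expr                     # a compound form, e.g. ["+", 1, "x"]
    if op == "+":
        for a in args:
            if typecheck(a, env) != "integer":
                raise TypeError(f"non-integer argument to +: {a!r}")
        return "integer"
    raise TypeError(f"unknown operator: {op!r}")
```

For instance, typecheck(["+", 1, ["+", "x", 2]], {"x": "integer"}) returns "integer", while typecheck(["+", 1, "s"], {"s": "string"}) signals a type error, all before any evaluation.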

The (negative and positive) consequences would be as follows:

+ You could move code that you have developed in one style to the other 
style, simply by letting the packages that you develop your code in 
inherit from that "static" package instead of COMMON-LISP.

+ You could use different code walkers, i.e. different "pluggable" type 
systems. They could also vary wrt their "strength". You could also add 
other static analysis tools, like those that perform analysis for race 
conditions.

- You would certainly need to adapt the programming style in some 
non-trivial ways according to the actual type system you would use. The 
new STATIC-COMMON-LISP packages clearly cannot be completely compatible 
to COMMON-LISP. (But at least you wouldn't need to completely change the 
language syntax, the set of tools you use - compilers, debuggers, and so 
on - and there is a chance that you can mold your code into the needs of 
some static type system via some semi-automatic refactorings. There 
could be a model to include a specialized set of refactorings along with 
each pluggable static type system.)

- It could be problematic to combine code developed under different type 
systems (i.e. code walkers). For example, code developed under system A 
might need to simply call code developed under system B, but their 
typing approaches are rather different. (This is the point that I am not 
so certain about whether it is actually feasible. However, this is not 
unlike the situation that current statically typed languages have to 
face when they need to interact with code developed in other languages, 
or even with the outside world. Again, at least you wouldn't need to 
switch the complete language framework just to change one aspect of your 
programming style.)

In a certain sense, such approaches have already been undertaken. ACL2 
and Qi can be considered as being close to such an approach.

For a Common Lisper, the claim above is obviously true because that's 
how we program all the time: We incorporate domain-specific languages 
into our programs to solve subtasks. A static type system would just be 
a domain-specific language that reasons about code without executing it. 
And this is usually just a matter of getting the macros right. ;)

(Of course, the details are probably much harder than this sketch might 
suggest. But the details are not fundamentally harder than to get a type 
system right for a language implemented in some other way.)

>>>>A static
>>>>type system doesn't say anything about the absence or presence of
>>>>errors
>>>
>>>Sure it does say *something*. It does not say *everything*, though.
>>
>>No, it only says something about the absence or presence of certain
>>behaviors.
> 
> In most application domains much of that "certain behaviour" clearly has to 
> be classified as erroneous, though, and in that case the type system tells 
> you about "errors" wrt that domain. So such an absolute statement is wrong.

How many application domains do you think exist? How many application 
domains do you actually know exist? How large is the subset of those 
that you know really well?

Maybe you have unconsciously chosen to work only for those application 
domains that fit your working style well, and therefore you conclude 
that the tools you like to use "somehow miraculously" fit the 
applications domain well that you like to work in.

This is probably true for anyone involved in such discussions.

"People who like this sort of thing will find this the sort of thing 
they like" - Abraham Lincoln

>>>>Static type systems only provide a way to prove the absence of _some_
>>>>behavior - and if they are expressive enough that they allow you to
>>>>express a certain class of errors that you happen not to want in a
>>>>program, that's certainly a benefit. But nothing about this "method" is
>>>>"automatic".
>>>
>>>I don't follow. The checking *is* fully automatic.
>>
>>No system can automatically check the absence of behavior. You can only
>>get from a system what you feed it.
> 
> I need to feed the code. But that's the very subject of analysis.

Many static typers have already admitted that static type systems 
require you to avoid certain programming styles and adopt others. 
They usually quote this as an improvement by saying that this already 
improves their understanding of a problem. This might or might not be 
true - there are certainly also other people who have difficulty 
getting used to working with static type systems.

However this might be, you have to adapt to the type system, and this is 
also a kind of "feeding".

Yes, I know that Lisp also requires you to adapt your programming style 
in some other ways, and this may also be difficult to adapt to. But 
that's beside the point. This amounts simply to the fact that there is 
no free lunch.

To put it differently, as soon as you get the impression that my 
statement above is trivial from a certain perspective, you have 
understood the perspective that I am trying to talk about. ;)

>>To put it like this: a type language allows you to instruct the compiler
>>to perform certain checks on your code. Without such instructions, the
>>compiler can't do anything useful.
> 
> I don't need to put "instructions" in my code (although I can if I want, in 
> the form of type declarations).

But you need to avoid some.

>>And the fact that advanced static type systems are based on inference is
>>irrelevant here. The compiler needs _something_ - it doesn't matter
>>whether it's "only" the code or not.
> 
> Please, what kind of logic is that? Nobody can check anything if there isn't 
> anything to check, whether automatically or manually. That does say 
> absolutely nothing about the automation level of checks in the presence of 
> "something", by the basic laws of logic.

Please, try to understand my statements in this regard in the trivial sense.


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m2ad6r9086.fsf@hanabi-air.shimizu.blume>
Pascal Costanza <········@web.de> writes:

> Static type systems cannot prove the absence of "certain erroneous
> behaviors".

Oh, come on now!  Of course they can.  It is called the "soundness
theorem" usually consisting of two parts: "preservation" (subject
reduction) and "progress".  Proving type soundness relates static and
dynamic semantics of a programming language, showing that no
well-typed program can get into a situation which the dynamic
semantics is not prepared to handle.  In popular speak, this is often
called "well-typed programs do not get stuck".

Getting into a situation that is not handled by the operational
semantics of the language is arguably "erroneous behavior".

One way of getting around this sort of erroneous behavior is to make
sure that the operational semantics handle *every* situation that the
program can get into.  That is what essentially what safe, dynamically
typed languages such as Lisp are about.
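A minimal illustration of "well-typed programs do not get stuck", sketched in Python (a toy language; all names made up): the static checker rejects exactly the terms whose evaluation could reach a case the evaluator is not prepared to handle.

```python
# Toy soundness illustration: 'typeof' is the static semantics, 'ev'
# the dynamic semantics, over terms like ("add", 1, ("less", 2, 3)).
# The soundness claim: if typeof(t) succeeds, ev(t) cannot get stuck.
def typeof(t):
    if isinstance(t, bool):
        return "bool"
    if isinstance(t, int):
        return "int"
    op, a, b = t
    if op == "add" and typeof(a) == typeof(b) == "int":
        return "int"
    if op == "less" and typeof(a) == typeof(b) == "int":
        return "bool"
    raise TypeError(f"ill-typed term: {t!r}")

def ev(t):
    if isinstance(t, (bool, int)):
        return t
    op, a, b = t
    return ev(a) + ev(b) if op == "add" else ev(a) < ev(b)
```

Here typeof(("add", 1, ("less", 2, 3))) is rejected statically, because "add" would receive a boolean; ev is never asked to handle that situation on checked terms.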

The advantage of static typing comes if you can define your own
"erroneous behaviors" and have the type system guarantee that they
will not occur.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpidfg$r56$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>Static type systems cannot prove the absence of "certain erroneous
>>behaviors".
[...]

> The advantage of static typing comes if you can define your own
> "erroneous behaviors" and have the type system guarantee that they
> will not occur.

You have made the important difference in that paragraph yourself: The 
programmer must decide what the "erroneous behaviors" are, the type 
system can't do that - it can't read your mind.

As soon as you have defined (implicitly or explicitly) what the 
"erroneous behaviors" are, a static type system can help you to 
guarantee their absence, but only when you are sure that these 
"erroneous behaviors" indeed are systematic by nature.

A type system can only help you with the "absence of certain behaviors" 
part, not with the definition of "erroneous behaviors" part.

That's why the general claim that static type systems _automatically_ 
prove the absence of _errors_ is wrong in a strict sense. It's only 
semi-automatic. It can't be otherwise.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpiks4$aq5$1@grizzly.ps.uni-sb.de>
Pascal Costanza wrote:
> 
>  From a very general perspective, all you would need to do in Common
> Lisp is to define a package that inherits from the COMMON-LISP package,
> and inside that package you shadow and/or redefine some of the Common
> Lisp definitions that stand in the way of doing advanced, 100% safe
> static type checking.
> 
> To that puzzle, add a code walker that performs the actual type checks,
> and you're basically done.

Yes, right. But then you are not programming in CL anymore. You have (after 
about 1 or 2 years) implemented a full-fledged, new language that just 
happens to be embedded in CL. It cannot interact in arbitrary ways with 
other CL code. So I don't think that qualifies, just as other arguments 
along the line of "I can always write my own interpreter" don't.

>> In most application domains much of that "certain behaviour" clearly has
>> to be classified as erroneous, though, and in that case the type system
>> tells you about "errors" wrt that domain. So such an absolute statement
>> is wrong.
> 
> How many application domains do you think exist? How many application
> domains do you actually know exist? How large is the subset of those
> that you know really well?

Oh come on. How many application domains really require to *not* treat, say, 
adding 5 to the print function as erroneous behaviour? And even if it were 
99% (although common sense tells me to rather estimate something around 
0.0000001%) your absolute statement would be false.

>>>No system can automatically check the absence of behavior. You can only
>>>get from a system what you feed it.
>> 
>> I need to feed the code. But that's the very subject of analysis.
> 
> Many static typers have already admitted that the static type systems
> require you to avoid certain programming styles and adopt another one.
> They usually quote this as an improvement by saying that this already
> improves their understanding of a problem. This might or might not be
> true - there are certainly also other people who have difficulties to
> get used to working with static type systems.
> 
> However this might be, you have to adapt to the type system, and this is
> also a kind of "feeding".

In that sense, dynamic typing can also only detect what I feed it, 
because I have to adapt to the lack of static invariants by programming 
more defensively. So where's the meat in your argument?

> Yes, I know that Lisp also requires you to adapt your programming style
> in some other ways, and this may also be difficult to adapt to. But
> that's besides the point. This amounts simply to the fact that there is
> no free lunch.

But nobody said there was - in fact I have repeatedly stated quite the 
opposite.

>> I don't need to put "instructions" in my code (although I can if I want,
>> in the form of type declarations).
> 
> But you need to avoid some.

Pardon?

>>>And the fact that advanced static type systems are based on inference is
>>>irrelevant here. The compiler needs _something_ - it doesn't matter
>>>whether it's "only" the code or not.
>> 
>> Please, what kind of logic is that? Nobody can check anything if there
>> isn't anything to check, whether automatically or manually. That does say
>> absolutely nothing about the automation level of checks in the presence
>> of "something", by the basic laws of logic.
> 
> Please, try to understand my statements in this regard in the trivial
> sense.

Sorry, but what you wrote was a logically invalid implication to support 
your (likewise invalid) claim that *nothing* about type checking was 
automatic.

So can we agree that type *checking* is mostly automatic, although you may 
have to adapt your style?

        - Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpimbd$olu$1@f1node01.rhrz.uni-bonn.de>
Andreas Rossberg wrote:

> Pascal Costanza wrote:
> 
>> From a very general perspective, all you would need to do in Common
>>Lisp is to define a package that inherits from the COMMON-LISP package,
>>and inside that package you shadow and/or redefine some of the Common
>>Lisp definitions that stand in the way of doing advanced, 100% safe
>>static type checking.
>>
>>To that puzzle, add a code walker that performs the actual type checks,
>>and you're basically done.
> 
> Yes, right. But then you are not programming in CL anymore. You have (after 
> about 1 or 2 years) implemented a full-fledged, new language that just 
> happens to be embedded in CL. It cannot interact in arbitrary ways with 
> other CL code. So I don't think that qualifies, just as other arguments 
> along the line of "I can always write my own interpreter" don't.

I haven't claimed that. To the contrary, I have made that very clear in 
my posting. I have also given some arguments why you still might want 
such an approach.

>>>In most application domains much of that "certain behaviour" clearly has
>>>to be classified as erroneous, though, and in that case the type system
>>>tells you about "errors" wrt that domain. So such an absolute statement
>>>is wrong.
>>
>>How many application domains do you think exist? How many application
>>domains do you actually know exist? How large is the subset of those
>>that you know really well?
> 
> Oh come on. How many application domains really require to *not* treat, say, 
> adding 5 to the print function as erroneous behaviour? And even if it were 
> 99% (although common sense tells me to rather estimate something around 
> 0.0000001%) your absolute statement would be false.

You are now only trying to ridicule my statements.

As Joe Marshall has put it very nicely some time ago, the only thing I 
know for sure about most applications is that I don't know them.

If you have a better empirical basis for your claims, then please cite 
your sources. If it is only too hard for you to _imagine_ why on earth 
anyone would want the things that I propose, then it might as well be 
that you are just missing something.

>>>>No system can automatically check the absence of behavior. You can only
>>>>get from a system what you feed it.
>>>
>>>I need to feed the code. But that's the very subject of analysis.
>>
>>Many static typers have already admitted that the static type systems
>>require you to avoid certain programming styles and adopt another one.
>>They usually quote this as an improvement by saying that this already
>>improves their understanding of a problem. This might or might not be
>>true - there are certainly also other people who have difficulties to
>>get used to working with static type systems.
>>
>>However this might be, you have to adapt to the type system, and this is
>>also a kind of "feeding".
> 
> In that sense, dynamic typing can also only detect what I feed it either, 
> because I have to adapt to the lack of static invariants by programming 
> more defensively. So where's the meat in your argument?

There is probably none. I am only disputing the arguments brought 
forward by some static typers that there is some.

>>Yes, I know that Lisp also requires you to adapt your programming style
>>in some other ways, and this may also be difficult to adapt to. But
>>that's besides the point. This amounts simply to the fact that there is
>>no free lunch.
> 
> But nobody said there was - in fact I have repeatedly stated quite the 
> opposite.

Then we just agree in this regard.

>>>I don't need to put "instructions" in my code (although I can if I want,
>>>in the form of type declarations).
>>
>>But you need to avoid some.
> 
> Pardon?

If you need to introduce dynamic checking into your otherwise statically 
typed code, those portions statically type check only in a trivial 
sense. The interesting "stuff" is executed at runtime.

If you want to have certain erroneous behaviors proved absent 
statically, you have to avoid using manual dynamic checks.

This really only boils down to the question of what the default is in
your language of choice.
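[The point about defaults can be illustrated with a small sketch -- TypeScript, chosen here purely for concreteness, with an invented function: the manually written dynamic check type-checks only trivially, and the interesting test runs at runtime.]

```typescript
// Illustrative sketch, not from the thread: a manual dynamic check in
// a statically typed setting.  Statically, the parameter has the
// trivial type `unknown`; whether a call succeeds is decided only at
// runtime.
function asNumber(x: unknown): number {
  if (typeof x === "number") return x;   // the runtime check
  throw new TypeError("not a number");
}

console.log(asNumber(42));      // succeeds at runtime
// asNumber("hi") also compiles, but would throw at runtime.
```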

>>Please, try to understand my statements in this regard in the trivial
>>sense.
> 
> Sorry, but what you wrote was a logically invalid implication to support 
> your (likewise invalid) claim that *nothing* about type checking was 
> automatic.

I haven't said that. Maybe I have worded it ambiguously - if that's the 
case I am sorry.

> So can we agree that type *checking* is mostly automatic, although you may 
> have to adapt your style?

I don't think it's "mostly" automatic. It's partially automatic.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <a14q81-fu7.ln1@ID-7776.user.dfncis.de>
Nikodemus Siivola <······@random-state.net> wrote:
> In comp.lang.lisp Matthias Blume <····@my.address.elsewhere> wrote:

> From this discussion I was under the impression that several
> statically typed functional languages provide "Any" type (or
> equivalent). 

No, they don't. But what you can do is to declare a datatype that
captures s-expressions (or equivalent), including CMUCL-style
types as tags, if one wants. This datatype would have a very specific
type in the "real" static type system.
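[Dirk's construction can be made concrete with a sketch -- TypeScript, chosen only for illustration, with invented tag names: the whole embedded s-expression datatype is one very specific type in the host type system, and the Lisp-style distinctions live in runtime tags.]

```typescript
// Illustrative only: an s-expression datatype as a tagged union.
// In the host type system every embedded value has the single static
// type SExpr; the Lisp-style type distinctions are runtime tags.
type SExpr =
  | { tag: "num"; value: number }
  | { tag: "str"; value: string }
  | { tag: "sym"; name: string }
  | { tag: "cons"; car: SExpr; cdr: SExpr }
  | { tag: "nil" };

// The one-element list (1) encoded as a cons cell.
const list: SExpr = {
  tag: "cons",
  car: { tag: "num", value: 1 },
  cdr: { tag: "nil" },
};
console.log(list.tag); // "cons"
```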

> The existence of these union-of-all-types-types was offered as an
> example of how these languages provide the same level of convenience
> as dynamically typed.

It has about the same level of convenience as a restricted statically
typed sublanguage of Lisp would have. In other words, it doesn't
interact well with the rest of the language.

So it's a proof of concept (in principle, you can emulate one language
in the other, no matter which way round). It's not really practical.

> Is that not true, or how is that different from the Common Lisp type
> "T"?

They are completely different concepts. Or at least different enough
that one shouldn't compare them directly.

- Dirk
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpfe1s$as$1@grizzly.ps.uni-sb.de>
Dirk Thierbach wrote:
> 
>> From this discussion I was under the impression that several
>> statically typed functional languages provide "Any" type (or
>> equivalent).
> 
> No, they don't.

Yes, they do. See some of the postings discussing type Dynamic. It 
essentially is an infinite sum tagged by types.

>> Is that not true, or how is that different from the Common Lisp type
>> "T"?

It is different because it is isolated from other types in the type system, 
as I described elsewhere. You cannot accidentally mix dynamics with other 
values. All guarantees of static typing are maintained.

        - Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpijj1$8lv$1@grizzly.ps.uni-sb.de>
Dirk Thierbach wrote:
> 
> (And without typeclasses, I think Dynamic would be pretty difficult.
> Does OCaml have something like this? I cannot remember at the moment.)

Dynamic has nothing to do with type classes. But you can use type classes to 
get something close to a simple form of type Dynamic.

        - Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Feuer
Subject: Re: More static type fun.
Date: 
Message-ID: <3FBCE445.7149A7BB@his.com>
Andreas Rossberg wrote:
> 
> Dirk Thierbach wrote:
> >
> > (And without typeclasses, I think Dynamic would be pretty difficult.
> > Does OCaml have something like this? I cannot remember at the moment.)
> 
> Dynamic has nothing to do with type classes. But you can use type classes to
> get something close to a simple form of type Dynamic.

Dynamic has a problem:  it is a nasty hack.

David
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpj4fm$me5$1@grizzly.ps.uni-sb.de>
Feuer <·····@his.com> wrote:
>
> > Dynamic has nothing to do with type classes. But you can use type
classes to
> > get something close to a simple form of type Dynamic.
>
> Dynamic has a problem:  it is a nasty hack.

You mean the Dynamic in Haskell? True.

    - Andreas
From: Feuer
Subject: Re: More static type fun.
Date: 
Message-ID: <3FBDC195.44D82CBD@his.com>
Andreas Rossberg wrote:
> 
> Feuer <·····@his.com> wrote:
> > Dynamic has a problem:  it is a nasty hack.
> 
> You mean the Dynamic in Haskell? True.

Yes.  That one.  And it will be a nasty hack for as long as it exists,
because it obviously Doesn't Belong in the language.  Kind of like
some of the wizardly macrology of Al* Petrofsky and "Oleg" on
c.l.scheme.

David
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fc18a7a$1@news.unimelb.edu.au>
Feuer <·····@his.com> writes:

>Andreas Rossberg wrote:
>> 
>> Feuer <·····@his.com> wrote:
>> > Dynamic has a problem:  it is a nasty hack.
>> 
>> You mean the Dynamic in Haskell? True.
>
>Yes.  That one.  And it will be a nasty hack for as long as it exists,
>because it obviously Doesn't Belong in the language.

Well, I suppose you're entitled to your opinion.  But in my opinion,
the overall _concept_ of Dynamic is fine, it's just the implementations
of this concept in Haskell that suck.

It is certainly NOT obvious to me why something like Dynamic doesn't belong
in Haskell.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Dirk Thierbach
Subject: Dynamic (was: More static type fun.)
Date: 
Message-ID: <mfe291-fi6.ln1@ID-7776.user.dfncis.de>
Andreas Rossberg <········@ps.uni-sb.de> wrote:
> Dirk Thierbach wrote:
>> 
>> (And without typeclasses, I think Dynamic would be pretty difficult.
>> Does OCaml have something like this? I cannot remember at the moment.)

> Dynamic has nothing to do with type classes. But you can use type
> classes to get something close to a simple form of type Dynamic.

Isn't Dynamic implemented with class Typeable, or has this changed?
Is this now automatically inferred by the compiler? (I haven't
updated ghc for a while).

- Dirk
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fbcedaf$1@news.unimelb.edu.au>
Dirk Thierbach <··········@gmx.de> writes:

>without typeclasses, I think Dynamic would be pretty difficult. 

Mercury had "univ" (Mercury's equivalent to Dynamic) long before it had
type classes.

Really what you need for it is some kind of RTTI.  Type classes are
just one way of implementing that RTTI.  But you may want RTTI for other
reasons (e.g. debugging, serialization, GC) anyway.

>Does OCaml have something like this? I cannot remember at the moment.)

I don't think so.  However, apparently there are some alternatives which
can be used instead.  See [1] and [2] for more details.

[1] Xavier Leroy, email to the caml mailing list.
    <http://caml.inria.fr/archives/200211/msg00057.html>.

[2] Stephanie Weirich. Type-safe cast: Functional pearl.
    In Proc. ICFP, Montreal, pages 58--67, 2000.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fbc8c00$1@news.unimelb.edu.au>
Dirk Thierbach <··········@gmx.de> writes:

>Nikodemus Siivola <······@random-state.net> wrote:
>> In comp.lang.lisp Matthias Blume <····@my.address.elsewhere> wrote:
>
>> From this discussion I was under the impression that several
>> statically typed functional languages provide "Any" type (or
>> equivalent). 
>
>No, they don't.

Not exactly equivalent, but to a first approximation, yes, they do.
Maybe you are just not aware of it?

>But what you can do is to declare a datatype that
>captures s-expressions (or equivalent), including CMUCL-style
>types as tags, if one wants. This datatype would have a very specific
>type in the "real" static type system.

The types used for dynamic typing in statically typed languages,
i.e. "univ" in Mercury, "Dynamic" in Haskell, etc., are generally not
like s-expressions at all.  They just package together a representation
of a type and an object of that type.

It is _also_ possible to declare a datatype that captures s-expressions
or equivalent.  The Mercury standard library type "term", which represents
Prolog/Mercury terms, uses that approach.  This generally has a quite
different purpose and is used for quite different things than "univ".
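[A rough sketch of the "univ"/Dynamic shape Fergus describes -- TypeScript; the names and the string-based type representation are illustrative simplifications, not Mercury's or Haskell's actual machinery.]

```typescript
// Illustrative sketch: a "univ"-like value is just a pair of a runtime
// type representation and a value -- nothing like an s-expression.
interface Univ {
  typeRep: string;   // stand-in for a real runtime type representation
  value: unknown;
}

function toUniv<T>(typeRep: string, value: T): Univ {
  return { typeRep, value };
}

// The projection back out can fail, so it must be checked.
function fromUniv<T>(u: Univ, typeRep: string): T | undefined {
  return u.typeRep === typeRep ? (u.value as T) : undefined;
}

const u = toUniv("number", 3);
console.log(fromUniv<number>(u, "number")); // 3
console.log(fromUniv<string>(u, "string")); // undefined
```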

>> The existence of these union-of-all-types-types was offered as an
>> example of how these languages provide the same level of convenience
>> as dynamically typed.
>
>It has about the same level of convenience as a restricted statically
>typed sublanguage of Lisp would have. In other words, it doesn't
>interact well with the rest of the language.

That's true for Mercury's "term" data type, but it is NOT true for
Mercury's "univ" data type.  

>So it's a proof of concept (in principle, you can emulate one language
>in the other, no matter which way round). It's not really practical.

If, by that, you mean that it's not practical to use "term" as
a substitute for "univ", I entirely agree.  If you mean that "univ"
can't be used as a substitute for dynamic typing in cases where you need
dynamic typing, then I entirely disagree.

I suggest you go read up some more on Haskell's Dynamic type,
or Mercury's univ type, or the java.lang.Object type in Java,
or the System.Object type in C#.  These are entirely practical.

Of course most uses of Object in C#/Java are purely there to compensate
for the lack of parametric polymorphism in the type system.  In Haskell
and Mercury, the use of univ or Dynamic is much rarer.  But in those
very rare cases when you do need it, using it is really not at all
difficult.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fbc8881$1@news.unimelb.edu.au>
Nikodemus Siivola <······@random-state.net> writes:

>In comp.lang.lisp Matthias Blume <····@my.address.elsewhere> wrote:
>
>> The original claim was quite explicitly talking about the fact that
>> Lisp's "types" are subsets of the set of all values.
>
>I've clearly misunderstood something.
>
>From this discussion I was under the impression that several
>statically typed functional languages provide "Any" type (or
>equivalent). 

Yes.  Well, _almost_ equivalent.

Any type can be converted to the "Any" type.  However, such a conversion
will change the representation; it will add a type tag.  In many of these
languages, such a conversion must be explicit, though there are a few
(e.g. C#) in which it is implicit.  Generally the conversion _from_ such
a type is always explicit though, since it could fail.

This is different to the situation in dynamically typed languages
where every value already is an instance of the universal type,
without needing any conversion, and where a value of the universal
type can be used in a place where a more specific type is expected
without any explicit downcast.
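[The asymmetry can be sketched with TypeScript's `unknown`, which plays roughly the role of such an "Any" type here -- illustrative only.]

```typescript
// Illustrative sketch: conversion TO the universal type is implicit,
// conversion FROM it must be explicit because it can fail.
const n: number = 7;
const anyVal: unknown = n;   // implicit upcast to the "Any"-like type

// const m: number = anyVal; // rejected statically: no implicit downcast

// The downcast must be written out and checked:
const m: number = typeof anyVal === "number" ? anyVal : NaN;
console.log(m); // 7
```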

>The existence of these union-of-all-types-types was offered as an
>example of how these languages provide the same level of convenience
>as dynamically typed.
>
>Is that not true,

Convenience is in the eye of the programmer.  There's certainly a
difference between static typing with a union-of-all-types-type
and dynamic typing.  But which is more convenient?

The static typing camp would generally consider the explicitness of
these conversions to be a feature, not a bug, especially in the case of
the downcasts.  But dynamic typing fans would probably not like it.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Bourguignon
Subject: Re: More static type fun.
Date: 
Message-ID: <87u14ynxol.fsf@thalassa.informatimago.com>
Fergus Henderson <···@cs.mu.oz.au> writes:
> Nikodemus Siivola <······@random-state.net> writes:
> >From this discussion I was under the impression that several
> >statically typed functional languages provide "Any" type (or
> >equivalent). 
> 
> Yes.  Well, _almost_ equivalent.
> 
> Any type can be converted to the "Any" type.  However, such a conversion
> will change the representation; it will add a type tag.  

Aren't you aware that even loading a byte from RAM to a register DOES
change the representation of the encoded number?

The more obvious case is when the data bus is NOTed. But I could go to
such lengths as to say that the representation of bits in a Dynamic
RAM is not the same as the representation of bits in a Static RAM
register.


So what importance may that have if something changes under the
considered abstraction layer?


-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fc1e66c@news.unimelb.edu.au>
Pascal Bourguignon <····@thalassa.informatimago.com> writes:

>Fergus Henderson <···@cs.mu.oz.au> writes:
>> Nikodemus Siivola <······@random-state.net> writes:
>> >From this discussion I was under the impression that several
>> >statically typed functional languages provide "Any" type (or
>> >equivalent). 
>> 
>> Yes.  Well, _almost_ equivalent.
>> 
>> Any type can be converted to the "Any" type.  However, such a conversion
>> will change the representation; it will add a type tag.  
>
>Aren't you aware that even loading a byte from RAM to a register DOES
>change the representation of the encoded number?

That's not very important, since it doesn't affect the performance.

>So what importance may that have if something changes under the
>considered abstraction layer?

If it truly does not affect the considered abstraction layer, it is
not important.  But the operational semantics, and in particular the
performance model, are an important part of the programming language
abstraction layer, and data representation affects those.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Lex Spoon
Subject: Re: More static type fun.
Date: 
Message-ID: <m3ekvpqazf.fsf@logrus.dnsalias.net>
>>So what importance may that have if something changes under the
>>considered abstraction layer?
>
> If it truly does not affect the considered abstraction layer, it is
> not important.  But the operational semantics, and in particular the
> performance model, are an important part of the programming language
> abstraction layer, and data representation affects those.

There was an interview with Robin Milner lately that got stuck on this
very point for a little while.  Milner really wanted to ignore
performance issues, because he thought it was hard enough already just
to describe what a program does.  The interviewer actually argued with
him over it, taking the stance that "what a program does" includes
performance issues.

Anyway, it is interesting how ingrained these varying ideas get into
different people.


  http://nick.dcs.qmul.ac.uk/~martinb/interviews/milner/


Search for "stack" on the page and then go backwards a little.


Lex
From: Feuer
Subject: Re: More static type fun.
Date: 
Message-ID: <3FBB0F2C.883BAEA@his.com>
Don Geddis wrote:

> That's meaningless.  If a function is defined on strings, and you pass it an
> integer, you'll get an error in Lisp.  This is what ordinary programmers mean
> by data type.  You're creating definitions that are almost deliberately
> designed to confuse.  There is no interesting sense in which Lisp only has
> a "single" data type.

Ahem.  Ordinary Lisp programmers maybe.  And there _is_ an
interesting sense in which Lisp only has a single datatype.  It
is not interesting _within_ Lisp, but it is interesting from a
language-comparison perspective.  Because Lisp is unityped and
safe, anything can be passed to any function without causing the
Lisp system to crash.

David
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpakqp$9fd$1@grizzly.ps.uni-sb.de>
Pascal Costanza wrote:
> 
>>>When programming people (in general) talk about types, they're talking
>>>about data representation within computer programs.
>> 
>> For that sense of "type", it would be appropriate to say that Lisp has
>> only one type.
> 
> When skipping through the ANSI Common Lisp standard, I see classes,
> structures, symbols, various number types, including bits and bytes and
> intervals, characters, including support for character encodings, conses
> (lists), arrays/strings, hash tables, filenames, files, streams, and a
> type specification sublanguage.

Fergus was pointing at the fact that values of all these things are 
represented by tagging, i.e. the low-level representation type is actually 
a single union type.

        - Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Christopher C. Stacy
Subject: Re: More static type fun.
Date: 
Message-ID: <u1xs7vvld.fsf@dtpq.com>
>>>>> On Mon, 17 Nov 2003 15:09:29 +0100, Andreas Rossberg (" Andreas") writes:

  Andreas> Pascal Costanza wrote:
 >> 
 >>>> When programming people (in general) talk about types, they're talking
 >>>> about data representation within computer programs.
 >>> 
 >>> For that sense of "type", it would be appropriate to say that Lisp has
 >>> only one type.
 >> 
 >> When skipping through the ANSI Common Lisp standard, I see classes,
 >> structures, symbols, various number types, including bits and bytes and
 >> intervals, characters, including support for character encodings, conses
 >> (lists), arrays/strings, hash tables, filenames, files, streams, and a
 >> type specification sublanguage.

  Andreas> Fergus was pointing at the fact that values of all these
  Andreas> things are represented by tagging, i.e. the low-level
  Andreas> representation type is actually a single union type.

I am not sure what you mean by this in this context.  

Lisp code can certainly distinguish data types from one another,
which (aside from type checking) is used for computing method
dispatch, and it has operators such as TYPEP and SUBTYPEP.

My jargonless reading would suggest that your statement is meaningless,
since at the lowest level, all the languages represent data as "bits".
But in Lisp you can't see those bits, nor coerce bits from one type
to another, because they are constrained to have distinct types.

Could you please explain what you mean by, "low-level representation
type is actually a single union type" and why you think it is important?

Or is this just more trolling of the form, "We define a type system
to mean a statically typed system, which Lisp is not, therefore Lisp
doesn't have a type system"?
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpat5e$4u7$1@grizzly.ps.uni-sb.de>
Christopher C. Stacy wrote:
> 
>   Andreas> Fergus was pointing at the fact that values of all these
>   Andreas> things are represented by tagging, i.e. the low-level
>   Andreas> representation type is actually a single union type.
> 
> I am not sure what you mean by this in this context.

Fergus explained it himself.

> Could you please explain what you mean by, "low-level representation
> type is actually a single union type" and why you think it is important?

I didn't say it was. Nor do I think it is. It's a mere technicality.

> Or is this just more trolling of the form, "We define a type system
> to mean a statically typed system, which Lisp is not, therefore Lisp
> doesn't have a type system"?

Think what you want, call me names, start a new science, I'm out of this 
stupid discussion where even the attempt to clarify terminology is taken as 
offense.

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: David Golden
Subject: Re: More static type fun.
Date: 
Message-ID: <Gd7ub.945$nm6.3190@news.indigo.ie>
> Think what you want, call me names, start a new science, I'm out of this
> stupid discussion where even the attempt to clarify terminology is taken
> as offense.
> 

Well, if, say, Microsoft chose to "clarify" terminology whereby Open Source
meant what Microsoft wanted it to mean instead of what Open Source
developers use it to mean, you can probably see why people might object...
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m17k1ykjf7.fsf@tti5.uchicago.edu>
David Golden <············@oceanfree.net> writes:

> > Think what you want, call me names, start a new science, I'm out of this
> > stupid discussion where even the attempt to clarify terminology is taken
> > as offense.
> > 
> 
> Well, if, say, Microsoft chose to "clarify" terminology whereby Open Source
> meant what Microsoft wanted it to mean instead of what Open Source
> developers use it to mean, you can probably see why people might object...

Indeed.  That's why we all complain when Lispers say that Lisp has a
type system.  It's as if Microsoft claimed that its software is Open
Source.

Yes, Lisp has *types* (in the sense of "sets of values" -- as does
every other programming language for that matter), but it does not
have a *type system*.

By the way, I have no idea why people get offended when they are told
their favorite language does not have a type system.  Is having one
such a good thing after all, in your opinion?
From: Christopher C. Stacy
Subject: Re: More static type fun.
Date: 
Message-ID: <uisli52xk.fsf@dtpq.com>
>>>>> On 17 Nov 2003 11:13:16 -0600, Matthias Blume ("Matthias") writes:

 Matthias> Yes, Lisp has *types* (in the sense of "sets of values" -- as does
 Matthias> every other programming language for that matter), but it does not
 Matthias> have a *type system*.

So, this conversation is over, then.
Too bad you had to give up in this fashion.

 Matthias> By the way, I have no idea why people get offended when they are told
 Matthias> their favorite language does not have a type system.  Is having one
 Matthias> such a good thing after all, in your opinion?

There is more than one community that claims the use of the phrase "type system".
The people in the Lisp community believe that there is more than one
sensible usage of that phrase, while you apparently are intent on
restricting its meaning in such a way that it only supports your arguments.
That's what's offensive.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1y8uej4cd.fsf@tti5.uchicago.edu>
······@dtpq.com (Christopher C. Stacy) writes:

> There is more than one community that claims the use of the phrase "type system".
> The people in the Lisp community believe that there is more than one
> sensible usage of that phrase, while you apparently are intent on
> restricting its meaning in such a way that it only supports your arguments.

Give me another "sensible" usage.  So far I have seen only claims
about their existence, nothing else.
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <raffaelcavallaro-CEAD0F.00200118112003@netnews.attbi.com>
Here's a quote. In article <··············@tti5.uchicago.edu>, you,  
(Matthias Blume) wrote:

> Lisp has sets of values (as does every other programming language).
> By abuse of terminology, some of these sets end up being called
> "types".

The abuse of terminology is yours, since these entities have never been 
called "sets of values" in lisp. There is no "sets-of-values-error," nor 
"sets-of-values-of," nor "sets-of-values-p," nor "check-sets-of-values," 
nor "sets-of-values specifier" in the ANSI Common Lisp standard. No one 
has ever called Lisp a "dynamically sets-of-valued language." They have 
been called "types," and lisp and smalltalk have been called 
"dynamically typed languages" for decades.

You are attempting to appropriate the term "type" so that it only 
applies to statically typed languages, possibly so that it only applies 
to HM style type systems. It is this attempted appropriation of a 
commonly used programming term of long standing that others find both 
offensive and obtuse. If you feel the need to coin some new term with 
the exclusive meaning of "types in a HM style type system," feel free to 
do so. But do not appropriate a term that has been used for decades in 
discussions of programming languages, in both academic publications and in 
conversation among programmers.

This is precisely the same sort of narrow redefinition of terms you 
attempted in the discussion of abstraction, where you tried to redefine 
"abstraction" so narrowly that it only applied to your favorite 
language(s). In general, one can't take a term of long standing usage, 
attempt to narrowly restrict its meaning, and expect those who have used 
the term for decades to submit to a gratuitous amputation of an 
essential part of their vocabulary.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m2n0augqjn.fsf@hanabi-air.shimizu.blume>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:

> Here's a quote. In article <··············@tti5.uchicago.edu>, you,  
> (Matthias Blume) wrote:

Meta question: Which part of this quote supports the claim that I said
that 'any possible position that the other side has is, BY DEFINITION,
"nonsensible"'?

> > Lisp has sets of values (as does every other programming language).
> > By abuse of terminology, some of these sets end up being called
> > "types".
> 
> The abuse of terminology is yours, since these entities have never been 
> called "sets of values" in lisp. There is no "sets-of-values-error," nor 
> "sets-of-values-of," nor "sets-of-values-p," nor "check-sets-of-values," 
> nor "sets-of-values specifier" in the ANSI Common Lisp standard. No one 
> has ever called Lisp a "dynamically sets-of-valued language." They have 
> been called "types," and lisp and smalltalk have been called 
> "dynamically typed languages" for decades.

So you think that abuse that lasts long enough or even from the
beginning is then no longer abuse?

> You are attempting to appropriate the term "type" [...]

No.  First of all, the term "type" is much older than all of
programming, including the dynamically typed variety.  So if anyone
has done any "appropriating" here, then it surely wasn't me.

Second, I did not even mean "abuse of terminology" as such a bad thing
in and of itself.  (I often abuse terminology or notation myself if it
helps make a point clearer.)  I can even agree to call Lisp's types
types.  But that does not mean that Lisp has a /type system/ (except
in the most trivial of senses -- see below).

Let me say this more clearly: I am not interested in this silly war of
words of whether one side or the other hijacked a particular word.  Go
ahead and call whatever Lisp has a "type system".  It does not matter.
Names do not matter.  The fact, however, is that Lisp does not have
the "thing" that, e.g, Pierce refers to when he talks about "type
systems".  *That* is what I and a few others here have been trying to
say, and I hope it is not your intention to dispute that.

What you call a "type system" and a "type" is something that you find
in *every* general-purpose programming language.  I can always single
out certain sets of values (which may or may not have decision
procedures -- although in the dynamically typed case they most of the
time do (*)) and call them "types".  I can always single out sufficiently
many of them and call that a "type system".  Nothing is won that way,
which is why I find the discussion of this notion of "types" and "type
systems" extremely unfruitful and boring.
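For concreteness, "singling out a set of values" with a decision procedure can be sketched in TypeScript terms (the names below are invented for the illustration): a user-defined type guard is exactly such a decision procedure, run against a value.

```typescript
// Illustrative only: single out a set of values (non-empty strings)
// and treat membership as something decided by running code.
// The guard is the decision procedure for that set.
function isNonEmptyString(v: unknown): v is string {
  return typeof v === "string" && v.length > 0;
}

console.log(isNonEmptyString("lisp"));  // true
console.log(isNonEmptyString(""));      // false
console.log(isNonEmptyString(42));      // false
```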

Now, if you start paying attention to which sets you single out so
that they are in certain useful relationships to each other -- which
in turn makes it possible to reason about the values that program
variables can or cannot assume and do so without actually executing
the program, then it gets interesting.  Of course, that is one of the
things that "type systems" (in the standard sense) are all about...

Matthias

(*) See the work by Findler and Felleisen on "contract types".  These
are runtime-checked types which do not (in general) have decision
procedures.  Incidentally, they are also not explicitly given as sets
of values.  (In fact, it turns out that due to certain corner cases a
sound interpretation of contract types as sets of values is somewhat
subtle.)
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <d6bqt4li.fsf@comcast.net>
Matthias Blume <····@my.address.elsewhere> writes:

> Names do not matter.  The fact, however, is that Lisp does not have
> the "thing" that, e.g, Pierce refers to when he talks about "type
> systems".  *That* is what I and a few others here have been trying to
> say, and I hope it is not your intention to dispute that.

Pierce is a static typist, and this is a free country.  He can call it
what he wants.  Doesn't have much to do with lisp, though.

> Now, if you start paying attention to which sets you single out so
> that they are in certain useful relationships to each other -- 

ok...

> which in turn makes it possible to reason about the values that program
> variables can or cannot assume and do so without actually executing
> the program, then it gets interesting.  

So you are only interested in static analysis.  Fine.

-- 
~jrm
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m21xs54u3w.fsf@hanabi-air.shimizu.blume>
Joe Marshall <·············@comcast.net> writes:

> Pierce is a static typist, and this is a free country.  He can call it
> what he wants.

I'm sure he is glad to have your permission.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpcprm$s0$1@newsreader3.netcologne.de>
Matthias Blume wrote:

> Now, if you start paying attention to which sets you single out so
> that they are in certain useful relationships to each other -- which
> in turn makes it possible to reason about the values that program
> variables can or cannot assume and do so without actually executing
> the program, then it gets interesting.  Of course, that is one of the
> things that "type systems" (in the standard sense) are all about...

a) The compiler is also a program that gets executed, so we're back to 
the time when reasoning about types occurs.

b) What's wrong, for example, with SUBTYPEP, part of ANSI Common Lisp? 
It allows you to determine the relationship between two types without 
actually looking at the values they might or might not contain.

Admittedly this is a pretty weak form of reasoning, but you can't 
rightfully say that Common Lisp doesn't have a type system at all.

(Yes, HM type systems are probably more expressive in this regard.)
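A rough analog of the SUBTYPEP idea exists outside Lisp too; here is a TypeScript sketch (the name SubtypeP is invented here by analogy), where the compiler resolves the relationship between two types without ever looking at values:

```typescript
// Hypothetical analog of CL's SUBTYPEP: a compile-time query
// reporting whether one type is a subtype of another.
type SubtypeP<A, B> = [A] extends [B] ? true : false;

// Resolved statically by the type checker:
const intIsNumber: SubtypeP<42, number> = true;      // literal 42 <: number
const numberIsNotString: SubtypeP<number, string> = false;

console.log(intIsNumber, numberIsNotString);
```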


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Mario S. Mommer
Subject: Re: More static type fun.
Date: 
Message-ID: <fzr806x7tt.fsf@cupid.igpm.rwth-aachen.de>
Matthias Blume <····@my.address.elsewhere> writes:
> Now, if you start paying attention to which sets you single out so
> that they are in certain useful relationships to each other -- which
> in turn makes it possible to reason about the values that program
> variables can or cannot assume and do so without actually executing
> the program, then it gets interesting.

Yes, but you can only do so by restricting your programming
language. You can go on yapping madly about it, but the truth is: /you
have restricted your programming language/. It can do less than the
same language without that restriction [1]. You assert that *for you*
the benefits of this mutilation of functionality outweigh the damage,
which is alright, but please accept that others find this position
bizarre.

> Of course, that is one of the things that "type systems" (in the
> standard sense) are all about...

It is not the standard sense. It is the static typing community's
sense. Please stop trying to cover the sun with your thumb.

---

[1] For instance, executing a program with a type error.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1he118vtz.fsf@tti5.uchicago.edu>
Mario S. Mommer <········@yahoo.com> writes:

> Yes, but you can only do so by restricting your programming
> language. You can go on yapping madly about it, but the truth is: /you
> have restricted your programming language/. It can do less than the
> same language without that restriction [1]. You assert that *for you*
> the benefits of this mutilation of functionality outweigh the damage,
> which is alright, but please accept that others find this position
> bizarre.

"yapping", "madly", "mutilation", "damage"

I find your language bizarre.  Now, please, someone call us "nazis" so
that this "conversation" be over.
From: Mario S. Mommer
Subject: Re: More static type fun.
Date: 
Message-ID: <fz4qx1sjgu.fsf@cupid.igpm.rwth-aachen.de>
Matthias Blume <····@my.address.elsewhere> writes:
> I find your language bizarre.  Now, please, someone call us "nazis" so
> that this "conversation" be over.

Sidestepping the issue again? Once you are cornered, you cannot but
resort to this "type" of exit, can you?
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m14qx18uj1.fsf@tti5.uchicago.edu>
Mario S. Mommer <········@yahoo.com> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> > I find your language bizarre.  Now, please, someone call us "nazis" so
> > that this "conversation" be over.
> 
> Sidestepping the issue again? Once you are cornered, you cannot but
> resort to this "type" of exit, can't you?

You cornered me?  Sorry, must have missed that.

<g>
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <raffaelcavallaro-8E6D42.19381918112003@netnews.attbi.com>
In article <··············@hanabi-air.shimizu.blume>,
 Matthias Blume <····@my.address.elsewhere> wrote:

> No.  First of all, the term "type" is much older than all of
> programming, including the dynamically typed variety.  So if anyone
> has done any "appropriating" here, then it surely wasn't me.

Earth to Matthias. This is comp.lang.lisp/comp.lang.functional. You were 
discussing lisp, a programming language. We are among programmers.  Among 
programmers, in verbal discussion, in academic publications, and even 
in formal standardization body standards, the term "type" has been used 
for years and years with a meaning that is nothing like the narrow one 
you wish to shoehorn the term into.

Reaching back into the history of mathematics and claiming we should all 
take that definition for the term "type" as the one to be used when 
discussing programming languages is a non-starter, if only because so 
many published standards would have to be rewritten to conform to the 
Blume-approved terminology.
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <raffaelcavallaro-383580.23361018112003@netnews.attbi.com>
In article <··············@hanabi-air.shimizu.blume>,
 Matthias Blume <····@my.address.elsewhere> wrote:

> Let me say this more clearly: I am not interested in this silly war of
> words of whether one side or the other hijacked a particular word.  Go
> ahead and call whatever Lisp has a "type system".  It does not matter.
> Names do not matter.

Names matter a great deal. If you use them in narrowly exclusive, 
idiosyncratic ways, you make what to others seem like intentionally 
obtuse statements.

>  The fact, however, is that Lisp does not have
> the "thing" that, e.g, Pierce refers to when he talks about "type
> systems".  *That* is what I and a few others here have been trying to
> say, and I hope it is not your intention to dispute that.

"The 'thing' that, e.g., Pierce refers to when he talks about 'type 
systems' " is using type inference and type constraints to prove program 
type safety at *compile time*. No one disputes that the common lisp type 
system does not do this. But that is because lisp never tried to 
achieve the dubious goal of using types to prove program type safety at 
compile time.

However, even Pierce himself has written that dynamic typing is 
sometimes unavoidable, and *essential to type safety*:
"However, even in statically typed languages, there is often the need to 
deal with data whose type cannot be determined at compile time.  For 
example, full static typechecking of programs that exchange data with 
other programs or access persistent data is in general not possible.  A 
certain amount of dynamic checking must be performed in order to 
preserve type safety." (from "Dynamic Typing in a Statically Typed 
Language" available at 
<http://www.cis.upenn.edu/~bcpierce/papers/dynamic.ps>)

Note that any program that allows significant user interaction 
constitutes a "program that exchanges data with other programs" (i.e., 
those programs of the host system that translate user inputs into 
machine usable form). In the modern era, this is a very large proportion 
of software, if not the majority. This is why Wegner, Milner, and others 
feel that  "a theory of concurrency and interaction requires a new 
conceptual framework, not just a refinement of what we find natural for 
sequential [algorithmic] computing." (from "Elements of Interaction" 
(Turing Award lecture), Communications of the ACM (36:1), 1993, cited in 
"Computation Beyond Turing Machines" available at: 
<http://www.cse.uconn.edu/~dqg/papers/cacm02.rtf>)

The reality of interactive computing is that in order to guarantee type 
safety, we will have to do dynamic type checking. Even Pierce himself 
knows this. (BTW, Cardelli is a co-author of that paper). The only 
question then becomes, how useful is static type checking, if we are 
forced to do a good deal of dynamic type checking anyway? The answer 
from the lisp community's experience is "not all that important." The 
overwhelming majority of type errors that would be caught by a static 
type checker are caught anyway in testing required for other purposes. 
This relative uselessness comes at the cost of also having to placate 
the static type checker. This makes static typing a very unattractive 
proposition to programmers who know they'll have to do dynamic type 
checking anyway.


> I can even agree to call Lisp's types
> types.  But that does not mean that Lisp has a /type system/ (except
> in the most trivial of senses -- see below).

The fact that lisp doesn't try to prove program type safety at compile 
time doesn't make the system common lisp uses to deal with types 
trivial. In fact, it is quite useful. It just isn't useful for doing the 
kind of static inference about program correctness that you are enamored 
of. Lispers aren't particularly interested in such static verifications 
of type safety, because they write more dynamic programs which would be 
impossible to statically verify except in the most useless sense. That 
is, unless, one overrode the static type checker to get the necessary 
dynamism, but then, what's the point of static type checking? The lisp 
type system allows for detection of runtime type errors precisely 
because lispers know that many of the programs they want to write 
couldn't possibly be usefully statically verified - and Pierce himself 
agrees with us.

You equate a useful type system with one that does static verification 
of program type safety. But this is *not* useful to lispers, who know, 
along with Pierce and Cardelli, that there simply isn't enough known at 
compile time to provide for verification of runtime program type safety. 
Therefore, we have a type system that focuses on flagging runtime type 
errors, allowing them to be corrected, or dealt with gracefully the only 
time they can be dealt with - at runtime.
From: Feuer
Subject: Re: More static type fun.
Date: 
Message-ID: <3FBB12D1.C97682E2@his.com>
Raffael Cavallaro wrote:

> You equate a useful type system with one that does static verification
> of program type safety. But this is *not* useful to lispers, who know,
> along with Pierce and Cardelli, that there simply isn't enough known at
> compile time to provide for verification of runtime program type safety.
> Therefore, we have a type system that focuses on flagging runtime type
> errors, allowing them to be corrected, or dealt with gracefully the only
> time they can be dealt with - at runtime.
<etc>

I think you're missing Matthias' point.  He isn't saying that lisp
is bad because it doesn't have a type system.  He's saying
that lisp doesn't have a type system.  He seems also to be saying
that he personally prefers to program in languages with (static)
type systems and is arguing that (static) type systems are much more
useful (in the appropriate context, which is not Lisp) than you and
some other Lispers claim.

David
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m2znessutv.fsf@hanabi-air.shimizu.blume>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:

> However, even Pierce himself has written that dynamic typing is 
> sometimes unavoidable, and *essential to type safety*:
> "However, even in statically typed languages, there is often the need to 
> deal with data whose type cannot be determined at compile time.  For 
> example, full static typechecking of programs that exchange data with 
> other programs or access persistent data is in general not possible.  A 
> certain amount of dynamic checking must be performed in order to 
> preserve type safety." (from "Dynamic Typing in a Statically Typed 
> Language" available at 
> <http://www.cis.upenn.edu/~bcpierce/papers/dynamic.ps>)

Sure.  Nobody is disputing the need for dynamic tests.  But read what
Pierce et al. wrote carefully, and you'd notice that he did not say
"dynamic typing is sometimes unavoidable" or "essential to type
safety".  He merely said that "dynamic checking must be performed".

Dynamic tests can be handled quite well in a statically typed context
in a variety of ways.  One of these is what the above paper is all
about.  But the paper does not say or imply that a type Dynamic is the
only way to go, and it is also not proof that dynamic typing is
inherently needed almost everywhere.  After all, the paper does not
suggest to throw out the rest of the type system in favor of type
Dynamic.

> Note that any program that allows significant user interaction 
> constitutes a "program that exchanges data with other programs" (i.e., 
> those programs of the host system that translate user inputs into 
> machine usable form). In the modern era, this is a very large proportion 
> of software, if not the majority.

See, now you are *way* overstating your case.  Leading up to here you
wrote a lot of things (many of which I snipped) that one can agree
with, at least in principle, after perhaps aligning some terminology.

The problems with interactive input can be handled quite well using,
e.g., parsing techniques, and those work well enough regardless of
whether the language is statically or dynamically typed.  One can
actually think of Abadi/Cardelli/Pierce/Plotkin's type "Dynamic" (at
least as used in their introductory example) as a way of packaging up
a generic parser in a statically typed language.  Yes, type Dynamic is
useful at times (but not always, and not even in the majority of
cases), and it is never /needed/ in the strict sense of the word.

> This is why Wegner, Milner, and others 
> feel that  "a theory of concurrency and interaction requires a new 
> conceptual framework, not just a refinement of what we find natural for 
> sequential [algorithmic] computing." (from "Elements of Interaction" 
> (Turing Award lecture), Communications of the ACM (36:1), 1993, cited in 
> "Computation Beyond Turing Machines" available at: 
> <http://www.cse.uconn.edu/~dqg/papers/cacm02.rtf>)

You don't seriously think that they were referring to pervasive dynamic
typing with this remark, do you?  After all, that wouldn't be a new
framework at all!

> The only question then becomes, how useful is static type checking,
> if we are forced to do a good deal of dynamic type checking anyway?
> The answer from the lisp community's experience is "not all that
> important."

Pierce wrote "a certain amount", not "a good deal".  The claim that it
is "not all that important" is completely at odds with my own
experience.  I come from a Lisp and dynamic typing background, and
precisely because I found it important (and, in fact, *extremely* so)
I turned to what I feel are better language designs.

> The overwhelming majority of type errors that would be caught by a
> static type checker are caught anyway in testing required for other
> purposes.

Again, you are *way* overstating your case here.  The above statement
is simply false according to my own experience and according to that
reported by many other respected members of the computing community.

> This relative uselessness comes at the cost of also having to placate 
> the static type checker.

Another claim that is not only not supported but directly contradicted
by experience.  I, for example, do not "placate" a type checker, I
actively /utilize/ its power to my own advantage.  This way it is
neither useless nor bothersome but rather the opposite on both counts.

> This makes static typing a very unattractive proposition to
> programmers who know they'll have to do dynamic type checking
> anyway.

Well, yes, dynamic tests are unavoidable. (Dynamic type checking
certainly is not!)  I am a programmer who knows that dynamic tests are
needed, and static typing is still extremely attractive to me.

> The fact that lisp doesn't try to prove program type safety at compile 
> time doesn't make the system common lisp uses to deal with types 
> trivial. In fact, it is quite useful.

"Useful" and "not trivial" are not the same.  Moreover, a "system that
deals with types" is not necessarily a "type system".

> The lisp type system allows for detection of runtime type errors
> precisely because lispers know that many of the programs they want
> to write couldn't possibly be usefully statically verified - and
> Pierce himself agrees with us.

You'd better ask the man himself.  I doubt that he will agree with you
on significantly more points than I do.

> You equate a useful type system with one that does static verification 
> of program type safety. But this is *not* useful to lispers, who know, 
> along with Pierce and Cardelli, that there simply isn't enough known at 
> compile time to provide for verification of runtime program type safety. 

Type Dynamic (which has actually become rather well-known throughout
the static typing community) is precisely a way of *statically*
dealing with the uncertainties that come with the dynamics of
runtime. In the program fragment that they use in the introduction:

   typecase image of                                (* 1 *)
     (b:Bitmap) => displayBitmap(b)                 (* 2 *)
   | (s:String) => displayString(s)                 (* 3 *)

we /statically/ know that the value of variable "image" will always be
of type Dynamic, that variable "b" in line 2 holds only values of type
Bitmap, and that variable "s" always assumes String values in line
3.  Therefore we are able to assign those types statically to each
program phrase without having to wait for things to unfold at runtime.
It is the same technique that is used in "ordinary" case expressions
like:

 fun traverse tree =
   case tree of
      Leaf x => ...
    | Node (left, right) => (traverse(left); traverse(right))

where (assuming a suitable type definition in place) we statically
know "tree", "left" and "right" to be of a certain sum type with cases
Leaf and Node and where we know "x" to be of the type that is the
domain of the "Leaf" constructor.

So there is a whole lot of static type information available, and type
Dynamic along with the typecase construct have been designed with
precisely that in mind.
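The same interplay of static knowledge and runtime tests can be sketched in TypeScript terms, with `unknown` playing the role of type Dynamic and type guards playing the role of typecase (Bitmap and the display functions are made-up names for the illustration):

```typescript
// `unknown` stands in for type Dynamic: statically, all we know is
// that the value belongs to this one catch-all type.
class Bitmap { constructor(public pixels: number[]) {} }

function display(image: unknown): string {
  // "typecase image of ..." -- each runtime test narrows the
  // static type inside its branch.
  if (image instanceof Bitmap) {
    // here the compiler statically knows image: Bitmap
    return `bitmap with ${image.pixels.length} pixels`;
  } else if (typeof image === "string") {
    // here the compiler statically knows image: string
    return `string "${image}"`;
  }
  return "unrecognized dynamic value";
}

console.log(display(new Bitmap([1, 2, 3])));
console.log(display("hello"));
```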

There are other ways of dealing with dynamic data, too (and the paper
says so) which do not involve type Dynamic.  In any case, in no way do
the authors of the paper advocate throwing out the baby with the
bathwater (e.g., by suddenly using type Dynamic for *everything*).
But that's what you are effectively suggesting with your "we don't
need no stinkin' static types" attitude.

[By the way, there is an amusing paper by Baars and Swierstra
(ICFP'02) which shows how to statically type type Dynamic.]
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <raffaelcavallaro-81F847.14290019112003@netnews.attbi.com>
In article <··············@hanabi-air.shimizu.blume>,
 Matthias Blume <····@my.address.elsewhere> wrote:

> Type Dynamic (which has actually become rather well-known throughout
> the static typing community) is precisely a way of *statically*
> dealing with the uncertainties that come with the dynamics of
> runtime.

This is another redefinition of terminology, in this case the word 
"static." Type Dynamic can't possibly be a *static* method of dealing 
with runtime. "Static" means,  based on the program itself, not it's 
runtime consequences (i.e., at compile time), so this is a contradiction 
in terms. Type Dynamic is just an escape hatch from the static type 
checker. It is precisely a means of saying "we won't check this until 
runtime."

>  In the program fragment that they use in the introduction:
> 
>    typecase image of                                (* 1 *)
>      (b:Bitmap) => displayBitmap(b)                 (* 2 *)
>    | (s:String) => displayString(s)                 (* 3 *)
> 
> we /statically/ know that the value of variable "image" will always be
> of type Dynamic,

Which is doublespeak. "We know that this is something that we don't know 
what it is yet." This reduces to "We don't know what this is." This 
serves the useful (to a static type checker) function of delineating 
that portion of the program which needs to be type checked at runtime, 
but it isn't a static method of checking - the checking still has to 
happen at runtime.

>  that variable "b" in line 2 holds only values of type
> Bitmap, and that variable "s" always assumes String values in line
> 3.  Therefore we are able to assign those types statically to each
> program phrase without having to wait for things to unfold at runtime.

But you have to wait for things to unfold at runtime *anyway* so you can 
do your type checks on the dynamic data. Since you're waiting for 
runtime for important type checks anyway, why bother me with the static 
ones, especially as I have to live two feet in one shoe to use them.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1vfpg583e.fsf@tti5.uchicago.edu>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:

> >  In the program fragment that they use in the introduction:
> > 
> >    typecase image of                                (* 1 *)
> >      (b:Bitmap) => displayBitmap(b)                 (* 2 *)
> >    | (s:String) => displayString(s)                 (* 3 *)
> > 
> > we /statically/ know that the value of variable "image" will always be
> > of type Dynamic,
> 
> Which is doublespeak. "We know that this is something that we don't know 
> what it is yet."

Nonsense.  For example, it is not an int, it is not a Bitmap, it is not a
String.  Yes, you might be able to get an int out of it, or a Bitmap,
or a String, but doing so requires the use of the typecase construct.

> This reduces to "We don't know what this is."

Of course we know what it is: It is a value of the static type whose
name is "Dynamic".  We don't know which case of this infinite sum type
it belongs to, but that's just the same as with ordinary sum types.
Finding out requires (type)case.

> But you have to wait for things to unfold at runtime *anyway* so you can 
> do your type checks on the dynamic data.

As has been said many times over by now, some of the things the static
checker will find out for you are things you cannot find out
dynamically by just looking at a value.

> Since you're waiting for runtime for important type checks anyway,
> why bother me with the static ones, especially as I have to live two
> feet in one shoe to use them.

Since I have to die anyway same day, why don't you shoot me right
away?
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1r80457wa.fsf@tti5.uchicago.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> Since I have to die anyway same day, why don't you shoot me right
                              ^o
> away?

(Need to use spellchecker...)
From: Pascal Bourguignon
Subject: Re: More static type fun.
Date: 
Message-ID: <878ymcq81j.fsf@thalassa.informatimago.com>
Matthias Blume <····@my.address.elsewhere> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > Since I have to die anyway same day, why don't you shoot me right
>                               ^o
> > away?
> 
> (Need to use spellchecker...)

Better not. Some day will happen, same day may be not.

-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <smkj3oav.fsf@comcast.net>
Matthias Blume <····@my.address.elsewhere> writes:

> Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:
>
>> >  In the program fragment that they use in the introduction:
>> > 
>> >    typecase image of                                (* 1 *)
>> >      (b:Bitmap) => displayBitmap(b)                 (* 2 *)
>> >    | (s:String) => displayString(s)                 (* 3 *)
>> > 
>> > we /statically/ know that the value of variable "image" will always be
>> > of type Dynamic,
>> 
>> Which is doublespeak. "We know that this is something that we don't know 
>> what it is yet."
>
> Nonsense.  For example, it is not an int, it is not a Bitmap, it is not a
> String.  Yes, you might be able to get an int out of it, or a Bitmap,
> or a String, but doing so requires the use of the typecase construct.
>
>> This reduces to "We don't know what this is."
>
> Of course we know what it is: It is a value of the static type whose
> name is "Dynamic".  We don't know which case of this infinite sum type
> it belongs to, but that's just the same as with ordinary sum types.
> Finding out requires (type)case.

I'm picturing Matthias at Christmas.  He is very disappointed that all
he received were colorfully wrapped boxes of various sizes.

-- 
~jrm
From: Darius
Subject: Re: More static type fun.
Date: 
Message-ID: <20031119191412.0000044a.ddarius@hotpop.com>
On Wed, 19 Nov 2003 23:56:42 GMT
Joe Marshall <·············@comcast.net> wrote:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > Raffael Cavallaro <················@junk.mail.me.not.mac.com>
> > writes:
> 
> >> >  In the program fragment that they use in the introduction:
> >> > 
> >> >    typecase image of                                (* 1 *)
> >> >      (b:Bitmap) => displayBitmap(b)                 (* 2 *)
> >> >    | (s:String) => displayString(s)                 (* 3 *)
> >> > 
> >> > we /statically/ know that the value of variable "image" will
> >> > always be of type Dynamic,
> >> 
> >> Which is doublespeak. "We know that this is something that we don't
> >> know what it is yet."
> 
> > Nonsense.  For example, it is not an int, it is not a Bitmap, it is not
> > a String.  Yes, you might be able to get an int out of it, or a
> > Bitmap, or a String, but doing so requires the use of the typecase
> > construct.
> 
> >> This reduces to "We don't know what this is."
> 
> > Of course we know what it is: It is a value of the static type whose
> > name is "Dynamic".  We don't know which case of this infinite sum
> > type it belongs to, but that's just the same as with ordinary sum
> > types. Finding out requires (type)case.
> 
> I'm picturing Matthias at Christmas.  He is very disappointed that all
> he received were colorfully wrapped boxes of various sizes.

Actually, it would be quite the opposite.  He'd receive a bunch of
plain identical boxes full of many strange and wonderful things, but
he's disappointed because none of the boxes contain what he really
wanted.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m2ekw390os.fsf@hanabi-air.shimizu.blume>
Darius <·······@hotpop.com> writes:

> On Wed, 19 Nov 2003 23:56:42 GMT
> Joe Marshall <·············@comcast.net> wrote:
> 
> > Matthias Blume <····@my.address.elsewhere> writes:
> > 
> > > Raffael Cavallaro <················@junk.mail.me.not.mac.com>
> > > writes:
> > 
> > >> >  In the program fragment that they use in the introduction:
> > >> > 
> > >> >    typecase image of                                (* 1 *)
> > >> >      (b:Bitmap) => displayBitmap(b)                 (* 2 *)
> > >> >    | (s:String) => displayString(s)                 (* 3 *)
> > >> > 
> > >> > we /statically/ know that the value of variable "image" will
> > >> > always be of type Dynamic,
> > >> 
> > >> Which is doublespeak. "We know that this is something that we don't
> > >> know what it is yet."
> > 
> > > Nonsense.  For example, it is not an int, it is not a Bitmap, it is not
> > > a String.  Yes, you might be able to get an int out of it, or a
> > > Bitmap, or a String, but doing so requires the use of the typecase
> > > construct.
> > 
> > >> This reduces to "We don't know what this is."
> > 
> > > Of course we know what it is: It is a value of the static type whose
> > > name is "Dynamic".  We don't know which case of this infinite sum
> > > type it belongs to, but that's just the same as with ordinary sum
> > > types. Finding out requires (type)case.
> > 
> > I'm picturing Matthias at Christmas.  He is very disappointed that all
> > he received were colorfully wrapped boxes of various sizes.
> 
> Actually, it would be quite the opposite.  He'd receive a bunch of
> plain identical boxes full of many strange and wonderful things, but
> he's disappointed because none of the boxes contain what he really
> wanted.

Wrong.  I get *one* box, and then I ask "Dear box, do you contain an
iPod?"  Should the box answer "yes" (upon which it will also magically
open up and hand me the iPod), I'll directly jump into the "be happy
and upload my iTunes library to the iPod" routine without fear that
this ends up failing due to the fact that what I have in my hand is
actually some crappy no-name mp3 player.  If the box answers "no",
then I jump right into my "be disappointed" routine...  :-)
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <raffaelcavallaro-2F5565.11200820112003@netnews.attbi.com>
In article <·······························@hotpop.com>,
 Darius <·······@hotpop.com> wrote:

> On Wed, 19 Nov 2003 23:56:42 GMT
> Joe Marshall <·············@comcast.net> wrote:
> > I'm picturing Matthias at Christmas.  He is very disappointed that all
> > he received were colorfully wrapped boxes of various sizes.
> 
> Actually, it would be quite the opposite.  He'd receive a bunch of
> plain identical boxes full of many strange and wonderful things, but
> he's disappointed because none of the boxes contain what he really
> wanted.

(with-usual-disclaimers-about-having-to-explain-a-joke-killing-the-humor
 (

I think the point is that, to Matthias, all things whose contents one
doesn't know (represented in Joe's joke by wrapped presents) are the
same "type" in the HM sense - type Dynamic. So Matthias would claim he
knows what any wrapped present is - it's a value of the HM "type"
WrappedPresent. In the real world, the only way he could know what any
value of WrappedPresent is would be if all the boxes were empty.

As soon as we admit that what we really want to know is what's *in* the 
WrappedPresent, that not all WrappedPresents are the same, we must admit 
that these boxes are not really of the same "type" (here, in the lisp 
sense). A WrappedPresent containing an iPod is *not* the same as a 
WrappedPresent containing a no-name mp3 player. Since these 
WrappedPresents are not really of the same "type" in the lisp sense, we 
only discover their "type" (lisp sense) when we open them up, at 
runtime. Since we are only determining their "type" (lisp sense) at 
runtime, we are really doing dynamic type checking (again, "type" in the 
lisp sense). The static type WrappedPresent is really just an empty box.


P.S. Matthias, the fact that you really want an iPod for xmas is a sign 
that you can't be that misguided ;^)
From: Thant Tessman
Subject: Re: More static type fun.
Date: 
Message-ID: <bpj2i7$e6q$1@terabinaries.xmission.com>
Raffael Cavallaro wrote:

[...]

> As soon as we admit that what we really want to know is what's *in* the 
> WrappedPresent, that not all WrappedPresents are the same, we must admit 
> that these boxes are not really of the same "type" (here, in the lisp 
> sense). A WrappedPresent containing an iPod is *not* the same as a 
> WrappedPresent containing a no-name mp3 player. Since these 
> WrappedPresents are not really of the same "type" in the lisp sense, we 
> only discover their "type" (lisp sense) when we open them up, at 
> runtime. [...]

The point is that Matthias knows that the box *doesn't* contain a lawn 
mower, or a German shepherd, or a lake. So he knows before he opens the 
box that he can safely ignore those possibilities.

-thant
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <raffaelcavallaro-1F5076.21295520112003@netnews.attbi.com>
In article <············@terabinaries.xmission.com>,
 Thant Tessman <·····@acm.org> wrote:

> The point is that Matthias knows that the box *doesn't* contain a lawn 
> mower, or a German shepherd, or a lake. So he knows before he opens the 
> box that he can safely ignore those possibilities.

Well I might grant you lake - although it could contain a small puddle 
;^) but how could he know that it doesn't contain a lawnmower? No one 
said the box was very small. It could contain an Alsatian as well, as 
long as it was asleep. IOW, type Dynamic could be *anything*.
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <b20b8d03.0311210651.11261ea2@posting.google.com>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> wrote:
> 
> > The point is that Matthias knows that the box *doesn't* contain a lawn 
> > mower, or a German shepherd, or a lake. So he knows before he opens the 
> > box that he can safely ignore those possibilities.
> 
> Well I might grant you lake - although it could contain a small puddle 
> ;^) but how could he know that it doesn't contain a lawnmower? No one 
> said the box was very small. It could contain an alsatian as well, as 
> long as it was asleep. IOW, type Dynamic could be *anything*.

Let me have a stab at it. :-)

Matthias gets a box, which looks like all other present boxes. But
there is a little label on it saying what he will find inside. If it
says "iPod", then Matthias can happily open the box, take the iPod
out, play with it, whatever. He can also give it to a friend, to make
him jealous or ask him to upload his song collection (unfortunately
the iPod is stateful, so he cannot make a copy for his friend :-) ).

If he experienced Xmas on planet Lisp, things would be slightly
different. He could not pass the iPod to his friend. He could only
pass the box to his friend. If he didn't get the desired iPod he
could still lie to his friend, saying "Look, I got a box with an iPod,
nananana!". The friend couldn't tell it was a lie unless he somehow got
hold of the box himself and looked at the label.

Moreover, he could not listen to the iPod directly. He could only
listen to it through the box - what a crappy sound experience! :-)

Oh, and the worst part is: Xmas would really be boring, because he
gets the same present boxes the whole year anyway.

Cheers,

    - Andreas
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <raffaelcavallaro-8486EA.01501622112003@netnews.attbi.com>
In article <····························@posting.google.com>,
 ········@ps.uni-sb.de (Andreas Rossberg) wrote:

> If he would experience Xmas on planet Lisp, things would be slightly
> different. He could not pass the iPod to his friend.

why not? In lisp-world, Matthias can:
1. give the iPod to a friend - pass the symbol itself - so the friend 
can upload songs to the iPod (i.e., modify it). The friend can, in fact, 
verify that it is an iPod - he doesn't have to take Matthias's word. 
That's what typep, check-type etc. are for.

2. Give the friend a firewire cable to connect to the iPod - that is, a  
different symbol that also "points to" the iPod itself. This firewire 
cable can be used to upload songs, download songs - that is, modify the 
iPod, verify that it is, in fact, an iPod, etc. In short, in lisp-world, 
Matthias can do everything he can do in Haskell world, and he can...

3. copy the iPod entirely, so the friend has one of his own, with no 
shared state, to load with whatever totally different music he wants.

The iPod itself:

? (setq my-iPod '("jingle bells" ("holly jolly xmas" "rockin rudolph") 
"blue xmas"))
("jingle bells" ("holly jolly xmas" "rockin rudolph") "blue xmas")
? (push '"linus and lucy" my-iPod)
("linus and lucy" "jingle bells" ("holly jolly xmas" "rockin rudolph") 
"blue xmas")

a reference to it:

? (setq Mattiass-iPod my-iPod)
("linus and lucy" "jingle bells" ("holly jolly xmas" "rockin rudolph") 
"blue xmas")
? my-iPod
("linus and lucy" "jingle bells" ("holly jolly xmas" "rockin rudolph") 
"blue xmas")
? Mattiass-iPod
("linus and lucy" "jingle bells" ("holly jolly xmas" "rockin rudolph") 
"blue xmas")
? (setf (second mattiass-iPod) '"deck the halls")
"deck the halls"
? mattiass-iPod
("linus and lucy" "deck the halls" ("holly jolly xmas" "rockin rudolph") 
"blue xmas")
? my-iPod
("linus and lucy" "deck the halls" ("holly jolly xmas" "rockin rudolph") 
"blue xmas")

an independent copy of it:

? (setq clone-iPod (copy-list my-iPod))
("linus and lucy" "deck the halls" ("holly jolly xmas" "rockin rudolph") 
"blue xmas")
? (setf (first clone-iPod) '"silver bells")
"silver bells"
? my-iPod
("linus and lucy" "deck the halls" ("holly jolly xmas" "rockin rudolph") 
"blue xmas")
? clone-iPod
("silver bells" "deck the halls" ("holly jolly xmas" "rockin rudolph") 
"blue xmas")


Lispers know when they need the thing itself, when they need a separate 
reference to that thing, and when they need a copy of it. We just check 
types at runtime, not compile time - something everyone has to do anyway 
unless one's program is hermetically sealed from other programs or 
persistent data. (BTW, for the reason there is no default copy for all 
purposes see Kent Pitman's paper at: 
<http://www.nhplace.com/kent/PS/EQUAL.html>)
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpnaeu$548$1@grizzly.ps.uni-sb.de>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> wrote:
>
> > If he would experience Xmas on planet Lisp, things would be slightly
> > different. He could not pass the iPod to his friend.
>
> why not?

You shouldn't take that posting overly seriously.

> In lisp-world, Mattias can:
> 1. give the iPod to a friend - pass the symbol itself - so the friend
> can upload songs to the iPod (i.e., modify it).

No, because for dynamic typing you always need the box with the label.
Admittedly, the label may be less precise, though ;-)

> The friend can, in fact,
> verify that it is an iPod - he doesn't have to take Mattias's word.
> That's what typep, check-type etc. are for.

As I said: only when he actually gets his hands on it.

> 2. Give the friend a firewire cable to connect to the iPod - that is, a
> different symbol that also "points to" the iPod itself. This firewire
> cable can be used to upload songs, download songs - that is, modify the
> iPod, verify that it is, in fact, an iPod, etc. In short, in lisp-world,
> Mattias can do everything he can do in Haskell world, and he can...

But only by reaching into the box, blindly. If he has verified the label,
he'll probably know where to find the jog wheel. Otherwise he may well be
bitten by a poisonous snake.

With type Dynamic there is a secret magic preventing you from opening the
box without having read the label.

> 3. copy the iPod entirely, so the friend has one of his own, with no
> shared state, to load with whatever totally different music he wants.

That was just an independent side joke. Of course you can in fact clone
stateful devices on most planets - whether one ever should is another
question, for the lawyers. :-)

> Lisper's know when they need the thing itself, when they need a separate
> reference to that thing

The reference would be a second box containing a piece of paper describing
where to find the first box. ;-)

    - Andreas
From: Nikodemus Siivola
Subject: Re: More static type fun.
Date: 
Message-ID: <bpni4v$gcc$1@nyytiset.pp.htv.fi>
In comp.lang.lisp Andreas Rossberg <········@ps.uni-sb.de> wrote:

>> 1. give the iPod to a friend - pass the symbol itself - so the friend
>> can upload songs to the iPod (i.e., modify it).

> No, because for dynamic typing you always need the box with the label.
> Admittedly, the label may be less precise, though ;-)

This is a nit, but actually, no. To push the analogy:

 If the friend trusts Matthias to give him an iPod, there is no need
 for the label.

Values do not need to be boxed when the compiler can prove the type of
the value at that point.

Cheers,

 -- Nikodemus
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpnk7r$7sr$1@grizzly.ps.uni-sb.de>
Nikodemus Siivola <······@random-state.net> wrote:
>
> >> 1. give the iPod to a friend - pass the symbol itself - so the friend
> >> can upload songs to the iPod (i.e., modify it).
>
> > No, because for dynamic typing you always need the box with the label.
> > Admittedly, the label may be less precise, though ;-)
>
> This is a nit, but actually, no. To push the analogy:
>
>  If the friend trusts Matthias to give him an iPod, there is no need
>  for the label.
>
> Values do not need to be boxed when the compiler can prove the type of
> the value at that point.

But that would require the iPod not being a surprise present in the first
place!

Moreover, the friend would not only have to trust Matthias, but also at
least his wife (who gave it to him), UPS (who delivered it), Amazon (who
took the order), and the Apple factory in Asia (who assembled it). Well,
that may be reasonable in this particular case, but what if his wife had
found a cheaper offer from less reliable sources? Or in the very likely
event that the friend just doesn't know where Matthias got it from? ;-)

    - Andreas
From: Nikodemus Siivola
Subject: Re: More static type fun.
Date: 
Message-ID: <bpnn59$nj2$1@nyytiset.pp.htv.fi>
In comp.lang.lisp Andreas Rossberg <········@ps.uni-sb.de> wrote:

> Nikodemus Siivola <······@random-state.net> wrote:

> found a cheaper offer from less reliable sources? Or in the very likely
> event that the friend just doesn't know where Matthias got it from? ;-)

No. He just has to trust Matthias to make sure that the box contains
an iPod. If it doesn't, he (Matthias) will handle the problem -- but
what he won't do is give his friend the iPoS he got instead.

Actually, if Matthias trusts Santa to give him the iPod he asked
for (though trusting Santa is probably a bad idea), then Santa can give
Matthias an unlabeled box without any problems.

Like many people have pointed out, dynamically typed languages can do
type inference as well.

Cheers,

 -- Nikodemus
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpocld$ef7$1@grizzly.ps.uni-sb.de>
Nikodemus Siivola <······@random-state.net> wrote:
>
> No. He just has to trust Matthias to make sure that the box contains
> an iPod. If it doesn't, he (Matthias) will handle to problem

Well, right, but he cannot just trust him; he actually has to see that
Matthias verified the label. Moreover, he has to promise to never take an
iPod from somebody else he does not trust in the same way.

But I think the analogy starts to fall apart here...

    - Andreas
From: Darius
Subject: Re: More static type fun.
Date: 
Message-ID: <20031120133710.00003a5f.ddarius@hotpop.com>
On Thu, 20 Nov 2003 16:20:08 GMT
Raffael Cavallaro <················@junk.mail.me.not.mac.com> wrote:

> In article <·······························@hotpop.com>,
>  Darius <·······@hotpop.com> wrote:
> 
> > On Wed, 19 Nov 2003 23:56:42 GMT
> > Joe Marshall <·············@comcast.net> wrote:
> > > I'm picturing Matthias at Christmas.  He is very disappointed that
> > > all he received were colorfully wrapped boxes of various sizes.
> > 
> > Actually, it would be quite the opposite.  He'd receive a bunch of
> > plain identical boxes full of many strange and wonderful things, but
> > he's disappointed because none of the boxes contain what he really
> > wanted.
> 
> (with-usual-disclaimers-about-having-to-explain-a-joke-killing-the-humor
>  (
> 
> I think the point is that, to Mattias, all things that one doesn't
> know what they are (represented in Joe's joke by wrapped presents) are
> the same "type" in the HM sense  - type Dynamic. So  Mattias would
> claim he knows what any wrapped present is - it's a value of the  HM
> "type"  WrappedPresent. In the real world, the only way he could know
> what any value of WrappedPresent is, would be if all the boxes were
> empty.

But he didn't say he knows the -value-.  By that logic, if I said I knew
something was of type Int, that would mean there could only be one of
them.

> As soon as we admit that what we really want to know is what's *in*
> the WrappedPresent, that not all WrappedPresents are the same, we must
> admit that these boxes are not really of the same "type" (here, in the
> lisp sense). 

I thought the Lisp sense was that a "type" was a set of values.  All the
boxes are in the same "set of values" no matter what's in them.

> A WrappedPresent containing an iPod is *not* the same as
> a WrappedPresent containing a no-name mp3 player. 

And 3 is not the same as 5, that doesn't mean they don't have the same
type.

> Since these 
> WrappedPresents are not really of the same "type" in the lisp sense,

So 3 is a different type than 5 in Lisp?  Let me ask CLISP...
[1]> (type-of 3)
FIXNUM
[2]> (type-of 5)
FIXNUM
Oh I see what you mean, it needs to be a container-ish thing, so I'm
guessing a cons of 1 and 2 is a different Lisp type than a cons of #\a
and #\b...
[3]> (type-of (cons 1 2))
CONS
[4]> (type-of (cons #\a #\b))
CONS
*throws up hands* Well now I'm just totally confused.

> we only discover their "type" (lisp sense) when we open them up, at 
> runtime. Since we are only determining their "type" (lisp sense) at 
> runtime, we are really doing dynamic type checking (again, "type" in
> the lisp sense). The static type WrappedPresent is really just an
> empty box.

Then there would be nothing to get out of it.

Since we are no longer disappointing Matthias: the reasoning behind my
characterization is that to the (static) type system all Dynamics look
alike, so we can have a (homogeneous) list of them, for example; more
specifically and technically, they all respond to fromDynamic (and
that's pretty much all they respond to). Responding to Matthias: in
both cases (well, I'm assuming for Joe), the multiplicity of boxes
corresponds to multiple invocations. Anyway, Matthias wouldn't have
been disappointed if he had written down what he wanted ;), but I guess
Christmas just isn't the same if you know what you are going to get
ahead of time.
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <raffaelcavallaro-63139E.22322419112003@netnews.attbi.com>
In article <··············@tti5.uchicago.edu>,
 Matthias Blume <····@my.address.elsewhere> wrote:

> Since I have to die anyway some day, why don't you shoot me right
> away?

So here, you explicitly compare doing dynamic type checking, something 
that Pierce, etc. state clearly you *must do* to maintain type safety, 
with dying. I don't think one can get more unthinkingly dogmatic than 
that.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m265hf8zpx.fsf@hanabi-air.shimizu.blume>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:

> In article <··············@tti5.uchicago.edu>,
>  Matthias Blume <····@my.address.elsewhere> wrote:
> 
> > Since I have to die anyway some day, why don't you shoot me right
> > away?
> 
> So here, you explicitly compare doing dynamic type checking, something 
> that Pierce, etc. state clearly you *must do* to maintain type safety, 
> with dying.

Pierce said no such thing!  He said "dynamic tests" or "dynamic
checking".  Dynamic type checking is definitely *not* necessary for
maintaining type safety.

> I don't think one can get more unthinkingly dogmatic than that.

I think you completely missed my point.  What I was referring to is
your suggestion that because dynamic type checking is /sometimes/
necessary (let's assume for a moment that this is actually true), one
might a well completely give up on static typing.  But that conclusion
is just as ridiculous as the one in the "shoot me" sentence: Just
because something is needed sometimes does not mean that one should
use it always or right away.  Just because I have to die some day does
not mean I cannot try and go on living until then.

In any case, dynamic type checking is *not* needed for type safety.
This is a simple fact witnessed by all those safe statically typed
languages which do not have type Dynamic or any of its equivalents.
So not only is the implication ("IF I have to use it /sometimes/, THEN
it is reasonable to /always/ use it") itself bogus, its premise
("dynamic type checking is necessary for safety") is bogus as well.
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <raffaelcavallaro-643CAE.09253320112003@netnews.attbi.com>
In article <··············@hanabi-air.shimizu.blume>,
 Matthias Blume <····@my.address.elsewhere> wrote:

> Pierce said no such thing!  He said "dynamic tests" or "dynamic
> checking".  Dynamic type checking is definitely *not* necessary for
> maintaining type safety.

I think maybe our problem here stems from your insistence that the word 
"type" only applies to the types of a HM style type system. I think 
we've seen that no productive discussion between static typing advocates 
and lispers can come from restricting the usage of the word "type" to 
types in a HM style type system.

In lisp terminology [*], if you test the type (again, in lisp 
terminology) of a piece of data at runtime, you are doing a runtime type 
check. These runtime type tests are dynamic tests of data type, so they 
are, dynamic type checking. In a statically typed language, if we 
perform dynamic checking of the type of data (again, "type" in the more 
inclusive lisp sense) then we are doing a runtime type check, or dynamic 
type checking. Simply referring to all such data by a single "type" in 
the HM sense doesn't change the fact that we are really doing a check 
of its type (in the lisp sense) at runtime.

Since we have to do these anyway (and, yes, this is what Pierce et al.
mean), it makes sense, in many cases, to design our methodology
around dynamic type checking and not bother with having to code in a
style that fits the type system of a static type checker.


[*]object n. 1. any Lisp datum. ``The function cons creates an object 
which refers to two other objects.'' 2. (immediately following the name 
of a type) an object which is of that type, used to emphasize that the 
object is not just a name for an object of that type but really an 
element of the type in cases where objects of that type (such as 
function or class) are commonly referred to by name. ``The function 
symbol-function takes a function name and returns a function object.''

type n. 1. a set of objects, usually with common structure, behavior, or 
purpose. (Note that the expression ``X is of type Sa'' naturally implies 
that ``X is of type Sb'' if Sa is a subtype of Sb.) 2. (immediately 
following the name of a type) a subtype of that type. ``The type vector 
is an array type.''

subtype n. a type whose membership is the same as or a proper subset of 
the membership of another type, called a supertype. (Every type is a 
subtype of itself.)
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m13ccjf1dz.fsf@tti5.uchicago.edu>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:

> In article <··············@hanabi-air.shimizu.blume>,
>  Matthias Blume <····@my.address.elsewhere> wrote:
> 
> > Pierce said no such thing!  He said "dynamic tests" or "dynamic
> > checking".  Dynamic type checking is definitely *not* necessary for
> > maintaining type safety.
> 
>  I think maybe our problem here stems from your insistence that the word 
> "type" only applies to the types of a HM style type system. I think 
> we've seen that no productive discussion between static typing advocates 
> and lispers can come from restricting the usage of the word "type" to 
> types in a HM style type system.

You are right, of course.  But may I point out that under this
interpretation of "type test" *every* test in *any* program becomes a
"type test"?  If I write, e.g.,

   if (i < 10)
      ...
   else ...

I dynamically test membership of the type { i | i < 10 }.  Every test
for any property P becomes a "type test" -- the test that
discriminates between the types { x | P(x) } and { x | ~P(x) }.

I don't understand what is gained by saying "we need dynamic /type/
tests" if that merely means "we need dynamic tests".
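Matthias's { x | P(x) } reading can be made concrete with a short sketch (Python here; the predicate and function names are invented for illustration):

```python
# Under the broad reading, any predicate P induces two "types",
# { x | P(x) } and { x | not P(x) }, and an ordinary if-test is a
# dynamic membership test for one of them.
def less_than_ten(i):
    return i < 10  # membership test for the "type" { i | i < 10 }

def branch(i):
    # the same shape as the "if (i < 10) ... else ..." example above
    return "then-branch" if less_than_ten(i) else "else-branch"

print(branch(3))    # then-branch
print(branch(42))   # else-branch
```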
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <7k1v5609.fsf@ccs.neu.edu>
> Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:
>>
>>  I think maybe our problem here stems for your insistence that the word 
>> "type" only applies to the types of a HM style type system. I think 
>> we've seen that no productive discussion between static typing advocates 
>> and lispers can come from restricting the usage of the word "type" to 
>> types in a HM style type system.

Matthias Blume <····@my.address.elsewhere> writes:
>
> You are right, of course.  But may I point out that under this
> interpretation of "type test" *every* test in *any* program becomes a
> "type test"?  

Yes.  What's the problem?
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <raffaelcavallaro-4670A3.22334220112003@netnews.attbi.com>
In article <··············@tti5.uchicago.edu>,
 Matthias Blume <····@my.address.elsewhere> wrote:

> I don't understand what is gained by saying "we need dynamic /type/
> tests" if that merely means "we need dynamic tests".

Because in lisp, at any rate, you can use the language to help you if 
you formalize these particular tests as types, just as in Haskell, the 
language provides infrastructure to aid you if you formalize these sorts 
of tests as types - only Haskell does so statically of course. For 
example, a lisp implementation can signal a type error if you try to 
assign the wrong type of value to a struct slot (sbcl does, for 
example), there are three built-in case statements that switch on types, 
etc. Could you write all this supporting infrastructure manually 
yourself? - sure, but now you're moving into Greenspun's 10th territory.
From: Pascal Bourguignon
Subject: Re: More static type fun.
Date: 
Message-ID: <87ptfmnww5.fsf@thalassa.informatimago.com>
Matthias Blume <····@my.address.elsewhere> writes:

> Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:
> 
> > In article <··············@hanabi-air.shimizu.blume>,
> >  Matthias Blume <····@my.address.elsewhere> wrote:
> > 
> > > Pierce said no such thing!  He said "dynamic tests" or "dynamic
> > > checking".  Dynamic type checking is definitely *not* necessary for
> > > maintaining type safety.
> > 
> >  I think maybe our problem here stems for your insistence that the word 
> > "type" only applies to the types of a HM style type system. I think 
> > we've seen that no productive discussion between static typing advocates 
> > and lispers can come from restricting the usage of the word "type" to 
> > types in a HM style type system.
> 
> You are right, of course.  But may I point out that under this
> interpretation of "type test" *every* test in *any* program becomes a
> "type test"?  If I write, e.g.,
> 
>    if (i < 10)
>       ...
>    else ...
> 
> I dynamically test membership of the type { i | i < 10 }.  Every test
> for any property P becomes a "type test" -- the test that
> discriminates between the types { x | P(x) } and { x | ~P(x) }.
> 
> I don't understand what is gained by saying "we need dynamic /type/
> tests" if that merely means "we need dynamic tests".

It's "worse" than that!  The language runs implicit dynamic type
tests all the time:

[57]> (+ "a" 1)
*** - argument to + should be a number: "a"
1. Break [58]> 
[59]> (car :toto)
*** - CAR: :TOTO is not a list
1. Break [60]> 
[61]> (3 2 1)
*** - EVAL: 3 is not a function name
1. Break [62]> 



-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <raffaelcavallaro-0A571E.22412820112003@netnews.attbi.com>
In article <··········@news.unimelb.edu.au>,
 Fergus Henderson <···@cs.mu.oz.au> wrote:

> But that
> doesn't work -- it causes no end of problems.


It causes problems only if you're committed to statically analyzing 
types. If you do dynamic typing, this causes no problems at all.
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fc0ac29$1@news.unimelb.edu.au>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:

>Fergus Henderson <···@cs.mu.oz.au> wrote:
>
>> But that doesn't work -- it causes no end of problems.
>
>It causes problems only if you're committed to statically analyzing 
>types. If you do dynamic typing, this causes no problems at all.

No, you're wrong: if you had a "univ" type or equivalent, then
defining type_of(univ(X)) = type_of(X) would cause no end of problems,
even in a dynamically typed language.

However, if you're _only_ doing dynamic typing, you probably won't have
a "univ" type or equivalent, and so the issue will never arise.  The only
time it would be likely to arise in a dynamically typed language would be
if you are trying to interoperate with code written in a statically typed
language.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <raffaelcavallaro-0A18EF.14224523112003@netnews.attbi.com>
In article <··········@news.unimelb.edu.au>,
 Fergus Henderson <···@cs.mu.oz.au> wrote:

> However, if you're _only_ doing dynamic typing, you probably won't have
> a "univ" type or equivalent, and so the issue will never arise.

Which is why it's not a problem in dynamically typed languages - you 
aren't trying to segregate the program into statically type safe 
portions, and dynamically checked portions - everything is checked 
dynamically.
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fc0a987$1@news.unimelb.edu.au>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:

> Matthias Blume <····@my.address.elsewhere> wrote:
>
>> Type Dynamic (which has actually become rather well-known throughout
>> the static typing community) is precisely a way of *statically*
>> dealing with the uncertainties that come with the dynamics of
>> runtime.
>
>This is another redefinition of terminology, in this case the word 
>"static." Type Dynamic can't possibly be a *static* method of dealing 
>with runtime.

Well... you _statically_ decide at what program points you will
use dynamic checking :)

>>  In the program fragment that they use in the introduction:
>> 
>>    typecase image of                                (* 1 *)
>>      (b:Bitmap) => displayBitmap(b)                 (* 2 *)
>>    | (s:String) => displayString(s)                 (* 3 *)
>> 
>> we /statically/ know that the value of variable "image" will always be
>> of type Dynamic,
>
>Which is doublespeak.

No, this is an important distinction.  It has real effects.

For example, in Mercury, the equivalent code is

	( univ_to_type(Image, B `with_type` bitmap) -> displayBitmap(B, !IO)
	; univ_to_type(Image, S `with_type` string) -> displayString(S, !IO)
	; error("expecting bitmap or string")
	)

If I add a call to `print(type_of(Image))', it will print "std_util.univ",
whereas if I call `print(type_of(S))' or `print(type_of(univ_value(Image)))',
it will print "string".

It's tempting to try to fudge this distinction, so that type_of(Image)
would return the same thing as type_of(univ_value(Image)).  But that
doesn't work -- it causes no end of problems.
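
[Editorial sketch: Fergus's distinction can be mimicked in Python with an explicit wrapper — a hypothetical `Univ` class, not a real Mercury construct. The wrapper has its own type, distinct from the type of the value it carries, just as `type_of(Image)` prints "std_util.univ" while `type_of(univ_value(Image))` prints "string".]

```python
# Hypothetical sketch of Mercury's `univ` in Python terms: a Univ
# wrapper has its own type, distinct from the type of what it holds.
class Univ:
    def __init__(self, value):
        self.value = value

def univ_to_type(u, ty):
    # Run-time check: return the wrapped value only if it has type `ty`.
    return u.value if isinstance(u.value, ty) else None

image = Univ("a string, not a bitmap")
s = univ_to_type(image, str)

print(type(image).__name__)  # Univ  (cf. type_of(Image) -> "univ")
print(type(s).__name__)      # str   (cf. type_of(univ_value(Image)) -> "string")
```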

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Bourguignon
Subject: Re: More static type fun.
Date: 
Message-ID: <87smkksxhm.fsf@thalassa.informatimago.com>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:
> 
> However, even Pierce himself has written that dynamic typing is 
> sometimes unavoidable, and *essential to type safety*:
> "However  even in statically typed languages  there is often the need to 
> deal with data whose type cannot be determined at compile time.  For 
> example  full static typechecking of programs that exchange data with 
> other programs or access persistent data is in general not possible.  A 
> certain amount of dynamic checking must be performed in order to 
> preserve type safety." (from "Dynamic Typing in a Statically Typed 
> Language" available at 
> <http://www.cis.upenn.edu/~bcpierce/papers/dynamic.ps>)

Yes, for the obvious case where you write:

    {
        Apple var;
        scanf("%{Apple}\n",&var);
    }

and you enter a dead dog. Happens every day...




Doesn't he  realize that ALL input  is always done only  with the type
(INTEGER 0 255) {unsigned char in a more statically typed language}?

Who needs dynamic typing to read a keyboard input or a file containing:

Dead Dog Number 42 { name=Dogbert; death-date=2003-11-19; };

I for sure only need (INTEGER 0 255)...
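
[Editorial sketch: stripped of the sarcasm, Pascal's point is that all external input arrives as raw octets, and turning octets into a typed value necessarily involves a run-time check that can fail. A minimal Python illustration (the field name is made up):]

```python
def parse_age(raw: bytes) -> int:
    # All input is just octets (INTEGER 0 255, as Pascal puts it);
    # making a typed int of it requires a check that can fail at run time.
    text = raw.decode("ascii").strip()
    if not text.isdigit():
        raise ValueError(f"not a non-negative integer: {text!r}")
    return int(text)

print(parse_age(b"42"))  # 42
```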


-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Isaac Gouy
Subject: Re: More static type fun.
Date: 
Message-ID: <ce7ef1c8.0311181349.3e1c488c@posting.google.com>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> wrote in message news:<······································@netnews.attbi.com>...
> Here's a quote. In article <··············@tti5.uchicago.edu>, you,  
> (Matthias Blume) wrote:
> 
> > Lisp has sets of values (as does every other programming language).
> > By abuse of terminology, some of these sets end up being called
> > "types".
> 
> The abuse of terminology is yours, since these entities have never been 
> called "sets of values" in lisp. There is no "sets-of-values-error," nor 
> "sets-of-values-of," nor "sets-of-values-p," nor "check-sets-of-values," 
> nor "sets-of-values specifier" in the ANSI Common Lisp standard. No one 
> has ever called Lisp a "dynamically sets-of-valued language." They have 
> been called "types," and lisp and smalltalk have been called 
> "dynamically typed languages" for decades.

Indeed, Smalltalk has been called "dynamically typed" but what does
that mean? We can probably agree that "Smalltalk uses run-time
type-checking" - maybe it would be clearer to describe Smalltalk as a
dynamically-checked language?

The authors of "A Type System for Smalltalk" (1990) were quite clear
that Smalltalk is 'untyped' - and they have always been Smalltalk
advocates.

http://citeseer.nj.nec.com/graver90type.html

best wishes, Isaac
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <raffaelcavallaro-D460DA.19305118112003@netnews.attbi.com>
In article <····························@posting.google.com>,
 ·····@yahoo.com (Isaac Gouy) wrote:

> maybe it would be clearer to describe Smalltalk as a
> dynamically-checked language?

Maybe it would be clearer if certain static typing advocates stopped 
trying to appropriate the term "type" by narrowly redefining it so that 
it only applies to statically typed languages.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1vfph78x6.fsf@tti5.uchicago.edu>
Joe Marshall <·············@comcast.net> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > They said a whole lot of things, but none of it amounted to a
> > convincing explanation of what the term /type system/ means when used
> > by the dynamic typing community.
> 
> I'm sure that you have some idea of what is meant.  Certainly the term
> `dynamic type system' is frequently found in papers written by people
> in the static type community.

Just for kicks, I checked Citeseer with the exact search term "dynamic
type system".  It reports an astounding number of hits: 2 (in words: two).
This is actually much, much lower than I expected myself.

One paper is by Olin Shivers who at the time he wrote the paper seems
to have worn his dynamic typing hat. (I know he likes that hat.) I
highly respect Olin, but I wish he would have used slightly different
terminology.

The other paper is by three folks from Brasil -- which, I am ashamed
to admit, I have never heard of before.

Anyway, the claim that the term is "frequently used" (not to mention
in papers written by people in the static type community) seems a bit
far-fetched now.

> Informally, A `type' is an abstract collection of objects with a `type
> predicate' that can be applied to any object and determine whether it
> is a member of the collection.

See, now you are already deviating even from what can be found in the
dynamic typing community.  Type membership cannot always be decided
algorithmically.  Ask your colleagues at NEU!
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <llqd5t84.fsf@ccs.neu.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> Joe Marshall <·············@comcast.net> writes:
>
>> Matthias Blume <····@my.address.elsewhere> writes:
>> 
>> > They said a whole lot of things, but none of it amounted to a
>> > convincing explanation of what the term /type system/ means when used
>> > by the dynamic typing community.
>> 
>> I'm sure that you have some idea of what is meant.  Certainly the term
>> `dynamic type system' is frequently found in papers written by people
>> in the static type community.
>
> Just for kicks, I checked Citeseer with the exact search term "dynamic
> type system".  It reports an astounding number of hits: 2 (in words: two).
> This is actually much, much lower than I expected myself.

Hmm.  I found 3:

Practical Soft Typing - Wright (1994)   (Correct)   (5 citations) 
:3 1.2 Dynamic Type Systems :
www.star-lab.com/wright/thesis.ps.gz 


Supporting dynamic languages on the Java virtual machine - Shivers (1996)   (Correct)   (4 citations) 
imposed by Scheme's polymorphism and dynamic type system. Scheme's polymorphism requires a "uniform 
www.ai.mit.edu/people/shivers/javaScheme.ps 

Using Reflexivity to Interface with CORBA - Ierusalimschy, Cerqueira.. (1998)   (Correct)   (1 citation) 
mapping its dynamic character to the dynamic type system of the language. In this way, a program
www.inf.puc-rio.br/~roberto/docs/iccl.ps.gz 

And there was this note:

  One or more of the query terms is very common - only partial results have been returned. 
 
Google says:
http://www.google.com/search?hl=en&lr=&ie=ISO-8859-1&q=site%3Aciteseer.nj.nec.com+%22dynamic+type+system%22&btnG=Google+Search

about 29 references

Google reports about 597 references for `static type system'

> Anyway, the claim that the term is "frequently used" (not to mention
> in papers written by people in the static type community) seems a bit
> far-fetched now.

It's not foreign to the community.

>> Informally, A `type' is an abstract collection of objects with a `type
>> predicate' that can be applied to any object and determine whether it
>> is a member of the collection.
>
> See, now you are already deviating even from what can be found in the
> dynamic typing community.  Type membership cannot always be decided
> algorithmically.  Ask your colleagues at NEU!

Yes, yes, yes, I know.  Do we have to do the whole `decidable' song
and dance again?  Yes I can define a type whose membership cannot
always be decided algorithmically, or one whose membership
can *never* be decided algorithmically, or one in which no
*interesting* members can be decided algorithmically.  In practice
people tend to prefer those types which have a non-zero number of
interesting decidable elements because they want to mechanically
reason about types and they don't want to wait for the computer to
solve the halting problem.  (That's why I said `informally'!)
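
[Editorial sketch: Joe's informal definition — a type as a collection of objects with a membership predicate — is easy to render in Python; the predicate names below are illustrative only, and, per Matthias's caveat, only some types have decidable predicates like these.]

```python
# A "type" in Joe's informal sense: a predicate that can be applied to
# any object.  Membership of these particular types is decidable.
def positive_integer_p(x):
    return isinstance(x, int) and x > 0

def even_positive_integer_p(x):
    return positive_integer_p(x) and x % 2 == 0

print(even_positive_integer_p(4))   # True
print(even_positive_integer_p(3))   # False
print(even_positive_integer_p(-2))  # False
```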
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1r80577b4.fsf@tti5.uchicago.edu>
Joe Marshall <···@ccs.neu.edu> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > Joe Marshall <·············@comcast.net> writes:
> >
> >> Matthias Blume <····@my.address.elsewhere> writes:
> >> 
> >> > They said a whole lot of things, but none of it amounted to a
> >> > convincing explanation of what the term /type system/ means when used
> >> > by the dynamic typing community.
> >> 
> >> I'm sure that you have some idea of what is meant.  Certainly the term
> >> `dynamic type system' is frequently found in papers written by people
> >> in the static type community.
> >
> > Just for kicks, I checked Citeseer with the exact search term "dynamic
> > type system".  It reports an astounding number of hits: 2 (in words: two).
> > This is actually much, much lower than I expected myself.
> 
> Hmm.  I found 3:
> 
> Practical Soft Typing - Wright (1994)   (Correct)   (5 citations) 
> :3 1.2 Dynamic Type Systems :
> www.star-lab.com/wright/thesis.ps.gz 

Oops, yes.  This one escaped my eye.

Ok, three then.

And since you brought it up, let's see what Andrew had to write about
"dynamic type systems" in section 1.2:

  "... A dynamic type system can be viewed as a static type system with
  only one datatype D, as in Figure 1.2.  All values are variants within
  D.  Hence all values have tags and procedures that do not accept all
  elements of D perform run-time checks.
    Because dynamically typed languages have only one datatype, the type
  assignment methods used by static type systems are useless.  If
  applied, these methods always yield a trivial consistent assignment of
  D to every expression and identifier.  Thus dynamic type systems,
  while they ensure that programs cannot misinterpret data, are unable
  to provide the benefits of type information."

His Figure 1.2, btw. looks like this:

    D = num | true | false | nil | (cons D D) | (D -> D)

The caption is "Datatype for a Dynamic Type System".

I could not have put it any better.

Matthias
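
[Editorial sketch: Wright's figure translates directly into a tagged union. A Python rendering of the single datatype D, with an explicit tag check standing in for the run-time checks Wright describes:]

```python
# Wright's single datatype D as an explicit tagged union:
#   D = num | true | false | nil | (cons D D) | (D -> D)
# Every value carries a tag; a procedure that does not accept all
# variants of D must check the tag at run time.
def num(x):     return ("num", x)
def cons(a, b): return ("cons", a, b)

def plus(a, b):
    if a[0] != "num" or b[0] != "num":
        raise TypeError("plus applied to a non-num variant of D")
    return num(a[1] + b[1])

print(plus(num(1), num(2)))  # ('num', 3)
```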
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <he115p2i.fsf@ccs.neu.edu>
Matthias Blume <····@my.address.elsewhere> writes:

> And since you brought it up, let's see what Andrew had to write about
> "dynamic type systems" in section 1.2:
>
>   "... A dynamic type system can be viewed as a static type system with
>   only one datatype D, as in Figure 1.2.  All values are variants within
>   D.  Hence all values have tags and procedures that do not accept all
>   elements of D perform run-time checks.
>     Because dynamically typed languages have only one datatype, the type
>   assignment methods used by static type systems are useless.  If
>   applied, these methods always yield a trivial consistent assignment of
>   D to every expression and identifier.  Thus dynamic type systems,
>   while they ensure that programs cannot misinterpret data, are unable
>   to provide the benefits of type information."
>
> His Figure 1.2, btw. looks like this:
>
>     D = num | true | false | nil | (cons D D) | (D -> D)
>
> The caption is "Datatype for a Dynamic Type System".
>
> I could not have put it any better.

The point of this exercise, other than fulfilling my daily quota of
frustration, was to point out that the expression `dynamic type
system' is *not* meaningless.  It may be trivial, it may be boring, it
may be simply an edge case of static type systems, it may be useless,
it may provide no benefits, whatever.  I'm not making any of those
claims.  I'm simply pointing out that the phrase `dynamic type system'
is not a self-contradictory statement.

It's hard enough getting people to agree on *definitions* of obvious
terms without actually engaging in discourse on them.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1n0at723k.fsf@tti5.uchicago.edu>
Joe Marshall <···@ccs.neu.edu> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> > And since you brought it up, let's see what Andrew had to write about
> > "dynamic type systems" in section 1.2:
> >
> >   "... A dynamic type system can be viewed as a static type system with
> >   only one datatype D, as in Figure 1.2.  All values are variants within
> >   D.  Hence all values have tags and procedures that do not accept all
> >   elements of D perform run-time checks.
> >     Because dynamically typed languages have only one datatype, the type
> >   assignment methods used by static type systems are useless.  If
> >   applied, these methods always yield a trivial consistent assignment of
> >   D to every expression and identifier.  Thus dynamic type systems,
> >   while they ensure that programs cannot misinterpret data, are unable
> >   to provide the benefits of type information."
> >
> > His Figure 1.2, btw. looks like this:
> >
> >     D = num | true | false | nil | (cons D D) | (D -> D)
> >
> > The caption is "Datatype for a Dynamic Type System".
> >
> > I could not have put it any better.
> 
> The point of this exercise, other than fulfilling my daily quota of
> frustration, was to point out that the expression `dynamic type
> system' is *not* meaningless.  It may be trivial, it may be boring, it
> may be simply an edge case of static type systems, it may be useless,
> it may provide no benefits, whatever.  I'm not making any of those
> claims.  I'm simply pointing out that the phrase `dynamic type system'
> is not a self-contradictory statement.

Well, I (and others) have brought forward the very same point (namely
that a "dynamic type system" is merely one which has only one single
type) here several times and got *screamed at*.  So yes, you are
absolutely right when you say:

> It's hard enough getting people to agree on *definitions* of obvious
> terms without actually engaging in discourse on them.

... which I find highly frustrating, too.

At least we agree on something.

Matthias
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <d6bps0mv.fsf@comcast.net>
Matthias Blume <····@my.address.elsewhere> writes:

>
> Well, I (and others) have brought forward the very same point (namely
> that a "dynamic type system" is merely one which has only one single
> type) here several times and got *screamed at*.  So yes, you are
> absolutely right when you say:
>
>> It's hard enough getting people to agree on *definitions* of obvious
>> terms without actually engaging in discourse on them.
>
> ... which I find highly frustrating, too.

Let's leave the `one type', `more than one type' argument aside for a
bit.  I'll acknowledge that many people model dynamic types by adding
a `any' type to the system, but they usually add a runtime type
discrimination function as well.

> At least we agree on something.

Scary, eh?

-- 
~jrm
From: Jesse Tov
Subject: Re: More static type fun.
Date: 
Message-ID: <slrnbrm9r6.dlj.tov@tov.student.harvard.edu>
Matthias Blume <····@my.address.elsewhere>:
> The other paper is by three folks from Brasil -- which, I am ashamed
> to admit, I have never heard of before.

Roberto Ierusalimschy is one of the Lua folks.
http://www.lua.org/authors.html

Jesse
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpfugt$107o$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:
> Joe Marshall <·············@comcast.net> writes:
> 
>>Matthias Blume <····@my.address.elsewhere> writes:
>>
>>>They said a whole lot of things, but none of it amounted to a
>>>convincing explanation of what the term /type system/ means when used
>>>by the dynamic typing community.
>>
>>I'm sure that you have some idea of what is meant.  Certainly the term
>>`dynamic type system' is frequently found in papers written by people
>>in the static type community.
> 
> Just for kicks, I checked Citeseer with the exact search term "dynamic
> type system".  It reports an astounding number of hits: 2 (in words: two).
> This is actually much, much lower than I expected myself.

Google:

- "dynamic type system": 537
- "static type system": 2810

Citeseer:

- "static type system": 21


BTW, why do some of the highly-respected authors in the static 
typing community even use the term "static type system" when it is 
allegedly so evident that all type systems must be static?

Or, to put it differently, could you just stop behaving like a static 
type system and nitpicking about terminology? ;-)

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fbc9e13$1@news.unimelb.edu.au>
Matthias Blume <····@my.address.elsewhere> writes:

>Just for kicks, I checked Citeseer with the exact search term "dynamic
>type system".  It reports an astounding number of hits: 2 (in words: two).

Just for kicks, I tried googling for "type system".

The first two hits were nothing to do with programming or mathematics
(a blog publishing tool, and a system of proverbs).
Number three, the first programming-related entry,
was a reference to the .NET Common Type System.

I think it is dubious whether the .NET Common Type System
fits Benjamin Pierce's definition:

 | 	A type system is a tractable syntactic method for proving absence
 | 	of certain program behaviours by classifying phrases according
 | 	to the kinds of values they compute.

What MS refer to as the .NET Common Type System (CTS) is really just the
type hierarchy/algebra: the different kinds of types, and the rules for
their construction.  The ECMA standard says "... Type system - which
types [there] are and how to define them.".  These are both using the
phrase "type system" to refer to the system of types, NOT to the OPTIONAL
verification process which a .NET CLR implementation can use to enforce
type safety.

It's only when we get to entry number 8 on the search list (the fifth
programming-related entry) that we find an actual definition of type
system.  The definition is as follows:

 | 	Type System
 | 
 | 	Programming languages usually come with a type-system, a term
 | 	for some algebraic structure whose elements are the types of data
 | 	that can be manipulated in the language, together with a mapping
 | 	from the set of objects involved in defining the semantics of
 | 	the language into the typesystem.

Again, nothing about _proving_ anything.  The .NET CTS fits this definition.

So I think the evidence is mixed.  The phrase "dynamic type system"
doesn't seem to be used very often in academic papers, but the
most-popular-according-to-google uses of the term "type system"
seem to be referring to systems of types, not static checking.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Coby Beck
Subject: Re: More static type fun.
Date: 
Message-ID: <bpd0ic$25ui$1@otis.netspace.net.au>
"Matthias Blume" <····@my.address.elsewhere> wrote in message
···················@tti5.uchicago.edu...
> Pascal Costanza <········@web.de> writes:
>
> > Matthias Blume wrote:
> >
> > > Is having one [a type system -M.]  such a good thing after all, in
> > > your opinion?
> >
> > Yes, when checked at the right time.
>
> ???
>
> Type systems are not being checked.  Types are.

Why didn't you pretend he meant "when your opinion is checked.."  You had
three choices to fill in the blank, you should have chosen the *most*
ridiculous, not the second most ridiculous possible object for "checked."

> (I guess your non-answer is a pretty good illustration of where the
> communication problem is.)

I agree it is obvious where the communication problem is.

-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m18ymd8up8.fsf@tti5.uchicago.edu>
"Coby Beck" <·····@mercury.bc.ca> writes:

> "Matthias Blume" <····@my.address.elsewhere> wrote in message
> ···················@tti5.uchicago.edu...
> > Pascal Costanza <········@web.de> writes:
> >
> > > Matthias Blume wrote:
> > >
> > > > Is having one [a type system -M.]  such a good thing after all, in
> > > > your opinion?
> > >
> > > Yes, when checked at the right time.
> >
> > ???
> >
> > Type systems are not being checked.  Types are.
> 
> Why didn't you pretend he meant "when your opinion is checked.."  You had
> three choices to fill in the blank, you should have chosen the *most*
> ridiculous, not the second most ridiculous possible object for "checked."

Strictly speaking there was no such choice.  The sentences to which
Pascal was replying read:

  "By the way, I have no idea why people get offended when they are
   told their favorite language does not have a type system.  Is having
   one such a good thing after all, in your opinion?"

The noun phrases in here are "people" , "their favorite language",
"type system", "one" (referring to "type system"), "good thing", and
"opinion".  Among these it seems that "type system" is the least
ridiculous choice.

But I agree with you that it was clear to me that Pascal actually
meant "type".  But "type" was not even being discussed here, which is
why I found the response somewhat -- how shall I put it -- lacking.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpfrho$sug$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:
> "Coby Beck" <·····@mercury.bc.ca> writes:
> 
> 
>>"Matthias Blume" <····@my.address.elsewhere> wrote in message
>>···················@tti5.uchicago.edu...
>>
>>>Pascal Costanza <········@web.de> writes:
>>>
>>>
>>>>Matthias Blume wrote:
>>>>
>>>>
>>>>>Is having one [a type system -M.]  such a good thing after all, in
>>>>>your opinion?
>>>>
>>>>Yes, when checked at the right time.
>>>
>>>???
>>>
>>>Type systems are not being checked.  Types are.
>>
>>Why didn't you pretend he meant "when your opinion is checked.."  You had
>>three choices to fill in the blank, you should have chosen the *most*
>>ridiculous, not the second most ridiculous possible object for "checked."
> 
> 
> Strictly speaking there was no such choice.  The sentences to which
> Pascal was replying read:
> 
>   "By the way, I have no idea why people get offended when they are
>    told their favorite language does not have a type system.  Is having
>    one such a good thing after all, in your opinion?"
> 
> The noun phrases in here are "people" , "their favorite language",
> "type system", "one" (referring to "type system"), "good thing", and
> "opinion".  Among these it seems that "type system" is the least
> ridiculous choice.
> 
> But I agree with you that it was clear to me that Pascal actually
> meant "type".  But "type" was not even being discussed here, which is
> why I found the response somewhat -- how shall I put it -- lacking.

That's what I don't like about static type systems. They only address the 
superficial bugs, without being able to get at the deeper level - the 
one that is of actual interest. ;-P


Pascal

P.S.: This is a joke. I know that type systems are better than this.

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Christopher C. Stacy
Subject: Re: More static type fun.
Date: 
Message-ID: <usmkm533c.fsf@dtpq.com>
>>>>> On Mon, 17 Nov 2003 17:31:42 +0100, Andreas Rossberg ("Andreas") writes:

 Andreas> Christopher C. Stacy wrote:
 >> 
 Andreas> Fergus was pointing at the fact that values of all these
 Andreas> things are represented by tagging, i.e. the low-level
 Andreas> representation type is actually a single union type.
 >> 
 >> I am not sure what you mean by this in this context.

 Andreas> Fergus explained it himself.

 >> Could you please explain what you mean by, "low-level representation
 >> type is actually a single union type" and why you think it is important?

 Andreas> I didn't say it was. Nor do I think it is. It's a mere technicality.

 >> Or is this just more trolling of the form, "We define a type system
 >> to mean a statically typed system, which Lisp is not, therefore Lisp
 >> doesn't have a type system"?

 Andreas> Think what you want, call me names, start a new science,
 Andreas> I'm out of this stupid discussion where even the attempt to
 Andreas> clarify terminology is taken as offense.

Thank goodness.
From: Adam Warner
Subject: Re: More static type fun.
Date: 
Message-ID: <pan.2003.11.17.09.48.31.156182@consulting.net.nz>
Hi Fergus Henderson,

> Don Geddis <···@geddis.org> writes:
> 
>>When programming people (in general) talk about types, they're talking
>>about data representation within computer programs.
> 
> For that sense of "type", it would be appropriate to say that Lisp has only
> one type.

Fergus, the 1960s are calling. They want their joke back.
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpakc9$98e$1@grizzly.ps.uni-sb.de>
Don Geddis wrote:
>> > A type simply is what is expressible in a given type system (and a type
>> > system is always something static in its original meaning).
> 
> Of course!  So now it's clear why Lisp can't have types: you've defined
> the
> terms explicitly to exclude it!

I haven't, I just pointed at what seems to be an agreed-upon (by authorities 
of the field) definition of "type system".

> But that's simply not an interesting definition.  When programming people
> (in general) talk about types, they're talking about data representation
> within
> computer programs.

That may be true for the C folks.

> Your "types" are a strict subset of this more
> interesting
> concept.

No, the other way round, they are a *superset* of this concept (and hence 
more interesting).

> But come on, your static types aren't really mathematical types either.

Any argument for backing that up?

> In the end, what matters is whether a subroutine can handle the inputs or
> not.  And that's a question of data representation, not a question of
> abstract platonic mathematical types.  A function that is well defined for
> the integer 3 won't necessarily compute correctly if you pass in a float
> 3.0.
> 
> The issue of data representation _is_ the important one for computer type
> systems,

Sorry, but your idea of what is interesting about type systems in 
programming languages may have been the state of affairs in the 1950s, but 
not today.

> so I don't understand why you pretend that you're working with
> some grander notion of mathematical types, and why you denigrate systems
> that do inference over sets of values.

I never made any such discriminating statement.

        - Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <r80an6e2.fsf@ccs.neu.edu>
Andreas Rossberg <········@ps.uni-sb.de> writes:

> [Re-added c.l.f, since I don't read c.l.l on a regular basis.]
>
> Joe Marshall wrote:
>
>> I'm a bit confused by what static typists mean by the word `type'.  It
>> often seems to be used as if it means `a property that is statically
>> analyzable'.
>
> A type simply is what is expressible in a given type system (and a type 
> system is always something static in its original meaning).
>
>> Is `integer' a type?  How about `positive integer'?  How about `even
>> positive integer'?  How about `real integer solutions to a^2 + b^2 =
>> c^2 where a and b are 3 and 4 respectively'?
>
> All of these are expressible as types in suitable type systems.  Such type 
> systems actually exist.  Of course, you will hardly find them in today's 
> programming languages, because they tend to fail to strike a good balance 
> between expressiveness and complexity.

Ok, so suppose that I restrict my input types to explicit singletons
and demand the narrowest provable type be used at all stages of
analysis?  Obviously this may not terminate, but would the result not
be either an error or the actual answer?
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpajvs$8vl$2@grizzly.ps.uni-sb.de>
Joe Marshall wrote:
> 
> Ok, so suppose that I restrict my input types to explicit singletons
> and demand the narrowest provable type be used at all stages of
> analysis?  Obviously this may not terminate, but would the result not
> be either an error or the actual answer?

No, in general you can only approximate the answer, because the exact one is 
not provable within the type system.

I can see where you are heading: if you can make static typing that precise 
then you could just as well call running a program to check for dynamic type 
errors static type checking, right?

But that is not the case. Type checking is a form of abstract 
interpretation, it is not the same as running a program. Type checking is 
compositional, unlike execution.

Assume a hypothetical type system that rejects division by 0, and consider 
some definitions:

  f x y = x / y
  a = f 4 2
  b = f 4 0

The type system classifies f by only looking at it once. With that 
classification knowledge alone it can then deduce that a is well-typed but 
b isn't. OTOH, actually running the code would need to execute f twice to 
find that.

In particular this implies that type checking does not go into recursion 
where the program would. (Your comment about non-termination seems to stem 
from this misunderstanding. If type checking recursed that way, it 
would actually be incomplete.)

Also note that running may depend on, or produce, side-effects in impure 
languages, while type checking may not.
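
[Editorial sketch: the compositionality Andreas describes can be caricatured in a few lines of Python — a toy "checker" that classifies f once (its divisor argument must be non-zero) and then judges each call site against that classification, without ever executing f. This illustrates the idea only; it is nothing like a real type system.]

```python
# Toy compositional check for `f x y = x / y`: analyse f once,
# record which parameter positions are used as divisors, then judge
# each call site against that record without running f.
f_divisor_positions = {1}   # f's classification, computed "by looking once"

def well_typed_call(args):
    # Reject a call whose divisor argument is the literal 0.
    return all(args[i] != 0 for i in f_divisor_positions)

print(well_typed_call((4, 2)))  # True   (a = f 4 2 is accepted)
print(well_typed_call((4, 0)))  # False  (b = f 4 0 is rejected)
```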

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <65hhwshj.fsf@ccs.neu.edu>
Andreas Rossberg <········@ps.uni-sb.de> writes:

> Joe Marshall wrote:
>> 
>> Ok, so suppose that I restrict my input types to explicit singletons
>> and demand the narrowest provable type be used at all stages of
>> analysis?  Obviously this may not terminate, but would the result not
>> be either an error or the actual answer?
>
> No, in general you can only approximate the answer, because the exact one is 
> not provable within the type system.
>
> I can see where you are heading: if you can make static typing that precise 
> than you could as well call running a program to check for dynamic type 
> errors static type checking, right?

See  http://okmij.org/ftp/Computation/type-arithmetics.html
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bpfeu5$rbi$1@f1node01.rhrz.uni-bonn.de>
Joe Marshall wrote:

> See  http://okmij.org/ftp/Computation/type-arithmetics.html

This is a very cool link!

Thanks!


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Darius
Subject: Re: More static type fun.
Date: 
Message-ID: <20031119052130.0000780d.ddarius@hotpop.com>
On Wed, 19 Nov 2003 10:59:33 +0100
Pascal Costanza <········@web.de> wrote:

> Joe Marshall wrote:
> 
> > See  http://okmij.org/ftp/Computation/type-arithmetics.html
> 
> This is a very cool link!

Haskell 98 + multi-parameter typeclasses/functional dependencies and a
"let the typechecker loop" option (-fallow-undecidable-instances in GHC)
gives the same thing (a dependent type system).  See 
www.haskell.org/hawiki/SimulatingDependentTypes and the links it
references.

Cayenne, one of those Haskell offspring Dirk or Matthias mentioned,
directly supports dependent types. Its type-level language is the
same as its term-level language.
http://www.math.chalmers.se/~augustss/cayenne/
From: Mario S. Mommer
Subject: Re: More static type fun.
Date: 
Message-ID: <fzsmkr3z47.fsf@cupid.igpm.rwth-aachen.de>
Andreas Rossberg <········@ps.uni-sb.de> writes:
> Joe Marshall wrote:
> > 
> >> In the proper meaning of words, Lisp is only typed in the completely
> >> trivial
> >> sense I sketched above.  Lisp may have something called types, but they
> >> aren't types in any standard formal sense.
> > 
> > This is ridiculous.  Lisp types can be as formalized as any other type
> > system.  How formal a type system is has nothing to do with whether
> > one can statically analyze code.
> 
> You missed the point. It's not that you cannot formalize what Lisp does, 
> it's just that it isn't a "type system" in a plausible sense.

Where "plausible" gets to be defined by you, right?
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bp2jts$57h$3@grizzly.ps.uni-sb.de>
Mario S. Mommer wrote:
>> 
>> You missed the point. It's not that you cannot formalize what Lisp does,
>> it's just that it isn't a "type system" in a plausible sense.
> 
> Where "plausible" gets to be defined by you, right?

No, by well-established interpretation and a useful definition, as I 
explained further on.

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Mario S. Mommer
Subject: Re: More static type fun.
Date: 
Message-ID: <fz7k233vw1.fsf@cupid.igpm.rwth-aachen.de>
Andreas Rossberg <········@ps.uni-sb.de> writes:
> Mario S. Mommer wrote:
> >> 
> >> You missed the point. It's not that you cannot formalize what Lisp does,
> >> it's just that it isn't a "type system" in a plausible sense.
> > 
> > Where "plausible" gets to be defined by you, right?
> 
> No, by well-established interpretation and a useful definition, as I 
> explained further on.

If this useful definition of yours implies that modern Common Lisp
(for instance) does not have a type system, then it is like having a
definition of tree that does not accept oaks. Sorry, that is not my
definition of useful. Maybe it is useful to some because it helps
discredit perfectly fine languages, or maybe for some other reason; I
don't know, and I won't investigate this further.
From: Ingvar Mattsson
Subject: Re: More static type fun.
Date: 
Message-ID: <87islnhx13.fsf@gruk.tech.ensign.ftech.net>
Mario S. Mommer <········@yahoo.com> writes:

> Andreas Rossberg <········@ps.uni-sb.de> writes:
> > Mario S. Mommer wrote:
> > >> 
> > >> You missed the point. It's not that you cannot formalize what Lisp does,
> > >> it's just that it isn't a "type system" in a plausible sense.
> > > 
> > > Where "plausible" gets to be defined by you, right?
> > 
> > No, by well-established interpretation and a useful definition, as I 
> > explained further on.
> 
> If this usefull definition of yours implies that modern Common Lisp
> (for instance) does not have a type system, then it is like having a
> definition of tree that does not accept oaks. Sorry, that is not my
> definition of useful. Maybe it is useful to some because it helps
> discredit perfectly fine languages, or maybe for some other reason; I
> don't know, I won't investigate this further.

It's probably a sensible definition for what I shall (for the purpose
of this small discourse) call "scientific type theory as applied to
computing".

It is obviously a good thing having what one studies well-defined, so
one can see what is and what isn't encompassed by one's theory.
However, all involved are sure they know what is meant by the small
word "type". It's like discussing moral philosophy with a layman. The
domain specialist says "ethics" and the layman thinks "aha, morals"
instead of "a theory of morals" and all gets confused.

Then, the layman says "ethics" and the domain specialist gets all
confused, since the layman attributes several non-standard things to
the "ethics" and it all ends up with everyone shouting.

Now, for the purpose of "scientific type theory as applied to
computing" most people in c.l.l would probably classify as laymen and
the most vocal people in this thread would classify as "domain
specialists" since the two sides *clearly* mean different things with
the simple word "type", thus making discourse rather meaningless.

This will hopefully be my last incursion into this thread, since it
has now squarely left the Kansas I am used to for the Oz of
mathematical type theory (or is that the Oz I am used to and the
Kansas of ... Doesn't matter). While I do find the theory somewhat
appealing and probably worth a study, I am not at a stage where I can
commit the time I need to separate the "type" from the "type", as it
were.

//Ingvar
-- 
((lambda (x y l) (format nil "~{~a~}" (loop for a in x for b in y with c = t
if a collect (funcall (if c #'char-upcase #'char-downcase) (elt (elt l a) b))
else collect #\space if c do (setq c ())))) '(76 1 0 0 nil 0 nil 0 3 0 5 nil 0
0 12 0 0 0) '(2 2 16 8 nil 1 nil 2 4 16 2 nil 9 1 1 13 10 11) (sort (loop for
foo being the external-symbols in :cl collect (string-upcase foo)) #'string<))
From: Christopher C. Stacy
Subject: Re: More static type fun.
Date: 
Message-ID: <ufzgqaj8g.fsf@dtpq.com>
>>>>> On 14 Nov 2003 13:58:32 +0000, Ingvar Mattsson ("Ingvar") writes:

 Ingvar> Mario S. Mommer <········@yahoo.com> writes:
 >> Andreas Rossberg <········@ps.uni-sb.de> writes:
 >> > Mario S. Mommer wrote:
 >> > >> 
 >> > >> You missed the point. It's not that you cannot formalize what Lisp does,
 >> > >> it's just that it isn't a "type system" in a plausible sense.
 >> > > 
 >> > > Where "plausible" gets to be defined by you, right?
 >> > 
 >> > No, by well-established interpretation and a useful definition, as I 
 >> > explained further on.
 >> 
 >> If this usefull definition of yours implies that modern Common Lisp
 >> (for instance) does not have a type system, then it is like having a
 >> definition of tree that does not accept oaks. Sorry, that is not my
 >> definition of useful. Maybe it is useful to some because it helps
 >> discredit perfectly fine languages, or maybe for some other reason; I
 >> don't know, I won't investigate this further.

 Ingvar> It's probably a sensible definition for what I shall (for the purpose
 Ingvar> of this small discourse) call "scientific type theory as applied to
 Ingvar> computing".

Yes, adding the word "scientific" to a claim always makes the claim more believable...
From: Arthur Lemmens
Subject: Re: More static type fun.
Date: 
Message-ID: <oprym0fpxik6vmsw@news.xs4all.nl>
Andreas Rossberg <········@ps.uni-sb.de> wrote:

> "A type system is a tractable syntactic method for proving the absence of certain program behaviour by classifying phrases according to the kinds of values they compute."

As in:

(defmethod serialize ((x float) stream)
  (serialize-float x stream))

(defmethod serialize ((x string) stream)
  (serialize-string x stream))

Looks like I have "a tractable syntactic method" here to prove that
serialize-float will be called for floats only, and serialize-string
for strings only. So I suppose Lisp must have some kind of type
system after all.
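
For comparison, a rough Python analogue of the dispatch above, using
functools.singledispatch (my sketch; the stand-ins for serialize-float
and serialize-string are invented for illustration):

```python
from functools import singledispatch

@singledispatch
def serialize(x, stream):
    raise TypeError(f"no serializer for {type(x).__name__}")

@serialize.register
def _(x: float, stream):
    stream.append("F" + repr(x))   # stand-in for serialize-float

@serialize.register
def _(x: str, stream):
    stream.append("S" + x)         # stand-in for serialize-string

out = []
serialize(1.5, out)    # dispatches on float
serialize("hi", out)   # dispatches on str
```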

> It is obvious that Lisp is not even remotely covered by this (reasonable) definition.

The only thing that's obvious is that you don't know enough Lisp to make
such blanket statements. So please don't do that; it's annoying to people
who do know Lisp.

Arthur Lemmens
From: Feuer
Subject: Re: More static type fun.
Date: 
Message-ID: <3FB56E8E.C5E22238@his.com>
Joe Marshall wrote:

> I'm happy to use any programming aid that doesn't require me to adapt.

I keep having to adapt to the untyped nature of Scheme by writing code
to check things that I really want a type system to make sure don't
have to be checked.

David
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <7k21zgpt.fsf@comcast.net>
Feuer <·····@his.com> writes:

> Joe Marshall wrote:
>
>> I'm happy to use any programming aid that doesn't require me to adapt.
>
> I keep having to adapt to the untyped nature of Scheme by writing code
> to check things that I really want a type system to make sure don't
> have to be checked.

Why?

@article{ wright97practical,
    author = "Andrew K. Wright and Robert Cartwright",
    title = "A Practical Soft Type System for Scheme",
    journal = "ACM Transactions on Programming Languages and Systems",
    volume = "19",
    number = "1",
    month = "January",
    publisher = "ACM Press",
    pages = "87--152",
    year = "1997",
    url = "citeseer.nj.nec.com/article/wright94practical.html" }

@inproceedings{ henglein95safe,
    author = "Fritz Henglein and Jakob Rehof",
    title = "Safe Polymorphic Type Inference for a Dynamically Typed Language: Translating {Scheme} to {ML}",
    booktitle = "Proc. {ACM} Conf. on Functional Programming Languages and Computer Architecture ({FPCA}), La Jolla, California",
    publisher = "ACM Press",
    year = "1995",
    url = "citeseer.nj.nec.com/henglein95safe.html" }

@article{ jenkins96polymorphic,
    author = "Steven Jenkins and Gary T. Leavens",
    title = "Polymorphic Type-Checking in Scheme",
    journal = "Computer Languages",
    volume = "22",
    number = "4",
    pages = "215--223",
    year = "1996",
    url = "citeseer.nj.nec.com/jenkins97polymorphic.html" }

One of the primary objections to static typing is that it is `in your
face'.  You cannot `opt out'.  Scheme and Lisp let you opt out.

But if you don't *want* to opt out, no one is forcing you.
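
A minimal sketch of what "opting in" can look like in a dynamic language
(Python here; the checked decorator is my own invention, not any of the
cited soft-type systems): functions you annotate get checked, everything
else stays fully dynamic.

```python
import inspect

def checked(fn):
    # Opt-in checking: enforced only for parameters that carry
    # a type annotation; unannotated code is left alone.
    sig = inspect.signature(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            ann = fn.__annotations__.get(name)
            if ann is not None and not isinstance(value, ann):
                raise TypeError(
                    f"{name}: expected {ann.__name__}, "
                    f"got {type(value).__name__}")
        return fn(*args, **kwargs)
    return wrapper

@checked
def scale(x: float, factor: int):
    return x * factor

def plain_add(a, b):   # no annotations: stays fully dynamic
    return a + b
```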

-- 
~jrm
From: Tayss
Subject: Re: More static type fun.
Date: 
Message-ID: <5627c6fa.0311140855.f6ffdd7@posting.google.com>
Andreas Rossberg <········@ps.uni-sb.de> wrote in message news:<············@grizzly.ps.uni-sb.de>...
> >> OTOH, allowing the user to arbitrarily state this type would make the
> >> type system unsound, i.e. you'd loose all guarantees the type system can
> >> make and hence almost all of its advantages.
> > 
> > And none of its disadvantages.
> 
> You loose almost all of its advantages, and none of its disadvantages? Oh, 
> then it's even worse than I thought. Why should anybody possibly want that? 
> ;-)

You know, that is a pretty embarrassing error, since lisp's sexps are
known for being the foundation of a few knowledge representation
languages.  Maybe I should ditch English and speak in Lisp.

Or join the Rice Blackboard Debating Society:
http://www.cs.rice.edu/~eallen/debate/

Of course, no doubt a static type system for English would catch these
errors...
From: Stefan Ljungstrand
Subject: Re: More static type fun.
Date: 
Message-ID: <Pine.SOL.4.30.0311121737140.730-100000@grosse.mdstud.chalmers.se>
On Wed, 12 Nov 2003, Andreas Rossberg wrote:

> Tayss wrote:
> >> > Out of curiosity, are there any sufficiently powerful static type
> >> > systems out there that allow one to specify something like "x belongs
> >> > to type will-not-result-in-static-type-error"?  This sounds like a
> >> > perfectly good type, but I am not familiar enough with current static
> >> > systems.
> >>
> >> Yes, because in a language with a static type system, all expressions
> >> would belong to such a type, by definition.
> >
> > In that case, this can become a default type in certain languages
> > whose features otherwise tend to oppose static typing.
>
> I think you missed the bit of irony in my answer. ;-)
>
> What you want - at least the way you formulated it - exists in Lisp already.
> You see, Lisp is a statically typed language - it just happens to have only
> one universal type. If you want, you can call that type
> "will-not-result-in-static-type-error". Unfortunately, that does not buy
> you anything...
>
> So let me assume you rather meant something like "Can ordinary static type
> systems express a universal type?", i.e. a type that fits everywhere? Yes,
> it is trivial, it would be the type "forall T.T".

Err, do you mean "exists T.T" ?
Or something like Dynamic in Clean or univ in Mercury ?
(Just trying to understand what you are thinking of.)

> OTOH, allowing the user to arbitrarily state this type would make the type
> system unsound, i.e. you'd loose all guarantees the type system can make
> and hence almost all of its advantages. As Pascal pointed out, something
> like that has been done (many times, in many different ways, in fact) and
> is usually called soft typing.
>
> --
> Andreas Rossberg, ········@ps.uni-sb.de
>


--
Stefan Lj
md9slj

The infinity that can be finitely expressed is not the true infinity
From: Stefan Ljungstrand
Subject: Re: More static type fun.
Date: 
Message-ID: <Pine.SOL.4.30.0311201051380.2613-100000@grosse.mdstud.chalmers.se>
(Sorry to respond so late, seems an earlier post didn't make it.)

On Wed, 12 Nov 2003, Andreas Rossberg wrote:

> Tayss wrote:
> >> > Out of curiosity, are there any sufficiently powerful static type
> >> > systems out there that allow one to specify something like "x belongs
> >> > to type will-not-result-in-static-type-error"?  This sounds like a
> >> > perfectly good type, but I am not familiar enough with current static
> >> > systems.
> >>
> >> Yes, because in a language with a static type system, all expressions
> >> would belong to such a type, by definition.
> >
> > In that case, this can become a default type in certain languages
> > whose features otherwise tend to oppose static typing.
>
> I think you missed the bit of irony in my answer. ;-)
>
> What you want - at least the way you formulated it - exists in Lisp already.
> You see, Lisp is a statically typed language - it just happens to have only
> one universal type. If you want, you can call that type
> "will-not-result-in-static-type-error". Unfortunately, that does not buy
> you anything...
>
> So let me assume you rather meant something like "Can ordinary static type
> systems express a universal type?", i.e. a type that fits everywhere? Yes,
> it is trivial, it would be the type "forall T.T".

Hmm, you're sure you didn't mean the type "exists T.T" ?

(Note that a value of this type would be rather useless, because even
 though one could make any expression have this type, one couldn't
 operate on it in any way (except passing around, duplicating the
 reference and dropping it), *unless* you provide some sort of (fallible)
 "downcast" or similar. Like Dynamic in Clean and univ in Mercury do.
)
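
A loose Python analogue of this point (mine, purely illustrative): a value
known only as object can be duplicated or dropped, but doing anything
useful with it requires a fallible "downcast" such as an isinstance test.

```python
def downcast_int(x: object):
    # The fallible "downcast": succeeds only when x really is an int.
    return x if isinstance(x, int) else None

def use(x: object):
    y = x          # duplicating the reference is always permitted
    del y          # dropping it is permitted too
    n = downcast_int(x)
    # Only after the downcast can we actually operate on the value.
    return n + 1 if n is not None else None
```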

> OTOH, allowing the user to arbitrarily state this type would make the type
> system unsound, i.e. you'd loose all guarantees the type system can make
> and hence almost all of its advantages. As Pascal pointed out, something
> like that has been done (many times, in many different ways, in fact) and
> is usually called soft typing.
>
> --
> Andreas Rossberg, ········@ps.uni-sb.de
>

--
Stefan Lj
md9slj

The infinity that can be finitely expressed is not the true infinity
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bpio8h$djo$1@grizzly.ps.uni-sb.de>
Stefan Ljungstrand wrote:
>>
>> So let me assume you rather meant something like "Can ordinary static
>> type systems express a universal type?", i.e. a type that fits
>> everywhere? Yes, it is trivial, it would be the type "forall T.T".
> 
> Hmm, you're sure you don't meant the type "exists T.T" ?

Tayss asked for a type that is universal in the sense that using something 
of this type does not trigger any static type errors. That's what forall 
T.T does. Of course, assigning this type to any actual value is always 
unsound, but that was exactly what he asked for, because he wanted to make 
the type system shut up since he "knew" what he was doing.

The existential type is "universal" in the dual sense: you can - safely - 
assign it to whatever expression you want. But as you say, that is rather 
useless, unless you have some form of type inspection.

        - Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bor43a$s7g$1@f1node01.rhrz.uni-bonn.de>
Tayss wrote:
> Out of curiosity, are there any sufficiently powerful static type
> systems out there that allow one to specify something like "x belongs
> to type will-not-result-in-static-type-error"?  This sounds like a
> perfectly good type, but I am not familiar enough with current static
> systems.
> 
> The big win is that it can then fit in well with lisp's philosophy of
> programmer versatility, by making such a type default.

Soft typing seems to me to be close. See http://c2.com/cgi/wiki?SoftTyping

AFAIK, CMU Common Lisp is also relatively advanced with respect to 
optional static typing.

Another approach I recall reading about is Strongtalk. See 
http://www.cs.ucsb.edu/projects/strongtalk/pages/index.html


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fb0e852@news.unimelb.edu.au>
"Coby Beck" <·····@mercury.bc.ca> writes:

>"Fergus Henderson" <···@cs.mu.oz.au> wrote in message
>> "Coby Beck" <·····@mercury.bc.ca> writes:
>>
>> >Give me the damn rope, if I hang myself, I promise I will not come crying
>> >to you!
>>
>> If you are writing programs all alone, and no-one else will ever need to
>> maintain them, that may be a reasonable request.
>
>You can not force me to write good code no matter how much protection you
>build into your language.

Sure.  But the language shouldn't go out of its way to help you write
_bad_ code, either.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <boqpr5$uhq$1@f1node01.rhrz.uni-bonn.de>
Fergus Henderson wrote:
> "Coby Beck" <·····@mercury.bc.ca> writes:
> 
>>"Fergus Henderson" <···@cs.mu.oz.au> wrote in message
>>
>>>"Coby Beck" <·····@mercury.bc.ca> writes:
>>>
>>>
>>>>Give me the damn rope, if I hang myself, I promise I will not come crying
>>>>to you!
>>>
>>>If you are writing programs all alone, and no-one else will ever need to
>>>maintain them, that may be a reasonable request.
>>
>>You can not force me to write good code no matter how much protection you
>>build into your language.
> 
> Sure.  But the language shouldn't go out of its way to help you write
> _bad_ code, either.

This should be taken for granted to have a constructive discussion!


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fb0fdef$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:
>Fergus Henderson wrote:
>>"Coby Beck" <·····@mercury.bc.ca> writes:
>>>"Fergus Henderson" <···@cs.mu.oz.au> wrote in message
>>>>"Coby Beck" <·····@mercury.bc.ca> writes:
>>>>
>>>>>Give me the damn rope, if I hang myself, I promise I will not come crying
>>>>>to you!
>>>>
>>>>If you are writing programs all alone, and no-one else will ever need to
>>>>maintain them, that may be a reasonable request.
>>>
>>>You can not force me to write good code no matter how much protection you
>>>build into your language.
>> 
>> Sure.  But the language shouldn't go out of its way to help you write
>> _bad_ code, either.
>
>This should be taken for granted to have a constructive discussion!

I thought this whole sub-thread was about whether or not languages should
provide facilities for violating encapsulation.  For a language to do
that, IMHO it would be going out of its way to help you write bad code.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Ingvar Mattsson
Subject: Re: More static type fun.
Date: 
Message-ID: <87znf23o30.fsf@gruk.tech.ensign.ftech.net>
Fergus Henderson <···@cs.mu.oz.au> writes:

> Pascal Costanza <········@web.de> writes:
> >Fergus Henderson wrote:
> >>"Coby Beck" <·····@mercury.bc.ca> writes:
> >>>"Fergus Henderson" <···@cs.mu.oz.au> wrote in message
> >>>>"Coby Beck" <·····@mercury.bc.ca> writes:
> >>>>
> >>>>>Give me the damn rope, if I hang myself, I promise I will not come crying
> >>>>>to you!
> >>>>
> >>>>If you are writing programs all alone, and no-one else will ever need to
> >>>>maintain them, that may be a reasonable request.
> >>>
> >>>You can not force me to write good code no matter how much protection you
> >>>build into your language.
> >> 
> >> Sure.  But the language shouldn't go out of its way to help you write
> >> _bad_ code, either.
> >
> >This should be taken for granted to have a constructive discussion!
> 
> I thought this whole sub-thread was about whether or not languages should
> provide facilities for violating encapsulation.  For a language to do
> that, IMHO it would be going out of its way to help you write bad code.

IMAO, if you need a straitjacket to avoid violating abstraction layers
in the normal course of things, you should possibly consider a career
outside programming.

That one *can* doesn't necessarily mean that one normally *does*. But
when one needs to, one can. One can then talk to whoever made the
abstraction and say "I couldn't do <thing>, but by breaking the
abstraction boundary thus, I could. Would updating the abstraction be
good or am I smoking bad crack?".

//Ingvar
-- 
When C++ is your hammer, everything looks like a thumb
	Latest seen from Steven M. Haflich, in c.l.l
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bor284$s78$1@f1node01.rhrz.uni-bonn.de>
Fergus Henderson wrote:
> Pascal Costanza <········@web.de> writes:
> 
>>Fergus Henderson wrote:

>>>Sure.  But the language shouldn't go out of its way to help you write
>>>_bad_ code, either.
>>
>>This should be taken for granted to have a constructive discussion!
> 
> 
> I thought this whole sub-thread was about whether or not languages should
> provide facilities for violating encapsulation.  For a language to do
> that, IMHO it would be going out of its way to help you write bad code.

No.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <boqog5$r62$1@grizzly.ps.uni-sb.de>
Coby Beck wrote:
> 
> You can not force me to write good code no matter how much protection you
> build into your language.  Lack of comments, badly named variables, cut
> and paste 10 times but fix the bug in only 8 instances and the poor design
> that leads to this, scatter-brained spaghetti code, lack of exception
> handling, misunderstood requirements, laziness, *these* are the enemies,
> not flexibility, power and freedom.

Note that spaghetti code stems from total freedom for control flow. Do you 
consider GOTO a good idea?

        - Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
 as kids, we would all be running around in darkened rooms, munching
 magic pills, and listening to repetitive electronic music."
 - Kristian Wilson, Nintendo Inc.
From: Mario S. Mommer
Subject: Re: More static type fun.
Date: 
Message-ID: <fzvfprgfln.fsf@cupid.igpm.rwth-aachen.de>
Andreas Rossberg <········@ps.uni-sb.de> writes:
> Note that spaghetti code stems from total freedom for control flow.

But lack of freedom won't prevent pasta coding.

> Do you consider GOTO a good idea?

Would you forbid goto completely? It seems to be quite popular for
machine generated code; that's why most languages have it, no matter
how high level they are.
From: Alexander Schmolck
Subject: Re: More static type fun.
Date: 
Message-ID: <yfs7k272d1c.fsf@black132.ex.ac.uk>
Mario S. Mommer <········@yahoo.com> writes:
> Andreas Rossberg <········@ps.uni-sb.de> writes:
> > Note that spaghetti code stems from total freedom for control flow.
> 
> But lack of freedom won't prevent pasta coding.

It won't eliminate it (or miraculously transform poor programmers into good
ones), but given the choice to maintain code written by average (read
mediocre) programmers in language A (with goto) or language B (same as A, but
with restricted goto a la Java), which one would you opt for?

> Would you forbid goto completely? It seems to be quite popular for
> machine generated code; that's why most languages have it, no matter
> how high level they are.

Which high level languages with goto do you have in mind -- apart from Common
Lisp (I honestly can't think of any)?

'as
From: Jon S. Anthony
Subject: Re: More static type fun.
Date: 
Message-ID: <m3y8unoruz.fsf@rigel.goldenthreadtech.com>
Alexander Schmolck <··········@gmx.net> writes:

> Mario S. Mommer <········@yahoo.com> writes:
> > Andreas Rossberg <········@ps.uni-sb.de> writes:
> > > Note that spaghetti code stems from total freedom for control flow.
> > 
> > But lack of freedom won't prevent pasta coding.
> 
> It won't eliminate it (or miraculously transform poor programmers into good
> ones), but given the choice to maintain code written by average (read
> mediocre) progammers in language A (with goto) or language B (same as A, but
> with restricted goto a la Java), which one would you opt for?
> 
> > Would you forbid goto completely? It seems to be quite popular for
> > machine generated code; that's why most languages have it, no matter
> > how high level they are.
> 
> Which high level languages with goto do you have in mind -- apart from Common
> Lisp (I honestly can't think of any)?

Depending on your definition of HLL, Ada does, which in the context of this
discussion should seem pretty ironic for the B&D advocates.  Of course,
it is in there pretty much for the reasons Mario cites.

/Jon
From: Alexander Schmolck
Subject: Re: More static type fun.
Date: 
Message-ID: <yfs3ccu3ox6.fsf@black132.ex.ac.uk>
·········@rcn.com (Jon S. Anthony) writes:

> Alexander Schmolck <··········@gmx.net> writes:
> > Which high level languages with goto do you have in mind -- apart from Common
> > Lisp (I honestly can't think of any)?

> Depending on your definition of HLL, Ada does, 

Any definition of HLL that encompasses languages without GC is highly suspect.

'as
From: Mario S. Mommer
Subject: Re: More static type fun.
Date: 
Message-ID: <fzk766hltr.fsf@cupid.igpm.rwth-aachen.de>
Alexander Schmolck <··········@gmx.net> writes:
> ·········@rcn.com (Jon S. Anthony) writes:
> 
> > Alexander Schmolck <··········@gmx.net> writes:
> > > Which high level languages with goto do you have in mind -- apart from Common
> > > Lisp (I honestly can't think of any)?
> 
> > Depending on your definition of HLL, Ada does, 
> 
> Any definition of HLL that encompasses languages without GC is highly suspect.

Then compile C++ with Boehm GC. There you go(to).
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bor301$ui4$1@f1node01.rhrz.uni-bonn.de>
Alexander Schmolck wrote:
> Mario S. Mommer <········@yahoo.com> writes:
> 
>>Andreas Rossberg <········@ps.uni-sb.de> writes:
>>
>>>Note that spaghetti code stems from total freedom for control flow.
>>
>>But lack of freedom won't prevent pasta coding.
> 
> It won't eliminate it (or miraculously transform poor programmers into good
> ones), but given the choice to maintain code written by average (read
> mediocre) progammers in language A (with goto) or language B (same as A, but
> with restricted goto a la Java), which one would you opt for?


I would try to make sure that they learn their tools better.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: More static type fun.
Date: 
Message-ID: <pan.2003.11.11.14.21.23.898991@knm.org.pl>
On Tue, 11 Nov 2003 15:11:16 +0100, Mario S. Mommer wrote:

> Would you forbid goto completely? It seems to be quite popular for
> machine generated code; that's why most languages have it, no matter
> how high level they are.

I disagree with "most languages". For example most functional languages
don't have goto but implement tail calls in constant space and have local
functions (so they don't really need goto that much).

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Tim Bradshaw
Subject: Re: More static type fun.
Date: 
Message-ID: <ey3fzguvt28.fsf@cley.com>
* Marcin Kowalczyk wrote:

> I disagree with "most languages". For example most functional languages
> don't have goto but implement tail calls in constant space and have local
> functions (so they don't really need goto that much).

Of course, the reason they don't need GOTO is because tail-call
optimisation and local functions *are* GOTO.  You can write amazingly
unstructured and obscure code with named LET.  But that's OK of
course, because it's pure, and won't corrupt your precious bodily
fluids the way GOTO does.  

Purity in programming languages as in other areas of life is all that
matters, as we all know.

--tim
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bor2lr$s7a$1@f1node01.rhrz.uni-bonn.de>
Andreas Rossberg wrote:
> Coby Beck wrote:
> 
>>You can not force me to write good code no matter how much protection you
>>build into your language.  Lack of comments, badly named variables, cut
>>and paste 10 times but fix the bug in only 8 instances and the poor design
>>that leads to this, scatter-brained spaghetti code, lack of exception
>>handling, misunderstood requirements, laziness, *these* are the enemies,
>>not flexibility, power and freedom.
> 
> 
> Note that spaghetti code stems from total freedom for control flow.

No, spaghetti code stems from abusing that freedom. Don't confuse 
technological and social problems.

> Do you 
> consider GOTO a good idea?

Many high-level languages I know have one or more forms of controlled 
goto. Exception handling, for example, is typically achieved with 
restricted gotos. As soon as you have continuations + proper tail 
calls, you can build your own goto abstractions. This is most probably 
also true for monads.
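The "exception handling is a restricted goto" point is easy to illustrate; a minimal Python sketch (function and exception names are illustrative):

```python
# Sketch: an exception as a restricted, non-local "goto" -- it can
# only jump outward to an enclosing handler, never to an arbitrary
# label, which is exactly the restriction that tames it.

class Found(Exception):
    def __init__(self, value):
        self.value = value

def find_first_negative(matrix):
    try:
        for row in matrix:
            for x in row:
                if x < 0:
                    raise Found(x)   # "goto" the handler below
    except Found as f:
        return f.value
    return None

print(find_first_negative([[3, 1], [4, -1, 5]]))  # → -1
```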


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m2d6by8iof.fsf@hanabi-air.shimizu.blume>
Pascal Costanza <········@web.de> writes:

> As soon as you have continuations + proper tail calls, you can build
> your own goto abstractions. This is most probably also true for
> monads.

Maybe you could briefly explain how having a monad gives you a way of
defining your own goto abstractions.  Just to be sure we are on the
same page...
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bosthd$oq8$1@newsreader2.netcologne.de>
Matthias Blume wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>As soon as you have continuations + proper tail calls, you can build
>>your own goto abstractions. This is most probably also true for
>>monads.
> 
> 
> Maybe you could briefly explain how having a monad gives you a way of
> defining your own goto abstractions.  Just to be sure we are on the
> same page...

I have said "most probably" because I am not sure. But I recall reading 
good arguments that monads can be understood as a generalization of 
continuations, and someone has shown me Haskell code that implements an 
exception handling facility - this is typically done with some form of goto.

From that I conclude that it is probably possible to implement your own 
goto abstraction with monads. Others can probably say more about this.


Pascal
From: Brian McNamara!
Subject: Re: More static type fun.
Date: 
Message-ID: <bosvkr$jjv$1@news-int.gatech.edu>
Pascal Costanza <········@web.de> once said:
>Matthias Blume wrote:
>> Maybe you could briefly explain how having a monad gives you a way of
>> defining your own goto abstractions.  Just to be sure we are on the
>> same page...
>
>I have said "most probably" because I am not sure. But I recall reading 
>good arguments that monads can be understood as a generalization of 
>continuations, and someone has shown me Haskell code that implements an 
>exception handling facility - this is typically done with some form of goto.
>
> From that I conclude that it is probably possible to implement your own 
>goto abstraction with monads. Others can probably say more about this.

Presumably you mean stuff like
   http://www.nomaware.com/monads/html/errormonad.html
   http://www.nomaware.com/monads/html/contmonad.html
as described in the "All about monads" tutorial:
   http://www.nomaware.com/monads/html/index.html

I'll leave it to you-all to decide if/how these fare as "goto
abstractions", but I think that's the "standard" monad stuff in this
arena.
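The error monad Brian links to can be sketched outside Haskell too. A minimal Either-style version in Python (names are illustrative, not from the tutorial): `bind` short-circuits once a failure occurs, which is how monadic code gets its exception-like control flow.

```python
# Minimal Either-style error monad: an Err value propagates through
# every subsequent bind untouched, so failure "jumps" past the rest
# of the computation -- the restricted goto under discussion.

class Err:
    def __init__(self, msg):
        self.msg = msg

def bind(m, f):
    """Apply f unless an error has already occurred."""
    return m if isinstance(m, Err) else f(m)

def safe_div(x, y):
    return Err("division by zero") if y == 0 else x / y

# (10 / 2) / 0 -- the second division fails and the failure propagates
result = bind(bind(10, lambda x: safe_div(x, 2)),
              lambda q: safe_div(q, 0))
print(result.msg if isinstance(result, Err) else result)  # → division by zero
```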

-- 
 Brian M. McNamara   ······@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m2brrgx55a.fsf@wireless-5-198-50.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> Matthias Blume wrote:
> 
> > Pascal Costanza <········@web.de> writes:
> > 
> >>As soon as you have continuations + proper tail calls, you can build
> >>your own goto abstractions. This is most probably also true for
> >>monads.
> > Maybe you could briefly explain how having a monad gives you a way
> > of
> > defining your own goto abstractions.  Just to be sure we are on the
> > same page...
> 
> I have said "most probably" because I am not sure. But I recall
> reading good arguments that monads can be understood as a
> generalization of continuations, and someone has shown me Haskell code
> that implements an exception handling facility - this is typically
> done with some form of goto.

This is jumping to (wrong) conclusions at several levels:

1.  Just because A is a generalization of B does not mean that
    everything that B can do A can do as well.(**)  Usually the opposite
    is true. (But I am aware of the fact that sometimes the word
    "generalization" is used in the sense of "taking the sum of all
    features".)

2.  Exceptions are a very restricted form of goto.  Being able to
    implement exceptions does not imply being able to implement
    general goto.  More generally, just because the implementation of
    a foo often uses a bar does not mean that you can synthesize a bar
    out of a foo.

3.  Some monads do give you continuations, but not all do.  Moreover,
    those that do do so only for programs written in monadic style.
    Even in the presence of a continuation monad, I can write my
    purely functional code without fear of someone randomly jumping
    out of and back into its execution.

For more detailed information see Phil Wadler's excellent
introduction to monads, "The essence of functional programming".

> From that I conclude that it is probably possible to implement your
> own goto abstraction with monads. Others can probably say more about
> this.

This sentence is phrased sufficiently ambiguously that it could be
construed as being true (see point 3 above) -- namely in the case that
you are thinking of a /specific/ monad.  To be clear: just because you
have a monad does not mean you can implement goto.  Only certain
monads give you that ability, and in those cases the ability is
already "built into" the monad in question.

The notion of a "monad" is an abstraction: a type constructor with two
polymorphic operations and three axioms that these operations must
satisfy.  If you say that monads gives you goto, then you are breaking
this abstraction and are thinking of one of its specific instances.
In other words, you use information that you don't have.

Maybe an example will make this clearer.  Here is a simple (and fairly
useless) monad in SML:

   type 'a M = 'a
   fun unitM x = x
   fun bindM x f = f x

Now go and build me a goto out of that!
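Matthias's identity monad transliterates directly; a Python sketch (not from the thread) shows why it offers nothing to build a goto from:

```python
# Matthias's SML identity monad in Python: unitM and bindM satisfy
# the three monad laws, yet bindM just applies f immediately -- there
# is no captured continuation, no jump, nothing to abuse.

def unitM(x):
    return x

def bindM(x, f):
    return f(x)

# The monad laws hold trivially:
f = lambda n: n + 1
g = lambda n: n * 2
assert bindM(unitM(3), f) == f(3)                                    # left identity
assert bindM(unitM(3), unitM) == unitM(3)                            # right identity
assert bindM(bindM(3, f), g) == bindM(3, lambda n: bindM(f(n), g))   # associativity
```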

The bottom line of all this is the following: Completely unrestricted
goto is certainly not present in every language, so any argument that
goes "those other features effectively give us unrestricted goto, so
we might as well give up and throw one in" does not carry any weight.

Matthias

(**) In what sense this description fits the relationship between monads
and continuations I will not try to discuss here.
From: Coby Beck
Subject: Re: More static type fun.
Date: 
Message-ID: <bosdom$1pvv$1@otis.netspace.net.au>
"Andreas Rossberg" <········@ps.uni-sb.de> wrote in message
·················@grizzly.ps.uni-sb.de...
> Coby Beck wrote:
> >
> > You can not force me to write good code no matter how much protection you
> > build into your language.  Lack of comments, badly named variables, cut
> > and paste 10 times but fix the bug in only 8 instances and the poor design
> > that leads to this, scatter-brained spaghetti code, lack of exception
> > handling, misunderstood requirements, laziness, *these* are the enemies,
> > not flexibility, power and freedom.
>
> Note that spaghetti code stems from total freedom for control flow.

Don't confuse freedom with anarchy!

> Do you
> consider GOTO a good idea?

I've never needed it, but i wouldn't take it away from someone else who
wanted it.

-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
From: Espen Vestre
Subject: Re: More static type fun.
Date: 
Message-ID: <kw8ymmj85f.fsf@merced.netfonds.no>
"Coby Beck" <·····@mercury.bc.ca> writes:

>> Note that spaghetti code stems from total freedom for control flow.
>
> Don't confuse freedom with anarchy!

Don't confuse anarchy with chaos!

;-)
-- 
  (espen)
From: Mario S. Mommer
Subject: Re: More static type fun.
Date: 
Message-ID: <fzfzgwjo0r.fsf@cupid.igpm.rwth-aachen.de>
Fergus Henderson <···@cs.mu.oz.au> writes:
> Mario S. Mommer <········@yahoo.com> writes:
> >As a user of libraries, I want the power to disagree with the
> >implementor's judgement, thank you very much.
> 
> As a maintainer of code, I want the power to update the implementation
> of an abstraction without having the world fall down because some lazy
> hacker violated abstraction boundaries, thank you very much.

If you write libraries that need to be hacked to do what the hard
working hacker with a deadline needs, it is *your* fault. If it was an
error of judgment on the part of the hacker, then this apocalyptic
cataclysm is his fault. Not yours, nor that of the people who know
what they are doing when they bend rules.

> >I've used libraries where the restrictions imposed were there only because
> >the implementor
> >
> >        A] Hadn't thought about that particular use.
> >
> >        B] Just because it was said in a book that making restrictions
> >        is always good.
> 
> So modify the library source.

But this implies that I have to distribute this new library with my
code! This abstraction breaking might only be a hack around a known
bug and will become unnecessary once the next release is out, etc.

> >If a programmer decides to break an abstraction, why do you think is
> >it a good thing to put obstacles in his way?
> 
> Because it makes maintenance and debugging easier.

Very nice, but why is it any of your business if somebody does not
want that in a particular special case? Why not let the programmer
decide what he needs? Do you think that your wisdom in designing
abstractions is infallible?

> Programmers can rely on abstractions keeping their invariants, which
> makes debugging easier.  They can also modify implementations of
> abstractions without having to examine all the rest of the code to
> see if it will break.

I understand your point of view. Generally, breaking an abstraction is
a bad idea, but sometimes it is exactly what is needed. Remember that
we do not live in a perfect world! I want the /option/ of breaking the
abstraction. The boundaries should be there to warn me, not to stand
in my way.
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fafb5a9$1@news.unimelb.edu.au>
Mario S. Mommer <········@yahoo.com> writes:
>Fergus Henderson <···@cs.mu.oz.au> writes:
>> Mario S. Mommer <········@yahoo.com> writes:
>> >As a user of libraries, I want the power to disagree with the
>> >implementor's judgement, thank you very much.
>> 
>> As a maintainer of code, I want the power to update the implementation
>> of an abstraction without having the world fall down because some lazy
>> hacker violated abstraction boundaries, thank you very much.
>
>If you write libraries that need to be hacked to do what the hard
>working hacker with a deadline needs, it is *your* fault.

Funny how that ties in with what I wrote earlier:

|My clients will complain that their programs broke, and even though it
|is arguably their fault, I will get a lot of the blame.

See what I mean?  Someone breaks my abstractions, and now _I_ get blamed!

Anyway, your point is fundamentally wrong.  My libraries were written
with _my_ requirements and design aims in mind.  Of course, to be maximally
reusable, I should aim for as much generality as possible, but often there
is a trade-off between generality, efficiency, simplicity, and other
desirable characteristics, so generality does not always win.

That hard working hacker with a deadline has a _different_ set of
requirements.  If my library does not meet _his_ requirements, that hardly
implies it is my fault.  To think that would be to commit the same sort of
error as blaming the Ariane-4 programmers for the failure of the Ariane-5.

>If it was an error of judgment on the part of the hacker, then this apocalyptic
>cataclysm is his fault.

Tell that to the accident investigators!
Or to the liquidators, when the company goes into receivership.

Or, more mundanely, let us consider the more optimistic case when the
bug gets caught by the test suite.  Now, it's _my_ change that has
caused the test failure, so _I_ need to track down the cause of the bug.
In order to do that, I'm probably going to need to understand in detail
the code written by the aforementioned lazy hacker.  That will soak up
a lot of my personal intellectual bandwidth and cache.  Even once I do
track down the cause of the problem, it still needs to be fixed.  I can
curse the lazy hacker all I like, but since he may have moved on to a
different project, most likely _I'm_ the one who will have to fix it.

>> >I've used libraries where the restrictions imposed were there only because
>> >the implementor
>> >
>> >        A] Hadn't thought about that particular use.
>> >
>> >        B] Just because it was said in a book that making restrictions
>> >        is always good.
>> 
>> So modify the library source.
>
>But this implies that I have to distribute this new library with my
>code!

<Shrug> No big deal.

>> >If a programmer decides to break an abstraction, why do you think is
>> >it a good thing to put obstacles in his way?
>> 
>> Because it makes maintenance and debugging easier.
>
>Very nice, but why is it any of your business if somebody does not
>want that in a particular special case? Why not let the programmer
>decide what he needs? Do you think that your wisdom in designing
>abstractions is infallible?

The programmer is always free to design his own abstractions that
satisfy his own needs.  The question is whether he should be allowed
to mess with _my_ abstractions.

Furthermore, the real problem is that if the language doesn't enforce
such abstractions, even if said programmer never breaks my abstractions,
then the mere possibility that he _might_ have done so makes maintenance
and debugging harder.

>> Programmers can rely on abstractions keeping their invariants, which
>> makes debugging easier.  They can also modify implementations of
>> abstractions without having to examine all the rest of the code to
>> see if it will break.
>
>I understand your point of view. Generally, breaking an abstraction is
>a bad idea, but sometimes it is exactly what is needed. Remember that
>we do not live in a perfect world! I want the /option/ of breaking the
>abstraction.

Unless you're using closed source libraries, you always have that option.

>The boundaries should be there to warn me, not to stand in my way.

I agree.  But they should warn you *at library upgrade time*, if your
abstraction breaking conflicts with a change to the library.
I get that only in languages which enforce encapsulation.
In those languages, the warning comes when I merge my modified
library with the new changes to the library -- the version control system
reports the conflict.

Languages which don't enforce encapsulation give me no warning at all
at that critical time.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Mario S. Mommer
Subject: Re: More static type fun.
Date: 
Message-ID: <fz7k28jg1r.fsf@cupid.igpm.rwth-aachen.de>
Fergus Henderson <···@cs.mu.oz.au> writes:
> >If you write libraries that need to be hacked to do what the hard
> >working hacker with a deadline needs, it is *your* fault.
> 
> Funny how that ties in with what I wrote earlier:
> 
> |My clients will complain that their programs broke, and even though it
> |is arguably their fault, I will get a lot of the blame.
> 
> See what I mean?  Someone breaks my abstractions, and now _I_ get blamed!

If they modify your code to break the abstraction it will break too,
and they will blame you just the same.

> Anyway, your point is fundamentally wrong.

My point is that I know what I am doing and don't want to be
gratuitously denied options in an unforeseeable or experimental
setting. You are talking with a world of fantasy in mind where sources
are all available and kept in some version control system, where you
have infinite wisdom and don't make any mistakes, everybody writes
attitude controllers for Ariane-5s, and nobody ever needs to hack up a
prototype!

> To think that would be to commit the same sort of error as blaming
> the Ariane-4 programmers for the failure of the Ariane-5.

Interesting that you mention that example. Human error won in the end,
despite monstrous efforts to prevent it. And IIRC, the blame went to
Murphy's law.

> >If it was an error of judgment on the part of the hacker, then this apocalyptic
> >cataclysm is his fault.
> 
> Tell that to the accident investigators!
> Or to the liquidators, when the company goes into receivership.

Why would that be different if the hacker just modified your code?

Besides, when was the last time you saw an open source (since this is
the option you mention) lib with any kind of warranty or whatever?
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fafe8d4$1@news.unimelb.edu.au>
Mario S. Mommer <········@yahoo.com> writes:

>Fergus Henderson <···@cs.mu.oz.au> writes:
>> >If you write libraries that need to be hacked to do what the hard
>> >working hacker with a deadline needs, it is *your* fault.
>> 
>> Funny how that ties in with what I wrote earlier:
>> 
>> |My clients will complain that their programs broke, and even though it
>> |is arguably their fault, I will get a lot of the blame.
>> 
>> See what I mean?  Someone breaks my abstractions, and now _I_ get blamed!
>
>If they modify your code to break the abstraction it will break too,

Not necessarily.  See below.

>and they will blame you just the same.

I don't think so.  In this case, even if it does break, it will be
much clearer where the fault lies.

>You are talking with a world of fantasy in mind where sources
>are all available

I am typing this on a Linux system, where the sources are indeed all
available.  This is not a world of fantasy, it is a real system.

But I'm not just talking about free software.  Even proprietary libraries
often come with source code.  That's because people recognize the value
of having source code around.

>and kept in some version control system,

I keep my own sources in a version control system, and import any other
libraries that I modify into that.  This sort of thing is standard
practice; we teach similar stuff to second year students.

>where you have infinite wisdom and don't make any mistakes, everybody writes
>attitude controllers for Ariane-5s, and nobody ever needs to hack up a
>prototype!

No, I'm not assuming any of those.

>> >If it was an error of judgment on the part of the hacker, then this apocalyptic
>> >cataclysm is his fault.
>> 
>> Tell that to the accident investigators!
>> Or to the liquidators, when the company goes into receivership.
>
>Why would that be different if the hacker just modified your code?

Well, that depends on whether the hacker incorporates his changes into
my library's source (either directly by modifying the library's source
code repository, or by sending me a patch), or whether he just makes
his own copy and modifies that.

In the former case, the changes to break the abstraction will be already
visible in the interface when I go to make my change.  So I will be
immediately confronted with the issue that my change may break code
which uses this library.  I can then decide to do the change in a different
way that is backward compatible, or I can decide to do a breaking change
and at the same time modify the interface (e.g. changing the name of the
functions whose semantics have changed) so that callers will be properly
notified.

In the latter case, where the hacker just made his changes in a local
copy of the library, the programmer who combines the new version of the
library with the hacker's copy will get a merge conflict when importing
the library, that would warn them of the impending problem.  They can
then take a look at the hacker's code, see what assumptions it is making
about the internals of my library, and check whether those assumptions
are still satisfied.

Either way, the chance of the problem slipping through to the released
product is a lot less.

>Besides, when was the last time you saw an open source (since this is
>the option you mention) lib with any kind of warranty or whatever?

When was the last time you saw a closed source lib with any kind of
warranty?  I don't think there is any significant difference between
open source and closed source libraries in this respect.  Either kind
can have a warranty, but you'll probably have to pay extra for it.

But I think this issue is irrelevant to my point.  My point is that if you
get to the stage of having to go to court to determine liability, then
you've already failed.  Much better to catch the bug before the product
is shipped.  Languages that enforce encapsulation help you do that.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bnpfrb$27g$1@grizzly.ps.uni-sb.de>
"Jon S. Anthony" <·········@rcn.com> wrote:
> >
> > I think that something the dynamic typing advocates don't realize is
> > that static typing helps avoiding many logic errors.
>
> Having done a lot of both, I disagree with this assertion.  In
> practice, IME, Ketil's assertion has never happened and Raffael's
> assertion is in exact alignment.

Well, seems to work for me. Surely not always, but often enough.

>  While YMMV, I think this sort of
> experience is typical for the "dynamic" camp.

Maybe, but allow me to have some doubts that this "typical" experience often
includes decent type systems, approached in a constructive way.

    - Andreas
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <3F9FBE8D.3060709@ps.uni-sb.de>
··········@ii.uib.no wrote:
> 
>>I think something that the static typing advocates don't know is that
>>the number of runtime type errors (as opposed to runtime program logic
>>errors) in real world projects in lisp or smalltalk is actually tiny.
> 
> I think that something the dynamic typing advocates don't realize is
> that static typing helps avoiding many logic errors.

Exactly, because you can use it to turn a lot of logic errors into type 
errors by defining your type abstractions.
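Andreas's point can be sketched even in Python (the wrapper types `Metres` and `Seconds` are illustrative): wrap raw numbers in distinct types, and the logic error of mixing incompatible quantities surfaces as a type error. A static checker such as mypy would flag the bad addition before the program runs; here it is at least caught at the call site rather than silently computing nonsense.

```python
# Sketch: turning a logic error into a type error via a type
# abstraction. Adding metres to seconds is a logic error on raw
# floats; with wrapper types it becomes a detectable type mismatch.
from dataclasses import dataclass

@dataclass(frozen=True)
class Metres:
    value: float
    def __add__(self, other: "Metres") -> "Metres":
        if not isinstance(other, Metres):
            raise TypeError("can only add Metres to Metres")
        return Metres(self.value + other.value)

@dataclass(frozen=True)
class Seconds:
    value: float

print(Metres(3.0) + Metres(4.0))   # fine: Metres(value=7.0)
# Metres(3.0) + Seconds(4.0)       # the logic error, now a type error
```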

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
  as kids, we would all be running around in darkened rooms, munching
  magic pills, and listening to repetitive electronic music."
  - Kristian Wilson, Nintendo Inc.
From: Jon S. Anthony
Subject: Re: More static type fun.
Date: 
Message-ID: <m3ad7kvsvt.fsf@rigel.goldenthreadtech.com>
Andreas Rossberg <········@ps.uni-sb.de> writes:

> ··········@ii.uib.no wrote:
> >
> >>I think something that the static typing advocates don't know is that
> >>the number of runtime type errors (as opposed to runtime program logic
> >>errors) in real world projects in lisp or smalltalk is actually tiny.
> > I think that something the dynamic typing advocates don't realize is
> > that static typing helps avoiding many logic errors.
> 
> Exactly, because you can use it to turn a lot of logic errors into
> type errors by defining your type abstractions.

This is always claimed, but IME it never seemed to occur.  I'm sure you
can cook up cases where it is plausible, but in practice it sounds
pretty hollow...

/Jon
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bnmq9n$fqu$4@newsreader2.netcologne.de>
Dirk Thierbach wrote:

> Most of the examples here are somewhat artificial and require mixing
> different types, because people think that this won't work with static
> typing.

What about the other examples?


Pascal
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3f9f9d00$1@news.unimelb.edu.au>
·······@mediaone.net (Raffael Cavallaro) writes:

>I think the dynamic typing argument is that if you want the
>flexibility of a type that contains all possible objects, what's the
>point of having to jump through the hoops of a static type checker?

The point is that even if you do need that flexibility for some of your
code, you won't need it for most of your code.  So you get the benefits
of static type checking for the majority of your code, plus the flexibility
of dynamic type checking for those occasions when you need it.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Joachim Durchholz
Subject: Re: More static type fun.
Date: 
Message-ID: <bnjnk7$8a1$1@news.oberberg.net>
Dirk Thierbach wrote:
> 
> In test1, I cheat a bit because outside the I/O monad, there cannot
> be side effects like printing, but for typechecking this is close enough:
> 
>>test1 = foo (\thing -> show thing `seq` thing)

Simply use

   test1 = foo (\thing -> trace (show thing) thing)

which does what you want. (If you look at the warnings in the "trace" 
documentation in hslibs, you'll find that it's useful just for tracing 
execution - exactly what "foo" is written for.)

Regards,
Jo
From: ·············@comcast.net
Subject: Re: More static type fun.
Date: 
Message-ID: <ekwygne4.fsf@comcast.net>
Dirk Thierbach <··········@gmx.de> writes:

> ·············@comcast.net wrote:
>
>> (defun foo (f)
>>  (funcall (funcall f #'+) 
>>           (funcall f 3)
>>           (funcall f 2)))
>>
>> (defun test1 ()
>>  (foo (lambda (thing)
>>         (format t "~&--> ~s" thing)
>>         thing)))
>> 
>> (defun test2 ()
>>  (foo (lambda (thing)
>>         (if (eq thing #'+)
>>             #'*
>>             thing))))
>
> You're using arguments of different type with the same function, and
> you're then relying on library functions like "format" to sort out
> the tags and act accordingly. The equivalent in Haskell is to
> create a datatype that supplies those tags:

If we leave format out of it, then the argument being passed in to
function foo is the identity function, but I am using arguments of
different types to the same function:  whatever F is, it is being
handed a function from  int -> int -> int  and some ints themselves.

>> data Foo a = Op (a -> a -> a) | Val a 

So here you are creating a union type that can hold objects of type
`a' and binary functions upon those objects. 


> We have to tell the 'show' formatter how to deal with that datatype
> (in Haskell, it is not possible to print a function, otherwise this
> could be done in an easier way):
>
>> instance Show a => Show (Foo a) where
>>   show (Op f)  = "operator"
>>   show (Val x) = show x

Ok, that's just defining a print method.  It isn't important.


> We define an auxiliary function to do the application. Since you
> will check for an error dynamically, we do the same:
>
>> foocall (Op p) (Val x) (Val y) = p x y
>> foocall _      _       _       = error "dynamic type error"

This appears to be taking a 3-tuple and more or less `unwrapping' it.
If the 3-tuple isn't an Op, Val, Val, we raise an error.

> Note that foo is a bogus function; there are values for f which will
> cause the program to crash.  We here say honestly and explicitely "we
> don't care about such situations", whereas with pure dynamic typing,
> you sort of brush it under the carpet and hope nobody will ever do
> this.  If someone now uses foo and isn't aware of that restriction,
> and innocently passes a wrong argument in some obscure situation
> that is not covered by a unit test, you're in trouble.

Er, more or less.  The program won't `crash', but it does enter the
error handler (unless I catch it otherwise).

> Foo is now easy:
>
>> foo f = foocall (f (Op (+))) (f (Val 3)) (f (Val 2))

This is a sort of a rewrite of foo.  You're injecting the operator
and values into the union type so that F can project them back down.

> In test1, I cheat a bit because outside the I/O monad, there cannot
> be side effects like printing, but for typechecking this is close enough:

I don't care about the printing, it's just illustrative.  I am
concerned, though, that with the auxiliary function you are
essentially `turning off' the static type checking.  This is no safer
than the lisp version, but the projection of the values is clumsier.

> It doesn't do exactly the same, but it's close, and after all, this
> example doesn't do anything useful.

The other issue I see is that this doesn't generalize.  Suppose that
foo were defined such:

(defun monitored-function (monitor function)
  (lambda (&rest arglist)
    (funcall monitor 'return-value
            (apply (funcall monitor 'operator function)
                   (mapcar (lambda (arg)
                             (funcall monitor 'argument arg))
                           arglist)))))

(defun my-monitor (reason thing)
  (format t "~&~s is ~s" reason thing)
  thing)

(defun test (verbose)
  (mapcar (if verbose
              (monitored-function #'my-monitor #'*)
              #'*) 
              '(1 2 3) 
              '(4 5 6)))
 
So if I call test with verbose set to NIL, I get the result:  (4 10 18)
But if verbose is T, I get this printed:

OPERATOR is #<function * 20118052>
ARGUMENT is 1
ARGUMENT is 4
RETURN-VALUE is 4
OPERATOR is #<function * 20118052>
ARGUMENT is 2
ARGUMENT is 5
RETURN-VALUE is 10
OPERATOR is #<function * 20118052>
ARGUMENT is 3
ARGUMENT is 6
RETURN-VALUE is 18

before getting the result of (4 10 18)

The problem is that FOOCALL is not actually the function foo, but the
injection of foo into a space where the arguments have been wrapped.
This is all very well and good if you are doing this in one known spot
where you can inject the arguments into the same space as foo, but if
we abstract out the arguments, you no longer know what they might be.

-----

>> (defun transpose-tensor (tensor)
>>  (apply #'mapcar #'mapcar (list #'list #'list) tensor))
>
> That's the most interesting example so far, because it really has an
> application. 

[discussion snipped]
>
>> mapcar :: ([a] -> b) -> [[a]] -> [b]
>> mapcar f m = map f (transpose m)
>
>> f = mapcar (mapcar id)
>
>> mapcar2 :: (a -> [b] -> c) -> [a] -> [[b]] -> [c]
>> mapcar2 f l m = zipWith f l (transpose m)
>
> zipWith is just the Lisp mapcar with two lists to operate on. Then
> we have 
>
>> g = mapcar2 mapcar [id, id]

Here's where I have a problem.  MAPCAR2 and MAPCAR are very different
functions here, but in mine they were identical.  What if I do this:

(defun mixmaster (f1 f2 list)
  (apply f1 f1 (list f2 f2) list))

(defun bogus-transpose (x)
  (mixmaster #'mapcar #'list x))

and for the sake of offering *some* indication that mixmaster might
not be totally useless:

(defun bogus-multiply (x)
  (mixmaster #'mapcar  #'* x))
From: Jesse Tov
Subject: Re: More static type fun.
Date: 
Message-ID: <slrnbprmca.48u.tov@tov.student.harvard.edu>
·············@comcast.net <·············@comcast.net>:
> I don't care about the printing, it's just illustrative.  I am
> concerned, though, that with the auxiliary function you are
> essentially `turning off' the static type checking.  This is no safer
> than the lisp version, but the projection of the values is clumsier.

Precisely.  It's just opt-out instead of opt-in.  The error cases are
the same, and in either language you can either catch them and do
something to correct the situation or leave them unchecked and enter the
appropriate error state.  It's probably about as clumsy to opt-out here
as it is to opt-in in Lisp.

I don't see much value in most of this debate.  I'm quite happy to use
Haskell and Python on a daily basis, and even not unhappy to use Perl
and C occasionally.  Haskell's type system has never gotten in my way;
Python's lack of static checking has never bothered me.  (What does
bother me is that I have to use Python 2.1 when I'd really like 2.3.)
I'm unhappy when I have to use PHP, sometimes because of its braindead
type system which is but a small part of a largely braindead design.
I'd like to learn Common Lisp, but where's the time?

Jesse
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <aeb7ff58.0310280750.5fabace0@posting.google.com>
Jesse Tov <···@eecs.harvREMOVEard.edu> wrote in message news:<··················@tov.student.harvard.edu>...
> Precisely.  It's just opt-out instead of opt-in.  The error cases are
> the same, and in either language you can either catch them and do
> something to correct the situation or leave them unchecked and enter the
> appropriate error state.  It's probably about as clumsy to opt-out here
> as it is to opt-in in Lisp.

But I think the difference is the order in which one opts out (or in).
I think the lisp camp would say that statically typed languages get
the order of the development cycle backwards. They force you to make
decisions about implementation detail *right from the start* before
you've had a chance to sketch out your code. Moreover, they interfere
with the interactive top-level design of software right at the very
point where it is most important that the language/compiler *not* get
in the way - at the beginning, when you are experimenting with new
ideas, etc. The only way to avoid this annoyance is to create an all
inclusive type and use that. However, at that point, what's the point
of your static type checker?

In a dynamically typed language, you start out without being pressed
for precise data type details by the language/compiler. Lists do for
most everything, and you don't need to declare that these lists can
hold any type of element because that's the default. Only later, when
you've worked out your design by actually building it, do you go back
and make choices about representation, and type checking if needed.

The language/compiler should only get in your way to the extent that
you've declared your intention to be held to certain type constraints.
To do so by default, right from the start, defeats the benefits of an
interactive top-level development style.

To quote Paul Graham at <http://www.paulgraham.com/desres.html>:

"For example, it is a huge win in developing software to have an
interactive toplevel, what in Lisp is called a read-eval-print loop.
And when you have one this has real effects on the design of the
language. It would not work well for a language where you have to
declare variables before using them, for example. When you're just
typing expressions into the toplevel, you want to be able to set x to
some value and then start doing things to x. You don't want to have to
declare the type of x first. You may dispute either of the premises,
but if a language has to have a toplevel to be convenient, and
mandatory type declarations are incompatible with a toplevel, then no
language that makes type declarations mandatory could be convenient to
program in."

Mandatory type declarations are incompatible with a top level because
they break up the flow of your thought by forcing you to focus on type
concerns prematurely, before you even know what your final data types
will be.
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <l6q271-v98.ln1@ID-7776.user.dfncis.de>
Raffael Cavallaro <·······@mediaone.net> wrote:
> Jesse Tov <···@eecs.harvREMOVEard.edu> wrote in message news:<··················@tov.student.harvard.edu>...

>> Precisely.  It's just opt-out instead of opt-in.  The error cases are
>> the same, and in either language you can either catch them and do
>> something to correct the situation or leave them unchecked and enter the
>> appropriate error state.  It's probably about as clumsy to opt-out here
>> as it is to opt-in in Lisp.
> 
> But I think the difference is the order in which one opts out (or in).
> I think the lisp camp would say that statically typed languages get
> the order of the development cycle backwards. They force you to make
> decisions about implementation detail *right from the start* before
> you've had a chance to sketch out your code. 

They don't. When translating the examples, I started to sketch out
code, compiled (i.e., ran the tests), got type errors, corrected
them, compiled, and so on. Sometimes correcting the errors forced me
to change the code in substantial ways. Just as you would do it in Lisp.

> Moreover, they interfere with the interactive top-level design of
> software right at the very point where it is most important that the
> language/compiler *not* get in the way - at the beginning, when you
> are experimenting with new ideas, etc.

Why?

> The only way to avoid this annoyance is to create an all inclusive
> type and use that.

No. The way to avoid this annoyance is to use polymorphic types and
type inference. An all-inclusive type is only necessary if you want
to use different types *in the same place at once*, which doesn't
happen very often.

> In a dynamically typed language, you start out without being pressed
> for precise data type details by the language/compiler. Lists do for
> most everything, and you don't need to declare that these lists can
> hold any type of element because that's the default. 

Same with polymorphic types: you don't declare the type of the lists;
the type inference works out whether you can use lists of any type,
or whether you are using lists of a particular type.
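To make that concrete, here is my own small illustration (the function
names are mine, not from the thread): neither definition below carries a
type declaration, yet the compiler infers a polymorphic type for one and
constrains the other from how it is used.

```haskell
-- No type signatures anywhere; inference does the work.

-- GHC infers pairUp :: [a] -> [(a, a)], polymorphic in the element type.
pairUp xs = zip xs (tail xs)

-- Used with subtraction, so inference constrains the elements to be
-- numeric (Num b => [b] -> [b]) without any declaration from us.
steps xs = map (\(a, b) -> b - a) (pairUp xs)

main :: IO ()
main = do
  print (pairUp "abc")     -- the same pairUp also works on a list of Chars
  print (steps [1, 4, 9])
```

At the ghci toplevel one can ask `:type pairUp` to see the inferred type,
which is exactly the interactive, declaration-free style under discussion.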

> "For example, it is a huge win in developing software to have an
> interactive toplevel, what in Lisp is called a read-eval-print loop.

You get an interactive toplevel in ghci, and recompiling modules is
also fast enough.

> And when you have one this has real effects on the design of the
> language. It would not work well for a language where you have to
> declare variables before using them, for example. 

Yes. That's why you don't declare variables. You write type annotations
if you want to clear your head before writing a function, like you
write unit tests before writing the function, but that is all.

> Mandatory type declarations are incompatible with a top level because
> they break up the flow of your thought by forcing you to focus on type
> concerns prematurely, before you even know what your final data types
> will be.

That's why you can go without any mandatory type declarations if you
want.

All the arguments you give are fine for "ad-hoc statically typed"
languages, but they don't apply to "functionally statically typed"
languages.

- Dirk
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-2810031136050001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@ID-7776.user.dfncis.de>, Dirk Thierbach
<··········@gmx.de> wrote:

> Raffael Cavallaro <·······@mediaone.net> wrote:
> > Jesse Tov <···@eecs.harvREMOVEard.edu> wrote in message
news:<··················@tov.student.harvard.edu>...
> 
> >> Precisely.  It's just opt-out instead of opt-in.  The error cases are
> >> the same, and in either language you can either catch them and do
> >> something to correct the situation or leave them unchecked and enter the
> >> appropriate error state.  It's probably about as clumsy to opt-out here
> >> as it is to opt-in in Lisp.
> > 
> > But I think the difference is the order in which one opts out (or in).
> > I think the lisp camp would say that statically typed languages get
> > the order of the development cycle backwards. They force you to make
> > decisions about implementation detail *right from the start* before
> > you've had a chance to sketch out your code. 
> 
> They don't. When translating the examples, I started to sketch out
> code, compiled (i.e., ran the tests), got type errors, corrected
> them, compiled, and so on. Sometimes correcting the errors forced me
> to change the code in substantial ways. Just as you would do it in Lisp.
> 
> > Moreover, they interfere with the interactive top-level design of
> > software right at the very point where it is most important that the
> > language/compiler *not* get in the way - at the beginning, when you
> > are experimenting with new ideas, etc.
> 
> Why?

Because you can't run the code until it type-checks.  That prevents you
from discovering certain classes of errors until you have fixed all
instances of another class of errors.  But the class of errors that the
compiler forces you to fix before you can discover the other class of
errors might not be the ones that really matter to you, so all the time
you spent making the compiler happy might well have been wasted.

Furthermore, you might be tempted to try to "fool" the compiler.  For
example, suppose I write:

(defun top-level ()
  (with-complicated-setup
    (if (condition)
      (some-hairy-computation)
      (some-other-hairy-computation))))

a statically typed language will not allow me to run top-level until I
have defined both hairy computations.  If all I care about at the moment
is one of them I might be tempted to write a stub for the other just to
make the compiler happy.  Now my program is really broken (because the
stub doesn't do the right thing) but I can no longer rely on the system to
tell me about it, at least not at compile time.

Lisp will very happily compile and run top-level even if one of the hairy
computations is undefined.  What's more, most implementations will remind
me at compile time (with a warning, not an error) that it's undefined, so
that relieves me of the burden of keeping track of what is really finished
and what is stubbed out to make the compiler happy.

E.
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <rv3371-ruc.ln1@ID-7776.user.dfncis.de>
Erann Gat <···@jpl.nasa.gov> wrote:

> Because you can't run the code until it type-checks.  That prevents you
> from discovering certain classes of errors until you have fixed all
> instances of another class of errors.  

True.

> But the class of errors that the compiler forces you to fix before
> you can discover the other class of errors might not be the ones
> that really matter to you,

Can you give an example? 

[...]
> a statically typed language will not allow me to run top-level until I
> have defined both hairy computations.  If all I care about at the moment
> is one of them I might be tempted to write a stub for the other just to
> make the compiler happy.  

Yes. That's exactly what you do.

> Now my program is really broken (because the stub doesn't do the
> right thing) but I can no longer rely on the system to tell me about
> it, at least not at compile time.

But stubs are easy: You just write

stub = error "FIXME"

This stub will compile (though it certainly doesn't do the right thing),
and won't get in the way until you execute it, at which time you
need to write a better stub anyway, no matter what type checking discipline
you use.
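For concreteness, here is my sketch of that stub technique (the names
are invented, loosely following Erann's earlier example): the unfinished
branch type-checks against whatever type is required, the finished branch
can be tested immediately, and the stub only traps if actually taken.

```haskell
-- The one-line universal stub from above: error returns any type.
stub :: a
stub = error "FIXME"

someHairyComputation :: Int -> Int
someHairyComputation n = n * n

-- Not written yet; the stub stands in at the required type.
someOtherHairyComputation :: Int -> Int
someOtherHairyComputation = stub

topLevel :: Bool -> Int -> Int
topLevel cond n =
  if cond
    then someHairyComputation n
    else someOtherHairyComputation n

main :: IO ()
main = print (topLevel True 7)   -- the stubbed branch is never taken here
```

Grepping for `stub`, or deleting its single definition and recompiling,
recovers every unfinished site at compile time.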

> Lisp will very happily compile and run top-level even if one of the hairy
> computations is undefined.  What's more, most implementations will remind
> me at compile time (with a warning, not an error) that it's undefined, so
> that relieves me of the burden of keeping track of what is really finished
> and what is stubbed out to make the compiler happy.

That is how it works with the approach above. The only drawback is that
you have to type in one additional line.

- Dirk
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-2810031310330001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@ID-7776.user.dfncis.de>, Dirk Thierbach
<··········@gmx.de> wrote:

> Erann Gat <···@jpl.nasa.gov> wrote:
> 
> > Because you can't run the code until it type-checks.  That prevents you
> > from discovering certain classes of errors until you have fixed all
> > instances of another class of errors.  
> 
> True.
> 
> > But the class of errors that the compiler forces you to fix before
> > you can discover the other class of errors might not be the ones
> > that really matter to you,
> 
> Can you give an example? 

I thought I did.

> > a statically typed language will not allow me to run top-level until I
> > have defined both hairy computations.  If all I care about at the moment
> > is one of them I might be tempted to write a stub for the other just to
> > make the compiler happy.  
> 
> Yes. That's exactly what you do.
> 
> > Now my program is really broken (because the stub doesn't do the
> > right thing) but I can no longer rely on the system to tell me about
> > it, at least not at compile time.
> 
> But stubs are easy: You just write
> 
> stub = error "FIXME"

I'm a little out of my element here since the only statically typed
languages I know are C/C++/Pascal/Java and not ML/Haskell/etc. but I doubt
that it's as simple as that.  Certainly in C++ providing stubs can become
a major hassle, especially in cases where you're trying to refactor
someone else's code.  And then there's still the issue that if you forget
about the stub (easy to do on a large development) you only find out about
it at run time, not compile time.

E.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1k76p2ey7.fsf@tti5.uchicago.edu>
···@jpl.nasa.gov (Erann Gat) writes:

> [ ... ] And then there's still the issue that if you forget
> about the stub (easy to do on a large development) you only find out about
> it at run time, not compile time.

It is trivial: Just take the definition of the stub out of scope and
recompile.  The compiler will point out each and every occurrence of a
leftover stub which you have forgotten.

(I have developed and do develop software that way, and it works
wonderfully.)

Matthias
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-2810031433550001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@tti5.uchicago.edu>, Matthias Blume
<····@my.address.elsewhere> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > [ ... ] And then there's still the issue that if you forget
> > about the stub (easy to do on a large development) you only find out about
> > it at run time, not compile time.
> 
> It is trivial: Just take the definition of the stub out of scope

I think you misunderstood the point I was trying to make.  My premise is
that you have put in a stub and then forgotten about it.  Or if you think
that's too unrealistic, Bob puts in a stub, then gets fired (maybe because
he was working for Joe Marshall and made the mistake of admitting that he
couldn't prove the code correct ;-)  ).  Bill inherits the code.  How is
he supposed to know there's a stub in there?

Obviously if you know there's a stub it's not hard to take it out in order
to get the compiler to produce an appropriate complaint, but if you know
there's a stub you no longer need the compiler to remind you.

E.
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m165i92awj.fsf@tti5.uchicago.edu>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@tti5.uchicago.edu>, Matthias Blume
> <····@my.address.elsewhere> wrote:
> 
> > ···@jpl.nasa.gov (Erann Gat) writes:
> > 
> > > [ ... ] And then there's still the issue that if you forget
> > > about the stub (easy to do on a large development) you only find out about
> > > it at run time, not compile time.
> > 
> > It is trivial: Just take the definition of the stub out of scope
> 
> I think you misunderstood the point I was trying to make.  My premise is
> that you have put in a stub and then forgotten about it.  Or if you think
> that's too unrealistic, Bob puts in a stub, then gets fired (maybe because
> he was working for Joe Marshall and made the mistake of admitting that he
> couldn't prove the code correct ;-)  ).  Bill inherits the code.  How is
> he supposed to know there's a stub in there?

The stub is in one single prominent place.  Of course, if you don't
know about that, you won't immediately find it.  That's the price to
pay for this technique.

> Obviously if you know there's a stub it's not hard to take it out in order
> to get the compiler to produce an appropriate complaint, but if you know
> there's a stub you no longer need the compiler to remind you.

The point is that even though you might know that there are/were
stubs, you might not remember exactly where you used them.  By getting
rid of the definition of the stub, you enable the compiler to give you
all the necessary reminders.

(Anyway, how is that different from the dynamically typed case?  It is
certainly not worse.)

Matthias
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-2810031759560001@192.168.1.51>
In article <··············@tti5.uchicago.edu>, Matthias Blume
<····@my.address.elsewhere> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <··············@tti5.uchicago.edu>, Matthias Blume
> > <····@my.address.elsewhere> wrote:
> > 
> > > ···@jpl.nasa.gov (Erann Gat) writes:
> > > 
> > > > [ ... ] And then there's still the issue that if you forget
> > > > about the stub (easy to do on a large development) you only find
> > > > out about it at run time, not compile time.
> > > 
> > > It is trivial: Just take the definition of the stub out of scope
> > 
> > I think you misunderstood the point I was trying to make.  My premise is
> > that you have put in a stub and then forgotten about it.  Or if you think
> > that's too unrealistic, Bob puts in a stub, then gets fired (maybe because
> > he was working for Joe Marshall and made the mistake of admitting that he
> > couldn't prove the code correct ;-)  ).  Bill inherits the code.  How is
> > he supposed to know there's a stub in there?
> 
> The stub is in one single prominent place.  Of course, if you don't
> know about that, you won't immediately find it.  That's the price to
> pay for this technique.
> 
> > Obviously if you know there's a stub it's not hard to take it out in order
> > to get the compiler to produce an appropriate complaint, but if you know
> > there's a stub you no longer need the compiler to remind you.
> 
> The point is that even though you might know that there are/were
> stubs, you might not remember exactly where you used them.  By getting
> rid of the definition of the stub, you enable the compiler to give you
> all the necessary reminders.
> 
> (Anyway, how is that different from the dynamically typed case?  It is
> certainly not worse.)

Of course it's worse.  In Lisp I can do this:

? (defun branch-1 () 'foo)
BRANCH-1
? (defun top-level ()
(if (read) (branch-1) (branch-2)))
;Compiler warnings :
;   Undefined function BRANCH-2, in TOP-LEVEL.
TOP-LEVEL
? (top-level)
t
FOO
? 

Note that branch-2 being undefined did not prevent me from testing
top-level and branch-1.  If I turn this code over to someone else for
further development they will discover that branch-2 still needs to be
written as soon as they try to compile the code.  They don't have to rely
on me to tell them, or to follow some convention about putting stubs in a
particular place.

In a static language I have to choose: either I get the compiler error, or
I can run the code, but I can't have both at the same time.

E.
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-2810031829150001@192.168.1.51>
In article <····················@192.168.1.51>, ···@jpl.nasa.gov (Erann
Gat) wrote:

> In article <··············@tti5.uchicago.edu>, Matthias Blume
> <····@my.address.elsewhere> wrote:
> 
> > ···@jpl.nasa.gov (Erann Gat) writes:
> > 
> > > In article <··············@tti5.uchicago.edu>, Matthias Blume
> > > <····@my.address.elsewhere> wrote:
> > > 
> > > > ···@jpl.nasa.gov (Erann Gat) writes:
> > > > 
> > > > > [ ... ] And then there's still the issue that if you forget
> > > > > about the stub (easy to do on a large development) you only find
> > > > > out about it at run time, not compile time.
> > > > 
> > > > It is trivial: Just take the definition of the stub out of scope
> > > 
> > > I think you misunderstood the point I was trying to make.  My premise is
> > > that you have put in a stub and then forgotten about it.  Or if you think
> > > that's too unrealistic, Bob puts in a stub, then gets fired (maybe because
> > > he was working for Joe Marshall and made the mistake of admitting that he
> > > couldn't prove the code correct ;-)  ).  Bill inherits the code.  How is
> > > he supposed to know there's a stub in there?
> > 
> > The stub is in one single prominent place.  Of course, if you don't
> > know about that, you won't immediately find it.  That's the price to
> > pay for this technique.
> > 
> > > Obviously if you know there's a stub it's not hard to take it out in order
> > > to get the compiler to produce an appropriate complaint, but if you know
> > > there's a stub you no longer need the compiler to remind you.
> > 
> > The point is that even though you might know that there are/were
> > stubs, you might not remember exactly where you used them.  By getting
> > rid of the definition of the stub, you enable the compiler to give you
> > all the necessary reminders.
> > 
> > (Anyway, how is that different from the dynamically typed case?  It is
> > certainly not worse.)
> 
> Of course it's worse.  In Lisp I can do this:
> 
> ? (defun branch-1 () 'foo)
> BRANCH-1
> ? (defun top-level ()
> (if (read) (branch-1) (branch-2)))
> ;Compiler warnings :
> ;   Undefined function BRANCH-2, in TOP-LEVEL.
> TOP-LEVEL
> ? (top-level)
> t
> FOO
> ? 
> 
> Note that branch-2 being undefined did not prevent me from testing
> top-level and branch-1.  If I turn this code over to someone else for
> further development they will discover that branch-2 still needs to be
> written as soon as they try to compile the code.  They don't have to rely
> on me to tell them, or to follow some convention about putting stubs in a
> particular place.
> 
> In a static language I have to choose: either I get the compiler error, or
> I can run the code, but I can't have both at the same time.
> 
> E.

It just occurred to me that there are other interesting things that can
happen as well.  For example, here is an unedited transcript of a Lisp
session (except for adding a couple of line-breaks between FOO and nil):

? (defun branch-1 () 'foo)
BRANCH-1
? (defun top-level ()
 (if (read) (branch-1) (branch-2)))
;Compiler warnings :
;   Undefined function BRANCH-2, in TOP-LEVEL.
TOP-LEVEL
? (loop (print (top-level)))
t

FOO

nil

SURPRISE! 

-------------------------

I leave it as an exercise for the reader to figure out what happened.

Bottom line on this for me is that the static typing credo seems to be
that the compiler always knows best.  I don't buy it.  I believe that *I*
always know best, at least when it comes to arguments with the compiler,
and I don't want my compiler telling me, "Sorry, I won't let you run this
code because you haven't satisfied me that there are no bugs in it."  It's
fine for the compiler to say, "Er, excuse me sir, but I think I see a
problem...", but I want to be able to reply, "That's OK, I want you to run
it anyway."  Because I might know something that the compiler doesn't know
(as in the example above).  Or maybe because I just want to give it a
whirl and see what happens just for the sheer thrill of it.  Whatever. 
IMHO until compilers become sentient it is not their place to second-guess
my motives.

E.
From: Adrian Hey
Subject: Re: More static type fun.
Date: 
Message-ID: <bnnpjl$hbj$4$8300dec7@news.demon.co.uk>
Erann Gat wrote:
> In article <··············@tti5.uchicago.edu>, Matthias Blume
> <····@my.address.elsewhere> wrote:
>> The point is that even though you might know that there are/were
>> stubs, you might not remember exactly where you used them.  By getting
>> rid of the definition of the stub, you enable the compiler to give you
>> all the necessary reminders.
>> 
>> (Anyway, how is that different from the dynamically typed case?  It is
>> certainly not worse.)
> 
> Of course it's worse.  In Lisp I can do this:
> 
> ? (defun branch-1 () 'foo)
> BRANCH-1
> ? (defun top-level ()
> (if (read) (branch-1) (branch-2)))
> ;Compiler warnings :
> ;   Undefined function BRANCH-2, in TOP-LEVEL.
> TOP-LEVEL
> ? (top-level)
> t
> FOO
> ?
> 
> Note that branch-2 being undefined did not prevent me from testing
> top-level and branch-1.  If I turn this code over to someone else for
> further development they will discover that branch-2 still needs to be
> written as soon as they try to compile the code.  They don't have to rely
> on me to tell them, or to follow some convention about putting stubs in a
> particular place.
> 
> In a static language I have to choose: either I get the compiler error, or
> I can run the code, but I can't have both at the same time.

I think you should be careful about generalising what you perceive to
be a deficiency in the implementation of one particular statically typed
language as being representative of all statically typed languages,
current or future.

Unless you've actually tried using the language to write real programs,
how do you know for sure? Maybe the (alleged) deficiency is actually a
carefully considered design feature.

As for the immediate problem of testing and/or typechecking of partially
written programs, it is true Haskell (and every other FPL I'm aware of)
does require you to write some kind of stub for yet to be defined
functions, but even this is not a fundamental limitation of *all*
static type systems. For better or worse, it's just the way Haskell
works, that's all.

There are other type systems and inference algorithms which can
type check incomplete programs. Given such a type system I guess
it would be a trivial matter to automatically generate appropriate
exception raising stubs so that such programs could be run and
(partially) tested.
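As an illustration of what Adrian describes (using a facility that
appeared in GHC well after this thread, so this is a sketch of the idea
rather than something available to the posters): with typed holes and the
`-fdefer-typed-holes` flag, the compiler type-checks the incomplete
program, reports each hole and its inferred type as a warning, and
compiles it into an exception-raising stub, so the finished parts can be
run and partially tested.

```haskell
{-# OPTIONS_GHC -fdefer-typed-holes #-}

branch1 :: String
branch1 = "FOO"

-- _branch2 is a typed hole: the compiler warns, infers its type
-- (String), and turns it into a stub that raises only if evaluated.
topLevel :: Bool -> String
topLevel cond = if cond then branch1 else _branch2

main :: IO ()
main = putStrLn (topLevel True)   -- runs; only the hole's branch would raise
```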

Regards
--
Adrian Hey
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-2910031046540001@k-137-79-50-101.jpl.nasa.gov>
In article <·····················@news.demon.co.uk>, Adrian Hey
<····@NoSpicedHam.iee.org> wrote:

> Unless you've actually tried using the language to write real programs
> how do you know for sure. Maybe the (alleged) deficiency is actually a
> carefully considered design feature.

I'm sure it is a carefully considered design feature, and I'm sure that
there are people who feel it is useful.  I'm just not one of them.  I'm
equally sure that my assessment is not "correct" in any absolute sense but
is a bias born of long experience with dynamic languages.

American cars with automatic transmissions have a "carefully considered
design feature" where they will automatically lock the doors when you
shift from park into drive.  I hate that feature with a passion.  It may
well be that someone has done a study showing that it is in fact safer to
have the doors locked when you drive, but god damn it, if I want to drive
with my doors unlocked I don't want my car second-guessing me.  I don't
want to drive a car that thinks it's smarter than I am.

Likewise, I think static typing is very useful.  The more information I
can have at compile time, and the more efficient my code can be, the
better.  But I don't want to use a language that thinks it's smarter than
I am.  When I say "run the program" I want it to run the god damned
program.  "I'm sorry Dave, I'm afraid I can't do that" is not an
acceptable response from a computer IMHO.


> As for the immediate problem of testing and/or typechecking of partially
> written programs, it is true Haskell (and every other FPL I'm aware of)
> does require you to write some kind of stub for yet to be defined
> functions, but even this is not a fundamental limitation of *all*
> static type systems. For better or worse, it's just the way Haskell
> works, that's all.

Yes, I understand that.  A static type system on top of Lisp that could be
used as a diagnostic tool would be a very useful feature IMO.  I would
like to see such a thing standardized (not that there's much hope of that,
but I can dream).


> There are other type systems and inference algorithms which can
> type check incomplete programs. Given such a type system I guess
> it would be a trivial matter to automatically generate appropriate
> exception raising stubs so that such programs could be run and
> (partially) tested.

That sounds like the Right Thing to me.  Any that you would particularly
recommend looking into?

E.
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <65i77tce.fsf@ccs.neu.edu>
···@jpl.nasa.gov (Erann Gat) writes:

> American cars with automatic transmissions have a "carefully considered
> design feature" where they will automatically lock the doors when you
> shift from park into drive.  I hate that feature with a passion.  It may
> well be that someone has done a study showing that it is in fact safer to
> have the doors locked when you drive, but god damn it, if I want to drive
> with my doors unlocked I don't want my car second-guessing me.  I don't
> want to drive a car that thinks it's smarter than I am.

As a completely random aside....

You can often program that particular `feature' of the car.  But since
automobiles have a rather limited number of input devices, the
programming sequence is bizarre.  I remember that one car I rented
allowed you to change that default, but to do that you had to turn on
and off the key three times in one second, toggle the lock, then shift
from park to neutral three times.  The horn would then beep to
acknowledge the change.
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <bnpf47$1rt$1@grizzly.ps.uni-sb.de>
"Erann Gat" <···@jpl.nasa.gov> wrote:
>
> But I don't want to use a language that thinks it's smarter than
> I am.  When I say "run the program" I want it to run the god damned
> program.

Be careful, you might find yourself being interpreted as proof for the
sarcastic statement I once heard from a static typing proponent. He said
that people oppose static typing only because their egos cannot take being
corrected by a machine. ;-)

Unfortunately (?), machines can indeed be much "smarter" than us for
specialised tasks. Just consider adding two numbers. Checking certain
aspects of program consistency is another example.

    - Andreas
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-2910031503330001@k-137-79-50-101.jpl.nasa.gov>
In article <············@grizzly.ps.uni-sb.de>, "Andreas Rossberg"
<········@ps.uni-sb.de> wrote:

> "Erann Gat" <···@jpl.nasa.gov> wrote:
> >
> > But I don't want to use a language that thinks it's smarter than
> > I am.  When I say "run the program" I want it to run the god damned
> > program.
> 
> Be careful, you might find yourself being interpreted as proof for the
> sarcastic statement I once heard from a static typing proponent. He said
> that people oppose static typing only because their egos cannot take being
> corrected by a machine. ;-)

The day may come when the machine is smarter than I am, but it is not yet here.

> Unfortunately (?), machines can indeed be much "smarter" than us for
> specialised tasks. Just consider adding two numbers.

        Objective Caml version 3.07+2

# 9999999999999999999 + 1;;
- : int = 166199296
# 



Feh.

E.
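(A sketch for comparison, not from the original thread, assuming GHC: Haskell's fixed-width Int wraps around silently just like OCaml's native int, while the arbitrary-precision Integer type gives the mathematically expected sum.)

```haskell
-- Fixed-width Int overflows by wrapping, much like OCaml's int:
wrapsAtMax :: Bool
wrapsAtMax = maxBound + 1 == (minBound :: Int)

-- Arbitrary-precision Integer does not wrap:
bigSum :: Integer
bigSum = 9999999999999999999 + 1

main :: IO ()
main = do
  print wrapsAtMax  -- True
  print bigSum      -- 10000000000000000000
```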
From: Andreas Rossberg
Subject: Re: More static type fun.
Date: 
Message-ID: <3FA0E322.9000102@ps.uni-sb.de>
Erann Gat wrote:
> 
>>Unfortunately (?), machines can indeed be much "smarter" than us for
>>specialised tasks. Just consider adding two numbers.
> 
>         Objective Caml version 3.07+2
> 
> # 9999999999999999999 + 1;;
> - : int = 166199296

So? Would you have been able to compute that in mod 2^31 arithmetic in 
the same amount of time? ;-)

Maybe you are relieved to hear that I also dislike this design choice of 
OCaml - default arithmetic should not wrap around. But anyway, it's 
totally unrelated demagogy.

	- Andreas

-- 
Andreas Rossberg, ········@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
  as kids, we would all be running around in darkened rooms, munching
  magic pills, and listening to repetitive electronic music."
  - Kristian Wilson, Nintendo Inc.
From: Adrian Hey
Subject: Re: More static type fun.
Date: 
Message-ID: <bnqc3i$5im$2$8300dec7@news.demon.co.uk>
Erann Gat wrote:

>> There are other type systems and inference algorithms which can
>> type check incomplete programs. Given such a type system I guess
>> it would be a trivial matter to automatically generate appropriate
>> exception raising stubs so that such programs could be run and
>> (partially) tested.
> 
> That sounds like the Right Thing to me.  Any that you would particularly
> recommend looking into?

Well there are probably others in this thread who could give
better references, but I think these would be worth a look:

 "What Are Principal Typings and What Are They Good For?"
  http://citeseer.nj.nec.com/jim95what.html

and..

 System CT Homepage..
 http://www.dcc.ufmg.br/~camarao/CT/

System CT seems to use this ability to resolve unconstrained
overloading (not a sw engineering practice I would advocate,
but still interesting).

Regards
--
Adrian Hey      
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bnmus4$pek$1@newsreader2.netcologne.de>
Matthias Blume wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> 
>>In article <··············@tti5.uchicago.edu>, Matthias Blume
>><····@my.address.elsewhere> wrote:
>>
>>
>>>···@jpl.nasa.gov (Erann Gat) writes:
>>>
>>>
>>>>[ ... ] And then there's still the issue that if you forget
>>>>about the stub (easy to do on a large development) you only find out about
>>>>it at run time, not compile time.
>>>
>>>It is trivial: Just take the definition of the stub out of scope
>>
>>I think you misunderstood the point I was trying to make.  My premise is
>>that you have put in a stub and then forgotten about it.  Or if you think
>>that's too unrealistic, Bob puts in a stub, then gets fired (maybe because
>>he was working for Joe Marshall and made the mistake of admitting that he
>>couldn't prove the code correct ;-)  ).  Bill inherits the code.  How is
>>he supposed to know there's a stub in there?
> 
> 
> The stub is in one single prominent place.  Of course, if you don't
> know about that, you won't immediately find it.  That's the price to
> pay for this technique.
> 
> 
>>Obviously if you know there's a stub it's not hard to take it out in order
>>to get the compiler to produce an appropriate complaint, but if you know
>>there's a stub you no longer need the compiler to remind you.
> 
> 
> The point is that even though you might know that there are/were
> stubs, you might not remember exactly where you used them.  By getting
> rid of the definition of the stub, you enable the compiler to give you
> all the necessary reminders.
> 
> (Anyway, how is that different from the dynamically typed case?  It is
> certainly not worse.)

The statically typed language obviously requires you to put all test 
cases in "one single prominent place". I have explained in another post 
why this might be a disadvantage and how this doesn't incur a "price to 
pay for this technique" in a dynamically typed language.


Pascal
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m1wuao2aco.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> The statically typed language obviously requires you to put all test
> cases in "one single prominent place". I have explained in another
> post why this might be a disadvantage and how this doesn't incur a
> "price to pay for this technique" in a dynamically typed language.

Sorry, I'm not going to look up your other post.  This thread is way
out of control.  In any case, judging just from your comment (see
quote) this does not even seem to be related to what Erann and I were
discussing just now.  (What "test cases" are you talking about?)

Matthias
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <aeb7ff58.0310282041.5318fcc5@posting.google.com>
Matthias Blume <····@my.address.elsewhere> wrote in message news:<··············@tti5.uchicago.edu>...


> > > Pascal Costanza <········@web.de> writes:
> > No, for christ's sake! There are dynamically typed programs that you
> > cannot translate into statically typed ones!
> 
> Yes you can.  (In the worst case scenario you lose all the benefits of
> static typing.  But a translation is *always* possible. After all,
> dynamically typed programs are already statically typed in the trivial
> "one type fits all" sense.)

This is sophistry at its worst. If you "translate" a dynamically typed
program into a statically typed language by eliminating all the static
type checking, then WTF is the point of the static type checking?

It's also possible to "translate" any program into a turing machine
tape, so we should all start coding that way!

Introducing TuringTape(TM), the ultimate bondage and discipline
language!
From: Matthias Blume
Subject: Re: More static type fun.
Date: 
Message-ID: <m2wuao3ae8.fsf@hanabi-air.shimizu.blume>
·······@mediaone.net (Raffael Cavallaro) writes:

> Matthias Blume <····@my.address.elsewhere> wrote in message news:<··············@tti5.uchicago.edu>...
> 
> 
> > > > Pascal Costanza <········@web.de> writes:
> > > No, for christ's sake! There are dynamically typed programs that you
> > > cannot translate into statically typed ones!
> > 
> > Yes you can.  (In the worst case scenario you lose all the benefits of
> > static typing.  But a translation is *always* possible. After all,
> > dynamically typed programs are already statically typed in the trivial
> > "one type fits all" sense.)
> 
> This is sophistry at its worst. If you "translate" a dynamically typed
> program into a statically typed language by eliminating all the static
> type checking, then WTF is the point of the static type checking?

None, of course.  But that was not the point.  Pascal made an absolute
statement which is false precisely because of its absoluteness.
Notice that he did not say "cannot be translated into statically typed
ones without losing the benefits of static typing" or some such.  And
notice that this is only the worst-case scenario.  Most of the time a
faithful translation will benefit from the additional constraints
provided by static typing.

Matthias
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bnoioj$iqi$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:
> ·······@mediaone.net (Raffael Cavallaro) writes:
> 
> 
>>Matthias Blume <····@my.address.elsewhere> wrote in message news:<··············@tti5.uchicago.edu>...
>>
>>
>>
>>>>>Pascal Costanza <········@web.de> writes:
>>>>
>>>>No, for christ's sake! There are dynamically typed programs that you
>>>>cannot translate into statically typed ones!
>>>
>>>Yes you can.  (In the worst case scenario you lose all the benefits of
>>>static typing.  But a translation is *always* possible. After all,
>>>dynamically typed programs are already statically typed in the trivial
>>>"one type fits all" sense.)
>>
>>This is sophistry at its worst. If you "translate" a dynamically typed
>>program into a statically typed language by eliminating all the static
>>type checking, then WTF is the point of the static type checking?
> 
> 
> None, of course.  But that was not the point.  Pascal made an absolute
> statement which is false precisely because of its absoluteness.
> Notice that he did not say "cannot be translated into statically typed
> ones without losing the benefits of static typing" or some such.  And
> notice that this is only the worst-case scenario.  Most of the time a
> faithful translation will benefit from the additional constraints
> provided by static typing.

So you are suggesting that I should have said "There are dynamically 
typed programs that you cannot translate into statically typed ones 
without losing the benefits of static typing"? That doesn't make any 
sense at all.

What do you mean?


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bnn1qj$8n$1@newsreader2.netcologne.de>
Matthias Blume wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>The statically typed language obviously requires you to put all test
>>cases in "one single prominent place". I have explained in another
>>post why this might be a disadvantage and how this doesn't incur a
>>"price to pay for this technique" in a dynamically typed language.
> 
> 
> Sorry, I'm not going to look up your other post.  This thread is way
> out of control.  In any case, judging just from your comment (see
> quote) this does not even seem to be related to what Erann and I were
> discussing just now.  (What "test cases" are you talking about?)

It is related. Sorry, I don't feel like repeating myself again and again 
anymore. Especially in your case, because you are continually ignoring 
everything that doesn't fit your point of view.


Pascal
From: ·············@comcast.net
Subject: Re: More static type fun.
Date: 
Message-ID: <ptggbyht.fsf@comcast.net>
···@jpl.nasa.gov (Erann Gat) writes:

> My premise is that you have put in a stub and then forgotten about
> it.  Or if you think that's too unrealistic, Bob puts in a stub,
> then gets fired (maybe because he was working for Joe Marshall and
> made the mistake of admitting that he couldn't prove the code
> correct ;-) ).

As if anyone around here would.
From: Joachim Durchholz
Subject: Re: More static type fun.
Date: 
Message-ID: <bnnadn$s5$1@news.oberberg.net>
Erann Gat wrote:

> Matthias Blume <····@my.address.elsewhere> wrote:
> 
>>···@jpl.nasa.gov (Erann Gat) writes:
>>
>>>[ ... ] And then there's still the issue that if you forget
>>>about the stub (easy to do on a large development) you only find out about
>>>it at run time, not compile time.
>>
>>It is trivial: Just take the definition of the stub out of scope
> 
> I think you misunderstood the point I was trying to make.  My premise is
> that you have put in a stub and then forgotten about it.  Or if you think
> that's too unrealistic, Bob puts in a stub, then gets fired (maybe because
> he was working for Joe Marshall and made the mistake of admitting that he
> couldn't prove the code correct ;-)  ).  Bill inherits the code.  How is
> he supposed to know there's a stub in there?

I think Matthias meant that all the unimplemented routines are just 
aliased to one single "stub" routine, like this:
   stub :: a -> ()
   stub = error "Stub called"
To test how many stubs there are left in a program, just remove that 
definition and see the compiler complain. If that's a standard part of 
regression testing, leftover stubs will be found even if people forget 
about a stub they left in.

Just guessing and HTH
Jo
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-2810031929030001@192.168.1.51>
In article <···········@news.oberberg.net>, Joachim Durchholz
<·················@web.de> wrote:

> Erann Gat wrote:
> 
> > Matthias Blume <····@my.address.elsewhere> wrote:
> > 
> >>···@jpl.nasa.gov (Erann Gat) writes:
> >>
> >>>[ ... ] And then there's still the issue that if you forget
> >>>about the stub (easy to do on a large development) you only find out about
> >>>it at run time, not compile time.
> >>
> >>It is trivial: Just take the definition of the stub out of scope
> > 
> > I think you misunderstood the point I was trying to make.  My premise is
> > that you have put in a stub and then forgotten about it.  Or if you think
> > that's too unrealistic, Bob puts in a stub, then gets fired (maybe because
> > he was working for Joe Marshall and made the mistake of admitting that he
> > couldn't prove the code correct ;-)  ).  Bill inherits the code.  How is
> > he supposed to know there's a stub in there?
> 
> I think Matthias meant that all the unimplemented routines are just 
> aliased to one single "stub" routine, like this:
>    stub :: a -> ()
>    stub = error "Stub called"
> To test how many stubs there are left in a program, just remove that 
> definition and see the compiler complain. If that's a standard part of 
> regression testing, leftover stubs will be found even if people forget 
> about a stub they left in.
> 
> Just guessing and HTH
> Jo

I don't think that would work for several reasons.

1.  The objective is to be able to write the actual call in the caller
code, otherwise there's no point.  One might as well write "error"
directly.  Writing "stub" buys you nothing.

2.  Having all the unimplemented calls go to the same function wouldn't
type-check.

But we should probably let Matthias weigh in.

E.
From: Tomasz Zielonka
Subject: Re: More static type fun.
Date: 
Message-ID: <slrnbpusq5.g8a.t.zielonka@zodiac.mimuw.edu.pl>
Erann Gat wrote:
>> 
>> I think Matthias meant that all the unimplemented routines are just 
>> aliased to one single "stub" routine, like this:
>>    stub :: a -> ()
>>    stub = error "Stub called"
>> To test how many stubs there are left in a program, just remove that 
>> definition and see the compiler complain. If that's a standard part of 
>> regression testing, leftover stubs will be found even if people forget 
>> about a stub they left in.
>> 
>> Just guessing and HTH
>> Jo
> 
> I don't think that would work for several reasons.
> 
> 1.  The objective is to be able to write the actual call in the caller
> code, otherwise there's no point.  One might as well write "error"
> directly.  Writing "stub" buys you nothing.

I think the idea is to write something like this:

stub :: String -> a
stub msg = error $ "stub: " ++ msg

someFunction :: [Int] -> Int
someFunction = stub "someFunction"

otherFunction :: Double -> Double -> Double
otherFunction = stub "otherFunction"

aFunctionIHaveNoIdeaHowToTypeYet :: a
aFunctionIHaveNoIdeaHowToTypeYet = stub "aFunctionIHaveNoIdeaHowToTypeYet"

> 2.  Having all the unimplemented calls go to the same function wouldn't
> type-check.

If the function/value has the most general possible type - forall a. a -
then it would typecheck. Perhaps you don't understand how a polymorphic
type system works.

But that's not the point.

Best regards,
Tom

-- 
.signature: Too many levels of symbolic links
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: More static type fun.
Date: 
Message-ID: <pan.2003.10.29.07.39.23.886122@knm.org.pl>
On Tue, 28 Oct 2003 19:29:03 -0800, Erann Gat wrote:

> 1.  The objective is to be able to write the actual call in the caller
> code, otherwise there's no point.  One might as well write "error"
> directly.  Writing "stub" buys you nothing.

The call is written in the caller. Stub replaces the body of the
definition of the function which doesn't exist, not its call. The function
can even be given the correct type before it's implemented.

> 2.  Having all the unimplemented calls go to the same function wouldn't
> type-check.

It would. There was an error in the previous post though.

stub :: a   -- Means: any type (not only functions, you can stub any object)
stub = error "Not implemented yet"

In ML you can stub only functions this way, but all functions using one stub.

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Jesse Tov
Subject: Re: More static type fun.
Date: 
Message-ID: <slrnbput8k.94j.tov@tov.student.harvard.edu>
Marcin 'Qrczak' Kowalczyk <······@knm.org.pl>:
>> 2.  Having all the unimplemented calls go to the same function wouldn't
>> type-check.
> 
> It would. There was an error in the previous post though.
> 
> stub :: a   -- Means: any type (not only functions, you can stub any object)
> stub = error "Not implemented yet"

Let me elaborate, since we've been stuck on this point for too long
(ha!).  You can write:

    stub = error "Not implemented yet"

    some_function a = ...
                        if ...
                          then stub
                          else ...

    some_other_thing = ... stub ... ... not_yet ... where
      not_yet = stub

    bool_list   = True : False : stub
    string_list = "foo" : "bar" : stub

stub's type is "forall a. a", so I can use it in place of any expression
that I want.  It's no problem to use it at different types, meaning it's
used as a [Bool] in bool_list and as a [String] in string_list--that's
how the Hindley-Milner type system works.

If you remove the definition of stub above, then the compiler will
complain at all the places you've used it, so it's easy to find.  Yeah,
you do have to remember it's there, but of course you'd document that.

> In ML you can stub only functions this way, but all functions using one stub.

I just deluded myself into thinking I could get this to work for about
four minutes.

Jesse
-- 
"A hungry man is not a free man."      --Adlai Stevenson
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-2910031049450001@k-137-79-50-101.jpl.nasa.gov>
In article <······························@knm.org.pl>, Marcin 'Qrczak'
Kowalczyk <······@knm.org.pl> wrote:

> On Tue, 28 Oct 2003 19:29:03 -0800, Erann Gat wrote:
> 
> > 1.  The objective is to be able to write the actual call in the caller
> > code, otherwise there's no point.  One might as well write "error"
> > directly.  Writing "stub" buys you nothing.
> 
> The call is written in the caller. Stub replaces the body of the
> definition of the function which doesn't exist, not its call. The function
> can even be given the correct type before it's implemented.
> 
> > 2.  Having all the unimplemented calls go to the same function wouldn't
> > type-check.
> 
> It would. There was an error in the previous post though.
> 
> stub :: a   -- Means: any type (not only functions, you can stub any object)
> stub = error "Not implemented yet"
> 
> In ML you can stub only functions this way, but all functions using one stub.

My ignorance is showing here -- I presume then that ML and Haskell have
the semantics that all functions take only one argument, and that calling
a function with more than one argument passes a tuple, is that right?

In that case, yes, a single stub would work.  That diminishes the
magnitude of my objection from being a major problem to a minor
annoyance.  But we should probably put this branch of the discussion on
hold until I have a chance to go learn ML and/or Haskell so that I
actually know what I'm talking about.  Which one should I start with (or
is there something else I should look at first)?

E.
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: More static type fun.
Date: 
Message-ID: <pan.2003.10.29.20.18.26.371268@knm.org.pl>
On Wed, 29 Oct 2003 10:49:45 -0800, Erann Gat wrote:

>> In ML you can stub only functions this way, but all functions using one stub.
> 
> My ignorance is showing here -- I presume then that ML and Haskell have
> the semantics that all functions take only one argument, and that calling
> a function with more than one argument passes a tuple, is that right?

In SML this is the common way to express multi-argument functions. The
other is currying: a 2-argument function is expressed as a function which
takes the first argument and returns the function which takes the second
argument and returns the final result. In Haskell and OCaml currying is
the idiomatic way to express multi-argument functions.

A stub defined as (OCaml syntax):
   let stub _ = failwith "Not implemented"
works for both ways.
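(To make the two styles concrete, a small Haskell sketch with illustrative names, not from the thread: addT takes its arguments as a tuple, SML-style, while addC is curried, Haskell/OCaml-style; only the curried form supports partial application.)

```haskell
-- Tupled style, common in SML: one argument that happens to be a pair
addT :: (Int, Int) -> Int
addT (x, y) = x + y

-- Curried style, idiomatic in Haskell and OCaml:
-- a function returning a function
addC :: Int -> Int -> Int
addC x y = x + y

-- Partial application only works in the curried style:
increment :: Int -> Int
increment = addC 1

main :: IO ()
main = do
  print (addT (2, 3))   -- 5
  print (addC 2 3)      -- 5
  print (increment 41)  -- 42
```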

> But we should probably put this branch of the discussion on
> hold until I have a chance to go learn ML and/or Haskell so that I
> actually know what I'm talking about.  Which one should I start with (or
> is there something else I should look at first)?

SML and OCaml are quite similar to imperative languages and you can
easily program imperatively too. Haskell is weird from that perspective:
imperative programming is tricky, but it has an IMHO more convenient type
system (because of type classes). I would start with Haskell. OCaml is
very similar to SML and IMHO more pleasant; it has a very good implementation
and other advanced features in the type system, so I would not bother with
SML at all if you have OCaml.

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Joachim Durchholz
Subject: Re: More static type fun.
Date: 
Message-ID: <bnptr6$a0h$1@news.oberberg.net>
Erann Gat wrote:

> My ignorance is showing here -- I presume then that ML and Haskell have
> the semantics that all functions take only one argument, and that calling
> a function with more than one argument passes a tuple, is that right?

I don't know about ML, but Haskell most definitely has only 
single-parameter functions.
A two-parameter function is actually a function that takes a single 
parameter and returns a one-parameter function, which then takes the 
remaining parameter.

In other words,
   f x y
is equivalent to
   (f x) y
where the result of (f x) is a function that takes a single parameter.

... well, I lied - if I interpret the type rules of Haskell correctly, 
the above code doesn't nail the number of parameters down completely.
((f x) y) might be a function that can take yet another parameter; the 
type system won't care unless you write code that immediately depends on 
the exact number of parameters (actually it's a bit difficult to write 
such code, the easiest way would be a type annotation).
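(That equivalence can be checked directly; a sketch with hypothetical names: f 1 2 3 parses as ((f 1) 2) 3, and (f 1) 2 is itself a one-argument function.)

```haskell
f :: Int -> Int -> Int -> Int
f x y z = x * 100 + y * 10 + z

-- (f 1) 2 is a function still waiting for its last argument
g :: Int -> Int
g = (f 1) 2

main :: IO ()
main = print (f 1 2 3 == g 3)  -- True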

> In that case, yes, a single stub would work.  That diminishes the
> magnitude of my objection from being a major problem to a minor
> annoyance.  But we should probably put this branch of the discussion on
> hold until I have a chance to go learn ML and/or Haskell so that I
> actually know what I'm talking about.  Which one should I start with (or
> is there something else I should look at first)?

The usual advice is "learn both". Functional programming with 
Hindley-Milner type systems is still a large design space, and much of 
the spectrum is covered by these languages.
(The main difference being that Haskell is lazy-by-default and ML is 
strict-by-default - this leads to marked differences in programming 
styles because different idioms are efficient.)

Regards,
Jo
From: Jacques Garrigue
Subject: Re: More static type fun.
Date: 
Message-ID: <l2brrzfk5m.fsf@suiren.i-did-not-set--mail-host-address--so-shoot-me>
Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> writes:

> > 2.  Having all the unimplemented calls go to the same function wouldn't
> > type-check.
> 
> It would. There was an error in the previous post though.
> 
> stub :: a   -- Means: any type (not only functions, you can stub any object)
> stub = error "Not implemented yet"
> 
> In ML you can stub only functions this way, but all functions using one stub.

Actually, in ocaml 3.07 you can define it:

        # let stub = Obj.magic 0;;
        val stub : 'a = <poly>

Of course this is completely unsafe.
But we all agree that this is about letting the program typecheck, and that
we are going to remove it later.

(And what happens if you actually run it?)
(Hopefully it is not that bad: all types either contain 0 as a
meaningful value, or should generate a segmentation fault when you try
to use it)
(Oops, there is one exception, polymorphic variants)

---------------------------------------------------------------------------
Jacques Garrigue      Kyoto University     garrigue at kurims.kyoto-u.ac.jp
		<A HREF=http://wwwfun.kurims.kyoto-u.ac.jp/~garrigue/>JG</A>
From: ··········@ii.uib.no
Subject: Re: More static type fun.
Date: 
Message-ID: <egekwwl81e.fsf@sefirot.ii.uib.no>
···@jpl.nasa.gov (Erann Gat) writes:

>> I think Matthias meant that all the unimplemented routines are just 
>> aliased to one single "stub" routine, like this:

Cool, I never thought of that!

> I don't think that would work for several reasons.

Like this?

  Prelude> let stub = error "Undefined function, time to implement it?"
  Prelude> let branch1 = stub
  Prelude> let branch2 x y z = stub -- just to make them different types
 
  <interactive>:1: Warning: Defined but not used: x, y, z
  Prelude> let toplevel x = case x of { True -> branch1; False -> branch2 1 2 3 }
  Prelude> toplevel True
  *** Exception: Undefined function, time to implement it?
  Prelude> toplevel False
  *** Exception: Undefined function, time to implement it?

> 1.  The objective is to be able to write the actual call in the caller
> code, otherwise there's no point.  One might as well write "error"
> directly.  Writing "stub" buys you nothing.

I don't get it, surely the call is in the caller code?  The toplevel
function knows only about branch1 and branch2?  

> 2.  Having all the unimplemented calls go to the same function wouldn't
> type-check.

  Prelude> :i branch1
  -- branch1 is a variable, defined at <interactive>:1
  branch1 :: forall a. a
  Prelude> :i branch2
  -- branch2 is a variable, defined at <interactive>:1
  branch2 :: forall a t_a12q t_a12r t_a12s.
           t_a12s -> t_a12r -> t_a12q -> a

Why not?

I think you should actually try things out a bit before you jump to
conclusions.  Or explain what you are unhappy with in the above code. 

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-2910031105230001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@sefirot.ii.uib.no>, ··········@ii.uib.no wrote:

> > 2.  Having all the unimplemented calls go to the same function wouldn't
> > type-check.

> Why not?

I thought that at the very least you'd need different stubs for calls of
different arities, but I believe I was wrong about that.

> I think you should actually try things out a bit before you jump to
> conclusions.  Or explain what you are unhappy with in the above code. 

I tried.  I went to www.haskell.org and downloaded Hugs, which billed
itself as an interpreter for Haskell that was specifically designed for
teaching.

Not to put too fine a point on it, but Hugs sucks.  It reinforced all of
my negative prejudices about statically typed languages.  You can't even
enter function definitions into the "interpreter".  I tried running some
code snippets posted as part of this discussion and none of them worked. 
(The responses I got when I pointed this out went something like, "Why are
you nitpicking on trivial little errors?"  Well, it's because I don't know
the language yet, so I don't know how to spot the trivial little errors.)

If this is the best that Haskell has to offer I'll stick with Lisp and
Python, thank you very much.

Now, I am given to understand that Hugs is not the best that Haskell has
to offer, but there is only so much time I have to devote to this.  I
offer this not so much as an indictment of Haskell, but as feedback to the
Haskell community about the experience of one person who came to Haskell
with a pretty open mind.  I have not given up on Haskell yet, but my
initial experience definitely did not impress me.

E.
From: Mark Carroll
Subject: Re: More static type fun.
Date: 
Message-ID: <gxc*pIa6p@news.chiark.greenend.org.uk>
In article <····················@k-137-79-50-101.jpl.nasa.gov>,
Erann Gat <···@jpl.nasa.gov> wrote:
(snip)
>I tried.  I went to www.haskell.org and downloaded Hugs, which billed
>itself as an interpreter for Haskell that was specifically designed for
>teaching.

Given that description, I'd look at Helium[1] if you don't need
typeclasses.

[1] http://www.cs.uu.nl/helium/

>Not to put too fine a point on it, but Hugs sucks.  It reinforced all of
>my negative prejudices about statically typed languages.  You can't even
>enter function definitions into the "interpreter".  I tried running some

Oh, that's a real pain, considering Hugs is meant to be an
interpreter! Even GHC[2] lets you do that:

xerxes:~$ ghci
   ___         ___ _
  / _ \ /\  /\/ __(_)
 / /_\// /_/ / /  | |      GHC Interactive, version 6.0.1, for Haskell 98.
/ /_\\/ __  / /___| |      http://www.haskell.org/ghc/
\____/\/ /_/\____/|_|      Type :? for help.

Loading package base ... linking ... done.
Prelude> let fib = 1 : 1 : [ a+b | (a,b) <- zip fib (tail fib) ]
Prelude> take 10 fib
[1,1,2,3,5,8,13,21,34,55]
Prelude> 

...etc.

[2] http://www.haskell.org/ghc/

(snip)
>Now, I am given to understand that Hugs is not the best that Haskell has
>to offer, but there is only so much time I have to devote to this.  I
>offer this not so much as an indictment of Haskell, but as feedback to the
>Haskell community about the experience of one person who came to Haskell
>with a pretty open mind.  I have not given up on Haskell yet, but my
>initial experience definitely did not impress me.

That's a shame. ): I have no idea why Hugs was so bad: I don't use it,
but I know it's meant to make it easy to play with things.

-- Mark
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <24q571-7r3.ln1@ID-7776.user.dfncis.de>
Erann Gat <···@jpl.nasa.gov> wrote:

> I tried.  I went to www.haskell.org and downloaded Hugs, which billed
> itself as an interpreter for Haskell that was specifically designed for
> teaching.

Hugs is a pure interpreter, and is not as actively developed as GHC
is. 

> Not to put too fine a point on it, but Hugs sucks.  

Yes. It's fine for experiments, but not for real development.

> It reinforced all of my negative prejudices about statically typed
> languages.  You can't even enter function definitions into the
> "interpreter".

Yes. In GHC, you can (you have to start them with "let".)

The toplevel is nice for experiments, but it doesn't work well when
you have a larger amount of code. I'd suggest editing a file, and
using the :l and :r commands. It's not considerably slower than using
the toplevel directly.

> I tried running some code snippets posted as part of this discussion
> and none of them worked.

I have run all the code snippets I made successfully with GHC. Sometimes
I have omitted some imports, but everything else should work, unless
I introduced typos during cut and paste.

> Now, I am given to understand that Hugs is not the best that Haskell has
> to offer, but there is only so much time I have to devote to this.  I
> offer this not so much as an indictment of Haskell, but as feedback to the
> Haskell community about the experience of one person who came to Haskell
> with a pretty open mind.  I have not given up on Haskell yet, but my
> initial experience definitely did not impress me.

Yes. It's a known problem that there is no good IDE, and this
turns people off. Haskell is a comparatively young language, and comes
from academics. It's hard to get funding to develop an IDE; most of
the development goes into the language. There are also people who
complain that there is no good introduction, and the tutorials are too
difficult (personally, I didn't have any trouble with them when I
learned Haskell). The Helium project tries to improve at least this 
situation a bit.

Your critique is completely justified. I don't know any good solution
for this problem.

If you can spare a little bit more time, keep trying. Have a look at the
tutorials first.

- Dirk
From: ··········@ii.uib.no
Subject: Re: More static type fun.
Date: 
Message-ID: <egad7jw29f.fsf@vipe.ii.uib.no>
···@jpl.nasa.gov (Erann Gat) writes:

> Not to put too fine a point on it, but Hugs sucks.  It reinforced all of
> my negative prejudices about statically typed languages.  

I think GHC and its interactive version GHCi is better, in
particular if you have a reasonably modern computer.  You should
perhaps also consider Helium, which is also teaching oriented, and
consequently simplifies the type system a bit (I haven't tried it, but
it looks like it goes a long way toward providing comprehensible and
suggestive error messages for beginners).

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3f9fcb09$1@news.unimelb.edu.au>
···@jpl.nasa.gov (Erann Gat) writes:

>Joachim Durchholz <·················@web.de> wrote:
>
>> I think Matthias meant that all the unimplemented routines are just 
>> aliased to one single "stub" routine, like this:
>>    stub :: a -> ()
>>    stub = error "Stub called"
>> To test how many stubs there are left in a program, just remove that 
>> definition and see the compiler complain. If that's a standard part of 
>> regression testing, leftover stubs will be found even if people forget 
>> about a stub they left in.
>
>I don't think that would work for several reasons.
>
>1.  The objective is to be able to write the actual call in the caller
>code, otherwise there's no point.  One might as well write "error"
>directly.  Writing "stub" buys you nothing.

Calling "stub" rather than "error" buys you the ability to comment
out the declaration of "stub" and thus have the compiler list for you
all the stubs that you haven't yet implemented.

Also, there is documentation value: calling "stub" rather than "error"
makes it clearer to the reader that this code is just not yet implemented,
rather than it being a case of e.g. some internal invariant being broken.

>2.  Having all the unimplemented calls go to the same function wouldn't
>type-check.

No, it would, so long as you give it the right type declaration:

	stub :: a

This is fully polymorphic -- "a" is a type variable here --
so it can have any type.
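To make this concrete, here is a small self-contained sketch of the technique (the helper names are made up for illustration):

```haskell
-- A fully polymorphic stub: the type variable 'a' can be instantiated
-- to any type, so every unimplemented function can alias this one
-- definition.
stub :: a
stub = error "Stub called"

-- Two differently-typed unimplemented functions, both aliased to stub;
-- both type-check because each use site instantiates 'a' differently.
parseConfig :: String -> [(String, String)]
parseConfig = stub

averageGrade :: [Int] -> Double
averageGrade = stub

-- Implemented code that calls neither stub still compiles and runs;
-- a stub raises its error only if it is actually evaluated.
double :: Int -> Int
double x = x * 2

main :: IO ()
main = print (double 21)
```

Deleting (or commenting out) the definition of stub then makes the compiler list parseConfig and averageGrade as undefined, which is the regression-testing trick described above.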

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Joachim Durchholz
Subject: Re: More static type fun.
Date: 
Message-ID: <bnop4k$pi3$1@news.oberberg.net>
Erann Gat wrote:

>>   stub :: a -> ()
>>   stub = error "Stub called"
> 
> I don't think that would work for several reasons.
> 
> 1.  The objective is to be able to write the actual call in the caller
> code, otherwise there's no point.  One might as well write "error"
> directly.  Writing "stub" buys you nothing.

Yes it does - "error" might occur in shipping code, "stub" not.

> 2.  Having all the unimplemented calls go to the same function wouldn't
> type-check.

But it would. "stub" doesn't make use of its arguments, so they can be 
anything.

Regards,
Jo
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3f9fa637$1@news.unimelb.edu.au>
Joachim Durchholz <·················@web.de> writes:

>I think Matthias meant that all the unimplemented routines are just 
>aliased to one single "stub" routine, like this:
>   stub :: a -> ()
>   stub = error "Stub called"

Actually just

   stub :: a

not

   stub :: a -> ()

But yes, I'm pretty sure that is what Matthias meant.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <qmb471-vt.ln1@ID-7776.user.dfncis.de>
Erann Gat <···@jpl.nasa.gov> wrote:
> In article <··············@ID-7776.user.dfncis.de>, Dirk Thierbach
> <··········@gmx.de> wrote:

>> Can you give an example? 

> I thought I did.

I thought I had explained that this is not a problem :-)

>> But stubs are easy: You just write

>> stub = error "FIXME"

> I'm a little out of my element here since the only statically typed
> languages I know are C/C++/Pascal/Java and not ML/Haskell/etc. but I doubt
> that it's as simple as that. 

It is. It works because the stub is assigned a polymorphic type: Type
inference will figure out that 

stub :: a

for any type a (type variables are implicitly universally quantified).
So wherever you use the stub, the type checker will figure out what
type it ought to have, and then see that it can replace 'a' with the
required type. So it checks. That's what polymorphic types are for.

> Certainly in C++ providing stubs can become a major hassle,
> especially in cases where you're trying to refactor someone else's
> code.

Yes. That's why there is a big difference between those type systems
and "functionally statically typed" ones.

> And then there's still the issue that if you forget about the stub
> (easy to do on a large development) you only find out about it at
> run time, not compile time.
 
You can always search for "FIXME" if you want to find out if there
are any stubs left.

You can also let the compiler help you: For example, in a large
project, write something like

fixme = error "FIXME"

and put it in the module Stubs. Everyone agrees to write his stubs in the
following way:

import Stubs

stub1 = fixme
stub2 = fixme
...

Then if you want to check whether there are stubs left in somebody's
code, you comment out the import clause, and the compiler will tell
you each occurrence.

- Dirk
From: ··········@ii.uib.no
Subject: Re: More static type fun.
Date: 
Message-ID: <egism8l95l.fsf@sefirot.ii.uib.no>
···@jpl.nasa.gov (Erann Gat) writes:

>> Raffael Cavallaro <·······@mediaone.net> wrote:

>>> I think the lisp camp would say that statically typed languages get
>>> the order of the development cycle backwards. They force you to make
>>> decisions about implementation detail *right from the start* before
>>> you've had a chance to sketch out your code. 

They may say it, but they would be wrong.  This is what type inference
does for you, this is what separates HM type systems from the inferior
stuff supplied with Java and C and whatnot.

>>> Moreover, they interfere with the interactive top-level design of
>>> software right at the very point where it is most important that the
>>> language/compiler *not* get in the way - at the beginning, when you
>>> are experimenting with new ideas, etc.

For Haskell, Hugs, GHCi and NHC provide interactive top levels.
I'll admit they may not be as good as the ones for Lisp, but I don't
see why they couldn't be in principle.

> Furthermore, you might be tempted to try to "fool" the compiler.  For
> example, suppose I write:

> (defun top-level ()
>   (with-complicated-setup
>     (if (condition)
>       (some-hairy-computation)
>       (some-other-hairy-computation))))

a statically typed language will not allow me to run top-level until I
> have defined both hairy computations.

You can always type (modulo minuses, which aren't acceptable in
identifiers)
        
        some-other-hairy-computation = undefined

or

        some-other-hairy-computation = 
            error "TODO: implement another hairy computation"

which will give you normal dynamic typing behaviour -- i.e. a runtime
error.  But if you structure it as:

   toplevel = let condition = ...
        in case condition of True -> some-hairy-computation
                -- (optional comment): remember to implement for False later

the compiler will let you run it, but complain that you haven't
covered all cases (yet).  Joe or Bill or whoever is hopefully
observant enough to notice the warning each time he compiles the code,
and will fix it later on.  (This also works well if the condition is a
complex data structure, of course.)
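A runnable sketch of that second structure, in valid Haskell spelling (the function names are invented, and the warning flag is the modern GHC spelling):

```haskell
{-# OPTIONS_GHC -Wincomplete-patterns #-}

-- Hypothetical stand-in for the one computation that is implemented.
someHairyComputation :: Int
someHairyComputation = 42

-- The False branch is deliberately missing: GHC still compiles this,
-- but warns that the patterns are non-exhaustive. Evaluating
-- topLevel False would fail only at run time.
topLevel :: Bool -> Int
topLevel condition =
    case condition of
        True -> someHairyComputation

main :: IO ()
main = print (topLevel True)
```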

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <aeb7ff58.0310292031.5c3492d1@posting.google.com>
··········@ii.uib.no wrote in message news:<··············@sefirot.ii.uib.no>...
> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> >> Raffael Cavallaro <·······@mediaone.net> wrote:
>  
> >>> I think the lisp camp would say that statically typed languages get
> >>> the order of the development cycle backwards. They force you to make
> >>> decisions about implementation detail *right from the start* before
> >>> you've had a chance to sketch out your code. 
> 
> They may say it, but they would be wrong.  This is what type inference
> does for you, this is what separates HM type systems from the inferior
> stuff supplied with Java and C and whatnot.

No, what type inference buys for you is complaints from the compiler
when you try to add a number to a list of symbols (gasp!). I guess you
have to have used lisp once or twice to see how funny it is that the
compiler should give a damn that I'm adding a number, or a string, or
another list (of any sort of elements, of course) to a list that
happens, at present, to contain symbols.

Again, and this is getting kind of boring at this point, the problem
with statically typed languages is that they are backwards. They don't
default to everything-is-ok-use-whatever-you-want-anywhere at the
beginning and then let the programmer add stricter and stricter
typing when and where the programmer is ready. Instead, they require
you to warn the dumb compiler that, yes, here you *really* do want to
put a number into that list with symbols.

This sort of explicit permission requesting by the programmer is
useful later in development, when things are nailed down, and we are
seeking a clean implementation. But such bookkeeping merely gets in
the way early on, because we *know* that these representations will
*not* survive to the final version.
From: Jesse Tov
Subject: Re: More static type fun.
Date: 
Message-ID: <slrnbq2lvu.fvl.tov@tov.student.harvard.edu>
Raffael Cavallaro <·······@mediaone.net>:
> No, what type inference buys for you is complaints from the compiler
> when you try to add a number to a list of symbols (gasp!). I guess you
> have to have used lisp once or twice to see how funny it is that the
> compiler should give a damn that I'm adding a number, or a string, or
> another list (of any sort of elements, of course) to a list that
> happens, at present, to contain symbols.

When I learned Standard ML, I thought it was pretty funny too.  Scheme
let me cons anything I wanted--why wouldn't ML?  It was weird and
annoying.  But I persevered, and after a few days it didn't bother me
any more.  Pretty soon, it made a lot of sense, and today I'd hate to
live without it.  That's how language features are in general--people
who aren't familiar think they're weird, useless, and possibly harmful;
people who get used to them think everyone else is missing out.

Have you given it an honest try for a few days?  I don't care if you do,
but those of us who like the Hindley-Milner type system and its variants
don't like them just out of ignorance.

Jesse
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3f9fa39f$1@news.unimelb.edu.au>
···@jpl.nasa.gov (Erann Gat) writes:

>suppose I write:
>
>(defun top-level ()
>  (with-complicated-setup
>    (if (condition)
>      (some-hairy-computation)
>      (some-other-hairy-computation))))
>
>a statically typed language will not allow me to run top-level until I
>have defined both hairy computations.  If all I care about at the moment
>is one of them I might be tempted to write a stub for the other just to
>make the compiler happy.  Now my program is really broken (because the
>stub doesn't do the right thing) but I can no longer rely on the system to
>tell me about it, at least not at compile time.
>
>Lisp will very happily compile and run top-level even if one of the hairy
>computations is undefined.  What's more, most implementations will remind
>me at compile time (with a warning, not an error) that it's undefined, so
>that relieves me of the burden of keeping track of what is really finished
>and what is stubbed out to make the compiler happy.

This is not unique to dynamically typed languages.  With the Mercury
compiler, you can write a stub by just giving the type declaration and
using the `--allow-stubs' compiler option.  You can then go ahead and
run the code without giving a definition.  If the procedure gets called,
then it will throw an exception at runtime.  Just as with your favourite
Lisp implementations, the Mercury compiler will remind you about the stub
being undefined by issuing a warning at compile time.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <aeb7ff58.0310281500.43e671b8@posting.google.com>
Dirk Thierbach <··········@gmx.de> wrote in message news:<··············@ID-7776.user.dfncis.de>...
> > Moreover, they interfere with the interactive top-level design of
> > software right at the very point where it is most important that the
> > language/compiler *not* get in the way - at the beginning, when you
> > are experimenting with new ideas, etc.
> 
> Why?

See Erann's reply also. (But to chime in) because the compiler forces
you to deal with a problem you don't have yet. Since I haven't chosen
a real data representation yet, why should I take *any* time to
placate the compiler's demand that my types are consistent?

> All the arguments you give are fine for "ad-hoc statically typed"
> languages, but they don't apply to "functionally statically typed"
> languages.

They most certainly do. Are you saying that I can use ghci without
making any effort to inform the compiler of my type intentions at the
initial stages? If I have to take *any* time at the initial stages to
inform the compiler about the types of data representations, then that
time is completely wasted. In fact, it is doubly wasted, because it is
also distracting me from my real task of sketching out my design *in
code*. Only later, when I've actually chosen my final data
representations, should I be making any effort at all to tell the
compiler how to type constrain my code.

I find that one of the real annoyances is the lack of really general
polymorphic operators. Functions like cons that would work on any
element in lisp, only work on a single type at a time (characters, or
numbers, for example, but not both) in ghci.

raffaelc$ ghci
   ___         ___ _
  / _ \ /\  /\/ __(_)
 / /_\// /_/ / /  | |      GHC Interactive, version 6.0.1, for Haskell 98.
/ /_\\/ __  / /___| |      http://www.haskell.org/ghc/
\____/\/ /_/\____/|_|      Type :? for help.

Loading package base ... linking ... done.
Prelude> 't':5:[]

<interactive>:1:
    No instance for (Num Char)
      arising from the literal `5' at <interactive>:1
    In the first argument of `(:)', namely `5'
    In the second argument of `(:)', namely `5 : []'
    In the definition of `it': it = 't' : (5 : [])

which in lisp would be:
(list #\t 5)
or, more literally:
(cons #\t (cons 5 nil))

Why can't I make lists with different types of elements, especially at
the early stages of development? Why should I have to deal with this
stuff just to make some overly rigid compiler happy?
From: Brian McNamara!
Subject: Re: More static type fun.
Date: 
Message-ID: <bnncrt$93l$1@news-int.gatech.edu>
·······@mediaone.net (Raffael Cavallaro) once said:
>I find that one of the real annoyances is the lack of really general
>polymorphic operators. Functions like cons that would work on any
>element in lisp, only work on a single type at a time (characters, or
>numbers, for example, but not both) in ghci.

[example showing you can't do "(list 't 5)" in Haskell]

>Why can't I make lists with different types of elements, especially at
>the early stages of development? Why should I have to deal with this
>stuff just to make some overly rigid compiler happy?

IMO, it's not "just to make the compiler happy".  At some point I am
presumably going to need to consume this list of data.  How will I be
able to consume it?  There are a few possibilities:

 - If there are always going to be two elements, a character and an 
   integer, then I use a tuple instead of a list.
      ( Char, Int )

 - If I just need to be able to print (more generally, "foo") all these 
   elements later, then I create a list of Showables (Fooables).
      data Showable = forall a. Show a => Showable a
         -- (spelled with the ExistentialQuantification extension)
      [ Showable ]

 - If each element will be either a character or an integer, but nothing 
   else, then I use a list of that type.
      [ Either Char Int ]
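The third option can be sketched as a complete program; the names here are illustrative:

```haskell
-- The mixed list ('t' and 5) from the ghci transcript, expressed as a
-- list of Either so that both element kinds share one list type.
mixed :: [Either Char Int]
mixed = [Left 't', Right 5]

-- Consumers must handle both alternatives; the type checker
-- enforces that neither case is silently forgotten.
render :: Either Char Int -> String
render (Left c)  = [c]
render (Right n) = show n

main :: IO ()
main = putStrLn (unwords (map render mixed))
```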

To someone with "static typing on the brain", the "unreasonable"
possibility is:

 - I don't know what's going on, so I will stuff any type of data I like
   into the list and assume that when someone else consumes it they
   will somehow know to treat the first element like a character, and
   the second element like an integer, etc.

I suppose the "dynamic typists" think it's often completely
unreasonable to go with one of the first three possibilities above,
because they consider it some kind of "premature commitment".  To a
"static typist", it's not a matter of "premature commitment", but
rather a matter of "program sanity".  The structure of the data guides
the computation; unstructured data is anathema.  

Furthermore, "commitment" is an unreasonable term.  Whether you're a
static-typer or a dynamic-typer, data abstraction still applies.  At the
end of the day, hopefully neither camp is writing code like

   printGrades :: ( [Char], [Int] ) -> IO ()
   printGrades (name,grades) = 
      do print (name ++ "'s grades are ")
         mapM_ print grades

but rather code like

   newtype StudentInfo = SI ( String, [Int] )

   name   :: StudentInfo -> String
   name   (SI (name,_))   = name

   grades :: StudentInfo -> [Int]
   grades (SI (_,grades)) = grades

   printGrades :: StudentInfo -> IO ()
   printGrades stud = 
      do print ((name stud) ++ "'s grades are ")
         mapM_ print (grades stud)

The point is, it is data abstraction (and not a choice of typing
discipline) which staves off premature commitment to a data structure.
Accessor functions (like "name" and "grades" above) shield our
algorithms from having to know about the structure of our data.  As a
result, we are always free to change the data representation,
assured that the number of concomitant changes will be small (at most
one for each accessor) and local (confined to the accessors).

-- 
 Brian M. McNamara   ······@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <gbd471-hf1.ln1@ID-7776.user.dfncis.de>
Raffael Cavallaro <·······@mediaone.net> wrote:
> Dirk Thierbach <··········@gmx.de> wrote in message news:<··············@ID-7776.user.dfncis.de>...

> Since I haven't chosen a real data representation yet, why should I
> take *any* time to placate the compiler's demand that my types are
> consistent?

If you haven't chosen any data representation, you cannot act on the
data. So you can happily write programs that treat this data as a
black box. The type checker will assign this data a polymorphic
type, and you can move it around any way you want without ever
telling the type checker what data you need.

> Are you saying that I can use ghci without making any effort to
> inform the compiler of my type intentions at the initial stages?

Yes.

> Only later, when I've actually chosen my final data representations,
> should I be making any effort at all to tell the compiler how to
> type constrain my code.

Exactly. 

> I find that one of the real annoyances is the lack of really general
> polymorphic operators. Functions like cons that would work on any
> element in lisp, only work on a single type at a time (characters, or
> numbers, for example, but not both) in ghci.

No, they don't. Cons will work on a list of any type, as long as it is
the same (complex) type. It doesn't matter at this point if that type
is characters, or characters and integers, or an arbitrary
s-expression. You can decide that later on, when you know more about
your data. 

As soon as you decide "hm, I want characters here, but at the same
time I want something else" you write the data type, and you extend
it if you want more types.
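That "write the data type, extend it later" step might look like this (the Item type and its constructors are made up for the example):

```haskell
-- First version held only characters and integers. When strings were
-- later needed, the IStr constructor was added; the compiler then
-- points at every case analysis that must be extended to cover it.
data Item = IChar Char | IInt Int | IStr String

describe :: Item -> String
describe (IChar c) = "char " ++ [c]
describe (IInt n)  = "int " ++ show n
describe (IStr s)  = "string " ++ s

main :: IO ()
main = mapM_ (putStrLn . describe) [IChar 't', IInt 5, IStr "five"]
```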

- Dirk
From: Matthew Danish
Subject: Re: More static type fun.
Date: 
Message-ID: <20031029092136.GV1454@mapcar.org>
On Wed, Oct 29, 2003 at 10:20:16AM +0100, Dirk Thierbach wrote:
> As soon as you decide "hm, I want characters here, but at the same
> time I want something else" you write the data type, and you extend
> it if you want more types.

Can you extend it if it is in a library and you don't have the sources?

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <7um471-vn3.ln1@ID-7776.user.dfncis.de>
Matthew Danish <·······@andrew.cmu.edu> wrote:
> On Wed, Oct 29, 2003 at 10:20:16AM +0100, Dirk Thierbach wrote:
>> As soon as you decide "hm, I want characters here, but at the same
>> time I want something else" you write the data type, and you extend
>> it if you want more types.

> Can you extend it if it is in a library and you don't have the sources?

Yes.

- Dirk
From: Raffael Cavallaro
Subject: Re: More static type fun.
Date: 
Message-ID: <aeb7ff58.0310292003.74c7a3d6@posting.google.com>
Dirk Thierbach <··········@gmx.de> wrote in message news:<··············@ID-7776.user.dfncis.de>...
> No, they don't. Cons will work on a list of any type, as long as it is
> the same (complex) type. It doesn't matter at this point if that type
> is characters, or characters and integers, or an arbitrary
> s-expression. You can decide that later on, when you know more about
> your data. 

This is simply untrue. Didn't you see the post you're replying to? 

Remember, *without additional effort*. If I have to tell the compiler
"here, in this particular place, I mean to use a list that has some
elements that are numbers, and some that are symbols," that is
additional effort. If I later want to add strings to this list, I now
have to inform the compiler of that as well. Again, more effort whose
only purpose is to placate a dumb compiler with regard to a question
of absolutely no importance: what will be the contents of a data
representation that I *know* I will throw away later in the design
process.

Out of the box, ghci does *not* let you use cons to create lists of
different types of elements. Again, here is the simple proof:

raffaelc$ ghci
   ___         ___ _
  / _ \ /\  /\/ __(_)
 / /_\// /_/ / /  | |      GHC Interactive, version 6.0.1, for Haskell 98.
/ /_\\/ __  / /___| |      http://www.haskell.org/ghc/
\____/\/ /_/\____/|_|      Type :? for help.

Loading package base ... linking ... done.
Prelude> 't':5:[]

<interactive>:1:
    No instance for (Num Char)
      arising from the literal `5' at <interactive>:1
    In the first argument of `(:)', namely `5'
    In the second argument of `(:)', namely `5 : []'
    In the definition of `it': it = 't' : (5 : [])

which in lisp is trivially:
(cons #\t (cons 5 nil)) --> (#\t 5)
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <kdg471-322.ln1@ID-7776.user.dfncis.de>
Raffael Cavallaro <·······@mediaone.net> wrote:
> I find that one of the real annoyances is the lack of really general
> polymorphic operators. Functions like cons that would work on any
> element in lisp, only work on a single type at a time (characters, or
> numbers, for example, but not both) in ghci.

It just occurred to me that one of the communication problems may be
that Lispers are used to using lists for everything.

If you have a finite number of things you want to work with, you
put them in a tuple. In a tuple, each item can have a different type,
but you cannot work uniformly on all the elements.

If you have a possibly unlimited number of things that you want
to process all in the same way, you put them into a list. Since every
element is processed in the same way, it makes sense for the list
to only contain elements of the same type.

If you have a possibly unlimited number of things that you want to
process in different ways depending on what they are, you use a
list of a datatype. In Lisp, you would have to do a dynamic type check
on the contents of the list as well, because otherwise you don't know
how to process it. (Unless you use library functions that do the check
for you, but that would be the same thing.)

- Dirk
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3f9fa592$1@news.unimelb.edu.au>
·······@mediaone.net (Raffael Cavallaro) writes:

>Why can't I make lists with different types of elements,

You can -- they're called "tuples" ;-)  Note that the syntax is slightly
different than for ordinary lists whose elements are all the same type.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Joachim Durchholz
Subject: Re: More static type fun.
Date: 
Message-ID: <bnpuc9$a8q$1@news.oberberg.net>
Fergus Henderson wrote:

> ·······@mediaone.net (Raffael Cavallaro) writes:
> 
>>Why can't I make lists with different types of elements,
> 
> You can -- they're called "tuples" ;-)  Note that the syntax is slightly
> different than for ordinary lists whose elements are all the same type.

These aren't lists - you typically don't have CAR, CDR, or CONS 
functions for tuples, you can't iterate over the items of a tuple, etc.

Of course, these functions make only marginal sense if
a) the types of the tuple elements are allowed to be different and
b) the language offers good idioms for accessing and replacing fields.

(I just took a somewhat in-depth look at (b) for Haskell and found 
everything I'd ever want to use, all defined in a very straightforward 
fashion - I was impressed.)

Regards,
Jo
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fa239bd$1@news.unimelb.edu.au>
Joachim Durchholz <·················@web.de> writes:

>Fergus Henderson wrote:
>
>> ·······@mediaone.net (Raffael Cavallaro) writes:
>> 
>>>Why can't I make lists with different types of elements,
>> 
>> You can -- they're called "tuples" ;-)  Note that the syntax is slightly
>> different than for ordinary lists whose elements are all the same type.
>
>These aren't lists - you typically don't have CAR, CDR, or CONS 
>functions for tuples,

Sure you do.  In Haskell, these are spelt "fst", "snd", and "(,)".
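In that spelling, the Lisp cons cell (#\t . 5) becomes a pair, built and taken apart like so (a tiny sketch, with an illustrative name):

```haskell
-- (,) plays the role of CONS, fst of CAR, snd of CDR; the difference
-- is that each "cell" carries its own pair type.
cell :: (Char, Int)
cell = (,) 't' 5

main :: IO ()
main = do
    print (fst cell)  -- the CAR: 't'
    print (snd cell)  -- the CDR: 5
```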

>you can't iterate over the items of a tuple, etc.

Ah, now we're getting somewhere.  Iterating over the items in a tuple-list
is indeed rather tricky.  But not impossible... see the example below.

	-- Define a type class for the elements of our heterogenous "lists"

	class Elem e where
		val :: e -> Integer
	instance Elem Integer where
		val = fromIntegral
	instance Elem Double where
		val = truncate
	instance Elem [t] where
		val = toInteger . length

	-- Define a type class for accumulation functions that work
	-- on collections of these elements, and provide an example instance
	-- that sums their values

	class Acc f a | f -> a where
		app_acc :: Elem e => f -> a -> e -> a

	data SumVal = SumVal
	instance Acc SumVal Integer where
		app_acc SumVal a e = a + val e

	-- Define the TupleList class, for heterogenous "lists" built out of
	-- tuples; a function foldr_tuple for iterating over them; and give
	-- an example of its use.

	class TupleList l where
		foldr_tuple :: Acc f a => f -> a -> l -> a
	instance TupleList () where
		foldr_tuple _ acc () = acc
	instance (Elem a, TupleList b) => TupleList (a,b) where
		foldr_tuple f acc (x,xs) = app_acc f (foldr_tuple f acc xs) x

	sum_tuple l = foldr_tuple SumVal 0 l

	-- Construct an example heterogenous "list", sum the values of its
	-- elements, and print the results.

	list :: (Integer, (String, (Double, ())))
	list = (1, ("four", (3.5, ())))

	main = do { putStr "xs = "; print list;
		    putStr "sum_tuple xs = "; print (sum_tuple list);
		  }

Sample Hugs transcript:

	Main> main
	xs = (1,("four",(3.5,())))
	sum_tuple xs = 8

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fa274d8$1@news.unimelb.edu.au>
Fergus Henderson <···@cs.mu.oz.au> writes:

>Iterating over the items in a tuple-list
>is indeed rather tricky.  But not impossible... see the example below.

Actually, on second thoughts, don't.  That code was unnecessarily
complicated -- the "Acc" class and associated baggage was entirely
unnecessary.  Here's a simpler version.

	-- Define a type class for the elements of our heterogenous "lists"

	class Elem e where
		val :: e -> Integer
	instance Elem Integer where
		val = fromIntegral
	instance Elem Double where
		val = truncate
	instance Elem [t] where
		val = toInteger . length

	-- Define the TupleList class, for heterogenous "lists" built out of
	-- tuples; a function foldr_tuple for iterating over them; and give
	-- an example of its use.
	class TupleList l where
		foldr_tuple :: (forall e . Elem e => a -> e -> a) -> a -> l -> a
	instance TupleList () where
		foldr_tuple _ acc () = acc
	instance (Elem a, TupleList b) => TupleList (a,b) where
		foldr_tuple f acc (x,xs) = f (foldr_tuple f acc xs) x

	sum_tuple l = foldr_tuple (\a e -> a + val e) 0 l

	-- Construct an example heterogenous "list", sum the values of
	-- its elements, and print the results.
	list :: (Integer, (String, (Double, ())))
	list = (1, ("four", (3.5, ())))
	main = do { putStr "xs = "; print list;
		    putStr "sum_tuple xs = "; print (sum_tuple list);
		  }

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <9mm471-1k3.ln1@ID-7776.user.dfncis.de>
Raffael Cavallaro <·······@mediaone.net> wrote:
> Dirk Thierbach <··········@gmx.de> wrote in 

> Are you saying that I can use ghci without making any effort to
> inform the compiler of my type intentions at the initial stages?

Here's a simple and stupid example of how it might look. Code
starts with a '>'.

Assume you want to develop an application that deals with student
records. I include stubs just for the sake of it, so the first thing
is

> fixme = error "Tried to execute a stub"

You also have to include a main routine in your file (which is a quirk
of ghc). You don't have to do that if you're working at the toplevel,
but I want to make changes to my file because that's how I usually work.

> main = return ()

Then I start out unsystematically by deciding that I want to sort student
records. So I write

> import List
>
> sortStudent students = sortBy fixme students

Compiles and typechecks. I decide that I want to sort students by name.
So somehow I must access the name of the student. I also decide that I
want it in a record with named fields, to make it easier to remember
the meaning of each field. So I write

> data StudentRec a = SR { name :: a }

Compiles and typechecks. I haven't committed to a particular representation,
and I can extend my record later on. Now I can do the sorting:

> sortStudent students = sortBy (\x y -> compare (name x) (name y)) students

I still didn't commit to any particular type. Let's write a testcase:

> testSortStudent = 
>   sortStudent [SR {name="John Smith"}, SR {name="Zacharias Adams"}]

Compiles, typechecks and runs. But I discover it doesn't sort by last
name, as it should. So I really should separate first and last names,
and put the last name first:

> testSortStudent = 
>   sortStudent [SR {name=("Smith", "John")}, 
>                SR {name=("Adams", "Zacharias")}]

Compiles, typechecks and runs (with the same sort routine). Now
I decide I want an address, too.

> data StudentRec a b = SR { name :: a, address :: b }

Compiles and typechecks with two warnings that the second field is not
initialized in the test case. Fine, I don't care.

I want to print the address and decide on the fields while writing this
routine:

> printAddress x = let (street, number, zipcode, city) = address x in
>   show number ++ ", " ++ show street ++ "\n" ++
>   show zipcode ++ " " ++ show city

Compiles and typechecks with the same two warnings. So I factor out
the data from the testcase, and add the address:

> testData = [
>   SR {name=("Smith", "John"), 
>       address=(42, "Nowhere driver", 10000, "Dodge City")}, 
>   SR {name=("Adams", "Zacharias"),
>       address=(13, "Halloween street", 22300, "Ghosttown")} ]

> testSortStudent = sortStudent testData

> testPrintAddress = map printAddress testData

And so on, and so on. No type annotations, just a little data declaration
that you will need anyway in the end, and that also supplies the accessor
methods you would otherwise have to write by hand. Convincing enough?
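The finished walkthrough can be collected into one self-contained file. This
is a sketch, not Dirk's exact code: the signature on sortStudent is the one
GHC infers on its own and was never needed during development, and comparing
(from Data.Ord) replaces the explicit lambda.

```haskell
import Data.List (sortBy)
import Data.Ord (comparing)

-- Reconstruction of the walkthrough above as one compilable file.
data StudentRec a b = SR { name :: a, address :: b }

-- This signature is exactly what GHC infers; it can be omitted.
sortStudent :: Ord a => [StudentRec a b] -> [StudentRec a b]
sortStudent = sortBy (comparing name)

main :: IO ()
main = print (map name (sortStudent
         [ SR ("Smith", "John")      (42 :: Int, "Nowhere drive")
         , SR ("Adams", "Zacharias") (13,        "Halloween street") ]))
```

Since pairs compare left-to-right, putting the last name first in the tuple
is all it took to get last-name ordering from the unchanged sort routine.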

Raffael Cavallaro <·······@mediaone.net> wrote:
> which in lisp would be:
> (list #\t 5)
> or, more literally:
> (cons #\t (cons 5 nil))

Did you try ('t', 5) or ('t', (5, ())) ? (It's not the same, BTW, but
both work.)
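A sketch of why both work: a pair is heterogeneous but has a fixed shape,
and the nested form mirrors the cons cells literally; the two simply have
different types.

```haskell
-- ('t', 5) is a flat pair; ('t', (5, ())) mirrors (cons #\t (cons 5 nil)).
flatPair :: (Char, Integer)
flatPair = ('t', 5)

consStyle :: (Char, (Integer, ()))
consStyle = ('t', (5, ()))

main :: IO ()
main = do print flatPair     -- ('t',5)
          print consStyle    -- ('t',(5,()))
```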

- Dirk
From: Russell Wallace
Subject: Re: More static type fun.
Date: 
Message-ID: <3f9f06e6.53512624@news.eircom.net>
On Tue, 28 Oct 2003 19:47:17 +0100, Dirk Thierbach <··········@gmx.de>
wrote:

>No. The way to avoid this annoyance is to use polymorphic types and
>type inference. An all inclusive type is only necessary if you
>want to use different types *in the same place at once*.

Personally, I agree with this...

>Which doesn't
>happen very often.

And if I found this to be the case, on the whole I'd prefer static
typing. People simply differ in the extent to which they find it
annoying to be forced to decide on types up front; I, like Dirk,
happen to fall into the group that doesn't find it annoying.

The reason why, after all these years, I've acquired a definite
preference for dynamic typing, is that I find I do indeed want to use
different types in the same place at once, sufficiently often that
doing it in a static language just makes me an example of Greenspun's
Tenth.

If you don't find that, you should probably decide between static and
dynamic typing based on what order you're comfortable doing things in.

-- 
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
http://www.esatclear.ie/~rwallace
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <d1e471-hf1.ln1@ID-7776.user.dfncis.de>
Russell Wallace <················@eircom.net> wrote:

> The reason why, after all these years, I've acquired a definite
> preference for dynamic typing, is that I find I do indeed want to use
> different types in the same place at once, sufficiently often that
> doing it in a static language just makes me an example of Greenspun's
> Tenth.

Interesting. Can you give examples?

> If you don't find that, you should probably decide between static and
> dynamic typing based on what order you're comfortable doing things in.

To me, this discussion is not about deciding between static and dynamic
typing. Both have advantages, both have disadvantages, you choose what
is best in a particular situation.

What I don't understand is why people (not you) say that for them it
is "impossible" to do some things with static typing, and that
languages that don't use static typing are therefore "better". Clearly
you can do the same things. You do it a little bit differently, but
the difference is not so big as to be that important. Other differences
between the languages are much more important.

Especially when it turns out that most of them make these claims
out of ignorance: They don't know what good static type systems can
do, they have only used crappy static type systems before, etc.

If you don't know about the alternatives, why so much emotion?

- Dirk
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bnoj64$iqk$1@f1node01.rhrz.uni-bonn.de>
Dirk Thierbach wrote:

> What I don't understand is why people (not you) say that for them it
> is "impossible" to do some things with static typing, and that
> languages that don't use static typing are therefore "better". Clearly
> you can do the same things. You do it a little bit differently, but
> the difference is not so big as to be that important. Other differences
> between the languages are much more important.
> 
> Especially when it turns out that most of them make these claims
> out of ignorance: They don't know what good static type systems can
> do, they have only used crappy static type systems before, etc.

You are obviously making your claims out of ignorance of runtime 
metaprogramming and dynamic metaobject protocols.

What would a static type system for CLOS+MOP, or for Smalltalk with a 
MOP look like?

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Isaac Gouy
Subject: Re: More static type fun.
Date: 
Message-ID: <ce7ef1c8.0310291232.701d3f20@posting.google.com>
Pascal Costanza <········@web.de> wrote in message news:<············@f1node01.rhrz.uni-bonn.de>...
> Dirk Thierbach wrote:
> 
> > What I don't understand is why people (not you) say that for them it
> > is "impossible" to do some things with static typing, and that
> > languages that don't use static typing are therefore "better". Clearly
> > you can do the same things. You do it a little bit differently, but
> > the difference is not so big as to be that important. Other differences
> > between the languages are much more important.
> > 
> > Especially when it turns out that most of them make these claims
> > out of ignorance: They don't know what good static type systems can
> > do, they have only used crappy static type systems before, etc.

It's natural to generalize from our experiences, and difficult to
recognise that our hard-learned lessons may no longer be so true.
Especially when languages with "good static type systems" have been so
little used - when would we encounter them?
 
> You are obviously making your claims out of ignorance of runtime 
> metaprogramming and dynamic metaobject protocols.

tit-for-tat

> What would a static type system for CLOS+MOP, or for Smalltalk with a 
> MOP look like?

As a Smalltalker, I'm not sure why we would be interested in the
static checking aspects of reflection? The interesting part is having
type information available at runtime.

Have a look at Clean 'Dynamics' http://www.cs.kun.nl/~clean/

"The Clean 2.0 type system combines the best of two worlds: static
typing in the best functional tradition, and dynamic typing. Static
types can be converted to dynamic types and vice versa in a type safe
way. With dynamics you can exchange code and data in a flexible and
type-safe way between Clean applications."
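GHC's standard libraries offer a comparable static/dynamic bridge via
Data.Dynamic (a sketch of the same idea, not Clean itself): toDyn injects a
statically typed value into Dynamic, and fromDynamic recovers it with a
runtime type check.

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- A heterogeneous bag of dynamically typed values.
bag :: [Dynamic]
bag = [toDyn (42 :: Int), toDyn "hello", toDyn (3.14 :: Double)]

-- Recover only the values of a given static type; the cast is checked
-- at runtime and yields Nothing for everything else.
ints :: [Int]
ints = [x | d <- bag, Just x <- [fromDynamic d]]

main :: IO ()
main = print ints   -- only the Int survives the cast: [42]
```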
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bnpeuk$sr2$1@newsreader2.netcologne.de>
Isaac Gouy wrote:

> Have a look at Clean 'Dynamics' http://www.cs.kun.nl/~clean/
> 
> "The Clean 2.0 type system combines the best of two worlds: static
> typing in the best functional tradition, and dynamic typing. Static
> types can be converted to dynamic types and vice versa in a type safe
> way. With dynamics you can exchange code and data in a flexible and
> type-safe way between Clean applications."

Thanks for the link. I recall other people recommending Clean for some 
other reasons that sounded interesting. It's probably worth taking a 
look at it.


Thanks,
Pascal
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <pcc571-nl.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:
> You are obviously making your claims out of ignorance of runtime 
> metaprogramming and dynamic metaobject protocols.

> What would a static type system for CLOS+MOP, or for Smalltalk with a 
> MOP look like?

You cannot make one. We already discussed this. You cannot even make
a "good" static type system for CL, let alone CLOS, or Smalltalk.

Why do you insist on putting up strawmen? 

- Dirk
From: Neelakantan Krishnaswami
Subject: Re: More static type fun.
Date: 
Message-ID: <slrnbq01vs.vtk.neelk@gs3106.sp.cs.cmu.edu>
In article <·············@ID-7776.user.dfncis.de>, Dirk Thierbach wrote:
> Pascal Costanza <········@web.de> wrote:
>> You are obviously making your claims out of ignorance of runtime 
>> metaprogramming and dynamic metaobject protocols.
> 
>> What would a static type system for CLOS+MOP, or for Smalltalk with a 
>> MOP look like?
> 
> You cannot make one. We already discussed this. You cannot even make
> a "good" static type system for CL, let alone CLOS, or Smalltalk.
> Why do you insist on putting up strawmen?

I think Pascal is asking what a statically-typed language that offers
much of the same feeling as programming in CLOS would look like,
rather than for a type system for CLOS per se. This is certainly a
reasonable question! It's definitely possible to build
statically-typed languages that support multiple dispatch/generic
functions and have ML-style polymorphic static typing. I know, because
I've written such a language.[*]

I don't know what a static type system for the CLOS MOP would look
like, though, because I've never seriously used it.

However, as for Smalltalk, Cardelli and Abadi's book on type theories
for object-oriented programming would be a good place to look, since
IIRC they spend a lot of time working out the type theory for
prototype-based languages, which blur the distinction between
classes and objects.

(I distinguish the CLOS and Smalltalk MOPs since there are huge
differences between generic-function/multimethod languages and
single-dispatch/message-passing languages.)

[*] So has Daniel Bonniot, and he's gone /much/ further in his
implementation than I did in mine.

-- 
Neel Krishnaswami
·····@cs.cmu.edu
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bnp3b4$sss$1@f1node01.rhrz.uni-bonn.de>
Neelakantan Krishnaswami wrote:

> I think Pascal is asking what a statically-typed language that offers
> much of the same feeling as programming in CLOS would look like,
> rather than for a type system for CLOS per se. This is certainly a
> reasonable question! It's definitely possible to build
> statically-typed languages that support multiple dispatch/generic
> functions and have ML-style polymorphic static typing. I know, because
> I've written such a language.[*]

I have tried to google for your language, but I haven't found it. Would 
you be so kind and tell us the name of your language? ;)

> I don't know what a static type system for the CLOS MOP would look
> like, though, because I've never seriously used it.

I think it is obvious that you cannot define a sound static type system 
for a runtime MOP. For example, a MOP allows you to add and remove 
arbitrary slots and methods to and from classes. A runtime MOP allows 
you to do this at runtime, and this affects the classes that are used in 
the running system.

I think a static type system can especially not handle the removal of 
slots or methods because this breaks invariants. And this doesn't even 
take into account switching metaclasses at runtime, or addition and/or 
removal of classes at runtime.

> However, as for Smalltalk, Cardelli and Abadi's book on type theories
> for object-oriented programming would be a good place to look, since
> IIRC they spend a lot of time working out the type theory for
> prototype based languages. Since that blurs the distinction between
> classes and objects, it would likely be a good place to look.

AFAIK, sophisticated OO type systems can at most handle the addition, 
but not the removal, of features to a running program, and they do so in 
restricted ways.

(A nice example of such a type system is the one for gbeta.)


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Neelakantan Krishnaswami
Subject: Re: More static type fun.
Date: 
Message-ID: <slrnbq0ab7.vtk.neelk@gs3106.sp.cs.cmu.edu>
In article <············@f1node01.rhrz.uni-bonn.de>, Pascal Costanza wrote:
> Neelakantan Krishnaswami wrote:
> 
>> I think Pascal is asking what a statically-typed language that offers
>> much of the same feeling as programming in CLOS would look like,
>> rather than for a type system for CLOS per se. This is certainly a
>> reasonable question! It's definitely possible to build
>> statically-typed languages that support multiple dispatch/generic
>> functions and have ML-style polymorphic static typing. I know, because
>> I've written such a language.[*]
> 
> I have tried to google for your language, but I haven't found it. Would 
> you be so kind and tell us the name of your language? ;)

I didn't name it on purpose, because I've mostly stopped working on
it. But, it's called Needle, and you can see some slides describing it 
at: <http://www.nongnu.org/needle/mit-needle-talk.pdf>

If you want to really use a language with the same basic type system,
I'd recommend looking at Daniel Bonniot's Nice language instead,
because it is much closer to maturity and usability. Nice is an
explicitly typed language, though, to avoid having to do type
simplification and to integrate better with Java. You can find it at:

  <http://nice.sourceforge.net>

-- 
Neel Krishnaswami
·····@cs.cmu.edu
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bnpf32$sr2$2@newsreader2.netcologne.de>
Neelakantan Krishnaswami wrote:

>>I have tried to google for your language, but I haven't found it. Would 
>>you be so kind and tell us the name of your language? ;)
> 
> 
> I didn't name it on purpose, because I've mostly stopped working on
> it. But, it's called Needle, and you can see some slides describing it 
> at: <http://www.nongnu.org/needle/mit-needle-talk.pdf>

Ah, I have seen this on TV. ;-)

(actually, on my Quicktime Player - the talk you gave at MIT... ;)

> If you want to really use a language with the same basic type system,
> I'd recommend looking at Daniel Bonniot's Nice language instead,
> because it is much closer to maturity and usability. Nice is an
> explicitly typed language, to avoid having to do type simplification,
> and to integrate better with Java, though. You can find it at:
> 
>   <http://nice.sourceforge.net>

Thanks!


Pascal
From: Marshall Spight
Subject: Re: More static type fun.
Date: 
Message-ID: <19_nb.58710$Fm2.37989@attbi_s04>
"Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
> For example, a MOP allows you to add and remove
> arbitrary slots and methods to and from classes. A runtime MOP allows
> you to do this at runtime, and this affects the classes that are used in
> the running system.
>
> I think a static type system can especially not handle the removal of
> slots or methods because this breaks invariants. And this doesn't even
> take into account switching metaclasses at runtime, or addition and/or
> removal of classes at runtime.

(Does "slot" mean something like "field"?)

I have questions about the usage and intended semantics of
this kind of capability.

In a running program, if you add a field to a class, what should
happen to the existing instances of that class? Would the addition
only be allowed if the field had a default value, or would you want
to be able to add fields at runtime without specifying a default? If
so, would the fields in existing instances not have an associated value,
or would you require somehow specifying a separate value for each
of them?

If you remove a field, what happens to all the instances? What happens
when you invoke a method that refers to a removed field?

What about object construction code in the face of adding or removing
fields?

If you remove a method from a class, what happens when you invoke
methods that refer to the now-removed method?

As you say, adding methods raises no problems.

Also, I'm not sure why you'd *want* to remove fields or methods.
What does forcing a method to go away at runtime buy you?
What is the use case?


Marshall
From: Brian Downing
Subject: Re: More static type fun.
Date: 
Message-ID: <Rj1ob.61274$e01.194786@attbi_s02>
In article <·····················@attbi_s04>,
Marshall Spight <·······@dnai.com> wrote:
> I have questions about the usage and intended semantics of
> this kind of capability.
> 
> In a running program, if you add a field to a class, what should
> happen to the existing instances of that class? Would the addition
> only be allowed if the field had a default value, or would you want
> to be able to add fields at runtime without specifying a default? If
> so, would the fields in existing instances not have an associated value,
> or would you require somehow specifying a separate value for each
> of them?
> 
> If you remove a field, what happens to all the instances? What happens
> when you invoke a method that refers to a removed field?

What happens in CLOS is that update-instance-for-redefined-class is
called on your instances, and they are updated in the way that you
specify therein.  You have control over what happens.

http://www.lispworks.com/reference/HyperSpec/Body/f_upda_1.htm

See the examples at the page above.

-bcd
From: Ingvar Mattsson
Subject: Re: More static type fun.
Date: 
Message-ID: <87vfq7yr3n.fsf@gruk.tech.ensign.ftech.net>
"Marshall Spight" <·······@dnai.com> writes:

> "Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
> > For example, a MOP allows you to add and remove
> > arbitrary slots and methods to and from classes. A runtime MOP allows
> > you to do this at runtime, and this affects the classes that are used in
> > the running system.
> >
> > I think a static type system can especially not handle the removal of
> > slots or methods because this breaks invariants. And this doesn't even
> > take into account switching metaclasses at runtime, or addition and/or
> > removal of classes at runtime.
> 
> (Does "slot" mean something like "field"?)
> 
> I have questions about the usage and intended semantics of
> this kind of capability.
[ SNIP ]
> Also, I'm not sure why you'd *want* to remove fields or methods.
> What does forcing a method to go away at runtime buy you?
> What is the use case?

Long-running system, where a "stop the world" and "restart the world"
is Not Feasible. Think, say, banking. One day, a bank manager gets the
brilliant idea that every account should have a new thing added to it
(accounts are modeled as instances of an account class). So,
run-time, you add a frobnitz slot to the account class and sometime
between "add the slot" and the next time an instance of that class is
touched, the instance will have had UPDATE-INSTANCE-FOR-REDEFINED-CLASS
run on it, so the right things can have happened.

//Ingvar
-- 
Q: What do you call a Discworld admin?
A: Chelonius Monk
From: Joachim Durchholz
Subject: Re: More static type fun.
Date: 
Message-ID: <bnr8or$tee$1@news.oberberg.net>
Ingvar Mattsson wrote:
> "Marshall Spight" <·······@dnai.com> writes:
>>Also, I'm not sure why you'd *want* to remove fields or methods.
>>What does forcing a method to go away at runtime buy you?
>>What is the use case?
> 
> Long-running system, where a "stop the world" and "restart the world"
> is Not Feasible. Think, say, banking. One day, a bank manager gets the
> brilliant idea that every account should have a new thing added to it
> (accounts are modeled as instances of an account class). So,
> run-time, you add a frobnitz slot to the account class and sometime
> between "add the slot" and the next time an instance of that class is
> touched, the instance will have had UPDATE-INSTANCE-FOR-REDEFINED-CLASS
> run on it, so the right things can have happened.

Are such changes really implemented at a live, running system? Including 
testing?

Let me elaborate a bit.
When I last worked on a long-running system, I had a "test" system and a 
"productive" system, and only tested changes were allowed to migrate 
into the production system. The result of this policy was that what got 
integrated into the production system was usually a whole bunch of 
changes, all to be implemented atomically - and to ensure the atomicity, 
the system had to be taken down anyway. (Of course things were organized 
to minimize the downtime.)

If a change involves modifications to existing data, you have two options:
1) Convert all data from old format to new format, replace all the 
functions that access it. Since this change must be atomic, you'll have 
to disable the system while the change is in progress.
2) Incremental change: add enough mechanisms that old data and new data 
can coexist. That is, first rewrite all accessing functions so that they 
can work both with old and new format, then start a background process 
that converts all data to the new format, and finally (optionally) 
remove the code that handled the old format.

Strategy (1) will work for static and dynamic languages alike.
Static languages tend to be unprepared for (2) - not because this is 
difficult to do, but because disallowing dynamic code loading allows 
some additional optimizations (and static languages tend to be developed 
by people who are efficiency-conscious).

Regards,
Jo
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <4y8v2is3o.fsf@franz.com>
Joachim Durchholz <·················@web.de> writes:

> Ingvar Mattsson wrote:
> > "Marshall Spight" <·······@dnai.com> writes:
> >>Also, I'm not sure why you'd *want* to remove fields or methods.
> >>What does forcing a method to go away at runtime buy you?
> >>What is the use case?
> > Long-running system, where a "stop the world" and "restart the world"
> 
> > is Not Feasible. Think, say, banking. One day, a bank manager gets the
> > brilliant idea that every account should have a new thing added to it
> > (accounts are modeled as instances of an account class). So,
> > run-time, you add a frobnitz slot to the account class and sometime
> > between "add the slot" and the next time an instance of that class is
> > touched, the instance will have had UPDATE-INSTANCE-FOR-REDEFINED-CLASS
> > run on it, so the right things can have happened.
> 
> Are such changes really implemented at a live, running system? 
> Including testing?

Yes, many of our customers (and users of other CL implementations) do
just that.  The testing is usually done on a test system, as Espen has
also said later in this thread.

> Let me elaborate a bit.
> When I last worked on a long-running system, I had a "test" system and
> a "productive" system, and only tested changes were allowed to migrate
> into the production system. The result of this policy was that what
> got integrated into the production system was usually a whole bunch of
> changes, all to be implemented atomically - and to ensure the
> atomicity, the system had to be taken down anyway. (Of course things
> were organized to minimize the downtime.)

Yes, it's mostly similar, except for the downtime necessity.  In practice,
CL users tend to plan for potential downtime, just in case it's needed,
but it usually turns out that the downtime isn't needed at all.

> If a change involves modifications to existing data, you have two options:

No, there are 3 options:

> 1) Convert all data from old format to new format, replace all the
> functions that access it. Since this change must be atomic, you'll
> have to disable the system while the change is in progress.
> 
> 2) Incremental change: add enough mechanisms that old data and new
> data can coexist. That is, first rewrite all accessing functions so
> that they can work both with old and new format, then start a
> background process that converts all data to the new format, and
> finally (optionally) remove the code that handled the old format.

3) Lazy Incremental Change: Using a language that allows for this
and which already has the mechanisms in place for (2), plus the ability
to do the data conversion lazily, i.e. instances are only converted to
the new style at sometime between the time the format is changed and
the instance is accessed.  This also has the advantage that if the
number of instances in a system is large and the data format changes
frequently, an instance may skip intermediate updates if it had not
been accessed when those intermediate data formats were in play.
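In a statically typed setting, the lazy variant of (3) can at least be
sketched with an explicit version sum: old instances are upgraded the first
time an accessor touches them, with an assumed default for the new field.
The names here are illustrative, not from any real system.

```haskell
-- Two coexisting formats of the same record; V1 is the old one.
data Account = V1 String        -- owner only
             | V2 String Int    -- owner plus the new frobnitz field
  deriving Show

-- Lazy upgrade: applied on access, so untouched V1 instances can sit
-- in storage indefinitely. 0 is an assumed default for the new field.
upgrade :: Account -> Account
upgrade (V1 owner) = V2 owner 0
upgrade acc        = acc

frobnitz :: Account -> Int
frobnitz acc = case upgrade acc of
                 V2 _ f -> f
                 V1 _   -> error "impossible: upgrade always yields V2"

main :: IO ()
main = print (map frobnitz [V1 "Smith", V2 "Adams" 7])
```

Unlike UPDATE-INSTANCE-FOR-REDEFINED-CLASS, all of this must be arranged by
hand, which is the point: CLOS has the mechanism built in.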

> Strategy (1) will work for static and dynamic languages alike.
> Static languages tend to be unprepared for (2) - not because this is
> difficult to do, but because disallowing dynamic code loading allows
> some additional optimizations (and static languages tend to be
> developed by people who are efficiency-conscious).

At ILC2003, John McCarthy emphasized the word "satisfice", coined by
Herbert Simon in 1957 to mean searching for local optima that give
acceptable performance, rather than searching for the most optimal
solution.  Software tends to be dynamic in nature; requirements for
the usage of software change dynamically, and thus the code as written
tends to go out-of-spec quickly (or become obsolete, if there is
no attempt to track the changing requirements).  For those who choose
to try to meet these requirement changes in real time, it is dynamic
code which allows the efficiency-conscious to practice satisficing
and to meet the needs of their customers.

I personally think that Common Lisp could benefit from some static
compilation styles, to provide more of a continuum along the static
vs dynamic dimension.  Already, many optimizations in CLOS
implementations take advantage of simpler and faster operations, but
they also tend to be "breakable" in that if the structure of the data
or the method combination changes in incompatible ways, the
optimizations are modified or thrown away to be re-established later.
I would like to see a defacto adoption of "sealing" similar to what
Dylan does, except that, in the same never-fully-static tradition of
CL, such sealing would be based on simple runtime tests of assumptions,
allowing them to be breakable optimizations.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Joachim Durchholz
Subject: Re: More static type fun.
Date: 
Message-ID: <bnrmkb$3vq$1@news.oberberg.net>
Duane Rettig wrote:

> 3) Lazy Incremental Change: Using a language that allows for this
> and which already has the mechanisms in place for (2), plus the ability
> to do the data conversion lazily, i.e. instances are only converted to
> the new style at sometime between the time the format is changed and
> the instance is accessed.  This also has the advantage that if the
> number of instances in a system is large and the data format changes
> frequently, an instance may skip intermediate updates if it had not
> been accessed when those intermediate data formats were in play.

It's a useful strategy, though I'm a bit sceptical: the system will 
amass old data formats, and it will have to retain the ability to 
convert any format (no matter how old) into the current format.
Enforcing the change to happen within a given time frame opens the 
opportunity to throw away the code for old formats.

On the plus side, this allows reading archived data that's years old - 
something that's a valuable commodity in itself.
Probably code for old data should be phased out in unison with archive 
clean-up (e.g. some data is routinely thrown away after 10 fiscal 
years... oh, and it may never be converted, so the laziness really 
persists until the bitter end).

> I personally think that Common Lisp could benefit from some static
> compilation styles, to provide more of a continuum along the static
> vs dynamic dimension.

Actually, I think similarly though I'm coming from the static side of 
things.
The gap should certainly be narrowed and (if at all possible) closed to 
give a full spectrum.

Unfortunately, current languages are designed to live on a rather narrow 
band of the dynamic-to-static spectrum; I don't think that Lisp can be 
made to accommodate a more static style, and getting Haskell or OCaml to 
work with a more dynamic style would be difficult as well.
Well, a change is "impossible" precisely in the moment when designing a 
new language is less work than adapting an existing one :-)

Regards,
Jo
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <48yn2e2ea.fsf@franz.com>
Joachim Durchholz <·················@web.de> writes:

> Duane Rettig wrote:
> 
> > 3) Lazy Incremental Change: Using a language that allows for this
> > and which already has the mechanisms in place for (2), plus the ability
> > to do the data conversion lazily, i.e. instances are only converted to
> > the new style at sometime between the time the format is changed and
> > the instance is accessed.  This also has the advantage that if the
> > number of instances in a system is large and the data format changes
> > frequently, an instance may skip intermediate updates if it had not
> > been accessed when those intermediate data formats were in play.
> 
> It's a useful strategy, though I'm a bit sceptical: the system will
> amass old data formats, and it will have to retain the ability to
> convert any format (no matter how old) into the current format.

Yes, this is consistent with the automatic memory management
style that CL uses.

> Enforcing the change to happen within a given time frame opens the
> opportunity to throw away the code for old formats.

True, but the structure that is being forced to become gc-able tends
not to be very big.

> On the plus side, this allows reading archived data that's years old -
> something that's a valuable commodity in itself.
> Probably code for old data should be phased out in unison with archive
> clean-up (e.g. some data is routinely thrown away after 10 fiscal
> years... oh, and it may never be converted, so the laziness really
> persists until the bitter end).

Externalization/persistence introduce new issues, but they are
similar to gc issues, only on a more global (and external) scale.

> > I personally think that Common Lisp could benefit from some static
> > compilation styles, to provide more of a continuum along the static
> > vs dynamic dimension.
> 
> Actually, I think similarly though I'm coming from the static side of
> things.
> The gap should certainly be narrowed and (if at all possible) closed
> to give a full spectrum.
> 
> Unfortunately, current languages are designed to live on a rather
> narrow band of the dynamic-to-static spectrum; I don't think that Lisp
> can be made to accommodate a more static style, and getting Haskell or
> OCaml to work with a more dynamic style would be difficult as well.

I have been under the impression that the FP languages tend to take a
purist's view of staticity, and I think CL will never go there.  I
guess you could say we take on a purely impure philosophy :-)

> Well, a change is "impossible" precisely in the moment when designing
> a new language is less work than adapting an existing one :-)

Being an implementor, my point of view is that anything is possible.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <1xsto5ja.fsf@ccs.neu.edu>
Duane Rettig <·····@franz.com> writes:

>
> Being an implementor, my point of view is that anything is possible.
>

One of the salesmen at LMI persuaded a potential customer that we
could provide software that would detect when a process got stuck in
an infinite loop and raise an error.
From: Duane Rettig
Subject: Re: More static type fun.
Date: 
Message-ID: <4fzh9cper.fsf@franz.com>
Joe Marshall <···@ccs.neu.edu> writes:

> Duane Rettig <·····@franz.com> writes:
> 
> >
> > Being an implementor, my point of view is that anything is possible.
> >
> 
> One of the salesmen at LMI persuaded a potential customer that we
> could provide software that would detect when a process got stuck in
> an infinite loop and raise an error.

How much did you tell the salesman to charge the customer for the
job?

:-)

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <r80tkw14.fsf@ccs.neu.edu>
Duane Rettig <·····@franz.com> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>
>> Duane Rettig <·····@franz.com> writes:
>> 
>> >
>> > Being an implementor, my point of view is that anything is possible.
>> >
>> 
>> One of the salesmen at LMI persuaded a potential customer that we
>> could provide software that would detect when a process got stuck in
>> an infinite loop and raise an error.
>
> How much did you tell the salesman to charge the customer for the
> job?
>
> :-)

We wanted to get paid by the hour.

The sticking point was the acceptance test:  they said our code was
incomplete, we argued that their test was inconsistent....
From: Espen Vestre
Subject: Live Patching (Re: More static type fun.)
Date: 
Message-ID: <kwptgeahdc.fsf_-_@merced.netfonds.no>
Joachim Durchholz <·················@web.de> writes:

> Are such changes really implemented at a live, running system?
> Including testing?

Sure.

The last time I live-upgraded one of my server systems with some code
that modified some very critical existing CLOS-objects (session objects
for users logged into the server), I did the following:

1) First I live-upgraded a test server to confirm that the patch didn't
   break any existing testing

2) Then I live-upgraded the production servers at late night, when
   crashing any running sessions would have minimal impact (and it
   didn't have any impact at all, the old session objects were upgraded
   and continued to do their work).

-- 
  (espen)
From: Raymond Wiker
Subject: Re: Live Patching (Re: More static type fun.)
Date: 
Message-ID: <8665i621k2.fsf@raw.grenland.fast.no>
Espen Vestre <·····@*do-not-spam-me*.vestre.net> writes:

> Joachim Durchholz <·················@web.de> writes:
>
>> Are such changes really implemented at a live, running system?
>> Including testing?
>
> Sure.
>
> The last time I live-upgraded one of my server systems with some code
> that modified some very critical existing CLOS-objects (session objects
> for users logged into the server), I did the following:
>
> 1) First I live-upgraded a test server to confirm that the patch didn't
>    break any existing testing
>
> 2) Then I live-upgraded the production servers at late night, when
>    crashing any running sessions would have minimal impact (and it
>    didn't have any impact at all, the old session objects were upgraded
>    and continued to do their work).

        Note that this sort of thing is "normal" in telecom. Erlang
(for example) has support for replacing code in a running system,
though I think this support is less ambitious than what Common Lisp /
CLOS offers. 
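The essence of this kind of code replacement can be imitated, far less ambitiously than Erlang or CLOS do it, by routing calls through a mutable registry instead of binding callers directly to a function. The Python sketch below is purely illustrative; all names are invented:

```python
# Callers dispatch through a registry rather than holding a direct
# function reference, so a new version of a handler can be installed
# while the "system" keeps serving calls.
registry = {}

def dispatch(name, *args):
    return registry[name](*args)

registry["route_call"] = lambda number: ("v1", number)
before = dispatch("route_call", "555-0100")

# "Live upgrade": swap in v2 without stopping anything.
registry["route_call"] = lambda number: ("v2", number)
after = dispatch("route_call", "555-0100")

print(before, after)   # ('v1', '555-0100') ('v2', '555-0100')
```

Erlang does this per module, keeping old and new code resident simultaneously so in-flight processes can finish on the old version; a flat registry has no such grace period.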


-- 
Raymond Wiker                        Mail:  ·············@fast.no
Senior Software Engineer             Web:   http://www.fast.no/
Fast Search & Transfer ASA           Phone: +47 23 01 11 60
P.O. Box 1677 Vika                   Fax:   +47 35 54 87 99
NO-0120 Oslo, NORWAY                 Mob:   +47 48 01 11 60

Try FAST Search: http://alltheweb.com/
From: Espen Vestre
Subject: Re: Live Patching (Re: More static type fun.)
Date: 
Message-ID: <kwfzhaagpj.fsf@merced.netfonds.no>
Raymond Wiker <·············@fast.no> writes:

>         Note that this sort of thing is "normal" in telecom. 

Sure. My previous employer was a telecom, and I did the same thing
there, all the time (with CL).
-- 
  (espen)
From: Joachim Durchholz
Subject: Re: Live Patching (Re: More static type fun.)
Date: 
Message-ID: <bnrfns$qu$1@news.oberberg.net>
Raymond Wiker wrote:

>         Note that this sort of thing is "normal" in telecom. Erlang
> (for example) has support for replacing code in a running system,
> though I think this support is less ambitious than what Common Lisp /
> CLOS offers. 

It need not be as ambitious: it doesn't have to deal with mutable objects.

Regards,
Jo
From: Joachim Durchholz
Subject: Re: Live Patching (Re: More static type fun.)
Date: 
Message-ID: <bnrg73$13a$1@news.oberberg.net>
Espen Vestre wrote:

> Joachim Durchholz <·················@web.de> writes:
> 
>>Are such changes really implemented at a live, running system?
>>Including testing?
> 
> Sure.
> 
> The last time I live-upgraded one of my server systems with some code
> that modified some very critical existing CLOS-objects (session objects
> for users logged into the server), I did the following:
> 
> 1) First I live-upgraded a test server to confirm that the patch didn't
>    break any existing testing

OK so far.
(I didn't really expect people to do things in a different manner.)

> 2) Then I live-upgraded the production servers at late night, when
>    crashing any running sessions would have minimal impact (and it
>    didn't have any impact at all, the old session objects were upgraded
>    and continued to do their work).

Hmm... if the chances of breaking the production server are large enough 
that you do the upgrade at night, having to take the system down for 
half a minute to install new binaries doesn't seem so much of a 
disadvantage anymore.

For me, the trade-off is like this: it would help me if I could do 
dynamic upgrades in my projects, but the advantages of static typing 
definitely outweigh this.
YMMV :-)

Regards,
Jo
From: Marc Battyani
Subject: Re: Live Patching (Re: More static type fun.)
Date: 
Message-ID: <bnrj4l$ooh@library1.airnews.net>
"Joachim Durchholz" <·················@web.de> wrote

> > 2) Then I live-upgraded the production servers at late night, when
> >    crashing any running sessions would have minimal impact (and it
> >    didn't have any impact at all, the old session objects were upgraded
> >    and continued to do their work).
>
> Hmm... if the chances of breaking the production server are large enough
> that you do the upgrade at night, having to take the system down for
> half a minute to install new binaries doesn't seem so much of a
> disadvantage anymore.
>
> For me, the trade-off is like this: it would help me if I could do
> dynamic upgrades in my projects, but the advantages of static typing
> definitely outweigh this.
> YMMV :-)

Patching the running servers is something I do each time I release a new
version. I do this during working hours, and generally people are rather
surprised to see the interface and functionality changing while they use
it.

Marc
From: Joachim Durchholz
Subject: Re: Live Patching (Re: More static type fun.)
Date: 
Message-ID: <bnrm5g$3li$1@news.oberberg.net>
Marc Battyani wrote:
> Patching the running servers is something I do each time I release a new
> version. I do this during working hours, and generally people are rather
> surprised to see the interface and functionality changing while they use
> it.

Ah, that's interesting.
How do you handle "incompatible changes", e.g. when you replace a data 
structure by a completely different one?

Regards,
Jo
From: Frode Vatvedt Fjeld
Subject: Re: Live Patching (Re: More static type fun.)
Date: 
Message-ID: <2hptgeo756.fsf@vserver.cs.uit.no>
Joachim Durchholz <·················@web.de> writes:

> Ah, that's interesting.  How do you handle "incompatible changes",
> e.g. when you replace a data structure by a completely different
> one?

Have you heard about this wonderful thing called "dynamic typing"? :-)

-- 
Frode Vatvedt Fjeld
From: Joachim Durchholz
Subject: Re: Live Patching (Re: More static type fun.)
Date: 
Message-ID: <bnvapp$oc2$2@news.oberberg.net>
Frode Vatvedt Fjeld wrote:

> Joachim Durchholz <·················@web.de> writes:
> 
> 
>>Ah, that's interesting.  How do you handle "incompatible changes",
>>e.g. when you replace a data structure by a completely different
>>one?
> 
> Have you heard about this wonderful thing called "dynamic typing"? :-)

Yes.
It's unrelated to questions like "how do I prevent the system from 
accessing an object that's in the midst of an incompatible change?"

Regards,
Jo
From: Marc Battyani
Subject: Re: Live Patching (Re: More static type fun.)
Date: 
Message-ID: <bnrph1$mvi@library1.airnews.net>
"Joachim Durchholz" <·················@web.de> wrote
> Marc Battyani wrote:
> > Patching the running servers is something I do each time I release a new
> > version. I do this during working hours, and generally people are rather
> > surprised to see the interface and functionality changing while they use
> > it.
>
> Ah, that's interesting.
> How do you handle "incompatible changes", e.g. when you replace a data
> structure by a completely different one?

What I do is this:

1 The object writes to the SQL database are stopped

2 The CLOS classes are changed (this will re-generate and compile all the
HTML UI stuff)

3 The SQL database structure is changed

4 Human-written code to perform non-obvious conversions between the old and
the new data structures is executed.

5 The object writes to the SQL database are enabled again
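The five steps above can be sketched as one guarded sequence. In this Python illustration the function names and the use of a simple event flag for steps 1 and 5 are invented, not Marc's actual mechanism:

```python
import threading

writes_enabled = threading.Event()   # gates object writes to the SQL database
writes_enabled.set()

def live_upgrade(redefine_classes, alter_schema, convert_data):
    writes_enabled.clear()        # 1. stop object writes
    try:
        redefine_classes()        # 2. change the CLOS classes / regenerate UI
        alter_schema()            # 3. change the SQL database structure
        convert_data()            # 4. hand-written old->new data conversion
    finally:
        writes_enabled.set()      # 5. re-enable object writes

steps = []
live_upgrade(lambda: steps.append("classes"),
             lambda: steps.append("schema"),
             lambda: steps.append("convert"))
print(steps, writes_enabled.is_set())   # ['classes', 'schema', 'convert'] True
```

The `finally` clause matters: if a conversion step fails, writes are re-enabled rather than leaving the server wedged, and the operator can then decide how to recover.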

Generally you don't have a complete change of the data structure, but even
small changes are incompatible. Removing a slot from one class to add it to
another is rather frequent, and is handled in step 4. At the end of the
month I will put up a new version with a major data structure change (it has
been decided to use a completely orthogonal structure for part of the
application) and I don't plan to stop the server.

One of the applications on which I'm working these days has seen 5 rather
important changes like this done without stopping the server.

Marc
From: Espen Vestre
Subject: Re: Live Patching (Re: More static type fun.)
Date: 
Message-ID: <GEdob.35238$os2.512295@news2.e.nsc.no>
Joachim Durchholz <·················@web.de> writes:

> Hmm... if the chances of breaking the production server are large enough 
> that you do the upgrade at night, having to take the system down for 
> half a minute to install new binaries doesn't seem so much of a 
> disadvantage anymore.

Usually I do it during daytime, but in this case I wasn't 100% sure of the
effect on logged-in users, and the need for the patch wasn't immediate.

The biggest advantage of live patching is when the patch is a minor
but urgent bug fix. This particular server can't be halted during
the day, so the ability to fix it while it's running is a tremendous
advantage.
-- 
  (espen)
From: Raymond Wiker
Subject: Re: Live Patching (Re: More static type fun.)
Date: 
Message-ID: <86wualzs7k.fsf@raw.grenland.fast.no>
Joachim Durchholz <·················@web.de> writes:

> Hmm... if the chances of breaking the production server are large
> enough that you do the upgrade at night, having to take the system
> down for half a minute to install new binaries doesn't seem so much of
> a disadvantage anymore.

        Where do you get "half a minute" from? Large systems (in
the telecom domain, at least) can typically take several *hours* to
start up.

-- 
Raymond Wiker                        Mail:  ·············@fast.no
Senior Software Engineer             Web:   http://www.fast.no/
Fast Search & Transfer ASA           Phone: +47 23 01 11 60
P.O. Box 1677 Vika                   Fax:   +47 35 54 87 99
NO-0120 Oslo, NORWAY                 Mob:   +47 48 01 11 60

Try FAST Search: http://alltheweb.com/
From: Joachim Durchholz
Subject: Re: Live Patching (Re: More static type fun.)
Date: 
Message-ID: <bo0904$6d0$1@news.oberberg.net>
Raymond Wiker wrote:

> Joachim Durchholz <·················@web.de> writes:
> 
>>Hmm... if the chances of breaking the production server are large
>>enough that you do the upgrade at night, having to take the system
>>down for half a minute to install new binaries doesn't seem so much of
>>a disadvantage anymore.
> 
>         Where do you get "half a minute" from? Large systems (in
> the telecom domain, at least) can typically take several *hours* to
> start up.

Amazing.
I'm wondering why one would have so much internal state in a system that 
rebuilding it on start-up takes hours.
But maybe nobody cared because the system was designed for live surgery, 
not for restarts. After all, every additional design constraint makes it 
more difficult to satisfy the previous constraints, and if you have good 
tools for live surgery, restart behaviour automatically takes a back seat.

Regards,
Jo
From: Ingvar Mattsson
Subject: Re: Live Patching (Re: More static type fun.)
Date: 
Message-ID: <87ptgbt75k.fsf@gruk.tech.ensign.ftech.net>
Joachim Durchholz <·················@web.de> writes:

> Raymond Wiker wrote:
> 
> > Joachim Durchholz <·················@web.de> writes:
> >
> >>Hmm... if the chances of breaking the production server are large
> >>enough that you do the upgrade at night, having to take the system
> >>down for half a minute to install new binaries doesn't seem so much of
> >>a disadvantage anymore.
> >         Where do you get "half a minute" from? Large systems (in
> > the telecom domain, at least) can typically take several *hours* to
> > start up.
> 
> Amazing.
> I'm wondering why one would have so much internal state in a system
> that rebuilding it on start-up takes hours.
> But maybe nobody cared because the system was designed for live
> surgery, not for restarts. After all, every additional design
> constraint makes it more difficult to satisfy the previous
> constraints, and if you have good tools for live surgery, restart
> behaviour automatically takes a back seat.

Clock syncing? An amazingly costly thing, really, that takes bloody forever
to happen. Syncing between multiple nodes? A normal (Ericsson, at
least) large phone switch comes in quite a few pieces, some of which are
running their own software. Also, it needs to perform power-on self
tests, and when you're looking at a couple of thousand line cards, that
takes a while.

//Ingvar
-- 
When the SysAdmin answers the phone politely, say "sorry", hang up and
run awaaaaay!
	Informal advice to users at Karolinska Institutet, 1993-1994
From: Joachim Durchholz
Subject: Re: Live Patching (Re: More static type fun.)
Date: 
Message-ID: <bo2v2t$ajh$2@news.oberberg.net>
Ingvar Mattsson wrote:

> Clock syncing? An amazingly costly thing, really, that takes bloody forever
> to happen.

Er... wouldn't the nodes be able to do that work in parallel, so that 
even if the total overhead is high, it doesn't count too much into the 
total start-up time?
Besides, clock synching could be done in parallel with restoring 
internal state, at least to a large degree.

> Syncing between multiple nodes? A normal (Ericsson, at
> least) large phone switch comes in quite a few pieces, some of which are
> running their own software. Also, it needs to perform power-on self
> tests and when you're looking at a couple of thousand line cards, that
> takes a while.

No parallelization possible?

Just wondering - I don't really know about this kind of system.

Regards,
Jo
From: Ingvar Mattsson
Subject: Re: Live Patching (Re: More static type fun.)
Date: 
Message-ID: <87he1lyaqw.fsf@gruk.tech.ensign.ftech.net>
Joachim Durchholz <·················@web.de> writes:

> Ingvar Mattsson wrote:
> 
> > Clock syncing? An amazingly costly thing, really, that takes bloody forever
> > to happen.
> 
> Er... wouldn't the nodes be able to do that work in parallel, so that
> even if the total overhead is high, it doesn't count too much into the
> total start-up time?
> Besides, clock synching could be done in parallel with restoring
> internal state, at least to a large degree.

Doesn't matter when you're restarting a single switch (and a single
real-time clock). It takes on the order of "a good while" to get a
stable synced clock, IIRC. Sequence is "get clock data from
battery-powered clock, if available, otherwise check last log entry
and ask operator, then start syncing with whatever clock data we can
get from peers and iterate that until we are deemed stable enough".

Even if it's just a couple of minutes, we need a stable clock before
we can proceed further in the load process.

> > Syncing between multiple nodes? A normal (Ericsson, at
> > least) large phone switch comes in quite a few pieces, some of which are
> > running their own software. Also, it needs to perform power-on self
> > tests and when you're looking at a couple of thousand line cards, that
> > takes a while.
> 
> No parallelization possible?

To a degree. Both the A controller and the B controller do their POST
in parallel, then the two sides check each other. Then they check all
attached modules; I'm not sure to what degree they do this in parallel
(I've only been an interested bystander, rather than a switch
operator, even if I've done my share of configuring routing tables and
taking ISDN-30 aggregates up and down). FWIW, a soft reload takes
about 2 seconds and is not noticeable except in log entries. A
medium reload takes about 15 seconds, IIRC, and any calls that are in
the process of being placed through the switch will be aborted; calls
in progress are not affected. A full reload (a cold load, effectively)
takes anywhere from 10 minutes to a couple of hours, depending on the
exact switch configuration.

//Ingvar
-- 
"No. Most Scandiwegians use the same algorithm as you Brits.
 "Ingvar is just a freak."
Stig Morten Valstad, in the Monastery
From: Isaac Gouy
Subject: Re: Live Patching (Re: More static type fun.)
Date: 
Message-ID: <ce7ef1c8.0311010930.6a101c3f@posting.google.com>
Joachim Durchholz <·················@web.de> wrote in message news:<············@news.oberberg.net>...
> Espen Vestre wrote:
> 
> > Joachim Durchholz <·················@web.de> writes:
> > 
> >>Are such changes really implemented at a live, running system?
> >>Including testing?
> > 
> > Sure.

And Smalltalk systems in various industries - from 24x7 process
control to investment banking.

OT: recently noticed that VW Smalltalk now defines fold: in the Collection
class; there have always been equivalents like inject:into:.
From: Coby Beck
Subject: Re: Live Patching (Re: More static type fun.)
Date: 
Message-ID: <bnvjqr$2usf$1@otis.netspace.net.au>
"Joachim Durchholz" <·················@web.de> wrote in message
·················@news.oberberg.net...
> > 2) Then I live-upgraded the production servers at late night, when
> >    crashing any running sessions would have minimal impact (and it
> >    didn't have any impact at all, the old session objects were upgraded
> >    and continued to do their work).
>
> Hmm... if the chances of breaking the production server are large enough
> that you do the upgrade at night, having to take the system down for
> half a minute to install new binaries doesn't seem so much of a
> disadvantage anymore.

I used to do this kind of thing for Airline Crew Control software, and with
this system a restart took 3-4 hours, so it was definitely the last resort.

I also used to perform micro surgery on live objects in the production
system, repairing broken links etc., but this is an indication of desperate
times and poor management, not brilliant software engineering practice!
(There's a reason you shouldn't sell unfinished software to unsuspecting
customers...)

I have yet to find another software job with similar adrenalin rushes LOL!

-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
From: Ingvar Mattsson
Subject: Re: More static type fun.
Date: 
Message-ID: <87znfiy8w9.fsf@gruk.tech.ensign.ftech.net>
Joachim Durchholz <·················@web.de> writes:

> Ingvar Mattsson wrote:
> > "Marshall Spight" <·······@dnai.com> writes:
> >>Also, I'm not sure why you'd *want* to remove fields or methods.
> >>What does forcing a method to go away at runtime buy you?
> >>What is the use case?
> > Long-running system, where a "stop the world" and "restart the world"
> > is Not Feasible. Think, say, banking. One day, a bank manager gets the
> > brilliant idea that every account should have a new thing added to it
> > (accounts are modeled as instances of an account class). So,
> > run-time, you add a frobnitz slot to the account class and sometime
> > between "add the slot" and the next time an instance of that class is
> > touched, the instance will have had UPDATE-INSTANCE-FOR-REDEFINED-CLASS
> > run on it, so the right things can have happened.
> 
> Are such changes really implemented at a live, running system?
> Including testing?

Testing is done on a testing system. Always.
I don't know off-hand of a long-running banking system that uses CL as
its implementation language, but I know of a couple of long-running
telco systems implemented in Erlang that use similar tricks (not
necessarily object changes, but loading a new module on the fly and
then swapping it in). If you change function signatures *too* much, you
are (obviously) in deep water, but the general answer is "don't do
that, then".

If I remember correctly, the on-the-fly modification and update
functionality was the reason behind the Norwegian Stock Exchange
implementing its internal systems for trading in Common Lisp.

> Let me elaborate a bit.
> When I last worked on a long-running system, I had a "test" system and
> a "productive" system, and only tested changes were allowed to migrate
> into the production system. The result of this policy was that what
> got integrated into the production system was usually a whole bunch of
> changes, all to be implemented atomically - and to ensure the
> atomicity, the system had to be taken down anyway. (Of course things
> were organized to minimize the downtime.)

Yes, but if you had a programming environment that allowed you to
*not* take things down, wouldn't you have preferred doing that, after
having verified that the on-the-fly dynamic update did, in fact, work?

> If a change involves modifications to existing data, you have two options:
> 1) Convert all data from old format to new format, replace all the
> functions that access it. Since this change must be atomic, you'll
> have to disable the system while the change is in progress.

No, it is just that all data must be updated between "class
redefinition" and "used". The simplest way is "stop the world, do a
mass data conversion, start the world".

//Ingvar
-- 
A routing decision is made at every routing point, making local hacks
hard to permeate the network with.
From: Paolo Amoroso
Subject: Re: More static type fun.
Date: 
Message-ID: <878yn1bjxk.fsf@plato.moon.paoloamoroso.it>
[following up to comp.lang.lisp only]

Ingvar Mattsson writes:

> If I remember correctly, the on-the-fly modification and update
> functionality was the reason behind the Norwegian Stock Exchange
> implementing its internal systems for trading in Common Lisp.

This is interesting. Can you provide some more information? Are there
any online/offline references?


Paolo
-- 
Why Lisp? http://alu.cliki.net/RtL%20Highlight%20Film
From: Rob Warnock
Subject: Re: More static type fun.
Date: 
Message-ID: <keidnVbwSaQgED-iXTWc-g@speakeasy.net>
Paolo Amoroso  <·······@mclink.it> wrote:
+---------------
| Ingvar Mattsson writes:
| > If I remember correctly, the on-the-fly modification and update
| > functionality was the reason behind the Norwegian Stock Exchange
| > implementing its internal systems for trading in Common Lisp.
| 
| This is interesting. Can you provide some more information? Are there
| any online/offline references?
+---------------

I think maybe he's thinking about Espen Vestre's talk at ILC 2003
about the "Net Fonds" on-line broker <URL:http://netfonds.no/>,
whose trades are about 10% of the Norwegian Stock Exchange's volume
(but is not the exchange itself).


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Ingvar Mattsson
Subject: Re: More static type fun.
Date: 
Message-ID: <87fzh7sznm.fsf@gruk.tech.ensign.ftech.net>
····@rpw3.org (Rob Warnock) writes:

> Paolo Amoroso  <·······@mclink.it> wrote:
> +---------------
> | Ingvar Mattsson writes:
> | > If I remember correctly, the on-the-fly modification and update
> | > functionality was the reason behind the Norwegian Stock Exchange
> | > implementing its internal systems for trading in Common Lisp.
> | 
> | This is interesting. Can you provide some more information? Are there
> | any online/offline references?
> +---------------
> 
> I think maybe he's thinking about Espen Vestre's talk at ILC 2003
> about the "Net Fonds" on-line broker <URL:http://netfonds.no/>,
> whose trades are about 10% of the Norwegian Stock Exchange's volume
> (but is not the exchange itself).

Actually no, I was thinking about one of Erik Naggum's "Lisp success
stories". Roughly in the same time frame he was discussing writing
better time-handling code.

//Ingvar
-- 
My posts are fair game for anybody who wants to distribute the countless
pearls of wisdom sprinkled in them, as long as I'm attributed.
	-- Martin Wisse, in a.f.p
From: Ingvar Mattsson
Subject: Re: More static type fun.
Date: 
Message-ID: <87k76jszp7.fsf@gruk.tech.ensign.ftech.net>
Paolo Amoroso <·······@mclink.it> writes:

> [following up to comp.lang.lisp only]
> 
> Ingvar Mattsson writes:
> 
> > If I remember correctly, the on-the-fly modification and update
> > functionality was the reason behind the Norwegian Stock Exchange
> > implementing its internal systems for trading in Common Lisp.
> 
> This is interesting. Can you provide some more information? Are there
> any online/offline references?

Erik Naggum wrote a few articles here in comp.lang.lisp a few years
ago. I imagine a look at Google's news archive would be the thing.

-- 
My posts are fair game for anybody who wants to distribute the countless
pearls of wisdom sprinkled in them, as long as I'm attributed.
	-- Martin Wisse, in a.f.p
From: Marshall Spight
Subject: Re: More static type fun.
Date: 
Message-ID: <zNaob.63336$Tr4.169143@attbi_s03>
"Ingvar Mattsson" <······@cathouse.bofh.se> wrote in message ···················@gruk.tech.ensign.ftech.net...
> "Marshall Spight" <·······@dnai.com> writes:
>
> > Also, I'm not sure why you'd *want* to remove fields or methods.
> > What does forcing a method to go away at runtime buy you?
> > What is the use case?
>
> Long-running system, where a "stop the world" and "restart the world"
> is Not Feasible. Think, say, banking.

Hrm. Well, although I disagree with your specific example, it
was nonetheless illustrative. So, maybe embedded software for
life support. A restart might require surgery. (But then, I don't
see software upgrades happening in the field, either.)

I've worked in finance, and it's not been 24x7. Where I work
now is 24x7 and we have no problem with restarts. We restart
our app servers at least once every two weeks. It strikes me
that we'd have to write code to migrate all the changes as well
as make the changes; I'm not sure that's so much more efficient
than a restart, but then, I haven't tried it, so I'm just guessing.


Marshall
From: Ingvar Mattsson
Subject: Re: More static type fun.
Date: 
Message-ID: <87vfq6y8pj.fsf@gruk.tech.ensign.ftech.net>
"Marshall Spight" <·······@dnai.com> writes:

> "Ingvar Mattsson" <······@cathouse.bofh.se> wrote in message ···················@gruk.tech.ensign.ftech.net...
> > "Marshall Spight" <·······@dnai.com> writes:
> >
> > > Also, I'm not sure why you'd *want* to remove fields or methods.
> > > What does forcing a method to go away at runtime buy you?
> > > What is the use case?
> >
> > Long-running system, where a "stop the world" and "restart the world"
> > is Not Feasible. Think, say, banking.
> 
> Hrm. Well, although I disagree with your specific example, it
> was nonetheless illustrative. So, maybe embedded software for
> life support. A restart might require surgery. (But then, I don't
> see software upgrades happening in the field, either.)

OK, let us take "comms" as an example. When I worked in a company that
was a split telco/ISP, the telco-side guys laughed at us for having to
reboot our routers to load new code on them. I mean, they only managed
uptimes on the order of 1-2 years before we needed an interface that
wasn't well supported in the existing code. Meanwhile, at least one of
the phone switches had an uptime of five years, with at least one
major (and multiple minor) upgrades of the running code base.

//Ingvar
-- 
(defun m (f)
  (let ((db (make-hash-table :test #'equal)))
    #'(lambda (&rest a)
        (or (gethash a db) (setf (gethash a db) (apply f a))))))
From: Shiro Kawai
Subject: Re: More static type fun.
Date: 
Message-ID: <1bc2f7b2.0310301958.4c72b90b@posting.google.com>
"Marshall Spight" <·······@dnai.com> wrote in message news:<······················@attbi_s03>...
> I've worked in finance, and it's not been 24x7. Where I work
> now is 24x7 and we have no problem with restarts. We restart
> our app servers at least once every two weeks.

I once maintained server software in an almost-24x7
environment---"almost" meaning that we could usually find some
period to restart, usually in the early morning, when transactions
were very low. However, on a number of occasions we hit issues
that had to be addressed right there, while a frustrated producer
was waiting right behind you. Live patching was indispensable on
such occasions.

If we hadn't had the ability to live-patch the server, we
could have begged the entire studio for emergency downtime and
they'd have accepted---after all, they'd have had no other
choice---however frustrated they'd have been. It's like the
situation where, if you have only one OS and it shuts down every
now and then, you have to live with it. But once you know there's
another OS that doesn't need rebooting, you'll switch to it and
not look back.
From: ··········@ii.uib.no
Subject: Re: More static type fun.
Date: 
Message-ID: <egism54ylw.fsf@sefirot.ii.uib.no>
Ingvar Mattsson <······@cathouse.bofh.se> writes:

[Case where you need to dynamically upgrade without taking down the
system] 

> Long-running system, where a "stop the world" and "restart the world"
> is Not Feasible. Think, say, banking.

Or online multiplayer real-time games?  I know at least some of these
have been riddled with (and perhaps even failed because of)
instability. 

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bnrn21$kpc$1@newsreader2.netcologne.de>
Marshall Spight wrote:

> "Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
> 
>>For example, a MOP allows you to add and remove
>>arbitrary slots and methods to and from classes. A runtime MOP allows
>>you to do this at runtime, and this affects the classes that are used in
>>the running system.
>>
>>I think a static type system can especially not handle the removal of
>>slots or methods because this breaks invariants. And this doesn't even
>>take into account switching metaclasses at runtime, or addition and/or
>>removal of classes at runtime.
> 
> (Does "slot" mean something like "field"?)

Yes.

> I have questions about the usage and intended semantics of
> this kind of capability.
> 
> In a running program, if you add a field to a class, what should
> happen to the existing instances of that class? Would the addition
> only be allowed if the field had a default value, or would you want
> to be able to add fields at runtime without specifying a default? If
> so, would the fields in existing classes not have an associated value,
> or would you require somehow specifying a separate value for each
> of them?

Whatever you want. ;)

CLOS allows you to choose from the variants you mention, and then some. 
See http://www.lispworks.com/reference/HyperSpec/Body/04_cf.htm for some 
more details. (Warning: This is strong stuff! ;)

(Isn't it interesting that CLOS already offers, among other things, 
exactly what you have just envisioned?)

> If you remove a field, what happens to all the instances? What happens
> when you invoke a method that refers to a removed field?

When the field is accessed, the function SLOT-MISSING is called. By 
default, this function signals an error, which in Common Lisp means 
that you get a chance to take corrective action and then continue 
execution at the point where the condition was signalled. (This is very 
different from what Java does, for example.) You can also implement your 
own methods on SLOT-MISSING to make the program do automatically 
whatever you consider to be the right thing for this case.

BTW, there is also a function SLOT-UNBOUND that is called by default for 
fields that you have added to a class but that are not initialized yet. 
(Just in case you haven't provided a default initial value, or a default 
initialization method.)

> What about object construction code in the face of adding or removing
> fields?

Constructors are handled somewhat differently in CLOS. You could say 
that each field has its own constructor. (Again, there are more options, 
but this takes a while to explain. ;)

> If you remove a method from a class, what happens when you invoke
> methods that refer to the now-removed method?

First of all, methods don't belong to classes in CLOS. They are defined 
outside of the classes they are implemented for. This has several 
advantages, ranging from the fact that you don't need the Visitor 
pattern anymore, up to multimethods.

Again, when you call methods that don't exist anymore, appropriate 
exceptions might be thrown and you can take the necessary corrective 
actions. Or let this be handled automatically.

> As you say, adding methods raises no problems.
> 
> Also, I'm not sure why you'd *want* to remove fields or methods.
> What does forcing a method to go away at runtime buy you?
> What is the use case?

A general answer would be: optional features.

Imagine classes that have features that you sometimes want and sometimes 
not. In statically typed languages you have to use a flag to check 
whether such a feature is currently present or not, and then after the 
check use that feature.

In CLOS, you don't need the flag. (And of course, you can also check for 
the presence or absence of slots and/or methods at runtime.)

Here is another take on this:

One of the most criticized features of Java is the need for many, many 
class casts. The standard idiom is this:

if (obj instanceof SomeClass) {
   SomeClass scobj = (SomeClass)obj;
   scobj.useSpecificFeature();
}

There are several problems with this. One is that, in general, this isn't 
thread safe: in the short time frame between the instanceof check and 
the actual class cast, obj might have been changed by some other thread. 
The other is that you need to write lots of code to do this.

Statically typed languages take care of the second problem. The goal is 
that you already have the right type at the moment you want to use the 
object. However, when you still want to have optional features, you now 
need to write them manually, roughly as follows:

if (obj.hasFeature()) {
   obj.useFeature();
}

You still have potential thread safety problems here.

In a dynamically typed language, you have a chance to directly use the 
optional feature without checking first, and let an exception handler 
deal with the potential absence of features, like this:

try {
   obj.useFeature();
} catch (FeatureMissingException e) {
   correctSituation();
   if (canRetry) {
     retry; // <- Java doesn't have this!
   }
}

Now, you don't have the thread safety problems anymore.
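Java has no `retry`, but a similar effect can be emulated by wrapping
the try/catch in a loop: correct the situation in the handler, then
re-attempt the whole operation from the top. A minimal sketch (the
class, the use of NoSuchElementException as a stand-in for the post's
FeatureMissingException, and useWithRetry are all hypothetical, not
from the post):

```java
import java.util.NoSuchElementException;
import java.util.Optional;

public class RetryDemo {
    // A hypothetical object whose optional feature may be absent.
    static class Obj {
        Optional<String> feature = Optional.empty();

        String useFeature() {
            // Stand-in for the post's FeatureMissingException.
            return feature.orElseThrow(NoSuchElementException::new);
        }
    }

    // Emulate `retry` with a loop: the handler corrects the situation,
    // then the operation is attempted again from the top.
    static String useWithRetry(Obj obj) {
        while (true) {
            try {
                return obj.useFeature();
            } catch (NoSuchElementException e) {
                obj.feature = Optional.of("installed"); // correctSituation()
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(useWithRetry(new Obj()));
    }
}
```

Note that unlike true resumption, the loop re-runs the whole protected
block rather than continuing at the point of failure.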


Actually, the potential problems with thread safety stem from the fact 
that the language allows for side effects. One answer from the 
functional programming community is to completely get rid of side 
effects. Then you can't change the object anymore from the outside. In 
some situations, this can indeed be a good idea.

However, some things are better modelled with objects that do allow for 
side effects, so this is not always the best solution IMHO.


Pascal
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fa26a35$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>try {
>   obj.useFeature();
>} catch (FeatureMissingException e) {
>   correctSituation();
>   if (canRetry) {
>     retry; // <- Java doesn't have this!
>   }
>}

Support for resumption in exception handling is not specific to
dynamically typed languages; it would be quite possible to support
resumption in a statically typed language.

Java inherited the lack of support for resumption in its exception handling
from C++.  The C++ designers considered the idea of resumption, and
noticed that they didn't need to support it in the language, since you
could do it as a library.  And that is true so long as all the exceptions
that are thrown are thrown by user code.  But the C++ designers, having
correctly analyzed that no _language_ support was needed for resumable
exceptions, seemed to promptly forget all about them, and in particular
failed to include any _library_ support. :-(

The Java designers then copied this flawed design, even though the
assumptions which led to it were not true in Java. :-( :-(

(The exception handling facilities of Haskell and Mercury don't support
resumption either, but at least in those cases there is a good reason:
it doesn't sit well in a pure language, since resumption basically relies
on side effects.)
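The point that resumption needs only library support, not language
support, can be sketched even in Java: have the failing code consult a
stack of handlers *before* unwinding, and throw for real only if no
handler supplies a value to resume with. All names here are
hypothetical, a sketch of the idea rather than any existing library:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Function;

public class Resumption {
    // Handlers inspect a "condition" string and may supply a value to
    // resume with; returning null means "decline this condition".
    static final Deque<Function<String, Integer>> handlers = new ArrayDeque<>();

    // Library replacement for `throw`: handlers run at the point of the
    // error, before any unwinding, which is what makes resumption possible.
    static int signal(String condition) {
        for (Function<String, Integer> h : handlers) {
            Integer resumeWith = h.apply(condition);
            if (resumeWith != null) return resumeWith; // resume right here
        }
        throw new RuntimeException(condition);         // nobody resumed
    }

    static int parsePositive(String s) {
        int n = Integer.parseInt(s);
        if (n <= 0) n = signal("not positive: " + s);  // resumable point
        return n;
    }

    public static void main(String[] args) {
        handlers.push(c -> 1);  // a handler that resumes with 1
        System.out.println(parsePositive("-5") + parsePositive("7"));
    }
}
```

The catch, as with any library approach, is that it only works for
errors signalled through signal(); exceptions thrown by other code
still unwind as usual.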

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Daniel C. Wang
Subject: Re: More static type fun.
Date: 
Message-ID: <usml9x7dr.fsf@hotmail.com>
Fergus Henderson <···@cs.mu.oz.au> writes:
{stuff deleted}
> Support for resumption in exception handling is not specific to
> dynamically typed languages; it would be quite possible to support
> resumption in a statically typed language.

As an aside, you can implement resumable exceptions in SML/NJ and MLton
simply by throwing an exception value that happens to contain the current
continuation captured by call-cc. I actually did such an evil thing once,
when I needed a resumable exception for some reason I forget.
From: Neelakantan Krishnaswami
Subject: Re: More static type fun.
Date: 
Message-ID: <slrnbq638n.6mn.neelk@gs3106.sp.cs.cmu.edu>
In article <·············@hotmail.com>, Daniel C. Wang wrote:
> 
> Fergus Henderson <···@cs.mu.oz.au> writes:
> {stuff deleted}
>> Support for resumption in exception handling is not specific to
>> dynamically typed languages; it would be quite possible to support
>> resumption in a statically typed language.
> 
> As an aside, you can implement resumable exceptions in SML/NJ and
> MLton, simply by throwing an exception value that happens to contain
> the current continuation captured by call-cc. I actually, did such
> an evil thing once, when I needed a resumable exception for some
> reason I forgot.

I don't think it's any more evil than a ref cell. But here's
an example for all the curious bystanders:

  structure CC = SMLofNJ.Cont
  
  exception Resume of int CC.cont
  fun throw_resumable() = CC.callcc (fn r => raise (Resume r))
  
  val example =
      let in 
  	  5 + throw_resumable()
      end
      handle Resume r => CC.throw r 13

example will be bound to 18. 

-- 
Neel Krishnaswami
·····@cs.cmu.edu
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bnun18$noh$1@newsreader2.netcologne.de>
Fergus Henderson wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>try {
>>  obj.useFeature();
>>} catch (FeatureMissingException e) {
>>  correctSituation();
>>  if (canRetry) {
>>    retry; // <- Java doesn't have this!
>>  }
>>}
> 
> 
> Support for resumption in exception handling is not specific to
> dynamically typed languages; it would be quite possible to support
> resumption in a statically typed language.

Yes, would be, but isn't considered "politically correct", as you 
mention yourself below.

> (The exception handling facilities of Haskell and Mercury don't support
> resumption either, but at least in those cases there is a good reason:
> it doesn't sit well in a pure language, since resumption basically relies
> on side effects.)

I think you could do something with continuations/monads here.

I didn't mean to give an exhaustive description of the topic. But note 
that in statically typed languages you have to anticipate the need for 
such facilities, while in dynamically typed languages it is the default 
behavior anyway.


Pascal
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fa3a9ac$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>Fergus Henderson wrote:
>
>> Pascal Costanza <········@web.de> writes:
>> 
>> 
>>>try {
>>>  obj.useFeature();
>>>} catch (FeatureMissingException e) {
>>>  correctSituation();
>>>  if (canRetry) {
>>>    retry; // <- Java doesn't have this!
>>>  }
>>>}
>> 
>> Support for resumption in exception handling is not specific to
>> dynamically typed languages; it would be quite possible to support
>> resumption in a statically typed language.
>
>Yes, would be, but isn't considered "politically correct", as you 
>mention yourself below.

? 

It is _impure_, which means it is considered politically incorrect by
those in the purely declarative programming camp.  But the decision
about pure versus impure is orthogonal to the decision about
statically typed versus dynamically typed.

>I didn't mean to give an exhaustive description of the topic. But note 
>that in statically typed languages you have to anticipate the need for 
>such facilities while in dynamically typed languages is the default 
>behavior anyway.

That's just not true.  In Prolog, for example, exceptions are not resumable.

There is no direct relationship between resumability of exceptions and
static versus dynamic typing.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Adrian Hey
Subject: Re: More static type fun.
Date: 
Message-ID: <bo0kvn$apc$1$8300dec7@news.demon.co.uk>
Pascal Costanza wrote:

> Fergus Henderson wrote:
>> Support for resumption in exception handling is not specific to
>> dynamically typed languages; it would be quite possible to support
>> resumption in a statically typed language.
> 
> Yes, would be, but isn't considered "politically correct", as you
> mention yourself below.
> 
>> (The exception handling facilities of Haskell and Mercury don't support
>> resumption either, but at least in those cases there is a good reason:
>> it doesn't sit well in a pure language, since resumption basically relies
>> on side effects.)

Purity isn't just a matter of being "politically correct".
It's pretty much essential in a language which uses lazy evaluation
(such as Haskell).

Regards
--
Adrian Hey
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bnp1fg$v44$1@f1node01.rhrz.uni-bonn.de>
Dirk Thierbach wrote:

> Why do you insist on putting up strawmen? 

That's not a straw man.

You said:

>> What I don't understand is why people (not you) say that for them it
>> is "impossible" to do some things with static typing, and that
       ^^^^^^^^^^^^
>> languages that don't use static typing are therefore "better". Clearly
>> you can do the same things. You do it a little bit differently, but
>> the difference is not so big to be such important. Other differences
>> of the languages are much more important.

In response to...

 >>What would a static type system for CLOS+MOP, or for Smalltalk with a
 >>MOP look like?

...you now say this:

 > You cannot make one. We already discussed this. You cannot even make
       ^^^^^^
 > a "good" static type system for CL, let alone CLOS, or Smalltalk.


Why is it so hard to understand that you are contradicting yourself here?

Shall I rephrase? It's _impossible_ to make a static type system for 
CLOS+MOP. Therefore, it's _impossible_ to use a CLOS-style MOP together 
with static typing. That's _exactly_ the thing that's "impossible to do 
with static typing".

When you need the flexibility that a MOP provides, you would just not be 
able to do things "a little bit differently" in a statically typed 
language, except perhaps by applying Greenspun's tenth rule.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <1hr571-d44.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:

> Shall I rephrase? It's _impossible_ to make a static type system for 
> CLOS+MOP. Therefore, it's _impossible_ to use a CLOS-style MOP together 
> with static typing. That's _exactly_ the thing that's "impossible to do 
> with static typing".

In that sense, it is already "impossible" to use CL itself with static
typing (you have repeated this "objective truth" often enough :-)

And I grant you that it is "impossible" to take a CL program and
to expect that it will work exactly in this way in another language
(say, Smalltalk).

But that's not the point.

It's not impossible to solve the same task you solve in CL, or CLOS,
or CLOS+MOP, in another language, and with code that is about as
convenient as the code you would write in CL, or CLOS, or CLOS+MOP.

> When you need the flexibility that a MOP provides, you would just not be 
> able to do things "a little bit differently" in a statically typed 
> language, except perhaps by applying Greenspun's tenth rule.

But you won't need the "flexibility that a MOP provides" if there is
no comparable object system in the other language. You use a
completely different approach. (I admit that that makes it hard to
compare if you can do it as conveniently as in CLOS+MOP.)

You insist all the time that you want to do it in exactly the same
way as you do it in CLOS+MOP. The only language where you can
do that in exactly the same way *is* CLOS+MOP. So you shouldn't
be surprised that it is impossible to do it in the same way in
any other language, no matter if it is statically or dynamically typed.

There's a difference between those two types of "impossible" (one is
trivial, and the other is just not true), and the difference isn't
based on "all languages are Turing complete, anyway".

Neelakantan Krishnaswami has given some pointers. You might also
want to look at Haskell typeclasses, at Monads, at O'Haskell, and at
the objects of OCaml.

Somewhere in this thread, Joachim has also made a list of examples of
MOP usage, and how you could get the same effect in a different way.

Can we now EOT this part before we iterate it the n-th time?

- Dirk
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bnqpjn$399$1@newsreader2.netcologne.de>
Dirk Thierbach wrote:

> Can we now EOT this part before we iterate it the n-th time?

OK.


Pascal
From: Russell Wallace
Subject: Re: More static type fun.
Date: 
Message-ID: <3f9ff444.114287024@news.eircom.net>
On Wed, 29 Oct 2003 10:31:57 +0100, Dirk Thierbach <··········@gmx.de>
wrote:

>Russell Wallace <················@eircom.net> wrote:
>
>> The reason why, after all these years, I've acquired a definite
>> preference for dynamic typing, is that I find I do indeed want to use
>> different types in the same place at once, sufficiently often that
>> doing it in a static language just makes me an example of Greenspun's
>> Tenth.
>
>Interesting. Can you give examples?

Sure.

- Business systems, where in order to avoid bloating the code to the
point where the project becomes unviable, a great deal of it
(particularly the user interface) has to be able to handle generic
"values" (number, date, string, record reference etc).

- Games, where a lot of action/event code similarly needs to be able
to handle generic "entities" (items, characters, vehicles etc).

- Evolutionary computation, where in order to keep the code flexible
in the face of change, you want as much of it as possible to be able
to do something sensible whatever type of data it's given.

>What I don't understand is why people (not you) say that for them it
>is "impossible" to do some things with static typing, and that
>languages that don't use static typing are therefore "better". Clearly
>you can do the same things. You do it a little bit differently, but
>the difference is not so big to be such important. Other differences
>of the languages are much more important.
>
>Especially when it turns out that most of them make these claims
>out of ignorance: They don't know what good static type systems can
>do, they have only used crappy static type systems before, etc.
>
>If you don't know about the alternatives, why so much emotion?

Well, I agree with you that the whole static versus dynamic typing
issue doesn't really merit the amount of heated argument it gets.
Saying this or that is _impossible_ in one or the other is hyperbole,
a more reasonable statement would be "for this particular pattern,
system X is more convenient than system Y" or suchlike.

-- 
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
http://www.esatclear.ie/~rwallace
From: Mark Carroll
Subject: Re: More static type fun.
Date: 
Message-ID: <Jku*Zxa6p@news.chiark.greenend.org.uk>
In article <··················@news.eircom.net>,
Russell Wallace <················@eircom.net> wrote:
>On Wed, 29 Oct 2003 10:31:57 +0100, Dirk Thierbach <··········@gmx.de>
>wrote:
>
>>Russell Wallace <················@eircom.net> wrote:
>>
>>> The reason why, after all these years, I've acquired a definite
>>> preference for dynamic typing, is that I find I do indeed want to use
>>> different types in the same place at once, sufficiently often that
>>> doing it in a static language just makes me an example of Greenspun's
>>> Tenth.
>>
>>Interesting. Can you give examples?
(snip)
>- Business systems, where in order to avoid bloating the code to the
(snip)
>- Games, where a lot of action/event code similarly needs to be able
>to handle generic "entities" (items, characters, vehicles etc).

These both sound like problems I'd solve in Haskell with typeclasses.
Where Haskell fails at the moment, I suspect, is when new things have
to be introduced into already-running code.

>- Evolutionary computation, where in order to keep the code flexible
>in the face of change, you want as much of it as possible to be able
>to do something sensible whatever type of data it's given.

My Haskell genetic-algorithm code is indeed pretty flexible. The
biggest constraint that I can see in it is that it requires that
alternatives be composed of parts that can be freely mixed and
matched, and that alternatives' fitness can be expressed in terms of a
set of numbers. Beyond that, it's pretty generic. The signature's
simply,

evolveM :: (Ord a, Num b, Ord b, RandomGen g, Monad m) => g -> [a] -> [[a]] -> ([a] -> [a] -> [a]) -> ([a] -> m [b]) -> Int -> Int -> m [([a], [b])]

evolveM rng possibilities seeds combine appraise target_population generations

(possibilities is the list of valid parts, appraise is the fitness evaluator)
(a is the type of alternatives' parts but, by using algebraic datatypes,
these could be different sorts of thing)

(snip)
>Well, I agree with you that the whole static versus dynamic typing
>issue doesn't really merit the amount of heated argument it gets.
>Saying this or that is _impossible_ in one or the other is hyperbole,
>a more reasonable statement would be "for this particular pattern,
>system X is more convenient than system Y" or suchlike.

I also think it has a lot to do with what suits one's programming
style and way of thinking about new programming projects. I was
probably the best Common Lisp coder at my previous workplace (which,
admittedly, isn't saying much!), but I am still very happy to be using
Haskell now. I appreciate Lisp's dynamism, and see that it's very
powerful, I just don't much need it for the way I tend to do things
(and I do write a wide variety of types of stuff) and I find adding
type signatures a useful, helpful discipline (I add them even though
they're usually not needed because the compiler can infer them). I
really think it's partly just a matter of different styles and
approaches where both are often useful, but they fit different
individuals differently.

-- Mark
From: Russell Wallace
Subject: Re: More static type fun.
Date: 
Message-ID: <3fa0272d.127322185@news.eircom.net>
On 29 Oct 2003 19:41:03 +0000 (GMT), Mark Carroll
<·····@chiark.greenend.org.uk> wrote:

>These both sound like that in Haskell I'd solve them with typeclasses.

Yes, that would probably work, though to my mind using a dynamically
typed language is simpler and more convenient.

>Where Haskell fails at the moment, I suspect, is when new things have
>to be introduced into already-running code.

That's something I haven't ever done. (The one thing I have in mind
for my next project that's liable to need a feature which may be
unique to Lisp is code-level introspection - a program reasoning about
the behavior of parts of itself by analyzing the S-expression form of
the code.)

>My Haskell genetic-algorithm code is indeed pretty flexible. The
>biggest constraint that I can see in it is that it requires that
>alternatives by composed of parts that can be freely mixed and
>matched, and that alternatives' fitness can be expressed in terms of a
>set of numbers. Beyond that, it's pretty generic.

Yes, that works too. (Can Haskell put together and compile chunks of
Haskell on the fly, or do you need to use an interpreted
sub-language?)

>I also think it has a lot to do with what suits one's programming
>style and way of thinking about new programming projects. I was
>probably the best Common Lisp coder at my previous workplace (which,
>admittedly, isn't saying much!), but I am still very happy to be using
>Haskell now. I appreciate Lisp's dynamism, and see that it's very
>powerful, I just don't much need it for the way I tend to do things
>(and I do write a wide variety of types of stuff) and I find adding
>type signatures a useful, helpful discipline (I add them even though
>they're usually not needed because the compiler can infer them). I
>really think it's partly just a matter of different styles and
>approaches where both are often useful, but they fit different
>individuals differently.

Yes, I agree completely.

-- 
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
http://www.esatclear.ie/~rwallace
From: Mark Carroll
Subject: Re: More static type fun.
Date: 
Message-ID: <XTd*Glb6p@news.chiark.greenend.org.uk>
In article <··················@news.eircom.net>,
Russell Wallace <················@eircom.net> wrote:
>On 29 Oct 2003 19:41:03 +0000 (GMT), Mark Carroll
><·····@chiark.greenend.org.uk> wrote:
(snip)
>>Where Haskell fails at the moment, I suspect, is when new things have
>>to be introduced into already-running code.
>
>That's something I haven't ever done. (The one thing I have in mind
>for my next project that's liable to need a feature which may be
>unique to Lisp is code-level introspection - a program reasoning about
>the behavior of parts of itself by analyzing the S-expression form of
>the code.)

Oh, that sounds like an interesting challenge!

I saw a bit of interesting runtime stuff with Modula-3 being able to
"pickle" data objects, and "unpickle" them into later versions of the
code, and I presume that the SPIN operating system touched on some of
that, but I don't know to what extent all that relied on subverting
Modula-3's static typing (LOOPHOLE, etc.) instead of somehow living
within it.

Argh. I'm worried I'm drifting completely off-topic here. I can at
least avoid posting mostly-Haskell replies to comp.lang.lisp if that
would be the polite thing to do.

(snip)
>Yes, that works too. (Can Haskell put together and compile chunks of
>Haskell on the fly, or do you need to use an interpreted
>sub-language?)
(snip)

I believe that there has been some recent work, or at least thought,
in that direction, using ghci (interactive GHC) as a basis - I don't
know much about that, but FWIW IIRC ghci goes via some sort of
bytecode when you're just typing code at it interactively. Some
earlier work involved compiling new .o files and dynamically linking
and de-linking them.

So, in a nutshell, I'm not sure: I don't think it's easy at the
moment, but I think it's possible and on the way.

-- Mark
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <83h571-3l1.ln1@ID-7776.user.dfncis.de>
Russell Wallace <················@eircom.net> wrote:

> - Business systems, where in order to avoid bloating the code to the
> point where the project becomes unviable, a great deal of it
> (particularly the user interface) has to be able to handle generic
> "values" (number, date, string, record reference etc).
> 
> - Games, where a lot of action/event code similarly needs to be able
> to handle generic "entities" (items, characters, vehicles etc).

I don't know enough details to really judge it, but naively it looks
to me like you could handle that well with Haskell or ML datatypes.

> - Evolutionary computation, where in order to keep the code flexible
> in the face of change, you want as much of it as possible to be able
> to do something sensible whatever type of data it's given.

If the code is in a "special language" that is interpreted, it also
looks like such an interpreter could be easily written using datatypes.

I certainly wouldn't want to do either of them in C, C++, or Java,
though :-)

> Saying this or that is _impossible_ in one or the other is hyperbole,
> a more reasonable statement would be "for this particular pattern,
> system X is more convenient than system Y" or suchlike.

Exactly.

- Dirk
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <bhr171-g92.ln1@ID-7776.user.dfncis.de>
·············@comcast.net wrote:
> Dirk Thierbach <··········@gmx.de> writes:

[...]
> Er, more or less.  The program won't `crash', but it does enter the
> error handler (unless I catch it otherwise).

It will throw an exception. One point of a static type system is
that you can write programs where you can be sure that they will never
throw exceptions, and verify this at compile time. 

If that is too difficult to prove for the type checker, then you can
of course also revert to dynamic checks and throw exceptions if needed.

> I am concerned, though, that with the auxiliary function you are
> essentially `turning off' the static type checking.

Yes. That's because you wrote a function that required it, because you
were convinced that this function would be useful. You can 'turn 
the static type checking on' again by handling all the possible
combinations. You can also modify your function by giving it
two arguments. Then you can write it directly:

> foo2 f g = (g (+)) (f 3) (f 2)

test2 becomes easy then:

> test2 = foo2 id (\_ -> (*))

> This is no safer than the lisp version, but the projection of the
> values is clumsier.

Yes. The advantage is that you have to be honest about what you
do, so it becomes obvious that there might be a problem if you
are not careful.

> The other issue I see is that this doesn't generalize.  Suppose that
[you define a function to monitor other functions]

You're right, you cannot do that. The reason is that it won't work in
compiled code. The information about how many arguments a function takes and
how the bit patterns are to be interpreted is not available at
run-time (in Lisp, it is). So you cannot expect to write a function
that "reaches into" a compiled function, extracts this information,
and works with it.

All one can do with a function at runtime is apply arguments to it and
execute it. For the same reason you cannot compare functions for
equality (they might be different compiled optimizations of the same
function, for example).

So if you want to extend the language itself and allow tracing of
arbitrary functions, you cannot do it in the language. You have to
extend the compiler and the interpreter, or use the debugging features
that are already available.

(If you want to trace a particular function, then of course you
can just write a wrapper function with the same type).
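The wrapper idea is language-neutral; continuing the thread's Java
examples, here is a minimal sketch (all names hypothetical) of wrapping
a function in another function of the same type to trace its calls,
with no runtime introspection of the compiled code needed:

```java
import java.util.function.Function;

public class TraceDemo {
    // Wrap a function in another of the same type that logs each call
    // and result before delegating. Only composition is used, nothing
    // is "reached into" at runtime.
    static <A, B> Function<A, B> traced(String name, Function<A, B> f) {
        return a -> {
            System.out.println(name + " <- " + a);
            B b = f.apply(a);
            System.out.println(name + " -> " + b);
            return b;
        };
    }

    public static void main(String[] args) {
        Function<Integer, Integer> square = x -> x * x;
        Function<Integer, Integer> tracedSquare = traced("square", square);
        System.out.println(tracedSquare.apply(5));
    }
}
```

The limitation matches the one described above: each function must be
wrapped explicitly, at its own type; you cannot write one tracer that
reaches into arbitrary compiled functions.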

So part of the issue is that you don't have an interpreted language,
but a compiled one, where the static type information is used only at
compile time, and in consequence the number of necessary automatic
dynamic checks at runtime is a lot less. But then of course this
information is gone, and you cannot expect to recover it.

> -----
> 
>>> (defun transpose-tensor (tensor)
>>>  (apply #'mapcar #'mapcar (list #'list #'list) tensor))
>>
>> That's the most interesting example so far, because it really has an
>> application. 
> 
> [discussion snipped]
>>
>>> mapcar :: ([a] -> b) -> [[a]] -> [b]
>>> mapcar f m = map f (transpose m)
>>
>>> f = mapcar (mapcar id)
>>
>>> mapcar2 :: (a -> [b] -> c) -> [a] -> [[b]] -> [c]
>>> mapcar2 f l m = zipWith f l (transpose m)
>>
>> zipWith is just the Lisp mapcar with two lists to operate on. Then
>> we have 
>>
>>> g = mapcar2 mapcar [id, id]
> 
> Here's where I have a problem.  MAPCAR2 and MAPCAR are very different
> functions here, but in mine they were identical.  What if I do this:

> (defun mixmaster (f1 f2 list)
>  (apply f1 f1 (list f2 f2) list))

> (defun bogus-transpose (x)
>  (mixmaster #'mapcar #'list x))

(apply f1 f1) is bogus again; it requires a recursive type (the
function is applied to itself). The easiest thing to do here is to
admit that the two functions do different things (after all, they work
on different levels of list nesting), so you use two arguments.

mixmaster f1 f1' f2 = f1 f1' [f2, f2]

bogus_transpose = mixmaster mapcar2 mapcar id

> and for the sake of offering *some* indication that mixmaster might
> not be totally useless:
> 
> (defun bogus-multiply (x)
>  (mixmaster #'mapcar  #'* x))

bogus_multiply = mixmaster mapcar2 mapcar product


- Dirk
From: Pascal Costanza
Subject: Re: More static type fun.
Date: 
Message-ID: <bnmqkt$fqu$6@newsreader2.netcologne.de>
Dirk Thierbach wrote:

> ·············@comcast.net wrote:
> 
>>Dirk Thierbach <··········@gmx.de> writes:
> 
> 
> [...]
> 
>>Er, more or less.  The program won't `crash', but it does enter the
>>error handler (unless I catch it otherwise).
> 
> 
> It will throw an exception. One point of a static type system is
> that you can write programs where you can be sure that they will never
> throw exceptions, and verify this at compile time. 
> 
> If that is too difficult to prove for the type checker, then you can
> of course also revert to dynamic checks and throw exceptions if needed.

Please, _please_, think one minute about the possibility that runtime 
exceptions might be a good thing and exactly what you want!

(No, not "rarely", "haven't seen this", "not in 99% of all cases", etc., 
but actually, depending on the circumstances, _exactly_ what you want as 
the default behavior.)


Pascal
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <8sc471-hf1.ln1@ID-7776.user.dfncis.de>
Pascal Costanza <········@web.de> wrote:

> Please, _please_, think one minute about the possibility that runtime 
> exceptions might be a good thing and exactly what you want!

Yes, there are. But I don't want them all the time. I, as the programmer,
want to be able to decide when I want exceptions, and when I want to
be sure there are none.

Lisp, with the "opt-out" approach, doesn't allow me to do this.
I can statistically check that there will probably be no exceptions
thrown, but I can never be sure. I must always be prepared to handle
them, even if I don't want to. That's the disadvantage. The advantage
is that it is easier to get dynamic types with exceptions if I want them.

Haskell etc. with the "opt-in" approach have "no exceptions" as the
default behaviour, but if you want to go dynamic, you can do it. So
the advantage is that it is possible to be 100% sure that
there won't be any exceptions. The disadvantage is that you need some
extra effort to go dynamic, but in my experience you don't do that
very often (in fact, I cannot remember ever having to do it).
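As a hedged sketch of what "going dynamic" can look like in Haskell, here is a minimal use of the standard Data.Dynamic module (the helper firstInt is made up for illustration):

```haskell
import Data.Dynamic (Dynamic, fromDynamic, toDyn)

-- A heterogeneous list: each element carries its type at runtime.
stuff :: [Dynamic]
stuff = [toDyn "hello", toDyn (42 :: Int), toDyn True]

-- Getting a value back out is an explicit dynamic check; a type
-- mismatch shows up as Nothing rather than as an exception.
firstInt :: [Dynamic] -> Maybe Int
firstInt = foldr (\d rest -> maybe rest Just (fromDynamic d)) Nothing
```

Here firstInt stuff yields Just 42; everything outside such Dynamic islands keeps its compile-time guarantees.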

It's a tradeoff, but none of them is "better".

> (No, not "rarely", "haven't seen this", "not in 99% of all cases", etc., 
> but actually, depending on the circumstances, _exactly_ what you want as 
> the default behavior.)

I most definitely and emphatically *don't* want to have this as the
default behaviour. And unless you are trying to tell me that you write
such bad code that you can never be sure that it won't throw some error,
even you would probably have no problem in 99% of the cases.

But it's a matter of taste, not of "power" or "expressiveness". That's
the point.

If you want to have dynamic types and if you want to catch all your
bugs at runtime, that's fine. Use Lisp. I do, too, if I want this
behaviour. But don't tell me that this is the way I *ought* to write
my programs, and that any language that at least offers some
guarantees at compile-time is "worse" than one that doesn't.

- Dirk
From: ···@cs.rutgers.edu
Subject: Re: More static type fun.
Date: 
Message-ID: <eb10b8a8.0310302338.5e45f85d@posting.google.com>
Dirk Thierbach <··········@gmx.de> wrote in message news:<··············@ID-7776.user.dfncis.de>...
> [with Haskell etc. ]
> the advantage is that it is possible to be 100% percent sure that
> there won't be any exceptions. 

You mean no _type_ exceptions - in general you can't avoid
divide-by-zero and similar things
with only compile-time checks.  I'm pretty sure "no exceptions at all"
reduces to the Turing Machine halting problem.
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <wualmqly.fsf@ccs.neu.edu>
···@cs.rutgers.edu writes:

> Dirk Thierbach <··········@gmx.de> wrote in message news:<··············@ID-7776.user.dfncis.de>...
>> [with Haskell etc. ]
>> the advantage is that it is possible to be 100% percent sure that
>> there won't be any exceptions. 
>
> You mean no _type_ exceptions - in general you can't avoid
> divide-by-zero and similar things
> with only compile-time checks.  I'm pretty sure "no exceptions at all"
> reduces to the Turing Machine halting problem.

He also means no `statically detectable type exceptions'.  This sort
of thing is `statically typable':

 foo (Bar x) = x
 foo _       = error "No matching clause."

But it still throws a runtime exception if you pass it a non-bar
object.  Static typers don't like to call these sorts of things type
errors, though.
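To make the point concrete, here is a hypothetical self-contained version of that snippet (the type T and the values are invented for illustration): the definition typechecks, yet the catch-all clause still raises at runtime.

```haskell
import Control.Exception (ErrorCall, evaluate, try)

data T = Bar Int | Baz          -- a made-up two-constructor sum type

foo :: T -> Int
foo (Bar x) = x
foo _       = error "No matching clause."

main :: IO ()
main = do
  print (foo (Bar 7))           -- fine
  -- The type checker accepted foo, but applying it to a non-Bar
  -- value still throws; we catch the ErrorCall here to show it.
  r <- try (evaluate (foo Baz)) :: IO (Either ErrorCall Int)
  print r
</imports>
```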
From: Erann Gat
Subject: Re: More static type fun.
Date: 
Message-ID: <gat-3110030901530001@192.168.1.51>
In article <············@ccs.neu.edu>, Joe Marshall <···@ccs.neu.edu> wrote:

> ···@cs.rutgers.edu writes:
> 
> > Dirk Thierbach <··········@gmx.de> wrote in message
> > news:<··············@ID-7776.user.dfncis.de>...
> >> [with Haskell etc. ]
> >> the advantage is that it is possible to be 100% percent sure that
> >> there won't be any exceptions. 
> >
> > You mean no _type_ exceptions - in general you can't avoid
> > divide-by-zero and similar things
> > with only compile-time checks.  I'm pretty sure "no exceptions at all"
> > reduces to the Turing Machine halting problem.
> 
> He also means no `statically detectable type exceptions'.  This sort
> of thing is `statically typable':
> 
>  foo (Bar x) = x
>  foo _       = error "No matching clause."
> 
> But it still throws a runtime exception if you pass it a non-bar
> object.  Static typers don't like to call these sorts of things type
> errors, though.

My (recently acquired and possibly still wrong) understanding of this is
that the static typers have this thing called "bottom" which is defined as
the "value" that is "returned" by the error function.  It's sort of like
the "value" that is a "member" of type NIL (the empty type) in Common
Lisp.

I was told recently that bottom is considered to be a boolean as an
explanation of why (defun foo () (not (foo))) was not considered a type
error.  I extrapolate from that that bottom must be a member of any type. 
(This is again parallel to the claim that NIL is a sub-type of all types
in Common Lisp.)

If you think about it, "bottom is a member of all types" means exactly the
same thing as "functions that pass static type checking can still generate
run-time errors".

E.
From: Neelakantan Krishnaswami
Subject: Re: More static type fun.
Date: 
Message-ID: <slrnbq62r0.6mn.neelk@gs3106.sp.cs.cmu.edu>
In article <····················@192.168.1.51>, Erann Gat wrote:
>
> My (recently acquired and possibly still wrong) understanding of
> this is that the static typers have this thing called "bottom" which
> is defined as the "value" that is "returned" by the error function.
> It's sort of like the "value" that is a "member" of type NIL (the
> empty type) in Common Lisp.

Bottom actually originated in denotational semantics, in order to give
a semantics to programs that don't terminate. In fact, bottom *means*
that an expression doesn't reduce to a value. Two examples of
this are looping forever and throwing an exception.

Since the designers of ML and Haskell wanted the possibility of
writing looping programs at any type, bottom is an element of every
type's value domain. Another way of thinking about it is: if a
function doesn't return a value, then it's safe to give it any type
because you can never get an ill-typed value out of it.
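A small sketch of that intuition in Haskell (the names are invented): a computation that never produces a value can be given any type at all.

```haskell
-- 'loop' typechecks at every type precisely because it never returns:
-- its denotation is bottom, which inhabits every type's value domain.
loop :: a
loop = loop

-- The same bottom, reached via an exception instead of a loop.
boom :: a
boom = error "also bottom"

-- Thanks to laziness, bottom can even sit inside a value unexamined:
fortyTwo :: Int
fortyTwo = fst (42, loop)
```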

> If you think about it, "bottom is a member of all types" means
> exactly the same thing as "functions that pass static type checking
> can still generate run-time errors".

You have an excellent intuition! The only real elaboration needed is
to be very clear that bottom isn't a value: it exists precisely to
mathematically represent things like run-time errors and infinite
loops. People are sometimes sloppy about this distinction, which
causes all sorts of problems in discussion.

However, there are type systems in which all well-typed programs
terminate. Historically these haven't been expressive enough for
substantial use, but the landscape here is changing very quickly.
Personally, I'd be surprised if the next big functional language
didn't have a terminating core language, with nonterminating programs
existing at a different type from terminating ones.

-- 
Neel Krishnaswami
·····@cs.cmu.edu
From: Fergus Henderson
Subject: Re: More static type fun.
Date: 
Message-ID: <3fa3bbf8$1@news.unimelb.edu.au>
···@jpl.nasa.gov (Erann Gat) writes:

>My (recently acquired and possibly still wrong) understanding of this is
>that the static typers have this thing called "bottom" which is defined as
>the "value" that is "returned" by the error function.

Not all of them.  The Haskell community certainly do, but other
communities such as the statically typed logic programming community
(as exemplified by e.g. Goedel and Mercury) prefer to use other approaches
to modelling nontermination and run-time errors.  In these languages,
the semantics says that the boolean type has only two values, not three.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: More static type fun.
Date: 
Message-ID: <pan.2003.11.01.14.19.28.255804@knm.org.pl>
On Fri, 31 Oct 2003 09:01:53 -0800, Erann Gat wrote:

> My (recently acquired and possibly still wrong) understanding of this is
> that the static typers have this thing called "bottom" which is defined as
> the "value" that is "returned" by the error function.

I think this applies only to lazy languages (e.g. Haskell, Clean). It's
because bottom can be also bound to an identifier, passed as a parameter
or stored in data structures. So it's simpler to say that it's a member
of every type (or that every type has its bottom) than to say that an
identifier is bound to a value or bottom, an expression can denote a value
or bottom etc. A function applied to an expression is reduced with
its parameter name bound to the value of that expression, whether it's
bottom or not.

In strict languages bottom can be only returned from functions. In this
case it's simpler to not treat it as a value, and say that a function
either returns a value or doesn't return or throws an exception.
Evaluation of an application checks whether the argument evaluates to a
value, and if not, the function is not entered - if it's entered, its
parameter is bound to a value (which doesn't include bottom).

It's a matter of terminology and of simpler or more complex presentation.
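A minimal sketch of the lazy-language situation described above (names invented): bottom can be bound to an identifier and passed as an argument, and nothing happens until the value is actually demanded.

```haskell
bot :: Int
bot = error "bottom"

-- 'const' never demands its second argument, so passing bottom is fine.
seven :: Int
seven = const 7 bot

-- A strict language would evaluate the argument first and raise the
-- error before const was ever entered; here the error is never raised.
```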

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Joachim Durchholz
Subject: Re: More static type fun.
Date: 
Message-ID: <bnvb6b$oc2$3@news.oberberg.net>
Joe Marshall wrote:

> He also means no `statically detectable type exceptions'.  This sort
> of thing is `statically typable':
> 
>  foo (Bar x) = x
>  foo _       = error "No matching clause."
> 
> But it still throws a runtime exception if you pass it a non-bar
> object.  Static typers don't like to call these sorts of things type
> errors, though.

Of course not - "foo _" turns foo into a function of arbitrary input and 
output type.
Which is why you either put in a type signature (to limit what that 
underscore may stand for), or don't write that "error" clause (which is 
generally recommended anyway).

Regards,
Jo
From: ·············@comcast.net
Subject: Re: More static type fun.
Date: 
Message-ID: <ekwsn20t.fsf@comcast.net>
Joachim Durchholz <·················@web.de> writes:

> Joe Marshall wrote:
>
>> He also means no `statically detectable type exceptions'.  This sort
>> of thing is `statically typable':
>>  foo (Bar x) = x
>>  foo _       = error "No matching clause."
>> But it still throws a runtime exception if you pass it a non-bar
>> object.  Static typers don't like to call these sorts of things type
>> errors, though.
>
> Of course not - "foo _" turns foo into a function of arbitrary input
> and output type.

(defun safe-divide (x y)
  (if (zerop y)
      (error "You cannot divide by zero.")
      (/ x y)))

This function never throws a divide by zero error.
From: Joachim Durchholz
Subject: Re: More static type fun.
Date: 
Message-ID: <bo094e$6d0$2@news.oberberg.net>
·············@comcast.net wrote:

> Joachim Durchholz <·················@web.de> writes:
> 
> 
>>Joe Marshall wrote:
>>
>>
>>>He also means no `statically detectable type exceptions'.  This sort
>>>of thing is `statically typable':
>>> foo (Bar x) = x
>>> foo _       = error "No matching clause."
>>>But it still throws a runtime exception if you pass it a non-bar
>>>object.  Static typers don't like to call these sorts of things type
>>>errors, though.
>>
>>Of course not - "foo _" turns foo into a function of arbitrary input
>>and output type.
> 
> 
> (defun safe-divide (x y)
>   (if (zerop y)
>       (error "You cannot divide by zero.")
>       (/ x y)))
> 
> This function never throws a divide by zero error.

It replaces a divide-by-zero exception by an "error" exception - not 
much of a difference.

Well, at least that's the case in a functional language - the Haskell 
report says that "an error is indistinguishable from a nonterminating 
computation". Lisp's "error" may be something entirely different.

Regards,
Jo
From: ·············@comcast.net
Subject: Re: More static type fun.
Date: 
Message-ID: <u15op8g4.fsf@comcast.net>
Joachim Durchholz <·················@web.de> writes:

> ·············@comcast.net wrote:
>
>> Joachim Durchholz <·················@web.de> writes:
>>
>>>Joe Marshall wrote:
>>>
>>>
>>>>He also means no `statically detectable type exceptions'.  This sort
>>>>of thing is `statically typable':
>>>> foo (Bar x) = x
>>>> foo _       = error "No matching clause."
>>>>But it still throws a runtime exception if you pass it a non-bar
>>>>object.  Static typers don't like to call these sorts of things type
>>>>errors, though.
>>>
>>>Of course not - "foo _" turns foo into a function of arbitrary input
>>>and output type.
>> (defun safe-divide (x y)
>>   (if (zerop y)
>>       (error "You cannot divide by zero.")
>>       (/ x y)))
>> This function never throws a divide by zero error.
>
> It replaces a divide-by-zero exception by an "error" exception - not
> much of a difference.

It's exactly the same difference as between this:

>>>> foo (Bar x) = x
>>>> foo _       = error "No matching clause."

and a runtime type error.
From: Joachim Durchholz
Subject: Re: More static type fun.
Date: 
Message-ID: <bo0dor$8gv$1@news.oberberg.net>
·············@comcast.net wrote:

>>>(defun safe-divide (x y)
>>>  (if (zerop y)
>>>      (error "You cannot divide by zero.")
>>>      (/ x y)))
>>>This function never throws a divide by zero error.
>>
>>It replaces a divide-by-zero exception by an "error" exception - not
>>much of a difference.
> 
> 
> It's exactly the same difference as between this:
> 
>>>>>foo (Bar x) = x
>>>>>foo _       = error "No matching clause."
> 
> and a runtime type error.

Which is why you don't write "foo _" very often - exactly to avoid 
run-time type errors.

I don't know where the disagreement lies here. Or if we disagree at all.

Regards,
Jo
From: ·············@comcast.net
Subject: Re: More static type fun.
Date: 
Message-ID: <7k2kp04c.fsf@comcast.net>
Joachim Durchholz <·················@web.de> writes:

> ·············@comcast.net wrote:
>
>>>>(defun safe-divide (x y)
>>>>  (if (zerop y)
>>>>      (error "You cannot divide by zero.")
>>>>      (/ x y)))
>>>>This function never throws a divide by zero error.
>>>
>>>It replaces a divide-by-zero exception by an "error" exception - not
>>>much of a difference.
>> It's exactly the same difference as between this:
>>
>>>>>>foo (Bar x) = x
>>>>>>foo _       = error "No matching clause."
>> and a runtime type error.
>
> Which is why you don't write "foo _" very often - exactly to avoid
> run-time type errors.
>
> I don't know where there's the disagreement here. Or if we disagree at all.

I was only pointing out that you can indeed get a runtime type error
in a statically typed language.  I *often* see claims to the contrary,
and I'm usually told that 

    foo (Bar x) = x
    foo _       = error "No matching clause."

isn't a `type error', it's a `union discrimination error' (or
`match failure' or whatever).

But I don't think we disagree that this will cause an error at runtime
if I invoke foo on a non-bar object.
From: ··········@ii.uib.no
Subject: Re: More static type fun.
Date: 
Message-ID: <egllqyv7ve.fsf@vipe.ii.uib.no>
·············@comcast.net writes:

> I was only pointing out that you can indeed get a runtime type error
> in a statically typed language.  I *often* see claims to the contrary,
> and I'm usually told that 
> 
>     foo (Bar x) = x
>     foo _       = error "No matching clause."
> 
> isn't a `type error', it's a `union discrimination error' (or
> `match failure' or whatever).

Right, and I would agree.  Just like divide by zero isn't a type
error.  It's just a value (of the correct type) for which the function
happens to be undefined. 

I guess what one considers a type error becomes colored by the
facilities offered by one's tools?

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
From: Tomasz Zielonka
Subject: Re: More static type fun.
Date: 
Message-ID: <slrnbq6tjq.u9t.t.zielonka@zodiac.mimuw.edu.pl>
Joachim Durchholz wrote:
> Joe Marshall wrote:
> 
>> He also means no `statically detectable type exceptions'.  This sort
>> of thing is `statically typable':
>> 
>>  foo (Bar x) = x
>>  foo _       = error "No matching clause."
>> 
>> But it still throws a runtime exception if you pass it a non-bar
>> object.  Static typers don't like to call these sorts of things type
>> errors, though.
> 
> Of course not - "foo _" turns foo into a function of arbitrary input and 
> output type.

Not in Haskell. It will have a type corresponding to the datatype having
a Bar constructor. For example

  data Foo a = Bar a | Zzz		-->	    Foo a -> a
  data Foo a b = Bar a | Zzz b		-->	    Foo a b -> a
  data Foo a = Bar (a, (Foo a)) | Nil	-->	    Foo a -> (a, Foo a)
  data Foo a = Bar a			-->	    Foo a -> a
  data Foo = Bar Int			-->	    Foo -> Int

In the 4th and 5th cases foo will even be a total function.

Best regards,
Tom

-- 
.signature: Too many levels of symbolic links
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <b1qc71-td2.ln1@ID-7776.user.dfncis.de>
Joachim Durchholz <·················@web.de> wrote:

>>  foo (Bar x) = x
>>  foo _       = error "No matching clause."

> Of course not - "foo _" turns foo into a function of arbitrary input and 
> output type.

Sorry Joachim, but in this context, that is nonsense, and it will
confuse the Lispers now completely. (It's approximately right if the
second line is the only line of the function definition, but this is
clearly not the case in the above example.)

- Dirk
From: Joachim Durchholz
Subject: Re: More static type fun.
Date: 
Message-ID: <bo0e32$8iu$1@news.oberberg.net>
Dirk Thierbach wrote:

> Joachim Durchholz <·················@web.de> wrote:
> 
>>> foo (Bar x) = x
>>> foo _       = error "No matching clause."
> 
> 
>>Of course not - "foo _" turns foo into a function of arbitrary input and 
>>output type.
> 
> Sorry Joachim, but in this context, that is nonsense, and it will
> confuse the Lispers now completely. (It's approximately right if the
> second line is the only of the function definition, but this is
> clearly not the case in the above example).

You're right.
Which amazes me - _ matches anything, so I would have expected that a
"foo _ = ..." definition would make foo accept all values, including 
those of arbitrary types.
But you never stop learning :-)

Regards,
Jo
From: Artie Gold
Subject: Re: More static type fun.
Date: 
Message-ID: <3FA483F5.5010102@austin.rr.com>
Joachim Durchholz wrote:
> Dirk Thierbach wrote:
> 
>> Joachim Durchholz <·················@web.de> wrote:
>>
>>>> foo (Bar x) = x
>>>> foo _       = error "No matching clause."
>>>
>>
>>
>>> Of course not - "foo _" turns foo into a function of arbitrary input 
>>> and output type.
>>
>>
>> Sorry Joachim, but in this context, that is nonsense, and it will
>> confuse the Lispers now completely. (It's approximately right if the
>> second line is the only of the function definition, but this is
>> clearly not the case in the above example).
> 
> 
> You're right.
> Which amazes me - _ matches anything, I would have expected that a
> "foo _ = ..." definition would make foo accept all values, including 
> those of arbitrary types.
> But you never stop learning :-)
> 
Just to clarify, the following function definition would be totally 
equivalent:

foo (Bar x) = x
foo x       = error "no matching clause"

The important point (expressed elsethread) is that it's a *value* 
error, not a *type* error.

Cheers,
--ag

-- 
Artie Gold -- Austin, Texas
Oh, for the good old days of regular old SPAM.
From: ·············@comcast.net
Subject: Re: More static type fun.
Date: 
Message-ID: <d6cblzza.fsf@comcast.net>
Artie Gold <·········@austin.rr.com> writes:

>>> Joachim Durchholz <·················@web.de> wrote:
>>>
>>>>> foo (Bar x) = x
>>>>> foo _       = error "No matching clause."
>>
> Just to clarify, the following function definition would be totally
> equivalent:
>
> foo (Bar x) = x
> foo x       = error "no matching clause"
>
> The important point (expessed elsethreaed) is that it's a *value*
> error, not a *type* error.

Why do you think this is different?
Why do you think this is important?
From: Darius
Subject: Re: More static type fun.
Date: 
Message-ID: <20031102193212.00000de6.ddarius@hotpop.com>
On Sun, 02 Nov 2003 12:24:58 GMT
·············@comcast.net wrote:

> Artie Gold <·········@austin.rr.com> writes:
> 
> >>> Joachim Durchholz <·················@web.de> wrote:
> >>>
> >>>>> foo (Bar x) = x
> >>>>> foo _       = error "No matching clause."
> >>
> > Just to clarify, the following function definition would be totally
> > equivalent:
> 
> > foo (Bar x) = x
> > foo x       = error "no matching clause"
> 
> > The important point (expessed elsethreaed) is that it's a *value*
> > error, not a *type* error.
> 
> Why do you think this is different?

Do you think of head of an empty list, divide by zero, or a failed
assertion as a type error?

> Why do you think this is important?

Disallowing sum-types or an equivalent would require writing your
program in continuation passing style.  If the compiler did that for
you, you'd then have sum-types.

However, you are right. You -can- think of these things as type errors
and therefore in some cases you can use the static type system to
ensure chunks of your code maintain invariants.  You will still need a
runtime check for input, but once the input passes that check you can be
sure your code will never break that invariant.
From: ·············@comcast.net
Subject: Re: More static type fun.
Date: 
Message-ID: <he1lz6ti.fsf@comcast.net>
Darius <·······@hotpop.com> writes:

> On Sun, 02 Nov 2003 12:24:58 GMT
> ·············@comcast.net wrote:
>
>> Artie Gold <·········@austin.rr.com> writes:
>> 
>> >>> Joachim Durchholz <·················@web.de> wrote:
>> >>>
>> >>>>> foo (Bar x) = x
>> >>>>> foo _       = error "No matching clause."
>> >>
>> > Just to clarify, the following function definition would be totally
>> > equivalent:
>> 
>> > foo (Bar x) = x
>> > foo x       = error "no matching clause"
>> 
>> > The important point (expessed elsethreaed) is that it's a *value*
>> > error, not a *type* error.
>> 
>> Why do you think this is different?
>
> Do you think of head of an empty list, divide by zero, or a failed
> assertion as a type error?

Quite possibly.  Are you asserting that whether something is
a type error or not is simply a matter of opinion?

If that's the case, then whether static typing guarantees that
no type errors will occur is only a matter of opinion. 
From: Darius
Subject: Re: More static type fun.
Date: 
Message-ID: <20031104005614.00005240.ddarius@hotpop.com>
On Mon, 03 Nov 2003 17:43:23 GMT
·············@comcast.net wrote:

> Darius <·······@hotpop.com> writes:
> 
> > On Sun, 02 Nov 2003 12:24:58 GMT
> > ·············@comcast.net wrote:
> 
> >> Artie Gold <·········@austin.rr.com> writes:
> >> 
> >> >>> Joachim Durchholz <·················@web.de> wrote:
> >> >>>
> >> >>>>> foo (Bar x) = x
> >> >>>>> foo _       = error "No matching clause."
> >> >>
> >> > Just to clarify, the following function definition would be
> >> > totally equivalent:
> >> 
> >> > foo (Bar x) = x
> >> > foo x       = error "no matching clause"
> >> 
> >> > The important point (expessed elsethreaed) is that it's a *value*
> >> > error, not a *type* error.
> >> 
> >> Why do you think this is different?
> 
> > Do you think of head of an empty list, divide by zero, or a failed
> > assertion as a type error?
> 
> Quite possibly.  Are you asserting that whether something is
> a type error or not is simply a matter of opinion?

I didn't realize one could assert with an interrogative.

> If that's the case, then whether static typing guarantees that
> no type errors will occur is only a matter of opinion. 

No. What's a matter of opinion is what is a 'type', but it is not a
matter of opinion to the compiler; besides some builtin types, the
programmer tells the compiler what is and isn't a type*.  Is it a
failing of Haskell's static type system if I get a runtime type error
when using Dynamics?  I can think of dividedBy as having type Int ->
NonZeroInt -> Int, but if I give it the type Int -> Int -> Int, is it
the compiler's fault when I get a divide-by-zero exception?
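A hedged sketch of the Int -> NonZeroInt -> Int idea (the names nonZero and dividedBy are hypothetical, not from any library): do the one runtime check at the boundary, and the invariant then holds everywhere downstream.

```haskell
-- The constructor would be hidden by the module's export list, so the
-- only way to build a NonZeroInt is through the checked 'nonZero'.
newtype NonZeroInt = NonZeroInt Int

nonZero :: Int -> Maybe NonZeroInt      -- the single runtime check
nonZero 0 = Nothing
nonZero n = Just (NonZeroInt n)

dividedBy :: Int -> NonZeroInt -> Int   -- no zero test needed in here
dividedBy x (NonZeroInt y) = x `div` y
```

Code that receives a NonZeroInt can divide freely; only the code that constructs one has to consider the zero case.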

As Matthias and others have said, static type systems allow you to
express what is a compiler-checked type and by implication what isn't*. 
Sure enough, with some generally accepted extensions to Haskell**, you
can be anywhere along the gamut between completely dynamic type checking
to static types so anal they constitute an almost complete machine
checkable specification.

* Well, you can always blur distinctions, you can't always add them.

** The extensions aren't necessary for dynamic typing, though they do
make it more convenient.  This is a corollary to *.
From: Jesse Tov
Subject: Re: More static type fun.
Date: 
Message-ID: <slrnbqbfdr.ea6.tov@tov.student.harvard.edu>
·············@comcast.net <·············@comcast.net>:
> Artie Gold <·········@austin.rr.com> writes:
>> foo (Bar x) = x
>> foo x       = error "no matching clause"
>>
>> The important point (expessed elsethreaed) is that it's a *value*
>> error, not a *type* error.
> 
> Why do you think this is different?
> Why do you think this is important?

You're right.  But it's easy to fix:
 - Don't call error.
 - Don't use inexhaustive patterns.
 - Don't use unsafe functions such as "tail".

In practice, I use error in cases where I'd use assert in C; that is,
where I'm certain that it's unreachable unless there's a significant bug
in my code.  I can't think of when I've ever used an inexhaustive
pattern.  I consider functions such as "tail" to impose a proof
obligation--in this case, that it can never be called on an empty list.

So is  tail []  a type error?  It can be if you want it to be, but I
don't.  I don't think you do either.
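One way to discharge that proof obligation once and for all is a total variant of tail; this is a common idiom rather than anything proposed in the thread.

```haskell
-- The empty-list case moves into the result type instead of being a
-- runtime error, so the patterns are exhaustive and no call can fail.
safeTail :: [a] -> Maybe [a]
safeTail []     = Nothing
safeTail (_:xs) = Just xs
```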

Jesse
-- 
"A hungry man is not a free man."      --Adlai Stevenson
From: Dirk Thierbach
Subject: Re: More static type fun.
Date: 
Message-ID: <n47a71-n44.ln1@ID-7776.user.dfncis.de>
···@cs.rutgers.edu wrote:
> Dirk Thierbach <··········@gmx.de> wrote in message news:<··············@ID-7776.user.dfncis.de>...
>> [with Haskell etc. ]
>> the advantage is that it is possible to be 100% percent sure that
>> there won't be any exceptions. 

> You mean no _type_ exceptions 

Yes. Including any invariants that are encoded as type exceptions by
the programmer.

> in general you can't avoid divide-by-zero and similar things with
> only compile-time checks.  I'm pretty sure "no exceptions at all"
> reduces to the Turing Machine halting problem.

Yes.

- Dirk
From: Brian Downing
Subject: Re: More static type fun.
Date: 
Message-ID: <QHDnb.51862$Fm2.26754@attbi_s04>
In article <··············@ID-7776.user.dfncis.de>,
Dirk Thierbach  <··········@gmx.de> wrote:
> > The other issue I see is that this doesn't generalize.  Suppose that
> [you define a function to monitor other functions]
>
> You're right, you cannot do that. The reason is that it won't work in
> compiled code. The information how many arguments a function takes and
> how the bit patterns are to be interpreted is not available at
> run-time (in Lisp, it is). So you cannot expect to write a function
> that "reaches into" a compiled function, extracts this information,
> and works with it.
>
> All one can do with a function at runtime is apply arguments to it and
> execute it.  [...]
[...]
> So part of the issue is that you don't have an interpreted language,
> but a compiled one, where the static type information is used only at
> compile time, and in consequence the number of necessary automatic
> dynamic checks at runtime is a lot less. But then of course this
> information is gone, and you cannot expect to recover it.

In the interests of fairness, that you cannot do that in your compiled
code is strictly a limitation of your compiler.  Lisp implementations
let you do all this and be compiled at the same time.  They have no
problem "reaching into" compiled functions and pulling out all sorts of
useful information.  It sounds like you are making an argument that
generality, introspection, and malleability are things you need to
give up to use compiled code instead of interpreted.  With Lisp
compilers that is simply not the case.

-bcd
-- 
*** Brian Downing <bdowning at lavos dot net> 
From: Stephen J. Bevan
Subject: Re: More static type fun.
Date: 
Message-ID: <m3u15sk8ls.fsf@dino.dnsalias.com>
Brian Downing <·············@lavos.net> writes:
> In the interests of fairness, that you cannot do that in your compiled
> code is strictly a limitation of your compiler.  Lisp implementations
> let you do all this and be compiled at the same time.  They have no
> problem "reaching into" compiled functions and pulling out all sorts of
> useful information.  It's sounds like you are making an argument that
> generality, introspection, and malleability is something you need to
> give up to use compiled code instead of interpreted.  With Lisp
> compilers that is simply not the case.

I agree it is a limitation of the compiler.  IIRC in Harlequin MLWorks
it was possible to write code in SML that could be wrapped around
other functions to provide tracing, etc.  This was an implementation
detail rather than a feature of SML.  The fact that more
implementations don't do it probably has more to do with the (lack of)
utility that the implementers see in it.  For example although I've
tried it out in SML, Common Lisp, Scheme and Prolog, I don't actively
use it in any of them.  As I noted elsewhere I'm low tech and instead
just resort to the equivalent of "printf" instead.  YMMV.
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <ekwv7um4.fsf@ccs.neu.edu>
Dirk Thierbach <··········@gmx.de> writes:

> ·············@comcast.net wrote:
>> Dirk Thierbach <··········@gmx.de> writes:
>
> [...]
>> Er, more or less.  The program won't `crash', but it does enter the
>> error handler (unless I catch it otherwise).
>
> It will throw an exception.  One point of a static type system is
> that you can write programs where you can be sure that they will never
> throw exceptions, and verify this at compile time. 

(ignore-errors ..arbitrary code...)

>> Here's where I have a problem.  MAPCAR2 and MAPCAR are very different
>> functions here, but in mine they were identical.  What if I do this:
>
>> (defun mixmaster (f1 f2 list)
>>  (apply f1 f1 (list f2 f2) list))
>
>> (defun bogus-transpose (x)
>>  (mixmaster #'mapcar #'list x))
>
> (apply f1 f1) is bogus again, it requires a recursive type (the
> function is applied to itself). 

It must be recursive and polytype.

> The easiest thing to do here is to
> admit that the both functions do different things, (after all, they work
> on different levels of list nesting), so you use two arguments.

Um, both functions don't do different things.  They are the *same* function.
There is *no* mechanism by which you could distinguish them because they
are the same thing.
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <SAgmb.19081$Fm2.9524@attbi_s04>
"Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
>
> >>>For one thing, type declarations *cannot* become out-of-date (as
> >>>comments can and often do) because a discrepancy between type
> >>>declaration and definition will be immidiately flagged by the compiler.
> >>
> >>They same holds for assertions as soon as they are run by the test suite.
> >
> > That is not true unless your test suite is bit-wise exhaustive.
>
> Assertions cannot become out-of-date. If an assertion doesn't hold
> anymore, it will be flagged by the test suite.

This is only correct if all assertions receive coverage from the
test suite, which requires significant discipline, manual test-case
writing, and recurring manual verification; and even then,
it is only true at runtime. And assertions only verify what
you manually specify.

The benefits of a type system require significantly less manual
work, and find errors at compile time. Also, they verify that
type errors provably do not occur anywhere in the program,
vs. just where you manually specify.
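For illustration, a tiny Python sketch of the coverage point (the function
and its tests are invented for this example): an assertion is only checked
on the inputs the suite happens to run.

```python
# The assertion guards against x == 0, but it is only ever evaluated
# on the inputs the test suite actually feeds in.

def reciprocal(x):
    assert x != 0, "x must be non-zero"
    return 1.0 / x

# A "green" suite that never exercises the bad input:
assert reciprocal(2) == 0.5
assert reciprocal(-4) == -0.25
# reciprocal(0) would trip the assertion, but no test calls it, so the
# suite stays green; a compile-time check would not depend on which
# inputs the tests happened to choose.
```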


Marshall
From: Don Geddis
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87r8125wl2.fsf@sidious.geddis.org>
····@cs.mu.oz.au (Ralph Becket) writes:
> Let me put it like this.  Say I have a statically, expressively, strongly 
> typed language L.  And I have another language L' that is identical to
> L except it lacks the type system.  Now, any program in L that has the
> type declarations removed is also a program in L'.  The difference is
> that a program P rejected by the compiler for L can be converted to a
> program P' in L' which *may even appear to run fine for most cases*.  
> However, and this is the really important point, P' is *still* a 
> *broken* program.  Simply ignoring the type problems does not make 
> them go away: P' still contains all the bugs that program P did.

Two points:

1. There is a difference between having provable (type) bugs, and being able
   to prove a program bug-free.  Most "static type" languages refuse to compile
   a program unless they can prove it free of type bugs.  Since proof
   procedures are (often) undecidable, the lack of such a proof does not
   necessarily imply the existence of bugs.

2. Even a program that has bugs for some code paths/sets of inputs, is not
   necessarily useless in all the other cases.  With a rapid prototyping
   approach especially, it may be more valuable to run the common cases
   quickly (and perhaps catch an abstract design/specification error), than
   worry about eliminating type bugs in the corner cases first.

> Why not make the argument more concrete?  Present a problem 
> specification for an every-day programming task that you think 
> seriously benefits from dynamic typing.  Then we can discuss the 
> pros and cons of different approaches.

In Lisp:
        (setq input (read))
        (print (+ input 1))
The "(read)" function returns an object of arbitrary type.  The most a
compiler (or even a person) could infer about the "input" variable is that
it will contain an object of the most generic type ("t" in Lisp).  However,
the "+" function only accepts numbers.  Numbers are a subset of objects.

Will this code result in a type error when run?  It might, it might not.
That depends on the input that the user types.  If the user happens to enter
an integer, the code will run just fine.  If the user enters a string, there
will be a run-time type error (in Lisp) at the evaluation of the "+" function.

Should this code be run or not?  Fans of static type systems suggest refusing
to compile it, until the type subsetting is conscious and checked, perhaps
like this:
        (setq input (read))
        (check-type input number)
        (print (+ input 1))
Presto!  No more compiler type error.

But what if the real problem with the program is that the input should have
been multiplied by two instead of incremented by one?  A rapid-prototyping
programmer would much rather run the first program, type in "3", notice that
"4" gets printed instead of "6", realize that there's a logic bug, and generate
this next program instead:
        (setq input (read))
        (print (* input 2))
This one still has the type error, but not the logic error.

I agree that by the time you get to production code, you'd like to eliminate
all errors.  And static type inference can help find some.  But why insist on
refusing to compile a program unless it is provably free of type bugs?  That
forces a programmer to work on the program in an artificial order, instead of
working on the most important/efficient piece next.

Surely it's better to let the programmer make the decision where to concentrate
his effort.  This can be done if static type inference merely results in
warnings, rather than compiler-stopping errors.  (Check out the Common Lisp
implementation CMUCL for an example of such an approach.)

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
If I were meta-agnostic, I'd be confused over whether I'm agnostic or not---but
I'm not quite sure if I feel that way; hence I must be meta-meta-agnostic (I
guess).  -- Douglas R. Hofstadter, _Godel, Escher, Bach_
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <1eLlb.2151$ao4.8213@attbi_s51>
"Pascal Costanza" <········@web.de> wrote in message ·················@newsreader2.netcologne.de...
>
> I wouldn't count the use of java.lang.Object as a case of dynamic
> typing. You need to explicitly cast objects of this type to some class
> in order to make useful method calls. You only do this to satisfy the
> static type system. (BTW, this is one of the sources for potential bugs
> that you don't have in a decent dynamically typed language.)

Huh? The explicit-downcast construct present in Java is the
programmer saying to the compiler: "trust me; you can accept
this type of parameter." In a dynamically-typed language, *every*
call is like this! So if this is a source of errors (which I believe it
is) then dynamically-typed languages have this potential source
of errors with every function call, vs. statically-typed languages
which have them only in those few cases where the programmer
explicitly puts them in.


Marshall
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8550$cpm$1@newsreader2.netcologne.de>
Marshall Spight wrote:

> "Pascal Costanza" <········@web.de> wrote in message ·················@newsreader2.netcologne.de...
> 
>>I wouldn't count the use of java.lang.Object as a case of dynamic
>>typing. You need to explicitly cast objects of this type to some class
>>in order to make useful method calls. You only do this to satisfy the
>>static type system. (BTW, this is one of the sources for potential bugs
>>that you don't have in a decent dynamically typed language.)
> 
> 
> Huh? The explicit-downcast construct present in Java is the
> programmer saying to the compiler: "trust me; you can accept
> this type of parameter." In a dynamically-typed language, *every*
> call is like this! So if this is a source of errors (which I believe it
> is) then dynamically-typed languages have this potential source
> of errors with every function call, vs. statically-typed languages
> which have them only in those few cases where the programmer
> explicitly puts them in.

What can happen in Java is the following:

- You might accidentally use the wrong class in a class cast.
- For the method you try to call, there happens to be a method with the 
same name and signature in that class.

In this situation, the static type system would be happy, but the code 
is buggy.

In a decent dynamically typed language, you have proper name space 
management, so that a method cannot ever be defined for a class only by 
accident.

(Indeed, Java uses types for many different unrelated things - in this 
case as a very weak name space mechanism.)


Pascal
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <F0gmb.19376$e01.36419@attbi_s02>
"Pascal Costanza" <········@web.de> wrote in message ·················@newsreader2.netcologne.de...
> Marshall Spight wrote:
>
> > "Pascal Costanza" <········@web.de> wrote in message ·················@newsreader2.netcologne.de...
> >
> >>I wouldn't count the use of java.lang.Object as a case of dynamic
> >>typing. You need to explicitly cast objects of this type to some class
> >>in order to make useful method calls. You only do this to satisfy the
> >>static type system. (BTW, this is one of the sources for potential bugs
> >>that you don't have in a decent dynamically typed language.)
> >
> > Huh? The explicit-downcast construct present in Java is the
> > programmer saying to the compiler: "trust me; you can accept
> > this type of parameter." In a dynamically-typed language, *every*
> > call is like this! So if this is a source of errors (which I believe it
> > is) then dynamically-typed languages have this potential source
> > of errors with every function call, vs. statically-typed languages
> > which have them only in those few cases where the programmer
> > explicitly puts them in.
>
> What can happen in Java is the following:
>
> - You might accidentally use the wrong class in a class cast.
> - For the method you try to call, there happens to be a method with the
> same name and signature in that class.
>
> In this situation, the static type system would be happy, but the code
> is buggy.

How is this bug any different from the programmer typing the
wrong name of the method he wants to call? This doesn't demonstrate
anything that I can see.

Here's a logically identical argument:

In a typed language, a programmer might type "a-b" when he meant
to type "a+b". The type system would be happy, but the code will
be buggy.

Well, yes, that's true.

My claim is: explicit downcasting is a technique, manually specified
by the programmer, that weakens the guarantees the compiler makes
to be exactly as weak as those guarantees made by a dynamically
typed language.

So I can see a valid complaint about the extra typing needed, but
I see no validity to the claim that this makes a statically typed
language any more bug-prone than a dynamically typed language.
Indeed, it gives the statically-typed languages *exactly the same*
degree of bug-proneness as a dynamically typed language for the
scope of a single function call, after which the language returns
to being strikingly less prone to that specific class of bug. (In fact,
completely immune.)
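A minimal Python sketch of the claim (names invented for illustration):
every call site implicitly "trusts" its argument the way a Java downcast
does, and a misplaced trust only surfaces at run time.

```python
def shout(s):
    # Implicitly trusts that s supports .upper(), much as a Java
    # downcast trusts the programmer's claimed type.
    return s.upper() + "!"

assert shout("hi") == "HI!"   # the trust was justified here

try:
    shout(42)                 # ...and misplaced here:
except AttributeError:
    pass                      # the failure appears only at run time
```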


> In a decent dynamically typed language, you have proper name space
> management, so that a method cannot ever be defined for a class only by
> accident.

How can a method be defined "by accident?" I can't figure out what
you're trying to say.


Marshall
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnc3cj$pv0$1@f1node01.rhrz.uni-bonn.de>
Marshall Spight wrote:

>>In a decent dynamically typed language, you have proper name space
>>management, so that a method cannot ever be defined for a class only by
>>accident.
> 
> 
> How can a method be defined "by accident?" I can't figure out what
> you're trying to say.

class C {
   void m();
}

class D {
   void m();
}

...

void doSomething (Object o) {
   if (o instanceof C) {
     ((D)o).m();
   }
}

"Oops, by accident method m is also defined in D, although I wanted to 
call method m in C."

Doesn't happen in languages with proper name space management. (The 
problem is that Java gives you only the illusion of well-behaved 
namespaces.)

Pascal

P.S., before anyone repeats the same issue again: Yes, Java has a badly 
designed static type system. The example was not a very good one in the 
first place. That doesn't matter wrt my essential message, though.

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <PWhmb.19483$Fm2.9896@attbi_s04>
"Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
>
> class C {
>    void m();
> }
>
> class D {
>    void m();
> }
>
> ...
>
> void doSomething (Object o) {
>    if (o instanceof C) {
>      ((D)o).m();
>    }
> }
>
> "Oops, by accident method m is also defined in D, although I wanted to
> call method m in C."
>
> Doesn't happen in languages with proper name space management. (The
> problem is that Java gives you only the illusion of well-behaved
> namespaces.)

The above code in Java would fail at runtime. What do you think it
ought to do? What would it do in Python? How is this superior to
what Java does? Do you consider this a real-world example?

Does the fact that you didn't respond to the other items
in my post mean you are no longer holding the position that
"explicitly cast[ing] objects" "is one of the sources for potential bugs
that you don't have in a decent dynamically typed language."


Marshall
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bncp01$au7$1@newsreader2.netcologne.de>
Marshall Spight wrote:
> "Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
> 
>>class C {
>>   void m();
>>}
>>
>>class D {
>>   void m();
>>}
>>
>>...
>>
>>void doSomething (Object o) {
>>   if (o instanceof C) {
>>     ((D)o).m();
>>   }
>>}
>>
>>"Oops, by accident method m is also defined in D, although I wanted to
>>call method m in C."
>>
>>Doesn't happen in languages with proper name space management. (The
>>problem is that Java gives you only the illusion of well-behaved
>>namespaces.)
> 
> 
> The above code in Java would fail at runtime. What do you think it
> ought to do? What would it do in Python? How is this superior to
> what Java does? Do you consider this a real-world example?

You should be able to choose unique names in the first place. The 
problem Java has here is that there is no safe way to guarantee that in 
general: there is always a risk of name clashes.

There is some information about one way to properly deal with namespaces 
at 
http://www.cs.northwestern.edu/academics/courses/325/readings/packages.html

There are also other approaches. For example, there exist several module 
systems for Scheme. (I don't know a lot about them, though.)

I don't know how Python handles potential name conflicts.
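For what it's worth, a purely illustrative Python sketch of the situation
above (assuming ordinary Python semantics; class names match the Java
example): there is no cast, so a call site cannot redirect a C instance
to D's method.

```python
class C:
    def m(self):
        return "C.m"

class D:
    def m(self):
        return "D.m"

def do_something(o):
    if isinstance(o, C):
        return o.m()   # the name m resolves on o's actual class: C.m

assert do_something(C()) == "C.m"

# Module-level names are likewise qualified per module (import c; c.m()
# vs. import d; d.m()), which is roughly the package idea; though duck
# typing still accepts any object that happens to have an attribute m.
```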

> Does the fact that you didn't respond to the other items
> in my post mean you are no longer holding the position that
> "explicitly cast[ing] objects" "is one of the sources for potential bugs
> that you don't have in a decent dynamically typed language."

No.

Pascal
From: Marshall Spight
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <0Mqmb.23163$Tr4.48699@attbi_s03>
"Pascal Costanza" <········@web.de> wrote in message ·················@newsreader2.netcologne.de...
> Marshall Spight wrote:
> >
> > The above code in Java would fail at runtime. What do you think it
> > ought to do? What would it do in Python? How is this superior to
> > what Java does? Do you consider this a real-world example?
>
> You should be able to choose unique names in the first place. The
> problem Java has here is that there is no safe way to avoid that in
> general. There is a risk of name clashes here.

Grrr. You didn't answer any of my questions. (Except the one
about Python, for which you said you didn't know.) Or maybe
your response is meant to be an answer to one or more of
them, only I can't tell which or how.

I'm having a hard time following you. Also, you seem to be
shifting what you're talking about quite frequently, which
makes me suspicious.

At this stage I've entirely lost track of the thread. Ah, well.


> > Does the fact that you didn't respond to the other items
> > in my post mean you are no longer holding the position that ...
>
> No.

I didn't think so. :-)


Marshall
From: Nikodemus Siivola
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bndp6e$fbd$2@nyytiset.pp.htv.fi>
In comp.lang.lisp Marshall Spight <·······@dnai.com> wrote:

> I'm having a hard time following you. 

I have a guess about this. You seem -- by your references to Python --
to be implicitly assuming that Pascal, Joe &co are setting up Python
as the lightbearer of dynamism.

I find this rather unlikely, as most -- if not all -- of the
pro-dynamic side of this argument hail from Lisp, not Python.

Just a datapoint, since there seemed to be an unstated
misunderstanding there.

Cheers,

 -- Nikodemus
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9d4f4e$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>Fergus Henderson wrote:
>
>> Most modern "statically typed" languages (e.g. Mercury, Glasgow Haskell,
>> OCaml, C++, Java, C#, etc.) aren't *strictly* statically typed anyway. 
>> They generally have some support for *optional* dynamic typing.
>> 
>> This is IMHO a good trade-off.  Most of the time, you want static typing;
>> it helps in the design process, with documentation, error checking, and
>> efficiency.
>
>+ Design process: There are clear indications that processes like 
>extreme programming work better than processes that require some kind of 
>specification stage.

This is certainly not clear to me, especially if you consider writing
type declarations to be "some kind of specification stage".

>Dynamic typing works better with XP than static 
>typing because with dynamic typing you can write unit tests without 
>having the need to immediately write appropriate target code.

That one seems to have been pretty thoroughly debunked by other responses
in this thread.  A static type system won't stop you writing unit tests.
And if you want to actually run the unit tests, then you are going to
need appropriate target code, regardless of whether the system is
statically or dynamically typed.

>+ Error checking: I can only guess what you mean by this.

I mean compile-time detection of type errors.
I'm just talking about ordinary everyday detection of typos, functions
called with the wrong number of arguments, arguments in the wrong order,
arguments of the wrong type -- that kind of thing.

>If you mean something like Java's checked exceptions,
>there are clear signs that this is a very bad feature.

No, I do not mean that.  I tend to agree that statically checked
exception specifications are not worth the complication and can be
positively harmful in some situations.

>+ Efficiency: As Paul Graham puts it, efficiency comes from profiling. 
>In order to achieve efficiency, you need to identify the bottle-necks of 
>your program. No amount of static checks can identify bottle-necks, you 
>have to actually run the program to determine them.

It's not enough to just identify the bottlenecks.  You have to make those
bottlenecks go fast!  That's a lot harder with a dynamically typed language,
because you pay a lot of overhead: greater memory usage, and hence worse
cache performance, due to less efficient representations of types in memory;
plus added overhead from all of those dynamic type checks.  Of course good
compilers for dynamic languages analyze the source to try to infer the types,
but since the language is not statically typed, such analysis will often fail.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnjjq2$qb8$1@f1node01.rhrz.uni-bonn.de>
Fergus Henderson wrote:

>>Dynamic typing works better with XP than static 
>>typing because with dynamic typing you can write unit tests without 
>>having the need to immediately write appropriate target code.
> 
> 
> That one seems to have been pretty thoroughly debunked by other responses
> in this thread.  A static type system won't stop you writing unit tests.
> And if you want to actually run the unit tests, then you are going to
> need appropriate target code, regardless of whether the system is
> statically or dynamically typed.

Not if I only want to check whether the first ten tests work, and don't 
care about the remaining ones.

>>+ Efficiency: As Paul Graham puts it, efficiency comes from profiling. 
>>In order to achieve efficiency, you need to identify the bottle-necks of 
>>your program. No amount of static checks can identify bottle-necks, you 
>>have to actually run the program to determine them.
> 
> 
> It's not enough to just identify the bottlenecks.  You have to make those
> bottlenecks go fast!  That's a lot harder with a dynamically typed language,
> because you pay a lot of overhead: greater memory usage, and hence worse
> cache performance, due to less efficient representations of types in memory;
> plus added overhead from all of those dynamic type checks.  Of course good
> compilers for dynamic languages analyze the source to try to infer the types,
> but since the language is not statically typed, such analysis will often fail.

Good dynamically typed languages provide very good options in this 
regard. However, other Lispers than me can probably provide much better 
comments on that.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Stephen J. Bevan
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3brs2klvc.fsf@dino.dnsalias.com>
Pascal Costanza <········@web.de> writes:
> Fergus Henderson wrote:
> 
> >> Dynamic typing works better with XP than static typing because with
> >> dynamic typing you can write unit tests without having the need to
> >> immediately write appropriate target code.
> > That one seems to have been pretty thoroughly debunked by other
> > responses
> > in this thread.  A static type system won't stop you writing unit tests.
> > And if you want to actually run the unit tests, then you are going to
> > need appropriate target code, regardless of whether the system is
> > statically or dynamically typed.
> 
> Not if I only want to check whether the first ten tests work, and
> don't care about the remaining ones.

Perhaps I'm just a low tech kind of guy but if I just want to run the
first ten then I comment out the rest.  Even without a fancy IDE that
only takes a few key presses.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnmps4$fqu$1@newsreader2.netcologne.de>
Stephen J. Bevan wrote:

> Pascal Costanza <········@web.de> writes:
> 
>>Fergus Henderson wrote:
>>
>>
>>>>Dynamic typing works better with XP than static typing because with
>>>>dynamic typing you can write unit tests without having the need to
>>>>immediately write appropriate target code.
>>>
>>>That one seems to have been pretty thoroughly debunked by other
>>>responses
>>>in this thread.  A static type system won't stop you writing unit tests.
>>>And if you want to actually run the unit tests, then you are going to
>>>need appropriate target code, regardless of whether the system is
>>>statically or dynamically typed.
>>
>>Not if I only want to check whether the first ten tests work, and
>>don't care about the remaining ones.
> 
> 
> Perhaps I'm just a low tech kind of guy but if I just want to run the
> first ten then I comment out the rest.  Even without a fancy IDE that
> only take a few key presses.

...and it requires you to go to all the places where they are defined.

Yes, I know the answer: "But they should be all in one place." No, they 
shouldn't need to be all in one place. For example, I might want to 
place test code close to the definitions that they test. Or I might want 
to organize them according to some other criteria.

No, it's not hard to find them all, then. I can use grep or my IDE to 
find them. But that's still more work than just commenting them out. If 
I seldom need to find all test cases, I can trade locality of all test 
cases for some other possible advantages.

Ah, another example: What if my test code is actually produced by some 
macro, or some other code generation facility?


Pascal
From: Stephen J. Bevan
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m33cdclpl7.fsf@dino.dnsalias.com>
Pascal Costanza <········@web.de> writes:
> > Perhaps I'm just a low tech kind of guy but if I just want to run the
> > first ten then I comment out the rest.  Even without a fancy IDE that
> > only take a few key presses.
> 
> ...and it requires you to go to all the places where they are defined.
> 
> Yes, I know the answer: "But they should be all in one place." No,
> they shouldn't need to be all in one place.

As I wrote, I'm a low tech guy, I put all the tests for a particular
feature in the same file.  If I only want to run some of the tests in
the file then I comment out those tests.  If I only want to run the
tests in some file rather than others then I comment out the names of
the files containing the tests I don't want to run.  I can see how
things can get more complicated if you use other approaches, which is
one of the reasons I don't use those approaches.  YMMV.


> Ah, another example: What if my test code is actually produced by some
> macro, or some other code generation facility?

Er, comment out either the definition of the macro and the calls to it,
or the code generation facility.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bno0rj$ftm$1@newsreader2.netcologne.de>
Stephen J. Bevan wrote:

> Pascal Costanza <········@web.de> writes:
> 
>>>Perhaps I'm just a low tech kind of guy but if I just want to run the
>>>first ten then I comment out the rest.  Even without a fancy IDE that
>>>only take a few key presses.
>>
>>...and it requires you to go to all the places where they are defined.
>>
>>Yes, I know the answer: "But they should be all in one place." No,
>>they shouldn't need to be all in one place.
> 
> 
> As I wrote, I'm a low tech guy, I put all the tests for a particular
> feature in the same file.  If I only want to run some of the tests in
> the file then I comment out those tests.  If I only want to run the
> tests in some file rather than others then I comment out the names of
> the files containing the tests I don't want to run.  I can see how
> things can get more complicated if you use other approaches, which is
> one of the reasons I don't use those approaches.  YMMV.
> 
> 
> 
>>Ah, another example: What if my test code is actually produced by some
>>macro, or some other code generation facility?
> 
> 
> Er, comment out either definition of the macro and the calls to it or
> the code generation facility.

These are both all-or-nothing solutions.

+ "all the tests for a particular feature in one place" - maybe that's 
not what I want (and you have ignored my arguments in this regard)

and:
+ what if I want to run _some_ of the tests that my macro produces but 
not _all_ of them?


Actually, that seems to be the typical reaction of static typing fans. 
This reminds me of a joke.

Imagine yourself back in the 1980's. A general of the former Soviet 
Union says: "We can travel anywhere we want." Question: "What about 
Great Britain, Italy, France, the US?" "We don't want to travel there."


Pascal
From: Stephen J. Bevan
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3ptgfjvbf.fsf@dino.dnsalias.com>
Pascal Costanza <········@web.de> writes:
> These are both all or nothing solutions.
> 
> + "all the tests for a particular feature in one place" - maybe that's
> not what I want (and you have ignored my arguments in this regard)
> 
> and:
> + what if I want to run _some_ of the tests that my macro produces but
> not _all_ of them?
> 
> 
> Actually, that seems to be the typical reaction of static typing
> fans.

The solutions may be all or nothing but IMHO they are simple and I
like simple things.  I can't really say the same for scenarios which
involve running only some tests generated by macros that may or may
not be in the same files as other tests generated from the same
macros.  Perhaps it all comes down to different approaches to the
programming process rather than languages per se, e.g. I don't do
either of the above even when writing Common Lisp.
From: Lex Spoon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m3d6cegvjz.fsf@logrus.dnsalias.net>
Pascal Costanza <········@web.de> writes:
> ...and it requires you to go to all the places where they are defined.
>
> Yes, I know the answer: "But they should be all in one place." No,
> they shouldn't need to be all in one place. For example, I might want
> to place test code close to the definitions that they test. Or I might
> want to organize them according to some other criteria.
>
> No, it's not hard to find them all, then. I can use grep or my IDE to
> find them. But that's still more work than just commenting them
> out. If I seldomly need to find all test cases, I can trade locality
> of all test cases for some other possible advantages.


With a good IDE, "distance" should be the same as "semantic nearness",
if that term makes sense.  In a good IDE, there already is an existing
way to browse all the tests, or it is easy to extend the IDE to allow
it.  So things are in the "same place" whenever they have a semantic
attribute that the tools can index on.  No matter how you lay out
tests, there is sure to be a way for a decent IDE to show you all the
tests.


-Lex
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bnsbk1$ns3$1@newsreader2.netcologne.de>
Lex Spoon wrote:

> Pascal Costanza <········@web.de> writes:
> 
>>...and it requires you to go to all the places where they are defined.
>>
>>Yes, I know the answer: "But they should be all in one place." No,
>>they shouldn't need to be all in one place. For example, I might want
>>to place test code close to the definitions that they test. Or I might
>>want to organize them according to some other criteria.
>>
>>No, it's not hard to find them all, then. I can use grep or my IDE to
>>find them. But that's still more work than just commenting them
>>out. If I seldomly need to find all test cases, I can trade locality
>>of all test cases for some other possible advantages.
> 
> 
> 
> With a good IDE, "distance" should be the same as "semantic nearness",
> if that term makes sense.  In a good IDE, there already is an existing
> way to browse all the tests, or it is easy to extend the IDE to allow
> it.  So things are in the "same place" whenever they have a semantic
> attribute that the tools can index on.  No matter how you layout
> tests, there is sure to be a way for a decent IDE to show you all the
> tests.

...and if the IDE is already that smart, why should it still require me 
to comment out code just to make some other part of the program run?

A programming language environment should make programming as convenient 
as possible, not convenient in some areas and arbitrarily less convenient 
in others.

(And if you think that static type checking makes programming more 
convenient, then yes, why not? Add that as an additional option! But 
make it possible to switch it on or off on demand!)
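(As it happens, optional declarations of exactly this kind can be sketched in today's Python: annotations that an offline checker such as mypy can verify, while the runtime ignores them completely, so the static checking really is switchable on demand. A sketch, not a proposal:)

```python
# Optional type declarations: a static checker (e.g. mypy) can verify
# the annotations offline, but the interpreter ignores them at runtime.
def double(x: int) -> int:
    return x + x

print(double(21))     # 42 -- passes the static check
print(double("ab"))   # "abab" -- runs fine dynamically; a checker would flag it
```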


Pascal
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn77ns$si3$1@news.oberberg.net>
Pascal Costanza wrote:
> ....because static type systems work by reducing the expressive power of 
> a language. It can't be any different for a strict static type system. 
> You can't solve the halting problem in a general-purpose language.

The final statement is correct, but you don't need to solve the halting 
problem: it's enough to allow the specification of some easy-to-prove 
properties, without hindering the programmer too much.

Most functional languages with a static type system don't require that 
the programmer writes down the types, they are inferred from usage. And 
the type checker will complain as soon as the usage of some data item is 
inconsistent.
IOW if you write
   a = b + "asdf"
the type checker will infer that both a and b are strings; however, if 
you continue with
   c = a + b + 3
it will report a type error because 3 and "asdf" don't have a common 
supertype with a "+" operation.
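For contrast, a dynamically typed language accepts the same program and only reports the inconsistency when the offending line actually executes. In Python, say (variable names chosen to mirror the example above):

```python
b = "foo"
a = b + "asdf"        # fine: both operands are strings
try:
    c = a + b + 3     # the same inconsistency, caught only at runtime
except TypeError:
    print("runtime type error")
```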

It's the best of both worlds: no fuss with type declarations (which is 
one of the less interesting things one spends time with) while getting 
good static checking.
(Nothing is as good in practice as it sounds in theory, and type 
inference is no exception. Interpreting type error messages requires 
some getting used to - just like interpreting syntax error messages is a 
bit of an art, leaving one confounded for a while until one "gets it".)

> (Now you could argue that current sophisticated type systems cover 90% 
> of all cases and that this is good enough, but then I would ask you for 
> empirical studies that back this claim. ;)

My 100% subjective private study reveals not a single complaint about 
over-restrictive type systems in comp.lang.functional in the last 12 months.

Regards,
Jo
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f972e54$1@news.unimelb.edu.au>
Joachim Durchholz <·················@web.de> writes:

>My 100% subjective private study reveals not a single complaint about 
>over-restrictive type systems in comp.lang.functional in the last 12 months.

While I tend to agree that such complaints are rare, such complaints also
tend to be language-specific, and thus get posted to language-specific
forums, e.g. the Haskell mailing list, the Clean mailing list, the OCaml
mailing list, etc., rather than to more general forums like
comp.lang.functional.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Lulu of the Lotus-Eaters
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <mailman.41.1066885873.702.python-list@python.org>
Joachim Durchholz <·················@web.de> writes:
>My 100% subjective private study reveals not a single complaint about
>over-restrictive type systems in comp.lang.functional in the last 12
>months.

I also read c.l.functional (albeit only lightly).  In the last 12
months, I have encountered dozens of complaints about over-restrictive
type systems in Haskell, OCaml, SML, etc.

The trick is that these complaints are not phrased in precisely that
way.  Rather, someone is trying to do some specific task, and has
difficulty arriving at a usable type needed in the task.  Often posters
provide good answers--Durchholz included.  But the underlying complaint
-really was- about the restrictiveness of the type system.

That's not even to say that the overall advantages of a strong type
system are not worthwhile--even perhaps better than more dynamic
languages.  But it's quite disingenuous to claim that no one ever
complains about it.  Obviously, no one who finds a strong static type
system unacceptable is going to be committed to using, e.g.
Haskell--the complaint doesn't take the form of "I'm taking my marbles
and going home".

Yours, Lulu...

--
Keeping medicines from the bloodstreams of the sick; food from the bellies
of the hungry; books from the hands of the uneducated; technology from the
underdeveloped; and putting advocates of freedom in prisons.  Intellectual
property is to the 21st century what the slave trade was to the 16th.
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9795dd$1@news.unimelb.edu.au>
Lulu of the Lotus-Eaters <·····@gnosis.cx> writes:

>Joachim Durchholz <·················@web.de> writes:
>>My 100% subjective private study reveals not a single complaint about
>>over-restrictive type systems in comp.lang.functional in the last 12
>>months.
>
>I also read c.l.functional (albeit only lightly).  In the last 12
>months, I have encountered dozens of complaints about over-restrictive
>type systems in Haskell, OCaml, SML, etc.
>
>The trick is that these complaints are not phrased in precisely that
>way.  Rather, someone is trying to do some specific task, and has
>difficulty arriving at a usable type needed in the task.  Often posters
>provide good answers--Durchholz included.  But the underlying complaint
>-really was- about the restrictiveness of the type system.

Could you provide a link to an example of such a post?

In my experience, people who have difficulties in getting their programs
to typecheck usually have an inconsistent design, not a design which is
consistent but which the type checker is too restrictive to support.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn86rg$gfh$2@newsreader2.netcologne.de>
Fergus Henderson wrote:

> In my experience, people who have difficulties in getting their programs
> to typecheck usually have an inconsistent design, not a design which is
> consistent but which the type checker is too restrictive to support.

Have you made sure that this is not a circular argument?

Does "consistent design" mean "acceptable by a type checker" in your book?


Pascal
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1oew8ou97.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> Fergus Henderson wrote:
> 
> > In my experience, people who have difficulties in getting their programs
> > to typecheck usually have an inconsistent design, not a design which is
> > consistent but which the type checker is too restrictive to support.
> 
> Have you made sure that this is not a circular argument?

Sure.  What he says is that the problems with those programs are
usually still there even after you erase types and, thus, arrive at an
untyped program.

> Does "consistent design" mean "acceptable by a type checker" in your book?

I cannot speak for Fergus, but I suspect (and hope!) the answer is
"no".  By "consistent design" we usually mean design that is free of
certain problems at the time the code executes on a real machine.

Matthias
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8shs$p1m$1@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>Fergus Henderson wrote:
>>
>>
>>>In my experience, people who have difficulties in getting their programs
>>>to typecheck usually have an inconsistent design, not a design which is
>>>consistent but which the type checker is too restrictive to support.
>>
>>Have you made sure that this is not a circular argument?
> 
> Sure.  What he says is that the problems with those programs are
> usually still there even after you erase types and, thus, arrive at an
> untyped program.

Well, to say this once more, there are programs out there that have a 
consistent design, that don't have "problems", and that cannot be 
statically checked.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m14qy0ot12.fsf@tti5.uchicago.edu>
Pascal Costanza <········@web.de> writes:

> Well, to say this once more, there are programs out there that have a
> consistent design, that don't have "problems", and that cannot be
> statically checked.

Care to give an example?  How do you know that the design is
consistent?  Do you have a proof for that claim?  Can you write that
proof down for me, please?

:-)

Matthias
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8u5m$p1s$2@f1node01.rhrz.uni-bonn.de>
Matthias Blume wrote:
> Pascal Costanza <········@web.de> writes:
> 
> 
>>Well, to say this once more, there are programs out there that have a
>>consistent design, that don't have "problems", and that cannot be
>>statically checked.
> 
> 
> Care to give an example?  How do you know that the design is
> consistent? 

Squeak, probably. Lisp development environments. Probably almost any 
development environment with a good debugger that allows for changing 
code on the fly.

> Do you have a proof for that claim?  Can you write that
> proof down for me, please?

No. Design consistency is an aesthetic category.

> :-)
> 
> Matthias

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Adam Warner
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <pan.2003.10.24.01.19.25.620316@consulting.net.nz>
Hi Matthias Blume,

> Pascal Costanza <········@web.de> writes:
> 
>> Well, to say this once more, there are programs out there that have a
>> consistent design, that don't have "problems", and that cannot be
>> statically checked.
> 
> Care to give an example?

(setf *debugger-hook*
      (lambda (condition value)
        (declare (ignorable condition value))
        (invoke-restart (psychic))))

(defun psychic ()
  (let* ((*read-eval* nil)
         (input (ignore-errors (read))))
    (format t "Input ~S is of type ~S.~%" input (type-of input))))

(loop (psychic))

This can only be statically compiled in the most trivial sense where every
input object type is permissible (i.e. every object is of type T).
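
(For the non-Lispers: a rough Python analogue of the same read-without-evaluating loop, with ast.literal_eval standing in for READ with *read-eval* disabled; the helper name is mine.)

```python
import ast

def psychic(line):
    # Parse the input as a literal without evaluating any code --
    # roughly what READ does with *read-eval* bound to nil.
    try:
        obj = ast.literal_eval(line)
    except (ValueError, SyntaxError):
        obj = None   # illegal input maps to None, as in the Lisp version
    return "Input %r is of type %s." % (obj, type(obj).__name__)

print(psychic("42"))         # Input 42 is of type int.
print(psychic("[1, 2, 3]"))  # Input [1, 2, 3] is of type list.
```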

Regards,
Adam
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9f7ee7$1@news.unimelb.edu.au>
Adam Warner <······@consulting.net.nz> writes:

>(setf *debugger-hook*
>      (lambda (condition value)
>        (declare (ignorable condition value))
>        (invoke-restart (psychic))))
>
>(defun psychic ()
>  (let* ((*read-eval* nil)
>         (input (ignore-errors (read))))
>    (format t "Input ~S is of type ~S.~%" input (type-of input))))
>
>(loop (psychic))
>
>This can only be statically compiled in the most trivial sense where every
>input object type is permissible (i.e. every object is of type T).

Could you please explain what this does, for those of us who are not
that familiar with lisp?  The stuff about *debugger-hook*, *read-eval*,
ignore-errors, etc. is a bit confusing for the uninitiated.  Does this
program just read in an arbitrary lisp term, print out the term and its
type, and then repeat?

If so, then it seems to be much the same as the following similarly sized
Mercury program:

	:- import_module io, term, term_io, list.

	main -->
		read_term(R),
		({ R = term(V,T) } ->
			print("Input "), write_term(V,T),
			print(" is of type "), print(typeof(T)), nl
		; []),
		main.

Naturally this reads in Mercury terms, not lisp terms!
Here's a transcript of running the Mercury program:

	abc.
	Input abc is of type atom
	"42".
	Input "42" is of type string
	42.
	Input 42 is of type int
	f(42).
	Input f(42) is of type compound

Actually I lied a little.  The Mercury program does need to be a little bit
longer than that.  Firstly, every Mercury program starts with a few
lines of boiler-plate code:

	:- interface.
	:- import_module io.
	:- pred main(io::di, io::uo) is det.
	:- implementation.

Secondly, the Mercury standard library does not include any routine
typeof(), so we need to define that ourselves.
But of course it is always easy to come up with programs that are shorter
because they make use of a library routine present in one language's library
but not others, so this difference does not seem significant.

The definition of typeof() is probably pretty simple.
I used the following:

	:- type term_type ---> int ; string ; float ; atom ; compound ; var.

	typeof(functor(integer(_), _, _)) = int.
	typeof(functor(string(_), _, _)) = string.
	typeof(functor(float(_), _, _)) = float.
	typeof(functor(atom(_), [], _)) = atom.
	typeof(functor(atom(_), [_|_], _)) = compound.
	typeof(variable(_)) = var.

This is just my guess as to what the lisp "type-of" function might do.

But maybe I completely misunderstood the whole Lisp program.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Adam Warner
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <pan.2003.10.29.10.22.57.949088@consulting.net.nz>
Hi Fergus Henderson,

> Adam Warner <······@consulting.net.nz> writes:
> 
>>(setf *debugger-hook*
>>      (lambda (condition value)
>>        (declare (ignorable condition value))
>>        (invoke-restart (psychic))))
>>
>>(defun psychic ()
>>  (let* ((*read-eval* nil)
>>         (input (ignore-errors (read))))
>>    (format t "Input ~S is of type ~S.~%" input (type-of input))))
>>
>>(loop (psychic))
>>
>>This can only be statically compiled in the most trivial sense where
>>every input object type is permissible (i.e. every object is of type T).
> 
> Could you please explain what this does, for those of us who are not that
> familiar with lisp?  The stuff about *debugger-hook*, *read-eval*,
> ignore-errors, etc. is a bit confusing for the uninitiated.  Does this
> program just read in an arbitrary lisp term, print out the term and its
> type, and then repeat?

That was the idea. It has to handle any type of input, even illegal data,
without leading to any unhandled exception (illegal input returns nil).
Frode Vatvedt Fjeld has remarked that I misused invoke-restart so perhaps
he could provide further details about how I should have implemented the
program.

Note that I overlooked handling the printing of circular data types
(controlled via *print-circle*). And doing anything useful with the data
afterwards :-)

* (loop (psychic))
'abc
Input 'abc is of type cons.
abc
Input abc is of type symbol.
"42"
Input "42" is of type (simple-base-string 2).
42
Input 42 is of type (integer 42 42).
#(1 2 3)
Input #(1 2 3) is of type (simple-vector 3).
(1 2 3)
Input (1 2 3) is of type cons.
1.0
Input 1.0 is of type double-float.
1f0
Input 1.0f0 is of type single-float.
1/2
Input 1/2 is of type ratio.
unknown-package:abc
Input nil is of type null.
"multi-line
string"
Input "multi-line
string" is of type (simple-base-string 17).
(#\a 1 "string" 1.2 / *)
Input (#\a 1 "string" 1.2 / *) is of type cons.
#.(launch-DoS)
Input nil is of type null.
4t3hthgfhggu 9403tu tret35 9&&7*F*8ED8 e88*EYF YEYF&EY*&TtIIGRG#G * 8#Y#RYE(#R
Input 4t3hthgfhggu is of type symbol.
Input 9403tu is of type symbol.
Input tret35 is of type symbol.
Input 9&&7*F*8ED8 is of type symbol.
Input e88*EYF is of type symbol.
Input |YEYF&EY*&TtIIGRG#G| is of type symbol.
Input * is of type symbol.
Input |8#y#rye| is of type symbol.
Input nil is of type null.
#1=(1 . #1#)
Input (1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
       1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
       1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
       1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
       1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
       ...

Whoops!

> If so, then it seems to be much the same as the following similarly sized
> Mercury program:
> 
> 	:- import_module io, term, term_io, list.
> 
> 	main -->
> 		read_term(R),
> 		({ R = term(V,T) } ->
> 			print("Input "), write_term(V,T),
> 			print(" is of type "), print(typeof(T)), nl
> 		; []),
> 		main.
> 
> Naturally this reads in Mercury terms, not lisp terms! Here's a transcript
> of running the Mercury program:
> 
> 	abc.
> 	Input abc is of type atom
> 	"42".
> 	Input "42" is of type string
> 	42.
> 	Input 42 is of type int
> 	f(42).
> 	Input f(42) is of type compound
> 
> Actually I lied a little.  The Mercury program does need to be a little
> bit longer than that.  Firstly, every Mercury program starts with a few
> lines of boiler-plate code:
> 
> 	:- interface.
> 	:- import_module io.
> 	:- pred main(io::di, io::uo) is det.
> 	:- implementation.
> 
> Secondly, the Mercury standard library does not include any routine
> typeof(), so we need to define that ourselves. But of course it is always
> easy to come up with programs that are shorter because they make use of a
> library routine present in one language's library but not others, so this
> difference does not seem significant.
> 
> The definition of typeof() is probably pretty simple. I used the
> following:
> 
> 	:- type term_type ---> int ; string ; float ; atom ; compound ; var.
> 
> 	typeof(functor(integer(_), _, _)) = int.
> 	typeof(functor(string(_), _, _)) = string.
> 	typeof(functor(float(_), _, _)) = float.
> 	typeof(functor(atom(_), [], _)) = atom.
> 	typeof(functor(atom(_), [_|_], _)) = compound.
> 	typeof(variable(_)) = var.
> 
> This is just my guess as to what the lisp "type-of" function might do.
> 
> But maybe I completely misunderstood the whole Lisp program.

No, you were spot on. You just have to rewrite typeof() so it supports
every language data type that can be input without evaluation. Since I
disabled read-time evaluation for Lisp it's a subset of these types:
<http://www.lispworks.com/reference/HyperSpec/Body/04_bc.htm>

Note that at no stage was I performing any explicit string handling. The
input becomes a Lisp object. TYPE-OF just returns the type of the Lisp
object: <http://www.lispworks.com/reference/HyperSpec/Body/f_tp_of.htm>

Regards,
Adam
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9fc8b9$1@news.unimelb.edu.au>
Adam Warner <······@consulting.net.nz> writes:

>Hi Fergus Henderson,
>
>> But maybe I completely misunderstood the whole Lisp program.
>
>No, you were spot on. You just have to rewrite typeof() so it supports
>every language data type that can be input without evaluation.

Fine; exercise for the reader.

>Note that at no stage was I performing any explicit string handling.

Nor was I.  That was all done "under the hood", by one of the Mercury
standard library routines that my code called.

I thought this debate was about ``static typing versus dynamic typing'',
not ``my standard library is better than yours''!  It seems pretty clear
that it is sometimes quite useful to have some way of reading in terms
of the source language.  But as I've shown, you can do this just fine
in a statically typed language if you have the right support in your
standard library.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Adam Warner
Subject: ``my standard library is better than yours'' [was Re: Python from Wise Guy's Viewpoint]
Date: 
Message-ID: <pan.2003.10.29.22.47.48.839762@consulting.net.nz>
Hi Fergus Henderson,

> I thought this debate was about ``static typing versus dynamic typing'',
> not ``my standard library is better than yours''!  It seems pretty clear
> that it is sometimes quite useful to have some way of reading in terms
> of the source language.  But as I've shown, you can do this just fine in
> a statically typed language if you have the right support in your
> standard library.

What you appear to have shown is that Mercury has strong dynamic type
support layered upon a strong statically typed language. All Common Lisp
implementations are strongly and dynamically typed. Some also provide
compile-time type support, inference and assertion checking. I've been
using this support to check the logic and improve the efficiency of my
code.

My example was deficient in demonstrating the dynamic limitations of Mercury:
<http://www.cs.mu.oz.au/research/mercury/information/features.html>

   Being a compiled language, Mercury does not have any means for altering
   the program at runtime, although we may later provide facilities for
   adding code to a running program.

The statement is dumb. Compare with this analogy:

   Being a compiled language, SBCL does not have any means for altering 
   the program at runtime, although we may later provide facilities for
   adding code to a running program.

SBCL only evaluates compiled code. Yet it is still extremely dynamic.
For example one may redefine functions at runtime which not only compile
to native assembly code but automatically integrate into the rest of the
program (unless originally inlined).
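
(The late-binding property at work here can be sketched in any dynamic language; a toy Python version of mine, not SBCL: callers look the function name up at call time, so a runtime redefinition integrates automatically.)

```python
def tax(amount):
    return amount // 10           # 10 % tax

def total(amount):
    return amount + tax(amount)   # `tax` is looked up at call time

print(total(100))   # 110

# Redefine tax while the program runs; total is untouched but
# automatically integrates the new definition (no inlining here).
def tax(amount):
    return amount // 5            # 20 % tax

print(total(100))   # 120
```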

Regards,
Adam
From: Fergus Henderson
Subject: Re: ``my standard library is better than yours'' [was Re: Python from Wise Guy's Viewpoint]
Date: 
Message-ID: <3fa24a17$1@news.unimelb.edu.au>
Adam Warner <······@consulting.net.nz> writes:

>My example was deficient in demonstrating the dynamic limitations of Mercury:
><http://www.cs.mu.oz.au/research/mercury/information/features.html>
>
>   Being a compiled language, Mercury does not have any means for altering
>   the program at runtime, although we may later provide facilities for
>   adding code to a running program.
>
>The statement is dumb.

I basically agree.

Firstly, instead of "Being a compiled language, ...", it should have been
"Because our emphasis was on efficiency and ease of reasoning about code,
...".

Secondly, the statement is now out-of-date.  Mercury provides interfaces
to dlopen(), dlsym(), and dlclose(), which allow you to thereby alter
the program at runtime.  This has been done in a way that has essentially
no impact at all on efficiency or the ease of reasoning about code which
does not use that feature.

IIRC, there was some debate about that statement when we were drafting
the Mercury web pages.  I think the original wording just said "Being
a compiled language, Mercury does not have any means for altering the
program at runtime."  I did at least get the bit about dynamically
loading code added.  I was still not in favour of the current wording,
but you have to choose which battles to fight :).  The counter-argument
which convinced me to drop the issue was that we should be optimizing our
web pages for ease of understandability rather than pedantic correctness.
Sometimes it is better to make a simple statement which is not pedantically
correct, rather than a more complicated one which is pedant-proof.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Adam Warner
Subject: Re: ``my standard library is better than yours'' [was Re: Python from Wise Guy's Viewpoint]
Date: 
Message-ID: <pan.2003.10.31.12.11.32.777783@consulting.net.nz>
Hi Fergus Henderson,

> Adam Warner <······@consulting.net.nz> writes:
> 
>>My example was deficient in demonstrating the dynamic limitations of Mercury:
>><http://www.cs.mu.oz.au/research/mercury/information/features.html>
>>
>>   Being a compiled language, Mercury does not have any means for altering
>>   the program at runtime, although we may later provide facilities for
>>   adding code to a running program.
>>
>>The statement is dumb.
> 
> I basically agree.

[But I still regret being so blunt]

> Firstly, instead of "Being a compiled language, ...", it should have been
> "Because our emphasis was on efficiency and ease of reasoning about code,
> ...".
> 
> Secondly, the statement is now out-of-date.  Mercury provides interfaces
> to dlopen(), dlsym(), and dlclose(), which allow you to thereby alter
> the program at runtime.  This has been done in a way that has essentially
> no impact at all on efficiency or the ease of reasoning about code which
> does not use that feature.
> 
> IIRC, there was some debate about that statement when we were drafting
> the Mercury web pages.  I think the original wording just said "Being
> a compiled language, Mercury does not have any means for altering the
> program at runtime."  I did at least get the bit about dynamically
> loading code added.  I was still not in favour of the current wording,
> but you have to choose which battles to fight :).  The counter-argument
> which convinced me to drop the issue was that we should be optimizing our
> web pages for ease of understandability rather than pedantic correctness.
> Sometimes it is better to make a simple statement which is not pedantically
> correct, rather than a more complicated one which is pedant-proof.

Perhaps using the word "static" at least once could help understandability!

I learned a lot from this recent post by Jacques Garrigue:
<http://groups.google.co.nz/groups?selm=l2ekwvflhq.fsf%40suiren.i-did-not-set--mail-host-address--so-shoot-me&output=gplain>

The terminology it employs could be helpful in constructing a simple but
correct statement.

Regards,
Adam
From: Frode Vatvedt Fjeld
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <2hekwws9on.fsf@vserver.cs.uit.no>
Adam Warner <······@consulting.net.nz> writes:

> (setf *debugger-hook*
>       (lambda (condition value)
>         (declare (ignorable condition value))
>         (invoke-restart (psychic))))
>
> (defun psychic ()
>   (let* ((*read-eval* nil)
>          (input (ignore-errors (read))))
>     (format t "Input ~S is of type ~S.~%" input (type-of input))))
>
> (loop (psychic))

I don't know if you're trying to make some point with this besides
writing meaningful lisp. But if not, it seems from this that you have
seriously misunderstood what invoke-restart is or does. Look it up.
And I'd also advise you to _bind_ *debugger-hook* rather than just
setting it.

-- 
Frode Vatvedt Fjeld
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8dhb$dma$2@news.oberberg.net>
Pascal Costanza wrote:

> Fergus Henderson wrote:
> 
>> In my experience, people who have difficulties in getting their programs
>> to typecheck usually have an inconsistent design, not a design which is
>> consistent but which the type checker is too restrictive to support.
> 
> Have you made sure that this is not a circular argument?
> 
> Does "consistent design" mean "acceptable by a type checker" in your book?

Matthias Blume already posted that the type checker caught several 
design errors. He's not the only one to post such reports; I regularly 
see statements like that on comp.lang.functional.

Regards,
Jo
From: Adrian Hey
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8pnn$g56$1$830fa795@news.demon.co.uk>
Pascal Costanza wrote:

> Fergus Henderson wrote:
> 
>> In my experience, people who have difficulties in getting their programs
>> to typecheck usually have an inconsistent design, not a design which is
>> consistent but which the type checker is too restrictive to support.
> 
> Have you made sure that this is not a circular argument?

It is obviously not a circular argument.

> Does "consistent design" mean "acceptable by a type checker" in your book?

Fergus was refering to programs that simply would not work, even if the
compiler ignored the type errors and generated code anyway. Obviously
not all type inconsistent programs fall into this category (only about
99.999..% of them:-). I've been using statically typed FPL's for a 
good few years now, and I can only think of one occasion where I had
"good" code rejected by the type checker (and even then the work around
was trivial). All other occasions it was telling me my programs were
broken (and where they were broken), without me having to test it.

This is good thing.  

As for dynamics, I don't think anybody would deny the usefulness of a
dynamic type system as a *supplement to* the static type system. (As
you have observed, there are occasions when type checking just can't
be done at compile time). But no way is a dynamic type system an
adequate *substitute for* modern static type systems. Most code
can be (and should be IMO) checked for type errors at compile time.

Regards
--
Adrian Hey 
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn8rn1$smu$1@f1node01.rhrz.uni-bonn.de>
Adrian Hey wrote:
> Pascal Costanza wrote:
> 
> 
>>Fergus Henderson wrote:

> I've been using statically typed FPL's for a 
> good few years now, and I can only think of one occasion where I had
> "good" code rejected by the type checker (and even then the work around
> was trivial). All other occasions it was telling me my programs were
> broken (and where they were broken), without me having to test it.
> 
> This is good thing.  

Maybe you haven't written the kind of programs yet that a static type 
system can't handle.

> As for dynamics, I don't think anybody would deny the usefulness of a
> dynamic type system as a *supplement to* the static type system.

I don't deny that static type systems can be a useful supplement to a 
dynamic type system in certain contexts.

> But no way is a dynamic type system an
> adequate *substitute for* modern static type systems. Most code
> can be (and should be IMO) checked for type errors at compile time.

There is an important class of programs - those that can reason about 
themselves and can change themselves at runtime - that cannot be 
statically checked.
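
Scaled down to a toy, such a program might look like this in Python (an illustrative sketch of mine, nothing more): it probes one of its own definitions at runtime and rebinds the name while running.

```python
# A tiny program that reasons about itself and changes itself at runtime.

def is_even(n):
    return n % 2 == 1          # deliberately wrong

# Reason about itself: probe the current definition...
if is_even(2) is not True:
    # ...and change itself: install a corrected definition on the fly.
    globals()["is_even"] = lambda n: n % 2 == 0

print(is_even(2))              # True -- callers see the repaired function
```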

Your claim implies that such code should not be written, at least not 
"most of the time" (whatever that means). Why? Maybe I am missing an 
important insight about such programs that you have.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Adrian Hey
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn9ft5$fof$1$830fa79f@news.demon.co.uk>
Pascal Costanza wrote:

> Adrian Hey wrote:
>> I've been using statically typed FPL's for a
>> good few years now, and I can only think of one occasion where I had
>> "good" code rejected by the type checker (and even then the work around
>> was trivial). All other occasions it was telling me my programs were
>> broken (and where they were broken), without me having to test it.
>> 
>> This is good thing.
> 
> Maybe you haven't written the kind of programs yet that a static type
> system can't handle.

You're right, I haven't. I would say the overwhelming majority of programs
"out there" fall into this category. I am aware that some situations are
difficult to handle in a statically typed language. An obvious example
in Haskell would be trying to type a function which interpreted strings
representing arbitrary Haskell expressions and returned their value:

        eval :: String -> ??

If this situation is to be dealt with at all, some kind of dynamic
type system seems necessary. I don't think anybody is denying that
(certainly not me).

>> As for dynamics, I don't think anybody would deny the usefulness of a
>> dynamic type system as a *supplement to* the static type system.
> 
> I don't deny that static type systems can be a useful supplement to a
> dynamic type system in certain contexts.

I don't think anybody who read your posts would get that impression :-) 
 
> There is an important class of programs - those that can reason about
> themselves and can change themselves at runtime - that cannot be
> statically checked.

Yes indeed. Even your common or garden OS falls into this category I
think, but that doesn't mean you can't statically type check individual
fragments of code (programs) that run under that OS. It just means
you can't statically type check the entire system (OS + application
programs).  

> Your claim implies that such code should not be written,

What claim? I guess you mean the one about dynamic typing being a
useful supplement to, but not a substitute for, static typing.

If so, I don't think it implies that at all.

> at least not "most of the time" (whatever that means).

Dunno who you're quoting there, but it isn't me.

> Why? Maybe I am missing an important insight about such programs
> that you have.

Possibly, but it seems more likely that you are simply misrepresenting
what I (and others) have written in order to create a straw man to demolish.

Regards
--
Adrian Hey
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn9rml$dp8$1@newsreader2.netcologne.de>
Adrian Hey wrote:

> You're right, I haven't. I would say the overwhelming majority of programs
> "out there" fall into this category.

Do you have empirical evidence for this statement? Maybe your sample set 
is not representative?

>>>As for dynamics, I don't think anybody would deny the usefulness of a
>>>dynamic type system as a *supplement to* the static type system.
>>
>>I don't deny that static type systems can be a useful supplement to a
>>dynamic type system in certain contexts.
> 
> 
> I don't think anybody who read your posts would get that impression :-) 

Well, then they don't read closely enough. In my very first posting wrt this 
topic, I have suggested soft typing as a good compromise. See 
http://groups.google.com/groups?selm=bn687n%24l6u%241%40f1node01.rhrz.uni-bonn.de

Yes, you can certainly tell that I am a fan of dynamic type systems. So 
what? Someone has asked why one would want to get rid of a static type 
system, and I am responding.

(Thanks for the smiley. ;)

>>Your claim implies that such code should not be written,
> 
> 
> What claim?

"Most code [...] should be [...] checked for type errors at compile time."


>>at least not "most of the time" (whatever that means).
> 
> 
> Dunno who you're quoting there, but it isn't me.


Pascal
From: Fergus Henderson
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <3f9cf793$1@news.unimelb.edu.au>
Pascal Costanza <········@web.de> writes:

>Fergus Henderson wrote:
>
>> In my experience, people who have difficulties in getting their programs
>> to typecheck usually have an inconsistent design, not a design which is
>> consistent but which the type checker is too restrictive to support.
>
>Have you made sure that this is not a circular argument?

Yes.

>Does "consistent design" mean "acceptable by a type checker" in your book?

No.

-- 
Fergus Henderson <···@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
From: Remi Vanicat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87ismgxhnj.dlv@wanadoo.fr>
Fergus Henderson <···@cs.mu.oz.au> writes:

> Lulu of the Lotus-Eaters <·····@gnosis.cx> writes:
>
>>Joachim Durchholz <·················@web.de> writes:
>>>My 100% subjective private study reveals not a single complaint about
>>>over-restrictive type systems in comp.lang.functional in the last 12
>>>months.
>>
>>I also read c.l.functional (albeit only lightly).  In the last 12
>>months, I have encountered dozens of complaints about over-restrictive
>>type sytems in Haskell, OCaml, SML, etc.
>>
>>The trick is that these complaints are not phrased in precisely that
>>way.  Rather, someone is trying to do some specific task, and has
>>difficulty arriving at a usable type needed in the task.  Often posters
>>provide good answers--Durchholz included.  But the underlying complaint
>>-really was- about the restrictiveness of the type system.
>
> Could you provide a link to an example of such a post?

I've no link, but I'm sure I have seen (here or on one of the Caml
lists) people trying to do polymorphic recursion, something that is not
easy to do in OCaml...

-- 
Rémi Vanicat
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn7aee$1sr$1@newsreader2.netcologne.de>
Joachim Durchholz wrote:

> Most functional languages with a static type system don't require that 
> the programmer writes down the types, they are inferred from usage. And 
> the type checker will complain as soon as the usage of some data item is 
> inconsistent.

I know about type inference. The set of programs that can be checked 
with type inference is still a subset of all useful programs.

> My 100% subjective private study reveals not a single complaint about 
> over-restrictive type systems in comp.lang.functional in the last 12 
> months.

I am not surprised. :)


Pascal
From: Matthias Blume
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <m1smlkouf7.fsf@tti5.uchicago.edu>
Joachim Durchholz <·················@web.de> writes:

> Pascal Costanza wrote:
> > ....because static type systems work by reducing the expressive
> > power of a language. It can't be any different for a strict static
> > type system. You can't solve the halting problem in a
> > general-purpose language.
> 
> 
> The final statement is correct, but you don't need to solve the
> halting problem: it's enough to allow the specification of some
> easy-to-prove properties, without hindering the programmer too much.

In fact, you should never need to "solve the halting problem" in order
to statically check your program.  After all, the programmer *already
has a proof* in her mind when she writes the code!  All that's needed
(:-) is for her to provide enough hints as to what that proof is so
that the compiler can verify it.  (The smiley is there because, as we
are all painfully aware, this is much easier said than done.)

Matthias
From: ·············@comcast.net
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <65ifikya.fsf@comcast.net>
Matthias Blume <····@my.address.elsewhere> writes:

> In fact, you should never need to "solve the halting problem" in order
> to statically check your program.  After all, the programmer *already
> has a proof* in her mind when she writes the code!  All that's needed
> (:-) is for her to provide enough hints as to what that proof is so
> that the compiler can verify it.  (The smiley is there because, as we
> are all painfully aware, this is much easier said than done.)


I'm having trouble proving that MYSTERY returns T for lists of finite
length.  I had an idea that it would, but now I'm not sure.  Can the
compiler verify it?

(defun kernel (s i)
  (list (not (car s))
	(if (car s)
	    (cadr s)
	  (cons i (cadr s)))
	(cons 'y (cons i (cons 'z (caddr s))))))

(defconstant k0 '(t () (x)))

(defun mystery (list)
  (let ((result (reduce #'kernel list :initial-value k0)))
    (cond ((null (cadr result)))
	  ((car result) (mystery (cadr result)))
	  (t (mystery (caddr result))))))
From: Mike Silva
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20619edc.0310241306.631ff013@posting.google.com>
"Marshall Spight" <·······@dnai.com> wrote in message news:<······················@rwcrnsc51.ops.asp.att.net>...
> "Scott McIntire" <····················@comcast.net> wrote in message ····························@sccrnsc01...
> > It seems to me that the Agency would have fared better if they just used
> > Lisp - which has bignums - and relied more on regression suites and less on
> > the belief that static type checking systems would save the day.
> 
> I find that an odd conclusion. Given that the cost of bugs is so high
> (especially in the cited case) I don't see a good reason for discarding
> *anything* that leads to better correctness. Yes, bignums is a good
> idea: overflow bugs in this day and age are as bad as C-style buffer
> overruns. Why work with a language that allows them when there
> are languages that don't?

As I understand it, the Operand Error that caused the grief was a
hardware trap in the 68k FPU.  Seems that this trap would have been
programmed to do the same thing regardless of the language used.

Also, I wouldn't call this an overflow "bug."  The code was written to
assume (based on proofs) that any overflow indicated a hardware
failure, and to take the designed action for hardware failure.  It
would have been trivial for the programmers to prevent the overflow or
handle it in another way.  Instead, they made a deliberate decision
that the default exception handling was exactly the right response for
overflow on this variable.

Mike
From: Mike Silva
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20619edc.0310241309.4698cde0@posting.google.com>
"Marshall Spight" <·······@dnai.com> wrote in message news:<······················@rwcrnsc51.ops.asp.att.net>...
> "Scott McIntire" <····················@comcast.net> wrote in message ····························@sccrnsc01...
> > It seems to me that the Agency would have fared better if they just used
> > Lisp - which has bignums - and relied more on regression suites and less on
> > the belief that static type checking systems would save the day.
> 
> I find that an odd conclusion. Given that the cost of bugs is so high
> (especially in the cited case) I don't see a good reason for discarding
> *anything* that leads to better correctness. Yes, bignums is a good
> idea: overflow bugs in this day and age are as bad as C-style buffer
> overruns. Why work with a language that allows them when there
> are languages that don't?

As I understand it, the Operand Error that caused the grief was a
hardware trap in the 68k FPU.  Seems that this trap would have been
programmed to do the same thing regardless of the language used.

Also, I wouldn't call this an overflow "bug."  The code was written to
assume (based on proofs) that any overflow indicated a hardware
failure, and to take the designed action for hardware failure.  It
would have been trivial for the programmers to prevent the overflow or
handle it in another way.  Instead, they made a deliberate decision
that the default exception handling was exactly the right response for
overflow on this variable.

Mike

(Google is gagging right now, so apologies for any multiple posts)
From: Pascal Costanza
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bmuv7n$9pr$1@newsreader2.netcologne.de>
Joachim Durchholz wrote:

> Oh, you're trolling for an inter-language flame fest...
> well, anyway:
> 
>> 3. no multimethods (why? Guido did not know Lisp, so he did not know 
>>    about them) You now have to suffer from visitor patterns, etc. like
>>     lowly Java monkeys.
> 
> 
> Multimethods suck.

Do they suck more or less than the Visitor pattern?

> The longer answer: Multimethods have modularity issues (if whatever 
> domain they're dispatching on can be extended by independent developers: 
> different developers may extend the dispatch domain of a function in 
> different directions, and leave undefined combinations; standard 
> dispatch strategies as I've seen in some Lisps just cover up the 
> undefined behaviour, with a slightly less than 50% chance of being 
> correct).

So how do you implement an equality operator correctly with only single 
dynamic dispatch?


Pascal
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn0fvj$qmu$1@news.oberberg.net>
Pascal Costanza wrote:

> Joachim Durchholz wrote:
> 
>> Oh, you're trolling for an inter-language flame fest...
>> well, anyway:
>>
>>> 3. no multimethods (why? Guido did not know Lisp, so he did not know 
>>>    about them) You now have to suffer from visitor patterns, etc. like
>>>     lowly Java monkeys.
>>
>> Multimethods suck.
> 
> Do they suck more or less than the Visitor pattern?

Well, the visitor pattern is worse.
Generics would be better though.

> So how do you implement an equality operator correctly with only single 
> dynamic dispatch?

Good question.

In practice, you don't use dispatch, you use some built-in mechanism.

Even more in practice, all equality operators that I have seen tended to 
compare more or less than one wanted to have compared, at least for 
complicated types with large hidden internal structures, or different 
equivalent internal structures. I have seen many cases where people 
implemented several equality operators - of course, with different 
names, and for most cases, I'm under the impression they weren't even 
aware that it was equality that they were implementing :-)

Examples are:
Lisp with its multitude of equality predicates nicely exposes the 
problems, and provides a solution.
Various string representations (7-bit Ascii, 8-bit Ascii, various 
Unicode flavors). Do you want to compare representations or contents? Do 
you need a code table to compare?
Various number representations: do you want to make 1 different from 1.0, 
or do you want to have them equal?

I think that dynamic dispatch is an interesting answer, but not to 
equality :-)

Regards,
Jo
From: Alex Martelli
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <q_Skb.21024$e5.754552@news1.tin.it>
Pascal Costanza wrote:
   ...
> So how do you implement an equality operator correctly with only single
> dynamic dispatch?

Equality is easy, as it's commutative -- pseudocode for it might be:

def operator==(a, b):
    try: return a.__eq__(b)
    except I_Have_No_Idea:
        try: return b.__eq__(a)
        except I_Have_No_Idea:
            return False

Non-commutative operators require a tad more, e.g. Python lets each
type define both an __add__ and a __radd__ (rightwise-add):

def operator+(a, b):
    try: return a.__add__(b)
    except (I_Have_No_Idea, AttributeError):
        try: return b.__radd__(a)
        except (I_Have_No_Idea, AttributeError):
            raise TypeError, "can't add %r and %r" % (type(a),type(b))

Multimethods really shine in HARDER problems, e.g., when you have
MORE than just two operands (or, perhaps, some _very_ complicated
inheritance structure -- but in such cases, even multimethods are
admittedly no panacea).  Python's pow(a, b, c) is an example --
and, indeed, Python does NOT let you overload THAT (3-operand)
version, only the two-operand one that you can spell pow(a, b)
or a**b.
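(For the record: real Python spells the I_Have_No_Idea of the pseudocode above
as the NotImplemented sentinel value, returned rather than raised. A runnable
sketch of the same double-dispatch dance, using a made-up toy class:)

```python
class Frac:
    """Toy fraction class, invented purely for illustration."""
    def __init__(self, num, den):
        self.num, self.den = num, den

    def __eq__(self, other):
        if isinstance(other, Frac):
            return self.num * other.den == other.num * self.den
        if isinstance(other, int):
            return self.num == other * self.den
        return NotImplemented  # "I have no idea" -- let the other operand try

    def __add__(self, other):
        if isinstance(other, int):
            return Frac(self.num + other * self.den, self.den)
        return NotImplemented

    __radd__ = __add__  # rightwise-add: addition is commutative here

print(Frac(4, 2) == 2)        # the __eq__ path
print((1 + Frac(1, 2)).num)   # int gives up, so Frac.__radd__ takes over
```

Since equality is commutative, 2 == Frac(4, 2) also works: int's __eq__
returns NotImplemented and the interpreter retries with the reflected operand.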


Alex
From: Duncan Booth
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <Xns941A65ECE6F6Bduncanrcpcouk@127.0.0.1>
·······@ziplip.com wrote in 
·············································@ziplip.com:

> 1. f(x,y,z) sucks. f x y z  would be much easier to type (see Haskell)
>    90% of the code is function applications. Why not make it convenient?

What syntax do you propose to use for f(x(y,z)), or f(x(y(z))), or 
f(x,y(z)) or f(x(y),z) or f(x)(y)(z) or numerous other variants which are 
not currently ambiguous?

-- 
Duncan Booth                                             ······@rcp.co.uk
int month(char *p){return(124864/((p[0]+p[1]-p[2]&0x1f)+1)%12)["\5\x8\3"
"\6\7\xb\1\x9\xa\2\0\4"];} // Who said my code was obscure?
From: Alex Martelli
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <s4Okb.315815$R32.10447714@news2.tin.it>
Duncan Booth wrote:

> ·······@ziplip.com wrote in
> ·············································@ziplip.com:
> 
>> 1. f(x,y,z) sucks. f x y z  would be much easier to type (see Haskell)
>>    90% of the code is function applications. Why not make it convenient?
> 
> What syntax do you propose to use for f(x(y,z)), or f(x(y(z))), or
> f(x,y(z)) or f(x(y),z) or f(x)(y)(z) or numerous other variants which are
> not currently ambiguous?

Haskell has it easy -- f x y z is the same as ((f x) y) z -- as an
N-ary function is "conceptualized" as a unary function that returns
an (N-1)-ary function [as Haskell Curry conceptualized it -- which
is why the language is named Haskell, and the concept currying:-)].
So, your 5th case, f(x)(y)(z), would be exactly the same thing.

When you want to apply operators in other than their normal order
of priority, then and only then you must use parentheses, e.g. for 
your various cases they would be f (x y z) [1st case], f (x (y z))
[2nd case], f x (y z) [3rd case], f (x y) z [4th case].  You CAN,
if you wish, add redundant parentheses, of course, just like in
Python [where parentheses are overloaded to mean: function call,
class inheritance, function definition, empty tuples, tuples in
list comprehensions, apply operators with specified priority --
I hope I recalled them all;-)].
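To make the ((f x) y) z reading concrete: the Haskell convention can be faked
in Python with nested closures, if one is willing to spell the calls out. (The
curry3 helper below is invented for this sketch, not anything in the library.)

```python
def curry3(f):
    """Turn a three-argument function into a chain of one-argument ones."""
    return lambda x: lambda y: lambda z: f(x, y, z)

def add3(x, y, z):
    return x + y + z

f = curry3(add3)
print(f(1)(2)(3))   # same shape as ((f 1) 2) 3 in Haskell
inc = f(1)          # partial application falls out for free
print(inc(2)(3))
```

The payoff Haskell gets from this convention is the second line: fixing the
first argument needs no special syntax at all.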

Of course this will never happen in Python, as it would break all
backwards compatibility.  And I doubt it could sensibly happen in
any "simil-Python" without adopting many other Haskell ideas, such
as implicit currying and nonstrictness.  What "x = f" should mean
in a language with assignment, everything first-class, and implicit
rather than explicit calling, is quite troublesome too.

Ruby allows some calls without parentheses, but the way it disambiguates 
"f x y" between f(x(y)) and f(x, y) is, IMHO, pricey -- it has to KNOW
whether x is a method, and if it is it won't just let you pass it as such
as an argument to f; that's the slippery slope whereby you end up having to
write x.call(y) because not just any object is callable.
"x = f" CALLS f if f is a method, so you can't just treat methods
as first-class citizens like any other... etc, etc...
AND good Ruby texts recommend AVOIDING "f x y" without parentheses,
anyway, because it's ambiguous to a human reader, even when it's
clear to the compiler -- so the benefit you get for that price is
dubious indeed.


Alex
From: Tim Sweeney
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <9ef8dc7.0310201252.19edf85f@posting.google.com>
> THE GOOD:
> THE BAD:
> 
> 1. f(x,y,z) sucks. f x y z  would be much easier to type (see Haskell)
>    90% of the code is function applications. Why not make it convenient?
> 
> 9. Syntax for arrays is also bad [a (b c d) e f] would be better
>    than [a, b(c,d), e, f]

Agreed with your analysis, except for these two items.

#1 is a matter of opinion, but in general:

- f(x,y) is the standard set by mathematical notation and all the
mainstream programming language families, and is library neutral:
calling a curried function is f(x)(y), while calling an uncurried
function is f(x,y).

- "f x y" is unique to the Haskell and LISP families of languages, and
implies that most library functions are curried.  Otherwise you have a
weird asymmetry between curried calls "f x y" and uncurried calls
which translate back to "f(x,y)".  Widespread use of currying can lead
to weird error messages when calling functions of many parameters: a
missing third parameter in a call like f(x,y) is easy to report, while
with curried notation, "f x y" is still valid, yet results in a type
other than what you were expecting, moving the error up the AST to a
less useful location.

I think #9 is inconsistent with #1.

In general, I'm wary of notations like "f x" that use whitespace as an
operator (see http://www.research.att.com/~bs/whitespace98.pdf).
From: Matthew Danish
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031021013026.GQ1454@mapcar.org>
On Mon, Oct 20, 2003 at 01:52:14PM -0700, Tim Sweeney wrote:
> > 1. f(x,y,z) sucks. f x y z  would be much easier to type (see Haskell)
> >    90% of the code is function applications. Why not make it convenient?
> > 
> > 9. Syntax for arrays is also bad [a (b c d) e f] would be better
> >    than [a, b(c,d), e, f]
> #1 is a matter of opinion, but in general:
> 
> - f(x,y) is the standard set by mathematical notation and all the
> mainstream programming language families, and is library neutral:
> calling a curried function is f(x)(y), while calling an uncurried
> function is f(x,y).

And lambda notation is: \xy.yx or something like that.  Math notation is
rather ad-hoc, designed for shorthand scribbling on paper, and in
general a bad idea to imitate for programming languages which are
written on the computer in an ASCII editor (which is one thing which
bothers me about ML and Haskell).

> - "f x y" is unique to the Haskell and LISP families of languages, and
> implies that most library functions are curried.  Otherwise you have a
> weird asymmetry between curried calls "f x y" and uncurried calls
> which translate back to "f(x,y)".  

Here's an "aha" moment for you:

In Haskell and ML, the two biggest languages with built-in syntactic
support for currying, there is also a datatype called a tuple (which is
a record with positional fields).  All functions, in fact, only take a
single argument.  The trick is that the syntax for tuples and the syntax
for currying combine to form the syntax for function calling:

f (x, y, z)  ==>  calling f with a tuple (x, y, z)
f x (y, z) ==> calling f with x, and then calling the result with (y, z).

This, I think, is a win for a functional language.  However, in a
not-so-functionally-oriented language such as Lisp, this gets in the way
of flexible parameter-list parsing, and doesn't provide that much value.
In Lisp, a form's meaning is determined by its first element, hence (f x
y) has a meaning determined by F (whether it is a macro, or functionally
bound), and Lisp permits such things as "optional", "keyword" (a.k.a. by
name) arguments, and ways to obtain the arguments as a list.

"f x y", to Lisp, is just three separate forms (all symbols).

> Widespread use of currying can lead
> to weird error messages when calling functions of many parameters: a
> missing third parameter in a call like f(x,y) is easy to report, while
> with curried notation, "f x y" is still valid, yet results in a type
> other than what you were expecting, moving the error up the AST to a
> less useful location.

Nah, it should still be able to report the line number correctly.
Though I freely admit that the error messages spat out of compilers like
SML/NJ are not so wonderful.

> I think #9 is inconsistent with #1.

I think that if the parser recognizes that it is directly within a [ ]
form, it can figure out that these are not function calls but rather
elements, though it would require that function calls be wrapped in (
)'s now.  And the grammar would be made much more complicated I think.

Personally, I prefer (list a (b c d) e f).

> In general, I'm wary of notations like "f x" that use whitespace as an
> operator (see http://www.research.att.com/~bs/whitespace98.pdf).

Hmm, rather curious paper.  I never really thought of "f x" using
whitespace as an operator--it's a delimiter in the strict sense.  The
grammars of ML and Haskell define that consecutive expressions form a
function application.  Lisp certainly uses whitespace as a simple
delimiter.  I'm not a big fan of required commas because it gets
annoying when you are editing large tables or function calls with many
parameters.  The behavior of Emacs's C-M-t or M-t is not terribly good
with extraneous characters like those, though it does try.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Thomas F. Burdick
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <xcvhe23s5jv.fsf@famine.OCF.Berkeley.EDU>
Matthew Danish <·······@andrew.cmu.edu> writes:

> On Mon, Oct 20, 2003 at 01:52:14PM -0700, Tim Sweeney wrote:
>
> > In general, I'm wary of notations like "f x" that use whitespace as an
> > operator (see http://www.research.att.com/~bs/whitespace98.pdf).
> 
> Hmm, rather curious paper.  I never really thought of "f x" using
> whitespace as an operator--it's a delimiter in the strict sense.  The
> grammars of ML and Haskell define that consecutive expressions form a
> function application.  Lisp certainly uses whitespace as a simple
> delimiter.  I'm not a big fan of required commas because it gets
> annoying when you are editing large tables or function calls with many
> parameters.  The behavior of Emacs's C-M-t or M-t is not terribly good
> with extraneous characters like those, though it does try.

It's true that (f x y) and "f x y" don't use whitespace as an
operator; however, I attempted something sneaky once, trying to get
lisp used via a custom reader that did use whitespace as an operator
(for the record, it worked until someone figured out what was going
on, then they were pissed, for no rational reason).  Its real use used
all domain-specific functions, but some example code that you can read
with SNEAKY:READ :

  let (list list 1, 2, 3;;
       times 3)
   {
    dotimes (x, times)
     { format (t, "x is ~S", x);
       print list;
     }
   }

It's all s-expressions, but they look like:

  f x, y, z;
or
  f (x, y, z);
or
  (sexp, sexp, sexp ...)
or
  f x, y, {sexp; sexp; ...}
or
  f x {sexp; sexp; ...}

It can look remarkably non-lispy, but once one catches on that it's
just a lot of ways of expressing where lists start and end, one can
figure out what's happening pretty quickly.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Michael Geary
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <vp96d616o3ea8@corp.supernews.com>
> > In general, I'm wary of notations like "f x" that use whitespace as an
> > operator (see http://www.research.att.com/~bs/whitespace98.pdf).

> Hmm, rather curious paper.  I never really thought of "f x" using
> whitespace as an operator--it's a delimiter in the strict sense.  The
> grammars of ML and Haskell define that consecutive expressions form a
> function application.  Lisp certainly uses whitespace as a simple
> delimiter...

Did you read the cited paper *all the way to the end*?

-Mike
From: Matthew Danish
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <20031021074255.GR1454@mapcar.org>
On Mon, Oct 20, 2003 at 07:27:49PM -0700, Michael Geary wrote:
> > > In general, I'm wary of notations like "f x" that use whitespace as an
> > > operator (see http://www.research.att.com/~bs/whitespace98.pdf).
> 
> > Hmm, rather curious paper.  I never really thought of "f x" using
> > whitespace as an operator--it's a delimiter in the strict sense.  The
> > grammars of ML and Haskell define that consecutive expressions form a
> > function application.  Lisp certainly uses whitespace as a simple
> > delimiter...
> 
> Did you read the cited paper *all the way to the end*?

Why bother?  It says "April 1" in the Abstract, and got boring about 2
paragraphs later.  I should have scare-quoted "operator" above, or
rather the lack of one, which is interpreted as meaning function
application.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <pan.2003.10.20.21.35.50.95447@knm.org.pl>
On Mon, 20 Oct 2003 13:52:14 -0700, Tim Sweeney wrote:

> - "f x y" is unique to the Haskell and LISP families of languages, and
> implies that most library functions are curried.

No, Lisp doesn't curry. It really writes "(f x y)", which is different
from "((f x) y)" (which is actually Scheme, not Lisp).

In fact the syntax "f x y" without mandatory parens fits non-lispish
non-curried syntaxes too. The space doesn't have to be left- or
right-associative; it just binds all arguments at once, and this
expression is different both from "f (x y)" and "(f x) y".

The only glitch is that you have to express application to 0 arguments
somehow. If you use "f()", you can't use "()" as an expression (for
empty tuple for example). But when you accept it, it works. It's my
favorite function application syntax.

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Joachim Durchholz
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn33fo$1u7$1@news.oberberg.net>
Tim Sweeney wrote:
>>
>>1. f(x,y,z) sucks. f x y z  would be much easier to type (see Haskell)
>>   90% of the code is function applications. Why not make it convenient?
>>
>>9. Syntax for arrays is also bad [a (b c d) e f] would be better
>>   than [a, b(c,d), e, f]
> 
> Agreed with your analysis, except for these two items.
> 
> #1 is a matter of opinion, but in general:
> 
> - f(x,y) is the standard set by mathematical notation and all the
> mainstream programming language families, and is library neutral:
> calling a curried function is f(x)(y), while calling an uncurried
> function is f(x,y).

Well, in most functional languages, curried functions are the standard.
This has some syntactic advantages, in areas that go beyond mathematical 
tradition. (Since each branch of mathematics has its own traditions, 
it's probably possible to find a branch where the functional programming 
way of writing functions is indeed tradition *g*)

> - "f x y" is unique to the Haskell and LISP families of languages, and
> implies that most library functions are curried.

No, Lisp languages require parentheses around the call, i.e.
   (f x y)
Lisp does share the trait that it doesn't need commas.

 > Otherwise you have a
> weird asymmetry between curried calls "f x y" and uncurried calls
> which translate back to "f(x,y)".

It's not an asymmetry. "f x y" is a function of two parameters.
"f (x, y)" is a function of a single parameter, which is an ordered pair.
In most cases such a difference is irrelevant, but there are cases where 
it isn't.

 > Widespread use of currying can lead
> to weird error messages when calling functions of many parameters: a
> missing third parameter in a call like f(x,y) is easy to report, while
> with curried notation, "f x y" is still valid, yet results in a type
> other than what you were expecting, moving the error up the AST to a
> less useful location.

That's right.
On the other hand, it makes it easy to write code that just fills the 
first parameter of a function, and returns the result. Such code is so 
commonplace that having weird error messages is considered a small price 
to pay.
Actually, writing functional code is more about sticking together 
functions than actually calling them. With such use, having to write 
code like
   f (x, ...)
instead of
   f x
will gain in precision, but it will clutter up the code so much that I'd 
expect the gain in readability to be little, nonexistent or even negative.
It might be interesting to transform real-life code to a more standard 
syntax and see whether my expectation indeed holds.

> In general, I'm wary of notations like "f x" that use whitespace as an
> operator (see http://www.research.att.com/~bs/whitespace98.pdf).

That was an April Fool's joke. A particularly clever one: the paper 
starts by laying a marginally reasonable groundwork, only to advance 
into realms of absurdity later on.
It would be unreasonable to make whitespace an operator in C++. This 
doesn't mean that a language with a syntax designed for whitespace 
cannot be reasonable, and in fact some languages do that, with good 
effect. Reading Haskell code is like a fresh breeze, since you don't 
have to mentally filter out all that syntactic noise.
The downside is that it's easy to get some detail wrong. One example is 
a decision (was that Python?) to equate a tab with eight blanks, which 
tends to mess up syntactic structure when editing the code with 
over-eager editors. There are some other lessons to learn - but then, 
whitespace-as-syntactic-element is a relatively new concept, and people 
are still playing with it and trying out alternatives. The idea in 
itself is useful, its incarnations aren't perfect (yet).

Regards,
Jo
From: Pascal Bourguignon
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <87ptgqcf36.fsf@thalassa.informatimago.com>
···@epicgames.com (Tim Sweeney) writes:
> In general, I'm wary of notations like "f x" that use whitespace as an
> operator (see http://www.research.att.com/~bs/whitespace98.pdf).

The \\ comment successor is GREAT!

-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Wojtek Walczak
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <bn0r8a$5om$1@atlantis.news.tpi.pl>
On Sun, 19 Oct 2003 04:18:31 -0700 (PDT), ·······@ziplip.com wrote:
> THE GOOD:
[...]
> THE BAD:
[...]

Well, with the variety of languages and concepts out there, you can search
for the language of your choice. Just because all the things you mentioned
in "THE BAD" are available in other languages doesn't mean they should also
exist in Python. Languages are different, just as people are. If you find
that Python has more cons than pros, then it is simply not a language you
can get 100% of your fun from. In any case, turning it into the next
Haskell, Smalltalk or Ruby makes no sense. Python fills a certain niche and
does its job as it should. Differences are a necessity, so don't waste your
time on talk of making Python similar to something else.

-- 
[ Wojtek Walczak - gminick (at) underground.org.pl ]
[        <http://gminick.linuxsecurity.pl/>        ]
[ "...various turns of phrase, dull with the patina of age." ]
From: q u a s i
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <ksc7qvsnmjkd3soj5qp18p9imsr7j09cqa@4ax.com>
only 74 more to go before we get 1000 in this thread !!!

has the c.l.l traffic increased a bit of late ?

--
quasi
http://abhijit-rao.tripod.com/
From: Erann Gat
Subject: Re: Python from Wise Guy's Viewpoint
Date: 
Message-ID: <gat-0111030720370001@192.168.1.51>
In article <··································@4ax.com>, q u a s i
<·········@yahoo.com> wrote:

> only 74 more to go before we get 1000 in this thread !!!
> 
> has the c.l.l traffic increased a bit of late ?
> 
> --
> quasi
> http://abhijit-rao.tripod.com/

Not only that, but this thread still has interesting technical content and
has not degenerated into a flame war.  That's got to be a usenet record!

E.
From: Joe Marshall
Subject: Re: More static type fun.
Date: 
Message-ID: <he12t567.fsf@comcast.net>
Lauri Alanko <··@iki.fi> writes:

> Joe Marshall <·············@comcast.net> virkkoi:
>> Can I ask what else you might want?  Can I ask how you expect to get
>> it if you cannot compute it?
>
> I might want the type of all functions that, when given a number, also
> return a number. What kind of a predicate would compute whether a given
> value is such a function or not?

None, you can declare them directly:

  (proclaim '(ftype (function (number) number) foo))

Or were you looking for a co-variant type?  That's a bit harder.

-- 
~jrm