From: Ray Dillinger
Subject: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49dd575d$0$95546$742ec2ed@news.sonic.net>
David Moon has created a programming language called PLOT, 
for "Programming Language for Old-Timers." 

His introduction to it can be found at:

http://users.rcn.com/david-moon/PLOT/

There's ongoing discussion on the Lambda-The-Ultimate Programming
languages website, at

http://lambda-the-ultimate.org/node/3253

It's interesting.  It appears to be a lisp (in that it's a multiparadigm
programming language with a code-data correspondence, where macros work 
on the data form of code), but it manages to miss such traditional lisp
builtins as cons cells and fully parenthesized prefix syntax.  So far 
there is some skepticism about PLOT's macrology; it may turn out to 
be more complicated and harder to compose than regular lisp macrology.
Well, okay, I'm personally skeptical about it.  I think if you don't 
use an absolutely regular syntax like traditional lisp syntax, you 
probably can't avoid having your macrology become more complicated 
and harder to compose. 

PLOT's design goes to some lengths to keep expression nesting levels 
shallow compared to traditional lisps, and prefers to use indentation 
rather than parens to denote expression nesting.  It has objects, 
generic functions, user-defined first-class types, and macros, so 
it appears to be a full-on multiparadigm red-pill language.  

It's mostly-functional.  Pure-functional algorithms work and are easy 
to express in it, but assignment is not impossible as it is in the
blue-pill functional languages.

It's also mostly-OO.  Object-Oriented programming is supported with 
Objects, Methods, and Generic functions, but doesn't dominate all 
possible ways of expressing algorithms as it does in the blue-pill 
OO languages.  

I thought folks would be interested. Especially that minority that 
prefers to gripe about cons cells and parens and prefix notation,
instead of actually developing alternatives that have the power of 
Lisp to do syntactic abstraction. 

                                Bear

From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <2f31a091-b16c-415d-87f4-9821920b8522@y13g2000yqn.googlegroups.com>
Hi Ray, list!

  I'm a rather marginal person at comp.lang.lisp, but here are my
thoughts:

> it may turn out to be more complicated and harder to compose than
> regular lisp macrology

This is ok. Easy macros are harmful. The greatest failure of CL is that
the presence of macros serves as an excuse for keeping a disgusting
default syntax. It took me years to understand that. People think "if I
need sweeter syntax, I'll write macros". So, they write macros like aif
and functions like length=. Other people are then forced to learn those
macros and functions instead of learning one good syntax once.

> I think if you don't use an absolutely regular syntax like traditional
> lisp syntax you probably can't avoid having your macrology become more
> complicated and harder to compose.

My advice is to take a look at Prolog and Mathematica. They can handle
infix expressions in a very simple way. Quasiquoting can be implemented
in an infix syntax too (boo has it), and even in an HTML syntax (PHP).

Maybe PLOT is good for someone, but this

> fully powerful macros (but hygienic!)

is not acceptable for me. Sometimes I want my macros to capture
variables with some fixed names, or with names constructed from
parameters. Macros really lose half of their power when they are
hygienic. Variable capture is a non-issue in practice when we have
with-gensyms and are just careful.
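
[The deliberate-capture vs. with-gensyms distinction above can be
sketched outside Lisp, too. Below is a toy Python "macro expander" that
builds code as text; all names (expand_aif, expand_square, gensym) are
made up for illustration. The anaphoric version captures the name `it`
on purpose; the gensym version binds a fresh name so nothing in the
user's expression can collide with it.]

```python
import itertools

_counter = itertools.count()

def gensym(prefix="g"):
    # Toy gensym: returns a fresh, never-before-used variable name.
    return f"__{prefix}{next(_counter)}"

def expand_aif(test_src, then_src, else_src="None"):
    # Anaphoric expansion: deliberately captures the name 'it',
    # so the THEN branch can refer to the value of the test.
    return f"(lambda it: ({then_src}) if it else ({else_src}))({test_src})"

def expand_square(x_src):
    # Hygiene by discipline: bind a gensym so the argument is
    # evaluated only once and no user-visible name is captured.
    g = gensym("x")
    return f"(lambda {g}: {g} * {g})({x_src})"

print(eval(expand_aif("40 + 2", "it * 10")))  # 420
print(eval(expand_square("6 + 1")))           # 49
```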

It looks like the most conceptually correct modern language for a
lisper is boo. It is mostly statically-typed (and hence fast) but allows
for dynamic typing too. It has type inference and duck typing. It is not
purely functional, but it has closures and anonymous functions, and
functions are first-class objects.

It is dynamic and it has macros. Better macros than Scheme. It has a
really extensible compiler. Its only serious design disadvantage vs lisp
is its pythonish syntax, which is harder to manipulate in a text editor.
Lisp has the great advantage that you can juggle sexprs in an editor
with very few keystrokes.

Technically, boo seems to be not completely mature, but it is hosted on
the .NET platform, which is now rather portable and rich. And it has
been here since 2003. I don't know whether I'd prefer boo or Clojure.
Unfortunately there is a great amount of lisp code, and there is some
beauty in CL which I still can't repudiate.

http://boo.codehaus.org/

But my real language of choice should not only be like boo, it should
support multiple backends like haXe. I know of no such language...
From: Michele Simionato
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <e3dcb132-9b74-4fcf-a33f-c03bee108991@b16g2000yqb.googlegroups.com>
On Apr 9, 9:31 am, budden <···········@mail.ru> wrote:
> It is mostly statically-typed (and hence fast)

This implication is utterly wrong.

> It is dynamic and it has macros. Better macros than Scheme.

This is also wrong.
From: Pillsy
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <5895a349-a3d2-4b98-85dd-cb917e19b60d@r37g2000yqn.googlegroups.com>
On Apr 9, 3:31 am, budden <···········@mail.ru> wrote:
[...]
> My advice is to take a look at Prolog and Mathematica. They can
> handle infix expressions in a very simple way.

I don't know a whole helluva lot about Prolog, but I sure wouldn't
describe Mathematica's handling of infix syntax as "very simple". Most
of the time, when I need to do any but the most trivial manipulations
of Mma expressions, I end up defaulting to the underlying, Lispoid
prefix syntax, where,

a = b + c;
d = Sin[a]

becomes

CompoundExpression[Set[a, Plus[b, c]], Set[d, Sin[a]]]

A bit prolix, but then if being prolix bugged me, why would I like the
language that gave us MULTIPLE-VALUE-BIND?

There's also the 97 million levels of precedence you have to keep
track of, and somehow they all manage to be annoying. There are some
genuinely neat aspects of Mma syntax, but they all involve the way you
can use a close approximation of real, 2-dimensional mathematical
notation in your programs, and post-date the plain infix stuff by
about a decade.

Cheers,
Pillsy
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <d6a60f04-d0dd-402e-8ea5-60411fd50871@f19g2000yqo.googlegroups.com>
Hi Pillsy,

> I don't know a whole helluva lot about Prolog, but
> I sure wouldn't describe Mathematica's handling of
> infix syntax as "very simple".
It is not "very simple", it is just "reasonably simple".
Most CL projects I've seen have about 90% functions
and about 10% macros. Of the macros, 60% are quasiquoting.
So, about 96% of the time I don't need to know how my
"sweet" syntax is represented internally.
I'd rather have a simpler syntax 96% of the time
and have some difficulty 4% of the time.
As for Mathematica's metaprogramming, it was really hard
to understand after lisp, and it seems less convenient.
All I mean is that Mathematica can easily access and
manipulate the underlying "FullForm".
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <745s22F121am7U1@mid.individual.net>
budden wrote:

> Maybe PLOT is good for someone, but this
> 
>> fully powerful macros (but hygienic!)
> 
> is not acceptable for me. Sometimes I want my macros to capture
> variables with some fixed names, or with names constructed from
> parameters. Macros really lose half of their power when they are
> hygienic. Variable capture is a non-issue in practice when we have
> with-gensyms and are just careful.

PLOT's macro system allows you to break macro hygiene on demand.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Tamas K Papp
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <746drfF125nbrU1@mid.individual.net>
On Thu, 09 Apr 2009 00:31:19 -0700, budden wrote:

> But my real language of choice should not only be like boo, it should
> support multiple backends like haXe. I know of no such language...

Be careful, you are treading on thin ice here.  If your search for the
Ideal Language ever terminates, you will no longer have an excuse for not
writing actual applications.

Tamas
From: Chris Barts
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <87ljq9jdbr.fsf@chbarts.motzarella.org>
budden <···········@mail.ru> writes:

> Hi Ray, list!
>
>   I'm a rather marginal person at comp.lang.lisp, but here are my
> thoughts:
>
>> it may turn out to be more complicated and harder to compose than
>> regular lisp macrology
>
> This is ok. Easy macros are harmful. 

This is like saying easy functions are harmful, because it shows up
deficiencies in the standard library. It's a non-sequitur.

I'll also say that what you think of as good syntax and what I think
of as good syntax are two very different things, based on what you say
about Prolog's syntax below. Erlang is a great language in many ways
but its adoption of Prolog's bizarre, inflexible syntax makes it
unpleasant to actually work with.

> My advice is to take a look at Prolog and Mathematica. They can
> handle infix expressions in a very simple way.

Someone else addressed Mathematica already. I'll just throw another
piece of mud at Prolog by saying 'simple' and 'sane' are two very
different things.

> It looks like the most conceptually correct modern language for
> lisper is boo.

I'll look into it. I don't think I've ever seen it before.

> It is mostly statically-typed (and hence fast)

This is both wrong and wrong-headed. It's wrong because a
well-optimized dynamic language (like Common Lisp) can be a lot faster
than a badly-optimized static language (like C, as the semantics of C
don't allow a lot of wiggle room for an optimizer to work). It's
wrong-headed because it sacrifices productivity on the altar of
performance and nails the working programmer to a cross of gold, er,
machine language.
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090410063913.GM3826@gildor.inglorion.net>
On Thu, Apr 09, 2009 at 11:19:04PM -0600, Chris Barts wrote:
> budden <···········@mail.ru> writes:
> > 
> > It looks like the most conceptually correct modern language for
> > lisper is boo.
> 
> I'll look into it. I don't think I've ever seen it before.
> 
> > It is mostly statically-typed (and hence fast)
> 
> This is both wrong and wrong-headed. It's wrong because a
> well-optimized dynamic language (like Common Lisp) can be a lot faster
> than a badly-optimized static language (like C, as the semantics of C
> don't allow a lot of wiggle room for an optimizer to work). It's
> wrong-headed because it sacrifices productivity on the altar of
> performance and nails the working programmer to a cross of gold, er,
> machine language.

I disagree. You are implying that dynamic typing leads to greater 
productivity than static typing. I don't think this is the case.

Taking "static typing" to mean that programs that cannot be correctly 
typed at compile time are rejected at compile time, whereas "dynamic 
typing" means type errors lead to rejection at run-time, static typing 
means, by definition, rejecting bad programs early. It seems to me this 
would be a productivity gain.

Also, requiring types to be checked at compile time requires the types 
to be determined at compile time, which means the knowledge of types is 
available to perform optimizations.

Now, in theory, you could perform all the same type inference and type 
checking on a dynamically typed language that you could perform on a 
statically typed language, as long as your program is written in a style 
that we know how to do type inference for. In practice, this is often 
not the case. The result is that programs written in dynamically typed 
languages will often not have all their types known and checked at 
compile time, leading to less efficient code generation and the 
possibility of type errors at run time.

What makes these kinds of comparisons difficult is that there are few if 
any cases where the only difference between two languages or 
implementations thereof is static vs. dynamic typing. There are always 
other features and implementation details that muddy the waters.

Just my 2 cents.

Bob

-- 
The sendmail configuration file is one of those files that looks like someone
beat their head on the keyboard. After working with it... I can see why!
	-- Harry Skelton


From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49df405c$0$5913$607ed4bc@cv.net>
Robbert Haarman wrote:
> On Thu, Apr 09, 2009 at 11:19:04PM -0600, Chris Barts wrote:
>> budden <···········@mail.ru> writes:
>>> It looks like the most conceptually correct modern language for
>>> lisper is boo.
>> I'll look into it. I don't think I've ever seen it before.
>>
>>> It is mostly statically-typed (and hence fast)
>> This is both wrong and wrong-headed. It's wrong because a
>> well-optimized dynamic language (like Common Lisp) can be a lot faster
>> than a badly-optimized static language (like C, as the semantics of C
>> don't allow a lot of wiggle room for an optimizer to work). It's
>> wrong-headed because it sacrifices productivity on the altar of
>> performance and nails the working programmer to a cross of gold, er,
>> machine language.
> 
> I disagree. You are implying that dynamic typing leads to greater 
> productivity than static typing. I don't think this is the case.
> 
> Taking "static typing" to mean that programs that cannot be correctly 
> typed at compile time are rejected at compile time, whereas "dynamic 
> typing" means type errors lead to rejection at run-time, static typing 
> means, by definition, rejecting bad programs early. It seems to me this 
> would be a productivity gain.

This old debate? The problem is how much effort goes into getting one's 
code past the compiler, and the nature of working with dynamic vs 
statically typed languages. How fast can I try a new idea and find out 
it was a bad one? If I have to refactor everything before running once, 
I am slower to find out I have had a bad idea. If that is the case, I am 
less likely to explore new ideas, some of which work out fine. Suddenly 
the tail is wagging the dog: static typing, meant to make us more 
effective, is now in the way, costing more than it is worth. But...

... to some that is not the case. They work deliberately and 
methodically anyway, and they have a low tolerance for uncertainty. Some 
people balance their checkbooks to the penny every month, some people 
check every other month for unexpected $5k discrepancies. Some people 
wait for the light to turn green; some people can't, because there is no 
light: they are in the middle of the block, reading the newspaper and 
talking on the cell phone as they cross.

And it's no good asking one programmer to play another's game: these 
emotiopsychosocial deals affect our productivity.

kt
From: Tamas K Papp
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <748upnF12avqpU2@mid.individual.net>
On Fri, 10 Apr 2009 08:49:50 -0400, Kenneth Tilton wrote:

> statically typed languages. How fast can I try a new idea and find out
> it was a bad one? If I have to refactor everything before running once,
> I am slower to find out I have had a bad idea. If that is the case, I am
> less likely to explore new ideas, some of which work out fine. Suddenly
> the tail is wagging the dog: static typing, meant to make us more
> effective, is now in the way, costing more than it is worth. But...

Ah, a fellow Fortran enthusiast :-)

Tamas
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <7489ipFpotmcU2@mid.individual.net>
Robbert Haarman wrote:
> On Thu, Apr 09, 2009 at 11:19:04PM -0600, Chris Barts wrote:
>> budden <···········@mail.ru> writes:
>>> It looks like the most conceptually correct modern language for
>>> lisper is boo.
>> I'll look into it. I don't think I've ever seen it before.
>>
>>> It is mostly statically-typed (and hence fast)
>> This is both wrong and wrong-headed. It's wrong because a
>> well-optimized dynamic language (like Common Lisp) can be a lot faster
>> than a badly-optimized static language (like C, as the semantics of C
>> don't allow a lot of wiggle room for an optimizer to work). It's
>> wrong-headed because it sacrifices productivity on the altar of
>> performance and nails the working programmer to a cross of gold, er,
>> machine language.
> 
> I disagree. You are implying that dynamic typing leads to greater 
> productivity than static typing. I don't think this is the case.
> 
> Taking "static typing" to mean that programs that cannot be correctly 
> typed at compile time are rejected at compile time, whereas "dynamic 
> typing" means type errors lead to rejection at run-time, static typing 
> means, by definition, rejecting bad programs early. It seems to me this 
> would be a productivity gain.

...but that's a wrong conclusion: http://p-cos.net/documents/dynatype.pdf


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090410172406.GN3826@gildor.inglorion.net>
On Fri, Apr 10, 2009 at 09:08:09AM +0200, Pascal Costanza wrote:
> Robbert Haarman wrote:
>>
>> Taking "static typing" to mean that programs that cannot be correctly 
>> typed at compile time are rejected at compile time, whereas "dynamic 
>> typing" means type errors lead to rejection at run-time, static typing 
>> means, by definition, rejecting bad programs early. It seems to me this 
>> would be a productivity gain.
>
> ...but that's a wrong conclusion: http://p-cos.net/documents/dynatype.pdf

Reading the PDF failed to convince me.

> 2.1 Statically Checked Implementation of Interfaces

You're going about it the wrong way. You shouldn't declare that your 
class implements the interface when it doesn't, then add stub methods 
until it does, and hope you remember to fix it later.

You should implement the interface first, and then you declare that your 
class implements it. From that point on, you can have the compiler check 
that you have actually implemented the interface.

In practice, what happens is often that you use an IDE which lets you 
declare the interfaces you implement. The IDE then generates the stubs 
for you, with a little reminder in each to tell you you still need to 
make that stub do something useful. A good IDE will also tell you if you 
haven't done that yet. This works, as long as you don't ignore your 
IDE's warnings.
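
[The declare-then-check workflow Bob describes has a rough analog even
in a dynamic language. Below is a hedged Python sketch using the stdlib
abc module; the names CharSeq, Incomplete, and Complete are invented
stand-ins, not anything from the paper. Declaring the abstract base is
the moment you opt in to the check; Python rejects an incomplete
implementation at instantiation time rather than compile time.]

```python
from abc import ABC, abstractmethod

class CharSeq(ABC):
    # Toy stand-in for an interface like Java's CharSequence.
    @abstractmethod
    def char_at(self, index: int) -> str: ...

    @abstractmethod
    def length(self) -> int: ...

class Incomplete(CharSeq):
    # Declares the "interface" but implements only one method.
    def length(self) -> int:
        return 0

class Complete(CharSeq):
    def __init__(self, s: str):
        self._s = s
    def char_at(self, index: int) -> str:
        return self._s[index]
    def length(self) -> int:
        return len(self._s)

try:
    Incomplete()  # rejected: char_at is still abstract
except TypeError:
    print("Incomplete rejected")

print(Complete("abc").length())  # 3
```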

Having these warnings is the best scenario you can hope for with dynamic 
typing, because dynamic typing, by its nature, is not allowed to reject 
your program before run time. Many implementations of dynamically typed 
languages will not provide any warning at all.

I agree with you that returning a default value is wrong and you should 
signal an error instead if a stub method is called. But that's 
orthogonal to static typing.

> 2.2 Statically Checked Exceptions

These have their pros and cons. It would be nice if you could prove at 
compile time that any error condition that could arise at run time is 
handled in some way. However, I am not aware of any languages that 
actually provide such guarantees. Either way, I really dislike the way 
exceptions work in Java but that, again, is orthogonal to static typing.

> 2.3     Checking Feature Availability

> Checking if a resource provides a specific feature and actually using 
> that feature should be an atomic step in the face of multiple access 
> paths to that resource. Otherwise, that feature might get lost in 
> between the check and the actual use.

Yes, race conditions are a problem. But the problem here is not with 
static typing. In fact, the problem here is that you are breaking static 
typing! And the end result is that you get the same thing you would have 
gotten under dynamic typing.

As an aside, I think this example highlights one of the deficiencies of 
the objects-with-methods flavor of object orientation. The example would 
map to a relational universe much better.
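
[The non-atomic check-then-use gap from section 2.3 can be simulated
deterministically in a few lines of Python; Person, fire, and dilbert
are hypothetical names, and calling fire() inside the gap stands in for
a concurrent writer winning a real race.]

```python
class Person:
    pass

def fire(p):
    # A second access path to the same object removes the feature.
    del p.employer

dilbert = Person()
dilbert.employer = "Initech"

# Check-then-use: the feature can vanish in the gap between
# the check and the use.
if hasattr(dilbert, "employer"):
    fire(dilbert)  # stands in for a concurrent writer
    try:
        print(dilbert.employer)
    except AttributeError:
        print("feature lost between check and use")
```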

Regards,

Bob

-- 
Sed quis custodiet ipsos custodes?
	-- Juvenal

From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <74bbhrF12mopiU1@mid.individual.net>
Robbert Haarman wrote:
> On Fri, Apr 10, 2009 at 09:08:09AM +0200, Pascal Costanza wrote:
>> Robbert Haarman wrote:
>>> Taking "static typing" to mean that programs that cannot be correctly 
>>> typed at compile time are rejected at compile time, whereas "dynamic 
>>> typing" means type errors lead to rejection at run-time, static typing 
>>> means, by definition, rejecting bad programs early. It seems to me this 
>>> would be a productivity gain.
>> ...but that's a wrong conclusion: http://p-cos.net/documents/dynatype.pdf
> 
> Reading the PDF failed to convince me.

Of course not. ;)

>> 2.1 Statically Checked Implementation of Interfaces
> 
> You're going about it the wrong way. You shouldn't declare that your 
> class implements the interface when it doesn't, then add stub methods 
> until it does, and hope you remember to fix it later.
> 
> You should implement the interface first, and then you declare that your 
> class implements it. From that point on, you can have the compiler check 
> that you have actually implemented the interface.

Who are you to tell me what I should and shouldn't do?

_I_ don't want to be interrupted in my flow of thinking. The programming 
language shouldn't tell me what to focus on, I should have control over 
the programming language to tell it what to focus on.

> In practice, what happens is often that you use an IDE which lets you 
> declare the interfaces you implement. The IDE then generates the stubs 
> for you, with a little reminder in each to tell you you still need to 
> make that stub do something useful. A good IDE will also tell you if you 
> haven't done that yet. This works, as long as you don't ignore your 
> IDE's warnings.

Read my paper again. The erroneous situation was partially caused by 
such a "smart" IDE!

> Having these warnings is the best scenario you can hope for with dynamic 
> typing, because dynamic typing, by its nature, is not allowed to reject 
> your program before run time. Many implementations of dynamically typed 
> languages will not provide any warning at all.
> 
> I agree with you that returning a default value is wrong and you should 
> signal an error instead if a stub method is called. But that's 
> orthogonal to static typing.

The problem I describe in the paper is a problem I have with a 
statically typed language, and that I don't have with a dynamically 
typed language. It may be true that dynamically typed languages have 
their own problems, but it's certainly not true that statically typed 
languages prevent problems from happening. To the contrary, in this 
specific situation, the concrete statically typed language at hand made 
the situation worse.

>> 2.2 Statically Checked Exceptions
> 
> These have their pros and cons. It would be nice if you could prove at 
> compile time that any error condition that could arise at run time is 
> handled in some way.

You mean, like when the program runs into an endless loop, for example? :-P

> However, I am not aware of any languages that 
> actually provide such guarantees.

No big surprise there, because it's impossible.

> Either way, I really dislike the way 
> exceptions work in Java but that, again, is orthogonal to static typing.

Is it? They are part of the static type system in Java...

>> 2.3     Checking Feature Availability
> 
>> Checking if a resource provides a specific feature and actually using 
>> that feature should be an atomic step in the face of multiple access 
>> paths to that resource. Otherwise, that feature might get lost in 
>> between the check and the actual use.
> 
> Yes, race conditions are a problem. But the problem here is not with 
> static typing. In fact, the problem here is that you are breaking static 
> typing!

Incorrect. The static type system promises me something here that it 
cannot keep. So why does it promise me that?

> And the end result is that you get the same thing you would have 
> gotten under dynamic typing.

Nope, with a dynamic language I can just invoke the method without 
further effort, I don't first have to ensure that it's there. At the 
time the runtime system makes the decision that the method is there, 
there is no gap anymore in which it can be removed before its actual 
execution.

> As an aside, I think this example highlights one of the deficiencies of 
> the objects-with-methods flavor of object orientation. The example would 
> map to a relational universe much better.

That's a different discussion.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090411131927.GX3826@gildor.inglorion.net>
On Sat, Apr 11, 2009 at 01:00:10PM +0200, Pascal Costanza wrote:
> Robbert Haarman wrote:
>> On Fri, Apr 10, 2009 at 09:08:09AM +0200, Pascal Costanza wrote:
>
>>> 2.1 Statically Checked Implementation of Interfaces
>>
>> You're going about it the wrong way. You shouldn't declare that your  
>> class implements the interface when it doesn't, then add stub methods  
>> until it does, and hope you remember to fix it later.
>>
>> You should implement the interface first, and then you declare that 
>> your class implements it. From that point on, you can have the compiler 
>> check that you have actually implemented the interface.
>
> Who are you to tell me what I should and shouldn't do?

About the same as you are when you tell people that the anti-patterns 
you mention in your paper are not the way to go.

What I am saying is that I agree with you that the anti-pattern isn't 
the way to do it, but that I disagree with the analysis in your paper.

> _I_ don't want to be interrupted in my flow of thinking. The programming  
> language shouldn't tell me what to focus on, I should have control over  
> the programming language to tell it what to focus on.

But you have that power. Nothing prevents you from developing your 
program step by step, without being forced to implement all the methods 
of a certain interface.

Only if _you_ declare that your class implements an interface will the 
compiler check for you that the class does, in fact, implement that 
interface.

The anti-pattern here is not that the compiler checks that you have 
actually implemented CharSequence when you tell it to; the anti-pattern 
is that you tell the compiler to check that you have implemented 
CharSequence when you don't actually want the compiler to check that.

>> In practice, what happens is often that you use an IDE which lets you  
>> declare the interfaces you implement. The IDE then generates the stubs  
>> for you, with a little reminder in each to tell you you still need to  
>> make that stub do something useful. A good IDE will also tell you if 
>> you haven't done that yet. This works, as long as you don't ignore your 
>> IDE's warnings.
>
> Read my paper again. The erroneous situation was partially caused by  
> such a "smart" IDE!

You mean the part about Eclipse? I thought I had already addressed that 
in my previous post. Or are you talking about something else? In that 
case, could you point me to the relevant part of the paper?

>> Having these warnings is the best scenario you can hope for with 
>> dynamic typing, because dynamic typing, by its nature, is not allowed 
>> to reject your program before run time. Many implementations of 
>> dynamically typed languages will not provide any warning at all.
>>
>> I agree with you that returning a default value is wrong and you should 
>> signal an error instead if a stub method is called. But that's  
>> orthogonal to static typing.
>
> The problem I describe in the paper is a problem I have with a  
> statically typed language, and that I don't have with a dynamically  
> typed language.

Perhaps I don't fully understand what you are claiming the problem is. 

The way I understand the paper is that you don't want the compiler to 
check that you have actually implemented all the methods mandated by 
CharSequence. There is nothing about static typing that forces this 
check on you; it only comes about when you tell the compiler to perform 
this check for you by means of "implements CharSequence". If you don't 
want the compiler to perform the check, simply don't tell it to perform 
the check.

> It may be true that dynamically typed languages have their own 
> problems, but it's certainly not true that statically typed languages 
> prevent problems from happening.

I like to think that they do prevent some problems from happening. 
Specifically, type errors in production systems. It is true that not all 
errors are type errors, and that exhaustive testing would also reveal 
type errors. However, exhaustive testing is not always (usually not) 
performed. There is a good reason for that: testing is expensive, and, 
at some point, it just doesn't make economic sense to test more (whether 
the expense is measured in time or money or both). Static type checking 
allows you to find some errors very quickly, and static typing prevents 
those errors from ever making it into a production system.

> To the contrary, in this specific situation, the concrete statically 
> typed language at hand made the situation worse.

Still assuming I understand the situation correctly, I don't agree. You 
told the compiler that your class implemented CharSequence, but it 
didn't. The compiler dutifully pointed this out to you. This is not 
worse, this is better.

>>> 2.2 Statically Checked Exceptions
>>
>> These have their pros and cons. It would be nice if you could prove at  
>> compile time that any error condition that could arise at run time is  
>> handled in some way.
>
> You mean, like when the program runs into an endless loop, for example? :-P

Sure, why not? I have seen plenty of programs that are supposed to 
run in endless loops, but don't when certain error conditions occur, 
because the error causes the whole program to terminate.

>> However, I am not aware of any languages that actually provide such 
>> guarantees.
>
> No big surprise there, because it's impossible.

Oh?

>> Either way, I really dislike the way exceptions work in Java but that, 
>> again, is orthogonal to static typing.
>
> Is it? They are part of the static type system in Java...

Oh, yes. I forgot. Java is the only statically typed language, and there 
is no other way to design a statically typed language! ;-)

Seriously, though, most languages I know, even statically typed ones, 
don't have both checked and unchecked exceptions like Java has.

>>> 2.3     Checking Feature Availability
>>
>>> Checking if a resource provides a specific feature and actually using  
>>> that feature should be an atomic step in the face of multiple access  
>>> paths to that resource. Otherwise, that feature might get lost in  
>>> between the check and the actual use.
>>
>> Yes, race conditions are a problem. But the problem here is not with  
>> static typing. In fact, the problem here is that you are breaking 
>> static typing!
>
> Incorrect. The static type system promises me something here that it  
> cannot keep. So why does it promise me that?

The static type system promises you that you can call getEmployer() on 
an instance of Employee. This is true.

What the static type system does not promise you is that dilbert is an 
Employee. This is because you have not told it to prove that; you've 
only told it to prove that dilbert is a Person.

You then tell the compiler "oh, by the way, whatever you believe, 
dilbert is actually an Employee". Rather than taking this claim at face 
value (like a C compiler would), the compiler inserts a run-time check 
to verify that this is actually the case. It is that run-time check that 
fails, and it fails if and only if your claim is actually false and 
dilbert is not an Employee. This may or may not actually happen, because 
there is a race condition in your program.
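A minimal sketch of that compiler-inserted run-time check (the Person/Employee classes here are stand-ins, not the paper's actual code):

```java
// Sketch of the run-time check the compiler inserts for a downcast.
// Person/Employee are stand-ins for the paper's classes.
class Person {}
class Employee extends Person {
    String getEmployer() { return "Initech"; }
}

public class CastDemo {
    public static void main(String[] args) {
        Person dilbert = new Person(); // not actually an Employee
        try {
            // The compiler accepts the claim, but guards it with a check:
            Employee e = (Employee) dilbert;
            System.out.println(e.getEmployer());
        } catch (ClassCastException ex) {
            // The check fails exactly when the claim is false.
            System.out.println("dilbert is not an Employee");
        }
    }
}
```

Here the claim is false, so the check fails and the catch branch prints "dilbert is not an Employee".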

>> And the end result is that you get the same thing you would have  
>> gotten under dynamic typing.
>
> Nope, with a dynamic language I can just invoke the method without  
> further effort, I don't first have to ensure that it's there. At the  
> time the runtime system makes the decision that the method is there,  
> there is no gap anymore in which it can be removed before its actual  
> execution.

Yes, and you could have done the same thing in the Java program. The 
only change you would need to make is removing the instanceof check. 
Alternatively, you could add such a check to the program in a 
dynamically typed language. In both cases, the program in the 
dynamically typed language would exhibit the exact same behavior as the 
Java program.

Regards,

Bob

-- 
#include <stdio.h>
#define SIX 1 + 5
#define NINE 8 + 1
int main(int argc, char **argv) {
	return printf("When you multiply SIX by NINE, you get %d\n",
		SIX * NINE);
}


From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <74bpb8F12kv4mU1@mid.individual.net>
Robbert Haarman wrote:
> On Sat, Apr 11, 2009 at 01:00:10PM +0200, Pascal Costanza wrote:
>> Robbert Haarman wrote:
>>> On Fri, Apr 10, 2009 at 09:08:09AM +0200, Pascal Costanza wrote:
>>>> 2.1 Statically Checked Implementation of Interfaces
>>> You're going about it the wrong way. You shouldn't declare that your  
>>> class implements the interface when it doesn't, then add stub methods  
>>> until it does, and hope you remember to fix it later.
>>>
>>> You should implement the interface first, and then you declare that 
>>> your class implements it. From that point on, you can have the compiler 
>>> check that you have actually implemented the interface.
>> Who are you to tell me what I should and shouldn't do?
> 
> About the same as you are by telling people the anti-patterns you 
> mention in your paper are not the way to go.
> 
> What I am saying is that I agree with you that the anti-pattern isn't 
> the way to do it, but that I disagree with the analysis in your paper.
> 
>> _I_ don't want to be interrupted in my flow of thinking. The programming  
>> language shouldn't tell me what to focus on, I should have control over  
>> the programming language to tell it what to focus on.
> 
> But you have that power. Nothing prevents you from developing your 
> program step by step, without being forced to implement all the methods 
> of a certain interface.
> 
> Only if _you_ declare that your class implements an interface will the 
> compiler check for you that the class does, in fact, implement that 
> interface.
> 
> The anti-pattern here is not that the compiler checks that you have 
> actually implemented CharSequence when you tell it to; the anti-pattern 
> is that you tell the compiler to check that you have implemented 
> CharSequence when you don't actually want the compiler to check that.

I may want to run tests by passing instances of the class I'm developing 
to a third-party method that declares to expect a CharSequence as a 
parameter, which means I may have no choice.

>>> In practice, what happens is often that you use an IDE which lets you  
>>> declare the interfaces you implement. The IDE then generates the stubs  
>>> for you, with a little reminder in each to tell you you still need to  
>>> make that stub do something useful. A good IDE will also tell you if 
>>> you haven't done that yet. This works, as long as you don't ignore your 
>>> IDE's warnings.
>> Read my paper again. The erroneous situation was partially caused by  
>> such a "smart" IDE!
> 
> You mean the part about Eclipse? I thought I had already addressed that 
> in my previous post. 

I don't see where you did.

> Or are you talking about something else? In that 
> case, could you point me to the relevant part of the paper?

Eclipse offers to introduce stub methods that return arbitrary values 
which happen to make the type checker happy, but which are erroneous. 
So, no guarantee at all provided by the type system.

>> The problem I describe in the paper is a problem I have with a  
>> statically typed language, and that I don't have with a dynamically  
>> typed language.
> 
> Perhaps I don't fully understand what you are claiming the problem is. 
> 
> The way I understand the paper is that you don't want the compiler to 
> check that you have actually implemented all the methods mandated by 
> CharSequence. There is nothing about static typing that forces this 
> check on you; it only comes about when you tell the compiler to perform 
> this check for you by means of "implements CharSequence". If you don't 
> want the compiler to perform the check, simply don't tell it to perform 
> the check.

See above.

>> It may be true that dynamically typed languages have their own 
>> problems, but it's certainly not true that statically typed languages 
>> prevent problems from happening.
> 
> I like to think that they do prevent some problems from happening. 

But they also introduce other problems.

> Specifically, type errors in production systems. It is true that not all 
> errors are type errors, and that exhaustive testing would also reveal 
> type errors.

It's also true that not all type errors are errors. A programmer has to 
actively work to create an overlap between type errors and actual 
errors; otherwise he won't get any benefit from the static type checker.

It's a tool that, if it's in line with how the programmer thinks about 
his program and is used by him accordingly, can be useful, but can be a 
serious burden if it's either not in line with how he thinks about the 
program or is not used correctly.

Static type systems by themselves don't do anything.

>> To the contrary, in this specific situation, the concrete statically 
>> typed language at hand made the situation worse.
> 
> Still assuming I understand the situation correctly, I don't agree. You 
> told the compiler that your class implemented CharSequence, but it 
> didn't. The compiler dutifully pointed this out to you. This is not 
> worse, this is better.

See above.

>>> Either way, I really dislike the way exceptions work in Java but that, 
>>> again, is orthogonal to static typing.
>> Is it? They are part of the static type system in Java...
> 
> Oh, yes. I forgot. Java is the only statically typed language, and there 
> is no other way to design a statically typed language! ;-)

It's an example of a statically typed language which proves that the 
presence of a static type system by itself doesn't mean a lot yet (and 
whose design can be considered a good-faith effort to provide a 'good' 
static type system - some people actually claim to like it).

> Seriously, though, most languages I know, even statically typed ones, 
> don't have both checked and unchecked exceptions like Java has.

Sure.

>>>> 2.3     Checking Feature Availability
>>>> Checking if a resource provides a specific feature and actually using  
>>>> that feature should be an atomic step in the face of multiple access  
>>>> paths to that resource. Otherwise, that feature might get lost in  
>>>> between the check and the actual use.
>>> Yes, race conditions are a problem. But the problem here is not with  
>>> static typing. In fact, the problem here is that you are breaking 
>>> static typing!
>> Incorrect. The static type system promises me something here that it  
>> cannot hold. So why does it promise me that?
> 
> The static type system promises you that you can call getEmployer() on 
> an instance of Employee. This is true.
> 
> What the static type system does not promise you is that dilbert is an 
> Employee. This is because you have not told it to prove that; you've 
> only told it to prove that dilbert is a Person.

if (dilbert instanceof Employee) { // <<<=== this is the promise
                                    // (Employee is a static type!)
   System.out.println("Employer: " +
      ((Employee)dilbert).getEmployer().getName());
}
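The gap between the check and the use can even be exhibited deterministically, without a second thread, by reassigning in between (a sketch; the class definitions are stand-ins for the paper's):

```java
// Sketch: the instanceof check and the cast are two separate reads of a
// mutable location, so the "feature" can be lost in between.
// Person/Employee are stand-ins for the paper's classes.
class Person {}
class Employee extends Person {
    String getEmployer() { return "Initech"; }
}

public class RaceDemo {
    static volatile Person dilbert = new Employee();

    public static void main(String[] args) {
        if (dilbert instanceof Employee) {  // check: true at this instant
            dilbert = new Person();         // another thread could do this here
            try {
                // use: re-reads the field, so the cast's run-time check fails
                System.out.println(((Employee) dilbert).getEmployer());
            } catch (ClassCastException ex) {
                System.out.println("feature lost between check and use");
            }
        }
    }
}
```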

>>> And the end result is that you get the same thing you would have  
>>> gotten under dynamic typing.
>> Nope, with a dynamic language I can just invoke the method without  
>> further effort, I don't first have to ensure that it's there. At the  
>> time the runtime system makes the decision that the method is there,  
>> there is no gap anymore in which it can be removed before its actual  
>> execution.
> 
> Yes, and you could have done the same thing in the Java program. The 
> only change you would need to make is removing the instanceof check. 
> Alternatively, you could add such a check to the program in a 
> dynamically typed language. In both cases, the program in the 
> dynamically typed language would exhibit the exact same behavior as the 
> Java program.

Yes, that's one of the claims of the paper: That in the examples given 
in the paper, the right thing to do is to simulate what you would have 
done in a dynamic language.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090413112128.GG3826@gildor.inglorion.net>
On Sat, Apr 11, 2009 at 04:55:34PM +0200, Pascal Costanza wrote:
> Robbert Haarman wrote:
>>
>
> I may want to run tests by passing instances of the class I'm developing  
> to a third-party method that declares to expect a CharSequence as a  
> parameter, which means I may have no choice.

Indeed. However, this is not forced upon you by static typing. It is 
forced upon you by the third-party method requiring that the argument 
you pass to it implement the interface. You may argue that requiring 
this is wrong, but that's an issue you have with the way the method is 
implemented, not with static typing.

>>> Read my paper again. The erroneous situation was partially caused by  
>>> such a "smart" IDE!
>>
>> You mean the part about Eclipse? I thought I had already addressed that 
>> in my previous post. 
>
> I don't see where you did.

=== BEGIN QUOTE ===

I agree with you that returning a default value is wrong and you should                                                
signal an error instead if a stub method is called. But that's                                                         
orthogonal to static typing.

=== END QUOTE ===

and

=== BEGIN QUOTE ===

In practice, what happens is often that you use an IDE which lets you
declare the interfaces you implement. The IDE then generates the stubs
for you, with a little reminder in each to tell you you still need to
make that stub do something useful. A good IDE will also tell you if you                                               
haven't done that yet. This works, as long as you don't ignore your                                                    
IDE's warnings.                                                                                                        
                                                                                                                       
Having these warnings is the best scenario you can hope for with dynamic                                               
typing, because dynamic typing, by its nature, is not allowed to reject                                                
your program before run time. Many implementations of dynamically typed                                                
languages will not provide any warning at all.

=== END QUOTE ===

>> Or are you talking about something else? In that case, could you point 
>> me to the relevant part of the paper?
>
> Eclipse offers to introduce stub methods that return arbitrary values  
> which happen to make the type checker happy, but which are erroneous.  
> So, no guarantee at all provided by the type system.

See above.

>>> It may be true that dynamically typed language have their own  
>>> problems, but it's certainly not true that statically typed languages 
>>> prevent problems from happening.
>>
>> I like to think that they do prevent some problems from happening. 
>
> But they also introduce other problems.

So you claim, but, so far, I have only seen you provide examples where 
the problem is due to something other than static typing.

>> Specifically, type errors in production systems. It is true that not 
>> all errors are type errors, and that exhaustive testing would also 
>> reveal type errors.
>
> It's also true that not all type errors are errors. A programmer has to  
> actively work for having an overlap between type errors and actual  
> errors, otherwise he won't get any benefits from the static type checker.

Could you give an example of a type error that is not an error?

> It's a tool that, if it's in line with how the programmer thinks about  
> his program and is used by him accordingly, can be useful, but can be a  
> serious burden if it's either not in line with how he thinks about the  
> program or is not used correctly.

This is certainly something I can agree with.

Now, it seems you are saying "you will run into problems if you use the 
tool incorrectly, therefore the tool is bad". I would rather say "you 
will run into problems if you use the tool incorrectly, so don't do 
that".

>>>>> 2.3     Checking Feature Availability
>>>>> Checking if a resource provides a specific feature and actually 
>>>>> using  that feature should be an atomic step in the face of 
>>>>> multiple access  paths to that resource. Otherwise, that feature 
>>>>> might get lost in  between the check and the actual use.
>>>> Yes, race conditions are a problem. But the problem here is not 
>>>> with  static typing. In fact, the problem here is that you are 
>>>> breaking static typing!
>>> Incorrect. The static type system promises me something here that it  
>>> cannot hold. So why does it promise me that?
>>
>> The static type system promises you that you can call getEmployer() on  
>> an instance of Employee. This is true.
>>
>> What the static type system does not promise you is that dilbert is an  
>> Employee. This is because you have not told it to prove that; you've  
>> only told it to prove that dilbert is a Person.
>
> if (dilbert instanceof Employee) { // <<<=== this is the promise
>                                    // (Employee is a static type!)
>   System.out.println("Employer: " +
>      ((Employee)dilbert).getEmployer().getName());
> }

There is no promise that if dilbert holds an instance of Employee at the 
time of the check, it will hold an instance of Employee forever. If 
there were, you wouldn't be able to assign a value that wasn't an 
Employee to it.
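Concretely (a sketch with stand-in classes):

```java
// Sketch: a variable declared as Person may stop holding an Employee
// at any time; the declaration never promised otherwise.
class Person {}
class Employee extends Person {}

public class AssignDemo {
    public static void main(String[] args) {
        Person dilbert = new Employee();
        System.out.println(dilbert instanceof Employee); // true
        dilbert = new Person(); // legal: the declared type is Person
        System.out.println(dilbert instanceof Employee); // false
    }
}
```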

Regards,

Bob

-- 
Do not meddle in the affairs of sysadmins, for they are quick to anger and
have not need for subtlety.


From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <74gr07F10cpo8U1@mid.individual.net>
Robbert Haarman wrote:
> On Sat, Apr 11, 2009 at 04:55:34PM +0200, Pascal Costanza wrote:
>> Robbert Haarman wrote:
>> I may want to run tests by passing instances of the class I'm developing  
>> to a third-party method that declares to expect a CharSequence as a  
>> parameter, which means I may have no choice.
> 
> Indeed. However, this is not forced upon you by static typing. It is 
> forced upon you by the third-party method requiring that the argument 
> you pass to it implement the interface. You may argue that requiring 
> this is wrong, but that's an issue you have with the way the method is 
> implemented, not with static typing.

You're trying to explain away the problem. However, the fact remains 
that this is an issue I have in a statically typed language which I 
don't have in a dynamically typed language.

>>> Or are you talking about something else? In that case, could you point 
>>> me to the relevant part of the paper?
>> Eclipse offers to introduce stub methods that return arbitrary values  
>> which happen to make the type checker happy, but which are erroneous.  
>> So, no guarantee at all provided by the type system.
> 
> See above.

See above. ;)

>>>> It may be true that dynamically typed language have their own  
>>>> problems, but it's certainly not true that statically typed languages 
>>>> prevent problems from happening.
>>> I like to think that they do prevent some problems from happening. 
>> But they also introduce other problems.
> 
> So you claim, but, so far, I have only seen you provide examples where 
> the problem is due to something else than static typing.

See above.

>>> Specifically, type errors in production systems. It is true that not 
>>> all errors are type errors, and that exhaustive testing would also 
>>> reveal type errors.
>> It's also true that not all type errors are errors. A programmer has to  
>> actively work for having an overlap between type errors and actual  
>> errors, otherwise he won't get any benefits from the static type checker.
> 
> Could you give an example of a type error that is not an error?

interface Foo {
   public void m();
   public void n();
}

class Bar implements Foo {
   public void m() {System.out.println("Hello, World!");}
}

class Baz {
   public static void main(String[] args) {
    new Bar().m();
   }
}

>> It's a tool that, if it's in line with how the programmer thinks about  
>> his program and is used by him accordingly, can be useful, but can be a  
>> serious burden if it's either not in line with how he thinks about the  
>> program or is not used correctly.
> 
> This is certainly something I can agree with.
> 
> Now, it seems you are saying "you will run into problems if you use the 
> tool incorrectly, therefore the tool is bad". I would rather say "you 
> will run into problems if you use the tool incorrectly, so don't do 
> that".

No, I'm saying that _I_ run into problems with the tool because I don't 
have a use for it, and it just stands in my way.

>>>>>> 2.3     Checking Feature Availability
>>>>>> Checking if a resource provides a specific feature and actually 
>>>>>> using  that feature should be an atomic step in the face of 
>>>>>> multiple access  paths to that resource. Otherwise, that feature 
>>>>>> might get lost in  between the check and the actual use.
>>>>> Yes, race conditions are a problem. But the problem here is not 
>>>>> with  static typing. In fact, the problem here is that you are 
>>>>> breaking static typing!
>>>> Incorrect. The static type system promises me something here that it  
>>>> cannot hold. So why does it promise me that?
>>> The static type system promises you that you can call getEmployer() on  
>>> an instance of Employee. This is true.
>>>
>>> What the static type system does not promise you is that dilbert is an  
>>> Employee. This is because you have not told it to prove that; you've  
>>> only told it to prove that dilbert is a Person.
>> if (dilbert instanceof Employee) { // <<<=== this is the promise
>>                                    // (Employee is a static type!)
>>   System.out.println("Employer: " +
>>      ((Employee)dilbert).getEmployer().getName());
>> }
> 
> There is no promise that if dilbert holds an instance of Employee at the 
> time of the check, it will hold an instance of Employee forever. If 
> there were, you wouldn't be able to assign a value that wasn't an 
> Employee to it.

Makes me curious why there is an instanceof operator in Java... ;)



Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <3fc50bff-5887-41e4-993c-801392b939d6@r33g2000yqn.googlegroups.com>
On 13 Apr, 14:54, Pascal Costanza <····@p-cos.net> wrote:
> > Could you give an example of a type error that is not an error?
>
> interface Foo {
>    public void m();
>    public void n();
>
> }
>
> class Bar implements Foo {
>    public void m() {System.out.println("Hello, World!");}
>
> }
>
> class Baz {
>    public static void main(String[] args) {
>     new Bar().m();
>    }
>
> }

You can use type definitions to add unnecessary constraints.

However the purpose of Java interfaces is to make sure that different
parts of an application share a consistent interface. If you
overspecify an interface you can't blame the type system.

> >> It's a tool that, if it's in line with how the programmer thinks about  
> >> his program and is used by him accordingly, can be useful, but can be a  
> >> serious burden if it's either not in line with how he thinks about the  
> >> program or is not used correctly.
>
> > This is certainly something I can agree with.
>
> > Now, it seems you are saying "you will run into problems if you use the
> > tool incorrectly, therefore the tool is bad". I would rather say "you
> > will run into problems if you use the tool incorrectly, so don't do
> > that".
>
> No, I'm saying that _I_ run into problems with the tool because I don't
> have a use for it, and it just stands in my way.
>
>
>
> >>>>>> 2.3     Checking Feature Availability
> >>>>>> Checking if a resource provides a specific feature and actually
> >>>>>> using  that feature should be an atomic step in the face of
> >>>>>> multiple access  paths to that resource. Otherwise, that feature
> >>>>>> might get lost in  between the check and the actual use.
> >>>>> Yes, race conditions are a problem. But the problem here is not
> >>>>> with  static typing. In fact, the problem here is that you are
> >>>>> breaking static typing!
> >>>> Incorrect. The static type system promises me something here that it  
> >>>> cannot hold. So why does it promise me that?
> >>> The static type system promises you that you can call getEmployer() on  
> >>> an instance of Employee. This is true.
>
> >>> What the static type system does not promise you is that dilbert is an  
> >>> Employee. This is because you have not told it to prove that; you've  
> >>> only told it to prove that dilbert is a Person.
> >> if (dilbert instanceof Employee) { // <<<=== this is the promise
> >>                                    // (Employee is a static type!)
> >>   System.out.println("Employer: " +
> >>      ((Employee)dilbert).getEmployer().getName());
> >> }
>
> > There is no promise that if dilbert holds an instance of Employee at the
> > time of the check, it will hold an instance of Employee forever. If
> > there were, you wouldn't be able to assign a value that wasn't an
> > Employee to it.
>
> Makes me curious why there is an instanceof operator in Java... ;)

Java's type system is a mixture of static and dynamic typing. The
instanceof operator pertains mostly to the dynamic part of the
type system: in a fully static type system, any instanceof expression
would be a compile-time constant.
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <74h2kkF132tngU1@mid.individual.net>
Vend wrote:
> On 13 Apr, 14:54, Pascal Costanza <····@p-cos.net> wrote:
>>> Could you give an example of a type error that is not an error?
>> interface Foo {
>>    public void m();
>>    public void n();
>>
>> }
>>
>> class Bar implements Foo {
>>    public void m() {System.out.println("Hello, World!");}
>>
>> }
>>
>> class Baz {
>>    public static void main(String[] args) {
>>     new Bar().m();
>>    }
>>
>> }
> 
> You can use type definition to add unnecessary constraints.
> 
> However the purpose of Java interfaces is to make sure that different
> parts of an application share a consistent interface. If you
> overspecify an interface you can't blame the type system.

This is nevertheless an example of a type error that is not an error.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <da4ca6ea-5688-42a2-a898-71bbf013a64a@c9g2000yqm.googlegroups.com>
On 13 Apr, 17:04, Pascal Costanza <····@p-cos.net> wrote:
> Vend wrote:
> > On 13 Apr, 14:54, Pascal Costanza <····@p-cos.net> wrote:
> >>> Could you give an example of a type error that is not an error?
> >> interface Foo {
> >>    public void m();
> >>    public void n();
>
> >> }
>
> >> class Bar implements Foo {
> >>    public void m() {System.out.println("Hello, World!");}
>
> >> }
>
> >> class Baz {
> >>    public static void main(String[] args) {
> >>     new Bar().m();
> >>    }
>
> >> }
>
> > You can use type definition to add unnecessary constraints.
>
> > However the purpose of Java interfaces is to make sure that different
> > parts of an application share a consistent interface. If you
> > overspecify an interface you can't blame the type system.
>
> This is nevertheless an example of a type error that is not an error.

Why not?
You have to specify an alternative semantics in order to make the
above code both syntactically and semantically correct.
I don't think there is any such semantics in which the 'interface'
construct is meaningful.
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <74h5peF13hji5U1@mid.individual.net>
Vend wrote:
> On 13 Apr, 17:04, Pascal Costanza <····@p-cos.net> wrote:
>> Vend wrote:
>>> On 13 Apr, 14:54, Pascal Costanza <····@p-cos.net> wrote:
>>>>> Could you give an example of a type error that is not an error?
>>>> interface Foo {
>>>>    public void m();
>>>>    public void n();
>>>> }
>>>> class Bar implements Foo {
>>>>    public void m() {System.out.println("Hello, World!");}
>>>> }
>>>> class Baz {
>>>>    public static void main(String[] args) {
>>>>     new Bar().m();
>>>>    }
>>>> }
>>> You can use type definition to add unnecessary constraints.
>>> However the purpose of Java interfaces is to make sure that different
>>> parts of an application share a consistent interface. If you
>>> overspecify an interface you can't blame the type system.
>> This is nevertheless an example of a type error that is not an error.
> 
> Why not?
> You have to specify an alternative semantics in order to make the
> above code both syntactically and semantically correct.

Yes. Eclipse does that, for example.

> I don't think there is any such semantics in which the 'interface'
> construct is meaningful.

Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <47373f1d-dc40-4422-bd68-9bbd12ae7f9b@j8g2000yql.googlegroups.com>
On 13 Apr, 17:58, Pascal Costanza <····@p-cos.net> wrote:
> Vend wrote:
> > On 13 Apr, 17:04, Pascal Costanza <····@p-cos.net> wrote:
> >> Vend wrote:
> >>> On 13 Apr, 14:54, Pascal Costanza <····@p-cos.net> wrote:
> >>>>> Could you give an example of a type error that is not an error?
> >>>> interface Foo {
> >>>>    public void m();
> >>>>    public void n();
> >>>> }
> >>>> class Bar implements Foo {
> >>>>    public void m() {System.out.println("Hello, World!");}
> >>>> }
> >>>> class Baz {
> >>>>    public static void main(String[] args) {
> >>>>     new Bar().m();
> >>>>    }
> >>>> }
> >>> You can use type definition to add unnecessary constraints.
> >>> However the purpose of Java interfaces is to make sure that different
> >>> parts of an application share a consistent interface. If you
> >>> overspecify an interface you can't blame the type system.
> >> This is nevertheless an example of a type error that is not an error.
>
> > Why not?
> > You have to specify an alternative semantics in order to make the
> > above code both syntactically and semantically correct.
>
> Yes. Eclipse does that, for example.

?
From: Andrew Reilly
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <74i0evF13gk85U1@mid.individual.net>
On Mon, 13 Apr 2009 07:55:11 -0700, Vend wrote:

> Java's type system is a mixture of static and dynamic typing. The
> instanceof operator pertains mostly to the dynamic part of the type
> system:

Of course: polymorphic object oriented systems are *necessarily* 
dynamically typed.  That's kind of the whole point.

> in a fully static type system, any instanceof expression would
> be a compile-time constant.

That would make for fairly lame object orientation, I suppose.

Cheers,

-- 
Andrew
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090413160411.GB4558@gildor.inglorion.net>
On Mon, Apr 13, 2009 at 02:54:31PM +0200, Pascal Costanza wrote:
> Robbert Haarman wrote:
>> On Sat, Apr 11, 2009 at 04:55:34PM +0200, Pascal Costanza wrote:
>>> Robbert Haarman wrote:
>>> I may want to run tests by passing instances of the class I'm 
>>> developing  to a third-party method that declares to expect a 
>>> CharSequence as a  parameter, which means I may have no choice.
>>
>> Indeed. However, this is not forced upon you by static typing. It is  
>> forced upon you by the third-party method requiring that the argument  
>> you pass to it implement the interface. You may argue that requiring  
>> this is wrong, but that's an issue you have with the way the method is  
>> implemented, not with static typing.
>
> You're trying to explain away the problem. However, the fact remains  
> that this is an issue I have in a statically typed language which I  
> don't have in a dynamically typed language.

I am not denying that you have run into this problem or could run into 
this problem in a statically typed language; I am only saying that it is 
not _because_ of static typing that the problem exists.

You could run into the same problem in a dynamically typed language. 
Taking your "third-party method that requires a CharSequence as a 
parameter" as an example, if someone wrote

(defmethod foo ((x vector))
   ; some code here
   )

then you would have to pass that method some kind of vector. You may 
want to pass it some other object that the length function also works 
for, but that isn't going to work. The type system will only let you 
pass a vector.

The only difference from the statically typed case is that, in the 
statically typed case, your program won't compile, whereas, in the 
dynamically typed case, your program will compile and run...until you 
actually apply foo to something that isn't a vector.

>> Could you give an example of a type error that is not an error?
>
> interface Foo {
>   public void m();
>   public void n();
> }
>
> class Bar implements Foo {
>   public void m() {System.out.println("Hello, World!");}
> }
>
> class Baz {
>   public static void main(String[] args) {
>    new Bar().m();
>   }
> }

That code says that Bar implements Foo, but it doesn't. This is an 
error.

Note, also, that static typing didn't _force_ you to write this code. 
You could have:

1. Omitted n from Foo.
2. Implemented n in Bar.
3. Not claimed that Bar implemented Foo.

In all these cases, you would have had static typing and a working 
program.

As an aside, I would like to share a piece of OCaml code:

let foo x = x#bar

This defines a function foo, which takes an argument x, and calls that 
argument's bar method. The type of foo is reported as:

val foo : < bar : 'a; .. > -> 'a = <fun>

In words: foo is a function that takes one argument and returns one 
value. The argument must be of some type that has a bar method that 
returns a value of type 'a. foo returns a value of type 'a (i.e. the 
same type as the return type of bar).

It's all statically typed, and it poses exactly those requirements on 
arguments to foo that foo actually needs. 
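
For comparison, a rough Java sketch of the same idea (hypothetical names; 
Java's types are nominal, so the "has a bar method" requirement has to be 
passed in as a function argument rather than expressed structurally):

```java
import java.util.function.Function;

public class StructuralDemo {
    // foo takes "something with a bar capability". Since Java cannot say
    // "any type with a bar method", the capability is an explicit argument.
    static <A, B> B foo(Function<A, B> bar, A x) {
        return bar.apply(x);
    }

    public static void main(String[] args) {
        // Works for any bar; the result type follows bar's return type,
        // just like the 'a in the OCaml type above.
        Integer len = foo(String::length, "hello");
        System.out.println(len); // 5
    }
}
```

The point survives the translation: foo demands exactly the capability it 
actually uses, and nothing more.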

Regards,

Bob

-- 
Reality is that which, when you stop believing in it, doesn't go away.

	-- Philip K. Dick

From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <74hnb5F13p7brU1@mid.individual.net>
Robbert Haarman wrote:
> On Mon, Apr 13, 2009 at 02:54:31PM +0200, Pascal Costanza wrote:
>> Robbert Haarman wrote:
>>> On Sat, Apr 11, 2009 at 04:55:34PM +0200, Pascal Costanza wrote:
>>>> Robbert Haarman wrote:
>>>> I may want to run tests by passing instances of the class I'm 
>>>> developing  to a third-party method that declares to expect a 
>>>> CharSequence as a  parameter, which means I may have no choice.
>>> Indeed. However, this is not forced upon you by static typing. It is  
>>> forced upon you by the third-party method requiring that the argument  
>>> you pass to it implement the interface. You may argue that requiring  
>>> this is wrong, but that's an issue you have with the way the method is  
>>> implemented, not with static typing.
>> You're trying to explain away the problem. However, the fact remains  
>> that this is an issue I have in a statically typed language which I  
>> don't have in a dynamically typed language.
> 
> I am not denying that you have run into this problem or could run into 
> this problem in a statically typed language, I am only saying that it is 
> not _because_ of static typing that the problem exists.

It surely is.

> You could run into the same problem in a dynamically typed language. 
> Taking your "third-party method that requires a CharSequence as a 
> parameter" as an example, if someone wrote
> 
> (defmethod foo ((x vector))
>    ; some code here
>    )
> 
> then you would have to pass that method some kind of vector. You may 
> want to pass it some other object that the length function also works 
> for, but that isn't going to work. The type system will only let you 
> pass a vector.
> 
> The only difference from the statically typed case is that, in the 
> statically typed case, your program won't compile, whereas, in the 
> dynamically typed case, your program will compile and run...until you 
> actually apply foo to something that isn't a vector.

It will continue to run afterwards:

CL-USER 1 > (defmethod foo ((v vector))
               (map 'vector (lambda (x) (+ x x)) v))
#<STANDARD-METHOD FOO NIL (VECTOR) 21C525FF>

CL-USER 2 > (foo (list 1 2 3))

Error: No applicable methods for #<STANDARD-GENERIC-FUNCTION FOO 
21C50B5A> with args ((1 2 3))
   1 (continue) Call #<STANDARD-GENERIC-FUNCTION FOO 21C50B5A> again
   2 (abort) Return to level 0.
   3 Return to top loop level 0.

Type :b for backtrace, :c <option number> to proceed,  or :? for other 
options

CL-USER 3 : 1 > (defmethod foo (v)
                   (let* ((coerced (coerce v 'vector))
                          (result (foo coerced)))
                     (coerce result (type-of v))))
#<STANDARD-METHOD FOO NIL (T) 21DED763>

CL-USER 4 : 1 > :c 1
(2 4 6)

>>> Could you give an example of a type error that is not an error?
>> interface Foo {
>>   public void m();
>>   public void n();
>> }
>>
>> class Bar implements Foo {
>>   public void m() {System.out.println("Hello, World!");}
>> }
>>
>> class Baz {
>>   public static void main(String[] args) {
>>    new Bar().m();
>>   }
>> }
> 
> That code says that Bar implements Foo, but it doesn't. This is an 
> error.

Some Java IDEs seem to disagree.

> Note, also, that static typing didn't _force_ you to write this code. 
> You could have:
> 
> 1. Omitted n from Foo.
> 2. Implemented n in Bar.
> 3. Not claimed that Bar implemented Foo.
> 
> In all these cases, you would have had static typing and a working 
> program.

That doesn't change the fact that there is no _actual_ error in the 
program - none that would let the program fail at runtime.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <c837c95e-ad9b-4135-8ae8-9db3dc40c43b@k2g2000yql.googlegroups.com>
On 13 Apr, 22:58, Pascal Costanza <····@p-cos.net> wrote:
> Robbert Haarman wrote:
> > On Mon, Apr 13, 2009 at 02:54:31PM +0200, Pascal Costanza wrote:
> >> Robbert Haarman wrote:
> >>> On Sat, Apr 11, 2009 at 04:55:34PM +0200, Pascal Costanza wrote:
> >>>> Robbert Haarman wrote:
> >>>> I may want to run tests by passing instances of the class I'm
> >>>> developing  to a third-party method that declares to expect a
> >>>> CharSequence as a  parameter, which means I may have no choice.
> >>> Indeed. However, this is not forced upon you by static typing. It is  
> >>> forced upon you by the third-party method requiring that the argument  
> >>> you pass to it implement the interface. You may argue that requiring  
> >>> this is wrong, but that's an issue you have with the way the method is  
> >>> implemented, not with static typing.
> >> You're trying to explain away the problem. However, the fact remains  
> >> that this is an issue I have in a statically typed language which I  
> >> don't have in a dynamically typed language.
>
> > I am not denying that you have run into this problem or could run into
> > this problem in a statically typed language, I am only saying that it is
> > not _because_ of static typing that the problem exists.
>
> It surely is.
>
>
>
> > You could run into the same problem in a dynamically typed language.
> > Taking your "third-party method that requires a CharSequence as a
> > parameter" as an example, if someone wrote
>
> > (defmethod foo ((x vector))
> >    ; some code here
> >    )
>
> > then you would have to pass that method some kind of vector. You may
> > want to pass it some other object that the length function also works
> > for, but that isn't going to work. The type system will only let you
> > pass a vector.
>
> > The only difference from the statically typed case is that, in the
> > statically typed case, your program won't compile, whereas, in the
> > dynamically typed case, your program will compile and run...until you
> > actually apply foo to something that isn't a vector.
>
> It will continue to run afterwards:
>
> CL-USER 1 > (defmethod foo ((v vector))
>                (map 'vector (lambda (x) (+ x x)) v))
> #<STANDARD-METHOD FOO NIL (VECTOR) 21C525FF>
>
> CL-USER 2 > (foo (list 1 2 3))
>
> Error: No applicable methods for #<STANDARD-GENERIC-FUNCTION FOO
> 21C50B5A> with args ((1 2 3))
>    1 (continue) Call #<STANDARD-GENERIC-FUNCTION FOO 21C50B5A> again
>    2 (abort) Return to level 0.
>    3 Return to top loop level 0.
>
> Type :b for backtrace, :c <option number> to proceed,  or :? for other
> options
>
> CL-USER 3 : 1 > (defmethod foo (v)
>                    (let* ((coerced (coerce v 'vector))
>                           (result (foo coerced)))
>                      (coerce result (type-of v))))
> #<STANDARD-METHOD FOO NIL (T) 21DED763>
>
> CL-USER 4 : 1 > :c 1
> (2 4 6)
>
>
>
> >>> Could you give an example of a type error that is not an error?
> >> interface Foo {
> >>   public void m();
> >>   public void n();
> >> }
>
> >> class Bar implements Foo {
> >>   public void m() {System.out.println("Hello, World!");}
> >> }
>
> >> class Baz {
> >>   public static void main(String[] args) {
> >>    new Bar().m();
> >>   }
> >> }
>
> > That code says that Bar implements Foo, but it doesn't. This is an
> > error.
>
> Some Java IDEs seem to disagree.
>
> > Note, also, that static typing didn't _force_ you to write this code.
> > You could have:
>
> > 1. Omitted n from Foo.
> > 2. Implemented n in Bar.
> > 3. Not claimed that Bar implemented Foo.
>
> > In all these cases, you would have had static typing and a working
> > program.
>
> That doesn't change the fact that there is no _actual_ error in the
> program - none that would let the program fail at runtime.

Assuming that it were dynamically typed, in which case the 'interface'
declaration would have been meaningless.
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090414053118.GG4558@gildor.inglorion.net>
On Mon, Apr 13, 2009 at 10:58:12PM +0200, Pascal Costanza wrote:
> Robbert Haarman wrote:
>>
>> I am not denying that you have run into this problem or could run into  
>> this problem in a statically typed language, I am only saying that it 
>> is not _because_ of static typing that the problem exists.
>
> It surely is.

Ok, well, enough said about that. Clearly, neither of us is going to 
convince the other.

>> You could run into the same problem in a dynamically typed language.  
>> Taking your "third-party method that requires a CharSequence as a  
>> parameter" as an example, if someone wrote
>>
>> (defmethod foo ((x vector))
>>    ; some code here
>>    )
>>
>> then you would have to pass that method some kind of vector. You may  
>> want to pass it some other object that the length function also works  
>> for, but that isn't going to work. The type system will only let you  
>> pass a vector.
>>
>> The only difference from the statically typed case is that, in the  
>> statically typed case, your program won't compile, whereas, in the  
>> dynamically typed case, your program will compile and run...until you  
>> actually apply foo to something that isn't a vector.
>
> It will continue to run afterwards:

.. in Common Lisp. Yes. But that wasn't the point; the point was that 
the program was erroneous.

> CL-USER 1 > (defmethod foo ((v vector))
>               (map 'vector (lambda (x) (+ x x)) v))
> #<STANDARD-METHOD FOO NIL (VECTOR) 21C525FF>
>
> CL-USER 2 > (foo (list 1 2 3))
>
> Error: No applicable methods for #<STANDARD-GENERIC-FUNCTION FOO  
> 21C50B5A> with args ((1 2 3))
>   1 (continue) Call #<STANDARD-GENERIC-FUNCTION FOO 21C50B5A> again
>   2 (abort) Return to level 0.
>   3 Return to top loop level 0.
>
> Type :b for backtrace, :c <option number> to proceed,  or :? for other  
> options
>
> CL-USER 3 : 1 > (defmethod foo (v)
>                   (let* ((coerced (coerce v 'vector))
>                          (result (foo coerced)))
>                     (coerce result (type-of v))))
> #<STANDARD-METHOD FOO NIL (T) 21DED763>
>
> CL-USER 4 : 1 > :c 1
> (2 4 6)

.. and now you have done all the work to implement your own 
"third-party method", or, alternatively, to make your object compatible 
with the type expected by the original third-party method. In this case 
it was easy, because coerce knows how to convert a list to a vector and 
back, but that is beside the point. The point is that dynamic typing did 
not prevent you from having to do this work.

>>>> Could you give an example of a type error that is not an error?
>>> interface Foo {
>>>   public void m();
>>>   public void n();
>>> }
>>>
>>> class Bar implements Foo {
>>>   public void m() {System.out.println("Hello, World!");}
>>> }
>>>
>>> class Baz {
>>>   public static void main(String[] args) {
>>>    new Bar().m();
>>>   }
>>> }
>>
>> That code says that Bar implements Foo, but it doesn't. This is an  
>> error.
>
> Some Java IDEs seem to disagree.

Whether one agrees or not does not affect the truth. According to the 
semantics of the language, this code is invalid, and the compiler will 
reject it. If the IDE thinks the code is fine, the IDE is wrong.

>> Note, also, that static typing didn't _force_ you to write this code.  
>> You could have:
>>
>> 1. Omitted n from Foo.
>> 2. Implemented n in Bar.
>> 3. Not claimed that Bar implemented Foo.
>>
>> In all these cases, you would have had static typing and a working  
>> program.
>
> That doesn't change the fact that there is no _actual_ error in the  
> program - none that would let the program fail at runtime.

There are two options.

Either you check that Bar implements Foo, or you don't.

If you check that Bar implements Foo, you will find that this is not the 
case. Whether you do this before run time (static typing) or at run time 
(dynamic typing) does not change it.

If you don't check that Bar implements Foo, you will indeed not discover 
that it doesn't. Again, it does not matter if you (otherwise) use static 
or dynamic typing.
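
The run-time half of that dichotomy can even be sketched in Java itself, 
using reflection (hypothetical names): this is the same "does Bar provide 
everything Foo declares?" check, merely postponed until the program runs.

```java
import java.lang.reflect.Method;

interface Foo {
    void m();
    void n();
}

class Bar { // no "implements Foo" claim, and no n()
    public void m() { System.out.println("Hello, World!"); }
}

public class CheckDemo {
    // Run-time check: does cls provide every method Foo declares?
    static boolean conformsToFoo(Class<?> cls) {
        for (Method m : Foo.class.getMethods()) {
            try {
                cls.getMethod(m.getName(), m.getParameterTypes());
            } catch (NoSuchMethodException e) {
                return false; // e.g. n() is missing from Bar
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(conformsToFoo(Bar.class)); // false
    }
}
```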

Regards,

Bob

-- 
"Never let your sense of morals prevent you from doing what is right"
	--Salvor Hardin

From: Chris Barts
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <87tz4rvgc0.fsf@chbarts.motzarella.org>
Robbert Haarman <··············@inglorion.net> writes:

> On Mon, Apr 13, 2009 at 10:58:12PM +0200, Pascal Costanza wrote:
>> Robbert Haarman wrote:
>>>
>>> I am not denying that you have run into this problem or could run into  
>>> this problem in a statically typed language, I am only saying that it 
>>> is not _because_ of static typing that the problem exists.
>>
>> It surely is.
>
> Ok, well, enough said about that. Clearly, neither of us is going to 
> convince the other.

This is entirely possible, but I'm going to try a little different tack.

>>> You could run into the same problem in a dynamically typed language.  
>>> Taking your "third-party method that requires a CharSequence as a  
>>> parameter" as an example, if someone wrote
>>>
>>> (defmethod foo ((x vector))
>>>    ; some code here
>>>    )
>>>
>>> then you would have to pass that method some kind of vector. You may  
>>> want to pass it some other object that the length function also works  
>>> for, but that isn't going to work. The type system will only let you  
>>> pass a vector.
>>>
>>> The only difference from the statically typed case is that, in the  
>>> statically typed case, your program won't compile, whereas, in the  
>>> dynamically typed case, your program will compile and run...until you  
>>> actually apply foo to something that isn't a vector.
>>
>> It will continue to run afterwards:
>
> .. in Common Lisp. Yes. But that wasn't the point; the point was that 
> the program was erroneous.

Well, yes. That's true for Java, but it's a vacuous statement: We're
discussing language implementation philosophies, so we can't really
stop talking because something can't be done in one language. ;)

>
>> CL-USER 1 > (defmethod foo ((v vector))
>>               (map 'vector (lambda (x) (+ x x)) v))
>> #<STANDARD-METHOD FOO NIL (VECTOR) 21C525FF>
>>
>> CL-USER 2 > (foo (list 1 2 3))
>>
>> Error: No applicable methods for #<STANDARD-GENERIC-FUNCTION FOO  
>> 21C50B5A> with args ((1 2 3))
>>   1 (continue) Call #<STANDARD-GENERIC-FUNCTION FOO 21C50B5A> again
>>   2 (abort) Return to level 0.
>>   3 Return to top loop level 0.
>>
>> Type :b for backtrace, :c <option number> to proceed,  or :? for other  
>> options
>>
>> CL-USER 3 : 1 > (defmethod foo (v)
>>                   (let* ((coerced (coerce v 'vector))
>>                          (result (foo coerced)))
>>                     (coerce result (type-of v))))
>> #<STANDARD-METHOD FOO NIL (T) 21DED763>
>>
>> CL-USER 4 : 1 > :c 1
>> (2 4 6)
>
> .. and now you have done all the work to implement your own 
> "third-party method", or, alternatively, to make your object compatible 
> with the type expected by the original third-party method. In this case 
> it was easy, because coerce knows how to convert a list to a vector and 
> back, but that is beside the point. The point is that dynamic typing did 
> not prevent you from having to do this work.

It's an example of YAGNI: You Ain't Gonna Need It, a terse way of
stating the premise that you should only do the work you can somehow
demonstrate that you *need* to do, as opposed to doing all the work you
*might* have to do.

Why should you define specialized FOO methods for every reasonable
type when you don't really know which types your finished program will
use? YAGNI. That highly-evolved debugger exists for a reason, so use
it. Lispers have demonstrated that they *do* need the debugger for
precisely this reason. :)

(As an aside, I'm pretty sure Python and Ruby will eventually evolve
debuggers a lot like the ones Lisp and Smalltalk already have, much to
the shock and horror of the Java and C++ people around. I'm sure
really heavy static languages evolved that way in large part because
good debuggers have historically been very rare. Thus,
programmers have had to hammer things out early because when the
program bombs it's impossible to fix things dynamically. Lisp and
Smalltalk, at least, explicitly reject such a batch-oriented mindset
as the end result of having to use poor tools.)
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <74s5otF155e1vU1@mid.individual.net>
Robbert Haarman wrote:
> On Mon, Apr 13, 2009 at 10:58:12PM +0200, Pascal Costanza wrote:
>> Robbert Haarman wrote:
>>> I am not denying that you have run into this problem or could run into  
>>> this problem in a statically typed language, I am only saying that it 
>>> is not _because_ of static typing that the problem exists.
>> It surely is.
> 
> Ok, well, enough said about that. Clearly, neither of us is going to 
> convince the other.
> 
>>> You could run into the same problem in a dynamically typed language.  
>>> Taking your "third-party method that requires a CharSequence as a  
>>> parameter" as an example, if someone wrote
>>>
>>> (defmethod foo ((x vector))
>>>    ; some code here
>>>    )
>>>
>>> then you would have to pass that method some kind of vector. You may  
>>> want to pass it some other object that the length function also works  
>>> for, but that isn't going to work. The type system will only let you  
>>> pass a vector.
>>>
>>> The only difference from the statically typed case is that, in the  
>>> statically typed case, your program won't compile, whereas, in the  
>>> dynamically typed case, your program will compile and run...until you  
>>> actually apply foo to something that isn't a vector.
>> It will continue to run afterwards:
> 
> .. in Common Lisp. Yes. But that wasn't the point; the point was that 
> the program was erroneous.
> 
>> CL-USER 1 > (defmethod foo ((v vector))
>>               (map 'vector (lambda (x) (+ x x)) v))
>> #<STANDARD-METHOD FOO NIL (VECTOR) 21C525FF>
>>
>> CL-USER 2 > (foo (list 1 2 3))
>>
>> Error: No applicable methods for #<STANDARD-GENERIC-FUNCTION FOO  
>> 21C50B5A> with args ((1 2 3))
>>   1 (continue) Call #<STANDARD-GENERIC-FUNCTION FOO 21C50B5A> again
>>   2 (abort) Return to level 0.
>>   3 Return to top loop level 0.
>>
>> Type :b for backtrace, :c <option number> to proceed,  or :? for other  
>> options
>>
>> CL-USER 3 : 1 > (defmethod foo (v)
>>                   (let* ((coerced (coerce v 'vector))
>>                          (result (foo coerced)))
>>                     (coerce result (type-of v))))
>> #<STANDARD-METHOD FOO NIL (T) 21DED763>
>>
>> CL-USER 4 : 1 > :c 1
>> (2 4 6)
> 
> .. and now you have done all the work to implement your own 
> "third-party method", or, alternatively, to make your object compatible 
> with the type expected by the original third-party method. In this case 
> it was easy, because coerce knows how to convert a list to a vector and 
> back, but that is beside the point. The point is that dynamic typing did 
> not prevent you from having to do this work.

It's a method for a signature that I'm currently interested in, and some 
arbitrary other code path.

>>>>> Could you give an example of a type error that is not an error?
>>>> interface Foo {
>>>>   public void m();
>>>>   public void n();
>>>> }
>>>>
>>>> class Bar implements Foo {
>>>>   public void m() {System.out.println("Hello, World!");}
>>>> }
>>>>
>>>> class Baz {
>>>>   public static void main(String[] args) {
>>>>    new Bar().m();
>>>>   }
>>>> }
>>> That code says that Bar implements Foo, but it doesn't. This is an  
>>> error.
>> Some Java IDEs seem to disagree.
> 
> Whether one agrees or not does not affect the truth. According to the 
> semantics of the language, this code is invalid, and the compiler will 
> reject it. If the IDE thinks the code is fine, the IDE is wrong.

It's obvious what the semantics should be if you want to run this. The 
error is a purely static complaint of the type checker, but nothing that 
corresponds to an _actual_ error that could ever occur at runtime. The 
method in question is never ever called. That's exactly the kind of 
thing a static type checker cannot determine in general.

>>> Note, also, that static typing didn't _force_ you to write this code.  
>>> You could have:
>>>
>>> 1. Omitted n from Foo.
>>> 2. Implemented n in Bar.
>>> 3. Not claimed that Bar implemented Foo.
>>>
>>> In all these cases, you would have had static typing and a working  
>>> program.
>> That doesn't change the fact that there is no _actual_ error in the  
>> program - none that would let the program fail at runtime.
> 
> There are two options.
> 
> Either you check that Bar implements Foo, or you don't.
> 
> If you check that Bar implements Foo, you will find that this is not the 
> case. Whether you do this before run time (static typing) or at run time 
> (dynamic typing) does not change it.
> 
> If you don't check that Bar implements Foo, you will indeed not discover 
> that it doesn't. Again, it does not matter if you (otherwise) use static 
> or dynamic typing.

I don't _want_ to check it. It's just that, in the given example, the 
type system forces me to "check" it.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <9a2d2c53-39ed-47a5-8c05-f86affc5566f@y7g2000yqa.googlegroups.com>
On 17 Apr, 22:05, Pascal Costanza <····@p-cos.net> wrote:
> Robbert Haarman wrote:
> > On Mon, Apr 13, 2009 at 10:58:12PM +0200, Pascal Costanza wrote:
> >> Robbert Haarman wrote:
> >>> I am not denying that you have run into this problem or could run into  
> >>> this problem in a statically typed language, I am only saying that it
> >>> is not _because_ of static typing that the problem exists.
> >> It surely is.
>
> > Ok, well, enough said about that. Clearly, neither of us is going to
> > convince the other.
>
> >>> You could run into the same problem in a dynamically typed language.  
> >>> Taking your "third-party method that requires a CharSequence as a  
> >>> parameter" as an example, if someone wrote
>
> >>> (defmethod foo ((x vector))
> >>>    ; some code here
> >>>    )
>
> >>> then you would have to pass that method some kind of vector. You may  
> >>> want to pass it some other object that the length function also works  
> >>> for, but that isn't going to work. The type system will only let you  
> >>> pass a vector.
>
> >>> The only difference from the statically typed case is that, in the  
> >>> statically typed case, your program won't compile, whereas, in the  
> >>> dynamically typed case, your program will compile and run...until you  
> >>> actually apply foo to something that isn't a vector.
> >> It will continue to run afterwards:
>
> > .. in Common Lisp. Yes. But that wasn't the point; the point was that
> > the program was erroneous.
>
> >> CL-USER 1 > (defmethod foo ((v vector))
> >>               (map 'vector (lambda (x) (+ x x)) v))
> >> #<STANDARD-METHOD FOO NIL (VECTOR) 21C525FF>
>
> >> CL-USER 2 > (foo (list 1 2 3))
>
> >> Error: No applicable methods for #<STANDARD-GENERIC-FUNCTION FOO  
> >> 21C50B5A> with args ((1 2 3))
> >>   1 (continue) Call #<STANDARD-GENERIC-FUNCTION FOO 21C50B5A> again
> >>   2 (abort) Return to level 0.
> >>   3 Return to top loop level 0.
>
> >> Type :b for backtrace, :c <option number> to proceed,  or :? for other  
> >> options
>
> >> CL-USER 3 : 1 > (defmethod foo (v)
> >>                   (let* ((coerced (coerce v 'vector))
> >>                          (result (foo coerced)))
> >>                     (coerce result (type-of v))))
> >> #<STANDARD-METHOD FOO NIL (T) 21DED763>
>
> >> CL-USER 4 : 1 > :c 1
> >> (2 4 6)
>
> > .. and now you have done all the work to implement your own
> > "third-party method", or, alternatively, to make your object compatible
> > with the type expected by the original third-party method. In this case
> > it was easy, because coerce knows how to convert a list to a vector and
> > back, but that is beside the point. The point is that dynamic typing did
> > not prevent you from having to do this work.
>
> It's a method for a signature that I'm currently interested in, and some
> arbitrary other code path.
>
>
>
> >>>>> Could you give an example of a type error that is not an error?
> >>>> interface Foo {
> >>>>   public void m();
> >>>>   public void n();
> >>>> }
>
> >>>> class Bar implements Foo {
> >>>>   public void m() {System.out.println("Hello, World!");}
> >>>> }
>
> >>>> class Baz {
> >>>>   public static void main(String[] args) {
> >>>>    new Bar().m();
> >>>>   }
> >>>> }
> >>> That code says that Bar implements Foo, but it doesn't. This is an  
> >>> error.
> >> Some Java IDEs seem to disagree.
>
> > Whether one agrees or not does not affect the truth. According to the
> > semantics of the language, this code is invalid, and the compiler will
> > reject it. If the IDE thinks the code is fine, the IDE is wrong.
>
> It's obvious what the semantics should be if you want to run this. The
> error is a purely static complaint of the type checker, but nothing that
> corresponds to an _actual_ error that could ever occur at runtime. The
> method in question is never ever called. That's exactly the kind of
> thing a static type checker cannot determine in general.

After deeper reflection, I think that this issue has nothing to do
with the static vs. dymanic typing issue.

You are criticizing abstract methods. Abstract methods can be present
in both static and dymanic type systems or in neither.
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090427170149.142@gmail.com>
On 2009-04-17, Vend <······@virgilio.it> wrote:
> On 17 Apr, 22:05, Pascal Costanza <····@p-cos.net> wrote:
>> >>>>> Could you give an example of a type error that is not an error?
>> >>>> interface Foo {
>> >>>>   public void m();
>> >>>>   public void n();
>> >>>> }
>>
>> >>>> class Bar implements Foo {
>> >>>>   public void m() {System.out.println("Hello, World!");}
>> >>>> }
>>
>> >>>> class Baz {
>> >>>>   public static void main(String[] args) {
>> >>>>    new Bar().m();
>> >>>>   }
>> >>>> }
>> >>> That code says that Bar implements Foo, but it doesn't. This is an  
>> >>> error.
>> >> Some Java IDEs seem to disagree.
>>
>> > Whether one agrees or not does not affect the truth. According to the
>> > semantics of the language, this code is invalid, and the compiler will
>> > reject it. If the IDE thinks the code is fine, the IDE is wrong.
>>
>> It's obvious what the semantics should be if you want to run this. The
>> error is a purely static complaint of the type checker, but nothing that
>> corresponds to an _actual_ error that could ever occur at runtime. The
>> method in question is never ever called. That's exactly the kind of
>> thing a static type checker cannot determine in general.
>
> After deeper reflection, I think that this issue has nothing to do
> with the static vs. dymanic typing issue.
>
> You are criticizing abstract methods. Abstract methods can be present
> in both static and dymanic type systems or in neither.

But C++-style abstract base classes and their pure virtual functions, or
Java-style interfaces, are a completely useless stupidity in the context of a
dynamic language.

Yes, this does have to do with the static/dynamic typing issue. An interface,
or abstract base class, is used to assert a static type relationship.

Fact is that if Bar does not have the declaration that it ``implements Foo'',
then an instance of the Bar object cannot be substituted in places which expect
an object that implements Foo, and this is true even if Bar implements all of
the methods in Foo (but does not have the ``implements Foo'' declaration).
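
A minimal Java sketch of that point (hypothetical names): Bar provides 
every method Foo declares, but without the implements clause it still 
cannot be used where a Foo is expected.

```java
interface Foo {
    void m();
}

// Bar has a structurally identical m(), but never claims "implements Foo".
class Bar {
    public void m() { System.out.println("Hello from Bar"); }
}

public class NominalDemo {
    static void useFoo(Foo f) { f.m(); }

    public static void main(String[] args) {
        // useFoo(new Bar()); // rejected: incompatible types, Bar is not Foo
        System.out.println(Foo.class.isAssignableFrom(Bar.class)); // false
    }
}
```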

In a dynamic OO language (or one that isn't deliberately braindamaged), this
doesn't matter.  An object can be substituted wherever the Foo interface is
required if it reacts properly to all of the Foo methods that are actually
used. If it neglects to react to some method properly, that is still a soft
error; exception handling may be used to trap it and recover.

You might ask: why would you ever neglect to implement a method, and
doesn't the static declaration help make sure that you don't? The answer
is that there are all kinds of situations where this might happen:

- suppose you want to experiment with a new method in some interface:

  If the language is static, then by adding the method to the formal interface,
  you break every single class which asserts that it implements that interface.
  To be able to compile the program, you have to insert some stub
  implementations everywhere, just to shut up the compiler so that you get a
  running executable. If experimenting with incremental features is expensive,
  it stifles development.

- versioning issues:

  Some interface is extended in a new version of the code, but the code has to
  coexist with old components which produce old-style objects that don't
  understand the extensions in the interface.

  In this case, the software could be written to handle the exceptions about
  the methods which are not understood, until the old code is upgraded.
  This may be a more cost-effective solution than having to refactor everything
  upfront just to get a program which compiles.

- retrofitting, and defaulting:

  Suppose that an interface is defined which is very generic, and large numbers
  of existing objects may be expected to participate in that interface. It
  is completely impractical to go into every existing class and inject that
  interface, and provide a default implementation (even by some kind of
  inheritance).

  The best case scenario in the static language world is that each class
  is derived from some ancestral superclass like Object. But we are forced
  to stick the interface there, and recompile the entire system. Some built-in
  classes that come from the language implementation also inherit from this
  class so we might not be allowed to fiddle with it.

  In the object system of a well-designed dynamic language, there is some
  ancestral superclass of all objects, but we can add new methods to classes
  without having to inject anything into a class or recompile anything which
  depends on it.

  If you have some new method m, you can easily make, for instance, strings and
  integers understand m.  You don't have to hack the system's string class
  to say that it ``implements Foo''. And you may easily be able to provide 
  a default implementation of method m for all types which don't
  otherwise have an implementation of m or interface Foo, so that calling
  m on any arbitrary object whatsoever will not throw a ``method not found''
  error, but branch to the default implementation.

- null objects:

  Static interfaces give rise to the null object design pattern. Sometimes you
  need to be able to express the semantics that ``there is no object here'',
  but not with a null reference. Why? Because a null reference can't be
  used to call methods; it will throw errors. So this pattern is invented
  whereby you have a null implementation of the class or interface with
  stub methods. This leads to repetition: null version of this class, null
  version of that class, etc.

  In a dynamic language, there can be a single object NIL which can be used
  in all such situations. It has its own type, distinct from the type of any
  other object.  Moreover, if the object-oriented system is designed right,
  you can specialize methods to this object. Any time you design a new method,
  you can express what the behavior should be if that method is called on this
  object NIL. Problem solved; no null design pattern, no proliferation of null
  classes.
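A rough Python analogue of that arrangement (all names illustrative): one NIL object with its own class, which methods can be specialized on just like any other type.

```python
from functools import singledispatch

class Null:
    """The class of the single NIL object; no other object has this type."""

NIL = Null()

@singledispatch
def describe(obj):
    return "some object"

@describe.register(Null)
def _(obj):
    # Behavior specialized on NIL itself -- no per-class null stubs needed.
    return "nothing here"
```

Every new generic function can declare its NIL behavior in one place, instead of a null class per interface.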

Love your typo ``dymanic'', by the way: thanks for that! 

Dymanic is when you are really hyped up about dynamic behavior. :)
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <39346a0b-1341-4adc-8134-9f694f1f0b74@37g2000yqp.googlegroups.com>
> - null objects:
>
> Static interfaces give rise to the null
> object design pattern. Sometimes you
> need to be able to express the semantics
> that ``there is no object here'',
> but not with a null reference. Why?
I would not say that CL is something
outstanding here or that it offers some
advantage over statically typed languages.

First, a statically typed OO language with
macros can be designed so that null instances
are created automagically for any class meeting
some criteria, e.g. for any class for which
there exists a reference to a null instance of
that type. Maybe even C++ allows that.

Also, there is SQL, which is statically
typed but has a null value.

Also, in my experience, there is sometimes
a need for a typed null instance in
CL. Suppose you have a typed slot in a class.
If you want it to be settable to nil,
you need to declare its type as
(or foo null). At least it looks like I needed
to do so in SBCL. I suspect some speed penalty
might be paid here, since there is a need for
dynamic dispatch on the type of the slot.
From: Didier Verna
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <muxab6e1li7.fsf@uzeb.lrde.epita.fr>
Kaz Kylheku <········@gmail.com> wrote:

> - suppose you want to experiment with a new method in some interface:
>
>   If the language is static, then by adding the method to the formal
>   interface, you break every single class which asserts that it
>   implements that interface. To be able to compile the program, you
>   have to insert some stub implementations everywhere, just to shut up
>   the compiler so that you get a running executable.
    
    Let alone the risk of forgetting later that you have a stub instead
of a proper implementation somewhere. If you have plenty of such stubs,
it's easy to leave one behind.

-- 
European Lisp Symposium, May 2009: http://www.european-lisp-symposium.org
European Lisp Workshop, July 2009: http://elw.bknr.net/2009

Scientific site:   http://www.lrde.epita.fr/~didier
Music (Jazz) site: http://www.didierverna.com
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <8eff33f9-b79d-4521-9210-26198bd87d8f@c36g2000yqn.googlegroups.com>
On 18 Apr, 03:00, Kaz Kylheku <········@gmail.com> wrote:
> On 2009-04-17, Vend <······@virgilio.it> wrote:
>
>
>
> > On 17 Apr, 22:05, Pascal Costanza <····@p-cos.net> wrote:
> >> >>>>> Could you give an example of a type error that is not an error?
> >> >>>> interface Foo {
> >> >>>>   public void m();
> >> >>>>   public void n();
> >> >>>> }
>
> >> >>>> class Bar implements Foo {
> >> >>>>   public void m() {System.out.println("Hello, World!");}
> >> >>>> }
>
> >> >>>> class Baz {
> >> >>>>   public static void main(String[] args) {
> >> >>>>    new Bar().m();
> >> >>>>   }
> >> >>>> }
> >> >>> That code says that Bar implements Foo, but it doesn't. This is an  
> >> >>> error.
> >> >> Some Java IDEs seem to disagree.
>
> >> > Whether one agrees or not does not affect the truth. According to the
> >> > semantics of the language, this code is invalid, and the compiler will
> >> > reject it. If the IDE thinks the code is fine, the IDE is wrong.
>
> >> It's obvious what the semantics should be if you want to run this. The
> >> error is a purely static complaint of the type checker, but nothing that
> >> corresponds to an _actual_ error that could ever occur at runtime. The
> >> method in question is never ever called. That's exactly the kind of
> >> thing a static type checker cannot determine in general.
>
> > After a deeper reflection I think that this issue has nothing to do
> > with the static vs. dymanic typing issue.
>
> > You are criticizing abstract methods. Abstract methods can be present
> > in both static and dymanic type systems or in neither.
>
> But C++-style abstract base classes and their pure virtual functions, or
> Java-style interfaces, are a completely useless stupidity in the context of a
> dynamic language.
>
> Yes, this does have to do with the static/dynamic typing issue. An interface,
> or abstract base class, is used to assert a static type relationship.

It is used to specify a constraint on its subclasses. The constraint
can be enforced at compile-time or at run-time.

> Fact is that if Bar does not have the declaration that it ``implements Foo'',
> then an instance of the Bar object cannot be substituted in places which expect
> an object that implements Foo, and this is true even if Bar implements all of
> the methods in Foo (but does not have the ``implements Foo'' declaration).

It depends on the language. I think there are even statically typed
object-oriented languages that don't have explicit classes.

> In a dynamic OO language (or one that isn't deliberately braindamaged), this
> doesn't matter.  An object can be subsituted wherever the Foo interface is
> required if it reacts properly to all of the Foo methods that are actually
> used. If it neglects to react to some method properly, that is still a soft
> error; exception handling may be used to trap it and recover.

Without using inheritance?

> You might ask, why would you ever neglect to implement a method, and doesn't
> the static declaration help make sure that you don't neglect?  The answer is
> that there are all kinds of situations where this might happen:
>
> - suppose you want to experiment with a new method in some interface:

If you want to experiment you don't use an abstract method.

Java forces all the methods of all but at most one of a concrete
class's supertypes to be abstract, but this is a Java issue, not a
general issue of static typing.

> - versioning issues:
>
>   Some interface is extended in a new version of the code, but the code has to
>   coexist with old components which produce old-style objects that don't
>   understand the extensions in the interface.
>
>   In this case, the software could be written to handle the exceptions about
>   the methods which are not understood, until the old code is upgraded.
>   This may be a more cost-effective solution than having to refactor everything
>   upfront just to get a program which compiles.

Again, you can use a non-abstract method which provides the default
implementation.

> - retrofitting, and defaulting:
>
>   Suppose that an interface is defined which is very generic, and large numbers
>   of existing objects may be expected to participate in that interface. It
>   is completely impractical to go into every existing class and inject that
>   interface, and provide a default implementation (even by some kind of
>   inheritance).

Why would it be impractical to provide a default implementation by
inheritance?
I think that's pretty much what inheritance is for.

>   If you have some new method m, you can easily make, for instance, strings and
>   integers understand m.  You don't have to hack the system's string class
>   to say that it ``implements Foo''. And you may easily be able to provide
>   a default implementation of method m for all types which don't
>   otherwise have an implementation of m or interface Foo, so that calling
>   m on any arbitrary object whatsoever will not throw a ``method not found''
>   error, but branch to the default implementation.

You can do that only in languages that allow you to add new methods to
existing classes (or objects). That is, you have to 'hack' the string
class if you want it to implement method m, even if you can do that
without modifying the source code where the string class is declared.

This can be done in both static and dynamic type systems.
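Vend's point can be illustrated in Python, where user-defined classes are open: a method is attached to an existing class after the fact, without editing the source where the class is declared. (The class and method names here are hypothetical; note that Python's own built-ins like `str` cannot be patched this way, which is why the generic-function route matters for them.)

```python
class Account:
    """An existing class whose source we pretend we cannot edit."""
    def __init__(self, balance):
        self.balance = balance

# Attach a new method to the existing class, outside its definition.
def in_credit(self):
    return self.balance > 0

Account.in_credit = in_credit
```

Every existing and future `Account` instance now understands the new method.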

> - null objects:
>
>   Static interfaces give rise to the null object design pattern. Sometimes you
>   need to be able to express the semantics that ``there is no object here'',
>   but not with a null reference. Why? Because a null reference can't be
>   used to call methods; it will throw errors. So this pattern is invented
>   whereby you have a null implementation of the a class or interface with
>   stub methods. This lead to repetition: null version of this class, null
>   verison of that class, etc.

It looks quite counterintuitive to me to invoke a method on a null
object, for what it's worth.

>   In a dynamic language, there can be a single object NIL which can be used
>   in all such situations. It has its own type, distinct from the type of any
>   other object.  Moreover, if the object-oriented system is designed right,
>   you can specialize methods to this object. Any time you design a new method,
>   you can express what the behavior should be if that method is called on this
>   object NIL. Problem solved; no null design pattern, no proliferation of null
>   classes.

You can specialize this object by adding methods to it (or to its
class). Again, this has little to do, if anything, with static vs
dynamic.
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <74tpjiF15duuuU1@mid.individual.net>
Vend wrote:

>>>>>>> Could you give an example of a type error that is not an error?
>>>>>> interface Foo {
>>>>>>   public void m();
>>>>>>   public void n();
>>>>>> }
>>>>>> class Bar implements Foo {
>>>>>>   public void m() {System.out.println("Hello, World!");}
>>>>>> }
>>>>>> class Baz {
>>>>>>   public static void main(String[] args) {
>>>>>>    new Bar().m();
>>>>>>   }
>>>>>> }
>>>>> That code says that Bar implements Foo, but it doesn't. This is an  
>>>>> error.
>>>> Some Java IDEs seem to disagree.
>>> Whether one agrees or not does not affect the truth. According to the
>>> semantics of the language, this code is invalid, and the compiler will
>>> reject it. If the IDE thinks the code is fine, the IDE is wrong.
>> It's obvious what the semantics should be if you want to run this. The
>> error is a purely static complaint of the type checker, but nothing that
>> corresponds to an _actual_ error that could ever occur at runtime. The
>> method in question is never ever called. That's exactly the kind of
>> thing a static type checker cannot determine in general.
> 
> After a deeper reflection I think that this issue has nothing to do
> with the static vs. dymanic typing issue.

Again, the problem I describe occurs in a statically typed language, but 
not in a dynamically typed language. So there is a relationship there.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <9e89f89e-90e3-4ab0-a26d-d05fdf87b193@c9g2000yqm.googlegroups.com>
On 18 Apr, 12:50, Pascal Costanza <····@p-cos.net> wrote:
> Vend wrote:
> >>>>>>> Could you give an example of a type error that is not an error?
> >>>>>> interface Foo {
> >>>>>>   public void m();
> >>>>>>   public void n();
> >>>>>> }
> >>>>>> class Bar implements Foo {
> >>>>>>   public void m() {System.out.println("Hello, World!");}
> >>>>>> }
> >>>>>> class Baz {
> >>>>>>   public static void main(String[] args) {
> >>>>>>    new Bar().m();
> >>>>>>   }
> >>>>>> }
> >>>>> That code says that Bar implements Foo, but it doesn't. This is an  
> >>>>> error.
> >>>> Some Java IDEs seem to disagree.
> >>> Whether one agrees or not does not affect the truth. According to the
> >>> semantics of the language, this code is invalid, and the compiler will
> >>> reject it. If the IDE thinks the code is fine, the IDE is wrong.
> >> It's obvious what the semantics should be if you want to run this. The
> >> error is a purely static complaint of the type checker, but nothing that
> >> corresponds to an _actual_ error that could ever occur at runtime. The
> >> method in question is never ever called. That's exactly the kind of
> >> thing a static type checker cannot determine in general.
>
> > After a deeper reflection I think that this issue has nothing to do
> > with the static vs. dymanic typing issue.
>
> Again, the problem I describe occurs in a statically typed language, but
> not in a dynamically typed language. So there is a relationship there.

No.
Correlation doesn't mean causation.
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <750d74F15lc3sU1@mid.individual.net>
Vend wrote:
> On 18 Apr, 12:50, Pascal Costanza <····@p-cos.net> wrote:
>> Vend wrote:
>>>>>>>>> Could you give an example of a type error that is not an error?
>>>>>>>> interface Foo {
>>>>>>>>   public void m();
>>>>>>>>   public void n();
>>>>>>>> }
>>>>>>>> class Bar implements Foo {
>>>>>>>>   public void m() {System.out.println("Hello, World!");}
>>>>>>>> }
>>>>>>>> class Baz {
>>>>>>>>   public static void main(String[] args) {
>>>>>>>>    new Bar().m();
>>>>>>>>   }
>>>>>>>> }
>>>>>>> That code says that Bar implements Foo, but it doesn't. This is an  
>>>>>>> error.
>>>>>> Some Java IDEs seem to disagree.
>>>>> Whether one agrees or not does not affect the truth. According to the
>>>>> semantics of the language, this code is invalid, and the compiler will
>>>>> reject it. If the IDE thinks the code is fine, the IDE is wrong.
>>>> It's obvious what the semantics should be if you want to run this. The
>>>> error is a purely static complaint of the type checker, but nothing that
>>>> corresponds to an _actual_ error that could ever occur at runtime. The
>>>> method in question is never ever called. That's exactly the kind of
>>>> thing a static type checker cannot determine in general.
>>> After a deeper reflection I think that this issue has nothing to do
>>> with the static vs. dymanic typing issue.
>> Again, the problem I describe occurs in a statically typed language, but
>> not in a dynamically typed language. So there is a relationship there.
> 
> No.
> Correlation doesn't mean causation.

Er. So?


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <f8c195e3-a897-43cb-9343-8ce32af09f75@r37g2000yqn.googlegroups.com>
On 19 Apr, 12:37, Pascal Costanza <····@p-cos.net> wrote:
> Vend wrote:
> > On 18 Apr, 12:50, Pascal Costanza <····@p-cos.net> wrote:
> >> Vend wrote:
> >>>>>>>>> Could you give an example of a type error that is not an error?
> >>>>>>>> interface Foo {
> >>>>>>>>   public void m();
> >>>>>>>>   public void n();
> >>>>>>>> }
> >>>>>>>> class Bar implements Foo {
> >>>>>>>>   public void m() {System.out.println("Hello, World!");}
> >>>>>>>> }
> >>>>>>>> class Baz {
> >>>>>>>>   public static void main(String[] args) {
> >>>>>>>>    new Bar().m();
> >>>>>>>>   }
> >>>>>>>> }
> >>>>>>> That code says that Bar implements Foo, but it doesn't. This is an  
> >>>>>>> error.
> >>>>>> Some Java IDEs seem to disagree.
> >>>>> Whether one agrees or not does not affect the truth. According to the
> >>>>> semantics of the language, this code is invalid, and the compiler will
> >>>>> reject it. If the IDE thinks the code is fine, the IDE is wrong.
> >>>> It's obvious what the semantics should be if you want to run this. The
> >>>> error is a purely static complaint of the type checker, but nothing that
> >>>> corresponds to an _actual_ error that could ever occur at runtime. The
> >>>> method in question is never ever called. That's exactly the kind of
> >>>> thing a static type checker cannot determine in general.
> >>> After a deeper reflection I think that this issue has nothing to do
> >>> with the static vs. dymanic typing issue.
> >> Again, the problem I describe occurs in a statically typed language, but
> >> not in a dynamically typed language. So there is a relationship there.
>
> > No.
> > Correlation doesn't mean causation.
>
> Er. So?

So the fact that some feature or problem is present in a statically
typed language and not in a dynamically typed language doesn't imply
that that feature or problem is related to the staticity or dynamicity
of the type system.
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <750l01F14tgcqU1@mid.individual.net>
Vend wrote:
> On 19 Apr, 12:37, Pascal Costanza <····@p-cos.net> wrote:
>> Vend wrote:
>>> On 18 Apr, 12:50, Pascal Costanza <····@p-cos.net> wrote:
>>>> Vend wrote:
>>>>>>>>>>> Could you give an example of a type error that is not an error?
>>>>>>>>>> interface Foo {
>>>>>>>>>>   public void m();
>>>>>>>>>>   public void n();
>>>>>>>>>> }
>>>>>>>>>> class Bar implements Foo {
>>>>>>>>>>   public void m() {System.out.println("Hello, World!");}
>>>>>>>>>> }
>>>>>>>>>> class Baz {
>>>>>>>>>>   public static void main(String[] args) {
>>>>>>>>>>    new Bar().m();
>>>>>>>>>>   }
>>>>>>>>>> }
>>>>>>>>> That code says that Bar implements Foo, but it doesn't. This is an  
>>>>>>>>> error.
>>>>>>>> Some Java IDEs seem to disagree.
>>>>>>> Whether one agrees or not does not affect the truth. According to the
>>>>>>> semantics of the language, this code is invalid, and the compiler will
>>>>>>> reject it. If the IDE thinks the code is fine, the IDE is wrong.
>>>>>> It's obvious what the semantics should be if you want to run this. The
>>>>>> error is a purely static complaint of the type checker, but nothing that
>>>>>> corresponds to an _actual_ error that could ever occur at runtime. The
>>>>>> method in question is never ever called. That's exactly the kind of
>>>>>> thing a static type checker cannot determine in general.
>>>>> After a deeper reflection I think that this issue has nothing to do
>>>>> with the static vs. dymanic typing issue.
>>>> Again, the problem I describe occurs in a statically typed language, but
>>>> not in a dynamically typed language. So there is a relationship there.
>>> No.
>>> Correlation doesn't mean causation.
>> Er. So?
> 
> So the fact that some feature or problem is present in a static typed
> language and not in a dynamic typed language doesn't imply that that
> feature or problem is related to the staticity or dynamicity of the
> type system.

Well, it is. But this thread has run out of steam...


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <b95379c1-4bf6-42e5-9fb7-66d3a39ced98@f19g2000yqo.googlegroups.com>
On 19 Apr, 14:50, Pascal Costanza <····@p-cos.net> wrote:
> Vend wrote:
> > On 19 Apr, 12:37, Pascal Costanza <····@p-cos.net> wrote:
> >> Vend wrote:
> >>> On 18 Apr, 12:50, Pascal Costanza <····@p-cos.net> wrote:
> >>>> Vend wrote:
> >>>>>>>>>>> Could you give an example of a type error that is not an error?
> >>>>>>>>>> interface Foo {
> >>>>>>>>>>   public void m();
> >>>>>>>>>>   public void n();
> >>>>>>>>>> }
> >>>>>>>>>> class Bar implements Foo {
> >>>>>>>>>>   public void m() {System.out.println("Hello, World!");}
> >>>>>>>>>> }
> >>>>>>>>>> class Baz {
> >>>>>>>>>>   public static void main(String[] args) {
> >>>>>>>>>>    new Bar().m();
> >>>>>>>>>>   }
> >>>>>>>>>> }
> >>>>>>>>> That code says that Bar implements Foo, but it doesn't. This is an  
> >>>>>>>>> error.
> >>>>>>>> Some Java IDEs seem to disagree.
> >>>>>>> Whether one agrees or not does not affect the truth. According to the
> >>>>>>> semantics of the language, this code is invalid, and the compiler will
> >>>>>>> reject it. If the IDE thinks the code is fine, the IDE is wrong.
> >>>>>> It's obvious what the semantics should be if you want to run this. The
> >>>>>> error is a purely static complaint of the type checker, but nothing that
> >>>>>> corresponds to an _actual_ error that could ever occur at runtime. The
> >>>>>> method in question is never ever called. That's exactly the kind of
> >>>>>> thing a static type checker cannot determine in general.
> >>>>> After a deeper reflection I think that this issue has nothing to do
> >>>>> with the static vs. dymanic typing issue.
> >>>> Again, the problem I describe occurs in a statically typed language, but
> >>>> not in a dynamically typed language. So there is a relationship there.
> >>> No.
> >>> Correlation doesn't mean causation.
> >> Er. So?
>
> > So the fact that some feature or problem is present in a static typed
> > language and not in a dynamic typed language doesn't imply that that
> > feature or problem is related to the staticity or dynamicity of the
> > type system.
>
> Well, it is.

No, Java is not a model for every statically typed system. In fact,
the Java type system is partly static and partly dynamic.
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <751aniF15dlkbU1@mid.individual.net>
Vend wrote:
> On 19 Apr, 14:50, Pascal Costanza <····@p-cos.net> wrote:
>> Vend wrote:
>>> On 19 Apr, 12:37, Pascal Costanza <····@p-cos.net> wrote:
>>>> Vend wrote:
>>>>> On 18 Apr, 12:50, Pascal Costanza <····@p-cos.net> wrote:
>>>>>> Vend wrote:
>>>>>>>>>>>>> Could you give an example of a type error that is not an error?
>>>>>>>>>>>> interface Foo {
>>>>>>>>>>>>   public void m();
>>>>>>>>>>>>   public void n();
>>>>>>>>>>>> }
>>>>>>>>>>>> class Bar implements Foo {
>>>>>>>>>>>>   public void m() {System.out.println("Hello, World!");}
>>>>>>>>>>>> }
>>>>>>>>>>>> class Baz {
>>>>>>>>>>>>   public static void main(String[] args) {
>>>>>>>>>>>>    new Bar().m();
>>>>>>>>>>>>   }
>>>>>>>>>>>> }
>>>>>>>>>>> That code says that Bar implements Foo, but it doesn't. This is an  
>>>>>>>>>>> error.
>>>>>>>>>> Some Java IDEs seem to disagree.
>>>>>>>>> Whether one agrees or not does not affect the truth. According to the
>>>>>>>>> semantics of the language, this code is invalid, and the compiler will
>>>>>>>>> reject it. If the IDE thinks the code is fine, the IDE is wrong.
>>>>>>>> It's obvious what the semantics should be if you want to run this. The
>>>>>>>> error is a purely static complaint of the type checker, but nothing that
>>>>>>>> corresponds to an _actual_ error that could ever occur at runtime. The
>>>>>>>> method in question is never ever called. That's exactly the kind of
>>>>>>>> thing a static type checker cannot determine in general.
>>>>>>> After a deeper reflection I think that this issue has nothing to do
>>>>>>> with the static vs. dymanic typing issue.
>>>>>> Again, the problem I describe occurs in a statically typed language, but
>>>>>> not in a dynamically typed language. So there is a relationship there.
>>>>> No.
>>>>> Correlation doesn't mean causation.
>>>> Er. So?
>>> So the fact that some feature or problem is present in a static typed
>>> language and not in a dynamic typed language doesn't imply that that
>>> feature or problem is related to the staticity or dynamicity of the
>>> type system.
>> Well, it is.
> 
> No, Java is not a model for every statically typed system. In fact,
> the Java type system is partially statical and partially dynamical.

I never said that Java is a model for every statically typed system.

But in every statically typed language, you can construct code paths 
that won't be executed in particular runs of the program, but that the 
type checker will check for static type errors nevertheless, whether you 
are interested in that code path or not.

You cannot argue against that, because that's a direct consequence of 
the halting problem. The only position you can take is that you can 
design a particular static type system in such a way that this problem 
will bug you less than in others, but you cannot completely remove that 
problem.
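The situation can be made concrete with a tiny sketch, with Python standing in for the dynamic side: the bad branch contains what a static checker would reject as a type error, but since that path is never taken in this run, the program completes normally.

```python
def demo(flag):
    if flag:
        # A type error -- but it only matters if this path is executed.
        return len(42)
    return len("ok")

# The offending branch is never taken in this run of the program.
result = demo(False)
```

A static type checker would refuse to compile the whole function, whether or not the offending path could ever be reached with the program's actual inputs; the dynamic language defers the error to the (possibly never-occurring) execution of that path.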

To a certain extent, my paper was an exercise in finding out whether 
such code paths can come up by accident, without being specifically 
constructed. And I indeed found examples, as the paper shows, and on top 
of that, they lure you into making your programs buggy. These particular 
examples may be Java-specific, but that's a secondary issue, from my 
perspective. I only wanted to show that "static typing" doesn't 
automatically mean "good", and I have shown sufficient evidence to that
effect.

I would take counter examples seriously that show that such code paths 
are very unlikely to occur in specific type systems. But I have never 
seen such counter examples.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <821d2560-77c6-45a8-9522-630bdaf5b1ac@p11g2000yqe.googlegroups.com>
On 19 Apr, 21:01, Pascal Costanza <····@p-cos.net> wrote:
> Vend wrote:
> > On 19 Apr, 14:50, Pascal Costanza <····@p-cos.net> wrote:
> >> Vend wrote:
> >>> On 19 Apr, 12:37, Pascal Costanza <····@p-cos.net> wrote:
> >>>> Vend wrote:
> >>>>> On 18 Apr, 12:50, Pascal Costanza <····@p-cos.net> wrote:
> >>>>>> Vend wrote:
> >>>>>>>>>>>>> Could you give an example of a type error that is not an error?
> >>>>>>>>>>>> interface Foo {
> >>>>>>>>>>>>   public void m();
> >>>>>>>>>>>>   public void n();
> >>>>>>>>>>>> }
> >>>>>>>>>>>> class Bar implements Foo {
> >>>>>>>>>>>>   public void m() {System.out.println("Hello, World!");}
> >>>>>>>>>>>> }
> >>>>>>>>>>>> class Baz {
> >>>>>>>>>>>>   public static void main(String[] args) {
> >>>>>>>>>>>>    new Bar().m();
> >>>>>>>>>>>>   }
> >>>>>>>>>>>> }
> >>>>>>>>>>> That code says that Bar implements Foo, but it doesn't. This is an  
> >>>>>>>>>>> error.
> >>>>>>>>>> Some Java IDEs seem to disagree.
> >>>>>>>>> Whether one agrees or not does not affect the truth. According to the
> >>>>>>>>> semantics of the language, this code is invalid, and the compiler will
> >>>>>>>>> reject it. If the IDE thinks the code is fine, the IDE is wrong.
> >>>>>>>> It's obvious what the semantics should be if you want to run this. The
> >>>>>>>> error is a purely static complaint of the type checker, but nothing that
> >>>>>>>> corresponds to an _actual_ error that could ever occur at runtime. The
> >>>>>>>> method in question is never ever called. That's exactly the kind of
> >>>>>>>> thing a static type checker cannot determine in general.
> >>>>>>> After a deeper reflection I think that this issue has nothing to do
> >>>>>>> with the static vs. dymanic typing issue.
> >>>>>> Again, the problem I describe occurs in a statically typed language, but
> >>>>>> not in a dynamically typed language. So there is a relationship there.
> >>>>> No.
> >>>>> Correlation doesn't mean causation.
> >>>> Er. So?
> >>> So the fact that some feature or problem is present in a static typed
> >>> language and not in a dynamic typed language doesn't imply that that
> >>> feature or problem is related to the staticity or dynamicity of the
> >>> type system.
> >> Well, it is.
>
> > No, Java is not a model for every statically typed system. In fact,
> > the Java type system is partially statical and partially dynamical.
>
> I never said that Java is a model for every statically typed system.

But you focus on it.

> But in every statically typed language, you can construct code paths
> that won't be executed in particular runs of the program, but that the
> type checker will check for static type errors nevertheless, whether you
> are interested in that code path or not.
>
> You cannot argue against that, because that's a direct consequence of
> the halting problem.

Correct.

> The only position you can take is that you can
> design a particular static type system in such a way that this problem
> will bug you less than in others, but you cannot completely remove that
> problem.

I suppose that if you are not interested in some code path, you can
somehow prove that it will never be executed.
For each static type system it's probably possible to construct some
programs that have code paths that you can prove to be impossible
while the type-checking algorithm can't, but I'm a bit skeptical that
this often happens accidentally with a well-designed static type
system.

In a dynamic typing system the error is discovered only if and when
the code path is executed.

Which is more costly? Rejecting, due to a static type error, a program
that could be correct under a dynamic semantics, or allowing it to run
and potentially fail during execution?

I think that in order to write reliable software, early error
detection is generally preferable, even if in some cases it might
generate false positives.

> To a certain extent, my paper was an exercise in finding out whether
> such code paths can come up by accident, without being specifically
> constructed. And I indeed found examples, as the paper shows, and on top
> of that, they lure you into making your programs buggy. These particular
> examples may be Java-specific, but that's a secondary issue, from my
> perspective. I only wanted to show that "static typing" doesn't
> automatically mean "good", and I have shown sufficient evidence to that
> extent.

You picked some Java-specific issues, which aren't necessarily related
to static typing.

> I would take counter examples seriously that show that such code paths
> are very unlikely to occur in specific type systems. But I have never
> seen such counter examples.

How can there be an example of how unlikely something is?
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <751rrqF161itpU1@mid.individual.net>
Vend wrote:
> On 19 Apr, 21:01, Pascal Costanza <····@p-cos.net> wrote:
>> Vend wrote:
>>> On 19 Apr, 14:50, Pascal Costanza <····@p-cos.net> wrote:
>>>> Vend wrote:
>>>>> On 19 Apr, 12:37, Pascal Costanza <····@p-cos.net> wrote:
>>>>>> Vend wrote:
>>>>>>> On 18 Apr, 12:50, Pascal Costanza <····@p-cos.net> wrote:
>>>>>>>> Vend wrote:
>>>>>>>>>>>>>>> Could you give an example of a type error that is not an error?
>>>>>>>>>>>>>> interface Foo {
>>>>>>>>>>>>>>   public void m();
>>>>>>>>>>>>>>   public void n();
>>>>>>>>>>>>>> }
>>>>>>>>>>>>>> class Bar implements Foo {
>>>>>>>>>>>>>>   public void m() {System.out.println("Hello, World!");}
>>>>>>>>>>>>>> }
>>>>>>>>>>>>>> class Baz {
>>>>>>>>>>>>>>   public static void main(String[] args) {
>>>>>>>>>>>>>>    new Bar().m();
>>>>>>>>>>>>>>   }
>>>>>>>>>>>>>> }
>>>>>>>>>>>>> That code says that Bar implements Foo, but it doesn't. This is an  
>>>>>>>>>>>>> error.
>>>>>>>>>>>> Some Java IDEs seem to disagree.
>>>>>>>>>>> Whether one agrees or not does not affect the truth. According to the
>>>>>>>>>>> semantics of the language, this code is invalid, and the compiler will
>>>>>>>>>>> reject it. If the IDE thinks the code is fine, the IDE is wrong.
>>>>>>>>>> It's obvious what the semantics should be if you want to run this. The
>>>>>>>>>> error is a purely static complaint of the type checker, but nothing that
>>>>>>>>>> corresponds to an _actual_ error that could ever occur at runtime. The
>>>>>>>>>> method in question is never ever called. That's exactly the kind of
>>>>>>>>>> thing a static type checker cannot determine in general.
>>>>>>>>> After a deeper reflection I think that this issue has nothing to do
>>>>>>>>> with the static vs. dynamic typing issue.
>>>>>>>> Again, the problem I describe occurs in a statically typed language, but
>>>>>>>> not in a dynamically typed language. So there is a relationship there.
>>>>>>> No.
>>>>>>> Correlation doesn't mean causation.
>>>>>> Er. So?
>>>>> So the fact that some feature or problem is present in a static typed
>>>>> language and not in a dynamic typed language doesn't imply that that
>>>>> feature or problem is related to the staticity or dynamicity of the
>>>>> type system.
>>>> Well, it is.
>>> No, Java is not a model for every statically typed system. In fact,
>>> the Java type system is partially static and partially dynamic.
>> I never said that Java is a model for every statically typed system.
> 
> But you focus on it.

So?

>> But in every statically typed language, you can construct code paths
>> that won't be executed in particular runs of the program, but that the
>> type checker will check for static type errors nevertheless, whether you
>> are interested in that code path or not.
>>
>> You cannot argue against that, because that's a direct consequence of
>> the halting problem.
> 
> Correct.

Exactly.

>> The only position you can take is that you can
>> design a particular static type system in such a way that this problem
>> will bug you less than in others, but you cannot completely remove that
>> problem.
> 
> I suppose that if you are not interested in some code path, you can
> somehow prove that it will never be executed.

Nope. I had the qualifier "in particular runs of the program" in my 
previous posting for a reason. (Of course you could partially evaluate 
the program for a particular kind of input and then try to prove it, but 
that would be very funny.)

> For each static type system it's probably possible to construct some
> programs that have code paths that you can prove to be impossible
> while the type checker algorithm can't, but I'm a bit skeptical that
> with a well designed static type system this often happens
> accidentally.

It's not about proving, it's about happening for particular concrete inputs.
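
Pascal's point can be sketched in Java itself, since a downcast is precisely a deferred, dynamic check. The class and method names below are invented for illustration, not taken from anyone's paper:

```java
public class Paths {
    // The cast is Java's own dynamic check: it fails only if this
    // branch actually executes for the input at hand.
    static String run(int n) {
        Object o = "hello";
        if (n > 100) {                   // path not taken for small n
            Integer i = (Integer) o;     // ClassCastException *if* reached
            return i.toString();
        }
        // Writing `Integer i = o;` (no cast) here would instead be a static
        // type error: javac rejects the whole program, regardless of whether
        // this branch could ever run.
        return "ran fine";
    }

    public static void main(String[] args) {
        System.out.println(run(0));
    }
}
```

For small inputs the program finishes normally; the cast is only checked when its branch executes, while the uncast assignment would be rejected at compile time regardless of reachability.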

> In a dynamic typing system the error is discovered only if and when
> the code path is executed.

Yes.

> What is more costly? Rejecting a program that could be potentially
> correct under a dynamic semantics due to a static type error, or
> allowing it to run and potentially fail during execution?

The former is clearly more costly, in terms of (potentially wasted) 
development time.

> I think that in order to write reliable software, early error
> detection is generally preferable, even if in some cases it might
> generate false positives.

Now you're mixing up type errors and _actual_ errors again. Why do you 
static typers do this all the time?

>> To a certain extent, my paper was an exercise in finding out whether
>> such code paths can come up by accident, without being specifically
>> constructed. And I indeed found examples, as the paper shows, and on top
>> of that, they lure you into making your programs buggy. These particular
>> examples may be Java-specific, but that's a secondary issue, from my
>> perspective. I only wanted to show that "static typing" doesn't
>> automatically mean "good", and I have shown sufficient evidence to that
>> extent.
> 
> You picked some Java-specific issues, which aren't necessarily related
> to static typing.

Well, well...

>> I would take counter examples seriously that show that such code paths
>> are very unlikely to occur in specific type systems. But I have never
>> seen such counter examples.
> 
> How can there be an example of how unlikely something is?

See, that's exactly the problem: Whether static type systems are helpful 
or not is based on the assumption that something is unlikely, which cannot 
even be shown.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <5acf7361-8938-45b2-b4cb-41ff9c66d3c7@h2g2000yqg.googlegroups.com>
On 20 Apr, 01:53, Pascal Costanza <····@p-cos.net> wrote:
> Vend wrote:
> > On 19 Apr, 21:01, Pascal Costanza <····@p-cos.net> wrote:
> >> Vend wrote:
> >>> On 19 Apr, 14:50, Pascal Costanza <····@p-cos.net> wrote:
> >>>> Vend wrote:
> >>>>> On 19 Apr, 12:37, Pascal Costanza <····@p-cos.net> wrote:
> >>>>>> Vend wrote:
> >>>>>>> On 18 Apr, 12:50, Pascal Costanza <····@p-cos.net> wrote:
> >>>>>>>> Vend wrote:
> >>>>>>>>>>>>>>> Could you give an example of a type error that is not an error?
> >>>>>>>>>>>>>> interface Foo {
> >>>>>>>>>>>>>>   public void m();
> >>>>>>>>>>>>>>   public void n();
> >>>>>>>>>>>>>> }
> >>>>>>>>>>>>>> class Bar implements Foo {
> >>>>>>>>>>>>>>   public void m() {System.out.println("Hello, World!");}
> >>>>>>>>>>>>>> }
> >>>>>>>>>>>>>> class Baz {
> >>>>>>>>>>>>>>   public static void main(String[] args) {
> >>>>>>>>>>>>>>    new Bar().m();
> >>>>>>>>>>>>>>   }
> >>>>>>>>>>>>>> }
> >>>>>>>>>>>>> That code says that Bar implements Foo, but it doesn't. This is an  
> >>>>>>>>>>>>> error.
> >>>>>>>>>>>> Some Java IDEs seem to disagree.
> >>>>>>>>>>> Whether one agrees or not does not affect the truth. According to the
> >>>>>>>>>>> semantics of the language, this code is invalid, and the compiler will
> >>>>>>>>>>> reject it. If the IDE thinks the code is fine, the IDE is wrong.
> >>>>>>>>>> It's obvious what the semantics should be if you want to run this. The
> >>>>>>>>>> error is a purely static complaint of the type checker, but nothing that
> >>>>>>>>>> corresponds to an _actual_ error that could ever occur at runtime. The
> >>>>>>>>>> method in question is never ever called. That's exactly the kind of
> >>>>>>>>>> thing a static type checker cannot determine in general.
> >>>>>>>>> After a deeper reflection I think that this issue has nothing to do
> >>>>>>>>> with the static vs. dynamic typing issue.
> >>>>>>>> Again, the problem I describe occurs in a statically typed language, but
> >>>>>>>> not in a dynamically typed language. So there is a relationship there.
> >>>>>>> No.
> >>>>>>> Correlation doesn't mean causation.
> >>>>>> Er. So?
> >>>>> So the fact that some feature or problem is present in a static typed
> >>>>> language and not in a dynamic typed language doesn't imply that that
> >>>>> feature or problem is related to the staticity or dynamicity of the
> >>>>> type system.
> >>>> Well, it is.
> >>> No, Java is not a model for every statically typed system. In fact,
> >>> the Java type system is partially static and partially dynamic.
> >> I never said that Java is a model for every statically typed system.
>
> > But you focus on it.
>
> So?
>
> >> But in every statically typed language, you can construct code paths
> >> that won't be executed in particular runs of the program, but that the
> >> type checker will check for static type errors nevertheless, whether you
> >> are interested in that code path or not.
>
> >> You cannot argue against that, because that's a direct consequence of
> >> the halting problem.
>
> > Correct.
>
> Exactly.
>
> >> The only position you can take is that you can
> >> design a particular static type system in such a way that this problem
> >> will bug you less than in others, but you cannot completely remove that
> >> problem.
>
> > I suppose that if you are not interested in some code path, you can
> > somehow prove that it will never be executed.
>
> Nope. I had the qualifier "in particular runs of the program" in my
> previous posting for a reason.

Yes, and the control software of the Ariane 5 rocket was also correct
for "particular runs of the program".

> (Of course you could partially evaluate
> the program for a particular kind of input and then try to prove it, but
> that would be very funny.)

Or you could properly specify the input type, if the type system is
expressive enough.
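
A minimal sketch of what this might mean, with made-up names: if the input type is declared precisely (here as an enum), values outside it cannot even be constructed, so the "impossible" path needs no runtime check:

```java
public class Inputs {
    // The input type admits exactly the values the function handles.
    enum Command { START, STOP }

    static String dispatch(Command c) {
        switch (c) {
            case START: return "starting";
            case STOP:  return "stopping";
            // Unreachable by construction; present only because javac
            // cannot see that the switch is exhaustive here.
            default:    throw new AssertionError(c);
        }
    }

    public static void main(String[] args) {
        System.out.println(dispatch(Command.START));
    }
}
```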

> > For each static type system it's probably possible to construct some
> > programs that have code paths that you can prove to be impossible
> > while the type checker algorithm can't, but I'm a bit skeptical that
> > with a well designed static type system this often happens
> > accidentally.
>
> It's not about proving, it's about happening for particular concrete inputs.

In general the programmer doesn't know the inputs in advance. Even when
he or she does, it can be difficult to reason about the correctness of a
program for a limited subset of the possible inputs; and even when that
is possible, another programmer working on the code might not know about
the restriction.

> > In a dynamic typing system the error is discovered only if and when
> > the code path is executed.
>
> Yes.
>
> > What is more costly? Rejecting a program that could be potentially
> > correct under a dynamic semantics due to a static type error, or
> > allowing it to run and potentially fail during execution?
>
> The former is clearly more costly, in terms of - potentially wasted -
> development time.

And in terms of program failures, including in future versions?

> > I think that in order to write reliable software, early error
> > detection is generally preferable, even if in some cases it might
> > generate false positives.
>
> Now you're mixing up type errors and _actual_ errors again. Why do you
> static typers do this all the time?

No.

> >> To a certain extent, my paper was an exercise in finding out whether
> >> such code paths can come up by accident, without being specifically
> >> constructed. And I indeed found examples, as the paper shows, and on top
> >> of that, they lure you into making your programs buggy. These particular
> >> examples may be Java-specific, but that's a secondary issue, from my
> >> perspective. I only wanted to show that "static typing" doesn't
> >> automatically mean "good", and I have shown sufficient evidence to that
> >> extent.
>
> > You picked some Java-specific issues, which aren't necessarily related
> > to static typing.
>
> Well, well...

Indeed. Checked exceptions, interfaces and instanceof?

If I remember correctly, Clean doesn't have any of them, and it's much
more statically typed than Java.

> >> I would take counter examples seriously that show that such code paths
> >> are very unlikely to occur in specific type systems. But I have never
> >> seen such counter examples.
>
> > How can there be an example of how unlikely something is?
>
> See, that's exactly the problem: Whether static type systems are helpful
> or not is based on the assumption that something is unlikely, which cannot
> even be shown.

I suppose the same applies to dynamic typing.

Yet, in my opinion, you failed to show valid examples in your paper.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1dlp7vk0frh69.hxn55bqs416p.dlg@40tude.net>
On Mon, 20 Apr 2009 01:53:29 +0200, Pascal Costanza wrote:

> Vend wrote:

>> I think that in order to write reliable software, early error
>> detection is generally preferable, even if in some cases it might
>> generate false positives.

Yes, though considering this case, dead code is obviously an error. So it
is a true positive, falsely attributed.

> Now you're mixing up type errors and _actual_ errors again. Why do you 
> static types do this all the time?

No, it is you who is mixing up intended and program semantics. An ill-typed
program is illegal independently of the programmer's intention. It is wrong
to talk about its execution paths. An illegal program does not have any.

It is *exactly* the same as if it contained a syntax error. If you had a
syntax error in a "path that is not executed", would a dynamically typed
language reject the program, treacherously ignoring the "fact" that the
program is "correct"? Yes, it would. What a pity!

The very question as you posed it is meaningless. An illegal program cannot
be correct or incorrect. It is not a program.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ac677669-3f43-4781-af71-6ffe27739995@c9g2000yqm.googlegroups.com>
On 20 Apr, 11:39, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Mon, 20 Apr 2009 01:53:29 +0200, Pascal Costanza wrote:
> > Vend wrote:
> >> I think that in order to write reliable software, early error
> >> detection is generally preferable, even if in some cases it might
> >> generate false positives.
>
> Yes, though considering this case, dead code is obviously an error. So it
> is a true positive, falsely attributed.

It doesn't have to be dead code.
It can be that, due to conditionals, all portions of the code are
executed at some time, but not in some specific sequence.

This can indeed cause some static type errors that would not result in
dynamic type errors at run time.

But I don't see this being a huge problem arising by accident in practice.
In fact, he had to construct specific examples using abstract methods,
which are a feature specifically provided to enforce some structure on
the source code regardless of its semantics.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <46twocoy0umy.yx096th7xq3x.dlg@40tude.net>
On Mon, 20 Apr 2009 04:44:13 -0700 (PDT), Vend wrote:

> On 20 Apr, 11:39, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> On Mon, 20 Apr 2009 01:53:29 +0200, Pascal Costanza wrote:
>>> Vend wrote:
>>>> I think that in order to write reliable software, early error
>>>> detection is generally preferable, even if in some cases it might
>>>> generate false positives.
>>
>> Yes, though considering this case, dead code is obviously an error. So it
>> is a true positive, falsely attributed.
> 
> It doesn't have to be dead code.
> It can be that, due to conditionals, some portions of code are all
> executed at some time, but not in some specific sequence.
> 
> This can indeed cause some static type errors that would not result in
> dynamic type errors at run time.

That only means that an error has slipped through undetected. A type error
is either present or not. Correctness of a program does not depend on the
states of the program. It cannot be correct in one state (an executed path)
and incorrect in another (a not-yet-executed path). In order to be able to
reason about the correctness of paths, they have to be properly encapsulated
in modules with clear interfaces, and become independent programs in the
end. As long as they aren't, the argument is bogus, because it is unknown
whether they are sufficiently insulated from each other to be reasoned
about separately. Once they become properly separated, the argument remains
bogus, because the module that passed the type check, well, did pass it.

> But I don't see this being a huge problem in practice by accident.
> In fact, he had to construct specific examples using abstract methods,
> which are a feature specifically provided to enforce some structure on
> the source code regardless of its semantics.

I don't see it as a problem. I see it as an advantage of static analysis.
Abstract types are exactly the mechanism for providing partial
implementations, in order to support top-down design (AKA best practice).
This is far more structured and well thought out than having code paths
packed with rubbish, pretending they were occasionally unreachable. Instead
of this mess, in a statically typed language the programmer explicitly
states that there is no implementation of the abstract member. Further, the
compiler will statically check that there is an implementation in every
concrete instance of the type.

P.S. The argument is as ridiculous as implementing integer addition using
multiplication, arguing that it is not an error since 2*2 = 2+2 and these
are the only two numbers the program sums right now.
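
The P.S. made concrete, with an invented class name: "addition" implemented as multiplication is right for the one input pair the program happens to use today, and wrong everywhere else.

```java
public class Coincidence {
    // Wrong in general, but correct for the one pair of inputs
    // the program currently sums.
    static int add(int a, int b) {
        return a * b;
    }

    public static void main(String[] args) {
        System.out.println(add(2, 2)); // coincidentally equal to 2 + 2
        System.out.println(add(2, 3)); // 6, but 2 + 3 is 5: the latent bug
    }
}
```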

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <eddb4dac-71cf-4225-8775-3a9b63323d2c@k8g2000yqn.googlegroups.com>
There are applications where the edit-compile-run sequence itself
is impractical to apply to the whole program system. E.g., it
is impractical to edit/compile/run operating systems. It is
practical to split them into multiple components which can be
modified independently while the OS is running. The reliability of OSes
is enforced with techniques other than static typing. So,
regardless of your opinion and/or the real advantages of static
typing/static interfacing, dynamic systems will continue
to exist and nothing can be done about it.
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <0bd2ec4b-fc88-46a4-a42d-2390140e1c68@m19g2000yqk.googlegroups.com>
On 20 Apr, 15:00, budden <···········@mail.ru> wrote:
> There are applications where edit-compile-run sequence itself
> is impractical to apply to the whole program system.

Static typed languages can have REPLs.

It seems that you people think that static typing vs dynamic typing
means Java vs Common Lisp.

> E.g. it
> is impractical to edit/compile/run operating systems. It is
> practical to split them into multiple components which can be
> modified independently while OS is running. Reliability of OSes
> is enforced with other techniques than static typing. So,
> regardless of your opinion and/or real advantages of static
> typing/static interfacing, dynamic systems will continue
> to exist and nothing can be done to it.

Dynamic what? I think you are missing the point.
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <4a2606b1-e26b-4804-953e-a15043cdb7dc@h2g2000yqg.googlegroups.com>
> Static typed languages can have REPLs.
An extreme example of this is the gdb REPL. Sorry, I can't participate in
this discussion anymore.

> Dynamic what? I think you are missing the point.
If someone is missing the point, it is certainly not you? :) Maybe
you just didn't understand a rather thin analogy? Did you try?
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <55d5c317-8628-4e3d-9e2b-06d0b3d7fcc9@m19g2000yqk.googlegroups.com>
On 21 Apr, 09:17, budden <···········@mail.ru> wrote:
> > Static typed languages can have REPLs.
>
> Extreme example of this is gdb REPL. Sorry, I can't participate in
> this discussion anymore.

F# and Haskell have REPLs.

> > Dynamic what? I think you are missing the point.
>
> If someone is missing the point, this is certainly not you? :) Maybe
> you just didn't understand rather thin analogy? Did you try?

I don't see an analogy.
There are lots of things called dynamic or static.
We were discussing type systems; bringing up unrelated stuff adds
nothing to the discussion.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <10l474998aiya$.159mo5b0wu05f.dlg@40tude.net>
On Mon, 20 Apr 2009 06:00:52 -0700 (PDT), budden wrote:

> There are applications where edit-compile-run sequence itself
> is impractical to apply to the whole program system. E.g. it
> is impractical to edit/compile/run operating systems. It is
> practical to split them into multiple components which can be
> modified independently while OS is running. Reliability of OSes
> is enforced with other techniques than static typing. So,
> regardless of your opinion and/or real advantages of static
> typing/static interfacing, dynamic systems will continue
> to exist and nothing can be done to it.

It seems that you are under the impression that MS-DOS, RSX-11, UNIX,
VxWorks, VMS, Windows (just to name some) were developed in a dynamically
typed language. I assure you they were not, just as a matter of fact,
without drawing any further conclusions.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <55e64863-eff3-41ce-89ec-44313dfe51b1@k38g2000yqh.googlegroups.com>
> It seems that you are under impression that MS-DOS, RSX-11,
> UNIX, VxWorks, VMS, Windows (just to name some) were developed
> in a dynamically typed language. I assure you there were not,
> just as a matter of fact, without drawing any further conclusions.
It is not a fact :)

C is not, strictly speaking, a statically typed language.
Neither is assembly. Neither are shell languages.
These are the three most important systems languages of
modern OSes.

But I wasn't talking about typing; I was talking about dynamic vs.
static environments. When you use an OS, you typically run a
C compiler to alter some parts of your OS installation
(executables). You avoid rebuilding your OS as a whole
and try your best to isolate the kernel, as it is very
inconvenient to reboot the OS on every change. An open-source
operating system consisting of an OS kernel, filesystem,
OS startup scripts, multiple executables and a C compiler
is, as a whole, a dynamic environment, not a static one.
Also you might (or might not) have sources. If you
rebuild all your system from sources, you can
(partially) be sure that the interfaces are compatible.
But building your whole system is not the only option.

How is safety enforced in an operating system?
First, there are tasks. The OS tries its best to isolate
tasks in order to protect the system from one misbehaving
task. This is a dynamic feature, not a static one. Also there
are package managers. You do not check the interface
compatibility of foo 1.4 and bar 1.2 with a compiler.
You just believe they match because there is a record stating
this in the package manager database. Maybe this is wrong,
but this is the real world. So, if you are trying to prove that
dynamic systems are bad/unnecessary, you have, in particular,
to prove that OSes are bad/unnecessary.

This is similar to Common Lisp, where you use SLIME to alter
functions, classes and other parts of your running Lisp image,
without stopping it. An advantage of CL over, say, Unix is that
it is not bound to a filesystem. Common Lisp is not unlike an
operating system running in memory. You can alter many parts
of it while it carries out some useful activity at the same
time (say, serving Web pages).

You seldom stop and rebuild your Lisp image. This is a
contrast with statically linked programs, which you need
to stop, recompile and restart.

Another example of this kind is an SQL server, which
is dynamic in its nature (many parts of the server code
can be redefined on the fly), but is mostly
statically typed.

Sorry, I have no time to write on this anymore.
Hope you'll understand what I mean.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <hvg5w75mr0cg.p7ixuv6lqbyw$.dlg@40tude.net>
On Mon, 20 Apr 2009 08:28:48 -0700 (PDT), budden wrote:

> C is not, strictly speaking, a statically typed language.

It is. You are confusing strong and weak typing.

> Neither is assembly. Neither are shell languages.
> There are three most important system languages of
> modern OSes.

None of them is dynamically typed.
 
> But I didn't say about typing, I said about dynamic/vs
> static environment.

Dynamics do not imply sloppiness. What is the environment of what? What are
you talking about, guys? The arguments you make are based on swapping
objects and subjects.

OSes are not developed using dynamically typed languages. They do not
reinterpret interfaces of the drivers, tasks, processes, services. On the
contrary these interfaces are exactly specified and almost never changed.
One adds another layer of bindings instead (like POSIX etc). Same is true
for the hardware interfaces.

What was the point?

> How safety is enforsed in an operating system?
> First, there are tasks. OS tries its best to isolate
> tasks in order to protect system from one misbehaving
> task. This is a dynamic feature, not static.

What you are describing is not safety, it is security. What "dynamic
feature" might mean, I can only guess. Alternating currents in the CPU?
(:-))

> So, if you trying to prove that
> dynamic systems are bad/unnecessary, you have, in particular,
> to prove that OSes are bad/unnecessary.

Rubbish. I am trying to convey a rather obvious fact: that the idea of
postponing a check which can be done immediately is nonsense.

> You seldom stop and rebuild your lisp image. This is a
> contrast with statically linked programs which you need
> to stop, compile and restart it.

Nope. Almost every compiled language can be interpreted. If you care
about patching the program code while debugging it, there is no
technical problem with that. There are not as many IDEs that support this
as there were before, simply because there is no need for chaotic
modifications in the blind hope that it would finally run this time.

> Other example of this kind is an SQL Server, which
> is dynamic in its nature (many parts of server code
> can be redefined on the fly), but it is mostly
> statically-typed.

That is exactly why SQL is such a big pain in the ass.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: ··················@gmail.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <98e7643d-ff00-415f-9dda-81f1446bd1b0@s28g2000vbp.googlegroups.com>
On Apr 20, 1:08 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Mon, 20 Apr 2009 08:28:48 -0700 (PDT), budden wrote:
> > C is not, strictly speaking, a statically typed language.
>
> It is. You are confusing strong and weak typing.
>

K.

> > Neither is assembly. Neither are shell languages.
> > There are three most important system languages of
> > modern OSes.
>
> None of them is dynamically typed.
>

Assembly + Shell don't seem to be statically typed either.

> > But I didn't say about typing, I said about dynamic/vs
> > static environment.
>
> Dynamics do not imply sloppiness. What is the environment of what? What are
> you talking about guys? The arguments you make are based on swapping
> objects and subjects.
>

Dynamic, i.e., things can be rebound, I believe.

> OSes are not developed using dynamically typed languages. They do not
> reinterpret interfaces of the drivers, tasks, processes, services. On the
> contrary these interfaces are exactly specified and almost never changed.
> One adds another layer of bindings instead (like POSIX etc). Same is true
> for the hardware interfaces.
>
> What was the point?
>

I think it was an analogy.

> > How safety is enforced in an operating system?
> > First, there are tasks. OS tries its best to isolate
> > tasks in order to protect system from one misbehaving
> > task. This is a dynamic feature, not static.
>
> What you are describing is not safety, it is security. What "dynamic
> feature" might mean, I can only guess. Alternating currents in the CPU?
> (:-))
>

Right, but reallocation is a dynamic process. I don't restart my
entire computer just because I want to close Firefox.

> > So, if you trying to prove that
> > dynamic systems are bad/unnecessary, you have, in particular,
> > to prove that OSes are bad/unnecessary.
>
> Rubbish. I am trying to convey a rather obvious fact, that the idea to
> postpone a check which can be done immediately is a nonsense.
>

Okay sure, but when is immediately? Is immediately when I compile or
when I run the code?

If you think of the functions within your program as a bunch of mini
programs, dynamic typing makes sense. You don't always know what
inputs will be put through your mini programs (just as you don't always
know exactly what your user will do with a big program).

Do you write your C program so that it catches and rejects bad inputs
or do you write your C code so that it accepts bad inputs and then
crashes? (I hope it is the former).

Think of a lisp function as a C program with this input catching and
rejecting built in.
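
A rough Java rendering of that analogy (the names are invented): the function inspects the tags of its inputs at entry and rejects bad ones itself, instead of having a compiler rule them out beforehand.

```java
public class DynAdd {
    // A "dynamically typed" addition: any Object is accepted at the
    // boundary, and the tag check happens at entry, at run time.
    static Object add(Object a, Object b) {
        if (!(a instanceof Integer) || !(b instanceof Integer))
            throw new IllegalArgumentException("add expects two integers");
        return (Integer) a + (Integer) b;
    }

    public static void main(String[] args) {
        System.out.println(add(1, 2));
    }
}
```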

> > You seldom stop and rebuild your lisp image. This is a
> > contrast with statically linked programs which you need
> > to stop, compile and restart it.
>
> Nope. Almost every compiled language can be interpreted. If you do care
> about to patching the program code while debugging it, there is no
> technical problem with that. There is not that many IDEs that support this
> than there were before, just because there is no need in chaotic
> modifications in a blind hope that it would finally run this time.
>

Nope. Lisp 'image'. You may have missed the boat.
You are confusing chaotic modifications with incremental modifications
(like the scientific method).
If I were doing it chaotically, I'd probably never get anywhere.

> > Other example of this kind is an SQL Server, which
> > is dynamic in its nature (many parts of server code
> > can be redefined on the fly), but it is mostly
> > statically-typed.
>
> That is exactly why SQL is such a big pain in ass.
>
> --
> Regards,
> Dmitry A. Kazakov
> http://www.dmitry-kazakov.de

Would it help you to think of a dynamically typed language as a set of
bindings on top of a statically typed language that automatically
catches and rejects bad inputs?
(Then all you have to manage is thinking of functions in your program
as smaller subprograms of your program).
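
That suggestion can be sketched as a tagged wrapper over the static types, with a check at each use site; the `Dyn` class and its methods are invented for illustration.

```java
public class Dyn {
    // A dynamically typed value: one static type (Object) plus a
    // runtime tag, checked whenever the value is actually used.
    private final Object value;

    Dyn(Object v) { value = v; }

    int asInt() {
        if (!(value instanceof Integer))
            throw new ClassCastException(
                "expected int, got " + value.getClass().getSimpleName());
        return (Integer) value;
    }

    public static void main(String[] args) {
        System.out.println(new Dyn(41).asInt() + 1);
    }
}
```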
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <msegvh50wr2i.52b93xm0xo73.dlg@40tude.net>
On Mon, 20 Apr 2009 11:47:03 -0700 (PDT), ··················@gmail.com
wrote:

> On Apr 20, 1:08 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> On Mon, 20 Apr 2009 08:28:48 -0700 (PDT), budden wrote:

>>> Neither is assembly. Neither are shell languages.
>>> There are three most important system languages of
>>> modern OSes.
>>
>> None of them is dynamically typed.
> 
> Assembly + Shell don't seem to be statically typed either.

Nevertheless, the argument that OSes are developed using dynamically
typed languages is wrong.

None of the modern OSes was written in assembly. UNIX was written in C,
VMS in BLISS (if I remember correctly), Windows in C, if not in VB (:-))

>>> But I didn't say about typing, I said about dynamic/vs
>>> static environment.
>>
>> Dynamics do not imply sloppiness. What is the environment of what? What are
>> you talking about guys? The arguments you make are based on swapping
>> objects and subjects.
> 
> Dynamic i.e. things can be rebound i believe.

A statically typed variable can be rebound to another value, very
dynamically:

   int X = 1;
   X = 2;

Widening the context makes no sense. You can find something "static" and
something "dynamic" everywhere.

>>> How safety is enforced in an operating system?
>>> First, there are tasks. OS tries its best to isolate
>>> tasks in order to protect system from one misbehaving
>>> task. This is a dynamic feature, not static.
>>
>> What you are describing is not safety, it is security. What "dynamic
>> feature" might mean, I can only guess. Alternating currents in the CPU?
>> (:-))
>>
> 
> Right, but reallocation is a dynamic process. I don't restart my
> entire computer because I wanted to close fire fox.

No. And it has nothing to do with the language of the OS. Firefox is not a
part of the OS. The interface between Firefox and the OS is by no means
"dynamic." Take the socket library: its interface is statically typed, it
is written in a statically typed language, and so are Firefox and the OS.

>>> So, if you trying to prove that
>>> dynamic systems are bad/unnecessary, you have, in particular,
>>> to prove that OSes are bad/unnecessary.
>>
>> Rubbish. I am trying to convey a rather obvious fact, that the idea to
>> postpone a check which can be done immediately is a nonsense.
> 
> Okay sure, but when is immediately? Is immediately when I compile or
> when I run the code?

The first time the program comes into existence.

> If you think of functions within your program as being a bunch of mini
> programs, dynamic typing makes sense.

Nope:

1. It is too low-level an abstraction. Naked procedural decomposition is not
the way I design programs.

2. What are the types of the arguments? Functions are meaningless without a
definition of them.

3. In order to define types (like int) you have to define the operations on
them (like +).

4. What is the difference between a function taking an int and an operation
defined on int? There is none.

> You don't always know what
> inputs will be put through your mini programs (as how you don't always
> know exactly what your user will do with a big program).
>
> Do you write your C program so that it catches and rejects bad inputs
> or do you write your C code so that it accepts bad inputs and then
> crashes? (I hope it is the former).

You are confusing bugs and exceptional states. I don't need to check whether
an input of sine is a string, because its type is specified. That would be a
bug, and this bug is detected by the compiler and the program is rendered
illegal. These are the *preconditions*, the context in which the program is
considered. You don't need to check them. Likewise you do not need to check
whether somebody is shooting at the computer with a machine gun. It is not
the program's business. Divide and conquer is a basic principle of
engineering.

A totally different case is when an input value of sine (= the domain
value) is such that the result may not be evaluated with the required
accuracy, before the deadline, etc. Then the result is not a crash but an
exception propagation. Note that this is not a bug; it is a defined and required
behavior. The contract of sine reads: for *any* input the result is either
valid (accuracy specified) or else Numeric_Error exception is propagated.
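A contract of that shape can be sketched in Python. This is a hedged
illustration only: `checked_sine`, `NumericError`, and the particular
accuracy cutoff are invented for the example, not anyone's actual library.

```python
import math

class NumericError(ArithmeticError):
    """Propagated when sine cannot deliver the required accuracy."""

def checked_sine(x: float) -> float:
    # The contract from the post: for *any* float input the result is
    # either a valid sine value or NumericError is propagated. A string
    # argument, by contrast, would be a bug excluded by the precondition
    # (the type), not an exceptional state handled here.
    if math.isnan(x) or math.isinf(x):
        raise NumericError("sine undefined for NaN/inf")
    # For huge magnitudes, argument reduction loses all accuracy
    # (the 2**53 cutoff is an illustrative choice).
    if abs(x) > 2**53:
        raise NumericError("accuracy not guaranteed for |x| this large")
    return math.sin(x)
```

The point of the sketch is the contract's totality: every float input has a
defined outcome, so propagation of `NumericError` is specified behavior,
not a bug.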

>>> You seldom stop and rebuild your lisp image. This is a
>>> contrast with statically linked programs which you need
>>> to stop, compile and restart it.
>>
>> Nope. Almost every compiled language can be interpreted. If you do care
>> about to patching the program code while debugging it, there is no
>> technical problem with that. There is not that many IDEs that support this
>> than there were before, just because there is no need in chaotic
>> modifications in a blind hope that it would finally run this time.
> 
> Nope. Lisp 'image'. You maybe missed the boat.
> You are confusing chaotic modifications with incremental modifications
> (like scientific method).
> If I were doing it chaotically I'd probably never get anywhere.

Maybe because there is always "somewhere", especially in a language so
tolerant of errors. How do you know that your sine does not return your
postal address?

> Would it help you to think of a dynamically typed language as a set of
> bindings on top of a statically typed language that automatically
> catches and rejects bad inputs?

No, dynamic typing is quite easy to define without references to some
magical bad inputs. After all, it is a wrong model. The right one is that
"bad" inputs are considered correct, with the output defined in accordance
with the behavior of "rejecting", e.g. as exception propagation or a call
to "message not understood" etc. In connection to types it is called
dynamic polymorphism, AKA dispatch.
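That model, in which the "bad" input gets a defined "message not understood"
outcome rather than an ad-hoc rejection, can be sketched in Python. The
class and exception names are invented for the illustration; Smalltalk's
doesNotUnderstand: is the inspiration, not Python's normal default.

```python
class MessageNotUnderstood(Exception):
    """The *defined* outcome for an unknown message -- not a crash."""

class Receiver:
    def greet(self):
        return "hello"

    def __getattr__(self, name):
        # Called only when normal lookup fails: any unknown message is
        # "considered correct" in the sense above, with its output defined
        # as propagating MessageNotUnderstood.
        def handler(*args, **kwargs):
            raise MessageNotUnderstood(name)
        return handler

r = Receiver()
```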

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: ··················@gmail.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <08ef37f2-b070-4c4c-ba15-468222bb852b@y9g2000yqg.googlegroups.com>
On Apr 20, 4:18 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Mon, 20 Apr 2009 11:47:03 -0700 (PDT), ··················@gmail.com
> wrote:
>
> > On Apr 20, 1:08 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> > wrote:
> >> On Mon, 20 Apr 2009 08:28:48 -0700 (PDT), budden wrote:
> >>> Neither is assembly. Neither are shell languages.
> >>> There are three most important system languages of
> >>> modern OSes.
>
> >> None of them is dynamically typed.
>
> > Assembly + Shell don't seem to be statically typed either.
>
> Nevertheless the argument that OSes are developed using dynamic typing
> languages is wrong.
>

It's neither right nor wrong.

> None of the modern OSes was written in Assembly. UNIX was in C, VMS was in
> BLISS (if I correctly remember), Windows was C, if not in VB (:-))
>

This speaks more to the availability of C compilers 30 years ago than to
the properties of a given type system.

You're not going to argue that my OS is type safe because it was
written in C?

> >>> But I didn't say about typing, I said about dynamic/vs
> >>> static environment.
>
> >> Dynamics do not imply sloppiness. What is the environment of what? What are
> >> you talking about guys? The arguments you make are based on swapping
> >> objects and subjects.
>
> > Dynamic i.e. things can be rebound i believe.
>
> A statically typed variable can be rebound to another value, very
> dynamically:
>
>    int X = 1;
>    X = 2;
>
> Widening the context makes no sense. You can find something "static" and
> something "dynamic" everywhere.
>

Narrowing the context makes just as little sense.
What is the real difference between a variable and a type flag?

> >>> How safety is enforced in an operating system?
> >>> First, there are tasks. OS tries its best to isolate
> >>> tasks in order to protect system from one misbehaving
> >>> task. This is a dynamic feature, not static.
>
> >> What you are describing is not safety, it is security. What "dynamic
> >> feature" might mean, I can only guess. Alternating currents in the CPU?
> >> (:-))
>
> > Right, but reallocation is a dynamic process. I don't restart my
> > entire computer because I wanted to close fire fox.
>
> No. And it has nothing to do with the language of OS. Firefox is not a part
> of OS. The interface between Firefox and OS is by no mean "dynamic." Take
> the socket library, its interface is statically typed, it is written in a
> statically typed language and Firefox and OS too.
>

I agree it doesn't have anything to do with the language of the OS.
The OS ends up with an API all of its own which is written in a
language (and is in turn a language in and of itself, no?)

By no means dynamic? Consider not the interface between Fire Fox and
the OS but between any old program and the OS. Seems pretty darn
dynamic to me.

> >>> So, if you trying to prove that
> >>> dynamic systems are bad/unnecessary, you have, in particular,
> >>> to prove that OSes are bad/unnecessary.
>
> >> Rubbish. I am trying to convey a rather obvious fact, that the idea to
> >> postpone a check which can be done immediately is a nonsense.
>
> > Okay sure, but when is immediately? Is immediately when I compile or
> > when I run the code?
>
> The first time the program comes to existence.
>
> > If you think of functions within your program as being a bunch of mini
> > programs, dynamic typing makes sense.
>
> Nope:
>
> 1. It is a too low-level abstraction. Naked procedural decomposition is not
> the way I a design programs.
>

Procedural decomposition is neither a high-level nor a low-level
abstraction; it is simply an abstraction.

> 2. What are the types of the arguments? Functions are meaningless without a
> definition of.
>

You'll have to complete your sentence for me to understand it. I will
forge on anyway.

A function isn't meaningless without a definition of the types of
arguments, it merely doesn't have the types of its arguments defined.
(Meaning it will accept what it accepts).

> 3. In order to define types (like int) you have to define the operations of
> (like +).
>

Butter side down, thanks.

> 4. What is the difference between a function taking int and an operation
> defined on int? There is none.
>

How is this relevant to what my compiler spits out when I run something
through it?

> > You don't always know what
> > inputs will be put through your mini programs (as how you don't always
> > know exactly what your user will do with a big program).
>
> > Do you write your C program so that it catches and rejects bad inputs
> > or do you write your C code so that it accepts bad inputs and then
> > crashes? (I hope it is the former).
>
> You are confusing bugs and exceptional states. I don't need to check if an
> input of sine were a string, because its type is specified. That's were a bug,
> and this bug is detected by the compiler and the program is rendered
> illegal. These are the *preconditions*, a context the program is
> considered. You don't need to check it. Likewise you do not need to check
> if somebody is shooting at the computer with the machine gun. It is not the
> program's business. Divide and conquer is basic principle of engineering.
>

I'm not confusing bugs and exceptional states; in fact, at no point did I
mention either.
I'm still in analogy mode where the C program is compared to something
like a lisp function (the lisp function being contained within a lisp
environment... the c program being contained within an operating
system, the relation being analogous).

> A totally different case is when an input value of sine (= the domain
> value) is such that the result may not be evaluated with the required
> accuracy, before the deadline etc. Then the result is not a crash but an
> exception propagation. Note that is not a bug, it is a defined and required
> behavior. The contract of sine reads: for *any* input the result is either
> valid (accuracy specified) or else Numeric_Error exception is propagated.
>

> >>> You seldom stop and rebuild your lisp image. This is a
> >>> contrast with statically linked programs which you need
> >>> to stop, compile and restart it.
>
> >> Nope. Almost every compiled language can be interpreted. If you do care
> >> about to patching the program code while debugging it, there is no
> >> technical problem with that. There is not that many IDEs that support this
> >> than there were before, just because there is no need in chaotic
> >> modifications in a blind hope that it would finally run this time.
>
> > Nope. Lisp 'image'. You maybe missed the boat.
> > You are confusing chaotic modifications with incremental modifications
> > (like scientific method).
> > If I were doing it chaotically I'd probably never get anywhere.
>
> Maybe because there is always "somewhere", especially in a language so
> tolerant to errors. How do you know if your sine does not return your
> postal address?
>

I'll have to refer you to G.E. Moore for this one.

> > Would it help you to think of a dynamically typed language as a set of
> > bindings on top of a statically typed language that automatically
> > catches and rejects bad inputs?
>
> No, dynamic typing is quite easy to define without references to some
> magical bad inputs. After all, it is a wrong model.

What is a wrong model? I suggest you more strongly type your pronouns.
I never gave you a model, I gave you analogy.

> The right one is that
> "bad" inputs are considered correct, with the output defined in accordance
> with the behavior of "rejecting", e.g. as exception propagation or a call
> to "message not understood" etc. In connection to types it is called
> dynamic polymorphism, AKA dispatch.

The right one what? The right model of dynamic typing? That sounds a
lot like dynamic typing. If that's what you are trying to get across,
then no shit.
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090421073844.GW4558@gildor.inglorion.net>
Responding to various things I've seen flying about ...

On Mon, Apr 20, 2009 at 06:25:24PM -0700, ··················@gmail.com wrote:
> On Apr 20, 4:18 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
> > On Mon, 20 Apr 2009 11:47:03 -0700 (PDT), ··················@gmail.com
> > wrote:
> >
> > > On Apr 20, 1:08 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> > > wrote:
> > >> On Mon, 20 Apr 2009 08:28:48 -0700 (PDT), budden wrote:
> > >>> Neither is assembly. Neither are shell languages.
> > >>> There are three most important system languages of
> > >>> modern OSes.
> >
> > >> None of them is dynamically typed.
> >
> > > Assembly + Shell don't seem to be statically typed either.

I would say assembly language is untyped. So, indeed, that's neither 
statically nor dynamically typed. There simply isn't any concept of 
types, and, accordingly, no type checking.

> > Nevertheless the argument that OSes are developed using dynamic typing
> > languages is wrong.

I agree, but I don't think it matters. There is a small part of an 
operating system that needs to be developed using unsafe languages, and 
the rest can be written in basically whatever language you want with 
whatever type discipline you want. Perhaps current operating systems 
have overwhelmingly been written in statically typed, type-unsafe 
languages, but what is that really supposed to tell us? For one thing, I 
don't think it has any bearing on whether dynamic typing necessarily 
makes one more productive than static typing.

> > None of the modern OSes was written in Assembly. UNIX was in C, VMS was in
> > BLISS (if I correctly remember), Windows was C, if not in VB (:-))
> >
> 
> This speaks more to the availability of C compilers 30 years ago than
> to, the properties of a given type system.

Not quite correct. Unix wasn't written in C because C was what was 
available. Rather, C was developed so that Unix could be written in it.

> You're not going to argue that my OS is type safe because it was
> written in C?

That would be foolish, because C is not type safe.

> By no means dynamic? Consider not the interface between Fire Fox and
> the OS but between any old program and the OS. Seems pretty darn
> dynamic to me.

The interesting question here is "which interface?" Generally, however, 
interfaces are boundaries where the control of the type checker stops, 
and which are therefore not type-safe. The best you can do to get type 
safety is introduce some run-time checks on the data you are being 
passed.

For example, consider the command line interface to a program, 
communication over sockets, or even, really, the binary interface to 
code you link with. In all cases, you are interacting with something 
outside your and your type checker's control, which might break your 
expectations (with respect to types). There is not even any guarantee 
that what you get will be in a format you will understand.

> > >>> So, if you trying to prove that
> > >>> dynamic systems are bad/unnecessary, you have, in particular,
> > >>> to prove that OSes are bad/unnecessary.
> >
> > >> Rubbish. I am trying to convey a rather obvious fact, that the idea to
> > >> postpone a check which can be done immediately is a nonsense.

I think people have made some good arguments for why you sometimes do 
want to postpone some checks. As far as I can see, everybody in the 
discussion has agreed that you want to catch as many errors as you can 
before something enters production, but catching errors during 
development is less clear-cut. Indeed, I can see a point in being able 
to change one function, then test it, before having to update everything 
that needs to be changed as a result.

For example, consider you have something that once could have been 
represented with a simple data type already provided by your language, 
but now you want to add some attributes to it. This is a situation I 
have encountered a number of times. Under dynamic typing, you can change 
and test your functions one at a time. Under static typing, you would 
have to change everything that depended on the new type before you could 
test at all.
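A sketch of that migration scenario in Python (all names invented for the
illustration): a plain number grows attributes, and functions can be changed
and tested one at a time because un-migrated callers keep working.

```python
# Originally a temperature was a bare float; now we want to attach a unit.

class Temperature:
    """The new, attribute-carrying representation."""
    def __init__(self, value, unit="C"):
        self.value = value
        self.unit = unit

    # Supporting the old numeric protocol lets un-migrated functions
    # that still expect a plain number continue to work.
    def __float__(self):
        return float(self.value)

def legacy_freezing(t):
    # Un-migrated function: written when t was a plain float.
    # It still accepts both the old and the new representation.
    return float(t) <= 0.0

def new_report(t):
    # Migrated function: already uses the new attributes.
    return f"{t.value}{t.unit}"
```

Under static typing, changing the type of the temperature would force every
use site to be updated before anything compiles; here each function can be
migrated and tested independently.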

> > 2. What are the types of the arguments? Functions are meaningless without a
> > definition of.
> >
> 
> You'll have to complete your sentence for me to understand it. I will
> forge on anyway.

Please forgive Dmitry's difficulties with the English language. He means 
"without a definition of those. (the types of the arguments)"

> A function isn't meaningless without a definition of the types of
> arguments, it merely doesn't have the types of its arguments defined.
> (Meaning it will accept what it accepts).

But this is actually a type that can be checked. A function accepts an 
argument of a certain type if and only if all operations the function 
performs on that argument are defined for the argument's type.
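In Python terms this is duck typing: the "type" Bob describes is exactly the
set of operations the function body performs. A minimal illustration (the
function name is invented):

```python
def halve_all(xs):
    # The only operations performed on xs are iteration and division of
    # its elements by 2, so xs is "accepted" exactly when those operations
    # are defined: lists of ints, tuples of floats, generators of
    # Fractions, and so on.
    return [x / 2 for x in xs]
```

A string is iterable, so it passes the first operation, but `"a" / 2` is
undefined, so the implicit type check fails at the point of use.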

> > 3. In order to define types (like int) you have to define the operations of
> > (like +).
> >
> 
> Butter side down, thanks.
> 
> > 4. What is the difference between a function taking int and an operation
> > defined on int? There is none.
> >
> 
> How is this relevant to what my compiler spits out when I run a
> something through it?

If I understand Dmitry correctly, he's arguing that you can't see types 
and operations/functions (which are the same thing) separately from each 
other. I agree: as far as I am concerned, a type is _determined_ by the 
operations that can be applied to it, at least for the purpose of type 
checking.

> > >> Nope. Almost every compiled language can be interpreted.

There are no "compiled languages" or "interpreted languages". There are 
languages and implementations, and any language can be implemented by an 
interpreter.

> > > Would it help you to think of a dynamically typed language as a set of
> > > bindings on top of a statically typed language that automatically
> > > catches and rejects bad inputs?
> >
> > No, dynamic typing is quite easy to define without references to some
> > magical bad inputs. After all, it is a wrong model.

You keep saying that, but that doesn't make you right. Perhaps it is not 
the best model, but "wrong" is a bit strong. 

In fact, automatically catching and rejecting bad inputs (meaning bad w.r.t.
typing) at run time is exactly what dynamic typing does. You may prefer 
to prove before run time that bad inputs cannot occur, but that can't 
always be done. At that point, you have three options: rejecting the 
program (static typing), deferring the check until run time (dynamic 
typing), or continuing without checking (no typing).
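A case where the check genuinely cannot be completed before run time,
sketched in Python. The mini-parser is a hypothetical stand-in for any
external input whose type is only known once the program runs.

```python
def parse(text):
    # Hypothetical mini-parser for the illustration: digit strings become
    # ints, everything else stays a string. What comes back depends on
    # data available only at run time.
    try:
        return int(text)
    except ValueError:
        return text

def add_parsed(a_text, b_text):
    a, b = parse(a_text), parse(b_text)
    # The dynamic-typing option: defer the check to the moment of use.
    # Compatible operands succeed; incompatible ones signal a
    # well-defined error here, rather than the program being rejected
    # up front or continuing unchecked.
    return a + b
```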

> > The right one is that "bad" inputs are considered correct, with the 
> > output defined in accordance with the behavior of "rejecting", e.g. 
> > as exception propagation or a call to "message not understood" etc. 
> > In connection to types it is called dynamic polymorphism, AKA 
> > dispatch.
> 
> The right one what? The right model of dynamic typing? That sounds a
> lot like dynamic typing. If that's what you are trying to get across,
> then no shit.

If I understand this correctly, it's just a matter of definition. You 
can say that

1 + "foo"

is incorrect and will (under dynamic typing) signal an error at run 
time, or you can say it is correct and that its behavior is to signal an 
error at run time. Where "signal an error" is the same thing in both 
cases, and is defined by the language or the implementation as 
appropriate (e.g. it may raise an exception, abort the program, start 
the debugger, etc.)

In both cases, the behavior is exactly the same. The only thing that has 
changed is whether you call the program incorrect (because it causes an 
error to be signalled) or correct (because its behavior is 
well-defined). Personally, I prefer the former.

Regards,

Bob

-- 
To use 'to use' to mention 'to mention' is a mistaken use of 'to use',
not to mention 'to mention'.

	-- Lenny Clapp


From: ·············@gmx.at
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <21f5acd6-0e62-48f0-851a-fdee1b14d767@h2g2000yqg.googlegroups.com>
On 21 Apr., 09:38, Robbert Haarman <··············@inglorion.net>
wrote:
> If I understand this correctly, it's just a matter of definition. You
> can say that
>
> 1 + "foo"
>
> is incorrect and will (under dynamic typing) to signal an error at run
> time, or you can say it is correct and that its behavior is to signal an
> error at run time. Where "signal an error" is the same thing in both
> cases, and is defined by the language or the implementation as
> appropriate (e.g. it may raise an exception, abort the program, start
> the debugger, etc.)
>
> In both cases, the behavior is exactly the same. The only thing that has
> changed is whether you call the program incorrect (because it causes an
> error to be signalled) or correct (because its behavior is
> well-defined). Personally, I prefer the former.

This reminds me of my efforts to combine dynamic and static
dispatch/typing. The predecessor language of Seed7 (named HAL) used
dynamic typing and dynamic dispatch everywhere. This had many
advantages (I used HAL to describe the logic of text adventure
games). But I missed the possibility to find type errors with the
help of static type checking. I was thinking over the problem
to combine static and dynamic typing for a long time and I came to
the following conclusions:

 - Everything (static POD objects and dynamic objects) needs a type
   (which might be implicit).

 - Some types are statically dispatched and other types are
   dynamically dispatched (whether a type is statically or
   dynamically dispatched needs to be defined explicitly).

 - Dynamic variables (parameters, results, ...) have a dynamic type
   and refer to a value which has a static type (Instead of using
   the term 'class of an object' the term 'type of a value' can be
   used). This is a two-level concept where a dynamic (interface-)
   type and a static (implementation-) type work together.

 - The relationship between interface and implementation type must
   be defined explicitly.

 - The static type system must surround the dynamic type system.
   E.g.: When the program is compiled a static type checking takes
   place and some constructs allow gateways to dynamic typing.

 - The gateways from static to dynamic typing must be defined
   explicitly. E.g.: This needs function headers which will be
   dispatched dynamically. This way the static type check will
   accept programs as long as it can find a corresponding function
   or gateway.

Some of these conclusions seem arbitrary, but this is not the case:
For a long time I had the idea that the possibility of a dynamic
dispatch should be recognized automatically. But this turned out
to be impossible: you can either have implicit dynamic dispatch or
static type checking, but not both. The introduction of the interface
(DYNAMIC) functions resolved the issue. When you specify the
function headers for which a dynamic dispatch should be used, all
conflicts in the concept go away.
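Seed7's own syntax aside, the gateway idea has a rough analogue in Python's
functools.singledispatch (an analogy only, not Seed7 itself): the
@singledispatch header plays the role of the one explicitly declared
DYNAMIC function through which dispatch happens, while every registered
implementation remains an ordinary, statically known function.

```python
from functools import singledispatch

@singledispatch
def describe(obj):
    # The "gateway": the single declared header that is dispatched
    # dynamically. Types with no registered implementation get a
    # well-defined failure.
    raise NotImplementedError("no implementation registered")

@describe.register(int)
def _(obj):
    return f"integer {obj}"

@describe.register(str)
def _(obj):
    return f"string {obj!r}"
```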

These conclusions went into the design of Seed7:

 - A dynamic type is called interface type.

 - Classes are defined as static types with the 'struct' construct.

 - The (function header) gateways are the 'DYNAMIC' definitions.

The details can be found here:

  http://seed7.sourceforge.net/manual/objects.htm

I don't say that this concept is perfect but it is IMHO a good
compromise.

It would be nice to get some response.

Greetings Thomas Mertes

Seed7 Homepage:  http://seed7.sourceforge.net
Seed7 - The extensible programming language: User defined statements
and operators, abstract data types, templates without special
syntax, OO with interfaces and multiple dispatch, statically typed,
interpreted or compiled, portable, runs under linux/unix/windows.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1jtkaktm4x1fj$.1kx8haqxneswx$.dlg@40tude.net>
On Tue, 21 Apr 2009 07:00:15 -0700 (PDT), ·············@gmx.at wrote:

> On 21 Apr., 09:38, Robbert Haarman <··············@inglorion.net>
> wrote:
>> If I understand this correctly, it's just a matter of definition. You
>> can say that
>>
>> 1 + "foo"
>>
>> is incorrect and will (under dynamic typing) to signal an error at run
>> time, or you can say it is correct and that its behavior is to signal an
>> error at run time. Where "signal an error" is the same thing in both
>> cases, and is defined by the language or the implementation as
>> appropriate (e.g. it may raise an exception, abort the program, start
>> the debugger, etc.)
>>
>> In both cases, the behavior is exactly the same. The only thing that has
>> changed is whether you call the program incorrect (because it causes an
>> error to be signalled) or correct (because its behavior is
>> well-defined). Personally, I prefer the former.
> 
> This reminds me to my efforts to combine dynamic and static
> dispatch/typing. The predecessor language of Seed7 (named HAL) used
> dynamic typing and dynamic dispatch everywhere. This had many
> advantages (I used HAL to describe the logic of text adventure
> games). But I missed the possibility to find type errors with the
> help of static type checking. I was thinking over the problem
> to combine static and dynamic typing for a long time and I came to
> the following conclusions:
> 
>  - Everything (static POD objects and dynamic objects) needs a type
>    (which might be implicit).

Yes

>  - Some types are statically dispatched and other types are
>    dynamically dispatched (which type is static or dynamic
>    dispatched needs to be defined explicit).

Rather, an operation is dispatching on some types and not on others.
However, you can generalize it by making all operations dispatching once
classes of classes are introduced. So there will always be a class where
the operation starts dispatching. The type of that class is static.

>  - Dynamic variables (parameters, results, ...) have a dynamic type
>    and refer to a value which has a static type (Instead of using
>    the term 'class of an object' the term 'type of a value' can be
>    used). This is a two level concept where a dynamic (interface-)
>    type and a static (implementation-) type work together.

Yes = polymorphic object / value.

>  - The relationship between interface and implementation type must
>    be defined explicit.

Hmm, I would say that from any type one can obtain a class rooted in this
type (the set of all types derived from it) and the type of that class. In
this sense an interface is always defined by each type. You can always
inherit the interface from any type.

>  - The static type system must surround the dynamic type system.
>    E.g.: When the program is compiled a static type checking takes
>    place and some constructs allow gateways to dynamic typing.

I don't know what a type gateway is, but if each class has a type, then
that type can be statically checked. E.g. you have a class Numerical that
contains all numerical types. You can declare an operation in terms of this
numerical class, and you have a type for that. No inference needed, you
just declare it acting on the class. So the operation is statically
guaranteed to get nothing but a polymorphic object from the class. If the
implementation of the operation makes any dispatching calls from inside,
they never fail; that is statically guaranteed.
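This Numerical-class argument can be approximated in Python with an
abstract base class. It is an approximation only: Python enforces the
membership check at run time, whereas the point above is that a static
checker could discharge it before the program runs. All names are invented.

```python
from abc import ABC, abstractmethod

class Numerical(ABC):
    """The class of all numerical types."""
    @abstractmethod
    def double(self):
        ...

def twice(x):
    # Declared on the class: once x is known to be in the Numerical
    # class, the dispatching call below cannot fail with "message not
    # understood" -- every member implements double().
    if not isinstance(x, Numerical):
        raise TypeError("not in the Numerical class")
    return x.double()

class Int(Numerical):
    def __init__(self, v):
        self.v = v
    def double(self):
        return Int(2 * self.v)
```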

> I don't say that this concept is perfect but it is IMHO a good
> compromise.

I don't think it is a compromise. Since I have nothing against it, I
conclude that lispers must object. (:-))
 
-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Robbert Haarman
Subject: Object-Orientation in Seed7
Date: 
Message-ID: <20090421194016.GZ4558@gildor.inglorion.net>
[Followup-To: comp.lang.misc]

On Tue, Apr 21, 2009 at 07:00:15AM -0700, ·············@gmx.at wrote:
> 
> The details can be found here:
> 
>   http://seed7.sourceforge.net/manual/objects.htm
> 
> It would be nice to get some response.

I like it. It looks very well thought out, and it's similar to how I 
have designed object systems in the past. I prefer to decouple things as 
much as possible, and you seem to have done so: both method declarations 
and type-interface relationships are separated from type definitions. 
Good.

And here is a question: Is there no multiple inheritance?

Regards,

Bob

-- 
"What if this weren't a hypothetical question?"


From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1k0v13mb8bdro$.8z86excg8ngb$.dlg@40tude.net>
On Tue, 21 Apr 2009 09:38:44 +0200, Robbert Haarman wrote:

> In both cases, the behavior is exactly the same. The only thing that has 
> changed is whether you call the program incorrect (because it causes an 
> error to be signalled) or correct (because its behavior is 
> well-defined). Personally, I prefer the former.

No, you cannot go this way. If exception propagation makes the program
incorrect, then how can you handle this exception? The program will remain
incorrect even if the exception was caught and handled.

Or consider it otherwise. From the problem domain perspective, it is solely
the behavior which makes the program correct or incorrect. If the program
does it right, then it is correct. You cannot call it incorrect, just
because it raises some exceptions inside. Who cares?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090421083123.GX4558@gildor.inglorion.net>
On Tue, Apr 21, 2009 at 09:52:32AM +0200, Dmitry A. Kazakov wrote:
> On Tue, 21 Apr 2009 09:38:44 +0200, Robbert Haarman wrote:
> 
> > In both cases, the behavior is exactly the same. The only thing that has 
> > changed is whether you call the program incorrect (because it causes an 
> > error to be signalled) or correct (because its behavior is 
> > well-defined). Personally, I prefer the former.
> 
> No you cannot go this way. If exception propagation makes the program
> incorrect, then how can you handle this exception. The program will remain
> incorrect even if the exception was caught and handled.
> 
> Or consider it otherwise. From the problem domain perspective, it is solely
> the behavior which makes the program correct or incorrect. If the program
> does it right, then it is correct. You cannot call it incorrect, just
> because it raises some exceptions inside. Who cares?

I see your point, but I still maintain that it depends on how you define 
"correct".

If your definition of "correct" is that the program does what it should 
do according to the language specification (which seems to be the 
position you're taking), then any valid program is automatically 
correct (assuming, of course, the language implementation correctly 
implements the language).

I would rather assess a program's correctness by testing it against the 
specification for the program. In that case, whether or not the program 
is correct depends not on whether what the program does is the same as 
what the language specification says the source code must do (which is 
true of any valid program), but on whether this is what the program is 
intended to do. In my previous example, I had assumed that signalling a 
type error is not the intended behavior of the program, but is the 
actual behavior of the program, and this is why I would have labeled the 
program incorrect.

With regard to your example of raising an exception, note that this is 
only one possible implementation of signalling an error. If detecting a 
type error at run time causes an exception to be raised, this is dynamic 
typing. However, if, instead, the program were aborted with an error 
message, this would also be dynamic typing. The point being that the 
behavior of detecting a type error need not be exactly specified for 
there to be dynamic typing. And if the behavior is not specified, it 
cannot be relied upon.

Regards,

Bob

-- 
Why program by hand in five days what you can spend five years of your
life automating.

	-- Terence Parr


From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <11zdhfbdyjlpf.1kq1ft2wdi4ec$.dlg@40tude.net>
On Tue, 21 Apr 2009 10:31:23 +0200, Robbert Haarman wrote:

> On Tue, Apr 21, 2009 at 09:52:32AM +0200, Dmitry A. Kazakov wrote:
>> On Tue, 21 Apr 2009 09:38:44 +0200, Robbert Haarman wrote:
>> 
>>> In both cases, the behavior is exactly the same. The only thing that has 
>>> changed is whether you call the program incorrect (because it causes an 
>>> error to be signalled) or correct (because its behavior is 
>>> well-defined). Personally, I prefer the former.
>> 
>> No you cannot go this way. If exception propagation makes the program
>> incorrect, then how can you handle this exception. The program will remain
>> incorrect even if the exception was caught and handled.
>> 
>> Or consider it otherwise. From the problem domain perspective, it is solely
>> the behavior which makes the program correct or incorrect. If the program
>> does it right, then it is correct. You cannot call it incorrect, just
>> because it raises some exceptions inside. Who cares?
> 
> I see your point, but I still maintain that it depends on how you define 
> "correct".
> 
> If your definition of "correct" is that the program does what it should 
> do according to the language specification (which seems to be the 
> position you're taking), then any valid program is automatically 
> correct (assuming, of course, the language implementation correctly 
> implements the language).

No, I maintain the same definition as you. A valid program can be
incorrect.

> I had assumed that signalling a 
> type error is not the intended behavior of the program, but is the 
> actual behavior of the program, and this is why I would have labeled the 
> program incorrect.

My objection is methodological. When you bring the intended behavior of
the program into it, you can no longer talk about language features,
because these are defined independently of any concrete application.
Exception propagation, like any other language mechanism, cannot by
itself be considered incorrect (or correct). So when we are talking
about dynamic typing as a language feature, we cannot consider a type
check a correctness check. It would be the same as saying that the
operator "if" does a correctness check. The same applies to run-time
assertions. They do not check correctness.

> With regard to your example of raising an exception, note that this is 
> only one possible implementation of signalling an error. If detecting a 
> type error at run time causes an exception to be raised, this is dynamic 
> typing. However, if, instead, the program were aborted with an error 
> message, this would also be dynamic typing.

Yes.

> The point being that the 
> behavior of detecting a type error need not be exactly specified for 
> there to be dynamic typing.

My position is that the behavior cannot consist in detecting its own
errors (error in the sense of correctness). When the behavior is
correct, it cannot detect itself to be incorrect. That is a
contradiction.

Formally: consider a program p that is contracted to return True. Let
Correct be a predicate that yields true when its argument is a correct
program. Then we write p:

function p return Boolean is
begin
   return not Correct (p);
end p;

Now if p is correct, then p is incorrect, because it exposes wrong
behavior.

q.e.d.
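
The diagonal argument can be sketched mechanically (Python, with
`correct` as a hypothetical oracle; no such total predicate can
actually exist, the sketch only shows the contradiction):

```python
def p(correct):
    """Contracted to return True. 'correct' stands in for a
    hypothetical oracle deciding whether a program is correct."""
    return not correct(p)

# Whatever verdict the oracle gives, p contradicts it:
# if the oracle says p is correct, p returns False, breaking its
# contract; if the oracle says p is incorrect, p returns True.
```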

Therefore it is technically wrong to talk about "type error checks"
when "type error" implies "incorrect". One can compare type tags; that
is perfectly OK. But equality or inequality of type tags does not by
itself (and cannot) imply correctness or incorrectness. It is not an
error for two type tags to be unequal.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090421190816.GY4558@gildor.inglorion.net>
On Tue, Apr 21, 2009 at 12:07:25PM +0200, Dmitry A. Kazakov wrote:
> On Tue, 21 Apr 2009 10:31:23 +0200, Robbert Haarman wrote:
> > 
> > If your definition of "correct" is that the program does what it should 
> > do according to the language specification (which seems to be the 
> > position you're taking), then any valid program is automatically 
> > correct (assuming, of course, the language implementation correctly 
> > implements the language).
> 
> No, I maintain the same definition as you. A valid program can be
> incorrect.

Ok.

> > I had assumed that signalling a 
> > type error is not the intended behavior of the program, but is the 
> > actual behavior of the program, and this is why I would have labeled the 
> > program incorrect.
> 
> My objection is methodical. When you bring the intended behavior of the
> program into it, then you cannot talk about language features, because
> these are defined independently on any concrete application.

How do you determine whether the program is correct, if not by comparing 
its behavior to its intended behavior? In other words, what is your 
definition of program correctness?

Regards,

Bob

-- 
This sentence is false.


From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <pmgokxixk9zv$.1tctt3w1k09s9$.dlg@40tude.net>
On Tue, 21 Apr 2009 21:08:16 +0200, Robbert Haarman wrote:

> On Tue, Apr 21, 2009 at 12:07:25PM +0200, Dmitry A. Kazakov wrote:
>> On Tue, 21 Apr 2009 10:31:23 +0200, Robbert Haarman wrote:

>>> If your definition of "correct" is that the program does what it should 
>>> do according to the language specification (which seems to be the 
>>> position you're taking), then any valid program is automatically 
>>> correct (assuming, of course, the language implementation correctly 
>>> implements the language).
>> 
>> No, I maintain the same definition as you. A valid program can be
>> incorrect.
> 
> Ok.
> 
>>> I had assumed that signalling a 
>>> type error is not the intended behavior of the program, but is the 
>>> actual behavior of the program, and this is why I would have labeled the 
>>> program incorrect.
>> 
>> My objection is methodical. When you bring the intended behavior of the
>> program into it, then you cannot talk about language features, because
>> these are defined independently on any concrete application.
> 
> How do you determine whether the program is correct, if not by comparing 
> its behavior to its intended behavior? In other words, what is your 
> definition of program correctness?

Same as yours.

The point is that this definition is inherently incompatible with a
behavior that checks/signals *itself* incorrect. That is trivially
inconsistent. Therefore a run-time type check cannot be an error check
in the above sense of correctness. So dynamic typing checks types; it
does not check for type errors.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ncxz5hc84y10.1ujd9wtq9ihzz$.dlg@40tude.net>
On Mon, 20 Apr 2009 18:25:24 -0700 (PDT), ··················@gmail.com
wrote:

> On Apr 20, 4:18 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:

>> None of the modern OSes was written in Assembly. UNIX was in C, VMS was in
>> BLISS (if I correctly remember), Windows was C, if not in VB (:-))
>> 
> This speaks more to the availability of C compilers 30 years ago than
> to, the properties of a given type system.
> 
> You're not going to argue that my OS is type safe because it was
> written in C?

Arguing about OS design would get us nowhere. The point is that an OS
cannot be considered an argument for dynamic typing.

>>>>> But I didn't say about typing, I said about dynamic/vs
>>>>> static environment.
>>
>>>> Dynamics do not imply sloppiness. What is the environment of what? What are
>>>> you talking about guys? The arguments you make are based on swapping
>>>> objects and subjects.
>>
>>> Dynamic i.e. things can be rebound i believe.
>>
>> A statically typed variable can be rebound to another value, very
>> dynamically:
>>
>>    int X = 1;
>>    X = 2;
>>
>> Widening the context makes no sense. You can find something "static" and
>> something "dynamic" everywhere.
> 
> Narrowing the context makes just as little sense.
> What is the real difference between a variable and a type flag?

A variable is a named object, a mapping. A type flag (tag, I guess) is
a value.

>>>>> How safety is enforced in an operating system?
>>>>> First, there are tasks. OS tries its best to isolate
>>>>> tasks in order to protect system from one misbehaving
>>>>> task. This is a dynamic feature, not static.
>>
>>>> What you are describing is not safety, it is security. What "dynamic
>>>> feature" might mean, I can only guess. Alternating currents in the CPU?
>>>> (:-))
>>
>>> Right, but reallocation is a dynamic process. I don't restart my
>>> entire computer because I wanted to close fire fox.
>>
>> No. And it has nothing to do with the language of OS. Firefox is not a part
>> of OS. The interface between Firefox and OS is by no mean "dynamic." Take
>> the socket library, its interface is statically typed, it is written in a
>> statically typed language and Firefox and OS too.
> 
> I agree it doesn't have anything to do with the language of the OS.
> The OS ends up with an API all of its own which is written in a
> language (and is in turn a language in and of itself, no?)
> 
> By no means dynamic? Consider not the interface between Fire Fox and
> the OS but between any old program and the OS. Seems pretty darn
> dynamic to me.

Again, dynamic in exactly what sense? It seems rather static to me,
because my computer box sits on the table. Or is it dynamic because
the Earth rotates? Are we talking about dynamically typed languages?
In that case "dynamic" has an exact meaning, which does not apply to
the APIs of major OSes. Most of them haven't managed to get properly
typed.

>>> If you think of functions within your program as being a bunch of mini
>>> programs, dynamic typing makes sense.
>>
>> Nope:
>>
>> 1. It is too low-level an abstraction. Naked procedural decomposition is
>> not the way I design programs.
> 
> Procedural decomposition is neither a high level or low level
> abstraction, it is simply an abstraction.

It is low-level because it fails to capture properties of the domain set
(of arguments' values). That's why types were invented.
 
>> 2. What are the types of the arguments? Functions are meaningless without a
>> definition of.
> 
> You'll have to complete your sentence for me to understand it. I will
> forge on anyway.
> 
> A function isn't meaningless without a definition of the types of
> arguments, it merely doesn't have the types of its arguments defined.
> (Meaning it will accept what it accepts).

Please define one *formally*. Use the conventional mathematical notation
for that. Take sine as an example. Then define it without reference to
the set of real numbers. Go on.

>> 3. In order to define types (like int) you have to define the operations of
>> (like +).
> 
> Butter side down, thanks.

It tastes best so. Try it.

>> 4. What is the difference between a function taking int and an operation
>> defined on int? There is none.
> 
> How is this relevant to what my compiler spits out when I run a
> something through it?

It shows that all functions defined over a type contribute to the
definition of the type. It is impossible to separate functions from
types.

Considering a program as a bunch of functions is either untyped or
else wrong. No offence meant: the paradigm you describe is untyped. If
you analyse it as I did, you must agree with that. Ask yourself
whether you dislike types and see them as a burden. Do you prefer a
dynamically typed system because of its ability to describe classes of
types statically while deciding membership in these sets dynamically
(which is necessary for complex designs)? Or do you prefer it just
because it allows you to ignore types altogether, or to pretend you
are ignoring them?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: John Thingstad
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <op.usqt1yjhut4oq5@pandora>
På Tue, 21 Apr 2009 09:41:08 +0200, skrev Dmitry A. Kazakov  
<·······@dmitry-kazakov.de>:

>
> It shows that all functions defined over a type contribute into the
> definition of the type. It is impossible to separate functions from  
> types.
>
> Considering a program as a bunch of functions is either untyped or else
> wrong. No offence meant, the paradigm you describe is untyped. If you try
> to analyse it as I did you must agree with that. Ask yourself if you  
> don't
> like types and see them as a burden. Don't you prefer a dynamically typed
> system not because of its ability to describe classes of types statically
> and dynamically do members of these sets (which is necessary for complex
> designs). Don't you do it just because it allows you to ignore types
> whatsoever, or pretending you ignoring them?
>

Your deduction seems faulty. Lisp is strongly typed. Of course it is
the objects that are typed, not whatever slot you happen to stick them
in. To me the thought that a variable can be typed is itself
inherently flawed. After all, a variable (conceptually) just specifies
the location of an object. A compiler would ideally follow the OBJECTS
through the code and verify that the types hold. If you are uncertain
what the input will be (most often you are not) you can always
(check-type o (integer 0 *)), for instance. Note that this is a
stronger constraint than just integer, as it specifies a non-negative
integer. The latter would fall straight through your declaration in
most languages and generally can only be known at run time anyhow. It
seems to me that the idea that untyped variables create bugs is
unwarranted, and that this does not seem to create big problems in
real applications. What does allow bugs to propagate is being allowed
to use unbound variables, or doing implicit type conversion as in
(un-strict) Perl, for instance.
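
The point about a check like (check-type o (integer 0 *)) can be
sketched outside Lisp as well (Python, function name hypothetical): it
is a value-level constraint that a plain static `int` declaration in
most languages does not express, so it can in general only be enforced
at run time.

```python
def store(o):
    # run-time analogue of (check-type o (integer 0 *)):
    # reject anything that is not a non-negative integer
    if not (isinstance(o, int) and o >= 0):
        raise TypeError("o must be a non-negative integer")
    return o
```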

-----------------------
John Thingstad
From: ·············@gmx.at
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <12e44cb2-d77d-4c3f-ab5a-6a4e8fb398bb@m19g2000yqk.googlegroups.com>
On 21 Apr., 17:45, "John Thingstad" <·······@online.no> wrote:
> På Tue, 21 Apr 2009 09:41:08 +0200, skrev Dmitry A. Kazakov
> <·······@dmitry-kazakov.de>:
>
> > It shows that all functions defined over a type contribute into the
> > definition of the type. It is impossible to separate functions from
> > types.
>
> > Considering a program as a bunch of functions is either untyped or else
> > wrong. No offence meant, the paradigm you describe is untyped. If you try
> > to analyse it as I did you must agree with that. Ask yourself if you
> > don't
> > like types and see them as a burden. Don't you prefer a dynamically typed
> > system not because of its ability to describe classes of types statically
> > and dynamically do members of these sets (which is necessary for complex
> > designs). Don't you do it just because it allows you to ignore types
> > whatsoever, or pretending you ignoring them?
>
> You deduction seems faulty. Lisp is strongly typed. Of course it is the
> objects that are typed, not whatever slot you happen to stick it in.  To
> me the thought that a variable can be typed is itself inherently flawed.

The concept of variables with types is not something new.
In mathematics, variables are usually specified as being
integers, reals, or complex numbers. The notion of types for
variables in programming languages is just an extension. You seem
to believe that typed variables are just a handicap created to
annoy ingenious programmers. But variables with types also
increase the readability of a program. Static type checks can help
to find errors earlier, without the need to run the program
through various tests (you will probably do tests anyway, but will
your tests reach 100% code coverage?).

When you introduce a variable you will probably have a concept of
which values the variable will hold. The area of possible values
can be small (like boolean) or very large (like object).
Types are just a method to specify your concept formally in your
program. This helps when someone else has to maintain your program.
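
As a small sketch of the idea (Python annotation syntax, chosen only
because it makes the point compactly; this is not Seed7): the
declaration records the author's concept of the value set, and an
external checker can flag calls that violate it before any test runs.

```python
def average(values: list[float]) -> float:
    """The annotations document the intended value sets; a static
    checker (e.g. mypy) can reject average("oops") without running
    the program, and a maintainer can read the intent directly."""
    return sum(values) / len(values)
```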

Here are some arguments why static type checking might have some
value:

http://seed7.sourceforge.net/faq.htm#static_type_checking

> After all a variable (conceptually) just specifies the location of a
> object. A compiler would ideally follow the OBJECTS through the code and
> verify that the types hold..

This is called type inference. But it does not have only positive
effects. Imagine you have to maintain a big program written by
someone else. You need to follow the OBJECTS through the code and
verify what is going on... You need to do part of the compiler's
work just to understand the program. Programs with type
declarations and typed variables (parameters, functions, ...) carry
more information. This information is helpful when you read
programs. You get references to the concepts which the original
programmer had in mind.

Why I don't like type inference is explained here:

http://seed7.sourceforge.net/faq.htm#type_inference

> If you are uncertain what the input will be
> (most often you are not) you can always (check-type o (integer 0 *)) for
> instance. Not that this is a stronger restraint than just integer as it
> specifies a positive integer. The latter would fall straight through your
> declaration in most languages and generally can only be known at run-time
> anyhow. It seems to me that the idea that untyped variables create bugs is
> unwarranted and that this does not seem to create big problems in real
> applications.

Untyped variables do not create bugs, but some bugs can be found
quite easily with static type checks and typed variables.

Greetings Thomas Mertes

Seed7 Homepage:  http://seed7.sourceforge.net
Seed7 - The extensible programming language: User defined statements
and operators, abstract data types, templates without special
syntax, OO with interfaces and multiple dispatch, statically typed,
interpreted or compiled, portable, runs under linux/unix/windows.
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <656d3e58-2279-45c6-a9b7-214112e6af53@e21g2000yqb.googlegroups.com>
Hi Thomas!
  Your language looks interesting. I like those BASIC-style
end markers - they are expressive and need fewer parens.
I haven't yet learned how macros are organized.
  Could I redefine a function f in a running Seed7
interpreter so that all functions which invoke f
would immediately see the new definition? Are compiled
and interpreted code interoperable? Can I compile
a separate function in a running image? I enjoy
this in CL and it is a pain to live without it.
I'm looking for a partially statically typed language
which would support this REPL-based style of work.

Java supports multiple inheritance of interfaces.
Does your language do so?

>Why I don't like type inference is explained here:
> http://seed7.sourceforge.net/faq.htm#type_inference
The idea is very interesting. There is a little type
inference in CL, and I have never used OCaml or Haskell.
But I think it is inevitable that the user should
look somewhere else when reading code. E.g. if
we have a reference to a variable in a program listing,
this variable might be local or global, or be
declared at some intermediate level of nesting. Either
way, the reader needs to find its declaration. So
your approach to type inference seems a bit
extreme. Anyway, I can now see that type inference
might be harmful sometimes. Thanks for this hint.

I do not share your approach to automatic type
conversions. E.g., in CL I have written a
str+ function. It takes an arbitrary number of
string designators, each of which can be a
character, string or symbol, coerces each to
a string with the "string" function
http://www.lispworks.com/documentation/HyperSpec/Body/f_string.htm
and concatenates all those strings together.
I use this function very often and it would be
painful to cast types explicitly.
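
The described str+ can be sketched as a variadic function that coerces
each designator before concatenating (Python for illustration, with
str() standing in for CL's STRING; the name str_plus is made up):

```python
def str_plus(*designators):
    # coerce each argument to a string, then concatenate,
    # mirroring the described CL str+ built on STRING
    return "".join(str(d) for d in designators)
```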

Could I do a similar thing in your language?
From: Leandro Rios
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <gslnjk$f6g$1@news.motzarella.org>
budden escribió:
> Hi Thomas!
>   Your language looks interesting. I like that BASIC
> style end markers - they are expressive and make
> less parens. 

Oh, yes, it is better to have

const proc: main is func
       begin
         writeln("hello world");
       end func;

than

(defun main ()
   (princ "Hello world"))

You wrote a lot more, but hey, you saved the typing of two pairs of 
parens. What is this? Some new kind of psychological condition? 
Parensophobia?

Intrigued,

Leandro
From: ··················@gmail.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <f23176e2-3908-4ff0-ab53-665b542a87f8@c9g2000yqm.googlegroups.com>
On Apr 21, 8:15 pm, Leandro Rios <··················@gmail.com> wrote:
> budden escribió:
>
> > Hi Thomas!
> >   Your language looks interesting. I like that BASIC
> > style end markers - they are expressive and make
> > less parens.
>
> Oh, yes, it is better to have
>
> const proc: main is func
>        begin
>          writeln("hello world");
>        end func;
>
> than
>
> (defun main ()
>    (princ "Hello world"))
>
> You wrote a lot more, but hey, you saved the typing of two pairs of
> parens. What is this? Some new kind of psychological condition?
> Parensophobia?
>
> Intrigued,
>
> Leandro

Can't sleep, the parens will get me.
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <6ed1cda7-49fb-4374-bf96-07221397a896@p11g2000yqe.googlegroups.com>
> You wrote a lot more, but hey, you saved the
> typing of two pairs of parens.
Yes, Lisp code is shorter. But when nesting is deep,
closing qualifiers help to read code and to avoid
nesting errors. You can easily read and write Lisp
code in a good editor, but it is hard to read (and
especially write) Lisp code on paper. Lisp
syntax is, in general, mostly OK when we're talking
about Clojure. CL has a bad (IMO) syntax.
From: Marco Antoniotti
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <84e81b85-f5f0-44d7-84a6-292ea0ec3c38@x3g2000yqa.googlegroups.com>
On Apr 22, 9:00 am, budden <···········@mail.ru> wrote:
> > You wrote a lot more, but hey, you saved the
> > typing of two pairs of parens.
>
> Yes, lisp code is shorter. But when nesting is deep,
> closing qualifiers help to read code and to avoid
> nesting errors. You can easily read and write lisp
> code in good editor, but it is hard to read (and
> especially write) lisp code on the paper. Lisp
> syntax is, in general, mostly ok, when we're talking
> about clojure. CL has a bad (IMO) syntax.

When nesting gets deep, it is time to factor out snippets into
procedures/functions.  The rule of thumb is that human-written code
should never pass the time-honored 80th column.  You never know when
you'll have to print out your code on a pin printer.

CL syntax is beautiful for what it does.  It is minimalistic and very
very very regular.  What is very very hard is finding substitutes,
while keeping the CL/Scheme look'n'feel.  I have not seen anything
compelling enough from you, and I am expecting at least a CDR.

Cheers
--
Marco
From: Ray Dillinger
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49f13287$0$95533$742ec2ed@news.sonic.net>
Marco Antoniotti wrote:

> When nesting gets deep, it is time to factor out snippets in
> procedures/functions.  The rule of thumb is that human-written code
> should never pass the time-honored 80th column.  You never know when
> you'll have to print out your code on a pin printer.

I use a 120-column limit.  And I *know* when I'll need to 
print code on a pin printer: NEVER.  Simple answer: I'm not 
going to buy a pin printer. 

                                Bear
From: Scott Burson
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <a21f3c8a-0969-4e96-b6c4-eadfa388cba1@d38g2000prn.googlegroups.com>
On Apr 23, 8:28 pm, Ray Dillinger <····@sonic.net> wrote:
> Marco Antoniotti wrote:
> > When nesting gets deep, it is time to factor out snippets in
> > procedures/functions.  The rule of thumb is that human-written code
> > should never pass the time-honored 80th column.  You never know when
> > you'll have to print out your code on a pin printer.
>
> I use a 120-column limit.  And I *know* when I'll need to
> print code on a pin printer: NEVER.  Simple answer: I'm not
> going to buy a pin printer.

Heh, me neither.

I use a 100-column limit, because that's the most I can use and still
get three XEmacs windows side-by-side on a 1920x1200 monitor using the
6x10 font under X.

-- Scott
From: Marco Antoniotti
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <3e973019-d1b6-45fc-9f2f-42e1b0541a05@f19g2000yqo.googlegroups.com>
On Apr 24, 6:11 am, Scott Burson <········@gmail.com> wrote:
> On Apr 23, 8:28 pm, Ray Dillinger <····@sonic.net> wrote:
>
> > Marco Antoniotti wrote:
> > > When nesting gets deep, it is time to factor out snippets in
> > > procedures/functions.  The rule of thumb is that human-written code
> > > should never pass the time-honored 80th column.  You never know when
> > > you'll have to print out your code on a pin printer.
>
> > I use a 120-column limit.  And I *know* when I'll need to
> > print code on a pin printer: NEVER.  Simple answer: I'm not
> > going to buy a pin printer.
>
> Heh, me neither.
>
> I use a 100-column limit, because that's the most I can use and still
> get three XEmacs windows side-by-side on a 1920x1200 monitor using the
> 6x10 font under X.
>
> -- Scott

Hey.  I thought *you* should know!  You ain't a padawan anymore :)

Cheers
--
Marco
From: Tamas K Papp
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <7577iqF16loamU1@mid.individual.net>
On Tue, 21 Apr 2009 21:15:47 -0300, Leandro Rios wrote:

> budden escribió:
>> Hi Thomas!
>>   Your language looks interesting. I like that BASIC
>> style end markers - they are expressive and make less parens.
> 
> Oh, yes, it is better to have
> 
> const proc: main is func
>        begin
>          writeln("hello world");
>        end func;
> 
> than
> 
> (defun main ()
>    (princ "Hello world"))
> 
> You wrote a lot more, but hey, you saved the typing of two pairs of
> parens. What is this? Some new kind of psychological condition?
> Parensophobia?

Don't make fun of him, parensophobia is a serious medical condition,
caused by hypotrophy of the parenthetal gland.

Another related medical condition is hyperparenthetitis, which makes
patients use various forms (eg [{()}]) in situations where one kind would 
do perfectly.  Patients insist that assigning different semantics to
various squiggly forms increases the "expressiveness" of a language.  A common 
complication is the desire to invent "clever syntax".

Sadly, the prognosis is bleak for both conditions; studies indicate a 
near universal loss of macro facilities.

Tamas
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <972aa38b-5c98-480e-9346-51330913830a@t21g2000yqi.googlegroups.com>
> Sadly, the prognosis is bleak for both conditions,
> studies indicate a near universal loss of macro
> facilities.

Hi Tamas! You are stupid, as usual. Clojure has
different kinds of parens and it does have macros.

To keep macros you just need to be able to quasiquote
inside {} and []. This can easily be done in the case of
CL (e.g. quasiquotation works for vectors).

Also I advise you to read this:

http://en.wikipedia.org/wiki/M4_(computer_language)

It is an infix macro processor and it shows an example of
quasiquoting with infix syntax. Its design is weaker
than CL's quasiquoting, but it is still very usable and
in some aspects comparable to defmacro.
From: ·············@gmx.at
Subject: Features of Seed7
Date: 
Message-ID: <ae0f8743-8f94-45d9-89ac-d9ff20ffa0d6@v15g2000yqn.googlegroups.com>
On 21 Apr., 22:00, budden <···········@mail.ru> wrote:
> Hi Thomas!
>   Your language looks interesting. I like that BASIC
> style end markers - they are expressive and make
> less parens. I didn't yet learn how macros are
> organized.

Seed7 macros (templates) are just functions which happen to be
executed at compile time instead of runtime. Templates can use types
as parameters or can have types as function result. They can also
contain declarations (using the type parameters). Seed7 uses
templates to define array, hash, struct and other types. That way
such types are not "built in" to the compiler (as in many other
programming languages), but are defined in the standard library
instead. Other uses of templates are the introduction of
declarations for some type. There are templates which introduce
a for-loop or I/O functions for a given type.

>   Could I redefine a function f in a running seed7
> interpreter so that all functions which invoke f
> would immidiately see new definition?

Conceptually, a variable function could be used. Normal functions are
constant, which means the code cannot change during runtime, just as
the value of a constant integer cannot change during runtime.
Variable functions would allow changing the code of a function
during runtime. Sorry to say, but the current implementation does
not support variable functions (except for closure parameters).

What is supported by the current implementation are variable
programs. A variable of type 'program' can hold a whole Seed7
program. The code of this variable program can be inspected, changed
and executed. The Seed7 compiler (comp.sd7) uses a program variable
to generate a C program. The type 'program' is explained here:

  http://seed7.sourceforge.net/manual/types.htm#program

The types used to inspect the internal code representation of a
program are explained here:

  http://seed7.sourceforge.net/manual/types.htm#category
  http://seed7.sourceforge.net/manual/types.htm#reference
  http://seed7.sourceforge.net/manual/types.htm#ref_list

You will probably see similarities to Lisp lists (like the fact
that the first element of a ref_list can refer to a function object),
but there are also differences, like the distinction between the
types reference and ref_list. Note that these code inspection types
are defined to support the Seed7 compiler and similar programs, but
are not intended to be used the way Lisp programs use lists.

> Are compiled and interpreted code interoperable?

To a certain degree. The Seed7 compiler can (of course) compile
itself. The compiled compiler can still use the interpreters
functionality to optimize some subexpressions during the compilation
process.

> Can I compile a separate function in a running image?

Not sure what you mean here.
Linking a function from an external library?

> I enjoy
> this in CL and this is a pain to live without it.
> I'm looking for a partially statically typed language
> which would support this REPL-based style of work.

There are different ways to reach a tight edit-test loop. The Seed7
interpreter starts quickly. It is not uncommon that it processes
200000 lines per second or more.

> Java supports multiple inheritance of interfaces.
> Does your language so?

Generally yes (the interface declaration needs to be improved to
make it available).

> >Why I don't like type inference is explained here:
> >http://seed7.sourceforge.net/faq.htm#type_inference
>
> Idea is very interesting. There is a little type
> inference in a CL and I never used OCaml or Haskell.
> But I think that it is inevitable that user should
> look somewhere else when he reads code. E.g. if
> we have a reference to variable in a program listing,
> this variable might be local or global, or be
> declared on some medium level of nesting. Anyway,
> reader need to find out its declaration.

Correct, but at least the number of places to look at is limited
to some degree.

> So
> your approach to type inference seems a bit
> extreme.

Seed7 could allow a very mild form of type inference where the
initialisation value of a declaration is used to determine the type.

There is another point: Seed7 supports overloading of functions. When
you know the type of an expression at compile time you can do
overloading resolution with reasonable effort. AFAIK languages which
support type inference do not support overloading and vice versa.

BTW: The overloading resolution and the multiple dispatch mechanism
of Seed7 act like twins. Overloading resolution is done at compile
time while multiple dispatch is done at runtime. In the
interpreter even the same function is used for both purposes.

> Anyway, I can now see that type inference
> might be harmful sometimes. Thanks for this hint.
>
> I do not share your approach to automatic type
> conversions. E.g., in CL I have written
> str+ function. It takes an arbitrary number of
> string designators, each of them can be
> character, string or symbol, coerces each to
> a string with a "string" function
http://www.lispworks.com/documentation/HyperSpec/Body/f_string.htm
> and concatenates all that strings together.
> I use this function very often and it would be
> painful to cast types explicitly.
>
> Could I do similar thing in your language?

Yes. Seed7 uses a concatenation operator for this purpose. E.g.:

  writeln("number=" <& number <& "anotherNumber=" <& anotherNumber);

The rule for <& is like this:

  - The <& operator is defined to concatenate two string arguments.

  - If one argument is a string and the other is not the non-string
    argument is converted to a string with the 'str' function
    and both strings are concatenated.

The 'enable_output' macro defines the <& operator for a given type.
Needless to say: The 'str' function must be defined for this type.
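A rough C++ sketch of the same rule (hypothetical: C++ has no '<&' token, so plain '&' stands in for it here, and a hand-written 'str' plays the role of Seed7's str function):

```cpp
#include <cassert>
#include <string>

// 'str' converts a non-string value to a string, as in Seed7.
std::string str(int i) { return std::to_string(i); }

// Two strings are concatenated directly...
std::string operator&(const std::string& a, const std::string& b) {
    return a + b;
}
// ...and a non-string operand is first converted with 'str'.
std::string operator&(const std::string& a, int b) { return a + str(b); }
std::string operator&(int a, const std::string& b) { return str(a) + b; }
```

In Seed7 the 'enable_output' machinery generates such overloads for a type; here each overload is spelled out by hand.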

You can see that Seed7 is usable in some areas while it is still
work in progress in others. There is much room for improvement,
and helping hands are always welcome.

Greetings Thomas Mertes

Seed7 Homepage:  http://seed7.sourceforge.net
Seed7 - The extensible programming language: User defined statements
and operators, abstract data types, templates without special
syntax, OO with interfaces and multiple dispatch, statically typed,
interpreted or compiled, portable, runs under linux/unix/windows.
From: John Thingstad
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <op.usq2dpv7ut4oq5@pandora>
På Tue, 21 Apr 2009 19:08:07 +0200, skrev <·············@gmx.at>:

>
> The concept of variables with type is not something new.
> In the area of mathematics variables usually are specified as being
> integers, reals, complex. The notation of types for variables in
> programming languages is just an extension. You seem to believe that
> typed variables are just a handicap created to annoy ingenious
> programmers. But variables with types increase also the readability
> of a program. Static type checks can help to find errors earlier
> without the need to run the program through various tests (you will
> probably do tests anyway, but will your tests reach 100% code
> coverage?).
>
> When you introduce a variable you probably will have a concept about
> which values the variable will hold. The area of possible values
> can be small (like boolean) or it can be a very large (like object).
> Types are just a method to specify your concept formally in your
> program. This helps when someone else has to maintain your program.
>

Well, I think of it more in the terms outlined by Haskell. There, type
declarations are obviously derived from other types, be it by inheritance
or instance relations.
Similarly functions can be declared to take a sequence of arguments where  
each has a type and of course so does the return value. But the function  
accepts objects. And it is the OBJECTS passed to the function that need
checking. The names that stand in for the expressions the function is
called with are only relevant insofar as they keep track of what happens
to the objects.

> Here are some arguments why static type checking might have some
> value:
>
> http://seed7.sourceforge.net/faq.htm#static_type_checking
>
>> After all a variable (conceptually) just specifies the location of a
>> object. A compiler would ideally follow the OBJECTS through the code and
>> verify that the types hold..
>
> This is called type inferencing. But it has not only positive
> effects. Imagine you have to maintain a big program written by
> someone else. You need to follow the OBJECTS through the code and
> verify what is going on... You need to do part of the compilers
> work just to understand the program. Programs with type
> declarations and typed variables (parameters, functions, ...) carry
> more information. This information is helpful when you read
> programs. You get references to the concepts which the original
> programmer had in his mind.
>
> Why I don't like type inference is explained here:
>
> http://seed7.sourceforge.net/faq.htm#type_inference
>

But languages where variables have type are just as flawed. For instance, I
find it abhorrent that C allows 'int c;' as a variable declaration.
Clearly c has no value. Instead, when used, it takes the n bits reserved for
it on the stack and uses whatever random garbage happens to be there.
Clearly ANSI C and C++ do a better job of checking for this, but to me the
whole declaration syntax seems brain-damaged.
Another thing is the amount of repetitious garbage code you have to write.
By having to specify the type of each variable over and over, each time an
object changes hands, you also have a maintenance nightmare if you should
decide to pass another type of object instead.

-----------------------
John Thingstad
From: ·············@gmx.at
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <53bb020f-1073-4ac5-8871-bc4a7e92bd60@s20g2000yqh.googlegroups.com>
On 21 Apr., 20:45, "John Thingstad" <·······@online.no> wrote:
> På Tue, 21 Apr 2009 19:08:07 +0200, skrev <·············@gmx.at>:
>
> > The concept of variables with type is not something new.
> > In the area of mathematics variables usually are specified as being
> > integers, reals, complex. The notation of types for variables in
> > programming languages is just an extension. You seem to believe that
> > typed variables are just a handicap created to annoy ingenious
> > programmers. But variables with types increase also the readability
> > of a program. Static type checks can help to find errors earlier
> > without the need to run the program through various tests (you will
> > probably do tests anyway, but will your tests reach 100% code
> > coverage?).
>
> > When you introduce a variable you probably will have a concept about
> > which values the variable will hold. The area of possible values
> > can be small (like boolean) or it can be a very large (like object).
> > Types are just a method to specify your concept formally in your
> > program. This helps when someone else has to maintain your program.
>
> Well I think of it more in the terms outlined by Haskell. Here type
> declarations are obviously derived from other types. Be it by inheritance
> or instance relations.
> Similarly functions can be declared to take a sequence of arguments where
> each has a type and of course so does the return value. But the function
> accepts objects. And it is the OBJECTS passed to the function that need
> checking. The names that stand in for the expressions the function is
> called with are only relevant insofar as they keep track of what happens
> to the objects.

What do you mean by "the names that stand in for the expressions"?
Is it possible to call such things "parameters" like everybody else,
or do you mean some other concept? The rest of the sentence also
seems strange...

> > Here are some arguments why static type checking might have some
> > value:
>
> >http://seed7.sourceforge.net/faq.htm#static_type_checking
>
> >> After all a variable (conceptually) just specifies the location of a
> >> object. A compiler would ideally follow the OBJECTS through the code and
> >> verify that the types hold..
>
> > This is called type inferencing. But it has not only positive
> > effects. Imagine you have to maintain a big program written by
> > someone else. You need to follow the OBJECTS through the code and
> > verify what is going on... You need to do part of the compilers
> > work just to understand the program. Programs with type
> > declarations and typed variables (parameters, functions, ...) carry
> > more information. This information is helpful when you read
> > programs. You get references to the concepts which the original
> > programmer had in his mind.
>
> > Why I don't like type inference is explained here:
>
> >http://seed7.sourceforge.net/faq.htm#type_inference
>
> But languages where variables have type are just as flawed.

There are many levels between "great" and "flawed". The world is
not black and white. Please stop arguing like this. Otherwise people
might think that you are a troll.

> For instance I
> find it abhorrent that C allows 'int c;' as a variable declaration.
> Clearly c has no value. Instead when used it takes the n bits reserved for
> it on the stack and uses whatever random garbage happens to be there.

The problem here is the use of uninitialized variables.
Typed/untyped variables and initialized/uninitialized variables
are two different concepts. Clearly languages with all four
combinations of these concepts are possible:

 - Typed variables which must be initialized
 - Typed variables which can be uninitialized (contain garbage)
 - Untyped variables which can be uninitialized (contain garbage)
 - Untyped variables which always have a value

> Clearly ANSI C and C++ do a better job of checking for this,

AFAIK ANSI C and C++ allow uninitialized variables just as
K&R C does.

> but to me the
> whole declaration syntax seems brain-damaged.

The syntax to do declarations of typed/untyped variables
(or parameters) is again a different thing. Please don't mix up
different concepts.

> Another thing is the amount of repetitious garbage code you have to write.

Just a few questions:

What was the biggest program you wrote?
Did you ever take over the maintenance of a program written by
someone else?
How big was the biggest program you had to take over?
In which language was the program written?
Do you prefer to write garbage code without nasty declarations?

> By having to specify the type of each variable over and over each time a
> object passes hand you also have a maintenance nightmare if you should
> decide to pass another type of object instead.

This is just a "maintenance nightmare" when you have no idea how
to program. Have you ever heard of type declarations?
With type declarations you have one place where you can change the
structure of a type.
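In C++ terms (an illustration of the point, not anyone's actual code), the single place of change is a type alias: the alias is declared once, and every parameter and variable that uses it follows automatically.

```cpp
#include <cassert>

// One central declaration: to move from int to, say, double later,
// only this alias changes; every use of 'Quantity' follows.
using Quantity = int;

Quantity total(Quantity a, Quantity b) { return a + b; }
```

(Of course a changed alias can still break code that relied on int-only operations, so the declaration centralizes rather than eliminates the maintenance work.)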

You have not answered the arguments that type declarations
and typed variables (parameters, functions, ...) serve as some
kind of formal documentation that helps when you read a
program (did I mention that programs are more often read than
written?).

What do you think about the arguments here:

  http://seed7.sourceforge.net/faq.htm#static_type_checking

Greetings Thomas Mertes

Seed7 Homepage:  http://seed7.sourceforge.net
Seed7 - The extensible programming language: User defined statements
and operators, abstract data types, templates without special
syntax, OO with interfaces and multiple dispatch, statically typed,
interpreted or compiled, portable, runs under linux/unix/windows.
From: John Thingstad
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <op.usq82kybut4oq5@pandora>
På Tue, 21 Apr 2009 22:00:10 +0200, skrev <·············@gmx.at>:

>
> What do you mean with "The names that stand in for the expressions"?
> Is it possible to call such things "parameter" like everybody else
> or do you mean some other concept? The rest of the sentence seems
> also strange...
>

Haskell is purely functional.

So if you write

x = sin a
y = cos a

r = sqr x + sqr y

you could just as easily have written

r = sqr (sin a) + sqr (cos a)

So you see x and y are placeholders for the expressions, just like in math.  
It decomposes a complex expression into simpler parts.
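The decomposition described here can be checked mechanically. A small C++ rendering (C++ rather than Haskell, purely for illustration): substituting the named parts back into the final expression gives the same value.

```cpp
#include <cassert>
#include <cmath>

// x and y name subexpressions of the final result...
double r_decomposed(double a) {
    const double x = std::sin(a);
    const double y = std::cos(a);
    return x * x + y * y;
}

// ...and writing the same expressions inline yields the same function.
double r_inline(double a) {
    return std::sin(a) * std::sin(a) + std::cos(a) * std::cos(a);
}
```

(As a side effect, both forms compute sin²a + cos²a, which is 1 for any a.)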

>> But languages where variables have type are just as flawed.
>
> There are many levels between "great" and "flawed". The world is
> not black and white. Please stop arguing like this. Otherwise people
> might think that you are a troll.

Why? If it's flawed it's flawed. Anyhow we all have our opinions.

>
>> For instance I
>> find it abhorrent that C allows 'int c;' as a variable declaration.
>> Clearly c has no value. Instead when used it takes the n bits reserved  
>> for
>> it on the stack and uses whatever random garbage happens to be there.
>
> The problem here is the use of uninitialized variables.
> Typed/untyped variables and initialized/uninitialized variables
> are two different concepts. Clearly languages with all four
> combinations of these concepts are possible:
>
>  - Typed variables which must be initialized
>  - Typed variables which can be uninitialized (contain garbage)
>  - Untyped variables which can be uninitialized (contain garbage)
>  - Untyped variables which always have a value

Yes, this refers to the ability to use a variable before assignment,
obviously, not to whether it is typed or not. Perl (non-strict), as I
mentioned in another post, suffers from the same problem.

>> Clearly ANSI C and C++ do a better job of checking for this,
>
> AFAIK ANSI C and C++ allow uninitialized variables as well as
> K&R C.
>

But it enforces that you assign them a value before you use them (further
down in the code), thus eliminating this class of error.

>> but to me the
>> whole declaration syntax seems brain-damaged.
>
> The syntax to do declarations of typed/untyped variables
> (or parameters) is again a different thing. Please don't mix up
> different concepts.
>

The point is it wouldn't be a problem if the variable didn't have an
explicit type declaration. You would just introduce it when you have a
meaningful value for it, and before you use it. If you made a typo writing
the name, that would make the variable unbound and thus flag an error. I
guess I find it offensive that you can assign a type but not the value.
In ANSI C, if you already have a prototype, why do you need to re-declare
the type in the definition? In Haskell the "prototype" is just the
name and the types; no argument names need to be given. Similarly, in the
function definition you need only the names. To me this syntax decision
seems "cleaner".

>> Another thing is the amount of repetitious garbage code you have to  
>> write.
>
> Just a few questions:
>
> What was the biggest program you wrote?
> Did you ever take over the maintenance of a program written by
> someone else?

> How big was the biggest program you had to take over?
> In which language was the program written?

For Opera and IST I worked with programs of about 350 000 to 700 000 lines  
of C++ code.

> Do you prefer to write garbage code without nasty declarations?
>

Re-declaring variables forces you to assign a type through every use of
them. In Lisp we like to call this "premature optimisation", as we are
forced to give something a type even if we are not sure what it should be.
This actually makes a big difference to how much experimentation you allow
yourself to do, because of the overhead and error-proneness of changing
things. To be equally patronizing: did you ever implement a complex
algorithm you didn't get out of a book?

>> By having to specify the type of each variable over and over each time a
>> object passes hand you also have a maintenance nightmare if you should
>> decide to pass another type of object instead.
>
> This is just a "maintenance nightmare" when you have no idea how
> to program. Have you ever heard of type declarations?
> With type declarations you have one place where you can change the
> structure of a type.
>

Sure - but what if the declaration is 'int' and I all of a sudden decide
it should be a 'float'? Even if the type is an attribute in a class and I
change it, I would still need to change every point in the code that
accessed it. Eclipse etc. offer dialogs to help with "refactoring". But did
you ever consider that this is just a band-aid over something that is a bad
design idea in the first place?

> You have not answered to the arguments that type declarations
> and typed variables (parameters, functions, ...) serve as some
> kind of formal documentation that helps when you read a
> program (Did I mention that programs are more often read than
> written?)

Normally I prefer to use a good name to convey the intent of a variable
rather than rely on inferring it from its type. Lisp has other features,
like keyword arguments, that allow you to state what a variable is for. In
practice I rarely find myself wondering what a variable does.

>
> What do you think about the arguments here:
>
>   http://seed7.sourceforge.net/faq.htm#static_type_checking
>


-----------------------
John Thingstad
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090501193535.694@gmail.com>
On 2009-04-21, John Thingstad <·······@online.no> wrote:
> På Tue, 21 Apr 2009 22:00:10 +0200, skrev <·············@gmx.at>:
>
>>
>> What do you mean with "The names that stand in for the expressions"?
>> Is it possible to call such things "parameter" like everybody else
>> or do you mean some other concept? The rest of the sentence seems
>> also strange...
>>
>
> Haskell is purely functional.
>
> so if you write
>
> x = sin a
> y = cos a
>
> r = sqr x + sqr y
>
> you could just as easily have written
>
> r = sqr (sin a) + sqr (cos a)
>
> So you see x and y are placeholders for the expressions, just like in math.  
> It decomposes a complex expression into simpler parts.

But, comrade, that means these placeholders are typeless!

Proof: where is the declaration for x?

That's a key part of the buffoon's misunderstanding.

He's unable to wrap his head around the concept that variables may be only
semantically empty bindings that have no type in and of themselves.

That's why he asserted that polymorphism is ``typeless'', even if it is
statically typed polymorphism!!!

So the world according to Dmitri:

- If you cannot look at a symbol like x, and determine a unique type for x
  which is obvious from the lexical scope, then the situation is typeless.

- Doesn't matter if it's C++, Haskell or Lisp. Overloading and templates in
  C++ are typeless. Haskell's polymorphism-based type inference is typeless.

- The only data entities that have real type in programming are concrete
  regions of memory, which are accessed by expressions that have been declared
  to have a type.  Type is a unique property of an expression in a program with
  respect to a piece of memory. A region of memory treated this way is an
  object with a type.

- When such an object is introduced by a named definition (not dynamically
  allocated, and not embedded as a sub-object in a larger object), that
  definition gives rise to a variable. A variable /is/ just a named object, and
  an object is a region of storage with a type. So a variable is a name for a
  region of storage, along with a type.   Anything else is not a proper
  variable, but some kind of non-variable, or a typeless variable in a typeless
  language.

- Only programming with these real types and variables is engineering.
  Everything else is typeless hacking: it is unchecked, and bug-prone.

- Static typing only adds safety. Almost all dynamic programs can be expressed
  as static programs, essentially as they are, which will make them safer,
  since situations which were checked only at run time are now checked
  prior to run time.  The remaining programs (if they exist at all, of which he
  is not convinced) are either useless curiosities, or they can be expressed in
  static languages which have incorporated all the useful features from dynamic
  languages, in the form of features like discriminated unions, tagged
  structures or OO with polymorphism centered around using a statically typed
  base class as an interface to multiple implementations expressed using
  inheritance. These languages are the modern static languages. They render
  dynamic languages obsolete.

If this is a strawman caricature of the view, someone correct me.
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <uskk0oc8l.fsf@STRIPCAPStelus.net>
Kaz Kylheku <········@gmail.com> writes:
> So the world according to Dmitri:

Dmitri's viewpoint is heavily colored by Ada. The way he thinks and the terminology he uses are expressed in Ada terms.

For some reason he does not seem to be able to expand his viewpoint to include alternative ways of thinking that other languages encourage, especially Lisp.

He's actually a lively participant on comp.lang.ada, and while still controversial, his arguments are more valid there than here.
-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <891sypzwjl4g.1tnv6bivps5u1.dlg@40tude.net>
On Tue, 21 Apr 2009 17:45:24 +0200, John Thingstad wrote:

> På Tue, 21 Apr 2009 09:41:08 +0200, skrev Dmitry A. Kazakov  
> <·······@dmitry-kazakov.de>:
> 
>> It shows that all functions defined over a type contribute into the
>> definition of the type. It is impossible to separate functions from  
>> types.
>>
>> Considering a program as a bunch of functions is either untyped or else
>> wrong. No offence meant, the paradigm you describe is untyped. If you try
>> to analyse it as I did you must agree with that. Ask yourself if you  
>> don't
>> like types and see them as a burden. Don't you prefer a dynamically typed
>> system not because of its ability to describe classes of types statically
>> and dynamically do members of these sets (which is necessary for complex
>> designs). Don't you do it just because it allows you to ignore types
>> whatsoever, or pretending you ignoring them?
> 
> Your deduction seems faulty. Lisp is strongly typed.

I had no specific language in mind. I considered a general attitude, a
desire to use whatever language in order to be able to work as if it were
untyped.

> Of course it is the  
> objects that are typed, not whatever slot you happen to stick it in.

That does not hold. A name uniquely specifies an object.

> To  
> me the thought that a variable can be typed is itself inherently flawed.  
> After all a variable (conceptually) just specifies the location of a  
> object.

Nope. A variable identifies an object. Moreover, it does this statically.
You probably meant a variable identifying some referential object which in
turn refers to another object.

> It seems to me that the idea that untyped variables create bugs is  
> unwarranted and that this does not seem to create big problems in real  
> applications.

It does, because the caller must be prepared for an unexpected outcome. Thus
it must handle all possible cases where the type does not match the
intended one. It is even worse because the intended type is not explicitly
specified, so the program reader must guess what the semantics of a call
might be. This is a massive distributed overhead, both for the computer and
even more for the programmer. Granted, people just ignore these cases in
the hope that nothing bad will happen. Next comes some reasoning about
productivity etc. as a self-excuse, because deep inside everybody feels
guilty doing such things.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Marco Antoniotti
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <19791db6-dcfd-4d7d-b565-b66635304d87@h2g2000yqg.googlegroups.com>
On Apr 21, 6:45 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Tue, 21 Apr 2009 17:45:24 +0200, John Thingstad wrote:
> > På Tue, 21 Apr 2009 09:41:08 +0200, skrev Dmitry A. Kazakov  
> > <·······@dmitry-kazakov.de>:
>
> >> It shows that all functions defined over a type contribute into the
> >> definition of the type. It is impossible to separate functions from  
> >> types.
>
> >> Considering a program as a bunch of functions is either untyped or else
> >> wrong. No offence meant, the paradigm you describe is untyped. If you try
> >> to analyse it as I did you must agree with that. Ask yourself if you  
> >> don't
> >> like types and see them as a burden. Don't you prefer a dynamically typed
> >> system not because of its ability to describe classes of types statically
> >> and dynamically do members of these sets (which is necessary for complex
> >> designs). Don't you do it just because it allows you to ignore types
> >> whatsoever, or pretending you ignoring them?
>
> > Your deduction seems faulty. Lisp is strongly typed.
>
> I had no specific language in mind. I considered a general attitude, a
> desire to use whatever language in order to be able to work as if it were
> untyped.
>
> > Of course it is the  
> > objects that are typed, not whatever slot you happen to stick it in.
>
> That does not hold. A name uniquely specifies an object.
>
> > To  
> > me the thought that a variable can be typed is itself inherently flawed.  
> > After all a variable (conceptually) just specifies the location of a  
> > object.
>
> Nope. A variable identifies an object. Moreover it does this statically.
> You probably meant a variable identifying some referential objects which in
> turn refers to another object.
>
> > It seems to me that the idea that untyped variables create bugs is  
> > unwarranted and that this does not seem to create big problems in real  
> > applications.
>
> It does, because the caller must be prepared for an unexpected outcome. Thus
> it must handle all possible cases where the type does not match the
> intended one. It is even worse because the intended type is not explicitly
> specified, so the program reader must guess what the semantics of a call
> might be. This is a massive distributed overhead, both for the computer and
> even more for the programmer. Granted, people just ignore these cases in
> the hope that nothing bad will happen. Next comes some reasoning about
> productivity etc. as a self-excuse, because deep inside everybody feels
> guilty doing such things.

So... save us.  Apart from the fact that "productivity" is not
completely measurable, you yourself admitted that current statically
typed functional languages suck (you said that Haskell does things
wrongly, while I maintain that it helps the programmer much more than
OCaml does) because we don't know how to build a good type system.  I gave
you some homework.  Do it and help us write "environmentally typed" (I
just made this one up, but it sounds good :) ) Common Lisp programs.

Cheers
--
Marco
From: John Thingstad
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <op.usqx4np6ut4oq5@pandora>
På Tue, 21 Apr 2009 18:45:52 +0200, skrev Dmitry A. Kazakov  
<·······@dmitry-kazakov.de>:

> On Tue, 21 Apr 2009 17:45:24 +0200, John Thingstad wrote:
>
>> På Tue, 21 Apr 2009 09:41:08 +0200, skrev Dmitry A. Kazakov
>> <·······@dmitry-kazakov.de>:
>>
>>> It shows that all functions defined over a type contribute into the
>>> definition of the type. It is impossible to separate functions from
>>> types.
>>>
>>> Considering a program as a bunch of functions is either untyped or else
>>> wrong. No offence meant, the paradigm you describe is untyped. If you  
>>> try
>>> to analyse it as I did you must agree with that. Ask yourself if you
>>> don't
>>> like types and see them as a burden. Don't you prefer a dynamically  
>>> typed
>>> system not because of its ability to describe classes of types  
>>> statically
>>> and dynamically do members of these sets (which is necessary for  
>>> complex
>>> designs). Don't you do it just because it allows you to ignore types
>>> whatsoever, or pretending you ignoring them?
>>
>> Your deduction seems faulty. Lisp is strongly typed.
>
> I had no specific language in mind. I considered a general attitude, a
> desire to use whatever language in order to be able to work as if it were
> untyped.
>
>> Of course it is the
>> objects that are typed, not whatever slot you happen to stick it in.
>
> That does not hold. A name uniquely specifies an object.

It most certainly does not.

consider
x = 2.0
y = x

Clearly here both x and y refer to the object 2.0. In Haskell you would
infer the type here from the type of the object 2.0.
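For comparison, modern C++ (a facility newer than this discussion, offered only as an analogy to Haskell-style inference) expresses the same idea: the names take their type from the initialising value.

```cpp
#include <cassert>
#include <type_traits>

// 2.0 is a double, so x is deduced as double, and y copies x's
// value and type; no type is written on either name.
double inferredSum() {
    auto x = 2.0;
    auto y = x;
    static_assert(std::is_same<decltype(x), double>::value,
                  "x's type comes from the literal 2.0");
    return x + y;
}
```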

>
>> To
>> me the thought that a variable can be typed is itself inherently flawed.
>> After all a variable (conceptually) just specifies the location of a
>> object.
>
> Nope. A variable identifies an object. Moreover it does this statically.
> You probably meant a variable identifying some referential objects which  
> in
> turn refers to another object.

Again, simply not true. In the previous example it can be thought of as

x --->  FloatObject[2]
            ^
y ---------|

In Lisp, historically, all variables are referential and refer to objects
actually on the heap. Though for optimisation purposes this is no longer
always the case, care has been taken to make the difference between
variables on the heap and variables 'on the stack'/'in registers'
negligible.

In C you have hacks like 'union' to allow a variable to take values of  
multiple types. Java prefers the OO style and uses polymorphism. Clearly  
run-time dispatch on object type is often desirable.
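A tagged-union sketch in C++ (std::variant, again a facility newer than this thread) shows the statically checked middle ground between a raw C union and full OO polymorphism:

```cpp
#include <cassert>
#include <string>
#include <variant>

// A discriminated union: the variant remembers which alternative it
// currently holds, and access is checked against that tag.
using Value = std::variant<int, std::string>;

// Run-time dispatch on the held type, with both branches type-checked
// at compile time.
std::string describeValue(const Value& v) {
    if (std::holds_alternative<int>(v))
        return "int:" + std::to_string(std::get<int>(v));
    return "string:" + std::get<std::string>(v);
}
```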

>
>> It seems to me that the idea that untyped variables create bugs is
>> unwarranted and that this does not seem to create big problems in real
>> applications.
>
> It does, because the caller must be prepared to an unexpected outcome.  
> Thus
> it must handle all possible cases where the type does not match the
> intended one. It is even worse, because the intended type is not  
> explicitly
> specified,

Yes, but as my check-type example shows, it has to do that anyhow. Only a
subset of all type errors can be caught statically, so static types
fundamentally don't buy you anything. They are at best an efficiency hack.

-----------------------
John Thingstad
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <oyux0hauggnz.p0j42yqi544w.dlg@40tude.net>
On Tue, 21 Apr 2009 19:13:25 +0200, John Thingstad wrote:

> På Tue, 21 Apr 2009 18:45:52 +0200, skrev Dmitry A. Kazakov  
> <·······@dmitry-kazakov.de>:
> 
>> On Tue, 21 Apr 2009 17:45:24 +0200, John Thingstad wrote:
>>
>>> På Tue, 21 Apr 2009 09:41:08 +0200, skrev Dmitry A. Kazakov
>>> <·······@dmitry-kazakov.de>:
>>>
>>>> It shows that all functions defined over a type contribute into the
>>>> definition of the type. It is impossible to separate functions from
>>>> types.
>>>>
>>>> Considering a program as a bunch of functions is either untyped or else
>>>> wrong. No offence meant, the paradigm you describe is untyped. If you  
>>>> try
>>>> to analyse it as I did you must agree with that. Ask yourself if you
>>>> don't
>>>> like types and see them as a burden. Don't you prefer a dynamically  
>>>> typed
>>>> system not because of its ability to describe classes of types  
>>>> statically
>>>> and dynamically do members of these sets (which is necessary for  
>>>> complex
>>>> designs). Don't you do it just because it allows you to ignore types
>>>> whatsoever, or pretending you ignoring them?
>>>
>>> Your deduction seems faulty. Lisp is strongly typed.
>>
>> I had no specific language in mind. I considered a general attitude, a
>> desire to use whatever language in order to be able to work as if it were
>> untyped.
>>
>>> Of course it is the
>>> objects that are typed, not whatever slot you happen to stick it in.
>>
>> That does not hold. A name uniquely specifies an object.
> 
> It most certainly does not.
> 
> consider
> x = 2.0
> y = x
> 
> Clearly here both x and y refer to the object 2.0. In Haskell you would  
> infer the type here from the type of the object 2.0.

2.0 is not an object, it is a value.

If you mean aliasing, that is a bad thing which decent languages try to
avoid.

>>> To
>>> me the thought that a variable can be typed is itself inherently flawed.
>>> After all a variable (conceptually) just specifies the location of a
>>> object.
>>
>> Nope. A variable identifies an object. Moreover it does this statically.
>> You probably meant a variable identifying some referential objects which  
>> in turn refers to another object.
> 
> Again simply not true. In the previous example it can be thought of as
> 
> x --->  FloatObject[2]
>             ^
> y ---------|
>
> In Lisp historically all variables are referential and refer to objects  
> actually on the heap.

My interpretation is

x : reference object ------\
                            float object
y : reference object ------/

> In C you have hacks like 'union' to allow a variable to take values of  
> multiple types.

Nope. The variable has exactly one type, this type is called "union".

> Java prefers the OO style and uses polymorphism. Clearly  
> run-time dispatch on object type is often desirable.

Yes, and a polymorphic variable still has one type - the closure of the
class. 

>>> It seems to me that the idea that untyped variables create bugs is
>>> unwarranted and that this does not seem to create big problems in real
>>> applications.
>>
>> It does, because the caller must be prepared to an unexpected outcome.  
>> Thus it must handle all possible cases where the type does not match the
>> intended one. It is even worse, because the intended type is not  
>> explicitly specified,
> 
> Yes, but as my check-type example shows it has to do that anyhow.

I don't understand why calling sine I must check anything. Sine is defined
on reals and complex numbers. Period.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090501164812.1@gmail.com>
On 2009-04-21, John Thingstad <·······@online.no> wrote:
> På Tue, 21 Apr 2009 09:41:08 +0200, skrev Dmitry A. Kazakov  
><·······@dmitry-kazakov.de>:
> Your deduction seems faulty. Lisp is strongly typed. Of course it is the  
> objects that are typed, not whatever slot you happen to stick it in.  To  
> me the thought that a variable can be typed is itself inherently flawed.  
> After all a variable (conceptually) just specifies the location of a  
> object. 

You are making a distinction between a variable and the object which it holds.

The probability that the russian moron will understand this?

Zero.
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090422055925.GB4558@gildor.inglorion.net>
On Tue, Apr 21, 2009 at 05:45:24PM +0200, John Thingstad wrote:
> På Tue, 21 Apr 2009 09:41:08 +0200, skrev Dmitry A. Kazakov  
> <·······@dmitry-kazakov.de>:
>
>> It shows that all functions defined over a type contribute into the
>> definition of the type. It is impossible to separate functions from  
>> types.
>>
>> Considering a program as a bunch of functions is either untyped or else
>> wrong. No offence meant, the paradigm you describe is untyped. If you try
>> to analyse it as I did you must agree with that. Ask yourself if you  
>> don't
>> like types and see them as a burden. Don't you prefer a dynamically typed
>> system not because of its ability to describe classes of types statically
>> and dynamically do members of these sets (which is necessary for complex
>> designs). Don't you do it just because it allows you to ignore types
>> whatsoever, or pretending you ignoring them?
>>
>
> Your deduction seems faulty. Lisp is strongly typed. Of course it is the  
> objects that are typed, not whatever slot you happen to stick it in.  To  
> me the thought that a variable can be typed is itself inherently flawed.  
> After all a variable (conceptually) just specifies the location of a  
> object. A compiler would ideally follow the OBJECTS through the code and  
> verify that the types hold.

I agree with this. It is objects which are typed. Whether or not you 
have variables and whether or not these variables are allowed to hold 
values of different types at different times is irrelevant to the 
discussion, IMO.

However, none of this negates Dmitry's point about the operations that 
are defined for a type being an essential part of this type.

What I don't understand (and perhaps you could clarify, Dmitry) is why 
he claims that dynamic typing is not concerned with the operations 
defined on a type. As far as I can see, this is exactly what dynamic 
typing does: it checks, at run-time, if an actual operation can be 
applied to an actual value.

Static typing does the same thing, except it does it at compile time, 
when the actual values may not be known. So, where for dynamic typing 
you need to be able to deduce the type from the value (hence you will 
often have type tags), for static typing you need to deduce the type 
from how a value is used (hence you will often have functions that can 
only return values of one type, unlike, say, Scheme's READ).

Regards,

Bob

-- 
"The only 'intuitive' interface is the nipple. After that, it's all learned."
	-- Bruce Ediger


From: John Thingstad
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <op.usr7jeu2ut4oq5@pandora>
På Wed, 22 Apr 2009 07:59:25 +0200, skrev Robbert Haarman  
<··············@inglorion.net>:

> On Tue, Apr 21, 2009 at 05:45:24PM +0200, John Thingstad wrote:
> What I don't understand (and perhaps you could clarify, Dmitry) is why
> he claims that dynamic typing is not concerned with the operations
> defined on a type. As far as I can see, this is exactly what dynamic
> typing does: it checks, at run-time, if an actual operation can be
> applied to an actual value.
>

What he seems concerned with is code coverage. He worries that obscure,  
rarely run chunks don't get tested and that combinations of systems can  
pass the tests and still cause havoc. This is true of course, but it is  
equally true of statically typed systems. You need a very specific type  
system to avoid this, like Ada's, where you can write "type Day is range  
1 .. 31", and even then other constraints, like February having only 28  
days except every 4th year when it has 29, but only 28 every hundredth  
year, could go unchecked. In a nutshell the only way to be absolutely  
sure is to verify the code mathematically.

-----------------------
John Thingstad
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1uvox87o7i4vi.lkj58zeqqfyl.dlg@40tude.net>
On Wed, 22 Apr 2009 07:59:25 +0200, Robbert Haarman wrote:

> However, none of this negates Dmitry's point about the operations that 
> are defined for a type being an essential part of this type.
> 
> What I don't understand (and perhaps you could clarify, Dmitry) is why 
> he claims that dynamic typing is not concerned with the operations 
> defined on a type. As far as I can see, this is exactly what dynamic 
> typing does:

I don't claim this.

> it checks, at run-time, if an actual operation can be 
> applied to an actual value.

No, it does not check this. If you mean dispatch, then the effect of
dispatch is in selection and execution of an implementation of the
operation. A polymorphic operation is defined on a set of types. Usually
its body is composed out of the bodies of the operations defined for
individual types from that set. These bodies are obtained in three ways:

1. Explicit definition by overriding
2. Implicit definition by inheritance
2.a. Implicit definition as a stub propagating a run-time exception

This operation is always applicable, provided "non-applicable" means
language [typing] error.

If "non-applicable" means "semantically incorrect", then that has nothing
to do with the language.

It seems that much of the confusion comes from here. If typing
[dynamic/static/etc] applies to the language, then we have to define it
independently from the program semantics.

> Static typing does the same thing, except it does it at compile time, 
> when the actual values may not be known. So, where for dynamic typing 
> you need to be able to deduce the type from the value (hence you will 
> often have type tags), for static typing you need to deduce the type 
> from how a value is used (hence you will often have functions that can 
> only return values of one type, unlike, say, Scheme's READ).

Type inference is a different issue.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <5b195344-ea27-45e6-8250-6c799633c4aa@b6g2000pre.googlegroups.com>
>These bodies are obtained in three ways:
>
>1. Explicit definition by overriding
>2. Implicit definition by inheritance
>2.a. Implicit definition as a stub propagating a run-time exception

Let us write a program X which allows a shared library "foo" to be loaded.
The shared library is expected to export a function "eval" with no
arguments. The main program looks as follows (Pascal):

program X;
var s:string;
begin
s:=readln;
compile(s); { wrap line with header, save text to file,
  call compiler on the file to produce library "foo" }

load_shared_library("foo");

eval; { eval is invoked from foo library }
end.

We run program X and enter some line. Say,

40+2.0

Questions are:
- is the program X statically-typed?
- suppose compiler invoked from "compile" function fails.
Is this a compile time error or run-time error in a program
X?
- is there a way to avoid compilation errors in "compile"?
- how things change if we write
s:='40+2.0';
instead of
s:=readln?
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1u6rm5f753ohr.7xua2ea5btb4$.dlg@40tude.net>
On Wed, 22 Apr 2009 10:46:25 -0700 (PDT), budden wrote:

>>These bodies are obtained in three ways:
>>
>>1. Explicit definition by overriding
>>2. Implicit definition by inheritance
>>2.a. Implicit definition as a stub propagating a run-time exception
> 
> Let us write program X which allows shared library "foo" to be loaded.
> Shared library is expected to export function "eval" with no
> arguments. Main program looks as follows (Pascal)
> 
> program X;
> var s:string;
> begin
> s:=readln;
> compile(s); { wrap line with header, save text to file,
>   call compiler on the file to produce library "foo" }
> 
> load_shared_library("foo");
> 
> eval; { eval is invoked from foo library }

(Note that eval can be optimized away. It has no arguments and no results;
if it also has no side effects, there is no reason to call it.)

> end.
> 
> We run program X and enter some line. Say,
> 
> 40+2.0
> 
> Questions are:
> - is the program X statically-typed?

Yes

> - suppose compiler invoked from "compile" function fails.
> Is this a compile time error or run-time error in a program
> X?

A run-time fault, no different from the case when readln would fail due to
end of file or ctrl-c.

> - is there a way to avoid compilation errors in "compile"?

That depends on the language used in "compile" and the implementation of
readln. But I suppose that for what you meant, the answer is no.

> - how things change if we write
> s:='40+2.0';
> instead of
> s:=readln?

They would not change.

(This also has to do with my argumentation concerning bugs and faults with
regard to static analysis. The type of s (String) does not allow us to
assume that its value is static. Thus we cannot convert run-time faults of
its compilation by "compile" into compile-time errors of X. Compile
propagates an exception; that is a part of its contract, it is not a bug.

A compiler error is not [necessarily] a compiler bug.)

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090422195736.GC4558@gildor.inglorion.net>
On Wed, Apr 22, 2009 at 06:09:46PM +0200, Dmitry A. Kazakov wrote:
> On Wed, 22 Apr 2009 07:59:25 +0200, Robbert Haarman wrote:
> 
> > However, none of this negates Dmitry's point about the operations that 
> > are defined for a type being an essential part of this type.
> > 
> > What I don't understand (and perhaps you could clarify, Dmitry) is why 
> > he claims that dynamic typing is not concerned with the operations 
> > defined on a type. As far as I can see, this is exactly what dynamic 
> > typing does:
> 
> I don't claim this.
> 
> > it checks, at run-time, if an actual operation can be 
> > applied to an actual value.
> 
> No, it does not check this. If you mean dispatch, then the effect of
> dispatch is in selection and execution of an implementation of the
> operation. A polymorphic operation is defined on a set of types. Usually
> its body is composed out of the bodies of the operations defined for
> individual types from that set. These bodies are obtained in three ways:
> 
> 1. Explicit definition by overriding
> 2. Implicit definition by inheritance
> 2.a. Implicit definition as a stub propagating a run-time exception
> 
> This operation is always applicable, provided "non-applicable" means
> language [typing] error.

Ok, so you are saying that, under dynamic typing, an operation is 
defined for every type. It just may be that the definition is "signal a 
type error". And that this is the case if we have not defined the 
operation to be something else. Is that indeed what you are saying?

Regards,

Bob

-- 
An ideal world is left as an exercise to the reader.
	-- Paul Graham, On LISP


From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <es6d3nm1p52s.167hjsr6pup7w$.dlg@40tude.net>
On Wed, 22 Apr 2009 21:57:36 +0200, Robbert Haarman wrote:

> On Wed, Apr 22, 2009 at 06:09:46PM +0200, Dmitry A. Kazakov wrote:
>> On Wed, 22 Apr 2009 07:59:25 +0200, Robbert Haarman wrote:
>> 
>>> it checks, at run-time, if an actual operation can be 
>>> applied to an actual value.
>> 
>> No, it does not check this. If you mean dispatch, then the effect of
>> dispatch is in selection and execution of an implementation of the
>> operation. A polymorphic operation is defined on a set of types. Usually
>> its body is composed out of the bodies of the operations defined for
>> individual types from that set. These bodies are obtained in three ways:
>> 
>> 1. Explicit definition by overriding
>> 2. Implicit definition by inheritance
>> 2.a. Implicit definition as a stub propagating a run-time exception
>> 
>> This operation is always applicable, provided "non-applicable" means
>> language [typing] error.
> 
> Ok, so you are saying that, under dynamic typing, an operation is 
> defined for every type.

Yes, if it is allowed to call any operation on any object.

But in languages like Ada, C++ and Java, dynamic typing is limited to a
certain class checked statically. In my view it is the best of both worlds.
The check is static, the dispatch is dynamic. The class can be widened or
narrowed as needed. I don't see how it limits the programmer in any way. If
somebody wishes to use + with an object, why not say that the object is
additive?

> It just may be that the definition is "signal a 
> type error". And that this is the case if we have not defined the 
> operation to be something else. Is that indeed what you are saying?

Yes. I don't like it because it is unsafe and does not add any additional
flexibility.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090502202557.862@gmail.com>
On 2009-04-22, Dmitry A. Kazakov <·······@dmitry-kazakov.de> wrote:
> But in a languages like Ada, C++, Java dynamic typing is limited to a
> certain class checked statically. In my view it is the best of two worlds.
> Check is static, dispatch is dynamic. The class can be widened or narrowed
> as needed. I don't see how it limits the programmer in any way.

When you become one, you will know.
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090430145147.755@gmail.com>
["Followup-To:" header set to comp.lang.lisp.]
On 2009-04-20, Dmitry A. Kazakov <·······@dmitry-kazakov.de> wrote:
> On Mon, 20 Apr 2009 08:28:48 -0700 (PDT), budden wrote:
>
>> C is not, strictly speaking, a statically typed language.
>
> It is. You are confusing strong and weak typing.

C is statically typed to the extent that the type of an object is determined by
what part of the program is accessing it, and nothing in the object itself.

However, real C programs often exhibit dynamic typing schemes, whereby the
static type system is defeated, and its role is supplanted by ad-hoc type
checks.

This approach is necessary to get the job done. If you don't treat the C
language as a dynamic language in disguise, then in most situations you will
not be able to program your way out of a wet paper bag.

> OSes are not developed using dynamically typed languages.

Counterexample: Symbolics Lisp machine.

Counterexample: proliferation of ad-hoc dynamic typing in the code bases of
actual operating systems written in static languages.

You don't know a fucking thing about operating systems, clearly.
From: Christopher C. Stacy
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <yzlljp5la31.fsf@news.dtpq.com>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
> OSes are not developed using dynamically typed languages.

Umm, some have been!

(?!?!)
From: George Neuner
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <451h05l3hs0886mf3aeioch1b9ll2fvi4q@4ax.com>
On Sun, 10 May 2009 02:56:50 -0400, ······@news.dtpq.com (Christopher
C. Stacy) wrote:

>"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>> OSes are not developed using dynamically typed languages.
>
>Umm, some have been!
>
>(?!?!)

Most have not been.
From: Christopher
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <c3b23028-239e-4240-9f85-237583a541f8@o20g2000vbh.googlegroups.com>
On Apr 20, 11:28 am, budden <···········@mail.ru> wrote:
>
> C is not, strictly speaking, a statically typed language.

Yes it is. All types are known statically at compile time. You can
build dynamic schemes in C, but the language itself is nevertheless
still statically typed.
From: Andrew Reilly
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <757gfcF16p6tiU1@mid.individual.net>
On Tue, 21 Apr 2009 09:18:05 -0700, Christopher wrote:

> On Apr 20, 11:28 am, budden <···········@mail.ru> wrote:
>>
>> C is not, strictly speaking, a statically typed language.
> 
> Yes it is. All types are known statically at compile time. You can build
> dynamic schemes in C, but the language itself is nevertheless still
> statically typed.


void *


-- 
Andrew
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49ee92c3$0$22527$607ed4bc@cv.net>
Andrew Reilly wrote:
> On Tue, 21 Apr 2009 09:18:05 -0700, Christopher wrote:
> 
>> On Apr 20, 11:28 am, budden <···········@mail.ru> wrote:
>>> C is not, strictly speaking, a statically typed language.
>> Yes it is. All types are known statically at compile time. You can build
>> dynamic schemes in C, but the language itself is nevertheless still
>> statically typed.
> 
> 
> void *

Ah, the value infinity in typese.

kxo
From: Rob Warnock
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <SdCdnW9jkuhdPXPUnZ2dnUVZ_hednZ2d@speakeasy.net>
Kenneth Tilton  <·········@gmail.com> wrote:
+---------------
| Andrew Reilly wrote:
| > Christopher wrote:
| >> budden <···········@mail.ru> wrote:
| >>> C is not, strictly speaking, a statically typed language.
| >> Yes it is. All types are known statically at compile time. You can build
| >> dynamic schemes in C, but the language itself is nevertheless still
| >> statically typed.
| > 
| > void *
| 
| Ah, the value infinity in typese.
+---------------

But essentially useless for implementing a dynamic layer on C,
since you can't do any arithmetic at all on "void *" objects.

On the other hand, you *can* do simple pointer arithmetic on the
type "void **", so the latter is perhaps a better choice for a
Lisp object, e.g., "typedef void ** lispobj". I actually did this
once for a hacked-up version of Elk Scheme[1]. Worked fine.

On the third hand, even "void **" has a few odd corner restrictions,
so most Lisps (e.g., CMUCL) end up with "typedef long lispobj" or
similar, and use lots of C macros to pun the integer lispobjs into
pointers instead of lots of macros to pun the pointer lispobjs into
integers.


-Rob

[1] In order to make the "Object" type smaller.
    It was two words originally:

      typedef struct {
	  unsigned long data;
	  int tag;
      } Object;

    Replacing that with "typedef void **Object;" (and tweaking the
    various access macros to use lowtags) made the code run faster,
    at least on x86 CPUs.

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090422054046.GA4558@gildor.inglorion.net>
On Tue, Apr 21, 2009 at 11:44:48PM -0500, Rob Warnock wrote:
> 
> On the other hand, you *can* do simple pointer arithmetic on the
> type "void **", so the latter is perhaps a better choice for a
> Lisp object, e.g., "typedef void ** lispobj". I actually did this
> once for a hacked-up version of Elk Scheme[1]. Worked fine.

You could also use intptr_t, which is defined as being "an integer large 
enough to hold a pointer":

  Integer types capable of holding object pointers

  The following type designates a signed integer type with the property 
  that any valid pointer to void can be converted to this type, then 
  converted back to a pointer to void, and the result will compare equal 
  to the original pointer: intptr_t

  (From http://www.opengroup.org/onlinepubs/000095399/basedefs/stdint.h.html)

However, whatever type you use, it's not going to make C dynamically 
typed. It is not going to check your types at run-time. It will check 
your types at compile time, but only if you don't use casts or unions. 

When you are implementing something like Lisp in C, chances are you will 
be using casts or unions, which effectively disable type checking as far 
as C is concerned. You may implement your own run-time checks, but that 
is outside the scope of the C type checker.

Regards,

Bob

-- 
"What if this weren't a hypothetical question?"


From: Andrew Reilly
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <7585l1F172hs1U1@mid.individual.net>
On Tue, 21 Apr 2009 23:44:48 -0500, Rob Warnock wrote:

> Kenneth Tilton  <·········@gmail.com> wrote: +---------------
> | Andrew Reilly wrote:
> | > Christopher wrote:
> | >> budden <···········@mail.ru> wrote: 
> | >>> C is not, strictly speaking, a statically typed language.
> | >> Yes it is. All types are known statically at compile time. You can 
build 
> | >> dynamic schemes in C, but the language itself is nevertheless 
still 
> | >> statically typed.
> | >
> | > void *
> |
> | Ah, the value infinity in typese.
> +---------------
> 
> But essentially useless for implementing a dynamic layer on C, since you
> can't do any arithmetic at all on "void *" objects.
> 

Not bad as an argument against C being statically typed though, IMO.

[BTW: I use void* a lot in my C code.  I almost never put my struct 
definitions in header files, only function declarations.  This uses void* 
to establish opaque types.  That doesn't need any pointer arithmetic, in 
general.  Doesn't offer any type checking either, though.  Pass in the 
wrong type of struct and the code goes blammo (if you're lucky).  I don't 
put type tags into my structs.  Maybe I should.]

Cheers,

-- 
Andrew
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090422200753.GD4558@gildor.inglorion.net>
On Wed, Apr 22, 2009 at 09:17:21AM +0000, Andrew Reilly wrote:
> On Tue, 21 Apr 2009 23:44:48 -0500, Rob Warnock wrote:
> > 
> > But essentially useless for implementing a dynamic layer on C, since you
> > can't do any arithmetic at all on "void *" objects.
> > 
> 
> Not bad as an argument against C being statically typed though, IMO.

I think that, in general, you can only say that type checking is static 
or dynamic. A language may offer either, both, or neither.

For example, in C, you normally get static type checking, but you can 
fool the type system using casts or unions, in which case you basically 
end up with no type checking in the places where you use them.

For another example, in Java, you normally get static type checking, but 
if you use casts, you get dynamic type checking in the places where you 
use them.

Regards,

Bob

-- 
"It's often easier not to do something dumb than to do something smart."

	-- mjr.


From: ····················@hotmail.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <5afde916-790e-4ac4-9527-2755a99278a9@t36g2000prt.googlegroups.com>
On 22 Apr, 10:17, Andrew Reilly <···············@areilly.bpc-
users.org> wrote:
> On Tue, 21 Apr 2009 23:44:48 -0500, Rob Warnock wrote:
> > Kenneth Tilton  <·········@gmail.com> wrote: +---------------
> > | Andrew Reilly wrote:
> > | > Christopher wrote:
> > | >> budden <···········@mail.ru> wrote:
> > | >>> C is not, strictly speaking, a statically typed language.
> > | >> Yes it is. All types are known statically at compile time. You can
> build
> > | >> dynamic schemes in C, but the language itself is nevertheless
> still
> > | >> statically typed.
> > | >
> > | > void *
> > |
> > | Ah, the value infinity in typese.
> > +---------------
>
> > But essentially useless for implementing a dynamic layer on C, since you
> > can't do any arithmetic at all on "void *" objects.
>
> Not bad as an argument against C being statically typed though, IMO.
>
> [BTW: I use void* a lot in my C code.

maybe a bad idea

> I almost never put my struct
> definitions in header files, only function declarations.  This uses void*
> to establish opaque types.
> That doesn't need any pointer arithmetic, in
> general.  Doesn't offer any type checking either, though.  Pass in the
> wrong type of struct and the code goes blammo (if you're lucky).  I don't
> put type tags into my structs.  Maybe I should.]

You can do better than this. Use incomplete declarations (hopefully I
got the terminology right).

ds.h
----

struct DeathStar;
void Fire (struct DeathStar*);


You've still got an opaque type but you get some type-checking.
For more detail ask in comp.lang.c


--
Nick Keighley
From: Christopher
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <482a063e-5b52-4a46-8732-e6cc9eda9d6d@n4g2000vba.googlegroups.com>
On Apr 21, 11:15 pm, Andrew Reilly <···············@areilly.bpc-
users.org> wrote:
> On Tue, 21 Apr 2009 09:18:05 -0700, Christopher wrote:
> > On Apr 20, 11:28 am, budden <···········@mail.ru> wrote:
>
> >> C is not, strictly speaking, a statically typed language.
>
> > Yes it is. All types are known statically at compile time. You can build
> > dynamic schemes in C, but the language itself is nevertheless still
> > statically typed.
>
> void *

What is your point? 'void *' is a static type declaration.
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090430114027.413@gmail.com>
["Followup-To:" header set to comp.lang.lisp.]
On 2009-04-20, Dmitry A. Kazakov <·······@dmitry-kazakov.de> wrote:
> On Mon, 20 Apr 2009 06:00:52 -0700 (PDT), budden wrote:
>
>> There are applications where edit-compile-run sequence itself
>> is impractical to apply to the whole program system. E.g. it
>> is impractical to edit/compile/run operating systems. It is
>> practical to split them into multiple components which can be
>> modified independently while OS is running. Reliability of OSes
>> is enforced with other techniques than static typing. So,
>> regardless of your opinion and/or real advantages of static
>> typing/static interfacing, dynamic systems will continue
>> to exist and nothing can be done to it.
>
> It seems that you are under impression that MS-DOS, RSX-11, UNIX, VxWorks,
> VMS, Windows (just to name some) were developed in a dynamically typed
> language. I assure you there were not, just as a matter of fact, without
> drawing any further conclusions.

OS'es are typically developed in a language which has a weak type system (VMS:
assembly language; UNIXes: C full of casts and other features  like inline
assembly that defeat the type system).

Operating system code typically contains oodles of Greenspunned dynamic typing.
That is to say, ad-hoc implementations of dynamic typing in the code base
itself, enforced only by hand-crafted checks and conventions.

For instance, in BSD Unixes, there is a ``struct vnode'' type which has a
``v_type'' field. If the field is not checked, then, for instance, a regular
file operation may be done on a V_DIR type object, which may trash your
filesystem.

budden's claim is basically correct: techniques other than (or in addition to)
static typing help to enforce reliability, such as run-time checks
where there is ad-hoc dynamic typing. There are other examples.
For instance, a UNIX OS would be terribly unreliable if it did not dynamically
validate user-space pointers passed to system calls, which has little to do
with typing.

Why do you keep dismissing information from people who obviously know way more
about technology than you?
From: George Neuner
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ur2su4d7sa2e41c0cq2tn4s9a0dcadkn0n@4ax.com>
On Mon, 20 Apr 2009 16:07:41 +0000 (UTC), Kaz Kylheku
<········@gmail.com> wrote:


>OS'es are typically developed in a language which has a weak type system (VMS:
>assembly language; UNIXes: C full of casts and other features  like inline
>assembly that defeat the type system).

Of necessity.


>Operating system code typically contains oodles of Greenspunned dynamic typing.
>That is to say, ad-hoc implementations of dynamic typing in the code base
>itself, enforced only by hand-crafted checks and conventions.

A strong type system (static or dynamic) doesn't help in code where
bits need to be reinterpreted ... which basically amounts to some
portion of nearly every possible device driver.  Bit munging[*] code
often has performance requirements that necessitate working in place -
for which you must be able to overlay different views of the data on
the same location.  Things like Haskell's IO monad and Ocaml's IO
classes are a farce - in truth little more than systematic ways to
throw well typed code out the window.

I agree that the 98% of the OS that does not do any bit munging could
benefit greatly from strong typing, but I haven't encountered a strong
type system that facilitates defeating it when necessary.  The best
compromises I have seen have been in Ada and Modula-3 with UofW's
"View" extension.  I like Lisp - but let's be honest - Lisp doesn't
make bit munging easy.  Nor does any modern statically typed language
I have tried.

Then too, operating systems are not the only places where system code
exists.  A hell of a lot of code is written for bare hardware in
embedded systems and quite a lot of it falls into the bit munging
category.  I used to do embedded programming and I've seen as much as
50% of an application devoted to munging data.

George

[*] I distinguish "munging" from "banging".  I use "munging" to mean
interpreting the same bits differently in different contexts and
"banging" to mean altering data at the bit level.  YMMV and your
terminology as well.
From: Zach Beane
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <m3zle9lq74.fsf@unnamed.xach.com>
George Neuner <········@comcast.net> writes:

> I like Lisp - but let's be honest - Lisp doesn't make bit munging
> easy.  Nor does any modern statically typed language I have tried.

I've heard many Lisp people say this (usually with the same disclaimer),
but I don't find it to be true personally. The same things that make
Lisp great for programming in general also make it great for me to do
whatever bit munging I need.

What sort of bit munging do you have in mind when you say this?

What do you think of Rob Warnock's public adventures in bit munging?

Zach
From: Rob Warnock
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <kbGdnehfsqYy4nPUnZ2dnUVZ_gBi4p2d@speakeasy.net>
Zach Beane  <····@xach.com> wrote:
+---------------
| George Neuner <········@comcast.net> writes:
| > I like Lisp - but let's be honest - Lisp doesn't make bit munging
| > easy.  Nor does any modern statically typed language I have tried.
| 
| I've heard many Lisp people say this (usually with the same disclaimer),
| but I don't find it to be true personally. The same things that make
| Lisp great for programming in general also make it great for me to do
| whatever bit munging I need.
+---------------

Ditto!

+---------------
| What sort of bit munging do you have in mind when you say this?
| What do you think of Rob Warnock's public adventures in bit munging?
+---------------

Heh. I just made a parallel reply with some more toy examples...  ;-}


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49ee924f$0$22548$607ed4bc@cv.net>
Rob Warnock wrote:
> Zach Beane  <····@xach.com> wrote:
> +---------------
> | George Neuner <········@comcast.net> writes:
> | > I like Lisp - but let's be honest - Lisp doesn't make bit munging
> | > easy.  Nor does any modern statically typed language I have tried.
> | 
> | I've heard many Lisp people say this (usually with the same disclaimer),
> | but I don't find it to be true personally. The same things that make
> | Lisp great for programming in general also make it great for me to do
> | whatever bit munging I need.
> +---------------
> 
> Ditto!
> 
> +---------------
> | What sort of bit munging do you have in mind when you say this?
> | What do you think of Rob Warnock's public adventures in bit munging?
> +---------------
> 
> Heh. I just made a parallel reply with some more toy examples...  ;-}

Yes, I do believe we have witnessed the live birth of The Warnock Proof 
By Counter-Example Of the Non-Bitmunging Objection to Lisp.

Snappy acronym left as an exercise.

kxo
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090502023623.589@gmail.com>
On 2009-04-22, Kenneth Tilton <·········@gmail.com> wrote:
> Rob Warnock wrote:
>> Heh. I just made a parallel reply with some more toy examples...  ;-}
>
> Yes, I do believe we have witnessed the live birth of The Warnock Proof 
> By Counter-Example Of the Non-Bitmunging Objection to Lisp.
>
> Snappy acronym left as an exercise.

I'm out of snappy acronyms. In this thread I already came up with PIFFLE:
parenthesized, indented, (but otherwise) free-form, lisp expressions.

This was in response to PLOT.

If it's not Piffle, I don't want it.

Ah, okay:

  WRUBOL: 
  
  Warnock Refutation of The Unavailability of Bitmunging Operations in Lisp

Or just WRUB. We know that bitmunging is operations, and Lisp is ever
the contextual ground against any figure we may paint here; and WRUBOL
sounds too much like a programming language name.

And so: WRUB their snorting, porcine, little noses in it!
From: Cesar Rabak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <gsr4te$vhl$1@aioe.org>
Kenneth Tilton wrote:
> Rob Warnock wrote:
>> Zach Beane  <····@xach.com> wrote:
>> +---------------
>> | George Neuner <········@comcast.net> writes:
>> | > I like Lisp - but let's be honest - Lisp doesn't make bit munging
>> | > easy.  Nor does any modern statically typed language I have tried.
>> |
>> | I've heard many Lisp people say this (usually with the same disclaimer),
>> | but I don't find it to be true personally. The same things that make
>> | Lisp great for programming in general also make it great for me to do
>> | whatever bit munging I need.
>> +---------------
>>
>> Ditto!
>>
>> +---------------
>> | What sort of bit munging do you have in mind when you say this?
>> | What do you think of Rob Warnock's public adventures in bit munging?
>> +---------------
>>
>> Heh. I just made a parallel reply with some more toy examples...  ;-}
> 
> Yes, I do believe we have witnessed the live birth of The Warnock Proof 
> By Counter-Example Of the Non-Bitmunging Objection to Lisp.
> 
> Snappy acronym left as an exercise.
> 
If we come up with a clever one without using the articles or prepositions
do we get a bonus beer?
From: Rob Warnock
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <kbGdnelfsqau4nPUnZ2dnUVZ_gCdnZ2d@speakeasy.net>
George Neuner  <········@comcast.net> wrote:
+---------------
| I like Lisp - but let's be honest - Lisp doesn't make bit munging easy.
...
| Then too, operating systems are not the only places where system code
| exists.  A hell of a lot of code is written for bare hardware in
| embedded systems and quite a lot of it falls into the bit munging
| category.
...
| [*] I distinguish "munging" from "banging".  I use "munging" to mean
| interpreting the same bits differently in different contexts and
| "banging" to mean altering data at the bit level.  YMMV and your
| terminology as well.
+---------------

One of the reasons I really like CMUCL is that it *does* let one get
down to the bit level if you really want to. [Most other production
CL compilers also provide for this; I just happen to know CMUCL best.]
Assume for the sake of argument that one has already created the
following aliases, abbreviations, or shortcuts [as I have in my normal
toolbox]:

  - A read-macro for "0" that treats "0x{number}" as "#x{number}".
  - A FORMAT function "0x" for convenient hex printing.
  - make-lisp-obj == kernel:make-lisp-obj
  - lisp-obj      == kernel:get-lisp-obj-address
  - r{8,16,32}    == (lambda (addr)
		       (system:sap-ref-{8,16,32} (system:int-sap addr) 0))
  - w{8,16,32}    == (lambda (addr new-value)
		       (setf (system:sap-ref-{8,16,32} (system:int-sap addr) 0)
			     new-value))
  - d32           == (lambda (addr &optional (len #x40) (print-addr addr))
		       "Does a hex dump from ADDR through (+ ADDR LEN -1),
			labelling locations with PRINT-ADDR. (The latter is
			useful when the object is mmap()'d to hardware.)" 

Then you can do these sorts of things:

    cmu> (gc :full t)   ; So things won't move around while we play.
    ; [GC threshold exceeded with 10,060,488 bytes in use.  Commencing GC.]
    ; [GC completed with 1,158,608 bytes retained and 8,901,880 bytes freed.]
    ; [GC will next occur when at least 13,158,608 bytes are in use.]

    NIL

    cmu> (deflex foo (vector 1 2 3 4))

    FOO
    cmu> (lisp-obj foo)

    1209122855
    cmu> (hex *)
    0x4811c027
    1209122855
    cmu> 

CMUCL uses 3-bit lowtags; a 7 means "other heap object" (that is,
not cons, function, or CLOS instance), which includes arrays.
Thus the array actually starts in memory at 0x4811c020, and we
*could* dump it this way:

    cmu> (loop for addr from 0x4811c020 by 4 repeat 6
	   collect (r32 addr))

    (58 16 4 8 12 16)
    cmu> (format t "~{~/0x/~^ ~}~%" *)
    0x0000003a 0x00000010 0x00000004 0x00000008 0x0000000c 0x00000010
    NIL
    cmu> 

which would show us that the heap header tag for SIMPLE-VECTOR is
decimal 58 (hex 0x3a), and would suggest that in CMUCL fixnums are
30 bits (using both the 0 & 4 lowtags), and that the second word of
a SIMPLE-VECTOR is the user-visible length of the vector as a fixnum
[all of which is in fact the case].  Or we could just use D32:  ;-}

    cmu> (d32 foo)
    0x4811c020: 0x0000003a 0x00000010 0x00000004 0x00000008
    0x4811c030: 0x0000000c 0x00000010 0x48119217 0x4811c043
    0x4811c040: 0x28f0000b 0x28f0000b 0x0000008c 0x28f0000b
    0x4811c050: 0x4811c04b 0x48119403 0x48119243 0x4811c063
    cmu> 

Yes, the display overran the object. So sue me. ;-}
If I weren't being lazy I could have typed this:

    cmu> (d32 foo 24)
    0x4811c020: 0x0000003a 0x00000010 0x00000004 0x00000008
    0x4811c030: 0x0000000c 0x00000010
    cmu> 

or:

    cmu> (d32 foo (* 4 (+ 2 (length foo))))
    0x4811c020: 0x0000003a 0x00000010 0x00000004 0x00000008
    0x4811c030: 0x0000000c 0x00000010
    cmu> 

or even:

    cmu> (d32 foo (+ 8 (r32 (+ 4 (logandc2 (lisp-obj foo) 7)))))
    0x4811c020: 0x0000003a 0x00000010 0x00000004 0x00000008
    0x4811c030: 0x0000000c 0x00000010
    cmu> 

<ASIDE>
  The 3rd arg to D32 is for when the address is mmap'd to some physical
  hardware, and you want to display the physical address [to match the
  bus and/or chip documentation] instead of the virtual address when dumping:

    cmu> (d32 foo 24 0xcf900000)
    0xcf900000: 0x0000003a 0x00000010 0x00000004 0x00000008
    0xcf900010: 0x00004ab2 0x00000010
    cmu> 
</ASIDE>

Now suppose we want to change FOO from #(1 2 3 4) to #(1 2 #\J 4).
Yes, we could (SETF (AREF FOO 2) #\J), but what fun is that?!?  ;-}  ;-}

    cmu> (hex (char-code #\J)) ; An ASCII "J" is 74 (0x4a), in CMUCL 
    0x0000004a
    74
    cmu> (hex (lisp-obj #\J))  ; stored shifted up 8 in immediate type 0xb2.
    0x00004ab2
    19122
    cmu> (w32 (+ 16 (logandc2 (lisp-obj foo) 7)) 19122)  ; *ZAP!!*

    cmu> (d32 foo 24)
    0x4811c020: 0x0000003a 0x00000010 0x00000004 0x00000008
    0x4811c030: 0x00004ab2 0x00000010
    cmu> foo

    #(1 2 #\J 4)
    cmu> 

Is that enough "munging/banging" for you?!?   ;-}  ;-}

If not, I can dig up plenty of examples from hardware bringup and
debugging [or you can just search for lots of previous noise by yours
truly in this group on that subject, search terms: "rpw3 hwtool opfr"].

+---------------
| I used to do embedded programming and I've seen as much
| as 50% of an application devoted to munging data.
+---------------

Indeed. My "hwtools" script [which uses my "peek-poke" library]
is almost *entirely* bit-banging stuff. A few tiny examples:

    ;;; Spray bits out for easy reading of hardware registers
    (defun decode-bits (n)
      (let (mflag)
	(when (minusp n)
	  (setf mflag t n (lognot n)))
	(loop for i downfrom (integer-length n) to 0
	  when mflag
	    collect (if (zerop n) 'all-ones 'all-ones-except)
	    and do (setf mflag nil)
	  when (logbitp i n)
	    collect i)))

    ;;; Same as above, but with textual labels.
    (defun decode-named-bits (value name-vector &key show-negated)
      (loop for i downfrom (1- (length name-vector)) to 0 do
	(cond
	  ((logbitp i value)
	   (format t " ~a" (aref name-vector i)))
	  (show-negated
	   (format t " \\~a" (aref name-vector i)))))
      (force-output))

used thusly:

    cmu> (decode-bits 0x48d02cf3)
    (30 27 23 22 20 13 11 10 7 6 5 4 1 0)
    cmu> 

Useful enough, but what do they *mean*?!?

    cmu> (deflex icr-bit-names '#(	; From an I2C controller chip
	   "Start"
	   "Stop"
	   "Nack"
	   "TransferByte"
	   "MasterAbort"
	   "SCL_En"
	   "I2C_En"
	   "GenCall_Dis" )) ; There were more, but that's enough for now.

    ICR-BIT-NAMES
    cmu> (decode-bits 0x69)

    (6 5 3 0)
    cmu> (decode-named-bits 0x69 icr-bit-names)
     I2C_En SCL_En TransferByte Start
    NIL
    cmu> (decode-named-bits 0x69 icr-bit-names :show-negated t)
     \GenCall_Dis I2C_En SCL_En \MasterAbort TransferByte \Nack \Stop Start
    NIL
    cmu> 
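For comparison, the same decoding idea is only a few lines in C as well; this is my sketch of a DECODE-NAMED-BITS analogue (caller supplies the name table and a large-enough output buffer):

```c
#include <stdint.h>
#include <string.h>

/* C analogue of DECODE-NAMED-BITS above: write the names of the
   set bits in VALUE into OUT, highest bit first, space-separated. */
static void decode_named_bits(uint32_t value, const char *names[],
                              int nbits, char *out, size_t outsz) {
    out[0] = '\0';
    for (int i = nbits - 1; i >= 0; i--) {
        if (value & (UINT32_C(1) << i)) {
            if (out[0] != '\0')
                strncat(out, " ", outsz - strlen(out) - 1);
            strncat(out, names[i], outsz - strlen(out) - 1);
        }
    }
}
```

With the ICR bit names from the example above, decoding 0x69 yields the same "I2C_En SCL_En TransferByte Start" listing.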


-Rob

p.s. Most O/S kernels have similar internal routines which, in fact, the
above was modelled on. E.g., from "dmesg.boot" on my FreeBSD laptop, we
see how the "Features" register in the CPU is decoded:

    CPU: AMD Athlon(tm) Processor (1836.65-MHz 686-class CPU)
      Origin = "AuthenticAMD"  Id = 0x6a0  Stepping = 0
      Features=0x383f9ff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE>
      AMD Features=0xc0480000<MP,AMIE,DSP,3DNow!>

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: George Neuner
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <h0l1v4h9crd62kltn2blgprna63r46kj0p@4ax.com>
On Tue, 21 Apr 2009 21:21:39 -0500, ····@rpw3.org (Rob Warnock) wrote:

>George Neuner  <········@comcast.net> wrote:
>+---------------
>| I like Lisp - but let's be honest - Lisp doesn't make bit munging easy.
>...
>| Then too, operating systems are not the only places where system code
>| exists.  A hell of a lot of code is written for bare hardware in
>| embedded systems and quite a lot of it falls into the bit munging
>| category.
>...
>| [*] I distinguish "munging" from "banging".  I use "munging" to mean
>| interpreting the same bits differently in different contexts and
>| "banging" to mean altering data at the bit level.  YMMV and your
>| terminology as well.
>+---------------
>
>One of the reasons I really like CMUCL is that it *does* let one get
>down to the bit level if you really want to. [Most other production
>CL compilers also provide for this; I just happen to know CMUCL best.]
>Assume for the sake of argument that one has already created the
>following aliases, abbreviations, or shortcuts [as I have in my normal
>toolbox]:
>
>  - A read-macro for "0" that treats "0x{number}" as "#x{number}".
>  - A FORMAT function "0x" for convenient hex printing.
>  - make-lisp-obj == kernel:make-lisp-obj
>  - lisp-obj      == kernel:get-lisp-obj-address
>  - r{8,16,32}    == (lambda (addr)
>		       (system:sap-ref-{8,16,32} (system:int-sap addr) 0))
>  - w{8,16,32}    == (lambda (addr new-value)
>		       (setf (system:sap-ref-{8,16,32} (system:int-sap addr) 0)
>			     new-value))
>  - d32           == (lambda (addr &optional (len #x40) (print-addr addr))
>		       "Does a hex dump from ADDR through (+ ADDR LEN -1),
>			labelling locations with PRINT-ADDR. (The latter is
>			useful when the object is mmap()'d to hardware.)" 
>
>Then you can do these sorts of things:
>

< SNIP - some interesting examples >

>
>Is that enough "munging/banging" for you?!?   ;-}  ;-}
>
>If not, I can dig up plenty of examples from for hardware bringup and
>debugging [or you can just search for lots of previous noise by yours
>truly in this group on that subject, search terms: "rpw3 hwtool opfr"].


Your examples are mostly "banging" and I think Lisp does this
acceptably.  What I'm talking about with "munging" is, for example,
you have a sequence of bytes in a buffer, you see that the first byte
is 0x5 and you overlay a particular typed data structure on the head
of the sequence (or wherever) and manipulate it directly.

In C

  switch (*buffer) 
  {
  :
  case 0x5:
    MY_STRUCT* p = (MY_STRUCT*) buffer;
    if ((p->whatsit == foo) && (p->whatfor == bar))
    {
       MY_OTHER_STRUCT* q = (MY_OTHER_STRUCT*) p->payload;
       :
    }
    :
    break;
  :
  }

In Lisp (and modern FPLs), this kind of code is neither easily done
nor generally portable among compilers.  As I've mentioned before,
I've done embedded and real time programming, and so this kind of
stuff is maybe more important to me than to a general application
developer, but I would like to be able to write high performance
munging code without resorting to unreadable trickery.  

I believe, as I think most Lispers do, that programming languages
should not restrict power - they should probably have safe defaults
(although this is debatable too) but not limits.  So far I haven't met
any Lisp or FPL that makes writing such code easy.  Maybe it was on a
Lisp Machine, but I'm not old enough to remember them and the targets
today are stock hardware.

Oh, and the issue is not about compiling to C (or whatever) vs an
image or managed VM ... rather it's about being able to write the
code as directly as possible in the high level language.

George
From: John Thingstad
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <op.usq6m9isut4oq5@pandora>
On Tue, 21 Apr 2009 20:58:46 +0200, George Neuner  
<········@comcast.net>:

>
> A strong type system (static or dynamic) doesn't help in code where
> bits need to be reinterpreted ... which basically amounts to some
> portion of nearly every possible device driver.  Bit munging[*] code
> often has performance requirements that necessitate working in place -
> for which you must be able to overlay different views of the data on
> the same location.  Things like Haskell's IO monad and Ocaml's IO
> classes are a farce - in truth little more than systematic ways to
> throw well typed code out the window.

What on earth are you talking about? Monads are well established in  
category theory. Anyhow, Haskell suffers from the fact that it is lazily  
evaluated and curried. This means it can't know in which order the  
arguments of a function will be resolved. Clearly for things like IO this  
represents a problem. The purpose of the IO monad is to enforce an  
ordering of function calls. It is certainly not a method for messing up  
the type inference.

-----------------------
John Thingstad
From: Ray Dillinger
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49f13a2b$0$95507$742ec2ed@news.sonic.net>
George Neuner wrote:

> [*] I distinguish "munging" from "banging".  I use "munging" to mean
> interpreting the same bits differently in different contexts and
> "banging" to mean altering data at the bit level.  YMMV and your
> terminology as well.

I have always used the term "type punning" for bits that are multiply
interpreted (eg, code that uses a memory area as a floating point 
value and then accesses the same value in memory as an array of bytes 
or a string of characters, etc).

"Munging" to me implies conversion between forms that express 
exactly identical information.  For example if you have transition 
tables, regular expressions, BNF grammars for regular languages, or 
state diagrams, you can munge them into any of the other forms on 
that list. 

And I use "bit banging" more generally, to mean any operations which, 
by nature, require access to the underlying memory model or exact 
knowledge of binary data representation.  There is a lot of bit 
banging required, for example, in writing an FFI between languages 
with dissimilar data representations and call frame layouts, or 
in doing memory-mapped I/O. Type punning *almost* always involves
some degree of bit banging, but bit banging can be done without 
type punning.  In most languages, bit banging is that which you 
cannot do portably because bits are below the level of abstraction
accessible as typed data.
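George's buffer-overlay case is itself a type pun, and even that can be written in conforming C without the pointer cast; a minimal sketch (the struct and field names are hypothetical, byte order left aside):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical wire-format header; names are illustrative only. */
struct wire_header {
    uint8_t  tag;      /* message discriminator */
    uint8_t  flags;
    uint16_t length;   /* byte order not handled here */
};

/* Type punning: impose a typed view on untyped buffer bytes.
   memcpy sidesteps the alignment and strict-aliasing hazards
   of casting the buffer pointer directly. */
static struct wire_header view_header(const unsigned char *buf) {
    struct wire_header h;
    memcpy(&h, buf, sizeof h);
    return h;
}
```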

I agree with you though, that YMMV and your terminology as well; 
unlike formal languages, natural language is fluid, and every 
term has a "cloud" of related possible meanings.  "Authoritative"
definitions are just adhoc documentation of actual use, and vary 
from one year (or one decade) to the next.

                                Bear
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <75ogtvF19l9qaU1@mid.individual.net>
Dmitry A. Kazakov wrote:
> On Mon, 20 Apr 2009 04:44:13 -0700 (PDT), Vend wrote:
> 
>> On 20 Apr, 11:39, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>> wrote:
>>> On Mon, 20 Apr 2009 01:53:29 +0200, Pascal Costanza wrote:
>>>> Vend wrote:
>>>>> I think that in order to write reliable software, early error
>>>>> detection is generally preferable, even if in some cases it might
>>>>> generate false positives.
>>> Yes, though considering this case, dead code is obviously an error. So it
>>> is a true positive, falsely attributed.
>> It doesn't have to be dead code.
>> It can be that, due to conditionals, some portions of code are all
>> executed at some time, but not in some specific sequence.
>>
>> This can indeed cause some static type errors that would not result in
>> dynamic type errors at run time.
> 
> That only means that an error has slipped undetected. A type error is
> either present or not. Correctness of a program does not depend on the
> states of the program. It cannot be correct in one state (executed path)
> and incorrect in another (not yet executed path). In order to be able to
> reason about correctness of paths, they have to be properly encapsulated in
> modules with clear interfaces, and become independent programs in the end.
> So long they aren't, the argument is bogus, because it is unknown if they
> are properly insulated from each other to be reasoned about separately.
> Once they become properly separated, the argument remains bogus because the
> module that passed type check, well, did pass it.

Once again: Program correctness and the absence of type errors are 
unrelated.

The following program fragment does not have any type error, but is 
incorrect:

int x = 5 / 0;


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <5d872069-b8fa-4f6a-a89b-5f7355c2577b@f41g2000pra.googlegroups.com>
On 28 Apr, 16:07, Pascal Costanza <····@p-cos.net> wrote:
> Dmitry A. Kazakov wrote:
> > On Mon, 20 Apr 2009 04:44:13 -0700 (PDT), Vend wrote:
>
> >> On 20 Apr, 11:39, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> >> wrote:
> >>> On Mon, 20 Apr 2009 01:53:29 +0200, Pascal Costanza wrote:
> >>>> Vend wrote:
> >>>>> I think that in order to write reliable software, early error
> >>>>> detection is generally preferable, even if in some cases it might
> >>>>> generate false positives.
> >>> Yes, though considering this case, dead code is obviously an error. So it
> >>> is a true positive, falsely attributed.
> >> It doesn't have to be dead code.
> >> It can be that, due to conditionals, some portions of code are all
> >> executed at some time, but not in some specific sequence.
>
> >> This can indeed cause some static type errors that would not result in
> >> dynamic type errors at run time.
>
> > That only means that an error has slipped undetected. A type error is
> > either present or not. Correctness of a program does not depend on the
> > states of the program. It cannot be correct in one state (executed path)
> > and incorrect in another (not yet executed path). In order to be able to
> > reason about correctness of paths, they have to be properly encapsulated in
> > modules with clear interfaces, and become independent programs in the end.
> > So long they aren't, the argument is bogus, because it is unknown if they
> > are properly insulated from each other to be reasoned about separately.
> > Once they become properly separated, the argument remains bogus because the
> > module that passed type check, well, did pass it.
>
> Once again: Program correctness and the absence of type errors are
> unrelated.
>
> The following program fragment does not have any type error, but is
> incorrect:
>
> int x = 5 / 0;

Actually this is conceptually a type error, but the type system of
C/C++/Java isn't expressive enough to catch it at compile time.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1xu5hgbj152tx$.1lg7zstc00efb.dlg@40tude.net>
On Tue, 28 Apr 2009 16:07:59 +0200, Pascal Costanza wrote:

> Dmitry A. Kazakov wrote:
>> On Mon, 20 Apr 2009 04:44:13 -0700 (PDT), Vend wrote:
>> 
>>> On 20 Apr, 11:39, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>>> wrote:
>>>> On Mon, 20 Apr 2009 01:53:29 +0200, Pascal Costanza wrote:
>>>>> Vend wrote:
>>>>>> I think that in order to write reliable software, early error
>>>>>> detection is generally preferable, even if in some cases it might
>>>>>> generate false positives.
>>>> Yes, though considering this case, dead code is obviously an error. So it
>>>> is a true positive, falsely attributed.
>>> It doesn't have to be dead code.
>>> It can be that, due to conditionals, some portions of code are all
>>> executed at some time, but not in some specific sequence.
>>>
>>> This can indeed cause some static type errors that would not result in
>>> dynamic type errors at run time.
>> 
>> That only means that an error has slipped undetected. A type error is
>> either present or not. Correctness of a program does not depend on the
>> states of the program. It cannot be correct in one state (executed path)
>> and incorrect in another (not yet executed path). In order to be able to
>> reason about correctness of paths, they have to be properly encapsulated in
>> modules with clear interfaces, and become independent programs in the end.
>> So long they aren't, the argument is bogus, because it is unknown if they
>> are properly insulated from each other to be reasoned about separately.
>> Once they become properly separated, the argument remains bogus because the
>> module that passed type check, well, did pass it.
> 
> Once again: Program correctness and the absence of type errors are 
> unrelated.

A correct statement is that absence of type errors does not imply program
correctness.

An analogous statement is that a properly functioning main board of the
computer does not imply correctness of your program. Which by no means
should lead you to the conclusion that you should buy a defective computer
"in order to improve your productivity" (as always).

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <75oi5pF18b2lbU2@mid.individual.net>
Dmitry A. Kazakov wrote:
> On Tue, 28 Apr 2009 16:07:59 +0200, Pascal Costanza wrote:
> 
>> Dmitry A. Kazakov wrote:
>>> On Mon, 20 Apr 2009 04:44:13 -0700 (PDT), Vend wrote:
>>>
>>>> On 20 Apr, 11:39, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>>>> wrote:
>>>>> On Mon, 20 Apr 2009 01:53:29 +0200, Pascal Costanza wrote:
>>>>>> Vend wrote:
>>>>>>> I think that in order to write reliable software, early error
>>>>>>> detection is generally preferable, even if in some cases it might
>>>>>>> generate false positives.
>>>>> Yes, though considering this case, dead code is obviously an error. So it
>>>>> is a true positive, falsely attributed.
>>>> It doesn't have to be dead code.
>>>> It can be that, due to conditionals, some portions of code are all
>>>> executed at some time, but not in some specific sequence.
>>>>
>>>> This can indeed cause some static type errors that would not result in
>>>> dynamic type errors at run time.
>>> That only means that an error has slipped undetected. A type error is
>>> either present or not. Correctness of a program does not depend on the
>>> states of the program. It cannot be correct in one state (executed path)
>>> and incorrect in another (not yet executed path). In order to be able to
>>> reason about correctness of paths, they have to be properly encapsulated in
>>> modules with clear interfaces, and become independent programs in the end.
>>> So long they aren't, the argument is bogus, because it is unknown if they
>>> are properly insulated from each other to be reasoned about separately.
>>> Once they become properly separated, the argument remains bogus because the
>>> module that passed type check, well, did pass it.
>> Once again: Program correctness and the absence of type errors are 
>> unrelated.
> 
> A correct statement is that absence of type errors does not imply program
> correctness.

Also vice versa: Program correctness does not imply the absence of type 
errors.

> Analogous statement is that a properly functioning main board of the
> computer does not imply correctness of your program. Which by no means
> should lead you the conclusion that you should buy a defective computer "in
> order to improve your productivity" (as always).

Here, the reverse doesn't hold, and that's why this is a bad analogy.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090428165941.GG3862@gildor.inglorion.net>
On Tue, Apr 28, 2009 at 04:29:13PM +0200, Pascal Costanza wrote:
>
> Also vice versa: Program correctness does not imply the absence of type  
> errors.

I don't understand that. If the program has errors, how can it be 
correct?

Regards,

Bob

-- 
Tis better to be silent and thought a fool, than to open your mouth and 
remove all doubt.

	-- Abraham Lincoln

From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090509030832.640@gmail.com>
["Followup-To:" header set to comp.lang.lisp.]
On 2009-04-28, Robbert Haarman <··············@inglorion.net> wrote:
> On Tue, Apr 28, 2009 at 04:29:13PM +0200, Pascal Costanza wrote:
>>
>> Also vice versa: Program correctness does not imply the absence of type  
>> errors.
>
> I don't understand that. If the program has errors, how can it be 
> correct?

We can catch errors dynamically and do something sensible.

In Lisp we have access, at run time, to lexical scanning, evaluation and
compilation.

We can use the Lisp lexical scanner to read data which is in exactly the same
format as Lisp source code.

This is most readily illustrated by read-from-string:

  (read-from-string "(a b c 1)") -> (A B C 1)

The read-from-string function is a convenience routine which constructs a
string stream, and then calls the reader to scan from that stream.

The reader may encounter an error, like unbalanced parentheses or whatever. If
it successfully reads something, that object may have any type.

We can then operate on that object under the assumption that it has some
particular type, and catch the condition if there is a type mismatch.

Does the following function have type errors? If so, how would you fix them?

  (defun repl ()
    (loop (print (eval (read)))))
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <75p351F19k3b6U1@mid.individual.net>
Robbert Haarman wrote:
> On Tue, Apr 28, 2009 at 04:29:13PM +0200, Pascal Costanza wrote:
>> Also vice versa: Program correctness does not imply the absence of type  
>> errors.
> 
> I don't understand that. If the program has errors, how can it be 
> correct?

Easy. The trick is always the same: Use some form of metaprogramming or 
reflection.

Here is a simple example:

public class Support {

   int i = "Hello, World!";

}

public class Test {

   public static void main(String args[]) {
     Object obj = null;
     try {
       obj = Class.forName("Support").newInstance();
     } catch (Exception e) {}
     if (obj == null) {
       System.out.println("This program is correct.");
     } else {
       System.out.println("This program is incorrect.");
     }
   }
}


When compiling this program, you will get a type error at compile time, 
but it will be correct when you run it.

Some people find metaprogramming and reflection so essential, that they 
would drop static typing rather than restrict their expressiveness.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1af86a6b-acde-4c9e-9679-695fcced56ec@y10g2000prc.googlegroups.com>
On 28 Apr, 21:18, Pascal Costanza <····@p-cos.net> wrote:
> Robbert Haarman wrote:
> > On Tue, Apr 28, 2009 at 04:29:13PM +0200, Pascal Costanza wrote:
> >> Also vice versa: Program correctness does not imply the absence of type  
> >> errors.
>
> > I don't understand that. If the program has errors, how can it be
> > correct?
>
> Easy. The trick is always the same: Use some form of metaprogramming or
> reflection.
>
> Here is a simple example:
>
> public class Support {
>
>    int i = "Hello, World!";
>
> }
>
> public class Test {
>
>    public static void main(String args[]) {
>      Object obj = null;
>      try {
>        obj = Class.forName("Support").newInstance();
>      } catch (Exception e) {}
>      if (obj == null) {
>        System.out.println("This program is correct.");
>      } else {
>        System.out.println("This program is incorrect.");
>      }
>    }
>
> }
>
> When compiling this program, you will get a type error at compile time,
> but it will be correct when you run it.

How can you run it if you can't compile or load it?

> Some people find metaprogramming and reflection so essential, that they
> would drop static typing rather than restrict their expressiveness.

Metaprogramming and reflection are orthogonal to static vs dynamic
typing.
From: Thomas A. Russ
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ymivdoobbrt.fsf@blackcat.isi.edu>
Vend <······@virgilio.it> writes:

> On 28 Apr, 21:18, Pascal Costanza <····@p-cos.net> wrote:
> > When compiling this program, you will get a type error at compile time,
> > but it will be correct when you run it.
> 
> How can you run it if you can't compile or load it?

Exactly the larger point.

You should be able to compile and load it, since it is possible to run
it without error.



-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <75pgf8F199nh9U2@mid.individual.net>
Vend wrote:
> On 28 Apr, 21:18, Pascal Costanza <····@p-cos.net> wrote:
>> Robbert Haarman wrote:
>>> On Tue, Apr 28, 2009 at 04:29:13PM +0200, Pascal Costanza wrote:
>>>> Also vice versa: Program correctness does not imply the absence of type  
>>>> errors.
>>> I don't understand that. If the program has errors, how can it be
>>> correct?
>> Easy. The trick is always the same: Use some form of metaprogramming or
>> reflection.
>>
>> Here is a simple example:
>>
>> public class Support {
>>
>>    int i = "Hello, World!";
>>
>> }
>>
>> public class Test {
>>
>>    public static void main(String args[]) {
>>      Object obj = null;
>>      try {
>>        obj = Class.forName("Support").newInstance();
>>      } catch (Exception e) {}
>>      if (obj == null) {
>>        System.out.println("This program is correct.");
>>      } else {
>>        System.out.println("This program is incorrect.");
>>      }
>>    }
>>
>> }
>>
>> When compiling this program, you will get a type error at compile time,
>> but it will be correct when you run it.
> 
> How can you run it if you can't compile or load it?

Well, I can run it. I don't know why you can't. ;)

>> Some people find metaprogramming and reflection so essential, that they
>> would drop static typing rather than restrict their expressiveness.
> 
> Metaprogramming and reflection are orthogonal to static vs dynamic
> typing.

...and the earth is flat.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <99ed12ed-3c2b-4bc3-9470-ceb59ee923d5@x1g2000prh.googlegroups.com>
On 29 Apr, 01:06, Pascal Costanza <····@p-cos.net> wrote:
> > Metaprogramming and reflection are orthogonal to static vs dynamic
> > typing.
>
> ...and the earth is flat.

No comment?
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <75tiioF1alcnsU2@mid.individual.net>
Vend wrote:
> On 29 Apr, 01:06, Pascal Costanza <····@p-cos.net> wrote:
>>> Metaprogramming and reflection are orthogonal to static vs dynamic
>>> typing.
>> ...and the earth is flat.
> 
> No comment?

In CLOS, I can redefine classes at runtime, using the reflective 
features of CLOS, which includes adding and removing slots. This may 
mean that other parts of the code reference slots that did or didn't 
exist when the program was compiled. A static type checker cannot 
anticipate whether or when such changes to classes will be performed.
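A minimal sketch of the kind of redefinition I mean (class and slot 
names are hypothetical):

```lisp
;; Hypothetical sketch: adding a slot to a class at runtime.
(defclass point ()
  ((x :initarg :x :accessor point-x)))

(defvar *p* (make-instance 'point :x 1))

;; Redefine the class with an extra slot. Existing instances such
;; as *P* are updated lazily, on their next access.
(defclass point ()
  ((x :initarg :x :accessor point-x)
   (y :initform 0 :accessor point-y)))

;; (point-y *p*) => 0, although *P* predates the Y slot.
```

Code compiled before the redefinition can reference the new slot 
without being recompiled, which is exactly what a static checker 
cannot anticipate.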

This is not an esoteric feature, but tremendously useful in practice. 
I'd wager that it's much more useful than static typing in particular 
circumstances.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <76dbdcee-17be-428b-a2af-b0f629e83901@d7g2000prl.googlegroups.com>
On 30 Apr, 14:06, Pascal Costanza <····@p-cos.net> wrote:
> Vend wrote:
> > On 29 Apr, 01:06, Pascal Costanza <····@p-cos.net> wrote:
> >>> Metaprogramming and reflection are orthogonal to static vs dynamic
> >>> typing.
> >> ...and the earth is flat.
>
> > No comment?
>
> In CLOS, I can redefine classes at runtime, using the reflective
> features of CLOS, which includes adding and removing slots. This may
> mean that other parts of the code reference slots that did or didn't
> exist when the program was compiled. A static type checker cannot
> anticipate whether or when such changes to classes will be performed.

Obviously this kind of thing can't be handled at compile time (it is
equivalent to modifying the source code while the program is running),
but you could have an on-line static type checker that verifies your
program every time you redefine classes and throws an exception if it
finds some type error.
I agree that it would be unusual (I don't know any language that does
that).

> This is not an esoteric feature, but tremendously useful in practice.
> I'd wager that it's much more useful than static typing in particular
> circumstances.

What is it useful for?
Are you sure you are not using classes as hashtables?
From: ··················@gmail.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <13983ce1-25d2-4620-8b2d-47219b7e8a79@q33g2000pra.googlegroups.com>
On Apr 30, 12:19 pm, Vend <······@virgilio.it> wrote:
> On 30 Apr, 14:06, Pascal Costanza <····@p-cos.net> wrote:
>
> > Vend wrote:
> > > On 29 Apr, 01:06, Pascal Costanza <····@p-cos.net> wrote:
> > >>> Metaprogramming and reflection are orthogonal to static vs dynamic
> > >>> typing.
> > >> ...and the earth is flat.
>
> > > No comment?
>
> > In CLOS, I can redefine classes at runtime, using the reflective
> > features of CLOS, which includes adding and removing slots. This may
> > mean that other parts of the code reference slots that did or didn't
> > exist when the program was compiled. A static type checker cannot
> > anticipate whether or when such changes to classes will be performed.
>
> Obviously this kind of thing can't be handled at compile time (it is
> equivalent to modifying the source code while the program is running),
> but you could have an on-line static type checker that verifies your
> program every time you redefine classes and throws an exception if it
> finds some type error.
> I agree that it would be unusual (I don't know any language that does
> that).
>

Wouldn't that be horribly inefficient?

> > This is not an esoteric feature, but tremendously useful in practice.
> > I'd wager that it's much more useful than static typing in particular
> > circumstances.
>
> What is it useful for?
> Are you sure you are not using classes as hash-tables?

With multiple inheritance and generic functions the uses become more
obvious.
Are you sure you aren't using hash-tables as classes?
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <adace026-b021-4575-b580-9b1310f158b4@v23g2000pro.googlegroups.com>
On 30 Apr, 20:45, ··················@gmail.com wrote:
> On Apr 30, 12:19 pm, Vend <······@virgilio.it> wrote:
>
>
>
> > On 30 Apr, 14:06, Pascal Costanza <····@p-cos.net> wrote:
>
> > > Vend wrote:
> > > > On 29 Apr, 01:06, Pascal Costanza <····@p-cos.net> wrote:
> > > >>> Metaprogramming and reflection are orthogonal to static vs dynamic
> > > >>> typing.
> > > >> ...and the earth is flat.
>
> > > > No comment?
>
> > > In CLOS, I can redefine classes at runtime, using the reflective
> > > features of CLOS, which includes adding and removing slots. This may
> > > mean that other parts of the code reference slots that did or didn't
> > > exist when the program was compiled. A static type checker cannot
> > > anticipate whether or when such changes to classes will be performed.
>
> > Obviously this kind of thing can't be handled at compile time (it is
> > equivalent to modifying the source code while the program is running),
> > but you could have an on-line static type checker that verifies your
> > program every time you redefine classes and throws an exception if it
> > finds some type error.
> > I agree that it would be unusual (I don't know any language that does
> > that).
>
> Wouldn't that be horribly inefficient?

Not if the type checker is incremental.
Anyway, it would be more efficient than checking value types at every
access.

> > > This is not an esoteric feature, but tremendously useful in practice.
> > > I'd wager that it's much more useful than static typing in particular
> > > circumstances.
>
> > What is it useful for?
> > Are you sure you are not using classes as hash-tables?
>
> With multiple inheritance and generic functions the uses become more
> obvious.

Explain.

> Are you sure you aren't using hash-tables as classes?
From: Vsevolod
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <c74a56db-3a4d-4e00-855a-86ee35524b87@z16g2000prd.googlegroups.com>
On Apr 30, 7:19 pm, Vend <······@virgilio.it> wrote:
> Obviously this kind of thing can't be handled at compile time (it is
> equivalent to modifying the source code while the program is running),
> but you could have an on-line static type checker that verifies your
> program every time you redefine classes and throws an exception if it
> finds some type error.
> I agree that it would be unusual (I don't know any language that does
> that).
It will not work as desired, because of the above-mentioned need for
atomic changes: with static typing, if you redefine some class to a
different type, you must at the same time (atomically) redefine all
the parts of the program which use this class. Clearly this is highly
impractical and sometimes even infeasible (in distributed systems).
So this is one of the kinds of flexibility which is theoretically
impossible to achieve with static typing.
And if you argue that such programs become incorrect in the process
of redefinition (which in the case of dynamic typing is not atomic),
that would be wrong, because you can anticipate this behavior and
install handlers for such types of errors, thus making the semantics
of the program completely specified.

Best regards,
Vsevolod
From: Thomas A. Russ
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ymid4atrhbb.fsf@blackcat.isi.edu>
Vend <······@virgilio.it> writes:

> On 30 Apr, 14:06, Pascal Costanza <····@p-cos.net> wrote:
> >
> > In CLOS, I can redefine classes at runtime, using the reflective
> > features of CLOS, which includes adding and removing slots. This may
> > mean that other parts of the code reference slots that did or didn't
> > exist when the program was compiled. A static type checker cannot
> > anticipate whether or when such changes to classes will be performed.
> 
> Obviously this kind of thing can't be handled at compile time (it is
> equivalent to modifying the source code while the program is running),
> but you could have an on-line static type checker that verifies your
> program every time you redefine classes and throws an exception if it
> finds some type error.
> I agree that it would be unusual (I don't know any language that does
> that).

Well, the problem with an on-line static type checker that throws an
exception is that this will really interfere with any ability to
sequentially apply a series of modifications.  The first one can change
the class definition, and the other ones can then modify the users of
that class to bring them into conformance with the new class definition.

But if the type checker blocks the first change, then you can't get any
of these changes through, at least not without the rather cumbersome
mechanism of first defining all users to be empty (and thus free of
type errors), then changing the class, and then putting the real bodies
back into place.  Hardly elegant.

Of course, it may be necessary to do this patching in some type of
guarded single-threaded mode to avoid other problems.  But that is all
easily doable.

> > This is not an esoteric feature, but tremendously useful in practice.
> > I'd wager that it's much more useful than static typing in particular
> > circumstances.
> 
> What is it useful for?
> Are you sure you are not using classes as hashtables?

Well, we've found it to be highly useful for implementing a knowledge
representation language in Common Lisp (http://www.isi.edu/isd/LOOM).
One of the optional implementation strategies is to realize each KR
class as a CLOS class.  Since the knowledge representation language (a
description logic variant) allows for evolution of the representation
through concept redefinition, we also need to redefine the underlying
CLOS classes.

A related technique involving changing the class of instances is also
used, because it simplifies the parsing problem.  When an object
identifier is encountered, it may not always be clear from context if
it refers to a class or an instance.  So we are able to assume that it is
an instance and later change it to a class if it turns out that this
initial guess was incorrect.  (Since Loom allows meta-annotations,
classes and relations can also be used as "instances" in assertional
sentences.)

I note that Protege and some of the other OWL parsers suffer from a need
to make sure they figure things out correctly when the object is
instantiated, because they can't change its class later.  That makes
some of the OWL parsing a lot more complicated than it might otherwise
need to be.

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <d17139cf-a5f6-4180-97f9-610d7e4019dc@r36g2000vbr.googlegroups.com>
On 30 Apr, 22:56, ····@sevak.isi.edu (Thomas A. Russ) wrote:
> Vend <······@virgilio.it> writes:
> > On 30 Apr, 14:06, Pascal Costanza <····@p-cos.net> wrote:
>
> > > In CLOS, I can redefine classes at runtime, using the reflective
> > > features of CLOS, which includes adding and removing slots. This may
> > > mean that other parts of the code reference slots that did or didn't
> > > exist when the program was compiled. A static type checker cannot
> > > anticipate whether or when such changes to classes will be performed.
>
> > Obviously this kind of thing can't be handled at compile time (it is
> > equivalent to modifying the source code while the program is running),
> > but you could have an on-line static type checker that verifies your
> > program every time you redefine classes and throws an exception if it
> > finds some type error.
> > I agree that it would be unusual (I don't know any language that does
> > that).
>
> Well, the problem with an on-line static type checker that throws an
> exception is that this will really interfere with any ability to
> sequentially apply a series of modifications.  The first one can change
> the class definition, and the other ones can then modify the users of
> that class to bring them into conformance with the new class definition.

You mean changing the code while the program is running?

> Well, we've found it to be highly useful for implementing a knowledge
> representation language in Common Lisp (http://www.isi.edu/isd/LOOM).
> One of the optional implementation strategies is to realize each KR
> class as a CLOS class.  Since the knowledge representation language (a
> description logic variant) allows for evolution of the representation
> through concept redefinition, we also need to redefine the underlying
> CLOS classes.

How do you make sure that your knowledge base remains consistent after
a class redefinition?
I suppose you need some check of some kind.

> A related technique involving changing the class of instances is also
> used, because it simplifies the parsing problem.  When an object
> identifier is encountered, it may not always be clear from context if
> it refers to a class or a instance.  So we are able to assume that it is
> an instance and later change it to a class if it turns out that this
> initial guess was incorrect.  (Since Loom allows meta-annotations,
> classes and relations can also be used as "instances" in assertional
> sentences.)

Sounds like type inference.

> I note that Protege and some of the other OWL parsers suffer from a need
> to make sure they figure things out correctly when the object is
> instantiated, because they can't change its class later.  That makes
> some of the OWL parsing a lot more complicated than it might otherwise
> need to be.
>
> --
> Thomas A. Russ,  USC/Information Sciences Institute
From: John Thingstad
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <op.utal5ce9ut4oq5@pandora>
På Sat, 02 May 2009 02:29:43 +0200, skrev Vend <······@virgilio.it>:

>
> How do you make sure that your knowledge base remains consistent after
> a class redefinition?
> I suppose you need some check of some kind.
>

Well, as you might know, classes do allow dynamic redefinition. Objects  
are then updated on demand (when next read). For more complex behaviour  
you can use update-instance-for-redefined-class to add actions (like  
computing the new values or accessing a database).
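A minimal sketch of that mechanism (the class, the slots, and the 
dollars-to-cents conversion are all hypothetical):

```lisp
;; Hypothetical sketch: hooking instance updating after a class
;; redefinition with UPDATE-INSTANCE-FOR-REDEFINED-CLASS.
(defclass account ()
  ((balance :initarg :balance :accessor balance)))

(defvar *a* (make-instance 'account :balance 100))

;; Redefine the class, replacing BALANCE with BALANCE-CENTS.
(defclass account ()
  ((balance-cents :accessor balance-cents)))

;; Called automatically when an outdated instance is next touched;
;; the discarded slots' values arrive in PROPERTY-LIST.
(defmethod update-instance-for-redefined-class :after
    ((instance account) added-slots discarded-slots property-list
     &rest initargs)
  (declare (ignore added-slots discarded-slots initargs))
  (setf (balance-cents instance)
        (* 100 (getf property-list 'balance))))

;; (balance-cents *a*) => 10000, computed from the old slot.
```

Because the update is lazy, the method can be defined after the class 
is redefined, as long as it exists before the instance is next read.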

---------------------
John Thingstad
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <760ujuF19heb4U1@mid.individual.net>
Vend wrote:
> On 30 Apr, 14:06, Pascal Costanza <····@p-cos.net> wrote:
>> Vend wrote:
>>> On 29 Apr, 01:06, Pascal Costanza <····@p-cos.net> wrote:
>>>>> Metaprogramming and reflection are orthogonal to static vs dynamic
>>>>> typing.
>>>> ...and the earth is flat.
>>> No comment?
>> In CLOS, I can redefine classes at runtime, using the reflective
>> features of CLOS, which includes adding and removing slots. This may
>> mean that other parts of the code reference slots that did or didn't
>> exist when the program was compiled. A static type checker cannot
>> anticipate whether or when such changes to classes will be performed.
> 
> Obviously this kind of thing can't be handled at compile time (it is
> equivalent to modifying the source code while the program is running),
> but you could have an on-line static type checker that verifies your
> program every time you redefine classes and throws an exception if it
> finds some type error.
> I agree that it would be unusual (I don't know any language that does
> that).

...and I'm not holding my breath.


Pascal


-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: ····················@hotmail.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <c0f1de8d-74bc-4e0b-9948-81ce19a33546@w31g2000prd.googlegroups.com>
On 28 Apr, 23:43, Vend <······@virgilio.it> wrote:
> On 28 Apr, 21:18, Pascal Costanza <····@p-cos.net> wrote:
>
>
>
>
>
> > Robbert Haarman wrote:
> > > On Tue, Apr 28, 2009 at 04:29:13PM +0200, Pascal Costanza wrote:
> > >> Also vice versa: Program correctness does not imply the absence of type  
> > >> errors.
>
> > > I don't understand that. If the program has errors, how can it be
> > > correct?
>
> > Easy. The trick is always the same: Use some form of metaprogramming or
> > reflection.
>
> > Here is a simple example:
>
> > public class Support {
>
> >    int i = "Hello, World!";
>
> > }
>
> > public class Test {
>
> >    public static void main(String args[]) {
> >      Object obj = null;
> >      try {
> >        obj = Class.forName("Support").newInstance();
> >      } catch (Exception e) {}
> >      if (obj == null) {
> >        System.out.println("This program is correct.");
> >      } else {
> >        System.out.println("This program is incorrect.");
> >      }
> >    }
>
> > }
>
> > When compiling this program, you will get a type error at compile time,
> > but it will be correct when you run it.
>
> How can you run it if you can't compile or load it?
>
> > Some people find metaprogramming and reflection so essential, that they
> > would drop static typing rather than restrict their expressiveness.
>
> Metaprogramming and reflection are orthogonal to static vs dynamic
> typing

C++ has its dynamic_cast<>, which looks like a bodge to try to achieve
dynamic typing in an otherwise statically typed language.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1swizxzscjecq$.brhr5hnfmptj$.dlg@40tude.net>
On Tue, 28 Apr 2009 16:29:13 +0200, Pascal Costanza wrote:

> Dmitry A. Kazakov wrote:

>> A correct statement is that absence of type errors does not imply program
>> correctness.
> 
> Also vice versa: Program correctness does not imply the absence of type 
> errors.

That depends on several premises:

1. "type" means a language type (like double), as opposed to a problem
space type (a real number from R)

2. The language is statically typed

A. Provided 1 and 2, an ill-typed program is not a legal program. In this
context correctness is undefined, since it is not a program.

B. Provided not 1, an ill-typed program is incorrect in the sense of the
problem domain. It still can expose proper behavior, but it would be
far-fetched to call it semantically correct. This is the case when the
program models the problem domain improperly, yet functions correctly. It
can well happen because the problem space as a set can have a much bigger
cardinality than the computing space.

C. 1 and not 2 means that the program is always well-typed, independently
of its correctness.

>> Analogous statement is that a properly functioning main board of the
>> computer does not imply correctness of your program. Which by no means
>> should lead you the conclusion that you should buy a defective computer "in
>> order to improve your productivity" (as always).
> 
> Here, the reverse doesn't hold, and that's why this is a bad analogy.

It does. An incorrect program can behave correctly on a malfunctioning
hardware.

The point is, when you design a program you set certain properties of the
hardware as a precondition to everything else. In exactly same way
well-typing is another precondition when a strongly typed language is used.
These are related to correctness in the exact meaning, that program
correctness is defined if and only if these preconditions are met.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <a9815914-c8c4-41af-9d39-efab7a040ad9@r31g2000prh.googlegroups.com>
> That depends on several premises:
[snip]

What are you trying to prove? There is void * in C,
dynamic_cast in C++, polymorphism and typecasts in Java.
I don't know about Haskell and OCaml, but anyway
there are parsers which convert a string to an integer,
and they are invoked inevitably when you read external
data. So, "statically typed" languages are in fact only
partially statically typed, and they _can_ in
practice fail at runtime due to typing errors. Nothing
can help here. On the other hand, there are many checks
that can be enforced in dynamic languages like Lisp:
check-type, the, and declare (type) are ways to introduce
typing in Lisp, and sometimes type checking is done at
compile time. So there is no strict distinction between
statically and dynamically typed languages. And I don't
know what you are talking about when you try to prove
that dynamic typing is flawed. Dynamic typing is just
impossible to avoid. Also, there is no general way to
avoid programming errors, regardless of the language.
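A minimal sketch of those three mechanisms (function names are made up
for illustration):

```lisp
;; Hypothetical sketch of the three mechanisms mentioned above.

;; CHECK-TYPE: an explicit runtime check; signals a correctable
;; TYPE-ERROR if X is not an integer.
(defun safe-double (x)
  (check-type x integer)
  (* 2 x))

;; DECLARE (TYPE ...) and THE: promises to the compiler, which a
;; compiler like SBCL can use for type inference and compile-time
;; warnings about provable mismatches.
(defun fast-double (x)
  (declare (type fixnum x)
           (optimize (speed 3)))
  (the fixnum (* 2 x)))
```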
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090428194018.GI3862@gildor.inglorion.net>
On Tue, Apr 28, 2009 at 12:10:38PM -0700, budden wrote:
> > That depends on several premises:
> [snip]
> 
> What are you trying to prove? There is void * in C,
> dynamic_cast in C++, polymorphism and typecasts in Java.

I don't know about dynamic_cast in C++, but I can tell you about type 
casts in C and Java.

In C, a type cast overrides type checking. The compiler just assumes the 
type is what you say it is and uses that type while the cast is in 
scope. In short, type casts in C are unsafe.

In Java, a type cast causes a dynamic type check to be made. So, in 
Java, type casts introduce dynamic typing in an otherwise static type 
system.

> I don't know about Haskell and Ocaml, but anyway
> there are parsers which convert string to an integer
> and they are invoked inevitably when you read external
> data.

What happens with external data is outside the type system. The type 
system deals with data once it's in the program.

> So, "statically typed" languages are in fact just
> only partially statically typed and they _can_ in
> practice fail at runtime due to typing errors.

If there is type checking at run time, it's dynamic typing.

> Nothing
> can help here. On the other hand, there are many checks
> that can be enforced in dynamic languages like lisp.
> check-type, the, declare (type) are ways to introduce
> typing in lisp and sometimes typing is done at compile time.

Can you give examples of type checks that are performed at compile time 
in Common Lisp? (I assume that is what you meant by "lisp".) I was under 
the impression that Common Lisp type annotations are optional and there 
was no way to get static typing (barring extensions to / deviations from 
the language specification).

> So there is no strict distinction between statically and
> dynamically typed lanagues.

I agree. It is type checking that can be static or dynamic. A single 
language may have either, neither, or both.

> And I don't know are you talking about when you try to prove that 
> dynamic typing is flawed.

As far as I can tell, what is being contested is whether the fact that 
dynamic typing allows programs that can fail because of type errors to 
run is a Good Thing or a Bad Thing.

Dmitry and I have argued that, given that we can prove that some 
programs will not run into type errors, it is a Good Thing to allow only 
those programs to run.

Others have argued that there is value in also allowing programs that 
haven't been proven to never run into type errors to run. The argument 
here is that this allows you to get on with testing and development, 
without having to go and fix every type error in code you may or may not 
eventually end up using.

The whole discussion started with a comment about dynamic typing leading 
to greater productivity. So far, there has been no definitive proof that 
dynamic typing does lead to greater productivity. Perhaps Dmitry is of 
the opinion that dynamic typing is flawed, but I am not. I don't think 
there is anything wrong with dynamic typing, I am just not convinced it 
enhances productivity, considering the whole process from idea to mature 
program.

> Dynamic typing is just impossible to avoid.

That is factually incorrect. It is perfectly possible to have no type 
checking at run time.

> Also there is no general way to avoid programming errors, regardless 
> of the language.

Indeed. This is why we have type checking in the first place: to catch 
at least _some_ programming errors and stop them from having too dire 
consequences.

Regards,

Bob

-- 
The early bird gets the worm, but the second mouse gets the cheese.


From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <a0c797ff-24d1-441d-9b38-e049d00634b5@l16g2000pra.googlegroups.com>
> Can you give examples of type checks that are performed
> at compile time in Common Lisp?
Take a look at lisp code at
http://shootout.alioth.debian.org/u32q/benchmark.php?test=spectralnorm&lang=sbcl&id=3

and other lisp programs there. Also you might want to look at the
compiler output. It is rather evident that the compiler strives to make
use of declarations and performs type inference based on them.

> What happens with external data is outside
> the type system. The type system deals with data
> once it's in the program.
It is not that simple. scanf actually does a type
check. It is rather hard to imagine a program
without any input. So, if we wanted to achieve,
in practice, a program free of type-related
runtime errors, we would likely fail. Errors
in data formats and data processing are
important and should be avoided too.
There are no means to do that statically.
But, despite that, people manage to write
programs that do data input and work well.
Why? Just because people are careful and
use reliable coding patterns. Errors in type
processing inside the program itself can
also be avoided in many cases, even with
dynamic typing, if people are careful
and use reliable coding patterns. I do
not intend to say that typing is
unnecessary and not useful.
But sometimes it is hard to describe
a type properly, or the language lacks
flexibility. This is a natural example of a
situation where dynamic typing is advantageous.

> > So, "statically typed" languages are in fact just
> > only partially statically typed and they _can_ in
> > practice fail at runtime due to typing errors.
>
> If there is type checking at run time, it's dynamic typing.
So, it looks like statically typed languages are in fact
dynamically typed sometimes. This is just what I meant to say.

> I was under the impression that Common Lisp type
> annotations are optional and there was no way to get static
> typing (barring extensions to /
> deviations from the language specification).
Yes, type annotations are optional, but they are designed
so that they could be used by a compiler.

> As far as I can tell, what is being contested is whether the fact that
> dynamic typing allows programs that can fail because of type errors to
> run is a Good Thing or a Bad Thing.
>
> Dmitry and I have argued that, given that we can prove that some
> programs will not run into type errors, it is a Good Thing to allow only
> those programs to run.
Even if we can prove it, the proof is not free. And this is a very
extreme position. Also, if a program evolves at runtime (as in the
case of CL, SQL, and operating systems), it is either impossible to
check the entire program in advance, or doing so imposes serious
limitations which require more design effort. CL has a very
significant design effort behind its back, and if we tried to make it
statically typed, it would be much harder to implement. CL's ability
to, say, change a class definition at runtime hurts performance and
safety, but it greatly improves developer productivity: you simply do
not need to restart your running program when you want to fix/change
something in it.
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <85bcfd01-7ff9-450b-a7a7-c64784504396@j9g2000prh.googlegroups.com>
> you simply do not need to restart your
> running program when you want to fix/change something in it.
As an example, if you have a function foo which calls a
function bar, and then you want to change the types of the
arguments of bar, you fall into trouble in a statically
typed language. If you first change bar and don't change foo,
a call to foo would fail (smash the stack) in an optimized
statically typed language. If you try to change foo first,
the compiler would not allow it to compile, as bar has not
yet been changed. So you need either to allow your code to
be unsafe in the meantime, or to make the changes atomic
(which is rather inconvenient), or to allow for dynamic
typing and type checks. This is a design choice of CL and I
think it is rather close to optimal. When you need speed,
you can link things with inlining, disable safety checks,
etc. Unfortunately, there is no "function won't ever
change" declaration in the CL standard, but there are means
to optimize away parameter checks in implementations.
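
For example, the standard declarations below are the usual way to let an
implementation such as SBCL or CMUCL elide parameter checks; how much
checking is actually removed is implementation-dependent, so this is a
sketch of the trade-off, not a guarantee:

```lisp
;; Sketch: portable declarations that allow (but do not oblige) an
;; implementation to drop runtime parameter checks.
(defun fast-add (a b)
  (declare (type fixnum a b)
           (optimize (speed 3) (safety 0)))
  ;; THE asserts the result type, helping the compiler avoid a check.
  (the fixnum (+ a b)))
```

With (safety 0), calling FAST-ADD with a non-fixnum is undefined
behavior, which is exactly the speed-for-safety trade-off described
above.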
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090429060937.GK3862@gildor.inglorion.net>
On Tue, Apr 28, 2009 at 04:07:38PM -0700, budden wrote:
> > Can you give examples of type checks that are performed
> > at compile time in Common Lisp?
> Take a look at lisp code at
> http://shootout.alioth.debian.org/u32q/benchmark.php?test=spectralnorm&lang=sbcl&id=3

Actually, I was hoping for some text from the language specification. I 
am aware that various Common Lisp implementations implement soft typing, 
but, as far as I know, that isn't a feature of Common Lisp, but rather 
an extension to the language specification.

> > What happens with external data is outside
> > the type system. The type system deals with data
> > once it's in the program.

> It is not that simple. scanf actually does a type
> check.

Not that I am aware of.

> It is rather hard to imagine a program
> without any input.

Not really, but, yes, most practical programs will have input.

> So, if we want practically achieve a program free of type related 
> runtime errors, we would likely fail.

It depends, of course, on how you define "type". I like to use the same 
definition the type system uses, which means that "type" is whatever the 
type system deals with.

Under this definition, how can user input cause a type error? The only 
case I can see is where you have a function that may return a value of 
some type your program isn't expecting. For example:

(+ (read) 5)

now, if we input "foobar", we will get a type error, because + isn't 
defined for a string and an integer.

However, if we had instead had a function that read and returned an 
integer, we could never get a type error, because the returned value 
would always be of a type that the program could accept.
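
A hedged Common Lisp sketch of such a reading function (READ-INT is an
invented name): the check moves to the input boundary, so + only ever
sees integers:

```lisp
;; Sketch: move the type check to the input boundary.  READ-INT is an
;; invented name; it signals an error on non-integer input, so the
;; value it returns is always an integer.
(defun read-int (&optional (stream *standard-input*))
  (let ((x (read stream)))
    (if (integerp x)
        x
        (error "Expected an integer, got ~S" x))))

;; (+ (read-int) 5) can now fail only inside READ-INT, never in +.
```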

> Errors in data formats and data processing are important and they 
> should be avoided too. There are no means to do that statically.

Indeed.

> But, despite that, people manage to write
> programs that do data input and work well.
> Why? Just because people are careful and
> use reliable coding patterns. Errors in type
> processing inside program itself can be
> also avoided in many cases even with
> dynamic typing, if people are careful
> and use reliable coding patterns.

Yes.

> I do not intend to say that typing is unnecessary and not useful. But 
> sometimes it is hard to describe type properly, or language lacks 
> flexibility. This is a natural example of situation where dynamic 
> typing is advantageous.

Are you saying that, in the example above, it is more useful to signal 
an error when "foobar" is fed to + than to signal an error when "foobar" 
is fed to read_int? I don't see why one is more useful than the other.

> > > So, "statically typed" languages are in fact just
> > > only partially statically typed and they _can_ in
> > > practice fail at runtime due to typing errors.
> >
> > If there is type checking at run time, it's dynamic typing.
> So, it looks like statically typed languages are in fact
> dynamically typed sometimes. This is just what I meant to say.

In that case, of course, you have a language that uses both static and 
dynamic typing. "Statically typed language" is a misnomer in that case.

> > As far as I can tell, what is being contested is whether the fact that
> > dynamic typing allows programs that can fail because of type errors to
> > run is a Good Thing or a Bad Thing.
> >
> > Dmitry and I have argued that, given that we can prove that some
> > programs will not run into type errors, it is a Good Thing to allow only
> > those programs to run.

> Even if we can prove it, the proof is not free.

Agreed, but neither are the alternatives. If you haven't proven that the 
program will not run into type errors at run time, you are either 
accepting the cost of run time checking (dynamic typing) or accepting 
the cost of type unsafety.

> And this is a very extreme point. Also, if a program evolves at runtime 
> (as in the case of CL, SQL, operating systems), it is either impossible 
> to check the entire program in advance, or it imposes serious limitations 
> which require more design effort. CL has a very significant design 
> effort behind it, and if we tried to make it statically typed, it 
> would be much harder to implement. CL's ability to, say, change a class 
> definition at runtime hurts performance and safety, but it greatly 
> improves developer productivity: you simply do not need to restart 
> your running program when you want to fix/change something in it.

I am interested in the limitations static typing imposes on interactive 
development. What are some things you cannot do under static typing, and 
how often would you run into this in practice?

I would imagine the limitations depend greatly on other features of the 
language, such as, for example, whether types have to be explicitly 
mentioned, whether or not the language supports polymorphism, etc.

In the end, however, I think the only limitations that static typing 
imposes are that:

1. You cannot run functions that may lead to type errors.
2. You cannot run functions where the type checker mistakenly believes 
that type errors may occur.

In particular, I don't think static typing needs to prevent you from 
redefining functions, I don't think it needs to prevent you from 
redefining functions in ways that aren't type-compatible with their old 
definitions, and I don't think it needs to prevent you from running 
those parts of your program that are still type-correct.

The limitations, compared to dynamic typing, are then the two numbered 
points above. Under dynamic typing, you could call both kinds of 
function, and they would work as long as you didn't pass in an 
incompatible value.

Regards,

Bob

-- 
Verb another noun.

From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ba787a68-b41e-441e-aa9e-931a9961a78e@q33g2000pra.googlegroups.com>
>In particular, I don't think static typing needs to prevent you from
>redefining functions, I don't think it needs to prevent you from
>redefining functions in ways that aren't type-compatible with their old
>definitions, and I don't think it needs to prevent you from running
>those parts of your program that are still type-correct.

In theory, yes. In practice, often not. Some dynamic languages have
limitations/difficulties. You can't delete a table column in Oracle
(at least, you couldn't 10 years ago). You can't change procedure
parameters in Interbase while you have other procedures that
reference your procedure. Views fail to notice changes to the
underlying table in MS SQL until you recreate them manually (and
they may return incorrect data).
As far as I can remember, MS SQL allows incompatible type changes in
procedure signatures, but it keeps a cross-reference graph and
recompiles dependent code lazily. So, type errors on function
parameter type mismatch can occur at runtime despite the language
being "statically typed".
As far as I know, Haskell and OCaml do not allow types to change at
runtime. In CL, the system needs to keep track of CLOS instances, or
keep some information about instance age together with the instance
and check it at the appropriate time. All these anomalies have their
reasons, either performance or safety. So, you see, dynamic
languages are not that simple. Imposing static typing adds even more
complexity, and this may turn into a limitation. It is a surprise to
me that many excellent dynamic languages were developed in the past
(CL, SQL servers), but new languages are not that good. Python is
completely dynamic, too simple, has no real compiler, and is too
slow. Haskell is too static. The world certainly degrades.

>> I do not intend to say that typing is unnecessary and not useful. But
>> sometimes it is hard to describe type properly, or language lacks
>> flexibility. This is a natural example of situation where dynamic
>> typing is advantageous.
>
>Are you saying that, in the example above, it is more useful to signal
>an error when "foobar" is fed to + than to signal an error when "foobar"
>is fed to read_int? I don't see why one is more useful than the other.
"Read" can return a value of any type. So, if + of 5 and some type is
defined, the program is still "correct" on that input. We didn't
waste our time declaring types, and now we have a more flexible
program. Consider (+ (read) (read)) or even (eval (read)).
From: Thomas A. Russ
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ymi4ow8cqlr.fsf@blackcat.isi.edu>
Robbert Haarman <··············@inglorion.net> writes:

> Can you give examples of type checks that are performed at compile time 
> in Common Lisp? (I assume that is what you meant by "lisp".) I was under 
> the impression that Common Lisp type annotations are optional and there 
> was no way to get static typing (barring extensions to / deviations from 
> the language specification).

Consider the following in CMUCL, where a type clash warning is
generated:

===========================================================

(defun foo (a b)
  (declare (fixnum a b))
  (+ a b))

FOO
* (compile 'foo)
; Compiling LAMBDA (A B): 
; Compiling Top-Level Form: 

FOO
NIL
NIL

* (defun bar ()
   (foo 2 3.0))

BAR
* (compile 'bar)
; Compiling LAMBDA NIL: 

; In: LAMBDA NIL

;   (FOO 2 3.0)
; Warning: This is not a (VALUES &OPTIONAL FIXNUM &REST T):
;   3.0
; 
; Compiling Top-Level Form: 

; Compilation unit finished.
;   1 warning


BAR
T
T
===========================================================

This is necessarily just a warning because one can redefine FOO so that
at the invocation time of BAR, there is no type error.  For example, just
redefining FOO to either

(defun foo (a b)
  (declare (fixnum a) (float b))
  (+ a b))

or 

(defun foo (a b)
   (+ a b))

would fix things.

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <elt6brtsnnag$.qzays6ccyza3.dlg@40tude.net>
On Tue, 28 Apr 2009 21:40:18 +0200, Robbert Haarman wrote:

>> Dynamic typing is just impossible to avoid.
> 
> That is factually incorrect. It is perfectly possible to have no type 
> checking at run time.

Formally yes, any typing can be avoided, but it would be very impractical
to do. So I would rather agree that *in practice* dynamic typing is rather
unavoidable.

But that does not imply that *all* typing must be dynamic. On the contrary,
typing must be static where possible (i.e. where the program size does not
explode, it remains readable, reasonably testable, etc.). An obvious thing,
IMO; I don't understand why some people find it so outrageous...

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Cesar Rabak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <gt8425$o81$2@aioe.org>
Dmitry A. Kazakov escreveu:
> On Tue, 28 Apr 2009 21:40:18 +0200, Robbert Haarman wrote:
> 
>>> Dynamic typing is just impossible to avoid.
>> That is factually incorrect. It is perfectly possible to have no type 
>> checking at run time.
> 
> Formally yes, any typing can be avoided, but it would be very impractical
> to do. 

No, it is not: it has been done since the advent of computing to this day
when programming in assembly.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <pgemwoombtbs.p6xl2fu81isp.dlg@40tude.net>
On Tue, 28 Apr 2009 20:38:32 -0300, Cesar Rabak wrote:

> Dmitry A. Kazakov escreveu:
>> On Tue, 28 Apr 2009 21:40:18 +0200, Robbert Haarman wrote:
>> 
>>>> Dynamic typing is just impossible to avoid.
>>> That is factually incorrect. It is perfectly possible to have no type 
>>> checking at run time.
>> 
>> Formally yes, any typing can be avoided, but it would be very impractical
>> to do. 
> 
> No it is not: its done since the advent of computing to this days when 
> programming in assembly.

Just compare the complexity of those systems with modern ones. You can
conceivably design a complex 32-kilobyte program in assembler. I did a
compiler. But I would not like to do it again, especially considering the
average size of modern programs.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Cesar Rabak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <gtd6fo$r5$2@aioe.org>
Dmitry A. Kazakov escreveu:
> On Tue, 28 Apr 2009 20:38:32 -0300, Cesar Rabak wrote:
> 
>> Dmitry A. Kazakov escreveu:
>>> On Tue, 28 Apr 2009 21:40:18 +0200, Robbert Haarman wrote:
>>>
>>>>> Dynamic typing is just impossible to avoid.
>>>> That is factually incorrect. It is perfectly possible to have no type 
>>>> checking at run time.
>>> Formally yes, any typing can be avoided, but it would be very impractical
>>> to do. 
>> No it is not: its done since the advent of computing to this days when 
>> programming in assembly.
> 
> Just compare complexity of those systems with modern ones. You can possibly
> design a 32K bytes complex program in assembler. I did a compiler. But I
> don't like to do it again, especially considering the average size of
> modern programs.
> 
Using the same rule[1] as yours when you replied to Pascal about the
existence of "program correctness", you have to agree this is /non
sequitur/ for this discussion.

It is not "very impractical" just because someone wouldn't like to do it,
as millions of lines of code are still being written in assembly nowadays.

--
Cesar Rabak


[1] meaning here 'measure'
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <xz3tyg6bswrj.u90715hu7grn.dlg@40tude.net>
On Thu, 30 Apr 2009 18:50:34 -0300, Cesar Rabak wrote:

> Dmitry A. Kazakov escreveu:
>> On Tue, 28 Apr 2009 20:38:32 -0300, Cesar Rabak wrote:
>> 
>>> Dmitry A. Kazakov escreveu:
>>>> On Tue, 28 Apr 2009 21:40:18 +0200, Robbert Haarman wrote:
>>>>
>>>>>> Dynamic typing is just impossible to avoid.
>>>>> That is factually incorrect. It is perfectly possible to have no type 
>>>>> checking at run time.
>>>> Formally yes, any typing can be avoided, but it would be very impractical
>>>> to do. 
>>> No it is not: its done since the advent of computing to this days when 
>>> programming in assembly.
>> 
>> Just compare complexity of those systems with modern ones. You can possibly
>> design a 32K bytes complex program in assembler. I did a compiler. But I
>> don't like to do it again, especially considering the average size of
>> modern programs.
>> 
> Using the same rule[1] as yours when you replied to Pascal about the 
> existence of "program correctness", you have to agree this is /non 
> sequitur/ for this discussion.

I see no connection.

> It is not "very impractical" just because someone wouldn't like to do it 
> as millions of line of code still are being written in assembly nowadays.

No, it is exactly like that. You can row across the Atlantic Ocean, but
that is very impractical, because normal people take a flight.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090428203244.GJ3862@gildor.inglorion.net>
On Tue, Apr 28, 2009 at 10:03:03PM +0200, Dmitry A. Kazakov wrote:
> On Tue, 28 Apr 2009 21:40:18 +0200, Robbert Haarman wrote:
> 
> >> Dynamic typing is just impossible to avoid.
> > 
> > That is factually incorrect. It is perfectly possible to have no type 
> > checking at run time.
> 
> Formally yes, any typing can be avoided, but it would be very impractical
> to do. So I would rather agree that *in practice* dynamic typing is rather
> unavoidable.

I don't know. There is no dynamic typing in C, is there? And C seems to 
do just fine in practice.

> But that does no imply that *all* typing must be dynamic. On the contrary,
> typing must be static where possible (i.e. the program size does not
> explode, it remains readable, reasonably testable etc). An obvious thing,
> IMO, I don't understand why some people find it so outrageous...

I think people in this discussion have made a good point by saying you 
don't necessarily want all your code statically checked all the time. 

The example given was: Suppose you work on a Lisp program. Lisp allows 
you to define new functions and undefine or redefine old ones, without 
having to recompile everything. It also allows you to introduce new 
definitions that aren't type-compatible with existing ones. Static 
typing must disallow this, because it allows type errors to occur along 
some code paths. Yet, it is useful: it allows you to use the code you 
have changed and the code you haven't changed - you will only get a type 
error if you use them together. Static typing would force you to make 
all possible execution paths valid before you could use any of them.

Having done incremental development in various languages (including 
dynamically typed Common Lisp and statically typed OCaml), I can attest 
that scenarios like the above do indeed occur. And there are advantages 
to both static and dynamic typing: static typing can ensure that no type 
errors will be present in the program you eventually ship, and dynamic 
typing allows you to run and test your code before it is free of type 
errors, without abandoning type checking altogether.
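
The scenario can be sketched at a Common Lisp REPL (FOO and BAR are
illustrative toys): after an incompatible redefinition, the changed code
and its callers keep running, and the type check simply fires at the
moment of a bad call:

```lisp
;; Sketch of incremental redefinition at a REPL (FOO/BAR are toys).
(defun foo (x) (+ x 1))
(defun bar (x) (foo x))

(bar 1)               ; => 2

;; Redefine FOO incompatibly with BAR's old uses:
(defun foo (x) (string-upcase x))

(foo "hi")            ; => "HI"   (the changed code runs)
(bar "hi")            ; => "HI"   (callers see the new definition)
;; (bar 1) would now signal a runtime type error: the check has moved
;; from compile time to the moment of the bad call.
```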

Regards,

Bob

-- 
Don't drink and derive.


From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1r2jvnrhqcsbc.7caypsxv6kyr.dlg@40tude.net>
On Tue, 28 Apr 2009 22:32:44 +0200, Robbert Haarman wrote:

> The example given was: Suppose you work on a Lisp program. Lisp allows 
> you to define new functions and undefine or redefine old ones, without 
> having to recompile everything.

I see no relation to type checking. You refer here to separate compilation.

> It also allows you to introduce new 
> definitions that aren't type-compatible with existing ones.

Why does a new procedure have to be defined on the same type in a
conflicting way? In a statically typed language we simply declare a new type.

I don't see the point. If foo cannot be defined on A, let it be defined on
B. What's wrong with that?

Sqrt was not defined on -1, so mathematicians introduced complex numbers.
What could be a "dynamic" alternative? To define sqrt(-1) as 4.15?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Thomas A. Russ
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ymi4ow5r8je.fsf@blackcat.isi.edu>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:

> On Tue, 28 Apr 2009 22:32:44 +0200, Robbert Haarman wrote:
> 
> > The example given was: Suppose you work on a Lisp program. Lisp allows 
> > you to define new functions and undefine or redefine old ones, without 
> > having to recompile everything.
> 
> I see no relation to type checking. You refer here to separate compilation.

Well, the relation is that during the process of redefining several
mutually dependent functions, you may (transiently) have situations
where you no longer have type safety.

Consider the following simple programs

  int foo (int: x, int: y) ...;
  int bar (int: x) return foo(x,x);

Suppose we want to make this more general and instead have foo and bar
work for all numbers.  When making this change in a running system, we
have to change one of foo or bar first, and that would introduce a
momentary situation where a strict static type check would fail.

  number foo (number: x, number: y) ...;
  number bar (number: x) return foo(x,x);

So, you would be forced to first redefine bar into something
intermediate like

  number bar (number: x) return 0;

and then redefine foo and then redefine bar the way you wanted to.  That
hardly seems like a big advantage.


-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1e8vqighfllcq.zuvxa3rr10ot$.dlg@40tude.net>
On 30 Apr 2009 17:06:29 -0700, Thomas A. Russ wrote:

> Well, the relation is that during the process of redefining several
> mutually dependent functions, you may (transiently) have situations
> where you no longer have type safety.
> 
> Consider the following simple programs
> 
>   int foo (int: x, int: y) ...;
>   int bar (int: x) return foo(x,x);
> 
> Suppose we want to make this more general and instead have foo and bar
> work for all numbers.  When making this change in a running system, we
> have to change one of foo or bar first, and that would introduce a
> momentary situation where a strict static type check would fail.

Not at all. That is perfectly typed and moreover statically typed. Provided
that "number" is a class that contains "int" type as a member. 

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Thomas A. Russ
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ymizldwpz1f.fsf@blackcat.isi.edu>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:

> On 30 Apr 2009 17:06:29 -0700, Thomas A. Russ wrote:
> 
> > Well, the relation is that during the process of redefining several
> > mutually dependent functions, you may (transiently) have situations
> > where you no longer have type safety.
> > 
> > Consider the following simple programs
> > 
> >   int foo (int: x, int: y) ...;
> >   int bar (int: x) return foo(x,x);
> > 
> > Suppose we want to make this more general and instead have foo and bar
> > work for all numbers.  When making this change in a running system, we
> > have to change one of foo or bar first, and that would introduce a
> > momentary situation where a strict static type check would fail.
> 
> Not at all. That is perfectly typed and moreover statically typed. Provided
> that "number" is a class that contains "int" type as a member. 

Well, you didn't quote the part that showed the problem.

The problem occurs when you start with the above and then add (ONLY)

  number foo (number: x, number: y) ...;

as the first step of redefinition.  At that point you still have
function BAR defined as returning "int" but producing it by calling a
function FOO that now returns a super-type of "int", namely "number"
which could encompass other types besides "int".

That is where the interaction between dynamic redefinition and strict
typing makes life difficult.



-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <17kgkngkf0bs0.1t4wh96nzbsff.dlg@40tude.net>
On 01 May 2009 09:29:16 -0700, Thomas A. Russ wrote:

> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
> 
>> On 30 Apr 2009 17:06:29 -0700, Thomas A. Russ wrote:
>> 
>>> Well, the relation is that during the process of redefining several
>>> mutually dependent functions, you may (transiently) have situations
>>> where you no longer have type safety.
>>> 
>>> Consider the following simple programs
>>> 
>>>   int foo (int: x, int: y) ...;
>>>   int bar (int: x) return foo(x,x);
>>> 
>>> Suppose we want to make this more general and instead have foo and bar
>>> work for all numbers.  When making this change in a running system, we
>>> have to change one of foo or bar first, and that would introduce a
>>> momentary situation where a strict static type check would fail.
>> 
>> Not at all. That is perfectly typed and moreover statically typed. Provided
>> that "number" is a class that contains "int" type as a member. 
> 
> Well, you didn't quote the part that showed the problem.
> 
> The problem occurs when you start with the above and then add (ONLY)
> 
>   number foo (number: x, number: y) ...;
> 
> as the first step of redefinition.  At that point you still have
> function BAR defined as returning "int" but producing it by calling a
> function FOO that now returns a super-type of "int", namely "number"
> which could encompass other types besides "int".
> 
> That is where the interaction between dynamic redefinition and strict
> typing makes life difficult.

Nothing has changed at this step. The specification of bar is that the
result is int; it was so before the change and remains so after it.

The issue with recompiling the now *wrong* bar is separate compilation. Do
you propose that the implementation of bar should silently convert number
to int? That is not dynamic vs. static, but weak vs. strong typing.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: ··················@gmail.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <011d96f3-2e97-41e2-8f83-3035187c1e23@y10g2000prc.googlegroups.com>
On May 1, 2:14 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On 30 Apr 2009 17:06:29 -0700, Thomas A. Russ wrote:
>
> > Well, the relation is that during the process of redefining several
> > mutually dependent functions, you may (transiently) have situations
> > where you no longer have type safety.
>
> > Consider the following simple programs
>
> >   int foo (int: x, int: y) ...;
> >   int bar (int: x) return foo(x,x);
>
> > Suppose we want to make this more general and instead have foo and bar
> > work for all numbers.  When making this change in a running system, we
> > have to change one of foo or bar first, and that would introduce a
> > momentary situation where a strict static type check would fail.
>
> Not at all. That is perfectly typed and moreover statically typed. Provided
> that "number" is a class that contains "int" type as a member.
>
> --
> Regards,
> Dmitry A. Kazakovhttp://www.dmitry-kazakov.de

You're forced into making the change in a specific order, however. You
have to change bar then foo.

Which seems trivial in a small context, but is very annoying in a
larger context when you are messing with the foo function and want to
try out different things with it, as you'd have to resolve all your
dependencies to the new type before being allowed to make the change
and test.

It's better /scientific method/ to make small changes and test
incrementally than to have to commit to a big change and divine 'did
this help?' from the mess that ensues...

I don't understand how you can have 'strict' static typing if you
have a 'number' class that includes all numbers. (Could you maybe
explain what you meant by this, Tom?)

Then I can just have a 'true' class (let's call it T for short...) that
includes all objects, and we're back at dynamic typing. :-)

I think a better example might have been changing the foo from
returning an int to a float. In that case you clearly have an
intractable compilation problem.

Of course, you might be able to unbind all of your dependencies and
rebind them all with the new 'foo' and 'bar', but that does seem it
would amount to recompiling a large chunk of the program rather than
just the one thing I want to test.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <y38nlrncjkri.kyf004i1xvu0.dlg@40tude.net>
On Fri, 1 May 2009 09:31:19 -0700 (PDT), ··················@gmail.com
wrote:

> On May 1, 2:14�am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> On 30 Apr 2009 17:06:29 -0700, Thomas A. Russ wrote:
>>
>>> Well, the relation is that during the process of redefining several
>>> mutually dependent functions, you may (transiently) have situations
>>> where you no longer have type safety.
>>
>>> Consider the following simple programs
>>
>>> � int foo (int: x, int: y) ...;
>>> � int bar (int: x) return foo(x,x);
>>
>>> Suppose we want to make this more general and instead have foo and bar
>>> work for all numbers. �When making this change in a running system, we
>>> have to change one of foo or bar first, and that would introduce a
>>> momentary situation where a strict static type check would fail.
>>
>> Not at all. That is perfectly typed and moreover statically typed. Provided
>> that "number" is a class that contains "int" type as a member.
>>
> You're forced into making the change in a specific order, however. You
> have to change bar then foo.
> 
> Which seems trivial in a small context, but is very annoying in a
> larger context when you are messing with the foo function and want to
> try out different things with it, as you'd have to resolve all your
> dependencies to the new type before being allowed to make the change
> and test.
> 
> Its better /scientific method/ to make small changes and test
> incrementally, than to have to commit to a big change and divine 'did
> this help' from the mess that ensues...

No, that is not scientific. What you refer to is an iterative process,
which must converge. There are certain requirements for convergence.
Moreover, it should converge to the goal. These requirements are not even
stated, since others here reject specifications and program correctness
altogether.

Otherwise it is just an urban legend. I have another legend, about people
who feverishly run through increments, making no progress in the project
but leaving an utter mess behind them.

> I don't understand the how you can have 'strict' static typing if you
> have a 'number' class that includes all numbers. (Could you maybe
> explain what you meant by this, Tom?)

The class is static, so you can describe a polymorphic operation + for
all numeric types. Therefore 1 + "hello" would be statically wrong.

> Then I can just have a 'true' class (lets call it T for short...) that
> includes all objects, and we're back at dynamic typing. :-)

That is not dynamic typing; that is no typing. The class numeric contains
only certain types, such as those having + defined.

> I think a better example might have been changing the foo from
> returning an int to a float. In that case you clearly have an
> intractable compilation problem.

It is not a compilation problem, it is a semantic problem. I want the
compiler to show me all calls to Foo so that I can revise my change.

There is a huge difference between changing int to numeric and changing int
to float. The former can be considered a "small" change because numeric
contains int. The latter is not.

> Of course, you might be able to unbind all of your dependencies and
> rebind them all with the new 'foo' and 'bar', but that does seem it
> would amount to recompiling a large chunk of the program rather than
> just the one thing I want to test.

What to test and how? Discrete mathematics and mathematics of real numbers
are very different in their methods. How is it consistent with your theory
of small steps?

This is a problem of weak typing. Changes that look lexically small can be
semantically huge. Strong static typing helps me not to change types that
easily. It is values that describe dynamics here, not types. When you
choose an algorithm, it usually fixes your types. You will not change them
without reconsidering your design. That is not a small change.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: ··················@gmail.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <72b2bce5-c4c2-4c90-b68b-6964eda5b0c9@m19g2000yqk.googlegroups.com>
On May 1, 1:57 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Fri, 1 May 2009 09:31:19 -0700 (PDT), ··················@gmail.com
> wrote:
>
>
>
> > On May 1, 2:14 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> > wrote:
> >> On 30 Apr 2009 17:06:29 -0700, Thomas A. Russ wrote:
>
> >>> Well, the relation is that during the process of redefining several
> >>> mutually dependent functions, you may (transiently) have situations
> >>> where you no longer have type safety.
>
> >>> Consider the following simple programs
>
> >>>   int foo (int: x, int: y) ...;
> >>>   int bar (int: x) return foo(x,x);
>
> >>> Suppose we want to make this more general and instead have foo and bar
> >>> work for all numbers.  When making this change in a running system, we
> >>> have to change one of foo or bar first, and that would introduce a
> >>> momentary situation where a strict static type check would fail.
>
> >> Not at all. That is perfectly typed and moreover statically typed. Provided
> >> that "number" is a class that contains "int" type as a member.
>
> > You're forced into making the change in a specific order, however. You
> > have to change bar then foo.
>
> > Which seems trivial in a small context, but is very annoying in a
> > larger context when you are messing with the foo function and want to
> > try out different things with it, as you'd have to resolve all your
> > dependencies to the new type before being allowed to make the change
> > and test.
>
> > Its better /scientific method/ to make small changes and test
> > incrementally, than to have to commit to a big change and divine 'did
> > this help' from the mess that ensues...
>
> No, that is not scientific. You refer is an iteration process, which must
> converge. There are certain requirements for convergence. Moreover it
> should converge to the goal. These requirements are not even stated, since
> others here reject specifications and program correctness at all.

When you are doing a laboratory experiment, do you hold one variable
constant and change a number of others, or do you change one variable
and hold the rest constant?
The rest is BS, as I don't reject specifications.

> Otherwise it is just an urban legend. I have another legend about people
> who are feverishly running increments making no any progress in the
> project, but leaving utter mess behind them.

Of course you have to do it right... but you can end up in just as
much of a mess with static typing if you are a disorganized
programmer.

> > I don't understand the how you can have 'strict' static typing if you
> > have a 'number' class that includes all numbers. (Could you maybe
> > explain what you meant by this, Tom?)
>
> Static is the class, so that you can describe a polymorphic operation + for
> all numeric types. Therefore 1 + "hello" would be statically wrong.
>

Okay. It was a rhetorical question. To implement the polymorphic
operation, do you or do you not have to perform a run time type check?

> > Then I can just have a 'true' class (lets call it T for short...) that
> > includes all objects, and we're back at dynamic typing. :-)
>
> That is not dynamic, that is no typing. The class numeric contains only
> certain types, like ones having + defined.
>

Fine, it's static typing then: everything is of class T.

> > I think a better example might have been changing the foo from
> > returning an int to a float. In that case you clearly have an
> > intractable compilation problem.
>
> It is not a compilation problem, it is a semantic problem. I want the
> compiler show me all calls to Foo so that I could revise my change.
>
Well, I want to run foo and see if it works before I commit to revising
all my calls to foo. (Because otherwise I will end up with an 'incremental
mess' if I decide it was a bad idea.) We have proven that you want
different things than I do; congrats.

> There is a huge difference between changing int to numeric and changing int
> to float. The former can be considered a "small" change because numeric
> contains int. The latter is not.
>

So? It is still a change, and I still may want to make it. In dynamic
typing it is easy to test and change back; in static typing it is not.
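
The workflow being described can be sketched at a Python prompt (foo and
bar here are the thread's hypothetical example, not code from any real
project): redefine foo in place, and every existing caller picks up the
change on the next call, with no recompilation and no forced ordering.

```python
def foo(x, y):
    return x + y            # original version: ints in, int out

def bar(x):
    return foo(x, x)        # bar looks foo up at call time

assert bar(2) == 4

# Try out the change: foo now returns a float.
def foo(x, y):              # redefinition; nothing else is touched
    return (x + y) / 1.0

assert bar(2) == 4.0        # bar picks up the new foo dynamically
```

Reverting is just as cheap: paste the old definition back in.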

> > Of course, you might be able to unbind all of your dependencies and
> > rebind them all with the new 'foo' and 'bar', but that does seem it
> > would amount to recompiling a large chunk of the program rather than
> > just the one thing I want to test.
>
> What to test and how? Discrete mathematics and mathematics of real numbers
> are very different in their methods. How is it consistent with your theory
> of small steps?
>
What does mathematical theory have to do with this? I want to change
the function foo and run it to see if it would be better with a float
than an int. That's all I want to do.

> This is a problem of weak typing. Changes that looks lexically small can be
> semantically huge. Strong static typing helps me not to go that easily with

No, it is not a small change, that is why i want to try it out in a
few cases before I fully commit to it.

> types. It is values here to describe dynamics, not types. When you choose
> an algorithm, it usually fixes your types. You will not change them without
> reconsidering your design. That is not a small change.
>
I suppose if you get all of your algorithms from a pre-existing
package... but as it happens, I do not... most of the time my initial
design starts off as a picture (literally a sketch on paper), and I
write some prototype code that I evolve and refine.

We have different techniques; I'd prefer to spare myself your masochism.

> --
> Regards,
> Dmitry A. Kazakov
> http://www.dmitry-kazakov.de
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <63waq6ka7t6i$.1e68ojh6b40d1$.dlg@40tude.net>
On Fri, 1 May 2009 11:47:08 -0700 (PDT), ··················@gmail.com
wrote:

> On May 1, 1:57 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> On Fri, 1 May 2009 09:31:19 -0700 (PDT), ··················@gmail.com
>> wrote:
>>
>>> On May 1, 2:14 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>>> wrote:
>>>> On 30 Apr 2009 17:06:29 -0700, Thomas A. Russ wrote:
>>
>>>>> Well, the relation is that during the process of redefining several
>>>>> mutually dependent functions, you may (transiently) have situations
>>>>> where you no longer have type safety.
>>
>>>>> Consider the following simple programs
>>
>>>>>   int foo (int: x, int: y) ...;
>>>>>   int bar (int: x) return foo(x,x);
>>
>>>>> Suppose we want to make this more general and instead have foo and bar
>>>>> work for all numbers. When making this change in a running system, we
>>>>> have to change one of foo or bar first, and that would introduce a
>>>>> momentary situation where a strict static type check would fail.
>>
>>>> Not at all. That is perfectly typed and moreover statically typed. Provided
>>>> that "number" is a class that contains "int" type as a member.
>>
>>> You're forced into making the change in a specific order, however. You
>>> have to change bar then foo.
>>
>>> Which seems trivial in a small context, but is very annoying in a
>>> larger context when you are messing with the foo function and want to
>>> try out different things with it, as you'd have to resolve all your
>>> dependencies to the new type before being allowed to make the change
>>> and test.
>>
>>> Its better /scientific method/ to make small changes and test
>>> incrementally, than to have to commit to a big change and divine 'did
>>> this help' from the mess that ensues...
>>
>> No, that is not scientific. You refer is an iteration process, which must
>> converge. There are certain requirements for convergence. Moreover it
>> should converge to the goal. These requirements are not even stated, since
>> others here reject specifications and program correctness at all.
> 
> When you are doing a laboratory experiment, do you hold one variable
> constant and change a number of others, or do you change one variable
> and hold the rest constant?

Using your analogy:

1. Physical measurements are continuous. That is one of the premises of
convergence. Source code changes aren't continuous.

2. If you run an optimization over multiple variables while varying only
one of them, that is guaranteed not to be the shortest path.

3. What about local optima? If you have one, then taking small steps will
never lead you to the goal. You will be trapped in a local optimum.

>> Otherwise it is just an urban legend. I have another legend about people
>> who are feverishly running increments making no any progress in the
>> project, but leaving utter mess behind them.
> 
> Of course you have to do it right... but you can end up in just as
> much of a mess with static typing if you are a disorganized
> programmer.

If I can do it right, then I can do it in just one step. The problem is
that nobody can do it right.

>>> I don't understand the how you can have 'strict' static typing if you
>>> have a 'number' class that includes all numbers. (Could you maybe
>>> explain what you meant by this, Tom?)
>>
>> Static is the class, so that you can describe a polymorphic operation + for
>> all numeric types. Therefore 1 + "hello" would be statically wrong.
> 
> Okay. It was a rhetorical question. To implement the polymorphic
> operation, do you or do you not have to perform a run time type check?

I do not. (If you mean dynamic dispatch, then the compiler inserts the
appropriate code for it. Maybe you mean run-time type information? Yes,
that is sometimes needed.)

>>> Then I can just have a 'true' class (lets call it T for short...) that
>>> includes all objects, and we're back at dynamic typing. :-)
>>
>> That is not dynamic, that is no typing. The class numeric contains only
>> certain types, like ones having + defined.
> 
> Fine, its static typing then, everything is of class T.

No. The class is "numeric", it is not "any". You can have "any", but it
would be useless, because "any" has no operations defined on it.

In a strongly typed language you cannot call an operation that is undefined
on the type. You probably mean a model where any operation can be called on
any type, i.e. one in which every operation is defined on "any". That is
semantically untyped.

>>> I think a better example might have been changing the foo from
>>> returning an int to a float. In that case you clearly have an
>>> intractable compilation problem.
>>
>> It is not a compilation problem, it is a semantic problem. I want the
>> compiler show me all calls to Foo so that I could revise my change.
>>
> Well i want to run foo and see if it works before i commit to revising
> all my calls to foo. (Because then I will end up with an 'incremental
> mess' if I decide it was a bad idea). We have proven that you want
> different things than I do, congrats.

Foo does not work, you don't need to run it.

>> There is a huge difference between changing int to numeric and changing int
>> to float. The former can be considered a "small" change because numeric
>> contains int. The latter is not.
> 
> So? It is still a change and I still may want to make it. In dynamic
> typing it is easy to test and change it back, in static typing it is
> not.

It is a discrete system. You cannot split changes infinitely. And again,
changing 303 from a phone number to an outdoor temperature in Kelvin is a
semantically big change, even if your language fails to capture that.

>>> Of course, you might be able to unbind all of your dependencies and
>>> rebind them all with the new 'foo' and 'bar', but that does seem it
>>> would amount to recompiling a large chunk of the program rather than
>>> just the one thing I want to test.
>>
>> What to test and how? Discrete mathematics and mathematics of real numbers
>> are very different in their methods. How is it consistent with your theory
>> of small steps?
>>
> What does mathematical theory have to do with this?

It has to do with the program's semantics. A change has a purpose determined
by the semantics. If you accept specifications, then you should have them
for foo. When you change the types of Foo, you do it in the specifications
of Foo. So you have to fix the specifications first (at least mentally, in
your head), as well as the tests of Foo. That must happen before you run
Foo, because without the specifications you just do not know what to expect
from your code. A manifestly typed language simply helps you put the
specifications in written form as part of the program.

>> This is a problem of weak typing. Changes that looks lexically small can be
>> semantically huge. Strong static typing helps me not to go that easily with
> 
> No, it is not a small change, that is why i want to try it out in a
> few cases before I fully commit to it.

If that is not a small change, then it can have tricky side effects on the
rest of the program. How can you evaluate these effects before fixing the
other parts of the program *evidently* influenced by the change? Static
typing does not prevent you from testing separately compiled units. So you
can test Foo in isolation. But you cannot test a Bar that evidently uses
Foo improperly. Where is the problem?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: ··················@gmail.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <9a853999-dd2c-4b96-a4dc-8c777cb75067@i6g2000yqj.googlegroups.com>
On May 1, 3:25 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Fri, 1 May 2009 11:47:08 -0700 (PDT), ··················@gmail.com
> wrote:
>
>
>
> > On May 1, 1:57 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> > wrote:
> >> On Fri, 1 May 2009 09:31:19 -0700 (PDT), ··················@gmail.com
> >> wrote:
>
> >>> On May 1, 2:14 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> >>> wrote:
> >>>> On 30 Apr 2009 17:06:29 -0700, Thomas A. Russ wrote:
>
> >>>>> Well, the relation is that during the process of redefining several
> >>>>> mutually dependent functions, you may (transiently) have situations
> >>>>> where you no longer have type safety.
>
> >>>>> Consider the following simple programs
>
> >>>>>   int foo (int: x, int: y) ...;
> >>>>>   int bar (int: x) return foo(x,x);
>
> >>>>> Suppose we want to make this more general and instead have foo and bar
> >>>>> work for all numbers.  When making this change in a running system, we
> >>>>> have to change one of foo or bar first, and that would introduce a
> >>>>> momentary situation where a strict static type check would fail.
>
> >>>> Not at all. That is perfectly typed and moreover statically typed. Provided
> >>>> that "number" is a class that contains "int" type as a member.
>
> >>> You're forced into making the change in a specific order, however. You
> >>> have to change bar then foo.
>
> >>> Which seems trivial in a small context, but is very annoying in a
> >>> larger context when you are messing with the foo function and want to
> >>> try out different things with it, as you'd have to resolve all your
> >>> dependencies to the new type before being allowed to make the change
> >>> and test.
>
> >>> Its better /scientific method/ to make small changes and test
> >>> incrementally, than to have to commit to a big change and divine 'did
> >>> this help' from the mess that ensues...
>
> >> No, that is not scientific. You refer is an iteration process, which must
> >> converge. There are certain requirements for convergence. Moreover it
> >> should converge to the goal. These requirements are not even stated, since
> >> others here reject specifications and program correctness at all.
>
> > When you are doing a laboratory experiment, do you hold one variable
> > constant and change a number of others, or do you change one variable
> > and hold the rest constant?
>
> Using your analogy:
>
> 1. physical measurements are continuous. That is one of the premises of
> convergence. Source code changes aren't continuous.
>
No. Measurements are not continuous. Space is continuous. Measurements
are discrete.

> 2. if you run an optimization with multiple variables varying one of them
> that is guaranteed the not the shortest path.
>
It allows me to collect data. Evidence that the change I'm going to
make is the correct one.

> 3. what about local optimums? If you have such then doing small steps will
> never lead you the goal. You will be trapped in a local optimum.
>
Not steps, changes to collect data. That provides me with information
relating to getting out of my local optimum. Of course you need to
have both views of the code in mind. Don't be purposefully dense.

> >> Otherwise it is just an urban legend. I have another legend about people
> >> who are feverishly running increments making no any progress in the
> >> project, but leaving utter mess behind them.
>
> > Of course you have to do it right... but you can end up in just as
> > much of a mess with static typing if you are a disorganized
> > programmer.
>
> If I can do it right, then I can do it in just one step. The problem is
> that nobody can do it right.
>

What is 'it' to you? I'm talking about sticking to a process and not
changing too much at one time. You agree it is impossible to do things
correctly the first time, yet you claim that making *bigger* changes
all at once is better.

> >>> I don't understand the how you can have 'strict' static typing if you
> >>> have a 'number' class that includes all numbers. (Could you maybe
> >>> explain what you meant by this, Tom?)
>
> >> Static is the class, so that you can describe a polymorphic operation + for
> >> all numeric types. Therefore 1 + "hello" would be statically wrong.
>
> > Okay. It was a rhetorical question. To implement the polymorphic
> > operation, do you or do you not have to perform a run time type check?
>
> I do not. (If you mean dynamic dispatch, then the compiler inserts an
> appropriate code for it. Maybe you mean run-time type information? Yes, it
> is sometimes needed.)
>
Yes, run time type information.

> >>> Then I can just have a 'true' class (lets call it T for short...) that
> >>> includes all objects, and we're back at dynamic typing. :-)
>
> >> That is not dynamic, that is no typing. The class numeric contains only
> >> certain types, like ones having + defined.
>
> > Fine, its static typing then, everything is of class T.
>
> No. The class is "numeric", it is not "any". You can have "any", but it
> would be useless, because "any" has no operations defined on it.
>

No? Maybe it has every operation defined on it? Just some operations
throw errors.

> In a strongly typed language you cannot call an operation undefined on the
> type. You probably mean a model where any operation can be called on any
> type, i.e. such that any operation is defined on "any". That is
> semantically untyped.
>
It is statically semantically untyped. You confuse static typing with
typing in general.

> >>> I think a better example might have been changing the foo from
> >>> returning an int to a float. In that case you clearly have an
> >>> intractable compilation problem.
>
> >> It is not a compilation problem, it is a semantic problem. I want the
> >> compiler show me all calls to Foo so that I could revise my change.
>
> > Well i want to run foo and see if it works before i commit to revising
> > all my calls to foo. (Because then I will end up with an 'incremental
> > mess' if I decide it was a bad idea). We have proven that you want
> > different things than I do, congrats.
>
> Foo does not work, you don't need to run it.
>

How do you know? It seems to work in my REPL.

> >> There is a huge difference between changing int to numeric and changing int
> >> to float. The former can be considered a "small" change because numeric
> >> contains int. The latter is not.
>
> > So? It is still a change and I still may want to make it. In dynamic
> > typing it is easy to test and change it back, in static typing it is
> > not.
>
> It is a discrete system. You cannot split changes infinitely. And again,
> changing 303 from phone number to outdoor temperature in Kelvin is
> semantically big change, even if your language fails to capture that.
>

I don't need to split them infinitely. I just need to split them to a
point where I have a manageable chunk of code to work with.

Changing 303 from a phone number to an outdoor temperature in Kelvin is
also meaningless. An int->float conversion is not.

> >>> Of course, you might be able to unbind all of your dependencies and
> >>> rebind them all with the new 'foo' and 'bar', but that does seem it
> >>> would amount to recompiling a large chunk of the program rather than
> >>> just the one thing I want to test.
>
> >> What to test and how? Discrete mathematics and mathematics of real numbers
> >> are very different in their methods. How is it consistent with your theory
> >> of small steps?
>
> > What does mathematical theory have to do with this?
>
> It does with the program semantics. A change has a purpose determined by
> the semantics. If you accept specifications, then you should have them for
> foo. When you change types of Foo you do it in the specifications of Foo.
> So you have to fix specifications first (at least mentally, in your head)
> as well as tests of Foo. That must happen before you run Foo, because
> without the specifications you just do not know what to expect from your
> code. A manifestedly typed language just helps you to put specifications in
> written form as a part of the program.
>

Why should I prematurely be forced to specify my program's
functionality?

> >> This is a problem of weak typing. Changes that looks lexically small can be
> >> semantically huge. Strong static typing helps me not to go that easily with
>
> > No, it is not a small change, that is why i want to try it out in a
> > few cases before I fully commit to it.
>
> If that is not a small change, then it can have tricky side effects on the
> rest of the program. How can you evaluate these effects before fixing other

I evaluate them as they come up? And if I've done a reasonably good
job of designing my program, I don't have to worry about 'tricky side
effects', because I avoided side effects altogether.

> parts of the program *evidently* influenced by the change? Static typing
> does not prevent you from testing separately compiled units. So you can
> test Foo in isolation. But you cannot test Bar that uses Foo evidently
> improperly. Where is a problem?

Cutting out all of the program code that I need to cut out to test foo,
and then compiling it separately, takes time. When you get into a
larger system, it may not even be reasonably possible.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1b0qeutg0wsak$.1t9a3i364uhq1.dlg@40tude.net>
On Fri, 1 May 2009 13:01:45 -0700 (PDT), ··················@gmail.com
wrote:

> On May 1, 3:25 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:

>> If I can do it right, then I can do it in just one step. The problem is
>> that nobody can do it right.
> 
> What is 'it' to you? I'm talking about sticking to a process and not
> changing too much at one time. You agree it is impossible to do things
> correctly the first time, yet you claim that making *bigger* changes
> all at once is better.

Yes, I do. Mathematically speaking, there is no continuity. Therefore the
step size is not directly related to the effect of the change. There exist
behaviors which cannot be reached by small increments while keeping the
rest of the program's behavior more or less intact. That is, provided you
had some additive measure of program behavior. But you don't have one,
because below you write that you don't want to "prematurely" specify it.
So the measure is not only non-additive, it is simply non-existent. Hence
nothing certain can be said about it.

>>>>> Then I can just have a 'true' class (lets call it T for short...) that
>>>>> includes all objects, and we're back at dynamic typing. :-)
>>
>>>> That is not dynamic, that is no typing. The class numeric contains only
>>>> certain types, like ones having + defined.
>>
>>> Fine, its static typing then, everything is of class T.
>>
>> No. The class is "numeric", it is not "any". You can have "any", but it
>> would be useless, because "any" has no operations defined on it.
> 
> No? Maybe it has every operation defined on it? Just some operations
> throw errors.

That would be semantically weakly typed or untyped.

>> In a strongly typed language you cannot call an operation undefined on the
>> type. You probably mean a model where any operation can be called on any
>> type, i.e. such that any operation is defined on "any". That is
>> semantically untyped.
>>
> It is statically semantically untyped. You confuse static typing with
> typing in general.

No I don't.

>>>>> I think a better example might have been changing the foo from
>>>>> returning an int to a float. In that case you clearly have an
>>>>> intractable compilation problem.
>>
>>>> It is not a compilation problem, it is a semantic problem. I want the
>>>> compiler show me all calls to Foo so that I could revise my change.
>>
>>> Well i want to run foo and see if it works before i commit to revising
>>> all my calls to foo. (Because then I will end up with an 'incremental
>>> mess' if I decide it was a bad idea). We have proven that you want
>>> different things than I do, congrats.
>>
>> Foo does not work, you don't need to run it.
> 
> How do you know? Seems to work in my repl.

Because it is not a program yet, it is a compile error.
 
> changing 303 from phone number to outdoor temperature in kelvin is
> also meaningless. Int->float conversion is not.

It is, unless your application's problem domain is numeric. Moreover, even
int->int is meaningless when the left side is a process number and the
right side is a button id. This is basically the idea of typing.

> Why should I prematurely be forced to specify my program's
> functionality?

Because a program is there to function.

>>>> This is a problem of weak typing. Changes that looks lexically small can be
>>>> semantically huge. Strong static typing helps me not to go that easily with
>>
>>> No, it is not a small change, that is why i want to try it out in a
>>> few cases before I fully commit to it.
>>
>> If that is not a small change, then it can have tricky side effects on the
>> rest of the program. How can you evaluate these effects before fixing other
> 
> I evaluate them as they come up?

You don't have a measure for that. Testing is performed against the
specifications. But there are none.

>> parts of the program *evidently* influenced by the change? Static typing
>> does not prevent you from testing separately compiled units. So you can
>> test Foo in isolation. But you cannot test Bar that uses Foo evidently
>> improperly. Where is a problem?
> 
> Cutting out all of the program code that I need to cut out to test foo
> and then compiling it separately takes time. when you get into a
> larger system, it may not even be reasonably possible.

No, these pieces are in different files anyway. Or do you propose to keep
all your sources in just one file?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090511090258.406@gmail.com>
On 2009-04-30, Dmitry A. Kazakov <·······@dmitry-kazakov.de> wrote:
> Sqrt was not defined on -1, so mathematicians introduced complex numbers.
>
> What could be a "dynamic" alternative? To define sqrt(-1) as 4.15?

In Common Lisp, (sqrt -1) returns a value of complex type. However (sqrt 1)
returns a real number.  Example session:

shell$ clisp -q
[1]> (sqrt -1)
#C(0 1)
[2]> (sqrt 1)
1
[3]> (type-of (sqrt -1))
COMPLEX
[4]> (type-of (sqrt 1))
BIT

An expression like (sqrt x) could return either a complex or a real; it depends
on the value of X, which may not be apparent at compile time. I.e. the sqrt
function has a dynamically typed result type.

This #C(0 1) doesn't resemble 4.15 in any way. It represents a complex number
whose real part is 0, and imaginary part is 1.
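
The same value-dependent result type can be sketched in, say, Python (a
hypothetical sqrt wrapper for illustration, not anything taken from the
CL or Ada standards):

```python
import math
import cmath

def lisp_style_sqrt(x):
    """Return a real for non-negative reals, a complex otherwise,
    so the result's type depends on the argument's *value*."""
    if isinstance(x, complex) or x < 0:
        return cmath.sqrt(x)
    return math.sqrt(x)

print(lisp_style_sqrt(1))    # 1.0 (a float)
print(lisp_style_sqrt(-1))   # 1j (a complex)
```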

So how does Ada deal with sqrt(-1)?

Ada95 RM:

  generic
     type Float_Type is digits <>;
  package Ada.Numerics.Generic_Elementary_Functions is
     pragma Pure(Generic_Elementary_Functions);

     function Sqrt (X : Float_Type'Base) return Float_Type'Base;

Ooops, that doesn't look like it can handle Sqrt(-1). There is also
this:

  The exception Numerics.Argument_Error is raised, signaling a parameter value
  outside the domain of the corresponding mathematical function, in the
  following cases: 

  ...

  by the Sqrt and Log functions, when the value of the parameter X is negative; 

So, you were saying ...
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <75p91hF19jd0sU1@mid.individual.net>
Dmitry A. Kazakov wrote:
> On Tue, 28 Apr 2009 21:40:18 +0200, Robbert Haarman wrote:
> 
>>> Dynamic typing is just impossible to avoid.
>> That is factually incorrect. It is perfectly possible to have no type 
>> checking at run time.
> 
> Formally yes, any typing can be avoided, but it would be very impractical
> to do. So I would rather agree that *in practice* dynamic typing is rather
> unavoidable.
> 
> But that does no imply that *all* typing must be dynamic. On the contrary,
> typing must be static where possible (i.e. the program size does not
> explode, it remains readable, reasonably testable etc). An obvious thing,
> IMO, I don't understand why some people find it so outrageous...

...because you don't want to understand it.

Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <75pfrcF18ajgkU1@mid.individual.net>
Robbert Haarman wrote:

> Dmitry and I have argued that, given that we can prove that some 
> programs will not run into type errors, it is a Good Thing to allow only 
> those programs to run.
> 
> Others have argued that there is value in also allowing programs that 
> haven't been proven to never run into type errors to run. The argument 
> here is that this allows you to get on with testing and development, 
> without having to go and fix every type error in code you may or may not 
> eventually end up using.
> 
> The whole discussion started with a comment about dynamic typing leading 
> to greater productivity. So far, there has been no definitive proof that 
> dynamic typing does lead to greater productivity.

Of course not. There can't be a 'proof' of that. The best we could hope 
for is empirical evidence, but there is also little empirical evidence.

When people argue that they get a productivity boost, this is mainly
based on their own experience.

> Perhaps Dmitry is of 
> the opinion that dynamic typing is flawed, but I am not. I don't think 
> there is anything wrong with dynamic typing, I am just not convinced it 
> enhances productivity, considering the whole process from idea to mature 
> program.

It can just be that you have a different programming style, a different 
way to think about programs.



Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Chetan
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <umya0xsxo.fsf@myhost.sbcglobal.net>
Pascal Costanza <··@p-cos.net> writes:

> Robbert Haarman wrote:
>
>> Dmitry and I have argued that, given that we can prove that some programs
>> will not run into type errors, it is a Good Thing to allow only those
>> programs to run.
>>
>> Others have argued that there is value in also allowing programs that haven't
>> been proven to never run into type errors to run. The argument here is that
>> this allows you to get on with testing and development, without having to go
>> and fix every type error in code you may or may not eventually end up using.
>>
>> The whole discussion started with a comment about dynamic typing leading to
>> greater productivity. So far, there has been no definitive proof that dynamic
>> typing does lead to greater productivity.
>
> Of course not. There can't be a 'proof' of that. The best we could hope for is
> empirical evidence, but there is also little empirical evidence.
>
> When people argue that they have a productivity boost, then this is mainly
> based on their own experience.
>
>> Perhaps Dmitry is of the opinion that dynamic typing is flawed, but I am
>> not. I don't think there is anything wrong with dynamic typing, I am just not
>> convinced it enhances productivity, considering the whole process from idea
>> to mature program.
>
> It can just be that you have a different programming style, a different way to
> think about programs.
>
>
>
> Pascal

It is more likely dependent on the problem at hand.  If the program is
expected to deal with types that are known in advance, it is better
(i.e. safer) to use static typing and catch as many errors early on as
possible.  It is possible that one can write a program that does the
same thing with dynamic typing with the same coding efficiency, but it
is much harder to assure that all type related errors will be handled
as expected.

Chetan
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <75qfa4F19m46pU1@mid.individual.net>
Chetan wrote:
> Pascal Costanza <··@p-cos.net> writes:
> 
>> Robbert Haarman wrote:
>>
>>> Dmitry and I have argued that, given that we can prove that some programs
>>> will not run into type errors, it is a Good Thing to allow only those
>>> programs to run.
>>>
>>> Others have argued that there is value in also allowing programs that haven't
>>> been proven to never run into type errors to run. The argument here is that
>>> this allows you to get on with testing and development, without having to go
>>> and fix every type error in code you may or may not eventually end up using.
>>>
>>> The whole discussion started with a comment about dynamic typing leading to
>>> greater productivity. So far, there has been no definitive proof that dynamic
>>> typing does lead to greater productivity.
>> Of course not. There can't be a 'proof' of that. The best we could hope for is
>> empirical evidence, but there is also little empirical evidence.
>>
>> When people argue that they have a productivity boost, then this is mainly
>> based on their own experience.
>>
>>> Perhaps Dmitry is of the opinion that dynamic typing is flawed, but I am
>>> not. I don't think there is anything wrong with dynamic typing, I am just not
>>> convinced it enhances productivity, considering the whole process from idea
>>> to mature program.
>> It can just be that you have a different programming style, a different way to
>> think about programs.
>>
>>
>>
>> Pascal
> 
> It is more likely dependent on the problem at hand.  If the program is
> expected to deal with types that are known in advance, it is better
> (i.e. safer) to use static typing and catch as many errors early on as
> possible.  It is possible that one can write a program that does the
> same thing with dynamic typing with the same coding efficiency, but it
> is much harder to assure that all type related errors will be handled
> as expected.

It could be that - because of my preferred programming style - the 
type-related errors don't matter that much. (I know, this is hard to 
grasp for you static types.)


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Mark Wooding
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <87prev60fr.fsf.mdw@metalzone.distorted.org.uk>
Robbert Haarman <··············@inglorion.net> writes:

> In C, a type cast overrides type checking. 

No, this is wrong.

> The compiler just assumes the type is what you say it is and uses that
> type while the cast is in scope.

Casts don't have scope in C.  They're operators, not bindings.

The cast operator in C causes an explicit conversion of a value.  If you
want to reinterpret the representation of a value, you can do that
either by playing tricks with unions, messing with memcpy (or similar),
or by taking the address of the representation, casting the resulting
pointer (to be a pointer of a different type), and dereferencing this
cast pointer:

        *(other_type *)&thing

[budden wrote:]
> > Nothing can help here. On the other hand, there are many checks that
> > can be enforced in dynamic languages like lisp.  check-type, the,
> > declare (type) are ways to introduce typing in lisp and sometimes
> > typing is done at compile time.
>
> Can you give examples of type checks that are performed at compile time 
> in Common Lisp?

I believe `sometimes', here, means that some implementations do this
kind of checking.  For example, in SBCL:

* (lambda () (cdr 5))
; in: LAMBDA NIL
;     (CDR 5)
;
; caught WARNING:
;   Asserted type LIST conflicts with derived type 
;   (VALUES (INTEGER 5 5) &OPTIONAL).

I often find that warnings about unreachable code are caused by the
compiler proving that some other part of the function would signal a
runtime type error.

> Dmitry and I have argued that, given that we can prove that some
> programs will not run into type errors, it is a Good Thing to allow
> only those programs to run.
>
> Others have argued that there is value in also allowing programs that 
> haven't been proven to never run into type errors to run. The argument 
> here is that this allows you to get on with testing and development, 
> without having to go and fix every type error in code you may or may not 
> eventually end up using.
>
> The whole discussion started with a comment about dynamic typing leading 
> to greater productivity.

No.  I believe that I made that comment (in an article with message-id
<··················@metalzone.distorted.org.uk>):

: Robbert Haarman <··············@inglorion.net> writes:
:
: > Taking "static typing" to mean that programs that cannot be proven
: > correct at compile time are rejected at compile time, whereas
: > "dynamic typing" means type errors lead to rejection at run-time,
: > static typing means, by definition, rejecting bad programs early. It
: > seems to me this would be a productivity gain.
:
: There's a downside to static typing, though.  The compiler doesn't just
: reject programs that it can prove are incorrect: it rejects programs
: which it fails to prove are correct.  As a consequence, compilers for
: statically typed languages actually reject a nontrivial class of correct
: programs.  Since the kinds of programs that I write in dynamically typed
: languages, such as Lisp or Python, are most certainly in this class,
: they would assuredly be rejected by a compiler for a statically typed
: language.
:
: I don't see how having the programs I'd like to write be rejected is a
: productivity win.

I was making the much weaker claim that /static/ typing is /not/ a
clear-cut productivity win.

All other things being equal, having programs which would encounter
run-time type errors rejected at compile time would indeed be a win.  My
position is that other things are /not/ equal, that static type checking
has a cost, partly in terms of annotation, but from my perspective at
least more significantly in terms of the /correct/ programs which are
rejected because the checker can't prove their correctness.

There is therefore a tradeoff between statically provable properties and
programmer convenience (in terms of idioms and techniques which, while
valid, frustrate proof systems).  I suspect that this means that the
right tradeoff to make in any situation is a complicated decision
needing to take into account the nature of the project, and the people
and tools available.

-- [mdw]
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1o5jsdffets5.1xbcj6gdq2zo6$.dlg@40tude.net>
On Tue, 28 Apr 2009 12:10:38 -0700 (PDT), budden wrote:

>> That depends on several premises:
> [snip]
> 
> What are you trying to prove?

Nothing. I merely explain and answer questions at a somewhat more formal
level, yet formal enough to prevent cheating like the kind you used below:

> There is void * in C,

"Void *" is a statically checked type in C, as it was already pointed out.

[...]

You cut my explanations (without reading them?) and repeat the same
complaints again using the same old tricks.

> And I don't know are you talking about

Ask what you didn't understand. It might help. But read first.

> when you try to prove that dynamic typing is flawed.

I never said that. Reread my posts. What is possibly flawed is your
understanding of what dynamic typing is, and hence of its purpose.

> Dynamic typing is just impossible to avoid.

Again, read my posts. I never denied this trivial fact. That would be silly
of me.

> Also
> there is no general way to avoid programming errors,
> regardless of the language.

Ditto.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <41a5c8fd-0271-4dd8-8d74-b5c968a93e11@b7g2000pre.googlegroups.com>
> "Void *" is a statically checked type in C, as it was already pointed out.

Formally, yes. Actually, void * often hides unsafe type conversions and
serves to manage unknown types. It is not dynamic typing (as no type
checks are performed), but it is not static typing either. Anyway, when
you have void *, the compiler does not help you avoid runtime errors
related to incorrect types. So yes, the compiler won't err, but this
doesn't mean that your program is type safe. It is evident that C is
either not statically typed, or it has "holes" in its static typing and
is in no way better than CL.

Everyone but you seems to agree on that.

Frankly, I stopped reading your posts after you stated that operating
systems are written in statically typed languages.

Those languages are mostly C, bash and assembly, and none of them is,
strictly speaking, statically typed. You denied that C is not
statically typed, and you didn't comment on bash and assembly, as if I
hadn't mentioned them at all. It is a waste of time to read your
postings after that, as it is evident that you simply ignore things
that you can't handle. Such a discussion is not interesting to me.
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <40357038-0544-473e-8a0a-1b99cd1bad2f@k19g2000prh.googlegroups.com>
On 29 Apr, 00:43, budden <···········@mail.ru> wrote:
> > "Void *" is a statically checked type in C, as it was already pointed out.
>
> Formally, yes. Actually, void * often hides unsafe type conversions
> and
> serves to manage unknown types. It is not a dynamic typing (as no type
> checks
> are performed) but it is not a static typing too. Anyway, when you
> have void *, compiler does not help you to avoid runtime errors
> related to
> incorrect types. So yes, compiler won't err, but this doesn't mean
> that your program is type safe. It is evident that C is either non
> statically
> typed or it has "holes" in its statical typing and it is no way better
> than CL.

I think you are confusing static typing with type safety.
C is statically typed (that is, every expression has a type known at
compile-time) but not type safe.

> Everyone but you seem to agree on that.
>
> Frankly, I have stopping reading your posts after you stated that
> operating systems are written in a statically typed languages.
>
> There languages are mostly C, bash and assebmly, no one of them is,
> strictly speaking, statically typed. You denied that C is not
> statically typed

It is.

> and you didn't comment bash and assembly as
> if I didn't mention them at all.

Assembly is mostly untyped.
OSes are not written in Bash, as far as I know.

> It is a waste of time to
> read your postings after that as it is evident that you simply
> ignore things that you can't handle. Such a discussion is not
> interesting for me.
From: Matthias Blume
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <m1vdomw4a0.fsf@hana.uchicago.edu>
budden <···········@mail.ru> writes:

>> "Void *" is a statically checked type in C, as it was already pointed out.
>
> Formally, yes. Actually, void * often hides unsafe type conversions
> and
> serves to manage unknown types. It is not a dynamic typing (as no type
> checks
> are performed) but it is not a static typing too.

Yes, it is static typing.  Just not the safe variety.

> Anyway, when you
> have void *, compiler does not help you to avoid runtime errors
> related to
> incorrect types. So yes, compiler won't err, but this doesn't mean
> that your program is type safe. It is evident that C is either non
> statically
> typed or it has "holes" in its statical typing and it is no way better
> than CL.

Your "or" statement is true because the second clause is true.  The first clause is false.  C is statically typed.

> Frankly, I have stopping reading your posts after you stated that
> operating systems are written in a statically typed languages.

Are they not?  Of course they are!

> There languages are mostly C, bash and assebmly, no one of them is,
> strictly speaking, statically typed.

Strictly speaking, all of them are statically typed -- as every programming language is.  The question is how strong, safe, and expressive a particular static type system is.  Even Lisp is statically typed, but it has only one (static) type (sometimes called "lispval" for "Lisp value").  OS kernels are mostly written in C, which undoubtedly is statically typed.

> You denied that C is not statically typed

As he should, because it isn't.

> and you didn't comment bash and assembly as
> if I didn't mention them at all.

Operating systems are not written in bash.  Some of the higher-level parts are, but the kernel certainly isn't.

> It is a waste of time to
> read your postings after that as it is evident that you simply
> ignore things that you can't handle. Such a discussion is not
> interesting for me.

I think that (as usual in this sort of discussion) both sides suffer from the problem of ignoring what the other side says.  "Your" side is not without blame here (by far!), as is evident from many of the posts in this thread.  It usually starts with not understanding what static typing actually is, which leads to a completely nonsensical discussion over whether one form of typing is better or worse than the other.  The truth is that the two forms are quite different in nature, and any comparison ends up having the flavor of the proverbial "apples vs. oranges" comparison.  Moreover, the two forms, being conceptually quite different, are not at all opposites.  If looked at from the right angle, both paradigms are usually used in conjunction in real languages.  The question is to what degree a particular language design relies on one or the other (or the combination of the two) -- and there is a wide spectrum of possibilities. 

I would recommend that anyone engaging in the discussion of static vs. dynamic typing would read and understand(!) a good textbook on the matter (for example B.C. Pierce's "Types and Programming Languages").  When arguing about something it helps to rely on the same terminology (unless the real goal is to waste time in "mine is bigger" contests on usenet).

Kind regards,
Matthias
From: Matthias Blume
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <m1r5zaw44u.fsf@hana.uchicago.edu>
Matthias Blume <·····@hana.uchicago.edu> writes:

>> You denied that C is not statically typed
>
> As he should, because it isn't.

Sorry, meant "because it is [statically typed]" or "because it isn't [not statically typed]".
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090509032547.589@gmail.com>
On 2009-04-28, Dmitry A. Kazakov <·······@dmitry-kazakov.de> wrote:
> On Tue, 28 Apr 2009 12:10:38 -0700 (PDT), budden wrote:
>
>>> That depends on several premises:
>> [snip]
>> 
>> What are you trying to prove?
>
> Nothing. I merely explain and answer questions at a bit more formal level,

Formal? What does that mean? Are you wearing a bow-tie and frock coat while
posting the same bullshit?

Go lecture computer science in your home village.
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <75p38tF19k3b6U2@mid.individual.net>
Dmitry A. Kazakov wrote:
> On Tue, 28 Apr 2009 16:29:13 +0200, Pascal Costanza wrote:
>> Dmitry A. Kazakov wrote:
>>> Analogous statement is that a properly functioning main board of the
>>> computer does not imply correctness of your program. Which by no means
>>> should lead you the conclusion that you should buy a defective computer "in
>>> order to improve your productivity" (as always).
>> Here, the reverse doesn't hold, and that's why this is a bad analogy.
> 
> It does. An incorrect program can behave correctly on a malfunctioning
> hardware.

Ah, right. That is actually done in practice. Just avoid the parts of 
the hardware that don't work.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <11lqotqfrqp6g.ntm32m3f5r11.dlg@40tude.net>
On Tue, 28 Apr 2009 21:21:01 +0200, Pascal Costanza wrote:

> Dmitry A. Kazakov wrote:
>> On Tue, 28 Apr 2009 16:29:13 +0200, Pascal Costanza wrote:
>>> Dmitry A. Kazakov wrote:
>>>> Analogous statement is that a properly functioning main board of the
>>>> computer does not imply correctness of your program. Which by no means
>>>> should lead you the conclusion that you should buy a defective computer "in
>>>> order to improve your productivity" (as always).
>>> Here, the reverse doesn't hold, and that's why this is a bad analogy.
>> 
>> It does. An incorrect program can behave correctly on a malfunctioning
>> hardware.
> 
> Ah, right. That is actually done in practice. Just avoid the parts of 
> the hardware that don't work.

Actually I meant the case when the program has a bug "fixed" by
malfunctioning hardware. Consider:

if X then
   Print ("X is false");
else
   Print ("X is true");
end if;

Now imagine that a cosmic ray hit the register caching X and toggled its
bit. The result will be correct behavior of the semantically incorrect
program.

BTW, this is not so uncommon when dealing with buggy hardware implementing
buggy protocols.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Ray Dillinger
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49f7c587$0$95494$742ec2ed@news.sonic.net>
Dmitry A. Kazakov wrote:

> On Tue, 28 Apr 2009 21:21:01 +0200, Pascal Costanza wrote:
> 

>> Ah, right. That is actually done in practice. Just avoid the parts of
>> the hardware that don't work.
> 
> Actually I meant the case when the program has a bug "fixed" by
> malfunctioning hardware. 

I have had the "interesting" experience of running into flaws in the 
compiler (behavior contrary to specification) and had to code routines
that will work correctly under *both* (mutually exclusive) semantics 
so they wouldn't break again when the compiler got fixed.

Those routines were six times as verbose as they should have been, plus 
comments explaining both sets of semantics and proving how the code 
achieved its goal in both cases.  

And they took a constant-factor performance hit too, but fortunately 
they weren't called too often.

Since then I use open-source (compilers I'm allowed to fix) whenever 
possible.  It's just easier.

                                Bear
From: Cesar Rabak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <gt83vq$o81$1@aioe.org>
Dmitry A. Kazakov escreveu:
> On Tue, 28 Apr 2009 21:21:01 +0200, Pascal Costanza wrote:
> 
>> Dmitry A. Kazakov wrote:
>>> On Tue, 28 Apr 2009 16:29:13 +0200, Pascal Costanza wrote:
>>>> Dmitry A. Kazakov wrote:
>>>>> Analogous statement is that a properly functioning main board of the
>>>>> computer does not imply correctness of your program. Which by no means
>>>>> should lead you the conclusion that you should buy a defective computer "in
>>>>> order to improve your productivity" (as always).
>>>> Here, the reverse doesn't hold, and that's why this is a bad analogy.
>>> It does. An incorrect program can behave correctly on a malfunctioning
>>> hardware.
>> Ah, right. That is actually done in practice. Just avoid the parts of 
>> the hardware that don't work.
> 
> Actually I meant the case when the program has a bug "fixed" by
> malfunctioning hardware. Consider:
> 
> if X then
>    Print ("X is false");
> else
>    Print ("X is true");
> end if;
> 
> Now imagine that a cosmic ray hit the register caching X and toggled its
> bit. The result will be correct behavior of the semantically incorrect
> program.
Your attempt to stretch your examples has introduced an interesting kind 
of fallacy.

There is no way of knowing whether the program is correct as it stands, 
so your suggestion that the program is "semantically incorrect" is 
nonsensical.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <r19guim1mr11$.wsyr3nyqo6wi$.dlg@40tude.net>
On Tue, 28 Apr 2009 20:37:17 -0300, Cesar Rabak wrote:

> Dmitry A. Kazakov escreveu:
>> On Tue, 28 Apr 2009 21:21:01 +0200, Pascal Costanza wrote:
>> 
>>> Dmitry A. Kazakov wrote:
>>>> On Tue, 28 Apr 2009 16:29:13 +0200, Pascal Costanza wrote:
>>>>> Dmitry A. Kazakov wrote:
>>>>>> Analogous statement is that a properly functioning main board of the
>>>>>> computer does not imply correctness of your program. Which by no means
>>>>>> should lead you the conclusion that you should buy a defective computer "in
>>>>>> order to improve your productivity" (as always).
>>>>> Here, the reverse doesn't hold, and that's why this is a bad analogy.
>>>> It does. An incorrect program can behave correctly on a malfunctioning
>>>> hardware.
>>> Ah, right. That is actually done in practice. Just avoid the parts of 
>>> the hardware that don't work.
>> 
>> Actually I meant the case when the program has a bug "fixed" by
>> malfunctioning hardware. Consider:
>> 
>> if X then
>>    Print ("X is false");
>> else
>>    Print ("X is true");
>> end if;
>> 
>> Now imagine that a cosmic ray hit the register caching X and toggled its
>> bit. The result will be correct behavior of the semantically incorrect
>> program.
>
> Your attempt to stretch your examples brought an interesting kind of 
> fallacy.
> 
> There is no way of knowing if the programm is correct as it stands,

Exactly the point.

> so 
> your suggestion that the programm is "semantically incorrect" is 
> nonsensical.

On the contrary. What is nonsensical is the delusion that a given program
can be correct as-is, in some mystical way. Program correctness is checked
against the specification. Program behavior is another thing. The relation
is: IF the program is correct and the preconditions (like properly
functioning hardware) are met, THEN the program exhibits the expected
behavior.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Cesar Rabak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <gtd6ju$r5$3@aioe.org>
Dmitry A. Kazakov escreveu:
> On Tue, 28 Apr 2009 20:37:17 -0300, Cesar Rabak wrote:
> 
>> Dmitry A. Kazakov escreveu:
>>> On Tue, 28 Apr 2009 21:21:01 +0200, Pascal Costanza wrote:
>>>
>>>> Dmitry A. Kazakov wrote:
>>>>> On Tue, 28 Apr 2009 16:29:13 +0200, Pascal Costanza wrote:
>>>>>> Dmitry A. Kazakov wrote:
>>>>>>> Analogous statement is that a properly functioning main board of the
>>>>>>> computer does not imply correctness of your program. Which by no means
>>>>>>> should lead you the conclusion that you should buy a defective computer "in
>>>>>>> order to improve your productivity" (as always).
>>>>>> Here, the reverse doesn't hold, and that's why this is a bad analogy.
>>>>> It does. An incorrect program can behave correctly on a malfunctioning
>>>>> hardware.
>>>> Ah, right. That is actually done in practice. Just avoid the parts of 
>>>> the hardware that don't work.
>>> Actually I meant the case when the program has a bug "fixed" by
>>> malfunctioning hardware. Consider:
>>>
>>> if X then
>>>    Print ("X is false");
>>> else
>>>    Print ("X is true");
>>> end if;
>>>
>>> Now imagine that a cosmic ray hit the register caching X and toggled its
>>> bit. The result will be correct behavior of the semantically incorrect
>>> program.
>> Your attempt to stretch your examples brought an interesting kind of 
>> fallacy.
>>
>> There is no way of knowing if the programm is correct as it stands,
> 
> Exactly the point.
> 
>> so 
>> your suggestion that the programm is "semantically incorrect" is 
>> nonsensical.
> 
> On the contrary. Nonsensical is a delusion that a given program can be
> correct as-is, in some mystical way. Program correctness is checked against
> the specification. Program behavior is another thing. The relation is that
> If the program is correct and the preconditions (like properly functioning
> hardware are met) THEN the program exposes the expected behavior.
> 
Would you please show us a mathematically complete proof of whether the 
above program stub is "semantically correct" or "semantically incorrect"?

--
Cesar Rabak
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <stq86f8cj86y.1nymlossv658f$.dlg@40tude.net>
On Thu, 30 Apr 2009 18:52:49 -0300, Cesar Rabak wrote:

> Dmitry A. Kazakov escreveu:
>> On Tue, 28 Apr 2009 20:37:17 -0300, Cesar Rabak wrote:
>> 
>>> Dmitry A. Kazakov escreveu:
>>>> On Tue, 28 Apr 2009 21:21:01 +0200, Pascal Costanza wrote:
>>>>
>>>>> Dmitry A. Kazakov wrote:
>>>>>> On Tue, 28 Apr 2009 16:29:13 +0200, Pascal Costanza wrote:
>>>>>>> Dmitry A. Kazakov wrote:
>>>>>>>> Analogous statement is that a properly functioning main board of the
>>>>>>>> computer does not imply correctness of your program. Which by no means
>>>>>>>> should lead you the conclusion that you should buy a defective computer "in
>>>>>>>> order to improve your productivity" (as always).
>>>>>>> Here, the reverse doesn't hold, and that's why this is a bad analogy.
>>>>>> It does. An incorrect program can behave correctly on a malfunctioning
>>>>>> hardware.
>>>>> Ah, right. That is actually done in practice. Just avoid the parts of 
>>>>> the hardware that don't work.
>>>> Actually I meant the case when the program has a bug "fixed" by
>>>> malfunctioning hardware. Consider:
>>>>
>>>> if X then
>>>>    Print ("X is false");
>>>> else
>>>>    Print ("X is true");
>>>> end if;
>>>>
>>>> Now imagine that a cosmic ray hit the register caching X and toggled its
>>>> bit. The result will be correct behavior of the semantically incorrect
>>>> program.
>>> Your attempt to stretch your examples brought an interesting kind of 
>>> fallacy.
>>>
>>> There is no way of knowing if the programm is correct as it stands,
>> 
>> Exactly the point.
>> 
>>> so 
>>> your suggestion that the programm is "semantically incorrect" is 
>>> nonsensical.
>> 
>> On the contrary. Nonsensical is a delusion that a given program can be
>> correct as-is, in some mystical way. Program correctness is checked against
>> the specification. Program behavior is another thing. The relation is that
>> If the program is correct and the preconditions (like properly functioning
>> hardware are met) THEN the program exposes the expected behavior.
>> 
> Would you please shows us a mathematical complete proof of the above 
> program stub if it is "semantically correct" or "semantically incorrect"?

You can do it yourself. The starting point is the specification: inputs
(the values of X) and outputs (the printout). Then you need to define the
semantics of the language statements (like if-then-else). For the rest see
D. Gries, The Science of Programming (excellent reading, BTW). In the end
you will have a formal proof of correctness (assuming its decidability, of
course).

(I don't know why you are asking this, because incorrectness against the
assumed specification ("to print the value of X") is evident in this case.)

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Cesar Rabak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <gtf3cr$lrk$1@aioe.org>
Dmitry A. Kazakov escreveu:
> On Thu, 30 Apr 2009 18:52:49 -0300, Cesar Rabak wrote:
> 
>> Dmitry A. Kazakov escreveu:
>>> On Tue, 28 Apr 2009 20:37:17 -0300, Cesar Rabak wrote:
>>>
>>>> Dmitry A. Kazakov escreveu:
>>>>> On Tue, 28 Apr 2009 21:21:01 +0200, Pascal Costanza wrote:
>>>>>
>>>>>> Dmitry A. Kazakov wrote:
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1t8kxocylyro9$.13d9i6uqjt8sc.dlg@40tude.net>
On Fri, 01 May 2009 12:09:17 -0300, Cesar Rabak wrote:

> Dmitry A. Kazakov escreveu:
>> On Thu, 30 Apr 2009 18:52:49 -0300, Cesar Rabak wrote:
>> 
>>> Dmitry A. Kazakov escreveu:
>>>> On Tue, 28 Apr 2009 20:37:17 -0300, Cesar Rabak wrote:
>>>>
>>>>> Dmitry A. Kazakov escreveu:
>>>>>> On Tue, 28 Apr 2009 21:21:01 +0200, Pascal Costanza wrote:
>>>>>>
>>>>>>> Dmitry A. Kazakov wrote:
>>>>>>>> On Tue, 28 Apr 2009 16:29:13 +0200, Pascal Costanza wrote:
>>>>>>>>> Dmitry A. Kazakov wrote:
>>>>>>>>>> Analogous statement is that a properly functioning main board of the
>>>>>>>>>> computer does not imply correctness of your program. Which by no means
>>>>>>>>>> should lead you the conclusion that you should buy a defective computer "in
>>>>>>>>>> order to improve your productivity" (as always).
>>>>>>>>> Here, the reverse doesn't hold, and that's why this is a bad analogy.
>>>>>>>> It does. An incorrect program can behave correctly on a malfunctioning
>>>>>>>> hardware.
>>>>>>> Ah, right. That is actually done in practice. Just avoid the parts of 
>>>>>>> the hardware that don't work.
>>>>>> Actually I meant the case when the program has a bug "fixed" by
>>>>>> malfunctioning hardware. Consider:
>>>>>>
>>>>>> if X then
>>>>>>    Print ("X is false");
>>>>>> else
>>>>>>    Print ("X is true");
>>>>>> end if;
>>>>>>
>>>>>> Now imagine that a cosmic ray hit the register caching X and toggled its
>>>>>> bit. The result will be correct behavior of the semantically incorrect
>>>>>> program.
>>>>> Your attempt to stretch your examples brought an interesting kind of 
>>>>> fallacy.
>>>>>
>>>>> There is no way of knowing if the programm is correct as it stands,
>>>> Exactly the point.
>>>>
>>>>> so 
>>>>> your suggestion that the programm is "semantically incorrect" is 
>>>>> nonsensical.
>>>> On the contrary. Nonsensical is a delusion that a given program can be
>>>> correct as-is, in some mystical way. Program correctness is checked against
>>>> the specification. Program behavior is another thing. The relation is that
>>>> If the program is correct and the preconditions (like properly functioning
>>>> hardware are met) THEN the program exposes the expected behavior.
>>>>
>>> Would you please shows us a mathematical complete proof of the above 
>>> program stub if it is "semantically correct" or "semantically incorrect"?
>> 
>> You can do it yourself. The starting point is specification: Inputs (the
>> values of X), outputs (the printout). Then you need to define the semantics
>> of language statements (like if-then-else). For the rest see D. Gries, The
>> Science of Programming (an excellent reading, BTW). In the end you will
>> have a formal proof of correctness (assuming its decidability of course).
>> 
> 
> So since we cannot have decidability, you are shooting water all the 
> time. . .

In the above case it is decidable.

>> (I don't know why are you asking this, because incorrectness against
>> assumed specification ("to print value of X") is evident in this case.)
> 
> Not it is not and until now you've not shown us (much weaker than prove)...

Sorry, if that was not evident, then I cannot help you further.
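
For what it's worth, this particular fragment is small enough that the
check is mechanical; a sketch in Python (standing in for the Ada-like
pseudocode, with the specification assumed, as above, to be "print the
value of X"):

```python
def program(x):
    # The disputed fragment, transcribed literally
    return "X is false" if x else "X is true"

def spec(x):
    # Assumed specification: print the value of X
    return "X is true" if x else "X is false"

# The input domain is finite (two values), so correctness is decidable
# by exhaustive checking:
print(all(program(x) == spec(x) for x in (True, False)))  # False
```

The check prints False for both inputs disagreeing with the spec, i.e.
the fragment is provably incorrect against that specification.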

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: ····················@hotmail.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <91488371-b540-43a0-ae2c-896ac9deaf56@o14g2000vbo.googlegroups.com>
On 1 May, 16:09, Cesar Rabak <·······@yahoo.com.br> wrote:
> Dmitry A. Kazakov escreveu:
> > On Thu, 30 Apr 2009 18:52:49 -0300, Cesar Rabak wrote:
> >> Dmitry A. Kazakov escreveu:
> >>> On Tue, 28 Apr 2009 20:37:17 -0300, Cesar Rabak wrote:
> >>>> Dmitry A. Kazakov escreveu:
> >>>>> On Tue, 28 Apr 2009 21:21:01 +0200, Pascal Costanza wrote:
> >>>>>> Dmitry A. Kazakov wrote:
> >>>>>>> On Tue, 28 Apr 2009 16:29:13 +0200, Pascal Costanza wrote:
> >>>>>>>> Dmitry A. Kazakov wrote:

> >>>>>>>>> Analogous statement is that a properly functioning main board of the
> >>>>>>>>> computer does not imply correctness of your program. Which by no means
> >>>>>>>>> should lead you the conclusion that you should buy a defective computer "in
> >>>>>>>>> order to improve your productivity" (as always).
>
> >>>>>>>> Here, the reverse doesn't hold, and that's why this is a bad analogy.
>
> >>>>>>> It does. An incorrect program can behave correctly on a malfunctioning
> >>>>>>> hardware.
>
> >>>>>> Ah, right. That is actually done in practice. Just avoid the parts of
> >>>>>> the hardware that don't work.
>
> >>>>> Actually I meant the case when the program has a bug "fixed" by
> >>>>> malfunctioning hardware. Consider:
>
> >>>>> if X then
> >>>>>    Print ("X is false");
> >>>>> else
> >>>>>    Print ("X is true");
> >>>>> end if;

what's X? Or was that your point?

> >>>>> Now imagine that a cosmic ray hit the register caching X and toggled its
> >>>>> bit. The result will be correct behavior of the semantically incorrect
> >>>>> program.
>
> >>>> Your attempt to stretch your examples brought an interesting kind of
> >>>> fallacy.
>
> >>>> There is no way of knowing if the programm is correct as it stands,
>
> >>> Exactly the point.
>
> >>>> so your suggestion that the programm is "semantically incorrect" is
> >>>> nonsensical.
>
> >>> On the contrary. Nonsensical is a delusion that a given program can be
> >>> correct as-is, in some mystical way. Program correctness is checked against
> >>> the specification. Program behavior is another thing. The relation is that
> >>> If the program is correct and the preconditions (like properly functioning
> >>> hardware are met) THEN the program exposes the expected behavior.
>
> >> Would you please shows us a mathematical complete proof of the above
> >> program stub if it is "semantically correct" or "semantically incorrect"?
>
> > You can do it yourself. The starting point is specification: Inputs (the
> > values of X), outputs (the printout). Then you need to define the semantics
> > of language statements (like if-then-else). For the rest see D. Gries, The
> > Science of Programming (an excellent reading, BTW). In the end you will
> > have a formal proof of correctness (assuming its decidability of course).
>
> So since we cannot have decidability, you are shooting water all the
> time. . .

why not? And if you mean some version of the Halting Problem then
what does that have to do with this case? Some programs *can* be
proved to be correct.

> > (I don't know why are you asking this, because incorrectness against
> > assumed specification ("to print value of X") is evident in this case.)
>
> [No] it is not, and until now you've not shown us (much weaker than prove)...
From: Cesar Rabak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <gtgagd$ptm$1@aioe.org>
Dmitry A. Kazakov escreveu:
> On Thu, 30 Apr 2009 18:52:49 -0300, Cesar Rabak wrote:
> 
>> Dmitry A. Kazakov escreveu:
>>> On Tue, 28 Apr 2009 20:37:17 -0300, Cesar Rabak wrote:
>>>
>>>> Dmitry A. Kazakov escreveu:
>>>>> On Tue, 28 Apr 2009 21:21:01 +0200, Pascal Costanza wrote:
>>>>>
>>>>>> Dmitry A. Kazakov wrote:
>>>>>>> On Tue, 28 Apr 2009 16:29:13 +0200, Pascal Costanza wrote:
>>>>>>>> Dmitry A. Kazakov wrote:
>>>>>>>>> Analogous statement is that a properly functioning main board of the
>>>>>>>>> computer does not imply correctness of your program. Which by no means
>>>>>>>>> should lead you the conclusion that you should buy a defective computer "in
>>>>>>>>> order to improve your productivity" (as always).
>>>>>>>> Here, the reverse doesn't hold, and that's why this is a bad analogy.
>>>>>>> It does. An incorrect program can behave correctly on a malfunctioning
>>>>>>> hardware.
>>>>>> Ah, right. That is actually done in practice. Just avoid the parts of 
>>>>>> the hardware that don't work.
>>>>> Actually I meant the case when the program has a bug "fixed" by
>>>>> malfunctioning hardware. Consider:
>>>>>
>>>>> if X then
>>>>>    Print ("X is false");
>>>>> else
>>>>>    Print ("X is true");
>>>>> end if;
>>>>>
>>>>> Now imagine that a cosmic ray hit the register caching X and toggled its
>>>>> bit. The result will be correct behavior of the semantically incorrect
>>>>> program.
>>>> Your attempt to stretch your examples brought an interesting kind of 
>>>> fallacy.
>>>>
>>>> There is no way of knowing if the programm is correct as it stands,
>>> Exactly the point.
>>>
>>>> so 
>>>> your suggestion that the programm is "semantically incorrect" is 
>>>> nonsensical.
>>> On the contrary. Nonsensical is a delusion that a given program can be
>>> correct as-is, in some mystical way. Program correctness is checked against
>>> the specification. Program behavior is another thing. The relation is that
>>> If the program is correct and the preconditions (like properly functioning
>>> hardware are met) THEN the program exposes the expected behavior.
>>>
>> Would you please shows us a mathematical complete proof of the above 
>> program stub if it is "semantically correct" or "semantically incorrect"?
> 
> You can do it yourself. The starting point is specification: Inputs (the
> values of X), outputs (the printout). Then you need to define the semantics
> of language statements (like if-then-else). For the rest see D. Gries, The
> Science of Programming (an excellent reading, BTW). In the end you will
> have a formal proof of correctness (assuming its decidability of course).
> 

So since we cannot have decidability, you are shooting at water all the
time...

> (I don't know why are you asking this, because incorrectness against
> assumed specification ("to print value of X") is evident in this case.)
> 

No, it is not, and until now you've not shown us (much weaker than proving)...
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <75pfsbF18ajgkU2@mid.individual.net>
Dmitry A. Kazakov wrote:
> On Tue, 28 Apr 2009 21:21:01 +0200, Pascal Costanza wrote:
> 
>> Dmitry A. Kazakov wrote:
>>> On Tue, 28 Apr 2009 16:29:13 +0200, Pascal Costanza wrote:
>>>> Dmitry A. Kazakov wrote:
>>>>> Analogous statement is that a properly functioning main board of the
>>>>> computer does not imply correctness of your program. Which by no means
>>>>> should lead you the conclusion that you should buy a defective computer "in
>>>>> order to improve your productivity" (as always).
>>>> Here, the reverse doesn't hold, and that's why this is a bad analogy.
>>> It does. An incorrect program can behave correctly on a malfunctioning
>>> hardware.
>> Ah, right. That is actually done in practice. Just avoid the parts of 
>> the hardware that don't work.
> 
> Actually I meant the case when the program has a bug "fixed" by
> malfunctioning hardware. 

Well, then that's not a correct analogy either.



Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Marco Antoniotti
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <3a8cf982-b938-4e5e-9a0b-753b298fd40b@m24g2000vbp.googlegroups.com>
On Apr 20, 11:39 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Mon, 20 Apr 2009 01:53:29 +0200, Pascal Costanza wrote:
> > Vend wrote:
> >> I think that in order to write reliable software, early error
> >> detection is generally preferable, even if in some cases it might
> >> generate false positives.
>
> Yes, though considering this case, dead code is obviously an error. So it
> is a true positive, falsely attributed.
>
> > Now you're mixing up type errors and _actual_ errors again. Why do you
> > static types do this all the time?
>
> No, it is you who is mixing intended and program semantics. An ill-typed
> program is illegal independently on the programmer's intention. It is wrong
> to talk about its execution paths. An illegal program does not have any.

I think you are missing the point.  While static typing is a good
thing, it is a PITA to program in a too-strict ST language.  That is
the reason why Haskell is more pleasant to work with than OCaml: its
type system, while very sophisticated, is less of a bondage-and-discipline
one than OCaml's.

Shifting a little bit over, the other missing bit is that the
environment and/or the language should do a CA (Computer Aided) job.
If I have a language that admits untyped/untypable computing paths
(notice that I explicitly avoid the words "ill typed"), then that may -
and the whole point of the dynamic crowd is that it does - be a good
thing as well.

YMMV.  As usual, the right thing would be a Lisp language with type
inference.  That is why Qi is the closest thing to TRT floating
around.

> It is *exactly* same as if it would contain a syntax error. If you had a
> syntax error in a "path that is not executed", would a dynamically typed
> language reject this program, treacherously ignoring the "fact" that the
> program is "correct"? Yes it will. What a pity!
>
> The very question as you posed it is meaningless. An illegal program cannot
> be correct or incorrect. It is not a program.

Only in a very strict and restrictive programming language.

The following is incorrect in OCaml and correct in Haskell (needless
to say in CL).

function f (n is integer, acc is integer)
   = if (n <= 0) then
        return acc
     else
        return f(n - 1, n * acc)

translate it into both and try it as f(13, 1) on your run-of-the-mill
architecture.  And I have not even started doing 2 * 3.14.
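
The overflow alluded to here can be made concrete with a quick sketch
(Python, since its integers are arbitrary-precision like CL's and
Haskell's Integer; `to_int32` is my own illustrative helper mimicking a
fixed-width machine int, not any particular compiler's behavior):

```python
def fact_acc(n, acc=1):
    """Accumulator factorial, as in the sketch above; Python ints never overflow."""
    return acc if n <= 0 else fact_acc(n - 1, n * acc)

def to_int32(x):
    """Reinterpret x as a wrapped 32-bit two's-complement integer."""
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x >= (1 << 31) else x

print(fact_acc(13))            # 6227020800 -- already exceeds 2**31 - 1
print(to_int32(fact_acc(13)))  # 1932053504 -- what a 32-bit int would hold
```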

Cheers
--
Marco
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <781iz293hedz.1jhbd1us3btex$.dlg@40tude.net>
On Mon, 20 Apr 2009 07:22:33 -0700 (PDT), Marco Antoniotti wrote:

> On Apr 20, 11:39 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:

>> The very question as you posed it is meaningless. An illegal program cannot
>> be correct or incorrect. It is not a program.
> 
> Only in a very strict and restrictive programming language.

Yep, that is the whole point. That is the property of a *formal* language.
So it is not just about being untyped. You want more, you want to run a
syntactically incorrect program. From corrupted files, too? On a wrong
processor type? Powered off? After all there is a non-zero probability that
it might work...

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Marco Antoniotti
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <b90859ad-14be-4a8b-9d9f-ec4d8276ed94@d14g2000yql.googlegroups.com>
On Apr 20, 6:31 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Mon, 20 Apr 2009 07:22:33 -0700 (PDT), Marco Antoniotti wrote:
> > On Apr 20, 11:39 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> > wrote:
> >> The very question as you posed it is meaningless. An illegal program cannot
> >> be correct or incorrect. It is not a program.
>
> > Only in a very strict and restrictive programming language.
>
> Yep, that is the whole point. That is the property of a *formal* language.

Then why does the following work:

======================
Prelude> 40 + 2.0
40 + 2.0
42.0
======================

while the following is - IMHO - a PITA?

======================
# 40 + 2.0;;
Characters 5-8:
  40 + 2.0;;
       ^^^
Error: This expression has type float but is here used with type int
======================

Or, again

======================
*Main> factorial 14
factorial 14
87178291200
======================

vs.

======================
#   factorial 14;;
- : int = -868538368
#
======================
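
(The negative result above is not random garbage: it is 14! wrapped
into OCaml's native int, which on the 32-bit machines of the day is
only 31 bits wide, one bit being reserved by the runtime for the
pointer tag.  A Python sketch of that wraparound; the helper name is
mine, not OCaml's:)

```python
import math

def to_ocaml_int31(x):
    """Wrap x into a 31-bit two's-complement integer, as OCaml's native
    int does on a 32-bit platform (one bit is reserved for the tag)."""
    x %= 1 << 31
    return x - (1 << 31) if x >= (1 << 30) else x

print(math.factorial(14))                  # 87178291200
print(to_ocaml_int31(math.factorial(14)))  # -868538368, matching the REPL above
```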

It is because the designers made choices along the way.  The Haskell
guys made a language that is far more palatable to CL programmers
than, say, OCaml.

It is very true that the earlier you catch errors, the better it is.
What you are missing, is that some people find that looking for bugs
early, actually slows down overall development time towards a final,
comparably bug-free, product.  Do not ask me to provide you with
analytical data.  I do not have it besides my personal experience, and
AFAIK, the SE people are still actively researching ways to really
measure such things.

Now, would I like a CL with an interactive computer-aided type
checker?  Of course I would (and to some extent, I already have).  But
different people want different things.  People who want to program in
strict statically typed languages can do so.  Other people have a
different frame of mind.

There is the old saying: if Lisp is so great, then why isn't everybody
using it?  Who knows.  But let me paraphrase it.  If statically
typed (avec type inference) functional languages are so great (they
have been around since the 80s in a usable form), then why did Tcl,
Python, Ruby, and the other SLDJs pop up in the meanwhile?  I don't
have an answer and you, I'd bet, don't have an answer either.


Cheers
--
Marco
www.european-lisp-symposium.org
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <9wcvq2la44zz.1os94or1zqv6u.dlg@40tude.net>
On Mon, 20 Apr 2009 13:08:16 -0700 (PDT), Marco Antoniotti wrote:

>> On Apr 20, 6:31 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> On Mon, 20 Apr 2009 07:22:33 -0700 (PDT), Marco Antoniotti wrote:
>>> On Apr 20, 11:39 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>>> wrote:
>>>> The very question as you posed it is meaningless. An illegal program cannot
>>>> be correct or incorrect. It is not a program.
>>
>>> Only in a very strict and restrictive programming language.
>>
>> Yep, that is the whole point. That is the property of a *formal* language.
> 
> Then why does the following works:
> 
> ======================
> Prelude> 40 + 2.0
> 40 + 2.0
> 42.0
> ======================

Very bad: it suggests that 2.0 is floating-point, and so is the result. The
obvious problem with floating-point is that it is inexact. So the meaning
of 40+2.0 is diffuse. Does it round towards zero when it normalizes?

> while the following is - IMHO - a PITA?
> 
> ======================
> # 40 + 2.0;;
> Characters 5-8:
>   40 + 2.0;;
>        ^^^
> Error: This expression has type float but is here used with type int
> ======================

Yep, that is exactly what I expect from a good language. Probably it was
integer +, or maybe modular +, rational +, fixed-point +, floating-point +,
interval +? There are damn many numeric types already in mathematics, and
even more models of them in computing.

> It is very true that the earlier you catch errors, the better it is.
> What you are missing, is that some people find that looking for bugs
> early, actually slows down overall development time towards a final,
> comparably bug-free, product.

Then in my humble opinion, they must be retrained. Sorry, but if a pilot
finds that it is better not to check whether the retractable landing gear
is down before landing, then he had better wash dishes instead.

> There is the old saying: if Lisp is so great then why isn't everybody
> using it?  Who knows.  But let me paraphrase this.  If statically
> typed (avec type inference) functional languages are so great (they
> have been around since the 80s in a usable form),

Unfortunately they are far from being great. We still do not know how to
build a good type system.

IMO, dynamically typed languages are the computing field where new ideas
are researched. When such an idea becomes more or less understood and
ripe, it is incorporated into industrial languages, which can only be
statically typed.

> then why did Tcl,
> Python, Ruby, and other SLDJs popped up in the meanwhile?  I don't
> have an answer and you, I'd bet, don't have an answer either.

I always thought that they were sent to us as a retribution for our sins...
(:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Marco Antoniotti
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <b8c5d3bf-b346-4917-85cb-70f020bb64cb@k8g2000yqn.googlegroups.com>
On Apr 20, 10:41 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Mon, 20 Apr 2009 13:08:16 -0700 (PDT), Marco Antoniotti wrote:
> > On Apr 20, 6:31 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> > wrote:
> >> On Mon, 20 Apr 2009 07:22:33 -0700 (PDT), Marco Antoniotti wrote:
> >>> On Apr 20, 11:39 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> >>> wrote:
> >>>> The very question as you posed it is meaningless. An illegal program cannot
> >>>> be correct or incorrect. It is not a program.
>
> >>> Only in a very strict and restrictive programming language.
>
> >> Yep, that is the whole point. That is the property of a *formal* language.
>
> > Then why does the following works:
>
> > ======================
> > Prelude> 40 + 2.0
> > 40 + 2.0
> > 42.0
> > ======================
>
> Very bad, suggesting that 2.0 is floating point and so the result. The
> obvious problem with floating-point is that it is inexact. So the meaning
> of 40+2.0 is diffuse. Does it round towards zero if normalizes?

I don't know; I suppose the behavior of "rounding to zero" may be
documented somewhere.  You'd better take this flame to
comp.lang.haskell and see what the Haskell crowd tells you :) since
the above is Haskell.

> > while the following is - IMHO - a PITA?
>
> > ======================
> > # 40 + 2.0;;
> > Characters 5-8:
> >   40 + 2.0;;
> >        ^^^
> > Error: This expression has type float but is here used with type int
> > ======================
>
> Yep, that it is exactly what I expect from a good language.

No.  This is a PITA.  Especially for numerics where things should be
pretty well understood at this point.

> Probably it was
> integer +, or maybe modular +, rational +, fixed-point +, floating-point +,
> interval +? There are damn many numeric types already in mathematics, and
> even more their models in computing.

Yes.  But you are missing the point.  I should not fight the language
(or the type system) just to get the program to compile.

> > It is very true that the earlier you catch errors, the better it is.
> > What you are missing, is that some people find that looking for bugs
> > early, actually slows down overall development time towards a final,
> > comparably bug-free, product.
>
> Then in my humble opinion, they must be retrained.

In my humble opinion you are not answering.  There are plenty of
people who have trained themselves to be conversant in this and that.
My point was that there are (at least) two paths to a "bug-free"
program: one with early bug swats and the second with later bug
swats.  There is a class of people who choose not-so-type-strict
languages because they feel they get to the end result (I repeat: a
"comparably bug-free" code) with less effort.  At the same time they
employ all they have to swat bugs early: there *are* CL compilers that
do type inference.

> Sorry, if a pilot finds
> that it is better not to check whether the retractable landing gear is down
> before landing, then he should better wash dishes.

This is a cute example, but it does not work.  Static type checking
ensures that there is a landing gear.  A duck-typing language will
need a landing *and* a take-off gear before getting the pilot into the
flying position of checking whether the landing gear is out before landing.

> > There is the old saying: if Lisp is so great then why isn't everybody
> > using it?  Who knows.  But let me paraphrase this.  If statically
> > typed (avec type inference) functional languages are so great (they
> > have been around since the 80s in a usable form),
>
> Unfortunately they are far from being great. We still do not know how to
> build a good type system.

Yes.  I agree with that.  In fact there is no satisfactory type system
for Common Lisp.  And not because CL was designed without one (*),
but because there are so many ways to use the language that current
type checkers have no clue what to do about them.

So, here is a homework for you.  Fix the CL type system, implement a
CL extension that weaves the type checker seamlessly into the language,
and report back to us.  (BTW, Qi is very good, but no cigar.)

> IMO. Dynamically typed languages is a computing field where new ideas are
> researched. When such idea gets more or less understood and ripe, it is
> incorporated into industrial languages, which can be only statically typed.

Yes and no.  First of all, you carefully avoided the "Lisp Machine"
existence proof in your answers.  Second, "industrial languages" is an
undefined term.  Third, the amount of "industrial code" written in
dynamically typed languages is a clear counterexample, although I
think it is not relevant.

> > then why did Tcl,
> > Python, Ruby, and other SLDJs popped up in the meanwhile?  I don't
> > have an answer and you, I'd bet, don't have an answer either.
>
> I always thought that they were sent to us as a retribution for our sins...
> (:-))

Then do penance!  Your homework was clearly spelled out a few lines
above.  I am already waiting. :)

Cheers
--
Marco

(*) Don't even think of responding to this point.  I know that CL type
system does not have recursive types.  It is one of the things that
you should do in your homework. :)
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1a9zjk5jhe2mv$.1iltejupl9qz9.dlg@40tude.net>
On Tue, 21 Apr 2009 01:03:27 -0700 (PDT), Marco Antoniotti wrote:

> On Apr 20, 10:41 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> On Mon, 20 Apr 2009 13:08:16 -0700 (PDT), Marco Antoniotti wrote:
>>> On Apr 20, 6:31 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>>> wrote:
>>>> On Mon, 20 Apr 2009 07:22:33 -0700 (PDT), Marco Antoniotti wrote:
>>>>> On Apr 20, 11:39 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>>>>> wrote:
>>>>>> The very question as you posed it is meaningless. An illegal program cannot
>>>>>> be correct or incorrect. It is not a program.
>>
>>>>> Only in a very strict and restrictive programming language.
>>
>>>> Yep, that is the whole point. That is the property of a *formal* language.
>>
>>> Then why does the following works:
>>
>>> ======================
>>> Prelude> 40 + 2.0
>>> 40 + 2.0
>>> 42.0
>>> ======================
>>
>> Very bad, suggesting that 2.0 is floating point and so the result. The
>> obvious problem with floating-point is that it is inexact. So the meaning
>> of 40+2.0 is diffuse. Does it round towards zero if normalizes?
> 
> I don't know; I suppose the behavior of "rounding to zero" may be
> documented somewhere.

So why are you using it, if you don't know what it does?

> You'd better take this flame on
> comp.lang.haskell and see what the Haskell crowd tells you :) since
> the above is Haskell.

No, I blame you for bringing it up as an example of fair language use.
 
>>> while the following is - IMHO - a PITA?
>>
>>> ======================
>>> # 40 + 2.0;;
>>> Characters 5-8:
>>>   40 + 2.0;;
>>>        ^^^
>>> Error: This expression has type float but is here used with type int
>>> ======================
>>
>> Yep, that it is exactly what I expect from a good language.
> 
> No.  This is a PITA.  Especially for numerics where things should be
> pretty well understood at this point.

They are not, as your first example shows. You wrote a program with
ill-defined semantics; you have confirmed that. The compiler merely drew
your dissipated attention to this fact. Maybe you rather meant 40 + 2?

>> Probably it was
>> integer +, or maybe modular +, rational +, fixed-point +, floating-point +,
>> interval +? There are damn many numeric types already in mathematics, and
>> even more their models in computing.
> 
> Yes.  But you are missing the point.  I should not fight the language
> (or the type system) just to get the program to compile.

You need not. Define an operation "+" with the left argument of the type
integer and the right one of the type float. The result type is up to you.
Everybody is happy. You have defined the semantics, the compiler knows what
to do.

Implicit type conversions are bad. They are typical for weakly typed
languages. The classic example was PL/1. It was infamous for being a real
PITA, because the outcome of an expression with conversions was absolutely
unpredictable, yet well-defined. Since then not so many keep on arguing
that 40+2.0 is a good thing.
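
The explicit-definition alternative suggested above can be sketched in a
few lines (a hypothetical Python helper of my own naming; the point is
only that the programmer, not the language, picks the result type):

```python
from fractions import Fraction

def add_int_float(left, right):
    """A hypothetical explicit "+" defined only for (int, float).

    The result type is the programmer's deliberate choice -- here an
    exact rational, so no implicit rounding sneaks in."""
    if not isinstance(left, int) or not isinstance(right, float):
        raise TypeError("add_int_float is defined only for (int, float)")
    return Fraction(left) + Fraction(right)

print(add_int_float(40, 2.0))  # 42 -- exact, and explicitly sanctioned
```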

>>> It is very true that the earlier you catch errors, the better it is.
>>> What you are missing, is that some people find that looking for bugs
>>> early, actually slows down overall development time towards a final,
>>> comparably bug-free, product.
>>
>> Then in my humble opinion, they must be retrained.
> 
> In my humble opinion you are not answering.  There are plenty of
> people who have trained themselves to be conversant in this and that.
> My point was that there are (at least) two paths to a "bug-free"
> program: one with early bug-swats and the second with later bug-
> swats.  There is a class of people who chooses not so type strict
> languages because they feel they get to the end result (I repeat: a
> "comparably bug-free" code) with less effort.  At the same time they
> employ all they have to swat bugs early: there *are* CL compilers that
> do type inference.

You are appealing here to popularity. But this argument cannot stand. The
most popular language is Visual Basic, the most popular OS is Windows. What
else needs to be said?

Programming as an engineering activity requires training and selection. So
far we have been unable to reason about the technology of programming
without resorting to: "everybody was running, so I ran too".

>> Sorry, if a pilot finds
>> that it is better not to check whether the retractable landing gear is down
>> before landing, then he should better wash dishes.
> 
> This is a cute example, but it does not work.

The example was about postponed checks.
[...]

> So, here is a homework for you.  Fix the CL type system, implement a
> CL extension that works the type checker seamlessly in the language
> and report back to us.

If I worked in academia, I probably would. However you guys have things to
do first. Remove those pathetic brackets from the language!

>> IMO. Dynamically typed languages is a computing field where new ideas are
>> researched. When such idea gets more or less understood and ripe, it is
>> incorporated into industrial languages, which can be only statically typed.
> 
> Yes and no.  First of all you carefully avoided the "Lisp Machine"
> existence proof in your answers.

Why should I care? The idea sank. It is the hardware guys who dictate to
us, not the reverse. In general the idea is wrong. One can observe Prolog,
RDBMS, and the JVM as illustrations of how widely the idea was tried and
how miserably it failed. In fact even CISC architectures were questioned,
to give way to RISC.

> Second, "industrial languages" is a
> non defined term.

= used in industrial software development.

> Third, the amount of "industrial code" written in
> dynamically typed languages is a clear counter example, although I
> think it is not relevant.

Industrial does not imply use in industry. It means code produced in an
industrial way, engineered. You know: planning, fixed budgets, deadlines,
predictable quality, underpaid personnel, fat bosses (:-)) etc.

>>> then why did Tcl,
>>> Python, Ruby, and other SLDJs popped up in the meanwhile?  I don't
>>> have an answer and you, I'd bet, don't have an answer either.
>>
>> I always thought that they were sent to us as a retribution for our sins...
>> (:-))
> 
> Then do penance!  Your homework was clearly spelled out a few lines
> above.  I am already waiting. :)

I better not, otherwise something even more evil might come upon me. (:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Marco Antoniotti
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <cd7134d6-0c77-4069-925d-0f917af26274@t21g2000yqi.googlegroups.com>
On Apr 21, 7:15 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Tue, 21 Apr 2009 01:03:27 -0700 (PDT), Marco Antoniotti wrote:
> > On Apr 20, 10:41 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> > wrote:
> >> On Mon, 20 Apr 2009 13:08:16 -0700 (PDT), Marco Antoniotti wrote:
> >>> On Apr 20, 6:31 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> >>> wrote:
> >>>> On Mon, 20 Apr 2009 07:22:33 -0700 (PDT), Marco Antoniotti wrote:
> >>>>> On Apr 20, 11:39 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> >>>>> wrote:
> >>>>>> The very question as you posed it is meaningless. An illegal program cannot
> >>>>>> be correct or incorrect. It is not a program.
>
> >>>>> Only in a very strict and restrictive programming language.
>
> >>>> Yep, that is the whole point. That is the property of a *formal* language.
>
> >>> Then why does the following works:
>
> >>> ======================
> >>> Prelude> 40 + 2.0
> >>> 40 + 2.0
> >>> 42.0
> >>> ======================
>
> >> Very bad, suggesting that 2.0 is floating point and so the result. The
> >> obvious problem with floating-point is that it is inexact. So the meaning
> >> of 40+2.0 is diffuse. Does it round towards zero if normalizes?
>
> > I don't know; I suppose the behavior of "rounding to zero" may be
> > documented somewhere.
>
> So why are you using it, if you don't know what it does?

Because I know when I need to be careful about robustness of
computations.

> > You'd better take this flame on
> > comp.lang.haskell and see what the Haskell crowd tells you :) since
> > the above is Haskell.
>
> No, I blame you for bringing it as an example of a fair language use.
>

But it is fair language use.  Even in a statically typed language like
Haskell. But you are shooting at the wrong target.  Your target is in
comp.lang.haskell..... :)
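For what it's worth, the Haskell behavior can be sketched: there is no implicit int-to-float coercion happening at all, because integer literals are themselves polymorphic and defaulting simply resolves both operands to one type. A minimal illustration (my own sketch, not from either side of this thread):

```haskell
-- 40 is sugar for (fromInteger 40) :: Num a => a, and 2.0 for a
-- Fractional-constrained literal.  In 40 + 2.0 both operands must
-- share one type; GHC's type defaulting picks Double, so the whole
-- expression is ordinary Double addition, not a coercion of an Int.
fortyTwo :: Double
fortyTwo = 40 + 2.0

main :: IO ()
main = print fortyTwo  -- prints 42.0
```

So the expression is well-typed and unambiguous under the language's defaulting rules, which is precisely why the Haskell crowd does not consider it a bug.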

> >>> while the following is - IMHO - a PITA?
>
> >>> ======================
> >>> # 40 + 2.0;;
> >>> Characters 5-8:
> >>>   40 + 2.0;;
> >>>        ^^^
> >>> Error: This expression has type float but is here used with type int
> >>> ======================
>
> >> Yep, that it is exactly what I expect from a good language.
>
> > No.  This is a PITA.  Especially for numerics where things should be
> > pretty well understood at this point.
>
> They are not, as your first example shows. You wrote a program of an
> ill-defined semantics, you have confirmed that. The compiler merely drawn
> your dissipated attention to this fact. Maybe you meant rather 40 + 2?

Maybe I meant "here are two numbers, I know the compiler will do
coercion, let it do what it wants; get me a result and I will come
back later to fix things, meanwhile let me work on something which I
think is more important".  In OCaml, I may have had to stop and deal
with this non-issue instead of doing something else that I need more.


> >> Probably it was
> >> integer +, or maybe modular +, rational +, fixed-point +, floating-point +,
> >> interval +? There are damn many numeric types already in mathematics, and
> >> even more their models in computing.
>
> > Yes.  But you are missing the point.  I should not fight the language
> > (or the type system) just to get the program to compile.
>
> You need not. Define an operation "+" with the left argument of the type
> integer and the right one of the type float. The result type is up to you.
> Everybody is happy. You have defined the semantics, the compiler knows what
> to do.

Nope.  I am *not* happy to have to use +. for float sums.  And let's
be clear that this is just an example; fixing the numerical operators
in your code is trivial.
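For concreteness, the "define the mixed operator yourself" proposal quoted above can be sketched as follows (in Haskell, with a made-up operator name `.+.`, since this is only an illustration of the idea, not anyone's actual API):

```haskell
-- The programmer states the semantics of mixed addition once, with
-- explicit operand and result types; after that the compiler knows
-- exactly what an int-plus-float expression means.
(.+.) :: Int -> Double -> Double
n .+. x = fromIntegral n + x

main :: IO ()
main = print (40 .+. 2.0)  -- prints 42.0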


> Implicit type conversions are bad. They are typical for weakly typed
> languages. The classic example was PL/1. It was infamous for being real
> PITA, because the outcome of an expression with conversion was absolutely
> unpredictable, yet well-defined. Since then not so many keep on arguing
> that 40+2.0 is a good thing.

The Haskell crowd seems to.  And they are right in the middle of the
statically typed functional language landscape.

> >>> It is very true that the earlier you catch errors, the better it is.
> >>> What you are missing, is that some people find that looking for bugs
> >>> early, actually slows down overall development time towards a final,
> >>> comparably bug-free, product.
>
> >> Then in my humble opinion, they must be retrained.
>
> > In my humble opinion you are not answering.  There are plenty of
> > people who have trained themselves to be conversant in this and that.
> > My point was that there are (at least) two paths to a "bug-free"
> > program: one with early bug-swats and the second with later bug-
> > swats.  There is a class of people who chooses not so type strict
> > languages because they feel they get to the end result (I repeat: a
> > "comparably bug-free" code) with less effort.  At the same time they
> > employ all they have to swat bugs early: there *are* CL compilers that
> > do type inference.
>
> You are arguing here to popularity. But this argument cannot stand. The
> most popular language is Visual Basic, the most popular OS is Windows. What
> else need to be said?

Nope.  I am not arguing "to popularity".  I am arguing to resulting,
working and tested code.  And you do not seem to have a counter
argument; rather you are trying to dismiss the point that certain
people can be just as productive using tools that have different bug
swatting profiles.

> Programming as an engineering activity requires training and selection. So
> long we were unable to reason about technology of programming without
> resorting to: "all were running, I ran too".

This is an obvious statement. But you are dismissing other people
running because they don't run the way you do, but rather, sometimes
they run differently, although they still run.

> >> Sorry, if a pilot finds
> >> that it is better not to check whether the retractable landing gear is down
> >> before landing, then he should better wash dishes.
>
> > This is a cute example, but it does not work.
>
> The example was about postponed checks.

Nope.  It is the usual example which is good to illustrate the point
of the need for checks and that it is easier to do them statically.  It
is not an example that says that a pilot who checks that the landing
gear is out before landing is an idiot because he did not check
that he had a landing gear before getting to the airport to fly out.

> [...]
>
> > So, here is a homework for you.  Fix the CL type system, implement a
> > CL extension that works the type checker seamlessly in the language
> > and report back to us.
>
> If I worked in academics, probably I would. However you guys have things to
> do first. Remove these pathetic brackets from the language!

Now we have it.  You don't get the beauty of code is data.  That is
the crux of the problem.  The ML crowd did something very right but
messed up another thing.  They removed the equivalence of code and
data, which is what sets Lisps apart (and no: I do not think that PLOT
AST manipulations will fly either).  They had their reasons to do
so... but in doing that they lost many lispers.

> >> IMO. Dynamically typed languages is a computing field where new ideas are
> >> researched. When such idea gets more or less understood and ripe, it is
> >> incorporated into industrial languages, which can be only statically typed.
>
> > Yes and no.  First of all you carefully avoided the "Lisp Machine"
> > existence proof in your answers.
>
> Why should I care of? The idea sank. It is the hardware guys who dictate
> us, not reverse. In general the idea is wrong. One can observe Prolog,
> RDBMS, JVM as illustrations to how widely the idea was tried on and how
> miserably it failed. In fact even CISC architectures were questioned to
> give way to RISC.

But you just sneered at the fact that VB is the most used language
nowadays.  You avoided acknowledging that the Lisp Machines had OSes
built on top of a dynamically strongly typed language (as opposed to
UNIX's statically weakly typed one).

> > Second, "industrial languages" is a
> > non defined term.
>
> = used in industrial software developing.
>
> > Third, the amount of "industrial code" written in
> > dynamically typed languages is a clear counter example, although I
> > think it is not relevant.
>
> Industrial does not imply use in industry. It means code production in an
> industrial way, engineered. You know, planning, fixed budged, deadlines,
> predictable quality, underpaid personell, fat bosses (:-)) etc.

There are (still) two commercial Lisp vendors out there.  AFAIK they
are doing at least ok.  I am sure they know about these things. (Which
means Duane isn't paid enough! :) )

> >>> then why did Tcl,
> >>> Python, Ruby, and other SLDJs popped up in the meanwhile?  I don't
> >>> have an answer and you, I'd bet, don't have an answer either.
>
> >> I always thought that they were sent to us as a retribution for our sins...
> >> (:-))
>
> > Then do penance!  Your homework was clearly spelled out a few lines
> > above.  I am already waiting. :)
>
> I better not, otherwise something even more evil might come upon me. (:-))

Nope.  You can't avoid it.  It is the homework curse.  Now you have to
do it.  :)

Cheers
--
Marco
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <uotmzp5r66mz.1ryw6b6ms63bz$.dlg@40tude.net>
On Tue, 21 Apr 2009 14:27:16 -0700 (PDT), Marco Antoniotti wrote:

> On Apr 21, 7:15 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> On Tue, 21 Apr 2009 01:03:27 -0700 (PDT), Marco Antoniotti wrote:
>>> On Apr 20, 10:41 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>>> wrote:
>>>> On Mon, 20 Apr 2009 13:08:16 -0700 (PDT), Marco Antoniotti wrote:
>>>>> On Apr 20, 6:31 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>>>>> wrote:
>>>>>> On Mon, 20 Apr 2009 07:22:33 -0700 (PDT), Marco Antoniotti wrote:
>>>>>>> On Apr 20, 11:39 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>>>>>>> wrote:
>>>>>>>> The very question as you posed it is meaningless. An illegal program cannot
>>>>>>>> be correct or incorrect. It is not a program.
>>
>>>>>>> Only in a very strict and restrictive programming language.
>>
>>>>>> Yep, that is the whole point. That is the property of a *formal* language.
>>
>>>>> Then why does the following works:
>>
>>>>> ======================
>>>>> Prelude> 40 + 2.0
>>>>> 40 + 2.0
>>>>> 42.0
>>>>> ======================
>>
>>>> Very bad, suggesting that 2.0 is floating point and so the result. The
>>>> obvious problem with floating-point is that it is inexact. So the meaning
>>>> of 40+2.0 is diffuse. Does it round towards zero if normalizes?
>>
>>> I don't know; I suppose the behavior of "rounding to zero" may be
>>> documented somewhere.
>>
>> So why are you using it, if you don't know what it does?
> 
> Because I know when I need to be careful about robustness of
> computations.

This does not explain your intent either to the program reader or to the
compiler. The only clear thing is that you were too careless to express it
unambiguously. There is also no hope that any run-time information
unavailable at compile time could clarify it, unlike, say, a dynamic
type case. So it is a clear bug to me.

>>>>> while the following is - IMHO - a PITA?
>>
>>>>> ======================
>>>>> # 40 + 2.0;;
>>>>> Characters 5-8:
>>>>>   40 + 2.0;;
>>>>>        ^^^
>>>>> Error: This expression has type float but is here used with type int
>>>>> ======================
>>
>>>> Yep, that it is exactly what I expect from a good language.
>>
>>> No.  This is a PITA.  Especially for numerics where things should be
>>> pretty well understood at this point.
>>
>> They are not, as your first example shows. You wrote a program of an
>> ill-defined semantics, you have confirmed that. The compiler merely drawn
>> your dissipated attention to this fact. Maybe you meant rather 40 + 2?
> 
> Maybe I meant "here are two numbers, I know the compiler will do
> coercion, let it do what it wants; get me a result and I will come
> back later to fix things, meanwhile let moe work on something which I
> think is more important". 

You could write 4.12 instead.

> In OCaml, I may have had to stop and deal
> with this non-issue instead of doing something else that I need more.

Why not remove this piece of code?

>>>> Probably it was
>>>> integer +, or maybe modular +, rational +, fixed-point +, floating-point +,
>>>> interval +? There are damn many numeric types already in mathematics, and
>>>> even more their models in computing.
>>
>>> Yes.  But you are missing the point.  I should not fight the language
>>> (or the type system) just to get the program to compile.
>>
>> You need not. Define an operation "+" with the left argument of the type
>> integer and the right one of the type float. The result type is up to you.
>> Everybody is happy. You have defined the semantics, the compiler knows what
>> to do.
> 
> Nope.  I am *not* happy to have to use +. for float sums.  And let's
> be clear that this is just an example; fixing the numerical operators
> in your code is trivial.

An example of what? It does not prove that ignoring bugs is good. Are you
trying to classify bugs into real bugs and not-so-big bugs? Without careful
software design it is impossible to do. If merely considering the semantics
of + is already too big a burden, then, I presume, designing the program in
a way that variations of the semantics of + would have little (measurably
little) effect on its behaviour is a far more difficult task. Certainly
you don't want to do that either. Therefore you cannot claim it a non-issue.

> Nope.  I am not arguing "to popularity".  I am arguing to resulting,
> working and tested code.  And you do not seem to have a counter
> argument, rather you are trying to dismiss the point that certain
> people can be as productive even using tools that have different bug
> swatting profiles

No. I object to the idea of leaving a bug in the program after it has been
detected. (It is not about how the bug gets detected.) Here is a list of
propositions:

1. All detectable bugs have to be detected
2. These shall be detected early
3. Detected bugs have to be fixed early

Nobody seems to argue against 1 and 2. There is some concern about 3. I
don't see how 3 can be rebutted without some strong data about the effects
of a given bug on the program behavior.

>> Programming as an engineering activity requires training and selection. So
>> long we were unable to reason about technology of programming without
>> resorting to: "all were running, I ran too".
> 
> This is an obvious statement. But you are dismissing other people
> running because they don't run the way you do, but rather, sometimes
> they run differently, although they still run.

You get me wrong. The point is that in engineering there is not that much
space for choosing between practices. I believe that the way I do it is
right. You trust in your way. There is no obvious criterion by which
professionals in this area could rationally resolve it. I.e. what we are
doing is not much of an engineering discipline. Otherwise one of us would
already have to look for another job.

>>>> Sorry, if a pilot finds
>>>> that it is better not to check whether the retractable landing gear is down
>>>> before landing, then he should better wash dishes.
>>
>>> This is a cute example, but it does not work.
>>
>> The example was about postponed checks.
> 
> Nope.

So we can agree on the position 3? I cannot believe it!

>>> So, here is a homework for you.  Fix the CL type system, implement a
>>> CL extension that works the type checker seamlessly in the language
>>> and report back to us.
>>
>> If I worked in academics, probably I would. However you guys have things to
>> do first. Remove these pathetic brackets from the language!
> 
> Now we have it.  You don't get the beauty of code is data. 

Right. Code is not data.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Marco Antoniotti
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <0877af46-eae2-40c6-a3a1-99b82b5bb8a2@y9g2000yqg.googlegroups.com>
Alright then....  last time around.

On Apr 22, 4:57 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Tue, 21 Apr 2009 14:27:16 -0700 (PDT), Marco Antoniotti wrote:
> > On Apr 21, 7:15 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> > wrote:
> >> On Tue, 21 Apr 2009 01:03:27 -0700 (PDT), Marco Antoniotti wrote:
> >>> On Apr 20, 10:41 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> >>> wrote:
> >>>> On Mon, 20 Apr 2009 13:08:16 -0700 (PDT), Marco Antoniotti wrote:
> >>>>> On Apr 20, 6:31 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> >>>>> wrote:
> >>>>>> On Mon, 20 Apr 2009 07:22:33 -0700 (PDT), Marco Antoniotti wrote:
> >>>>>>> On Apr 20, 11:39 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> >>>>>>> wrote:
> >>>>>>>> The very question as you posed it is meaningless. An illegal program cannot
> >>>>>>>> be correct or incorrect. It is not a program.
>
> >>>>>>> Only in a very strict and restrictive programming language.
>
> >>>>>> Yep, that is the whole point. That is the property of a *formal* language.
>
> >>>>> Then why does the following works:
>
> >>>>> ======================
> >>>>> Prelude> 40 + 2.0
> >>>>> 40 + 2.0
> >>>>> 42.0
> >>>>> ======================
>
> >>>> Very bad, suggesting that 2.0 is floating point and so the result. The
> >>>> obvious problem with floating-point is that it is inexact. So the meaning
> >>>> of 40+2.0 is diffuse. Does it round towards zero if normalizes?
>
> >>> I don't know; I suppose the behavior of "rounding to zero" may be
> >>> documented somewhere.
>
> >> So why are you using it, if you don't know what it does?
>
> > Because I know when I need to be careful about robustness of
> > computations.
>
> This does not explain your intent neither to the program reader nor to the
> compiler. The only clear thing is that you were careless to express it
> unambiguously. There is also no hope that any run-time information
> unavailable at compile time could clarify it, unlikely to, say, a dynamic
> type case. So it is a clear bug to me.

It is not a bug to me, nor to the Haskell folks.  Can we agree to
disagree?

>
>
>
> >>>>> while the following is - IMHO - a PITA?
>
> >>>>> ======================
> >>>>> # 40 + 2.0;;
> >>>>> Characters 5-8:
> >>>>>   40 + 2.0;;
> >>>>>        ^^^
> >>>>> Error: This expression has type float but is here used with type int
> >>>>> ======================
>
> >>>> Yep, that it is exactly what I expect from a good language.
>
> >>> No.  This is a PITA.  Especially for numerics where things should be
> >>> pretty well understood at this point.
>
> >> They are not, as your first example shows. You wrote a program of an
> >> ill-defined semantics, you have confirmed that. The compiler merely drawn
> >> your dissipated attention to this fact. Maybe you meant rather 40 + 2?
>
> > Maybe I meant "here are two numbers, I know the compiler will do
> > coercion, let it do what it wants; get me a result and I will come
> > back later to fix things, meanwhile let moe work on something which I
> > think is more important".
>
> You could write 4.12 instead.

Yes. I could and I will.  But this is just an example of the
programming modality/process/practice that very, very strict statically
typed programming languages impose on the programmer.



>
> > In OCaml, I may have had to stop and deal
> > with this non-issue instead of doing something else that I need more.
>
> Why not to remove this piece of code?
>

Because it works! (And Haskell infers a correct type for it; if I am
using CL some compilers may even be so smart as to let you know that
what you are doing has performance penalties).




>
>
> >>>> Probably it was
> >>>> integer +, or maybe modular +, rational +, fixed-point +, floating-point +,
> >>>> interval +? There are damn many numeric types already in mathematics, and
> >>>> even more their models in computing.
>
> >>> Yes.  But you are missing the point.  I should not fight the language
> >>> (or the type system) just to get the program to compile.
>
> >> You need not. Define an operation "+" with the left argument of the type
> >> integer and the right one of the type float. The result type is up to you.
> >> Everybody is happy. You have defined the semantics, the compiler knows what
> >> to do.
>
> > Nope.  I am *not* happy to have to use +. for float sums.  And let's
> > be clear that this is just an example; fixing the numerical operators
> > in your code is trivial.
>
> An example of what? It does not prove that ignoring bugs is good. Are you
> trying to classify bugs into bugs and not so big bugs? Without careful
> software design it is impossible to do. If merely considering the semantics
> of + is already too big burden, then I presume, to design the program in a
> way that variations of the semantics of + would have little (measurably
> little) effect on its behaviour is a greatly more difficult task. Certainly
> you don't want to do that either. Therefore you cannot claim it non-issue.

You just don't get it, do you?  I do not mean to be offensive.  I am
just trying to make the point that I am willing to use all the tricks
and tools I have at hand to get the job done.  Yet, at the same time I
do have some ability to discern what is important and what is less
important.  If there are some "untypable" parts of my program/
application/library, I do not want to be stopped from working on other
parts because (something like) 40 + 2.0 generates a type error.


> > Nope.  I am not arguing "to popularity".  I am arguing to resulting,
> > working and tested code.  And you do not seem to have a counter
> > argument, rather you are trying to dismiss the point that certain
> > people can be as productive even using tools that have different bug
> > swatting profiles
>
> No. I object the idea to leave a bug in the program after it has been
> detected. (It is not about how the bug gets detected.) Here is a list of
> propositions:
>
> 1. All detectable bugs have to be detected
> 2. These shall be detected early
> 3. Detected bugs have to be fixed early
>
> Nobody seems to argue against 1 and 2. There is some concern about 3. I
> don't see how 3 can be rebutted, without some strong data about effects of
> a given bug on the program behavior.

You are perfectly right.  Up to point 3.  Which by your own admission
needs data.  Data that the Software Engineering folks are still
debating how to collect.


> >> Programming as an engineering activity requires training and selection. So
> >> long we were unable to reason about technology of programming without
> >> resorting to: "all were running, I ran too".
>
> > This is an obvious statement. But you are dismissing other people
> > running because they don't run the way you do, but rather, sometimes
> > they run differently, although they still run.
>
> You get me wrong. The point is that in engineering there is not that much
> space for choosing between practices. I believe that the way I do it is
> right. You trust in your way. There is no obvious criterion how
> professionals in this area could rationally resolve it. I.e. what we are
> doing is not much engineering. Otherwise one of us would already have to
> look for another job.

We can debate this as long as you want.  At least we are agreeing to
disagree.  Probably it is me who is not doing "engineering" as you
intend it.  But again, I hold that that is "engineering" as *you*
intend it.

>
> >>>> Sorry, if a pilot finds
> >>>> that it is better not to check whether the retractable landing gear is down
> >>>> before landing, then he should better wash dishes.
>
> >>> This is a cute example, but it does not work.
>
> >> The example was about postponed checks.
>
> > Nope.
>
> So we can agree on the position 3? I cannot believe it!

Only in the way I qualified it :)

> >>> So, here is a homework for you.  Fix the CL type system, implement a
> >>> CL extension that works the type checker seamlessly in the language
> >>> and report back to us.
>
> >> If I worked in academics, probably I would. However you guys have things to
> >> do first. Remove these pathetic brackets from the language!
>
> > Now we have it.  You don't get the beauty of code is data.
>
> Right. Code is not data.

What about "text"?  What does a compiler do?  The beauty of Lisp is
that it lets you do it in a far easier way.  What was the quote?

"APL is like a diamond, try to add anything to it and it breaks.  Lisp
is like a ball of mud, no matter how much mud you add to it, it still
looks like a ball of mud". :)

Cheers
--
Marco
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1o5qs8sgyqrmj.7yrqj3gv12l8.dlg@40tude.net>
On Wed, 22 Apr 2009 08:21:35 -0700 (PDT), Marco Antoniotti wrote:

> Alright then....  last time around.
> 
> On Apr 22, 4:57 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> On Tue, 21 Apr 2009 14:27:16 -0700 (PDT), Marco Antoniotti wrote:
>>> On Apr 21, 7:15 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>>> wrote:
>>>> On Tue, 21 Apr 2009 01:03:27 -0700 (PDT), Marco Antoniotti wrote:
>>>>> On Apr 20, 10:41 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>>>>> wrote:
>>>>>> On Mon, 20 Apr 2009 13:08:16 -0700 (PDT), Marco Antoniotti wrote:
>>>>>>> On Apr 20, 6:31 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>>>>>>> wrote:
>>>>>>>> On Mon, 20 Apr 2009 07:22:33 -0700 (PDT), Marco Antoniotti wrote:
>>>>>>>>> On Apr 20, 11:39 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>>>>>>>>> wrote:
>>>>>>>>>> The very question as you posed it is meaningless. An illegal program cannot
>>>>>>>>>> be correct or incorrect. It is not a program.
>>
>>>>>>>>> Only in a very strict and restrictive programming language.
>>
>>>>>>>> Yep, that is the whole point. That is the property of a *formal* language.
>>
>>>>>>> Then why does the following works:
>>
>>>>>>> ======================
>>>>>>> Prelude> 40 + 2.0
>>>>>>> 40 + 2.0
>>>>>>> 42.0
>>>>>>> ======================
>>
>>>>>> Very bad, suggesting that 2.0 is floating point and so the result. The
>>>>>> obvious problem with floating-point is that it is inexact. So the meaning
>>>>>> of 40+2.0 is diffuse. Does it round towards zero if normalizes?
>>
>>>>> I don't know; I suppose the behavior of "rounding to zero" may be
>>>>> documented somewhere.
>>
>>>> So why are you using it, if you don't know what it does?
>>
>>> Because I know when I need to be careful about robustness of
>>> computations.
>>
>> This does not explain your intent neither to the program reader nor to the
>> compiler. The only clear thing is that you were careless to express it
>> unambiguously. There is also no hope that any run-time information
>> unavailable at compile time could clarify it, unlikely to, say, a dynamic
>> type case. So it is a clear bug to me.
> 
> It is nto a bug to me, nor to the Haskell folks.  Can we agree to
> disagree?

Just one last note. A bug is not a matter of someone's opinion. It should
be objective, e.g. testable. What kind of test could you propose in support
of the claim that the behavior (which you don't know) is correct (not a
bug)? A static type check is a kind of little test. In a language I would
admire, that test would fail.

> You just don't get it, do you?  I do not mean to be offensive.  I am
> just trying to make the point that I am willing to use all the tricks
> and tools I have at hand to get the job done.

No, I understand it pretty well: you do it at will. So what do you expect
me to answer? That I am shaken to the ground, have lost my faith in
humanity? Not really. (:-))

>>> Nope.  I am not arguing "to popularity".  I am arguing to resulting,
>>> working and tested code.  And you do not seem to have a counter
>>> argument, rather you are trying to dismiss the point that certain
>>> people can be as productive even using tools that have different bug
>>> swatting profiles
>>
>> No. I object the idea to leave a bug in the program after it has been
>> detected. (It is not about how the bug gets detected.) Here is a list of
>> propositions:
>>
>> 1. All detectable bugs have to be detected
>> 2. These shall be detected early
>> 3. Detected bugs have to be fixed early
>>
>> Nobody seems to argue against 1 and 2. There is some concern about 3. I
>> don't see how 3 can be rebutted, without some strong data about effects of
>> a given bug on the program behavior.
> 
> You are perfectly right.  Up to point 3.  Which by your own admission
> needs data.  Data that the Software Engineering folks are still
> debating how to collect.

Yes, if we wished to show it empirically. But I asked for other data:
design documents, a software architecture letting us (you) conclude that
the effect of the bug to be ignored is negligible. It must be a deliberate
decision based on certain knowledge, or?

>>>>> So, here is a homework for you.  Fix the CL type system, implement a
>>>>> CL extension that works the type checker seamlessly in the language
>>>>> and report back to us.
>>
>>>> If I worked in academics, probably I would. However you guys have things to
>>>> do first. Remove these pathetic brackets from the language!
>>
>>> Now we have it.  You don't get the beauty of code is data.
>>
>> Right. Code is not data.
> 
> What about "text".  What does a compiler do?

You fall into the trap of universal quantification. What is data for one
program might well be code of another.

> The beauty of Lisp is
> that it lets you do it in a far easier way.  What was the quote?
> 
> "APL is like a diamond, try to add anything to it and it breaks.  Lisp
> is like a ball of mud, no matter how much mud you add to it, it still
> looks like a ball of mud". :)

If only I were a scarab! (:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090420205010.GV4558@gildor.inglorion.net>
On Mon, Apr 20, 2009 at 01:08:16PM -0700, Marco Antoniotti wrote:
> 
> There is the old saying: if Lisp is so great then why isn't everybody
> using it?  Who knows.  But let me paraphrase this.  If statically
> typed (avec type inference) functional languages are so great (they
> have been around since the 80s in a usable form), then why did Tcl,
> Python, Ruby, and other SLDJs popped up in the meanwhile?  I don't
> have an answer and you, I'd bet, don't have an answer either.

I think the popularity of the likes of Ruby and Python is simply due to 
them being very good at what they do: provide a low barrier to doing the 
things many people want to do with their computers.

They don't bother would-be programmers with the finer details of data 
representations or memory management (unlike, say, C), and offer a lot 
of popular functionality in libraries, using naming conventions that 
make sense to today's programmers (unlike, say, Common Lisp).

Of course, no language is perfect, and both Ruby and Python certainly 
have their shortcomings. But I think the effort required to learn either 
is very low as programming languages go, and yet they are very powerful. 
And if you need more speed or libraries, it's relatively easy to 
interface to C. So you get a low barrier to entry, plus pretty much any 
functionality you want, at whatever speed you need (although you may 
have to make some trade-offs). This is very hard to beat.
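As a toy illustration of that low barrier (my sketch, nothing from any language's official pitch): counting word frequencies takes a few lines of stock Python, with no thought given to memory management or data representation.

```python
from collections import Counter

text = "the quick brown fox jumps over the lazy dog the end"

# Split on whitespace and tally occurrences; no manual memory
# management or data-representation choices required.
counts = Counter(text.split())

print(counts.most_common(1))  # [('the', 3)]
```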

Regards,

Bob

-- 
"Beware of bugs in the above code; I have only proved it correct, but not
tried it."
	-- Donald Knuth


From: Marco Antoniotti
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <98e26906-927b-4c48-992e-ae6b2ac8d55a@f19g2000yqo.googlegroups.com>
On Apr 20, 10:50 pm, Robbert Haarman <··············@inglorion.net>
wrote:
> On Mon, Apr 20, 2009 at 01:08:16PM -0700, Marco Antoniotti wrote:
>
> > There is the old saying: if Lisp is so great then why isn't everybody
> > using it?  Who knows.  But let me paraphrase this.  If statically
> > typed (avec type inference) functional languages are so great (they
> > have been around since the 80s in a usable form), then why did Tcl,
> > Python, Ruby, and other SLDJs pop up in the meanwhile?  I don't
> > have an answer and you, I'd bet, don't have an answer either.
>
> I think the popularity of the likes of Ruby and Python is simply due to
> them being very good at what they do: provide a low barrier to doing the
> things many people want to do with their computers.
>
> They don't bother would-be programmers with the finer details of data
> representations or memory management (unlike, say, C), and offer a lot
> of popular functionality in libraries, using naming conventions that
> make sense to today's programmers (unlike, say, Common Lisp).
>
> Of course, no language is perfect, and both Ruby and Python certainly
> have their shortcomings. But I think the effort required to learn either
> is very low as programming languages go, and yet they are very powerful.
> And if you need more speed or libraries, it's relatively easy to
> interface to C. So you get a low barrier to entry, plus pretty much any
> functionality you want, at whatever speed you need (although you may
> have to make some trade-offs). This is very hard to beat.

Yes.  This is all agreeable.  But that simply means that "advanced
languages" (for an appropriate definition of "advanced") are "hard".

Ok.  I shouldn't have jumped in....  Let me jump out. :)

> --
> "Beware of bugs in the above code; I have only proved it correct, but not
> tried it."
>         -- Donald Knuth

A case where "argument by authority" should really shut everybody
down :)


Cheers
--
Marco
www.european-lisp-symposium.org
From: Tamas K Papp
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <753mt9F16ddf2U1@mid.individual.net>
On Mon, 20 Apr 2009 18:31:57 +0200, Dmitry A. Kazakov wrote:

> On Mon, 20 Apr 2009 07:22:33 -0700 (PDT), Marco Antoniotti wrote:
> 
>> On Apr 20, 11:39 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>> wrote:
> 
>>> The very question as you posed it is meaningless. An illegal program
>>> cannot be correct or incorrect. It is not a program.
>> 
>> Only in a very strict and restrictive programming language.
> 
> Yep, that is the whole point. That is the property of a *formal*
> language. So it is not just about being untyped. You want more, you want
> to run a syntactically incorrect program. From corrupted files, too? On
> a wrong processor type? Powered off? After all there is a non-zero
> probability that it might work...

You are comparing things that don't make sense (corrupted files, etc)
to something that does and is found to be very advantageous by its
users (dynamically typed languages).  Besides the ignorance you have
demonstrated in this thread, this also shows that you are not above
arguing dishonestly, using false analogies and similar rhetorical
tools.

Rational arguments are only possible between parties who refrain from
such practices -- whereas you are just desperately trying to discredit
things which do not fit into your narrow mind.

This thread is pointless.

Tamas
From: Ray Dillinger
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49ecb282$0$95525$742ec2ed@news.sonic.net>
Dmitry A. Kazakov wrote:

> No, it is you who is mixing intended and program semantics. An ill-typed
> program is illegal independently of the programmer's intention. It is
> wrong to talk about its execution paths. An illegal program does not have
> any.

Y'know what's illegal?  Things that there are laws against. 
Programs that are illegal, at least in this country, are 
any executable code that helps someone to violate the DMCA.  
I think worms or viruses are also often illegal, but they 
are mostly written in statically typed languages.  

So could you please chill down the hyperbole a little?  
Nobody was talking about anything illegal, or even immoral, 
until I mentioned it in the paragraph above. 

I've been trying to ignore this thread, but I feel responsible 
since I started it.  

I'm sorry everybody; I didn't intend to start a religious war 
about typing.  


I will just say some really obvious things.  I think that these
are universal. 

Programs that violate civil or criminal laws are illegal.

Other programs are legal. 

Every language implementation accepts some files as source 
for programs, and rejects others.  

Anytime a program runs, with or without some actual input, 
without encountering errors, then it has run without 
encountering errors. 

This is true whether or not it was provable before the program 
started that it could be run without encountering errors. 

This is true even when it is provable that a run of the 
program for some different input *will* encounter errors. 

This is true regardless of how the particular language system 
involved responds to errors. 


                                Bear
From: Didier Verna
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <muxprf7zgv1.fsf@uzeb.lrde.epita.fr>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> wrote:

> It is *exactly* the same as if it contained a syntax error. If you had
> a syntax error in a "path that is not executed", would a dynamically
> typed language reject this program, treacherously ignoring the "fact"
> that the program is "correct"? Yes it will. What a pity!

1/ Yes, what a pity. The future of dynamic languages lies in JIT
   Read-Eval'uation.

2/ In Lisp (I mean if you type your code correctly), leaving code
   unfinished doesn't often lead to syntax errors, so this
   is not really a problem.

3/ Even if it were (leading to syntax errors), the REPL sort of lets you
   have syntactically incorrect programs run. You just don't Read-Eval
   the parts that are unfinished. I often have half-written functions in
   my Lisp files because I'm suddenly thinking about something else and
   jump to it. I just don't M-C-x them right away, that's all.

-- 
European Lisp Symposium, May 2009: http://www.european-lisp-symposium.org
European Lisp Workshop, July 2009: http://elw.bknr.net/2009

Scientific site:   http://www.lrde.epita.fr/~didier
Music (Jazz) site: http://www.didierverna.com
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <75ogeaF18r19aU1@mid.individual.net>
Dmitry A. Kazakov wrote:
> On Mon, 20 Apr 2009 01:53:29 +0200, Pascal Costanza wrote:
> 
>> Vend wrote:
> 
>>> I think that in order to write reliable software, early error
>>> detection is generally preferable, even if in some cases it might
>>> generate false positives.
> 
> Yes, though considering this case, dead code is obviously an error. So it
> is a true positive, falsely attributed.

Here is an example of code you _want_ to be dead in order to be correct:

(define (foo x)
   (if (= x 0)
     (error "This should never happen!")
     (/ 5 x)))

>> Now you're mixing up type errors and _actual_ errors again. Why do you 
>> static typers do this all the time?
> 
> No, it is you who is mixing intended and program semantics. An ill-typed
> program is illegal independently of the programmer's intention. It is wrong
> to talk about its execution paths. An illegal program does not have any.
> 
> It is *exactly* the same as if it contained a syntax error. If you had a
> syntax error in a "path that is not executed", would a dynamically typed
> language reject this program, treacherously ignoring the "fact" that the
> program is "correct"? Yes it will. What a pity!

Here is a program with serious syntax errors:

test.lisp >>>>>>>>>
(defun hello ()
   (print "Hello, World!"))

(hello)
(quit)

@#$)(*#)$(·@#)$(*)·@*)#($*#$*(#^&·@#(^
<<<<<<<<<

Here is a run of this program:

 > sbcl
This is SBCL 1.0.27, an implementation of ANSI Common Lisp.
More information about SBCL is available at <http://www.sbcl.org/>.

SBCL is free software, provided as is, with absolutely no warranty.
It is mostly in the public domain; some portions are provided under
BSD-style licenses.  See the CREDITS and COPYING files in the
distribution for more information.
* (load "test.lisp")

"Hello, World!"
 >

No problems encountered.

> The very question as you posed it is meaningless. An illegal program cannot
> be correct or incorrect. It is not a program.

There are more possibilities than you seem to think.



Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Benjamin Tovar
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <87tz48vpca.fsf@the.google.mail.thing>
Pascal Costanza <··@p-cos.net> writes:

> Here is a program with serious syntax errors:
>
> test.lisp >>>>>>>>>
> (defun hello ()
>   (print "Hello, World!"))
>
> (hello)
> (quit)
>
> @#$)(*#)$(·@#)$(*)·@*)#($*#$*(#^&·@#(^
> <<<<<<<<<

[snip]

> No problems encountered.
>
>> The very question as you posed it is meaningless. An illegal program cannot
>> be correct or incorrect. It is not a program.
>
> There are more possibilities than you seem to think.

As if writing a compiler was not hard enough, you want to write a
scavenger? ;-)

-- 
Benjamin Tovar
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <75oi25F18b2lbU1@mid.individual.net>
Benjamin Tovar wrote:
> Pascal Costanza <··@p-cos.net> writes:
> 
>> Here is a program with serious syntax errors:
>>
>> test.lisp >>>>>>>>>
>> (defun hello ()
>>   (print "Hello, World!"))
>>
>> (hello)
>> (quit)
>>
>> @#$)(*#)$(·@#)$(*)·@*)#($*#$*(#^&·@#(^
>> <<<<<<<<<
> 
> [snip]
> 
>> No problems encountered.
>>
>>> The very question as you posed it is meaningless. An illegal program cannot
>>> be correct or incorrect. It is not a program.
>> There are more possibilities than you seem to think.
> 
> As if writing a compiler was not hard enough, you want to write a
> scavenger? ;-)

:)

No, of course not.

But: The usual claims about static typing are (1) that it helps ensure 
correctness, (2) that it helps improve performance and (3) that it 
helps document your software.

(1) is not true. Static typing can be a tool for working toward more 
correctness, if that fits the way you think about a program. But that 
doesn't mean that it comes for free, and that also doesn't mean that 
there is no other way to ensure correctness.

(2) is not true. The state of the art in achieving performance is based 
on dynamic compilation, which does not need to know about static type 
information to work.

(3) is only true in manifest type systems. However, the state of the art 
in static type systems seems to be based on type inference, so (3) is 
actually also not true.

Considering that static typing actually also implies a price to pay in 
terms of expressivity at the base level - there are certain programs you 
may want to express that cannot be statically type-checked - it's not 
clear-cut at all that static typing is a tool that _must_ be used (or 
else, all hell breaks loose, as certain people seem to want to imply).
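For instance, a hypothetical Python sketch of such a program (mine, just to illustrate the point): a function whose result type depends on a runtime value, which no single conventional static type describes exactly.

```python
def parse_token(token):
    # The result is an int for numeric input and a str otherwise,
    # so no single conventional static return type fits exactly.
    # (In an ML-family language you would wrap both cases in a sum type.)
    return int(token) if token.isdigit() else token

print(parse_token("42"))   # 42
print(parse_token("foo"))  # foo
```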


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <2ebe734d-4298-4a89-b1d5-698e5eb48faf@i28g2000prd.googlegroups.com>
> But: The usual claims about static typing are (1) that it helps ensure
> correctness, (2) that it helps improve performance and (3) that it
> helps document your software.
>
> (1) is not true. Static typing can be a tool to work for more
> correctness, if that fits the way you think about a program. But that
> doesn't mean that it comes for free, and that also doesn't mean that
> there is no other way to ensure correctness.
This does not seem to be an objection against (1). "It helps" IMO means
neither "it always helps" nor "only it helps".

> (2) is not true. The state of the art in achieving
> performance is based on dynamic compilation, which does not
> need to know about static type information to work.
Dynamic compilation is orthogonal to typing. It is
possible to generate statically typed code at
runtime. But statically typed languages are faster
in a rather wide domain. In many cases, static typing
allows one to precompute some part of the total calculation, so
it does improve speed. Again,
http://shootout.alioth.debian.org/
If dynamically typed, dynamically compiled languages are
faster, that has to be shown by at least some examples.
From: Dimiter "malkia" Stanev
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <2e319f2a-c575-4525-8c45-dbdaa8c8903a@i28g2000prd.googlegroups.com>
> Dynamic compilation is orthogonal to typing. There is
> a possibility to generate statically typed code at
> runtime. But, statically typed languages are faster
> in a rather wide domain. In many cases, static typing
> allows to precompute some part of total calculation, so
> it does improve speed. Again, http://shootout.alioth.debian.org/
> If dynamically typed dynamically compiled languages are
> faster, it has to be proved by at least some examples.

What about Stalin?

And just because something is faster does not make it the fastest.

No matter how fast such a language is, nothing right now beats
hand-optimized GPU code. So it's not worth spending so much time on
systems that might improve performance when they target a CPU that is
generally a lot slower than specialized processors such as a GPU, a
DSP, or whatever there is.

The right combination, if your imperative is SPEED, is some kind of
language (dynamic, static, whatever) to drive all kinds of tasks, and a
specialized assembler/C-style language for handling specific hard
cases (for example FFT, DCT, image composition, etc.) - be it CUDA,
OpenCL, or even OpenGL/DirectX operations.
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <bdb9230e-7247-4959-8a1a-b6c71f241bfa@c18g2000prh.googlegroups.com>
> What about stalin?
I've never seen Stalin benchmarks. I see it uses some
optimizations which are certainly available for
statically-typed languages too. And, Scheme itself
is not as dynamic as CL. Ironically, it lacks
type declarations and hence it is harder to make
Scheme programs run quickly.

> if your imperative is SPEED
No, it is not. I think compiled CL programs
are fast enough. Python is too slow.
From: Benjamin Tovar
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <87fxfs7sav.fsf@the.google.mail.thing>
Pascal Costanza <··@p-cos.net> writes:

> Considering that static typing actually also implies a price to pay in
> terms of expressivity at the base level - there are certain programs
> you may want to express that cannot be statically type-checked - it's
> not clear-cut at all that static typing is a tool that _must_ be used
> (or else, all hell breaks loose, as certain people seem to want to
> imply).

Yes, precisely. It is a tool, not an all-encompassing way of thinking.


-- 
Benjamin Tovar
From: Tobias C. Rittweiler
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <87vdojg3jl.fsf@freebits.de>
Pascal Costanza <··@p-cos.net> writes:

> But: The usual claims about static typing are [...] (3) that it helps
> documenting your software. [...]
>
> (3) is only true in manifest type system. However, the state of the
> art in static type systems seems to be based on type inference, so (3)
> is actually also not true.

That's quite a stretch. Type inference makes types implicit to the user,
but they're there; ideally, you could ask your development environment
for the types of arbitrary expressions in a convenient way.

I do not know if that's already possible. I also do not know if there
has been research on incremental type inference, though I hope there
has. If not, it should be done. (Pointers welcome.)

  -T.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1ttt9hrieycur.39iac85zz0e2.dlg@40tude.net>
On Tue, 28 Apr 2009 15:59:38 +0200, Pascal Costanza wrote:

> Dmitry A. Kazakov wrote:
>> On Mon, 20 Apr 2009 01:53:29 +0200, Pascal Costanza wrote:
>> 
>>> Vend wrote:
>> 
>>>> I think that in order to write reliable software, early error
>>>> detection is generally preferable, even if in some cases it might
>>>> generate false positives.
>> 
>> Yes, though considering this case, dead code is obviously an error. So it
>> is a true positive, falsely attributed.
> 
> Here is an example of code you _want_ to be dead in order to be correct:
> 
> (define (foo x)
>    (if (= x 0)
>      (error "This should never happen!")
>      (/ 5 x)))

No, correctness of this program is not defined. What is the result (the
behavior) of foo at x=0? Your language allows foo to be called with x=0,
therefore it is the responsibility of the designer of foo (you) to define
its behavior at 0. Since you cannot give any reliable evidence that x is
never 0, you cannot claim that foo is correct (or not). What is the
behavior of foo at 0, so that I could *verify* that it does at 0 what it
has to do?

>> The very question as you posed it is meaningless. An illegal program cannot
>> be correct or incorrect. It is not a program.
> 
> There are more possibilities than you seem to think.

Yes, inconsistent language design, for example. When a non-lisp program is
considered a lisp program, which was, I suppose, your point. Is that so? Is
any combination of characters a lisp program?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <uy6tky4cu.fsf@STRIPCAPStelus.net>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>> Here is an example of code you _want_ to be dead in order to be correct:
>> 
>> (define (foo x)
>>    (if (= x 0)
>>      (error "This should never happen!")
>>      (/ 5 x)))
>
> No, correctness of this program is not defined. What is the result (the
> behavior) of foo at x=0? Your language allows foo to be called with x=0,
> therefore it is the responsibility of the designer of foo (you) to define
> its behavior at 0. Since you cannot give any reliable evidence that x is
> never 0, you cannot claim that foo is correct (or not). What is the
> behavior of foo at 0, so that I could *verify* that it does at 0 what it
> has to do?

Foo is only asserting its preconditions. As long as its preconditions
are met, foo is perfectly correct. It is fair game to raise an
exception if there is a violation of the contract, i.e. an error.

Ada is no different. Consider the equivalent function in Ada. It is
equivalently correct (I argue) or incorrect (by your definition)
exactly as the Lisp function is.

  function Foo(x:Real):Real
  begin
    return 5 / x; -- a runtime exception raised if x = 0
  end;
-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <zml8m3ve0dfz.alh19gnz5l9p$.dlg@40tude.net>
On Tue, 28 Apr 2009 19:18:15 GMT, Ray Blaak wrote:

> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>> Here is an example of code you _want_ to be dead in order to be correct:
>>> 
>>> (define (foo x)
>>>    (if (= x 0)
>>>      (error "This should never happen!")
>>>      (/ 5 x)))
>>
>> No, correctness of this program is not defined. What is the result (the
>> behavior) of foo in x=0? Your language allows foo called with x=0,
>> therefore it is the responsibility of the designer of foo (you), to define
>> its behavior in 0. Since you cannot give any reliable evidence that x is
>> never 0, you cannot claim that foo is correct (or not). What is the
>> behavior of foo in 0, so that I could *verify* if it does in 0 what it has
>> to do?
> 
> Foo is only asserting its preconditions. As long as its preconditions
> are met, the foo is perfectly correct. It is fair game to raise an
> exception if there is a violation of the contract, i.e. an error.

No, it is unfair. I think we should not go into a discussion about
preconditions. I only say that preconditions (statements about program
correctness) cannot be checked at run-time (by the same program). I already
gave a more or less formal proof in one of my replies to Robbert Haarman.

> Ada is no different. Consider the equivalent function in Ada. It is
> equivalently correct (I argue) or incorrect (by your definition)
> exactly as the Lisp function is.
> 
>   function Foo(x:Real):Real
>   begin
>     return 5 / x; -- a runtime exception raised if x = 0
>   end;

No, here it is not required to raise an exception, and it is illegal anyway
because 5 is not Real (Float in Ada).

But that is a minor detail, let me fix your example preserving your
intention:

function Foo (X : Integer) return Integer is
begin
   return 5 / X; -- Constraint_Error may propagate
end Foo;

This program is correct only if its designer defines its behavior near
0.

A. When he says (to me, you, potential users): Foo returns the mathematically
correct result of division of 5 by its argument except the range 5 /
Integer'First .. 5 / Integer'Last in which case Constraint_Error is
propagated, then the program is correct.

B. When he says: Foo returns the mathematically correct result of division
of 5 by its argument (period), then the program is incorrect. Proof: call
it with X=0. q.e.d.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <utz48xyuz.fsf@STRIPCAPStelus.net>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>> Foo is only asserting its preconditions. As long as its preconditions
>> are met, the foo is perfectly correct. It is fair game to raise an
>> exception if there is a violation of the contract, i.e. an error.
>
> No it is unfair. I think we should not go into a discussion about
> preconditions. I only say that preconditions (statements about program
> correctness) cannot be checked at run-time (by the same program). A more or
> less formal proof I already gave in one of my replies to Robert Haarman.

Here I strongly disagree, probably along with everyone else here.

Of course you have to consider preconditions. You can do nothing
else. You cannot do any formal proofs whatsoever without taking them
into account. I submit that correctness is not even defined except in
conjunction with the preconditions.

Inasmuch as a language cannot articulate a precondition statically, one
checks for it at runtime.
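A hypothetical Python sketch of that idea (mine, mirroring the thread's foo): the precondition the type cannot express is simply checked at the call.

```python
def foo(x):
    # Precondition: x != 0. Not expressible in a simple static type,
    # so it is checked at runtime, as in the Lisp and Ada versions above.
    if x == 0:
        raise ValueError("precondition violated: x must be nonzero")
    return 5 / x

print(foo(5))  # 1.0
```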

This is simple, easy to understand, and still allows for formal analysis.

With both the Lisp and Ada fragments as presented above I can reason about
them formally from their implementations.

I think you are being too rigid and pedantic with your definition of
erroneous, to such an extent that I simply find your definition
ineffective and unusable in practical terms.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <wqehs7ljp5uf$.p8hog5gqf17o$.dlg@40tude.net>
On Tue, 28 Apr 2009 21:16:55 GMT, Ray Blaak wrote:

> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>> Foo is only asserting its preconditions. As long as its preconditions
>>> are met, the foo is perfectly correct. It is fair game to raise an
>>> exception if there is a violation of the contract, i.e. an error.
>>
>> No it is unfair. I think we should not go into a discussion about
>> preconditions. I only say that preconditions (statements about program
>> correctness) cannot be checked at run-time (by the same program). A more or
>> less formal proof I already gave in one of my replies to Robert Haarman.
> 
> Here I strongly disagree, probably along with everyone else here.
> 
> Of course you have to consider preconditions. You can do nothing
> else. You cannot do any formal proofs whatsoever without taking them
> into account. I submit that correctness is not even defined except in
> conjunction with the preconditions.
> 
> Inasmuch as a language cannot articulate a precondition statically, one
> checks for it at runtime.
> 
> This is simple, easy to understand, and still allows for formal analysis.
> 
> With both the Lisp and Ada fragments as presented above I can reason about
> them formally from their implementations.
> 
> I think you are being too rigid and pedantic with your definition of
> erroneous, to such an extent that I simply find your definition
> ineffective and unusable in practical terms.

On the contrary, look at the design of safe systems. One consequence of
the fundamental inconsistency of run-time correctness checks is the
well-known principle that the checking has to be performed by an
independent body (= a separate program). If you want to check P1, then
there must be an *independent* P2 that does it. At best it runs on
separate hardware.

Note that it does not matter whether P2 is a compiler, a theorem prover or
a watchdog. The time of the check is only relevant for somebody who has
the responsibility to react to an error. What is important is only that
P2 /= P1.

Another consequence is that the language of pre-/post-conditions,
specifications etc. is not the object language. It may appear in the same
source, but the two are clearly separated.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <uskjpkdpn.fsf@STRIPCAPStelus.net>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>> I think you are being too rigid and pedantic with your definition of
>> erroneous, to such an extent that I simply find your definition
>> ineffective and unusable in practical terms.
>
> On the contrary, look at the design of safe systems. One consequence of
> the fundamental inconsistency of run-time correctness checks is the
> well-known principle that the checking has to be performed by an
> independent body (= a separate program). If you want to check P1, then
> there must be an *independent* P2 that does it. At best it runs on
> separate hardware.

This might be well known to you, but I again think you are too rigidly
applying your notions of types, correctness, etc., to the general case.

I fundamentally disagree that you need an independent body for runtime
correctness checks. Almost all language environments, including both
Lisp and Ada, perform such checks easily and usefully.

If you say, "ah, but those are not *correctness* checks!", then that is
the key issue of contention. For me, correctness is what the programmer
chooses it to mean.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <4xgwubs2ui8s$.1tqj527zfs2jx.dlg@40tude.net>
On Thu, 30 Apr 2009 21:56:41 GMT, Ray Blaak wrote:

> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>> I think you are being too rigid and pedantic with your definition of
>>> erroneous, to such an extent that I simply find your definition
>>> ineffective and unusable in practical terms.
>>
>> On the contrary, look at the design of safe systems. One consequence of
>> the fundamental inconsistency of run-time correctness checks is the
>> well-known principle that the checking has to be performed by an
>> independent body (= a separate program). If you want to check P1, then
>> there must be an *independent* P2 that does it. At best it runs on
>> separate hardware.
> 
> This might be well known to you, but I again think you are too rigidly
> applying your notions of types, correctness, etc., to the general case.
> 
> I fundamentally disagree that you need an independent body for runtime
> correctness checks. Almost all language environments, including both
> Lisp and Ada, do so easily and usefully.
> 
> If you say, "ah, but those are not *correctness* checks!", then that is
> the key issue of contention. For me, correctness is what the programmer
> chooses it to mean.

So the programmer may decide it at will? Well, well, the infamous: "it's
not a bug, it's a feature!"

Where did you get such customers? Are there any free? I envy you. (:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <uocuc4dap.fsf@STRIPCAPStelus.net>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>> If you say, "ah, but those are not *correctness* checks!", then that is
>> the key issue of contention. For me, correctness is what the programmer
>> chooses it to mean.
>
> So the programmer may decide it at will? Well, well, the infamous: "it's
> not a bug, it's a feature!"

Yes, exactly, but that programmer still needs to convince his peers/customers
with persuasive arguments or evidence of testing.

If his audience is not convinced, it's back to the drawing board.

This is actually how people work in practice.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <75p3k6F18osdoU1@mid.individual.net>
Dmitry A. Kazakov wrote:
> On Tue, 28 Apr 2009 15:59:38 +0200, Pascal Costanza wrote:
> 
>> Dmitry A. Kazakov wrote:
>>> On Mon, 20 Apr 2009 01:53:29 +0200, Pascal Costanza wrote:
>>>
>>>> Vend wrote:
>>>>> I think that in order to write reliable software, early error
>>>>> detection is generally preferable, even if in some cases it might
>>>>> generate false positives.
>>> Yes, though considering this case, dead code is obviously an error. So it
>>> is a true positive, falsely attributed.
>> Here is an example of code you _want_ to be dead in order to be correct:
>>
>> (define (foo x)
>>    (if (= x 0)
>>      (error "This should never happen!")
>>      (/ 5 x)))
> 
> No, correctness of this program is not defined. What is the result (the
> behavior) of foo in x=0? Your language allows foo called with x=0,
> therefore it is the responsibility of the designer of foo (you), to define
> its behavior in 0. 

I already defined it. Just read the code.

> Since you cannot give any reliable evidence that x is
> never 0, you cannot claim that foo is correct (or not). What is the
> behavior of foo in 0, so that I could *verify* if it does in 0 what it has
> to do?

There is nothing to verify. It's already in the code what the behavior 
of foo is.

>>> The very question as you posed it is meaningless. An illegal program cannot
>>> be correct or incorrect. It is not a program.
>> There are more possibilities than you seem to think.
> 
> Yes, inconsistent language design, for example. When a non-lisp program is
> considered a lisp program, which was I suppose your point. Is it so? Any
> combination of characters is a lisp program?

That would obviously be nonsense. So I obviously mean something else.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <nvpkxtbgg4h5.1kopj3n1mrw0t.dlg@40tude.net>
On Tue, 28 Apr 2009 21:27:01 +0200, Pascal Costanza wrote:

> Dmitry A. Kazakov wrote:
>> On Tue, 28 Apr 2009 15:59:38 +0200, Pascal Costanza wrote:
>> 
>>> Dmitry A. Kazakov wrote:
>>>> On Mon, 20 Apr 2009 01:53:29 +0200, Pascal Costanza wrote:
>>>>
>>>>> Vend wrote:
>>>>>> I think that in order to write reliable software, early error
>>>>>> detection is generally preferable, even if in some cases it might
>>>>>> generate false positives.
>>>> Yes, though considering this case, dead code is obviously an error. So it
>>>> is a true positive, falsely attributed.
>>> Here is an example of code you _want_ to be dead in order to be correct:
>>>
>>> (define (foo x)
>>>    (if (= x 0)
>>>      (error "This should never happen!")
>>>      (/ 5 x)))
>> 
>> No, correctness of this program is not defined. What is the result (the
>> behavior) of foo in x=0? Your language allows foo called with x=0,
>> therefore it is the responsibility of the designer of foo (you), to define
>> its behavior in 0. 
> 
> I already defined it. Just read the code.

I hope you don't say that the implementation specifies program correctness?

>> Since you cannot give any reliable evidence that x is
>> never 0, you cannot claim that foo is correct (or not). What is the
>> behavior of foo in 0, so that I could *verify* if it does in 0 what it has
>> to do?
> 
> There is nothing to verify. It's already in the code what the behavior 
> of foo is.

That is even more meaningless, since any code behaves as it does. So if
*that* must be the program specification, then *any* program is trivially
correct. Handy isn't it? (:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Alessio Stalla
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <10efafdc-26f8-444f-bbfd-dc1cc65d56c6@y33g2000prg.googlegroups.com>
On Apr 28, 10:28 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Tue, 28 Apr 2009 21:27:01 +0200, Pascal Costanza wrote:
> > Dmitry A. Kazakov wrote:
> >> On Tue, 28 Apr 2009 15:59:38 +0200, Pascal Costanza wrote:
>
> >>> Dmitry A. Kazakov wrote:
> >>>> On Mon, 20 Apr 2009 01:53:29 +0200, Pascal Costanza wrote:
>
> >>>>> Vend wrote:
> >>>>>> I think that in order to write reliable software, early error
> >>>>>> detection is generally preferable, even if in some cases it might
> >>>>>> generate false positives.
> >>>> Yes, though considering this case, dead code is obviously an error. So it
> >>>> is a true positive, falsely attributed.
> >>> Here is an example of code you _want_ to be dead in order to be correct:
>
> >>> (define (foo x)
> >>>    (if (= x 0)
> >>>      (error "This should never happen!")
> >>>      (/ 5 x)))
>
> >> No, correctness of this program is not defined. What is the result (the
> >> behavior) of foo in x=0? Your language allows foo called with x=0,
> >> therefore it is the responsibility of the designer of foo (you), to define
> >> its behavior in 0.
>
> > I already defined it. Just read the code.
>
> I hope you don't say that the implementation specifies program correctness?
>
> >> Since you cannot give any reliable evidence that x is
> >> never 0, you cannot claim that foo is correct (or not). What is the
> >> behavior of foo in 0, so that I could *verify* if it does in 0 what it has
> >> to do?
>
> > There is nothing to verify. It's already in the code what the behavior
> > of foo is.
>
> That is even more meaningless, since any code behaves as it does. So if
> *that* must be the program specification, then *any* program is trivially
> correct. Handy isn't it? (:-))

Hmm, I think the point is that both the Ada and the Scheme programs
will "fail" (or better, behave equivalently) when their input is 0,
regardless of the fact that Ada is statically type checked and
Scheme is not. The case input == 0 is not statically checkable by Ada,
and thus error detection is postponed until runtime. Stretching it a
bit, this is a type error (since the input should be a member of the
type "all integers except 0" in order for the program to run
"correctly"), and yet you accept having it checked only at runtime,
like Scheme does. More accurately, type errors have no magic quality
that distinguishes them from other kinds of errors, except that they
can be easily checked statically (*). So imho it is a very strong
statement to claim that in order to be able to run a program, type
errors should always be checked statically. I have no problem
accepting the weaker statement that static type analysis is a useful
tool to detect errors in a program, just like e.g. unit testing
technologies are.

(*) that is, type errors in most languages' type systems. Common Lisp,
for example, has a type system that can express types like "integers
between X and Y". Such a type is not at all easy to check statically.
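To make the footnote concrete, here is a rough sketch, written in Scheme
rather than Common Lisp's actual deftype/check-type machinery (the names
small-positive? and scale are mine, purely for illustration), of what
checking such a range type at run time amounts to:

```scheme
;; "Integers between 1 and 100" expressed as a runtime predicate --
;; the kind of type Common Lisp can declare, but which is hard to
;; verify statically.
(define (small-positive? x)
  (and (integer? x) (<= 1 x 100)))

(define (scale x)
  ;; Enforce the range type at run time, much as CHECK-TYPE would.
  (if (small-positive? x)
      (/ 500 x)
      (error "scale: argument not an integer between 1 and 100" x)))
```

Here (scale 5) evaluates to 100, while (scale 0) signals an error only
when it actually runs; no static checker is involved.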

Just my €.02
Alessio Stalla
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <uprewxykb.fsf@STRIPCAPStelus.net>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>> No, correctness of this program is not defined. What is the result (the
>>> behavior) of foo in x=0? Your language allows foo called with x=0,
>>> therefore it is the responsibility of the designer of foo (you), to define
>>> its behavior in 0. 
>> 
>> I already defined it. Just read the code.
>
> I hope you don't say that the implementation specifies program correctness?

Yes it does. In the absence of anything else, of course it does. It
certainly specifies the behaviour around x=0.

As much as possible one tries to have "executable specifications". That
way there is no need to invent an alternative specification notation.
You use the implementation language itself as a specification language.

Basically, the code "does what it says it does". To understand the
code's consequences, you can apply formal proofs to its behaviour and
build up logical characteristics that then allow you to reason about
foo's effects when used by callers.
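As a sketch of that kind of reasoning (the property and its name are
mine, not part of the posted code), one can state a fact about foo and
check it directly against the implementation:

```scheme
;; The implementation under discussion, as posted:
(define (foo x)
  (if (= x 0)
      (error "This should never happen!")
      (/ 5 x)))

;; A property read straight off the code: for any non-zero x,
;; x * (foo x) equals 5.  Verifying it is just running the code.
(define (property-holds? x)
  (= (* x (foo x)) 5))
```

Both (property-holds? 7) and (property-holds? -3) return #t; callers can
build on such derived facts to reason about foo's effects.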

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090429072801.GL3862@gildor.inglorion.net>
On Tue, Apr 28, 2009 at 09:23:19PM +0000, Ray Blaak wrote:
> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
> >>> No, correctness of this program is not defined. What is the result (the
> >>> behavior) of foo in x=0? Your language allows foo called with x=0,
> >>> therefore it is the responsibility of the designer of foo (you), to define
> >>> its behavior in 0. 
> >> 
> >> I already defined it. Just read the code.
> >
> > I hope you don't say that the implementation specifies program correctness?
> 
> Yes it does.
>
> <snip>
> 
> Basically, the code "does what it says it does".

In that case, "correctness" becomes a meaningless term. Every program 
does what it says it does.

Hence my earlier remark about the importance of defining correctness. In 
my view, programs are written to perform certain tasks. Writing a 
program starts with thinking about what you want the program to do. The 
program is correct if it behaves as desired.

Taking Pascal's division function as an example:

(define (foo x)
  (if (= x 0)
    (error "This should never happen!")
    (/ 5 x)))

Is this program correct? I would say it is impossible to tell, because 
it has not been stated what it is supposed to do. You first have to 
define the desired behavior before you can assess if the program 
exhibits that behavior or not.

If the desired behavior is "given two numbers, return their 
product", then foo is not a correct implementation of that behavior.

If the desired behavior is "given a number other than 0, return 5 
divided by that number", the program may or may not be correct. If 
I pass 23423580979802372179852798237078523 as a number, what is the 
program supposed to return, and what does it actually return?

Now, still using the "given a number other than 0" specification, 
suppose the function actually returns an acceptable value for every 
number other than 0 that we can pass in. In that case, the program is 
correct. It may throw an error if we pass in 0, and it may throw an 
error if we pass it "Johnny", but this does not violate the 
specification.

So, the way I see it, answering the question "Is this program correct" 
requires answering two sub questions:

1. Does it do everything the specification requires?
2. Does it do anything the specification forbids?

If the answer to the first question is "yes" and the answer to the 
second question is "no", then the program is correct. Other answers mean 
the program is incorrect. If there is no specification, it is impossible 
to say if the program is correct, because we don't know what it is 
supposed to do.

In short, you must define what constitutes "correct" before you can 
determine if a given program is correct or not.
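As a sketch of those two sub-questions applied to foo (the helper name
requires-ok? is mine, and the specification is sampled rather than
proved):

```scheme
;; Pascal's function, as posted:
(define (foo x)
  (if (= x 0)
      (error "This should never happen!")
      (/ 5 x)))

;; Sub-question 1, sampled: for every non-zero input in SAMPLES,
;; foo must return 5 divided by that input.  Inputs of 0 lie outside
;; the specification's domain and are skipped.
(define (requires-ok? samples)
  (cond ((null? samples) #t)
        ((= (car samples) 0)
         (requires-ok? (cdr samples)))
        ((= (foo (car samples)) (/ 5 (car samples)))
         (requires-ok? (cdr samples)))
        (else #f)))
```

Here (requires-ok? '(1 2 5 -3)) returns #t. Sub-question 2 (does foo do
anything the specification forbids?) would additionally need an
exception handler, e.g. R7RS guard, around the x = 0 case.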

Regards,

Bob

-- 
In a free market, people get the deals they deserve.


From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ud4avoq8m.fsf@STRIPCAPStelus.net>
Robbert Haarman <··············@inglorion.net> writes:
>> Basically, the code "does what it says it does".
>
> In that case, "correctness" becomes a meaningless term. Every program 
> does what it says it does.

Yes in the sense there is no external specification to prove that the
implementation conforms to it.

But no, in the sense that you can often still reason usefully about
the consequences of an implementation.

Consider a specification that had enough expressive power to say that
"foo returns 5/x, except when x=0".

Now the implementations shown above look pretty close to that, directly.

So why waste time and effort? Just use the code when you can.

Ultimately, even *with* the presence of an externally verifiable
specification, you have to match things to the user's expectations, and
that is an informal exercise. All a specification does is move the
problem to "is the specification correct?".

Note also, the above has little to do with static vs dynamic typing. The
notion of correctness is independent of when you can verify it.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <75thtgF19upleU1@mid.individual.net>
Robbert Haarman wrote:
> On Tue, Apr 28, 2009 at 09:23:19PM +0000, Ray Blaak wrote:
>> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>>>> No, correctness of this program is not defined. What is the result (the
>>>>> behavior) of foo in x=0? Your language allows foo called with x=0,
>>>>> therefore it is the responsibility of the designer of foo (you), to define
>>>>> its behavior in 0. 
>>>> I already defined it. Just read the code.
>>> I hope you don't say that the implementation specifies program correctness?
>> Yes it does.
>>
>> <snip>
>>
>> Basically, the code "does what it says it does".
> 
> In that case, "correctness" becomes a meaningless term. Every program 
> does what it says it does.

Correctness is a meaningless term anyway. You can have two different 
formal specifications for the same system. If they don't agree, that's 
all you know. Either one of the formal specifications does not fulfill 
user expectations, or both. Even if both formal specifications agree 
with each other, it could be that they both do not fulfill user 
expectations. If only one does not fulfill the user expectations, you 
also don't know upfront which one. It could just be that the programmer 
got right what the specification writer screwed up.

The idea that program correctness somehow means that the implementation 
conforms to a (typically non-executable) specification stems from the 
idea that the (formal) specification is easier to understand than the 
(formal) source code of the implementation, and that therefore the 
formal specification is somehow automagically 'correct' by default. 
However, if the programming language allows you to express solutions in 
such a way that it is 'obvious' what it does, the other specification 
may just be redundant.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <75pgdjF199nh9U1@mid.individual.net>
Dmitry A. Kazakov wrote:
> On Tue, 28 Apr 2009 21:27:01 +0200, Pascal Costanza wrote:
> 
>> Dmitry A. Kazakov wrote:
>>> On Tue, 28 Apr 2009 15:59:38 +0200, Pascal Costanza wrote:
>>>
>>>> Dmitry A. Kazakov wrote:
>>>>> On Mon, 20 Apr 2009 01:53:29 +0200, Pascal Costanza wrote:
>>>>>
>>>>>> Vend wrote:
>>>>>>> I think that in order to write reliable software, early error
>>>>>>> detection is generally preferable, even if in some cases it might
>>>>>>> generate false positives.
>>>>> Yes, though considering this case, dead code is obviously an error. So it
>>>>> is a true positive, falsely attributed.
>>>> Here is an example of code you _want_ to be dead in order to be correct:
>>>>
>>>> (define (foo x)
>>>>    (if (= x 0)
>>>>      (error "This should never happen!")
>>>>      (/ 5 x)))
>>> No, correctness of this program is not defined. What is the result (the
>>> behavior) of foo in x=0? Your language allows foo called with x=0,
>>> therefore it is the responsibility of the designer of foo (you), to define
>>> its behavior in 0. 
>> I already defined it. Just read the code.
> 
> I hope you don't say that the implementation specifies program correctness?

There is no such thing as program correctness. The best you can hope for 
is that you have two (or more) different formal descriptions of the same 
system, one of which is executable, and the other describes the system 
from some other angle and is not necessarily executable. You can then 
try to prove that the two formal descriptions somehow coincide.

This doesn't help you with ensuring program correctness in the sense 
that you don't know whether any of the two actually correctly represents 
user expectations. User expectations are by definition informal, and 
there is no formal way to get from an informal description to a formal 
description. The translations from informal user expectations to any of 
the two (or more) formal descriptions can contain mistakes and lead to 
incorrect descriptions. This is especially nasty if certain 
potential problems are inherent in the problem domain and show up in any 
of the possible formal descriptions.

Some people prefer a programming language which allows you to describe 
the solution to a problem as close as possible in terms of the actual 
problem description. That's the essence of declarative programming (and 
there are various ways to achieve declarative programming, be it by way 
of functional, logic, constraint-based, DSL-based and/or a mixture of 
those styles). If you can get close to that, you can ideally omit having 
to describe the solution in several different ways and having to prove 
that they actually coincide.
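A toy illustration of that closeness (my example, not Costanza's): the
problem statement "the sum of the squares of the odd numbers in a list"
reads off almost verbatim as Scheme:

```scheme
;; A portable FILTER, for Schemes without SRFI-1:
(define (filter pred xs)
  (cond ((null? xs) '())
        ((pred (car xs)) (cons (car xs) (filter pred (cdr xs))))
        (else (filter pred (cdr xs)))))

;; "The sum of the squares of the odd numbers in a list."
(define (sum-of-squares-of-odds xs)
  (apply + (map (lambda (x) (* x x))
                (filter odd? xs))))
```

(sum-of-squares-of-odds '(1 2 3 4 5)) returns 35; the code is close
enough to the problem statement that a second, non-executable
description of it would add little.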

This may also not be a bullet-proof solution, but since there is no 
bullet-proof solution anywhere in sight, not even remotely, I don't see 
why there shouldn't be several different approaches being taken by 
several different programming communities.

>>> Since you cannot give any reliable evidence that x is
>>> never 0, you cannot claim that foo is correct (or not). What is the
>>> behavior of foo in 0, so that I could *verify* if it does in 0 what it has
>>> to do?
>> There is nothing to verify. It's already in the code what the behavior 
>> of foo is.
> 
> That is even more meaningless, since any code behaves as it does. So if
> *that* must be the program specification, then *any* program is trivially
> correct. Handy isn't it? (:-))

You have a too narrow view on 'correctness'.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1cmqntw4n3eci.1k6u89cy8lro1$.dlg@40tude.net>
On Wed, 29 Apr 2009 01:05:23 +0200, Pascal Costanza wrote:

> Dmitry A. Kazakov wrote:

>> I hope you don't say that the implementation specifies program correctness?
> 
> There is no such thing as program correctness.

This is surely wrong. But let's pretend it true. In that case talking about
errors becomes meaningless.

[...]

You are confusing several things:

1. formal definition of correctness
2. decidability of 1 for any given program
3. computability of 1

That 3 is safely false has no influence on 2 or 1. There is a whole world
beyond computers!

>>>> Since you cannot give any reliable evidence that x is
>>>> never 0, you cannot claim that foo is correct (or not). What is the
>>>> behavior of foo in 0, so that I could *verify* if it does in 0 what it has
>>>> to do?
>>> There is nothing to verify. It's already in the code what the behavior 
>>> of foo is.
>> 
>> That is even more meaningless, since any code behaves as it does. So if
>> *that* must be the program specification, then *any* program is trivially
>> correct. Handy isn't it? (:-))
> 
> You have a too narrow view on 'correctness'.

At least I don't claim it nonexistent. The core reason your views led
you to dismiss the notion of correctness is a desire, conscious or not,
to check for correctness at run time. This is absolutely hopeless, because

1. correctness is stated for the problem space in some problem-space
language (formal or not). The power of this language is typically far
beyond Turing completeness. This is why in most cases correctness cannot
be checked or even stated using a programming language.

2. self-correctness is inconsistent.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Cesar Rabak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <gtd6de$r5$1@aioe.org>
Dmitry A. Kazakov escreveu:
> On Wed, 29 Apr 2009 01:05:23 +0200, Pascal Costanza wrote:
> 
>> Dmitry A. Kazakov wrote:
> 
>>> I hope you don't say that the implementation specifies program correctness?
>> There is no such thing as program correctness.
> 
> This is surely wrong. But let's pretend it true. In that case talking about
> errors becomes meaningless.
> 
> [...]
> 
> You are confusing several things:
> 
> 1. formal definition of correctness
> 2. decidability of 1 for any given program
> 3. computability of 1
> 
> That 3 is safely false has no influence on 2 or 1. There is a whole world
> beyond computers!

Given we agree on 2 and 3, what is your formal definition of "program 
correctness" then?
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1rjm98p0ca5ne.jse69uqs850g.dlg@40tude.net>
On Thu, 30 Apr 2009 18:49:21 -0300, Cesar Rabak wrote:

> Dmitry A. Kazakov escreveu:
>> On Wed, 29 Apr 2009 01:05:23 +0200, Pascal Costanza wrote:
>> 
>>> Dmitry A. Kazakov wrote:
>> 
>>>> I hope you don't say that the implementation specifies program correctness?
>>> There is no such thing as program correctness.
>> 
>> This is surely wrong. But let's pretend it true. In that case talking about
>> errors becomes meaningless.
>> 
>> [...]
>> 
>> You are confusing several things:
>> 
>> 1. formal definition of correctness
>> 2. decidability of 1 for any given program
>> 3. computability of 1
>> 
>> That 3 is safely false has no influence on 2 or 1. There is a whole world
>> beyond computers!
> 
> Given we agree on 2 and 3, what is your formal definition of "program 
> correctness" then?

Program is correct when under certain conditions it exposes specified
behavior.

(Certain conditions = hardware, valid inputs, no nuclear explosions around,
stable laws of Nature and logic etc.)

Error/bug is when the behavior does not correspond to the specification. A
program that has at least one error is incorrect.

Disagree?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Cesar Rabak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <gtf33g$lb8$1@aioe.org>
Dmitry A. Kazakov escreveu:
> On Thu, 30 Apr 2009 18:49:21 -0300, Cesar Rabak wrote:
> 
>> Dmitry A. Kazakov escreveu:
>>> On Wed, 29 Apr 2009 01:05:23 +0200, Pascal Costanza wrote:
>>>
>>>> Dmitry A. Kazakov wrote:
>>>>> I hope you don't say that the implementation specifies program correctness?
>>>> There is no such thing as program correctness.
>>> This is surely wrong. But let's pretend it true. In that case talking about
>>> errors becomes meaningless.
>>>
>>> [...]
>>>
>>> You are confusing several things:
>>>
>>> 1. formal definition of correctness
>>> 2. decidability of 1 for any given program
>>> 3. computability of 1
>>>
>>> That 3 is safely false has no influence on 2 or 1. There is a whole world
>>> beyond computers!
>> Given we agree on 2 and 3, what is your formal definition of "program 
>> correctness" then?
> 
> Program is correct when under certain conditions it exposes specified
> behavior.
> 
> (Certain conditions = hardware, valid inputs, no nuclear explosions around,
> stable laws of Nature and logic etc.)
> 
> Error/bug is when the behavior does not correspond to the specification. A
> program that has at least one error is incorrect.
> 
> Disagree?
> 
Your statements are not formal enough. In fact, the use of semantically 
open concepts in the second paragraph shows that you're struggling to 
stretch ideas you are not well enough acquainted with to discuss.

What is your *formal* definition?
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <cr10ef88wrik.u2wda5m8pfzw$.dlg@40tude.net>
On Fri, 01 May 2009 12:05:06 -0300, Cesar Rabak wrote:

> Dmitry A. Kazakov escreveu:
>> On Thu, 30 Apr 2009 18:49:21 -0300, Cesar Rabak wrote:
>> 
>>> Dmitry A. Kazakov escreveu:
>>>> On Wed, 29 Apr 2009 01:05:23 +0200, Pascal Costanza wrote:
>>>>
>>>>> Dmitry A. Kazakov wrote:
>>>>>> I hope you don't say that the implementation specifies program correctness?
>>>>> There is no such thing as program correctness.
>>>> This is surely wrong. But let's pretend it true. In that case talking about
>>>> errors becomes meaningless.
>>>>
>>>> [...]
>>>>
>>>> You are confusing several things:
>>>>
>>>> 1. formal definition of correctness
>>>> 2. decidability of 1 for any given program
>>>> 3. computability of 1
>>>>
>>>> That 3 is safely false has no influence on 2 or 1. There is a whole world
>>>> beyond computers!
>>> Given we agree on 2 and 3, what is your formal definition of "program 
>>> correctness" then?
>> 
>> Program is correct when under certain conditions it exposes specified
>> behavior.
>> 
>> (Certain conditions = hardware, valid inputs, no nuclear explosions around,
>> stable laws of Nature and logic etc.)
>> 
>> Error/bug is when the behavior does not correspond to the specification. A
>> program that has at least one error is incorrect.
>> 
>> Disagree?
>> 
> Your statements are not formal enough

Not enough for what? I don't want to go into define "define" games. The
definition is precise enough to apply to a concrete problem space. It
cannot be more formal than the available formalisms there.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Cesar Rabak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <gtgbr6$qcc$1@aioe.org>
Dmitry A. Kazakov escreveu:
> On Fri, 01 May 2009 12:05:06 -0300, Cesar Rabak wrote:
> 
>> Dmitry A. Kazakov escreveu:
>>> On Thu, 30 Apr 2009 18:49:21 -0300, Cesar Rabak wrote:
>>>
>>>> Dmitry A. Kazakov escreveu:
>>>>> On Wed, 29 Apr 2009 01:05:23 +0200, Pascal Costanza wrote:
>>>>>
>>>>>> Dmitry A. Kazakov wrote:
>>>>>>> I hope you don't say that the implementation specifies program correctness?
>>>>>> There is no such thing as program correctness.
>>>>> This is surely wrong. But let's pretend it true. In that case talking about
>>>>> errors becomes meaningless.
>>>>>
>>>>> [...]
>>>>>
>>>>> You are confusing several things:
>>>>>
>>>>> 1. formal definition of correctness
>>>>> 2. decidability of 1 for any given program
>>>>> 3. computability of 1
>>>>>
>>>>> That 3 is safely false has no influence on 2 or 1. There is a whole world
>>>>> beyond computers!
>>>> Given we agree on 2 and 3, what is your formal definition of "program 
>>>> correctness" then?
>>> Program is correct when under certain conditions it exposes specified
>>> behavior.
>>>
>>> (Certain conditions = hardware, valid inputs, no nuclear explosions around,
>>> stable laws of Nature and logic etc.)
>>>
>>> Error/bug is when the behavior does not correspond to the specification. A
>>> program that has at least one error is incorrect.
>>>
>>> Disagree?
>>>
>> Your statements are not formal enough
> 
> Not enough for what? I don't want to go into define "define" games. The
> definition is precise enough to apply it for a concrete problem space. It
> cannot be more formal than the available formalisms there.
> 
You obviously don't have a clue what a formal proof is, do you?

The small program that you have now snipped was proposed by you. First we 
agreed that, as it stands, there was no way of knowing the program was 
correct.

Then you claimed that the case was decidable, but you're not able to 
prove it. Your only defense:

>>> >> (I don't know why are you asking this, because incorrectness against
>>> >> assumed specification ("to print value of X") is evident in this case.)
>> > 
>> > Not it is not and until now you've not shown us (much weaker than prove)...
> 
> Sorry, if that was not evident, then I cannot help you further.

Which is a way of being cocky and arrogant, but still brings no 
substance.

Besides _mentioning_ some terms of Computer Science without articulating 
them, could you please show us why you affirm "it is evident" by proving 
it formally?
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <760us3F19heb4U3@mid.individual.net>
Dmitry A. Kazakov wrote:
> Program is correct when under certain conditions it exposes specified
> behavior.
> 
> (Certain conditions = hardware, valid inputs, no nuclear explosions around,
> stable laws of Nature and logic etc.)

"under certain conditions": Whoa, we're getting very fuzzy here. ;)

> Error/bug is when the behavior does not correspond to the specification. A
> program that has at least one error is incorrect.

Unless the specification is incorrect, and the program is actually correct.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <484o7kc03kq9$.g20guqvpfi9f$.dlg@40tude.net>
On Fri, 01 May 2009 20:54:59 +0200, Pascal Costanza wrote:

> Dmitry A. Kazakov wrote:

>> Program is correct when under certain conditions it exposes specified
>> behavior.
>> 
>> (Certain conditions = hardware, valid inputs, no nuclear explosions around,
>> stable laws of Nature and logic etc.)
> 
> "under certain conditions": Whoa, we're getting very fuzzy here. ;)

Right, since Hilbert's program failed, that is the world where we are
living...

>> Error/bug is when the behavior does not correspond to the specification. A
>> program that has at least one error is incorrect.
> 
> Unless the specification is incorrect, and the program is actually correct.

Yep, that falls under "certain conditions". One of them is that the
specification is what you actually want, and that what you want is what
you really need, and ... so on.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <762k8vF1aqlidU1@mid.individual.net>
Dmitry A. Kazakov wrote:
> On Fri, 01 May 2009 20:54:59 +0200, Pascal Costanza wrote:
> 
>> Dmitry A. Kazakov wrote:
> 
>>> Program is correct when under certain conditions it exposes specified
>>> behavior.
>>>
>>> (Certain conditions = hardware, valid inputs, no nuclear explosions around,
>>> stable laws of Nature and logic etc.)
>> "under certain conditions": Whoa, we're getting very fuzzy here. ;)
> 
> Right, since Hilbert's program failed, that is the world where we are
> living...

Yep.

>>> Error/bug is when the behavior does not correspond to the specification. A
>>> program that has at least one error is incorrect.
>> Unless the specification is incorrect, and the program is actually correct.
> 
> Yep, that falls under "certain conditions". One of them is that the
> specification is what you actually want, and that what you want is what
> you really need, and ... so on.

These are very strong assumptions, and my position is that they are 
indeed too strong.

The real problem in developing software is not how to cover the gap 
between (formal) specifications and (formal) programs, but how to cover 
the gap between (informal) user expectations and (formal) 
specifications. Depending on your development style, it may be redundant 
to have an intermediate non-executable formal specification, because you 
might as well try to cover the gap between user expectations and the 
program without having an unnecessary middle layer in between.

Some high-level languages are designed with that goal in mind.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090502103431.GP3862@gildor.inglorion.net>
On Sat, May 02, 2009 at 12:06:23PM +0200, Pascal Costanza wrote:
>
> The real problem in developing software is not how to cover the gap  
> between (formal) specifications and (formal) programs, but how to cover  
> the gap between (informal) user expectations and (formal)  
> specifications.

That is my experience, too.

> Depending on your development style, it may be redundant  
> to have an intermediate non-executable formal specification, because you  
> might as well try to cover the gap between user expectations and the  
> program without having an unnecessary middle layer in between.

That is true up to a certain level. However, having an actual 
specification allows you to formalize what is to be delivered. This may 
or may not be what the customer actually wants, but it at least 
simplifies discussions about whether or not the supplier delivered what 
was promised. If what was delivered conforms with the specification, the 
supplier has done its job. If not, then the supplier hasn't done its 
job. Without the specification, this is much harder to assess, because 
each side will have their own assumptions. A specification is a way of 
making these assumptions explicit and having both parties agree on them.

Regards,

Bob

-- 
An astronaut in space in 1970 was asked by a reporter, "How do you feel?"

"How would you feel," the astronaut replied, "if you were stuck here, on
top of 20,000 parts each one supplied by the lowest engineering bidder?"


From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <763difF15k0fpU1@mid.individual.net>
Robbert Haarman wrote:
> On Sat, May 02, 2009 at 12:06:23PM +0200, Pascal Costanza wrote:
>> The real problem in developing software is not how to cover the gap  
>> between (formal) specifications and (formal) programs, but how to cover  
>> the gap between (informal) user expectations and (formal)  
>> specifications.
> 
> That is my experience, too.
> 
>> Depending on your development style, it may be redundant  
>> to have an intermediate non-executable formal specification, because you  
>> might as well try to cover the gap between user expectations and the  
>> program without having an unnecessary middle layer in between.
> 
> That is true up to a certain level. However, having an actual 
> specification allows you to formalize what is to be delivered. This may 
> or may not actually be what the customer actually wants, but it at least 
> simplifies discussions about whether or not the supplier delivered what 
> was promised. If what was delivered conforms with the specification, the 
> supplier has done its job. If not, then the supplier hasn't done its 
> job. Without the specification, this is much harder to assess, because 
> each side will have their own assumptions. A specification is a way of 
> making these assumptions explicit and having both parties agree on them.

The goal should be to make the customers happy, not to show them that 
they suck at understanding specifications.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Vend
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <bb555e31-0059-4acd-a873-6dc48d8ccfe8@r34g2000vbi.googlegroups.com>
On 2 Mag, 12:06, Pascal Costanza <····@p-cos.net> wrote:
> Dmitry A. Kazakov wrote:
> > On Fri, 01 May 2009 20:54:59 +0200, Pascal Costanza wrote:
>
> >> Dmitry A. Kazakov wrote:
>
> >>> Program is correct when under certain conditions it exposes specified
> >>> behavior.
>
> >>> (Certain conditions = hardware, valid inputs, no nuclear explosions around,
> >>> stable laws of Nature and logic etc.)
> >> "under certain conditions": Whoa, we're getting very fuzzy here. ;)
>
> > Right, since Hilbert's program failed, that is the world where we are
> > living...
>
> Yep.
>
> >>> Error/bug is when the behavior does not correspond to the specification. A
> >>> program that has at least one error is incorrect.
> >> Unless the specification is incorrect, and the program is actually correct.
>
> > Yep, that falls under "certain conditions". One of them that the
> > specification is one of what you actually want, and that you want is what
> > you really do, and ... so on.
>
> These are very strong assumptions, and my position is that they are
> indeed too strong.
>
> The real problem in developing software is not how to cover the gap
> between (formal) specifications and (formal) programs,

I don't think there are many program specifications that say that the
program can crash, lose data or cause security breaches, yet programs
do that all the time.

> but how to cover
> the gap between (informal) user expectations and (formal)
> specifications. Depending on your development style, it may be redundant
> to have an intermediate non-executable formal specification, because you
> might as well try to cover the gap between user expectations and the
> program without having an unnecessary middle layer in between.

Unless the designer, programmer and user of the program are the same
person, I think this is quite difficult, and it becomes practically
impossible as the size of the development team and the user base
increases.

>
> Some high-level languages are designed with that goal in mind.
>
> Pascal
>
> --
> ELS'09:http://www.european-lisp-symposium.org/
> My website:http://p-cos.net
> Common Lisp Document Repository:http://cdr.eurolisp.org
> Closer to MOP & ContextL:http://common-lisp.net/project/closer/
From: Scott Burson
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <8d82d062-a3a8-4b2c-813b-cbda2d9fb378@w35g2000prg.googlegroups.com>
On May 3, 7:39 am, Vend <······@virgilio.it> wrote:
> On 2 Mag, 12:06, Pascal Costanza <····@p-cos.net> wrote:
> > The real problem in developing software is not how to cover the gap
> > between (formal) specifications and (formal) programs,
>
> I don't think there are many program specifications that say that the
> program can crash, lose data or cause security breaches, yet programs
> do that all the time.

Yes, excellent point.  While greatly narrowing the gap between formal
specifications and programs would not solve all the problems of
software development, it would be a huge help nonetheless.

> > but how to cover
> > the gap between (informal) user expectations and (formal)
> > specifications.

I would add, in reply to Pascal, that it depends on the kind of
program being written.  For applications, what you say can be true a
lot of the time; for systems work I think it is less true.  (Consider
a concurrent GC: far easier to specify than to write.)

Also, a reasoning system adequate to assist with the implementation of
formal specifications, and proving that the implementation satisfies
the spec, would also be very helpful in detecting inconsistencies in
the spec itself.  This would accelerate the process of getting the
user to figure out what they really want.

-- Scott
From: ·············@gmx.at
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <34ead5ff-ebe2-47ff-9afb-6766d7bbc787@e20g2000vbc.googlegroups.com>
On 3 Mai, 16:39, Vend <······@virgilio.it> wrote:
> On 2 Mag, 12:06, Pascal Costanza <····@p-cos.net> wrote:
>
> > Dmitry A. Kazakov wrote:
> > > On Fri, 01 May 2009 20:54:59 +0200, Pascal Costanza wrote:
>
> > >> Dmitry A. Kazakov wrote:
>
> > >>> Program is correct when under certain conditions it exposes specified
> > >>> behavior.
>
> > >>> (Certain conditions = hardware, valid inputs, no nuclear explosions around,
> > >>> stable laws of Nature and logic etc.)
> > >> "under certain conditions": Whoa, we're getting very fuzzy here. ;)
>
> > > Right, since Hilbert's program failed, that is the world where we are
> > > living...
>
> > Yep.
>
> > >>> Error/bug is when the behavior does not correspond to the specification. A
> > >>> program that has at least one error is incorrect.
> > >> Unless the specification is incorrect, and the program is actually correct.
>
> > > Yep, that falls under "certain conditions". One of them that the
> > > specification is one of what you actually want, and that you want is what
> > > you really do, and ... so on.
>
> > These are very strong assumptions, and my position is that they are
> > indeed too strong.
>
> > The real problem in developing software is not how to cover the gap
> > between (formal) specifications and (formal) programs,
>
> I don't think there are many program specifications that say that the
> program can crash, lose data or cause security breaches, yet programs
> do that all the time.

That means that every available means must be used to find bugs. It
is just like in the world of hardware.

If something can go wrong it will go wrong. I heard about a valve
in an airplane which was meant to be used in one direction. An arrow
showed the correct position of the valve. One day the valve was
installed the wrong way, with fatal consequences. The solution was a
valve which could not be used in the wrong direction (it had different
windings at both ends).

In the real (hardware) world the possibilities for combining elements
are reduced on purpose to help eliminate bugs. When airplanes are
built, nobody would complain that he is restricted because it is
impossible to use a valve in the wrong (dangerous) way.

In the world of software, some people start to complain when it is
not possible to use something the wrong way...

Greetings Thomas Mertes

Seed7 Homepage:  http://seed7.sourceforge.net
Seed7 - The extensible programming language: User defined statements
and operators, abstract data types, templates without special
syntax, OO with interfaces and multiple dispatch, statically typed,
interpreted or compiled, portable, runs under linux/unix/windows.
From: Ray Dillinger
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49fb2443$0$95515$742ec2ed@news.sonic.net>
Dmitry A. Kazakov wrote:

> You are confusing several things:
> 
> 1. formal definition of correctness
> 2. decidability of 1 for any given program
> 3. computability of 1

I do not believe in a "formal" definition of correctness.  The definition
of correctness is that a program conforms to user expectations and meets 
the user's needs. 

User expectations and user needs are informally held, and cannot 
even be formally stated.  Thus there can be no formal way to define 
meeting them.

                                Bear
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <6az1862bmrcu$.m0sgso40n8af$.dlg@40tude.net>
On Fri, 01 May 2009 09:30:02 -0700, Ray Dillinger wrote:

> Dmitry A. Kazakov wrote:
> 
>> You are confusing several things:
>> 
>> 1. formal definition of correctness
>> 2. decidability of 1 for any given program
>> 3. computability of 1
> 
> I do not believe in a "formal" definition of correctness.  The definition
> of correctness is that a program conforms to user expectations and meets 
> the user's needs. 
> 
> User expectations and user needs are informally held, and cannot 
> even be formally stated.  Thus there can be no formal way to define 
> meeting them.

They can be formally stated if the problem domain is formalized. Consider
numerical problems as an example of such a domain.

Even if the specifications are informal or implicit, that alone does not
imply absence of correctness. It is hard to present a case where the
specifications could not be formalized at all.

Considering the outcry that happened here when I suggested that some
"dynamic" people were actually working and thinking in an untyped way, it
would be interesting to see their reaction to your statement that errors
do not exist.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Ray Dillinger
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49fb8cfc$0$95498$742ec2ed@news.sonic.net>
Dmitry A. Kazakov wrote:

> On Fri, 01 May 2009 09:30:02 -0700, Ray Dillinger wrote:

>> User expectations and user needs are informally held, and cannot
>> even be formally stated.  Thus there can be no formal way to define
>> meeting them.
 
> They can be formally stated if the problem domain is formalized. Consider
> numerical problems as an example of such domain.

Certainly.  A formal statement of user expectations and user needs
in such a domain can be done in Fortran, Pascal, APL, or any of a 
number of other formal languages.  If you hand me a fully formal 
statement of user expectations and needs, I will simply compile 
it into executable code and hand you the "program".  This will 
give you an instant proof that the program does what the 
specification says, given an assumption that the compiler's not 
buggy; but it will in no wise prove that the specification itself 
was correct.

It is normal management thinking to seek a method of making formal 
program specifications that can be done with less probability of 
error than programming.  It is sound engineering thinking that if 
such a method is found to exist, then many engineers will rapidly 
adopt it as a programming formalism. Then, quod erat faciendum, it 
won't be simpler or less error-prone than programming anymore.

Formalisms less expressive than programming languages are either
insufficiently expressive (ie, it is not possible to fully state 
user needs and expectations with them) or insufficiently rigorous 
(ie, there is no formal method for testing whether a program matches 
a specification in that specification formalism, thus no way to 
construct a formal proof of correctness).  

In order to *formally* state user expectations and needs, you 
need a specification as detailed as your implementation.
Developing the specification must cope with the same issues 
of managing complexity, avoiding misinterpretation, etc, that 
the implementation effort must cope with.  Along the way, the 
process of developing the specification in a specification 
formalism is equally subject to the exact same kind of mistakes 
made when developing a program in a programming formalism. 

(aside: first draft read "programming language."  So far most 
worthwhile programming formalisms are languages, but the point is 
more general than that and applies even to theoretical or obscure 
non-language programming formalisms....).

It is just as hard and error-prone to develop a formal (and 
correct) statement of user expectations and needs as it is to 
develop an executable (and correct) program. 

If I'm wrong, and there is any formal way to state user expectations 
and needs that is *less* complex and difficult than implementing them 
with an extant programming language, then it behooves us to implement 
a way of executing specifications expressed in that formalism directly, 
and thus to *make* it into a programming formalism. In fact, that's 
where several good programming languages, or good ideas that inform 
their design, came from.

Where the program does not match the specification it is not 
possible in principle to tell which is correct without checking 
against (informally held) user needs and expectations.  There 
is no formal method for checking against informally held needs 
and expectations. The program is no more likely to contain an 
error than the specification itself because the same difficulties, 
cognitive issues and barriers need to be overcome in order to 
produce the specification as need to be overcome in order to 
produce the program.  Neither process is easier or less error-prone, 
hence neither artifact is more likely to be correct on the first 
attempt.

Where the program does match the specification, the specification
might as well have been expressed in the executable implementation 
language in the first place.  If the specification is sufficiently
detailed to formally test the program against, then it is just as 
detailed as the program.  If not, then no formal proof of correctness
can be constructed. 

> Even if the specifications are informal or implicit, that alone does not
> imply absence of correctness. It is hard to present a case where the
> specifications could not be formalized at all.

Of course.  You can formalize them using a programming language.
Oops, that's the same job as implementing them, and formal 
specifications  are therefore just as subject to bugs as programs.  

I don't assert that correctness is impossible.  Users *have* needs 
and expectations, and programs *can* meet those needs and expectations.  
That is what a correct program does.

But the process is informal, not formal. When you state those needs 
and expectations in a formal specification, where matching or not 
matching the specification is formally testable, you are doing exactly 
the same work as programming. In so doing you are not less likely to 
make the mistakes you make at implementation.  The *formal* 
specification is not more likely to be correct than any (other) 
formally stated program.  An *informal* specification is not 
sufficiently rigorous to apply any formal method in testing against 
program behavior. 


                                Bear
From: Scott Burson
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <9a1ee3cc-888a-4b8c-ae5c-2d1a2fa43a56@d19g2000prh.googlegroups.com>
On May 1, 4:56 pm, Ray Dillinger <····@sonic.net> wrote:
> In order to *formally* state user expectations and needs, you
> need a specification as detailed as your implementation.
> Developing the specification must cope with the same issues
> of managing complexity, avoiding misinterpretation, etc, that
> the implementation effort must cope with.  Along the way, the
> process of developing the specification in a specification
> formalism is equally subject to the exact same kind of mistakes
> made when developing a program in a programming formalism.

Not necessarily.  A specification formalism may abstract away details
which, in a programming language, must be specified.

> If I'm wrong, and there is any formal way to state user expectations
> and needs that is *less* complex and difficult than implementing them
> with an extant programming language, then it behooves us to implement
> a way of executing specifications expressed in that formalism directly,
> and thus to *make* it into a programming formalism. In fact, that's
> where several good programming languages, or good ideas that inform
> their design, came from.

Sometimes things along this line can be done, but in general we run
into computability issues: the translation from the specification form
to the executable form is uncomputable.  This is true even though the
correspondence between them is completely and formally specified such
that we can prove that the latter (the program) satisfies the former
(the specification).  (Notice, I haven't used the word "correct".)

"Uncomputable" doesn't mean "impossible", necessarily, but search is
required, as it is for theorem proving, a closely related activity.
The current state of the AI art is such that the kind of search
involved is one we do not yet know how to automate effectively.

> Where the program does not match the specification it is not
> possible in principle to tell which is correct without checking
> against (informally held) user needs and expectations.  There
> is no formal method for checking against informally held needs
> and expectations. The program is no more likely to contain an
> error than the specification itself because the same difficulties,
> cognitive issues and barriers need to be overcome in order to
> produce the specification as need to be overcome in order to
> produce the program.

I agree up to the last sentence.  It is true that writing a good
formal specification of a program involves the same _kind_ of
difficulties as writing the program.  However, I contend that it could
involve substantially fewer of them, because the specification can be
quite a bit smaller than the program.  Furthermore, reuse of
previously existing work could be easier and more powerful at the
specification level.  (Extreme example: in a situation where a
programming language has already been formalized, and a machine
instruction set has already been formalized, a specification for a
compiler could be as simple as referencing those two specifications
and then saying "a compiler from this language to this instruction
set".)

> If the specification is sufficiently
> detailed to formally test the program against, then it is just as
> detailed as the program.

No, this isn't true at all.  Many algorithms are far more complex than
their formal specifications.
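To make the asymmetry concrete with an example of my own (not from the
thread): the specification of sorting fits in two lines, while even the
simplest implementation does not.  A sketch in Python, where all the
names are my own illustrative choices:

```python
from collections import Counter

def satisfies_sort_spec(inp, out):
    """The whole specification: the output is ordered, and it is a
    permutation (same multiset of elements) of the input."""
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    return ordered and Counter(inp) == Counter(out)

def insertion_sort(xs):
    """One of many implementations the spec admits; the spec says
    nothing about *how* the ordering is achieved."""
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

assert satisfies_sort_spec([3, 1, 2], insertion_sort([3, 1, 2]))
```

Any correct sort, however sophisticated, satisfies the same two-line
predicate; the gap only widens as the algorithms get cleverer.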

It is true that what I am saying is somewhat moot at the moment.
Specification languages exist (see for instance PVS), but verifying
that a specification is self-consistent, synthesizing a program from
it, and verifying that the program satisfies the specification, while
possible, are still highly manual activities for which the available
automated tools provide a little help but not much.  Until that
changes, specification languages will be used only by people with the
very highest needs for reliability (avionics, nuclear plant control
software, etc.)

And even when it does change, people will still have to figure out
what they want their programs to do; an inherently informal process,
as you say.

-- Scott
From: Ray Dillinger
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49fba2f2$0$95506$742ec2ed@news.sonic.net>
Scott Burson wrote:

> On May 1, 4:56 pm, Ray Dillinger <····@sonic.net> wrote:

> No, this isn't true at all.  Many algorithms are far more complex than
> their formal specifications.

It's true.  It's easy to specify that you want, say, the prime factors 
of a large number, and easy to check that a given set of numbers are in 
fact prime and that their product is in fact that number.  Thus, a 
"formal proof of correctness" is possible in this case, but it is not 
possible to get from there to an efficient method of factoring.
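Bear's factoring example can be sketched in a few lines of Python (my
illustration; the helper names are invented): checking a proposed answer
against the spec is cheap, while the search the spec implies is not.

```python
def is_prime(n):
    """Trial division; fine for small illustrative numbers."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def satisfies_spec(n, factors):
    """The 'specification': every factor is prime and the product is n.
    Verifying this is easy even when finding the factors is hard."""
    product = 1
    for f in factors:
        if not is_prime(f):
            return False
        product *= f
    return product == n

def factorize(n):
    """A naive 'implementation' meeting the spec: trial division,
    hopeless for the cryptographic-size numbers Bear has in mind."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

assert satisfies_spec(84, factorize(84))   # 84 = 2 * 2 * 3 * 7
```

The checker runs in time polynomial in the size of the answer; nobody
knows how to make the search do the same.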

To me this seems like a failure of some kind.  It's frustrating.  But 
I would say that the formalism is still executable, even if executing
it is exponential in complexity and checking it can be done in linear 
or polynomial time. 

                                Bear
From: Scott Burson
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <a278165f-cd5e-4cbe-864d-58defbba8ae0@d25g2000prn.googlegroups.com>
On May 1, 6:30 pm, Ray Dillinger <····@sonic.net> wrote:
> Scott Burson wrote:
> > Many algorithms are far more complex than
> > their formal specifications.
>
> It's true.  It's easy to specify that you want, say, the prime factors
> of a large number, and easy to check that a set of numbers given are in
> fact prime and that their product is in fact that number.  Thus, a
> "formal proof of correctness" is possible in this case, but it is not
> possible to get from there to an efficient method of factoring.

Well, you don't even have to invoke NP-completeness and similar
phenomena to show the point.  Here, I think is a better example.  A
specification of NREVERSE would basically say "reverses a list in
place, returning the reversed list".  It's true, this is not a formal
specification, but any useful specification language will have to give
you a way to say things like that, and I don't think the details
matter for this discussion.

The algorithm, however, is a nice little puzzle for Lispers.  If you
haven't tried writing it or seen it, I encourage you to take a shot at
it.  I think it's fair to say there's a significant gap between the
difficulty of writing the specification and that of writing the
algorithm.  And this is just a microscopic example.
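For readers who haven't tried the puzzle, the usual answer is a
three-pointer loop over the spine of the list.  A Python sketch, using
a hand-rolled Cell class as a stand-in for a Lisp cons (all names here
are mine, not standard anywhere):

```python
class Cell:
    """A stand-in for a Lisp cons cell."""
    def __init__(self, car, cdr=None):
        self.car = car
        self.cdr = cdr

def from_list(xs):
    """Build a chain of Cells from a Python list."""
    head = None
    for x in reversed(xs):
        head = Cell(x, head)
    return head

def to_list(cell):
    """Read a chain of Cells back out as a Python list."""
    out = []
    while cell is not None:
        out.append(cell.car)
        cell = cell.cdr
    return out

def nreverse(cell):
    """Destructively reverse: walk the spine, redirecting each cdr to
    the previous cell.  No new cells are allocated, which is the
    'in-place' part of the specification."""
    prev = None
    while cell is not None:
        nxt = cell.cdr     # save the rest before clobbering the cdr
        cell.cdr = prev
        prev = cell
        cell = nxt
    return prev

assert to_list(nreverse(from_list([1, 2, 3]))) == [3, 2, 1]
```

The destructive step, holding on to the rest of the list while each cdr
is redirected, is exactly what the informal spec "reverses a list in
place" glosses over.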

Clearly, writing the specification would be easiest if the concepts of
sequence reversal and the more subtle concept of in-place operations
had already been defined in the specification language, but they very
plausibly could be.  Sequence reversal is straightforward to define by
saying that the reversed sequence has the same length and the elements
are in the opposite order.  Defining "in-place" would require
referring to the state of the heap before and after the operation, and
saying that no new heap nodes are consed and that the only nodes
modified are the tails of the argument list; there are ways to do this
kind of thing (search on "separation logic" if you're curious).
Clearly, to be as useful as we could want, a specification language
will have to be provided with a large library of such definitions; but
this is entirely possible.

(This goes back to what I was saying about reuse.  Concepts like
"reverse" and "in-place" are much more general, and so easier to
reuse, than pieces of code.)

> To me this seems like a failure of some kind.  It's frustrating.  But
> I would say that the formalism is still executable, even if executing
> it is exponential in complexity and checking it can be done in linear
> or polynomial time.

I'm afraid it goes beyond exponential complexity.  Remember, we're not
talking about the computation the algorithm is to perform; we're
talking about a process that could generate the implementation given
the specification.  This is uncomputable; no one knows how to do it
even in exponential time.  And note, the input is the specification,
so if we're talking about exponential time, that means adding one
token to the specification makes the synthesis take k times as long.
Specifications of real programs are going to be huge, so even if we
could do it in exponential time, that would not begin to be useful in
practice.

In practice there is a clear distinction between executable and non-
executable specifications, though some specification languages support
both.

-- Scott
From: Ray Dillinger
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49fc0993$0$95490$742ec2ed@news.sonic.net>
Scott Burson wrote:

> Well, you don't even have to invoke NP-completeness and similar
> phenomena to show the point.  Here, I think is a better example.  A
> specification of NREVERSE would basically say "reverses a list in
> place, returning the reversed list".  It's true, this is not a formal
> specification, but any useful specification language will have to give
> you a way to say things like that, and I don't think the details
> matter for this discussion.

But that is the epitome of executable specification.  It's literally
a single procedure call.  If it's a part of the specification language, 
then by all means implement it in the implementation language.

> The algorithm, however, is a nice little puzzle for Lispers.  

No... it's a call to a standard library procedure for Lispers.  They 
get it about specifications being executable.

> I think it's fair to say there's a significant gap between the
> difficulty of writing the specification and that of writing the
> algorithm.  And this is just a microscopic example.

All standard procedures and libraries have to be implemented - once. 
And people all over the world may use them forever after.  In the 
same way all *formal* specification statements have to be defined 
in terms of primitives - once.  And specification writers all over 
the world may refer to that definition forever after. 

If you don't have a definition of the thing in terms of some formalism
like lambda calculus or transfer language or denotational semantics or 
something, then it isn't part of a *formal* specification system.  It's 
part of an *informal* specification, and you don't have any formal 
method of proving code conforms to it or doesn't. 

> (This goes back to what I was saying about reuse.  Concepts like
> "reverse" and "in-place" are much more general, and so easier to
> reuse, than pieces of code.)

When expressed formally, as part of a formal specification with formal 
definitions, they *are* code, and they are exactly as easy to reuse as code.
 
> I'm afraid it goes beyond exponential complexity.  Remember, we're not
> talking about the computation the algorithm is to perform; we're
> talking about a process that could generate the implementation given
> the specification.  

We're talking about a compiler and a set of standard libraries.  Complex, 
yes.  Not uncomputable. 

> This is uncomputable; no one knows how to do it 
> even in exponential time.  

I have a shelf full of books on how to do it in polynomial time; 
you should start with Aho, Hopcroft, and Ullman. 

> In practice there is a clear distinction between executable and non-
> executable specifications, though some specification languages support
> both.

I maintain that the "clear distinction" of which you speak is the 
distinction between specifications which are and which are not amenable
to any formal method of proving that a program conforms to them. In 
principle, I claim that any specification formalism for which such a 
method exists can be used as a programming language. And if it's any 
easier than extant programming languages then it *should* be used as 
a programming language.

                                        Bear
From: Scott Burson
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <5b4d9910-e879-45c4-a60b-d56cd16993cc@b6g2000pre.googlegroups.com>
On May 2, 1:48 am, Ray Dillinger <····@sonic.net> wrote:
> Scott Burson wrote:
> > Well, you don't even have to invoke NP-completeness and similar
> > phenomena to show the point.  Here, I think is a better example.  A
> > specification of NREVERSE would basically say "reverses a list in
> > place, returning the reversed list".  It's true, this is not a formal
> > specification, but any useful specification language will have to give
> > you a way to say things like that, and I don't think the details
> > matter for this discussion.
>
> But that is the epitome of executable specification.  It's literally
> a single procedure call.  If it's a part of the specification language,
> then by all means implement it in the implementation language.
>
> > The algorithm, however, is a nice little puzzle for Lispers.  
>
> No... it's a call to a standard library procedure for Lispers.  They
> get it about specifications being executable.

I was asking you to imagine that it hadn't been written yet.  I
thought that was obvious.

> > I think it's fair to say there's a significant gap between the
> > difficulty of writing the specification and that of writing the
> > algorithm.  And this is just a microscopic example.
>
> All standard procedures and libraries have to be implemented - once.
> And people all over the world may use them forever after.  In the
> same way all *formal* specification statements have to be defined
> in terms of primitives - once.  And specification writers all over
> the world may refer to that definition forever after.
>
> If you don't have a definition of the thing in terms of some formalism
> like lambda calculus or transfer language or denotational semantics or
> something, then it isn't part of a *formal* specification system.  It's
> part of an *informal* specification, and you don't have any formal
> method of proving code conforms to it or doesn't.

I agree with these statements, but ...

> > (This goes back to what I was saying about reuse.  Concepts like
> > "reverse" and "in-place" are much more general, and so easier to
> > reuse, than pieces of code.)
>
> When expressed formally, as part of a formal specification with formal
> definitions, they *are* code, and they are exactly as easy to reuse as
> code.

No, they're not.

Ray, besides being a compiler developer for decades, and having done
quite a bit of static analysis, I've also worked in the areas of
formal specification and verification.  You seem to be unaware of
facts that are familiar to that community.  There are ways of
specifying algorithms such that the specification is, in many cases,
quite a bit simpler and easier to understand than the algorithm.  I
know because I've used them a little myself, and because I know people
who have spent years designing and using them -- this is their career.

There are many different kinds of code in the world, and for some of
them the gap between specification and implementation is very small,
even negligible, while for others it is quite substantial.  But anyone
working on a program of significant complexity could benefit from the
ability to state a desired property of the program abstractly and have
the machine prove that the program has the property, or show why it
does not.  Any such proof demonstrates the gap between specification
and implementation.

> > This is uncomputable; no one knows how to do it
> > even in exponential time.  
>
> I have a shelf full of books on how to do it in polynomial time;
> you should start with Aho, Hopcroft, and Ullman.

Programming languages are designed to be compilable.  So nothing can
go in a programming language that cannot be compiled in polynomial
time, at least in the cases that programmers care about.  This is the
primary constraint on the design of any programming language.
Removing this constraint is what allows specification languages to be
more elegant and expressive.

I'm not saying specification languages are useful for general
programming today.  They're not, because synthesizing a program from a
specification (or proving that a program satisfies its specification;
an equivalent problem) is still a largely manual task.  For most kinds
of programs there's no point in going through all that effort.  That's
why most people are not familiar with specification languages (and why
they are still in their infancy).

> > In practice there is a clear distinction between executable and non-
> > executable specifications, though some specification languages support
> > both.
>
> I maintain that the "clear distinction" of which you speak is the
> distinction between specifications which are and which are not amenable
> to any formal method of proving that a program conforms to them.

I'm sorry, you're just mistaken.  You do understand the concept of
uncomputability, don't you?  Remember the Halting Problem?  The
Halting Problem says that termination is uncomputable: that there is
no algorithm that can tell you, for every program P, whether P
terminates.  Yet proofs of termination of particular algorithms are a
staple of CS work.  So if the specification of a program includes the
statement that the program terminates -- which is certainly a well-
defined property -- then proving that the program meets that aspect of
the specification, while clearly possible in almost all cases of
interest, is uncomputable.  So there is _some_ formal method of
proving that the program has this property; it's just not a method
that we know how to automate (well enough to be generally useful).
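A tiny illustration, sketched in Python rather than in a proof formalism:
Euclid's algorithm with its termination argument made explicit as a runtime
check.  The non-negative "variant" strictly decreases on every iteration, so
the loop cannot run forever -- an argument no general algorithm can discover
for arbitrary programs, but one that is easy for this particular program.

```python
def gcd(a, b):
    """Euclid's algorithm, with its termination proof made explicit."""
    assert a >= 0 and b >= 0
    while b != 0:
        variant = b                 # non-negative, strictly decreasing
        a, b = b, a % b             # since a % b < b whenever b > 0
        assert 0 <= b < variant     # well-founded descent: the loop must halt
    return a

print(gcd(1071, 462))  # prints 21
```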

-- Scott
From: Rob Warnock
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <q7adnU5Bm_7Ca2HUnZ2dnUVZ_qWdnZ2d@speakeasy.net>
Scott Burson  <········@gmail.com> wrote:
+---------------
| Halting Problem says that termination is uncomputable: that there is
| no algorithm that can tell you, for every program P, whether P
| terminates.  Yet proofs of termination of particular algorithms are a
| staple of CS work.  So if the specification of a program includes the
| statement that the program terminates -- which is certainly a well-
| defined property -- then proving that the program meets that aspect of
| the specification, while clearly possible in almost all cases of
| interest, is uncomputable.  So there is _some_ formal method of
| proving that the program has this property; it's just not a method
| that we know how to automate (well enough to be generally useful).
+---------------

The situation is neither as dire *nor* as favorable as many who argue
the extremes may wish to portray. If you're given a random program
and asked to prove whether or not it conforms to some other random
specification -- even as simple as "it terminates" -- then, yes, this
is uncomputable *in general*. However, individual program instances
*might* be provable. You just don't know which ones until you try
[and your proof system either terminates... or doesn't].

The real problem, IMHO, is that we have all [well, mostly all] been
ignoring the advice of people like Dijkstra & Gries &c. who recommend
abandoning the whole "try to prove that program A meets spec B" approach,
and instead suggest growing the program *and* the proof *together* from
the spec, using "predicate transformers" to work backwards in a formal,
provable manner from the desired results [the "post-conditions"] to the
specified initial conditions [the "pre-conditions"]. There is no issue
of computability with the approach, since at no time is the program that's
being constructed allowed to contain unprovable elements. Dijkstra's
famous monograph on this approach is called "A Discipline of Programming":

    http://www.amazon.com/dp/013215871X

Gries's later textbook, which might be more approachable to many, is
called "The Science of Programming": 

    http://www.springer.com/computer/programming/book/978-0-387-96480-5
    http://www.amazon.com/dp/0387964800

But to bring my diversion back on-topic, the theoretical problem with
the Dijkstra/Gries approach is that the technique is not constructive,
and hence cannot be (fully) automated. Specifically, proving loop
termination involves searching for a loop invariant predicate that meets
the pre- & post-conditions [less the termination condition] *and* "makes
non-zero progress" [typically decrements a positive number, where said
number being zero is the termination condition]. Once you have found
some appropriate loop invariant predicate, the proof of correctness
and termination of the loop is trivial [and is trivially automatable].
Unfortunately, their formal method doesn't help you at all in *finding*
such a predicate!! All one has available for guidance is "style",
"experience", "heuristic", etc.

[One is reminded of Shannon's famous Coding Theorem, which proves that,
given some specific channel, a block code exists that can produce throughput
"as close as you like" to the theoretical maximum information carrying
capacity of the channel with, at the same time, an error rate "as low
as you like"... but does not provide *any* guidance as to how to *find*
such a block code!! (*sigh*)]

Still, I suspect that creating an "expert system programming assistant"
to help programmers find appropriate loop invariant predicates using
Dijkstra's formalism would, in the long run, be a more effective use
of resources than continuing to beat our heads against the *known*
incomputable wall of "write it first *then* try to prove it".


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Scott Burson
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <16020224-0a2f-4e1f-94f9-ed2406e366c7@x31g2000prc.googlegroups.com>
On May 2, 6:41 pm, ····@rpw3.org (Rob Warnock) wrote:
> The situation is neither as dire *nor* as favorable as many who argue
> the extremes may wish to portray. If you're given a random program
> and asked to prove whether or not it conforms to some other random
> specification -- even as simple as "it terminates" -- then, yes, this
> is uncomputable *in general*. However, individual program instances
> *might* be provable. You just don't know which ones until you try
> [and your proof system either terminates... or doesn't].

Yes, but ...

> Specifically, proving loop
> termination involve searching for a loop invariant predicate

You have put your finger on the problem.  Most interesting proofs of
program properties involve invariants (loop invariants, recursion
invariants, class invariants, module invariants, etc.).  And as you
note, there is no procedure for extracting an invariant from a piece
of code; humans evidently do it through some combination of experience
and explicit search, and no one has managed to automate it.

> Still, I suspect that creating an "expert system programming assistant"
> to help programmers find appropriate loop invariant predicates using
> Dijkstra's formalism would, in the long run, be a more effective use
> of resources than continuing to beat our heads against the *known*
> incomputable wall of "write it first *then* try to prove it".

I actually believe it to be the same problem, in essence.  If you can
find the invariants, you've solved the problem of searching in these
high-branching-factor spaces, and the rest of the proof problem will
yield to the same techniques.

-- Scott
From: Rob Warnock
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <oOKdnTL62spb52PUnZ2dnUVZ_uydnZ2d@speakeasy.net>
Scott Burson  <········@gmail.com> wrote:
+---------------
| ····@rpw3.org (Rob Warnock) wrote:
| > Specifically, proving loop
| > termination involve searching for a loop invariant predicate
| 
| You have put your finger on the problem.  Most interesting proofs of
| program properties involve invariants (loop invariants, recursion
| invariants, class invariants, module invariants, etc.).  And as you
| note, there is no procedure for extracting an invariant from a piece
| of code; humans evidently do it through some combination of experience
| and explicit search, and no one has managed to automate it.
+---------------

Though note that in the case of the Dijkstra/Gries formalism, one
is *NOT* trying to "extract an invariant from a piece of code" per se.
The code hasn't been written yet, and *won't* be until the desired
invariant has been found. Rather, the task is to extract an invariant
from the post-condition of the loop, one which produces an acceptable
pre-condition for the loop that makes progress towards the ultimate
pre-condition of "NIL". Then once a usable loop invariant predicate
is found, the writing of the loop code -- and simultaneous proof of
same -- is trivial (at least by comparison).

This is not just a quibble, but is a fundamental difference in approach.
People who have not grokked that part fully seem to think that in
the Dijkstra/Gries formalism one still "writes a little bit of code and
then somehow tries to prove it", which is absolutely incorrect. Instead,
one constructs a little theorem (or lemma), whereupon the code corresponding
to that proof is immediately manifest, since the coding primitives are
*defined* as "predicate transformers" which take a post-condition predicate
and transform it into a pre-condition predicate.

Tiny, tiny example -- given the following pre-condition & post-condition:

    (assert (= x 3))          ; pre-condition
    ...[what code goes here?]...
    (assert (= x 17))         ; post-condition

we search for a predicate transformer that will transform (= X 17)
to (= X 3). By inspection, we see that the latter is equivalent
to (= X (- 17 14)), so that the predicate transformer we seek is
one that will change any post-condition (= X Y) into (= X (- Y 14)).
The code (INCF X 14) is such a predicate transformer, that is:

   Wp((= X Y), (INCF X 14)) ==> (= X (- Y 14))

[where "Wp" is the "Weakest pre-condition" operator, which takes
a post-condition and an imperative action (code) and returns the
weakest pre-condition that must be true for the post-condition to
be true following the action] and we immediately see that it satisfies
the proof:

    (assert (= x 3))          ; pre-condition
    (incf x 14)
    (assert (= x 17))         ; post-condition
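The rule driving this step is purely mechanical: for assignment,
Wp(P, x := e) is simply P with every free occurrence of x replaced by e.
A toy sketch in Python, with predicates represented as nested tuples
standing in for s-expressions:

```python
def substitute(pred, var, expr):
    """Replace every occurrence of var in pred with expr."""
    if pred == var:
        return expr
    if isinstance(pred, tuple):
        return tuple(substitute(p, var, expr) for p in pred)
    return pred

def wp_assign(post, var, expr):
    """Dijkstra's assignment rule: Wp(post, var := expr) = post[var := expr]."""
    return substitute(post, var, expr)

# The example above: Wp((= x 17), x := x + 14) is (= (+ x 14) 17),
# which simplifies to (= x 3) -- exactly the required pre-condition.
print(wp_assign(('=', 'x', 17), 'x', ('+', 'x', 14)))
# prints ('=', ('+', 'x', 14), 17)
```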

Anyway, sorry for the sledgehammer overkill, but I cannot stress
enough that in the Dijkstra/Gries formalism one is *never*
"proving the code"; instead, the proof *writes* the code.

Of course, that doesn't relieve the fact that *finding* the proof
[especially in the case of loops] is not the hard part...  ;-}

+---------------
| > Still, I suspect that creating an "expert system programming assistant"
| > to help programmers find appropriate loop invariant predicates using
| > Dijkstra's formalism would, in the long run, be a more effective use
| > of resources than continuing to beat our heads against the *known*
| > incomputable wall of "write it first *then* try to prove it".
| 
| I actually believe it to be the same problem, in essence.  If you can
| find the invariants, you've solved the problem of searching in these
| high-branching-factor spaces, and the rest of the proof problem will
| yield to the same techniques.
+---------------

Yup. Just so!


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Scott Burson
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <e254f52f-8754-43a6-863e-46d956453e83@z23g2000prd.googlegroups.com>
On May 3, 10:19 pm, ····@rpw3.org (Rob Warnock) wrote:
> Scott Burson  <········@gmail.com> wrote:
> +---------------
> | ····@rpw3.org (Rob Warnock) wrote:
> | > Specifically, proving loop
> | > termination involve searching for a loop invariant predicate
> |
> | You have put your finger on the problem.  Most interesting proofs of
> | program properties involve invariants (loop invariants, recursion
> | invariants, class invariants, module invariants, etc.).  And as you
> | note, there is no procedure for extracting an invariant from a piece
> | of code; humans evidently do it through some combination of experience
> | and explicit search, and no one has managed to automate it.
> +---------------
>
> Though note that in the case of the Dijkstra/Gries formalism, one
> is *NOT* trying to "extract an invariant from a piece of code" per se.
> The code hasn't been written yet, and *won't* be until the desired
> invariant has been found. Rather, the task is to extract an invariant
> from the post-condition of the loop [...]
>
> This is not just a quibble, but is a fundamental difference in approach.
> People who have not grokked that part fully seem to think that that in
> the Dijkstra/Gries formalism one still "writes a little bit of code and
> then somehow tries to prove it", which is absolutely incorrect.

Okay, but writing a little bit of code and then trying to prove it can
also be a viable strategy.  If the proof fails, the details of the
failure are a good guide to fixing the code.

Does the Dijkstra/Gries strategy work better?  I don't know.  Really,
though, while I think it's instructive to make these techniques
explicit, I think programmers use them subconsciously anyway.
Obviously one can't write a function by generating possible pieces of
code independently of the precondition and postcondition and then
attempting to prove that they do the right thing.  Rather, I think
people form some idea of what predicate transformer they need,
generate a piece of code that does something like that, try
(informally) to prove that it does what they want, and iterate that
process until it appears to have converged.

-- Scott
From: Rob Warnock
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <isCdnYVGBPL6V2PUnZ2dnUVZ_rmdnZ2d@speakeasy.net>
Oops!! Earlier, I replied:
+---------------
| Anyway, sorry for the sledghammer overkill, but I cannot stress
| enough that in the Dijkstra/Gries formalism one is *never*
| "proving the code"; instead, the proof *writes* the code.
| 
| Of course, that doesn't relieve the fact that *finding* the proof
| [especially in the case of loops] is not the hard part...  ;-}
+---------------

Of course that final "not" was a typo; it *should* have read:

  Of course, that doesn't relieve the fact that *finding* the proof
  [especially in the case of loops] is the hard part...
				   ****

-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Tamas K Papp
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <761mkaF19knneU1@mid.individual.net>
On Fri, 01 May 2009 18:30:33 -0700, Ray Dillinger wrote:

> Scott Burson wrote:
> 
>> On May 1, 4:56 pm, Ray Dillinger <····@sonic.net> wrote:
> 
>> No, this isn't true at all.  Many algorithms are far more complex than
>> their formal specifications.
> 
> It's true.  It's easy to specify that you want, say, the prime factors
> of a large number, and easy to check that a set of numbers given are in
> fact prime and that their product is in fact that number.  Thus, a
> "formal proof of correctness" is possible in this case, but it is not
> possible to get from there to an efficient method of factoring.
> 
> To me this seems like a failure of some kind.  It's frustrating.  But I
> would say that the formalism is still executable, even if executing it
> is exponential in complexity and checking it can be done in linear or
> polynomial time.

But there exist problems for which even verification is infeasible,
since it would be tantamount to solving the problem.  E.g., many
optimization problems in large spaces are of this kind: verifying that
some point in a space is the maximum may not be any easier than
finding it.
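The factoring example quoted above shows the opposite case: there,
verification is cheap even though the search is not known to be.  A minimal
sketch in Python (the number 2009 is just an illustrative choice; trial
division suffices because only the relative cost matters):

```python
def is_prime(n):
    """Primality by trial division -- polynomial in the value of n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def checks_factorization(n, factors):
    """Verify a claimed factorization: all factors prime, product equals n."""
    product = 1
    for p in factors:
        if not is_prime(p):
            return False
        product *= p
    return product == n

print(checks_factorization(2009, [7, 7, 41]))  # prints True
```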

Tamas
From: Ray Dillinger
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49fc0a29$0$95490$742ec2ed@news.sonic.net>
Tamas K Papp wrote:

> But there exist problems for which even verification is infeasible,
> since it would be tantamount to solving the problem.  Eg many
> optimization problems in large spaces are of this kind, verifying that
> some point in a space is the maximum may not be any easier than
> finding it.

And the point of handing some engineer a specification of this kind 
would be what, exactly?  "Do the impossible or we fire you?"  

It is not worthwhile to talk about specifying that someone should 
solve a problem which has no solution. 

                                Bear
From: Tamas K Papp
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <763067F1avls9U1@mid.individual.net>
On Sat, 02 May 2009 01:50:55 -0700, Ray Dillinger wrote:

> Tamas K Papp wrote:
> 
>> But there exist problems for which even verification is infeasible,
>> since it would be tantamount to solving the problem.  Eg many
>> optimization problems in large spaces are of this kind, verifying that
>> some point in a space is the maximum may not be any easier than finding
>> it.
> 
> And the point of handing some engineer a specification of this kind
> would be what, exactly?  "Do the impossible or we fire you?"

Ah.  So the poor souls who wrote those optimization libraries are on
the street now with a begging bowl :-)

While these problems have no formal solution, people come up with
useful heuristics that seem to work most of the time.  For example,
there are modifications to the BFGS algorithm so that it can continue
if it encounters a "wall".  AFAIK there is no formal proof for why
this helps, so you can never verify that the program is "correct".
But in practice, it works fine.

> It is not worthwhile to talk about specifying that someone should solve
> a problem which has no solution.

Don't think like a mathematician, think like an engineer.  Something
that "seems to be good enough" frequently qualifies as a solution in
some situations.

Tamas
From: Ray Dillinger
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49fc6d73$0$95498$742ec2ed@news.sonic.net>
Tamas K Papp wrote:


> Don't think like a mathematician, think like an engineer.  Something
> that "seems to be good enough" frequently qualifies as a solution in
> some situations.

Right.  This was, sort of, my point. 

Other people here have bandied about the idea of "proving" that a 
program conforms to a specification.  Equivalently, people have 
talked about a "formal specification."  

I maintain that these things are not possible for any specification 
formalism not equivalent to a programming formalism. That is an 
understanding of "proofs" and "formal methods" shared between 
mathematicians and engineers. 

In the real world, "proof" that a program conforms to a specification
is not possible, because specification systems are (intentionally and 
by design) never sufficiently expressive to prove things about.  They 
are not formalisms, and do not admit of formal proofs.  That is an 
engineer's understanding of "specifications." It does not touch on
"proofs" or "formal methods" at any point, and this is what I've been
trying to point out, by counterexample.

When it is possible to "prove" anything about programs conforming 
to specifications, or to develop "formal" specifications with 
rigorous semantics, then developing specifications is absolutely 
equivalent to programming and therefore a redundant and useless 
effort. 
 
Specifications which are not useless are non-formal descriptions of 
the users' informally held needs and desires.  The idea of "proving" 
that a program conforms to them is a non sequitur.  Conformance 
checking is not a "proof" process and necessarily comes down to 
*informal* methods. You look at the program and the specification, 
apply human understanding instead of any formal method, and you 
see whether you *think* they say the same thing.  

Other people can look at the same program and the same specification, 
and understand the specification differently, and disagree with you.  
The program is rigorously defined by the language semantics; the 
specification is not. The terms in the specification do not have 
definitions in terms of any formalism admitting of proof.  So you 
can check, in a human, informal way that requires judgement and 
which different people can disagree about, but you cannot prove.
Sometimes you can't even achieve consensus.

My point, originally to Dmitry, is that speaking of "proving" that 
a program conforms to a specification is silly, unless the 
specification is equivalent to the program.  Talking about a 
"formal" specification is also silly, unless the specification 
is equivalent to the program.  And in that case, where rigorous 
mathematical semantics are attached to the specification formalism
and it is *possible* to "prove" things about it, there is no 
advantage in having the programming formalism be separate from the
specification formalism. 

It even ceases to be useful as a communication between client and 
programmer, because for any client who has developed that kind of 
a specification, the programming is redundant and the client has 
no need to hire a programmer.

                                Bear
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <f8a3ta176jyr$.1ulz8cnvcmn8f$.dlg@40tude.net>
On Sat, 02 May 2009 08:54:32 -0700, Ray Dillinger wrote:

> Tamas K Papp wrote:
> 
>> Don't think like a mathematician, think like an engineer.  Something
>> that "seems to be good enough" frequently qualifies as a solution in
>> some situations.
> 
> Right.  This was, sort of, my point. 
> 
> Other people here have bandied about the idea of "proving" that a 
> program conforms to a specification.  Equivalently, people have 
> talked about a "formal specification."  
> 
> I maintain that these things are not possible for any specification 
> formalism not equivalent to a programming formalism.

What is wrong with the formalism of, say, arithmetic? Say we want to
specify modular integer addition. Is it impossible to prove the
correctness of an implementation?
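A minimal sketch in Python of how such a proof can go (the successor-based
implementation and the modulus 8 are illustrative choices): for a fixed
small modulus the domain is finite, so checking every case *is* a complete
proof that the implementation meets the arithmetic specification.

```python
M = 8  # modulus, chosen small so that exhaustive checking is feasible

def add_mod(a, b):
    """Implementation under test: b applications of the wrapping successor."""
    result = a
    for _ in range(b):
        result = 0 if result == M - 1 else result + 1
    return result

# Specification: agreement with integer addition modulo M, for all inputs.
for a in range(M):
    for b in range(M):
        assert add_mod(a, b) == (a + b) % M

print("implementation proved correct for all", M * M, "cases")
```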

> When it is possible to "prove" anything about programs conforming 
> to specifications, or to develop "formal" specifications with 
> rigorous semantics, then developing specifications is absolutely 
> equivalent to programming and therefore a redundant and useless 
> effort. 

Untrue. A specification is not equivalent to an implementation. There can
be many implementations of a specification, or none at all. There is no
equivalence.
  
> Specifications which are not useless are non-formal descriptions of 
> the users' informally held needs and desires.

So a specification of Boolean AND is useless? Come on!

> You look at the program and the specification, 
> apply human understanding instead of any formal method, and you 
> see whether you *think* they say the same thing.  

This argument is irrelevant due to its universality: you look at even the
most rigorous proof of anything and decide whether you think it is true. At
the end of any chain of formalisms there always sits a human being with his
fuzzy logic.

> My point, originally to Dmitry, is that speaking of "proving" that 
> a program conforms to a specification is silly, unless the 
> specification is equivalent to the program.

I don't see why. It is perfectly possible in some cases. Further, the
ability to prove depends on the power of the formal system used.
Provability is equivalent to the halting problem, but your argument that
the prover must be a computing system is wrong. It could be a human being,
Martians, or almighty God Himself.

> Talking about a 
> "formal" specification is also silly, unless the specification 
> is equivalent to the program.

Nope. Consider examples above.

> It even ceases to be useful as a communication between client and 
> programmer, because for any client who has developed that kind of 
> a specification, the programming is redundant and the client has 
> no need to hire a programmer.

Nonsense. Consider the specification: write a program that plays chess
statistically better than a human. This is known to be possible, but the
specification is not even close to a program that indeed does it.

Furthermore, consider the specification: write a program that decides the
truth of statements of integer arithmetic. This is known to be impossible
(the problem is undecidable). Where is the redundancy then?

But a more important point is that whether specifications are useful,
known, provable, etc. is irrelevant to the issue. A program is correct when
it meets its specification. When you do not know the specification, that is
your subjective problem. You cannot consider specifications non-existent
just for that reason; otherwise you should not write the program at all (or
you could take just any program instead). This has nothing to do with
correctness proofs. A correct program can be used even if you cannot prove
that it is indeed correct, even if you don't know what it is really for.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: ··················@gmail.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <80b8d399-f182-4b22-87ac-7ffa5d763c38@m24g2000vbp.googlegroups.com>
On May 2, 1:33 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Sat, 02 May 2009 08:54:32 -0700, Ray Dillinger wrote:
> > Tamas K Papp wrote:
>
> >> Don't think like a mathematician, think like an engineer.  Something
> >> that "seems to be good enough" frequently qualifies as a solution in
> >> some situations.
>
> > Right.  This was, sort of, my point.
>
> > Other people here have bandied about the idea of "proving" that a
> > program conforms to a specification.  Equivalently, people have
> > talked about a "formal specification."  
>
> > I maintain that these things are not possible for any specification
> > formalism not equivalent to a programming formalism.
>
> What is wrong with the formalism of to say arithmetic? Let we want to
> specify modular integer addition. Is it impossible to prove correctness of
> an implementation?

There are examples of problems that can be described formally yet
cannot be solved formally.
An implementation is a possible solution.

> > When it is possible to "prove" anything about programs conforming
> > to specifications, or to develop "formal" specifications with
> > rigorous semantics, then developing specifications is absolutely
> > equivalent to programming and therefore a redundant and useless
> > effort.
>
> Untrue. Specification is not equivalent to an implementation. There can be
> many or none implementations of a specification. There is no any
> equivalence.
>

Documentation is part of the implementation. If my documentation matches
what my program does, then it is /a/ 'correct' program. Whether it
matches the specification determines whether it is a program that
matches /the/ specification.

If I write a program with no documentation, how is anyone to tell
whether it is a 'correct' program? It may or may not be doing exactly
what /the programmer/ intends it to do. When I create documentation I
create a description of my intent as a programmer. It helps my clients
determine whether or not I have implemented the correct specification.
Further proof of this is that we can specify that certain behavior is
/unspecified/.

> > Specifications which are not useless are non-formal descriptions of
> > the users' informally held needs and desires.
>
> So a specification of Boolean AND is useless? Come on!
>

You don't need to specify boolean AND, as it is most likely specified
elsewhere (boolean AND is defined).

What you need to specify is what your program needs to do. If you are
writing a programming language, then sure, go to town on boolean AND.
I don't think you'd need more than a couple sentences however.

Most likely you'll have a better time specifying your program in terms
that humans understand... i.e. 'what does it do, how do I use it, what
are my expected results?'

> > You look at the program and the specification,
> > apply human understanding instead of any formal method, and you
> > see whether you *think* they say the same thing.  
>
> This argument is irrelevant due to its universality. You look at the most
> rigorous proof of anything and think it is true. At the end of any chain of
> formalisms always sits a human being with his fuzzy logic.
>

How is the human interpretation of a computer program irrelevant? A
programming language /is/ an interface between a human (and his fuzzy
logic) and a computer.

> > My point, originally to Dmitry, is that speaking of "proving" that
> > a program conforms to a specification is silly, unless the
> > specification is equivalent to the program.
>
> I don't see why. It is perfectly possible to do in some cases. Further an
> ability to prove depends on the power for the formal system used.
> Provability is equivalent to halting problem, but your argument that the
> prover must be a computing system is wrong. It can be a human being,
> Martians or almighty God Himself.
>

A formal system of proof /is/ a computing system.

> > Talking about a
> > "formal" specification is also silly, unless the specification
> > is equivalent to the program.
>
> Nope. Consider examples above.
>
> > It even ceases to be useful as a communication between client and
> > programmer, because for any client who has developed that kind of
> > a specification, the programming is redundant and the client has
> > no need to hire a programmer.
>
> Nonsense. Consider the specification: write a program playing chess
> statistically better than human. It is known to be possible, but this
> specification is not even close to a program that indeed does this.
>
This is an informal specification.

> Furthermore, consider a specification: write a program that implements
> integer arithmetic. This is known to be impossible (because incomputable).
> Where is a redundancy then?
>
This is also an informal specification.

Any formal specification could be translated into a computer language.
This is the hang-up in your logic; this is the redundancy.

> A program is correct when it meets its specification.
> When you do not know the  specification, that is
> your subjective problem.

I think this is your main point, but I think you are wrong about it.
Sometimes a program is a vehicle to find a proper specification.
From: jra/pdx
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1568ce6b-b5f3-4689-80b8-e9075a1cc5b0@j18g2000prm.googlegroups.com>
In reference to specifications and proving program correctness, just a
few days ago I was talking with a fellow about his job.  He is the
Quality Assurance point man at his company.

Describing his job, he remarked that engineers, customers, etc., come
up with a detailed specification, and the programmers implement the
spec.  Just as said in so many prior posts above.

According to my informant, the task of testing the program to
determine conformance to the spec is the "easy part".

Where it gets difficult is to determine if the program behaves
adversely when there is an "unexpected" condition.  What happens if
the user presses two keys when the program expects only one key?  Does
it screw up the database if a large upload (say 10 mb) fails halfway
through?   Turns out the value of a QA guy is only as good as his
experience, and his ability to imagine the multitude of ways things
can go askew.

In other words, it appears the issue is less whether the program meets
the specification than whether the specification (and the program that
implements it) conforms to the actual "problem space" it is attempting
to address.  In the QA universe, it turns out that the spec nearly always
fails in this respect to a greater or lesser degree.

Since specifications regularly turn out to be subsets of unknown larger
specs, it follows that for real-world "non-trivial" applications, the
actual "problem space" is unknowable, meaning we can't produce a truly
complete specification, and certainly can't implement a complete
program to accomplish it.

Call it the "specification paradox": we can specify program behavior
and prove the program meets specification, but the specification and
conforming program are necessarily incomplete.  Therefore, an adequate
program conforms to an unknowable specification, that is, a
specification that can't be written before the implementation is
created.

Natural phenomena, e.g., biological entities, like human beings, can
only be described in terms of probability distributions, and human
creations similarly are intrinsically and randomly variable.  Useful
as it may be, the idea of "exactness" is only an abstraction, not a
property of Mother Nature.  It is inevitably troublesome to take
abstractions too literally in the course of solving our actual real-
world problems.

We are constrained by Nature to incompleteness, and unpredictable
randomness.  The only "answer" we ever have is that there is no
answer.  "Close enough" is as close as we can get.  Operationally, the
best we can do is strive to construct a solution that works, well,
mostly anyway, at least for a while.

Accept reality as it is, or go ahead, argue with Mother Nature and see
where it gets you.  Place your bet...:)

Jules.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1x90c7e54w564.1jhx6e3loojqo$.dlg@40tude.net>
On Sat, 2 May 2009 11:29:43 -0700 (PDT), ··················@gmail.com
wrote:

> On May 2, 1:33 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> On Sat, 02 May 2009 08:54:32 -0700, Ray Dillinger wrote:
>>> Tamas K Papp wrote:
>>
>>>> Don't think like a mathematician, think like an engineer.  Something
>>>> that "seems to be good enough" frequently qualifies as a solution in
>>>> some situations.
>>
>>> Right.  This was, sort of, my point.
>>
>>> Other people here have bandied about the idea of "proving" that a
>>> program conforms to a specification. �Equivalently, people have
>>> talked about a "formal specification." �
>>
>>> I maintain that these things are not possible for any specification
>>> formalism not equivalent to a programming formalism.
>>
>> What is wrong with the formalism of to say arithmetic? Let we want to
>> specify modular integer addition. Is it impossible to prove correctness of
>> an implementation?
> 
> There are examples of problems that can be described formally yet
> cannot be solved formally.

Yes

> An implementation is a possible solution.

An implementation is a solution. When it is impossible, there is no
solution, because none exists.

>>> When it is possible to "prove" anything about programs conforming
>>> to specifications, or to develop "formal" specifications with
>>> rigorous semantics, then developing specifications is absolutely
>>> equivalent to programming and therefore a redundant and useless
>>> effort.
>>
>> Untrue. Specification is not equivalent to an implementation. There can be
>> many or none implementations of a specification. There is no any
>> equivalence.
> 
> Documentation is part of implementation. If my documentation matches
> what my program does, then it is /a/ 'correct' program.

No, then it is possibly correct documentation. There exist
well-documented incorrect programs, obviously.

>>> Specifications which are not useless are non-formal descriptions of
>>> the users' informally held needs and desires.
>>
>> So a specification of Boolean AND is useless? Come on!
> 
> You don't need to specify boolean AND, as it is most likely specified
> elsewhere (boolean AND is defined).

If somebody has defined it, then it was useful for him. q.e.d.

>>> You look at the program and the specification,
>>> apply human understanding instead of any formal method, and you
>>> see whether you *think* they say the same thing.
>>
>> This argument is irrelevant due to its universality. You look at the most
>> rigorous proof of anything and think it is true. At the end of any chain of
>> formalisms always sits a human being with his fuzzy logic.
> 
> How is the human interpretation of a computer program irrelevant?

The argument is irrelevant. I said nothing about human interpretation of a
program.

> A
> programming language /is/ an interface between a human (and his fuzzy
> logic) and a computer.

Everything we talk about, can talk about, think about, or can think
about is such a thing. The argument at its core is about perception. An
individual can be aware of anything only through perception. It is a
general philosophical issue irrelevant to the discussion.

>>> My point, originally to Dmitry, is that speaking of "proving" that
>>> a program conforms to a specification is silly, unless the
>>> specification is equivalent to the program.
>>
>> I don't see why. It is perfectly possible to do in some cases. Further an
>> ability to prove depends on the power for the formal system used.
>> Provability is equivalent to halting problem, but your argument that the
>> prover must be a computing system is wrong. It can be a human being,
>> Martians or almighty God Himself.
> 
> A formal system of proof /is/ a computing system.

Any system is. I used "computing system" in the narrow sense of a computer.
The point was that the prover is not necessarily a computer. Therefore it
can prove things which a computer could not.

>>> Talking about a
>>> "formal" specification is also silly, unless the specification
>>> is equivalent to the program.
>>
>> Nope. Consider examples above.
>>
>>> It even ceases to be useful as a communication between client and
>>> programmer, because for any client who has developed that kind of
>>> a specification, the programming is redundant and the client has
>>> no need to hire a programmer.
>>
>> Nonsense. Consider the specification: write a program playing chess
>> statistically better than human. It is known to be possible, but this
>> specification is not even close to a program that indeed does this.
>>
> This is an informal specification.

It can be formalized to a level acceptable in scientific work. The rules
of the chess game are well defined. "Statistically better" can also be
defined: sample size, rules to make it representative, etc.
 
>> Furthermore, consider a specification: write a program that implements
>> integer arithmetic. This is known to be impossible (because incomputable).
>> Where is a redundancy then?
>>
> This is also an informal specification.

It can be formalized.

> Any formal specification could be translated into a computer language.

Wrong. Proof: take the predicate Halt.
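For concreteness, the standard diagonal argument can be sketched in a few
lines of Python (an illustrative sketch only; `halts` stands for any
claimed decider, since no real one can exist):

```python
def make_diagonal(halts):
    """Given any claimed halting decider `halts`, build a program
    that does the opposite of whatever the decider predicts for it."""
    def diagonal():
        if halts(diagonal):   # decider predicts "halts"...
            while True:       # ...so diagonal loops forever
                pass
        # decider predicts "loops", so diagonal returns immediately
    return diagonal

# No candidate decider survives its own diagonal. For example, a decider
# that claims every program loops is refuted by simply running d:
d = make_diagonal(lambda prog: False)
d()  # returns at once, contradicting the decider's "loops" verdict
```

The same construction defeats any total `halts`, so the predicate Halt is a
formal specification with no implementation in any programming language.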

>> A program is correct when it meets its specification.
>> When you do not know the  specification, that is
>> your subjective problem.
> 
> I think this is your main point, but I think you are wrong about it.
> Sometimes a program is a vehicle to find a proper specification.

How does it make my point wrong? Any program is either correct or not.
Consider the set of all programs. This is a finite set, since the memory
size is limited. Are you saying that this set contains programs that
are both correct and incorrect or neither? Just because you cannot say if
they are? What if somebody else could say it?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: ··················@gmail.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <25880d20-73dc-4e81-9412-1ac8b7a53742@g20g2000vba.googlegroups.com>
On May 2, 4:07 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Sat, 2 May 2009 11:29:43 -0700 (PDT), ··················@gmail.com
> wrote:
>
>
>
> > On May 2, 1:33 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> > wrote:
> >> On Sat, 02 May 2009 08:54:32 -0700, Ray Dillinger wrote:
> >>> Tamas K Papp wrote:
>
> >>>> Don't think like a mathematician, think like an engineer.  Something
> >>>> that "seems to be good enough" frequently qualifies as a solution in
> >>>> some situations.
>
> >>> Right.  This was, sort of, my point.
>
> >>> Other people here have bandied about the idea of "proving" that a
> >>> program conforms to a specification.  Equivalently, people have
> >>> talked about a "formal specification."  
>
> >>> I maintain that these things are not possible for any specification
> >>> formalism not equivalent to a programming formalism.
>
> >> What is wrong with the formalism of to say arithmetic? Let we want to
> >> specify modular integer addition. Is it impossible to prove correctness of
> >> an implementation?
>
> > There are examples of problems that can be described formally yet
> > cannot be solved formally.
>
> Yes
>
> > An implementation is a possible solution.
>
> An implementation is a solution. When it is impossible, then it is not a
> solution, because it does not exist.
>

EmphAsis on the wrong sylAble.
An implementation is a possible solution, of which there are many.

> >>> When it is possible to "prove" anything about programs conforming
> >>> to specifications, or to develop "formal" specifications with
> >>> rigorous semantics, then developing specifications is absolutely
> >>> equivalent to programming and therefore a redundant and useless
> >>> effort.
>
> >> Untrue. Specification is not equivalent to an implementation. There can be
> >> many or none implementations of a specification. There is no any
> >> equivalence.
>
> > Documentation is part of implementation. If my documentation matches
> > what my program does, then it is /a/ 'correct' program.
>
> No, then it possibly is a correct documentation. There exist well
> documented incorrect programs, obviously.
>

Is it an incorrect program or is it an improperly documented correct
program?
Or is it a properly documented correctly implemented program that
doesn't do the job that we want it to do?

Notice that there is a distinction between a /well/ documented program
and a /perfectly/ documented program.

> >>> Specifications which are not useless are non-formal descriptions of
> >>> the users' informally held needs and desires.
>
> >> So a specification of Boolean AND is useless? Come on!
>
> > You don't need to specify boolean AND, as it is most likely specified
> > elsewhere (boolean AND is defined).
>
> If somebody has defined it, then it was useful for him. q.e.d.
>



> >>> You look at the program and the specification,
> >>> apply human understanding instead of any formal method, and you
> >>> see whether you *think* they say the same thing.  
>
> >> This argument is irrelevant due to its universality. You look at the most
> >> rigorous proof of anything and think it is true. At the end of any chain of
> >> formalisms always sits a human being with his fuzzy logic.
>
> > How is the human interpretation of a computer program irrelevant?
>
> The argument is irrelevant. I said nothing about human interpretation of a
> program.
>

Yes you did. "always sits a human being with his fuzzy logic."

Sounds like human interpretation of the program to me.

> > A
> > programming language /is/ an interface between a human (and his fuzzy
> > logic) and a computer.
>
> Everything we are talking, can talk, thinking or can think is that thing.
> The argument in its core is about perception. An individual may be aware of
> anything only through perception. It is a general philosophic issue
> irrelevant to the discussion.
>

Who are you to be the arbiter of relevance?
We're asking 'what is the best way to interface with a computer',
aren't we?


> >>> My point, originally to Dmitry, is that speaking of "proving" that
> >>> a program conforms to a specification is silly, unless the
> >>> specification is equivalent to the program.
>
> >> I don't see why. It is perfectly possible to do in some cases. Further an
> >> ability to prove depends on the power for the formal system used.
> >> Provability is equivalent to halting problem, but your argument that the
> >> prover must be a computing system is wrong. It can be a human being,
> >> Martians or almighty God Himself.
>
> > A formal system of proof /is/ a computing system.
>
> Any system is. I used "computing system" in the narrow sense of a computer.
> The point was that the prover is not necessary a computer. Therefore it can
> prove things which a computer could not.
>

Like?

>
>
> >>> Talking about a
> >>> "formal" specification is also silly, unless the specification
> >>> is equivalent to the program.
>
> >> Nope. Consider examples above.
>
> >>> It even ceases to be useful as a communication between client and
> >>> programmer, because for any client who has developed that kind of
> >>> a specification, the programming is redundant and the client has
> >>> no need to hire a programmer.
>
> >> Nonsense. Consider the specification: write a program playing chess
> >> statistically better than human. It is known to be possible, but this
> >> specification is not even close to a program that indeed does this.
>
> > This is an informal specification.
>
> It can be formalized to the level acceptable in a scientific work. Rules of
> the chess game are well defined. "Statistically better" can also be
> defined: sample size, rules to make it representative etc.
>

But you didn't formalize it. This leads me to believe there is no such
thing as a formalized specification. This is because as soon as you
fully formalize it, you have an implementation.

> >> Furthermore, consider a specification: write a program that implements
> >> integer arithmetic. This is known to be impossible (because incomputable).
> >> Where is a redundancy then?
>
> > This is also an informal specification.
>
> It can be formalized.
>
But isn't.

> > Any formal specification could be translated into a computer language.
>
> Wrong. Proof: take the predicate Halt.

OK. I take the predicate Halt; now what do I do with it?
Are you talking about the halting problem?

Would it have been clearer if I had said 'formal specification
language'?

>
> >> A program is correct when it meets its specification.
> >> When you do not know the  specification, that is
> >> your subjective problem.
>
> > I think this is your main point, but I think you are wrong about it.
> > Sometimes a program is a vehicle to find a proper specification.
>
> How does it make my point wrong?

>Any program is either correct or not.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is where you are wrong.

> Are you saying that this set contains programs that are both correct and incorrect or neither?

Yes, and added to that, a program can be both correct and incorrect,
entirely dependent on the documentation accompanying it.

> Just because you cannot say if they are?

No, because we have a set of inputs and expected outputs.
If the outputs of the program do not match the expected outputs, it is
incorrect.

Expectations are determined by documentation.
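That view can be made concrete with a small sketch (the names and numbers
here are illustrative, not from any real tool): correctness relative to
documentation is just checking outputs against the documented
input/expected-output pairs.

```python
def program(x):
    """The implementation under test (doubles its input)."""
    return x * 2

# "The manual": documented inputs mapped to their expected outputs.
documented_cases = {0: 0, 3: 6, -4: -8}

def is_correct(prog, cases):
    """Correct relative to this documentation: every documented
    input produces the documented output."""
    return all(prog(inp) == out for inp, out in cases.items())

print(is_correct(program, documented_cases))          # True against these docs
print(is_correct(lambda x: x + 2, documented_cases))  # False: 0 -> 2, not 0
```

Change the dictionary and the verdict changes, without touching the code,
which is exactly the point about expectations living in the documentation.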

> What if somebody else could say it?

Well then he probably has the manual.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1jiqo6nvev9re.a2jzr5jp4i3p.dlg@40tude.net>
On Sat, 2 May 2009 14:16:12 -0700 (PDT), ··················@gmail.com
wrote:

> On May 2, 4:07 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:

>> Are you saying that this set contains programs that are both correct and incorrect or neither?
> 
> Yes, and added to that, a program can be both correct and incorrect,
> entirely dependent on the documentation accompanying it.

Great. That nicely ends the discussion about run-time type error checks.
Since the program is neither correct nor incorrect, or maybe both, it cannot
have [type] errors, thus checking them at run time is still rubbish. Which
was my point.

>> Just because you cannot say if they are?
> 
> No, because we have a set of inputs and expected outputs.
> If the outputs of the program do not match the expected outputs, it is
> incorrect.

This contradicts what you said before. Since all programs have finite
(valued and numbered) inputs and outputs, which either match or do not
match each other, trivially any program is either correct or not. Because
according to you that is untrue, and programs hopefully still have their
inputs and outputs, matching them does not determine correctness. Or is it
that a program can have inputs and not have them at the same time?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: ··················@gmail.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <af18a152-4ac1-4d43-9d0e-615fa7c0de97@l5g2000vbc.googlegroups.com>
On May 3, 4:29 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Sat, 2 May 2009 14:16:12 -0700 (PDT), ··················@gmail.com
> wrote:
>
> > On May 2, 4:07 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> > wrote:
> >> Are you saying that this set contains programs that are both correct and incorrect or neither?
>
> > Yes, and added to that, a program can be both correct and incorrect,
> > entirely dependent on the documentation accompanying it.
>
> Great. That nicely ends the discussion about run-time type error checks.
> Since the program is neither correct or incorrect or maybe both, it cannot
> have [type] errors, thus checking them at run-time is still rubbish. Which
> was my point.
>

I think I said that:
Sometimes it is correct (if it matches documentation),
Other times it is incorrect (if it does not match documentation)
And still other times it is neither (if it has no documentation).

If one version of a program has a certain set of documentation, and
another has a different set, it could potentially be both.

I don't think I was making a point about type checking, specifically,
just pointing out that your logic about program correctness in
relation to type checking is flawed.

> >> Just because you cannot say if they are?
>
> > No, because we have a set of inputs and expected outputs.
> > If the outputs of the program do not match the expected outputs, it is
> > incorrect.
>
> This is in contradiction to what you said before. Since all programs have
> finite (valued and numbered) inputs and outputs, which can either match or
> do not match each other, therefore, trivially, any program is either
> correct or not. Because according to you that is untrue, and programs
> hopefully still have their inputs and outputs, then matching them does not
> determine correctness. Or maybe it is so that a program can have inputs and
> don't have them at the same time?
>

Mmm. no. My expectations are dependent on the documentation
accompanying the program.

You cannot objectively (of course you /can/ subjectively) tell the
difference between a bugged program and a program with incorrect
documentation. (This is actually tautological).
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1dkc36ny3vfkh$.1q1ewbwpdsvdv.dlg@40tude.net>
On Mon, 4 May 2009 14:36:20 -0700 (PDT), ··················@gmail.com
wrote:

> On May 3, 4:29 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:

>> Great. That nicely ends the discussion about run-time type error checks.
>> Since the program is neither correct or incorrect or maybe both, it cannot
>> have [type] errors, thus checking them at run-time is still rubbish. Which
>> was my point.
> 
> I think I said that:
> Sometimes it is correct (if it matches documentation),
> Other times it is incorrect (if it does not match documentation)
> And still other times it is neither (if it has no documentation).

Sure. Program correctness postulates the existence of a specification. Even
if you occasionally don't know it, it is still assumed to be there. Now, if
you change the specification, that automatically changes the program. If you
change the specification of sine to cosine, the result would be not a sine,
but a cosine implemented by some poor idiot.
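A small sketch of that point (illustrative Python, assuming nothing beyond
the standard library): the same code flips between correct and incorrect
when only the specification it is judged against changes.

```python
import math

def implementation(x):
    """The code under judgment; it never changes."""
    return math.cos(x)

def meets(spec, impl, points):
    """Judge the implementation against a reference specification
    at a few sample points."""
    return all(abs(spec(x) - impl(x)) < 1e-9 for x in points)

points = [0.0, 0.5, 1.0]
print(meets(math.cos, implementation, points))  # True: correct as a cosine
print(meets(math.sin, implementation, points))  # False: incorrect as a sine
```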

> I don't think I was making a point about type checking, specifically,
> just pointing out that your logic about program correctness in
> relation to type checking is flawed.

No it is not, because it works for your model as well. If you don't have
correctness, you don't have type errors. As you wrote, it is sometimes an
error, sometimes not, and sometimes nobody knows what.
 
>>>> Just because you cannot say if they are?
>>
>>> No, because we have a set of inputs and expected outputs.
>>> If the outputs of the program do not match the expected outputs, it is
>>> incorrect.
>>
>> This is in contradiction to what you said before. Since all programs have
>> finite (valued and numbered) inputs and outputs, which can either match or
>> do not match each other, therefore, trivially, any program is either
>> correct or not. Because according to you that is untrue, and programs
>> hopefully still have their inputs and outputs, then matching them does not
>> determine correctness. Or maybe it is so that a program can have inputs and
>> don't have them at the same time?
> 
> Mmm. no. My expectations are dependent on the documentation
> accompanying the program.
> 
> You cannot objectively (of course you /can/ subjectively) tell the
> difference between a bugged program and a program with incorrect
> documentation. (This is actually tautological).

Yes. I don't care. Exactly as the compiler does not when it sees an illegal
program. It waves its hands. It is up to the programmer to make the program
conform with the language, specifications etc. Everybody has to do his
work. That keeps the problem manageable.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ueiv82c9h.fsf@STRIPCAPStelus.net>
Ray Dillinger <····@sonic.net> writes:
> In order to *formally* state user expectations and needs, you 
> need a specification as detailed as your implementation.
> Developing the specification must cope with the same issues 
> of managing complexity, avoiding misinterpretation, etc, that 
> the implementation effort must cope with.  Along the way, the 
> process of developing the specification in a specification 
> formalism is equally subject to the exact same kind of mistakes 
> made when developing a program in a programming formalism. 

In defense of the formal methods approach: the reason that trying to specify
things separately from an implementation is actually helpful is not that a
spec is some magical description that you prove your implementation conforms
to, giving you perfect confidence of correctness.

Instead, it is that the effort of trying to derive a spec forces one to
analyze the problem with great care, trying to distill its essence. One
must persuade one's audience that the spec truly captures what one is
after.

Then, with that agreed-upon specification, the non-trivial effort of
proving or deriving a conforming implementation is done.

The result is that you have much more confidence that the resulting
implementation is "correct".

However, the reason for this is not that the specification notation is
automatically superior, or that it inherently captures requirements.

The reason is simply that working really really hard at a problem tends to
make people solve it better than if they didn't work so hard at it.

Formal methods are just a way of capturing the results of such work more
systematically.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090502081508.GO3862@gildor.inglorion.net>
On Fri, May 01, 2009 at 04:56:51PM -0700, Ray Dillinger wrote:
> 
> Certainly.  A formal statement of user expectations and user needs
> in such a domain can be done in Fortran, Pascal, APL, or any of a 
> number of other formal languages.  If you hand me a fully formal 
> statement of user expectations and needs, I will simply compile 
> it into executable code and hand you the "program".  This will 
> give you an instant proof that the program does what the 
> specification says, given an assumption that the compiler's not 
> buggy; but it will in no wise prove that the specification itself 
> was correct.

Perhaps this is a matter of perspective. To me, the specification is 
that by which we judge the program. It is the contract between the 
supplier and the consumer: the supplier promises to deliver a program 
that correctly implements the specification.

I would also regard it as a mistake to write a specification in Fortran, 
Pascal, or APL. I am not convinced that these languages can express 
everything I would want to express in a specification, and I am 
convinced that it will be difficult to read a specification written in 
one of those languages. Since I see the specification as a sort of 
contract between humans, it is important that the specification be 
readable by humans. The most obvious choice of language, then, would be 
a language for human-human interaction, not a language for 
human-computer interaction.

> In order to *formally* state user expectations and needs, you 
> need a specification as detailed as your implementation.

No. The specification describes, at a high level, the criteria by which 
we judge the program to be correct. It mandates some behavior and may 
prohibit other behavior. If the program exhibits the mandatory behavior 
and does not exhibit the prohibited behavior, it is a correct 
implementation of the specification.

The program itself must concern itself with numerous details that the 
specification does not have to, and in fact should not, concern itself 
with. For example, an implementor must make choices about types, data 
structures, and algorithms. To the specification, only the resulting 
behavior should matter.
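That separation can be sketched in a few lines (illustrative Python, not
from the post): the spec constrains only observable behavior, so any
algorithm whatsoever that satisfies the property is a correct
implementation.

```python
from collections import Counter

def meets_sort_spec(sort, xs):
    """Spec for sorting, stated purely in terms of behavior:
    the output is ordered and is a permutation of the input.
    Nothing is said about the algorithm or data structures used."""
    ys = sort(list(xs))
    ordered = all(a <= b for a, b in zip(ys, ys[1:]))
    permutation = Counter(ys) == Counter(xs)
    return ordered and permutation

# Quicksort, mergesort, or the built-in all pass; a broken "sort" fails:
print(meets_sort_spec(sorted, [3, 1, 2]))         # True
print(meets_sort_spec(lambda xs: [], [3, 1, 2]))  # False: loses elements
```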

> Where the program does match the specification, the specification
> might as well have been expressed in the executable implementation 
> language in the first place.  If the specification is sufficiently
> detailed to formally test the program against, then it is just as 
> detailed as the program.  If not, then no formal proof of correctness
> can be constructed. 

Are we talking about testing, or are we talking about proving 
correctness?

In my view, proving that a program implements the specification would be 
ideal, but isn't feasible in practice for many Real World programs. So, 
instead of requiring that the program has been soundly and completely 
shown to implement the specification before we believe it is correct, we 
adopt the opposite approach: we assume the implementor has made an 
honest effort to implement the specification, and assume the program to 
be correct unless we find it to deviate from the specification.

Regards,

Bob

-- 
Coal powered the first steam engines, whose killer app was pumping
stagnant water out of coal mines. It powered the railroads, whose killer
app was moving coal.
	-- Bruce Sterling

From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <760v5vF19heb4U4@mid.individual.net>
Dmitry A. Kazakov wrote:
> Even if the specifications are informal or implicit, that alone does not
> imply absence of correctness. It is hard to present a case where the
> specifications could not be formalized at all.

It's actually very easy.

See http://doi.acm.org/10.1145/379486.379512

It's not the first time I post the link here in this thread. Maybe it's 
a good idea to read it for a change.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Ray Dillinger
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49fb8dae$0$95498$742ec2ed@news.sonic.net>
Pascal Costanza wrote:

> See http://doi.acm.org/10.1145/379486.379512
 
> It's not the first time I post the link here in this thread. Maybe it's
> a good idea to read it for a change.

Pascal, don't berate people for not reading something that's hidden 
behind a membership wall.  ACM dues are not particularly cheap, and 
if we're not members already, it isn't worth it to read one paper.

                                Bear
From: Dimiter "malkia" Stanev
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <gtg4hi$bg$1@malkia.motzarella.org>
Ray Dillinger wrote:
> Pascal Costanza wrote:
> 
>> See http://doi.acm.org/10.1145/379486.379512
>  
>> It's not the first time I post the link here in this thread. Maybe it's
>> a good idea to read it for a change.
> 
> Pascal, don't berate people for not reading something that's hidden 
> behind a membership wall.  ACM dues are not particularly cheap, and 
> if we're not members already, it isn't worth it to read one paper.
> 
>                                 Bear
> 

FYI: I can read that for free, direct PDF link:

http://delivery.acm.org/10.1145/380000/379512/p18-smith.pdf?key1=379512&key2=7254221421&coll=GUIDE&dl=GUIDE&CFID=32892070&CFTOKEN=20641012
From: George Neuner
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <4gfnv4l9detv5t99hd1e1t3itiai036kvt@4ax.com>
On Fri, 01 May 2009 17:36:02 -0700, "Dimiter \"malkia\" Stanev"
<······@mac.com> wrote:

>Ray Dillinger wrote:
>> Pascal Costanza wrote:
>> 
>>> See http://doi.acm.org/10.1145/379486.379512
>>  
>>> It's not the first time I post the link here in this thread. Maybe it's
>>> a good idea to read it for a change.
>> 
>> Pascal, don't berate people for not reading something that's hidden 
>> behind a membership wall.  ACM dues are not particularly cheap, and 
>> if we're not members already, it isn't worth it to read one paper.
>> 
>>                                 Bear
>> 
>
>FYI: I can read that for free, direct PDF link:
>
>http://delivery.acm.org/10.1145/380000/379512/p18-smith.pdf?key1=379512&key2=7254221421&coll=GUIDE&dl=GUIDE&CFID=32892070&CFTOKEN=20641012

I can't - and I have an ACM library membership.  When I click on the
PDF link I am invited to log in.

George
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1aatsita2q13p.ctkd3x93pgzc$.dlg@40tude.net>
On Fri, 01 May 2009 21:00:15 +0200, Pascal Costanza wrote:

> It's not the first time I post the link here in this thread. Maybe it's 
> a good idea to read it for a change.

You should post links to freely accessible copies.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <762kf0F1aqlidU2@mid.individual.net>
Dmitry A. Kazakov wrote:
> On Fri, 01 May 2009 21:00:15 +0200, Pascal Costanza wrote:
> 
>> It's not the first time I post the link here in this thread. Maybe it's 
>> a good idea to read it for a change.
> 
> You should post links to freely accessible copies.

Try scholar.google.com and search for "Brian Cantwell Smith The Limits 
of Correctness". Second result, "all 3 versions", 2nd or 3rd link.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <uzldv6utg.fsf@STRIPCAPStelus.net>
Pascal Costanza <··@p-cos.net> writes:

> Dmitry A. Kazakov wrote:
> > On Fri, 01 May 2009 21:00:15 +0200, Pascal Costanza wrote:
> >
> >> It's not the first time I post the link here in this thread. Maybe it's a
> >> good idea to read it for a change.
> > You should post links to freely accessible copies.
> 
> Try scholar.google.com and search for "Brian Cantwell Smith The Limits of
> Correctness". Second result, "all 3 versions", 2nd or 3rd link.

Doesn't work for me. The links I find are protected.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <763veiF1b2v9cU1@mid.individual.net>
Ray Blaak wrote:
> Pascal Costanza <··@p-cos.net> writes:
> 
>> Dmitry A. Kazakov wrote:
>>> On Fri, 01 May 2009 21:00:15 +0200, Pascal Costanza wrote:
>>>
>>>> It's not the first time I post the link here in this thread. Maybe it's a
>>>> good idea to read it for a change.
>>> You should post links to freely accessible copies.
>> Try scholar.google.com and search for "Brian Cantwell Smith The Limits of
>> Correctness". Second result, "all 3 versions", 2nd or 3rd link.
> 
> Doesn't work for me. The links I find are protected.

Among others I find these two links:

http://sdg.csail.mit.edu/6.894/dnjPapers/p18-smith.pdf
http://www.cs.wm.edu/~coppit/other-papers/p18-smith.pdf

They don't work?


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <uab5u667p.fsf@STRIPCAPStelus.net>
Pascal Costanza <··@p-cos.net> writes:
> Among others I find these two links:
> 
> http://sdg.csail.mit.edu/6.894/dnjPapers/p18-smith.pdf
> http://www.cs.wm.edu/~coppit/other-papers/p18-smith.pdf
> 
> They don't work?

Got it, thanks.

Silly me, I was clicking on the title link, which was to portal.acm.org. The
MIT link was right beside it, invisible to my eyes.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1m0a54ityirpf$.di5t6noq9oi0.dlg@40tude.net>
On Fri, 01 May 2009 21:00:15 +0200, Pascal Costanza wrote:

> Dmitry A. Kazakov wrote:

>> Even if the specifications are informal or implicit, that alone does not
>> imply absence of correctness. It is hard to present a case where the
>> specifications could not be formalized at all.
> 
> It's actually very easy.
> 
> See http://doi.acm.org/10.1145/379486.379512

The article does not show that. It merely says that the problem space is
often informal because it refers to human activity etc. That tells nothing
about whether it could be formalized, and even less about whether the
specifications can be.

In fact, because the computer is a finite system, a specification can
certainly be formalized. The question whether this specification is the one
you or I wanted (in whatever sense) is irrelevant.

The author states a rather obvious thing: the specification might be wrong.
So what? My salary might be wrong too. That does not mean that I don't have
a bank account.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <765b62F1b6b7nU1@mid.individual.net>
Dmitry A. Kazakov wrote:
> On Fri, 01 May 2009 21:00:15 +0200, Pascal Costanza wrote:
> 
>> Dmitry A. Kazakov wrote:
> 
>>> Even if the specifications are informal or implicit, that alone does not
>>> imply absence of correctness. It is hard to present a case where the
>>> specifications could not be formalized at all.
>> It's actually very easy.
>>
>> See http://doi.acm.org/10.1145/379486.379512
> 
> The article does not show that. It merely says that the problem space is
> often informal because it refers to human activity etc. That tells nothing
> about whether it could be formalized and even less about if the
> specifications can be.
> 
> In fact, because the computer is a finite system, a specification can
> certainly be formalized. The question whether this specification is one you
> or me wanted (in whatever sense) is irrelevant.

I believe that this is the only relevant question.

Enough said.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <r59gqjpvxa65$.65gosdpultxb$.dlg@40tude.net>
On Sun, 03 May 2009 12:49:38 +0200, Pascal Costanza wrote:

> Dmitry A. Kazakov wrote:
>> On Fri, 01 May 2009 21:00:15 +0200, Pascal Costanza wrote:
>> 
>>> Dmitry A. Kazakov wrote:
>> 
>>>> Even if the specifications are informal or implicit, that alone does not
>>>> imply absence of correctness. It is hard to present a case where the
>>>> specifications could not be formalized at all.
>>> It's actually very easy.
>>>
>>> See http://doi.acm.org/10.1145/379486.379512
>> 
>> The article does not show that. It merely says that the problem space is
>> often informal because it refers to human activity etc. That tells nothing
>> about whether it could be formalized and even less about if the
>> specifications can be.
>> 
>> In fact, because the computer is a finite system, a specification can
>> certainly be formalized. The question whether this specification is one you
>> or me wanted (in whatever sense) is irrelevant.
> 
> I believe that this is the only relevant question.

Relevant to what? You reject the existence of specifications only because you
haven't written them? Does the number 3456218667 exist? Did it exist before
you read it? What about this number plus 23?

Consider yourself the designer of a printer that prints numbers. Now your
argument is that no printer can exist because nobody can tell which number
you want it to print. Is that your point?

Returning to the topic of dynamic typing. According to you a type error is
not an error, since there are no errors at all. Hence a dynamic type check is
not an error check. This way or another, with correctness or not, my point
stands!

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <766a59F1aoaorU1@mid.individual.net>
Dmitry A. Kazakov wrote:
> On Sun, 03 May 2009 12:49:38 +0200, Pascal Costanza wrote:
> 
>> Dmitry A. Kazakov wrote:
>>> On Fri, 01 May 2009 21:00:15 +0200, Pascal Costanza wrote:
>>>
>>>> Dmitry A. Kazakov wrote:
>>>>> Even if the specifications are informal or implicit, that alone does not
>>>>> imply absence of correctness. It is hard to present a case where the
>>>>> specifications could not be formalized at all.
>>>> It's actually very easy.
>>>>
>>>> See http://doi.acm.org/10.1145/379486.379512
>>> The article does not show that. It merely says that the problem space is
>>> often informal because it refers to human activity etc. That tells nothing
>>> about whether it could be formalized and even less about if the
>>> specifications can be.
>>>
>>> In fact, because the computer is a finite system, a specification can
>>> certainly be formalized. The question whether this specification is one you
>>> or me wanted (in whatever sense) is irrelevant.
>> I believe that this is the only relevant question.
> 
> Relevant to what? You reject existence specifications only because you
> haven't write them? Does the number 3456218667 exist? Did it before you
> have read it? What about this number plus 23?
> 
> Consider yourself the designer of a printer that prints numbers. Now your
> argument is that no printer can exist because nobody can tell which number
> you want it to print. Is it your point?
> 
> Returning to the topic of dynamic typing. According to you a type error is
> not an error, since there is no errors at all. Hence dynamic type check is
> not an error check. This way or another, with correctness or not, my point
> stands!

You're starting to say very weird things now.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Tamas K Papp
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <766b5qF1bcpd5U1@mid.individual.net>
On Sun, 03 May 2009 21:38:17 +0200, Pascal Costanza wrote:

> Dmitry A. Kazakov wrote:
>> Returning to the topic of dynamic typing. According to you a type error
>> is not an error, since there is no errors at all. Hence dynamic type
>> check is not an error check. This way or another, with correctness or
>> not, my point stands!
> 
> You're starting to say very weird things now.

The diagnosis is still only borderline.  He did finish "my point
stands!" with an exclamation mark, which is a telltale sign.  But
still, there is no manic laughter, so we can't be sure; he should have
ended it like this:

"This way or another, with correctness or not, my point stands!
MWHAHAHA!  And they told _me_ I was crazy!!!  I will show them!!!"

Tamas
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49fe05c1$0$5913$607ed4bc@cv.net>
Tamas K Papp wrote:
> On Sun, 03 May 2009 21:38:17 +0200, Pascal Costanza wrote:
> 
>> Dmitry A. Kazakov wrote:
>>> Returning to the topic of dynamic typing. According to you a type error
>>> is not an error, since there is no errors at all. Hence dynamic type
>>> check is not an error check. This way or another, with correctness or
>>> not, my point stands!
>> You're starting to say very weird things now.
> 
> The diagnosis is still only borderline.  He did finish "my point
> stands!" with an exclamation mark, which is a telltale sign.  But
> still, there is no manic laughter, so we can't be sure, he should have
> ended it like this:
> 
> "This way or another, with correctness or not, my point stands!
> MWHAHAHA!  And they told _me_ I was crazy!!!  I will show them!!!"
> 

Awwww, you miss me!

kzo
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <760uq1F19heb4U2@mid.individual.net>
Dmitry A. Kazakov wrote:

> At least I don't claim it nonexistent. The core problem, why your views led
> you to dismiss the notion of correctness, is a conscious or unconscious
> desire to check for correctness at run time.

I don't necessarily want to check for correctness at all. I want my 
programs to do useful things. Whether that entails degrees of 
correctness or not depends on a lot of other factors.

> 1. correctness is stated for the problem space in some problem space
> language (formal or not). The power of this language is typically far
> beyond Turing completeness. This is why in most cases correctness cannot
> be checked or even stated using a programming language.

...nor any other formal language.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <18rrwg8xkc3h5.uwstpdad7vcv.dlg@40tude.net>
On Fri, 01 May 2009 20:53:53 +0200, Pascal Costanza wrote:

> Dmitry A. Kazakov wrote:
> 
>> At least I don't claim it nonexistent. The core problem, why your views led
>> you to dismiss the notion of correctness, is a conscious or unconscious
>> desire to check for correctness at run time.
> 
> I don't necessarily want to check for correctness at all. I want my 
> programs to do useful things.

You replaced "correct" with "useful".

> Whether that entails degrees of 
> correctness or not depends on a lot of other factors.

Well, if you want to make correctness measurable in [0,1] rather than
Boolean, I don't object. That does not make correctness nonexistent.
Nor will it prevent antinomies of self-correctness checks.

>> 1. correctness is stated for the problem space in some problem space
>> language (formal or not). The power of this language is typically far
>> beyond Turing completeness. This is why in most cases correctness cannot
>> be checked or even stated using a programming language.
> 
> ...nor any other formal language.

Or so. But that does not make a correct program nonexistent. What are you
going to develop otherwise?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Don Geddis
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <87eiv71i4q.fsf@geddis.org>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> wrote on Thu, 30 Apr 2009:
> correctness is stated for the problem space in some problem space language
> (formal or not). The power of this language is typically far beyond Turing
> completeness.

There's a language with a power "far beyond Turing completeness"?

Pray tell, what language is that?

Maybe I wasn't paying attention in class, but I recall being taught that
Turing machines were kind of at the top of the power hierarchy.  (At least,
in the real world -- leaving aside magic oracles and the like.)

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
Until you stalk and overrun, you can't devour anyone.
	-- Tiger aphorism, by Hobbes (Calvin and Hobbes, 11-22-2005)
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1coddqdq8txji.1lk6fs4mq65on.dlg@40tude.net>
On Sat, 02 May 2009 11:20:37 -0700, Don Geddis wrote:

> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> wrote on Thu, 30 Apr 2009:
>> correctness is stated for the problem space in some problem space language
>> (formal or not). The power of this language is typically far beyond Turing
>> completeness.
> 
> There's a language with a power "far beyond Turing completeness"?
> 
> Pray tell, what language is that?

Peano arithmetic. Or just take a language wherein Halt is a predicate.

> Maybe I wasn't paying attention in class, but I recall being taught that
> Turing machines were kind of at the top of the power hierarchy.  (At least,
> in the real world -- leaving aside magic oracles and the like.)

No, it is almost at the bottom. Everything we are used to dealing with is
incomputable: real addition, clocks, random number generators...

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <u4ow38a3x.fsf@STRIPCAPStelus.net>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
> No, it is almost at the bottom. Everything we are used to deal with is
> incomputable, real addition, clock, random number generator...

Well, we are certainly using computable versions of them on our finite machines.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <igngm08b17nn$.fxseiam8y9wy.dlg@40tude.net>
On Sat, 02 May 2009 21:32:06 GMT, Ray Blaak wrote:

> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:

>> No, it is almost at the bottom. Everything we are used to deal with is
>> incomputable, real addition, clock, random number generator...
> 
> Well, we are certainly using computable versions of them on our finite machines.

No, we are using computable models of them. These models are fundamentally
inadequate. Some of them are not even asymptotically adequate. The
programmer is advised to remember this when using them. And it is part
of the specification to describe when and how these models can be used.
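(An editorial aside, not from the original post: the usual computable model of
the reals, IEEE-754 doubles, illustrates this inadequacy in a couple of lines.
True real addition is exact and associative; its floating-point model is
neither.)

```python
# IEEE-754 doubles are the standard computable model of real addition.
# Unlike true real addition, the model is neither exact nor associative.
a, b, c = 0.1, 0.2, 0.3

print(a + b == c)                    # False: 0.1 + 0.2 -> 0.30000000000000004
print((a + b) + c == a + (b + c))    # False: rounding breaks associativity
```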

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <u7i0xop3v.fsf@STRIPCAPStelus.net>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
> >> No, it is almost at the bottom. Everything we are used to deal with is
> >> incomputable, real addition, clock, random number generator...
> > 
> > Well, we are certainly using computable versions of them on our finite machines.
> 
> No, we are using computable models of. These models are fundamentally
> inadequate. Some of them are not even asymptotically adequate. 

Well, they're good enough for me, my bank, my company's payroll system and
pretty much every computer system in use today.


> The programmer is advised to remember this when he uses them. And it is a
> part of the specification to describe when and how these models can be used.

That is true.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Don Geddis
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <871vr4suxi.fsf@geddis.org>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> wrote on Sat, 2 May 2009 :
> On Sat, 02 May 2009 11:20:37 -0700, Don Geddis wrote:
>> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> wrote on Thu, 30 Apr 2009:
>>> correctness is stated for the problem space in some problem space language
>>> (formal or not). The power of this language is typically far beyond Turing
>>> completeness.
>> 
>> There's a language with a power "far beyond Turing completeness"?
>> Pray tell, what language is that?
>
> Peano arithmetic. Or just take a language wherein Halt is a predicate.

Turing Machines have a Halt state as well.

>> Turing machines were kind of at the top of the power hierarchy.  (At least,
>> in the real world -- leaving aside magic oracles and the like.)
>
> No, it is almost at the bottom. Everything we are used to deal with is
> incomputable, real addition, clock, random number generator...

So, can you tell me of a "problem space language" (as you stated in your
original posting), which CAN deal with "incomputable, real addition", etc.,
but which a Turing Machine CANNOT handle?

Still very curious about this language you have in mind, with a "power [...]
far beyond Turing completeness".

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
Thirty years in captivity.  He's been out for, what, 13 years.  And he hasn't
re-offended.  He's going straight.  Which just goes to show that prison works.
	-- "David Brent" (aka Ricky Gervias from "The Office"), on his
	   "hero" Nelson Mandela; in a Microsoft UK training video, 2003
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <178z9ik3htbb8$.1hfiquh2kzjpg$.dlg@40tude.net>
On Mon, 04 May 2009 09:18:49 -0700, Don Geddis wrote:

> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> wrote on Sat, 2 May 2009 :
>> On Sat, 02 May 2009 11:20:37 -0700, Don Geddis wrote:

>>> Turing machines were kind of at the top of the power hierarchy.  (At least,
>>> in the real world -- leaving aside magic oracles and the like.)
>>
>> No, it is almost at the bottom. Everything we are used to deal with is
>> incomputable, real addition, clock, random number generator...
> 
> So, can you tell me of a "problem space language" (as you stated in your
> original posting), which CAN deal with "incomputable, real addition", etc.,
> but which a Turing Machine CANNOT handle?

> Still very curious about this language you have in mind, with a "power [...]
> far beyond Turing completeness".

The language of physics.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090517124327.550@gmail.com>
["Followup-To:" header set to comp.lang.lisp.]
On 2009-05-06, Dmitry A. Kazakov <·······@dmitry-kazakov.de> wrote:
> On Mon, 04 May 2009 09:18:49 -0700, Don Geddis wrote:
>
>> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> wrote on Sat, 2 May 2009 :
>>> On Sat, 02 May 2009 11:20:37 -0700, Don Geddis wrote:
>
>>>> Turing machines were kind of at the top of the power hierarchy.  (At least,
>>>> in the real world -- leaving aside magic oracles and the like.)
>>>
>>> No, it is almost at the bottom. Everything we are used to deal with is
>>> incomputable, real addition, clock, random number generator...
>> 
>> So, can you tell me of a "problem space language" (as you stated in your
>> original posting), which CAN deal with "incomputable, real addition", etc.,
>> but which a Turing Machine CANNOT handle?
>
>> Still very curious about this language you have in mind, with a "power [...]
>> far beyond Turing completeness".
>
> The language of physics.

The language of physics is mathematics. Mathematics is symbolic, and 
symbolic manipulation is a Turing process.

The symbols are considered to be endowed with semantics dealing with quantities
which are not computable, such as real numbers, or functions that are
inexpressible in symbolic form.

Physics itself cannot use its own symbolic language to represent even some
trivial physics problems. For instance, the evolution of the trajectories of N
bodies (``the N-body problem'') has only numerical solutions, nothing
expressible in the language of physics. No combination of symbols in that
language can represent the solution.
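(Editorial sketch, not from the post above: what "only numerical solutions"
looks like in practice. A semi-implicit Euler step for three planar bodies,
with unit masses and G = 1 assumed purely for illustration.)

```python
# There is no general closed-form three-body trajectory, so in practice
# one advances the system numerically. Semi-implicit Euler step for
# planar bodies; unit masses and G = 1 are assumed for illustration.
def step(pos, vel, dt):
    acc = []
    for i, (xi, yi) in enumerate(pos):
        ax = ay = 0.0
        for j, (xj, yj) in enumerate(pos):
            if i != j:
                dx, dy = xj - xi, yj - yi
                r3 = (dx * dx + dy * dy) ** 1.5
                ax += dx / r3          # pairwise Newtonian attraction
                ay += dy / r3
        acc.append((ax, ay))
    vel = [(vx + ax * dt, vy + ay * dt)
           for (vx, vy), (ax, ay) in zip(vel, acc)]
    pos = [(x + vx * dt, y + vy * dt)
           for (x, y), (vx, vy) in zip(pos, vel)]
    return pos, vel

pos = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vel = [(0.0, 0.0)] * 3
for _ in range(100):                   # 100 small approximate steps
    pos, vel = step(pos, vel, 0.001)
```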

That's because that language is weak; it has no power beyond Turing
completeness. The language can handle only a subset of the conceivable abstract
objects that it may be called upon to represent.

Maybe what you mean by ``language of physics'' is not symbolic math, but the
actual behavior of reality, like the actual trajectories of three or more
bodies.  However, that is the result of a Turing computable process. The
positions of these objects are not true real numbers; what unfolds in reality
is a very precise numerical solution, encoded in a finite number of quantum
states.

Computation over true real numbers, and arbitrary real-valued functions that
have no symbolic representation, is an illusion. It doesn't exist as far as we
know.
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <uzldpvqvz.fsf@STRIPCAPStelus.net>
Kaz Kylheku <········@gmail.com> writes:
> Maybe what you mean by ``language of physics'' is not symbolic math, but the
> actual behavior of reality, like the actual trajectories of three or more
> bodies.  However, that is the result of a Turing computable process.

That is not proven. Many of us seem to suspect that reality matches such
a process, but it could very well be that "real" reality uses a
different computability model.

> Computation over true real numbers, and arbitrary real-valued functions that
> have no symbolic representation, is an illusion. It doesn't exist as far as we
> know.

I agree with this, but have no proof. It's an "as far as we know" kind of
thing. It almost doesn't matter, though. Even if there exists a perfect
real number (perhaps a perfect circle), we only have a finite amount of
time to measure it approximately.

So approximate reals will just have to suffice.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <zeemtljvxv5c.c7sfgkkpzst3.dlg@40tude.net>
On Wed, 06 May 2009 21:54:43 GMT, Ray Blaak wrote:

> Even if there exist a perfect
> real number (perhaps a perfect circle), we only have a finite amount of
> time to measure it approximately.
> 
> So approximate reals will just have to suffice.

No, because you would have problems with the meaning of "approximate",
originally defined in terms of something that now "does not exist". You
would also have to switch from the theory of continuous functions on R to
unmanageable discrete "approximations". Just consider the problem of
proving that there is a solution to some differential equation in the space
of such "approximations".

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <uws8srhzq.fsf@STRIPCAPStelus.net>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
> On Wed, 06 May 2009 21:54:43 GMT, Ray Blaak wrote:
>
>> Even if there exist a perfect
>> real number (perhaps a perfect circle), we only have a finite amount of
>> time to measure it approximately.
>> 
>> So approximate reals will just have to suffice.
>
> No, because you would have problems with the meaning of "approximate"
> originally defined in terms of something that now "does not exist". 

We would continue to pretend that things are continuous and continue to
use continuous theories simply because they work better, are easier to
understand and give workable results.

Why would we do otherwise?

But our actual computers would still use approximate reals nonetheless.

Are you saying that you know of a "real" real in use somewhere? Quick!
Write it up. You will be famous.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <n8nlk7igdtul.16iuzvji87tle.dlg@40tude.net>
On Thu, 07 May 2009 16:32:44 GMT, Ray Blaak wrote:

> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>> On Wed, 06 May 2009 21:54:43 GMT, Ray Blaak wrote:
>>
>>> Even if there exist a perfect
>>> real number (perhaps a perfect circle), we only have a finite amount of
>>> time to measure it approximately.
>>> 
>>> So approximate reals will just have to suffice.
>>
>> No, because you would have problems with the meaning of "approximate"
>> originally defined in terms of something that now "does not exist". 
> 
> We would continue to pretend that things are continuous and continue to
> use continuous theories simply because they work better, are easier to
> understand and give workable results.

That is what we are doing now.

> Why would we do otherwise?

Egh, why are you asking me? That was your idea! (:-))

> But our actual computers would still use approximate reals nonetheless.

Sure, this is why I said that our computers are at the bottom of the hierarchy
of languages. The upper levels are inhabited by increasingly declarative
languages, blatantly non-constructive too. Like:

> Are you saying that you know of a "real" real in use somewhere? Quick!
> Write it up.

pi

> You will be famous.

Too late... (:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <uskjgr8dp.fsf@STRIPCAPStelus.net>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>> Are you saying that you know of a "real" real in use somewhere? Quick!
>> Write it up.
>
> pi

Pi has never been completely observed in the history of humanity. I
predict it never will be. If you do that you will be famous. Go ahead, try
it. I won't wait.

Sure, you have the symbol Pi, its definition, and rules for its use, but
that is not what I am referring to. I mean the actual numeric value.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Espen Vestre
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <m1ws8s1xy0.fsf@vestre.net>
Ray Blaak <········@STRIPCAPStelus.net> writes:

> Sure, you have the symbol Pi, its definition, and rules for its use, but
> that is not what I am referring to. I mean the actual numeric value.

Last time I checked, nobody had observed any "actual numeric values" at
all. If you've found the planet where the "actual numbers" live, please
tell us! 
-- 
  (espen)
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090519143913.706@gmail.com>
On 2009-05-07, Espen Vestre <·····@vestre.net> wrote:
> Ray Blaak <········@STRIPCAPStelus.net> writes:
>
>> Sure, you have the symbol Pi, its definition, and rules for its use, but
>> that is not what I am referring to. I mean the actual numeric value.
>
> Last time I checked, nobody had observed any "actual numeric values" at
> all. If you've found the planet where the "actual numbers" live, please
> tell us! 

You can observe the number three wherever there occurs a set of three things.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <10lbe9pu8flef$.1t5qvgni5org4$.dlg@40tude.net>
On Thu, 07 May 2009 20:00:21 GMT, Ray Blaak wrote:

> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:

>>> Are you saying that you know of a "real" real in use somewhere? Quick!
>>> Write it up.
>>
>> pi
> 
> Pi has never been completely observed in the history of humanity. I
> predict it never will. If you do that you will be famous. Go ahead, try
> it. I won't wait.

Oh, you are deeply wrong here. Faithful Blefuscudians strongly believe that
only pi exists, as the length of the circumference. But the diameter's
length is a myth. Did anybody ever see any diameters? (:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ufxfeqgo0.fsf@STRIPCAPStelus.net>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>>> Are you saying that you know of a "real" real in use somewhere? Quick!
>>>> Write it up.
>>>
>>> pi
>> 
>> Pi has never been completely observed in the history of humanity. I
>> predict it never will. If you do that you will be famous. Go ahead, try
>> it. I won't wait.
>
> Oh, you are deeply wrong here. Faithful Blefuscudians strongly believe that
> it is only pi, which exists as the length of circumference. But the
> diameter length is a myth. Did anybody ever see any diameters? (:-))

We are not communicating. 

Oh well, I tried.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Matthias Blume
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <m1my9nwsgz.fsf@hana.uchicago.edu>
Ray Blaak <········@STRIPCAPStelus.net> writes:

> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>> Are you saying that you know of a "real" real in use somewhere? Quick!
>>> Write it up.
>>
>> pi
>
> Pi has never been completely observed in the history of humanity. I
> predict it never will. If you do that you will be famous. Go ahead, try
> it. I won't wait.
>
> Sure, you have the symbol Pi, its definition, and rules for its use, but
> that is not what I am referring to. I mean the actual numeric value.

Have you ever observed the "actual numeric value" of 1.5? Or of sqrt(2)?
Or of 1, for that matter? Pi can be finitely represented in most formal
notations for real arithmetic, just like the other examples. What else is
there?

Now, you could have said that there are many (MANY!) real numbers that
you cannot represent. Unfortunately, you can't name any one of them (by
definition).

Matthias
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <4a0463e4$0$5400$607ed4bc@cv.net>
Matthias Blume wrote:
> Ray Blaak <········@STRIPCAPStelus.net> writes:
> 
>> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>>> Are you saying that you know of a "real" real in use somewhere? Quick!
>>>> Write it up.
>>> pi
>> Pi has never been completely observed in the history of humanity. I
>> predict it never will. If you do that you will be famous. Go ahead, try
>> it. I won't wait.
>>
>> Sure, you have the symbol Pi, its definition, and rules for its use, but
>> that is not what I am referring to. I mean the actual numeric value.
> 
> Have you ever observed the "actual numeric value" of 1.5? 

Yes, and quite recently------------------------------- ^^^

Am I the only one who endured New Math? And apparently the only one who 
can use a dictionary?:

  "A word or symbol representing a number"
      -- http://en.wiktionary.org/wiki/Numeral

Of course this leaves Dmitry with some explaining to do, since he used 
the word "Pi" in claiming he had never observed it.

kt
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <4a0465d2$0$5382$607ed4bc@cv.net>
Kenneth Tilton wrote:
> Matthias Blume wrote:
>> Ray Blaak <········@STRIPCAPStelus.net> writes:
>>
>>> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>>>> Are you saying that you know of a "real" real in use somewhere? Quick!
>>>>> Write it up.
>>>> pi
>>> Pi has never been completely observed in the history of humanity. I
>>> predict it never will. If you do that you will be famous. Go ahead, try
>>> it. I won't wait.
>>>
>>> Sure, you have the symbol Pi, its definition, and rules for its use, but
>>> that is not what I am referring to. I mean the actual numeric value.
>>
>> Have you ever observed the "actual numeric value" of 1.5? 
> 
> Yes, and quite recently------------------------------- ^^^
> 
> Am I the only one who endured New Math? And apparently the only one who 
> can use a dictionary?:
> 
>  "A word or symbol representing a number"
>      -- http://en.wiktionary.org/wiki/Numeral
> 
> Of course this leaves Dmitry with some explaining to do, since he used 
> the word "Pi" in claiming he had never observed it.

That assumes of course that Dmitry can explain Ray.

kt
From: Matthias Blume
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <m1eiuzwgm4.fsf@hana.uchicago.edu>
Kenneth Tilton <·········@gmail.com> writes:

> Matthias Blume wrote:
>> Ray Blaak <········@STRIPCAPStelus.net> writes:
>>
>>> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>>>> Are you saying that you know of a "real" real in use somewhere? Quick!
>>>>> Write it up.
>>>> pi
>>> Pi has never been completely observed in the history of humanity. I
>>> predict it never will. If you do that you will be famous. Go ahead, try
>>> it. I won't wait.
>>>
>>> Sure, you have the symbol Pi, its definition, and rules for its use, but
>>> that is not what I am referring to. I mean the actual numeric value.
>>
>> Have you ever observed the "actual numeric value" of 1.5? 
>
> Yes, and quite recently------------------------------- ^^^
>
> Am I the only one who endured New Math? And apparently the only one
> who can use a dictionary?:
>
>  "A word or symbol representing a number"
>      -- http://en.wiktionary.org/wiki/Numeral

Ok, so you observed a /numeral/, which was not the question.

> Of course this leaves Dmitry with some explaining to do, since he used
> the word "Pi" in claiming he had never observed it.

Wasn't that Ray?

Anyway, my point was that the only way for us to interact with numbers
is to use symbolic notation, which is quite different from their "actual
numeric value".  (It's the difference between syntax and semantics.)
There are a lot of real numbers that we cannot interact with at all
because we do not have (finite) notation for them.  There is no way for
us to observe the numeric value of any number because that is an
abstract concept.

Matthias
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <4a048aa7$0$5397$607ed4bc@cv.net>
Matthias Blume wrote:
> Kenneth Tilton <·········@gmail.com> writes:
> 
>> Matthias Blume wrote:
>>> Ray Blaak <········@STRIPCAPStelus.net> writes:
>>>
>>>> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>>>>> Are you saying that you know of a "real" real in use somewhere? Quick!
>>>>>> Write it up.
>>>>> pi
>>>> Pi has never been completely observed in the history of humanity. I
>>>> predict it never will. If you do that you will be famous. Go ahead, try
>>>> it. I won't wait.
>>>>
>>>> Sure, you have the symbol Pi, its definition, and rules for its use, but
>>>> that is not what I am referring to. I mean the actual numeric value.
>>> Have you ever observed the "actual numeric value" of 1.5? 
>> Yes, and quite recently------------------------------- ^^^
>>
>> Am I the only one who endured New Math? And apparently the only one
>> who can use a dictionary?:
>>
>>  "A word or symbol representing a number"
>>      -- http://en.wiktionary.org/wiki/Numeral
> 
> Ok, so you observed a /numeral/, which was not the question.

The phrase was "numeric value". Numeral and number are disjoint sets. A 
number cannot be numeric. A value can be anything.

> 
>> Of course this leaves Dmitry with some explaining to do, since he used
>> the word "Pi" in claiming he had never observed it.
> 
> Wasn't that Ray?

Yes.

kt
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <u7i0qqgbk.fsf@STRIPCAPStelus.net>
Kenneth Tilton <·········@gmail.com> writes:
> Matthias Blume wrote:
>> Kenneth Tilton <·········@gmail.com> writes:
>>
>>> Matthias Blume wrote:
>>>> Ray Blaak <········@STRIPCAPStelus.net> writes:
>>>>
>>>>> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>>>>>> Are you saying that you know of a "real" real in use somewhere? Quick!
>>>>>>> Write it up.
>>>>>> pi
>>>>> Pi has never been completely observed in the history of humanity. I
>>>>> predict it never will. If you do that you will be famous. Go ahead, try
>>>>> it. I won't wait.
>>>>>
>>>>> Sure, you have the symbol Pi, its definition, and rules for its use, but
>>>>> that is not what I am referring to. I mean the actual numeric value.
>>>> Have you ever observed the "actual numeric value" of 1.5? 
>>> Yes, and quite recently------------------------------- ^^^
>>>
>>> Am I the only one who endured New Math? And apparently the only one
>>> who can use a dictionary?:
>>>
>>>  "A word or symbol representing a number"
>>>      -- http://en.wiktionary.org/wiki/Numeral
>>
>> Ok, so you observed a /numeral/, which was not the question.
>
> The phrase was "numeric value". Numeral and number are disjoint sets. A number
> cannot be numeric. A value can be anything.
>
>>
>>> Of course this leaves Dmitry with some explaining to do, since he used
>>> the word "Pi" in claiming he had never observed it.

I deny any such claim, strictly speaking. I meant that no one had observed all of its digits.

Of course we have the symbol Pi and rules for its use, as I have said myself.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <4a05d9ee$0$22521$607ed4bc@cv.net>
Ray Blaak wrote:
> Kenneth Tilton <·········@gmail.com> writes:
>> Matthias Blume wrote:
>>> Kenneth Tilton <·········@gmail.com> writes:
>>>
>>>> Matthias Blume wrote:
>>>>> Ray Blaak <········@STRIPCAPStelus.net> writes:
>>>>>
>>>>>> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>>>>>>> Are you saying that you know of a "real" real in use somewhere? Quick!
>>>>>>>> Write it up.
>>>>>>> pi
>>>>>> Pi has never been completely observed in the history of humanity. I
>>>>>> predict it never will. If you do that you will be famous. Go ahead, try
>>>>>> it. I won't wait.
>>>>>>
>>>>>> Sure, you have the symbol Pi, its definition, and rules for its use, but
>>>>>> that is not what I am referring to. I mean the actual numeric value.
>>>>> Have you ever observed the "actual numeric value" of 1.5? 
>>>> Yes, and quite recently------------------------------- ^^^
>>>>
>>>> Am I the only one who endured New Math? And apparently the only one
>>>> who can use a dictionary?:
>>>>
>>>>  "A word or symbol representing a number"
>>>>      -- http://en.wiktionary.org/wiki/Numeral
>>> Ok, so you observed a /numeral/, which was not the question.
>> The phrase was "numeric value". Numeral and number are disjoint sets. A number
>> cannot be numeric. A value can be anything.
>>
>>>> Of course this leaves Dmitry with some explaining to do, since he used
>>>> the word "Pi" in claiming he had never observed it.
> 
> I deny any such claim, strictly speaking. I meant that no one had observed all the of its digits.

Of course, but the key to trolling is to deliberately distort the 
other's remarks and pretend one does not get their intended meaning.

Well, i was not really trolling, I was forking the thread to make fun of 
the New Math that tried to get the numeral/number distinction across to 
five year olds.


> 
> Of course we have the symbol Pi and rules for its use, as I have said myself.
> 

And based on the definition of numeral, pi is a fine and precise numeric 
value, given that you have already stipulated the definition of pi as 
being fine and dandy.

Turn your observation around: which digit of pi cannot be observed? 
We'll need Heisenberg in here shortly.

kt
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <uhbzuvvxk.fsf@STRIPCAPStelus.net>
Kenneth Tilton <·········@gmail.com> writes:
> And based on the definition of numeral, pi is a fine and precise numeric
> value, given that you have already stipulated the definition of pi as being
> fine and dandy.

I wonder about that. When you actually use it for computation, do you do
so symbolically? How far does that get you if you are, say, plotting a
picture? 

That gets you to the imprecision of things again.

> Turn your observation around: which digit of pi cannot be observed? We'll need
> Heisenberg in here shortly.

Any particular digit can be observed, if you're willing to wait. All of
them at once is the problem.
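The claim above is literally true in a computational sense: a spigot
algorithm emits the decimal digits of pi one at a time, so any particular
digit arrives if you wait long enough, while the full sequence never
terminates. A minimal sketch in Python (this is Gibbons' published unbounded
spigot, not anything proposed in this thread):

```python
from itertools import islice

def pi_digits():
    """Gibbons' unbounded spigot: yields the decimal digits of pi forever."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # the next digit is settled; shift it out
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

# Any particular digit can be observed, given enough patience:
print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

The generator never finishes, which is exactly the point: each digit is
observable, the totality is not.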

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <u8wl69dwd.fsf@STRIPCAPStelus.net>
Ray Blaak <········@STRIPCAPStelus.net> writes:
> I wonder about that. When you actually use it for computation, do you do
> so symbolically? How far does that get you if you are, say, plotting a
> picture? 

To clarify: all computation is symbolic. But I think you know what I mean.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <mv39fmaxvdmp.u9ueyjh21n3m.dlg@40tude.net>
On Sat, 09 May 2009 15:30:52 -0400, Kenneth Tilton wrote:

> Well, i was not really trolling, I was forking the thread to make fun of 
> the New Math that tried to get the numeral/number distinction across to 
> five year olds.

You find it better to start with medieval concepts and work gradually up to
the mathematics of the XIX century, explaining each year what was wrong with
the things they learnt the year before?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <4a07be84$0$22505$607ed4bc@cv.net>
Dmitry A. Kazakov wrote:
> On Sat, 09 May 2009 15:30:52 -0400, Kenneth Tilton wrote:
> 
>> Well, i was not really trolling, I was forking the thread to make fun of 
>> the New Math that tried to get the numeral/number distinction across to 
>> five year olds.
> 
> You find it better to start with medieval concepts working gradually on to
> the mathematics of XIX century, while explaining each next year what was
> wrong with the things they learnt a year ago?
> 

Well, since you asked...

http://smuglispweeny.blogspot.com/2009/05/how-to-teach-math.html


hth,kt
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1jbxapyrccr2u$.ni01j4cfct6a.dlg@40tude.net>
On Mon, 11 May 2009 01:58:20 -0400, Kenneth Tilton wrote:

> Dmitry A. Kazakov wrote:
>> On Sat, 09 May 2009 15:30:52 -0400, Kenneth Tilton wrote:
>> 
>>> Well, i was not really trolling, I was forking the thread to make fun of 
>>> the New Math that tried to get the numeral/number distinction across to 
>>> five year olds.
>> 
>> You find it better to start with medieval concepts working gradually on to
>> the mathematics of XIX century, while explaining each next year what was
>> wrong with the things they learnt a year ago?
> 
> Well, since you asked...
> 
> http://smuglispweeny.blogspot.com/2009/05/how-to-teach-math.html

Poor kids!

Well, in my view this is one of the major problems of the modern education
system: pupils learn the history of mathematics/physics/chemistry instead of
the subject itself. I think this is all wrong.

I myself, as a kid, was a guinea pig in an education program which started
mathematics with some basics of set theory and used logical formalism in all
proofs. If I remember correctly, the program was developed on the initiative
of Kolmogorov himself. At that time I had no particular interest in
mathematics; I was a typical humanities person, interested only in arts and
ancient history. Nevertheless it was so easy, and what a contrast to the way
physics was later taught. Unfortunately the program was quickly scrapped.

There is no reason to learn others' errors. Modern mathematics is so much
simpler than the old! In his "Miscellany", Littlewood gives a horrific
example of the definition of a function from XIX-century textbooks:

http://www.amazon.com/Littlewoods-Miscellany-B%C3%A9la-Bollob%C3%A1s/dp/052133702X

Why should anybody learn that mess?

But enough off-topics for today... (:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <4a082e2c$0$5893$607ed4bc@cv.net>
Dmitry A. Kazakov wrote:
> On Mon, 11 May 2009 01:58:20 -0400, Kenneth Tilton wrote:
> 
>> Dmitry A. Kazakov wrote:
>>> On Sat, 09 May 2009 15:30:52 -0400, Kenneth Tilton wrote:
>>>
>>>> Well, i was not really trolling, I was forking the thread to make fun of 
>>>> the New Math that tried to get the numeral/number distinction across to 
>>>> five year olds.
>>> You find it better to start with medieval concepts working gradually on to
>>> the mathematics of XIX century, while explaining each next year what was
>>> wrong with the things they learnt a year ago?
>> Well, since you asked...
>>
>> http://smuglispweeny.blogspot.com/2009/05/how-to-teach-math.html
> 
> Poor kids!
> 
> Well, in my view this one of the major problems of modern education system:
> pupils are learning history of mathematics/physics/chemistry instead of the
> subject. I think this is all wrong.
> 
> I myself, as a kid, played a guinea-pig in an education program which
> started mathematics with some basics of the set theory and used logical
> formalism in all proofs. If I correctly remember the program was developed
> on the initiative of Kolmogorov himself. That time I had no particular
> interest in mathematics, I was a typical humanitarian interested only in
> arts and ancient history. Nevertheless it was so easy,...


ok, but you are someone who turned out to be a math geek. teaching math 
to future math geeks is not all that hard.

> and what a contrast
> to the way physics was later taught. Unfortunately the program was quickly
> scrapped.
> 
> There is no reason to learn others errors.

Well thanks for leaving my supposition completely unexamined. The point 
is the motivation it provides, and the improved retention and mastery 
of, say, place value, if one had done arithmetic with Roman numerals first.

> Modern mathematics is so much
> simpler than the old one! Littlewood in his "Miscellany" gives a horrific
> example of the definition of function from textbooks of XIX century:
> 
> http://www.amazon.com/Littlewoods-Miscellany-B%C3%A9la-Bollob%C3%A1s/dp/052133702X
> 
> Why should anybody learn that mess?
> 
> But enough off-topics for today... (:-))
> 

Yeah, let's get back to talking about Java.

kt
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <v4y89y3tschi$.4kalakqs7qpq.dlg@40tude.net>
On Mon, 11 May 2009 09:54:43 -0400, Kenneth Tilton wrote:

> Well thanks for leaving my supposition completely unexamined. The point 
> is the motivation it provides, and the improved retention and mastery 
> of, say, place value, if one had done arithmetic with roman numerals first.

I would object that modern mathematics provides no less motivation and
meat for the brains of those who are learning it.

I must confess I have never learnt how to sum Roman numerals. But there is a
thing that has puzzled me. Why do quite a few very serious books (to name one
already mentioned here, The Science of Programming by D. Gries) contain very
basic, if not trivial, introductory chapters about Boolean logic, logical
inference, set theory etc.? Is it that the intended audience, students and
people with an M.S. in technical disciplines, might not know *this*? How
could that happen? Could it be that instead they remember well how to
manipulate Roman numerals, know the alchemical symbol for gold, and can name
the gender of the turtle that supports the Earth's disk? (:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <4a0879d2$0$5382$607ed4bc@cv.net>
Dmitry A. Kazakov wrote:
> On Mon, 11 May 2009 09:54:43 -0400, Kenneth Tilton wrote:
> 
>> Well thanks for leaving my supposition completely unexamined. The point 
>> is the motivation it provides, and the improved retention and mastery 
>> of, say, place value, if one had done arithmetic with roman numerals first.
> 
> I would object that modern mathematics provides no less motivation and
> meat for the brains of those who is learning it.

The "those" you are thinking about are those like you. I have tried and 
failed to explain that math geek learning styles are not why math 
education gets reformed every generation. So this is perfect. Like those 
who invented the New Math the only model you have for learning math is 
your own superior math intellect. Not even the average elementary school 
teacher reluctantly turning to the math lesson of the day could 
understand the new math, let alone the parents of the kids. A revolution 
soon restored conventional instruction (no prize itself) until another 
cockamamie idea came along (two actually, constructivism and one I forget).

Now tell us again how exciting, self-motivating, and fascinating math 
was to you and everyone you know down at the local Lisp drinking society.

:)

> 
> I must confess I have never learnt how to sum Roman numerals. But there was
> a thing that puzzled me. Why quite often some very serious books, to name
> one: already mentioned here The Science of Programming by D. Gries, contain
> some very basic, if not trivial, introductory chapters about Boolean logic,
> logical inference, set theory etc. Isn't it so that the intended audience,
> students and people with M.S. in technical disciplines might not know
> *this*? 

Do not be confused by my having offered as an example that programming 
language books often play the same gradual evolution game in presenting 
material. I just wanted to point out that it is an accepted pedagogical 
trick, not suggest that programming instruction should be the precise 
model for elementary school math instruction.

> How could that happen? Could it be so that instead they well
> remember how to manipulate Roman numerals, know the alchemical symbol of
> gold, and can name the gender of the turtle that supports the Earth's disk?
> (:-))
> 

Btw, please do not confuse computer programming with computer science. 
The overlap is only incidental.

kt
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1tzs87i4gu0sw.a6js3aw1khlw$.dlg@40tude.net>
On Mon, 11 May 2009 15:17:28 -0400, Kenneth Tilton wrote:

> Dmitry A. Kazakov wrote:
>> On Mon, 11 May 2009 09:54:43 -0400, Kenneth Tilton wrote:
>> 
>>> Well thanks for leaving my supposition completely unexamined. The point 
>>> is the motivation it provides, and the improved retention and mastery 
>>> of, say, place value, if one had done arithmetic with roman numerals first.
>> 
>> I would object that modern mathematics provides no less motivation and
>> meat for the brains of those who is learning it.
> 
> The "those" you are thinking about are those like you. I have tried and 
> failed to explain that math geek learning styles are not why math 
> education gets reformed every generation. So this is perfect. Like those 
> who invented the New Math the only model you have for learning math is 
> your own superior math intellect. Not even the average elementary school 
> teacher reluctantly turning to the math lesson of the day could 
> understand the new math, let alone the parents of the kids. A revolution 
> soon restored conventional instruction (no prize itself) until another 
> cockamamie idea came along (two actually, constructivism and one I forget).

Yes, there is an impedance to "new" ideas.

"A new scientific truth does not triumph by convincing its opponents and
making them see the light, but rather because its opponents eventually die,
and a new generation grows up that is familiar with it."
   -- Max Planck

The ideas behind new mathematics aren't less intuitive than the old ones.
Concerning programming, I think there is a deep connection between the
revolution that happened in mathematics at the beginning of the last century
and programming language construction now.

> Now tell us again how exciting, self-motivating, and fascinating math 
> was to you and everyone you know down at the local Lisp drinking society.

> :)

OK, I'll try. I find lambda calculus one of the most disgusting parts of
mathematics. (:-)) 

>> I must confess I have never learnt how to sum Roman numerals. But there was
>> a thing that puzzled me. Why quite often some very serious books, to name
>> one: already mentioned here The Science of Programming by D. Gries, contain
>> some very basic, if not trivial, introductory chapters about Boolean logic,
>> logical inference, set theory etc. Isn't it so that the intended audience,
>> students and people with M.S. in technical disciplines might not know
>> *this*? 
> 
> Do not be confused by my having offered as an example that programming 
> language books often play the same gradual evolution game in presenting 
> material.

Yes, but should they start with Assembler and FORTRAN-IV? I mean, in order
to teach imperative programming we don't need to start with the arithmetic
IF, proceed to the silly Pascal problem of an "else" following two "if"s,
etc.

The point is, why should the evolution of the presented material follow its
historical evolution? I suggest that it should not, for the following reason.
People who believed, say, in programming in Assembler weren't imbeciles. Some
of them were probably wiser than we are. If they didn't understand that
programming in Assembler wasn't that good for industrial-scale code
production, why would students? I mean that the evolution of science and
engineering is decided at a level of understanding which is likely
unavailable to beginners. So, should they be exposed to this?

> Btw, please do not confuse computer programming with computer science. 
> The overlap is only incidental.

I don't. I fully agree with you here.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090519144856.261@gmail.com>
["Followup-To:" header set to comp.lang.lisp.]
On 2009-05-08, Matthias Blume <·····@hana.uchicago.edu> wrote:
> Ray Blaak <········@STRIPCAPStelus.net> writes:
>
>> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>>> Are you saying that you know of a "real" real in use somewhere? Quick!
>>>> Write it up.
>>>
>>> pi
>>
>> Pi has never been completely observed in the history of humanity. I
>> predict it never will. If you do that you will be famous. Go ahead, try
>> it. I won't wait.
>>
>> Sure, you have the symbol Pi, its definition, and rules for its use, but
>> that is not what I am referring to. I mean the actual numeric value.
>
> Have you ever observed the "actual numeric value" of 1.5? 

Yes. 1.5 is 1 + 1/2.   

The number one is observed wherever there is
a set containing just one half. 

The number 1/2 is observed whenever a set containing an even number of elements
is subdivided into two equal-sized sets.

If we have a set of things whose size is divisible by three, we can divide it
into two subsets such that one contains 1/3 of the items and the other
contains 2/3 of the items. The full set is exactly 1.5 times bigger than the
2/3 subset.

When you observe this relationship, you are observing number 1.5.
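The counting argument above can be replayed with exact rational arithmetic;
a small sketch using Python's fractions module (the concrete set size 6 is
an arbitrary choice of mine, not from the post):

```python
from fractions import Fraction

items = 6                            # any size divisible by three
third = items // 3                   # 2 items
two_thirds = 2 * third               # 4 items
ratio = Fraction(items, two_thirds)  # full set relative to the 2/3 subset
print(ratio)                         # 3/2 -- the "observed" 1.5
assert ratio == 1 + Fraction(1, 2)   # and indeed 1.5 is 1 + 1/2
```

Because every quantity here is a ratio of whole-number counts, the
relationship is witnessed exactly, with no approximation involved.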

> Or of sqrt(2)?

This number is visualized only in the imagination. You have to, for instance,
imagine an ideal square of side length 1. The square root of two is then the
length of its diagonal. No such square is observed in reality.
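The contrast with 1.5 can be checked mechanically: no ratio of whole-number
counts squares to exactly 2, so no finite collection of objects instantiates
sqrt(2) the way a 6-element set instantiates 3/2. A brute-force sketch (the
search bounds are arbitrary; irrationality guarantees the result for any
bound):

```python
from fractions import Fraction

# Exhaustively check small ratios p/q: none of them squares to exactly 2.
hits = [Fraction(p, q)
        for q in range(1, 100)
        for p in range(1, 200)
        if Fraction(p, q) ** 2 == 2]
print(hits)  # [] -- unlike 3/2, sqrt(2) has no exact rational witness
```

The classical proof that sqrt(2) is irrational says the list stays empty no
matter how far the search is pushed.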
From: Matthias Blume
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <m14ovvwbc7.fsf@hana.uchicago.edu>
Kaz Kylheku <········@gmail.com> writes:

> ["Followup-To:" header set to comp.lang.lisp.]
> On 2009-05-08, Matthias Blume <·····@hana.uchicago.edu> wrote:
>> Ray Blaak <········@STRIPCAPStelus.net> writes:
>>
>>> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>>>> Are you saying that you know of a "real" real in use somewhere? Quick!
>>>>> Write it up.
>>>>
>>>> pi
>>>
>>> Pi has never been completely observed in the history of humanity. I
>>> predict it never will. If you do that you will be famous. Go ahead, try
>>> it. I won't wait.
>>>
>>> Sure, you have the symbol Pi, its definition, and rules for its use, but
>>> that is not what I am referring to. I mean the actual numeric value.
>>
>> Have you ever observed the "actual numeric value" of 1.5? 
>
> Yes. 1.5 is 1 + 1/2.   

All you did was to give another symbolic expression denoting the same number.

> The number one is observed wherever there is
> a set containing just one half. 

In that case you are observing an /instance/ of the abstract concept
"one".  You cannot observe the abstract concept itself.  That's why we
call it "abstract".

> The number 1/2 is observed whenever a set containing an even number of elements
> is subdivided into two equal-sized sets.

See above.

> If we have a set of things which is divisible by three, we can divide it into
> two sets such that one contains 1/3 of the items an the other subset contains
> 2/3 of the items. The full set is exactly 1.5 times bigger than the 2/3 set.
>
> When you observe this relationship, you are observing number 1.5.

Again, you are merely observing an instance of the abstract concept.
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090519171946.667@gmail.com>
["Followup-To:" header set to comp.lang.lisp.]
On 2009-05-08, Matthias Blume <·····@hana.uchicago.edu> wrote:
> Kaz Kylheku <········@gmail.com> writes:
>
>> ["Followup-To:" header set to comp.lang.lisp.]
>> On 2009-05-08, Matthias Blume <·····@hana.uchicago.edu> wrote:
>>> Ray Blaak <········@STRIPCAPStelus.net> writes:
>>>
>>>> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>>>>> Are you saying that you know of a "real" real in use somewhere? Quick!
>>>>>> Write it up.
>>>>>
>>>>> pi
>>>>
>>>> Pi has never been completely observed in the history of humanity. I
>>>> predict it never will. If you do that you will be famous. Go ahead, try
>>>> it. I won't wait.
>>>>
>>>> Sure, you have the symbol Pi, its definition, and rules for its use, but
>>>> that is not what I am referring to. I mean the actual numeric value.
>>>
>>> Have you ever observed the "actual numeric value" of 1.5? 
>>
>> Yes. 1.5 is 1 + 1/2.   
>
> All you did was to give another symbolic expression denoting the same number.

Yes; I only did that to shift away from the decimal notation.

>> The number one is observed wherever there is
>> a set containing just one half. 
>
> In that case you are observing an /instance/ of the abstract concept
> "one".  You cannot observe the abstract concept itself. 

An instance of something is really that something.  So, after work, I might
be drinking an instance of beer, which, like, really is beer.

Any concept which has concrete instances that can be observed is not actually
abstract. That concept is concrete, which is why it has concrete instances.

Thus the numbers one and two, and all rational numbers, are actually concrete.
Any collection of two things is really the number two.

Pi and sqrt(2) are abstract concepts, because they have no concrete instance.
One may visualize concepts of ideal circles and squares connected to these
numbers, but these visualizations are themselves abstract; in any case,
visualizations are not observations.

Still, the principle applies that an instance of something is that something.
So pi is an abstract concept, and consequently it finds instantiation in
another abstract concept: the ratio of the circumference to the diameter of a
circle.

Abstract instantiates abstract (e.g. irrational number); concrete instantiates
concrete (integer, rational).
From: ···············@gmail.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <7dcf955d-5bca-4c33-aacd-81fbd28c10f8@m24g2000vbp.googlegroups.com>
On May 8, 6:24 pm, Kaz Kylheku <········@gmail.com> wrote:
> ["Followup-To:" header set to comp.lang.lisp.]
> On 2009-05-08, Matthias Blume <·····@hana.uchicago.edu> wrote:
>
>
>
> > Kaz Kylheku <········@gmail.com> writes:
>
> >> ["Followup-To:" header set to comp.lang.lisp.]
> >> On 2009-05-08, Matthias Blume <·····@hana.uchicago.edu> wrote:
> >>> Ray Blaak <········@STRIPCAPStelus.net> writes:
>
> >>>> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
> >>>>>> Are you saying that you know of a "real" real in use somewhere? Quick!
> >>>>>> Write it up.
>
> >>>>> pi
>
> >>>> Pi has never been completely observed in the history of humanity. I
> >>>> predict it never will. If you do that you will be famous. Go ahead, try
> >>>> it. I won't wait.
>
> >>>> Sure, you have the symbol Pi, its definition, and rules for its use, but
> >>>> that is not what I am referring to. I mean the actual numeric value.
>
> >>> Have you ever observed the "actual numeric value" of 1.5?
>
> >> Yes. 1.5 is 1 + 1/2.  
>
> > All you did was to give another symbolic expression denoting the same number.
>
> Yes; I only did that to shift away from the decimal notation.
>
> >> The number one is observed wherever there is
> >> a set containing just one half.
>
> > In that case you are observing an /instance/ of the abstract concept
> > "one".  You cannot observe the abstract concept itself.
>
> An instance of something is really that something.  So, after work, I might
> be drinking an instance of beer, which, like, really is beer.
>
> Any concept which has concrete instances that can be observed is not actually
> abstract. That concept is concrete, which is why it has concrete instances.
>
> Thus the numbers one and two, and all rational numbers, are actually concrete.
> Any collection of two things is really the number two.
>
> Pi and sqrt(2) are abstract concepts, because they have no concrete instance.
> One may visualize concepts of ideal circles and squares connected to these
> numbers, but these visualizations are themselves abstract; in any case,
> visualizations are not observations.
>
> Still, the principle applies that an instance of something is that something.
> So pi is an abstract concept, and consequently, it finds instantiation in
> another abstract concept: the ratio of the circumference to diameter of a
> circle.
>
> Abstract instantiates abstract (e.g. irrational number); concrete instantiates
> concrete (integer, rational).

I think you are shortchanging the concept of integer and rational
here. An integer is much more than something attached to concrete
groups of the correct number of objects.

The integer corresponds to the infinite class of sets of that
cardinality; a set containing two unicorns, or of two "colorless green
ideas sleeping furiously" is an instance of that class, even if it
cannot be observed in reality. Similarly, the notion of "beer"
includes even beers that one cannot drink, or can only imagine.

One of course must be careful to avoid logical contradictions and
problems (e.g. Russell's paradox) arising from too sloppy a usage of
English to define sets or classes, but I think it is crucial to
recognize the very abstract nature of number, even of kinds such as
integer that are quite accessible to our intuition. Integers are
highly abstract.
From: Madhu
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <m3tz3vthfq.fsf@moon.robolove.meer.net>
* Matthias Blume <··············@hana.uchicago.edu> :
Wrote on Fri, 08 May 2009 16:09:44 -0500:
| Kaz Kylheku <········@gmail.com> writes:
|>> Have you ever observed the "actual numeric value" of 1.5?
|>
|> Yes. 1.5 is 1 + 1/2.   
|
| All you did was to give another symbolic expression denoting the same
| number.
|
|> The number one is observed wherever there is a set containing just
|> one half.
|
| In that case you are observing an /instance/ of the abstract concept
| "one".  You cannot observe the abstract concept itself.  That's why we
| call it "abstract".

Perhaps it is then a categorical mistake on the part of the semanticists
to call the concept "abstract", because Kaz seems to have observed it.

[That said.  I'd like to believe numbers are EQ in the implementation of
 the real world, i.e. in the underlying implementation in which all
 abstract concepts are implemented in all instances that they occur in
 minds and consciousness of every individual reasoning or observing
 about them]

--
Madhu
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090519174302.122@gmail.com>
On 2009-05-08, Madhu <·······@meer.net> wrote:
> * Matthias Blume <··············@hana.uchicago.edu> :
>| In that case you are observing an /instance/ of the abstract concept
>| "one".  You cannot observe the abstract concept itself.  That's why we
>| call it "abstract".
>
> Perhaps it is then a categorical mistake on the part of the semanticists
> to call the concept "abstract", because Kaz seems to have observed it.

Bingo.
From: John Thingstad
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <op.utmlj3s1ut4oq5@pandora>
På Fri, 08 May 2009 16:59:40 +0200, skrev Matthias Blume  
<·····@hana.uchicago.edu>:

>
> Now, you could have said that there are many (MANY!) real numbers that
> you cannot represent. Unfortunately, you can't name any one of them (by
> definition).
>
> Matthias

Well, all irrational numbers, and in particular all transcendental numbers.
Fractions can be represented exactly as ratios, but if represented in
floating point with a binary mantissa, only those fractions (canonized by
dividing numerator and denominator by the least common divisor) whose
denominator's factorisation contains only 2s can be represented exactly.
Secondly, exactness is limited by the number of bits in the mantissa,
though that is a smaller problem which only really occurs if the
factorisation of the numerator or denominator of the fraction contains
large primes.
Is that exact enough?
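The claim is easy to check mechanically. A quick sketch in Python (chosen
here only because its exact Fraction type makes round-trip exactness
testable; the point itself is language-independent):

```python
from fractions import Fraction

# A reduced fraction is exactly representable in binary floating point
# only when its denominator is a power of two (and small enough to fit
# in the mantissa).  Fraction(float) recovers the float's exact value,
# so comparing it with the true ratio tests exactness.
for num, den in [(1, 2), (3, 8), (1, 10), (1, 3)]:
    exact = Fraction(num / den) == Fraction(num, den)
    print(f"{num}/{den} exactly representable: {exact}")
```

1/2 and 3/8 survive the round trip; 1/10 and 1/3 do not, since their
denominators have prime factors other than 2.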

---------------------
John Thingstad
From: Matthias Blume
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <m1ab5nwbkl.fsf@hana.uchicago.edu>
"John Thingstad" <·······@online.no> writes:

> På Fri, 08 May 2009 16:59:40 +0200, skrev Matthias Blume
> <·····@hana.uchicago.edu>:
>
>>
>> Now, you could have said that there are many (MANY!) real numbers that
>> you cannot represent. Unfortunately, you can't name any one of them (by
>> definition).
>>
>> Matthias
>
> Well all irrational numbers and particularly all transcendental numbers.

False. sqrt(2) is irrational -- and I just represented it. Many
transcendentals are similarly representable as solutions to certain
equations (just not algebraic ones). Any number you can describe to me
unambiguously is representable. And -- by that definition -- you cannot
unambiguously describe to me a particular number that is not
representable.
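For what it's worth, such a finite description really does pin the number
down computationally. A sketch in Python of extracting decimal digits of
sqrt(2) from nothing but the defining equation x**2 = 2, using exact
integer arithmetic so every printed digit is correct:

```python
from math import isqrt

def sqrt2_scaled(n):
    # floor(sqrt(2) * 10**n), computed with exact integer arithmetic
    # from the defining property x**2 = 2 alone: isqrt takes the
    # integer square root of 2 * 10**(2n).
    return isqrt(2 * 10 ** (2 * n))

print(sqrt2_scaled(15))  # -> 1414213562373095
```

Any desired number of digits can be produced this way, which is exactly
the sense in which the irrational sqrt(2) is "represented".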

> Fractions can be represented exactly as ratios, but if represented in
> floating point with a binary mantissa, only those fractions (canonized
> by dividing numerator and denominator by the least common divisor)
> whose denominator's factorisation contains only 2s can be represented
> exactly.

I didn't say anything about binary floating point representation.

Matthias
From: Rob Warnock
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <VaSdnYrh1tzoSJnXnZ2dnUVZ_h2dnZ2d@speakeasy.net>
Matthias Blume  <·····@hana.uchicago.edu> wrote:
+---------------
| "John Thingstad" <·······@online.no> writes:
| > Well all irrational numbers and particularly all transcendental numbers.
| 
| False. sqrt(2) is irrational -- and I just represented it. Many
| transcendentals are similarly representable as solutions to certain
| equations (just not algebraic ones). Any number you can describe to me
| unambiguously is representable. And -- by that definition -- you cannot
| unambiguously describe to me a particular number that is not
| representable.
+---------------

Hah! What about numbers you can describe unambiguously but not
compute more than a tiny fraction of their leading bits?!?
[...at least, not to better precision than the length of the
program that's doing the computation.] I refer, of course, to
Chaitin's "Omega" <http://en.wikipedia.org/wiki/Chaitin%27s_constant>,
which is a real number between 0 & 1 but is algorithmically random.
[That is, the shortest program to output the first N bits of Omega
must be of size at least N-O(1).] It is not a computable number;
there is no computable function that enumerates its binary expansion.
[See the URL.]


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: John Thingstad
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <op.utmlxtscut4oq5@pandora>
På Fri, 08 May 2009 21:25:05 +0200, skrev John Thingstad  
<·······@online.no>:

(canonized by dividing numerator and denominator by the least common  
divisor)
That should be greatest common divisor of course. (Euclid's algorithm)
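For reference, Euclid's algorithm in a few lines (sketched here in Python):

```python
def gcd(a, b):
    # Euclid's algorithm: the GCD is unchanged when the larger
    # argument is replaced by its remainder modulo the smaller.
    while b:
        a, b = b, a % b
    return a

print(gcd(12, 18))  # -> 6, so 12/18 canonizes to 2/3
```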

---------------------
John Thingstad
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ubpq2qggq.fsf@STRIPCAPStelus.net>
Matthias Blume <·····@hana.uchicago.edu> writes:
> Have you ever observed the "actual numeric value" of 1.5? Or of sqrt(2)?
> Or of 1, for that matter? Pi can be finitely represented in most formal
> notations for real arithmetic, just like the other examples. What else is
> there?
>
> Now, you could have said that there are many (MANY!) real numbers that
> you cannot represent. Unfortunately, you can't name any one of them (by
> definition).

Sure. Thanks for the clarification.

But the fact remains that our computers represent many kinds of numbers,
especially integers in a restricted range, fairly naturally. So I can
"see" them there.

My main point is that our computers are working with approximations. Can
that be denied? In as much as Dmitri seems to be doing that, I object.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <s9btnqrs2eob.150v82rec80uu.dlg@40tude.net>
On Sat, 09 May 2009 18:27:49 GMT, Ray Blaak wrote:

> My main point is that our computers are working with approximations. Can
> that be denied? In as much as Dmitri seems to be doing that, I object.

No, you wrote something very different, I quote you:

"Even if there exist a perfect
real number (perhaps a perfect circle), we only have a finite amount of
time to measure it approximately.

So approximate reals will just have to suffice."

This was not about whether computer models were approximations, which they
obviously are. This was a false implication of yours leading to a false
statement about the properties of such approximations (their sufficiency
for what?). Myself, as well as others, merely pointed out this fact. Then
you tried to show "sufficiency" by denying existence of anything for which
a computable model might appear insufficient.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ud4aivvcp.fsf@STRIPCAPStelus.net>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
> No, you wrote something very different, I quote you:
>
> "Even if there exist a perfect
> real number (perhaps a perfect circle), we only have a finite amount of
> time to measure it approximately.
>
> So approximate reals will just have to suffice."
>
> This was not about whether computer models were approximations, which they
> obviously are. 

But it is about that. That is what I am trying to talk about at any
rate. Approximate reals do have to suffice. Computation with reals in
today's computers means binary floating point arithmetic in practical
terms.

I am not debating whether one can express formal symbols on finite
machines. Of course you can. But even if you have symbolic algebraic
systems like Maple to work with when you finally need to apply the
results, "render" them, so to speak, there is a fundamental
approximation of things happening.

You were talking before about the inadequacy of our computer models. I
am saying they work well enough, that is all.

Now, in as much as people correct and clarify my statements about
symbolic representation, that's all well and good, even appreciated, but
the approximation inherent in our finite machines is what I was trying
to talk about.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <haaw7221qlmi$.4riqwyaw8wzz$.dlg@40tude.net>
On Sat, 09 May 2009 21:07:19 GMT, Ray Blaak wrote:

> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>> No, you wrote something very different, I quote you:
>>
>> "Even if there exist a perfect
>> real number (perhaps a perfect circle), we only have a finite amount of
>> time to measure it approximately.
>>
>> So approximate reals will just have to suffice."
>>
>> This was not about whether computer models were approximations, which they
>> obviously are. 
> 
> But it is about that. That is what I am trying to talk about at any
> rate. Approximate reals do have to suffice. Computation with reals in
> today's computers mean binary floating point arithmetic in practical
> terms.
> 
> I am not debating whether one can express formal symbols on finite
> machines. Of course you can.

Note that any numeral system (binary including) is symbolic. It has no
connection to whether certain numbers (irrational or not) are representable
in the symbolism. As an example consider this positional numeral system:

http://en.wikipedia.org/wiki/Golden_ratio_base

"1.0" is irrational there!
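The oddity is easy to see numerically. A quick sketch in Python (floating
point only approximates the irrational base, but the pattern is clear):

```python
PHI = (1 + 5 ** 0.5) / 2  # base of the golden-ratio numeral system

def phi_value(digits):
    # Value of an integer-part digit string in base phi,
    # most significant digit first.
    return sum(int(d) * PHI ** i for i, d in enumerate(reversed(digits)))

print(phi_value("10"))   # phi itself, about 1.618..., irrational
print(phi_value("100"))  # phi**2 = phi + 1, about 2.618...
```

The "round" numerals denote irrationals, while the ordinary integer 2
needs the expansion 10.01 in this base, since phi + phi**-2 = 2.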

> You were talking before about the inadequacy of our computer models. I
> am saying they work well enough, that is all.

But you didn't give anything in support of this statement. What could be a
"well enough" approximation of the Dirichlet function?

http://mathworld.wolfram.com/DirichletFunction.html
 
> Now, in as much as people correct and clarify my statements about
> symbolic representation, that's all well and good, even appreciated, but
> the approximation inherent in our finite machines is what I was trying
> to talk about.

So can we finally agree that computers are incapable to adequately handle
real numbers? (which was the starting point of this discussion)

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <uprehcrt1.fsf@STRIPCAPStelus.net>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>> You were talking before about the inadequacy of our computer models. I
>> am saying they work well enough, that is all.
>
> But you didn't give anything in support of this statement. 

I did. I mentioned my company's payroll, banking, etc. Practical things
are being computed all the time.

But you know this. So, I wonder what I am really arguing with you about.

> So can we finally agree that computers are incapable to adequately handle
> real numbers? (which was the starting point of this discussion)

We just don't communicate well do we? "Adequately"? With full precision,
obviously not, as I keep saying. In terms of getting useful work done, of
course yes.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <t0xumgyi53l3.jbpdasul2p1v.dlg@40tude.net>
On Sun, 10 May 2009 07:58:51 GMT, Ray Blaak wrote:

> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>> You were talking before about the inadequacy of our computer models. I
>>> am saying they work well enough, that is all.
>>
>> But you didn't give anything in support of this statement. 
> 
> I did. I mentioned my company's payroll, banking, etc.

Payrolls contain dates. Time is incomputable.

> Practical things
> are being computed all the time.

I don't know your definition of "practical". Something tells me that it is
tautological. Am I right?

>> So can we finally agree that computers are incapable to adequately handle
>> real numbers? (which was the starting point of this discussion)
> 
> We just don't communicate well do we? "Adequately"? With full precision,
> obviously not, as I keep saying.

Full precision?

http://en.wikipedia.org/wiki/Accuracy_and_precision

You certainly meant something else.

> In terms of getting useful work done, of
> course yes.

I don't know what "useful" and "work done" should mean in connection with
real numbers. I never saw any classification of numbers (or numerical
problems) into "useful" and "useless".

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ufxfc6ftl.fsf@STRIPCAPStelus.net>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
> Payrolls contain dates. Time is incomputable.

Oh really? When I look at my watch, I am deluded?

We are going round and round. Of course time (and numbers) are
computable, in a practical sense. We do it all the time.

Properly? Accurately? Precisely? Maybe not, but we certainly approximate
these values to a useful extent.

>> Practical things are being computed all the time.
>
> I don't know your definition of "practical". Something tells me that it is
> tautological. Am I right?

No. The evidence is all around us, since we all use computers every day.

You are a programmer yourself, and so you actually make computers do
useful things, using arithmetic, using date calculations.

> I don't know what "useful" and "work done" should mean in connection with
> real numbers. I never saw any classification of numbers (or numerical
> problems) into "useful" and "useless".

It's not that hard.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: George Neuner
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <rt1h05p2ub11hb3g37lq2o7qh05do2v2po@4ax.com>
On Sun, 10 May 2009 09:30:59 +0200, "Dmitry A. Kazakov"
<·······@dmitry-kazakov.de> wrote:

>On Sat, 09 May 2009 21:07:19 GMT, Ray Blaak wrote:
>
>> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>> No, you wrote something very different, I quote you:
>>>
>>> "Even if there exist a perfect
>>> real number (perhaps a perfect circle), we only have a finite amount of
>>> time to measure it approximately.
>>>
>>> So approximate reals will just have to suffice."
>>>
>>> This was not about whether computer models were approximations, which they
>>> obviously are. 
>> 
>> But it is about that. That is what I am trying to talk about at any
>> rate. Approximate reals do have to suffice. Computation with reals in
>> today's computers mean binary floating point arithmetic in practical
>> terms.
>> 
>> I am not debating whether one can express formal symbols on finite
>> machines. Of course you can.
>
>Note that any numeral system (binary including) is symbolic. It has no
>connection to whether certain numbers (irrational or not) are representable
>in the symbolism. As an example consider this positional numeral system:
>
>http://en.wikipedia.org/wiki/Golden_ratio_base
>
>"1.0" is irrational there!
>
>> You were talking before about the inadequacy of our computer models. I
>> am saying they work well enough, that is all.
>
>But you didn't give anything in support of this statement. What could be a
>"well enough" approximation of the Dirichlet function?
>
>http://mathworld.wolfram.com/DirichletFunction.html
> 
>> Now, in as much as people correct and clarify my statements about
>> symbolic representation, that's all well and good, even appreciated, but
>> the approximation inherent in our finite machines is what I was trying
>> to talk about.
>
>So can we finally agree that computers are incapable to adequately handle
>real numbers? (which was the starting point of this discussion)

The point that everyone is dancing around is that the vast majority of
maths may be interesting theoretically but has no practical purpose.
Computers serve a practical purpose ... and while for some that
purpose might be exploration of theoretical math, for most people it
is not.  The numeric representations/approximations we have are the
ones that have proven to be the most useful to the greatest number of
people.

You know the joke about the mathematician, the engineer and the naked
woman?  Engineering shows us that true representations are not needed
for practical use.

George
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <10kkat2ayo2g3.lsv6zu2b4obu$.dlg@40tude.net>
On Mon, 11 May 2009 16:28:17 -0400, George Neuner wrote:

> On Sun, 10 May 2009 09:30:59 +0200, "Dmitry A. Kazakov"
> <·······@dmitry-kazakov.de> wrote:
> 
>>On Sat, 09 May 2009 21:07:19 GMT, Ray Blaak wrote:
>>
>>> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>>> No, you wrote something very different, I quote you:
>>>>
>>>> "Even if there exist a perfect
>>>> real number (perhaps a perfect circle), we only have a finite amount of
>>>> time to measure it approximately.
>>>>
>>>> So approximate reals will just have to suffice."
>>>>
>>>> This was not about whether computer models were approximations, which they
>>>> obviously are. 
>>> 
>>> But it is about that. That is what I am trying to talk about at any
>>> rate. Approximate reals do have to suffice. Computation with reals in
>>> today's computers mean binary floating point arithmetic in practical
>>> terms.
>>> 
>>> I am not debating whether one can express formal symbols on finite
>>> machines. Of course you can.
>>
>>Note that any numeral system (binary including) is symbolic. It has no
>>connection to whether certain numbers (irrational or not) are representable
>>in the symbolism. As an example consider this positional numeral system:
>>
>>http://en.wikipedia.org/wiki/Golden_ratio_base
>>
>>"1.0" is irrational there!
>>
>>> You were talking before about the inadequacy of our computer models. I
>>> am saying they work well enough, that is all.
>>
>>But you didn't give anything in support of this statement. What could be a
>>"well enough" approximation of the Dirichlet function?
>>
>>http://mathworld.wolfram.com/DirichletFunction.html
>> 
>>> Now, in as much as people correct and clarify my statements about
>>> symbolic representation, that's all well and good, even appreciated, but
>>> the approximation inherent in our finite machines is what I was trying
>>> to talk about.
>>
>>So can we finally agree that computers are incapable to adequately handle
>>real numbers? (which was the starting point of this discussion)
> 
> The point that everyone is dancing around is that the vast majority of
> maths may be interesting theoretically but has no practical purpose.

Yes, but you never know for sure what does have a practical use. And anyway
it is always safe to keep it mathematically correct.

> Computers serve a practical purpose

Well, in my opinion computers largely serve themselves. They are no longer
a mere tool, but in most cases the source, meaning, and purpose.

> ... and while for some that
> purpose might be exploration of theoretical math, for most people it
> is not.

Yes. BTW, I am among that majority.

> The numeric representations/approximations we have are the
> ones that have proven to be the most useful to the greatest number of
> people.

Yes. Therefore it is so important to understand that they are only
representations, and that there are many of them, and that none of these
many is the best, unless the meaning of "best" is defined.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: ··················@gmail.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <3a85ad9e-569d-488c-a631-b2783e7505a0@o27g2000vbd.googlegroups.com>
On May 12, 3:36 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> > Computers serve a practical purpose
>
> Well, in my opinion computers largely serve themselves. They are no longer
> a mere tool, but in most cases the source, meaning, and purpose.
>

What is the output of a computer with no programmer?
I find math to be the most annoying part of math.
From: George Neuner
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <6uek05l5i4omv5psraph69472d1u8ad2jj@4ax.com>
On Tue, 12 May 2009 18:45:23 -0700 (PDT), ··················@gmail.com
wrote:

>What is the output of a computer with no programmer?

Heat and EMI.

>I find math to be the most annoying part of math.

Definitely.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <19a3t6na52145$.4jr6ao3rhadh$.dlg@40tude.net>
On Tue, 12 May 2009 18:45:23 -0700 (PDT), ··················@gmail.com
wrote:

> On May 12, 3:36�am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>>> Computers serve a practical purpose
>>
>> Well, in my opinion computers largely serve themselves. They are no longer
>> a mere tool, but in most cases the source, meaning, and purpose.
> 
> What is the output of a computer with no programmer?

The output.

You probably meant the *meaning* of the output without a *user* capable of
understanding it.

"Meaning", "information", "knowledge" are all things in the head of a man.
They aren't properties of the computer, not even ones of its output, unless
output is defined as an impression induced in someone's head.

> I find math to be the most annoying part of math.

(:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Matthias Blume
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <m17i0latht.fsf@hana.uchicago.edu>
··················@gmail.com writes:

> I find math to be the most annoying part of math.

Agreed.  And oddly enough, at the same time it is also the most pleasant
part of math...
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <74s9h6F14fqm0U1@mid.individual.net>
Pascal Costanza wrote:
> Robbert Haarman wrote:
>> On Mon, Apr 13, 2009 at 10:58:12PM +0200, Pascal Costanza wrote:
>>> Robbert Haarman wrote:
>>>> I am not denying that you have run into this problem or could run 
>>>> into  this problem in a statically typed language, I am only saying 
>>>> that it is not _because_ of static typing that the problem exists.
>>> It surely is.
>>
>> Ok, well, enough said about that. Clearly, neither of us is going to 
>> convince the other.
>>
>>>> You could run into the same problem in a dynamically typed 
>>>> language.  Taking your "third-party method that requires a 
>>>> CharSequence as a  parameter" as an example, if someone wrote
>>>>
>>>> (defmethod foo ((x vector))
>>>>    ; some code here
>>>>    )
>>>>
>>>> then you would have to pass that method some kind of vector. You 
>>>> may  want to pass it some other object that the length function also 
>>>> works  for, but that isn't going to work. The type system will only 
>>>> let you  pass a vector.
>>>>
>>>> The only difference from the statically typed case is that, in the  
>>>> statically typed case, your program won't compile, whereas, in the  
>>>> dynamically typed case, your program will compile and run...until 
>>>> you  actually apply foo to something that isn't a vector.
>>> It will continue to run afterwards:
>>
>> .. in Common Lisp. Yes. But that wasn't the point, the point was that 
>> the program was erroneous.
>>
>>> CL-USER 1 > (defmethod foo ((v vector))
>>>               (map 'vector (lambda (x) (+ x x)) v))
>>> #<STANDARD-METHOD FOO NIL (VECTOR) 21C525FF>
>>>
>>> CL-USER 2 > (foo (list 1 2 3))
>>>
>>> Error: No applicable methods for #<STANDARD-GENERIC-FUNCTION FOO  
>>> 21C50B5A> with args ((1 2 3))
>>>   1 (continue) Call #<STANDARD-GENERIC-FUNCTION FOO 21C50B5A> again
>>>   2 (abort) Return to level 0.
>>>   3 Return to top loop level 0.
>>>
>>> Type :b for backtrace, :c <option number> to proceed,  or :? for 
>>> other  options
>>>
>>> CL-USER 3 : 1 > (defmethod foo (v)
>>>                   (let* ((coerced (coerce v 'vector))
>>>                          (result (foo coerced)))
>>>                     (coerce result (type-of v))))
>>> #<STANDARD-METHOD FOO NIL (T) 21DED763>
>>>
>>> CL-USER 4 : 1 > :c 1
>>> (2 4 6)
>>
>> .. and now you have done all the work to implement your own 
>> "third-party method", or, alternatively, to make your object 
>> compatible with the type expected by the original third-party method. 
>> In this case it was easy, because coerce knows how to convert a list 
>> to a vector and back, but that is beside the point. The point is that 
>> dynamic typing did not prevent you from having to do this work.
> 
> It's a method for a signature that I'm currently interested in, and some 
> arbitrary other code path.

Make that: "and not one that is on some arbitrary other code path."


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Mark Wooding
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <87iqlcer6v.fsf.mdw@metalzone.distorted.org.uk>
Robbert Haarman <··············@inglorion.net> writes:

> Taking "static typing" to mean that programs that cannot be typed
> correctly at compile time are rejected at compile time, whereas "dynamic
> typing" means type errors lead to rejection at run-time, static typing
> means, by definition, rejecting bad programs early. It seems to me this
> would be a productivity gain.

There's a downside to static typing, though.  The compiler doesn't just
reject programs that it can prove are incorrect: it rejects programs
which it fails to prove are correct.  As a consequence, compilers for
statically typed languages actually reject a nontrivial class of correct
programs.  Since the kinds of programs that I write in dynamically typed
languages, such as Lisp or Python, are most certainly in this class,
they would assuredly be rejected by a compiler for a statically typed
language.

I don't see how having the programs I'd like to write be rejected is a
productivity win.

(If I take the time to decorate my Lisp program with type declarations,
a decent compiler will indeed warn me about type errors.  Admittedly,
Lisp will warn me about programs which it proves to be /incorrect/,
which is the other kind of error, but it's still useful -- and I get to
write the programs which naturally occur to me to write rather than the
ones I'm forced to write by the type system.)

> Also, requiring types to be checked at compile time requires the types 
> to be determined at compile time, which means the knowledge of types is 
> available to perform optimizations.
>
> Now, in theory, you could perform all the same type inference and type 
> checking on a dynamically typed language that you could perform on a 
> statically typed language, as long as your program is written in a style 
> that we know how to do type inference for. In practice, this is often 
> not the case. The result is that programs written in dynamically typed 
> languages will often not have all their types known and checked at 
> compile time, leading to less efficient code generation and the 
> possibility of type errors at run time.

You mean that you acknowledge that programmers using dynamically typed
languages tend to write programs which static type systems would reject.
There must be a reason for this, and I'd claim that it's not just ill
discipline or ignorance.  In fact, I'd guess that the reason is that
those programs are quicker to write.  This certainly casts doubt on your
claim that static typing is an untrammelled productivity win.
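A concrete instance of the kind of program at issue (sketched in Python; a
classical Hindley-Milner-style checker without union types would reject
the heterogeneous list outright, though nothing can go wrong at run time):

```python
# Each element is used only in a way it supports, so no type error can
# occur at run time; yet the list has no single element type, so a
# simple static checker refuses to type it at all.
items = [1, "two", [3, 3, 3]]
total = sum(x if isinstance(x, int) else len(x) for x in items)
print(total)  # 1 + 3 + 3 = 7
```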

-- [mdw]
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <8p5soz1o2e4m.1ds6m0j0etc3$.dlg@40tude.net>
On Fri, 10 Apr 2009 11:30:48 +0100, Mark Wooding wrote:

> Robbert Haarman <··············@inglorion.net> writes:
> 
>> Taking "static typing" to mean that programs that cannot be correctly typed at 
>> compile time are rejected at compile time, whereas "dynamic typing" 
>> means type errors lead to rejection at run-time, static typing means, by 
>> definition, rejecting bad programs early. It seems to me this would be a 
>> productivity gain.
> 
> There's a downside to static typing, though.  The compiler doesn't just
> reject programs that it can prove are incorrect: it rejects programs
> which it fails to prove are correct.

Firstly, this is a property of *any* compiling system. There is no compiler
that could compile a correct program 2**9999999999999999 characters long.

Secondly, as an alternative to Lisp I propose a random generator of
hexadecimal machine code. Any sequence of machine codes is a correct
program. It would not smoke the CPU, you know.

> I don't see how having the programs I'd like to write be rejected is a
> productivity win.

A random generator is greatly more productive. In fact, a "bug" is an
artefact of checks. If you check nothing, there are no bugs...

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Raffael Cavallaro
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <fdf2ec24-a390-4689-8533-c51a03f738f8@r8g2000yql.googlegroups.com>
On Apr 10, 8:03 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Fri, 10 Apr 2009 11:30:48 +0100, Mark Wooding wrote:
> > Robbert Haarman <··············@inglorion.net> writes:
>
> >> Taking "static typing" to mean that programs that cannot be correctly typed at
> >> compile time are rejected at compile time, whereas "dynamic typing"
> >> means type errors lead to rejection at run-time, static typing means, by
> >> definition, rejecting bad programs early. It seems to me this would be a
> >> productivity gain.
>
> > There's a downside to static typing, though.  The compiler doesn't just
> > reject programs that it can prove are incorrect: it rejects programs
> > which it fails to prove are correct.
>
> Firstly, it is a property of *any* compiling system. There is no compiler
> that could compile a correct program of 2**9999999999999999 characters
> long.
>
> Secondly, as an alternative to Lisp I propose a random generator of
> hexadecimal machine code. Any sequence of machine codes is a correct
> program. It would not smoke the CPU, you know.
>
> > I don't see how having the programs I'd like to write be rejected is a
> > productivity win.
>
> Random generator is greatly more productive. In fact "bug" is an artefact
> of checks. If you check nothing, there is no bugs...
>
> --
> Regards,
> Dmitry A. Kazakov
> http://www.dmitry-kazakov.de

I think you're merely being sarcastic, but just in case you actually
think these points count as refutations...

1. Absurd "counterexamples" don't disprove general points. In fact,
having to reach for absurdities is usually a sign your argument is
failing.

2. Reasonable measures of productivity count *useful* worker output,
not random trash.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <g8b1tvxfcacp$.bwsoedn441ti$.dlg@40tude.net>
On Fri, 10 Apr 2009 07:05:03 -0700 (PDT), Raffael Cavallaro wrote:

> On Apr 10, 8:03 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> On Fri, 10 Apr 2009 11:30:48 +0100, Mark Wooding wrote:
>>> Robbert Haarman <··············@inglorion.net> writes:
>>
>>>> Taking "static typing" to mean that programs that cannot be correctly typed at
>>>> compile time are rejected at compile time, whereas "dynamic typing"
>>>> means type errors lead to rejection at run-time, static typing means, by
>>>> definition, rejecting bad programs early. It seems to me this would be a
>>>> productivity gain.
>>
>>> There's a downside to static typing, though. The compiler doesn't just
>>> reject programs that it can prove are incorrect: it rejects programs
>>> which it fails to prove are correct.
>>
>> Firstly, it is a property of *any* compiling system. There is no compiler
>> that could compile a correct program of 2**9999999999999999 characters
>> long.
>>
>> Secondly, as an alternative to Lisp I propose a random generator of
>> hexadecimal machine code. Any sequence of machine codes is a correct
>> program. It would not smoke the CPU, you know.
>>
>>> I don't see how having the programs I'd like to write be rejected is a
>>> productivity win.
>>
>> Random generator is greatly more productive. In fact "bug" is an artefact
>> of checks. If you check nothing, there is no bugs...
> 
> I think you're merely being sarcastic,

Surely I am.

> but just in case you actually
> think these point count as refutations...
> 
> 1. Absurd "counterexamples" don't disprove general points. In fact,
> having to reach for absurdities is usually a sign your argument is
> failing.

An absurd counterexample disproves an absurd point.

> 2. Reasonable measures of productivity count *useful* worker output,
> not random trash.

So we can agree that the original point about productivity was absurd,
made without providing any measurements.

I would like to see those measurements. Precisely: the number of man-hours
required to achieve a given rate of software failure, at a given severity
level, per source code line, per second of execution.

Further, I would also like to see an explanation of how later or fewer
checks could improve this rate and thus productivity. Especially how
program correctness can be defined without checks, which, according to the
point, need to be reduced in order to improve "productivity." Otherwise you
fall into the trivial case: no checks, no bugs, infinite productivity.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Pillsy
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ef9524c8-6db9-4bd9-b63a-1ea4f2989eaa@o11g2000yql.googlegroups.com>
On Apr 10, 11:44 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
[...]
> Further I would also like to an explanation how later or less checks could
> improve this rate and thus productivity.

How could such an obvious point need explanation? Eliminating
*irrelevant* checks will clearly increase productivity, because
irrelevant checks are, by definition, a waste of time.

> Especially the issue how program correctness can be defined without
> checks, which, according to the point need to be reduced in order
> to improve "productivity."

This is a really pitiful strawman. Just because you can't define
program correctness without the idea of conforming to *some* set of
checks hardly means that you can't define program correctness without
conforming to *every possible* set of checks.

Cheers,
Pillsy
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <22mf0jajocp0$.ttz59uuilbjm$.dlg@40tude.net>
On Fri, 10 Apr 2009 09:13:46 -0700 (PDT), Pillsy wrote:

> On Apr 10, 11:44 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
> [...]
>> Further I would also like to an explanation how later or less checks could
>> improve this rate and thus productivity.
> 
> How could such an obvious point need explanation? Eliminating
> *irrelevant* checks will clearly increase productivity, because
> irrelevant checks are, by definition, a waste of time.
>
>> Especially the issue how program correctness can be defined without
>> checks, which, according to the point need to be reduced in order
>> to improve "productivity."
> 
> This is a really pitiful strawman. Just because you can't define
> program correctness without the idea of conforming to *some* set of
> checks hardly means that you can't define program correctness without
> conforming to *every possible* set of checks.

Wow, now it becomes interesting. So type checks are irrelevant. At least
that's honest!

But that was not the original point. The point was that type checks are
great to perform later. You should have argued for untyped languages
instead.

However, that does not surprise me. Dynamic typing consistently leads to no
typing. No need to be ashamed of it, guys, just speak your minds. How are
you going to define correctness outside types (sets of values and
operations on them)? I am curious.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Pillsy
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <c93683b9-038b-4d99-bfcd-f348c3db1acd@h28g2000yqd.googlegroups.com>
On Apr 10, 12:42 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Fri, 10 Apr 2009 09:13:46 -0700 (PDT), Pillsy wrote:
[...]
> > This is a really pitiful strawman. Just because you can't define
> > program correctness without the idea of conforming to *some* set of
> > checks hardly means that you can't define program correctness without
> > conforming to *every possible* set of checks.

> Wow, now it becomes interesting. So type checks are irrelevant.

In *some* circumstances, they are.

You seem to be repeatedly excluding the middle.

> But that was not the original point. It was, that type checks are great to
> perform later.

Yes, because that's one easy way of providing finer-grained control
over what type checks are performed. If I'm doing exploratory
programming (which is something I certainly do a lot of), being
dropped into the debugger due to the occasional type error is a lot
more convenient than spending the time up front to get all my types
right, because that slows down the exploration that was the ultimate
point of the exercise.

Once I've got something that I think is pretty good, then getting
earlier checking is a lot more useful, and having a static type-system
is a lot more appealing, but it's not appealing enough to make dealing
with that system worth the trouble in the earlier stages of
development.

Cheers,
Pillsy
From: Paul Wallich
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <gro504$mi6$1@reader1.panix.com>
Pillsy wrote:
> On Apr 10, 12:42 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> On Fri, 10 Apr 2009 09:13:46 -0700 (PDT), Pillsy wrote:
> [...]
>>> This is a really pitiful strawman. Just because you can't define
>>> program correctness without the idea of conforming to *some* set of
>>> checks hardly means that you can't define program correctness without
>>> conforming to *every possible* set of checks.
> 
>> Wow, now it becomes interesting. So type checks are irrelevant.
> 
> In *some* circumstances, they are.
> 
> You seem to be repeatedly excluding the middle.
> 
>> But that was not the original point. It was, that type checks are great to
>> perform later.
> 
> Yes, because that's one easy way of providing finer-grained control
> over what type checks are performed. If I'm doing exploratory
> programming (which is something I certainly do a lot of), being
> dropped into the debugger due to the occasional type error is a lot
> more convenient than spending the time up front to get all my types
> right, because that slows down the exploration that was the ultimate
> point of the exercise.

Another way to think of this is in terms of dead-code elimination or 
short-circuit evaluation. None of the code that gets discarded during 
the exploratory phase is in any of the execution paths of the final 
product, or even of the preliminary versions. Performing extensive 
series of tests on it is a little like insisting that office workers 
carefully flatten every sheet of paper they throw into the wastebasket.
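
A concrete sketch of the same point in Python (the function is invented
for illustration): a type error sitting on a path that is never executed
is never signalled, so discarded exploratory branches cost nothing.

```python
# In a dynamically typed language, a type error in a branch that never
# runs is never signalled -- the "dead" exploratory code is simply inert.

def report(values, verbose=False):
    if verbose:
        # This line is a type error (str + int) -- but only if it ever runs.
        return "count: " + len(values)
    return len(values)

print(report([1, 2, 3]))  # 3 -- the broken branch is never executed
```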
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1iq2wek5m3jpw$.dp8qyzdkwrvi$.dlg@40tude.net>
On Fri, 10 Apr 2009 10:07:36 -0700 (PDT), Pillsy wrote:

> On Apr 10, 12:42 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> On Fri, 10 Apr 2009 09:13:46 -0700 (PDT), Pillsy wrote:
> [...]
>>> This is a really pitiful strawman. Just because you can't define
>>> program correctness without the idea of conforming to *some* set of
>>> checks hardly means that you can't define program correctness without
>>> conforming to *every possible* set of checks.
> 
>> Wow, now it becomes interesting. So type checks are irrelevant.
> 
> In *some* circumstances, they are.

It would be interesting to learn the cases in which a type violation
becomes irrelevant. The only such case is when there is no type at all. In
all other cases it is just a bug.

> You seem to be repeatedly excluding the middle.

It is not me who excludes it; it is logic that does. Unless you deploy
multi-valued or fuzzy logic here.

>> But that was not the original point. It was, that type checks are great to
>> perform later.
> 
> Yes, because that's one easy way of providing finer-grained control
> over what type checks are performed. If I'm doing exploratory
> programming (which is something I certainly do a lot of), being
> dropped into the debugger due to the occasional type error is a lot
> more convenient than spending the time up front to get all my types
> right, because that slows down the exploration that was the ultimate
> point of the exercise.
>
> Once I've got something that I think is pretty good, then getting
> earlier checking is a lot more useful, and having a static type-system
> is a lot more appealing, but it's not appealing enough to make dealing
> with that system worth the trouble in the earlier stages of
> development.

The question is what the name of that "something" is. How do you describe
"something" to yourself? It would have to be something other than an
algebraic presentation, a set of values bound by operations, because that
is exactly what a type is.

So either your language does not provide an adequate model to describe the
concept you have in mind, or else you have something really uncommon. My
guess is that it is the former.

Note that I do not mean incomplete descriptions of types, where "something"
is still being explored and it is not yet clear which values and which
operations it will have in the end. That cannot prevent you from stating
that "something" really is something and not, say, a complex matrix or a
commuter train schedule. What you want is a different thing. You want to
plug a microphone into a 220V wall outlet in order to explore what happens.
You should not do it. Trust the compiler; there were people who already
tried...

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Raffael Cavallaro
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <c8366afc-3583-4740-8397-0066cfa3bd42@q16g2000yqg.googlegroups.com>
On Apr 10, 1:46 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:

> It is interesting to learn the cases when type violation becomes
> irrelevant. The only one case is when there is no any type. In all other
> cases it is just a bug.

Only because you're thinking of one class of program, the kind where the
code to be run is known at compile time. But dynamically typed
languages let you write systems like this one:

<http://nm.wu-wien.ac.at/research/publications/b335.pdf>

where the code to be run isn't known at compile time, because it is
created by end users, not programmers. This code can't be statically
type checked at compile time for a simple reason: it doesn't exist yet
at compile time.

Now, turing completeness being what it is, one could of course have
written such a system in a statically typed language. Of course doing
so would mean implementing a good part of a dynamically typed language
on top of the statically typed language. If one is going to do this,
one might as well start with an existing, well specified, debugged,
tested dynamically typed language, rather than writing one's own on
top of a statically typed language.

The same concern applies the other way round; if one wants static type
checks for safety reasons (for example, potentially lethal medical
treatment software) it is of course possible to implement a statically
typed language compiler on top of a dynamically typed language. But
why go that route when perfectly good statically typed languages
already exist?

IOW, there are problem domains where dynamic typing is simply
necessary because we don't yet know at compile time what we'll be
running. In these cases, dynamically typed languages let us write
programs that static type checkers cannot prove correct and won't let
us compile (short of the reductio ad absurdum of implementing dynamic
typing on top of our static type system).

Conversely, there exist problem domains where we don't really care if
we end up rejecting some programs that could possibly be correct at
runtime because we want strong guarantees of safety before anything is
ever allowed to run. In such domains static type checking provides
added security at a cost that is inconsequential in that domain.
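
A minimal Python sketch of the first case (the rule text and all names are
hypothetical): the user's code simply does not exist when the host program
is compiled, so only a run-time check can ever see it.

```python
# Code that does not exist when the host program is compiled: here a
# "user-supplied" rule arrives as text at run time and only then becomes
# executable code. (A real system would validate/sandbox such input.)

user_rule = (
    "def classify(sample):\n"
    "    return 'positive' if sample > 0.5 else 'negative'\n"
)

namespace = {}
exec(user_rule, namespace)        # the code springs into existence now
classify = namespace["classify"]

print(classify(0.9))  # positive
print(classify(0.1))  # negative
```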
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090410183810.GQ3826@gildor.inglorion.net>
On Fri, Apr 10, 2009 at 11:14:15AM -0700, Raffael Cavallaro wrote:
> On Apr 10, 1:46 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
> 
> > It is interesting to learn the cases when type violation becomes
> > irrelevant. The only one case is when there is no any type. In all other
> > cases it is just a bug.
> 
> Only because you're thinking only of one class of program, where the
> code to be run is known at compile time. But dynamically typed
> languages let you write systems like this one:
> 
> <http://nm.wu-wien.ac.at/research/publications/b335.pdf>
> 
> where the code to be run isn't known at compile time because it is
> created by end users, not programmers. This code can't be statically
> type checked at compile time for a simple reason; it doesn't exist yet
> at compile time.

Sure it does. It may not yet exist at the time when the large system is 
compiled, but it does exist once the user creates it. You can perform 
your checking at any time after that - including before you run it.

Regards,

Bob

-- 
Life is too short to be taken seriously.

From: Raffael Cavallaro
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <da23a3dd-b0f2-49f3-b518-461d53912081@z14g2000yqa.googlegroups.com>
On Apr 10, 2:38 pm, Robbert Haarman <··············@inglorion.net>
wrote:
> On Fri, Apr 10, 2009 at 11:14:15AM -0700, Raffael Cavallaro wrote:
> > On Apr 10, 1:46 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> > wrote:
>
> > > It is interesting to learn the cases when type violation becomes
> > > irrelevant. The only one case is when there is no any type. In all other
> > > cases it is just a bug.
>
> > Only because you're thinking only of one class of program, where the
> > code to be run is known at compile time. But dynamically typed
> > languages let you write systems like this one:
>
> > <http://nm.wu-wien.ac.at/research/publications/b335.pdf>
>
> > where the code to be run isn't known at compile time because it is
> > created by end users, not programmers. This code can't be statically
> > type checked at compile time for a simple reason; it doesn't exist yet
> > at compile time.
>
> Sure it does. It may not yet exist at the time when the large system is
> compiled, but it does exist once the user creates it. You can perform
> your checking at any time after that - including before you run it.
>
> Regards,
>
> Bob
>
> --
> Life is too short to be taken seriously.

Only if your end user is a programmer who is willing to put up with
the cryptic error messages of static type checkers. When your end user
is not a programmer but a biologist, and you subject her to the
typical output of a static type checker, she simply gives up in
frustration and won't use your system.

Read the linked article and see why you need a system that insulates
the end user from such concerns, and why such a system in effect
requires you to either be running a dynamically typed language, or to
build one on top of your statically typed language.
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090410195550.GU3826@gildor.inglorion.net>
On Fri, Apr 10, 2009 at 12:18:24PM -0700, Raffael Cavallaro wrote:
> 
> Only if your end user is a programmer who is willing to put up with
> the cryptic error messages of static type checkers. When your end user
> is not a programmer but a biologist, and you subject her to the
> typical output of a static type checker, she simply gives up in
> frustration and won't use your system.

Fair enough, but I contend this is not a problem with static checking, 
but rather with cryptic messages. If your messages are cryptic, users 
will get frustrated, no matter if you use static or dynamic typing.

> Read the linked article and see why you need a system that insulates
> the end user from such concerns,

You can't get around the fact that not all operations can be 
meaningfully applied to all values. At one point or another, a user is 
going to instruct the system to do something that can't be done. If 
anything, I would expect the frustration to be less if the system told 
the user up front "this isn't going to work", instead of waiting until 
things actually blow up and then having the user retrace the steps.

In the situation discussed in the article, I would have tried to detect 
as many errors as early as possible. With respect to types, that could 
mean keeping track of the type of each expression and giving an 
indication as soon as the user breaks the type rules.
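
That as-you-type tracking can be sketched with a toy checker in Python
(the expression encoding is invented purely for illustration): every node
gets a type, and a mismatch is reported before anything is evaluated.

```python
# A toy "flag it as soon as it's entered" checker: infer the type of each
# expression node and raise on a mismatch, instead of failing mid-run.
# Expressions are tuples: ("num", 1), ("text", "hi"), ("add", e1, e2).

def infer(expr):
    tag = expr[0]
    if tag == "num":
        return "number"
    if tag == "text":
        return "string"
    if tag == "add":
        left, right = infer(expr[1]), infer(expr[2])
        if left == right == "number":
            return "number"
        raise TypeError(f"'add' needs two numbers, got {left} and {right}")
    raise ValueError(f"unknown expression: {tag!r}")

print(infer(("add", ("num", 1), ("num", 2))))   # number
# infer(("add", ("num", 1), ("text", "hi")))    # would raise TypeError
```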

I believe Eclipse does this for Java: it knows the type of each 
expression and it knows the types that each method can take. If you 
enter something that isn't right, it will underline the code. You can 
then request information about why it thinks the code is wrong.

Having said this, I have never worked on a system for biologists. If the 
system Catherine and Uwe built works well for the users, they did a good 
job.

Regards,

Bob

-- 
"The first casualty of war is truth."


From: Chris Barts
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <87d4bjkcuz.fsf@chbarts.motzarella.org>
Robbert Haarman <··············@inglorion.net> writes:

> On Fri, Apr 10, 2009 at 12:18:24PM -0700, Raffael Cavallaro wrote:
>> 
>> Only if your end user is a programmer who is willing to put up with
>> the cryptic error messages of static type checkers. When your end user
>> is not a programmer but a biologist, and you subject her to the
>> typical output of a static type checker, she simply gives up in
>> frustration and won't use your system.
>
> Fair enough, but I contend this is not a problem with static checking, 
> but rather with cryptic messages. If your messages are cryptic, users 
> will get frustrated, no matter if you use static or dynamic typing.

The paradigmatic languages here (that is, domain-specific languages
used by domain experts not trained as programmers) are Excel (for
accountants), R (for statisticians), Mathematica (for mathematicians),
and Perl (for people doing bioinformatics). They're all dynamically
typed languages that do very little static checking, AFAIK. I think
that's important to mention at this juncture.

>
>> Read the linked article and see why you need a system that insulates
>> the end user from such concerns,
>
> You can't get around the fact that not all operations can be 
> meaningfully applied to all values. At one point or another, a user is 
> going to instruct the system to do something that can't be done. If 
> anything, I would expect the frustration to be less if the system told 
> the user up front "this isn't going to work", instead of waiting until 
> things actually blow up and then having the user retrace the steps.

Again, this presumes the type system isn't going to reject reasonable
programs because it thinks some variable might at some point contain a
class of value it's never actually going to contain. Convincing users to
test their functions (which they have to do anyway, really) is more
reasonable than expecting them to learn about the difference between a
number and a string (and gods help them if the language differentiates
between types of numbers!).

Static language proponents can't get around the fact that testing is a
fact of life, and that type checking is rather coarse-grained compared to a
well-designed battery of tests. Especially if the tests are designed
by a domain expert and represent a reasonable workload for the code.

> In the situation discussed in the article, I would have tried to detect 
> as many errors as early as possible. With respect to types, that could 
> mean keeping track of the type of each expression and giving an 
> indication as soon as the user breaks the type rules.

OK, that will make the biologists scream obscenities and jab you with
infected needles. Prophylaxis against type errors is nothing compared
to prophylaxis against poly-resistant TB.

> I believe Eclipse does this for Java: it knows the type of each 
> expression and it knows the types that each method can take. If you 
> enter something that isn't right, it will underline the code. You can 
> then request information about why it thinks the code is wrong.

Eclipse is a bad example: You don't want to remind us all of Java's
horribly losing type system at this juncture. You want to remind us
about Haskell, perhaps, or some other (likely ML-family) language with
a type system that's complex enough to diagnose (or at least flag)
non-obvious errors without forcing us to explicitly declare everything
all the time.
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090411060655.GW3826@gildor.inglorion.net>
On Fri, Apr 10, 2009 at 10:56:04PM -0600, Chris Barts wrote:
> Robbert Haarman <··············@inglorion.net> writes:
> 
> > On Fri, Apr 10, 2009 at 12:18:24PM -0700, Raffael Cavallaro wrote:
> >> 
> >> Only if your end user is a programmer who is willing to put up with
> >> the cryptic error messages of static type checkers. When your end user
> >> is not a programmer but a biologist, and you subject her to the
> >> typical output of a static type checker, she simply gives up in
> >> frustration and won't use your system.
> >
> > Fair enough, but I contend this is not a problem with static checking, 
> > but rather with cryptic messages. If your messages are cryptic, users 
> > will get frustrated, no matter if you use static or dynamic typing.
> 
> The paradigmatic languages here (that is, domain-specific languages
> used by domain experts not trained as programmers) are Excel (for
> accountants), R (for statisticians), Mathematica (for mathematicians),
> and Perl (for people doing bioinformatics). They're all dynamically
> typed languages that do very little static checking, AFAIK. I think
> that's important to mention at this juncture.

Correct. And don't forget Python and JavaScript, which are also used by 
millions of non-programmers successfully, and are also dynamically 
typed.

It does make one wonder what it is that makes these languages so 
successful, and if it might perhaps be dynamic typing that plays a role 
here. Indeed, I have wondered this, and come to the following 
conclusion.

Many beginning programmers (whether they do programming as their main 
activity or not) do not have a concept of a type system. Forcing them to 
learn about types before they can write a program raises the barrier to 
entry for your language. Therefore, explicit typing is a disadvantage: 
what is this int and char stuff? In fact, this goes for anything that is 
basically boilerplate; it is much easier to grasp 'print "Hello, world"' 
when it isn't surrounded by half a screenful of boilerplate code.

In short, you want your programmers to have to learn as little as possible 
about your language before they can get started. Dynamic typing works 
here, because it does not require you to learn anything about the type 
system at all before you can write programs. However, that does not mean 
that dynamic typing is the only way to go, nor that dynamic typing is 
actually what makes these languages successful (more often than not, I 
think you will find the language is more or less dictated by the domain; 
e.g. you can't do scripting on web pages in anything other than 
JavaScript).

What I think, but I admit I don't have any empirical data to back this 
up, is that it is not dynamic typing, but implicit typing that lowers 
the barrier to entry. In other words, the important thing is not when 
type errors are signalled, but whether or not you need to mention types 
in your program.

> >
> >> Read the linked article and see why you need a system that insulates
> >> the end user from such concerns,
> >
> > You can't get around the fact that not all operations can be 
> > meaningfully applied to all values. At one point or another, a user is 
> > going to instruct the system to do something that can't be done. If 
> > anything, I would expect the frustration to be less if the system told 
> > the user up front "this isn't going to work", instead of waiting until 
> > things actually blow up and then having the user retrace the steps.
> 
> Again, this is presuming the type system isn't going to reject
> reasonable programs because it thinks some variable might at some
> point contain a class of variable it's never going to
> contain.

I have a difficult time imagining a program that one would want to write 
that is not amenable to static type checking. Perhaps that is just a 
limitation on my part, however. If you could point me to some examples, 
that would be great.

> Convincing them to test their functions (which they have to
> do anyway, really) is more reasonable than expecting them to learn
> about the difference between a number and a string

I agree with you that nothing is a good substitute for actual testing. 
As for the differences between strings and numbers: they are going to 
have to learn about those one way or another. Static typing or dynamic 
typing, you can't divide "hello" by 2, unless this is defined in some 
meaningful way.
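
A tiny Python illustration: the meaningless operation is still rejected,
just at run time rather than at compile time.

```python
# Dynamic typing doesn't make meaningless operations meaningful; the
# error is simply signalled when the offending expression is evaluated.
try:
    "hello" / 2
except TypeError as e:
    print("rejected at run time:", e)
```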

> (and gods help them if the language differentiates between types of 
> numbers!).

Honestly, I wish more languages bothered their users with the difference 
between exact and inexact numbers.
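
A short Python sketch of why that distinction matters, using the standard
`fractions` module for exact arithmetic:

```python
from fractions import Fraction

# Inexact binary floats carry representation error...
print(0.1 + 0.2 == 0.3)                                      # False

# ...while exact rationals behave like schoolbook arithmetic.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```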

> > I believe Eclipse does this for Java: it knows the type of each 
> > expression and it knows the types that each method can take. If you 
> > enter something that isn't right, it will underline the code. You can 
> > then request information about why it thinks the code is wrong.
> 
> Eclipse is a bad example: You don't want to remind us all of Java's
> horribly losing type system at this juncture.

Oh, I agree. But the example wasn't about Java, it was about signalling 
type errors as soon as they are entered. I could have used any other 
combination of programming environment and programming language that 
does this, except that I am not aware of any.

Regards,

Bob

-- 
The surest way to remain a winner is to win once, and then not play any more.


From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <n6clkox3wyzx.1dits5118ww9g.dlg@40tude.net>
On Sat, 11 Apr 2009 08:06:55 +0200, Robbert Haarman wrote:

> On Fri, Apr 10, 2009 at 10:56:04PM -0600, Chris Barts wrote:

>> Convincing them to test their functions (which they have to
>> do anyway, really) is more reasonable than expecting them to learn
>> about the difference between a number and a string
> 
> I agree with you that nothing is a good substitute for actual testing. 

Really? In fact, nothing is a substitute for a formal proof of correctness,
since full branch-coverage testing is 1) technically infeasible and 2)
requires a specification anyway.

If they indeed wanted to test (rather than just to *probe*), they would
have to invest a huge amount of up-front work, compared to the trivial
effort of attributing their variables with types. And then the
specification changes... God help them! (But of course, they actually test
nothing, and what they do is technically non-testable.)

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Mark Wooding
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <87ocv3cvg3.fsf.mdw@metalzone.distorted.org.uk>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:

> If they indeed wanted to test (rather than just to *probe*), they should
> have to invest a huge amount of up front work compared to trivial
> attributing their variables with types.

I must be reading this wrong.  You appear to be claiming that static
type checking is a substitute for testing.  But that's obviously crazy,
since even well-typed programs can fail to meet their specifications.

> And then the specification changes... God help them! (But of course,
> they actually test nothing, and what they do is technically
> non-testable.)

Nope, I've got no idea what the second half of the parenthetical remark
means -- because it can't mean what it looks like it means, since it's
so obviously wrong.

-- [mdw]
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <v8f44lmfzwtu.13sgonxn941iq$.dlg@40tude.net>
On Sat, 11 Apr 2009 11:54:04 +0100, Mark Wooding wrote:

> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
> 
>> If they indeed wanted to test (rather than just to *probe*), they should
>> have to invest a huge amount of up front work compared to trivial
>> attributing their variables with types.
> 
> I must be reading this wrong.  You appear to be claiming that static
> type checking is a substitute for testing.

Sure it is. You don't need to test for type errors, since types are
checked.

I repeat the point I made before. The only consistent dynamic typing is no
typing. Since the only way out is to claim that type errors need not to be
checked at all!

> But that's obviously crazy,
> since even well-typed programs can fail to meet their specifications.

No, they never fail to meet types specifications. You are mixing two
different kinds of errors. The point is that testing is always the worst
case scenario. You test only for something that cannot be proved
statically.

>> And then the specification changes... God help them! (But of course,
>> they actually test nothing, and what they do is technically
>> non-testable.)
> 
> Nope, I've got no idea what the second half of the parenthetical remark
> means -- because it can't mean what it looks like it means, since it's
> so obviously wrong.

One can test only against a specification. When just typing is already too
much work to think about it, then how a much more detailed thing as
specification isn't? Look, guys complain that merely to consider if the
thingy is a number or else a commuter train schedule is too big burden. How
could they design, implement and evaluate a test scenario for what they
don't care to know what it actually is?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Mark Wooding
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <877i1rcjrq.fsf.mdw@metalzone.distorted.org.uk>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:

> On Sat, 11 Apr 2009 11:54:04 +0100, Mark Wooding wrote:
> > I must be reading this wrong.  You appear to be claiming that static
> > type checking is a substitute for testing.
>
> Sure it is. You don't need to test for type errors, since types are
> checked.

But type errors are only one (fairly small) class of errors.  You still
need to test for all of the others, so you've actually won relatively
little.

> I repeat the point I made before. The only consistent dynamic typing is no
> typing. Since the only way out is to claim that type errors need not to be
> checked at all!

This is just a complete misunderstanding of what dynamic and static
typing mean.  The two are in fact almost orthogonal, rather than
mutually exclusive: the names `static' and `dynamic' are unhelpful and
impede understanding.

  * `Static typing' is better named `expression typing'.  An expression
    is a syntactic entity which denotes a computation.  Static typing
    assigns each expression a type, according to some rules, possibly
    based on other annotations in the source.  If this assignment fails
    (e.g., there is no applicable rule to assign a type to an
    expression, or there are multiple rules that assign distinct types
    without any means of disambiguation) then the program is considered
    ill-formed.

  * `Dynamic typing' is better named `value typing'.  A value is a
    runtime entity which stores a (compound or atomic) datum.  Dynamic
    typing assigns a type to each value, which can be checked at runtime
    by functions and operators acting on those values.  If an operator
    or function is applied to a value with an inappropriate type, then
    an error can be signalled.  This may indicate that the program is
    incorrect, or simply be a means of validating input data.

Just to show that these two concepts are mostly orthogonal:

  * Forth is neither statically nor dynamically typed.

  * C is statically typed, but not dynamically typed.

  * Scheme is dynamically typed, but not statically typed.

  * Java is both statically and dynamically typed.

To further confuse matters, there's the issue of `strong' versus `weak'
typing, which measures how easy the type system is to subvert.  Both
static and dynamic type systems can be weak or strong.

So, to return to your comment:

> I repeat the point I made before. The only consistent dynamic typing
> is no typing. Since the only way out is to claim that type errors need
> not to be checked at all!

This is clearly nonsense.  Scheme, for example, has a perfectly coherent
type system.  It has no static typing (or, if you like, trivial static
typing, since all expressions are assigned the same universal type).
But this doesn't mean that there is no typing at all: Scheme has strong
dynamic typing: if you apply the `*' procedure to strings or lists, an
error is signalled.

> > But that's obviously crazy, since even well-typed programs can fail
> > to meet their specifications.
>
> No, they never fail to meet types specifications. 

But not all specifications are about types.

> You are mixing two different kinds of errors. The point is that
> testing is always the worst case scenario. You test only for something
> that cannot be proved statically.

`Beware of bugs in the above code; I have only proved it correct, not
tried it.'
        -- Donald Knuth

> One can test only against a specification. When just typing is already
> too much work to think about it, then how a much more detailed thing
> as specification isn't? Look, guys complain that merely to consider if
> the thingy is a number or else a commuter train schedule is too big
> burden.

You've got this backwards.  If I have a specification, and a program,
and a proof that the program correctly implements the specification,
then type annotations and static checking are worthless to me.  The only
thing that a static type checker can tell me is that my program is
well-typed; but it must be if it's a correct program.

> How could they design, implement and evaluate a test scenario for what
> they don't care to know what it actually is?

And this is word salad.

-- [mdw]
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1xaycaanrtwql$.g5gwe6642b2r$.dlg@40tude.net>
On Sat, 11 Apr 2009 16:06:17 +0100, Mark Wooding wrote:

> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
> 
>> On Sat, 11 Apr 2009 11:54:04 +0100, Mark Wooding wrote:
>>> I must be reading this wrong.  You appear to be claiming that static
>>> type checking is a substitute for testing.
>>
>> Sure it is. You don't need to test for type errors, since types are
>> checked.
> 
> But type errors are only one (fairly small) class of errors.  You still
> need to test for all of the others, so you've actually won relatively
> little.

OK, that brings us back to the metrics. Nobody has presented them. Secondly
whatever percentage of errors it might be, they are caught. There is
absolutely no reason not to catch them.

>> I repeat the point I made before. The only consistent dynamic typing is no
>> typing. Since the only way out is to claim that type errors need not to be
>> checked at all!
> 
> This is just a complete misunderstanding of what dynamic and static
> typing mean.  The two are in fact almost orthogonal, rather than
> mutually exclusive: the names `static' and `dynamic' are unhelpful and
> impede understanding.

[...]
To clarify things. Each value has a type in any model. What you refer as
"dynamic" is merely a polymorphic value, which also has a type denoting the
class rooted in the most unspecific type. The tag of this value specifies
not the type of the compound, but the specific type of what it contains.

There is no orthogonality between these models. Any modern statically typed
language supports dynamic polymorphism and so dynamic typing, like Java,
you have mentioned. Nobody argues that there is no place for dynamic
typing. My point was, that if you want to remove static typing (because it
is a burden etc), you have to go untyped.
 
>> I repeat the point I made before. The only consistent dynamic typing
>> is no typing. Since the only way out is to claim that type errors need
>> not to be checked at all!
> 
> This is clearly nonsense.  Scheme, for example, has a perfectly coherent
> type system.  It has no static typing (or, if you like, trivial static
> typing, since all expressions are assigned the same universal type).

Yep

> But this doesn't mean that there is no typing at all: Scheme has strong
> dynamic typing: if you apply the `*' procedure to strings or lists, an
> error is signalled.

Is it an error? Note that an error (bug) cannot be signalled within the
program, which is erroneous = incorrect = has unpredictable behavior. It
can only be to the programmer. So in fact, if what you call "error" does
not kill the program, it is not an error, but a legal state of its
execution, which semantics is perfectly defined. For example, propagation
of an exception, call to "method not understood" etc. So any operation is
legal for any object. This is why I call it effectively untyped.

Yet another issue is the desire keep such problems undetected. Again, the
only possible reason why, is that you have no types. If you had types, not
formally, but semantically, then you would like to prevent type errors.

>>> But that's obviously crazy, since even well-typed programs can fail
>>> to meet their specifications.
>>
>> No, they never fail to meet types specifications. 
> 
> But not all specifications are about types.

Nobody claimed that.

>> You are mixing two different kinds of errors. The point is that
>> testing is always the worst case scenario. You test only for something
>> that cannot be proved statically.
> 
> `Beware of bugs in the above code; I have only proved it correct, not
> tried it.'
>         -- Donald Knuth

(:-))

Yes. But errors in programs proved correct are errors in the specifications.
If we could prove specifications correct, we would not need to test them; it
is an infinite recursion. Even more important is to specify what the testee
is.

>> One can test only against a specification. When just typing is already
>> too much work to think about it, then how a much more detailed thing
>> as specification isn't? Look, guys complain that merely to consider if
>> the thingy is a number or else a commuter train schedule is too big
>> burden.
> 
> You've got this backwards.  If I have a specification, and a program,
> and a proof that the program correctly implements the specification,
> then type annotations and static checking are worthless to me.

You cannot spell the specifications without types at any reasonable level
of complexity.

> The only
> thing that a static type checker can tell me is that my program is
> well-typed; but it must be if it's a correct program.

The only thing that a test can tell is that this test is passed. It does
not show that the program is correct. You need a branch coverage test in
order to show it correct. But in absence of statically typed annotation of
inputs and outputs, branch coverage is uncountably infinite. Arguing for
tests you do for static typing!

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Mark Wooding
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <87hc0vayu2.fsf.mdw@metalzone.distorted.org.uk>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:

> OK, that brings us back to the metrics. Nobody has presented
> them. Secondly whatever percentage of errors it might be, they are
> caught. There is absolutely no reason not to catch them.

Yes, there is, as I mentioned much earlier in the thread.  A static type
checker exists to prove properties about your program -- in particular,
to prove that certain kinds of errors cannot occur at runtime.  But, as
we know, proving nontrivial properties about arbitrary programs is very
difficult, and trying to do it algorithmically is doomed to failure
since most nontrivial properties are noncomputable.  So the static type
checker errs conservatively, rejecting programs which it cannot prove to
be free of the kinds of errors it's checking for -- even if the program
is, in fact, correct.

For example, consider

        foo (x : int, y : int, z : int) =
                3 * (if x /= 0 and y /= 0 and z /= 0 and
                        x^3 + y^3 == z^3 then
                          "hello, world"
                     else
                          2)

I doubt that many type checkers are capable of following Andrew Wiles in
their ability to show the type-correctness of this program (and its
equivalence to a program which evaluates its arguments, discards the
results, and returns 6).
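A Python transliteration of the same function (a hedged sketch, names chosen
freely) makes the point concrete: the branch yielding a string is dead by
Fermat's Last Theorem for exponent 3, so at run time the function can only
ever return 6, yet no conventional static checker can establish that:

```python
def foo(x, y, z):
    # By Fermat's Last Theorem for exponent 3, x**3 + y**3 == z**3 has
    # no solution in nonzero integers, so this branch is dead: a value
    # type check never sees the string, but a static expression checker
    # cannot know the branch is dead and must reject the definition.
    if x != 0 and y != 0 and z != 0 and x**3 + y**3 == z**3:
        r = "hello, world"
    else:
        r = 2
    return 3 * r         # in practice always 3 * 2 == 6

print(foo(1, 2, 3))      # prints 6
```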

> >> I repeat the point I made before. The only consistent dynamic
> >> typing is no typing. Since the only way out is to claim that type
> >> errors need not to be checked at all!
> > 
> > This is just a complete misunderstanding of what dynamic and static
> > typing mean.  The two are in fact almost orthogonal, rather than
> > mutually exclusive: the names `static' and `dynamic' are unhelpful
> > and impede understanding.
>
> [...]
> To clarify things. Each value has a type in any model. What you refer
> as "dynamic" is merely a polymorphic value, which also has a type
> denoting the class rooted in the most unspecific type. The tag of this
> value specifies not the type of the compound, but the specific type of
> what it contains.

That's a clarification?  It makes use of several terms, such as `class'
and `compound' which don't seem to be defined anywhere.  Dynamic typing
occurs in languages which have no class structure, e.g., pre-CLOS Common
Lisp (or do you count structure inclusion?) or Scheme.

> There is no orthogonality between these models. Any modern statically
> typed language supports dynamic polymorphism and so dynamic typing,
> like Java, you have mentioned.

Why restrict the discussion to modern languages?  C is still heavily
used; C++'s dynamic typing (RTTI) is rather limited.

> Nobody argues that there is no place for dynamic typing. My point was,
> that if you want to remove static typing (because it is a burden etc),
> you have to go untyped.

This seems to be a trivial tautology: if you remove static typing, there
is no static typing.  In a language like Scheme, there is still run-time
type checking.

> >> I repeat the point I made before. The only consistent dynamic typing
> >> is no typing. Since the only way out is to claim that type errors need
> >> not to be checked at all!
> > 
> > This is clearly nonsense.  Scheme, for example, has a perfectly coherent
> > type system.  It has no static typing (or, if you like, trivial static
> > typing, since all expressions are assigned the same universal type).
>
> Yep
>
> > But this doesn't mean that there is no typing at all: Scheme has strong
> > dynamic typing: if you apply the `*' procedure to strings or lists, an
> > error is signalled.
>
> Is it an error? Note that an error (bug) cannot be signalled within the
> program, which is erroneous = incorrect = has unpredictable behavior.

Checking my copy of R6RS, it seems that I misspoke, and the correct term
in Scheme is `raising an exception' (5.3).  Since the standard
procedures in Scheme are defined to check their arguments (5.4, 6.2) and
raise exceptions (of defined condition types) if the requirements are
not met, this behaviour can be relied upon by correct programs.

> It can only be to the programmer. So in fact, if what you call "error"
> does not kill the program, it is not an error, but a legal state of
> its execution, which semantics is perfectly defined. For example,
> propagation of an exception, call to "method not understood" etc. So
> any operation is legal for any object.

Yes, I suppose one could describe the situation in those terms, though I
don't think that it's the most useful way of thinking about it.

> This is why I call it effectively untyped.

As mentioned, I don't think this is a fruitful way of approaching a
non-statically typed but strongly and richly dynamically typed language
such as Scheme.  For example, Scheme's numeric tower is unusually rich
and powerful, making clear distinctions between exact and inexact,
integer, rational, real and complex numbers; furthermore, there are many
other kinds of data, both atomic and compound, which can be manipulated
in Scheme, and different operations are appropriate to different kinds
of values.  While an implementation will allow you to apply `cdr' to the
integer 3, this is not usually a helpful thing to do, and one tends to
consider it erroneous -- even though in fact the program's behaviour
remains well-defined.

> Yet another issue is the desire keep such problems undetected. Again,
> the only possible reason why, is that you have no types.

No.  The desire is to be able to write programs which are, in fact, free
of type errors (in the sense of there not being any unplanned exceptions
raised, say), though a static type checker might be unable to prove
this.

> If you had types, not formally, but semantically, then you would like
> to prevent type errors.

I don't know what this means.

> >>> But that's obviously crazy, since even well-typed programs can fail
> >>> to meet their specifications.
> >>
> >> No, they never fail to meet types specifications. 
> > 
> > But not all specifications are about types.
>
> Nobody claimed that.

Then why is your comment `they never fail to meet types specifications'
a useful response to my claim that well-typed programs can fail to meet
their specifications?

> You cannot spell the specifications without types at any reasonable level
> of complexity.

True.  But the types in the specification need not map onto types in my
implementation.  The types in the Z specification language are very
abstract (simply sets of values), and don't map exactly onto any
implementation language that I know -- and I know a lot of them.  But
that doesn't matter: all you have to prove is that, for argument values
in the specified domain, the function computes result values according
to the specification.  Static types in the implementation language are
entirely unnecessary to this process.

> > The only thing that a static type checker can tell me is that my
> > program is well-typed; but it must be if it's a correct program.
>
> The only thing that a test can tell is that this test is passed. It does
> not show that the program is correct. 

True, but irrelevant to my point.

> You need a branch coverage test in order to show it correct. But in
> absence of statically typed annotation of inputs and outputs, branch
> coverage is uncountably infinite. Arguing for tests you do for static
> typing!

Even that is insufficient.  In particular, it won't tell you that the
program as written fails to detect and handle particular kinds of
special cases in its input.  Consider a binary tree implementation which
corrupts the tree when deleting entries represented in interior nodes
(say by truncating the entire branch): a test suite which only tests the
implementation when deleting leaf nodes can achieve full branch
coverage, yet it is still an incorrect (though possibly well-typed)
implementation.
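A deliberately broken sketch (hypothetical code, not from the thread) shows
how such an implementation can pass a full-branch-coverage suite built only
from leaf deletions:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def delete(node, key):
    """Deliberately buggy BST deletion: a match at an interior node
    truncates the entire subtree rooted there."""
    if node is None:
        return None
    if key < node.key:
        node.left = delete(node.left, key)
    elif key > node.key:
        node.right = delete(node.right, key)
    else:
        return None          # bug: discards node.left and node.right
    return node

def keys(node):              # in-order key listing
    return [] if node is None else keys(node.left) + [node.key] + keys(node.right)

def tree():
    return Node(2, Node(1), Node(3))

# A leaf-only suite exercises every branch of delete() and passes:
assert keys(delete(tree(), 1)) == [2, 3]      # leaf on the left
assert keys(delete(tree(), 3)) == [1, 2]      # leaf on the right
assert keys(delete(tree(), 0)) == [1, 2, 3]   # absent key: None branch
# ...yet deleting the interior root silently loses 1 and 3 as well:
assert keys(delete(tree(), 2)) == []
```

All four branches of `delete` are hit without ever deleting an interior
node, so branch coverage is complete while the defect goes undetected.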

-- [mdw]
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <15m6jj8jbsssk.1ap6zq7bsphyd.dlg@40tude.net>
On Sat, 11 Apr 2009 18:23:49 +0100, Mark Wooding wrote:

> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
> 
>> OK, that brings us back to the metrics. Nobody has presented
>> them. Secondly whatever percentage of errors it might be, they are
>> caught. There is absolutely no reason not to catch them.
> 
> Yes, there is, as I mentioned much earlier in the thread.  A static type
> checker exists to prove properties about your program -- in particular,
> to prove that certain kinds of errors cannot occur at runtime.

Yes

> But, as
> we know, proving nontrivial properties about arbitrary programs is very
> difficult, and trying to do it algorithmically is doomed to failure
> since most nontrivial properties are noncomputable.

Yes

> So the static type
> checker errs conservatively, rejecting programs which it cannot prove to
> be free of the kinds of errors it's checking for -- even if the program
> is, in fact, correct.

No. That depends on the power of the type system. If the type system is so
powerful that it allows describing types that are undecidable to match,
then yes, a well-typed program would be impossible to check. So in order to
stay checkable, you have to limit the expressiveness of the type system.

> For example, consider
> 
>         foo (x : int, y : int, z : int) =
>                 3 * (if x /= 0 and y /= 0 and z /= 0 and
>                         x^3 + y^3 == z^3 then
>                           "hello, world"
>                      else
>                           2)
> 
> I doubt that many type checkers are capable of following Andrew Wiles in
> their ability to show the type-correctness of this program (and its
> equivalence to a program which evaluates its arguments, discards the
> results, and returns 6).

This is a good example of a system that is unnecessarily powerful. Just
change it to

   int foo (x : int, y : int, z : int) = ...

and it would become trivially checkable, rejecting the program because
"hello world" is not int.

>>>> I repeat the point I made before. The only consistent dynamic
>>>> typing is no typing. Since the only way out is to claim that type
>>>> errors need not to be checked at all!
>>> 
>>> This is just a complete misunderstanding of what dynamic and static
>>> typing mean.  The two are in fact almost orthogonal, rather than
>>> mutually exclusive: the names `static' and `dynamic' are unhelpful
>>> and impede understanding.
>>
>> [...]
>> To clarify things. Each value has a type in any model. What you refer
>> as "dynamic" is merely a polymorphic value, which also has a type
>> denoting the class rooted in the most unspecific type. The tag of this
>> value specifies not the type of the compound, but the specific type of
>> what it contains.
> 
> That's a clarification?  It makes use of several terms, such as `class'
> and `compound' which don't seem to be defined anywhere.  Dynamic typing
> occurs in languages which have no class structure, e.g., pre-CLOS Common
> Lisp (or do you count structure inclusion?) or Scheme.

Class = set of types closed by the relation of inheritance.

>> There is no orthogonality between these models. Any modern statically
>> typed language supports dynamic polymorphism and so dynamic typing,
>> like Java, you have mentioned.
> 
> Why restrict the discussion to modern languages?  C is still heavily
> used; C++'s dynamic typing (RTTI) is rather limited.

But the type system of C barely deserves any interest. Maybe as a vivid
example of how not to do it...

>> Nobody argues that there is no place for dynamic typing. My point was,
>> that if you want to remove static typing (because it is a burden etc),
>> you have to go untyped.
> 
> This seems to be a trivial tautology: if you remove static typing, there
> is no static typing.  In a language like Scheme, there is still run-time
> type checking.

This is discussed below.

>>>> I repeat the point I made before. The only consistent dynamic typing
>>>> is no typing. Since the only way out is to claim that type errors need
>>>> not to be checked at all!
>>> 
>>> This is clearly nonsense.  Scheme, for example, has a perfectly coherent
>>> type system.  It has no static typing (or, if you like, trivial static
>>> typing, since all expressions are assigned the same universal type).
>>
>> Yep
>>
>>> But this doesn't mean that there is no typing at all: Scheme has strong
>>> dynamic typing: if you apply the `*' procedure to strings or lists, an
>>> error is signalled.
>>
>> Is it an error? Note that an error (bug) cannot be signalled within the
>> program, which is erroneous = incorrect = has unpredictable behavior.
> 
> Checking my copy of R6RS, it seems that I misspoke, and the correct term
> in Scheme is `raising an exception' (5.3).  Since the standard
> procedures in Scheme are defined to check their arguments (5.4, 6.2) and
> raise exceptions (of defined condition types) if the requirements are
> not met, this behaviour can be relied upon by correct programs.
> 
>> It can only be to the programmer. So in fact, if what you call "error"
>> does not kill the program, it is not an error, but a legal state of
>> its execution, which semantics is perfectly defined. For example,
>> propagation of an exception, call to "method not understood" etc. So
>> any operation is legal for any object.
> 
> Yes, I suppose one could describe the situation in those terms, though I
> don't think that it's the most useful way of thinking about it.

I understand, it is disappointing... (:-))

>> This is why I call it effectively untyped.
> 
> As mentioned, I don't think this is a fruitful way of approaching a
> non-statically typed but strongly and richly dynamically typed language
> such as Scheme.  For example, Scheme's numeric tower is unusually rich
> and powerful, making clear distinctions between exact and inexact,
> integer, rational, real and complex numbers;

Actually I have no problem with mixing integer and rational numbers, if the
difference is not part of the contract, as in the case of the behavior of
rounding, overflow etc. What you describe is an implementation detail,
because it is not a part of the contract. My concern is adding apples to
oranges. Both are natural integers.

> While an implementation will allow you to apply `cdr' to the
> integer 3, this is not usually a helpful thing to do, and one tends to
> consider it erroneous -- even though in fact the program's behaviour
> remains well-defined.

This is a language weakness. The programmer should be able to render
semantically erroneous programs illegal from the language's point of view.
Lack of static typing is an obvious weakness from this standpoint.
 
>> Yet another issue is the desire keep such problems undetected. Again,
>> the only possible reason why, is that you have no types.
> 
> No.  The desire is to be able to write programs which are, in fact, free
> of type errors (in the sense of there not being any unplanned exceptions
> raised, say), though a static type checker might be unable to prove
> this.

I have no problem with that.

>> If you had types, not formally, but semantically, then you would like
>> to prevent type errors.
> 
> I don't know what this means.

Types as sets of values and operations on them. Thinking about the problem
space involves such types, like Employee, Salary, Integer etc. Formal types
of the programming language are to model these types.

>>>>> But that's obviously crazy, since even well-typed programs can fail
>>>>> to meet their specifications.
>>>>
>>>> No, they never fail to meet types specifications. 
>>> 
>>> But not all specifications are about types.
>>
>> Nobody claimed that.
> 
> Then why is your comment `they never fail to meet types specifications'
> a useful response to my claim that well-typed programs can fail to meet
> their specifications?

Well-typed program meets all specification of types, no more no less.

>> You cannot spell the specifications without types at any reasonable level
>> of complexity.
> 
> True.  But the types in the specification need not map onto types in my
> implementation.  The types in the Z specification language are very
> abstract (simply sets of values), and don't map exactly onto any
> implementation language that I know -- and I know a lot of them.  But
> that doesn't matter: all you have to prove is that, for argument values
> in the specified domain, the function computes result values according
> to the specification.  Static types in the implementation language are
> entirely unnecessary to this process.

But there should be something that models domain types. If not a type then
what? It looks very unusual. In my view type systems were invented just to
resemble domain types. A need to check disparate formal types arise from
the fact that the corresponding domain types do not mix.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Mark Wooding
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <87tz4uaova.fsf.mdw@metalzone.distorted.org.uk>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:

> On Sat, 11 Apr 2009 18:23:49 +0100, Mark Wooding wrote:
>
> >         foo (x : int, y : int, z : int) =
> >                 3 * (if x /= 0 and y /= 0 and z /= 0 and
> >                         x^3 + y^3 == z^3 then
> >                           "hello, world"
> >                      else
> >                           2)
> > 
> This is a good example of a system that is unnecessarily powerful. Just
> change it to
>
>    int foo (x : int, y : int, z : int) = ...
>
> and it would become trivially checkable, rejecting the program because
> "hello world" is not int.

No, fixing the return type wouldn't help.  3 * 2 is certainly an
integer, and the function is unable to return anything else.

> > That's a clarification?  It makes use of several terms, such as `class'
> > and `compound' which don't seem to be defined anywhere.  Dynamic typing
> > occurs in languages which have no class structure, e.g., pre-CLOS Common
> > Lisp (or do you count structure inclusion?) or Scheme.
>
> Class = set of types closed by the relation of inheritance.

Umm... That doesn't help.  No class structure, remember?

> > Why restrict the discussion to modern languages?  C is still heavily
> > used; C++'s dynamic typing (RTTI) is rather limited.
>
> But the type system of C barely deserves any interest. Maybe as a vivid
> example of how not to do it...

As an example of the kinds of type systems that exist, and that our
categorization therefore needs to include.

> > No.  The desire is to be able to write programs which are, in fact, free
> > of type errors (in the sense of there not being any unplanned exceptions
> > raised, say), though a static type checker might be unable to prove
> > this.
>
> I have no problem with that.

I'm not quite sure what you're referring to by `that'; I have a problem
with being denied the ability to write particular correct programs
simply because of the inadequacy of a type system verifier.  And I most
certainly have a problem with being told that this is to improve my
productivity.

> >>>>> But that's obviously crazy, since even well-typed programs can fail
> >>>>> to meet their specifications.
> >>>>
> >>>> No, they never fail to meet types specifications. 
> >>> 
> >>> But not all specifications are about types.
> >>
> >> Nobody claimed that.
> > 
> > Then why is your comment `they never fail to meet types specifications'
> > a useful response to my claim that well-typed programs can fail to meet
> > their specifications?
>
> A well-typed program meets all specifications of types, no more, no less.

Let's get this straight.  I claimed that `even well-typed programs can
fail to meet their specifications'.  You said that this claim was false,
stating as your justification that `they never fail to meet types
specifications'.  However, since you accept that `not all specifications
are about types', I don't see how you can consider your justification
adequate, or even relevant.

> But there should be something that models domain types. If not a type
> then what? It looks very unusual. In my view type systems were
> invented just to resemble domain types. A need to check disparate
> formal types arises from the fact that the corresponding domain types
> do not mix.

Types, maybe; statically checked types, no.

-- [mdw]
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1beh6koyf5av7.1t25jo71jyzkz$.dlg@40tude.net>
On Sat, 11 Apr 2009 21:59:05 +0100, Mark Wooding wrote:

> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
> 
>> On Sat, 11 Apr 2009 18:23:49 +0100, Mark Wooding wrote:
>>
>>>         foo (x : int, y : int, z : int) =
>>>                 3 * (if x /= 0 and y /= 0 and z /= 0 and
>>>                         x^3 + y^3 == z^3 then
>>>                           "hello, world"
>>>                      else
>>>                           2)
>>> 
>> This is a good example of a system that is unnecessarily powerful. Just change
>> it to
>>
>>    int foo (x : int, y : int, z : int) = ...
>>
>> and it would become trivially checkable, rejecting the program because
>> "hello world" is not int.
> 
> No, fixing the return type wouldn't help.  3 * 2 is certainly an
> integer, and the function is unable to return anything else.

What should it return?

>>> That's a clarification?  It makes use of several terms, such as `class'
>>> and `compound' which don't seem to be defined anywhere.  Dynamic typing
>>> occurs in languages which have no class structure, e.g., pre-CLOS Common
>>> Lisp (or do you count structure inclusion?) or Scheme.
>>
>> Class = set of types closed by the relation of inheritance.
> 
> Umm... That doesn't help.  No class structure, remember?

I don't, sorry. Somebody cut the citation off.

>>> Why restrict the discussion to modern languages?  C is still heavily
>>> used; C++'s dynamic typing (RTTI) is rather limited.
>>
>> But the type system of C barely deserves any interest. Maybe as a vivid
>> example of how not to do it...
> 
> As an example of the kinds of type systems that exist, and which our
> categorization therefore needs to include.

It does. We can discuss C as well.

>>> No.  The desire is to be able to write programs which are, in fact, free
>>> of type errors (in the sense of there not being any unplanned exceptions
>>> raised, say), though a static type checker might be unable to prove
>>> this.
>>
>> I have no problem with that.
> 
> I'm not quite sure what you're referring to by `that'; I have a problem
> with being denied the ability to write particular correct programs
> simply because of the inadequacy of a type system verifier.  And I most
> certainly have a problem with being told that this is to improve my
> productivity.

You should show this inadequacy. In a properly designed type system you can
always relax the checks by adding dynamic constraints and imaginary elements
to the set of values. This is how variable-length arrays and division-by-zero
issues are handled without breaking types. Dynamic polymorphism belongs in
the same category.

In a type definition you specify as much as you know for sure (e.g.
statically) about the values of the type that can be expressed in the
language. Why should this reduce your productivity? On the contrary,
thinking about it is an essential part of software design.

>>>>>>> But that's obviously crazy, since even well-typed programs can fail
>>>>>>> to meet their specifications.
>>>>>>
>>>>>> No, they never fail to meet types specifications. 
>>>>> 
>>>>> But not all specifications are about types.
>>>>
>>>> Nobody claimed that.
>>> 
>>> Then why is your comment `they never fail to meet types specifications'
>>> a useful response to my claim that well-typed programs can fail to meet
>>> their specifications?
>>
>> A well-typed program meets all specifications of types, no more, no less.
> 
> Let's get this straight.  I claimed that `even well-typed programs can
> fail to meet their specifications'.  You said that this claim was false,
> stating as your justification that `they never fail to meet types
> specifications'.  However, since you accept that `not all specifications
> are about types', I don't see how you can consider your justification
> adequate, or even relevant.

If your point was merely that there are programs that do not meet their
specifications while being compilable, executable, sold, even deployed, then
how is this relevant to the discussion? Yes, such programs exist.

>> But there should be something that models domain types. If not a type
>> then what? It looks very unusual. In my view type systems were
>> invented just to resemble domain types. A need to check disparate
>> formal types arises from the fact that the corresponding domain types
>> do not mix.
> 
> Types, maybe; statically checked types, no.

Great, we are moving forward! So, types are useful. Checking for errors is
not? Are you sure that you could defend that?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090421110532.435@gmail.com>
["Followup-To:" header set to comp.lang.lisp.]
On 2009-04-11, Dmitry A. Kazakov <·······@dmitry-kazakov.de> wrote:
> On Sat, 11 Apr 2009 16:06:17 +0100, Mark Wooding wrote:
>
>> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>> 
>>> On Sat, 11 Apr 2009 11:54:04 +0100, Mark Wooding wrote:
>>>> I must be reading this wrong.  You appear to be claiming that static
>>>> type checking is a substitute for testing.
>>>
>>> Sure it is. You don't need to test for type errors, since types are
>>> checked.
>> 
>> But type errors are only one (fairly small) class of errors.  You still
>> need to test for all of the others, so you've actually won relatively
>> little.
>
> OK, that brings us back to the metrics. Nobody has presented them. Secondly
> whatever percentage of errors it might be, they are caught.

This percentage win is only real if everything else is held constant.
Everything else is held constant only if the static type checks are in an
advisory role. The static analysis gives us three possible answers: the program
is free of type errors, the program has type errors, or the checks are
inconclusive.  If the checks are inconclusive, we may want to run the program
anyway. That's okay; we have dynamic types to extend safety into run-time.
Whenever the static checks positively identify errors, they help us.

But suppose that we can't run the program if the static checks are
inconclusive. Then it's not all equal any more. In exchange for type safety
(the percentage of errors caught) we have to give up a whole class of valid
programs. Of course, we can't give up programs that solve problems for us, so
we have to rewrite them so that they can pass static type checks.

If we have to rewrite some programs, then everything is no longer held
constant. We do not have straightforward percentage improvement from static
typing.

> There is absolutely no reason not to catch them.

Likewise, there is absolutely no reason not to run programs in which
no type errors were detected, but for which the detection was not conclusive.
To run those programs safely, type information must be known at run time.

> To clarify things. Each value has a type in any model. What you refer as
> "dynamic" is merely a polymorphic value

There rarely exist polymorphic values, but rather polymorphic data paths in a
program, or polymorphic storage locations.

> , which also has a type denoting the
> class rooted in the most unspecific type. The tag of this value specifies
> not the type of the compound, but the specific type of what it contains.
>
> There is no orthogonality between these models. Any modern statically typed
> language supports dynamic polymorphism and so dynamic typing, like Java,
> you have mentioned. 

Java isn't an example of a modern static language (except for values of
``modern'' that mean ``recent'').

Note that ``support'' for dynamic typing isn't the same thing as complete
dynamic typing all the way down.

> Nobody argues that there is no place for dynamic
> typing. My point was, that if you want to remove static typing (because it
> is a burden etc), you have to go untyped.

Your point is stupefyingly moronic. It's like a 10,000 kilogram, dense
stupidity that, walk around it though I may, I find impenetrable from all
sides. How can you claim that typing is non-typing?

> Is it an error? Note that an error (bug) cannot be signalled within the
> program, which is erroneous = incorrect = has unpredictable behavior. It
> can only be signalled to the programmer. So in fact, if what you call
> "error" does not kill the program, it is not an error, but a legal state
> of its execution, whose semantics is perfectly defined.

A segmentation fault with a core dump is a legal state of execution of a
program on a Unix box.

> For example, propagation
> of an exception, call to "method not understood" etc. So any operation is
> legal for any object. This is why I call it effectively untyped.

So the claim is that errors within the error model of a high level language are
not real errors.

Your point seems to be that if the high level language has a kind of
bullet-proof run-time system in which errors like type mismatch or division by
zero are represented in a predictable way, that language is ...
effectively untyped?

So typing is when an operation /cannot/ be applied to any object,
because it will result in an undefined state (corruption, abnormal termination,
etc).  If objects have type codes which prevents this situation from ever
happening, then that is /untyped/. Real typing is when you can confidently
throw the type information away because you checked it at compile time, and
rejected programs which didn't prove error-free.  

So when you can discard type information, you have typing.  When you retain it,
you do not have typing.

If this is a strawman version of your position, please re-articulate the strong
version; I'm curious what that would be.
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090411162557.GZ3826@gildor.inglorion.net>
On Sat, Apr 11, 2009 at 04:06:17PM +0100, Mark Wooding wrote:
> 
> This is just a complete misunderstanding of what dynamic and static
> typing mean.  The two are in fact almost orthogonal, rather than
> mutually exclusive: the names `static' and `dynamic' are unhelpful and
> impede understanding.

Please note that I provided my definitions of static typing and dynamic 
typing at the start of this discussion. The definitions are, in a 
nutshell,

Static typing means type errors are signalled before the program is run.

Dynamic typing means type errors are signalled while the program is 
running.

These two concepts are certainly mutually exclusive. Perhaps you do not 
agree with my definitions, but that is a different discussion. If you 
would be so kind, if you want to use different definitions, do so in a 
different thread. This one is large enough as it is.
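As an illustration of those two definitions (a hedged sketch using Python as a dynamically typed stand-in; a statically typed language would instead reject the bad call before the program ever starts):

```python
def greet(name):
    return "hello, " + name   # well-typed only when name is a string

print(greet("world"))         # the program loads and runs fine

# Under dynamic typing, the type error is signalled only when the
# offending call is actually executed, while the program is running:
try:
    greet(42)
except TypeError as err:
    print("signalled at run time:", err)
```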

Regards,

Bob

-- 
Tis better to be silent and thought a fool, than to open your mouth 
and remove all doubt.

	-- Abraham Lincoln

From: ·····@franz.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <b54cd7cc-a052-46f0-9260-82b8f13399c6@r31g2000prh.googlegroups.com>
On Apr 11, 9:25 am, Robbert Haarman <··············@inglorion.net>
wrote:
> On Sat, Apr 11, 2009 at 04:06:17PM +0100, Mark Wooding wrote:
>
> > This is just a complete misunderstanding of what dynamic and static
> > typing mean.  The two are in fact almost orthogonal, rather than
> > mutually exclusive: the names `static' and `dynamic' are unhelpful and
> > impede understanding.
>
> Please note that I provided my definitions of static typing and dynamic
> typing at the start of this discussion. The definitions are, in a
> nutshell,
>
> Static typing means type errors are signalled before the program is run.

Agreed.  It should also be noted that _all_ static type errors will be
caught (specification errors notwithstanding).

> Dynamic typing means type errors are signalled while the program is
> running.

Partially agreed, but not necessarily so.  Such errors are only caught
(yes, at runtime) when they are encountered.  It is possible (and in
fact likely) that during a unit test of the program a dynamic type
error will not be encountered, and so, while the programmer is
concentrating on getting Unit A working, he need not concern himself
with interruptions due to a type error in Unit B.  This is especially
important when multiple programmers are working on multiple subunits
of a program where the types within each other's units are not even
known - it allows work to go on without having to stop the whole
project while the type system is worked out.
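A hedged sketch of that workflow in Python (the unit names and functions are invented for illustration): the programmer can exercise Unit A thoroughly even though Unit B still contains a type mismatch, because the error is only signalled if the faulty code actually runs.

```python
# "Unit A": the code currently being worked on.
def parse_price(text):
    return int(text.strip())

# "Unit B": a colleague's unfinished code with a latent type bug --
# sum() yields an int, which cannot be concatenated with a string.
def format_total(prices):
    return sum(prices) + " EUR"

# Testing Unit A proceeds without interruption; the type error in
# Unit B is signalled only when format_total() is actually called.
assert parse_price(" 42 ") == 42
```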

> These two concepts are certainly mutually exclusive. Perhaps you do not
> agree with my definitions, but that is a different discussion. If you
> would be so kind, if you want to use different definitions, do so in a
> different thread. This one is large enough as it is.

I don't think that there's any problem with your definitions, though
there are certainly alternate ones.  Your definition of dynamic typing
certainly led you to summarize your definitions in a not-quite-correct
way, even if you agree that it was a minor mistake (i.e. that dynamic
type errors _are_ signaled at runtime, rather than _may_ _be_ signaled
at runtime).

But therein lies the rub.  Those who are static typing advocates tend
to minimize the fact that dynamic typing doesn't catch all type
errors, or they claim it as a great bane on dynamic typing.  But
dynamic typing advocates call this a _feature_, not a bug, and we lay
claim to great productivity increases, both personally and in group
efforts, because of the lazy-detection that dynamic typing offers.

Also, something that gets lost in these discussions is the presumed
dichotomy of static vs dynamic typing.  It is clear that a static
typing advocate must, by definition, maintain an absolute position
on static typing: if there are any errors in the compilation then the
program is wrong.  However, dynamic-typing proponents are not limited
to viewing the world in terms of type-checks-at-runtime-only; CL
especially allows static type checking when types are known and when
the compiler can catch the type errors, and we who are dynamic-typing
proponents do indeed appreciate the efforts of good statically checked
program segments.  As a CL implementor, I consider it to be important
to hug the terrain very closely; where types are known, the compiler
should propagate them, infer from them, and either warn or error as
appropriate.  But where types are not yet known, such gaps in the
compile-time specification, even if they provoke warnings, must not deter
the programmer from his desired thought process.

Duane
From: ·····@franz.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <4c9c67eb-f0de-4cd9-af98-3ce3f1ae9b8a@s1g2000prd.googlegroups.com>
Correcting myself:

On Apr 11, 9:53 am, ·····@franz.com wrote:

> > Static typing means type errors are signalled before the program is run.
>
> Agreed.  It should also be noted that _all_ static type errors will be
> caught (specification errors notwithstanding).

The parenthesized phrase was unfortunate.  What I meant was "(modulo
specification errors)".

Duane
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1m3ufkjf2byi3.1lepvmxy7pbis$.dlg@40tude.net>
On Sat, 11 Apr 2009 09:53:53 -0700 (PDT), ·····@franz.com wrote:

> This is especially
> important when multiple programmers are working on multiple subunits
> of a program where the types within each other's units are not even
> known - it allows work to go on without having to stop the whole
> project while the type system is worked out.

Impressive. One can sew the sails while another prepares the harness for
the horse...

> But therein lies the rub.  Those who are static typing advocates tend
> to minimize the fact that dynamic typing doesn't catch all type
> errors, or they claim it as a great bane on dynamic typing.  But
> dynamic typing advocates call this a _feature_, not a bug, and we lay
> claim to great productivity increases, both personally and in group
> efforts, because of the lazy-detection that dynamic typing offers.

I think this confuses dynamic typing and weak typing. Dynamic refers to the
time of binding. A possibility to check for type errors at given time is
merely a consequence of. The "feature" of ignoring typing is to have typing
weak, rather than strong.

Let me repeat it once again: an honest advocate of dynamic typing argues
against typing, in favor of no typing. Thank you for illustrating this
point.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Raffael Cavallaro
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <368cb3b7-0abe-4bf4-9f41-17fdf80d0178@v28g2000vbb.googlegroups.com>
On Apr 11, 1:25 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:

> Impressive. One can sew sails while other would prepare harness for the
> horse...

So you're admitting that your preferred form of type checking would
never let a team develop a sailboat that can be drawn by horses along
a canal. IOW, that static type restrictions can severely impede
programmer creativity.
From: ·····@franz.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ed3b5390-5009-4f6d-a6b7-e9ac99e989cb@r31g2000vbp.googlegroups.com>
On Apr 11, 10:25 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Sat, 11 Apr 2009 09:53:53 -0700 (PDT), ·····@franz.com wrote:
> > This is especially
> > important when multiple programmers are working on multiple subunits
> > of a program where the types within each other's units are not even
> > known - it allows work to go on without having to stop the whole
> > project while the type system is worked out.
>
> Impressive. One can sew sails while other would prepare harness for the
> horse...

Of course.  But why limit yourself to horses?  Why not expand your
horizons and harness the craft with wings, or even a jet engine?  With
slight modification to the hull (little matters of being airtight and
ability to withstand pressure) one could use the sails to capture
winds in space that are not atmospheric.  You'd never get that far if
you were bogged down with a compiler that insisted that your sails
were made of cloth.

> > But therein lies the rub.  Those who are static typing advocates tend
> > to minimize the fact that dynamic typing doesn't catch all type
> > errors, or they claim it as a great bane on dynamic typing.  But
> > dynamic typing advocates call this a _feature_, not a bug, and we lay
> > claim to great productivity increases, both personally and in group
> > efforts, because of the lazy-detection that dynamic typing offers.
>
> I think this confuses dynamic typing and weak typing.

I've seen you state this confusion, and I'm sorry for you.  That
doesn't make the concept confusing; it's just you that are confused.
Remember, I am accepting Mr Haarman's definitions of static and
dynamic types; if you want to argue within another definitional set,
then state your definitions.

> Dynamic refers to the time of binding.

Agreed.

> A possibility to check for type errors at given time is merely a consequence of.

Please finish your sentence.  The above sentence is not complete.

> The "feature" of ignoring typing is to have typing weak, rather than strong.

Here's where you're confused.  Dynamic typing and weak typing are
orthogonal.  You've been told this before.  If you want to get
anywhere with us, you must state your basic assumptions, and we must
agree to discuss within those assumptions.

> Let me repeat it once again: an honest advocate of dynamic typing argues
> against typing, in favor of no typing.

You keep repeating again and again this incorrect mantra, with the
expectation that something will magically change.  You should fix your
expectation and figure out why we don't accept your conclusions.
Otherwise you're likely to be one frustrated boy.

> Thank you for illustrating this point.

You're welcome.  Of course, the point I illustrated (and the trap you
yet again fell into) was not the one you expected...

Duane
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49e0e4b9$0$22543$607ed4bc@cv.net>
> On Apr 11, 10:25 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> On Sat, 11 Apr 2009 09:53:53 -0700 (PDT), ·····@franz.com wrote:
>>> This is especially
>>> important when multiple programmers are working on multiple subunits
>>> of a program where the types within each other's units are not even
>>> known - it allows work to go on without having to stop the whole
>>> project while the type system is worked out.
>> Impressive. One can sew sails while other would prepare harness for the
>> horse...

You just broke Tilton's Law, perhaps the pre-eminent one:

     Solve the right problem.

If on prototype-assembly day your power team showed up looking for the 
mast and your drive train team showed up looking for horses, the problem 
is not that they did not prepare 500 page design specifications before 
picking up their tools, it is that they were not talking.

I would say "project management was not looking", but project management 
never looks.

btw, if one eschews the desperation that is pre-planning everything, you
would be surprised how few people you need to build complex systems, 
thereby eliminating a large chunk of the communication issue.

I recommend e-mail for the rest.

hth, kenny
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <6kxby9cemnps$.1cafez4w4xa02$.dlg@40tude.net>
On Sat, 11 Apr 2009 11:23:52 -0700 (PDT), ·····@franz.com wrote:

> On Apr 11, 10:25 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:

>> On Sat, 11 Apr 2009 09:53:53 -0700 (PDT), ·····@franz.com wrote:

>>> This is especially
>>> important when multiple programmers are working on multiple subunits
>>> of a program where the types within each other's units are not even
>>> known - it allows work to go on without having to stop the whole
>>> project while the type system is worked out.
>>
>> Impressive. One can sew sails while other would prepare harness for the
>> horse...
> 
> Of course.  But why limit yourself to horses?  Why not expand your
> horizons and harness the craft with wings, or even a jet engine?  With
> slight modification to the hull (little matters of being airtight and
> ability to withstand pressure) one could use the sails to capture
> winds in space that are not atmospheric.  You'd never get that far if
> you were bogged down with a compiler that insisted that your sails
> were made of cloth.

Ah, so this is your understanding of how aircraft are produced. I am
afraid we have different ideas of how and when scientific research is done
and how it differs from engineering. I can only hope that the methods of
software engineering you advocate weren't used during the design of flight
control systems.

>> A possibility to check for type errors at given time is merely a consequence of.
> 
> Please finish your sentence.

It is possible to check types at compile time because that is the time of
binding.

>> The "feature" of ignoring typing is to have typing weak, rather than strong.
> 
> Here's where you're confused.  Dynamic typing and weak typing are
> orthogonal.

I am not. I merely pointed out that the "feature" you described is
characteristic of weak typing, not of the time of binding. Weak typing can
be static or dynamic.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Mark Wooding
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <874owvatiz.fsf.mdw@metalzone.distorted.org.uk>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:

> On Sat, 11 Apr 2009 09:53:53 -0700 (PDT), ·····@franz.com wrote:
> > But therein lies the rub.  Those who are static typing advocates
> > tend to minimize the fact that dynamic typing doesn't catch all type
> > errors, or they claim it as a great bane on dynamic typing.  But
> > dynamic typing advocates call this a _feature_, not a bug, and we
> > lay claim to great productivity increases, both personally and in
> > group efforts, because of the lazy-detection that dynamic typing
> > offers.
>
> I think this confuses dynamic typing and weak typing.

I think you're the one who's confused about them.  Dynamic typing is
characteristic of Lisp; some Lisps have (optional) static typing as
well, e.g., Common Lisp's type declarations.  Scheme is strongly typed,
though: attempting to apply `cdr' to an integer, or `+' to a string
causes an exception to be raised.  However, not all Lisps are strongly
typed in this way.  Common Lisp allows you to tune the strength of the
type system to some extent using the SAFETY optimization-quality
declaration.  Interlisp seems to have been fairly cavalier about what
CAR and CDR did to objects other than CONS cells -- typically just
picking out the first two halfword fields of the object in question.

> Dynamic refers to the time of binding. A possibility to check for type
> errors at given time is merely a consequence of. The "feature" of
> ignoring typing is to have typing weak, rather than strong.

No.  As I mentioned earlier -- you don't seem very interested in learning
-- weakness and strength are qualities which can be applied to both
static and dynamic type systems.  For example, C is weakly statically
typed; Interlisp (as discussed above) is weakly dynamically typed;
Common Lisp has tunable strength of both dynamic and static typing.
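The strong/weak axis can be seen in miniature (a hedged sketch; Python stands in for a strongly, dynamically typed language, with the weakly typed quadrants noted in comments):

```python
# Strong dynamic typing: a type mismatch is signalled, never coerced.
try:
    "1" + 1
except TypeError:
    print("mismatch signalled, not coerced")

# Compare the weakly typed quadrants:
#   weak static  (C):          'A' + 1 silently treats the char as an int
#   weak dynamic (JavaScript): "1" + 1 silently yields the string "11"
```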

> Let me repeat it once again: an honest advocate of dynamic typing argues
> against typing, in favor of no typing. Thank you for illustrating this
> point.

This has been rubbish every time you've said it in the past, and it's
still rubbish now.

-- [mdw]
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090413081841.GE3826@gildor.inglorion.net>
Duane,

I think your post sums up the whole discussion rather nicely. Thanks for 
that.

I have indeed learned from this discussion that the fact that dynamic 
typing does not catch all errors can actually be considered a desirable 
property. I had not thought of that.

Back to the drawing board. Perhaps I will go for soft typing (warnings 
if the compiler cannot prove type correctness) with the possibility to 
turn the warnings into errors.
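That soft-typing policy can be sketched in a few lines of Python (purely illustrative; `soft_call` and its signature are invented here): an advisory check warns when it cannot establish type correctness but lets the program run, and the standard warnings machinery can escalate warnings into errors.

```python
import warnings

def soft_call(fn, arg, expected):
    # Soft typing: warn on a suspected type error, but run anyway.
    if not isinstance(arg, expected):
        warnings.warn("possible type error: %r is not %s"
                      % (arg, expected.__name__))
    return fn(arg)

print(soft_call(len, "abc", str))   # clean: no warning, prints 3
print(soft_call(len, "abc", list))  # warns, but still prints 3

# Turning the warnings into errors, as proposed:
# warnings.simplefilter("error") would make the second call raise.
```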

Regards,

Bob

-- 
When Marriage is Outlawed, Only Outlaws will have Inlaws.

From: Tamas K Papp
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <74c021F12o205U1@mid.individual.net>
On Sat, 11 Apr 2009 18:25:57 +0200, Robbert Haarman wrote:

> On Sat, Apr 11, 2009 at 04:06:17PM +0100, Mark Wooding wrote:
>> 
>> This is just a complete misunderstanding of what dynamic and static
>> typing mean.  The two are in fact almost orthogonal, rather than
>> mutually exclusive: the names `static' and `dynamic' are unhelpful and
>> impede understanding.
> 
> Please note that I provided my definitions of static typing and dynamic
> typing at the start of this discussion. The definitions are, in a
> nutshell,
> 
> Static typing means type errors are signalled before the program is run.
> 
> Dynamic typing means type errors are signalled while the program is
> running.
> 
> These two concepts are certainly mutually exclusive. Perhaps you do not

And they are also useless / idiotic.  Consider

(defun foo (x y)
  (+ (expt x 9) (symbol-name y)))

compiled under SBCL:

; compiling (DEFUN FOO ...)

; file: /tmp/file4Z4YAi.lisp
; in: DEFUN FOO
;     (+ (EXPT X 9) (SYMBOL-NAME Y))
; 
; note: deleting unreachable code
; 
; caught WARNING:
;   Asserted type NUMBER conflicts with derived type
;   (VALUES SIMPLE-STRING &OPTIONAL).
;   See also:
;     The SBCL Manual, Node "Handling of Types"
; 
; compilation unit finished
;   caught 1 WARNING condition
;   printed 1 note

Using your definitions, SBCL is statically typed.  Way to go.  Care to
define something else, perhaps?

> agree with my definitions, but that is a different discussion. If you
> would be so kind, if you want to use different definitions, do so in a
> different thread. This one is large enough as it is.

So once you give us bogus definitions, we are not allowed to correct
them in this thread?  Are you an elementary school teacher, who is
used to having absolute say in what is Right or Wrong, and sends
students who argue to stand in a corner?  Can I leave the thread to go
to the potty?

Tamas
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090412065346.GA3826@gildor.inglorion.net>
First off, my apologies if I stepped on anybody's toes with my earlier 
post. That was not my intention. I just felt that the discussion is
long enough as it is, so that discussing different definitions of static 
typing and dynamic typing might better be done in a different thread. 
Apparently, people don't agree, so, fine, we'll discuss them in this 
thread.

Also, various people have made remarks to the effect that I don't have 
the right to determine what definitions are in use. However, the way I 
see it, it was this snippet that sparked off the whole static vs. 
dynamic typing debate:

=== BEGIN QUOTE ===

I disagree. You are implying that dynamic typing leads to greater
productivity than static typing. I don't think this is the case.

Taking "static typing" to mean that programs that cannot be typed correctly
at compile time are rejected at compile time, whereas "dynamic typing"
means type errors lead to rejection at run-time, static typing means, by
definition, rejecting bad programs early. It seems to me this would be a
productivity gain.

=== END QUOTE ===

That snippet is indeed mine, and I have been under the impression that 
_most_ of the discussion has indeed used these meanings of "static 
typing" and "dynamic typing". However, correct me if I am wrong.

Now, back to your post, Tamas:

On Sat, Apr 11, 2009 at 04:50:09PM +0000, Tamas K Papp wrote:
> 
> And they are also useless / idiotic.  Consider
> 
> (defun foo (x y)
>   (+ (expt x 9) (symbol-name y)))
> 
> compiled under SBCL:
> 
> ; compiling (DEFUN FOO ...)
> 
> ; file: /tmp/file4Z4YAi.lisp
> ; in: DEFUN FOO
> ;     (+ (EXPT X 9) (SYMBOL-NAME Y))
> ; 
> ; note: deleting unreachable code
> ; 
> ; caught WARNING:
> ;   Asserted type NUMBER conflicts with derived type
> ;   (VALUES SIMPLE-STRING &OPTIONAL).
> ;   See also:
> ;     The SBCL Manual, Node "Handling of Types"
> ; 
> ; compilation unit finished
> ;   caught 1 WARNING condition
> ;   printed 1 note
> 
> Using your definitions, SBCL is statically typed.  Way to go.  Care to
> define something else, perhaps?

Not the way I meant my definitions. Note that the program is not 
actually _rejected_. You can still run it if you want. Static typing (at 
least the way I mean it!) would not allow this.

What you are seeing here is what is called "soft typing". If the 
compiler thinks there is a type error, it emits a warning, but otherwise 
continues to compile the program.

Regards,

Bob

-- 
Trust in God, and tie your camel

	-- Old Arab proverb.

From: Tamas K Papp
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <74e1gtF13a2vsU1@mid.individual.net>
On Sun, 12 Apr 2009 08:53:46 +0200, Robbert Haarman wrote:

> On Sat, Apr 11, 2009 at 04:50:09PM +0000, Tamas K Papp wrote:
>> 
>> (defun foo (x y)
>>   (+ (expt x 9) (symbol-name y)))
>> 
>> compiled under SBCL:
>> 
>> ; compiling (DEFUN FOO ...)
>> 
>> ; file: /tmp/file4Z4YAi.lisp
>> ; in: DEFUN FOO
>> ;     (+ (EXPT X 9) (SYMBOL-NAME Y))
>> ;
>> ; note: deleting unreachable code
>> ;
>> ; caught WARNING:
>> ;   Asserted type NUMBER conflicts with derived type
>> ;   (VALUES SIMPLE-STRING &OPTIONAL).
>> ;   See also:
>> ;     The SBCL Manual, Node "Handling of Types"
>> ;
>> ; compilation unit finished
>> ;   caught 1 WARNING condition
>> ;   printed 1 note
>> 
>> Using your definitions, SBCL is statically typed.  Way to go.  Care to
>> define something else, perhaps?
> 
> Not the way I meant my definitions. Note that the program is not
> actually _rejected_. You can still run it if you want. Static typing (at
> least the way I mean it!) would not allow this.

But earlier you said the following:

"Static typing means type errors are signalled before the program is run."

and the SBCL example satisfies this.  Now you realize that your
definition is stupid, so you change it without admitting that it is.
And of course nobody else is allowed to use another definition, but
you can change yours at whim.  It is pointless to argue with you.

> What you are seeing here is what is called "soft typing". If the

What I am seeing here is that you don't have a clue about these
concepts.

Tamas
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090413094054.GF3826@gildor.inglorion.net>
Tamas,

> But earlier you said the following:
> 
> "Static typing means type errors are signalled before the program is run."
> 
> and the SBCL example satisfies this. Now you realize that your 
> definition is stupid, so you change it without admitting that it is.

No. I realized that my intended meaning of "signalled" did not make it 
across to you, so I clarified it. I apologize if I did not express 
myself clearly enough earlier, but I resent you calling my definitions 
"stupid" and saying that I "don't have a clue". There is really no need 
for that.

> And of course nobody else is allowed to use another definition, but
> you can change yours at whim.

Another misunderstanding, and one I have already attempted to clear up. 
It has never been my intention to dictate what definitions people can or 
cannot use. And it has never been my intention to change my definitions.

> It is pointless to argue with you.

If that is what you believe, I would rather you stopped arguing instead 
of insulting me. I have rather enjoyed the intellectual discussion so 
far, and learned a thing or two besides. Let's continue in that spirit.

> > What you are seeing here is what is called "soft typing". If the
> 
> What I am seeing here is that you don't have a clue about these
> concepts.

Of course, I have my own ideas about that. ;-)

Anyway, thanks for pointing out the deficiencies in the wording of my 
definitions. I will try to word things more carefully next time, to 
avoid misunderstandings in the future.

*offers hand*

Regards,

Bob

-- 
"What if this weren't a hypothetical question?"

From: Mark Wooding
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <87d4bjay7e.fsf.mdw@metalzone.distorted.org.uk>
Robbert Haarman <··············@inglorion.net> writes:

> Please note that I provided my definitions of static typing and dynamic 
> typing at the start of this discussion. 

My newsreader tells me that Ray Dillinger started this discussion.  This
thread is crossposted to three newsgroups, and seems to be relevant to
all of them; none of them is moderated.  I'm at a loss to see why you
consider yourself to be especially privileged in further guiding the
discussion.

> The definitions are, in a nutshell,
>
> Static typing means type errors are signalled before the program is run.
>
> Dynamic typing means type errors are signalled while the program is 
> running.
>
> These two concepts are certainly mutually exclusive.

Some languages -- those, such as Java and Common Lisp, which provide
both typing disciplines -- can report type errors at both compile- and
run-time.  These counterexamples disprove your claim of mutual exclusion
and leave your definitions meaningless.

What's missing, of course, is that the word `type' doesn't mean quite
the same thing in static and dynamic type systems, and static types
don't apply to the same things as dynamic types.  As I explained --
relatively clearly, I hope.

> Perhaps you do not agree with my definitions, but that is a different
> discussion. If you would be so kind, if you want to use different
> definitions, do so in a different thread. This one is large enough as
> it is.

Since your definitions are defective and misleading, we shall have to go
elsewhere for some replacements.  Any ideas?

-- [mdw]
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090421122030.218@gmail.com>
On 2009-04-11, Robbert Haarman <··············@inglorion.net> wrote:
> On Sat, Apr 11, 2009 at 04:06:17PM +0100, Mark Wooding wrote:
>> 
>> This is just a complete misunderstanding of what dynamic and static
>> typing mean.  The two are in fact almost orthogonal, rather than
>> mutually exclusive: the names `static' and `dynamic' are unhelpful and
>> impede understanding.
>
> Please note that I provided my definitions of static typing and dynamic 
> typing at the start of this discussion. The definitions are, in a 
> nutshell,
>
> Static typing means type errors are signalled before the program is run.
>
> Dynamic typing means type errors are signalled while the program is 
> running.

So if no errors are signaled because the program is correct, then
there is neither static nor dynamic typing.

What if the compiler for a static language produces an executable in all
cases? If there are type errors, the ``a.out'' executable, when run, produces
the message ``this program is aborting due to type errors''.

Since the errors are signaled at run time, it's dynamic typing
by your definition.

> These two concepts are certainly mutually exclusive.

They are in no way exclusive.

> Perhaps you do not 
> agree with my definitions, but that is a different discussion.

Computer science does not agree with your definitions. 
Go take it up with the reams of literature. (Which, admittedly,
is a ploy to get you to read some of it).
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090413121404.GH3826@gildor.inglorion.net>
On Sun, Apr 12, 2009 at 04:40:47AM +0000, Kaz Kylheku wrote:
> On 2009-04-11, Robbert Haarman <··············@inglorion.net> wrote:
> >
> > Static typing means type errors are signalled before the program is run.
> >
> > Dynamic typing means type errors are signalled while the program is 
> > running.

Since these phrasings seem to have caused some confusion, let me try 
to spell them out to clarify what I mean.

By "static typing", I mean that type checking is done without running 
the program, and programs that don't pass the type checker are not 
allowed to run.

By "dynamic typing", I mean that the type checker does not prevent the 
program from running until an ill-typed operation is actually attempted.

A little example to further clarify. Suppose / denotes a division 
operation, which is defined for integers but not for strings. The 
following program:

// choice contains "y" or "n", depending on user input
if(choice == "y") {
  12 / 3;
} else {
  12 / "foo";
}

Under static typing, this program would not be allowed to run, because 
`12 / "foo"` is not well-typed.

Under dynamic typing, this program would be allowed to run, but give an 
error if 'choice' were ever not "y".

> So if no errors are signaled because the program is correct, then
> there is neither static nor dynamic typing.

In that case, the difference between static typing and dynamic typing is 
irrelevant. The behavior is exactly the same, no matter which one of the 
two you have.

> What if the compiler for a static language produces an executable in all
> cases? If there are type errors, the ``a.out'' executable, when run, produces
> the message ``this program is aborting due to type errors''.
> 
> Since the errors are signaled at run time, it's dynamic typing
> by your definition.

That is a case I had not considered. It is neither what I would think of 
as static typing, nor what I would think of as dynamic typing. I hope 
that is also clear from the spelled out definitions above.

> > These two concepts are certainly mutually exclusive.
> 
> They are in no way exclusive.

Could you provide a counter-example to prove your point? A way of 
handling type errors that is both static typing and dynamic typing?

> > Perhaps you do not 
> > agree with my definitions, but that is a different discussion.
> 
> Computer science does not agree with your definitions. 
> Go take it up with the reams of literature. (Which, admittedly,
> is a ploy to get you to read some of it).

I know there are many different definitions in use, which is why I stated 
my definitions when I joined the discussion. My definitions are my 
attempt to cut to the heart of the matter while leaving out concepts 
that may or may not exist in a particular language.

If there is some sort of consensus in computer science that differs from 
what I have presented, I would like to know about it. As it is, at least 
the Wikipedia article on type systems seems to employ definitions that 
are very similar to mine:

> Static typing

> A programming language is said to use static typing when type checking 
> is performed during compile-time as opposed to run-time.

> Dynamic typing

> A programming language is said to be dynamically typed, or just 
> 'dynamic', when the majority of its type checking is performed at 
> run-time as opposed to at compile-time.

Source: 
http://en.wikipedia.org/w/index.php?title=Type_system&oldid=282920211#Type_checking

Regards,

Bob

-- 
You'll get my inches, miles and gallons when you pry them from my cold
dead feet!

	-- Seen on Slashdot

From: ·····@franz.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <5a39956c-ee6e-426d-9afe-e640aaea90fa@q33g2000pra.googlegroups.com>
On Apr 13, 5:14 am, Robbert Haarman <··············@inglorion.net>
wrote:
> On Sun, Apr 12, 2009 at 04:40:47AM +0000, Kaz Kylheku wrote:
> > On 2009-04-11, Robbert Haarman <··············@inglorion.net> wrote:
>
> > > Static typing means type errors are signalled before the program is run.
>
> > > Dynamic typing means type errors are signalled while the program is
> > > running.
>
> Since these phrasings seem to have caused some confusion, let me try
> to spell them out to clarify what I mean.

There will always be confusion, because each discipline's language
puts a slight twist on how a person looking at your definitions judges
whether they are true or not.  Sometimes it is not even obvious to the
observer that they are taking your definitions and modifying them as
they read, folding the definitions into their own understanding based
on their preferred discipline.

> By "static typing", I mean that type checking is done without running
> the program, and programs that don't pass the type checker are not
> allowed to run.

And a static typing enthusiast will accept that definition as-is, but
a dynamic typing enthusiast will automatically transform that to "are
not allowed to run without a warning", because to a dynamic typing
enthusiast, it is ludicrous to disallow the running of a program.

> By "dynamic typing", I mean that the type checker does not prevent the
> program from running until an ill-typed operation is actually attempted.

One thing that gives Lisp enthusiasts their enthusiasm is the idea of
"correct and continue" style of programming.  It makes no sense to let
a program run which is going to give you a wrong answer, so if a
language's runtime allows these "preventions" to interrupt the program
flow, require intervention from the programmer, and then allow the
program to continue on when the error has been corrected, then it does
make sense to stop the program when a dynamic type error is detected.
These are called "continuable errors" in Common Lisp, and they are
what allows dynamic typing to be practical; the entire Lisp
environment is available at error-time to examine, analyze, and
correct the error, and to then continue from that error as if the
error condition had not been seen (it also then allows other kinds of
errors to be caught/corrected/continued-from without having to
shoehorn them into some kind of type lattice, and it makes the
programming effort much more natural).  If this mechanism had not been
available, it would have forced the program to be run from the
beginning every time, and that would make dynamic-typing less
desirable.

Furthermore, even in the absence of continuability (i.e. the ability
to correct an error and to continue from that point) at least some CL
implementations give the programmer the ability to examine the stack
and to either restart that frame, or to return a value from that
frame, so as to provide either a way to restart a modular section of
code or to pretend that that section of code had worked properly.  As
stated before, the goal is to help the programmer to continue on his
train of thought without letting the mechanism of error detection get
in the way of his productivity.

For the sake of our friend Dmitry, and if you also want to be
complete, you would also have to include a definition for "no
typing" (or weak typing) where instead of being caught at compile-time
or run-time, type errors may _not_ be caught.  I think it should be
obvious to both static typing enthusiasts and dynamic typing
enthusiasts that the no-typing paradigm is not a good one, but it must
be mentioned because often static-typing enthusiasts mistake dynamic-
typing for no-typing.

Duane
From: Pillsy
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <998adfc2-7059-4c11-b663-7a0c25afe345@p11g2000yqe.googlegroups.com>
On Apr 13, 1:09 pm, ·····@franz.com wrote:
[...]
> I think it should be obvious to both static typing enthusiasts and
> dynamic typing enthusiasts that the no-typing paradigm is not a good
> one, but it must be mentioned because often static-typing enthusiasts
> mistake dynamic-typing for no-typing.

Well, I'm a dynamic-typing enthusiast, and I'm not sure that that's
obvious to me, although that may be because it's not clear to me where
the boundary between "no-typing" and "weak-typing" (whether static or
dynamic) is. I can at least see a case for the idea that adding the
integer 1 to the string "2" ought to return the string "3".

There are a lot of problem domains out there, and in some of them,
it's a win to let people write sloppy programs that just sort of do
the right thing (or at least some arguably right thing) in situations
like that.

Cheers,
Pillsy
From: ·····@franz.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <0dbbd9aa-8205-4f62-a69a-6e78153b3be2@v35g2000pro.googlegroups.com>
On Apr 13, 10:31 am, Pillsy <·········@gmail.com> wrote:
> On Apr 13, 1:09 pm, ·····@franz.com wrote:
> [...]
>
> > I think it should be obvious to both static typing enthusiasts and
> > dynamic typing enthusiasts that the no-typing paradigm is not a good
> > one, but it must be mentioned because often static-typing enthusiasts
> > mistake dynamic-typing for no-typing.
>
> Well, I'm a dynamic-typing enthusiast, and I'm not sure that that's
> obvious to me, although that may be because it's not clear to me where
> the boundary between "no-typing" and "weak-typing" (whether static or
> dynamic) is. I can at least see a case for the idea that adding the
> integer 1 to the string "2" ought to return the string "3".

If you thought about it a little deeper, though, you wouldn't really
want that idea to be automatic.  In a language where a + operator can
be thus overloaded, a program could indeed provide that concept of
casting a string into an integer, either with static or dynamic
typing.  But what would stop a language from interpreting the above
addition and presenting "12" or "21" as results instead?  Or, in fact,
the more likely case in a truly weak-typed language is to interpret
"2" as a pointer to a memory location, and thus 1+"2" would likely
return a pointer to some unrelated value - in C it might be the
terminating null character, or if the addressing were word-oriented it
might be the next string in the table that holds the "2" string.  I
hope you agree that you wouldn't want any of those situations to occur
without your express knowledge.  Giving the language the express
permission to perform this kind of overloading is definitely not
incompatible with dynamic typing paradigms, and allowing the language
to select the default behavior of something undefined like your
example would be more of a case of weak- or no-typing.

> There are a lot of problem domains out there, and in some of them,
> it's a win to let people write sloppy programs that just sort of do
> the right thing (or at least some arguably right thing) in situations
> like that.

I prefer to make a distinction between "sloppy" programming and "lazy"
programming.  A good programmer is a lazy programmer, but a sloppy
programmer very seldom writes good programs.  The difference is that a
sloppy programmer says to himself "what can I get away with not having
to do, so that I have to think as little as possible?" but the lazy
programmer says "what can I put off until later so I don't have to
think about the whole problem all at once?".  The difference is in the
attitude; the lazy programmer intends to get back to the rest of the
program when it becomes easier to think about that aspect of the
program.  Of course, the antithesis of both of these kinds of
programmers on one end, is the perfectionist, whose idea it is that a
program must be perfect or else it is not worth running; the
perfectionist can't understand how a lazy programmer can think the way
he does, because after all, the less-than-perfect program is
broken...  But the perfectionist also tends to be less productive than
the lazy programmer, because he has to spend so much time thinking
about how his program must be perfect that it is very hard for him to
even get started writing his program.  Any programmer is going to
start with an idea of what he wants to do, and is going to want the
program to indeed do what he wanted it to do, but in thinking about a
very complex problem set it sometimes becomes much too large for him
to hold in his mind conceptually, and so he starts figuring out ways
to compartmentalize the problem so as to be able to concentrate on
thinking about more manageable parts of the program at a time.  If his
language allows him to do this, he can get started right away,
provided he intends to come back to the other parts of the problem
before he decides his programming effort is finished.

So I would agree with your last paragraph if you substituted "lazy"
for "sloppy", but then that statement would not contradict my earlier
statements; dynamic typing supports lazy programming, but strong
dynamic typing does not support sloppy programming well, because it
does eventually tend to catch the sloppiness (though how much pain is
invoked depends on where the errors are actually caught; if at a user
site, the pain is much greater than at the development site, after
good testing has been done).

Duane
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <36d9117a-492f-4eba-b258-a440abf11a68@v15g2000yqn.googlegroups.com>
Well, I think I can add my two kopeykas.
CL has some disgusting features.
Here is one of them:

(defstruct struct1 field1 field2)

(defvar x (make-struct1 :field1 1 :field2 2))
(+ (struct1-field1 x) (struct1-field2 x))

Note that I need to type struct1 three times
in the code that uses the type.

No, don't tell me to use :conc-name nil.
Because next line might be
(defstruct struct2 field2 field3)

So, :conc-name nil is not scalable.

Also, don't tell me to use defclass; defclass
forces me to use method dispatch here,
and that is one reason CL is indeed slower
than C++.

Compare this to C++

struct struct1 {
  int field1;
  int field2;
};

int main(...) {
  struct1 x;
  return x.field1 + x.field2;
}

Here I need to mention struct1 only once,
as the compiler infers the type from the
variable declaration.

This code is both terse and fast.
Slot access is static; there is no need to
repeat the type name at every slot access.

I consider this unnecessary repetition of
the type name in CL a serious disadvantage.
Naming in large projects becomes hard.
In object-oriented languages, where
objects create a namespace, names can
be short, as they are already classified
by the class name. In CL, namespaces are
large, and one has to qualify names manually.
So names tend to be long and code gets
even more verbose. Verbose code is harder
to read, as there is too much unnecessary
information in it. And finally it is harder
to develop. I think this is a main reason why
CL libraries are so weak despite the wonderful
language design and wonderful macro
expressiveness.

This is an example of a situation where
static typing improves code readability.
In fact, the design of CL almost allows
for that style. If we have a code walker
and are able to process type declarations,
and if we use implementation-dependent
reflection on structure types, and if we
use some special syntax extensions, we can
in fact write something like this:

(def-typed-var y struct1
  (make-struct1 :field1 1 :field2 2))

(+ y^field1 y^field2)

But we need to collect too many pieces
together, which I still haven't found.
E.g. I don't know yet how to get a list
of structure slot accessors in a portable
way. So, again and again, CL needs to be
improved before it becomes competitive
with modern languages. Lisp has great ideas,
but CL prevents many (or most) of them
from showing their full potential. Either
Lisp changes and takes best practices from
other languages, or it will continue to fade away.
From: Pillsy
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ae127789-ea25-4de6-bc47-7d2cb8ce0b15@v28g2000vbb.googlegroups.com>
On Apr 13, 4:01 pm, budden <···········@mail.ru> wrote:

> Well, I think I can add my two kopeykas.
> CL has some disguisting features.
> Here is one of them:

> (defstruct struct1 field1 field2)

> (defvar x (make-struct1 :field1 1 :field2 2))
> (+ (struct1-field1 x) (struct1-field2 x))

> Note I need to type struct1 three times in
> the code which uses the type.

Um, why not just use

(with-accessors ((f1 struct1-field1) (f2 struct1-field2)) x
  (+ f1 f2))

Doesn't that do what you want? I'm not 100% sure that it's truly
conforming based on my reading of the spec, but I'd be really
surprised if it didn't work in an actual implementation.

Heck, if it doesn't work right, or it's still too prolix for you, write
your own WITH-STRUCT macro that does what you want. It should take you
all of five minutes to make

(with-struct (field1 field2) (x struct1)
  (+ field1 field2))

work, and you sure don't need to do any code walking!
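
For what it's worth, here is a minimal sketch of such a WITH-STRUCT
macro (a hypothetical name, as above). It assumes the struct was
defined with DEFSTRUCT's default :conc-name, so accessors are named
TYPE-SLOT:

```lisp
;; Minimal sketch of a WITH-STRUCT macro.  It relies only on
;; DEFSTRUCT's default naming convention, where a struct named
;; STRUCT1 with slot FIELD1 gets the accessor STRUCT1-FIELD1,
;; so no code walking or introspection is needed.
(defmacro with-struct ((&rest slots) (var type) &body body)
  `(symbol-macrolet
       ,(loop for slot in slots
              collect `(,slot (,(intern (format nil "~A-~A"
                                                (symbol-name type)
                                                (symbol-name slot)))
                               ,var)))
     ,@body))

;; Usage:
;; (defstruct struct1 field1 field2)
;; (let ((x (make-struct1 :field1 1 :field2 2)))
;;   (with-struct (field1 field2) (x struct1)
;;     (+ field1 field2)))   ; => 3
```

Since SYMBOL-MACROLET expansions are SETF-able, (setf field1 10) inside
the body works too.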

Cheers,
Pillsy
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <2fabbf26-fbc8-4c7a-b548-b9d7560cc008@e18g2000yqo.googlegroups.com>
> Heck, it doesn't work right, or it's still too prolix for
> you, write your own WITH-STRUCT macro that does what you
> want. It should take you all of five minutes to make

(with-struct (field1 field2) (x struct1)
  (+ field1 field2))

Now I need to mention field1 and field2 an extra time, and
I get another level of nesting. I gained nothing.

If it was at least

(with-struct (x struct1)
  (+ field1 field2))

That would be a gain (if we ignore the unnecessary nesting level).
But it is impossible to do portably, AFAIK. And what if
I have two instances of struct1? Say, I have

(defstruct point x y)

and want to subtract two points to get a vector.
In C, this would be something like

vector *subtract_points(point *a,point *b) {
  return make_vector(a->x - b->x, a->y - b->y);
}
From: Pillsy
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <c6fcbaeb-6162-4d2a-a9ab-17c49728df09@u8g2000yqn.googlegroups.com>
On Apr 14, 2:58 am, budden <···········@mail.ru> wrote:
[...]
> In a C, this would be like

> vector *subtract_points(point *a,point *b) {
>   return make_vector(a->x - b->x, a->y - b->y);
> }

In Lisp it would be

(defun subtract-points (a b)
  (vector (- (point-x a) (point-x b))
          (- (point-y a) (point-y b))))

Sure, you effectively note the types of a and b twice, but then again,
you aren't redundantly saying that you're returning a vector twice, or
for that matter redundantly having to note that you're returning a
value at all.

It looks like a wash to me.

Cheers,
Pillsy
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <da9d1904-5cb0-4075-92dc-b7469f04041c@c9g2000yqm.googlegroups.com>
> (defun subtract-points (a b)
>   (vector (- (point-x a) (point-x b))
>           (- (point-y a) (point-y b))))

> Sure, you effectively note the types of a and b
> twice, but then again, you aren't redundantly saying
> that you're returning a vector twice, or
> for that matter redundantly having to note that
> you're returning a value at all.
"Everything is an expression" and progn semantics are an irrelevant
question here. We're not comparing C and Lisp; we are comparing
static and dynamic typing. One can easily imagine a typed Lisp,
where the function would look like
(defun subtract-points (a b)
   (declare (point a b))
   (vector (- a.x b.x)
           (- a.y b.y)))

In this (very trivial) example we have reduced
the occurrences of "point" from 4 to 2 (or even 1).
Real-life code is not so trivial, and it is likely
that more repetitions of "point" would be removed.

Also, C code does not check the type of
(point-x a), so the C code is inherently faster.
If we want equally fast code in Lisp, we need
(at least)

(defun subtract-points (a b)
  (declare (point a b))
  (vector (- (point-x a) (point-x b))
          (- (point-y a) (point-y b))))

Or maybe

(declaim (ftype (function (point point) vector)
                subtract-points))
(defun subtract-points (a b)
  (vector (- (point-x a) (point-x b))
          (- (point-y a) (point-y b))))

A good compiler could then optimize out
the four type checks (a C compiler would not
even try to check types on member access).
But the redundancy is now very significant
compared to C.

In C++, we would also remove one more
occurrence of the word "point". Something
like this:

vector *point::subtract(point *b) {
  return new vector(x - b->x, y - b->y);
}

There is no need to mention "point" when we
use the function. Names of members
are implicitly qualified by the class name.
This is not the case in Lisp, so scalability
often requires putting a qualifier in the
function name itself. E.g. consider the CL
function copy-readtable. It could be readtable::copy,
and once it is known that the object is a readtable,
we do not need such a long name. Or gethash,
which could be hash::get or even hash::().

Also, we have a namespace inside the method
body, where we can refer to this->x and
this->y without a qualifier. So the
code gets even terser.

You'd say we have multiple-argument type
dispatch in Lisp. But again, this is
irrelevant. One can easily imagine another
C++-like language where multiple dispatch is
implemented. We're not talking about languages,
but about concepts.

Happily, it looks like one can incorporate
true static typing into Lisp rather easily,
and it would combine smoothly with dynamic
typing and with the dynamic nature of the Lisp
environment. But some work needs to be done.
I've heard occasionally about projects that did
that (and added type inference to Lisp),
but I have since lost the links.
From: ·····@sherbrookeconsulting.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <a770add6-3d7d-4c56-acb4-ad1e39e854a4@l5g2000vbc.googlegroups.com>
On Apr 14, 3:59 pm, budden <···········@mail.ru> wrote:

> > (defun subtract-points (a b)
> >   (vector (- (point-x a) (point-x b))
> >           (- (point-y a) (point-y b))))
> > Sure, you effectively note the types of a and b
> > twice, but then again, you aren't redundantly saying
> > that you're returning a vector twice, or
> > for that matter redundantly having to note that
> > you're returning a value at all.

> "Everything is expression" and progn semantics is an irrelevant
> question here. We're not comparing C and lisp, we compare
> static and dynamic typing. One can easily imagine typed lisp,
> where the function would look like

> (defun subtract-points (a b)
>    (declare (point a b))
>    (vector (- a.x b.x)
>            (- a.y b.y)))

One can also very easily imagine doing that with a macro like so:

(defun subtract-points (a b)
  (with-cish-struct-syntax (point (a b))
     (vector (- a.x b.x) (- a.y b.y))))

I think it's kind of silly and distasteful, but it's not hard at all.
The macro can also contain the type declaration for speed purposes.

You want more convenient syntax based on information available at
compile time. Macros are your friend!
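
WITH-CISH-STRUCT-SYNTAX is hypothetical, but a rough sketch is easy
enough. The version below walks the body as a plain tree and rewrites
VAR.SLOT symbols into accessor calls, assuming DEFSTRUCT's default
accessor naming; as a sketch it ignores quoted data and package
subtleties:

```lisp
;; Rough sketch of a WITH-CISH-STRUCT-SYNTAX macro (hypothetical,
;; as above).  Any symbol of the form VAR.SLOT, where VAR is one of
;; the listed variables, is rewritten into the accessor call
;; (TYPE-SLOT VAR), relying on DEFSTRUCT's default accessor naming.
;; Caveats: it does not respect QUOTE, and it interns into the
;; current package.
(defmacro with-cish-struct-syntax ((type (&rest vars)) &body body)
  (labels ((rewrite (form)
             (cond ((consp form)
                    (cons (rewrite (car form)) (rewrite (cdr form))))
                   ((and (symbolp form)
                         (position #\. (symbol-name form)))
                    (let* ((name (symbol-name form))
                           (dot (position #\. name))
                           (var (intern (subseq name 0 dot)))
                           (slot (subseq name (1+ dot))))
                      (if (member var vars)
                          `(,(intern (format nil "~A-~A"
                                             (symbol-name type) slot))
                            ,var)
                          form)))
                   (t form))))
    `(progn ,@(mapcar #'rewrite body))))

;; Usage (assuming (defstruct point x y)):
;; (with-cish-struct-syntax (point (a b))
;;   (vector (- a.x b.x) (- a.y b.y)))
```

Note this works because a.x is a perfectly ordinary symbol to the Lisp
reader; no reader macros are needed.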

> In this (very trivial) example we have reduced
> occurances of "point" from 4 to 2 (or even 1).
> Real-life code is not so trivial and it is likely
> that more repetitions of "point" will be removed.

But once you have a non-trivial example where you've got a lot of
repetitions to eliminate, the objection to just using bog-standard
WITH-ACCESSORS vanishes. It's only in the really simple cases that the
difference is notable.

And yes, I admit that C syntax makes dealing with structs a bit easier
than Lisp syntax. Structs are really C's bread and butter, so it makes
sense for the language to make working with them very convenient
syntactically. They aren't nearly so used in CL; most of the time I'll
use a CLOS instance or conses instead, and I know I'm far from alone
in that.
[...]
> Or maybe

> (declaim (ftype (function (point point) vector)
>          subtract-points))
> (defun subtract-points (a b)
>   (vector (- (point-x a) (point-x b))
>           (- (point-y a) (point-y b)))

> Good compiler than could optimize out
> four type checks (any C compiler would not
> even try to check types on member access).

A good Lisp compiler could *infer* those types and generate checks
(for higher-safety, lower-speed optimization settings) or eliminate
the checks (for higher-speed, lower-safety settings). Indeed, I think
SBCL and CMUCL will actually do that inference with the proper options
set.

Cheers,
Pillsy
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <545fb807-ca85-4694-81a9-9f5f4abf77a5@k38g2000yqh.googlegroups.com>
Hi Pillsy,

> One can also very easily imagine doing that with a macro like so:
>
> (defun subtract-points (a b)
>   (with-cish-struct-syntax (point (a b))
>      (vector (- a.x b.x) (a.x b.x))))
>
> I think it's kind of silly and distasteful, but it's not hard at all.
> The macro can also contain the type declaration for speed purposes.
It is not that easy, as I know of no way to list all the slot
accessors of a struct.
Using slot-value is around 6 times slower.

I have done this by redefining defstruct, and then in some other way
(I don't remember),
but it is potentially unportable and, really, it looks like bad
style.

http://tinyurl.com/dflzxz

> You want more convenient syntax based on information available at
> compile time. Macros are your friend!
No, macros are not sufficient. To improve language expressiveness
based on type information, I need a way to access the type information
available to the compiler. I have seen the parse-declaration project
for that.
Also, I need to change some naming conventions. There is no reason
to have gethash when you can have just get. It can even be in Lispy
syntax if we have type inference:
(let ((h (hash-table::make)))
  (setf (get h key) value))

It is very important there that get is not
a generic function, but a function which
belongs to the hash-table namespace. The substitution from
cl:get to hash-table:get is done at compile time.
So the congruent-lambda-list limitation is gone! When one
needs generic access, one can specify cl:get or whatever
explicitly.
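The naming half of this can be approximated today with an ordinary
package that shadows CL:GET (a sketch with a hypothetical package
name; it does not give the compile-time cl:get -> hash-table:get
substitution, so the prefix is still required, which is exactly the
remaining problem):

```lisp
(defpackage :hash-table*            ; hypothetical name
  (:use :cl)
  (:shadow #:get)
  (:export #:make #:get))

(in-package :hash-table*)

(defun make (&rest args)
  (apply #'make-hash-table args))

(defun get (table key &optional default)
  "Like GETHASH, but with the collection first."
  (gethash key table default))

(defun (setf get) (value table key)
  (setf (gethash key table) value))
```

Usage: (let ((h (hash-table*:make))) (setf (hash-table*:get h :k) 1)
(hash-table*:get h :k)).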

This will not be CL then. It is also
not too easy to implement. Printed code will
differ from source code, and this is bad. So,
in fact, I see no smooth way to incorporate
type-based improvements into CL.

> A good Lisp compiler could *infer* those types and
> generate checks (for higher-safety, lower-speed
> optimization settings) or eliminate the checks
> (for higher-speed, lower-safety settings).
Likely. Maybe I was wrong. But again, we mention point
4 times instead of 2 and want the compiler to be smart.
Instead of that, C requires 2 mentions of "point" and the
compiler is simpler.

> And yes, I admit that C syntax makes dealing with structs a bit easier
> than Lisp syntax.
Nice! :)

> They aren't nearly so used in CL; most of the time I'll
> use a CLOS instance or conses instead
I too. But if structures were more convenient, I'm sure they
would be much more popular. Get rid of the type prefix, allow
methods qualified by a structure type, and allow for local structure
types (like local functions), and you could easily make Lisp
outperform C++ in terms of speed, convenience and flexibility, as
you would have macros, generics and inheritance. CLOS is rather
slow, and AFAIK it is designed to be rather slow. Also, structures
have some advantages compared to CLOS classes:
- structure definition is much simpler
- there is a default constructor, printer and
reader syntax
From: ·····@sherbrookeconsulting.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <afff4fa4-bce9-4d32-bdf3-e929f677d01e@f18g2000vbf.googlegroups.com>
On Apr 15, 3:00 am, budden <···········@mail.ru> wrote:

> Hi Pillsy,

> > One can also very easily imagine doing that with a macro like so:

> > (defun subtract-points (a b)
> >   (with-cish-struct-syntax (point (a b))
> >      (vector (- a.x b.x) (- a.y b.y))))

> > I think it's kind of silly and distasteful, but it's not hard at all.
> > The macro can also contain the type declaration for speed purposes.

> It is not that easy, as I know no way to list all slot accessors of a
> struct.

So what? Just find the symbols that look like "a.x" and replace them
with "(point-x a)". It's a somewhat leaky abstraction, but life is
full of them.
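A sketch of such a macro (hypothetical; it blindly rewrites any
symbol containing a dot whose prefix is one of the listed variables,
even in quoted data, which is where the leaks come from):

```lisp
(defmacro with-cish-struct-syntax ((type vars) &body body)
  "Rewrite symbols like A.X into (TYPE-X A) when A is in VARS."
  (labels ((rewrite (form)
             (cond ((consp form) (mapcar #'rewrite form))
                   ((and (symbolp form)
                         (find #\. (symbol-name form)))
                    (let* ((name (symbol-name form))
                           (dot  (position #\. name))
                           (var  (intern (subseq name 0 dot)))
                           (slot (subseq name (1+ dot))))
                      (if (member var vars)
                          ;; assumes the default DEFSTRUCT conc-name
                          (list (intern (format nil "~A-~A" type slot))
                                var)
                          form)))
                   (t form))))
    `(progn ,@(mapcar #'rewrite body))))
```

So (with-cish-struct-syntax (point (a b)) (- a.x b.x)) expands to
(- (POINT-X A) (POINT-X B)), with no introspection needed.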

If you want to get closer to perfection, go ahead and make a DEFINE-
STRUCTURE macro that stores all the information you need in the
property list for the symbol POINT.

Or just make a WITH-POINT or BIND-POINT macro on an ad-hoc basis.
That's actually what I do, and it has the advantages of being super
easy and much more flexible when you decide you want POINT to be a
cons or a CLOS class instead of a struct.
[...]
> I have done this by redefining defstruct, and then some other way
> (don't remember), but it is potentially unportable and really,
> it looks like a bad style.

There's nothing wrong with using your own macros in preference to the
standard ones. I think it's a good idea to change the name where
possible (as in DEFINE-STRUCTURE above, or maybe use DEFSTRUCT*), like...

> http://tinyurl.com/dflzxz

...um, wait. You have a solution. Why not just, well, use it?

> > You want more convenient syntax based on information available at
> > compile time. Macros are your friend!

> No, macros are not sufficient. To improve language expressiveness
> based on type information, I need a way to access type information
> available to compiler.

But you don't need the type information to do what you want to do
here. Dot syntax is used in all kinds of languages where the type
information isn't available at compile time.

> Also I need change some naming conventions. There is no reason
> to have gethash when you can have just get.
[...]
> It is very important there that get is not a generic function,
> but a function which belongs to hash-table namespace.

Um, wait, *why* is it very important that GET not be a generic
function?
[...]
> > A good Lisp compiler could *infer* those types and
> > generate checks (for higher-safety, lower-speed
> > optimization settings) or eliminate the chekcs
> > (for higher-speed, lower-safety settings).

> Likely. Maybe I was wrong. But again, we mention point
> 4 times instead of 2 times and want compiler to be smart.

Not smarter than actual Lisp compilers are, though.
[...]
> > They aren't nearly so used in CL; most of the time I'll
> > use a CLOS instance or conses instead

> I too. But if structures were more convinient, I'm sure they
> were much more popular.

I kind of doubt it, given how much less flexible than CLOS classes
they are, and how modest the benefits they offer over ad-hoc data
structures built out of conses are.

> Get rid of type prefix,

Not hard now. I know you're worried about namespacing issues, but you
have other ways of dealing with those as it is.

> allow methods qualified by a structure type

Make smaller packages. Really, that's all you need to do.

> and allow for local structure types (like local functions)

Easily done with a macro; you just give the structure a gensymmed name
and then use FLET and INLINE to make sure the accessors have all the
names you want.
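Spelled out by hand, the trick looks something like this (%PT stands
in for the gensymmed global name; a real macro would generate all of
it, and MACROLET plays the role of the inlined FLET):

```lisp
;; The struct lives at top level under a name nobody else uses...
(defstruct (%pt (:conc-name %pt-)) x y)

;; ...and only the local accessor names are visible where you want them.
(macrolet ((pt-x (p) `(%pt-x ,p))
           (pt-y (p) `(%pt-y ,p)))
  (let ((p (make-%pt :x 3 :y 4)))
    (list (pt-x p) (pt-y p))))   ; => (3 4)
```

The local accessor compiles straight down to the struct accessor, so
none of the speed is lost.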
[...]
> Also, structures have some advantages compared to CLOS classes:
> - structure definition is much simpler
> - there is a default constructor, printer and
> reader syntax

Go ahead and make your DEFINE-CLASS or DEFCLASS* macro to set up the
"convenient" default syntax[1] that you want if you want it.

I think you may know this already, but I'm not trying to argue that CL
handles all of this stuff perfectly. There's room for more convenience
and flexibility in handling structs and there's also room for better
access to lexical information like type declarations, which is often
available in implementation-specific extensions anyway. But you also
seem to be really resisting the tools you *do* have because
they're only good enough to solve 99% of the problem.

Cheers, Pillsy

[1] Which I hate like fire, but YM obviously V.
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <4e5412f8-9620-43f2-981e-fbee9af6eaf5@y9g2000yqg.googlegroups.com>
Hi Pillsy,
>> http://tinyurl.com/dflzxz
>...um, wait. You have a solution. Why not just,
>...well, use it?
There are some problems. First of all,
if I have two structures, I need to write

(with-struct x foo
  (with-struct y bar
    (setf .x.a .y.b)))

So I have two levels of nesting.

In fact, currently I'm using the proga macro, which allows me to
flatten out unnecessary nesting and get rid of many unnecessary
parens. E.g., instead of
(with-open-file (x ...)
 (let ((a b))
  (flet ((foo (x) y))
    (foo a)
    (foo b))))
now I always write
(proga
  (with-open-file x ...)
  (let a b)
  (flet foo (x) y)
  (foo a)
  (foo b)
  )
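I haven't seen proga's source, but the core idea can be sketched in a
few lines (hypothetical PROGA*, handling just LET, FLET and
WITH-OPEN-FILE clauses; the real proga is more general):

```lisp
(defmacro proga* (&body forms)
  "Flatten clause-style forms into the usual nested binding forms."
  (if (null forms)
      nil
      (destructuring-bind (head . rest) forms
        (if (consp head)
            (case (first head)
              ;; (let a b) swallows the rest of the body
              (let `(let ((,(second head) ,(third head)))
                      (proga* ,@rest)))
              (flet `(flet ((,(second head) ,@(cddr head)))
                       (proga* ,@rest)))
              (with-open-file `(with-open-file ,(rest head)
                                 (proga* ,@rest)))
              (t (if rest `(progn ,head (proga* ,@rest)) head)))
            (if rest `(progn ,head (proga* ,@rest)) head)))))
```

With this, (proga* (let a 1) (let b 2) (+ a b)) expands to
(let ((a 1)) (let ((b 2)) (+ a b))) and evaluates to 3.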

If I integrate with-struct and
proga (which would take about
one LOC), things will be easier:
(proga
  (with-struct x foo)
  (with-struct y bar)
  (setf .x.a .y.b))

I simply hadn't thought about it yet
:), thanks for the reminder :)

And, most important, I have to
change the compilation order.
The structure declaration has to
be available at with-struct expansion
time, so I have to surround defstruct with
(eval-when #.always ...)
or move defstructs to a separate
file. Both solutions are not very
convenient, as they break the compilation order.
The same problem prevents me
from using constructs like
(with-all-class-slots ...)
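Written out with the standard eval-when situations, that workaround
looks like this (it only helps if with-struct's introspection is
itself available at compile time, which is the
implementation-specific part):

```lisp
;; Make the DEFSTRUCT's full effect available in the compilation
;; environment of the same file, not just at load time.
(eval-when (:compile-toplevel :load-toplevel :execute)
  (defstruct point x y))
```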

Packages are another pain. Despite significant
effort and study, I'm still unable to
develop a robust methodology which would
allow for small packages. Hopefully, when I
make hierarchical packages portable and
am able to merge packages together with
their internal symbols, things will be easy.
Currently I have come to the conclusion that CL
strongly discourages small packages due
to the poor design of the package system. I have
ideas on how to fix it, but the effort would have no
quick return.

> But you don't need the type information to do what you want to do
> here. Dot syntax is used in all kinds of languages where the type
> information isn't available at compile time.
I can use slot-value to access structure slots. This is 6 to 10
times slower than using the direct structure-slot accessors; I tested
it in SBCL, Allegro, LispWorks, CCL. If speed is irrelevant, CLOS
classes seem to be better, as CL updates their instances with new
slots upon redefinition. Defstructs are relevant for writing fast code.


> Um, wait, *why* is it very important that GET not be a generic
> function?
This is rather evident. Gethash has an optional default argument;
other accessors take different arguments again. It is impossible to
predict all possible arglists of a function which could be named
"get". Congruent lambda lists come into play here and raise another
barrier to scalability.

> I kind of doubt it, given how much less flexible than CLOS classes
> they are, and how modest the benefits they offer over ad-hoc data
> structures built out of conses are.
They are simple and fast, and they can be printed readably by
default. Also, you can use *print-circle* with them. So they are
perfect in some sense. Not for everything.

> and allow for local structure types (like local functions)
>
> Easily done with a macro; you just give the structure a gensymmed name
> and then use FLET and INLINE to make sure the accessors have all the
> names you want.
Thanks for the hint. But it's a bit tricky. I'll also need a package
in which to place all the created symbols.

> Go ahead and make your DEFINE-CLASS or DEFCLASS* macro to set up the
> "convenient" default syntax[1] that you want if you want it.
It seems that this is the wrong practice :) I've seen people who
tried to simplify defclass reject their solutions later.

> I think you may know this already, but I'm not trying to argue that CL
> handles all of this stuff perfectly. There's room for more convenience
> and flexibility in handling structs and there's also room for better
> access to lexical information like type declarations, which is often
> available in implementation-specific extensions anyway. But you also
> seem to be really resisting the tools you *do* have because
> they're only good enough to solve 99% of the problem.
What do you mean? I'm using CL every day and see no replacement for
it. The closest substitute is Python, but it is ugly compared to CL
and it is much slower. There are many perfect features in CL, but
there are still many things that need to be improved. Maybe some
Scheme implementations could compete with CL too.
From: ·····@sherbrookeconsulting.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <8dc4d3a4-8c08-45b5-a53e-d630937ac317@f19g2000yqo.googlegroups.com>
On Apr 17, 3:47 pm, budden <···········@mail.ru> wrote:
> Hi Pillsy,
> >> http://tinyurl.com/dflzxz
> >...um, wait. You have a solution. Why not just,
> >...well, use it?

> There are some problems. First of all,
> if I have two structures, I need to write
> (with-struct x foo
>   (with-struct y bar
>     (setf .x.a .y.b)))

When I write these sorts of macros, I usually set them up so I can
write

(with-structs ((x foo) (y bar))
  #| this space for rent |#)

on the theory that more parens are much less painful than more
indentation.
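Assuming a two-argument WITH-STRUCT like the one above, the plural
version is a three-line macro:

```lisp
(defmacro with-structs (bindings &body body)
  "Expand ((X FOO) (Y BAR)) into nested WITH-STRUCT forms."
  (if (null bindings)
      `(progn ,@body)
      `(with-struct ,@(first bindings)
         (with-structs ,(rest bindings) ,@body))))
```

The nesting is still there in the expansion, of course; it just no
longer shows up in the indentation.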

> In fact, currently I'm using proga macro, which allows to
> flatten out unnecessary nesting and get rid of many unnecessary
> parens.

That's an intriguing idea.
[...]
> And, most important, I have to
> change compilation order.
> Structure declaration has to
> be available at with-struct expansion
> time, so I have to surround defstruct with
> (eval-when #.always ...)
> or move defstructs to a separate
> file.

Not really. It has to be if you want the performance benefits
associated with any declarations, but other than that it should
compile fine.

> Hopefully, when I
> make hierarchical packages portable and
> will be able to merge packages together with
> their internal symbols, things will be easy.

Why do you need them to be portable? It's a serious question, not a
rhetorical one: I'm interested in what you're using CL for.

> > But you don't need the type information to do what you want to do
> > here. Dot syntax is used in all kinds of languages where the type
> > information isn't available at compile time.

> I can use slot-value to access structure slots.

INTERN! Just do the same thing that DEFSTRUCT would.

Like I said, it's not a super-clean solution. But it'll get the job
done.
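That is, reconstruct the accessor names the same way DEFSTRUCT builds
them, instead of asking the implementation for them (assumes the
default conc-name):

```lisp
(defun default-struct-accessor (type slot)
  "Return the symbol DEFSTRUCT would use: POINT + X => POINT-X."
  (intern (format nil "~A-~A" (symbol-name type) (symbol-name slot))
          (symbol-package type)))
```

A WITH-STRUCT-style macro can then take the slot names as arguments
(or from its own DEFINE-STRUCT bookkeeping) and map them through this.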
[...]
> > Um, wait, *why* is it very important that GET not be a generic
> > function?
>
> This is rather evident. Gethash has an optional default argument;
> other accessors take different arguments again. It is impossible to
> predict all possible arglists of a function which could be named
> "get". Congruent lambda lists come into play here and raise another
> barrier to scalability.

That's what &rest arguments are for, right? Each class you're getting
from can have its own way of handling the other arguments. This will
work better if you use the argument order of AREF instead of GET, but
you would want one argument order anyway.
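A sketch of that shape (REF is a hypothetical name, to avoid clashing
with CL:GET; AREF-style order, collection first):

```lisp
(defgeneric ref (collection &rest subscripts)
  (:documentation "Generic element access; methods interpret the tail."))

(defmethod ref ((h hash-table) &rest subscripts)
  ;; (REF h key) or (REF h key default), mirroring GETHASH's arglist.
  (destructuring-bind (key &optional default) subscripts
    (gethash key h default)))

(defmethod ref ((v vector) &rest subscripts)
  (aref v (first subscripts)))
```

The &rest tail keeps the lambda lists congruent while letting each
method document and destructure its own extra arguments.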

> > I kind of doubt it, given how much less flexible than CLOS classes
> > they are, and how modest the benefits they offer over ad-hoc data
> > structures built out of conses are.

> They are simple and fast, and they can be printed readably by
> default. Also you can use *print-circle* with them. So they are
> perfect in some sence. Not for everything.

Cons-based structures have all the same advantages, though, except for
some of the speed. Structs have their place, but their place isn't
dominating the way it is in C or C++.

> > Easily done with a macro; you just give the structure a gensymmed name
> > and then use FLET and INLINE to make sure the accessors have all the
> > names you want.

> Thanks for the hint. But it's a bit tricky. I'll also need a package
> to place all the symbols created to.

Use gensymmed slot names and (:conc-name nil)[1] and you won't need a
package for those created symbols, right?

Cheers, Pillsy
[...]

[1] Of course, you'll also need to explicitly pass in gensyms to name
the constructor and predicate and such.
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <c6427144-ed57-4f37-96a3-3428f242c13f@y7g2000yqa.googlegroups.com>
>> Congruent
>> lambda lists come into play here and raise another barrier
>> to scalability.
>
>That's what &rest arguments are for, right? Each class
>you're getting from can have its own way of handling the
>other arguments.
Going that way, I'll finally have a Lisp like Perl: any function
can have any number of arguments. This is much worse than
having separate gethash and getf.

More important, I use some libraries and I can't
(shouldn't) change the signature of their "get" method.
Well, I can write foolib:get and barlib:get, but again
I have to qualify get with some prefix. I am free from
that in C++, where all I need is the type of the first
argument (which is the class instance itself). Summarizing:
I have tried all the ways, and I still see no way to avoid
qualifying in CL without parsing type
declarations and changing the symbol-to-function mapping
mechanism. And this is what I intend to change to make
Lisp more expressive.

> > Structure declaration has to
> > be available at with-struct expansion
> > time, so I have to surround defstruct with
> > (eval-when #.always ...)
> > or move defstructs to a separate
> > file.
>
> Not really. It has to be if you want the
> performance benefits associated with any
> declarations, but other than that it should
> compile fine.
I tried this and it failed in practice.
Maybe I did something wrong. IMO the problem
is with the compilation semantics.

>
> > Hopefully, when I
> > make hierarchical packages portable and
> > will be able to merge packages together with
> > their internal symbols, things will be easy.
>
> Why do you need them to be portable? It's a serious question, not a
> rhetorical one: I'm interested in what you're using CL for.
Portable code has more chance of being widely adopted. Allegro's and
Tim Bradshaw's hierarchical packages have existed for several years,
but they have not made it into libraries. I want Lisp to survive, and
I'm sure packages shouldn't be flat. Packages are like directories:
it is nonsense not to have a hierarchical directory structure. Also,
my reader would (hopefully) solve many other questions.

> > They are simple and fast, and they can be printed readably by
> > default. Also you can use *print-circle* with them. So they are
> > perfect in some sence. Not for everything.
>
> Cons-based structures have all the same advantages, though, except for
> some of the speed. Structs have their place, but their place isn't
> dominating the way it is in C or C++.
Yes, now I use conses where structs would be appropriate.
I'm not sure #S(point :x 4 :y 5) is better than just (4 5),
but "x" and "y" for access are much more informative than just
"car" and "cadr".
From: ·····@franz.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <54a08fb2-b6dc-41d5-bbba-0f59195f7612@a7g2000yqk.googlegroups.com>
On Apr 18, 1:18 am, budden <···········@mail.ru> wrote:
> > > Hopefully, when I
> > > make hierarchical packages portable and
> > > will be able to merge packages together with
> > > their internal symbols, things will be easy.
>
> > Why do you need them to be portable? It's a serious question, not a
> > rhetorical one: I'm interested in what you're using CL for.
>
> Portable code has more chance of being widely adopted. Allegro's and
> Tim Bradshaw's hierarchical packages have existed for several years,
> but they have not made it into libraries.

Nonsense.  Allegro CL's hierarchical package implementation has always
been completely open, and is available here:

http://www.franz.com/support/documentation/current/doc/packages.htm#hier-pack-implementation-2

Also, if you look at Bradshaw's site
(http://www.cl-user.net/asp/libs/tfb-hierarchical-packages)
you'll see that he bases his hierarchical
packages on Allegro CL's version, and that he lists Allegro CL,
LispWorks, and CMUCL as compatible implementations.  I'm sure
it would be easy to spread his implementation to other CL
implementations, if you were willing to do the work.

I wish people would stop spreading FUD about Lisp libraries,
especially in cases where it's not true.

Duane
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <880e58c6-98ad-4b87-a98e-e8203e5d286f@e18g2000yqo.googlegroups.com>
> I'm sure it would be easy to spread his implementation to other CL
> implementations, if you were willing to do the work.
Porting hierarchical packages to new implementations is not that
easy (see Tim Bradshaw's comments). I say hierarchical packages
are not portable. It is really so. They have only been ported to
some CL implementations, not to all of them. Where do you
see FUD and false statements?
From: Marco Antoniotti
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <bedb2192-1067-4d94-b017-df4a6b3c1fee@37g2000yqp.googlegroups.com>
On Apr 19, 7:55 pm, budden <···········@mail.ru> wrote:
> > I'm sure it would be easy to spread his implementation to other CL
> > implementations, if you were willing to do the work.
>
> Porting hierarchical packages to new implementations is not that
> easy (see Tim Bradshaw's comments).
> I say hierarchical packages
> are not portable. It is really so. They are only ported to some
> of CL implementations, but not to all of them. Where do you
> see FUD and false statements?

That depends on the definition of "hierarchical package" for Common
Lisp.  Franz has one that has been implemented in CMUCL and has served
as the basis of Tim Bradshaw's code.  Porting that code to *new*
implementations is *trivial*.  Porting it to *old* implementations is
difficult.

In any case, it'd be better to agree on a spec for hierarchical
packages first.  Since there is only one floating around (or two, if
you count the old Lisp Machines things) either you agree with it or
you don't.  If you don't, write up one and have it go through some
"peer review" (e.g., write a CDR).  Of course, you will have to
convince people that mapping packages to the file system is a good
idea (I am inclined to say it is).

Cheers
--
Marco
From: Tim Bradshaw
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <4315cb5e-ceda-4ecd-9a9d-6a5728a51928@k41g2000yqh.googlegroups.com>
On Apr 19, 6:55 pm, budden <···········@mail.ru> wrote:

> Porting hierarchical packages to new implementations is not that
> easy (see Tim Bradshaw's comments). I say hierarchical packages
> are not portable. It is really so. They are only ported to some
> of CL implementations, but not to all of them. Where do you
> see FUD and false statements?

Actually, it is easy to port it: I did so several times, including at
least one port (to Genera) which seems not to have made it into the
version on my web site.

Porting it requires you to find a single place where you can "hook"
the implementation, to intervene in the string->package lookup.  For
some (and perhaps all) implementations this is CL:FIND-PACKAGE, which
you need to redefine suitably - this is what the existing code does,
of course.

I would expect the effort of porting the package to an unknown
implementation to be a couple of hours work at most, most of which
would probably be making suitable branches in the source control
system. The changes would be a few lines at most.
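The resolver itself is tiny; the nonportable part is only getting the
reader's lookup to call it (a sketch, assuming the Allegro-style
leading-dot convention for relative names):

```lisp
(defun resolve-relative-package-name (name)
  "Resolve a leading dot against *PACKAGE*: \".FOO\" in BAR => \"BAR.FOO\"."
  (let ((s (string name)))
    (if (and (plusp (length s)) (char= (char s 0) #\.))
        (concatenate 'string (package-name *package*) s)
        s)))
```

The port then consists of finding the one place (CL:FIND-PACKAGE or
an internal equivalent) where this can be spliced into the
string-to-package lookup.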

Unfortunately I'm not actively writing Lisp at the moment, but if
anyone wants to port to other implementations I'll happily accept such
ports and put them in the code.  As I've said, it's a minute amount of
work.
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <3a5937c3-e5a8-47b7-9c88-f8d2e1d6bd4a@z9g2000yqi.googlegroups.com>
>Porting it requires you to find a single place
>where you can "hook" the implementation, to intervene in
>the string->package lookup.  For some (and perhaps all)
>implementations this is CL:FIND-PACKAGE, which
>you need to redefine suitably - this is what the existing
>code does of course.
I tried to do this for CLISP and failed. The reader simply
ignores my redefinition. The same happens in SBCL.
Maybe there are some other place(s) to hook, but I don't
know them. And there is no guarantee such a place exists at all
in a particular implementation.

So, it _might_ be that one would need to patch CLISP
to make it work. I'm sure this is very easy in a technical
sense, but it might not be very easy as a practical task
(keeping the patch up to date with a changing CLISP and
rebuilding CLISP, or convincing the CLISP team to accept it).
From: ·····@franz.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1998eac4-df4c-45ce-8002-4d2f9b1cc77b@u8g2000yqn.googlegroups.com>
On Apr 20, 1:32 pm, budden <···········@mail.ru> wrote:
> >Porting it requires you to find a single place
> >where you can "hook" the implementation, to intervene in
> >the string->package lookup.  For some (and perhaps all)
> >implementations this is CL:FIND-PACKAGE, which
> >you need to redefine suitably - this is what the existing
> >code does of course.
>
> I tried to do this for CLISP and failed. Reader simply
> ignores my redefinition. The same happens in SBCL.
> Maybe there are some other place(s) to hook, but I don't
> know them. And there is no guarantee it does exist at all
> in a particular implementation.

Ah, yes; it's because you don't have the implementor's magic touch.
What is the implementor's magic touch?  If you're interested, I'll
tell you.  But for now...

> So, it _might_ be that one would need to patch CLISP
> to make them work. I'm sure this is very easy in technical
> sense, but it might be not very easy as a practical task
> (keeping the patch up to date with changing CLISP and
> rebuilding CLISP or convincing CLISP team to admit it).

Yes, it _might_ be that this is precisely what Tim Bradshaw and I have
been talking about all along.  The reason you find the job so hard is
because you're doing the wrong job.  If instead you do your job as a
user and make a convincing argument to the implementors of your
language, then the implementors can do their job and make the change
to their implementation easily.  But since you are not an implementor,
the job you are trying to do is of course impossible, as you see.

For other readers of this message: I'm not trying to set up any kind
of caste system between implementors and non-implementors; I'm only
making a point for budden's benefit.

Duane
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49ed1f9b$0$22522$607ed4bc@cv.net>
·····@franz.com wrote:
> For other readers of this message: I'm not trying to set up any kind
> of caste system between implementors and non-implementors;

awesome concept. can I be an untouchable?

kzo
From: Raffael Cavallaro
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <f3d1c4fe-369a-4f35-b1f2-f78cd98147d1@l1g2000yqk.googlegroups.com>
On Apr 20, 9:21 pm, Kenneth Tilton <·········@gmail.com> wrote:
> ·····@franz.com wrote:
> > For other readers of this message: I'm not trying to set up any kind
> > of caste system between implementors and non-implementors;
>
> awesome concept. can I be an untouchable?
>
> kzo

sure - you be Sean Connery and I'll be Kevin Costner

;^)
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090430220605.907@gmail.com>
On 2009-04-21, Kenneth Tilton <·········@gmail.com> wrote:
> ·····@franz.com wrote:
>> For other readers of this message: I'm not trying to set up any kind
>> of caste system between implementors and non-implementors;
>
> awesome concept. can I be an untouchable?

Why not be unreachable? Then you're basically just garbage.
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49ed3fba$0$27759$607ed4bc@cv.net>
Kaz Kylheku wrote:
> On 2009-04-21, Kenneth Tilton <·········@gmail.com> wrote:
>> ·····@franz.com wrote:
>>> For other readers of this message: I'm not trying to set up any kind
>>> of caste system between implementors and non-implementors;
>> awesome concept. can I be an untouchable?
> 
> Why don't be unreachable? Then you're basically just garbage.

I've heard bad things about getting "collected".
From: Rob Warnock
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <cdqdnb_m9PVa73DUnZ2dnUVZ_uadnZ2d@speakeasy.net>
Kenneth Tilton  <·········@gmail.com> wrote:
+---------------
| Kaz Kylheku wrote:
| > Kenneth Tilton <·········@gmail.com> wrote:
| >> ·····@franz.com wrote:
| >>> For other readers of this message: I'm not trying to set up any kind
| >>> of caste system between implementors and non-implementors;
| >> awesome concept. can I be an untouchable?
| > 
| > Why don't be unreachable? Then you're basically just garbage.
| 
| I've heard bad things about getting "collected".
+---------------

Even worse, being "finalized".


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: ·····@franz.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <42e19a57-9252-478a-a6a2-629508631cde@v15g2000yqn.googlegroups.com>
On Apr 21, 12:16 am, ····@rpw3.org (Rob Warnock) wrote:
> Kenneth Tilton  <·········@gmail.com> wrote:
> +---------------
> | Kaz Kylheku wrote:
> | > Kenneth Tilton <·········@gmail.com> wrote:
> | >> ·····@franz.com wrote:
> | >>> For other readers of this message: I'm not trying to set up any kind
> | >>> of caste system between implementors and non-implementors;
> | >> awesome concept. can I be an untouchable?
> | >
> | > Why don't be unreachable? Then you're basically just garbage.
> |
> | I've heard bad things about getting "collected".
> +---------------
>
> Even worse, being "finalized".

Nah, finalization is only for the weak.  But if you find yourself
becoming a loose end, you better watch yourself, or you find yourself
in your car going off into the abyss.  Capisca?
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49edc8bc$0$27780$607ed4bc@cv.net>
·····@franz.com wrote:
> On Apr 21, 12:16 am, ····@rpw3.org (Rob Warnock) wrote:
>> Kenneth Tilton  <·········@gmail.com> wrote:
>> +---------------
>> | Kaz Kylheku wrote:
>> | > Kenneth Tilton <·········@gmail.com> wrote:
>> | >> ·····@franz.com wrote:
>>
>> | >>> For other readers of this message: I'm not trying to set up any kind
>> | >>> of caste system between implementors and non-implementors;
>> | >> awesome concept. can I be an untouchable?
>> | >
>> | > Why don't be unreachable? Then you're basically just garbage.
>> |
>> | I've heard bad things about getting "collected".
>> +---------------
>>
>> Even worse, being "finalized".
> 
> Nah, finalization is only for the weak.  But if you find yourself
> becoming a loose end, you better watch yourself, or you find yourself
> in your car going off into the abyss.  Capisca?
> 

Reminds me of my friend from the "you can only compromise your 
principles once" digression*: when I asked him why he did not get 
involved with the mob when he had the chance he said he did not like the 
retirement plan.

kt

* 
http://smuglispweeny.blogspot.com/2008/03/tiltons-law-solve-first-problem.html
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <8ab550b2-618e-4c54-94f1-4188263410a5@z14g2000yqa.googlegroups.com>
> Yes, it _might_ be that this is precisely
> what Tim Bradshaw and I have
> been talking about all along.  The reason
> you find the job so hard is
> because you're doing the wrong job.  
Is it an implementor's magic touch which
enables you to know so much about me?
I'm not looking for a job now :)

I'm rather new to open-source development.
I have already improved iterate. Now I
can write

(iter:iter (:for foo in bar))
instead of
(iter:iter (iter:for foo in bar))
or :USE-ing the package and having to resolve multiple
name clashes, which is annoying.

This is evidently better. I wrote to
iterate-devel and submitted a patch (together
with a patch for the tests). They ignored
the patch and advised me an ugly workaround
which I already knew and which was not
satisfactory. So, without further discussion,
I've made a fork and published it separately.
Now I'm using iterate and I'm happy with it,
as it is so much better than loop.

It is hard to communicate with you lispers:
most of you think Lisp is ok despite the
fact that Lisp is dying. My business is writing
programs, not convincing lispers that my
patches are improvements. I just know it for
myself and this is enough. So, instead of
sending patches, I'd prefer to make my own
libraries. As for hierarchical packages,
maybe someday I'll finish my _portable_ reader,
which would allow for hierarchical packages
without the unpleasant work of promoting my
patches. And then everyone would be free to use
it or not. I'll also be independent
of anyone's opinion.
From: ·····@franz.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <5162d5c4-a955-4554-adb5-e3ea7d4f13fc@t11g2000vbc.googlegroups.com>
On Apr 21, 12:52 am, budden <···········@mail.ru> wrote:
> > Yes, it _might_ be that this is precisely
> > what Tim Bradshaw and I have
> > been talking about all along.  The reason
> > you find the job so hard is
> > because you're doing the wrong job.  
>
> Is it an implementor's magic touch which
> enables you to know so much about me?

No, you misunderstand my statement about the implementor's magic
touch.  To clarify, answer me a riddle: How many times does an
implementor need to try before he succeeds?

> I'm not looking for a job now :)

That's precisely your problem.  In my original reply to you, I said
"I'm sure it would be easy to spread his implementation to other CL
implementations, if you were willing to do the work."  Well, the
reason it is so hard for you is that you are not willing to do the
work.  Again, go back to my riddle and answer it.

> I'm rather new to open-source development.
> I have already improved iterate. Now I
> can write
>
> (iter:iter (:for foo in bar))
> instead of
> (iter:iter (iter:for foo in bar))
> or using the iter package and having to resolve
> multiple clashes, which is annoying.
>
> This is evidently better. I wrote to
> iterate-devel and submitted a patch (together
> with a patch for the tests). They ignored
> the patch and advised me to use an ugly workaround
> which I already knew about and which was not
> satisfactory. So, without further discussion,
> I made a fork and published it separately.
> Now I'm using iterate and I'm happy with it, as
> it is so much better than loop.

Very good! You just implemented something.  Does that make you an
implementor?  Not yet.  Go back to my riddle and answer it.

You've also learned something about the wonders of open-source
development; you get to look at the source, make your changes, and
when you can't convince the core maintainers of the software to
include your change, you get to make a branch and become the
maintainer of your own version of the software!

> It is hard to communicate with you lispers:
> most of you think lisp is ok despite the
> fact that lisp is dying.

Oh, my!  I've never heard that before!  Dying?  When?  Have you
informed the others?

Good grief.  Lisp has been "dying" for over 50 years, now, despite the
fact that it is growing.  Get a clue.

As for lisp being ok, do you really think we would have these
discussions if we didn't think there could be improvements?  Do you
really think we would be putting out new modules, some completely open-
source, if we didn't want users to accept them and to put pressure on
other lisp vendors to implement the necessary changes?
Your penchant for spreading FUD is truly amazing.

> My business is writing
> programs, not convincing lispers that my
> patches are improvements. I just know it for
> myself and this is enough. So, instead of
> sending patches, I'd prefer to make my own
> libraries.

As you said, you are new to open-source development. Your experience
is not a lisp-centric experience, but one common to all open-source
development (and lest you think that I or my company have no
experience in open-source development just because we sell commercial
software; think again).  You'll find you have to convince the
maintainers of any open-source software to accept your change, just as
you must convince proprietary software maintainers to make a change
for you.  The difference is that you can make the change yourself, if
you wish, in your own copy.

> As for hierarchical packages,
> maybe someday I'll finish my _portable_ reader
> which would allow for hierarchical packages
> without the unpleasant work of promoting my
> patches. And then everyone would be free to use
> it or not. I'll also be independent
> of anyone's opinion.

Go for it.

Duane
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <72f83432-aab6-44f7-baa4-5d4e4c2f1af2@k2g2000yql.googlegroups.com>
> No, you misunderstand my statement about the implementor's magic
> touch.  
It was a joke :)

> That's precisely your problem.  
I answered the other part of your post...

My guess at the answer to the riddle is 1, but don't ask me why.

> As for lisp being ok, do you really think we would have these
> discussions if we didn't think there could be improvements?  Do you
> really think we would be putting out new modules, some completely open-
> source, if we didn't want users to accept them and to put pressure on
> other lisp vendors to implement the necessary changes?
> Your penchant for spreading FUD is truly amazing.

Well, I'm sorry. I didn't mean all of the lispers. But now I see why
you want me to make patches to implementations. This is a fine intention
and it looks like we have similar goals, but I really do not want to
do _this_ kind of job.
From: ·····@franz.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <6eadeeb1-8b6e-4a42-9c1d-10c80708d05a@u9g2000pre.googlegroups.com>
On Apr 21, 12:45 pm, budden <···········@mail.ru> wrote:
> > No, you misunderstand my statement about the implementor's magic
> > touch.  
>
> It was a joke :)

Of course it was, but deflecting a serious question with a joke
doesn't get the question answered, or bring you any enlightenment...

> > That's precisely your problem.  
>
> I answered the other part of your post...
>
> My guess at the answer to the riddle is 1, but don't ask me why.

If your guess is one then it shows yet again what you demonstrate so
often; you don't have an implementor's mindset.

The answer to the riddle "How many times does an  implementor need to
try before he succeeds?" is

"One more than the number of times he failed".

> > As for lisp being ok, do you really think we would have these
> > discussions if we didn't think there could be improvements?  Do you
> > really think we would be putting out new modules, some completely open-
> > source, if we didn't want users to accept them and to put pressure on
> > other lisp vendors to implement the necessary changes?
> > Your penchant for spreading FUD is truly amazing.
>
> Well, I'm sorry. I didn't mean all of the lispers. But now I see why
> you want me to make patches to implementations. This is a fine intention
> and it looks like we have similar goals, but I really do not want to
> do _this_ kind of job.

That's fine.  You don't have to be an implementor.  Users get together
all the time and place pressure on the vendors of the products they
use, and it doesn't require them to become implementors, just vocal
users.  But posting a complaint to c.l.l does not count as vocalizing
desires for a feature; you must go to the implementor/maintainer of
the lisp you are using and ask.  Usually each vendor (both commercial
and opensource) has a forum to which you can go and log your concerns
and listen to the concerns of other users. If enough users log the
same concerns, you can probably expect that your vendor will respond.

Duane
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <fc82103f-6edc-4f4f-9ae5-b84da93b2a2c@f19g2000yqo.googlegroups.com>
Hi Duane!
  Yes, I agree I spread FUD about CL itself.
I'm not the first who does this. I'm interested
in religions. They have something in common.
Among other things, every religion has its
dogmas. Some of them are intentionally absurd.
When I accept an absurd dogma, this is an act of
my initiation and it guarantees that I'll be
loyal in the future. In the case of lisp, one of
the absurd dogmas is:

PARENS ARE NOT THERE

There was one very good lisper who tried to initiate me
into the lisp religion. I introduced my let1 into a common
project, to write

(let1 foo bar . body) instead of
(let ((foo bar)) . body)

It is just a macro, syntactic sugar, but it was
considered heresy. I do not say ALL lispers
think so. But it looks like this opinion is
rather common.
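
For reference, a minimal let1 is a one-line macro; this is a sketch,
not necessarily the exact definition used in that project:

```lisp
(defmacro let1 (var val &body body)
  "Bind one variable: (let1 x 1 ...) == (let ((x 1)) ...)."
  `(let ((,var ,val)) ,@body))

;; (let1 foo (compute-bar) (use foo)) saves two parens per binding.
```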

My problem is that I have too much common sense.
So no, I failed to be initiated. This is not my
religion. I am not a lisper. Lisp is fine, but
my dream is the creation of a new language. Python,
Ruby, and Java are miserable compared to lisp, but
lisp itself is miserable too compared to what it
should be. So yes, I'm spreading FUD about
CL so that scared (potential) lisp users would create
another language. This is some kind of black magic, of
course, but this is what I do. And there is
another reason for my complaints (see below).

I've been using let1 for four years now and
my lisp does not crash; I have no problems
with macros expanding to let1 forms. So
let1 is in no way harmful for me.

The proga macro is the next
step in this direction. I use
it, but it is rather new and I think
I'll redesign it one more time before
it gets its final form.

With proga, I write
(proga
  (flet f (x) (+ x 1))
  (let y (f 1))
  y)

instead of

(flet ((f (x) (+ x 1)))
  (let ((y (f 1)))
     y))

so I save 6 parens here and 1
nesting level. Instead of that
I'm writing one extra line and
one extra word. For me, it is
a gain, as

#.(with-human-readable-syntax)
  I thrown out Lot of InneceSsary Parens
end-human-readable-syntax

is better than

(((p)(a)(r)(e)(n)(s))
 (((t)(h)(a)(t))
  (((p)(i)(l)(e))
   ((u)(p)))))

There is nothing new
or ingenious in proga. It
is just a design from non-lispy
languages where bindings do
not create a nesting level:

{
var f = function(x) { x + 1; }
var y = f(1);
}

But many lispers say this is
confusing.
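
For readers wondering how such flattening can work, here is a toy
macro in the same spirit; it is NOT budden's actual proga, just a
sketch that handles (let var val) clauses and nothing else:

```lisp
(defmacro proga-toy (&body clauses)
  "Rewrite flat (let var val) clauses into nested LETs.
Anything else is treated as an ordinary body form."
  (cond ((null clauses) nil)
        ((null (rest clauses)) (first clauses))
        (t (let ((c (first clauses)))
             (if (and (consp c) (eq (first c) 'let)
                      (= (length c) 3) (symbolp (second c)))
                 `(let ((,(second c) ,(third c)))
                    (proga-toy ,@(rest clauses)))
                 `(progn ,c (proga-toy ,@(rest clauses))))))))
```

So (proga-toy (let foo 1) (let bar 2) (+ foo bar)) expands into two
nested LETs and evaluates to 3.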

So, I AM NOT A LISPER.

I'm still a user of a lisp, but
I'm out of lisp culture.

But I didn't spread FUD about
hierarchical packages. The idea of
hierarchical packages is very
cool. I do not strictly insist,
but IMO, code is portable only
when it requires NO patching of
the target platform. Even if I'm
wrong, this is not spreading
FUD, but just a terminological
ambiguity.

> But posting a complaint to
> c.l.l does not count as vocalizing
> desires for a feature;
I usually complain. Then I suggest a
solution. Then I read replies. Then
I think. Then I find my solution is
unacceptable. Then I retry. Finally I
sometimes arrive at an acceptable solution.
The acceptable, published solutions
I have worked out so far are:

iterate-keywords
merge-packages-and-reexport
let1
proga
*read-eval-stream*

They all relate to the language itself, not
to applications. Most of them are smooth and
portable, have undergone some testing in several
implementations, and do not break the language.

Merge-packages-and-reexport is not ideal,
but still useful.

As for your riddle, I see nothing
implementor-specific in your reply.
From: namekuseijin
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <gsmgeq$s5d$1@adenine.netfront.net>
Hey, Xah!  Good to see you back!  That time on the spa certainly made 
you look more... hmm, buddy? ;)

budden wrote:
> PARENS ARE NOT THERE

What parens? :)

> It was one very good lisper who tried to initiate me
> to a lisp religion. I introduced my let1 to a common
> project to write
> 
> (let1 foo bar . body) instead of
> (let ((foo bar)) . body)
> 
> It is just a macro, syntactic sugar, but it was
> considered as a heresy.

Not a heresy, it just sucks.  i.e.:

(let1 foo 1 bar 2 rez (+ foo bar) (princ foo) (princ bar) (princ rez))

I think it's hard for the locals to stand out and to see where 
statements begin.  Clojure at least puts a pair of parentheses around them.

I also enjoy parentheses around each binding as Lisp has no = for 
assignment.  Plus, quick parenthetical source code navigation with an 
appropriate tool is a must lesser languages can't afford. :)  All too 
often I see python and haskell programmers crying over how their editing 
tools don't support their meaningful whitespace...

> My problem is that I have too much common sense.

Oh!  Common Sense is how Scheme ought to be named. :)

> I've been using let1 for four years now and
> my lisp does not crash; I have no problems
> with macros expanding to let1 forms. So
> let1 is in no way harmful for me.

Only to people reading your peculiar code.

> The proga macro is the next
> step in this direction. I use
> it, but it is rather new and I think
> I'll redesign it one more time before
> it gets its final form.
> 
> With proga, I write
> (proga
>   (flet f (x) (+ x 1))
>   (let y (f 1))
>   y)
> 
> instead of
> 
> (flet ((f (x) (+ x 1)))
>   (let ((y (f 1)))
>      y))

No such problem with Scheme:
(let* ((f (lambda (x) (+ x 1)))
        (y (f 1)))
   y)

Better yet with common sense -> 2 :)

> so I save 6 parens here and 1
> nesting level. Instead of that
> I'm writing one extra line and
> one extra word. For me, it is
> a gain, as
> 
> #.(with-human-readable-syntax)
>   I thrown out Lot of InneceSsary Parens
> end-human-readable-syntax
> 
> is better than
> 
> (((p)(a)(r)(e)(n)(s))
>  (((t)(h)(a)(t))
>   (((p)(i)(l)(e))
>    ((u)(p)))))
> 
> There is nothing new
> and ingenious in proga. It
> is just a design from non-lispy
> languages where bindings do
> not create nesting level

That's exactly what let* is for.

Anyway, gotta love Lisp for allowing for any wacko or perv to bend it to 
his twisted, sicko tastes... :)
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <a174fce0-b3a3-45fd-b9d6-dbc17f98e398@c12g2000yqc.googlegroups.com>
> Hey, Xah!  Good to see you back!
I'm not a Xah.

>Not an heresy, it just sucks.  i.e.:
>
>(let1 foo 1 bar 2 rez (+ foo bar) (princ foo) (princ bar) (princ rez))
>
>I think it's hard for the locals to stand out and
>to see where
>statements begin.  Clojure at least puts a pair
>of parentheses around them.

Let1 is let ONE. So it is intended for one binding only.
No sense in having SIX parens where we need only TWO.
In the case of two variables I wrote let2, but later
I rejected it.

(let2 foo 1 bar 2 . body) is no better for me than
(let ((foo 1) (bar 2)) . body)

In a proga, I can write
(proga
  (let foo 1 bar 2 baz 3)
  . body)

but I prefer
(proga
  (let foo 1)
  (let bar 2)
  (let baz 3)
  . body)

(this is in fact let*)

Maybe in a future version of
proga, let would take an extra argument for a
type declaration:

(proga
  (let foo bar my-type)
  . body)

Clojure's let seems to be a middle ground, but
I'm unsure that using up both [] and {} is a good idea.
I think they should have left one pair for user
extensions.

> No such problem with Scheme:
> (let* ((f (lambda (x) (+ x 1)))
>         (y (f 1)))
>    y)
I dislike Scheme. Keyword arguments,
dirty macros and type declarations are
so cool... Scheme is missing them, as
far as I remember. And no, proga can
do more.

(proga
  (let fname "foo")
  (with-open-file f fname :direction :input)
  (with-your-construct &rest its-head)
  . body)

proga can flatten any nesting construct if you define
an appropriate proga clause. It looks like C++, where
objects on the stack can guard resources. You declare all
objects at a single nesting level:

{
  String fname("foo");
  FileStream f(fname); // like with-open-file
  YourResourceGuard g(its_arguments); // named, so it lives to end of scope
  body;
}

> Anyway, gotta love Lisp for allowing for
> any wacko or perv to bend it to
> his twisted, sicko tastes... :)
You love Lisp? Really? I love women.
From: ·····@sherbrookeconsulting.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <3d88f704-de7f-44c6-87e3-38b8ce671d05@d14g2000yql.googlegroups.com>
On Apr 22, 3:49 am, budden <···········@mail.ru> wrote:
[...]
> >Not an heresy, it just sucks.  i.e.:

> >(let1 foo 1 bar 2 rez (+ foo bar) (princ foo) (princ bar) (princ rez))

> >I think it's hard for the locals to stand out and
> >to see where statements begin. Clojure at least puts a pair
> >of parentheses around them.

> Let1 is let ONE. So it is intended for one parameter only.
> No sence to have SIX parens where we need only TWO.

The sucky part comes in when you decide that you need two locals
instead of one. Then you need to type all six parens and turn your
LET1 into a LET. It's not a world-ending PITA, but all-in-all I'd much
rather stick to LET.

Even if it means living with the fear that one day I'll get the
dreaded, "Too many parens -- core dumped" error.

Cheers,
Pillsy
[...]
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <d104fbc1-fe39-425d-8133-673c44780dc6@r33g2000yqn.googlegroups.com>
> The sucky part comes in when you decide that you
> need two locals instead of one.
You are right, there is some inconvenience. But
let1 is still useful. After all, I think that
clojure's version is the best.

> Then you need to type all six parens and turn your
> LET1 into a LET. It's not a world-ending PITA,
> but all-in-all I'd much
> rather stick to LET.
Really, it is not that hard. And also, one smart
lisper (sorry, I can't remember his name) overloaded
let1, defining it as follows:

(defmacro let1 (variable + &body progn)
  "Shortcut for (let ((a b)) . c) or (destructuring-bind a b . c)"
  (if (atom variable)
      `(let ((,variable ,+)) ,@progn)
      `(destructuring-bind ,variable ,+ ,@progn)))

So, it serves at the same time as a shorthand
for destructuring-bind.
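
For instance, with that definition both spellings work:

```lisp
;; Single symbol: plain LET.
(let1 x 10 (* x x))           ; => 100
;; List pattern: DESTRUCTURING-BIND.
(let1 (a b) '(1 2) (+ a b))   ; => 3
```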

But now I prefer proga, which is really much more
convenient. With one variable, I write
(proga
  (let foo bar)
   ...)
When I add second one, I just write
(proga
  (let foo bar)
  (let baz quux)
   ...)
it is a let*, though; but I can still
write the plain old

(proga
  (let ((foo bar)(baz foo))
    ...))

proga is not polished yet. I think someday
I'll use code walking to expand it inside
inner constructs such as cond. And also
set up my own defun, so it would look like

(defun f (arg)
  "docstring"
  nil ; optional block name
  (let x y)
  (with-open-file z "foo" :direction :input)
  (cond
    (x (let y z) z) ; implicit proga here
    (t x)))

Also, I think there are some more reasonable
changes to make, e.g., in the scope of a proga:

(let (values symbol status) (find-symbol :foo :bar))
instead of multiple-value-bind.

(return a b c) ; returns multiple values
From: ··················@gmail.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <a8b0eb29-9076-4040-bfbe-112c70c84384@p11g2000yqe.googlegroups.com>
On Apr 22, 10:35 am, budden <···········@mail.ru> wrote:
> > The sucky part comes in when you decide that you
> > need two locals instead of one.
>

Or when you want to initialize something to nil.

> But now I prefer proga, which is really much more
> convinent. With one variable, I write
> (proga
>   (let foo bar)
>    ...)
> When I add second one, I just write
> (proga
>   (let foo bar)
>   (let baz quux)
>    ...)
> it is a let* though, but I can do
> just old
>
> (proga
>   (let ((foo bar)(baz foo))
>     ...))

Why does everyone who complains about the parentheses not know how to
format his code?

(let* ((foo bar)  ;; this should be a let* btw.
       (baz foo))
  ..declarations..
  ..body..
  )

Your entire argument reeks of writing ugly (unreadable) imperative
code in lisp.

Every time you use progn, god kills a puppy.

Stop shitting all over good style and you won't have this problem.

(You also completely ignore the efficiency/concurrency potential in the
distinction between let and let* by abusing it in this way.)
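
For anyone unclear on that parenthetical, the let/let* distinction in
two small examples:

```lisp
;; LET evaluates init forms "in parallel": Y sees the outer X.
(let ((x 1))
  (let ((x 2) (y x))
    y))     ; => 1

;; LET* evaluates them sequentially: Y sees the new X.
(let ((x 1))
  (let* ((x 2) (y x))
    y))     ; => 2
```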
From: namekuseijin
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <gsnteu$3dj$1@adenine.netfront.net>
budden escreveu:
>> The sucky part comes in when you decide that you
>> need two locals instead of one.
> You are right, there is some inconvenience. But
> let1 is still useful. After all, I think that
> clojure's version is the best.
> 
>> Then you need to type all six parens and turn your
>> LET1 into a LET. It's not a world-ending PITA,
>> but all-in-all I'd much
>> rather stick to LET.
> Really, it is not that hard. And also, one smart
> lisper (sorry, I can't remember his name) overloaded
> let1, defining it as follows:
> 
> (defmacro let1 (variable + &body progn)
> "Shortcut for (let ((a b)) . c) or (destructuring-bind a b . c)"
>     (if (atom variable)
>         `(let ((,variable ,+)) ,@progn)
>       `(destructuring-bind ,variable ,+ ,@progn)))
> 
> So, it serves at the same time as a shorthand
> for destructuring-bind.
> 
> But now I prefer proga, which is really much more
> convenient. With one variable, I write
> (proga
>   (let foo bar)
>    ...)
> When I add second one, I just write
> (proga
>   (let foo bar)
>   (let baz quux)
>    ...)

I see you introduce new bindings with let, which kinda defeats the 
purpose of being shorter.  i.e:

  (let* (
    (foo bar)
    (baz quux)
    )
     ...)

is both shorter and more elegant (thanks to the enclosing 
parentheses I can jump over the parameters directly to the body in a 
single keypress with a proper editor).

> proga is not polished yet.

Yeah, main problem right now is the dumb name.  How about shizznit?

-- 
a game sig: http://tinyurl.com/d3rxz9
From: namekuseijin
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <gsnq8o$30jl$1@adenine.netfront.net>
budden escreveu:
>> Anyway, gotta love Lisp for allowing for
>> any wacko or perv to bend it to
>> his twisted, sicko tastes... :)
> You love Lisp? Really? I love women.

We all do, Xah.  I also love pizza.

-- 
a game sig: http://tinyurl.com/d3rxz9
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090422202456.GE4558@gildor.inglorion.net>
budden,

It deserves to be said:

I like your let1 and proga.

  -- Bob


From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <4e344a93-27dc-4379-ad04-3be90bcb65c2@z19g2000yqe.googlegroups.com>
> I like your let1 and proga.
Nice, I'll post them to lisppaste when I get home.
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <7261a955-71f0-4997-805d-58d85dbf58ce@a5g2000pre.googlegroups.com>
I've found it is there already:

http://paste.lisp.org/display/74339
From: Michele Simionato
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <5099b0e7-b174-49fb-bfc7-52cdd096db2b@g37g2000yqn.googlegroups.com>
On Apr 22, 9:30 am, namekuseijin <···················@gmail.com>
wrote:
> Budden wrote:
> > I introduced my let1 to a common
> > project to write
>
> > (let1 foo bar . body) instead of
> > (let ((foo bar)) . body)
>
> > It is just a macro, syntactic sugar, but it was
> > considered as a heresy.
>
> Not an heresy, it just sucks.  i.e.:
>
> (let1 foo 1 bar 2 rez (+ foo bar) (princ foo) (princ bar) (princ rez))
>
> I think it's hard for the locals to stand out and to see where
> statements begin.  Clojure at least puts a pair of parentheses around them.


Just to play devil's advocate, notice that Budden
means a let-one syntax, which is the same one Paul
Graham advocates in Arc (http://www.paulgraham.com/arcll1.html),
so there is at least one famous lisper who
thinks this is a good idea.
I personally would prefer a notation such as
(let (name1 value1) (name2 value2) ... expr)
with a single expression instead of a body.
But this is idle speculation, of course.
From: namekuseijin
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <gsnqqn$31dt$1@adenine.netfront.net>
Michele Simionato escreveu:
> On Apr 22, 9:30 am, namekuseijin <···················@gmail.com>
> wrote:
>> Budden wrote:
>>> I introduced my let1 to a common
>>> project to write
>>> (let1 foo bar . body) instead of
>>> (let ((foo bar)) . body)
>>> It is just a macro, syntactic sugar, but it was
>>> considered as a heresy.
>> Not an heresy, it just sucks.  i.e.:
>>
>> (let1 foo 1 bar 2 rez (+ foo bar) (princ foo) (princ bar) (princ rez))
>>
>> I think it's hard for the locals to stand out and to see where
>> statements begin.  Clojure at least puts a pair of parentheses around them.
> 
> 
> Just to play devil's advocate, notice that Budden
> means a let-one syntax, which is the same one Paul
> Graham advocates in Arc (http://www.paulgraham.com/arcll1.html),
> so there is at least one famous lisper who
> thinks this is a good idea.

Yeah, he clarified that.  Then again, what's the point of having:
(let1 foo 2 (+ foo 3))

instead of simply:
(+ 2 3)

> I personally would prefer a notation such as
> (let (name1 value1) (name2 value2) ... expr)

Say,
(let (boz (lambda (x) (+ 1 x)))
   (let (foo 1) (bar 2) (boz bar)))

It just looks confusing, I think... let alone that, if I have a proper 
text editor capable of hierarchical/parenthetical editing, I won't be 
able to simply jump over the parameters directly to the body, instead 
having to jump over each parameter individually.  Like Alt+left, 
Alt+down, Alt+up in DrScheme and something similar but more baroque in 
emacs...

-- 
a game sig: http://tinyurl.com/d3rxz9
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49ef03e0$0$22519$607ed4bc@cv.net>
budden wrote:

> So no. I failed to be initiated. This is not my
> religion. I am not a lisper. Lisp is fine, but
> my dream is a creation of a new language. 

Oh, yeah, that really sets you off from Lispers. Won't find them going 
their own way, no siree bob. Meanwhile Pythonistas have created Scheme, 
Arc, Qi, Clojure, newLisp, PLOP... hang on.
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <cbf53012-263e-4b6e-abfc-5ba1389aac08@f19g2000yqh.googlegroups.com>
>Meanwhile Pythonistas have created Scheme,
>Arc, Qi, Clojure, newLisp, PLOP... hang on.
Unfortunately, none of these languages meets my criteria:
Arc does not exist.
Qi - maybe, I don't know. But I like neither ocaml
nor haskell.
Clojure. I don't believe a general-purpose language
should put the accent on purity. Libraries are "dirty" and
they can't be integrated into clojure's transactions.
So one can comfortably write only server code,
but not client code, in clojure. Writing "dirty"
code removes the advantages of purity and turns them into
inconveniences. I could accept a language where a strong
FP subset is loaded into an imperative environment, but
not vice versa.
Scheme... it has to grow before it becomes
a good lisp. I see no reason why keyword arguments are
not standardized. Continuations are cool, but they hurt
interoperability, so it is hard/impossible to write
a good Scheme, say, in Java.
newLisp is very nice (it reminds me of tcl), but it
is limited to an interpreter. PLOP does not exist as code.
Well. Javascript? :)
From: Scott Burson
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <f0452afd-b89a-4b2e-a583-f9a6eac716c7@c18g2000prh.googlegroups.com>
On Apr 21, 11:40 pm, budden <···········@mail.ru> wrote:
> PARENS ARE NOT THERE

Have you tried paredit?  It is an Emacs package that sets things up so
that parentheses are always balanced.  I think it makes editing Lisp
much easier.  So, for example, there are commands to delete the
nearest pair of parens enclosing the cursor, and to extend the current
list forward or backward over the next s-expression, or to undo this
action.  Like anything, paredit takes a little getting used to, but
the investment is quickly repaid if you do much Lisp editing.

I think with paredit you will find that Lisp's parens fade even
further into the background.

-- Scott
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <215ede88-e8e6-4782-8516-bb5beee743d4@d14g2000yql.googlegroups.com>
> Easily done with a macro; you just give the structure a
> gensymmed name and then use FLET and INLINE to make
> sure the accessors have all the names you want.
Woke up in the morning. A new type would be created at every
recompilation, and the type table would be filled with
garbage. I would need to cache identical local structure
definitions to get rid of some of the garbage.
From: ·····@sherbrookeconsulting.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <3606d511-d5e2-45f3-b3d8-ee90763f0dad@q9g2000yqc.googlegroups.com>
On Apr 18, 2:12 am, budden <···········@mail.ru> wrote:

> > Easily done with a macro; you just give the structure a
> > gensymmed name and then use FLET and INLINE to make
> > sure the accessors have all the names you want.

> Woke up in the morning. A new type would be created at every
> recompilation, and the type table would be filled with
> garbage. I would need to cache identical local structure
> definitions to get rid of some of the garbage.

Oh, you have to be a little sleazy, and have the DEFSTRUCT form get
evaluated at macroexpansion time.

Cheers,
Pillsy
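
A sketch of that trick, with the caching budden asked about (names are
hypothetical, and the EVAL-at-macroexpansion-time step is the "sleazy"
part Pillsy mentions):

```lisp
(defvar *local-struct-cache* (make-hash-table :test #'equal))

(defun ensure-local-struct (slots)
  ;; Called at macroexpansion time.  Defines a gensym-named
  ;; DEFSTRUCT for SLOTS on first use; later expansions with the
  ;; same slot list reuse the cached type instead of minting a
  ;; new one at every recompilation.
  (or (gethash slots *local-struct-cache*)
      (let ((name (gensym "LOCAL-STRUCT")))
        (eval `(defstruct ,name ,@slots))
        (setf (gethash slots *local-struct-cache*) name))))
```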
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090415054854.GK4558@gildor.inglorion.net>
On Tue, Apr 14, 2009 at 12:59:43PM -0700, budden wrote:
> > (defun subtract-points (a b)
> >   (vector (- (point-x a) (point-x b))
> >           (- (point-y a) (point-y b))))
> 
> > Sure, you effectively note the types of a and b
> > twice, but then again, you aren't redundantly saying
> > that you're returning a vector twice, or
> > for that matter redundantly having to note that
> > you're returning a value at all.
> "Everything is expression" and progn semantics is an irrelevant
> question here. We're not comparing C and lisp, we compare
> static and dynamic typing. One can easily imagine typed lisp,
> where the function would look like
> 
> (defun subtract-points (a b)
>    (declare (point a b))
>    (vector (- a.x b.x)
>            (- a.y b.y)))

But this has nothing to do with static vs. dynamic typing. Ruby is 
dynamically typed and has dot syntax:

def subtract_points a, b
  Vector.new(a.x - b.x, a.y - b.y)
end

What is different is that Lisp uses prefix syntax for everything, 
whereas many other languages use prefix syntax for some things and infix 
syntax for other things. It is not even true that Lisp requires you to 
keep repeating the type name. You _could_ have used x and y as 
accessors and obtained

(defun subtract-points (a b)
  (vector (- (x a) (x b)) (- (y a) (y b))))

Your earlier objection that this requires dispatch and is thus slower 
does not necessarily hold. First of all, it isn't necessarily true that 
infix syntax eliminates this dispatch. For example, in Java, the 
canonical way to write this would be:

Vector subtractPoints(Vector a, Vector b) {
  return new Vector(a.getX() - b.getX(), a.getY() - b.getY());
}

There is dispatch here to the same extent that there is dispatch in the 
Common Lisp example. But not to worry! It may be possible to do the 
dispatch at compile time, or even inline the operations. This is true of 
both the Java and the Common Lisp example.
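
In Common Lisp, one way to request the compile-time route explicitly
is an INLINE declaration; a sketch, assuming points are simple vectors
(an assumption for illustration, not code from upthread):

```lisp
(declaim (inline point-x point-y))
(defun point-x (p) (svref p 0))
(defun point-y (p) (svref p 1))
;; With the INLINE declamation, a good compiler can open-code these
;; accessors at each call site, leaving no runtime dispatch at all.
```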

> In this (very trivial) example we have reduced
> occurances of "point" from 4 to 2 (or even 1).
> Real-life code is not so trivial and it is likely
> that more repetitions of "point" will be removed.

I find that real-life code, especially in languages with manifest typing 
(such as C and Java), tends to be repetitive in ways that can be 
described by macros. Lisp macros, in particular, go a long way toward 
eliminating repetitive patterns in code, but even C macros are better 
than nothing.

One reason why Lisp macros are so powerful is that Lisp makes them easy 
to understand. This is due at least in part to Lisp's very regular 
syntax.  I'll gladly do a little extra typing in some cases (such as 
writing "(point-x a)" instead of "a.x") to use a language that saves me 
typing in the long run.

To illustrate this point, consider some Java code that I had to write 
last year. It was essentially

  a.setFoo(b.getFoo());
  a.setBar(b.getBar());
  a.setBaz(b.getBaz());
  // and so on
  // for a few classes containing at least 7 properties each

In Lisp, this could have been

  (setf (alpha-foo a) (beta-foo b))
  (setf (alpha-bar a) (beta-bar b))
  (setf (alpha-baz a) (beta-baz b))
  ; and so on

However, it could also have been

  (copy-slots a b
    foo
    bar
    baz
    ; and so on
    )
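
Such a copy-slots is only a few lines of macro; a sketch using
SLOT-VALUE (an assumption for brevity, where the real thing could
expand to the accessor pairs instead):

```lisp
(defmacro copy-slots (dest src &rest slots)
  ;; Expand to one SETF per named slot.
  `(progn
     ,@(loop for slot in slots
             collect `(setf (slot-value ,dest ',slot)
                            (slot-value ,src ',slot)))))
```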

On Real World code, I find cases where macros are a win to be far more 
common than cases where "a.x" instead of "(point-x a)" is a win. Also, 
my experience is that object-orientation is relatively rare when not 
forced on programmers.

Regards,

Bob

-- 
``Those who forget history are doomed to repeat it.''


From: Ray Dillinger
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49e58a56$0$95575$742ec2ed@news.sonic.net>
Robbert Haarman wrote:

> On Real World code, I find cases where macros are a win to be far more
> common than cases where "a.x" instead of "(point-x a)" is a win. Also,
> my experience is that object-orientation is relatively rare when not
> forced on programmers.

I don't find that OO methodology increases my productivity as a lone 
programmer, or even as a member of small groups - but OO is so far the 
only methodology that scales reasonably well to projects involving
hundreds of programmers. 

Also, I'm liking the "a.x" notation, even in Lisp code.  My toy lisp 
dialect uses it for a property-list accessor.  The rule is that every 
symbol contains a namespace in which more symbols may be bound, 
recursively.  This is what it uses, generically, as a "record type" 
or "object type."  So having received "a" as an argument, you 
can refer to "a.x", "a.x.y", "a.x.b", and so on.  The dot accessor 
is both succinct and clear.

But yes, there's a list-based syntax too, expressly for the purpose 
of being usable with macros. You could say (prop a x), (prop a x y) 
or (prop a x b) instead of a.x, a.x.y, or a.x.b.  The dot notation 
isn't a "reader macro" that maps to the same list structure, though; 
although redundant, both are primitive forms for variable references.

I decided a little optional syntax here and there is a good thing, 
even in a Lisp, where it makes the code more clear.

                        Bear
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ef41ca62-2f9d-4167-8abb-e196619c1cf7@r33g2000yqn.googlegroups.com>
> You could say (prop a x), (prop a x y)
> or (prop a x b) instead of a.x, a.x.y, or a.x.b.
> The dot notation isn't a "reader macro" that
> maps to the same list structure, though;
> Although redundant, both are primitive forms
> for variable references
I'm (lazily) working to make a CL reader which
would allow for installing custom symbol name
parsers. So you could convert a.x to (prop a x)
at read time. It looks like the reader itself
works, but some facilities have to be added.
It is not ready for publication yet.
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090425131558.314@gmail.com>
["Followup-To:" header set to comp.lang.lisp.]
On 2009-04-15, Ray Dillinger <····@sonic.net> wrote:
> Robbert Haarman wrote:
>
>> On Real World code, I find cases where macros are a win to be far more
>> common than cases where "a.x" instead of "(point-x a)" is a win. Also,
>> my experience is that object-orientation is relatively rare when not
>> forced on programmers.
>
> I don't find OO methodology increase my productivity as a lone 
> programmer, or even as a member of small groups - but OO is so far the 
> only methodology that scales reasonably well to projects involving
> hundreds of programmers. 

OO may be the only methodology that scales reasonably well when there must be
large numbers of implementations of the same interface, many of which may have
to be added later without recompiling any of the places which use the
interfaces.

Repeating case statements which switch on the type of an object is usually a
non-starter, even in smallish one-man projects. You may be able to afford
editing numerous case statements when a new kind of something is
introduced, but it's still a waste of your effort.

You don't need a large project with hundreds of programmers to realize the
benefit. In fact, a large project with hundreds of programmers may be
well past the point of diminishing returns from OO.
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <381be9e2-3c25-4a9c-8f09-d66550c98750@y9g2000yqg.googlegroups.com>
Hi Bob,

>You _could_ have used x and y as
>accessors and obtained
>
>(defun subtract-points (a b)
>  (vector (- (x a) (x b)) (- (y a) (y b))))
Yes, this is a more correct form of my example.
But I like the a.x syntax, as there are fewer parens.
That really doesn't matter here, though.

> Your earlier objection that this requires dispatch
> and is thus slower does not necessarily hold.
Dispatch is a sensitive point, but it is not the main problem.

> What is different is that Lisp uses prefix syntax for everything,
> whereas many other languages use prefix syntax for some things and infix
> syntax for other things. It is not even true that Lisp requires you to
> keep repeating the type name. You _could_ have used x and y as
> accessors and obtained
>
> (defun subtract-points (a b)
>   (vector (- (x a) (x b)) (- (y a) (y b))))
No, I can't. I have to introduce bindings to x and y lexically.
Consider: a is a point and b is a vector. I need x and y to be
the same objects for a and b. This is not the case in C++.
I don't know about Ruby, but Ruby is at least 10 times slower
than CL. CL can be very fast, and I want to be able to write fast
programs in it with no unnecessary pain.

Alternatively, I can bind x and y with macrolet and make this
macro read type declarations for its arguments and do the
dispatch at compile time. But I can't do that in portable CL; I need
at least some code walker to help me process type declarations.
with-augmented-environment was removed from the CL standard.

> There is dispatch here to the same extent that there is
> dispatch in the Common Lisp example. But not to worry!
> It may be possible to do the dispatch at compile time,
> or even inline the operations. This is true of both the
> Java and the Common Lisp example.
Maybe, I don't know. I never tried to optimize CLOS code for
speed. I see there is no "final" keyword in CL, so inlining
of generics would freeze things in a somewhat unpredictable way.
Also, nothing is guaranteed. I learned that to make code
fast, it is better to avoid CLOS altogether, and I do that. Also,
I dislike that I can just "print" conses and structures but
I have to "inspect" CLOS instances. So CLOS is not the thing
I like to use.

> I find that real-life code, especially in languages with manifest typing
> (such as C and Java), tends to be repetitive in ways that can be
> described by macros.
I'm not comparing languages. I see no reason why a language
can't have macros and at the same time allow for terser code
through proper handling of type declarations. There are such
languages, e.g. Boo and Nemerle. I don't know how good they are
overall, but they do exist. And they even have macros together
with infix syntax.
From: ·····@sherbrookeconsulting.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <6a8e5b5e-ce6a-4d9c-8e6f-adaa16a75fd9@o11g2000yql.googlegroups.com>
On Apr 15, 3:33 am, budden <···········@mail.ru> wrote:
[...]
> Alternatively, I can bind x and y with macrolet and make this
> macro to read type declarations for its arguments and make
> a dispatch at compile time. But I can't do it in a CL, I need
> at least some codewalker to help me process type declarations.
> with-augmented-environment is removed from the CL.

The environments stuff was removed from the CL ANS, but some vendors
provide it. I suggest that you use one of their implementations if you
need it, and hassle the other vendors to let them know they're losing a
customer by not providing it.

Common Lisp has never been limited to what was provided by ANSI in
1994, and it never will be. Don't shun solutions simply because they
aren't as portable as you'd like.

Cheers,
Pillsy
From: ·····@franz.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <bcc4727a-70b4-44cd-bacc-2299c40ed72e@s20g2000yqh.googlegroups.com>
On Apr 15, 7:46 am, ······@sherbrookeconsulting.com"
<·········@gmail.com> wrote:
> On Apr 15, 3:33 am, budden <···········@mail.ru> wrote:
> [...]
>
> > Alternatively, I can bind x and y with macrolet and make this
> > macro to read type declarations for its arguments and make
> > a dispatch at compile time. But I can't do it in a CL, I need
> > at least some codewalker to help me process type declarations.
> > with-augmented-environment is removed from the CL.
>
> The environments stuff was removed from the CL ANS, but some vendors
> provide it. I suggest that you use oen of their implementations if you
> need it, and hassle the other vendors to indicate they're losing a
> customer by not providing it.

Our example: http://www.lispwire.com/entry-proganal-envaccess-des

> Common Lisp has never been limited to what was provided by ANSI in
> 1994, and it never will be. Don't shun solutions simply because they
> aren't as portable as you'd like.

Portability outside the CL spec isn't always easy, especially when
different vendors have different ideas about the right way to extend
the language.  But we do try, and sometimes users group together and
try, using layered unifying libraries as a portability module.  The
Environments Access module perhaps has a better chance than most of
making it into other lisps, because it wasn't invented by any one
vendor; we tried to stay as true as possible to the original CLtL2
specification, so there is less chance of any kind of NIH syndrome.
But hooking up environments as first-class objects is a major chore
for a vendor, so if you want it in the CL you use, you should apply
pressure to your CL vendor of choice and ask that they hook it up to
their lisp implementation; the source for the module is already
available at the referenced site above, and it was verified to work at
a "compilable" level in all of the major CLs of that time.  Some
vendors (e.g. CMUCL) have gone further to hook up some of the
functionality to the compiler so that the kind of access that is
desired can be had by the user.  And of course, I would be remiss if I
didn't state the obvious; the full functionality of the module is
already available in Allegro CL, which can be downloaded for free.

Duane
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <b4b863ed-9626-44e8-8691-164ec0d6077a@k38g2000yqh.googlegroups.com>
> Our example: http://www.lispwire.com/entry-proganal-envaccess-des
Thanks, I've already found it, and I'll certainly take a close look
at it.
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090415200611.GL4558@gildor.inglorion.net>
On Wed, Apr 15, 2009 at 12:33:48AM -0700, budden wrote:
> > What is different is that Lisp uses prefix syntax for everything,
> > whereas many other languages use prefix syntax for some things and infix
> > syntax for other things. It is not even true that Lisp requires you to
> > keep repeating the type name. You _could_ have used x and y as
> > accessors and obtained
> >
> > (defun subtract-points (a b)
> >   (vector (- (x a) (x b)) (- (y a) (y b))))
> No, I can't. I have to introduce bindings to x and y lexically.

Why?

> Consider a is a point and b is a vector. I need x and y be
> the same objects for a and b. This is not the case in C++.

True, but x and y can be generic functions that work for both points and 
vectors.

> I don't know about Ruby, but Ruby is at least 10 times slower
> than CL.

That is, some Ruby implementation is at least 10 times slower than some 
CL implementation on some tasks.

> > There is dispatch here to the same extent that there is
> > dispatch in the Common Lisp example. But not to worry!
> > It may be possible to do the dispatch at compile time,
> > or even inline the operations. This is true of both the
> > Java and the Common Lisp example.
> Maybe, I don't know. I never tried to optimize CLOS code for
> speed. I see there is no "final" keyword in CL, so inlining
> of generics would freeze things in a bit unpredictible way.

Point is, dispatching is done in all the examples I gave:

; Lisp
(x a)

// Java
a.getX()

# Ruby
a.x

In Lisp, there is a single function (x) that dispatches on the type of 
its argument (a). In Java and Ruby, there is a single object (a) that 
dispatches based on the method (getX or x). If you find differences in 
speed, I would say that is because of differences in implementation, 
rather than due to one approach being necessarily slower than the 
others.

> Also, nothing is guaranteed. I learned that to make code
> fast, it is better to avoid CLOS at all and I do that.

That seems perfectly sensible.

> > I find that real-life code, especially in languages with manifest typing
> > (such as C and Java), tends to be repetitive in ways that can be
> > described by macros.
> I'm not comparing languages.

Maybe not, but I think that if you want to discuss what is terser 
or less repetitive, you have to look at the whole picture, unless you 
can make a convincing argument to show that some things are unrelated.

> I see no reason why language can't have macros and at the same time 
> allow for terser code due to right handling of type declarations. 
> There are such languages, e.g. Boo and Nemerle. I don't know how are 
> they good overall, but they do exist. And they even have macros for 
> infix syntax.

Boo and Nemerle have macros? This is new to me.

Still, I wonder where type declarations fit in. Are you saying that type 
declarations are necessary to enable "a.x" syntax? This is not the case; 
for example, in JavaScript, one can do:

var a = { x: 42, y: 15 };
alert(a.x);

Regards,

Bob

-- 
"The only 'intuitive' interface is the nipple. After that, it's all learned."
	-- Bruce Ediger


From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <dc8fdf69-c5f2-4994-be09-e1d6bda6665c@q9g2000yqc.googlegroups.com>
>> > (defun subtract-points (a b)
>> >   (vector (- (x a) (x b))
>> >           (- (y a) (y b))))
>> No, I can't. I have to introduce
>>bindings to x and y lexically.
>Why?
At least there can be a global (x) function.

> True, but x and y can be generic functions
> that work for both points and
> vectors.
See my answer to Pillsy about get. The problem
is the congruent lambda list limitation.
C++ makes every class like a CL package,
so there is no conflict between different
versions of get. Signature overloading is
useful too, but less important.

> > I don't know about Ruby, but Ruby is at least 10 times slower
> > than CL.
>
> That is, some Ruby implementation is at least 10
> times slower than some CL implementation on some tasks.
http://shootout.alioth.debian.org/
Feel free to make Ruby take over. Then I'll think about reviewing
my opinion.

> Point is, dispatching is done in all the examples I gave:
>
> ; Lisp
> (x a)
No. It was your suggestion to make x generic. My suggestion
is to macrolet it to point-x. No dispatch is done then
(at runtime), and it would look like C (and non-virtual methods
in C++).

> // Java
> a.getX()
I don't know, but I guess Java would do the dispatch at compile
time if it is possible.
>
> # Ruby
> a.x
I don't know Ruby at all. But I guess dispatch is always dynamic.
Anyway, when speed is important, Ruby sucks badly. I don't know why.

> Maybe not, but I think that if you want to discuss about what is terser
> or less repetitive, you have to look at the whole picture, unless you
> can make a convincing argument to show that some things are unrelated.
All languages I know are so imperfect that it is better to talk about
concepts, optimization targets and tradeoffs, not about languages.
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090418005608.GO4558@gildor.inglorion.net>
On Fri, Apr 17, 2009 at 01:07:01PM -0700, budden wrote:
> > > I don't know about Ruby, but Ruby is at least 10 times slower
> > > than CL.
> >
> > That is, some Ruby implementation is at least 10
> > times slower than some CL implementation on some tasks.
> http://shootout.alioth.debian.org/
> Feel free to make Ruby take over. Then I'll think about reviewing
> my opinion.
> 
> > Point is, dispatching is done in all the examples I gave:
> >
> > ; Lisp
> > (x a)
> No. It was your suggestion to make x generic. My suggestion
> is to macrolet it to point-x. No dispatch is done then
> (at runtime), and it would look like C (and non-virtual methods
> in C++).
> 
> > // Java
> > a.getX()
> I don't know but I guess Java would do a dispatch at compile
> time if it is possible.
> >
> > # Ruby
> > a.x
> I don't know Ruby at all. But I guess dispatch is always dynamic.
> Anyway, when speed is important, Ruby sucks badly. I don't know why.

Yes. So, in my examples, there is dispatch in all three languages. It is 
the language implementation that determines when this dispatching is 
done and how fast or slow it is.

My point is: speed is always determined by the language implementation. 
The only influence the language proper has is in constraining the 
optimizations an implementation can perform. The best a language can do, 
from a performance point of view, is to put as few restrictions on 
optimization as possible.

Saying that some operation (e.g. dispatch) is slow in language X means 
one of two things:

1. Your implementation of language X isn't as optimized as it could be.

2. Language X prevents the operation from being implemented efficiently.

Usually, it is the former. For example, it is clear to me that many 
implementations of Ruby, JavaScript, and various Lisp dialects aren't 
nearly as fast as they could be. Does this make these languages slow? 
It depends on how you look at it. Strictly speaking, I would say the 
answer is no, because you _could_ write an implementation that was fast. 
Pragmatically speaking, however, you can observe that, with the current 
state of the art, writing your program in a language for which only slow 
implementations exist is going to make it run slowly and so the language 
is indeed slow. But it is important to recognize that, in that case, you 
are talking about language implementations and not about languages 
proper.

As for current Ruby implementations being slower than the fastest Common 
Lisp implementations on the tasks in the Shootout, I bet that has 
everything to do with Ruby implementations not having received the 
same amount of performance-enhancing work that has gone into said 
Common Lisp implementations.

Regards,

Bob

-- 
Computers have made our lives much more efficient.
Now we can do many more useless things in one day.


From: Ray Dillinger
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49e93e43$0$95484$742ec2ed@news.sonic.net>
Robbert Haarman wrote:

> Saying that some operations (e.g. dispatch) is slow in language X means
> one of two things:
> 
> 1. Your implementation of language X isn't as optimized as it could be.
> 
> 2. Language X prevents the operation from being implemented efficiently.
> 
> Usually, it is the former. For example, it is clear to me that many
> implementations of Ruby, JavaScript, and various Lisp dialects aren't
> nearly as fast as they could be. Does this make these languages slow?
> It depends on how you look at it. Strictly speaking, I would say the
> answer is no, because you _could_ write an implementation that was fast.
> Pragmatically speaking, however, you can observe that, with the current
> state of the art, writing your program in a language for which only slow
> implementations exist is going to make it run slowly and so the language
> is indeed slow. But it is important to recognize that, in that case, you
> are talking about language implementations and not about languages
> proper.

One particular problem that impairs performance in a toy lisp system 
I work on occasionally is that symbols ("objects" for OO designers) 
have attributes ("members") whose memory layout is not easily derived 
from their type specification.  

For example, you have a type specification named type1 which says symbols
legal to pass to procedures expecting a type1 must have attributes named 
foo, bar, and baz.  Simple so far.  Now you have another type specification
named type2 which says symbols legal to pass to procedures expecting a 
type2 must have attributes named foo, foobar, and zot.  Still simple.   
Now you have a symbol (object) which is supposed to be both a type1 and 
a type2 (multiple inheritance).  Not simple anymore. 

In a C-like system where each thing (struct) conforms to exactly one 
type specification (struct declaration), a compiler creating executable
code for a routine that expects a struct of a particular type can 
calculate the offset into the struct of each field, and the offset 
into the activation frame of each struct, and therefore replace each 
reference to a struct attribute (member) in the routine with a simple 
relative pointer offset into the activation frame.  In addition to 
the 'win' of not having to calculate these addresses at runtime, 
there's a performance 'win' of short relative indirections with 
guaranteed cache hits rather than long relative indirections into 
general heap space. But structs conforming to multiple declarations 
are not supported by these semantics. 

In a dynamic system where symbols can conform to multiple type
specifications life is not so simple.  There's no straightforward way 
to calculate the address of attributes 'foo', 'baz', and 'zot' within 
a symbol that is consistent with the appearance of foo and baz in 
type1, and foo and zot's appearance in type2, and the appearance of 
all three in the union type.  So you must calculate at runtime a 
table of addresses given the keys (name hashes) of these attributes 
and a table within the actual symbol presented as an argument. This 
makes procedure calls costly (and inlining, TCO, etc., a big win) 
where symbols having attributes are involved because all the attribute
references made by the routine have to be resolved at runtime.  

Further, because the symbols themselves only have pointers to the 
values (a requirement for the easy implementation of the keyed 
attribute lookup tables) the values themselves are at locations
'somewhere out there' in the heap.  The compiled code refers to 
entries in a pointer table at a known location relative to the 
frame pointer, rather than to values at known locations relative 
to the frame pointer, so there's an additional, possibly non-local,
indirection with each attribute reference as well as the cost of 
resolving and filling in the table entries once per call.  

Anyway....  I can't find a way around the increased cost of 
referring to attributes within symbols in a system with multiple 
inheritance (especially with multiple inheritance and runtime-
mutable type specifiers).  So I'm thinking this is a performance 
cost of the language rather than a performance cost of the particular
implementation.  

Naturally, if you have any better ideas about implementation, I'd be 
glad to hear 'em. 

                                Bear
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090427173448.425@gmail.com>
On 2009-04-18, Robbert Haarman <··············@inglorion.net> wrote:
> Saying that some operations (e.g. dispatch) is slow in language X means 
> one of two things:
>
> 1. Your implementation of language X isn't as optimized as it could be.
>
> 2. Language X prevents the operation from being implemented efficiently.
>
> Usually, it is the former.

It doesn't have to be just one of these two things, but a mixture of the
issues.

Problems in 1 mask problems in 2, because whereas 2 imposes some upper
bounds on theoretical performance, where those upper bounds lie is not
obvious as long as the implementations suck due to issue 1.

> For example, it is clear to me that many 
> implementations of Ruby, JavaScript, and various Lisp dialects aren't 
> nearly as fast as they could be. Does this make these languages slow? 
> It depends on how you look at it. Strictly speaking, I would say the 
> answer is no, because you _could_ write an implementation that was fast. 
> Pragmatically speaking, however, you can observe that, with the current 
> state of the art, writing your program in a language for which only slow 
> implementations exist is going to make it run slowly and so the language 
> is indeed slow. But it is important to recognize that, in that case, you 
> are talking about language implementations and not about languages 
> proper.
>
> As for current Ruby implementations being slower than the fastest Common 
> Lisp implementations on the tasks in the Shootout, I bet that has 
> everything to do with Ruby implementations not having received the 
> same amount of performance-enhancing work that has gone into said 
> Common Lisp implementations.

Problem is that, Ruby being the mess that it is, such performance-enhancing
work may require Ruby to be redesigned, such that the enhancements appear in a
``Ruby 2'' which is not compatible with all Ruby 1 programs.

Some languages really are hostile toward efficient implementation. They get
that way because people pile hacks on top of hacks just to get functionality to
work, without caring about the goal of ``how will a compiler deal with this,
when one day someone tries to make one''.

For instance, if the people who work with that language are proud of their
monkey patching hacks, what will the compiler do when it has to suspect that
everything and anything may have been monkey patched? Either it preserves the
wacky interpreted semantics under compiling, thereby doing a crappy job of
compiling, or it breaks the interpreted semantics, or the language is changed
entirely. The language users have to live with better, but still poor,
performance; interpreted semantics which are different (or undefined entirely)
under compiling; or compatibility disruptions.
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <7bcc9485-9efc-44d2-803a-5638aaaa438b@v15g2000yqn.googlegroups.com>
> > 1. Your implementation of language X isn't as optimized as it could be.
>
> > 2. Language X prevents the operation from being implemented efficiently.
>
> > Usually, it is the former.
>
> It doesn't have to be just one of these two things, but a mixture of the
> issues.
>
> Problems in 1 mask problems in 2, because whereas 2 imposes some upper
> bounds on theoretical performance, where those upper bounds lie is not
> obvious as long as the implementations suck due to issue 1.
Thanks, Kaz, here and elsewhere you have expressed my point in detail.
I agree with you.
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090418091415.GP4558@gildor.inglorion.net>
On Sat, Apr 18, 2009 at 01:13:46AM +0000, Kaz Kylheku wrote:
> On 2009-04-18, Robbert Haarman <··············@inglorion.net> wrote:
> > Saying that some operations (e.g. dispatch) is slow in language X means 
> > one of two things:
> >
> > 1. Your implementation of language X isn't as optimized as it could be.
> >
> > 2. Language X prevents the operation from being implemented efficiently.
> >
> > Usually, it is the former.
> 
> It doesn't have to be just one of these two things, but a mixture of the
> issues.

Of course.

> > As for current Ruby implementations being slower than the fastest Common 
> > Lisp implementations on the tasks in the Shootout, I bet that has 
> > everything to do with Ruby implementations not having received the 
> > same amount of performance-enhancing work that has gone into said 
> > Common Lisp implementations.
> 
> Problem is that, Ruby being the mess that it is, such performance-enhancing
> work may require Ruby to be redesigned, such that the enhancements appear in a
> ``Ruby 2'' which is not compatible with all Ruby 1 programs.

Yes. But, as far as I can see, the ways in which Ruby is, in your words, 
"a mess", are largely also ways in which Common Lisp is "a mess". 
Specifically, everything can be redefined at run-time. This makes it 
very difficult to optimize things, because virtually nothing can be 
assumed.

Of course, Common Lisp has declarations, which allow the compiler to 
assume that what is stated in the declaration is correct. Current Ruby 
does not feature declarations, even though it would be easy to add them 
without breaking existing programs, and thus Ruby implementations cannot 
perform the optimizations that declarations allow.

Regards,

Bob

-- 
I'm a dyslexic agnostic with insomnia... I lie awake at night wondering
if there really is a dog!


From: Drew Crampsie
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <e9bfa9b1-2387-469d-a387-cecd89521dd6@r31g2000prh.googlegroups.com>
On Apr 13, 11:58 pm, budden <···········@mail.ru> wrote:
> > Heck, it doesn't work right, or it's still too prolix for
> > you, write your own WITH-STRUCT macro that does what you
> > want. It should take you all of five minutes to make
>
> (with-struct (field1 field2) (x struct1)
>   (+ field1 field2))
>
> Now I need to mention field1 and field2 extra time and
> I get another level of nesting. I gained nothing.
>
> If it was at least
>
> (with-struct (x struct1)
>   (+ field1 field2))
>
> This was a gain (if we ignore unnecessary nesting level).
> But it is impossible to do portably AFAIK.

What's wrong with, say, using a defstruct* over defstruct and going
with something like:

(defvar *defstruct-forms* (make-hash-table))

(defmacro defstruct* (&body defstruct-form)
  (setf (gethash (if (listp (car defstruct-form))
		     (first (car defstruct-form))
		     (car defstruct-form))
		 *defstruct-forms*)
	defstruct-form)
  `(defstruct ,@defstruct-form))

(defun structure-field-names (struct-name)
  (mapcar (lambda (name)
	    (if (listp name)
		(first name)
		name))
	  (cdr (gethash struct-name *defstruct-forms* ))))

(defun structure-conc-name (struct-name)
  (let ((form (gethash struct-name *defstruct-forms* )))
    (or  (when (listp (car form))
	   (loop
	      :for options :in (cdr (car form))
	      :if (and (listp options) (eq (car options) :conc-name))
	      :do (return (car (cdr options)))))
	 (car form))))

(defmacro with-structure-fields ((struct-name &rest vars-bound-to-instances-of-struct-name) &body body)
  (let ((field-names (structure-field-names struct-name))
	(conc-name (structure-conc-name struct-name))
	(var (first vars-bound-to-instances-of-struct-name)))
    (if var
	`(symbol-macrolet
	     ,(loop for field in field-names
		 collect `(,(intern (format nil "~A.~A" var field))
			    (,(intern (format nil "~A-~A" conc-name field)) ,var)))
	   (with-structure-fields (,struct-name ,@(rest vars-bound-to-instances-of-struct-name))
	     ,@body))
	`(progn ,@body))))


> And what if
> I have two instances of struct1? Say, I have
>
> (defstruct point x y)
>
> and want to subtract two points to get vector.
> In a C, this would be like
>
> vector *subtract_points(point *a,point *b) {
>   return make_vector(a->x - b->x, a->y - b->y);
>
> }

ok, i covered that case too :

CL-USER> (defstruct* point x y)
POINT

CL-USER> (let ((a (make-point :x 3 :y 4))
	       (b (make-point :x 1 :y 2)))
	   (with-structure-fields (point a b)
	     (cons (- a.x b.x) (- a.y b.y))))
(2 . 2)

This isn't rocket surgery really, it's a trivial recursive macro using
symbol-macrolet. If you want to avoid the extra level of nesting,
write a STRUCTURE-LET binding construct. Want to use a struct not
defined by defstruct*? (setf (gethash struct-name *defstruct-forms*)
(cons struct-name field-names)) or whatever.... the docs for use of
that struct should give you all you need.

I'll bet it took me significantly less time to write the 35 lines that
make up WITH-STRUCTURE-FIELD than it took you to write all these c.l.l
articles saying it can't be done ;).

Cheers,

drewc
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <72f9d779-45a0-4332-af0c-911a7f87e759@z9g2000yqi.googlegroups.com>
>I'll bet it took me significantly less
>time to write the 35 lines that
>make up WITH-STRUCTURE-FIELD
If you try to put a test of your
macro into a single file, then start with
a fresh lisp and compile the file, you'll
be surprised.
From: Drew Crampsie
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <6ae39747-6ac0-4923-a207-0c71d970dba2@y34g2000prb.googlegroups.com>
On Apr 17, 1:11 pm, budden <···········@mail.ru> wrote:
> >I'll bet it took me significantly less
> >time to write the 35 lines that
> >make up WITH-STRUCTURE-FIELD
>
> If you'll try to put a test of your
> macro to a single file, then start with
> fresh lisp and compile the file, you'll
> be surprised.

Surprised that you don't understand enough lisp to know that would
require an EVAL-WHEN? no, i'm not in the least bit surprised.

Cheers,

drewc
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <28424c17-ec2f-441b-99bd-85aecfe3541e@q9g2000yqc.googlegroups.com>
> Surprised that you don't understand enough lisp to know that would
> require an EVAL-WHEN? no, i'm not in the least bit surprised.

http://groups.google.com/group/comp.lang.lisp/msg/195d6aa450f259ca

In this message from the current thread you can find lisppaste and
test
for it which uses eval-when and works fine with compilation.

You suggested a 35 LOC "solution" which does not work with compilation,
and you have attacked me twice despite the fact that I published a more
complete solution before.

I think if you are strong, honest, and honor yourself, you will admit
your failure and apologize for your stupid attacks.

And then it would be nice if you took the time to find out and describe
the pitfall of using eval-when this way.
From: Drew Crampsie
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <5a264045-e245-4a03-b607-927fc7e3aac7@d7g2000prl.googlegroups.com>
On Apr 20, 4:10 am, budden <···········@mail.ru> wrote:

> > Surprised that you don't understand enough lisp to know that would
> > require an EVAL-WHEN? no, i'm not in the least bit surprised.
>
> http://groups.google.com/group/comp.lang.lisp/msg/195d6aa450f259ca
>
> In this message from the current thread you can find a lisppaste, and
> a test for it, which uses eval-when and works fine with compilation.
>
> You suggested a 35 LOC "solution" which does not work with compilation,
> and you have attacked me twice despite the fact that I published a more
> complete solution before.

Attacked? no, i've merely pointed out that it is trivial to provide
what you desire in common lisp. You are getting quite defensive over
what should be a simple technical discussion. The fact is, if you
don't like compilation, you are free to use an interpreted language.

my solution works fine with compilation... it's budden who doesn't
seem to want to work with compilation. In order for a compiler to
work, it must have some static knowledge of the data structures you
plan to use. If you can't accept this, but still want to gain the
speed that static analysis gives, there is nothing any programming
language can do for you.

> I think if you are strong, honest, and honor yourself, you will admit
> your failure and apologize for your stupid attacks.

Well, we must define those words differently, or you think wrong.

You keep going on about how performance is important to you, yet you
will not give a single concession to the compiler so that it may
produce fast code. You want speed, but won't order your code to allow
for that. And you perceive advice as attacks. Methinks the lady doth
protest too much.

I'm starting to think that you just like complaining, and do not have
any real problems to solve. That's fine, but i don't think i'll be
wasting my time trying to address your complaints.

> And then it would be nice if you took the time to find out and describe
> the pitfall of using eval-when this way.

It would be nice if you'd take the time to learn about the problems
you discuss. If you want speed, you compile. If you want compilation,
structure your source files in such a way that the compiler can use
them. If you don't want speed, but rather just want to complain about
something, please continue, i just thought i'd try and set you
straight before giving up and plonking.

Cheers,

drewc
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <97e5ae75-0207-4a99-b659-bf5dc2f1056c@g19g2000yql.googlegroups.com>
Well, I see you're not strong enough to admit that
you were wrong. So, farewell.
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49e39ca0$0$27759$607ed4bc@cv.net>
budden wrote:
> Well, I think I can add my two kopeykas.
> CL has some disgusting features.
> Here is one of them:
> 
> (defstruct struct1 field1 field2)
> 
> (defvar x (make-struct1 :field1 1 :field2 2))
> (+ (struct1-field1 x) (struct1-field2 x))
> 
> Note I need to type struct1 three times in
> the code which uses the type.
> 
> No, don't tell me to use :conc-name nil.
> Because next line might be
> (defstruct struct2 field2 field3)
> 
> So, :conc-name nil is not scalable.

Use :conc-name s1-, bozo.
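[For reference, :conc-name simply replaces the default accessor
prefix, leaving the struct name and constructor unchanged: -- ed.]

```lisp
;; The accessors become S1-FIELD1 and S1-FIELD2 instead of
;; STRUCT1-FIELD1 and STRUCT1-FIELD2.
(defstruct (struct1 (:conc-name s1-)) field1 field2)

(defvar *x* (make-struct1 :field1 1 :field2 2))
(+ (s1-field1 *x*) (s1-field2 *x*))   ; => 3
```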

hth, kenny
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <a6316b12-812f-4489-8d12-c2cd3464c7d8@c9g2000yqm.googlegroups.com>
Hi Kenny,

Can you agree that inventing and keeping in mind two names for
the same thing is unnecessary work?

And it is not scalable either. Someday I'll merge another library
into my project, and there s1- will be used too. What will I do then?

my-package:s1-field1
another-package:s1-field1

Nice!

It is just a workaround. Not super-ugly, but it offends my
sense of style.
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49e461a1$0$5908$607ed4bc@cv.net>
budden wrote:
> Hi Kenny,
> 
> Can you agree that inventing and keeping in mind two names for
> the same thing is an unnecessary work?

Hunh? What two things? er, what two names? er, hunh?

> 
> And it is not scalable too. Someday I'll merge another library
> to my project and here s1- will be used too. What I'll do then?

You will be astonished someone besides you was daft enough to use the 
name "struct1" for a struct. In the context of name clashes, it is 
meaningless to worry about made-up names. They clash, real ones do not 
more than once every two years worth of heads-down non-stop programming.

Mind you, if I load two different physics engines at once into the same 
Lisp image I might have clashes more often but Tilton's Law would ride 
to the rescue: Solve the right problem, stop loading two libraries 
covering the same domain.

kt
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <58ef9590-c3bb-4e00-af0a-d3962507c15d@37g2000yqp.googlegroups.com>
> Hunh? What two things? er, what two names? er, hunh?
I mean the full name and the short prefix. These are two names
for the same thing.

> You will be astonished someone besides you was daft enough to use the
> name "struct1" for a struct.
struct1 is a rather long sequence of characters. A conflict is less
likely than in the case of a short prefix like s1. If I limit myself
to two-character prefixes, a clash is sure to occur soon.
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49e4e962$0$27778$607ed4bc@cv.net>
budden wrote:
>> Hunh? What two things? er, what two names? er, hunh?
> I mean full name and short prefix. These are two names
> for the same thing.

You mean like Co and Company? Inc and Incorporated? Heartbreaking. 
Absolutely unmanageable.

> 
>> You will be astonished someone besides you was daft enough to use the
>> name "struct1" for a struct.
> struct1 is a rather long sequence of characters. A conflict is less
> likely than in the case of a short prefix like s1. If I limit myself
> to two-character prefixes, a clash is sure to occur soon.

Try three. <sigh>

hth, kenny

ps. Anyone else reminded of one of those dogs that won't let go of a 
stick? You can lift it off the ground, swing it around, yer not gettin 
that stick? Always adorable. k
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <91379361-be97-48c7-8572-8ee16547e559@v15g2000yqn.googlegroups.com>
> You mean like Co and Company?
> Inc and Incorporated? Heartbreaking.
> Absolutely unmanageable.
Inc has three letters.

> Try three. <sigh>
Heh. You gave up your Word?
From: Pillsy
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <9be952bf-6b6d-4b5c-afc8-62fb29d90df1@m24g2000vbp.googlegroups.com>
On Apr 13, 3:19 pm, ·····@franz.com wrote:

> On Apr 13, 10:31 am, Pillsy <·········@gmail.com> wrote:
[...]
> > Well, I'm a dynamic-typing enthusiast, and I'm not sure that that's
> > obvious to me, although that may be because it's not clear to me where
> > the boundary between "no-typing" and "weak-typing" (whether static or
> > dynamic) is. I can at least see a case for the idea that adding the
> > integer 1 to the string "2" ought to return the string "3".

> If you thought about it a little deeper, though, you wouldn't really
> want that idea to be automatic.  In a language where a + operator can
> be thus overloaded, a program could indeed provide that concept of
> casting a string into an integer, either with static or dynamic
> typing.  But what would stop a language from interpreting the above
> addition and presenting "12" or "21" as results instead?  

Nothing at all, really. It would clearly be a bad idea for a language
to not pick a single well-defined thing to do in this situation, but
as long as it does one thing and that one thing is reasonably useful,
then it may be defensible in some situations. I'm having trouble
thinking of cases where the pointer-arithmetic approach would be what
a user would want, but that's because "mucking with pointers" and
"sloppy programming" go together like peanut butter and broken glass.
[...]
> Giving the language the express permission to perform this kind of
> overloading is definitely not incompatible with dynamic typing
> paradigms, and allowing the language to select the default behavior
> of something undefined like your example would be more of a case of
> weak- or no-typing.

Yes, and I'm just wondering if weak typing isn't the right thing in
some instances. Not in the instances where I'm doing the sort of
things I do with Common Lisp, surely, but maybe the instances where I
do the sort of things that I do with one-liners at a bash prompt.
There are hard problems you need to solve in a robust way, but there
are easy problems you can get away with solving in a brittle way,
too....

> > There are a lot of problem domains out there, and in some of them,
> > it's a win to let people write sloppy programs that just sort of do
> > the right thing (or at least some arguably right thing) in situations
> > like that.

> I prefer to make a distinction between "sloppy" programming and "lazy"
> programming.

I do too. :)

> A good programmer is a lazy programmer, but a sloppy programmer very
> seldom writes good programs.

No, but sometimes bad programs are all you need to get a specific job
done. Or you're a bad programmer and you need to write a program
anyway.

Languages that tell you that 1 + "2" == 3, or that 1 + "2" == "12",
are hardly unpopular, after all. They're good enough for some people
in some circumstances, in much the way that dashing off a three
sentence email with indifferent spelling and no capital letters is
sometimes OK, or the way that sometimes you prop up a table leg with a
piece of cardboard instead of, um, doing something with, um, sanders
and levels.

I like to think of myself as a passable programmer, but I'm a total
bust as a carpenter. It doesn't mean my furniture never wobbles....

Cheers,
Pillsy
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090413182528.GE4558@gildor.inglorion.net>
On Mon, Apr 13, 2009 at 10:31:21AM -0700, Pillsy wrote:
> On Apr 13, 1:09 pm, ·····@franz.com wrote:
> [...]
> > I think it should be obvious to both static typing enthusiasts and
> > dynamic typing enthusiasts that the no-typing paradigm is not a good
> > one, but it must be mentioned because often static-typing enthusiasts
> > mistake dynamic-typing for no-typing.
> 
> Well, I'm a dynamic-typing enthusiast, and I'm not sure that that's
> obvious to me, although that may be because it's not clear to me where
> the boundary between "no-typing" and "weak-typing" (whether static or
> dynamic) is. I can at least see a case for the idea that adding the
> integer 1 to the string "2" ought to return the string "3".

But that is weak typing, or, perhaps, overloading. Not what I would call 
no typing.

I don't know what other people think of when they read "no typing", but 
I think of things like machine code, assembly language, Forth, untyped 
lambda calculus, or the Unix shell. The common theme here is that these 
languages don't have a concept of types. What you may think of as 
"integers" or "strings" or "pointers to employee structs" is all the 
same to these languages.

In assembly language, for example, you can happily add the first four 
bytes of a string to a pointer to an employee struct, xor the result 
with 0x5e5e5e5e, and then perform a function call to whatever you get 
out of that. The type system certainly isn't going to stop you, because 
there isn't one.

Regards,

Bob

-- 
You are in a twisting maze of passages, all alike.

From: Mark Wooding
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <87y6u4vzgp.fsf.mdw@metalzone.distorted.org.uk>
Pillsy <·········@gmail.com> writes:

> Well, I'm a dynamic-typing enthusiast, and I'm not sure that that's
> obvious to me, although that may be because it's not clear to me where
> the boundary between "no-typing" and "weak-typing" (whether static or
> dynamic) is. I can at least see a case for the idea that adding the
> integer 1 to the string "2" ought to return the string "3".

The only way you could make that work is by doing type checks -- either
compile-time overloading or run-time dispatch.

Forth is a truly untyped language.  Storage consists of undifferentiated
words.  The available operators interpret these words in various ways as
instructed, but it is the programmer's responsibility to invoke the
correct operators corresponding to the data represented.  For example,
there are distinct integer and floating-point addition operators.
Nothing other than common sense will stop you from applying the integer
addition operator to floating point numbers.  Except that this isn't a
well-defined concept given the typelessness: let us say more precisely,
then, that nothing will stop you from applying the integer addition
operator to words which previously you had interpreted as representing
floating point numbers.

(It's interesting that the very typelessness of Forth brings to the fore
issues of value and representation which can usually be neglected in
languages with type systems, whether static or dynamic.)

-- [mdw]
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090413174109.GD4558@gildor.inglorion.net>
On Mon, Apr 13, 2009 at 10:09:40AM -0700, ·····@franz.com wrote:
> One thing that gives Lisp enthusiasts their enthusiasm is the idea of
> "correct and continue" style of programming.

Exactly. I feel this is badly undervalued by people who have never 
experienced it. As are many other features of Lisp (Common Lisp, in 
particular).

Regards,

Bob

-- 
An opinion should be the result of thought, not a substitute for it.
	-- Jef Mallett


From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <13rejd4j4u9gd$.15sguqfy616ly.dlg@40tude.net>
On Mon, 13 Apr 2009 10:09:40 -0700 (PDT), ·····@franz.com wrote:

> For the sake of our friend Dmitry, and if you also want to be
> complete, you would also have to include a definition for "no
> typing" (or weak typing) where instead of being caught at compile-time
> or run-time, type errors may _not_ be caught.

That is a common misconception. What *you* call dynamic typing (run-time
checks) does not catch type errors. Instead, it transforms them into
some well-defined behavior.

Weak typing in a *type safe* language is basically the same thing. The
difference is that the behavior upon failed compile- or run-time check is
less rigid. Instead of exception propagation, a weakly-typed language may
allow interpretation as another type (implicit conversion) etc.

I, and I presume Robert too, disagree with the narrow definition of
dynamic typing as run-time checks. Dynamic typing is late binding to
types. The most important feature of dynamic typing is dispatch, and thus
support of generic programming through classes of types. It becomes most
useful in combination with static typing, where classes of types are
associated with some types and so checked statically. In particular, this
makes it possible to guarantee that dynamic dispatch will never fail at
run-time.

Weakly typed languages can be statically or dynamically typed in our
definition of static/dynamic typing.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: ·····@franz.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <21b42fda-0141-42d4-9451-d453c7c997a6@y13g2000yqn.googlegroups.com>
On Apr 13, 11:42 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Mon, 13 Apr 2009 10:09:40 -0700 (PDT), ·····@franz.com wrote:
> > For the sake of our friend Dmitry, and if you also want to be
> > complete, you would also have to include a definition for "no
> > typing" (or weak typing) where instead of being caught at compile-time
> > or run-time, type errors may _not_ be caught.
>
> That is a common misconception.

Ah, yes; it's the old "condescension argument".  I use it myself from
time to time.  Let's examine your use of the word "misconception".  If
you trust Wikipedia, it gives a reasonable definition for
misconception at http://en.wikipedia.org/wiki/Misconception:

 "A misconception happens when a person believes in a concept which is
objectively false."

Now let's suppose my concept is objectively false.  How have you
proved your case objectively?  What definitions have you provided?
I've not seen any definitions from you since the time of my last post
asking you to define your terms.  So the answer is: you haven't shown
any case for a common misconception.  Instead, you've relied on your
own assumptions about definitions that you hold, and are not making
those assumptions explicit.  Yet, since the definitions are themselves
not necessarily common, the real misconception is yours, in that you
believe that I (or Robert, for whom I don't pretend to speak) would
come to the same conclusion as you.  How can we, when you haven't
stated your assumptions?

> What *you* call dynamic typing (run-time
> checks) does not catch type errors. Instead, it transforms them into
> some well-defined behavior.

Here you have not defined "catch" or "type error" or "transform",
which could cause your misconception of what this conversation is
about.  If you google on "type error" you'll get a huge wealth of
knowledge, including Type I and Type II errors, type errors for ML,
type-errors for CL, as well as exceptions in Python and CL.  I don't
know if you are reading this on c.l.l/c.l.s or on c.l.m, but since
this thread was started in a Lisp setting (based on the presentation
of the paper with the subject of this thread and its recent
presentation at the International _Lisp_ Conference) the definition of
type error must at least be either consistent with a Lisp definition
of type-error (which can be found here
http://www.franz.com/support/documentation/8.1/ansicl/dictentr/type-err.htm
or here http://www.lispworks.com/documentation/HyperSpec/Body/e_tp_err.htm#type-error)
or with some Scheme definition of it.  However, your usage of the
phrase "catch type errors" does not allow CL type errors, and so your
definitions are at odds with the CL community.  It seems ironic that a
"Common" Lisp community might have a "common" misconception about
something it defines very well.  I've said it before, and I'll say it
again for your sake: state your own definitions and assumptions before
you argue your case.
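[Concretely: a CL type error is a first-class, catchable condition,
which is one perfectly good sense of "catching a type error" at run
time. A minimal illustration: -- ed.]

```lisp
;; In safe code, (+ 1 "2") signals a TYPE-ERROR; HANDLER-CASE can
;; catch it and continue running.
(handler-case (+ 1 "2")
  (type-error () :caught))   ; => :CAUGHT
```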

> Weak typing in a *type safe* language is basically the same thing.

Unless I misunderstand this statement, you're saying here that weak-
typing in a type-safe language is equivalent to dynamic-typing. If
this is not the case, correct me now by rewording your statement.

If so though, another look at Wikipedia shows where your insistence on
assuming definitions can get you into trouble.  Looking at
http://en.wikipedia.org/wiki/Type_safety, the very first sentence
should alert you to the need to define your terms: "In computer
science, type safety is a property of some programming languages that
is defined differently by different communities", and the first
sentence in the second paragraph directly contradicts any assertion
that dynamic-typing cannot also be type-safe: "The enforcement can be
static, catching potential errors at compile time, or dynamic,
associating type information with values at run time and consulting
them as needed to detect imminent errors, or a combination of both."

> The difference is that the behavior upon failed compile- or run-time check is
> less rigid. Instead of exception propagation, a weakly-typed language may
> allow interpretation as another type (implicit conversion) etc.

That statement would be apropos if we had been talking about type-
unsafe programming, but we're not.

> I and, I presume, Robert too disagree with narrow definition of dynamic
> typing as run-time checks. Dynamic typing is late binding to types. The
> most important feature of dynamic typing is dispatch, and thus support of
> generic programming through classes of types. It becomes most useful in
> combination with static typing where classes of types are associated with
> some types and so checked statically. In particular, this allows to
> guarantee that dynamic dispatch will never fail at run-time.
>
> Weakly typed languages can be statically or dynamically typed in our
> definition of static/dynamic typing.

Before you presume to know about dynamic typing (especially that such
languages must be weakly typed) you should first learn a language like
CL which is a strongly typed language that supports both static and
dynamic typing (i.e. strongly).

Duane
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <n4ynwxwguth8.1koldhn3ulvwe.dlg@40tude.net>
On Mon, 13 Apr 2009 16:12:24 -0700 (PDT), ·····@franz.com wrote:

> On Apr 13, 11:42�am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> On Mon, 13 Apr 2009 10:09:40 -0700 (PDT), ·····@franz.com wrote:
>>> For the sake of our friend Dmitry, and if you also want to be
>>> complete, you would also have to include a definition for "no
>>> typing" (or weak typing) where instead of being caught at compile-time
>>> or run-time, type errors may _not_ be caught.
>>
>> That is a common misconception.
> 
> Ah, yes; it's the old "condescension argument".  I use it myself from
> time to time.  Let's examine your use of the word "misconception".  If
> you trust Wikipedia, it gives a reasonable definition for
> misconception at http://en.wikipedia.org/wiki/Misconception:
> 
>  "A misconception happens when a person believes in a concept which is
> objectively false."

A reasonable definition.

> Now let's suppose my concept is objectively false.  How have you
> proved your case objectively?

That is very easy. If "error" means illegal as it does in the case of "type
error" in a statically typed language, then, trivially, a failed type check
at run-time is not that kind of error. So your conception of run-time
checks for type errors is inconsistent and thus objectively false. You
could try to give another definition of error, but then it would fail for
the case of static typing.

> What definitions have you provided?

Do you need a formal definition of type? Do you intend to discuss it at a
formal level? I doubt it would be appropriate in a news group.

> I've not seen any definitions from you since the time of my last post
> asking you to define your terms.  So the answer is: you haven't shown
> any case for a common misconception.

See above.

>> What *you* call dynamic typing (run-time
>> checks) does not catch type errors. Instead, it transforms them into
>> some well-defined behavior.
> 
> Here you have not defined "catch" or "type error" or "transform",
> which could cause your misconception of what this conversation is
> about.

That was not a formalized statement, rather it was an informal explanation
of what happens. Let us not play word games, otherwise I will require you
to formally define each English word you would use.

> It seems ironic that a
> "Common" Lisp community might have a "common" misconception about
> something it defines very well.

Or that Common Lisp itself is a common misconception. Sorry, I could not
resist. (:-))

Anyway, talking about type systems we obviously should ignore definitions
given by particular languages. What C++ calls class is a type. So what?

>> Weak typing in a *type safe* language is basically the same thing.
> 
> Unless I misunderstand this statement, you're saying here that weak-
> typing in a type-safe language is equivalent to dynamic-typing.

Yes, because you can treat a name as if the object bound to the name had
several distinct types.

> If so though, another look at Wikipedia shows where your insistence on
> assuming definitions can get you into trouble.  Looking at
> http://en.wikipedia.org/wiki/Type_safety, the very first sentence
> should alert you to the need to define your terms: "In computer
> science, type safety is a property of some programming languages that
> is defined differently by different communities",

It should have alerted you when you asked for a definition of
"weakly-typed." If you weren't prepared to discuss *any* definition just
because it would be [necessarily] controversial to some language manuals,
then why did you ask?

>> The difference is that the behavior upon failed compile- or run-time check is
>> less rigid. Instead of exception propagation, a weakly-typed language may
>> allow interpretation as another type (implicit conversion) etc.
> 
> That statement would be apropos if we had been talking about type-
> unsafe programming, but we're not.

No, it is type safe, because no object is interpreted falsely. Type
conversion is well-defined. Example: PL/1 is safely weakly-typed.

An example of type unsafe language is FORTRAN-IV where you can pass
INTEGER*4 to where REAL*4 is expected and handle it there as REAL*4 without
any conversion.

Note that any type unsafe language can be made "safe" merely by introducing
implicit rubbish conversions like *(T*)& in C. But FORTRAN-IV did not do
even this.

>> I and, I presume, Robert too disagree with narrow definition of dynamic
>> typing as run-time checks. Dynamic typing is late binding to types. The
>> most important feature of dynamic typing is dispatch, and thus support of
>> generic programming through classes of types. It becomes most useful in
>> combination with static typing where classes of types are associated with
>> some types and so checked statically. In particular, this allows to
>> guarantee that dynamic dispatch will never fail at run-time.
>>
>> Weakly typed languages can be statically or dynamically typed in our
>> definition of static/dynamic typing.
> 
> Before you presume to know about dynamic typing (especially that such
> languages must be weakly typed) you should first learn a language like
> CL which is a strongly typed language that supports both static and
> dynamic typing (i.e. strongly).

Should I respond with recommendations of what you should study? If you
have nothing to say to the substance of the argument, then just agree [or
disagree].

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Mark Wooding
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <87prffwdtd.fsf.mdw@metalzone.distorted.org.uk>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:

> That is very easy. If "error" means illegal as it does in the case of
> "type error" in a statically typed language, then, trivially, a failed
> type check at run-time is not that kind of error.

Good.  So: either the idea of a runtime type check giving rise to an
error is incoherent, /or/ your premise is wrong.  Fancy playing a
guessing game?

> So your conception of run-time checks for type errors is inconsistent
> and thus objectively false.

Nope.  Picked the wrong one.

> You could try to give another definition of error, but then it would
> fail for the case of static typing.

`There are more things in heaven and earth, Horatio, than are dreamt of
in your philosophies.'
        -- Hamlet

That is, there are more kinds of error than you seem to think.  The word
derives from Latin, where it means to wander, and thence to deviate or
stray (from a correct path).  Nowadays, it pretty much means `mistake'.

With regards to programming languages and environments, we can make use
of the concept of an `erroneous program': one that is not correct.  Of
course, correctness itself is in need of definition, but we can finesse
this by speaking of correctness relative to a given specification.  For
the most part, a programming language or environment is given only a
program and not a specification, and isn't really in a position to be
able to tell whether the program is `correct' in this sense; but it can
provide some useful approximations.

We can distinguish between /diagnosed/ and /undiagnosed/ errors.  The
former are those errors which are formally reported as being so; the
latter are the others, which are often termed `logic errors' though
sometimes these will lead to diagnosed errors in due course.

Errors can be diagnosed at different times: a compiler can diagnose
errors at compile time, maybe; or it can generate code which diagnoses
errors at run time.  The runtime environment can also diagnose errors,
e.g., calls to functions with invalid arguments, as can an operating
system kernel and the processor itself.

But the notion of `error' is a relative one.  Diagnosing errors in a
program is part of a compiler's job: it is behaving correctly when it
issues error messages about an invalid program.  Diagnosing errors is
part of a processor's job: if the program contains an access to an area
of memory which is unmapped or forbidden to it, the processor diagnoses
an error condition, and this is part of its correct operation.  Here's
where it gets complicated: because diagnosis of `errors' is correct and
defined behaviour of some part of a system, it can in fact be relied
upon by another part which, despite provoking diagnoses, is still
actually performing in accordance with its specification.  Therefore,
when we look at the different parts and layers of a system, different
behaviours will appear erroneous; and what seems erroneous from one
point of view may be correct from another.

Anyway, that was a rather roundabout way of saying that trying to claim
that the idea of an `error' must be strictly limited /either/ to
compile-time static analysis /or/ runtime checking is rather foolish.

> > What definitions have you provided?
>
> Do you need a formal definition of type? Do you intend to discuss it
> at a formal level? I doubt it would be appropriate in a news group.

I think it would be very appropriate to a newsgroup whose topic of
conversation is programming languages.

> > Unless I misunderstand this statement, you're saying here that weak-
> > typing in a type-safe language is equivalent to dynamic-typing.
>
> Yes, because you can treat a name as if the object bound to the name
> had several distinct types.

This is absurd.  There's nothing at all wrong with an object having
multiple distinct types.  In fact, it's the common state of affairs in
any language with a moderately sophisticated type system.  For example,
in Common Lisp, the value NIL is:

  * a symbol,
  * an atom,
  * a list,
  * a boolean value,
  * the sole inhabitant of the type NULL,
  * a sequence, and
  * a value (i.e., an inhabitant of the `top' type T).

(It also inhabits an infinite number of other types constructed using
the AND, OR, MEMBER and SATISFIES type operators.)

In ML, the function

        fun map f [] = []
          | map f (x::xs) = f x :: map f xs

has the following types

  * ('a -> 'b) -> 'a list -> 'b list
  * ('a -> int) -> 'a list -> int list
  * (('a -> 'b) * 'a -> 'c) -> (('a -> 'b) * 'a) list -> 'c list

and so on, /ad nauseam/.  A significant feature of the Hindley--Milner
type system is that every expression has a distinguished /most general/
type, which in the case of `map' above is the first one I listed.

Again, having established that your words don't mean what they look as
if they mean, I'm left none the wiser as to what they actually do mean.

> It should have alerted you when you asked for a definition of
> "weakly-typed." If you weren't prepared to discuss *any* definition
> just because it would be [necessarily] controversial to some language
> manuals, then why did you ask?

Possibly to find out whether you were deliberately queering the pitch
with a bogus definition, with a view to discovering whether further
discussion is actually useful?  (Here I'm speculating about Duane's
motivations, not attempting to speak for him.)

> Should I respond with recommendations on what you should study? If you
> have nothing to say to the substance of the argument, then just agree
> [or agree to disagree].

I hadn't noticed anything in the discussion which exposed a significant
gap in Duane's knowledge.  I'm pretty sure you'd have made a fuss if
you'd noticed one.  Whereas your knowledge seems pretty patchy.

(I'm definitely getting tired of this discussion.  I'll only chip in if
I see something really interesting to talk about.  The discussion of
errors and multiple types seemed sufficiently fun this time.)

-- [mdw]
From: ·····@franz.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ae8df832-70c1-4565-9f65-06e7ac55b507@x1g2000prh.googlegroups.com>
On Apr 14, 6:43 am, Mark Wooding <····@distorted.org.uk> wrote:
> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>
> > It should have alerted you when you asked for a definition of
> > "weakly-typed." If you weren't prepared to discuss *any* definition
> > just because it would be [necessarily] controversial to some language
> > manuals, then why did you ask?
>
> Possibly to find out whether you were deliberately queering the pitch
> with a bogus definition, with a view to discovering whether further
> discussion is actually useful?  (Here I'm speculating about Duane's
> motivations, not attempting to speak for him.)

Thanks, Mark, for writing up this nice response; I believe that you
have caught the essence of what I was trying to do (to help Dmitry
understand his blind spot).  Apparently, he's not buying it, because
he has his definitions, and believes that they are the only possible
definitions.  It's too bad he doesn't have any real experience in CL
or other strongly typed dynamic languages, so that he could recognize
his blind spot.  You can lead a horse to water, but you can't make him
drink...

> > Should I respond with recommendations on what you should study? If
> > you have nothing to say to the substance of the argument, then just
> > agree [or agree to disagree].
>
> I hadn't noticed anything in the discussion which exposed a significant
> gap in Duane's knowledge.  I'm pretty sure you'd have made a fuss if
> you'd noticed one.  Whereas your knowledge seems pretty patchy.

Well, thank you.  I'm sure I have gaps in my knowledge; it would be a
small man that could not admit to as much.

> (I'm definitely getting tired of this discussion.  I'll only chip in if
> I see something really interesting to talk about.  The discussion of
> errors and multiple types seemed sufficiently fun this time.)

I've had enough too, at least for now; his response to your post has
shown me that no progress is being made.  Thanks for your effort,
though.

Duane
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49e4bf2c$0$27784$607ed4bc@cv.net>
·····@franz.com wrote:
> Well, thank you.  I'm sure I have gaps in my knowledge; it would be a
> small man that could not admit to as much.

Leave me out of this!

kt
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <phx2kjfxct7c.1aixj8bmo1n87$.dlg@40tude.net>
On Tue, 14 Apr 2009 14:43:42 +0100, Mark Wooding wrote:

> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
> 
>> That is very easy. If "error" means illegal as it does in the case of
>> "type error" in a statically typed language, then, trivially, a failed
>> type check at run-time is not that kind of error.
> 
> Good.  So: either the idea of a runtime type check giving rise to an
> error is incoherent, /or/ your premise is wrong.  Fancy playing a
> guessing game?

You mean the premise that in a statically typed language the compiler
flags a program with type errors as illegal? Is it wrong?

>> So your conception of run-time checks for type errors is inconsistent
>> and thus objectively false.
> 
> Nope.  Picked the wrong one.
> 
>> You could try to give another definition of error, but then it would
>> fail for the case of statically typing.
> 
> `There are more things in heaven and earth, Horatio, than are dreamt of
> in your philosophies.'
>         -- Hamlet
> 
> That is, there are more kinds of error than you seem to think.  The word
> derives from Latin, where it means to wander, and thence to deviate or
> stray (from a correct path).  Nowadays, it pretty much means `mistake'.

Ah, some errors are more errors than others? No, the program is either
formally legal or not (I don't consider poorly constructed languages
where legality might be undecidable). Show us a legal program with
statically decidable type errors in a statically typed language.

(If you don't like the word "error" use any other of your choice.)

> With regards to programming languages and environments, we can make use
> of the concept of an `erroneous program': one that is not correct.

Please do not mix erroneous programs (well-formed, legal, but
semantically incorrect) with illegal programs. They are clearly
different. In a statically typed language a type error renders the
program illegal, not merely erroneous.

[... irrelevant stuff about program semantics ...]

>>> What definitions have you provided?
>>
>> Do you need a formal definition of type? Do you intend to discuss it
>> at a formal level? I doubt it would be appropriate in a newsgroup.
> 
> I think it would be very appropriate to a newsgroup whose topic of
> conversation is programming languages.

So far I have seen not a single formal definition in this group. If you
are really interested, we could try. Unfortunately, your comments
indicate otherwise.

>>> Unless I misunderstand this statement, you're saying here that weak-
>>> typing in a type-safe language is equivalent to dynamic-typing.
>>
>> Yes, because you can treat a name as if the object bound to the name
>> had several distinct types.
> 
> This is absurd.  There's nothing at all wrong with an object having
> multiple distinct types.

I didn't say it is wrong. I said it is weakly typed. If you consider
weak typing a shame, well, some people would answer that it greatly
improves their "productivity."

(It could be an interesting psychological study why some people who use
dynamically typed languages feel ashamed when they recognize that what
they do is weakly typed or untyped. So what, if it increases
"productivity"?)

> In fact, it's the common state of affairs in
> any language with a moderately sophisticated type system.  For example,
> in Common Lisp, the value NIL is:
> 
>   * a symbol,
>   * an atom,
>   * a list,
>   * a boolean value,
>   * the sole inhabitant of the type NULL,
>   * a sequence, and
>   * a value (i.e., an inhabitant of the `top' type T).

Yes, overloading is traditionally attributed to weak typing: even if it
does not associate many types with one object, it still associates one
name with many objects of different types, achieving the same effect in
the end.

[... personal attacks ...]

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090424033905.90@gmail.com>
On 2009-04-14, Dmitry A. Kazakov <·······@dmitry-kazakov.de> wrote:
> Please do not mix erroneous programs (well-formed, legal, but
> semantically incorrect) with illegal programs. They are clearly different.

This distinction is merely a matter of convention in a particular programming
language: it follows the division between syntax and semantics.
Syntax isn't semantics, yes. But from this it doesn't follow that
wrong syntax is different from wrong semantics. Wrong is just wrong!

> In a statically typed language a type error renders the program
> illegal, not merely erroneous.

Idiot boy, pay attention now.  You are obviously not familiar with
state-of-the-art static typing, only some declarations-everywhere junk
which was state-of-the-art in 1967.

In modern statically typed languages, type inference is used to deduce
the type of every expression in the program. Declarations may be
present but are optional.

Whether or not a program is illegal depends on the depth of the type inference.

You've already been lectured about how type checking is an undecidable
problem, equivalent to the halting problem.

If the difference between ``legal'' and ``illegal'' depends on solving
an undecidable problem, then your notion of legality is weak.
``Illegal'' means either that a positive type mismatch was found, or
simply that an insufficient number of steps of the type-checking
algorithm were run to conclude that the program is legal!

> [... irrelevant stuff about program semantics ...]

You should apologize.

>> This is absurd.  There's nothing at all wrong with an object having
>> multiple distinct types.
>
> I didn't say it is wrong. I said it is weakly typed.

This isn't what ``weakly typed'' means. Note that even in strongly typed,
static languages, the same name can have different types.  This can happen if
it is a local variable in different scopes. Would you say that C is weakly
typed because you can write { int x; ... }  { double x; ... }? Same name, x,
different type.

A better example arises from polymorphism. For instance, in the Haskell
language there is an identity function called id. This function just
returns its argument, so that id 3 yields 3 and id "foo" returns "foo".
So the argument and return value of id don't have a single, unique type.

Weak typing isn't a particularly well-defined concept, because strong typing
isn't, and weak typing is simply a contrast to strong typing.

Strong typing may refer to the ability to define new types based on
name equivalence (e.g. so that you cannot assign a variable containing
kilograms to a variable which measures seconds even though they are both 64 bit
floats). Or strong typing is used to refer to a type system that cannot be
subverted: one that has only safe type conversions among types and makes no
provisions for type punning of any kind.
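
That first, name-equivalence reading can be sketched concretely.  The
following Python fragment is a toy illustration of my own (the
Kilograms and Seconds classes are invented for the purpose), enforcing
at run time the separation that a nominally typed static language would
enforce at compile time:

```python
class Kilograms:
    """A quantity of mass; refuses to mix with other unit types."""
    def __init__(self, value):
        self.value = float(value)
    def __add__(self, other):
        # Reject by *name* of type, even though both wrap a plain float.
        if not isinstance(other, Kilograms):
            raise TypeError("cannot add %s to Kilograms"
                            % type(other).__name__)
        return Kilograms(self.value + other.value)

class Seconds:
    """A quantity of time; same representation, distinct type."""
    def __init__(self, value):
        self.value = float(value)

m = Kilograms(2.0) + Kilograms(3.0)   # fine: same nominal type
assert m.value == 5.0

try:
    Kilograms(1.0) + Seconds(1.0)     # same representation, different name
except TypeError:
    pass  # rejected by name equivalence, not by representation
```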

> (It could be an interesting psychological study why some people who
> use dynamically typed languages feel ashamed when they recognize that
> what they do is weakly typed or untyped. So what, if it increases
> "productivity"?)

Idiot, how is the situation untyped when every object has an immutable type tag
over its entire lifetime? How much more typed can the situation be?

It's pitifully obvious that you need to upgrade your intellect for this type of
debate.
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49e4d41c$0$5930$607ed4bc@cv.net>
Kaz Kylheku wrote:
> On 2009-04-14, Dmitry A. Kazakov <·······@dmitry-kazakov.de> wrote:
>> In a statically typed language a type error renders the program
>> illegal, not merely erroneous.
> 
> Idiot boy, pay attention now.  

Finally the tone of this thread is getting down to a level at which I 
might be able to contribute!

kt
From: Mark Wooding
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <87ocv07iv7.fsf.mdw@metalzone.distorted.org.uk>
Robbert Haarman <··············@inglorion.net> writes:

> If there is some sort of consensus in computer science that differs
> from what I have presented, I would like to know about it. As it is,
> at least the Wikipedia article on type systems seems to employ
> definitions that are very similar to mine:

I disagree.  In fact, I suspect that its definitions are more similar to
those I gave earlier.  You snipped those parts, though.

> > Static typing
>
> > A programming language is said to use static typing when type checking 
> > is performed during compile-time as opposed to run-time.

: In static typing, types are associated with variables not values.

> > Dynamic typing
>
> > A programming language is said to be dynamically typed, or just 
> > 'dynamic', when the majority of its type checking is performed at 
> > run-time as opposed to at compile-time.

: In dynamic typing types are associated with values not variables.

This clearly supports my contention that `types' in the two cases are
different, and apply to different program entities.

And, furthermore, we have later:

: Combinations of dynamic and static typing
:
: The presence of static typing in a programming language does not
: necessarily imply the absence of all dynamic typing mechanisms.

I'll admit that this doesn't go as far as I did, but it's a clear step
away from concluding that the two are mutually exclusive.

> Source: 
> http://en.wikipedia.org/w/index.php?title=Type_system&oldid=282920211#Type_checking

Ditto. ;-)

-- [mdw]
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090413151838.GA4558@gildor.inglorion.net>
On Mon, Apr 13, 2009 at 03:01:32PM +0100, Mark Wooding wrote:
> Robbert Haarman <··············@inglorion.net> writes:
> 
> > If there is some sort of consensus in computer science that differs
> > from what I have presented, I would like to know about it. As it is,
> > at least the Wikipedia article on type systems seems to employ
> > definitions that are very similar to mine:
> 
> I disagree.  In fact, I suspect that its definitions are more similar to
> those I gave earlier.  You snipped those parts, though.
> 
> > > Static typing
> >
> > > A programming language is said to use static typing when type checking 
> > > is performed during compile-time as opposed to run-time.
> 
> : In static typing, types are associated with variables not values.
> 
> > > Dynamic typing
> >
> > > A programming language is said to be dynamically typed, or just 
> > > 'dynamic', when the majority of its type checking is performed at 
> > > run-time as opposed to at compile-time.
> 
> : In dynamic typing types are associated with values not variables.
> 
> This clearly supports my contention that `types' in the two cases are
> different, and apply to different program entities.

The reason I don't like to phrase the definitions the way you did is 
that they are tied to the concept of variables. What if there are no 
variables?

For example, many ML programs I have written use no or very 
few variables; almost all bindings are constants. How do you determine 
whether type checking is static or dynamic in that case?

> And, furthermore, we have later:
> 
> : Combinations of dynamic and static typing
> :
> : The presence of static typing in a programming language does not
> : necessarily imply the absence of all dynamic typing mechanisms.
> 
> I'll admit that this doesn't go as far as I did, but it's a clear step
> away from concluding that the two are mutually exclusive.

Ok, so when you say "static typing and dynamic typing are not mutually 
exclusive", you mean that you can have both in the same programming 
language. This is true.

Regards,

Bob

-- 
Those who can, do. Those who can't, sue.

From: Pillsy
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <9ebb4959-44bd-4c4c-a437-887bdf78be72@z9g2000yqi.googlegroups.com>
On Apr 13, 11:18 am, Robbert Haarman <··············@inglorion.net>
wrote:
[...]
> For example, many ML programs I have written use no or very
> few variables; almost all bindings are constants. How do you determine
> whether type checking is static or dynamic in that case?

What happens if you do something[1] along the lines of

(define (f x)
  (+ x 42))

and then later do

(define (g x)
  (reverse (f x)))

Shouldn't a statically-checked language refuse to compile g on the
grounds that you can't reverse a number, and that adding 42 to
something has got to return a number?

Cheers,
Pillsy

[1] ML syntax has never really stuck to my brain.
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090413162352.GC4558@gildor.inglorion.net>
On Mon, Apr 13, 2009 at 08:57:34AM -0700, Pillsy wrote:
> On Apr 13, 11:18 am, Robbert Haarman <··············@inglorion.net>
> wrote:
> [...]
> > For example, many ML programs I have written use no or very
> > few variables; almost all bindings are constants. How do you determine
> > whether type checking is static or dynamic in that case?
> 
> What happens if you do something[1] along the lines of
> 
> (define (f x)
>   (+ x 42))
> 
> and then later do
> 
> (define (g x)
>   (reverse (f x)))
> 
> Shouldn't a statically-checked language refuse to compile g on the
> grounds that you can't reverse a number, and that adding 42 to
> something has got to return a number?

Yes. And OCaml does:

# let f x = x + 42;;
val f : int -> int = <fun>
# let g x = List.rev (f x);;
                     ^^^^^
This expression has type int but is here used with type 'a list

(The '(f x)' is underlined on the terminal; the carets are my attempt to 
represent that in plain text.)

Now, by my definition, this is static typing, because g is rejected by 
the compiler. By contrast, SBCL is dynamically typed, because it accepts 
g, but gives a type error at run time:

* (defun f (x) (+ x 42))

F
* (defun g (x) (reverse (f x)))

G
* (defun test () (g 12))

TEST
* (test)

debugger invoked on a TYPE-ERROR in thread #<THREAD "initial thread" 
RUNNING {A8345D1}>:
  The value 54 is not of type SEQUENCE.

(Perhaps I should have used Scheme, as Pillsy did, but I don't have a 
Scheme implementation handy.)

Now, I am curious how the people who define static typing as "variables 
have types" see these examples, given that the OCaml code does not 
contain any variables.

Regards,

Bob

-- 
That that is is that that is not not, that is, not that that is not.


From: Pillsy
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <79a795ae-f3ff-4c31-9cae-03e3aa0fe2ab@b1g2000vbc.googlegroups.com>
On Apr 13, 12:23 pm, Robbert Haarman <··············@inglorion.net>
wrote:
[...]
> Now, I am curious how the people who define static typing as "variables
> have types" see these examples, given that the OCaml code does not
> contain any variables.

Mutability isn't really fundamental to the idea of variables. There
certainly isn't any notion of it in the lambda calculus, which is
where functional languages like OCaml get their ideas of what
functions and variables are.

Nor is it even a necessary feature in plain old imperative languages.
I think almost everyone would agree that x is a variable in

double f(double x) {
  return 2 * x;
}

despite the fact that it's never changed.

Cheers,
Pillsy
From: Johan Ur Riise
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <87ws9m9xut.fsf@morr.riise-data.net>
Robbert Haarman <··············@inglorion.net> writes:

> On Mon, Apr 13, 2009 at 08:57:34AM -0700, Pillsy wrote:
>> On Apr 13, 11:18�am, Robbert Haarman <··············@inglorion.net>
>> wrote:
>> [...]
>> > For example, many ML programs I have written use no or very
>> > few variables; almost all bindings are constants. How do you determine
>> > whether type checking is static or dynamic in that case?
>> 
>> What happens if you do something[1] along the lines of
>> 
>> (define (f x)
>>   (+ x 42))
>> 
>> and then later do
>> 
>> (define (g x)
>>   (reverse (f x)))
>> 
>> Shouldn't a statically-checked language refuse to compile g on the
>> grounds that you can't reverse a number, and that adding 42 to
>> something has got to return a number?
>
> Yes. And OCaml does:
>
> # let f x = x + 42;;
> val f : int -> int = <fun>
> # let g x = List.rev (f x);;
>                      ^^^^^
> This expression has type int but is here used with type 'a list
>
> (The '(f x)' is underlined on the terminal; the carets are my attempt to 
> represent that in plain text.)
>
> Now, by my definition, this is static typing, because g is rejected by 
> the compiler. By contrast, SBCL is dynamically typed, because it accepts 
> g, but gives a type error at run time:
>
> * (defun f (x) (+ x 42))
>
> F
> * (defun g (x) (reverse (f x)))
>
> G
> * (defun test () (g 12))
>
> TEST
> * (test)
>
> debugger invoked on a TYPE-ERROR in thread #<THREAD "initial thread" 
> RUNNING {A8345D1}>:
>   The value 54 is not of type SEQUENCE.
>
> (Perhaps I should have used Scheme, as Pillsy did, but I don't have a 
> Scheme implementation handy.)

Note that SBCL would warn if the functions are compiled together, that
is, with the compile-file function.

With the above four lines in a file t2.lisp, I get

CL-USER> (compile-file "t2")
; compiling file "/home/johan/prg/rdweb/t2.lisp" (written 15 APR 2009 09:26:45 AM):
; compiling (DEFUN F ...)
; compiling (DEFUN G ...)

; file: /home/johan/prg/rdweb/t2.lisp
; in: DEFUN G
;     (REVERSE (F X))
;
; note: deleting unreachable code
;
; caught WARNING:
;   Asserted type SEQUENCE conflicts with derived type (VALUES NUMBER &OPTIONAL).
;   See also:
;     The SBCL Manual, Node "Handling of Types"

; compiling (DEFUN TEST ...)
; compiling (TEST);
                  ; compilation unit finished
                  ;   caught 1 WARNING condition
                  ;   printed 1 note

; /home/johan/prg/rdweb/t2.fasl written
; compilation finished in 0:00:00
#P"/home/johan/prg/rdweb/t2.fasl"
T
T
CL-USER>
From: Mark Wooding
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <87hc0s79de.fsf.mdw@metalzone.distorted.org.uk>
Robbert Haarman <··············@inglorion.net> writes:

> The reason I don't like to phrase the definitions the way you did is 
> that they are tied to the concept of variables. What if there are no 
> variables?

In fact, if you look, I avoided this problem by explaining that it's
/expressions/ which have types.  I left the issue of how the compiler
deduces the type of an expression deliberately vague.

-- [mdw]
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090413191610.GF4558@gildor.inglorion.net>
On Mon, Apr 13, 2009 at 06:26:37PM +0100, Mark Wooding wrote:
> Robbert Haarman <··············@inglorion.net> writes:
> 
> > The reason I don't like to phrase the definitions the way you did is 
> > that they are tied to the concept of variables. What if there are no 
> > variables?
> 
> In fact, if you look, I avoided this problem by explaining that it's
> /expressions/ which have types.  I left the issue of how the compiler
> deduces the type of an expression deliberately vague.

Oh, yes, I see now. I was looking at the text you quoted from Wikipedia, 
not the definitions you provided yourself in an earlier post. You mean 
these definitions:

>  * `Static typing' is better named `expression typing'.  An expression
>    is a syntactic entity which denotes a computation.  Static typing
>    assigns each expression a type, according to some rules, possibly
>    based on other annotations in the source.  If this assignment fails
>    (e.g., there is no applicable rule to assign a type to an
>    expression, or there are multiple rules that assign distinct types
>    without any means of disambiguation) then the program is considered
>    ill-formed.

>  * `Dynamic typing' is better named `value typing'.  A value is a
>    runtime entity which stores a (compound or atomic) datum.  Dynamic
>    typing assigns a type to each value, which can be checked at runtime
>    by functions and operators acting on those values.  If an operator
>    or function is applied to a value with an inappropriate type, then
>    an error can be signalled.  This may indicate that the program is
>    incorrect, or simply be a means of validating input data.
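
The difference can be made concrete with a toy sketch of my own (in
Python; the function names are invented): the first definition assigns
types to expressions by rule before anything runs, while the second
tags run-time values and lets operators check the tag:

```python
# "Expression typing": assign each expression a type by syntactic
# rules, before anything runs.  Expressions here are ints or
# ("+", left, right) tuples.
def type_of(expr):
    if isinstance(expr, int):
        return "int"
    if isinstance(expr, tuple) and expr[0] == "+":
        left, right = type_of(expr[1]), type_of(expr[2])
        if left == right == "int":
            return "int"
        raise TypeError("ill-formed: + applied to %s and %s"
                        % (left, right))
    raise TypeError("no rule assigns this expression a type")

assert type_of(("+", 1, ("+", 2, 3))) == "int"   # typed without running

# "Value typing": values carry their type at run time, and operators
# check it when they are applied.
def add(a, b):
    if not (isinstance(a, int) and isinstance(b, int)):
        raise TypeError("+ applied to a non-integer value")
    return a + b

assert add(2, 3) == 5
```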

Ok. These are very different from the definitions I have provided. They 
certainly place the emphasis very differently. Where my definitions are 
concerned with when you stop to report a type error, your definitions 
are concerned with how you go about determining whether or not to report 
a type error.

Regards,

Bob

-- 
How to get $$$$ quickly:

	1. Hold down the <shift> key
	2. Type 4444


From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090421104721.440@gmail.com>
["Followup-To:" header set to comp.lang.lisp.]
On 2009-04-11, Dmitry A. Kazakov <·······@dmitry-kazakov.de> wrote:
> On Sat, 11 Apr 2009 11:54:04 +0100, Mark Wooding wrote:
>
>> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>> 
>>> If they indeed wanted to test (rather than just to *probe*), they
>>> would have to invest a huge amount of up-front work compared to
>>> trivially attributing their variables with types.
>> 
>> I must be reading this wrong.  You appear to be claiming that static
>> type checking is a substitute for testing.
>
> Sure it is. You don't need to test for type errors, since types are
> checked.

What if this statically typed program implements an evaluator for a dynamically
typed language? A program in this dynamically typed language is included too,
as a character string literal.

The run-time behavior of the statically typed program is to interpret this
character string.

The interpreter may terminate with an indication that a type mismatch
occurred in the program represented by the string.

Where are the static type checks now?
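
To make the scenario concrete, here is a minimal sketch of such an
evaluator, written in Python purely for brevity (a statically typed
host would make the point even more sharply).  The interpreted
"program" arrives as data, so no checking of the host can rule out the
type mismatch inside it:

```python
# A toy evaluator for a tiny dynamically typed expression language.
# The "program" is data (nested tuples; in a real system, parsed from
# a string), so the host's compiler cannot see its type errors.
def evaluate(expr):
    if isinstance(expr, (int, str)):
        return expr                       # literals evaluate to themselves
    op, a, b = expr
    x, y = evaluate(a), evaluate(b)
    if op == "+":
        if not (isinstance(x, int) and isinstance(y, int)):
            raise TypeError("interpreted program: + on non-integers")
        return x + y
    if op == "concat":
        if not (isinstance(x, str) and isinstance(y, str)):
            raise TypeError("interpreted program: concat on non-strings")
        return x + y
    raise ValueError("unknown operator: %r" % (op,))

assert evaluate(("+", 1, 2)) == 3

# This program is only discovered to be ill-typed when it is run:
try:
    evaluate(("concat", "foo", ("+", 1, 2)))
except TypeError:
    pass  # the mismatch surfaces at the interpreter's run time
```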

You didn't think this through very well.

> I repeat the point I made before.

``P and P'' is more true than just ``P'', after all.

> The only consistent dynamic typing is no
> typing.

No-typing isn't dynamic typing. Dynamic typing inextricably associates a type
with each run-time value.

No-type languages have typically been static (e.g. BCPL).

Static typing means the type of a datum is a property of the procedure
which operates on the datum.

Dynamic typing means type is inherent in the datum; if the procedure disagrees,
the situation is diagnosable.
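
The contrast can be sketched in a few lines of Python (my own
illustration; the struct calls reinterpret raw bytes the way an
untagged, BCPL-style language would):

```python
import struct

# "Type as a property of the procedure": the same four bytes mean
# whatever the reading procedure says they mean (the untagged picture).
raw = struct.pack("<i", 1078530011)       # one 32-bit datum
as_int = struct.unpack("<i", raw)[0]      # read as an integer
as_float = struct.unpack("<f", raw)[0]    # read as a float: ~3.14159
assert as_int == 1078530011
assert abs(as_float - 3.14159) < 1e-4

# "Type inherent in the datum": a tagged value can object when the
# procedure disagrees with its tag.
value = "forty-two"
try:
    value + 1          # the datum's tag (str) contradicts the operation
except TypeError:
    pass               # diagnosable, because the tag travels with the datum
```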
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <10hx95zghccb3.qsg4xj7appk5.dlg@40tude.net>
On Fri, 10 Apr 2009 11:14:15 -0700 (PDT), Raffael Cavallaro wrote:

> On Apr 10, 1:46�pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
> 
>> It is interesting to learn the cases when type violation becomes
>> irrelevant. The only case is when there is no type at all. In all
>> other cases it is just a bug.
> 
> Only because you're thinking only of one class of program, where the
> code to be run is known at compile time.

This is a logical fallacy. Trivially, all code is known to its compiler
before it runs. Otherwise we are talking about different programs and
different compilers, as in the case you presented.

It also does not show that a violation can be helpful when it slips
through undetected or is detected later than necessary.

Really, the position that a disability is an advantage cannot be
defended. The only reason a check should not be made early is that it
would be technically impossible or too expensive. Because there exist
types which are obviously trivial to check, pure dynamic typing (=
static typing with exactly one type) is not defensible. Surely we can
talk about concrete types, and there are lots of them, which are
undecidable to compare at different stages of the program life cycle.
That is perfectly OK. Nobody seriously argues that all types have to be
checked statically or that all program semantics have to be mapped to
types. The point is that we should check as much as we can.

> IOW, there are problem domains where dynamic typing is simply
> necessary because we don't yet know at compile time what we'll be
> running. In these cases, dynamically typed languages let us write
> programs that static type checkers cannot prove correct and won't let
> us compile (short of the reductio ad absurdum of implementing dynamic
> typing on top our static type system).

Nevertheless, we can always narrow the class of types we deal with.
This is what generic programming is about. It is never the case that we
can say absolutely nothing about the expected types. (Again, your
example, simplified to a compiler/interpreter which does not know what
the code it translates would actually do, is illegal. Compilers do not
program.)

> Conversely, there exist problem domains where we don't really care if
> we end up rejecting some programs that could possibly be correct at
> runtime because we want strong guarantees of safety before anything is
> ever allowed to run. In such domains static type checking provides
> added security at a cost that is inconsequential in that domain.

Well, as is known from pattern recognition, you can push toward a
minimum of false negatives or toward a minimum of false positives, but
you can never eliminate both.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Raffael Cavallaro
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <dbdba82f-d38c-4f56-a787-ca7ac15060fe@u8g2000yqn.googlegroups.com>
On Apr 10, 3:59 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:

> Really, the position that a disability is an advantage cannot be
> defended. The only reason a check should not be made early is that it
> would be technically impossible or too expensive.

Or because it would be a waste of time, such as forcing programmers to
stub out some portion of a program that will not be executed yet,
before allowing them to test another part now. Dynamically typed
languages allow programmers to test one part of a program even when
other parts don't yet exist, and those of us who use such languages
find this feature saves us a lot of time and unnecessary effort.

> Nobody seriously argues that all types have to be checked statically
> or that all program semantics have to be mapped to types. The point is
> that we should check as much as we can.

Which would be fine if that's what static type checkers did. But they
often won't let legal programs run because they insist on checking
things that can't yet be checked, such as bits of the program that
aren't written yet.

> Nevertheless, we can always narrow the class of types we deal with.
> This is what generic programming is about. It is never the case that
> we can say absolutely nothing about the expected types.

But now we're back to the reductio ad absurdum argument because often
we can only narrow the type down to lisp-object, or scheme-object, or,
in the case of biok, string.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1n0x0vs6u1qvz$.1ef86nyq2lf6q$.dlg@40tude.net>
On Fri, 10 Apr 2009 13:16:57 -0700 (PDT), Raffael Cavallaro wrote:

> On Apr 10, 3:59�pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
> 
>> Really, the position that a disability is an advantage cannot be
>> defended. The only reason a check should not be made early is that it
>> would be technically impossible or too expensive.
> 
> Or because it would be a waste of time, such as forcing programmers to
> stub out some portion of a program that will not be executed yet,
> before allowing them to test another part now. Dynamically typed
> languages allow programmers to test one part of a program even when
> other parts don't yet exist, and those of us who use such languages
> find this feature saves us a lot of time and unnecessary effort.

Hmm, this has nothing to do with static typing. It is merely separate
compilation.

BTW, static analysis shines here: if you statically know that the code
is unreachable or unreferenced, the compiler can safely drop it, giving
a warning. In the dynamic case it cannot do this, because it cannot
know. Do you propose to carry the garbage along?

>> Nobody seriously argues that all types have to be checked statically
>> or that all program semantics have to be mapped to types. The point
>> is that we should check as much as we can.
> 
> Which would be fine if that's what static type checkers did. But they
> often won't let legal programs run because they insist on checking
> things that can't yet be checked, such as bits of the program that
> aren't written yet.

See above; it is not a necessary property of a statically typed system. Yes,
there are languages with types declared globally and those which do not
really support separate compilation, encapsulation and modularity. Those
are just bad languages, statically typed or not.

>> Nevertheless, we can always narrow the class of types we deal with. This is
>> what generic programming is about. It is never so that we can say
>> absolutely nothing about expected types.
> 
> But now we're back to the reductio ad absurdum argument because often
> we can only narrow the type down to lisp-object, or scheme-object, or,
> in the case of biok, string.

No, we can always say a lot more. One of the advantages of static typing is
that it encourages the programmer to think about what distinguishes these
objects from others. Thus it promotes refactoring, reuse, consistency,
verifiability and overall better design.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Raffael Cavallaro
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <cadf2be1-6ade-4763-8264-ab9f58b8bdba@f19g2000yqh.googlegroups.com>
On Apr 10, 4:54 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:

> BTW, static analysis shines here. Because if you statically know that the
> code is unreachable or unreferenced, the compiler can safely drop it,
> giving a warning. In the dynamic case it cannot do this, because it cannot
> know. Do you propose to carry the garbage along?

As programs are being written and changed they are not yet complete
wholes which can be statically analyzed but are instead growing,
changing things which cannot be fully analyzed because they are
incomplete. The portions that cannot be typed yet are *not* dead code;
they just contain calls to as-yet undefined functions. I *don't* want
the compiler to eliminate these functions. I just want the compiler to
issue an undefined-function warning and run what I ask it to without
either requiring me to stub out the as-yet undefined function, or
eliminating the routine that calls it as dead code because it cannot,
as yet, be called.
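(A minimal sketch of this behavior, in Python as a stand-in dynamic language, with invented names: defining a function that calls a not-yet-written helper succeeds, the finished parts run now, and the missing definition only surfaces if that call is actually executed.)

```python
# The part of the program we want to test *now*.
def double(x):
    return 2 * x

# A function whose body calls a helper nobody has written yet.  Defining
# it succeeds, because the name `helper` is only looked up at call time.
def process(x):
    return helper(x) + 1

print(double(21))          # the finished part runs fine

try:
    process(1)             # only *executing* the call trips the missing name
except NameError as err:
    print("deferred error:", err)
```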
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1wzhpdmt9hn09$.1xkymjxzdgjq9$.dlg@40tude.net>
On Fri, 10 Apr 2009 14:24:17 -0700 (PDT), Raffael Cavallaro wrote:

> On Apr 10, 4:54 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
> 
>> BTW, static analysis shines here. Because if you statically know that the
>> code is unreachable or unreferenced, the compiler can safely drop it,
>> giving a warning. In the dynamic case it cannot do this, because it cannot
>> know. Do you propose to carry the garbage along?
> 
> As programs are being written and changed they are not yet complete
> wholes which can be statically analyzed but are instead growing,
> changing things which cannot be fully analyzed because they are
> incomplete.

The same argument can be applied in order to show that incomplete program
cannot be executed.

For the very same reason that you cannot analyse an incomplete (tightly
coupled, non-modular) program, you cannot say anything meaningful about its
observed behavior.

You have not a language problem, but a design problem. The advantage of
statically typed systems is that they detect potential design problems like
this, and do it early, because design problems are expensive to fix.

> The portions that cannot be typed yet are *not* dead code;
> they just contain calls to as-yet undefined functions.

Do you call these functions or not. If you don't, there is no problem with
that in *both* cases.

> I *don't* want
> the compiler to eliminate these functions. I just want the compiler to
> issue an undefined-function warning and run what I ask it to without
> either requiring me to stub out the as-yet undefined function, or
> eliminating the routine that calls it as dead code because it cannot,
> as yet, be called.

So you actually do not know if and when you call which functions. Certainly
it is not the way I am writing programs...

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Raffael Cavallaro
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <a13679eb-6b40-47e6-9246-6ca625e9861a@e5g2000vbe.googlegroups.com>
On Apr 11, 5:49 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:

> The same argument can be applied in order to show that incomplete program
> cannot be executed.

Lisp users run incomplete programs all the time - we just get a
runtime error, are dropped into the debugger, and either add the
missing bit in the debugger and continue merrily on our way, or back
out and refactor, etc.

You have a rigidly static notion of what can/can't/should/shouldn't be
allowed to run which is the enemy of exploratory programming and
programmer productivity[1]

[1] used here in the sense, possibly unfamiliar to you, of usable
code, not random sequences of instructions.
From: Tamas K Papp
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <74bn0bF12g454U1@mid.individual.net>
On Sat, 11 Apr 2009 06:51:57 -0700, Raffael Cavallaro wrote:

> On Apr 11, 5:49 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
> 
>> The same argument can be applied in order to show that incomplete
>> program cannot be executed.
> 
> Lisp users run incomplete programs all the time - we just get a runtime
> error, are dropped into the debugger, and either add the missing bit in
> the debugger and continue merrily on our way, or back out and refactor,
> etc.
> 
> You have a rigidly static notion of what can/can't/should/shouldn't be
> allowed to run which is the enemy of exploratory programming and
> programmer productivity[1]
> 
> [1] used here in the sense, possibly unfamiliar to you, of usable code,
> not random sequences of instructions.

I had a look at Dmitry's homepage, apparently he uses Ada 95/2005.  So
it is possible that exploratory programming, among other things, is
unfamiliar to him.

From Ada's wikipedia entry:

"Ada also supports a large number of compile-time checks to help avoid
bugs that would not be detectable until run-time in some other
languages or would require explicit checks to be added to the source
code. Ada is designed to detect small problems in very large, massive
software systems. For example, Ada detects each misspelled variable
(due to the rule to declare each variable name), and Ada pinpoints
unclosed if-statements, which require "END IF" rather than mismatching
with any "END" token."

What a truly amazing language!  Whereas in CL, when I write

(lambda (x)
  (if (statement-involving-x)
      value1
      value2))

I sometimes wonder which of the last two )'s belong to (lambda and
(if.  Sometimes I mix up the two )'s and write it like this:

(lambda (x)
  (if (statement-involving-x)
      value1
      value2))

and my dumb compiler doesn't even give me an error message.

Tamas
From: Raffael Cavallaro
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <32504cfa-faa2-439d-9d80-5337f9365bb3@e24g2000vbe.googlegroups.com>
On Apr 11, 10:15 am, Tamas K Papp <······@gmail.com> wrote:

> and my dumb compiler doesn't even give me an error message.

Yes. This is why xml is so much superior to sexps - each closing tag
explicitly (and, more importantly, quite verbosely) matches an equally
explicit and verbose opening tag.

;^)
From: Pillsy
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <fbfde66f-9db9-4ca2-8174-2fd5d494489d@n8g2000vbb.googlegroups.com>
On Apr 11, 5:49 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Fri, 10 Apr 2009 14:24:17 -0700 (PDT), Raffael Cavallaro wrote:
[...]
> > The portions that cannot be typed yet are *not* dead code;
> > they just contain calls to as-yet undefined functions.

> Do you call these functions or not. If you don't, there is no problem with
> that in *both* cases.

If you *do* call one of these functions, there's also no problem. You
get dropped into the debugger, go to another window and define the
function, rewind the stack a bit and start again.

It's not so different from what happens when you add 3 to NIL, and
it's really not a much bigger deal.
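(Roughly the same define-and-retry workflow, compressed into a Python sketch with invented names; in a Lisp it would happen interactively through the debugger's restarts rather than a try/except.)

```python
def area(r):
    return pi_times(r * r)   # `pi_times` is not written yet

try:
    area(2.0)                # fails at run time, not at definition time
except NameError:
    # "go to another window and define the function", then retry
    def pi_times(x):
        return 3.141592653589793 * x

print(area(2.0))             # the retried call now succeeds
```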

Cheers,
Pillsy
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <5h00j5z1dn9v.10jixmtxwtuap.dlg@40tude.net>
On Sat, 11 Apr 2009 08:02:42 -0700 (PDT), Pillsy wrote:

> On Apr 11, 5:49 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> On Fri, 10 Apr 2009 14:24:17 -0700 (PDT), Raffael Cavallaro wrote:
> [...]
>>> The portions that cannot be typed yet are *not* dead code;
>>> they just contain calls to as-yet undefined functions.
> 
>> Do you call these functions or not. If you don't, there is no problem with
>> that in *both* cases.
> 
> If you *do* call one of these functions, there's also no problem. You
> get dropped into the debugger, go to another window and define the
> function, rewind the stack a bit and start again.

So you agree that this is a bug. Therefore the compiler is right telling
you so, *before* you start the program. Where is the problem? What is the
reason to wait until a debugger window pops up? After all, if you design a
GUI it could take a lot of mouse clicks before that happens. Do you find
debugging fun? I don't. I hate debugging.

> It's not so different from what happens when you add 3 to NIL, and
> it's really not a much bigger deal.

I never wrote a program that adds 3 to null, because it is impossible in
the statically typed language I am using.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Pillsy
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <2fd9275c-1017-4964-8de4-4d54e2b0ecd0@v9g2000vbb.googlegroups.com>
On Apr 11, 11:25 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Sat, 11 Apr 2009 08:02:42 -0700 (PDT), Pillsy wrote:
> > On Apr 11, 5:49 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> > wrote:
> >> On Fri, 10 Apr 2009 14:24:17 -0700 (PDT), Raffael Cavallaro wrote:
> > [...]
> >>> The portions that cannot be typed yet are *not* dead code;
> >>> they just contain calls to as-yet undefined functions.
>
> >> Do you call these functions or not. If you don't, there is no problem with
> >> that in *both* cases.
>
> > If you *do* call one of these functions, there's also no problem. You
> > get dropped into the debugger, go to another window and define the
> > function, rewind the stack a bit and start again.
>
> So you agree that this is a bug.

Of course there's a bug. So what? I'm still in the early testing and
design phases of the project, so I don't particularly care if there
are silly, easy-to-fix bugs yet.

> Therefore the compiler is right telling you so, *before* you start the
> program.

Yes...?

> Where is the problem?

The problem is that in order to catch those occasional bugs, I have to
waste a whole lot of time convincing the compiler to compile a program
that I know is pretty damned likely to be buggy anyway.

Cheers,
Pillsy
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <qbr1984brm19$.1kllduya909gm$.dlg@40tude.net>
On Sat, 11 Apr 2009 08:56:26 -0700 (PDT), Pillsy wrote:

> The problem is that in order to catch those occasional bugs, I have to
> waste a whole lot of time convincing the compiler to compile a program
> that I know is pretty damned likely to be buggy anyway.

Maybe you are just trying to convince yourself that it isn't as buggy as
you surely know it is. Now I understand your point: it would be so easy to
fool yourself if not for that damned compiler... (:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Ray Blaak
Subject: static vs dynamic typing (Re: PLOT: A non-parenthesized, infix Lisp!)
Date: 
Message-ID: <u63h8ze7p.fsf_-_@STRIPCAPStelus.net>
I think everyone in this debate should loosen up.

I have used both statically typed languages and dynamic ones, and my
observations, for what they're worth, are:

a) Using static typing is not nearly as hard as the dynamic camp seems
   to think it is.

b) Dynamically typed programs seem to run a lot more robustly than the
   static camp seems willing to admit.

c) The best languages are those where the programmer can choose to be
   static or dynamic as desired, giving the best of all worlds.

My language at the moment is mostly ActionScript3 (Flex), which
is essentially JavaScript plus optional types. JavaScript in turn is
really just a Lisp with C/Java-like syntax.

I am finding it great for having the compiler point out to me the stupid
errors, and verify design issues, yet flexible enough to have implicit
typing when needed.
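(The same optional-typing style can be sketched in Python, whose annotations are checked by external tools such as mypy but ignored at run time; the names here are invented for illustration.)

```python
def scale(xs: list, factor: float) -> list:
    # The annotations document intent and let a static checker catch
    # misuse, but at run time they are not enforced: the function stays
    # duck-typed and flexible.
    return [x * factor for x in xs]

print(scale([1, 2, 3], 2.0))    # the statically annotated use
print(scale(["ab", "c"], 2))    # an unannotated, duck-typed use still runs
```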

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090410201205.GV3826@gildor.inglorion.net>
On Fri, Apr 10, 2009 at 09:59:50PM +0200, Dmitry A. Kazakov wrote:
> 
> Really, the point of considering disability as an advantage cannot be
> defended. The only reason why a check shouldn't be made early is because
> this would be technically impossible or too expensive to do.

But that is exactly the argument. Remember, the original claim which I 
took issue with is the claim that Boo leads to great programmer 
productivity, because it is dynamically typed. The implication, then, is 
that static typing is too expensive, in terms of programmer 
productivity. I am not convinced that this is the case, but if it is the 
case, then this is indeed a valid argument for not performing static 
type checking.

Regards,

Bob

-- 
"Beware of bugs in the above code; I have only proved it correct, but not
tried it."
	-- Donald Knuth


From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1jncwcyutma2$.1784la67hzuxu$.dlg@40tude.net>
On Fri, 10 Apr 2009 22:12:05 +0200, Robbert Haarman wrote:

> On Fri, Apr 10, 2009 at 09:59:50PM +0200, Dmitry A. Kazakov wrote:
>> 
>> Really, the point of considering disability as an advantage cannot be
>> defended. The only reason why a check shouldn't be made early is because
>> this would be technically impossible or too expensive to do.
> 
> But that is exactly the argument. Remember, the original claim which I 
> took issue with is the claim that Boo leads to great programmer 
> productivity, because it is dynamically typed. The implication, then, is 
> that static typing is too expensive, in terms of programmer 
> productivity. I am not convinced that this is the case, but if it is the 
> case, then this is indeed a valid argument for not performing static 
> type checking.

This cannot apply to static typing as a whole, because there obviously
exist trivial cases which simply cannot limit productivity. I cannot
imagine how multiplying string to string can suddenly become productive
when strings are not declared strings.

In order to make the argument work, he should have presented a concrete
class of types for which 1) their declaration would hinder productivity 2)
productivity would be measured in a reasonable way.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Pillsy
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <cd910175-8c81-464c-8413-00d128eec122@f19g2000yqh.googlegroups.com>
On Apr 10, 4:37 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Fri, 10 Apr 2009 22:12:05 +0200, Robbert Haarman wrote:
[...]
> This cannot apply to static typing as a whole, because there obviously
> exist trivial cases which simply cannot limit productivity. I cannot
> imagine how multiplying string to string can suddenly become productive
> when strings are not declared strings.

But surely you can imagine how having to take the time to declare that
the two things being multiplied are numbers ahead of time could limit
productivity. It can force the programmer to do needless work now in
order to get the program to run at all, even if it's in a prototype
stage and might be throwaway code.

It's exactly the converse of one of the major performance benefits of
static typing, where the compiler gets to generate code that doesn't
have to check types because it knows they aren't needed.

Cheers,
Pillsy
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1h3e5fld95y3d.ah1kpydwef9i$.dlg@40tude.net>
On Fri, 10 Apr 2009 13:54:53 -0700 (PDT), Pillsy wrote:

> On Apr 10, 4:37 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> On Fri, 10 Apr 2009 22:12:05 +0200, Robbert Haarman wrote:
> [...]
>> This cannot apply to static typing as a whole, because there obviously
>> exist trivial cases which simply cannot limit productivity. I cannot
>> imagine how multiplying string to string can suddenly become productive
>> when strings are not declared strings.
> 
> But surely you can imagine how having to take the time to declare that
> the two things being multiplied are numbers ahead of time could limit
> productivity.

No, because I would never come to this. Why should I declare it a number if
I know that I do not need multiplication? This is exactly why typing is so
helpful. I think in terms of types and interfaces. I know what operations
I will use. The compiler just follows me.

> It can force the programmer to do needless work now in
> order to get the program to run at all, even if it's in a prototype
> stage and might be throwaway code.

In fact, I don't really know what throwaway code is good for. To me
try-and-fail does not go beyond compilation time. If I wish to check
something, I do it using the compiler. I practically never run it. I am too
lazy for that. (:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Pillsy
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <b1841007-bbaf-43b4-99f1-b067700392d9@h28g2000yqd.googlegroups.com>
On Apr 10, 5:09 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Fri, 10 Apr 2009 13:54:53 -0700 (PDT), Pillsy wrote:
[...]
> > But surely you can imagine how having to take the time to declare that
> > the two things being multiplied are numbers ahead of time could limit
> > productivity.

> No, because I would never come to this. Why should I declare it a number if
> I know that I do not need multiplication?

You shouldn't. But conversely, why should you declare it a number if
you know you *do* need multiplication? What the heck else are you
multiplying?
[...]
> In fact, I don't really know what throwaway code is good for.

If your major question is whether your approach is fast enough, or
numerically stable enough, or if your simulation captures the
relevant features of the system being simulated, you are probably
going to write a lot of throwaway code.

Or at least, I'm going to write a lot of throwaway code. I really
shouldn't make that sort of assumption about how you approach
problems.

Cheers,
Pillsy
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <6zaygcqj1n4w.s4z29l1lx388$.dlg@40tude.net>
On Fri, 10 Apr 2009 14:41:28 -0700 (PDT), Pillsy wrote:

> On Apr 10, 5:09 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> On Fri, 10 Apr 2009 13:54:53 -0700 (PDT), Pillsy wrote:
> [...]
>>> But surely you can imagine how having to take the time to declare that
>>> the two things being multiplied are numbers ahead of time could limit
>>> productivity.
> 
>> No, because I would never come to this. Why should I declare it a number if
>> I know that I do not need multiplication?
> 
> You shouldn't. But conversely, why should you declare it a number if
> you know you *do* need multiplication? What the heck else are you
> multiplying?

Elements of a multiplicative group, matrix to vector,... the list is
infinite.

> [...]
>> In fact, I don't really know what throwaway code is good for.
> 
> If your major question is whether your approach is fast enough, or
> numerically stable enough, or if your simulation captures the
> relevant features of the system being simulated, you are probably
> going to write a lot of throwaway code.

I don't see how this changes types. You are talking about implementation
issues which should by no means influence the specification. So if one
numerical method is unstable, it does not mean that another method would be
non-numeric, does it?

> Or at least, I'm going to write a lot of throwaway code. I really
> shouldn't make that sort of assumption about how you approach
> problems.

If you write a lot of throwaway code, how can you claim higher
productivity? I hope that the throwaway code is not counted as production
code?

BTW, how do you distinguish throwaway and production code in order to
ensure that the former will never be treated as the latter?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Pillsy
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <16bebf39-d15e-415c-9e3d-704e0bf3f866@m24g2000vbp.googlegroups.com>
On Apr 11, 5:17 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Fri, 10 Apr 2009 14:41:28 -0700 (PDT), Pillsy wrote:
[...]
> > If your major question is whether your approach is fast enough, or
> > numerically stable enough, or if your simulation captures the
> > relevant features of the system being simulated, you are probably
> > going to write a lot of throwaway code.

> I don't see how this changes types.

It doesn't change *types*.

The issue isn't what the types are, it's whether it's worth the effort
to deal with static type *checks*.

Types and type checks just aren't the same thing.

Cheers,
Pillsy
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <vw5bkmbtiwzj$.1fxdfqicrr571$.dlg@40tude.net>
On Sat, 11 Apr 2009 07:16:16 -0700 (PDT), Pillsy wrote:

> On Apr 11, 5:17 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> On Fri, 10 Apr 2009 14:41:28 -0700 (PDT), Pillsy wrote:
> [...]
>>> If your major question is whether your approach is fast enough, or
>>> numerically stable enough, or if your simulation captures the
>>> relevant features of the system being simulated, you are probably
>>> going to write a lot of throwaway code.
> 
>> I don't see how this changes types.
> 
> It doesn't change *types*.

Fine, what is wrong with stating these types?

> The issue isn't what the types are, it's whether it's worth the effort
> to deal with static type *checks*.

Which effort? It is already done; the checks are performed by the compiler.

> Types and type checks just aren't the same thing.

Great. So we have agreed that

1. Types are needed
2. Type errors have to be detected

The only problem is that you don't want to detect these errors at compile
time, even if you could?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Ray Dillinger
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49e0bc5f$0$95486$742ec2ed@news.sonic.net>
Dmitry A. Kazakov wrote:

> The only problem is that you don't want to detect these errors at compile
> time, even if you could?

Dmitry, I don't think anyone objects to detecting type errors, or even 
type warnings, before runtime.  People who use dynamically typed languages
just prefer not to have such type errors or warnings stop them from 
running the program.  

Most coders I know and work with prefer finding logical errors in code 
by running it and seeing what happens as it interacts with real data. 
I'm sorry if you're offended by folks preferring to treat type errors 
exactly the same as any other kind of logic error, but that's what 
many folks do prefer.  

Type errors detectable before run time are mostly hypothetical in nature: 
an operation has been used that is not defined on variables of this 
type.   Type errors found at runtime are very concrete and easy to
understand: hey, this code tried to add 23 to "The rain in Spain."  
Having actual values rather than just hypothetical types makes it 
easier to understand what went wrong and go debug it.  

Aside from type errors per se, remember that there's still a 
large class of type-correct programs that cannot be proven type-
correct.  And the halting problem says that no matter how advanced 
we get, that class of programs will remain nonempty.
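(Bear's point can be made concrete with a toy sketch, here in Python with invented names, judged against a hypothetical checker that only tracks declared types: the program below is type-safe for every call it actually makes, yet a checker that sees `describe()` as returning "int or str" cannot prove the addition safe without flow-sensitive reasoning.)

```python
def describe(n):
    # returns an int for even inputs, a str for odd ones
    if n % 2 == 0:
        return n // 2
    return "odd"

def halve_even(n):
    # Type-correct *in context*: describe() is only ever called on even
    # numbers here, so the result is always an int and the + 1 is safe.
    # A checker that only sees "int or str" must warn or reject anyway.
    return describe(2 * n) + 1

print(halve_even(10))
```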

And also, sometimes these are applications that necessarily run 
24/7 and which need to be modified with new types or new methods 
while running. In a no-downtime situation, static typechecks 
"before runtime" can never happen because there is no "before" to 
work with.  I like being able to write code, for example, that 
manages lists of things, have it in a program, and then, while 
the program is running, add new user-defined types whose methods 
use my list code to manage lists of objects of the new type 
(which didn't even exist when the list module started) and link 
this into the program while it's running, without ever stopping 
the program.
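(A compressed sketch of that no-downtime scenario, in Python as a stand-in for a running image, with invented names: the generic list-managing code is written first, and a brand-new type defined "later, while running" flows through it unchanged.)

```python
# Generic list-managing code, "running" before the new type exists.
registry = []

def add_item(item):
    registry.append(item)

def names():
    return [item.name() for item in registry]   # any object with .name()

# ... later, without restarting anything, a brand-new type appears ...
class Sensor:
    def __init__(self, label):
        self.label = label
    def name(self):
        return "sensor:" + self.label

add_item(Sensor("temp"))    # the old list code handles the new type
print(names())
```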


                                Bear
From: Tamas K Papp
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <74bj95F12p316U1@mid.individual.net>
On Sat, 11 Apr 2009 11:17:02 +0200, Dmitry A. Kazakov wrote:

> If you write a lot of throwaway code, how can you claim higher
> productivity? I hope that the throwaway code is not counted as
> production code?

Architects/designers usually make blueprints, physical scale models
and lately, 3d renderings of the objects with a computer.  If you
don't understand why they do this, you will never understand the role
of throwaway code.

Tamas
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <nv7b1lgld2lw.1dq6sd044d53.dlg@40tude.net>
On 11 Apr 2009 13:12:06 GMT, Tamas K Papp wrote:

> On Sat, 11 Apr 2009 11:17:02 +0200, Dmitry A. Kazakov wrote:
> 
>> If you write a lot of throwaway code, how can you claim higher
>> productivity? I hope that the throwaway code is not counted as
>> production code?
> 
> Architects/designers usually make blueprints, physical scale models
> and lately, 3d renderings of the objects with a computer.  If you
> don't understand why they do this, you will never understand the role
> of throwaway code.

I am afraid you are confusing models with what they ought to model as well
as tools with goals.

I am surprised to hear that a physical scale model (throwaway code)
requires less effort than labeling the project folder "it will be a
bridge, guys, not a stall" (type annotation).

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Tamas K Papp
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <74blimF12r2joU1@mid.individual.net>
On Sat, 11 Apr 2009 15:30:50 +0200, Dmitry A. Kazakov wrote:

> On 11 Apr 2009 13:12:06 GMT, Tamas K Papp wrote:
> 
>> On Sat, 11 Apr 2009 11:17:02 +0200, Dmitry A. Kazakov wrote:
>> 
>>> If you write a lot of throwaway code, how can you claim higher
>>> productivity? I hope that the throwaway code is not counted as
>>> production code?
>> 
>> Architects/designers usually make blueprints, physical scale models and
>> lately, 3d renderings of the objects with a computer.  If you don't
>> understand why they do this, you will never understand the role of
>> throwaway code.
> 
> I am afraid you are confusing models with what they ought to model as
> well as tools with goals.

Nope, the separation you introduce is artificial and reflects your
lack of experience with Lisp.  Lisp is the best model for Lisp
code. Experienced Lisp programmers sketch their programs in Lisp, and
continuously refine them until they arrive at the end result.

Static typing prevents this from happening, because it distracts the
programmer and the flow is broken.  Haskell and its ilk are very good
among languages with static typing since they implement static typing
in a minimally intrusive way, but it still manages to be a pain.

> I am surprised to hear that a physical scale model (throwaway code)
> requires less effort than labeling the project folder "it will be a
> bridge, guys, not a stall" (type annotation).

That is fine when your components are well-known things and you are
just repackaging them.  But when you are developing something new, you
can't just label it because you have to say more about the object.
Good dynamic languages allow you to sketch the description of the
object in the language itself.  The emphasis is on "sketch": it is
something that is a good first approximation, but implementing it in a
language with static typing would imply a lot of wasted effort just to
please the compiler.

Tamas
From: Pillsy
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <688dac19-c6d8-4769-810c-8871c9122d36@u8g2000yqn.googlegroups.com>
On Apr 10, 1:46 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Fri, 10 Apr 2009 10:07:36 -0700 (PDT), Pillsy wrote:
[...]
> > In *some* circumstances, they are.

> It is interesting to learn the cases when type violation becomes
> irrelevant.

I didn't say type violations were going to be irrelevant, I said type
*checks* were going to be irrelevant. There's a big difference!

> In all other cases it is just a bug.

Well, sure, but sometimes coding defensively to avoid a certain class
of bug is going to take more effort than just dealing with the bug
when it comes up. Some bugs are really simple to figure out and kind
of hard to blunder into by accident---the sort where you subtract a
hash table from a string is one of those.

Cheers,
Pillsy
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090420131245.315@gmail.com>
On 2009-04-10, Dmitry A. Kazakov <·······@dmitry-kazakov.de> wrote:
> It is interesting to learn 

Is it now? You might want to test this hypothesis.
From: Vsevolod
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1c0950a5-c7e7-4535-bfbc-520334d0694c@r34g2000vba.googlegroups.com>
On Apr 10, 8:46 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> It is interesting to learn the cases when type violation becomes
> irrelevant. The only one case is when there is no any type. In all other
> cases it is just a bug.

And besides the run-time generated code, mentioned earlier, have you
heard of duck typing?
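(For concreteness, a minimal duck-typing sketch in Python, used here only as an illustrative stand-in: the function declares no types anywhere and accepts any argument whose elements respond to the one operation it uses.)

```python
def total_length(items):
    # No declared types: anything that answers len() is welcome.
    return sum(len(x) for x in items)

print(total_length(["abc", (1, 2), {"k": 1}]))   # a str, a tuple, a dict
```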

Cheers,
Vsevolod
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1lalmv7yd9q28.k3gs02vhy8ao.dlg@40tude.net>
On Sat, 11 Apr 2009 13:41:02 -0700 (PDT), Vsevolod wrote:

> On Apr 10, 8:46 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> It is interesting to learn the cases when type violation becomes
>> irrelevant. The only one case is when there is no any type. In all other
>> cases it is just a bug.
> 
> And besides the run-time generated code, mentioned earlier,

I answered that in other post.

> have you heard of duck typing?

Yes, it should not be allowed in a well-designed language.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Vsevolod
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <cef2387d-1872-470a-919b-8340db10b83d@v9g2000vbb.googlegroups.com>
On Apr 12, 11:18 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> On Sat, 11 Apr 2009 13:41:02 -0700 (PDT), Vsevolod wrote:
> > On Apr 10, 8:46 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> > wrote:
> >> It is interesting to learn the cases when type violation becomes
> >> irrelevant. The only one case is when there is no any type. In all other
> >> cases it is just a bug.
>
> > And besides the run-time generated code, mentioned earlier,
>
> I answered that in other post.
>
> > have you heard of duck typing?
>
> Yes, it should not be allowed in a well-designed language.

Who gave you the authority to state that? And, moreover, to tell
people which development style to use (as in the above discussion)? If
you ask people actually involved in programming language design,
I think few if any will voice such strong opinions. It is PL
fanaticism.

Good bye
Vsevolod
From: Tamas K Papp
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <74e1u6F11jeq5U1@mid.individual.net>
On Sun, 12 Apr 2009 03:08:29 -0700, Vsevolod wrote:

> On Apr 12, 11:18 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> On Sat, 11 Apr 2009 13:41:02 -0700 (PDT), Vsevolod wrote:
>> > On Apr 10, 8:46 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>> > wrote:
>> >> It is interesting to learn the cases when type violation becomes
>> >> irrelevant. The only one case is when there is no any type. In all
>> >> other cases it is just a bug.
>>
>> > And besides the run-time generated code, mentioned earlier,
>>
>> I answered that in other post.
>>
>> > have you heard of duck typing?
>>
>> Yes, it should not be allowed in a well-designed language.
> 
> Who gave you the authority to state that? And, moreover, to tell people
> which development style to use (as of the above discussion)? If you ask

Be easy on the guy, he programs Ada 95/2005, which is a very serious
self-inflicted punishment.  He is frustrated because other people are
using nice programming languages.  The phenomenon is similar to the
pope telling other people how to conduct their sex life: if he can't
enjoy it, others shouldn't be able to either.

Tamas
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <9x7kxtv2quwq$.42n10pkhvo37.dlg@40tude.net>
On Sun, 12 Apr 2009 03:08:29 -0700 (PDT), Vsevolod wrote:

> On Apr 12, 11:18 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> On Sat, 11 Apr 2009 13:41:02 -0700 (PDT), Vsevolod wrote:
>>> On Apr 10, 8:46 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>>> wrote:
>>>> It is interesting to learn the cases when type violation becomes
>>>> irrelevant. The only one case is when there is no any type. In all other
>>>> cases it is just a bug.
>>
>>> And besides the run-time generated code, mentioned earlier,
>>
>> I answered that in other post.
>>
>>> have you heard of duck typing?
>>
>> Yes, it should not be allowed in a well-designed language.
> 
> Who gave you the authority to state that?

Authority? Did I propose to make it a law? I answered your question.

However there are certain merits in this idea. Alas, I doubt that lawmakers
would hear me. IMO a legal persecution of duck-typing seems far more useful
than fighting peer-to-peer file downloading... (:-))

> And, moreover, to tell
> people which development style to use (as of the above discussion)? If
> you ask the people, actually involved in programming language design,
> I think few if any will give such strong opinions. It is PL fanaticism.

Well, people who have weak opinions should stay away from discussing them.
Anyway, it seems that your opinion regarding your language preferences is
no less strong than mine.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Mark Wooding
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <874owtajiw.fsf.mdw@metalzone.distorted.org.uk>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:

> On Sun, 12 Apr 2009 03:08:29 -0700 (PDT), Vsevolod wrote:
>
> > On Apr 12, 11:18 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> > wrote:
> >> On Sat, 11 Apr 2009 13:41:02 -0700 (PDT), Vsevolod wrote:
> >>> On Apr 10, 8:46 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> >>> wrote:
> >>>> It is interesting to learn the cases when type violation becomes
> >>>> irrelevant. The only one case is when there is no any type. In all other
> >>>> cases it is just a bug.
> >>
> >>> And besides the run-time generated code, mentioned earlier,
> >>
> >> I answered that in other post.
> >>
> >>> have you heard of duck typing?
> >>
> >> Yes, it should not be allowed in a well-designed language.
> > 
> > Who gave you the authority to state that?
>
> Authority, did I proposed to make it a law? I answered your question. 
>
> However there are certain merits in this idea. Alas, I doubt that lawmakers
> would hear me. IMO a legal persecution of duck-typing seems far more useful
> than fighting peer-to-peer file downloading... (:-))

If you were to say that the ideas of suppressing duck-typing and
filesharing are similarly absurd, then I'd happily agree with you.  But
I'll also not comment further on the state of copyright law in these
newsgroups.

> > And, moreover, to tell people which development style to use (as of
> > the above discussion)? If you ask the people, actually involved in
> > programming language design, I think few if any will give such
> > strong opinions. It is PL fanaticism.
>
> Well people who have weak opinions should stay away from discussing
> them.  Anyway it seems that your opinion regarding your language
> preferences is no less stronger than mine.

I can't speak for Vsevolod, but I draw a very clear distinction between
my preferences for a programming language which I am going to use
personally, and how I think the design space of programming languages
ought to be populated.

My opinion on programming languages in general is that there ought to be
thorough exploration of the design space: static and dynamic typing (and
compromises in-between); lexical and dynamic scope; reference and value
assignment semantics; stateful imperative, object oriented, functional,
logic; and so on.  There are many paradigms[1] and features, gross and
subtle, that make languages suitable for different purposes and by
different people; and almost all of them are valuable for something.

I'm somewhat more particular about the kinds of languages that I like to
use -- and I've used and studied a wide variety before coming to my
conclusions.  But in fact there's a strong analogy: in the same way that
I believe that the space of programming languages ought to be widely
populated, and flexible, with programmers free to choose the language
which best suits them and the problems they're solving, I appreciate
flexibility and freedom in the specific language I'm using.  When I'm
programming, I don't like the language telling me that it knows better
than I do how to achieve a particular goal.  Languages which I find
amenable are collections of tools, agnostic about how I might feel like
using them.

But I'm aware that others think differently.  And while I might tell you
that I don't particularly enjoy programming in strict statically-typed
languages, that it probably isn't a particularly good fit for the way my
brain works, or the best possible use of my time or abilities, I'm not
going to tell you that you've chosen the wrong language for your project
just because it doesn't match my ideas of what /I'd/ like to use.

For reference, I like languages like Common Lisp and Python; they seem
free and open in a way I find pleasing.  I only really enjoy Haskell
when I'm subverting it (e.g., using its foreign function interface), but
I think it's a fascinating language and strongly hope it continues to
evolve and be used.  Oddly enough, I don't enjoy Scheme very much: there
seems to be an increasingly dogmatic streak in the RnRS documents which
seems out-of-keeping with its Lisp heritage.

On the other hand, I think that Java is mostly a mistake, but not
because of its choice to be statically typed: I see its choice of a
16-bit `character' type to represent Unicode[2] as being actively
dishonest; I've found that its idea of `checked exceptions' seriously
impedes accurate propagation of errors through formally-defined
interfaces; and I believe that Java's default use of bounded integer
types with silent overflow is fundamentally incompatible with its stated
aim of being a safe high-level language.  I see these (and others) as
objective failings, distinct from the subjective reasons why I don't
find programming in Java to be pleasurable.  Even so, I know many
programmers who do enjoy writing Java code, and while I find myself
unable to understand why, I continue to support the availability of Java
on their behalf.
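The overflow complaint is easy to demonstrate even without a JVM to hand.
Here is a rough Python simulation of Java's 32-bit `int' addition (the
helper name is invented for illustration):

```python
def java_int_add(a, b):
    """Simulate Java's int addition: the result wraps around silently
    into the 32-bit two's-complement range, instead of widening to a
    larger type or raising an error."""
    r = (a + b) & 0xFFFFFFFF                # keep only the low 32 bits
    return r - 0x100000000 if r & 0x80000000 else r

JAVA_INT_MAX = 2**31 - 1                    # Integer.MAX_VALUE
print(java_int_add(JAVA_INT_MAX, 1))        # prints -2147483648, no warning
```

Adding one to the largest representable value quietly yields the most
negative one; that is the "silent overflow" objected to above.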

[1] Yes.  I really mean it.

[2] An historically imposed mistake, admittedly, since Java predates the
    expansion of Unicode from 16 to 21 bits; but one that was clearly
    both foreseeable and avoidable.  The fact that Java chose to use an
    integer type with rigidly defined bounds -- rather than to provide
    an abstract character type -- is a major contributing factor.

-- [mdw]
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <12irh1dfzo1md$.3kw2i3vixoey$.dlg@40tude.net>
On Sun, 12 Apr 2009 18:06:47 +0100, Mark Wooding wrote:

> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
> 
>> On Sun, 12 Apr 2009 03:08:29 -0700 (PDT), Vsevolod wrote:
>>
>>> And, moreover, to tell people which development style to use (as of
>>> the above discussion)? If you ask the people, actually involved in
>>> programming language design, I think few if any will give such
>>> strong opinions. It is PL fanaticism.
>>
>> Well people who have weak opinions should stay away from discussing
>> them.  Anyway it seems that your opinion regarding your language
>> preferences is no less stronger than mine.
> 
> I can't speak for Vsevolod, but I draw a very clear distinction between
> my preferences for a programming language which I am going to use
> personally, and how I think the design space of programming languages
> ought to be populated.

In that case you do not consider programming as an engineering activity. I
do. Note that engineering assumes responsibility for taking decisions. This
excludes an indifferent attitude to bad practices.

*What* is a bad practice must be a subject of discussion, but not whether
or not we should fight bad practices. Yes, I am aware that programming is
not yet much of an engineering discipline. But at least we should not
behave as if we were shamans.

My comment on duck-typing was that it is bad practice. The answer was:
shut up, we do whatever pleases us. That is silly and irresponsible.

> My opinion on programming languages in general is that there ought to be
> thorough exploration of the design space: static and dynamic typing (and
> compromises in-between); lexical and dynamic scope; reference and value
> assignment semantics; stateful imperative, object oriented, functional,
> logic; and so on.  There are many paradigms[1] and features, gross and
> subtle, that make languages suitable for different purposes and by
> different people; and almost all of them are valuable for something.
> 
> I'm somewhat more particular about the kinds of languages that I like to
> use -- and I've used and studied a wide variety before coming to my
> conclusions.  But in fact there's a strong analogy: in the same way that
> I believe that the space of programming languages ought to be widely
> populated, and flexible, with programmers free to choose the language
> which best suits them and the problems they're solving, I appreciate
> flexibility and freedom in the specific language I'm using.  When I'm
> programming, I don't like the language telling me that it knows better
> than I do how to achieve a particular goal.  Languages which I find
> amenable are collections of tools, agnostic about how I might feel like
> using them.

That is fine, so long as you don't want to draw any conclusions from
language design exploration. If CS is a science and programming is
engineering, then you cannot just collect languages like postage stamps.

[...personal language preferences...]

P.S. I don't like Java. It took the worst from C++ and corrupted some
good ideas, like contracted exceptions.

P.P.S. I don't consider static vs. dynamic typing a live issue, because
statically typed OO languages have successfully incorporated everything
useful from dynamic typing.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Vsevolod
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <f1b04bb7-7178-482a-ab6d-29c41dcb05ea@w40g2000yqd.googlegroups.com>
On Apr 12, 9:19 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
> My comment on duck-typing was that it is bad practice. The answer was, -
> shut up, we do whatever us pleases. That is silly and irresponsible.

Your comment was, that it shouldn't be allowed. Without any
explanation why. Not even mentioning the words "bad practice". Isn't
that silly and irresponsible? I've just tried to match your attitude.
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49e27df7$0$5925$607ed4bc@cv.net>
Vsevolod wrote:
> On Apr 12, 9:19 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
>> My comment on duck-typing was that it is bad practice. The answer was, -
>> shut up, we do whatever us pleases. That is silly and irresponsible.
> 
> Your comment was, that it shouldn't be allowed. Without any
> explanation why. Not even mentioning the words "bad practice". Isn't
> that silly and irresponsible? I've just tried to match your attitude.

This has the exciting potential for being the most exceedingly dreary 
infinite loop ever.

kt
From: ·····@franz.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <75b7f43b-f408-4b6d-98b7-83a8cddd7da0@e18g2000yqo.googlegroups.com>
On Apr 12, 4:49 pm, Kenneth Tilton <·········@gmail.com> wrote:
> Vsevolod wrote:
> > On Apr 12, 9:19 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> > wrote:
> >> My comment on duck-typing was that it is bad practice. The answer was, -
> >> shut up, we do whatever us pleases. That is silly and irresponsible.
>
> > Your comment was, that it shouldn't be allowed. Without any
> > explanation why. Not even mentioning the words "bad practice". Isn't
> > that silly and irresponsible? I've just tried to match your attitude.
>
> This has the exciting potential for being the most exceedingly dreary
> infinite loop ever.

And a tail-recursive one at that.

Duane
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090413080443.GD3826@gildor.inglorion.net>
On Sun, Apr 12, 2009 at 11:41:22PM -0700, ·····@franz.com wrote:
> On Apr 12, 4:49 pm, Kenneth Tilton <·········@gmail.com> wrote:
> >
> > This has the exciting potential for being the most exceedingly dreary
> > infinite loop ever.
> 
> And a tail-recursive one at that.

Alas, Usenet does not seem to do tail call elimination. ;-)

  -- Bob

From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090423033218.712@gmail.com>
On 2009-04-13, Robbert Haarman <··············@inglorion.net> wrote:
> On Sun, Apr 12, 2009 at 11:41:22PM -0700, ·····@franz.com wrote:
>> On Apr 12, 4:49 pm, Kenneth Tilton <·········@gmail.com> wrote:
>> >
>> > This has the exciting potential for being the most exceedingly dreary
>> > infinite loop ever.
>> 
>> And a tail-recursive one at that.
>
> Alas, Usenet does not seem to do tail call elimination. ;-)

That's implemented by trimming the quoted material once in a while and
collapsing the level of quoting, similarly to tail call implementations which
use a stack, but throw it away when it grows too long.
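
The analogy is closer than it might look. A toy Python sketch of that
strategy (all names invented): tail calls go through a helper which, once
the stack has grown past a limit, throws the entire stack away and hands
the pending call back to a driver loop.

```python
import sys

class TailCall(Exception):
    """A pending call, raised to throw away a stack that grew too long."""
    def __init__(self, fn, args):
        self.fn, self.args = fn, args

def depth():
    # count the frames currently on the call stack
    f, n = sys._getframe(), 0
    while f is not None:
        f, n = f.f_back, n + 1
    return n

def tail(fn, *args):
    # like trimming quoted material: once the quoting is too deep,
    # discard the whole stack, keeping only the pending call
    if depth() > 50:
        raise TailCall(fn, args)
    return fn(*args)

def run(fn, *args):
    # the driver loop resumes each trimmed call on a fresh stack
    while True:
        try:
            return fn(*args)
        except TailCall as t:
            fn, args = t.fn, t.args

def count(n, acc=0):
    # deeply tail-recursive; would overflow Python's stack if called raw
    return acc if n == 0 else tail(count, n - 1, acc + 1)

print(run(count, 100000))
```

Discarding the intermediate frames is safe precisely because the calls are
tail calls: the deepest call's result is the whole chain's result. Chicken
Scheme's "Cheney on the MTA" technique works in roughly this spirit.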
From: Mark Wooding
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <87zlel7jbu.fsf.mdw@metalzone.distorted.org.uk>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:

> In that case you do not consider programming as an engineering
> activity. I do. 

Actually, I think of programming as being largely a craft.  A good
craftsman is aware of the science behind his materials and tools, but
there is more to craftsmanship than engineering.

> Note that engineering assumes responsibility for taking
> decisions. This excludes an indifferent attitude to bad practices.

One obviously warns those less experienced of past mistakes, and
discourages sloppy and lazy work -- and with a stick if necessary.  But
ultimately they will grow beyond our warning, our sticks and our advice,
and they must take responsibility for their own actions.

> *What* is a bad practice must be a subject of discussion, but not
> whether we should or not fight bad practices.

I don't think labels like `bad' belong in an area where there is
significant difference of opinion among experts.

> My comment on duck-typing was that it is bad practice. The answer was,
> - shut up, we do whatever us pleases. That is silly and irresponsible.

Given your closely reasoned and cogently argued discussion of the harm
caused by `duck-typing', I don't see how any reasonable person could
disagree.

Err... where was that discussion again?

> That is fine, so long you don't want to draw any conclusion from
> language design exploration.

Surely one cannot draw conclusions from the languages, but possibly from
their use.  

> If CS is a science and programming is engineering, then you cannot
> just collect languages like post stamps.

Actually, I have done pretty much that exact thing.  To excel at one's
craft, it is essential to know the tools at one's disposal.

> P.P.S. I don't consider static vs. dynamic typing actual, because
> statically typed OO languages successfully incorporated everything useful
> of dynamic typing.

Object orientation is but one paradigm among many, with different areas
of applicability.  You seem intent on ignoring things of which you have
no experience, rather than learning from them.  Erlang, for example, is
dynamically typed, and is not object oriented; yet it seems very
successful in its niche (which is highly concurrent systems, e.g., in
telephony systems).

-- [mdw]
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <inkx5a2uf58q$.15wm29w9s73xf$.dlg@40tude.net>
On Sun, 12 Apr 2009 20:39:17 +0100, Mark Wooding wrote:

> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
> 
>> *What* is a bad practice must be a subject of discussion, but not
>> whether we should or not fight bad practices.
> 
> I don't think labels like `bad' belong in an area where there is
> significant difference of opinion among experts.

Experts in what? Expertise is based on knowledge provided by science.
Discussions like this have nothing to do with that. They all run the same
pattern. After a short introductory phase, in which the players identify
themselves with one of the parties, they enter the core phase of the
discussion: "who are you to tell me?"

>> My comment on duck-typing was that it is bad practice. The answer was,
>> - shut up, we do whatever us pleases. That is silly and irresponsible.
> 
> Given your closely reasoned and cogently argued discussion of the harm
> caused by `duck-typing', I don't see how any reasonable person could
> disagree.
> 
> Err... where was that discussion again?

Does it require any? Duck-typing and shoe polish greatly improve
productivity etc. (:-))

>> That is fine, so long you don't want to draw any conclusion from
>> language design exploration.
> 
> Surely one cannot draw conclusions from the languages, but possibly from
> their use.  

Language use reflects its design. People choose languages according to
their preferences. (Whatever outcry it may cause, I do think that some
people should not program professionally... (:-))

>> P.P.S. I don't consider static vs. dynamic typing actual, because
>> statically typed OO languages successfully incorporated everything useful
>> of dynamic typing.
> 
> Object orientation is but one paradigm among many, with different areas
> of applicability.

I don't believe in paradigms. I cannot imagine, say, a paradigm of bridge
construction.

> You seem intent on ignoring things of which you have
> no experience, rather than learning from them.  Erlang, for example, is
> dynamically typed, and is not object oriented; yet it seems very
> successful in its niche (which is highly concurrent systems, e.g., in
> telephony systems).

The most successful language is Visual Basic...

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090422172620.17@gmail.com>
On 2009-04-13, Dmitry A. Kazakov <·······@dmitry-kazakov.de> wrote:
> On Sun, 12 Apr 2009 20:39:17 +0100, Mark Wooding wrote:
>
>> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>> 
>>> *What* is a bad practice must be a subject of discussion, but not
>>> whether we should or not fight bad practices.
>> 
>> I don't think labels like `bad' belong in an area where there is
>> significant difference of opinion among experts.
>
> Experts in what?

Experts in material that you don't even have the beginning of a foggy clue
about, and which you aren't going to acquire overnight.

> Expertise is based on knowledge provided by science.
> Discussions like this has nothing to do with that.

Speaking for yourself, of course.

> All they run the same pattern.

The pattern that you have no clue and just irritate people who do have a clue?

> After a short introductive phase when players identify themselves
> with one of the parties, they enter the core phase of discussion: "who are
> you to tell me?"

Hi, I'm from the well-informed party of crusty, seasoned software experts.

So is Mark Wooding.

>
>>> My comment on duck-typing was that it is bad practice. The answer was,
>>> - shut up, we do whatever us pleases. That is silly and irresponsible.
>> 
>> Given your closely reasoned and cogently argued discussion of the harm
>> caused by `duck-typing', I don't see how any reasonable person could
>> disagree.
>> 
>> Err... where was that discussion again?
>
> Does it require any? Duck-typing and shoe polish greatly improve
> productivity etc. (:-))

``Duck typing'' is a very recently introduced term, supposedly originating in
the Python online community (where it should have stayed), whereas dynamic
typing has been in use for fifty years.  ``Duck typing'' is not a synonym for
``dynamic typing''.  It places emphasis on type substitutability
(object-oriented polymorphism).  The idea is that an object is, for some
practical purpose, of a type if the methods defined to operate on that type
accept the object---i.e.  ``if something walks like a duck, and quacks like a
duck, it is a duck''.  This is an oversimplification, because under
polymorphism there are always distinctions among different kinds of ``ducks'',
since some or all of the methods do subtly different things for differently
typed objects.  If all methods behave the same way for objects A and B,
then A and B are effectively of the same type.  But if all the methods
merely behave sensibly for A and B, without failure, that does not make A
and B the same type: A and B may both respond meaningfully to the
``quack'' method, yet not quack exactly the same way.
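
In Python terms (toy classes, invented for illustration), the distinction
looks like this: both objects pass the same duck-type check, yet they are
observably not the same type.

```python
class Duck:
    def quack(self):
        return "quack"

class Decoy:
    def quack(self):
        return "QUACK!"      # responds to quack(), but not the same way

def provoke(bird):
    # duck typing: any object with a quack() method is accepted
    return bird.quack()

print(provoke(Duck()))       # prints quack
print(provoke(Decoy()))      # prints QUACK!
```

`provoke' succeeds without failure for both, but the two quacks differ,
so mere success does not establish type identity.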

Terms like ``duck typing'' are part of a trend to dumb down computer science,
and should be rejected.

> I don't believe in paradigms. I cannot imagine, say, a paradigm of bridge
> construction.

Not only are there paradigms for bridge construction, but they even have names,
like ``Pratt truss''.

As any engineer knows.
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <74eu9oF13aedjU1@mid.individual.net>
Dmitry A. Kazakov wrote:
> On Sun, 12 Apr 2009 18:06:47 +0100, Mark Wooding wrote:
> 
>> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>
>>> On Sun, 12 Apr 2009 03:08:29 -0700 (PDT), Vsevolod wrote:
>>>
>>>> And, moreover, to tell people which development style to use (as of
>>>> the above discussion)? If you ask the people, actually involved in
>>>> programming language design, I think few if any will give such
>>>> strong opinions. It is PL fanaticism.
>>> Well people who have weak opinions should stay away from discussing
>>> them.  Anyway it seems that your opinion regarding your language
>>> preferences is no less stronger than mine.
>> I can't speak for Vsevolod, but I draw a very clear distinction between
>> my preferences for a programming language which I am going to use
>> personally, and how I think the design space of programming languages
>> ought to be populated.
> 
> In that case you do not consider programming as an engineering activity. I
> do. Note that engineering assumes responsibility for taking decisions. This
> excludes an indifferent attitude to bad practices.
> 
> *What* is a bad practice must be a subject of discussion, but not whether
> we should or not fight bad practices. Yes, I am aware that programming is
> not much engineering. But at least we should not behave us as if we were
> shamans.

http://www.chrisewings.com/images/Dilbert/pages/dilbert2005071744002_gif.htm

We know very little about which practices are bad and which aren't. We 
can all speak only from personal, anecdotal evidence at best. There is no 
serious research into finding out which practices are bad and which 
aren't. We are all just guessing.

In the meantime, people use the most weird and badly designed 
programming languages and are very successful in coming up with good 
products. At the same time, very bad software is developed in supposedly 
well-designed languages. What does that tell us? Maybe this is not the 
important factor. Maybe other factors are far more important.

What if that were the case?

> P.P.S. I don't consider static vs. dynamic typing actual, because
> statically typed OO languages successfully incorporated everything useful
> of dynamic typing.

It's probably impossible to have a static type system for a 'fully' 
reflective programming language.
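
A small Python sketch of why (class and method names invented): when a
program can extend its own classes at run time, a later call can be valid
even though no static reading of the source could have typed it.

```python
class Account:
    """A class whose final shape is only known at run time."""
    def __init__(self, balance):
        self.balance = balance

# Reflection: the class is an ordinary run-time object we can extend.
# The method name could come from a config file or user input, so no
# static analysis of the definition above can know 'audit' will exist.
method_name = "audit"
setattr(Account, method_name, lambda self: self.balance >= 0)

acct = Account(100)
print(acct.audit())   # prints True; a call no static checker could type
```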

Based on my personal, anecdotal evidence, my guess is that 'full' 
reflection will eventually be more important than any amount of static 
guarantees you could ever have.

Program correctness is a pipe dream. That static type systems achieve it 
is a myth.


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <15yail0dfvdkb.wyctsa35hm3c$.dlg@40tude.net>
On Sun, 12 Apr 2009 21:38:31 +0200, Pascal Costanza wrote:

> Dmitry A. Kazakov wrote:
>> On Sun, 12 Apr 2009 18:06:47 +0100, Mark Wooding wrote:
>> 
>>> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
>>>
>>>> On Sun, 12 Apr 2009 03:08:29 -0700 (PDT), Vsevolod wrote:
>>>>
>>>>> And, moreover, to tell people which development style to use (as of
>>>>> the above discussion)? If you ask the people, actually involved in
>>>>> programming language design, I think few if any will give such
>>>>> strong opinions. It is PL fanaticism.
>>>> Well people who have weak opinions should stay away from discussing
>>>> them.  Anyway it seems that your opinion regarding your language
>>>> preferences is no less stronger than mine.
>>> I can't speak for Vsevolod, but I draw a very clear distinction between
>>> my preferences for a programming language which I am going to use
>>> personally, and how I think the design space of programming languages
>>> ought to be populated.
>> 
>> In that case you do not consider programming as an engineering activity. I
>> do. Note that engineering assumes responsibility for taking decisions. This
>> excludes an indifferent attitude to bad practices.
>> 
>> *What* is a bad practice must be a subject of discussion, but not whether
>> we should or not fight bad practices. Yes, I am aware that programming is
>> not much engineering. But at least we should not behave us as if we were
>> shamans.
> 
> http://www.chrisewings.com/images/Dilbert/pages/dilbert2005071744002_gif.htm

I remember that one. (:-))

> We know very little what are bad practices and what aren't. We can all 
> only speak from personal, anecdotal evidence at best. There is no 
> serious research into finding out what are bad practices and what 
> aren't. We are all just guessing.

For the third time we are returning to the point that higher productivity
is an urban legend without hard proof. It cannot serve as an argument
either for or against static typing.

> In the meantime, people use the most weird and badly designed 
> programming languages and are very successful in coming up with good 
> products. At the same time, very bad software is developed in supposedly 
> well-designed languages. What does that tell us? Maybe this is not the 
> important factor. Maybe other factors are far more important.

Like that software design is still not engineering and programmers aren't
engineers.

> What if that were the case?

No matter. There is no other choice if programs are written by humans. You
cannot change programmers, so the only way out is to change the tools they
use. The language is one of the tools.

>> P.P.S. I don't consider static vs. dynamic typing actual, because
>> statically typed OO languages successfully incorporated everything useful
>> of dynamic typing.
> 
> It's probably impossible to have a static type system for a 'fully' 
> reflective programming language.

I think that a fully reflective language would most likely be inconsistent.

> Based on my personal, anecdotal evidence, my guess is that 'full' 
> reflection will eventually be more important than any amount of static 
> guarantees you could ever have.

I don't think so; however, I understand the motivation behind it. Probably
some bright idea will appear that will be capable of reconciling it. After
all, a polymorphic value of many types also appeared inconsistent at first
glance.

> Program correctness is a pipe dream. That static type systems achieve it 
> is a myth.

Program correctness is a paramount problem. An incorrect program is like a
car that does not move. If programming will ever become engineering it
shall reach a reasonable level of predictability.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <74f2m4F12g9lqU1@mid.individual.net>
Dmitry A. Kazakov wrote:

>> We know very little what are bad practices and what aren't. We can all 
>> only speak from personal, anecdotal evidence at best. There is no 
>> serious research into finding out what are bad practices and what 
>> aren't. We are all just guessing.
> 
> For the third time we are returning the point that higher productivity is
> an urban legend without a hard proof. It cannot serve as an argument either
> for or against static typing.

I agree with that.

In my personal experience, static typing reduces my productivity rather 
than enhances it. But this can be different for other people, of course.

> No matter. There is no other choice if programs are written by humans. You
> cannot change programmers, so the only way out is to change the tools they
> use. The language is one of the tools.

We're changing programmers by changing the tools they use. My conjecture 
is that the main reason why there are so many 'average' programmers is 
exactly because they are given tools for 'average' programmers. It's a 
self-fulfilling prophecy.

>>> P.P.S. I don't consider static vs. dynamic typing actual, because
>>> statically typed OO languages successfully incorporated everything useful
>>> of dynamic typing.
>> It's probably impossible to have a static type system for a 'fully' 
>> reflective programming language.
> 
> I think that a fully reflective language would most likely be inconsistent.

It seems to me that this sentence doesn't mean anything.

>> Based on my personal, anecdotal evidence, my guess is that 'full' 
>> reflection will eventually be more important than any amount of static 
>> guarantees you could ever have.
> 
> I don't think so, however I understand the motivation behind it. Probably
> some bright idea will appear that will be capable to reconcile it. After
> all a polymorphic value of many types also appeared as inconsistent at
> first glance.

Personally, I'm not interested in whether anyone will ever be able to 
reconcile it or not. I can already use 'sufficiently' reflective 
programming languages today, and they allow me to focus on other topics 
that I personally find a lot more interesting.

And don't tell me I shouldn't...

>> Program correctness is a pipe dream. That static type systems achieve it 
>> is a myth.
> 
> Program correctness is a paramount problem. An incorrect program is like a
> car that does not move. If programming will ever become engineering it
> shall reach a reasonable level of predictability.

Until then, we can relax and enjoy playing around a little...


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Pillsy
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <40f0c772-1417-4a68-8831-262d3c52e9f3@s20g2000yqh.googlegroups.com>
On Apr 12, 4:04 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
wrote:
[...]
> Program correctness is a paramount problem. An incorrect program is like a
> car that does not move. If programming will ever become engineering it
> shall reach a reasonable level of predictability.

Cars routinely fail to move. Indeed, there is a rather large industry
devoted entirely to making cars move again.

Does that mean mechanical engineering isn't... er... engineering?

Cheers,
Pillsy
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1mh87z8a537uq.2s61cil9lcfn.dlg@40tude.net>
On Mon, 13 Apr 2009 05:55:23 -0700 (PDT), Pillsy wrote:

> On Apr 12, 4:04 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
> wrote:
> [...]
>> Program correctness is a paramount problem. An incorrect program is like a
>> car that does not move. If programming will ever become engineering it
>> shall reach a reasonable level of predictability.
> 
> Cars routinely fail to move. Indeed, there is a rather large industry
> devoted entirely to making cars move again.
> 
> Does that mean mechanical engineering isn't... er... engineering?

No, it does not. When you buy a new car, it moves. Moreover, the car vendor
is liable to provide you a warranty. Now compare that with software
licenses. Search the Internet for figures on how many software projects
fail. Did you ever hear of a car model that could not move because of
design defects?

That is because mechanical engineering is statically typed. No engineer
would nail the nut in order to improve "productivity." Those who advocate
for the paradigm of "nailing nuts" usually don't pass their first exam...

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Tamas K Papp
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <74h0j5F13m0ofU1@mid.individual.net>
On Mon, 13 Apr 2009 16:06:26 +0200, Dmitry A. Kazakov wrote:

> On Mon, 13 Apr 2009 05:55:23 -0700 (PDT), Pillsy wrote:
> 
>> On Apr 12, 4:04 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>> wrote:
>> [...]
>>> Program correctness is a paramount problem. An incorrect program is
>>> like a car that does not move. If programming will ever become
>>> engineering it shall reach a reasonable level of predictability.
>> 
>> Cars routinely fail to move. Indeed, there is a rather large industry
>> devoted entirely to making cars move again.
>> 
>> Does that mean mechanical engineering isn't... er... engineering?
> 
> No it does not. When you buy a new car it moves. Moreover, the car
> vendor is liable to provide you a warranty. Now compare that with
> software licenses. Search the Internet for the figures how many software
> projects fail. Did you ever hear about a car model that could not move
> because of constructive defects?
> 
> That is because mechanical engineering is statically typed. No engineer
> would nail the nut in order to improve "productivity." Those who
> advocate for the paradigm of "nailing nuts" usually don't pass their
> first exam...

Haha.  Heard about the Ariane 5 Flight 501? [1] The software that
caused the rocket to explode in space was written in Ada, one of the
most hardcore bondage and discipline languages ever invented.  Just to
emphasize the point: static typing didn't catch any of the errors that
led to the failure.

Also from the Wikipedia page:

"Pre-flight tests had never been performed on the re-alignment code
under simulated Ariane 5 flight conditions, so the error was not
discovered before launch."

So perhaps _testing_ the code would have helped.  13 years later you
are still arguing that static typing is oh so useful, etc.  Kind of
sad.

Tamas

[1] http://en.wikipedia.org/wiki/Ariane_5_Flight_501
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <1vjq9j51sjrzz.1e34z6bszgku5.dlg@40tude.net>
On 13 Apr 2009 14:29:57 GMT, Tamas K Papp wrote:

> On Mon, 13 Apr 2009 16:06:26 +0200, Dmitry A. Kazakov wrote:
> 
>> On Mon, 13 Apr 2009 05:55:23 -0700 (PDT), Pillsy wrote:
>> 
>>> On Apr 12, 4:04 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>>> wrote:
>>> [...]
>>>> Program correctness is a paramount problem. An incorrect program is
>>>> like a car that does not move. If programming will ever become
>>>> engineering it shall reach a reasonable level of predictability.
>>> 
>>> Cars routinely fail to move. Indeed, there is a rather large industry
>>> devoted entirely to making cars move again.
>>> 
>>> Does that mean mechanical engineering isn't... er... engineering?
>> 
>> No it does not. When you buy a new car it moves. Moreover, the car
>> vendor is liable to provide you a warranty. Now compare that with
>> software licenses. Search the Internet for the figures how many software
>> projects fail. Did you ever hear about a car model that could not move
>> because of constructive defects?
>> 
>> That is because mechanical engineering is statically typed. No engineer
>> would nail the nut in order to improve "productivity." Those who
>> advocate for the paradigm of "nailing nuts" usually don't pass their
>> first exam...
> 
> Haha.  Heard about the Ariane 5 Flight 501? [1] The software that
> caused the rocket to explode in space was written in Ada, one of the
> most hardcore bondage and discipline languages ever invented.  Just to
> emphasize the point: static typing didn't catch any of the errors that
> lead to the failure.
> 
> Also from the Wikipedia page:
> 
> "Pre-flight tests had never been performed on the re-alignment code
> under simulated Ariane 5 flight conditions, so the error was not
> discovered before launch."
> 
> So perhaps _testing_ the code would have helped.  13 years later you
> are still arguing that static typing is oh so useful, etc.  Kind of
> sad.

It helped: the rocket exploded during a "run-time test". (:-)) Did you
suggest that the control center should have tried to debug the Ariane code
while it was exploding? (:-)) One test was enough; the next Ariane 5 flew
without problems.

BTW, just for your information, Ariane 5 did not fail because of a software
fault. The software was OK and functioned as intended. The minor problem
was that it was the software from Ariane 4... (:-)) The management decided
to save money...

Last but not least, you could also look at the software design process they
used. It included a *huge* amount of tests. When specifications are wrong,
no test and no check can help.

Anyway all this is totally irrelevant to the issue of whether software
developing is engineering.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Chris Barts
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <87prffvdzd.fsf@chbarts.motzarella.org>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:

> On 13 Apr 2009 14:29:57 GMT, Tamas K Papp wrote:
>
>> On Mon, 13 Apr 2009 16:06:26 +0200, Dmitry A. Kazakov wrote:
>> 
>>> On Mon, 13 Apr 2009 05:55:23 -0700 (PDT), Pillsy wrote:
>>> 
>>>> On Apr 12, 4:04 pm, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>>>> wrote:
>>>> [...]
>>>>> Program correctness is a paramount problem. An incorrect program is
>>>>> like a car that does not move. If programming will ever become
>>>>> engineering it shall reach a reasonable level of predictability.
>>>> 
>>>> Cars routinely fail to move. Indeed, there is a rather large industry
>>>> devoted entirely to making cars move again.
>>>> 
>>>> Does that mean mechanical engineering isn't... er... engineering?
>>> 
>>> No it does not. When you buy a new car it moves. Moreover, the car
>>> vendor is liable to provide you a warranty. Now compare that with
>>> software licenses. Search the Internet for the figures how many software
>>> projects fail. Did you ever hear about a car model that could not move
>>> because of constructive defects?
>>> 
>>> That is because mechanical engineering is statically typed. No engineer
>>> would nail the nut in order to improve "productivity." Those who
>>> advocate for the paradigm of "nailing nuts" usually don't pass their
>>> first exam...
>> 
>> Haha.  Heard about the Ariane 5 Flight 501? [1] The software that
>> caused the rocket to explode in space was written in Ada, one of the
>> most hardcore bondage and discipline languages ever invented.  Just to
>> emphasize the point: static typing didn't catch any of the errors that
>> lead to the failure.
>> 
>> Also from the Wikipedia page:
>> 
>> "Pre-flight tests had never been performed on the re-alignment code
>> under simulated Ariane 5 flight conditions, so the error was not
>> discovered before launch."
>> 
>> So perhaps _testing_ the code would have helped.  13 years later you
>> are still arguing that static typing is oh so useful, etc.  Kind of
>> sad.
>
> It helped, the rocket exploded during a "run-time test". (:-)) Did you
> suggest that the control center should have tried to debug the Ariane code
> while it was exploding? (:-)) One test was enough, the next Ariane 5 raised
> without problems.

Ha ha ha. One 'test' that cost a ton of money. Even the strongest of
static typing failed.

> BTW, just to your knowledge, Ariane 5 failed not because of a software
> fault. The software was OK and functioned as intended. 

Proving that static typing tends to make vacuous guarantees, and so is
no replacement for testing. (Meaning that it can't even much *reduce*
the amount of testing you do, because the 'guarantees' it provides
can't be assumed to be meaningful.)

> The minor problem was, that it was the software from Ariane
> 4... (:-)) The management decided to spare...

Blaming management is something engineers do, now? ;)

> Last but not least, you could also look at the software design process they
> used. It included a *huge* amount of tests. When specifications are wrong,
> no test and no check can help.

It obviously didn't include enough tests. And their belief in strong
specifications (part of the static typing philosophy) obviously
didn't save them. The solution is obvious: Dump some of the specs and
increase the time budget for testing.

> Anyway all this is totally irrelevant to the issue of whether software
> developing is engineering.

Ah, but it's very relevant to how useful static typing is.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <unfccglst6m2$.1e93qut076bao$.dlg@40tude.net>
On Tue, 14 Apr 2009 02:25:26 -0600, Chris Barts wrote:

> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> writes:
> 
>> Last but not least, you could also look at the software design process they
>> used. It included a *huge* amount of tests. When specifications are wrong,
>> no test and no check can help.
> 
> It obviously didn't include enough tests. And their belief in strong
> specifications (part of the static typing philosophy) obviously
> didn't save them. The solution is obvious: Dump some of the specs and
> increase the time budget for testing.

That was exactly what managers did, they ignited the rocket!

Maybe they were lispers? (:-))

>> Anyway all this is totally irrelevant to the issue of whether software
>> developing is engineering.
> 
> Ah, but it's very relevant to how useful static typing is.

No, it is irrelevant. When the hardware does not correspond to its
specification, no test *without* the hardware can help, not even a mock test.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090422042751.531@gmail.com>
On 2009-04-12, Dmitry A. Kazakov <·······@dmitry-kazakov.de> wrote:
> On Sun, 12 Apr 2009 18:06:47 +0100, Mark Wooding wrote:
>> I can't speak for Vsevolod, but I draw a very clear distinction between
>> my preferences for a programming language which I am going to use
>> personally, and how I think the design space of programming languages
>> ought to be populated.
>
> In that case you do not consider programming as an engineering activity.

Wooding's above statement admits no such conclusion on any rational basis.

> I do.

You're hardly an engineer. Just the latest element in a sequence of
shit-for-brains bohunks that have been stumbling into the Lisp newsgroup
lately.
From: ··················@gmail.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <5ab80bfb-8312-4d72-90ce-0be7649594be@o11g2000yql.googlegroups.com>
Programming isn't an engineering activity.

Dynamic typing is nice because it lets me test things out as I go.
Static type checking is nice because it gives me some feedback on the
terrible code I write.

Are they somehow mutually exclusive?

Overall I make more mistakes getting my inputs to match my expected
outputs than 'Did I use a string there or an int?', but I make both
types of mistakes! I make all sorts of mistakes all the time! Then I
test or ask the compiler and I go back and fix whatever is wrong!

Miracle of miracles; that doesn't sound anything like a serious
engineering discipline at all.
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49e30756$0$22552$607ed4bc@cv.net>
··················@gmail.com wrote:
> Programming isn't an engineering activity.
> 
> Dynamic typing is nice because it let me test things out as I go.
> Static type checking is nice because it gives me some feedback on the
> terrible code I write.
> 
> Are they somehow mutually exclusive?
> 
> Overall I make more mistakes getting my inputs to match my expected
> outputs than 'Did i use a string there or an int?', but I make both
> types of mistakes! I make all sorts of mistakes all the time! Then I
> test or ask the compiler and I go back and fix whatever is wrong!
> 
> Miracle of miracles; that doesn't sound anything like a serious
> engineering discipline at all.


You are listening at the wrong level. Either that or your definition of 
"fix" needs to be fixed.

hth,kt
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <iv06p1awi5el.1kujtwtawc41m.dlg@40tude.net>
On Mon, 13 Apr 2009 02:37:50 +0000 (UTC), Kaz Kylheku wrote:

> On 2009-04-12, Dmitry A. Kazakov <·······@dmitry-kazakov.de> wrote:
>> On Sun, 12 Apr 2009 18:06:47 +0100, Mark Wooding wrote:
>>> I can't speak for Vsevolod, but I draw a very clear distinction between
>>> my preferences for a programming language which I am going to use
>>> personally, and how I think the design space of programming languages
>>> ought to be populated.
>>
>> In that case you do not consider programming as an engineering activity.
> 
> Wooding's above statement admits no such conclusion on any rational basis.

Huh, in his response he admitted that programming is a craft.

>> I do.
> 
> You're hardly an engineer. Just the latest element in a sequence of
> shit-for-brains bohunks that have been stumbling into the Lisp newsgroup
> lately.

I am not in the Lisp newsgroup, it is cross-posting. Lisp is outside my
professional interests.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090410183337.GP3826@gildor.inglorion.net>
On Fri, Apr 10, 2009 at 10:07:36AM -0700, Pillsy wrote:
> 
> Once I've got something that I think is pretty good, then getting
> earlier checking is a lot more useful, and having a static type-system
> is a lot more appealing, but it's not appealing enough to make dealing
> with that system worth the trouble in the earlier stages of
> development.

Ok, so what you are saying is basically that you accept errors in your 
program during the exploratory stage, but it would be nice to have all 
the errors found and fixed in the final program. I think this is 
something we can all agree on.

It also makes a good case for having optional static checking. Disable 
the checks while exploring, then enable the checks when you are working 
to transform the result of your exploration into a final program. Best 
of both worlds.
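(This optional-checking workflow can be sketched concretely. The sketch below uses Python's optional type annotations purely as an illustration, since nothing in the thread is Python: the runtime ignores the annotations while you explore, and a separate static checker such as mypy can be run over the same code later.)

```python
# Sketch of optional static checking, using Python's type annotations
# (illustrative only -- the thread is about Lisp and Ada). The runtime
# ignores annotations entirely, so exploratory code runs unchecked;
# running `mypy` over the same file later turns the annotations into
# static checks without changing the program.

def total(prices: list[float], tax_rate: float) -> float:
    return sum(prices) * (1.0 + tax_rate)

# During exploration this runs fine even though nothing has verified
# the annotations -- they only matter once the checker is enabled:
print(total([10.0, 20.0], 0.0))   # 30.0
```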

However, why would you want to allow errors in your exploratory phase? 
There seems to be an assumption that this makes you more productive, but 
is that really the case?

I can see why not having to go and fix all the breakage immediately 
saves you time in going from one explorative step to the next, but, 
eventually, that breakage does have to be fixed. And static checking is 
a great asset here, because it can verify that you have, indeed, fixed 
all the breakage, or, if you haven't, point you to exactly those places 
you still need to fix.

Until we have an answer to the question of whether static checking actually 
impairs productivity, I fear static vs. dynamic typing is a discussion 
that can go on forever, without making any real progress. The bad news 
is that I don't have the answer. The good news is that there are plenty 
of programming languages to choose from, and if none of them are good 
enough for you, you can always write your own. :-)

Regards,

Bob

-- 
God is dead. - Nietzsche
Nietzsche is dead. - God

From: Pillsy
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <e584e2a6-ee9b-49ac-ad4b-981e351a8d45@k41g2000yqh.googlegroups.com>
On Apr 10, 2:33 pm, Robbert Haarman <··············@inglorion.net>
wrote:

> On Fri, Apr 10, 2009 at 10:07:36AM -0700, Pillsy wrote:

> > Once I've got something that I think is pretty good, then getting
> > earlier checking is a lot more useful, and having a static type-system
> > is a lot more appealing, but it's not appealing enough to make dealing
> > with that system worth the trouble in the earlier stages of
> > development.

> Ok, so what you are saying is basically that you accept errors in your
> program during the exploratory stage, but it would be nice to have all
> the errors found and fixed in the final program. I think this is
> something we can all agree on.

That is, indeed, what I am saying.

> It also makes a good case for having optional static checking. Disable
> the checks while exploring, then enable the checks when you are working
> to transform the result of your exploration into a final program. Best
> of both worlds.

Sure. My favorite Lisp implementation will do static type checks with
the proper compiler settings.

> However, why would you want to allow errors in your exploratory phase?

Because the time I spend dealing with the occasional type error during
the exploratory phase is less than the time I would spend getting all
the declarations right ahead of time. If I figure I'm going to throw
away or fix 9 versions of a function before I get one that does what I
want it to do, and will spend time tracking down various bugs in the
early versions, it's not necessarily a huge sacrifice to have to throw
away or fix a tenth version because I made a type error a static
checker would have caught but that ended up dropping me into the
debugger instead.

> There seems to be an assumption that this makes you more productive, but
> is that really the case?

It aligns well with my experience of what makes me more productive.
Other people have different styles of development, work in different
domains, and have different approaches to problem-solving, and may well
have had completely different experiences.
[...]
> Until we have an answer to the question if static checking does actually
> impair productivity, I fear static vs. dynamic typing is a discussion
> that can go on forever, without making any real progress.

This is quite possible. My belief is that different people will
have different answers to the question, and that those answers will
probably be right for them. I think discussions that occur on the
level of, "This works for me, and here's why it works for me...." are
often worth having, even though by their very nature there may not be
a definitive answer.

Cheers,
Pillsy
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090410192444.GT3826@gildor.inglorion.net>
On Fri, Apr 10, 2009 at 11:50:24AM -0700, Pillsy wrote:
> On Apr 10, 2:33 pm, Robbert Haarman <··············@inglorion.net>
> wrote:
> 
> > There seems to be an assumption that this makes you more productive, but
> > is that really the case?
> 
> It aligns well with my experience of what makes me more productive.
> Other people have different styles of development, work in different
> domains, and have different approaches to problem-solving, and may well
> have had completely different experiences.
> [...]

Yes. It is very difficult to answer the question, because there are so 
many variables. For example, programming languages don't usually differ 
from one another only in that one is statically typed and the other is 
dynamically typed.

> > Until we have an answer to the question if static checking does actually
> > impair productivity, I fear static vs. dynamic typing is a discussion
> > that can go on forever, without making any real progress.
> 
> This is quite possible. My belief is that the different people will
> have different answers to the question, and that those answers will
> probably be right for them. I think discussions that occur on the
> level of, "This works for me, and here's why it works for me...." are
> often worth having, even though by their very nature there may not be
> a definitive answer.

Indeed. In the end, the best choice may well depend on the 
circumstances. In that case, knowing which choice is best in the given 
circumstance, or even knowing that a choice is "pretty good" in the 
given circumstances, is much more interesting than which choice, if any, 
is best in general.

Regards,

Bob

-- 
If source code is outlawed, only outlaws will have source code.


From: William James
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <grth5602pio@enews4.newsguy.com>
Robbert Haarman wrote:

> 
> -- 
> God is dead. - Nietzsche
> Nietzsche is dead. - God

-- 
When will you both admit that you're dead? --- Bill
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49e27d8c$0$5896$607ed4bc@cv.net>
William James wrote:
> Robbert Haarman wrote:
> 
>> -- 
>> God is dead. - Nietzsche
>> Nietzsche is dead. - God
> 

Stop it. You are going to scare Costanza.

hth, kenny
From: ········@gmail.com
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <a860b306-fa36-427e-bed1-17a39ff97a19@g19g2000yql.googlegroups.com>
On 12 Apr, 21:58, "William James" <·········@yahoo.com> wrote:
> > God is dead. - Nietzsche
> > Nietzsche is dead. - God
>
> --
> When will you both admit that you're dead? --- Bill

"are"?

But Christ resurrected, do you remember?
And we even have a 3D photograph (kind of) of that fact: you can find
it in Turin, Italy (the latest scientific research confirms its
authenticity).

http://www.sindone.it/

:)
From: Tamas K Papp
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <749d6sF12l51mU1@mid.individual.net>
On Fri, 10 Apr 2009 18:42:26 +0200, Dmitry A. Kazakov wrote:

> However that does not wonder me. Dynamic typing consequently leads to no
> typing. No need to be ashamed of guys, just speak your mind. How are

Nope, dynamic typing leads to dynamic typing.  If you don't understand
what dynamic typing is, that is fine, just don't engage in discussions
that involve the concept because doing so exposes your ignorance.

Cheers,

Tamas
From: Pascal Costanza
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <74bbt1F12mopiU2@mid.individual.net>
Dmitry A. Kazakov wrote:
> On Fri, 10 Apr 2009 09:13:46 -0700 (PDT), Pillsy wrote:
> 
>> On Apr 10, 11:44 am, "Dmitry A. Kazakov" <·······@dmitry-kazakov.de>
>> wrote:
>> [...]
>>> Further I would also like to an explanation how later or less checks could
>>> improve this rate and thus productivity.
>> How could such an obvious point need explanation? Eliminating
>> *irrelevant* checks will clearly increase productivity, because
>> irrelevant checks are, by definition, a waste of time.
>>
>>> Especially the issue how program correctness can be defined without
>>> checks, which, according to the point need to be reduced in order
>>> to improve "productivity."
>> This is a really pitiful strawman. Just because you can't define
>> program correctness without the idea of conforming to *some* set of
>> checks hardly means that you can't define program correctness without
>> conforming to *every possible* set of checks.
> 
> Wow, now it becomes interesting. So type checks are irrelevant. That's
> honest. At least!
> 
> But that was not the original point. It was, that type checks are great to
> perform later. You should have argued for untyped languages.
> 
> However that does not wonder me. Dynamic typing consequently leads to no
> typing. 

That's incorrect.

This is SBCL 1.0.27, an implementation of ANSI Common Lisp.
More information about SBCL is available at <http://www.sbcl.org/>.

SBCL is free software, provided as is, with absolutely no warranty.
It is mostly in the public domain; some portions are provided under
BSD-style licenses.  See the CREDITS and COPYING files in the
distribution for more information.
* (type-of 42)

(INTEGER 0 536870911)
* (typep 42 'number)

T
* (defun foo ()
     (flet ((bar (x) (+ x 42)))
       (bar "string")))
; in: LAMBDA NIL
;     (+ X 42)
;
; caught WARNING:
;   Asserted type NUMBER conflicts with derived type
;   (VALUES (SIMPLE-ARRAY CHARACTER (6)) &OPTIONAL).
;   See also:
;     The SBCL Manual, Node "Handling of Types"
;
; compilation unit finished
;   caught 1 WARNING condition

FOO


> No need to be ashamed of guys, just speak your mind. How are going
> to define correctness outside types (sets of values and operations on
> them)? I am curious.

There is no such thing as program correctness. See 
http://doi.acm.org/10.1145/379486.379512


Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: eric-and-jane-smith
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <J1QDl.10703$g%5.5138@newsfe23.iad>
"Dmitry A. Kazakov" <·······@dmitry-kazakov.de> wrote in
····································@40tude.net: 

> I would like to see them. Precisely the number of man-hours required
> to achieve the rate software failure at the given severity level per
> source code line per one second of execution.

Even if someone published such statistics, the information would not be 
reliable.  There are too many ways for errors and misunderstandings to 
creep in and render it useless.  The best we can probably do is tally the 
opinions of 
programmers with enough experience in different paradigms for their 
opinions to be meaningful.

I've been programming for decades, from assembler language through dozens 
of programming languages to Common Lisp.  For the past 7 to 8 years I 
have been using Common Lisp and have been of the opinion that it is by 
far the best programming language of all the ones I have used.  Not 
necessarily the best for everyone, but the best for me.

I see a lot of advantages and disadvantages in both static and dynamic 
typing vs each other.  I'm not convinced that either one of them is a 
clear winner.

From my point of view, the best reason to use dynamic typing is that Lisp 
uses it.  If anyone could ever convince me they invented a better 
programming language, and it used static typing, I would be glad to use a 
good implementation of it.  But all the languages I have looked at had 
too many deficiencies, and would have impaired my productivity.  That 
includes Haskell and OCaml, both of which are outstandingly good 
programming languages, but would nevertheless impair my programming 
productivity relative to Common Lisp.

I don't even consider error checking to be the biggest advantage of 
static typing.  One other advantage is being able to overload function 
names, which can increase productivity, because it fits in better with 
the way we think and the way our natural languages work.  Another is that 
the information the compiler gets for optimization does not require as 
much compiler intelligence to use well, so the same amount of compiler 
sophistication can give more optimization.
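(To make the overloading point concrete: in a statically typed language the compiler picks among same-named functions from the declared argument types at compile time. A dynamic language has to recover the same single-name convenience with runtime dispatch instead, as in this Python sketch; `area` and the shape representations are invented for the example.)

```python
from functools import singledispatch

# Runtime dispatch on argument type -- the dynamic-language counterpart
# of compile-time overload resolution. One name, several bodies, but
# the choice is made at call time from the actual argument's type.

@singledispatch
def area(shape):
    raise TypeError(f"no area rule for {type(shape).__name__}")

@area.register
def _(radius: float) -> float:        # circle
    return 3.141592653589793 * radius * radius

@area.register
def _(sides: tuple) -> float:         # rectangle given as (width, height)
    width, height = sides
    return float(width * height)

print(area((3, 4)))    # 12.0
```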

But the advantages of dynamic typing seem just as good overall to me.  
E.g. that the programmer doesn't have to think of variables as having 
types, or that the effective type of a variable is "whatever fits the 
situation" with no need to expend mental effort on that, which would 
distract important mental attention away from other, more important 
considerations.

When you think of a variable as having a particular type, you often miss 
opportunities to create more generic code, which could come in handy in 
unexpected ways.  You often duplicate a lot of effort before you finally 
see the common factors.  And that duplicated effort often turns out to be 
orders of magnitude more costly than you would expect.  Such as when you 
fix a subtle bug in one set of code, but the same bug remains in another, 
with much rarer symptoms, much harder to diagnose, till a number of years 
later it leads to huge amounts of wasted effort, when it finally becomes 
way too important, but way too obscure.
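(A small sketch of that generic-code point, in Python purely as a neutral illustration; the function and data are invented for the example. Written with lists of numbers in mind, it works unchanged on other sequences because it never commits to an element type.)

```python
# Sketch: code that never commits to a concrete element type is generic
# "for free". Written with lists of numbers in mind, but nothing here
# cares what the elements are, so strings work unchanged too.

def interleave(a, b):
    """Alternate the elements of two equal-length sequences."""
    return [x for pair in zip(a, b) for x in pair]

print(interleave([1, 2], [3, 4]))   # [1, 3, 2, 4]
print(interleave("ab", "cd"))       # ['a', 'c', 'b', 'd']
```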

In any case, I can only judge a programming language as a whole.  Static 
vs dynamic typing would never be a major consideration to me, because I 
can see the advantages and disadvantages of both, and could never be 
completely satisfied with either.  For the time being, Common Lisp is the 
big winner of my small increment of mindshare, and will retain it 
indefinitely, until someone finally comes up with something that seems 
really better to me.
From: Dmitry A. Kazakov
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <vmyyfd2v2jn5.65szqcsge7jr.dlg@40tude.net>
On Fri, 10 Apr 2009 22:49:13 GMT, eric-and-jane-smith wrote:

> But the advantages of dynamic typing seem just as good overall to me.  
> E.g. that the programmer doesn't have to think of variables as having 
> types, or that the effective type of a variable is "whatever fits the 
> situation" with no need to expend mental effort on that, which would 
> distract important mental attention away from other, more important 
> considerations.

How is the reader of the program supposed to judge the "situation" at hand?
Typing is a way to convey what it is all about. The advantages of static
typing you mentioned are merely consequences of this: the compiler and the
programmer are on the same page.

> When you think of a variable as having a particular type, you often miss 
> opportunities to create more generic code, which could come in handy in 
> unexpected ways.

Exactly the opposite. When I annotate a variable as constrained to some
type or subtype, I bring the constraint into the design. It is an explicit
and visible design decision, a subject of consideration and justification.
When you impose your constraints implicitly, nobody, including you, is
aware of them. So you cannot reason about the merits of these constraints
and remove them in order to generalize or refactor your code.

> You often duplicate a lot of efforts before you finally 
> see the common factors.  And that duplicated effort often turns out to be 
> orders of magnitude more costly than you would expect.  Such as when you 
> fix a subtle bug in one set of code, but the same bug remains in another, 
> with much rarer symptoms that are much harder to diagnose, until years 
> later it leads to huge amounts of wasted effort, having finally become 
> far too important while remaining far too obscure.

To clarify things: what you are talking about is substitutability.
Substitutability is undecidable, and you cannot claim that fixing a bug in
context A would not introduce another bug when the code is used in
context B. As a matter of fact, typing is hugely helpful in dealing
with substitutability.

> In any case, I can only judge a programming language as a whole.  Static 
> vs dynamic typing would never be a major consideration to me, because I 
> can see the advantages and disadvantages of both, and could never be 
> completely satisfied with either.  For the time being, Common Lisp is the 
> big winner of my small increment of mindshare, and will retain it 
> indefinitely, until someone finally comes up with something that seems 
> really better to me.

That is fair enough. You like it because you like Lisp. I dislike it
because I dislike Lisp. In this case this explanation works... (:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090411140431.GY3826@gildor.inglorion.net>
On Fri, Apr 10, 2009 at 10:49:13PM +0000, eric-and-jane-smith wrote:
> "Dmitry A. Kazakov" <·······@dmitry-kazakov.de> wrote in
> ····································@40tude.net: 
> 
> > I would like to see them. Precisely the number of man-hours required
> > to achieve the rate of software failure at the given severity level per
> > source code line per one second of execution.
> 
> Even if someone published such statistics, the information would not be 
> reliable.  There are too many ways for errors and misunderstandings to 
> make it useless.

I think this is very likely to be true. This is why I object to 
statements like "X is dynamically typed, leading to greater programmer 
productivity". I am simply not convinced this is the case. In the 
absence of any advantages that come from dynamic typing, I would rather 
have static typing's assurance that the program is free of type errors.

> The best we can probably do is tally the opinions of 
> programmers with enough experience in different paradigms for their 
> opinions to be meaningful.

This, of course, does not establish that there is an actual difference 
in productivity, let alone that such a difference is due to dynamic 
typing. Which is not saying that listening to experienced programmers 
sharing their accumulated wisdom isn't useful!

> From my point of view, the best reason to use dynamic typing is that Lisp 
> uses it.

I think that is the right way to see it. Lisp is a great language, so 
you use it. Lisp also happens to have dynamic typing, so you get that 
with the bargain.

> I don't even consider error checking to be the biggest advantage of 
> static typing.  One other advantage is being able to overload function 
> names, which can increase productivity, because it fits in better with 
> the way we think and the way our natural languages work.

I agree that this is a great advantage, but you don't need static typing 
for this to work. Common Lisp supports it, for example, while still 
being dynamically typed.
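
A minimal sketch of what that looks like (function name invented for the
example): a CLOS generic function dispatches on the runtime class of its
arguments, so you get overloaded names with no static types in sight.

```lisp
;; One generic function, several methods.  Dispatch happens on the
;; runtime class of the argument, not on a statically declared type.
(defgeneric brief-description (x))

(defmethod brief-description ((x integer))
  (format nil "the integer ~D" x))

(defmethod brief-description ((x string))
  (format nil "the string ~S" x))

(defmethod brief-description ((x list))
  (format nil "a list of ~D elements" (length x)))
```

Calling (brief-description 42) picks the integer method at run time, and
adding a new overload is just another defmethod.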

> Another is that the information the compiler gets for optimization 
> does not require as much compiler intelligence to use well, so the 
> same amount of compiler sophistication can give more optimization.

This is certainly a point to consider.

> But the advantages of dynamic typing seem just as good overall to me.  
> E.g. that the programmer doesn't have to think of variables as having 
> types, or that the effective type of a variable is "whatever fits the 
> situation" with no need to expend mental effort on that, which would 
> distract important mental attention away from other, more important 
> considerations.

I think the important consideration is "can this operation be applied to 
that value?" When you implement a function, you make certain assumptions 
about the values that are passed in. Static typing will check some of 
these assumptions, and stop you from running your program if the 
assumptions are not correct.

I don't see how you need to do less thinking about the types of your 
values when writing

(defun my-reverse (xs)
  (labels ((aux (xs ys)
             (if (null xs) ys
               (aux (cdr xs) (cons (car xs) ys)))))
    (aux xs '())))

than when writing

let my_reverse xs =
  let rec aux xs ys = match xs with
    | [] -> ys
    | (x::others) -> aux others (x::ys)
  in aux xs []

The difference comes when you write

(defun wrong () (my-reverse 42))

or

let wrong _ = my_reverse 42

In Lisp, you have now defined a function that will fail when called. In 
OCaml, you have now failed to define a function.

Regards,

Bob

-- 
A journey of a thousand miles starts under one's feet.
	-- Lao Tze


From: Rob Warnock
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <0PadnYmgaMjFdkLUnZ2dnUVZ_t2dnZ2d@speakeasy.net>
Dmitry A. Kazakov <·······@dmitry-kazakov.de> wrote:
+---------------
| Secondly, as an alternative to Lisp I propose a random generator of
| hexadecimal machine code. Any sequence of machine codes is a correct
| program.
+---------------

Not true! In most ISPs there are sequences which will cause
"Illegal Instruction" traps and/or machine checks. Such programs
are, by definition, incorrect.

+---------------
| It would not smoke the CPU, you know.
+---------------

That's also not true on certain CPUs. [E.g., on most of the MIPS ISPs,
which have software-maintained fully-associative TLBs, putting multiple
PTEs into the TLB which overlap address ranges *can* literally "smoke"
(or at least overheat to the point of damage) the CPU!]

+---------------
| > I don't see how having the programs I'd like to write be rejected
| > is a productivity win.
| 
| Random generator is greatly more productive.
+---------------

Hmmm... Random code ==> Causes machine check which cannot be cleared
without power-cycling the machine. Strange definition of "productive"...


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Tamas K Papp
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <748tucF12avqpU1@mid.individual.net>
On Fri, 10 Apr 2009 08:39:13 +0200, Robbert Haarman wrote:

> I disagree. You are implying that dynamic typing leads to greater
> productivity than static typing. I don't think this is the case.
> 
> Taking "static typing" to mean that programs that cannot be typed
> correctly at compile time are rejected at compile time, whereas "dynamic typing"
> means type errors lead to rejection at run-time, static typing means, by
> definition, rejecting bad programs early. It seems to me this would be a
> productivity gain.

Your classification is flawed: CL is certainly not statically typed,
but my compiler (SBCL) does analyze the functions I compile and warns
me about a lot of things.

Also, "static typing means, by definition, rejecting bad programs
early" is sheer idiocy - a lot of "bad" programs are not caught by
type checking.  Most of the "badness" in my programs arises from
conceptual mistakes or inappropriate algorithms, no compiler would be
able to catch those.
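
For example (a throwaway snippet; this is how SBCL behaves for me):
compile an obviously broken call and the compiler complains at compile
time, no static type system required.

```lisp
;; SBCL's type inference derives that "oops" is not a NUMBER and
;; signals a warning when this lambda is compiled -- at compile time,
;; even though the language is dynamically typed.  COMPILE's second
;; return value (warnings-p) reports that a warning was signaled.
(compile nil '(lambda () (1+ "oops")))
```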

> Now, in theory, you could perform all the same type inference and type
> checking on a dynamically typed language that you could perform on a
> statically typed language, as long as your program is written in a style
> that we know how to do type inference for. In practice, this is often
> not the case. The result is that programs written in dynamically typed

You should update your knowledge of modern compilers.  In practice,
modern CL compilers are able to perform a lot of optimizations, even
when they are unaided by declarations.  With appropriate declarations,
CL can generate very fast code.

I never aim to write my programs "in a style that we know how to do
type inference for", I try to write them in a style that is clear,
concise and comfortable to me.  I leave optimization to my compiler,
this approach works very well for me.

Tamas
From: namekuseijin
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ac0237c5-3ec7-4bfb-8de9-d8152d2970db@a7g2000yqk.googlegroups.com>
On Apr 10, 9:55 am, Tamas K Papp <······@gmail.com> wrote:
> You should update your knowledge of modern compilers.  In practice,
> modern CL compilers are able to perform a lot of optimizations, even
> when they are unaided by declarations.  With appropriate declarations,
> CL can generate very fast code.
>
> I never aim to write my programs "in a style that we know how to do
> type inference for", I try to write them in a style that is clear,
> concise and comfortable to me.  I leave optimization to my compiler,
> this approach works very well for me.

Indeed.  The Stalin Scheme compiler is like that, performing type
inference and several optimizations on a normal Scheme program and
producing very fast binaries that match, and at times surpass,
hand-coded C:

http://justindomke.wordpress.com/2009/02/23/the-stalin-compiler/

It's very, very slow, though: it's a whole-program compiler.  The idea
is to take a normal Scheme program that was developed, debugged and
tested in a Scheme interpreter and, at the end of the development cycle,
send it to Stalin to produce a final fast version.
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090410180859.GO3826@gildor.inglorion.net>
On Fri, Apr 10, 2009 at 12:55:41PM +0000, Tamas K Papp wrote:
> On Fri, 10 Apr 2009 08:39:13 +0200, Robbert Haarman wrote:
> 
> > Taking "static typing" to mean that programs that cannot be typed
> > correctly at compile time are rejected at compile time, whereas "dynamic typing"
> > means type errors lead to rejection at run-time, static typing means, by
> > definition, rejecting bad programs early. It seems to me this would be a
> > productivity gain.
> 
> Your classification is flawed: CL is certainly not statically typed,
> but my compiler (SBCL) does analyze the functions I compile and warns
> me about a lot of things.

For the record, I am aware of that (I use SBCL myself).

> Also, "static typing means, by definition, rejecting bad programs
> early" is sheer idiocy - a lot of "bad" programs are not caught by
> type checking.  Most of the "badness" in my programs arise from
> conceptual mistakes or inappropriate algorithms, no compiler would be
> able to catch those.

My bad. I should have clarified that by "bad" I meant "invalid, 
according to the type system". I thought that would have been clear from 
the context, but, clearly, this wasn't the case.

> > Now, in theory, you could perform all the same type inference and type
> > checking on a dynamically typed language that you could perform on a
> > statically typed language, as long as your program is written in a style
> > that we know how to do type inference for. In practice, this is often
> > not the case. The result is that programs written in dynamically typed
> 
> You should update your knowledge of modern compilers. In practice, 
> modern CL compilers are able to perform a lot of optimizations, even 
> when they are unaided by declarations.  With appropriate declarations, 
> CL can generate very fast code.

Again, I am aware of this.

> I never aim to write my programs "in a style that we know how to do
> type inference for", I try to write them in a style that is clear,
> concise and comfortable to me.  I leave optimization to my compiler,
> this approach works very well for me.

Of course. I am not suggesting there is anything wrong with this 
approach.

What I am saying is that, all things considered, it is nice to have 
guarantees. Static typing provides one such guarantee: that there are no 
type errors in the program. (Of course, unless the type system is broken 
- which the type systems of many languages are.)

Current research on type systems focuses on expressing ever more 
properties of the code in the type system. Combined with static typing, 
this allows the compiler to check more and more properties of the code, 
and reject a program if these properties are not as they should be.

Another poster in this discussion commented that it takes so much time 
to get a program through the type checker. As a counterpoint to that, I 
would like to repeat what I have heard from many Haskell programmers:

  It takes me a long time before I get my programs to pass the type 
  checker, but after that, they work flawlessly.

This, I think, shows the power of static checking: instead of allowing 
an incorrect program to run (with all the consequences of doing so), it 
attempts to catch erroneous programs and prevent them from 
ever running. It's another way of letting the compiler work for you.

Regards,

Bob

-- 
An eye for an eye makes the whole world blind.
	-- Gandhi


From: Larry Coleman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <d780074f-1dd3-423b-9585-9215d7987d3f@f19g2000yqh.googlegroups.com>
On Apr 10, 2:08 pm, Robbert Haarman <··············@inglorion.net>
wrote:

> Another poster in this discussion commented that it takes so much time
> to get a program through the type checker. As a counter point to that, I
> would like to repeat what I have heard from many Haskell programmers:
>
>   It takes me a long time before I get my programs to pass the type
>   checker, but after that, they work flawlessly.
>
> This, I think, shows the power of static checking: instead of allowing
> an incorrect program to run (with all the consequences of doing so), it
> attempts to catch erroneous programs and prevent them from
> ever running. It's another way of letting the compiler work for you.
>

I've done some Haskell programming, and what you hear from them is
correct, but I don't think it's only because of static typing. I think
it's mostly because of the lack of side effects, and the pure
functional style necessary as a result. When writing a program
involves mostly arranging function calls, most missteps will cause
type errors. Languages that allow side effects and imperative control
structures also allow more opportunities for missteps that don't cause
type errors. For example, I've also used Ocaml and F#, and fought with
the type checker only to find that the imperative parts of my program
were still bug-ridden even after I was done fighting.

As a side note, Dr. Harrop is conspicuous by his absence in this
thread. He seems to be busy on clf trying to convince everyone there
that Haskell is too slow.

Larry
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090410185834.GR3826@gildor.inglorion.net>
On Fri, Apr 10, 2009 at 11:38:53AM -0700, Larry Coleman wrote:
> 
> I've done some Haskell programming, and what you hear from them is
> correct, but I don't think it's only because of static typing. I think
> it's mostly because of the lack of side effects, and the pure
> functional style necessary as a result. When writing a program
> involves mostly arranging function calls, most missteps will cause
> type errors. Languages that allow side effects and imperative control
> structures also allow more opportunities for missteps that don't cause
> type errors. For example, I've also used Ocaml and F#, and fought with
> the type checker only to find that the imperative parts of my program
> were still bug-ridden even after I was done fighting.

Good points. Thanks for sharing.

> As a side note, Dr. Harrop is conspicuous by his absence in this
> thread. He seems to be busy on clf trying to convince everyone there
> that Haskell is too slow.

I haven't been seeing him on c.l.misc lately, either. I assumed he had 
found better uses for his time than having the same discussions over and 
over again.

Regards,

Bob

-- 
Wise men talk because they have something to say; fools, because they
have to say something.

	-- Plato

From: Tamas K Papp
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <749ieoF1272deU1@mid.individual.net>
On Fri, 10 Apr 2009 20:08:59 +0200, Robbert Haarman wrote:

> What I am saying is that, all things considered, it is nice to have
> guarantees. Static typing provides one such guarantee: that there are no
> type errors in the program. (Of course, unless the type system is broken
> - which the type systems of many languages are.)

When we are considering all things, we are doing a cost/benefit
analysis.  For me the costs of a static type system outweigh the
benefits, but since these things are subjective, this may be different
for you.

> Current research on type systems focuses on expressing ever more
> properties of the code in the type system. Combined with static typing,
> this allows the compiler to check more and more properties of the code,
> and reject a program if these properties are not as they should be.

I have seen some of that research and was not too impressed.  I think
that increasing costs/diminishing returns kick in very quickly when
you try to "check" everything with static typing.  For example, I have
seen Haskell code that checks conformability of matrices statically.
I use a lot of matrix computations in my programs, but I don't really
see the value of this: whenever I make a mistake, I just pop into the
debugger, find the offending piece of code, correct it and recompile,
and I am done.  Using static typing to check for this would transform
a minor, occasional inconvenience into a constant pain in the ass; not
a trade-off I would prefer.
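
Concretely, the dynamic alternative I have in mind is nothing fancier
than this (a toy matrix multiply; names invented for the example):

```lisp
;; Conformability checked at run time.  A mismatch drops me into the
;; debugger pointing straight at the offending call.
(defun mat* (a b)
  (let ((n (array-dimension a 0))
        (m (array-dimension a 1))
        (p (array-dimension b 1)))
    (assert (= m (array-dimension b 0)) (a b)
            "~Dx~D and ~Dx~D matrices do not conform"
            n m (array-dimension b 0) p)
    (let ((c (make-array (list n p) :initial-element 0)))
      (dotimes (i n c)
        (dotimes (j p)
          (dotimes (k m)
            (incf (aref c i j)
                  (* (aref a i k) (aref b k j)))))))))
```

A non-conforming pair signals an error at the call site, which is the
minor, occasional inconvenience I mean.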

> Another poster in this discussion commented that it takes so much time
> to get a program through the type checker. As a counterpoint to that, I
> would like to repeat what I have heard from many Haskell programmers:
> 
>   It takes me a long time before I get my programs to pass the type
>   checker, but after that, they work flawlessly.

I am sorry to say this, but he/she was clearly bullshitting.  Static
typing does not guarantee that your programs work flawlessly.  I
thought this was obvious, but apparently not.

> This, I think, shows the power of static checking: instead of allowing
> an incorrect program to run (with all the consequences of doing so), it
> attempts to catch erroneous programs and prevent them from ever
> running. It's another way of letting the compiler work for you.

I frequently write programs which are incorrect and would not get past
a compiler, then run them and discover that I should rewrite them
completely, not because of errors that type checking would have
caught, but because of other design issues.  In this case, the
compiler would prevent me from doing my work.

Reconstructing from their arguments, proponents of static typing have
the following scenario in mind: you design your whole program, then
you sit down and type it in, occasionally making some typos, which the
compiler catches for you and everyone is happy.  Real life is rarely
ever like this.

Tamas
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090410190903.GS3826@gildor.inglorion.net>
On Fri, Apr 10, 2009 at 06:45:45PM +0000, Tamas K Papp wrote:
> On Fri, 10 Apr 2009 20:08:59 +0200, Robbert Haarman wrote:
> 
> > What I am saying is that, all things considered, it is nice to have
> > guarantees. Static typing provides one such guarantee: that there are no
> > type errors in the program. (Of course, unless the type system is broken
> > - which the type systems of many languages are.)
> 
> When we are considering all things, we are doing a cost/benefit
> analysis.  For me the costs of a static type system outweigh the
> benefits, but since these things are subjective, this may be different
> for you.

You are correct, it is a cost-benefit analysis. This is also why various 
people have pointed out that they prefer dynamic typing in general, but 
see the value of static typing for some types of program (e.g. in 
safety-critical applications).

I have to wonder, however, what the cost of static typing really is. 
Granted, it depends on what you check and how you check it. The cost can 
be enormous. But does it have to be?

> > Another poster in this discussion commented that it takes so much time
> > to get a program through the type checker. As a counterpoint to that, I
> > would like to repeat what I have heard from many Haskell programmers:
> > 
> >   It takes me a long time before I get my programs to pass the type
> >   checker, but after that, they work flawlessly.
> 
> I am sorry to say this, but he/she was clearly bullshitting.  Static
> typing does not guarantee that your programs work flawlessly.  I
> thought this was obvious, but apparently not.

No, you are right. Unless you somehow manage to express "flawlessness" as 
something that can be statically checked, the fact that the type checker 
finds no errors certainly doesn't mean there aren't any. However, that 
is not what is being claimed, either. Rather, the claim is that, for the 
programs these programmers wrote in Haskell, no bugs were found that 
weren't found by the type checker. That's certainly possible.

Regards,

Bob

-- 
This sentence no verb


From: Chris Barts
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <878wm7k93v.fsf@chbarts.motzarella.org>
Robbert Haarman <··············@inglorion.net> writes:

> What I am saying is that, all things considered, it is nice to have 
> guarantees. Static typing provides one such guarantee: that there are no 
> type errors in the program. (Of course, unless the type system is broken 
> - which the type systems of many languages are.)

That is a vacuous guarantee in Java, for example, as it is in most
languages that took their type systems from Algol: The idea that an
integer, a rational, and a complex number are not universally
interchangeable in most circumstances is a flaw. At best, restricting
the domain of a function to some subset of the numeric tower is a
performance hack and should only be done late in development if it
proves advantageous.

In a larger sense, type checking is no replacement for testing. In
fact, the errors you can catch via a type system are a small
subset of the errors you can reasonably catch during
testing. For example, consider a function of two numbers that requires
the numbers be relatively prime. No type system can guarantee that
because it isn't determinable during compile time. Only testing can
catch an error of that sort. Type checking only obviates the tests
that careful coding is likely to obviate as well.
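
If you want that property enforced at all, it has to be a runtime check
plus a test; a sketch (function invented for the example):

```lisp
;; A property no practical type system can enforce at compile time:
;; the two arguments must be relatively prime.  Check it at run time.
(defun coprime-sum (a b)
  (assert (= 1 (gcd a b)) (a b)
          "~D and ~D must be relatively prime" a b)
  (+ a b))
```

(coprime-sum 4 9) returns 13; (coprime-sum 4 6) signals an error, and
only a test run will surface that.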

So you have to test anyway. What, then, does always-on static typing
get me? We can go back and forth over this (and we are, and we're
likely to continue) but my answer is 'not enough'.

> Current research on type systems focuses on expressing ever more 
> properties of the code in the type system. Combined with static typing, 
> this allows the compiler to check more and more properties of the code, 
> and reject a program if these properties are not as they should be.

This reminds me of AI research, except AI research abandoned
type-system-like inference systems decades ago in favor of fuzzier
pattern recognition of the type Google uses.

> Another poster in this discussion commented that it takes so much time 
> to get a program through the type checker. As a counterpoint to that, I 
> would like to repeat what I have heard from many Haskell programmers:
>
>   It takes me a long time before I get my programs to pass the type 
>   checker, but after that, they work flawlessly.

I like Haskell when I'm not beating my head against its type
checker. I will admit that it can catch some non-obvious errors when
I'm writing code that works with complex data structures, but in
general I'm just fighting it to accept code I know is reasonable.

> This, I think, shows the power of static checking: instead of allowing 
> an incorrect program to run (with all the consequences of doing so),

What consequences? Are you expecting static typing to allow you to
deploy untested code in the real world? If you don't have a good
testing regime in place, no type system can save you.
From: William James
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <grtg8l02p3j@enews4.newsguy.com>
Marco Antoniotti wrote:

> On Apr 10, 3:55 pm, Tamas K Papp <······@gmail.com> wrote:
> > On Fri, 10 Apr 2009 08:39:13 +0200, Robbert Haarman wrote:
> > > I disagree. You are implying that dynamic typing leads to greater
> > > productivity than static typing. I don't think this is the case.
> > 
> > > Taking "static typing" to mean that programs that cannot be typed
> > > correctly at compile time are rejected at compile time, whereas
> > > "dynamic typing" means type errors lead to rejection at run-time,
> > > static typing means, by definition, rejecting bad programs early.
> > > It seems to me this would be a productivity gain.
> > 
> > Your classification is flawed: CL is certainly not statically typed,
> > but my compiler (SBCL) does analyze the functions I compile and
> > warns me about a lot of things.
> > 
> > Also, "static typing means, by definition, rejecting bad programs
> > early" is sheer idiocy - a lot of "bad" programs are not caught by
> > type checking.  Most of the "badness" in my programs arises from
> > conceptual mistakes or inappropriate algorithms, no compiler would
> > be able to catch those.
> 
> Yeah.  What was that?
> 
> 
>         Objective Caml version 3.11.0
> 
> # let rec fatt n =
>   match n with
>       0 -> 1
>     | n -> n * fatt (n - 1)
> ;;
>         val fatt : int -> int = <fun>
> # fatt 13;;
>   - : int = -215430144
> #


# let rec factorial = function
      13 -> failwith "foo"
    |  0 -> 1
    |  n -> n * factorial (n - 1)
  ;;
val factorial : int -> int = <fun>
# factorial 5;;
- : int = 120
# factorial 33;;
Exception: Failure "foo".



For bignums:


# #load "nums.cma" ;;
# open Num ;;
# let rec factorial = function
      Int 0  ->  Int 1
    |     n  ->  mult_num n (factorial (pred_num n));;
val factorial : Num.num -> Num.num = <fun>
# string_of_num (factorial (Int 44)) ;;
- : string = "2658271574788448768043625811014615890319638528000000000"
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090419222630.55@gmail.com>
On 2009-04-10, Robbert Haarman <··············@inglorion.net> wrote:
> I disagree. You are implying that dynamic typing leads to greater 
> productivity than static typing. I don't think this is the case.
>
> Taking "static typing" to mean that programs that cannot be typed 
> correctly at compile time are rejected at compile time, whereas "dynamic typing" 
> means type errors lead to rejection at run-time, static typing means, by 
> definition, rejecting bad programs early. It seems to me this would be a 
> productivity gain.

Yes, it would, if the problem of identifying bad programs were decidable.

Doh!
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <eaac3382-bbcf-4afa-834a-dde66dd5f9f0@r8g2000yql.googlegroups.com>
Hi Chris!
> > This is ok. Easy macros are harmful.
>
> This is like saying easy functions are harmful,
> because it shows up
> deficiencies in the standard library.
> It's a non-sequitur.
I didn't mean macros themselves are harmful, but
having them as an excuse for bad syntax is harmful.
Other languages without macros have syntax that is
more carefully designed.

> > It is mostly statically-typed (and hence fast)
> This is both wrong and wrong-headed.
I didn't measure Boo vs. Lisp. But in fact,
lisp programs are fast only when most types
are declared. You can look at the lisp code at

http://shootout.alioth.debian.org/

and its comparative performance to get
some idea about that. You can even try
to improve the SBCL code there. The most
spectacular case is the regexp example.
Here lisp's ability to generate native
code at runtime should have shown
its advantages. But SBCL is not the
winner of the regexp competition. So I
have something to back up what I say:
until someone writes faster regexp code,
I believe that lisp is slower than C
and Java.
From: Chris Barts
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <874owvk8o4.fsf@chbarts.motzarella.org>
budden <···········@mail.ru> writes:

> Hi Chris!
>> > This is ok. Easy macros are harmful.
>>
>> This is like saying easy functions are harmful,
>> because it shows up
>> deficiencies in the standard library.
>> It's a non-sequitur.
> I didn't mean macros themselves are harmful, but
> having them as an excuse for bad syntax is harmful.
> Other languages without macros have syntax that is
> more carefully designed.

Well, I don't think Common Lisp's syntax is bad so this discussion is
unlikely to go anywhere.

>
>> > It is mostly statically-typed (and hence fast)
>> This is both wrong and wrong-headed.
> I didn't measure boo vs lisp. But in fact,
> lisp programs are fast only when most types
> are declared. 

This is true. I realize this. My point is that most Lisp code (on a
function-by-function basis) doesn't need to be fast, or at least not
*that* fast. Lisp's type system is an optimization you can turn on
only as needed, which is by far how I prefer to develop code.
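
A sketch of what "turning it on as needed" looks like in practice
(function and types invented for the example):

```lisp
;; Prototype version: no declarations, fully generic, fast enough
;; for everything that isn't a hot spot.
(defun dot3 (a b)
  (+ (* (aref a 0) (aref b 0))
     (* (aref a 1) (aref b 1))
     (* (aref a 2) (aref b 2))))

;; Hot-spot version: same code, plus declarations that let the
;; compiler open-code double-float arithmetic.
(defun dot3-fast (a b)
  (declare (type (simple-array double-float (3)) a b)
           (optimize (speed 3) (safety 1)))
  (+ (* (aref a 0) (aref b 0))
     (* (aref a 1) (aref b 1))
     (* (aref a 2) (aref b 2))))
```

The prototype version runs everywhere; the declared version is what you
write after profiling tells you this is where the time goes.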

> You can look at lisp code at
>
> http://shootout.alioth.debian.org/
>
> and its comparative performance to have
> some ideas about that. You can even try
> to improve/add SBCL code there. The most
> spectacular is regexp example.
> Here lisp's ability to render native
> code at runtime should have shown
> its advantages. But SBCL is not the
> winner of regexp competition. 

It's the winner in not annoying me. That's the main competition I care
about.

> So I have something to back up what I say: until someone writes
> faster regexp code, I believe that lisp is slower than C and Java.

No, it just shows that SBCL isn't as good at optimizing Common Lisp as
the C and Java compilers they used are at optimizing their respective
languages. Saying a language is slow is meaningless: Did C get faster
as more optimizations were added to commonly used C compilers? No. The
language itself hardly changed. The compilers changed. 

A language is a specification, and a specification is only fast when
the paper it's written on has been shot out of a cannon.
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <aba2d80f-d11a-4e9b-8636-ea435daa494f@j12g2000vbl.googlegroups.com>
> It's the winner in not annoying me. That's the main competition I care
> about.
I was talking about execution speed, not about annoying you.

> A language is a specification, and a specification is only fast when
> the paper it's written on has been shot out of a cannon.
But only an implementation can run on a hardware.
From: Leandro Rios
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <grrdlp$h31$1@news.motzarella.org>
budden wrote:

> I didn't measure boo vs lisp. But in fact,
> lisp programs are fast only when most types
> are declared. You can look at lisp code at
> 
> http://shootout.alioth.debian.org/
> 
> and its comparative performance to have
> some ideas about that. You can even try
> to improve/add SBCL code there. The most
> spectacular is regexp example.

Do you mean the regex-dna example?

Leandro
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <88bc9ea8-4845-4285-97a7-050357b21310@r28g2000vbp.googlegroups.com>
Hi Leandro!
> Do you mean the regex-dna example?
Yes. Also, one can see that generally
CL is slower than C++ and server Java.

I'm impressed by how far the static vs. dynamic typing
discussion has expanded. Everyone seems to have forgotten
about the topic :)
From: William James
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <grqh480j4t@enews4.newsguy.com>
budden wrote:

> I'm a rather marginal person at comp.lang.lisp

Yes, not being a member of a pack of hyenas is admirable,
but don't boast about it.

-- 
Common Lisp is a significantly ugly language.  --- Dick Gabriel 
The good news is, it's not Lisp that sucks, but Common Lisp.
 --- Paul Graham
Common LISP is the PL/I of Lisps.  ---  Jeffrey M. Jacobs
From: Kenneth Tilton
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49ddc254$0$27775$607ed4bc@cv.net>
Ray Dillinger wrote:
> David Moon has created a programming language called PLOT, 
> for "Programming Language for Old-Timers." 
> 
...

> PLOT... prefers to use indentation 
> rather than parens to denote expression nesting.  

Game over. But...why was this not cross-posted to python?! They live for 
indentation!

kt
From: Raffael Cavallaro
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <85b110ef-9be2-47d0-b218-91001578b7f4@s28g2000vbp.googlegroups.com>
On Apr 9, 5:39 am, Kenneth Tilton <·········@gmail.com> wrote:
> Ray Dillinger wrote:
> > David Moon has created a programming language called PLOT,
> > for "Programming Language for Old-Timers."
>
> ...
>
> > PLOT... prefers to use indentation
> > rather than parens to denote expression nesting.  
>
> Game over. But...why was this not cross-posted to python?! They live for
> indentation!
>
> kt

Indeed, PLOT really is a plot, and the acronym secretly stands for:


Pythonic Load Of Turds

or

Pythonic Lisp Of Tomorrow

depending on how you feel about significant whitespace

;^)
From: Raffael Cavallaro
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <b1d8dc99-1362-4670-8028-8527ac45bd84@e21g2000vbe.googlegroups.com>
On Apr 9, 3:47 pm, Raffael Cavallaro <················@gmail.com>
wrote:
> On Apr 9, 5:39 am, Kenneth Tilton <·········@gmail.com> wrote:
>
> > Ray Dillinger wrote:
> > > David Moon has created a programming language called PLOT,
> > > for "Programming Language for Old-Timers."
>
> > ...
>
> > > PLOT... prefers to use indentation
> > > rather than parens to denote expression nesting.  
>
> > Game over. But...why was this not cross-posted to python?! They live for
> > indentation!
>
> > kt
>
> Indeed, PLOT really is a plot, and the acronym secretly stands for:
>
> Pythonic Load Of Turds
>
> or
>
> Pythonic Lisp Of Tomorrow
>
> depending on how you feel about significant whitespace
>
> ;^)

How long, I wonder, till someone makes a pun about Dave *Moon*?
From: Ray Dillinger
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49de9569$0$95560$742ec2ed@news.sonic.net>
Kenneth Tilton wrote:

> Ray Dillinger wrote:

>> PLOT... prefers to use indentation
>> rather than parens to denote expression nesting.
 
> Game over. But...why was this not cross-posted to python?! They live for
> indentation!

Yah, significant whitespace isn't my favorite invention either, 
but it *can* work in languages designed for it. 

Not crossposted to Python 'cause I don't like the attitude of 
the Pythonistas.  So sue me. Even the Lisp/Scheme crosspost is 
less likely to result in a flamewar than a Python/either would 
be.  Also, because PLOT is (semantically) more like a Lisp than 
it is like Python. 

                                Bear
From: game_designer
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <38c563ce-7bd2-4dc7-88f5-17d7539fb56d@l1g2000yqk.googlegroups.com>
On Apr 9, 6:37 pm, Ray Dillinger <····@sonic.net> wrote:
> Kenneth Tilton wrote:
> > Ray Dillinger wrote:
> >> PLOT... prefers to use indentation
> >> rather than parens to denote expression nesting.
> Not crossposted to Python 'cause I don't like the attitude of
> the Pythonistas.  So sue me. Even the Lisp/Scheme crosspost is
> less likely to result in a flamewar than a Python/either would
> be.  Also, because PLOT is (semantically) more like a Lisp than
> it is like Python.

What is the point of calling something with infix notation and
without the parens a Lisp? Most people would call that Ruby, Python, ...
Seems to me that Lisp is Lisp, with prefix and parens, for a good
reason.

alex
From: Ray Dillinger
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <49e69a56$0$95571$742ec2ed@news.sonic.net>
game_designer wrote:

> On Apr 9, 6:37 pm, Ray Dillinger <····@sonic.net> wrote:
>>...PLOT is (semantically) more like a Lisp than
>> it is like Python.

> What is the point of calling something with infix notation and
> without the parens a Lisp? Most people would call that Ruby, Python, ...
> Seems to me that Lisp is Lisp, with prefix and parens, for a good
> reason.

Don't confuse semantics with surface syntax.  PLOT and Dylan 
(and a few other languages) are Lisps because: 

1. For any source code there is a data structure isomorphic 
   to that code, which can be written and read.

2. There is a simple way to read something that would 
   otherwise be read as code, as data instead (like QUOTE 
   in Common Lisp), according to the same isomorphism.

3. Data which is isomorphic to source code according to 
   the isomorphism used in 1 and 2, can be converted into 
   executable code just as source code can (like EVAL 
   in Common Lisp), either at compile time or runtime.

4. There is a syntactic abstraction mechanism (macros) that 
   uses the code-data isomorphism.

The traditional way to achieve Lisp semantics is with a fully-
parenthesized prefix syntax for code which is identical to a
notation for an isomorphic list structure of data.  But other 
solutions are possible.  It is the semantics, and not the 
syntax, that makes a language a lisp. 
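[The four points above can be sketched in Common Lisp itself. A minimal
illustration added for this archive, not part of the original post; the
SWAP-ARGS macro is a made-up example, not from PLOT or any library:]

```lisp
;; 1. Source code has an isomorphic data structure that can be
;;    written and read back:
(defparameter *code* (read-from-string "(+ 1 2)"))  ; a plain list

;; 2. QUOTE reads what would otherwise be code, as data,
;;    via the same isomorphism:
(defparameter *same-code* '(+ 1 2))

;; 3. EVAL converts such data back into executable code:
(eval *code*)        ; => 3
(eval *same-code*)   ; => 3

;; 4. Macros are syntactic abstractions over that same representation:
(defmacro swap-args (form)
  "Rewrite (op a b) into (op b a) by manipulating code-as-list."
  (destructuring-bind (op a b) form
    (list op b a)))

(swap-args (- 10 4)) ; expands to (- 4 10) => -6
```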

                                Bear
From: Kaz Kylheku
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090419082814.843@gmail.com>
On 2009-04-09, Ray Dillinger <····@sonic.net> wrote:
>
> David Moon has created a programming language called PLOT, 
> for "Programming Language for Old-Timers." 

For the rest of us:

  Parenthesized, Indented, (but otherwise) Free-Form Lisp Expressions.

  PIFFLE!

:)
From: jeff
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <ae2c6d41-a941-4ce6-b9a0-4c246b91d170@k2g2000yql.googlegroups.com>
Is there any actual code here?
From: budden
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <9a588e1f-9012-4b0a-ac88-a090810fadb3@b16g2000yqb.googlegroups.com>
On 10 Apr, 15:45, jeff <········@gmail.com> wrote:
> Is there any actual code here?
Yes, that is an interesting question :)
If this is only a spec, I think I could do a better one.

Some comments on the language proposed:
> Define as much as possible by the binding of names to values.
I think this is not a good policy. A 2-lisp is fine. Indeed,
people use N-lisps. E.g. I use lisp to generate SQL. Consider
a stored procedure database.foo. What should the symbol foo mean?
A. It should mean a reference to the metadata of database.foo.
B. It should invoke database.foo from lisp.
C. It should represent database.foo when generating trigger code.
In fact, I want it to have all three meanings. I also want to be
able to navigate through all the code I created for database.foo
with my EMACS M-. command. If the language is intentionally a
1-lisp, that becomes hard.
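[Editor's illustration: Common Lisp is already a 2-lisp in the limited
sense that one symbol carries separate function and value meanings,
resolved by position. The name PROBE below is a made-up example:]

```lisp
;; The same symbol, two simultaneous meanings (CL's two namespaces):
(defun probe () :called-as-function)   ; function namespace
(defvar probe :read-as-variable)       ; value namespace

(probe)  ; => :CALLED-AS-FUNCTION  (operator position)
probe    ; => :READ-AS-VARIABLE    (value position)
```

budden's point is that a fixed 1-lisp (single-namespace) design rules
out even this much context-dependent meaning, let alone a three-way
dispatch like the database.foo case.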

> case-insensitive names
Unacceptable. It is a great tragedy that CL is
case-insensitive by default and standard symbols are
uppercase. Very inconvenient for metaprogramming against
case-sensitive targets.

> Minimize the use of punctuation and maximize the use of
> whitespace, for readability
Not sure this is ok. Indented code is definitely more readable,
but the kill-sexp and forward-sexp EMACS commands are very cool.

> Everything about the language is to be defined in the
> language itself.
Very good.

> The language is to be fully extensible by users, with
> no magic.
Very good, but CL is already almost ideal here. Not sure it
can be improved significantly.

> In Lisp, code walking requires ad hoc code to understand every
> "special form." This is unmodular, error-prone, and a waste of time.
> As always, the solution to this type of problem is object
> orientation. In PLOT there is a well-defined, object-oriented
> interface to the Abstract Syntax Tree, scopes, and definitions.
> This is one reason why objects are better than S-expressions
> as a representation for program source code.
I'm unsure. It looks like having one good standard code walker
would solve that problem too.

> As mentioned above, a token-stream keeps track of source
> locations.
Nice to have this built in. CL sucks here. The reader should be
able to annotate every cons it reads; to fix this, one needs to
redefine the entire CL reader. The result is difficulty in
finding error locations, which greatly reduces CL coding
productivity. Especially horrible is trying to find errors
in macroexpanded code.

Also, it looks like the language abandons the ability
to print data readably, which is an extremely powerful
feature of lisp. Or maybe I just didn't find it.
Having an index of all symbols would be useful here.
From: fft1976
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <c8aaf7b4-7d03-4d09-96c5-88450a29b1fa@j9g2000prh.googlegroups.com>
Very long thread. What did the hive mind decide about this syntax, if
anything?
From: Scott Burson
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <537f7658-4c5f-4a47-adb1-703ca8a038c2@b6g2000pre.googlegroups.com>
On May 10, 10:14 am, fft1976 <·······@gmail.com> wrote:
> Very long thread. What did the hive mind decide about this syntax, if
> anything?

I don't think the syntax was discussed after message 7 or so.  The
vast majority of the thread was installment 372 of the never-ending
debate between those who like static typing and those who like dynamic
typing.  The original topic was long lost, I'm afraid.

My opinion?  It looks like an interesting design, worked out with Dave
Moon's usual meticulousness.  But I'm a parenthophile, happily hacking
in CL, with no need for a new Lisp.

-- Scott
From: Ray Blaak
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <uzldko8h1.fsf@STRIPCAPStelus.net>
fft1976 <·······@gmail.com> writes:

> Very long thread. What did the hive mind decide about this syntax, if
> anything?

Shit, sorry.

I don't even know, but would guess it's a no.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Robbert Haarman
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <20090511051245.GK3862@gildor.inglorion.net>
The little I've seen, I like, other than macros. The syntax is pretty 
easy to read and quite similar to other programming languages. The 
syntactic sugar for anonymous functions is nice, and solves one of the 
issues I have with the lisps I use (lambda is too verbose).
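[On the "lambda is too verbose" complaint: in Common Lisp the usual
workaround is a trivial abbreviating macro. A sketch added by the
editor; FN is a conventional made-up name, not a standard operator:]

```lisp
;; Alias LAMBDA to a shorter name with a one-line macro.
(defmacro fn (args &body body)
  `(lambda ,args ,@body))

;; Compare the two spellings:
(mapcar (lambda (x) (* x x)) '(1 2 3))  ; => (1 4 9)
(mapcar (fn (x) (* x x)) '(1 2 3))      ; => (1 4 9)
```

This shortens the spelling but not the shape; PLOT's dedicated
anonymous-function sugar goes further than a macro alias can.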

The macro syntax...I find it needlessly ugly, but perhaps it just takes 
getting used to.

On the whole, I would have to actually use the language to be able to 
really judge it. And the final judgment will depend on much more than 
the syntax. To name a few things:

 - Easy access to features I use a lot, such as command line 
   arguments, TCP and UDP sockets, POSIX (environment variables, users, 
   and more), binary blobs and bitwise operations.

 - Speed and memory usage of programs written in the language.

 - Safety and reliability of programs written in the language. (Think 
   buffer overflows, exceptions that exit your infinite loop, ...).

 - Overall feel of the language. Does it just let you write what you 
   want, or does it force things upon you? (Think "public static void", 
   "free()", or, on the other side, metaprogramming.)

I already have language implementations that do well on some 
combinations of the above. If PLOT manages to give me a new combination, 
I will definitely want to use it. 

For some data points, I consider C to be great at features and 
efficiency, bad at safety, and decent at overall feel. Ruby is good at 
features, bad at efficiency, decent at safety and reliability, and 
great at overall feel. Lisps tend to be good at safety and reliability 
and overall feel, bad at features, and at best decent in terms of 
efficiency (heavily dependent on implementation).

Make a Lisp (fully parenthesized or not) that has easy access to the 
features I use and that allows easy creation of executables that are 
more or less efficient and you have something very hard to beat.

All this purely my opinion, of course.

Regards,

Bob

-- 
Success is getting what you want; happiness is wanting what you get.

From: ·············@gmx.at
Subject: Re: PLOT: A non-parenthesized, infix Lisp!
Date: 
Message-ID: <c5c4cb08-84b2-4eb0-9ac6-169d8c568390@z5g2000vba.googlegroups.com>
On 11 Mai, 07:12, Robbert Haarman <··············@inglorion.net>
wrote:
> The little I've seen, I like, other than macros. The syntax is pretty
> easy to read and quite similar to other programming languages. The
> syntactic sugar for anonymous functions is nice, and solves one of the
> issues I have with the lisps I use (lambda is too verbose).
>
> The macro syntax...I find it needlessly ugly, but perhaps it just takes
> getting used to.
>
> On the whole, I would have to actually use the language to be able to
> really judge it. And the final judgment will depend on much more than
> the syntax. To name a few things:
>
>  - Easy access to features I use a lot, such as command line
>    arguments, TCP and UDP sockets, POSIX (environment variables, users,
>    and more), binary blobs and bitwise operations.

I would be interested to get a more detailed list of
features you consider important.

Greetings Thomas Mertes

Seed7 Homepage:  http://seed7.sourceforge.net
Seed7 - The extensible programming language: User defined statements
and operators, abstract data types, templates without special
syntax, OO with interfaces and multiple dispatch, statically typed,
interpreted or compiled, portable, runs under linux/unix/windows.