From: ilias
Subject: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <3D7CB8DF.8050108@pontos.net>
This request is addressed to those people who have extensive knowledge 
about different, if possible all available, historical and so-called 
modern languages.

Which language should I take a look at?

Is there something, that can impress me?

I'm a LISP novice. But from what I've seen, I can construct with this 
language nearly everything I can imagine.

So the questions:

- which language out there gives me this freedom of construction?
- which language has some interesting constructs I could assimilate?
- is there any complete comparison of languages available which gives 
a quick overview?

I ask this in c.l.l. because it relates to LISP.

From: Oleg
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <aligh2$ag9$1@newsmaster.cc.columbia.edu>
ilias wrote:

[...]
> Which language should i take a look on?

ML, more specifically its O'Caml dialect. Its features include:

a) A language and an implementation in one (from the practical point of 
view, this is an advantage, but see point "d"). Here I list features of 
varying importance for both.
b) Interpreter, virtual machine, and native compiler
c) Maximum portability (I know of people developing commercial Windows 
products under Linux)
d) Open source (you are free to fork the implementation if you don't like 
something)
e) A license that does not inhibit commercial use of your programs
f) Type inference (no need to declare types in function definitions)
g) Very strict type system: no overloading and no implicit conversions; 
even printf is statically type checked!
h) Eager AKA strict execution model (as opposed to lazy as in Haskell)
i) Both functional and imperative programming (i.e. the language has 
higher-order functions)
j) Both OO and non-OO programming
k) Abstract types
l) Polymorphism
m) Garbage collection
n) Automatic marshalling: no need to write serialization code (at least in 
some/most cases)
o) Fast execution (IMHO, as a result of static type checking, run-time does 
not need to type-check)
p) Good performance in programming contests
q) Easy C interface (both for calling O'Caml from C and C from O'Caml)
r) Lexical scoping, of course
s) O'Reilly book in French, and its English translation freely available on 
the net (but not [yet] published)
t) Pattern matching

Cheers and pardon me/us for drifting off-topic.
Oleg
From: ilias
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <3D7CCC57.4080000@pontos.net>
Oleg wrote:
> ilias wrote:
> 
> [...]
> 
>>Which language should i take a look on?
> 
> ML, more specifically its O'Caml dialect. It's features include:
> 
...

> 
> Cheers and pardon me/us for drifting off-topic.
> Oleg


pardon granted.

Thank you for your in-topic reply.

I've not yet looked at: http://www.ocaml.org/

The language, from what you have written, sounds interesting.

What about the ability to generate code?

Is there anything with the strength of CL?
From: Oleg
Subject: OT: O'Caml (was: LISP - When you've seen it, what else can impress?)
Date: 
Message-ID: <alikuf$ece$1@newsmaster.cc.columbia.edu>
ilias wrote:

> Oleg wrote:
>> ilias wrote:
>> 
>> [...]
>> 
>>>Which language should i take a look on?
>> 
>> ML, more specifically its O'Caml dialect. It's features include:
>> 
> ...
> 
>> 
>> Cheers and pardon me/us for drifting off-topic.
>> Oleg
> 
> 
> pardon granted.
> 
> Thank you for you in-topic-reply.
> 
> i've not looked yet at: http://www.ocaml.org/
> 
> The language, from what you have written, sounds interesting.
> 
> What about the ability to generate code?

Of course O'Caml programs can generate O'Caml code, and I don't see why it 
should be any harder than in Lisp. Just like Lisp, O'Caml programs are 
expressions, and you can even have prefix arithmetic operators: you can 
write "(+) 5 7" or "((+) 5 7)" instead of "5 + 7", and also "(f (g (h x)))" 
instead of "f (g (h x))" if you insist. Speaking of Lisp syntax in O'Caml, 
I think O'Caml standard distribution even has a module for Lisp syntax that 
lets it understand things like "(if a b c)" as "if a then b else c", but I 
doubt that many people use it.

I personally never had much use for programs in language X that generate 
programs in language X, especially if language X has higher-order 
functions, but maybe others can't live without it.

Oleg
From: ilias
Subject: Re: OT: O'Caml (was: LISP - When you've seen it, what else can impress?)
Date: 
Message-ID: <3D7CDE6D.60107@pontos.net>
Oleg wrote:
> ilias wrote:
> 
>>Oleg wrote:
>>
>>>ilias wrote:
>>>
>>>[...]
>>>
>>>>Which language should i take a look on?
>>>
>>>ML, more specifically its O'Caml dialect. It's features include:
>>
>>...
>>
>>>Cheers and pardon me/us for drifting off-topic.
>>>Oleg
>>
>>pardon granted.
>>
>>Thank you for you in-topic-reply.
>>
>>i've not looked yet at: http://www.ocaml.org/
>>
>>The language, from what you have written, sounds interesting.
>>
>>What about the ability to generate code?

I'm quoting uncontrolled:

> Of course O'Caml programs can generate O'Caml code

Can you give me an example, please?


> you can write "(+) 5 7" or "((+) 5 7)" instead of "5 + 7"

> and also "(f (g (h x)))" instead of "f (g (h x))"



> has a module for [...] syntax that 

Can I make syntax-modules myself?
Must I generate such a module to add a syntax?
Or can I change the syntax directly in a standard program on the fly?
From: Oleg
Subject: Re: OT: O'Caml (was: LISP - When you've seen it, what else can impress?)
Date: 
Message-ID: <alinja$i7g$1@newsmaster.cc.columbia.edu>
ilias wrote:

> i'm quoting uncontrolled:
> 
>> Of course O'Caml programs can generate O'Caml code
> 
> can you give me an example please?

Certainly. Here's an example of code for
a valid O'Caml program that generates code for 
a valid O'Caml program that generates code for 
a valid O'Caml program that generates code for 
a valid O'Caml program that generates code for ....

AKA "the program that prints itself" (TM)

------ begin here ------------
---------- end here------------

It's small too. (0Mb)
 
>> you can write "(+) 5 7" or "((+) 5 7)" instead of "5 + 7"
> 
> > and also "(f (g (h x)))" instead of "f (g (h x))"
> 
> 
> 
>> has a module for [...] syntax that
> 
> Can make syntax-modules myself?

Yes.  http://caml.inria.fr/camlp4/manual/manual001.html

> Must i generate such a module to add a syntax?
> Or can i change the syntax directly in a standard-program on the fly?

Possibly. I use standard syntax. What is the nature of your obsession with 
syntax?

Oleg
From: Dave Bakhash
Subject: Re: OT: O'Caml (was: LISP - When you've seen it, what else can impress?)
Date: 
Message-ID: <c29sn0jne8h.fsf@no-knife.mit.edu>
Oleg <············@myrealbox.com> writes:

> > Must i generate such a module to add a syntax?  Or can i change the
> > syntax directly in a standard-program on the fly?
> 
> Possibly. I use standard syntax. What is the nature of your obsession
> with syntax?

Syntax has a lot to do with the Lisp language.  Non-Lisp programmers
under-estimate the importance because they program in other languages
whose syntaxes are only minor variations of one another, and don't buy
the programmer very much.  For CL programmers, the syntax, built-in
runtime reader and evaluator, and more are what make it so usable to
them.  That's why they "obsess" over it.

dave
From: Oleg
Subject: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alj0n5$pmc$1@newsmaster.cc.columbia.edu>
Dave Bakhash wrote:

> Syntax has a lot to do with the Lisp language.  Non-Lisp programmers
> under-estimate the importance because they program in other languages
> whose syntaxes are only minor variations of one another, and don't buy
> the programmer very much.  For CL programmers, the syntax, built-in
> runtime reader and evaluator, and more are what make it so usable to
> them.  That's why they "obsess" over it.

I'm not an expert in macros, but IIRC I've seen an example of a "loop" 
macro in Lisp that was used to demonstrate their usefulness. I'm not sure I 
understand how using macros is any better than using higher-order functions 
(HOFs) though.

E.g. to write a loop construct that increments its argument by 2 instead of 
1, in O'Caml, I would write

let rec loop2 start finish f = 
  if finish < start then () else (f start; loop2 (start + 2) finish f)

which is probably close to what one could do with HOFs in Lisp. No need for 
macros. Now

loop2 1 9 print_int

will print 13579. I guess the old saying that "if you don't know it, you 
won't miss it" probably applies to me and Lisp macros here.

Cheers,
Oleg
From: ilias
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3D7D0B0E.1070007@pontos.net>
Oleg wrote:
> Dave Bakhash wrote:
> 
> 
>>Syntax has a lot to do with the Lisp language.  Non-Lisp programmers
>>under-estimate the importance because they program in other languages
>>whose syntaxes are only minor variations of one another, and don't buy
>>the programmer very much.  For CL programmers, the syntax, built-in
>>runtime reader and evaluator, and more are what make it so usable to
>>them.  That's why they "obsess" over it.
> 
> 
> I'm not an expert in macros, but IIRC I've seen an example of a "loop" 
> macro in Lisp that was used to demonstrate their usefulness. I'm not sure I 
> understand how using macros is any better than using higher-order functions 
> (HOFs) though.

...

> will print 13579. I guess the old saying that "if you don't know it, you 
> won't miss it" probably applies to me and Lisp macros here.

I'm not sure. Look here:

on page ... around 220, I think.
http://www.paulgraham.com/onlisptext.html

And I'm just writing in another topic; see the recent posts "The Challange of 
Nested Macros".

(Does anyone know how I can reference other threads without 
pointing to Google?)
From: Matthew Danish
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <20020910030057.J19990@lain.res.cmu.edu>
On Mon, Sep 09, 2002 at 04:36:53PM -0400, Oleg wrote:
> Dave Bakhash wrote:
> 
> > Syntax has a lot to do with the Lisp language.  Non-Lisp programmers
> > under-estimate the importance because they program in other languages
> > whose syntaxes are only minor variations of one another, and don't buy
> > the programmer very much.  For CL programmers, the syntax, built-in
> > runtime reader and evaluator, and more are what make it so usable to
> > them.  That's why they "obsess" over it.
> 
> I'm not an expert in macros, but IIRC I've seen an example of a "loop" 
> macro in Lisp that was used to demonstrate their usefulness. I'm not sure I 
> understand how using macros is any better than using higher-order functions 
> (HOFs) though.
> 
> E.g. to write a loop construct that increments its arument by 2 instead of 
> 1, in O'Caml, I would write
> 
> let rec loop2 start finish f = 
>   if finish < start then () else (f start; loop2 (start + 2) finish f)
> 
> which is probably close to what one could do with HOFs in Lisp. No need for 
> macros. Now
> 
> loop2 1 9 print_int
> 

Considering that Common Lisp can also express the same higher-order
function, perhaps you should consider why the LOOP macro is used.  Or
any macro for that matter.  It has already been demonstrated with the
lambda calculus that you can implement program control flow using only
functions.  Do you see people doing that?  Why have a WHEN macro when
you can just do (cond (... ...) (t nil)) ?
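
For what it's worth, WHEN itself is about the smallest macro of this kind;
a minimal sketch (named MY-WHEN to avoid shadowing the standard macro):

(defmacro my-when (test &body body)
  ;; Expands (my-when test forms...) into the COND shape shown above.
  `(cond (,test ,@body)
         (t nil)))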

On a lighter note:

If you want to have some fun, why not write a nice higher-order function
to do:

(defun silly-loop (string &optional (increment 1) (final-char nil))
  (loop for n from 0 by increment 
        for char across string
        until (eql char final-char)
        collect char into char-bag
        sum n into sum
        finally (return (values char-bag sum n))))

Try to make it half as readable.  And as efficient.

(If you don't behave, I'll break out a 36-line perverse combination of
LOOP and FORMAT and make you do that one ;)  (or perhaps that 30 line
LOOP which parses mbox files, I have somewhere)

> will print 13579. I guess the old saying that "if you don't know it, you 
> won't miss it" probably applies to me and Lisp macros here. 

There's also that old saying: "If you don't know it, you probably don't
know enough to compare it against something else"

(Not that I'm any less guilty of that at times =)

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alkbsd$pvo$1@newsmaster.cc.columbia.edu>
Matthew Danish wrote:

> 
> If you want to have some fun, why not write a nice higher-order function
> to do:
> 
> (defun silly-loop (string &optional (increment 1) (final-char nil))
> (loop for n from 0 by increment
>       for char across string
>       until (eql char final-char)
>       collect char into char-bag
>       sum n into sum
>       finally (return (values char-bag sum n))))
> 
> Try to make it half as readable.  And as efficient.

let silly_loop ?(increment = 1) ?(final_char = '\000') s = 
    let sum = ref 0 and i = ref 0 and char_bag = ref [] in
    let _ = try while true do
                    char_bag := s.[!i] :: !char_bag;
                    sum := !sum + !i;
                    if List.hd !char_bag = final_char then raise Exit;
                    i := !i + increment;
                done
    with _ -> () in (!char_bag, !sum, !i);;

Cheers
Oleg
From: Joe Marshall
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <fzwi0yms.fsf@ccs.neu.edu>
Oleg <············@myrealbox.com> writes:

> Matthew Danish wrote:
> 
> > 
> > If you want to have some fun, why not write a nice higher-order function
> > to do:
> > 
> > (defun silly-loop (string &optional (increment 1) (final-char nil))
> > (loop for n from 0 by increment
> >       for char across string
> >       until (eql char final-char)
> >       collect char into char-bag
> >       sum n into sum
> >       finally (return (values char-bag sum n))))
> > 
> > Try to make it half as readable.  And as efficient.
> 
> let silly_loop ?(increment = 1) ?(final_char = '\000') s = 
>     let sum = ref 0 and i = ref 0 and char_bag = ref [] in
>     let _ = try while true do
>                     char_bag := s.[!i] :: !char_bag;
>                     sum := !sum + !i;
>                     if List.hd !char_bag = final_char then raise Exit;
>                     i := !i + increment;
>                 done
>     with _ -> () in (!char_bag, !sum, !i);;
> 
> Cheers
> Oleg

Gee.  Without all those parentheses in the way Oleg's version is
*much* more readable.  (Although it could use a few more curly
braces and perhaps a dollar sign or two.)
From: Brian Palmer
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <0whofb5spk9.fsf@rescomp.Stanford.EDU>
Joe Marshall <···@ccs.neu.edu> writes:

> Gee.  Without all those parethesis in the way Oleg's version is
> *much* more readable.  (Although it could use a few more curly
> braces and perhaps a dollar sign or two.)

Hey, Perl should be pretty good at this sort of thing.
(And, incidentally, I think the more explicit solution avoids an
annoying "gotcha": almost every time I see a loop that has two
iterating constructs, I think it's nested rather than parallel loops.)

sub silly_loop {
  my ($increment, $final,$sum,$charbag);
  ($_,$increment,$final) = (shift,(shift or 1),shift);

  s/^(.*?)\Q$final\E.*/$1/ if defined($final);
    
  while (scalar(/(.)/g)) {
    $sum += (pos()-1)*$increment;
    $charbag .= $1;
  }
  return ($charbag,$sum,(pos||length)*$increment);
}

-- 
If you want divine justice, die.
                  -- Nick Seldon 
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <lkwuptbumv.fsf@pc022.bln.elmeg.de>
Joe Marshall <···@ccs.neu.edu> writes:

> Oleg <············@myrealbox.com> writes:
> 
> > let silly_loop ?(increment = 1) ?(final_char = '\000') s = 
> >     let sum = ref 0 and i = ref 0 and char_bag = ref [] in
> >     let _ = try while true do
> >                     char_bag := s.[!i] :: !char_bag;
> >                     sum := !sum + !i;
> >                     if List.hd !char_bag = final_char then raise Exit;
> >                     i := !i + increment;
> >                 done
> >     with _ -> () in (!char_bag, !sum, !i);;

> Gee.  Without all those parethesis in the way Oleg's version is
> *much* more readable.  (Although it could use a few more curly
> braces and perhaps a dollar sign or two.)

And it solves the wrong problem.  The /real/ problem is how to do

let loop ?? = ???;;

Or...  OCaml has a keyword `lazy'.  How would you implement it if it
wasn't already there?  (With Lisp macros you could).  Same with
`while', or `for'.  Or make a repeat...until, which isn't already
there, IIRC.

Regards,
-- 
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <allev5$lsn$1@newsmaster.cc.columbia.edu>
Nils Goesche wrote:

> Joe Marshall <···@ccs.neu.edu> writes:
> 
>> Oleg <············@myrealbox.com> writes:
>> 
>> > let silly_loop ?(increment = 1) ?(final_char = '\000') s =
>> >     let sum = ref 0 and i = ref 0 and char_bag = ref [] in
>> >     let _ = try while true do
>> >                     char_bag := s.[!i] :: !char_bag;
>> >                     sum := !sum + !i;
>> >                     if List.hd !char_bag = final_char then raise Exit;
>> >                     i := !i + increment;
>> >                 done
>> >     with _ -> () in (!char_bag, !sum, !i);;
> 
>> Gee.  Without all those parethesis in the way Oleg's version is
>> *much* more readable.  (Although it could use a few more curly
>> braces and perhaps a dollar sign or two.)
> 
> And it solves the wrong problem.  The /real/ problem is how to do
> 
> let loop ?? = ???;;

If the example Matthew Danish posted was pseudocode and needed a "loop" 
macro defined first to become valid CL code, then why [in hell] would I 
need a "loop" macro in O'Caml if I can implement silly_loop as easily in 
_valid_ O'Caml as in Lisp pseudocode?
 
> Or...  OCaml has a keyword `lazy'.  How would you implement it if it
> wasn't already there?  

You can't add keywords to the language AFAIK. I doubt that it's necessary 
to have a special keyword to define lazy data types.

> (With Lisp macros you could).  

In C++ too, while you can't add keywords, you can overload ordinary 
operators in such a way that a team of NSA experts will not be able to 
decipher your smallish program. Is that the goal?

> Same with
> `while', or `for'.  Or make a repeat...until, which isn't already
> there, IIRC.

IME do {} while(...); in C++ was almost always poor design. It's not in 
O'Caml for a reason.

I already posted "loop2" implementation in this thread. Other things you 
mention could be implemented similarly. Note that the body of

for i = x to y do
   (* ...body ... *)
done

is essentially a function of type int -> unit.

Cheers,
Oleg
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <lkadmpbqrq.fsf@pc022.bln.elmeg.de>
Oleg <············@myrealbox.com> writes:

> Nils Goesche wrote:
> 
> > And it solves the wrong problem.  The /real/ problem is how to do
> > 
> > let loop ?? = ???;;
> 
> If the example Matthew Danish posted was pseudocode and needed a "loop" 
> macro defined first to become valid CL code, then why [in hell] would I 
> need a "loop" macro in O'Caml if I can implement silly_loop as easily in 
> _valid_ O'Caml as in Lisp pseudocode?

LOOP wasn't always part of Lisp.  People wrote it as a macro.  It
still is a macro, only that it is part of the ANSI standard, now.
You've been asking for examples for what you can do with macros.  LOOP
itself is such an example.  Not what you can do with LOOP, but LOOP
itself.  And so is the whole of CLOS.  When it was invented, people
added it to Lisp as a bunch of macros.

> > Or...  OCaml has a keyword `lazy'.  How would you implement it if it
> > wasn't already there?  
> 
> You can't add keywords to the language AFAIK. I doubt that it's
> necessary to have a special keyword to define lazy data types.

The point is that lazy <FOO> defers the evaluation of the expression
<FOO>.  `lazy' cannot be a function because (as OCaml and Lisp are
strict) the arguments to a function call are always evaluated before
the function is entered.  With macros you can do that (and other
things).  lazy is a keyword in OCaml /precisely/ because people might
want it and you can't add it to the language if it isn't already
there.  Lisp doesn't have LAZY but you can add it when you want it,
see PAIP, for instance (I think it's called DELAY there).  And doing
that is very easy, whereas using the CamlP4 monster is not.
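
A minimal sketch of that DELAY (PAIP's version is more careful; the names
and the memoizing closure here are only illustrative):

(defmacro delay (expr)
  ;; DELAY must be a macro: it wraps EXPR in a closure *before* EXPR is
  ;; evaluated, which no ordinary function can do in a strict language.
  `(let ((value nil) (forced nil))
     (lambda ()
       (unless forced
         (setf value ,expr
               forced t))
       value)))

(defun force (thunk)
  (funcall thunk))

;; (force (delay (expensive-computation)))  ; EXPR runs at most once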

> > (With Lisp macros you could).
> 
> In C++ too, while you can't add keywords, you can overload ordinary 
> operators in such a way that a team of NSA experts will not be able to 
> decipher your smallish program. Is that the goal?

Guess.

> > Same with `while', or `for'.  Or make a repeat...until, which
> > isn't already there, IIRC.
> 
> IME do {} while(...); in C++ was almost always poor design. It's not in 
> O'Caml for a reason.

It is true that you don't need it very often.  But in some situations
it is exactly the right thing, and not using it would be silly.  In
Lisp, you could easily add it to the language (but it is already a
simple special case of LOOP).
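
A minimal sketch of such a repeat...until, written on top of LOOP precisely
because it is such a special case (the name REPEAT-UNTIL is illustrative):

(defmacro repeat-until (test &body body)
  ;; Run BODY at least once, stopping as soon as TEST becomes true.
  `(loop do (progn ,@body) until ,test))

;; (let ((i 0)) (repeat-until (> i 3) (print (incf i)))) prints 1 2 3 4.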

> I already posted "loop2" implementation in this thread. Other things you 
> mention could be implemented similarly. Note that the body of
> 
> for i = x to y do
>    (* ...body ... *)
> done
> 
> is essentially a function of type int -> unit.

That doesn't matter.  Note that all you're saying is that OCaml is
Turing-complete, which nobody denies, anyway.

Regards,
-- 
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Software Scavenger
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <a6789134.0209100919.1bda5485@posting.google.com>
Oleg <············@myrealbox.com> wrote in message news:<············@newsmaster.cc.columbia.edu>...

> let silly_loop ?(increment = 1) ?(final_char = '\000') s = 

I'm curious to know what do-combinations would look like in ocaml.
It works like this in lisp:

(do-combinations (a b c) '(1 2 3 4)
   (print (list a b c)))
and the output is:
(1 2 3) 
(1 2 4) 
(1 3 4) 
(2 3 4)

You can give it any number of symbols, i.e. K, e.g. K is 3 above and
the symbols are a, b, and c, and it iterates through all combinations
of N things taken K at a time.  The '(1 2 3 4) are the N things, with
N being 4.

I assume you can implement do-combinations in ocaml easily, but I'm
curious to see what it would look like.
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <allbod$jd4$1@newsmaster.cc.columbia.edu>
Software Scavenger wrote:

> Oleg <············@myrealbox.com> wrote in message
> news:<············@newsmaster.cc.columbia.edu>...
> 
>> let silly_loop ?(increment = 1) ?(final_char = '\000') s =
> 
> I'm curious to know what do-combinations would look like in ocaml.
> It works like this in lisp:
> 
> (do-combinations (a b c) '(1 2 3 4)
>    (print (list a b c)))
> and the output is:
> (1 2 3)
> (1 2 4)
> (1 3 4)
> (2 3 4)
> 
> You can give it any number of symbols, i.e. K, e.g. K is 3 above and
> the symbols are a, b, and c, and it iterates though all combinations
> of N things taken K at a time.  The '(1 2 3 4) are the N things, with
> N being 4.
> 
> I assume you can implement do-combinations in ocaml easily, but I'm
> curious to see what it would look like.

I understand that do-combinations is not defined in CL (at least LispWorks 
environment seems to tell me so). If I were to implement it in O'Caml, it 
would have the type:

val do_combinations: int -> 'a list -> 'a list list

i.e. it would take a list (of any element type) and an integer (K = 3 in 
your case) and return a list of lists (of that type), and I would use it 
like this:

do_combinations 3 [1; 2; 3; 4]

which should return 
[[1; 2; 3]; [1; 2; 4]; [1; 3; 4]; [2; 3; 4]]

or 

do_combinations 3 ["eins"; "zwei"; "drei"; "vier"]

which should return
[["eins"; "zwei"; "drei"]; ["eins"; "zwei"; "vier"]; ["eins"; "drei"; 
"vier"]; ["zwei"; "drei"; "vier"]]

(type-safe polymorphism)

If you wanted to pretty-print the result, you could use "iter" HOF defined 
in List module.

Cheers
Oleg

P.S. If you still want me to implement do_combinations in O'Caml, give me a 
Lisp implementation (I don't want to think of an algorithm for it)
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <lksn0hbts6.fsf@pc022.bln.elmeg.de>
Oleg <············@myrealbox.com> writes:

> Software Scavenger wrote:
> 
> > (do-combinations (a b c) '(1 2 3 4)
> >    (print (list a b c)))

> I understand that do-combinations is not defined in CL (at least LispWorks 
> environment seems to tell me so). If I were to implement it in O'Caml, it 
> would have the type:
> 
> val do_combinations: int -> 'a list -> 'a list list

No, that's not the point.  The point is that you can write

  (do-combinations (a b c) some-list
     <BODY>)

and then the code in BODY is called repeatedly with A, B and C bound
to the values of a ``combination''.  To do that with higher order
functions, you'd have to write something like

  (do-combinations (lambda (a b c)
                      <BODY>)
                   some-list)

and you write do-combinations as a macro iff you don't want to do
that.
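
A minimal sketch of that arrangement: an ordinary higher-order function
does the work, and the macro only supplies the LAMBDA (the helper name
MAP-COMBINATIONS and the exact expansion are illustrative, not code from
the thread):

(defun map-combinations (k list fn)
  ;; Call FN on every K-element combination of LIST, in order, without
  ;; ever building the list of all combinations.
  (cond ((zerop k) (funcall fn '()))
        ((null list) nil)
        (t (map-combinations (1- k) (rest list)
                             (lambda (tail)
                               (funcall fn (cons (first list) tail))))
           (map-combinations k (rest list) fn))))

(defmacro do-combinations (vars list &body body)
  ;; Let the caller write a bare body with A, B, C as ordinary variables.
  (let ((combo (gensym)))
    `(map-combinations ,(length vars) ,list
                       (lambda (,combo)
                         (destructuring-bind ,vars ,combo
                           ,@body)))))

;; (do-combinations (a b c) '(1 2 3 4) (print (list a b c)))
;; prints (1 2 3) (1 2 4) (1 3 4) (2 3 4).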

Regards,
-- 
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alld8l$kft$1@newsmaster.cc.columbia.edu>
Nils Goesche wrote:

> Oleg <············@myrealbox.com> writes:
> 
>> Software Scavenger wrote:
>> 
>> > (do-combinations (a b c) '(1 2 3 4)
>> >    (print (list a b c)))
> 
>> I understand that do-combinations is not defined in CL (at least
>> LispWorks environment seems to tell me so). If I were to implement it in
>> O'Caml, it would have the type:
>> 
>> val do_combinations: int -> 'a list -> 'a list list
> 
> No, that's not the point.  The point is that you can write
> 
>   (do-combinations (a b c) some-list
>      <BODY>)
> 
> and then the code in BODY is called repeatedly with A, B and C bound
> to the values of a ``combination''.  To do that with higher order
> functions, you'd have to write something like
> 
>   (do-combinations (lambda (a b c)
>                       <BODY>)
>                    some-list)

More precisely, in O'Caml you would do

List.iter print_int_list (do_combinations 3 [1; 2; 3; 4])

What's wrong with that? 

Now let's see that "do_combinations" definition! And I'll try to show how 
much easier it is to define it in O'Caml (if that is the case).

Cheers,
Oleg

> and you write do-combinations as a macro iff you don't want to do
> that.
> 
> Regards,
From: Christopher Browne
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <allhig$1pd0ha$1@ID-125932.news.dfncis.de>
In the last exciting episode, Oleg <············@myrealbox.com> wrote::
> Nils Goesche wrote:
>
>> Oleg <············@myrealbox.com> writes:
>> 
>>> Software Scavenger wrote:
>>> 
>>> > (do-combinations (a b c) '(1 2 3 4)
>>> >    (print (list a b c)))
>> 
>>> I understand that do-combinations is not defined in CL (at least
>>> LispWorks environment seems to tell me so). If I were to implement it in
>>> O'Caml, it would have the type:
>>> 
>>> val do_combinations: int -> 'a list -> 'a list list
>> 
>> No, that's not the point.  The point is that you can write
>> 
>>   (do-combinations (a b c) some-list
>>      <BODY>)
>> 
>> and then the code in BODY is called repeatedly with A, B and C bound
>> to the values of a ``combination''.  To do that with higher order
>> functions, you'd have to write something like
>> 
>>   (do-combinations (lambda (a b c)
>>                       <BODY>)
>>                    some-list)
>
> More precisely, in O'Caml you would do
>
> List.iter print_int_list (do_combinations 3 [1; 2; 3; 4])
>
> What's wrong with that? 

What's wrong with that is that you have it "inside-out."

The point of the exercise is to write code that might look something
like:

with-combinations (a, b, c) of [ 1; 2; 3; 4] do
  do_this(a, b, c);
  do_that(a, b, c);
  do_something_else(a, b, c);
enddo;

The "HOF way" of doing this would be to have something like

let dostuff (a, b, c) =
   do_this (a, b, c);
   do_that (a, b, c);
   do_something_else (a, b, c);

And then do:

List.iter dostuff (do_combinations 3 [1; 2; 3; 4])

But the whole point is to NOT need to create the extra function,
dostuff(a,b,c), when it's only being used once.

There's not any fundamental "horror" at having the function; the point
is that with Lisp, you can create macros so that this sort of thing
isn't necessary.

It is Really Useful when working with things where there is some sort
of "protocol" surrounding the use of something.

For instance:

When you work with a file, starting with OPEN, and ending with CLOSE,
it's nice to hide that all inside.

Your code looks like:

    (with-open-file (s filename other-parms-controlling-opening)
      (do this with s)
      (do that with s)
      (do something else with s))

The OPEN and CLOSE are hidden; you never need to worry about them.  

And you can embed this inside your code.

You _don't_ have to split things up artificially into extra functions
just because you're using this.

You _don't_ have to create:

(defun do-stuff (s more-args)
  (do this with s)
  (do that with s)
  (do something else with s))

The reason why it's Rather Bad to need to create the extra function is
that this whole thing might be embedded in some other lexical state.  It's not just: 

    (with-open-file (s filename other-parms-controlling-opening)
      (do this with s)
      (do that with s)
      (do something else with s))

It's more like:

(defun foo (h1 h2 h3) 
 (let ((a (+ 1 h3))
       (b 2))
    (with-open-file (s filename other-parms-controlling-opening)
      (do this with s a)
      (do that with s b h1)
      (do something else with s a b h2))))

By having the macro expand in-place, there's no need to name extra
functions, no need to expressly create an extra
"let/labels/flabels/..." environment in order to create extra
functions.

No such "muss and fuss."  The macro system expands the structure
looking like (with-open-file args &body) into the set of code that
manages the file.  It may _create_ a let or two.  It may call some
functions.  I don't need to care about those details; I just write
code.
-- 
(reverse (concatenate 'string ·············@" "enworbbc"))
http://cbbrowne.com/info/lisp.html
"We believe Windows 95 is a walking antitrust violation"
-- Bryan Sparks
From: Stephen J. Bevan
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <m3hegx1sb4.fsf@dino.dnsalias.com>
Christopher Browne <········@acm.org> writes:
> When you work with a file, starting with OPEN, and ending with CLOSE,
> it's nice to hide that all inside.
> 
> Your code looks like:
> 
>     (with-open-file (s filename other-parms-controlling-opening)
>       (do this with s)
>       (do that with s)
>       (do something else with s))
> 
> The OPEN and CLOSE are hidden; you never need to worry about them.  

I think the do-combinations example I snipped is a good one for showing
how something can be done with macros that is not easy to do using
higher-order functions (modulo arguments about SCC that can remove the
list creation from the HOF version).  However, with with-open-file
you've strayed into territory that is easy to cover with higher-order
functions, and not even particularly onerous in any language that has
them.

> (defun foo (h1 h2 h3) 
>  (let ((a (+ 1 h3))
>        (b 2))
>     (with-open-file (s filename other-parms-controlling-opening)
>       (do this with s a)
>       (do that with s b h1)
>       (do something else with s a b h2))))
> 
> By having the macro expand in-place, there's no need to name extra
> functions, no need to expressly create an extra
> "let/labels/flabels/..." environment in order to create extra
> functions.

Does the ... cover lambda :-

 (defun foo (h1 h2 h3) 
  (let ((a (+ 1 h3))
        (b 2))
     (with-open-file filename other-parms-controlling-opening
       (lambda (s)
         (do this with s a)
         (do that with s b h1)
         (do something else with s a b h2)))))

I don't find this any worse than the macro version myself for this
particular example (one extra indentation, possible closure creation).
For smaller examples, a more concise notation for lambda would be
helpful, and with macros one is free to create it :-)
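
One sketch of such a shorthand, done with a reader macro rather than an
ordinary macro (the bracket syntax is purely illustrative):

;; After this, [(x) (* x x)] reads as (lambda (x) (* x x)).
(set-macro-character #\[
  (lambda (stream char)
    (declare (ignore char))
    `(lambda ,@(read-delimited-list #\] stream t))))
(set-macro-character #\] (get-macro-character #\)))

;; (mapcar [(x) (* x x)] '(1 2 3)) => (1 4 9)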
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alm9e5$auv$2@newsmaster.cc.columbia.edu>
Christopher Browne wrote:

> The "HOF way" of doing this would be to have something like
> 
> let dostuff (a, b, c) =
>    do_this (a, b, c);
>    do_that (a, b, c);
>    do_something_else (a, b, c);
> 
> And then do:
> 
> List.iter dostuff (do_combinations 3 [1; 2; 3; 4])

Another "HOF way" is to define:

val do_combinations: int -> 'a list -> ('a list -> unit) -> unit

See (message id): ·················@newsmaster.cc.columbia.edu

This should probably be preferred if you don't actually need the list of 
all combinations.

Cheers
Oleg
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <lkfzwhbrzt.fsf@pc022.bln.elmeg.de>
Oleg <············@myrealbox.com> writes:

> Nils Goesche wrote:
> 
> > The point is that you can write
> > 
> >   (do-combinations (a b c) some-list
> >      <BODY>)
> > 
> > and then the code in BODY is called repeatedly with A, B and C bound
> > to the values of a ``combination''.  To do that with higher order
> > functions, you'd have to write something like
> > 
> >   (do-combinations (lambda (a b c)
> >                       <BODY>)
> >                    some-list)
> 
> More precisely, in O'Caml you would do
> 
> List.iter print_int_list (do_combinations 3 [1; 2; 3; 4])
> 
> What's wrong with that? 
> 
> Now let's see that "do_combinations" definition! And I'll try to show how 
> much easier it is to define it in O'Caml (if that is the case).

You don't get it.  The macro DO-COMBINATIONS does /not/ cons up a list
of lists.  It repeatedly executes some lines of code with A, B and C
bound to some values that depend on SOME-LIST.  Now read what I wrote
again.

Regards,
-- 
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <allg3g$mob$1@newsmaster.cc.columbia.edu>
Nils Goesche wrote:

> Oleg <············@myrealbox.com> writes:
> 
>> Nils Goesche wrote:
>> 
>> > The point is that you can write
>> > 
>> >   (do-combinations (a b c) some-list
>> >      <BODY>)
>> > 
>> > and then the code in BODY is called repeatedly with A, B and C bound
>> > to the values of a ``combination''.  To do that with higher order
>> > functions, you'd have to write something like
>> > 
>> >   (do-combinations (lambda (a b c)
>> >                       <BODY>)
>> >                    some-list)
>> 
>> More precisely, in O'Caml you would do
>> 
>> List.iter print_int_list (do_combinations 3 [1; 2; 3; 4])
>> 
>> What's wrong with that?
>> 
>> Now let's see that "do_combinations" definition! And I'll try to show how
>> much easier it is to define it in O'Caml (if that is the case).
> 
> You don't get it.  The macro DO-COMBINATIONS does /not/ cons up a list
> of lists.  It repeatedly executes some lines of code with A, B and C
> bound to some values that depend on SOME-LIST.  Now read what I wrote
> again.

Perhaps I'm not expressing myself very clearly. do_combinations is not 
equivalent to DO-COMBINATIONS. I should have named it make_combinations 
(and we'll call it that henceforth to avoid confusion)

You can define do_combinations _function_ in O'Caml that is similar to 
DO-COMBINATIONS macro in Lisp:

val do_combinations: int -> 'a list -> ('a list -> unit) -> unit

and use it like this:

do_combinations 3 [1; 2; 3; 4] print_int_list

which would print
1 2 3
1 2 4 
1 3 4 
2 3 4

with appropriately defined print_int_list [1]. No need for macros.

Oleg

[1] let print_int_list lst = List.iter print_int lst; print_newline()
From: Erik Naggum
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3240674236491004@naggum.no>
* Oleg <············@myrealbox.com>
| Perhaps I'm not expressing myself very clearly.

  You are expressing yourself very clearly.  You make it exceptionally clear
  that you understand exactly zilch of what other people are telling you.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <allgrc$mob$3@newsmaster.cc.columbia.edu>
Erik Naggum wrote:

> * Oleg <············@myrealbox.com>
> | Perhaps I'm not expressing myself very clearly.
> 
>   You are expressing yourself very clearly.  You make it exceptionally
>   clear that you understand exactly zilch of what other people are telling
>   you.

Erik, do me (and everyone) a favor, add me to your killfile. I don't like 
you for personal reasons, and let's leave it at that.

Oleg
From: Erik Naggum
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3240674949895400@naggum.no>
* Oleg <············@myrealbox.com>
| Erik, do me (and everyone) a favor, add me to your killfile. I don't like 
| you for personal reasons, and let's leave it at that.

  Your irrationality and your hatred have both been noted.  Thank you for your
  candor.  Few people so readily admit to having such deep personal problems.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Tim Bradshaw
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <ey365xd1vp2.fsf@cley.com>
* oleg inconnu wrote:

> You can define do_combinations _function_ in O'Caml that is similar to 
> DO-COMBINATIONS macro in Lisp:

> val do_combinations: int -> 'a list -> ('a list -> unit) -> unit

This is a fairly common implementation technique for a class of macros
in Lisp, too.  For a typical WITH-x macro you define a function:

    (call-with-x function arg arg ...)

which arranges to set up whatever the x is (maybe an open file) and
then to do the desetup at the end.

Then the macro definition is very simple:

    (defmacro with-x ((arg ...) &body code)
      `(let ((.fn. #'(lambda (arg ...) ,@code)))
         (declare (dynamic-extent .fn.))
         (funcall .fn. ...)))

While Lisp people clearly think that the (with-x (...) ...) form is
much easier to read, it's probably arguable that the CALL-WITH-x form
is OK, too.  Certainly if Lisp had a less-verbose way of doing
anonymous functions it would probably be even more common - say
(call-with-x [(y) ...] ...) or something (where [(x) ...] means
(lambda (x) ...)).

DO-COMBINATIONS is a classic WITH-x macro in this sense.
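
A minimal concrete instance of the pattern (the names CALL-WITH-OPEN-FILE
and MY-WITH-OPEN-FILE are illustrative; the standard WITH-OPEN-FILE already
exists, of course):

(defun call-with-open-file (fn filename &rest open-args)
  ;; Do the OPEN, hand the stream to FN, and guarantee the CLOSE.
  (let ((stream (apply #'open filename open-args)))
    (unwind-protect (funcall fn stream)
      (close stream))))

(defmacro my-with-open-file ((var filename &rest open-args) &body body)
  ;; The macro only supplies the LAMBDA; all the work is in the function.
  `(call-with-open-file (lambda (,var) ,@body) ,filename ,@open-args))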

But what all these WITH-x macros have in common is that they don't
actually do any (or much) syntactic rearrangement.  Macros which *do*
do such rearrangement can't be written this way - you have to do
things to the body of the macro to produce code to be executed.  It's
rather hard to give small examples of these macros, because, by
definition, they do more work.

I guess one small example would be one of the many HTML-generating
macros:

  (with-html-output (s)
    (:html
      (:head (:title "foo"))
      (:body
        (:h1 "foo")
        ...)))

Even though this is a WITH-x macro, the body of this macro isn't Lisp
code at all, it's stylized HTML with possibly interleaved Lisp code.
The macro is in fact a little compiler which takes this `code' and
compiles it into Lisp.
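
A stripped-down sketch of how such a macro can compile its body at
macro-expansion time (deliberately ignoring attributes, escaping, and the
other things a real HTML macro handles; all names here are illustrative):

;; In a file, HTML-FORM would need to be available at compile time.
(defun html-form (form stream)
  ;; Turn one piece of the stylized HTML into Lisp code.
  (if (and (consp form) (keywordp (first form)))
      (let ((tag (string-downcase (first form))))
        `(progn (format ,stream "<~A>" ,tag)
                ,@(mapcar (lambda (f) (html-form f stream)) (rest form))
                (format ,stream "</~A>" ,tag)))
      `(princ ,form ,stream)))

(defmacro with-html-output ((stream) &body body)
  `(progn ,@(mapcar (lambda (f) (html-form f stream)) body)))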

I kind of suspect that I'm preaching to a congregation which is
divided here: the Lisp people are already converted and are getting
bored, and the O'Caml people will never be converted because `HOFs are
all you need' and they will never understand why macros are useful.

--tim
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <lk1y81bp15.fsf@pc022.bln.elmeg.de>
Tim Bradshaw <···@cley.com> writes:

> I kind of suspect that I'm preaching to a congregation which is
> divided here: the Lisp people are already converted and are getting
> bored, and the O'Caml people will never be converted because `HOFs are
> all you need' and they will never understand why macros are useful.

Some of them do understand why macros are useful.  That's why they
invented CamlP4 as a substitute.  When I still used OCaml, I already
knew some Lisp and felt the need for macros when I became more fluent
in OCaml.  So I learned to use CamlP4.  But using it was so awkward
that I finally thought ``WTF am I doing here??'' and returned to Lisp.

(The type checker was another reason.  Ever tried to define a
Y-combinator in OCaml or SML?  Or missed PRINT?)
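
For reference, the untyped version is unproblematic in Lisp; a minimal
sketch of an applicative-order Y (the function name is only illustrative):

(defun y (f)
  ;; Applicative-order fixed-point combinator: no type checker to satisfy.
  ((lambda (x) (funcall f (lambda (&rest args) (apply (funcall x x) args))))
   (lambda (x) (funcall f (lambda (&rest args) (apply (funcall x x) args))))))

;; (funcall (y (lambda (fac)
;;               (lambda (n) (if (zerop n) 1 (* n (funcall fac (1- n)))))))
;;          5)
;; => 120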

Regards,
-- 
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alll09$q9f$1@newsmaster.cc.columbia.edu>
Nils Goesche wrote:

> Tim Bradshaw <···@cley.com> writes:
> 
>> I kind of suspect that I'm preaching to a congregation which is
>> divided here: the Lisp people are already converted and are getting
>> bored, and the O'Caml people will never be converted because `HOFs are
>> all you need' and they will never understand why macros are useful.
> 
> Some of them do understand why macros are useful.  That's why they
> invented CamlP4 for a substitute.  When I still used OCaml, I already
> knew some Lisp and felt the need for macros when I became more fluent
> in OCaml.  So I learned to use CamlP4.  But using it was so awkward
> that I finally thought ``WTF am I doing here??'' and returned to Lisp.

I'm positive that a greater percentage of serious O'Caml users know Lisp 
well than vice versa. The guys who created O'Caml, like Xavier Leroy et al., 
certainly were Lisp experts. So it's not the Lisp people who are 
"converted".
 
As to Camlp4, I don't need it and I don't use it. You make it sound like 
it's necessary.

> (The type checker was another reason.  

The type checker is there for a reason: one is execution speed, another 
reason is reliability. While a type checker can never guarantee that your 
program will do what you wanted, it removes a *great* deal of bugs: in C++, 
I would frequently have this bug when I divide one int by another and treat 
the result as a float. That's a very nasty type of bug, especially in any 
kind of scientific/numeric application. Lisp is such an extreme case that 
I'm surprised people are surprised that Lisp isn't used at JPL [1]. It 
wouldn't even prevent you from dividing an int by a string in some branch 
of code! 

Oleg

[1] 
http://groups.google.com/groups?selm=gat-1902021257120001%40eglaptop.jpl.nasa.gov&output=gplain
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <lkwupta79e.fsf@pc022.bln.elmeg.de>
Oleg <············@myrealbox.com> writes:

> Nils Goesche wrote:
> 
> > Tim Bradshaw <···@cley.com> writes:
> > 
> >> I kind of suspect that I'm preaching to a congregation which is
> >> divided here: the Lisp people are already converted and are getting
> >> bored, and the O'Caml people will never be converted because `HOFs are
> >> all you need' and they will never understand why macros are useful.
> > 
> > Some of them do understand why macros are useful.  That's why they
> > invented CamlP4 for a substitute.  When I still used OCaml, I already
> > knew some Lisp and felt the need for macros when I became more fluent
> > in OCaml.  So I learned to use CamlP4.  But using it was so awkward
> > that I finally thought ``WTF am I doing here??'' and returned to Lisp.
> 
> I'm positive that a greater percentage of serious O'Caml users know Lisp 
> well than vice versa. Guys who created O'Caml, like Xavier Leroy et. al. 
> certainly were Lisp experts. So it's not the Lisp people who are 
> "converted".

I don't see the point here.  Yes, I am rather sure that only a Lisp
expert could have come up with the idea to write CamlP4.

> As to Camlp4, I don't need it and I don't use it. You make it sound
> like it's necessary.

Well, what /is/ necessary?  I /felt/ that it's necessary because I am
used to having the power macros give me.  Obviously some OCaml man
felt so, too.  In comp.lang.c and comp.programming you can find people
who think that all the stuff in C++ or higher level languages is
unnecessary.  And the C++ freaks all think that closures are
unnecessary, as well as garbage collection or multi-methods.

To see the `necessity' of such things, you have to work with them
until you're used enough to them to see how they can be useful to
you.  You obviously haven't worked much with macros yet, so you don't
see the point in using them.  It takes a while.

> > (The type checker was another reason.  
> 
> The type checker is there for a reason:

Believe me, you don't have to tell me why the Hindley-Milner types
like static typing so much.  I know that.  I happen to disagree, but I
don't know why I should have to explain that again in comp.lang.lisp.

> one is execution speed, another reason is reliability. While a type
> checker can never guarantee that your program will do what you
> wanted, it removes a *great* deal of bugs: in C++, I would
> frequently have this bug when I divide one int by another and treat
> the result as a float. That's a very nasty type of bug, especially
> in any kind of scientific/numeric application. Lisp is such an
> extreme case that I'm surprised people are surprised that Lisp isn't
> used at JPL [1]. It wouldn't even prevent you from dividing an int
> by a string in some branch of code!

CL-USER 22 > (/ 42 "2")

Error: In / of (42 "2") arguments should be of type NUMBER.
  1 (continue) Return a value to use.
  2 Supply a new second argument.
  3 (abort) Return to level 0.
  4 Return to top loop level 0.

Type :b for backtrace, :c <option number> to proceed,  or :? for other options

CL-USER 23 : 1 > 

It prevented me all right, I'd say.  Lisp is not Perl.

Now, what happens if I divide two ints?

CL-USER 24 > (/ 5 3)
5/3

CL-USER 25 > (* 3 (/ 5 3))
5

Looks perfectly correct to me.  Let's define the factorial function in
the usual stupid way:

CL-USER 26 > (defun fac (n)
               (if (zerop n)
                   1
                 (* n (fac (1- n)))))
FAC

CL-USER 27 > (mapcar #'fac '(0 1 2 3 4))
(1 1 2 6 24)

So far, so good.  Now lets feed it some bigger argument:

CL-USER 28 > (fac 50)
30414093201713378043612608166064768844377641568960512000000000000

Looks good.  But wait...  do you know what happens if you do the same
in OCaml?

# let fac n = if n = 0 then 1 else n * (n - 1);;
val fac : int -> int = <fun>
# fac 50;;
- : int = 2450

Ah.  Interesting.  And WRONG!

Seriously, I doubt that there is any general-purpose programming
language that is better suited for doing mathematics than Common
Lisp.  And I haven't even started with complex numbers yet...

Regards,
-- 
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alm0k7$5o9$1@newsmaster.cc.columbia.edu>
Nils Goesche wrote:

> Oleg wrote:
>> The type checker is there for a reason:
> 
> Believe me, you don't have to tell me why the Hindley-Milner types
> like static typing so much.  I know that.  I happen to disagree, but I
> don't know why I should have to explain that again in comp.lang.lisp.
> 
>> one is execution speed, another reason is reliability. While a type
>> checker can never guarantee that your program will do what you
>> wanted, it removes a great deal of bugs: in C++, I would
>> frequently have this bug when I divide one int by another and treat
>> the result as a float. That's a very nasty type of bug, especially
>> in any kind of scientific/numeric application. Lisp is such an
>> extreme case that I'm surprised people are surprised that Lisp isn't
>> used at JPL [1]. It wouldn't even prevent you from dividing an int
>> by a string in some branch of code!
> 
> CL-USER 22 > (/ 42 "2")
> 
> Error: In / of (42 "2") arguments should be of type NUMBER.
> 1 (continue) Return a value to use.
> 2 Supply a new second argument.
> 3 (abort) Return to level 0.
> 4 Return to top loop level 0.
> 
> Type :b for backtrace, :c <option number> to proceed, or :? for other
> options
> 
> 
> CL-USER 23 : 1 >
> 
> It prevented me all right, I'd say.  Lisp is not Perl.

Perhaps a more "advanced" example was due?

(defparameter p 1)
(defun f () (if (> p 0) (/ 4 p) (/ 3 "4")))

When your hypothetical Mars lander based on a Lisp Machine finds alien life 
(or other conditions not used in debugging) and is all excited to write 
home to mom, it may die while composing the message...

Oleg
From: Thomas F. Burdick
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <xcvhegxgzyf.fsf@conquest.OCF.Berkeley.EDU>
Oleg <············@myrealbox.com> writes:

> Perhaps a more "advanced" example was due?
> 
> (defparameter p 1)
> (defun f () (if (> p 0) (/ 4 p) (/ 3 "4")))
> 
> When your hypothetical Mars lander based on a Lisp Machine finds alien life 
> (or other conditions not used in debugging) and is all excited to write 
> home to mom, it may die while composing the message...

You do realize that type inferencing is *allowed* in Lisp, right?

home:tmp/foo.lisp:
  (defparameter *p* 1)
  (defun f ()
    (if (> *p* 0)
        (/ 4 *p*)
        (/ 3 "4")))

At the CMUCL toplevel:
  * (compile-file "home:tmp/foo.lisp")
  Converted F.
  Compiling DEFUN F: 
  
  File: /home/t/tf/tfb/tmp/foo.lisp
  
  In: DEFUN F
    (/ 3 "4")
  Warning: Lisp error during constant folding:
  Argument Y is not a NUMBER: "4".
  Warning: This is not a (VALUES &OPTIONAL NUMBER &REST T):
    "4"
  
  Byte Compiling Top-Level Form: 
  
  Compilation unit finished.
    2 warnings
  
  
  #p"/home/t/tf/tfb/tmp/foo.sparcf"
  T
  T
  * 


-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alm54g$939$1@newsmaster.cc.columbia.edu>
Thomas F. Burdick wrote:

> 
> You do realize that type inferencing is allowed in Lisp, right?
> 

Are you saying that type inference in Lisp can prevent run-time type errors 
or only help by giving warnings in _some_ cases? Maybe I'm mistaken, but I 
don't see how the former can be the case. I will try to come up with 
counter-examples if you confirm that that's in fact what you really meant.

Cheers,
Oleg
From: Christopher Browne
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <almhp2$1r5j58$5@ID-125932.news.dfncis.de>
In the last exciting episode, Oleg <············@myrealbox.com> wrote::
> Thomas F. Burdick wrote:
>> You do realize that type inferencing is allowed in Lisp, right?
>> 
>
> Are you saying that type inferece in Lisp can prevent run-time type errors 
> or only help by giving warnings in _some_ cases? Maybe I'm mistaken, but I 
> don't see how the former can be the case. I will try to come up with 
> counter-examples if you confirm that that's in fact what you really meant.

Maybe you should look at the Python compiler.  

It does compile time type inferencing.

It does that now.  It has done that for some years now.

The notion that you could come up with "counterexamples" to prove that
the Python compiler isn't doing what it has been doing for _years and
years_ is just complete silliness.

See the EncyCMUCLopedia for more details on what the Python compiler
_does_ do.
-- 
(reverse (concatenate 'string ····················@" "454aa"))
http://www3.sympatico.ca/cbbrowne/spiritual.html
Rules of  the Evil  Overlord #6.  "I will not  gloat over  my enemies'
predicament before killing them." <http://www.eviloverlord.com/>
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alml5h$jrh$1@newsmaster.cc.columbia.edu>
Christopher Browne wrote:

> In the last exciting episode, Oleg <············@myrealbox.com> wrote::
>> Thomas F. Burdick wrote:
>>> You do realize that type inferencing is allowed in Lisp, right?
>>> 
>>
>> Are you saying that type inferece in Lisp can prevent run-time type
>> errors or only help by giving warnings in _some_ cases? Maybe I'm
>> mistaken, but I don't see how the former can be the case. I will try to
>> come up with counter-examples if you confirm that that's in fact what you
>> really meant.
> 
> Maybe you should look at the Python compiler.
> 
> It does compile time type inferencing.
> 
> It does that now.  It has done that for some years now.
> 
> The notion that you could come up with "counterexamples" to prove that
> the Python compiler isn't doing what it has been doing for _years and
> years_ is just complete silliness.

Perhaps you weren't reading carefully. I offered to give counterexamples to 
the claim that "[compile-time] type inference in Lisp (CMUCL) can prevent 
run-time type errors [completely]" [1] if such a claim were made.

Oleg
[1] it does in O'Caml :-)
From: Marco Antoniotti
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <y6cofb4a6zb.fsf@octagon.mrl.nyu.edu>
Oleg <············@myrealbox.com> writes:

> Christopher Browne wrote:
> 
> > In the last exciting episode, Oleg <············@myrealbox.com> wrote::
> >> Thomas F. Burdick wrote:
> >>> You do realize that type inferencing is allowed in Lisp, right?
> >>> 
> >>
> >> Are you saying that type inferece in Lisp can prevent run-time type
> >> errors or only help by giving warnings in _some_ cases? Maybe I'm
> >> mistaken, but I don't see how the former can be the case. I will try to
> >> come up with counter-examples if you confirm that that's in fact what you
> >> really meant.
> > 
> > Maybe you should look at the Python compiler.
> > 
> > It does compile time type inferencing.
> > 
> > It does that now.  It has done that for some years now.
> > 
> > The notion that you could come up with "counterexamples" to prove that
> > the Python compiler isn't doing what it has been doing for _years and
> > years_ is just complete silliness.
> 
> Perhaps you weren't reading carefully. I offered to give counterexamples to 
> the claim that "[compile-time] type inference in Lisp (CMUCL) can prevent 
> run-time type errors [completely]" [1] if such a claim were made.

==============================================================================

datatype IlTipo = Zut of int | Gnao of int list | Mannaggia of int -> int;

fun I_generate_a_runtime_error (Zut(x)) = x + 1
    |  I_generate_a_runtime_error (Gnao(_)) = 42;

fun zot x = I_generate_a_runtime_error x

zot (Mannaggia (fn x => x + 1));

==============================================================================

The above will generate a runtime error in SML (a 'nonexhaustive match failure'
exception).

Given that the amount of information you get out of the CMUCL Python
compiler is more or less the same as what you get out of the SML one, I'd
say you have to prove that SML (or OCaml - I admit I have not tested
the above on it) will always prevent you from getting runtime
errors.

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group        tel. +1 - 212 - 998 3488
715 Broadway 10th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA                 http://bioinformatics.cat.nyu.edu
                    "Hello New York! We'll do what we can!"
                           Bill Murray in `Ghostbusters'.
From: Hannah Schroeter
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alnp76$9io$1@c3po.schlund.de>
Hello!

Marco Antoniotti  <·······@cs.nyu.edu> wrote:

>[...]
>
>==============================================================================
>
>datatype IlTipo = Zut of int | Gnao of int list | Mannaggia of int -> int;
>
>fun I_generate_a_runtime_error (Zut(x)) = x + 1
>    |  I_generate_a_runtime_error (Gnao(_)) = 42;
>
>fun zot x = I_generate_a_runtime_error x
>
>zot (Mannaggia (fn x => x + 1));
>
>==============================================================================
>
>The above will generate a runtime error in SML (a 'nonexahustive match failure'
>exception).

>Given that the amount of information you get out of the CMUCL Python
>compiler is more or less the same that you get out of the SML one, I'd
>say you have to prove that SML (or OCaml - I admit I have not tested
>the above on it) will always prevent you from giving you runtime
>errors.

*ML compilers (can) generate compile-time warnings on functions like
I_generate_a_runtime_error.

Kind regards,

Hannah.
From: Marco Antoniotti
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <y6c7khs9wj4.fsf@octagon.mrl.nyu.edu>
······@schlund.de (Hannah Schroeter) writes:

> Hello!
> 
> Marco Antoniotti  <·······@cs.nyu.edu> wrote:
> 
> >[...]
> >
> >==============================================================================
> >
> >datatype IlTipo = Zut of int | Gnao of int list | Mannaggia of int -> int;
> >
> >fun I_generate_a_runtime_error (Zut(x)) = x + 1
> >    |  I_generate_a_runtime_error (Gnao(_)) = 42;
> >
> >fun zot x = I_generate_a_runtime_error x
> >
> >zot (Mannaggia (fn x => x + 1));
> >
> >==============================================================================
> >
> >The above will generate a runtime error in SML (a 'nonexahustive match failure'
> >exception).
> 
> >Given that the amount of information you get out of the CMUCL Python
> >compiler is more or less the same that you get out of the SML one, I'd
> >say you have to prove that SML (or OCaml - I admit I have not tested
> >the above on it) will always prevent you from giving you runtime
> >errors.
> 
> *ML compilers (can) generate compile-time warnings on functions like
> I_generate_a_runtime_error.

Yes.  I know.  And the SML compiler I used does that: it generates a
*warning*.  However, that is similar to the information you get from
the CMUCL Python compiler when you pass around types that are not
correct.

Again, as I said previously the best language would be CL with solid
type inferencing: CMUCL/Python comes pretty close.  By switching to
*ML languages I feel I give up way too much to justify the change.

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group        tel. +1 - 212 - 998 3488
715 Broadway 10th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA                 http://bioinformatics.cat.nyu.edu
                    "Hello New York! We'll do what we can!"
                           Bill Murray in `Ghostbusters'.
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alpnn4$k5l$1@newsmaster.cc.columbia.edu>
Marco Antoniotti wrote:

> 
> Yes.  I know.  And the SML compiler I used does that: it generates a
> warning.  However, that is similar to the information you get from
> the CMUCL Python compiler when you pass around types that are not
> correct.

Using a compiler that maybe someday will check types is like driving a car 
that maybe someday will have wheels.

> Again, as I said previously the best language would be CL with solid
> type inferencing: CMUCL/Python comes pretty close.

It doesn't even come close. The best language would be O'Caml with maybe a 
better default syntax.

> By switching to
> *ML languages I feel I give up way too much to justify the change.
> 
> Cheers
From: Raffael Cavallaro
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <aeb7ff58.0209120847.21569b25@posting.google.com>
Oleg <············@myrealbox.com> wrote in message news:<············@newsmaster.cc.columbia.edu>...
> Marco Antoniotti wrote:
> 
> > 
> > Yes.  I know.  And the SML compiler I used does that: it generates a
> > warning.  However, that is similar to the information you get from
> > the CMUCL Python compiler when you pass around types that are not
> > correct.
> 
> Using a compiler that maybe someday will check types is like driving a car 
> that maybe someday will have wheels.

If you want type checking, use O'Caml (or whatever language you
prefer).

Most people who are wild for type checking don't have much experience
of dynamically typed languages, and they vastly overrate the
importance of compiler type checking.

Ask yourself this: If compiler type checking is so necessary (as
necessary, say, as wheels to a car) how have Smalltalk developers been
able to deploy millions of lines of code in mission critical
enterprise settings for decades, when Smalltalk has no compiler type
checking?

Hint: It involves actually *testing* your code.
From: Marco Antoniotti
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <y6cr8fz8eqx.fsf@octagon.mrl.nyu.edu>
Oleg <············@myrealbox.com> writes:

> Marco Antoniotti wrote:
> 
> > 
> > Yes.  I know.  And the SML compiler I used does that: it generates a
> > warning.  However, that is similar to the information you get from
> > the CMUCL Python compiler when you pass around types that are not
> > correct.
> 
> Using a compiler that maybe someday will check types is like driving a car 
> that maybe someday will have wheels.

Python will check types.  *And* it will give you useful information
about where your program will fail because of type errors.  If you
insist on running the code you will suffer the consequences.

The same happens in SML (and I suppose, but may be wrong, in O'Caml):
you can write a program that may generate run time errors if you
ignore the compiler's messages.

> > Again, as I said previously the best language would be CL with solid
> > type inferencing: CMUCL/Python comes pretty close.
> 
> It doesn't even come close.

        let foo (x) = 3 + 3.14 + x;

I do not want the compiler to annoy me with such trivialities when
there is an established practice for this sort of things in decades of
Computer Science.

> The best language would be O'Caml with maybe a 
> better default syntax.

On this we may somewhat agree.  That is the main problem of the *ML
crowd.  They left S-expressions and the equivalence of programs and
data behind.  By doing that IMHO they missed too much.

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group        tel. +1 - 212 - 998 3488
715 Broadway 10th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA                 http://bioinformatics.cat.nyu.edu
                    "Hello New York! We'll do what we can!"
                           Bill Murray in `Ghostbusters'.
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alqnlj$ecb$1@newsmaster.cc.columbia.edu>
Marco Antoniotti wrote:

> 
> let foo (x) = 3 + 3.14 + x;

what if you wanted

let foo x y = x / y

How would you expect your "ideal compiler" for an "ideal language" to infer 
type for this thing if operator overloading was allowed?

When I started using O'Caml, I was annoyed by this at first too, but later 
on, I realized that the similarity between "/" and "/." is purely ad-hoc 
and that these operations are fundamentally different. I'd rather be 
reminded to clarify my code than have the compiler assume things and do 
implicit conversions.

Oleg

> 
> I do not want the compiler to annoy me with such trivialities when
> there is an established practice for this sort of things in decades of
> Computer Science.
From: Marco Antoniotti
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <y6cofb27u5j.fsf@octagon.mrl.nyu.edu>
Oleg <············@myrealbox.com> writes:

> Marco Antoniotti wrote:
> 
> > 
> > let foo (x) = 3 + 3.14 + x;
> 
> what if you wanted
> 
> let foo x y = x / y
> 
> How would you expect your "ideal compiler" for an "ideal language" to infer 
> type for this thing if operator overloading was allowed?

The behavior of Common Lisp is good enough for me.  They are both
numbers and the generic addition routine should be applied.  If I need
efficiency I add declarations.  If I call foo with a string
CMUCL/Python will type-infer that that is not kosher and will tell me.
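
A minimal sketch of that style (illustrative only; the exact printed floats
will vary by implementation):

(defun foo (x)
  (+ 3 3.14 x))                    ; generic +: any kind of number will do

(foo 1)                            ; => a single-float close to 7.14
(foo 1/2)                          ; => a single-float close to 6.64

(defun fast-foo (x)
  (declare (type double-float x))  ; a declaration added only where speed matters
  (+ 3d0 3.14d0 x))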

> When I started using O'Caml, I was annoyed by this at first too, but later 
> on, I realized that the similarity between "/" and and "/." is purely 
> ad-hoc and that these operations are fundamentally different. I'd rather be 
> reminded to clarify my code than have the compiler assume things and do 
> implicit conversions.

That is your right.  I have the right to be wanting the opposite.

Cheers


-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group        tel. +1 - 212 - 998 3488
715 Broadway 10th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA                 http://bioinformatics.cat.nyu.edu
                    "Hello New York! We'll do what we can!"
                           Bill Murray in `Ghostbusters'.
From: Wolfhard Buß
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <m37khrurmg.fsf@buss-14250.user.cis.dfn.de>
Oleg <············@myrealbox.com> writes:
> The best language would be O'Caml with maybe a 
> better default syntax.

Marco Antoniotti <·······@cs.nyu.edu> writes:
> On this we may somewhat agree.  That is the main problem of the *ML
> crowd.  They left S-expressions and the equivalence of programs and
> data behind.  By doing that IMHO they missed too much.

Lispers know that Meta Languages get assimilated into
S-expression Lisp.  Resistance was futile - usually.

-- 
"I believe in the horse. The automobile is a passing phenomenon."
                              --  Kaiser Wilhelm II. (1859-1941)
From: Ng Pheng Siong
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alqe5q$i9s$1@mawar.singnet.com.sg>
According to Wolfhard Buß <·····@gmx.net>:
> "I believe in the horse. The automobile is a passing phenomenon."
>                               --  Kaiser Wilhelm II. (1859-1941)

He lived to 1941? Just (mildly) curious: where did he spend the years after
1918?

-- 
Ng Pheng Siong <····@netmemetic.com> * http://www.netmemetic.com
From: Wolfhard Buß
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <m33csfuq66.fsf@buss-14250.user.cis.dfn.de>
····@vista.netmemetic.com (Ng Pheng Siong) writes:
> He lived to 1941? Just (mildly) curious: where did he spend the years after
> 1918?

In Doorn, The Netherlands.

-- 
"I believe in the horse. The automobile is a passing phenomenon."
                              --  Kaiser Wilhelm II. (1859-1941)
From: Tim Bradshaw
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <ey31y7zqy27.fsf@cley.com>
* oleg inconnu wrote:

> It doesn't even come close. The best language would be O'Caml with
> maybe a better default syntax.

Might I suggest, then, that you would be spending your time more
usefully posting to an O'Caml-related newsgroup rather than annoying
people who prefer Lisp?

--tim
From: Bruce Hoult
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <bruce-45328A.22311811092002@copper.ipg.tsnz.net>
In article <············@newsmaster.cc.columbia.edu>,
 Oleg <············@myrealbox.com> wrote:

> Perhaps you weren't reading carefully. I offered to give counterexamples to 
> the claim that "[compile-time] type inference in Lisp (CMUCL) can prevent 
> run-time type errors [completely]" [1] if such a claim were made.
> 
> Oleg
> [1] it does in O'Caml :-)

Only because O'Caml (and other HM languages) define certain things to 
*not* be type errors that most people here would in fact regard as being 
type errors.

-- Bruce
From: Wade Humeniuk
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <ScIf9.13983$Qx1.646749@news1.telusplanet.net>
"Oleg" <············@myrealbox.com> wrote in message
·················@newsmaster.cc.columbia.edu...
> Thomas F. Burdick wrote:
>
> >
> > You do realize that type inferencing is allowed in Lisp, right?
> >
>
> Are you saying that type inferece in Lisp can prevent run-time type errors
> or only help by giving warnings in _some_ cases? Maybe I'm mistaken, but I
> don't see how the former can be the case. I will try to come up with
> counter-examples if you confirm that that's in fact what you really meant.


Nothing can prevent run-time errors (even run-time type errors, take the case of
the cosmic ray altering your RAM and changing the type or value of some data --
it's outside of the compiler's ability to catch).  At least Common Lisp can handle
run-time errors (and catch that cosmic ray error) and potentially recover from
the error (not by crashing or producing unpredictable results).  Bugs and
programmer errors are inevitable, but with testing (maybe you do not believe in
testing?) and a language which is error handling fortified this is less of a
problem.  You can still make errors in Ocaml, right????  With Lisp, if an error
is caught, you can also "correct" the problem in the running image without taking
the whole system down.
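
A minimal sketch of that "correct it in the running image" style (illustrative
only; SAFE-DIV is a made-up example function):

(defun safe-div (x y)
  (restart-case (/ x y)
    (use-value (v)
      :report "Supply a value to use instead of the quotient."
      v)))

In a listener, (safe-div 1 0) drops into the debugger with a DIVISION-BY-ZERO
condition; picking the USE-VALUE restart (or calling
(invoke-restart 'use-value 42) from a handler) lets the image keep running
instead of being taken down.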

Wade
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alpgkh$fo9$1@newsmaster.cc.columbia.edu>
Wade Humeniuk wrote:

> Nothing can prevent run-time errors (even run-time type errors, take the
> case of
> the cosmic ray altering your RAM and changing the type or value of some
> data, its outside of the compilers ability to catch), at least Common Lisp
> can handle
> run time errors (and catch that cosmic ray error) 

You are joking right? What if that cosmic ray alters Lisp image in the 
core? Or do you think Lisp doesn't run from RAM?

> and potentially recover from the error (not by crashing or producing
> unpredictable results).  Bugs and programmer errors are inevitable but with
> testing (maybe you do not believe in testing?) and a language which is error
> handling fortified this is less of a problem.  You can still make errors in
> Ocaml, right????  With Lisp if an error is caught you can also "correct" the
> problem in the running image without taking the whole system down.
> 
> Wade
From: Tim Bradshaw
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <ey3bs73ehlj.fsf@cley.com>
* oleg inconnu wrote:

> You are joking right? What if that cosmic ray alters Lisp image in the 
> core? Or do you think Lisp doesn't run from RAM?

There are a whole range of things you can do about such errors (either
in data or program space).  Assuming a system with ECC (so ruling out
most cheap PCs which I understand often don't even have parity -
people who run important programs on these systems deserve what they
get):

    A single bit error (most likely from a cosmic ray) will be
    corrected by the system but will probably be reported somewhere
    and logged for stats purposes (you want to worry if the same
    memory chip or assembly is generating repeated single-bit errors).

    A double bit error (less likely) will be detected.  There are
    several recovery strategies from this.  You can try reading
    several more times to see if it goes away.  If the page is program
    text and is unaltered since it came from disk you can refetch it.
    If you can clean up the error you can then remap the page to some
    other bit of physical memory and do something to indicate that
    that chip has gone bad. Finally you can attempt to signal and
    handle the error.  Of course the system may be too badly damaged
    to handle the error, but it may well not be.  A `safe' approach
    would be to have a very low-level handler for the error which will
    cause the memory image of the system to be written out (with the
    bad bits marked, clearly), for later recovery.  However if the
    system is doing something like controlling a machine it almost
    certainly wants to do more than this - such as cause the machine
    to go into a safe state.

    A more-than-double-bit error will not generally be detected in
    normal commercial systems.  If you want to handle such errors you
    likely need two physically separated mirrored memory systems, and
    this starts to look very expensive.


You may be surprised to know that quite vanilla systems do most or all
of these things.  Typically the OS gets to know about the error and
then will try to handle it, possibly notifying the application.  These
happen often enough that OS and hardware designers spend time
designing systems that handle them elegantly.  I'm actually enormously
reassured that people do this - it gives me (some small) hope that
there are people who actually understand engineering involved with
computers, and thus some hope that things might one day get better.

--tim
From: Wade Humeniuk
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <VG1g9.17529$_J3.747030@news0.telusplanet.net>
"Oleg" <············@myrealbox.com> wrote in message
·················@newsmaster.cc.columbia.edu...
> Wade Humeniuk wrote:
>
> > Nothing can prevent run-time errors (even run-time type errors, take the
> > case of
> > the cosmic ray altering your RAM and changing the type or value of some
> > data, its outside of the compilers ability to catch), at least Common Lisp
> > can handle
> > run time errors (and catch that cosmic ray error)
>
> You are joking right? What if that cosmic ray alters Lisp image in the
> core? Or do you think Lisp doesn't run from RAM?

Yes, no system is safe from catastrophic failure.  But at least Common Lisp
gives you the chance of recovering from even that type of error.  Lispworks
catches segmentation violations, invalid instructions and such.  It tries to
signal a condition and have some code handle it.  Though CL cannot recover from
all possible errors at least it has some run-time error handling that exceeds
that of most other languages.

It is my experience that most software is not written with errors in mind:
small local errors (like what happens when you cannot open a file, or type
errors) and global errors (like what you do when you run out of memory or disk,
message queues fill up, etc.).  These problems are often punted (especially the
global ones) since they are difficult to solve.  The CL condition system seems
to be a good way of implementing this error handling in an orthogonal manner.
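
A minimal sketch of that orthogonal style (illustrative only; the pathname and
the fallback value are made up):

(defun read-config (path)
  ;; The local "cannot open a file" error is dealt with in one place;
  ;; callers never see it.
  (handler-case
      (with-open-file (in path :direction :input)
        (read in))
    (file-error (c)
      (warn "Could not read ~A: ~A -- using defaults." path c)
      '(:use-defaults t))))

(read-config #p"/no/such/file.conf")   ; => (:USE-DEFAULTS T), after a warning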

Wade
From: Erik Naggum
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3240692384671659@naggum.no>
* Thomas F. Burdick
| You do realize that type inferencing is *allowed* in Lisp, right?

  Seriously, do you think you are talking to an intelligent person who is
  interested in listening to anybody when that is the kind of problems he
  comes up with?  The troll detector should go off pretty loudly with such
  moronic comments as you are trying to counter with facts and intelligent
  communication.  Trolls are not after either.  They are emotional beings who
  need emotional answers.  The poor sap from the strongly-typed language camp
  needs /reassurance/ that his incompetence at programming does not hurt him.
  Give him a language without strong typing and he fears he will hurt himself,
  which he will because the incredible complexity of the type system that he
  needed the compiler to figure out for him has left him completely unable to
  think in types on his own, like a kid who always used a calculator will not
  be able to do simple arithmetic in his head.  It is not that it is difficult
  -- it is merely that these people have never even had to do it manually, so
  they have no idea how simple it really is to those who know how to do it.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Thomas F. Burdick
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <xcv4rcwe8od.fsf@hurricane.OCF.Berkeley.EDU>
Erik Naggum <····@naggum.no> writes:

> * Thomas F. Burdick
> | You do realize that type inferencing is *allowed* in Lisp, right?
> 
>   Seriously, do you think you are talking to an intelligent person who is
>   interested in listening to anybody when that is the kind of problems he
>   comes up with?  The troll detector should go off pretty loudly with such
>   moronic comments as you are trying to counter with facts and intelligent
>   communication.

Hmm, I think maybe the sarcasm in the above statement got lost in the
writing.  I was just using it as an excuse to point out a cool feature
of Python, not to Oleg, but to anyone reading who didn't know about it.

>   Trolls are not after either.  They are emotional beings who need
>   emotional answers.  The poor sap from the storngly-typed language
>   camp needs /reassurance/ that his incompetence at programming does
>   not hurt him.  Give him a language without strong typing and he
>   fears he will hurt himself, which he will because the incredible
>   complexity of the type system that he needed the compiler to
>   figure out for him has left him completely unable to think in
>   types on his own, like a kid who always used a calculator will not
>   be able to do simple arithmetic in his head.  It is not that it is
>   difficult -- it is merely that these people have never even had to
>   do it manually, so they have no idea how simple it really is to
>   those who know how to do it.

I know someone who went to a suburban school system, flush with money,
who used calculators in class starting in the 9th grade, and took a
lot of math in school -- his last year of high school, he took
multivariable calculus.  He graduated college with a CS degree.
Recently, I was talking to him, and he was stuck analysing an
algorithm, and asked for help.  I looked at what he was doing, and
within a couple minutes pointed out how he could get past where he was
stuck.  He has better math skills than me (by quite a lot), but
between the low-calculator-use math classes I've taken, and the
tutoring I've done, I can do arithmetic and simple algebra in my head
very well -- which was really all this analysis needed.  [I wonder how
far this parallel works]

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Joe Marshall
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <amBf9.227218$kp.823930@rwcrnsc52.ops.asp.att.net>
"Oleg" <············@myrealbox.com> wrote in message ·················@newsmaster.cc.columbia.edu...
>
> Perhaps a more "advanced" example was due?
>
> (defparameter p 1)
> (defun f () (if (> p 0) (/ 4 p) (/ 3 "4")))
>
> When your hypothetical Mars lander based on a Lisp Machine finds alien life
> (or other conditions not used in debugging) and is all excited to write
> home to mom, it may die while composing the message...

Nonetheless, it did *not* divide a number by string.
From: J.St.
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <87y9a84h1o.fsf@jmmr.no-ip.com>
"Joe Marshall" <·············@attbi.com> writes:

> "Oleg" <············@myrealbox.com> wrote in message ·················@newsmaster.cc.columbia.edu...
> >
> > Perhaps a more "advanced" example was due?
> >
> > (defparameter p 1)
> > (defun f () (if (> p 0) (/ 4 p) (/ 3 "4")))
> >
> > When your hypothetical Mars lander based on a Lisp Machine finds alien life
> > (or other conditions not used in debugging) and is all excited to write
> > home to mom, it may die while composing the message...
> 
> Nonetheless, it did *not* divide a number by string.

As it was already said: It's not Perl. :)

But besides: "Oleg" is not willing to be convinced of anything... or
at least accept other aspects.

Regards,
Julian
From: Bulent Murtezaoglu
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <87it1d4jzc.fsf@acm.org>
>>>>> "NG" == Nils Goesche <······@cartan.de> writes:
[...]
    NG> Well, what /is/ necessary?  I /felt/ that it's necessary
    NG> because I am used to having the power macros give me.
    NG> Obviously some OCaml man felt so, too.  In comp.lang.c and
    NG> comp.programming you can find people who think that all the
    NG> stuff in C++ or higher level languages is unnecessary.  And
    NG> the C++ freaks all think that closures are unnecessary, as
    NG> well as garbage collection or multi-methods. [...]

Yes this is because you are looking down as outlined in 
http://www.paulgraham.com/avg.html and the C/C++ crowd is looking up.

[I'll leave the rest alone]

cheers,

BM
From: Stephen J. Bevan
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <m38z291q5d.fsf@dino.dnsalias.com>
Nils Goesche <······@cartan.de> writes:
> Seriously, I doubt that there is any general purpose programming
> language that is better suited for doing mathematics than Common
> Lisp.  And I havent even started with complex numbers yet...

I don't know if you consider APL and its (ASCII based) descendants J
(http://www.jsoftware.com) and K (http://www.kx.com) to be general
purpose programming languages but they are arguably better suited to
doing mathematics.
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <lkofb5a6vc.fsf@pc022.bln.elmeg.de>
Nils Goesche <······@cartan.de> writes:

> Looks perfectly correct to me.  Let's define the factorial function in
> the usual stupid way:
> 
> CL-USER 26 > (defun fac (n)
>                (if (zerop n)
>                    1
>                  (* n (fac (1- n)))))
> FAC
> 
> CL-USER 27 > (mapcar #'fac '(0 1 2 3 4))
> (1 1 2 6 24)
> 
> So far, so good.  Now lets feed it some bigger argument:
> 
> CL-USER 28 > (fac 50)
> 30414093201713378043612608166064768844377641568960512000000000000
> 
> Looks good.  But wait...  do you know what happens if you do the same
> in OCaml?
> 
> # let fac n = if n = 0 then 1 else n * (n - 1);;
> val fac : int -> int = <fun>
> # fac 50;;
> - : int = 2450
> 
> Ah.  Interesting.  And WRONG!

Darn, I suck ;-)  But again:

let rec fac n = if n = 0 then 1 else n * fac (n - 1);;
val fac : int -> int = <fun>
# fac 50;;
- : int = 0

Still wrong.

Regards,
-- 
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Erik Naggum
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3240690993463545@naggum.no>
* Oleg <············@myrealbox.com>
| The type checker is there for a reason: one is execution speed, another
| reason is reliability.

  This is commonly believed, but in fact, a type checker introduces bugs.  It
  shifts the cost of a certain class of mistakes so much that the kinds of
  mistakes people make in the presence of a type checker destroy their ability
  to think clearly about types.  Experience strongly suggests that people who
  use strongly-typed languages and have compilers who produce informative
  error messages when type constraints are violated, cause their programmers
  to believe several idiotic ideas: (1) Type errors are important.  They are
  not.  You would not make them if the cost of making type errors was higher.
  (2) Satisfying the compiler is no longer only an inconsequential necessary
  condition for a program to be correct, it becomes a sufficient condition in
  the minds of those who make the first mistake.  (3) Errors in programs are
  separated into two kinds of vastly different nature: static errors (which the
  compiler may report) and dynamic errors (which the compiler cannot find).
  This false dichotomy completely warps the minds of programmers in these
  languages: Instead of becoming fundamentally stupid errors that the compiler
  should just go and fix, static errors become /more/ important than dynamic
  errors, leading to serious growth of dynamic errors because programmers tend
  to rely on the compiler for detection and correction of mistakes.

| While a type checker can never guarantee that your program will do what you
| wanted, it removes a *great* deal of bugs: in C++, I would frequently have
| this bug when I divide one int by another and treat the result as a float.

  In a real programming language created by people who actually understand
  numerical types and mathematics, dividing one integer by another does not
  lose information and truncate the value to an integer if the result is not
  in fact an integer.  The rational number that is an exact representation of
  integer division does not exist in languages where types are used to increase
  execution speed -- if you were primarily occupied with execution speed, you
  would not even be able to /invent/ the rational number since it may easily be
  a drag on performance.
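
  For concreteness, this is ordinary Common Lisp behaviour (nothing here is
  hypothetical):

      (/ 1 3)          ; => 1/3    exact, no information lost
      (* 3 (/ 1 3))    ; => 1      the division round-trips exactly
      (/ 10 4)         ; => 5/2
      (float 5/2)      ; => 2.5    conversion happens only when asked for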

| That's a very nasty type of bug, especially in any kind of scientific/numeric
| application.

  This kind of bug is a consequence of strong typing and thus must be caught
  by the strongly-typed system that introduced it.  It is, however, fascinating
  that people who invent bug opportunities because of some elegance, according
  to some religion, of doing something the wrong way, do not understand that
  when they have chosen to /overcome/ the bug opportunity they have invented,
  such as with type checkers, other people may have removed the bug opportunity
  altogether and thus not be any worse off; they may indeed be /better/ off when
  the bug opportunity has been removed.  Of course, provide a user of the bug-
  opportunity-ridden languages with a language where this kind of bug does not
  occur and he will feel frightened and lost, filled with emotions that prevent
  him from thinking and acting rationally.  The simple concept of numeric type
  stacks (or towers or hierarchies) does not readily register with people used
  to over-specify their types, but it is also typical that they feel fear when
  bereft of their security blanket a.k.a. type checker and thus shut down their
  thinking.  It is in fact a grave design error for integer division to return
  integers when the results are not always integers, but this moronic flaw in
  the language does not register with people who have narrowed their options
  to that which their type system can express -- Sapir-Whorf all over again --
  and hence they cannot begin to understand a fundamentally different system.

  People also make the kinds of mistakes they can afford to make.  Because they
  know that the type checker will kick in and catch their type bugs, they allow
  themselves to create type bugs to begin with by not expending the necessary
  mental capacity on this kind of inhumanly intricate detail.  While probably
  not conscious except in the highly intelligent and introspective, when you
  realize that the cost of a caught type mistake is less than the cost of being
  totally anal about types when you write the code, you let the type checker
  figure these things out for you.  The psychology of this phenomenon is not
  only glaringly obvious, it has been researched in many other areas.  People
  who pay attention to detail consider the cost of making mistakes higher than
  the cost of correcting them for personal reasons.  (This a productive use of
  emotions, but one may get carried away.)  Whether verbalized or not, people
  /are/ consciously aware of the cost of making mistakes and breaking rules --
  it is called taking risks.  Risks are poorly understood because people are so
  frequently told not to take them.  

| Lisp is such an extreme case that I'm surprised people are surprised that
| Lisp isn't used at JPL [1].

  This is mere idiotic flame bait.  This doofus is clearly not posting here to
  understand anything he wishes to learn but to parade his lack of mental
  resources and stagnated thinking.  Judgmental types tend to favor strong
  typing and only one way of doing things.

| It wouldn't even prevent you from dividing an int by a string in some branch
| of code!

  This is more idiotic flame bait.  Experienced programmers with many years of
  actual experience know that timing of the reported error is inconsequential
  and there is a significant loss of real value in pretending that static bugs
  are more important than dynamic bugs because some fancy type theory can find
  the former.  In fact, every uncaught dynamic error proves that focusing too
  much on the static errors is a grave mistake.

[ Oleg, please do not respond.  You are not expected to overcome your personal
  problems and behave rationally in the face of fundamental criticism of your
  belief system.  Save us the irrationality and the hatred of your feelings. ]

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Oleg
Subject: OT: sad rude old man (was: macros vs HOFs)
Date: 
Message-ID: <alm3f9$7tv$1@newsmaster.cc.columbia.edu>
Erik Naggum wrote:

> * Oleg <············@myrealbox.com>
> | The type checker is there for a reason: one is execution speed, another
> | reason is reliability.

[..drivel..]

> [ Oleg, please do not respond.  You are not expected to overcome your
> [ personal
>   problems and behave rationally in the face of fundemental criticism of
>   your
>   belief system.  Save us the irrationality and the hatred of your
>   feelings. ]

After I _politely_ ask Erik to add me to his "dreaded" killfile, he 
responds to my postings *twice* and asks _me_ not to respond. Is this guy 
the ultimate combination of a silly person and a socially inept masochist 
or what?

Erik, I don't hate you. I just don't like you. You aren't important enough 
to be hated. 

You are a sad rude old man who never had a life. I know all about you. And 
never, Erik, and I mean never, send me any personal mail telling me who I 
should boycott [1]. In America, we don't tolerate this kind of crap, 
especially from jerks like you.

Have a nice day,
Oleg

[1] Re: LISP - When you've seen it, what else can impress?
     From: Erik Naggum <····@naggum.no>
     To: Oleg <············@myrealbox.com>
 
     /Please/ do not respond to ilias.

     -- 
     Erik Naggum, Oslo, Norway
From: Erik Naggum
Subject: Re: OT: sad rude old man (was: macros vs HOFs)
Date: 
Message-ID: <3240698556237302@naggum.no>
* Oleg <············@myrealbox.com>
| After I _politely_ ask Erik to add me to his "dreaded" killfile, he responds
| to my postings *twice* and asks _me_ not to respond.

  The article was not for you.  Deal with it, moron.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Thomas F. Burdick
Subject: Re: OT: sad rude old man (was: macros vs HOFs)
Date: 
Message-ID: <xcv7khse980.fsf@hurricane.OCF.Berkeley.EDU>
Oleg <············@myrealbox.com> writes:

> After I _politely_ ask Erik to add me to his "dreaded" killfile, he 
> responds to my postings *twice* and asks _me_ not to respond. Is this guy 
> the ultimate combination of a silly person and a socially inept masochist 
> or what?

That ain't how it works.  If you don't want to converse with someone,
*you* gotta ignore *him*.

> You are a sad rude old man who never had a life. I know all about you. And 
> never, Erik, and I mean never, send me any personal mail telling me who I 
> should boycott [1]. In America, we don't tolerate this kind of crap, 
> especially from jerks like you.

You're right, especially in the last year, there's been a huge
resurgence of fascistic thugs.  It's gonna be a long, hard fight when
the west coast ports go on strike.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Thomas F. Burdick
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <xcvadmoe9ma.fsf@hurricane.OCF.Berkeley.EDU>
Erik Naggum <····@naggum.no> writes:

> * Oleg <············@myrealbox.com>
> | The type checker is there for a reason: one is execution speed, another
> | reason is reliability.
> 
>   This is commonly believed, but in fact, a type checker introduces
>   bugs.  It shifts the cost of a certain class of mistakes so much
>   that the kinds of mistakes people make in the presence of a type
>   checker destroy their ability to think clearly about types.
>   Experience strongly suggests that people who use strongly-typed
>   languages and have compilers who produce informative error
>   messages when type constraints are violated, cause their
>   programmers to believe several idiotic ideas:

This is interesting -- I've noted a similar disposition among users of
statically-typed languages (speaking statistically, of course, as
there are plenty of individuals who don't fall into any given mental
trap).  I have a completely untested hypothesis on this subject, and
I'm too inexperienced with this type of language to even run a
thought-experiment on it: this mental trap is the result of having the
type inferencer complain that your program is not fully typable,
*while it is still in heavy development*.  When it complains that it
can prove you've made an error, that's good, it prevents you from
doing stupid things that make for often hard-to-find bugs.  When it
complains that your still largely unfinished program isn't typable,
that's inappropriate.  But there may be some use (depending on the
nature of the program) in having it complain that your
production-ready program isn't typable.

The experiment I'd like to try here would be to have a Lisp compiler
that would attempt to completely type your program.  If you've told it
that it's compiling code that it should be able to type, it will warn
you if it can't; otherwise it'll keep quiet.  As someone with only a
small amount of experience with language systems that insist on being
able to type your programs, I don't have any stupidity alarms to go
off here; I'm curious if any Lispers who have more experience with
such systems think this could be of value.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <lk4rcwz80c.fsf@pc022.bln.elmeg.de>
···@hurricane.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> The experiment I'd like to try here would be to have a Lisp compiler
> that would attempt to completely type your program.  If you've told it
> that it's compiling code that it should be able to type, it will warn
> you if it can't; otherwise it'll keep quiet.  As someone with only a
> small amount of experience with language systems that insist on being
> able to type your programs, I don't have any stupidity alarms to go
> off here; I'm curious if any Lispers who have more experience with
> such systems think this could be of value.

I think it's not going to work.  A simple example would be

(defun main ()
  (pprint (+ 42 (read))))

Now, to make sure there will be no type-errors, you'd have to add code
that will make sure that it will be a number that is added to 42;
however, how is the compiler supposed to decide whether your checks
are sufficient?  The compiler would have to /understand/ your code.  I
think you'll be quickly running into unsolvable problems like the
halting problem and friends.  (I'm just guessing here, but would be
very surprised if I wasn't guessing right ;-)

The static type checkers of compilers for languages like ML aren't
perfect either.  In fact, they will reject a valid program if they
can't prove it to be type safe.  Those languages are designed in a
special way that makes sure that the class of programs that /can/ be
proven to be type safe by the type inference algorithm is sufficiently
large.

Regards,
-- 
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Thomas F. Burdick
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <xcvfzwgf3uw.fsf@famine.OCF.Berkeley.EDU>
Nils Goesche <······@cartan.de> writes:

> ···@hurricane.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
> 
> > The experiment I'd like to try here would be to have a Lisp compiler
> > that would attempt to completely type your program.  If you've told it
> > that it's compiling code that it should be able to type, it will warn
> > you if it can't; otherwise it'll keep quiet.  As someone with only a
> > small amount of experience with language systems that insist on being
> > able to type your programs, I don't have any stupidity alarms to go
> > off here; I'm curious if any Lispers who have more experience with
> > such systems think this could be of value.
> 
> I think it's not going to work.  A simple example would be
> 
> (defun main ()
>   (pprint (+ 42 (read))))
> 
> Now, to make sure there will be no type-errors, you'd have to add code
> that will make sure that it will be a number that is added to 42;
> however, how is the compiler supposed to decide whether your checks
> are sufficient?  The compiler would have to /understand/ your code.  I
> think you'll be quickly running into unsolvable problems like the
> halting problem and friends.  (I'm just guessing here, but would be
> very surprised if I wasn't guessing right ;-)

Well, this would be a good example of a program that it wouldn't be
possible to type check.  For something like this, you wouldn't tell
the compiler to try.  However, if your program did type checks, like:

  #+sbcl
  ;; Declarations are assertions
  (defun main ()
    (pprint (+ 42 (the number (read)))))

the compiler would easily be able to know the call to + was safe.

> The static type checkers of compilers for languages like ML aren't
> perfect either.  In fact, they will reject a valid program if they
> can't prove it to be type safe.  Those languages are designed in a
> special way that makes sure that the class of programs that /can/ be
> proven to be type safe by the type inference algorithm is sufficiently
> large.

Type inferencers tend to give up when things get too hard.  If it's
necessary for the compiler to type check the entire program, obviously
that's not a good thing.  For something like this, where it's
optional, it's a lot more reasonable.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Pascal Costanza
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alo9qv$6be$1@newsreader2.netcologne.de>
Thomas F. Burdick wrote:

[...]
> This is interesting -- I've noted a similar disposition among users of
> statically-typed languages (speaking statistically, of course, as
> there are plenty of individuals who don't fall into any given mental
> trap).  I have a completely untested hypothesis on this subject, and
> I'm too inexperienced with this type of language to even run a
> thought-experiment on it: this mental trap is the result of having the
> type inferencer complain that your program is not fully typable,
> *while it is still in heavy development*.  When it complains that it
> can prove you've made an error, that's good, it prevents you from
> doing stupid things that make for often hard-to-find bugs.  When it
> complains that your still largely unfinished program isn't typable,
> that's inappropriate.  But there may be some use (depending on the
> nature of the program) in having it complain that your
> production-ready program isn't typable.

Such a thing, in fact, exists. The Eclipse IDE (http://www.eclipse.org) 
includes a Java compiler that statically checks types and reports errors 
(as other Java compilers), but it allows you to run the code 
nonetheless, i.e. those parts that have been successfully compiled 
(unlike other Java compilers/IDEs). Obviously, even users of static 
languages tend to have a need for more dynamic features. (This is 
another proof of Paul Graham's statement about traditional languages 
heading towards Common Lisp's flexibility.)

(BTW, a Common Lisp plugin for Eclipse would be nice. ;)

> The experiment I'd like to try here would be to have a Lisp compiler
> that would attempt to completely type your program.  If you've told it
> that it's compiling code that it should be able to type, it will warn
> you if it can't; otherwise it'll keep quiet.  As someone with only a
> small amount of experience with language systems that insist on being
> able to type your programs, I don't have any stupidity alarms to go
> off here; I'm curious if any Lispers who have more experience with
> such systems think this could be of value.

You want to have soft typing, right? I would like that also, seems to me 
to be the best compromise as of yet.

Pascal
From: Thomas F. Burdick
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <xcvbs74f3mf.fsf@famine.OCF.Berkeley.EDU>
Pascal Costanza <········@web.de> writes:

> Thomas F. Burdick wrote:
>
> > The experiment I'd like to try here would be to have a Lisp compiler
> > that would attempt to completely type your program.  If you've told it
> > that it's compiling code that it should be able to type, it will warn
> > you if it can't; otherwise it'll keep quiet.  As someone with only a
> > small amount of experience with language systems that insist on being
> > able to type your programs, I don't have any stupidity alarms to go
> > off here; I'm curious if any Lispers who have more experience with
> > such systems think this could be of value.
> 
> You want to have soft typing, right? I would like that also, seems to me 
> to be the best compromise as of yet.

Basically, yes :)

All in all, I'm pretty happy with my current setup (CMUCL/SBCL, I use
both).  Languages that require your programs to pass the type checker
seem to me to cost too much for far too little benefit.  But it seems
like a nice optional tool -- and although I don't feel like I'm
missing anything, what would I know -- I didn't miss CLOS when I was
writing C.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Erann Gat
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <gat-1109021038290001@k-137-79-50-101.jpl.nasa.gov>
In article <················@naggum.no>, Erik Naggum <····@naggum.no> wrote:

> Experienced programmers with many years of
> actual experience know that timing of the reported error is inconsequential

Though I agree with Erik's overall point, that the value of static typing
is highly overrated by its advocates, I have to disagree with this. 
Catching errors at compile time can be very useful, particularly in
embedded applications where actually running the program can be very
expensive, and in some cases even dangerous.

E.
From: Gareth McCaughan
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <slrnanvgjc.2vcs.Gareth.McCaughan@g.local>
Erik Naggum wrote:

[Oleg, about int / int -> int]
> | That's a very nasty type of bug, especially in any kind of
> | scientific/numeric application.
> 
>   This kind of bug is a consequence of strong typing and thus must be caught
>   by the strongly-typed system that introduced it.

I think "consequence" is a bit strong. The Python language[1]
is dynamically typed, but it has the int/int -> int "feature".

It's probably true that only a static typing system can
*excuse* -- as opposed to *explain* -- making integer
division yield integers.

I should perhaps add that making int/int *not* truncate
is in the plans for Python's future. You can imagine
the backward-compatibility nightmares.

Oh, and can I put in a plea for not using "strong"
to mean "static" in this context? C is statically typed
but not strongly typed. CL is strongly typed but not
statically typed. And, in the interests of completeness:
Perl is neither strongly typed nor statically typed;
Pascal is both strongly and statically typed.


[1] Nothing to do with the Python CL compiler, of course.

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Erik Naggum
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3240773995474691@naggum.no>
* Gareth McCaughan
| I should perhaps add that making int/int *not* truncate is in the plans for
| Python's future. You can imagine the backward-compatibility nightmares.

 Will there be a new infix operator for the real division?  I have seen //
 used for this purpose, but some jerk had to use it up for comments.  (Even in
 C++, ;; could be used for comments.)

| Oh, and can I put in a plea for not using "strong" to mean "static" in this
| context? C is statically typed but not strongly typed. CL is strongly typed
| but not statically typed. And, in the interests of completeness: Perl is
| neither strongly typed nor statically typed; Pascal is both strongly and
| statically typed.

  This is worthwhile clarification.  Thanks.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Alexander Schmolck
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <yfs3csfjr2i.fsf@black132.ex.ac.uk>
Erik Naggum <····@naggum.no> writes:

> * Gareth McCaughan
> | I should perhaps add that making int/int *not* truncate is in the plans for
> | Python's future. You can imagine the backward-compatibility nightmares.
> 
>  Will there be a new infix operator for the real division?  I have seen //
>  used for this purpose, but some jerk had to use it up for comments.  (Even in
>  C++, ;; could be used for comments.)


~/> python
Python 2.2.1 (#1, Apr 10 2002, 20:17:06)
>>> from __future__ import division
>>> 3 / 2
1.5
>>> 3 // 2
1
>>> 3.0 // 2.0
1.0


There is a proposal to add rationals to the core language
(http://www.python.org/peps/pep-0239.html), so maybe ``3 / 2`` will be
exactness preserving at some point.

As an aside: although what I know about CL's way of dealing with numbers seems
rather sensible to me (promoting to a more general type iff necessary), many
people are apparently uneasy about the idea that the result type can depend on
the parameter values (rather than type). I'd be interested to hear from the
experience of lisp users whether lisp's way of promoting numbers actually
causes problems in practice.
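
For concreteness, the behaviour in question (standard Common Lisp, nothing
hypothetical):

(/ 4 2)       ; => 2      an integer
(/ 3 2)       ; => 3/2    same argument types, a different result type
(expt 2 10)   ; => 1024                              a fixnum
(expt 2 100)  ; => 1267650600228229401496703205376   a bignum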

Also, is there one overarching reason why rationals aren't complex?

alex
From: Marco Antoniotti
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <y6ck7lr85yh.fsf@octagon.mrl.nyu.edu>
Alexander Schmolck <··········@gmx.net> writes:

> Erik Naggum <····@naggum.no> writes:
> 
> > * Gareth McCaughan
> > | I should perhaps add that making int/int *not* truncate is in the plans for
> > | Python's future. You can imagine the backward-compatibility nightmares.
> > 
> >  Will there be a new infix operator for the real division?  I have seen //
> >  used for this purpose, but some jerk had to use it up for comments.  (Even in
> >  C++, ;; could be used for comments.)
> 
> 
> ~/> python
> Python 2.2.1 (#1, Apr 10 2002, 20:17:06)
> >>> from __future__ import division
> >>> 3 / 2
> 1.5
> >>> 3 // 2
> 1
> >>> 3.0 // 2.0
> 1.0
> 
> 
> There is a proposal to add rationals to the core language
> (http://www.python.org/peps/pep-0239.html), so maybe ``3 / 2`` will be
> exactness presevering at some point.

Greenspun, greenspun! :)

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group        tel. +1 - 212 - 998 3488
715 Broadway 10th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA                 http://bioinformatics.cat.nyu.edu
                    "Hello New York! We'll do what we can!"
                           Bill Murray in `Ghostbusters'.
From: Erik Naggum
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3240847754703746@naggum.no>
* Alexander Schmolck
| As an aside: although what I know about CL's way of dealing with numbers seems
| rather sensible to me (promoting to a more general type iff necessary), many
| people are apparently uneasy about the idea that the result type can depend on
| the parameter values (rather than type).

  What do you mean, the type depends on the parameters?  The result type of a
  division of integers is a rational number.  Now, remember your mathematics:
  an integer is a rational number.  Only if you think an object is of only one
  type could you become confused about this.  The beauty of the Common Lisp
  numeric tower is that numbers satisfy several type predicates.

| I'd be interested to hear from the experience of lisp users whether lisp's
| way of promoting numbers actually causes problems in practice.

  But they are /not/ promoted!  An integer is a specialized form of a rational.
  I mean, you can even evaluate (denominator 11) and get 1.
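
  To make the point concrete (standard Common Lisp, nothing
  implementation-specific): the same object satisfies several of the numeric
  type predicates at once.

      (typep 11 'integer)    ; => T
      (typep 11 'rational)   ; => T
      (typep 11 'real)       ; => T
      (denominator 11)       ; => 1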

| Also, is there one overarching reason why rationals aren't complex?

  Huh?  This is such a confused question.  The mathematical background of the
  Common Lisp numeric tower is reason enough.  If you want complex numbers
  with a rational number for the real and imaginary part, you got it.  I do not
  understand what the question would otherwise mean.  

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Christophe Rhodes
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <sqadmnj812.fsf@lambda.jcn.srcf.net>
Erik Naggum <····@naggum.no> writes:

> * Alexander Schmolck
> | Also, is there one overarching reason why rationals aren't complex?
> 
>   Huh?  This is such a confused question.  The mathematical background of the
>   Common Lisp numeric tower is reason enough.  If you want complex numbers
>   with a rational number for the real and imaginary part, you got it.  I do not
>   understand what the question would otherwise mean.  

My interpretation of the question was "why does
  (TYPEP 1/2 'COMPLEX)
not return T?", coming from the view of the real numbers as a line in
the complex plane.

It's not a priori obvious why this decision was taken, I think, as it
blurs the distinction slightly between existence and representation;
that's not to say that I think the wrong decision was taken, of
course.
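
Concretely, the behaviour being discussed (standard Common Lisp):

(typep 1/2 'complex)   ; => NIL   rationals are not of type COMPLEX
(complex 1/2 0)        ; => 1/2   a complex with rational parts and a zero
                       ;          imaginary part collapses back to a rational
(complex 0.5 0)        ; => #C(0.5 0.0)   floats do not collapse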

Cheers,

Christophe
-- 
Jesus College, Cambridge, CB5 8BL                           +44 1223 510 299
http://www-jcsu.jesus.cam.ac.uk/~csr21/                  (defun pling-dollar 
(str schar arg) (first (last +))) (make-dispatch-macro-character #\! t)
(set-dispatch-macro-character #\! #\$ #'pling-dollar)
From: Erik Naggum
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3240850594207047@naggum.no>
* Christophe Rhodes
| It's not a priori obvious why this decision was taken, I think, as it blurs
| the distinction slightly between existence and representation; that's not to
| say that I think the wrong decision was taken, of course.

  The decision was not taken by Common Lisp, but by mathematicians long before
  there were programming languages and representations.  If you plan to work
  with complex numbers, knowing their history seems like a very good idea to me.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Christophe Rhodes
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <sq65xajok1.fsf@lambda.jcn.srcf.net>
Erik Naggum <····@naggum.no> writes:

> * Christophe Rhodes
> | It's not a priori obvious why this decision was taken, I think, as it blurs
> | the distinction slightly between existence and representation; that's not to
> | say that I think the wrong decision was taken, of course.
> 
>   The decision was not taken by Common Lisp, but by mathematicians long before
>   there were programming languages and representations.  If you plan to work
>   with complex numbers, knowing their history seems like a very good idea to me.

Well, there's something I'm not understanding, then.

Gauss proved that (slightly loosely) there are n solutions to an
nth-order polynomial, each of the form a + ib (where i is (sqrt
-1)). These numbers are known as complex numbers, including those for
which b is zero. So in that sense, 1/2 is very much a complex number;
if it isn't, then the complex field isn't closed algebraically.

Christophe
-- 
Jesus College, Cambridge, CB5 8BL                           +44 1223 510 299
http://www-jcsu.jesus.cam.ac.uk/~csr21/                  (defun pling-dollar 
(str schar arg) (first (last +))) (make-dispatch-macro-character #\! t)
(set-dispatch-macro-character #\! #\$ #'pling-dollar)
From: Wolfhard Buß
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <m3y9a6qcil.fsf@buss-14250.user.cis.dfn.de>
Erik Naggum
> If you plan to work with complex numbers, knowing their history seems like
> a very good idea to me.

Christophe Rhodes 
> So in that sense, 1/2 is very much a complex number;

Usually octonions
are introduced in terms of pairs of quaternions,
which are introduced in terms of pairs of complexes,
which are introduced in terms of pairs of reals,
which are introduced in terms of equivalence classes of Cauchy sequences
(Cantor) or Dedekind cuts or something else built from the rationals,
which are introduced in terms of pairs of integers,
which are introduced in terms of pairs of natural numbers,
which are usually introduced through Peano's axiom scheme,
set-theoretically as sets, or in terms of the unavoidable lambda calculus
as functions; or something else.

Obviously a real is a real and a complex is a complex.

Fortunately there is a chain of canonical embeddings of the `lower'
into the `higher' sets, such that a `lower' number has a unique
representation as a `higher' number, such that you may `identify'
the real thing with its representation.

So in that sense 1/2 is very much a complex number.

-- 
"Die ganzen Zahlen hat der liebe Gott gemacht; alles andere ist Menschenwerk"
"God made the natural numbers; all else is the work of man."
                                            --  Leopold Kronecker (1821-1891)
From: Russell Wallace
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3d8b3f6c.253937020@news.eircom.net>
On 13 Sep 2002 14:50:42 +0200, ·····@gmx.net (Wolfhard Buß) wrote:

>"Die ganzen Zahlen hat der liebe Gott gemacht; alles andere ist Menschenwerk"
>"God made the natural numbers; all else is the work of man."

Since complex numbers show up at the core of quantum mechanics, does
that mean we can sue Him for theft of intellectual property? ^.^

-- 
"Mercy to the guilty is treachery to the innocent."
Remove killer rodent from address to reply.
http://www.esatclear.ie/~rwallace
From: Tim Bradshaw
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <ey3elbojr8i.fsf@cley.com>
* Russell Wallace wrote:

> Since complex numbers show up at the core of quantum mechanics, does
> that mean we can sue Him for theft of intellectual property? ^.^

This is actually a wonderful thing.  Until QM, complex numbers were just
a convenience in physics; after it they were a necessity - if you
don't have complex numbers you need to invent objects which are
isomorphic to them to do QM.

--tim
From: Alexander Schmolck
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <yfs7khqhvn2.fsf@black132.ex.ac.uk>
Erik Naggum <····@naggum.no> writes:

> * Christophe Rhodes
> | It's not a priori obvious why this decision was taken, I think, as it blurs
> | the distinction slightly between existence and representation; that's not to
> | say that I think the wrong decision was taken, of course.
> 
>   The decision was not taken by Common Lisp, but by mathematicians long before
>   there were programming languages and representations.  If you plan to work
>   with complex numbers, knowing their history seems like a very good idea to me.


Well, the people responsible for CL apparently felt that they were deviating
from the traditional usage in mathematics:

  In mathematics, the set of real numbers is traditionally described as a
  subset of the complex numbers, but in Common Lisp, the type real and the
  type complex are disjoint.

  -- hyperspec


alex
From: Alexander Schmolck
Subject: Numbers in Lisp (was: macros vs HOFs)
Date: 
Message-ID: <yfs3csdixgr.fsf_-_@black132.ex.ac.uk>
Erik Naggum <····@naggum.no> writes:

> * Alexander Schmolck
> | As an aside: although what I know about CL's way of dealing with numbers seems
> | rather sensible to me (promoting to a more general type iff necessary), many
> | people are apparently uneasy about the idea that the result type can depend on
> | the parameter values (rather than type).
> 
>   What do you mean, the type depends on the parameters?  The result type of a
>   division of integers is a rational number.  Now, remember your mathematics:
>   an integer is a rational number.  

Yes, but a rational number is not necessarily an integer (neither in math nor
in CL) and the result type of e.g. a division of integers in CL *can* also be
an integer:

[29]> (integerp (/ 3 1))
T
[30]> (integerp (/ 3 2))
NIL

Irrespective of the fact that all the numbers involved are subtypes of
rationals, this is still different from the behavior of most commonly used
programming languages (including dynamically typed ones) where the result type
of mathematical operations is completely determined by the types of the
parameters. That CL deviates from this common behavior would seem to make it
easier to write abstract numeric code, but one can at least imagine some
disadvantages. For example the fact that most languages will signal an error
if one attempts something like (sqrt some-negative-float) might also help to
find errors due to bugs or numerical roundoff in some cases. Other potential
issues are efficiency (e.g. due to converting back and forth between
different internal representations), problems with different storage
requirements, and the behavior of arrays or matrices (e.g. how should division
of the elements of a matrix of integers by an integer behave?). In
addition, an expression that evaluates to a float in one implementation might
return a rational in another, which could lead to compatibility problems. I
don't know what other things somebody might come up with, but I guess you get
the idea.

I'd like to know whether CL programmers who have written quite a bit of
numerical code feel that CL's number system is just better than what's on
offer in most or all other programming languages for almost any situation, or
whether there sometimes are problems/bugs that wouldn't occur in, say, Python or
whatever.


Also, CL is not completely consistent in the approach that mathematical
subsets of a certain group of numbers are just treated as special cases of
this group (e.g. reals are not a subtype of complex and #c(1.0 0.0) and 1.0
behave rather differently, so it is for example not possible to compare the
magnitude of the former to another number; but then #c(1.0 0) isn't of type
complex and eql to 1.0).

I'd be interested to know what the rationale for these decisions was.


>   Only if you think an object is of only one type could you become confused
>   about this.  The beauty of the Common Lisp

I didn't think an object had to be of only one type.

> 
> | I'd be interested to hear from the experience of lisp users whether lisp's
> | way of promoting numbers actually causes problems in practice.
> 
>   But they are /not/ promoted!

Yes they are, unless we have different ideas of what promoting a number
means. (sqrt -1.0) is of type complex ((complexp (sqrt -1.0)) => t), but
-1.0 isn't and neither is (sqrt 1.0).

> | Also, is there one overarching reason why rationals aren't complex?
> 
>   Huh?  This is such a confused question.  The mathematical background of the

Sorry, this was badly worded because I was in a hurry. Let me rephrase: is
there one overarching reason why in CL real (and thus also rational) isn't a
subtype of complex (i.e. (complexp 1) => nil)?

Mathematically (complexp 1) ==> T would seem more intuitive, and in Scheme, which
has, I think, a similar number model, (complex? 1) indeed evaluates to #t (same
for Dylan, I guess).


alex
From: Erik Naggum
Subject: Re: Numbers in Lisp (was: macros vs HOFs)
Date: 
Message-ID: <3240932601351524@naggum.no>
* Alexander Schmolck
| That CL deviates from this common behavior

  It does not.

| (e.g. how should division of the elements of a matrix of integers through an
| integer behave?).

  Look, learn the language first, /then/ construct hypothetical problems.  If
  you want a division of two integers to always yield an integer, you have the
  four subtly different operators that do precisely this.  Using the general
  division operator is Just Plain Wrong.  Quit whining about non-problems.
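
  (For reference, the four are `floor´, `ceiling´, `truncate´, and `round´;
  each returns an integer quotient and a remainder, e.g.:

    (floor 7 2)      => 3, 1
    (ceiling 7 2)    => 4, -1
    (truncate -7 2)  => -3, -1
    (floor -7 2)     => -4, 1

  so the rounding direction is spelled out in the code.)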

| In addition, an expression that evaluates to a float in one implementation
| might return a rational in another, which could lead to compatibility
| problems.

  Do you have any examples of this?

| I don't know what other things somebody might come up with, but I guess you
| get the idea.

  I see that you are happy constructing hypothetical problems, but you have so
  far not provided the connection from them to reality.  This is actually far
  more relevant than anything you can come up with in a vacuum.

| I'd be interested to know what the rationale for these decisions was.

  I believe this is part of the public record.  I have not checked this part
  of the specifications sufficiently closely, and even misremembered the issue
  on complex, but I believe the story on complex numbers has been covered
  sufficiently well in CLtL1 and CLtL2.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Alexander Schmolck
Subject: Re: Numbers in Lisp (was: macros vs HOFs)
Date: 
Message-ID: <yfsbs6xfs6y.fsf@black132.ex.ac.uk>
Erik Naggum <····@naggum.no> writes:

> * Alexander Schmolck
> | That CL deviates from this common behavior
> 
>   It does not.

How does knowing that 'a' is of type rational completely determine the return
type of (sqrt a)?

>   If you want a division of two integers to yield an integer always, you
>   have the four subtly different operators that does precisely this.  Using
>   the general division operator is Just Plain Wrong.

What makes you think I disagree with this?

> Quit whining about non-problems.

I'm not. I made no claim that any of the examples I provided were actual
problems. And my aim is not to come up with some armchair
reasoning to prove to you that the number system in CL is really deficient, or
that because it is different from what the more widely used language X does it
cannot possibly work -- I'm simply interested in the practical experiences of
people who have written quite a bit of numeric code with it, and in particular in
the question whether the obvious advantages in terms of abstraction are
counterbalanced to some non-negligible extent by problems one would be less
likely to encounter with more "traditional" (or rather "widely used") number
systems (the answer might well be 'no' -- that's at least what I'd tell
someone who asked me a similar question about Python's choice to
indicate block structure by indentation, accompanied by some frequently
brought up arguments against it).


>   I see that you are happy constructing hypothetical problems, but you have so
>   far not provided the connection from them to reality.  This is actually far
>   more relevant than anything you can come up with in a vacuum.

As indicated, I rather like the idea of writing code that will produce
mathematically correct results given arguments of different numeric types
without much needless loss of exactness but with automatic canonicalization.
However I currently lack the opportunity and time to write the amounts of
numeric lisp code that would be needed to reach an informed opinion from my
own experience, but I still think trying to draw from the practical expertise
of other people is often vastly superior to trying to derive something from
first principles or just adopting what is most common. Since CL was designed
with some care by mathematically sophisticated people, produces fast numeric
code and has been around for some time, c.l.l wouldn't seem like bad community
to quizz about these things. It is of course possible that what I want to know
about is really to imprecise or otherwise unsuitable to be properly answered
in that way.


> 
> | In addition, an expression that evaluates to a float in one implementation
> | might return a rational in another, which could lead to compatibility
> | problems.
> 
>   Do you have any examples of this?

If you mean examples of the first: (expt 27 1/3), (log 8 2), etc.


>   I believe the story on complex numbers has been covered sufficiently well
>   in CLtL1 and CLtL2.

I did have a look at CLtL2 but couldn't find it. Has anyone got a pointer?


alex
From: Erik Naggum
Subject: Re: Numbers in Lisp (was: macros vs HOFs)
Date: 
Message-ID: <3241259604149116@naggum.no>
* Alexander Schmolck <··········@gmx.net>
| How does knowing that 'a' is of type rational completely determine the return
| type of (sqrt a)?

  Please try to understand.  The argument to `sqrt´ is of type `number´, and
  the return value is of type `number´.

| What makes you think I disagree with this?

  Your repetitive argument about division.

| I'm not.

  Sometimes, other people are better judges of such things than oneself.

| I'm simply interested in the practical experiences [...]

  But you cannot do that usefully without understanding the theory.

| I still think trying to draw from the practical expertise of other people is
| often vastly superior to trying to derive something from first principles or
| just adopting what is most common.

  Sure, but you need to understand fully what they have practical expertise in.

| It is of course possible that what I want to know about is really to
| imprecise or otherwise unsuitable to be properly answered in that way.

  It has appeared to me that you lack foundation in your quest for knowledge
  and thus that there is a high risk of leading you astray by giving you
  truths that you will misinterpret.

| If you mean of the first: (expt 27 1/3), (log 8 2) etc.

  I meant specifically the compatibility problems you alluded to.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Alexander Schmolck
Subject: Re: Numbers in Lisp (was: macros vs HOFs)
Date: 
Message-ID: <yfs1y7qf151.fsf@black132.ex.ac.uk>
Erik Naggum <····@naggum.no> writes:

> * Alexander Schmolck <··········@gmx.net>
> | How does knowing that 'a' is of type rational completely determine the return
> | type of (sqrt a)?
> 
>   Please try to understand.  The argument to `sqrt´ is of type `number´, and
>   the return value is of type `number´.

True, but how is this in any way relevant? I take it you know that I am
perfectly aware that 'a' by virtue of being of type rational is also of type
number and that the result of sqrt(a) will also be of type number? I also
think you ought to be able to figure out that "*the* return type" in the above
context can only meaningfully refer to a subtype of what class-of for that
value returns. In those other languages I was referring to, knowing the
equivalent of the result of (class-of a) allows one to deduce the result of
(class-of (sqrt a)). In Common Lisp it doesn't (and that might well be much
more sensible). That was my point.

> 
> | What makes you think I disagree with this?
> 
>   Your repetitive argument about division.

Neither prior to nor since my initial posting have I ever considered (/ 3 2) =>
1 to be desirable behavior, so you must clearly have misunderstood me. I'm not
sure what you think the repetitive argument about division was, but maybe I
understand how the misunderstanding resulted from the matrix example. Did you
maybe think I was talking about /inplace/ division of an array of integers 'a'
by some integer 'i'? What I was referring to is the question of how the logic
of CL's treatment of numbers should be extended to e.g. matrices or arrays as
numeric entities, when I perform e.g. an element-wise array division or sqrt,
resulting in a new array. For efficiency reasons the elements of an array
should all have the same internal representation, but one would obviously also
like matrices and arrays to behave similarly to scalars where sensible and
possible.
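
To make the matrix question concrete, here is the sort of thing I have in mind
(just an illustrative sketch, not code I actually use):

(defun array/ (a n)
  ;; element-wise division of array A by the number N; the result array
  ;; has element-type T, since some of the quotients may be ratios
  (let ((result (make-array (array-dimensions a))))
    (dotimes (i (array-total-size a) result)
      (setf (row-major-aref result i)
            (/ (row-major-aref a i) n)))))

so that e.g. (array/ #2A((1 2) (3 4)) 2) => #2A((1/2 1) (3/2 2)), which can no
longer be stored in a specialized integer array.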


> | I'm simply interested in the practical experiences [...]
>
>   But you cannot do that usefully without understanding the theory.
> 

Sure, but where do you think I don't understand the theory?

> | I still think trying to draw from the practical expertise of other people is
> | often vastly superior to trying to derive something from first principles or
> | just adopting what is most common.
> 
>   Sure, but you need to understand fully what they have practical expertise in.

Yes, but since I doubt we are the only two people who share this awareness,
I'd expect that people who share their practical experiences with me would
provide that context.


>   It has appeared to me that you lack foundation in your quest for knowledge
>   and thus that there is a high risk of leading you astray given truth that
>   you will misinterpret.

The danger of leading people astray certainly exists. I think generally the
best strategy in these cases is to still provide the answer that might lead
the person further astray, but explicitly alert him to that danger (possibly
trying to help by pointing out what he really wants or what exactly he would need
to understand to appreciate the answer). That works fine if the confused
person turns out not to be that confused after all and is also useful to other
readers. Otherwise any further confusion is upon his own head.


Anyway, I'd still be interested in hearing about the rationale for the
following design decisions:

* Why are the types complex and real disjoint?

  You said that this decision was not taken by CL but by mathematics long ago,
  which seems a bit puzzling to me (and apparently others), not least since
  the hyperspec notes that the disjointness is a departure from mathematical
  tradition. I also had a look in CLtL2, as you suggested but couldn't find
  any relevant information (maybe I looked in the wrong places).
  
* Why is #c(3.0 1.0) not canonicalized to a real whereas #c(3.0 0) is but an
  alternative spelling of 3.0?

>   I meant specifically the compatibility problems you alluded to.

OK, but please don't tell me that this example is stupid (I made no claim that
there are non-stupid examples in practice; rather this is an example of what I
am interested to hear about from people):

(nth (log 8 2) '(1 2 3 4))


alex
From: Erik Naggum
Subject: Re: Numbers in Lisp (was: macros vs HOFs)
Date: 
Message-ID: <3241440997740419@naggum.no>
* Alexander Schmolck
| Sure, but where do you think I don't understand the theory?

  Because you show no signs of listening when the same argument is just
  repeated unchanged from posting to posting.  I write in the hope that the
  reader thinks about it.  Since there is no evidence of that, I tire quickly.

| Anyway, I'd still be interested in hearing about the rationale for the
| following design decisions:

  Perhaps you should pay someone if you want to order them around without
  listening to their objections to your demands.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Barry Margolin
Subject: Re: Numbers in Lisp (was: macros vs HOFs)
Date: 
Message-ID: <rSni9.11$S13.1454@paloalto-snr1.gtei.net>
In article <···············@black132.ex.ac.uk>,
Alexander Schmolck  <··········@gmx.net> wrote:
>* Why are the types complex and real disjoint?
>
>  You said that this decision was not taken by CL but by mathematics long ago,
>  which seems a bit puzzling to me (and apparently others), not least since
>  the hyperspec notes that the disjointness is a departure from mathematical
>  tradition. I also had a look in CLtL2, as you suggested but couldn't find
>  any relevant information (maybe I looked in the wrong places).

COMPLEX in CL refers to a representation, rather than an abstract
mathematical concept.  Since the type NUMBER is the union of REAL and
COMPLEX, there's no need for REAL to be a subtype of COMPLEX (if it were,
COMPLEX would be the same as NUMBER).

CL's type system is complex because it's used for several independent
purposes: representation, discrimination, optimization.

>* Why is #c(3.0 1.0) not canonicalized to a real whereas #c(3.0 0) is but an
>  alternative spelling of 3.0?

Did you mean #c(3.0 0.0)?  Floating point numbers are assumed to be
inexact, so we don't bother canonicalizing zeroes away.  But integer zeroes
are exact, so it makes sense to eliminate them when they're not needed.
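
For instance (standard behavior, just to illustrate):

  #c(3 0)              =>  3
  #c(3.0 0)            =>  #C(3.0 0.0)
  (= #C(3.0 0.0) 3.0)  =>  T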

-- 
Barry Margolin, ······@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Bruce Hoult
Subject: Re: Numbers in Lisp (was: macros vs HOFs)
Date: 
Message-ID: <bruce-72D9A2.11140414092002@copper.ipg.tsnz.net>
In article <··················@black132.ex.ac.uk>,
 Alexander Schmolck <··········@gmx.net> wrote:

> Sorry, this was badly worded because I was in a hurry. Let me rephrase: is
> there one overarching reason why in CL real (and thus also rational) isn't a
> subtype of complex (i.e. (complexp 1) => nil)?
> 
> Mathematically (complexp 1) ==> T would seem more intuitive, and in Scheme, which
> has, I think, a similar number model, (complex? 1) indeed evaluates to #t (same
> for Dylan, I guess).

Yes, <complex> is the superclass of <real> in Dylan.

<real> is the superclass of <float> and <rational>.

<float> is the superclass of <single-float> and <double-float>

<rational> is the superclass of <integer>

-- Bruce
From: Alexander Schmolck
Subject: Re: Numbers in Lisp (was: macros vs HOFs)
Date: 
Message-ID: <yfs7khlfrng.fsf@black132.ex.ac.uk>
Bruce Hoult <·····@hoult.org> writes:

> <complex> is the superclass of <real> in Dylan.
> 

Do you happen to have any pointers as to the rationale for this departure from
CL? I'd be quite interested because after all Dylan is a Lisp descendant and
one of the declared design aims was very good runtime performance, whereas the
hyperspec cites efficiency concerns as one of the reasons for the disjointness
of real and complex numbers.

alex
From: Bruce Hoult
Subject: Re: Numbers in Lisp (was: macros vs HOFs)
Date: 
Message-ID: <bruce-694AFC.14391717092002@copper.ipg.tsnz.net>
In article <···············@black132.ex.ac.uk>,
 Alexander Schmolck <··········@gmx.net> wrote:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > <complex> is the superclass of <real> in Dylan.
> > 
> 
> Do you happen to have any pointers as to the rationale for this departure from
> CL? I'd be quite interested because after all Dylan is a Lisp descendant and
> one of the declared design aims was very good runtime performance, whereas the
> hyperspec cites efficiency concerns as one of the reasons for the disjointness
> of real and complex numbers.

Well I didn't design it (I started with Dylan long after it was 
designed), but here is my understanding:

In Dylan, <complex> is an abstract class with no slots, it is a "sealed" 
class, and it is a superclass of all built-in numeric types.

- <complex> is an abstract class.  You can't call make(<complex>), but 
you can use <complex> as the type of a variable (including a specializer 
in a method).

- <complex> is sealed.  This means that it can have subclasses, but they 
must be defined in the same library as <complex> itself (in this case, 
the standard Dylan runtime library).  This enables strong type 
inference, and open-coding of any needed type dispatch code.

The DRM in fact does not define any non-real subclasses of <complex>, 
it's more or less a placeholder.  Particular Dylan implementations can 
provide non-real subclasses of <complex>.  Gwydion Dylan doesn't at 
present, but the architectural place for them is reserved and since it's 
Free software some could be added at any time if there is a demand.  The 
end user, however, can't add new subclasses of <complex>.

An end-user *can* add new subclasses of <number>.  This is an open 
class, subclasses can be added at any time (including at runtime), but 
the trade-off is that method dispatch is likely to be slow for any 
variables typed simply as <number>.  In the past I've used this to add 
such things as 3D coordinates and transformation matrices.  By sealing 
my new branch of the numeric tree I got excellent speed for variables 
whose type is declared (or inferred).

-- Bruce
From: Kaz Kylheku
Subject: Re: Numbers in Lisp (was: macros vs HOFs)
Date: 
Message-ID: <cf333042.0209171144.7088202e@posting.google.com>
Alexander Schmolck <··········@gmx.net> wrote in message news:<··················@black132.ex.ac.uk>...
> Irrespective of the fact that all the numbers involved are subtypes of
> rationals, this is still different from the behavior of most commonly used
> programming languages (including dynamically typed ones) where the result type
> of mathematical operations is completely determined by the types of the
> parameters.

CL is better than other languages; and being better requires being
different. If an operation involving two ratios produces an integer,
the resulting type ought to be an integer. Otherwise you get stupid
behavior, such as 1/2 divided by 1/2 being a ratio equivalent to the
integer 1, but 2 divided by 2 being 1.

> That CL deviates from this common behavior would seem to make it
> easier to write abstract numeric code, but one can at least imagine some
> disadvantages. For example the fact that most languages will signal an error
> if one attempts something like (sqrt some-negative-float) might also help to
> find errors due to bugs or numerical roundoff in some cases.

It's trivial to write a trapping version of sqrt, if you want to
restrict the domain to the positive real number ray of the complex
plane. It's easier to make a specialization out of the general sqrt,
than to generalize over a dumb sqrt.
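
A minimal sketch of such a specialization (purely illustrative):

  (defun real-sqrt (x)
    ;; restrict the domain: signal an error instead of quietly
    ;; returning a complex for negative arguments
    (check-type x (real 0))
    (sqrt x))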

> Other potential
> issues are efficiency (e.g. due to converting back and forth between
> different internal representations), problems with different storage

There could be a special representation of ratios which codes for an
integer.

Mostly taken care of by optional declarations.

> requirements, and the behavior of arrays or matrices (e.g. how should division
> of the elements of a matrix of integers by an integer behave?).

Clearly, if it's intended to be a matrix of integers, then it is an
error to store anything but integers into its cells. So you have to
define what you mean by division, and then choose the appropriate
semantics. Perhaps you want some truncating division rather than the /
operator.

Really, the problem is with the concept of a ``matrix of integers'' to
begin with. Whether or not something is a matrix of integers depends
on how it is manipulated and used.

> In
> addition, an expression that evaluates to a float in one implementation might
> return a rational in another, which could lead to compatibility problems.

That is incorrect. Operations on inexact numbers do not result in
exact numbers in Lisp, except for ones whose abstract semantics
produce integers. That would be completely different from realizing
that an evenly divisible ratio is an integer. Because pretending that
an inexact number is exact is actually an intellectual mistake; you
are manufacturing precision out of thin air. Realizing that 4/2 is the
integer 2, on the other hand, is perfectly correct.

Thus for instance (/ 4.0 2.0) produces 2.0, not 2. Whereas (floor 4.3)
yields 4. No problem there.
 
> Also, CL is not completely consistent in the approach that mathematical
> subsets of a certain group of numbers are just treated as special cases of
> this group (e.g. reals are not a subtype of complex and #c(1.0 0.0) and 1.0
> behave rather differently, so it is for example not possible to compare the
> magnitude of the former to another number; but then #c(1.0 0) isn't of type
> complex and eql to 1.0).

How so?

  (< (abs 1.5) (abs #c(10 10)))   ==>  T

See, the real 1.5 has a smaller magnitude than the complex 10i + 10.
If you suspect that your operands may be complex numbers, then you
have to explicitly compute the magnitude.

  (= #c(1.0 0) 1.0)  ==>  T

I'm no expert on numeric processing in Lisp, but this works fine in
the Lisp I'm using. :)
From: Erik Naggum
Subject: Re: Numbers in Lisp (was: macros vs HOFs)
Date: 
Message-ID: <3241289619834156@naggum.no>
* Kaz Kylheku
| Thus for instance (/ 4.0 2.0) produces 2.0, not 2. Whereas (floor 4.3)
| yields 4. No problem there.

  Just for completeness, tangential to your point: We have `ffloor´ etc.
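
  For instance:

    (floor 4.5)   => 4, 0.5
    (ffloor 4.5)  => 4.0, 0.5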

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Richard Fateman
Subject: sqrt  and speed and fp Was Re: Numbers in Lisp
Date: 
Message-ID: <3D88A6A6.50907@cs.berkeley.edu>
Kaz Kylheku wrote:

.....
>
> 
> It's trivial to write a trapping version of sqrt, if you want to
> restrict the domain to the positive real number ray of the complex
> plane. It's easier to make a specialization out of the general sqrt,
> than to generalize over a dumb sqrt.


If you want a fast sqrt, say one that uses the sqrt instruction, it
is hard to make it out of a general sqrt by declaration and specialization
especially since the compiler would have to check a declaration
for (sqrt x)  that x was a double-precision float and (>= x 0.0d0), or
otherwise deduce the sign of x.

So it would be very useful for people concerned with speed and
the occasional numeric computation to have a double-float-sqrt that was
available in ANSI common lisp, and computed (say) (sqrt (abs x))  or
any other agreed upon consequence for negative x. A sqrt that never ever
returned a complex.   The in-principle argument that you can go from general
to specific depends on the in-practice optimization of arithmetic
expressions.   But this rarely comes to the top of the to-do list for
compiler writers.

 From the numeric computation perspective, a dumb and fast sqrt over
the reals would be useful. The other sqrt could still be there, of course.
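
Something along these lines, say (just a sketch of the kind of function
suggested above, with the name and the (sqrt (abs x)) behavior proposed there):

(defun double-float-sqrt (x)
  ;; always returns a double-float, never a complex
  (declare (type double-float x)
           (optimize (speed 3) (safety 0)))
  (the double-float (sqrt (abs x))))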


As far as floating point numbers being inexact, this is false, and
using this as a basis for reasoning leads to problems.  Floating
point numbers represent very specific rational numbers of the
form integer * 2^integer.  Typically the IEEE 754 standard
for fp is used on Lisps.  The requirement is usually that
you take the inputs, do the exact operation, and then if the
result is representable, you must provide it.  Otherwise you
return the nearest value among the exact numbers that are
representable.
   If you choose to represent YOUR approximate numbers as
fp, that is your choice.  You could choose to represent
approximate numbers as intervals: pairs of EXACT rationals,
representing upper and lower bounds. Or some other way.

But the fp numbers are exact and can be converted to the obvious
rationals.  (except for NaNs, +-Inf, and they too could be converted
to rationals that look like 0/0, 1/0, -1/0.  I wrote a
paper on this.  Oddly enough, if we allow CL to have
these rational numbers the programs to do +, *, probably
become smaller. )
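
For example, in an implementation with IEEE doubles:

  (rational 0.75d0)    =>  3/4
  (rational 0.1d0)     =>  3602879701896397/36028797018963968
  (rationalize 0.1d0)  =>  1/10

The second result is the exact dyadic rational denoted by the double nearest
1/10; rationalize recovers the "intended" 1/10.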
From: Christophe Rhodes
Subject: Re: sqrt  and speed and fp Was Re: Numbers in Lisp
Date: 
Message-ID: <sq7khj2rcy.fsf@lambda.jcn.srcf.net>
Richard Fateman <·······@cs.berkeley.edu> writes:

> Kaz Kylheku wrote:
> 
> .....
> >
> > It's trivial to write a trapping version of sqrt, if you want to
> > restrict the domain to the positive real number ray of the complex
> > plane. It's easier to make a specialization out of the general sqrt,
> > than to generalize over a dumb sqrt.
> 
> 
> If you want a fast sqrt, say one that uses the sqrt instruction, it
> is hard to make it out of a general sqrt by declaration and specialization
> especially since the compiler would have to check a declaration
> for (sqrt x)  that x was a double-precision float and (>= x 0.0d0), or
> otherwise deduce the sign of x.

[ sbcl-0.7.7, but at the very least CMUCL will do this too, and I'd
  guess ACL as well: ]

* (defun foo (x) 
    (declare (optimize (speed 3) (safety 0)) 
             (type (single-float 0.0) x)) 
    (sqrt x))

; note: doing float to pointer coercion (cost 13) to "<return value>"
* (disassemble 'foo)

; 09030CDB:       FSQRT                       ; no-arg-parsing entry point
;      CDD:       WAIT

Cheers,

Christophe
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: Christophe Rhodes
Subject: Re: sqrt  and speed and fp Was Re: Numbers in Lisp
Date: 
Message-ID: <sq3cs72r9p.fsf@lambda.jcn.srcf.net>
Richard Fateman <·······@cs.berkeley.edu> writes:

> Kaz Kylheku wrote:
> 
> .....
> >
> > It's trivial to write a trapping version of sqrt, if you want to
> > restrict the domain to the positive real number ray of the complex
> > plane. It's easier to make a specialization out of the general sqrt,
> > than to generalize over a dumb sqrt.
> 
> 
> If you want a fast sqrt, say one that uses the sqrt instruction, it
> is hard to make it out of a general sqrt by declaration and specialization
> especially since the compiler would have to check a declaration
> for (sqrt x)  that x was a double-precision float and (>= x 0.0d0), or
> otherwise deduce the sign of x.

[ sbcl-0.7.7, but at the very least CMUCL will do this too, and I'd
  guess ACL as well: ]

* (defun foo (x) 
    (declare (optimize (speed 3) (safety 0)) 
             (type (single-float 0.0) x)) 
    (sqrt x))

; note: doing float to pointer coercion (cost 13) to "<return value>"
* (disassemble 'foo)

; 09030CDB:       FSQRT                       ; no-arg-parsing entry point
;      CDD:       WAIT
;                 [...]

Cheers,

Christophe
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: Duane Rettig
Subject: Re: sqrt  and speed and fp Was Re: Numbers in Lisp
Date: 
Message-ID: <4znufdyla.fsf@beta.franz.com>
Christophe Rhodes <·····@cam.ac.uk> writes:

> Richard Fateman <·······@cs.berkeley.edu> writes:
> 
> > Kaz Kylheku wrote:
> > 
> > .....
> > >
> > > It's trivial to write a trapping version of sqrt, if you want to
> > > restrict the domain to the positive real number ray of the complex
> > > plane. It's easier to make a specialization out of the general sqrt,
> > > than to generalize over a dumb sqrt.
> > 
> > 
> > If you want a fast sqrt, say one that uses the sqrt instruction, it
> > is hard to make it out of a general sqrt by declaration and specialization
> > especially since the compiler would have to check a declaration
> > for (sqrt x)  that x was a double-precision float and (>= x 0.0d0), or
> > otherwise deduce the sign of x.
> 
> [ sbcl-0.7.7, but at the very least CMUCL will do this too, and I'd
>   guess ACL as well: ]
> 
> * (defun foo (x) 
>     (declare (optimize (speed 3) (safety 0)) 
>              (type (single-float 0.0) x)) 
>     (sqrt x))
> 
> ; note: doing float to pointer coercion (cost 13) to "<return value>"
> * (disassemble 'foo)
> 
> ; 09030CDB:       FSQRT                       ; no-arg-parsing entry point
> ;      CDD:       WAIT
> ;                 [...]

Yes, Allegro CL does this on all architectures which provide a sqrt
instruction.  Same with abs, and also, on the x86: sin, cos, and tan
(for ranges between +- 2^63).

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Richard Fateman
Subject: Re: sqrt  and speed and fp Was Re: Numbers in Lisp
Date: 
Message-ID: <3D88BA3C.1090107@cs.berkeley.edu>
It's great to see that Franz and CMU have actually
done this!  I may have to revise some old programs
that tried to dance around too-general arithmetic.

Actually, the program foo has to box up its answer,
so a better test may be compiling something like this:

(defun foo2 (a)
  ;; replace a[0] by sqrt(a[0]), non-neg single-float
  ;; returns nil to avoid boxing the result.
  (declare (optimize (speed 3) (debug 0) (safety 0))
           (type (simple-array (single-float 0.0) (1)) a))
  (setf (aref a 0) (sqrt (aref a 0)))
  nil)


on Allegro CL/intel this is the same code length
as the previously offered

 (defun foo (x)
   (declare (optimize (speed 3) (debug 0) (safety 0))
            (type (single-float 0.0) x))
   (sqrt x))


and in fact almost the same instructions
except for the necessity here to jump to sys::new-single-float

RJF


From: Tim Bradshaw
Subject: Re: sqrt  and speed and fp Was Re: Numbers in Lisp
Date: 
Message-ID: <ey3sn07kxko.fsf@cley.com>
* Richard Fateman wrote:
> Kaz Kylheku wrote:
> If you want a fast sqrt, say one that uses the sqrt instruction, it
> is hard to make it out of a general sqrt by declaration and specialization
> especially since the compiler would have to check a declaration
> for (sqrt x)  that x was a double-precision float and (>= x 0.0d0), or
> otherwise deduce the sign of x.

Is this much harder than, say, having to deduce that the arguments to
`*' are double floats so you can do generic arithmetic?  Obviously
the x >= 0 bit is mildly harder, but is it *much* harder?

--tim
From: Barry Margolin
Subject: Re: sqrt  and speed and fp Was Re: Numbers in Lisp
Date: 
Message-ID: <K93i9.18$4C6.1268@paloalto-snr1.gtei.net>
In article <···············@cley.com>, Tim Bradshaw  <···@cley.com> wrote:
>* Richard Fateman wrote:
>> Kaz Kylheku wrote:
>> If you want a fast sqrt, say one that uses the sqrt instruction, it
>> is hard to make it out of a general sqrt by declaration and specialization
>> especially since the compiler would have to check a declaration
>> for (sqrt x)  that x was a double-precision float and (>= x 0.0d0), or
>> otherwise deduce the sign of x.
>
>Is this much harder than, say, having to deduce that the arguments to
>`*' are double floats so you can do generic arithmetic?  Obviously
>the x >= 0 bit is mildly harder, but is it *much* harder?

Is it even necessary to check the sign?  Can't the SQRT instruction report
an error if the argument is negative?  Or will it just return a NaN, which
the followon code will have to check for?  Perhaps the implementation can
make use of trapping NaNs (I'm pushing the limit of my knowledge of IEEE
floating point -- I've seen the phrase "trapping NaN" used, but I don't
know the details of it).

-- 
Barry Margolin, ······@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Joe Marshall
Subject: Re: sqrt  and speed and fp Was Re: Numbers in Lisp
Date: 
Message-ID: <u1kn2iut.fsf@ccs.neu.edu>
Barry Margolin <······@genuity.net> writes:

> In article <···············@cley.com>, Tim Bradshaw  <···@cley.com> wrote:
> >* Richard Fateman wrote:
> >> Kaz Kylheku wrote:
> >> If you want a fast sqrt, say one that uses the sqrt instruction, it
> >> is hard to make it out of a general sqrt by declaration and specialization
> >> especially since the compiler would have to check a declaration
> >> for (sqrt x)  that x was a double-precision float and (>= x 0.0d0), or
> >> otherwise deduce the sign of x.
> >
> >Is this much harder than, say, having to deduce that the arguments to
> >`*' are double floats so you can do generic arithmetic?  Obviously
> >the x >= 0 bit is mildly harder, but is it *much* harder?
> 
> Is it even necessary to check the sign?  Can't the SQRT instruction report
> an error if the argument is negative?  Or will it just return a NaN, which
> the followon code will have to check for?  Perhaps the implementation can
> make use of trapping NaNs (I'm pushing the limit of my knowledge of IEEE
> floating point -- I've seen the phrase "trapping NaN" used, but I don't
> know the details of it).

The IEEE floating point API specifies 5 `exceptions' (OVERFLOW,
UNDERFLOW, ZERODIVIDE, INVALID, INEXACT) that are to be
detected and what to do about them.  You can tell the API to `ignore'
certain exceptions.  When you do this, a `reasonable' result is
supposed to be returned.  For instance, if you ignore OVERFLOW, an
infinity is returned.  If you *don't* ignore overflow, the OVERFLOW
flag is turned on and an unnormalized number with a biased exponent is
returned.  In theory, this would enable the API client to take some
corrective action.  A number of computer languages don't allow user
code to access the full API and simply set the floating point trap
enables to some default value.

In any case, SQRT of a negative number would be considered an INVALID
operation.  If traps are enabled, the INVALID flag would be set and
presumably somehow the process will be notified.  If traps are not
enabled, SQRT of a negative number will produce a `quiet NaN'.

A `quiet NaN' doesn't cause an INVALID trap when it is used as an 
operand.  It is simply propagated on as output.  A `quiet NaN'
presumably enters the calculation because an INVALID operation was
ignored.  The theory is that if you ignored it the first time, you
still want to ignore it.

A `trapping NaN' causes an INVALID trap.  The floating point unit
would not generate these.  This would be used by a compiler as a
value for an uninitialized floating point variable.  If INVALID traps
are disabled, the floating point unit would simply turn `trapping
NaN's into non-trapping ones.

So you could either arrange for SQRT to cause an exception, or you
could arrange for it to be silent and check if the result is NaN.
From: Barry Margolin
Subject: Re: sqrt  and speed and fp Was Re: Numbers in Lisp
Date: 
Message-ID: <VD4i9.26$4C6.1659@paloalto-snr1.gtei.net>
In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
>So you could either arrange for SQRT to cause an exception, or you
>could arrange for it to be silent and check if the result is NaN.

That check would presumably be as expensive as checking for negative before
the SQRT instruction.  So if the goal is to avoid any extra checking code,
you'd have to enable the exception, so that there's no overhead in the
valid cases.

-- 
Barry Margolin, ······@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Joe Marshall
Subject: Re: sqrt  and speed and fp Was Re: Numbers in Lisp
Date: 
Message-ID: <it132i72.fsf@ccs.neu.edu>
Barry Margolin <······@genuity.net> writes:

> In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
> >So you could either arrange for SQRT to cause an exception, or you
> >could arrange for it to be silent and check if the result is NaN.
> 
> That check would presumably be as expensive as checking for negative before
> the SQRT instruction.  

If not more so (you'd be wasting your time boxing the NaN).
From: Barry Margolin
Subject: Re: sqrt  and speed and fp Was Re: Numbers in Lisp
Date: 
Message-ID: <o05i9.30$4C6.1656@paloalto-snr1.gtei.net>
In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
>Barry Margolin <······@genuity.net> writes:
>
>> In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
>> >So you could either arrange for SQRT to cause an exception, or you
>> >could arrange for it to be silent and check if the result is NaN.
>> 
>> That check would presumably be as expensive as checking for negative before
>> the SQRT instruction.  
>
>If not more so (you'd be wasting your time boxing the NaN).

I expect the check would be done before boxing, since there's no need to
return a result if the check fails.

-- 
Barry Margolin, ······@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Tim Bradshaw
Subject: Re: sqrt  and speed and fp Was Re: Numbers in Lisp
Date: 
Message-ID: <ey3ofaul0tq.fsf@cley.com>
* Barry Margolin wrote:
> That check would presumably be as expensive as checking for negative before
> the SQRT instruction.  So if the goal is to avoid any extra checking code,
> you'd have to enable the exception, so that there's no overhead in the
> valid cases.

One question is: does enabling exceptions slow down the HW
significantly?  I'd hope it wouldn't, but I have a horrible feeling
that it might given the kind of C mindset that people probably design
for.

--tim
From: Christophe Rhodes
Subject: Re: sqrt  and speed and fp Was Re: Numbers in Lisp
Date: 
Message-ID: <sqznuent70.fsf@lambda.jcn.srcf.net>
Tim Bradshaw <···@cley.com> writes:

> * Barry Margolin wrote:
> > That check would presumably be as expensive as checking for negative before
> > the SQRT instruction.  So if the goal is to avoid any extra checking code,
> > you'd have to enable the exception, so that there's no overhead in the
> > valid cases.
> 
> One question is: does enabling exceptions slow down the HW
> significantly?  I'd hope it wouldn't, but I have a horrible feeling
> that it might given the kind of C mindset that people probably design
> for.

The impression I get, from reading between the lines of architecture
manuals, is that if you want precise exceptions (i.e. exceptions
signalled as soon as the operation occurs) it does slow down the
hardware.  Imprecise exceptions (signalled whenever the processor
feels like it[1]) wouldn't cause this problem.  Though this kind of
thing can expose infelicities in operating system kernels, too ("the
kernel doesn't use floating point; why should any program?"...)

Cheers,

Christophe

[1] handwavy terminology basically because I am fuzzy on the details :-)
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: Thomas F. Burdick
Subject: Re: sqrt  and speed and fp Was Re: Numbers in Lisp
Date: 
Message-ID: <xcvk7lhx58z.fsf@hurricane.OCF.Berkeley.EDU>
Christophe Rhodes <·····@cam.ac.uk> writes:

> Tim Bradshaw <···@cley.com> writes:
> 
> > * Barry Margolin wrote:
> > > That check would presumably be as expensive as checking for negative before
> > > the SQRT instruction.  So if the goal is to avoid any extra checking code,
> > > you'd have to enable the exception, so that there's no overhead in the
> > > valid cases.
> > 
> > One question is: does enabling exceptions slow down the HW
> > significantly?  I'd hope it wouldn't, but I have a horrible feeling
> > that it might given the kind of C mindset that people probably design
> > for.
> 
> The impression I get, from reading between the lines of architecture
> manuals, is that if you want precise exceptions (i.e. exceptions
> signalled as soon as the operation occurs) it does slow down the
> hardware.  Imprecise exceptions (signalled whenever the processor
> feels like it[1]) wouldn't cause this problem.  Though this kind of
> thing can expose infelicities in operating system kernels, too ("the
> kernel doesn't use floating point; why should any program?"...)

Right, because if you want precise exceptions, you need to pay for
synchronization, whereas imprecise exceptions can be signaled when
synchronization would have happened anyway.  A sufficiently smart
compiler could probably arrange for things to be scheduled in such a
way that the synchronization didn't cost too much (at least, if you
had a good mix of integer and FP operations that gave it the raw
material to work with).  I've been meaning to investigate this area of
Python, but honestly, I'm kind of scared of what I might find :-)

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Hartmann Schaffer
Subject: Re: sqrt  and speed and fp Was Re: Numbers in Lisp
Date: 
Message-ID: <3d8a7baa@news.sentex.net>
In article <···············@hurricane.ocf.berkeley.edu>,
	···@hurricane.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
> ...
> synchronization would have happened anyway.  A sufficiently smart
> compiler could probably arrange for things to be scheduled in such a
> way that the synchronization didn't cost too much (at least, if you
> had a good mix of integer and FP operations that gave it the raw
> material to work with). 

i doubt it.  floating point operations usually don't complete in one
cycle.  if you want to have precise interrupts, you would have to halt
scheduling until the floating point operation is completed, or you
would have to be able to associate the exception with the instruction
that caused it (feed this info through the pipeline).  if subsequent
instructions can cause exceptions as well, this can get messy very fast

hs

-- 

don't use malice as an explanation when stupidity suffices
From: Duane Rettig
Subject: Re: sqrt  and speed and fp Was Re: Numbers in Lisp
Date: 
Message-ID: <4ptv9t938.fsf@beta.franz.com>
··@heaven.nirvananet (Hartmann Schaffer) writes:

> In article <···············@hurricane.ocf.berkeley.edu>,
> 	···@hurricane.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
> > ...
> > synchronization would have happened anyway.  A sufficiently smart
> > compiler could probably arrange for things to be scheduled in such a
> > way that the synchronization didn't cost too much (at least, if you
> > had a good mix of integer and FP operations that gave it the raw
> > material to work with). 
> 
> i doubt it.  floating point operations usually don't complete in one
> cycle.  if you want to have precise interrupts, you would have to halt
> scheduling until the floating point operation is completed, or you
> would have to be able to associatethe exception with the instruction
> that caused it (feed this info through the pipeline).  if subsequent
> instructons can cause exceptions as well, this can get mesy very fast

Yes, but this is possible and in fact is done on the Alpha.  I'll
answer further in my response to Daniel Barlow's article.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Daniel Barlow
Subject: Re: sqrt  and speed and fp Was Re: Numbers in Lisp
Date: 
Message-ID: <87r8fpo3nq.fsf@noetbook.telent.net>
···@hurricane.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> Right, because if you want precise exceptions, you need to pay for
> synchronization, whereas imprecise exceptions can be signaled when
> synchronization would have happened anyway.  A sufficiently smart

It's worth noting that on a typical Unix system, a fp trap involves a
trip into supervisor mode (i.e. the kernel) which will stare hard at
the error and eventually send your process SIGFPE.  This is probably
slow enough that you'd want to reserve it for "can't happen" or at
least "very unlikely to happen" situations.

> compiler could probably arrange for things to be scheduled in such a
> way that the synchronization didn't cost too much (at least, if you
> had a good mix of integer and FP operations that gave it the raw
> material to work with).  I've been meaning to investigate this area of
> Python

Consulting for example the Alpha arch reference manual (a cpu
architecture known for its, uh, "outside-the-box" approach to division
of responsibilities for correct fp error handling) one finds a big
list of things that the compiler must ensure when generating code to
be executed in a trap shadow.  This includes stuff like "don't reuse
the destination register, don't jump, don't branch, don't mess with
the sp or fp" and so on.  

And if you think you can get around this by disabling the traps and
testing the IEEE "an exception happened here" bits at some later more
convenient time, bad luck.  The hardware is at liberty to ignore your
trap disables and trap into the kernel anyway: the kernel is expected
to fix up the IEEE exception-happened bits (which are maintained in
software), supply a default value, and restart the computation.  So
you still have to follow the trap rules even though userspace never sees the
trap.

> but honestly, I'm kind of scared of what I might find :-)

Sensible...

-dan

-- 

  http://ww.telent.net/cliki/ - Link farm for free CL-on-Unix resources 
From: Duane Rettig
Subject: Re: sqrt  and speed and fp Was Re: Numbers in Lisp
Date: 
Message-ID: <4lm5xt700.fsf@beta.franz.com>
Daniel Barlow <···@telent.net> writes:

> ···@hurricane.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
> 
> > Right, because if you want precise exceptions, you need to pay for
> > synchronization, whereas imprecise exceptions can be signaled when
> > synchronization would have happened anyway.  A sufficiently smart
> 
> It's worth noting that on a typical Unix system, a fp trap involves a
> trip into supervisor mode (i.e. the kernel) which will stare hard at
> the error and eventually send your process SIGFPE.  This is probably
> slow enough that you'd want to reserve it for "can't happen" or at
> least "very unlikely to happen" situations.
> 
> > compiler could probably arrange for things to be scheduled in such a
> > way that the synchronization didn't cost too much (at least, if you
> > had a good mix of integer and FP operations that gave it the raw
> > material to work with).  I've been meaning to investigate this area of
> > Python
> 
> Consulting for example the Alpha arch reference manual (a cpu
> architecture known for its, uh, "outside-the-box" approach to division
> of responsibilities for correct fp error handling) one finds a big
> list of things that the compiler must ensure when generating code to
> be executed in a trap shadow.  This includes stuff like "don't reuse
> the destination register, don't jump, don't branch, don't mess with
> the sp or fp" and so on.  

This is actually all very elementary, if you understand what is going on.
Certain operations might not be doable in the hardware (for example,
some of the earlier Alphas can't handle operations on denormalized
floats).  These will cause a trap, but since you can't predict when the
trap will occur, the instructions have a way to inform the trap handler
that it is safe to re-execute the instructions (properly) in software.
This is done by setting the "software" bit in the instruction, by making
sure that no initial values in a run of float operations are destroyed
(hence the need to not overwrite a register) and at the end of the
sequence by placing a trapb (trap barrier) instruction so that the
interrupts can be synchronized.

> And if you think you can get around this by disabling the traps and
> testing the IEEE "an exception happened here" bits at some later more
> convenient time, bad luck.  The hardware is at liberty to ignore your
> trap disables and trap into the kernel anyway: the kernel is expected
> to fix up the IEEE exception-happened bits (which are maintained in
> software), supply a default value, and restart the computation.  So
> you still have to follow the trap rules even though userspace never sees the
> trap.

Whether the trap handler retries the instruction sequence in hardware
(the more preferable situation) or generates a SIGFPE depends on 
how the instructions are coded.  If the "software" trap bit is not set,
then the trap handler will never retry and you'll always get a SIGFPE.
If it is set, but you have overwritten one of the registers that are
needed between the instruction that caused the exception and the current
instruction, then you also get a SIGFPE.

Here is an example:

CL-USER(1): (compile (defun foo (a b c)
                       (declare (optimize speed (safety 0) (debug 0))
                                (double-float a b c))
                       (+ a b c)))
FOO
NIL
NIL
CL-USER(2): (disassemble *)
;; disassembly of #<Function FOO>
;; formals: A B C

;; code start: #x3049b04c:
   0: 8c120006             ldt f0,6(r18)
   4: 8c510006             ldt f2,6(r17)
   8: 8c700006             ldt f3,6(r16)
  12: 5862b40a             addt/su f3,f2,f10
  16: 60000000             trapb 
  20: 5d4a0402             cpys f10,f10,f2
  24: 5840b40a             addt/su f2,f0,f10
  28: 60000000             trapb 
  32: 5d4a0400             cpys f10,f10,f0
  36: a36e0213             ldl r27,531(r14)         SYS::NEW-DOUBLE-FLOAT
  40: 47ff0404     [bis]   clr r4
  44: 6bfb0000             jmp r31,(r27),0
CL-USER(3): 

Note that the addt (ieee double-float add) has a /s modifier, which is
the "software" bit.  This, along with the trapb instruction, ensures that
the trap handler will retry the instruction in software if it fails in
hardware.  One peephole improvement that could be done would be to
rearrange the registers to not start a value in f0, and to then remove
the extra move (cpys) instructions and the first trapb, and to assign the
registers in such a way that no register on the input side appeared as
output later on.  Such a sequence might have looked like

   ldt f1,6(r18)
   ldt f2,6(r17)
   ldt f3,6(r16)
   addt/su f3,f2,f10
   addt/su f10,f1,f0
   trapb
   ...

If we had had incentive, we might have performed such optimizations.
But it's not like the Alpha is going anywhere :-(  I believe that
the unoptimized sequence is not too bad performance-wise, because in
the more deeply pipelined versions, whether an exception is going to
happen can be determined in advance of the actual operation, so the
trapb instructions can mostly be treated as nops.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Barry Margolin
Subject: Re: sqrt  and speed and fp Was Re: Numbers in Lisp
Date: 
Message-ID: <sXFi9.3$xP6.498@paloalto-snr1.gtei.net>
In article <··············@noetbook.telent.net>,
Daniel Barlow  <···@telent.net> wrote:
>···@hurricane.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
>
>> Right, because if you want precise exceptions, you need to pay for
>> synchronization, whereas imprecise exceptions can be signaled when
>> synchronization would have happened anyway.  A sufficiently smart
>
>It's worth noting that on a typical Unix system, a fp trap involves a
>trip into supervisor mode (i.e. the kernel) which will stare hard at
>the error and eventually send your process SIGFPE.  This is probably
>slow enough that you'd want to reserve it for "can't happen" or at
>least "very unlikely to happen" situations.

Or "don't care how long it takes when it happens" situations.  If the
likely end result is that it will display an error message and/or invoke
the interactive debugger, it doesn't really matter if it takes an extra few
milliseconds.  This is why trapping is often preferable to checking -- the
overhead only occurs in the error cases.

But for Common Lisp, this may not be the case, because sqrt of a negative
number doesn't signal an error; it produces a complex number automatically.
Someone needs to determine how often this occurs, and decide whether the
tradeoff is appropriate.
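
For illustration, a quick listener sketch (exact float printing varies by
implementation):

  (sqrt 4.0)   =>  2.0
  (sqrt -4.0)  =>  #C(0.0 2.0)   ; no error is signaled; a complex comes back
  (sqrt -1)    =>  #C(0.0 1.0)   ; negative rationals also yield a complex float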

-- 
Barry Margolin, ······@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Alexander Schmolck
Subject: Re: Numbers in Lisp (was: macros vs HOFs)
Date: 
Message-ID: <yfsvg52dlse.fsf@black132.ex.ac.uk>
···@ashi.footprints.net (Kaz Kylheku) writes:

> Alexander Schmolck <··········@gmx.net> wrote in message news:<··················@black132.ex.ac.uk>...
> CL is better than other languages; and being better requires being
> different. 

No argument. By pointing out what I perceived as a difference (and I am glad we
seem to agree about what I described as a difference), I didn't mean to imply that
what CL does isn't better.

> the resulting type ought to be an integer. Otherwise you get stupid
> behavior, such as 1/2 divided by 1/2 being a ratio equivalent to the
> integer 1, but 2 divided by 2 being 1.

Do you think this is similar behavior to what you criticize above?:
[17]> (- #c(1.0 0.0) 1.0)
#C(0.0 0.0)
[18]> (- 1.0 1.0)
0.0


> It's trivial to write a trapping version of sqrt, if you want to
> restrict the domain to the positive real number ray of the complex
> plane. It's easier to make a specialization out of the general sqrt,
> than to generalize over a dumb sqrt.

Yes, I think this is a good argument.

> 
> Other potential
> > issues are efficiency (e.g. due to converting back and forwards between
> > different internal representations), problems with different storage
> 
> There could be a special representation of ratios which codes for an
> integer.
> 
> Mostly taken care of by optional declarations.

Yes, if e.g. declaring something as ratio prevents undesired conversion of the
internal representation to, say, a long.

> Clearly, if it's intended to be a matrix of integers, then it is an
> error to store anything but integers into its cells. So you have to
> define what you mean by division, and then choose the appropriate
> semantics. Perhaps you want some truncating division rather than the /
> operator.

Sorry, my example was too cryptic. I didn't mean dividing the elements of the
matrix in-place, I meant an element-wise division operation on a matrix (or
array), leaving the original matrix unmodified and returning a new matrix
(whose cells needn't be of type integer). See my reply to Erik Naggum.

> > addition and expression that evaluates to a float in one implementation might
> > return a rational in another, which could lead to compatibility problems. 
> That is incorrect. Operations on inexact numbers do not result in

Yes, it should read "an" instead of "and" :) (I think you misread: I didn't
claim that operations on /inexact/ numbers sometimes yield exact results. But
implementations are free to implement some operations on rational numbers to
return either an inexact result (float) or an exact one (rational).)

> > Also, CL is not completely consistent in the approach that mathematical
> > subsets of a certain group of numbers are just treated as special cases of
> > this group (e.g. reals are not a subtype of complex and #c(1.0 0.0) and 1.0
> > behave rather differently, so it is for example not possible to compare the
> > magnitude of the former to another number; but then #c(1.0 0) isn't of type
> > complex and eql to 1.0).
> 
> How so?

Well, take (< #c(1.0 0) #c(2.0 0)) vs. (< #c(1.0 0.0) #c(2.0 0.0)). The former
works, the latter doesn't, because canonicalization to a real only takes place
for the former.

> 
>   (= #c(1.0 0) 1.0)  ==>  T

If I interpret your reason for this reply correctly: the eql in the paragraph
above was not negated ( (eql #c(1.0 0) 1.0) ==> T; but not (eql #c(1.0 0.0)
1.0); = works for both ).

> 
> I'm no expert on numeric processing in Lisp, but this works fine in
> the Lisp I'm using. :)


thanks for your input.

alex
From: Vassil Nikolov
Subject: Re: Numbers in Lisp (was: macros vs HOFs)
Date: 
Message-ID: <f34a0f4f.0209200027.e8503a9@posting.google.com>
Alexander Schmolck <··········@gmx.net> wrote in message news:<···············@black132.ex.ac.uk>...
> ···@ashi.footprints.net (Kaz Kylheku) writes:
[...]
> > the resulting type ought to be an integer. Otherwise you get stupid
> > behavior, such as 1/2 divided by 1/2 being a ratio equivalent to the
> > integer 1, but 2 divided by 2 being 1.
> 
> Do you think this is similar behavior to what you criticize above?:
> [17]> (- #c(1.0 0.0) 1.0)
> #C(0.0 0.0)
> [18]> (- 1.0 1.0)
> 0.0

It is not similar because floating-point numbers are considered
inexact, so #c(0.0 0.0) is treated as if it is a (hopefully small)
rectangle centered on 0, while 0.0 is treated as if it is a
(hopefully small) segment centered on 0, so one cannot be replaced
by the other.

By contrast, rational numbers are considered exact, so the
value of (/ 1/2 1/2) is identical to 1.
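
A small listener sketch of the distinction (float printing may vary by
implementation):

  (eql (/ 1/2 1/2) 1)            =>  T     ; exact result collapses to the integer 1
  (eql (- #c(1.0 0.0) 1.0) 0.0)  =>  NIL   ; inexact result stays #C(0.0 0.0)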

[...]
> Well, take (< #c(1.0 0) #c(2.0 0)) vs. (< #c(1.0 0.0) #c(2.0 0.0)). The former
> works, the latter doesn't, because canonicalization to a real only takes place
> for the former.

No, it doesn't.

Only #c(1 0) or #c(1/2 0) would be canonicalized to 1 and 1/2,
respectively; #c(1.0 0) will not.

See, for example, the HyperSpec, 12.1.5.3.1 (the fourth case).

Only = is more `forgiving' in that (= #c(1.0 0) 1.0) => T
(and ditto for #c(1.0 0.0)), but (EQL #c(1.0 0) 1.0) => NIL.
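
To illustrate the canonicalization rule itself (a listener sketch):

  (eql #c(1 0) 1)              =>  T    ; rational parts, zero imaginary part: reads as the integer 1
  (eql #c(1.0 0) #c(1.0 0.0))  =>  T    ; #c(1.0 0) is upgraded to #C(1.0 0.0), not canonicalized
  (eql #c(1.0 0) 1.0)          =>  NIL  ; hence not EQL to the float 1.0, though = to it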

---Vassil.
From: Ray Blaak
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <uwuprn2rx.fsf@telus.net>
················@pobox.com (Gareth McCaughan) writes:
> Erik Naggum wrote:
> [Oleg, about int / int -> int]
> > | That's a very nasty type of bug, especially in any kind of
> > | scientific/numeric application.
> > 
> >   This kind of bug is a consequence of strong typing and thus must be caught
> >   by the strongly-typed system that introduced it.
> 
> I think "consequence" is a bit strong. The Python language[1]
> is dynamically typed, but it has the int/int -> int "feature".
> 
> It's probably true that only a static typing system can
> *excuse* -- as opposed to *explain* -- making integer
> division yield integers.

This has nothing to do with static typing per se.

The issue is that the languages that give only int/int->int simply do not also
have int/int->real. That is, the problem is that they have the "wrong" set of
built-in functions, not that function usage is statically checked.

A language that had the equivalent of CL's numeric tower could still be
readily statically typed. There would be nothing stopping one from using the
equivalent of "arbitrary number" all over the place, and using explicit
truncates at those places where one is sure an int is needed.

In particular int/int->int need not exist, allowing static typing to actually
prevent these accidental errors. I.e. you would have to do it on purpose:

 (truncate (/ x y))

--
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
·····@telus.net                                The Rhythm has my soul.
From: Gareth McCaughan
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <slrnao1t1k.2vcs.Gareth.McCaughan@g.local>
Ray Blaak wrote:

[Erik, writing about the int/int->int problem some languages have:]
> > >   This kind of bug is a consequence of strong typing and thus
> > >   must be caught by the strongly-typed system that introduced it.

[me:]
> > I think "consequence" is a bit strong. The Python language[1]
> > is dynamically typed, but it has the int/int -> int "feature".
> > 
> > It's probably true that only a static typing system can
> > *excuse* -- as opposed to *explain* -- making integer
> > division yield integers.

[Ray:]
> This has nothing to do with static typing per se.
> 
> The issue is that the languages that give only int/int->int simply do not also
> have int/int->real. That is, they have the "wrong" set of built-in functions,
> as opposed to the fact the function usage is statically checked.

I agree that static typing doesn't cause the problem.
(That was my point.) What I meant by "only a static
typing system can excuse, etc" was that if you must
have int/int -> int, then you'd better also have
static typing, because at least then you can pretend
that you just have two different "/" operators and
the language does something a bit like type inference
to work out which one you want. Whereas, in a dynamically
typed language, what happens is that you get two
completely different behaviours selected between
at runtime, arbitrarily unpredictably. That feels
really wrong.

Even in statically typed languages, I'd be happier
if "/" didn't mean "divide and convert to integer".
But that usage sits more comfortably in C than it
does in Python; more comfortably in ML than it would
in CL.

> A language that had the equivalent of CL's numeric tower could still be
> readily statically typed. There would be nothing stopping one from using the
> equivalent of "arbitrary number" all over the place, and using explicit
> truncates at those places where one is sure an int is needed.

Of course.

> In particular int/int->int need not exist, allowing static typing to actually
> prevent these accidental errors. I.e. you would have to do it on purpose:
> 
>  (truncate (/ x y))

Of course.

At this point, I typed: "But if you do that, you should
in fact do what CL does, providing one division-yielding-integer
function for each form of real->integer conversion."
But, on reflection, maybe that's wrong. It doesn't take
an especially smart compiler to avoid consing up a rational
when the next thing it's going to do is truncate it or turn
it into a float, writing the expression out fully only costs
a few extra characters, and -- as the Python folks like to
put it -- "Explicit is better than implicit". (That maxim
can be an excuse for clunkiness, but I don't think there's
anything very clunky about *not* conflating division and
real->int conversion.)

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Erik Naggum
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3240856262261155@naggum.no>
* Gareth McCaughan
| I agree that static typing doesn't cause the problem.  (That was my point.)

  I think my point has been muddled.  When I said that this (/ int int) -> int
  is /a consequence of/ static typing, that does not mean that it is not also
  a consequence of more factors.  It means that if you choose static typing,
  you will also make this kind of design choice.  In particular, if you choose
  types that are close to the machine, (/ int int) -> int is the obvious choice
  because the hardware that you have chosen to model does precisely that.

  If, however, you think in mathematical terms, you do not have (/ int int) to
  begin with, you have (/ number number), and the result is of type number,
  but this would not aid efficiency at all!  Since better efficiency is a goal
  of the application of most type theories, type systems that do not consider
  all (numeric) types disjoint are basically worthless.

* Ray Blaak
| (truncate (/ x y))

* Gareth McCaughan
| Of course.

  Pardon me, but (truncate (/ x y)) is stupid when (truncate x y) expresses
  the operation better and even returns the two values that machine division
  instructions routinely produce instead of having to compute a new, second
  return value.  Note the alternatives `floor', `ceiling', and `round', as well.
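
  For concreteness, a small listener sketch:

    (truncate 7 2)      =>  3, 1    ; quotient and integer remainder in one step
    (truncate (/ 7 2))  =>  3, 1/2  ; the second value is the fractional part of 7/2,
                                    ; not the remainder of the division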

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Ray Blaak
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <u3csejr4u.fsf@telus.net>
Erik Naggum <····@naggum.no> writes:
>   If, however, you think in mathematical terms, you do not have (/ int int)
>   to begin with, you have (/ number number), and the result is of type
>   number, but this would not aid efficiency at all!  Since better efficiency
> is a goal of the application of most type theories, type systems that do
>   not consider all (numeric) types disjoint are basically worthless.

The other major goal, of course, is "correctness", insofar as that can be
shown statically for those who are interested. CL's numeric tower of subtypes
is perfectly useful statically as well for this purpose.

However, even for the goal of pure efficiency, the numeric types need not be
necessarily disjoint. One would use "arbitrary number" as necessary
(i.e. conservatively, when in doubt), and use the more restricted types where
shown to be needed. In those places where the compiler/interpreter knows
restricted types are being used, more efficient code can be generated.

That is, the implementations can be effectively disjoint, while semantically,
from the programmer's point of view, the inclusive ordering of numeric
subtypes would still hold.

> * Ray Blaak
> | (truncate (/ x y))

>   Pardon me, but (truncate (/ x y)) is stupid when (truncate x y) expresses
>   the operation better and even returns the two values that machine division
>   instructions routinely produce instead of having to compute a new, second
>   return value.  Note the alternatives `floor', `ceiling', and `round', as
>   well.

Yes, one should always use the most appropriate operation the language
provides.

My point was that regardless of the actual operation, in a (good) static
language the programmer chooses the conversion to the restricted type
explicitly, rather than (possibly erroneously) letting implicit conversions do
the work.

--
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
·····@telus.net                                The Rhythm has my soul.
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <87y9a52uk8.fsf@darkstar.cartan>
Ray Blaak <·····@telus.net> writes:

> Erik Naggum <····@naggum.no> writes:
> >   If, however, you think in mathematical terms, you do not
> >   have (/ int int) to begin with, you have (/ number number),
> >   and the result is of type number, but this would not aid
> >   efficiency at all!  Since better efficiency is a goal of
> >   the application of most type theories, type systems that
> >   do not consider all (numeric) types disjoint are basically
> >   worthless.
> 
> The other major goal, of course, is "correctness", insofar as
> that can be shown statically for those who are interested. CL's
> numeric tower of subtypes is perfectly useful statically as
> well for this purpose.

Yes, correctness and efficiency are the goals of static typing.
The correctness part however is probably a myth (see the plist
example I posted in this thread, for instance).  So we are left
with efficiency.

> However, even for the goal of pure efficiency, the numeric
> types need not be necessarily disjoint. One would use
> "arbitrary number" as necessary (i.e. conservatively, when in
> doubt), and use the more restricted types where shown to be
> needed. In those places where the compiler/interpreter knows
> restricted types are being used, more efficient code can be
> generated.

Just as in Lisp: You add declarations narrowing the set of
possible numeric types of certain variables when needed.

One could probably try to do that in a static language, too:
Always use the most ``arbitrary'' number type available at first,
and restrict when needed for efficiency.  But... does anybody
ever do that?  All static languages I've met so far distinguish
for instance between fixnums and bignums.  And all library
functions take fixnums.  Take any static language of your choice
and try to add 42 to a bignum.  It will be awkward: You'll first
have to convert 42 to a bignum.  And if you call a library
function that takes a fixnum you'll have to explicitly convert
back every bignum argument to a fixnum, handling all kinds of
exceptions along the way.  And that is not enough: We might have
to take rationals into account, so we can do something like

CL-USER 1 > (defparameter *a* 5)
*A*

CL-USER 2 > *a*
5

CL-USER 3 > (/ *a* 3)
5/3

CL-USER 4 > (setq *a* (/ *a* 3))
5/3

CL-USER 5 > (setq *a* (* 3 *a*))
5

So, we should have calculated with rationals, and only rationals,
to begin with.  So, now, our whole program has to deal with
rationals and not with integers (otherwise we'd quickly go nuts
being forced to convert fixnums, bignums and rationals into each
other).  This means, I now have to rewrite all of my software to
compute not only with bignums but with rationals.  And God forbid even
a single float shows up somewhere...

Of course, all ``correctness'' is gone anyway, now: Every number,
even 0, is an arbitrary precision rational now; I only have to
``cast'' whenever I want to call a library function -- what can
the type checker possibly catch?  And this is different from Lisp:
When I know the argument of the function FOO will always be a
fixnum and I want more efficiency, I'll declare the argument to
be a fixnum within the definition of FOO and be done with it; but
in a static language, I'd have to change /every call/ to FOO so
it casts the argument to a fixnum.
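
A minimal sketch of the Lisp side of this, with FOO and its body purely
hypothetical:

  (defun foo (n)
    (declare (fixnum n) (optimize speed))
    (* n 2))   ; only FOO's definition changes; its callers stay untouched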

Of course, along the way, you'll forget about 37 calls to FOO,
and the type checker will stop you for every single one of them.
And when you're done with all 37 error messages, and have added
all your 37 casts, you'll think: ``Thank God for my type checker.
He just saved me the time for finding 37 bugs.''

Or maybe not?

Regards,
-- 
Nils Goesche
Ask not for whom the <CONTROL-G> tolls.

PGP key ID #xD26EF2A0
From: Hartmann Schaffer
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3d843920@news.sentex.net>
In article <··············@darkstar.cartan>,
	Nils Goesche <···@cartan.de> writes:
> ...
> ever do that?  All static languages I've met so far distinguish
> for instance between fixnums and bignums.  And all library
> functions take fixnums.

not really surprising.  practically all of them follow pretty closely
what can be efficiently mapped into the available hardware

hs

-- 

don't use malice as an explanation when stupidity suffices
From: Ray Blaak
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <uznuhkiim.fsf@telus.net>
Nils Goesche <···@cartan.de> writes:
> Yes, correctness and efficiency are the goals of static typing.
> The correctness part however is probably a myth (see the plist
> example I posted in this thread, for instance).  So we are left
> with efficiency.

Complete correctness is of course a myth.  Significant help? That depends on
how the program is structured. If you write it like below, then no:

> Of course, all ``correctness'' is gone anyway, now: Every number,
> even 0, is an arbitrary precision rational now; I only have to
> ``cast'' whenever I want to call a library function -- what can
> the type checker possibly catch?  And this is different from Lisp:
> When I know the argument of the function FOO will always be a
> fixnum and I want more efficiency, I'll declare the argument to
> be a fixnum within the definition of FOO and be done with it; but
> in a static language, I'd have to change /every call/ to FOO so
> it casts the argument to a fixnum.

What tends to be done instead is to have a "region" where one knows that the
restricted type will be used heavily (i.e. some inner part of some numeric
algorithm). One casts at the boundaries, and within the region the type
checking and efficiency gains are done to good effect.
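
A rough Common Lisp analogue of that idea, purely illustrative (the function
name and the choice of double-floats are assumptions of mine):

  (defun mean (seq)
    ;; boundary: convert whatever sequence we were given, once
    (let ((v (map '(simple-array double-float (*))
                  (lambda (x) (float x 1d0)) seq)))
      (declare (type (simple-array double-float (*)) v))
      ;; region: the element type is now known, so the arithmetic can be open-coded
      (/ (loop for x of-type double-float across v sum x)
         (length v))))   ; assumes a non-empty sequence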

> Of course, along the way, you'll forget about 37 calls to FOO,
> and the type checker will stop you for every single one of them.
> And when you're done with all 37 error messages, and have added
> all your 37 casts, you'll think: ``Thank God for my type checker.
> He just saved me the time for finding 37 bugs.''
> 
> Or maybe not?

Not this time. One should use their tools wisely. If for a particular problem
the tool is not helping, only impeding, then ditch the tool. That can
certainly include static typing.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
·····@telus.net                                The Rhythm has my soul.
From: Gareth McCaughan
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <slrnao4q6g.r10.Gareth.McCaughan@g.local>
Erik Naggum wrote:

> * Gareth McCaughan
> | I agree that static typing doesn't cause the problem.  (That was my point.)
> 
>   I think my point has been muddled.  When I said that this (/ int int) -> int
>   is /a consequence of/ static typing, that does not mean that it is not also
>   a consequence of more factors.  It means that if you choose static typing,
>   you will also make this kind of design choice.  In particular, if you choose
>   types that are close to the machine, (/ int int) -> int is the obvious choice
>   because the hardware that you have chosen to model does precisely that.

If the only reason for static typing were efficiency, I'd
agree: but I think it's the obsession with efficiency, not
the static typing, that's responsible. As a supporting
data point, I observe that Haskell is strongly typed but
does int/int -> rational.

What int/int is mostly symptomatic of, I think, isn't exactly
either static typing or efficiency obsession. I think it's
symptomatic of a desire to be close to the machine, which
isn't quite the same as either. (BCPL was close to the machine
but not exactly statically typed; Haskell is statically typed
but not close to the machine. Java is close to the machine
but not efficient; Common Lisp is efficient but not close to
the machine.)

>   If, however, you think in mathematical terms, you do not have (/ int int)
>   to begin with, you have (/ number number), and the result is of type
>   number, but this would not aid efficiency at all!  Since better efficiency
>   is a goal of the application of most type theories, type systems that
>   do not consider all (numeric) types disjoint are basically worthless.

I suspect you don't mean what I think you're saying here,
but I can't tell which end the problem is at :-). Common Lisp
has a type system; that type system does aid efficiency;
but it doesn't consider all numeric types disjoint.

It's sort of true that if you're doing static types then
you need to consider types to be disjoint, in that you
need to know *the* type of every expression in the program,
but that's perfectly compatible with having subtype relations
too. Integer can be a subtype of Rational even if everything
gets typed either as Integer or as Ratio.

> * Ray Blaak
> | (truncate (/ x y))
> 
> * Gareth McCaughan
> | Of course.
> 
>   Pardon me, but (truncate (/ x y)) is stupid when (truncate x y) expresses
>   the operation better and even returns the two values that machine division
>   instructions routinely produce instead of having to compute a new, second
>   return value.  Note the alternatives `floor', `ceiling', and `round',
>   as well.

I don't think "truncate x y" *does* express the operation
better. It expresses it *well*, but so does the alternative.
Note that in mathematics there is no special symbol for
"integer part of the quotient of", nor "nearest integer to
the quotient of", nor either of the other two. Mathematicians'
notation isn't always optimal even for mathematics, never mind
for programming, so all this is intended to show is that the
(truncate (/ x y)) notation is perfectly usable.

Of course, (truncate (/ x y)) feels inefficient. But it doesn't
take a specially smart compiler to make that feeling an illusion.

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Erik Naggum
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3240949276974621@naggum.no>
* Gareth McCaughan
| It's sort of true that if you're doing static types then you need to
| consider types to be disjoint, in that you need to know *the* type of every
| expression in the program, but that's perfectly compatible with having
| subtype relations too.  Integer can be a subtype of Rational even if
| everything gets typed either as Integer or as Ratio.

  One of the major problems I have with strong typing is precisely that it
  flies in the face of another pretentious theory, object-orientation, which
  supposedly should have a type hierarchy and run-time dispatch on the type of
  the actual object.  The two theories seem to be seriously at odds.  What we
  have in Common Lisp is a thoroughly object-oriented approach.  This should be
  good, but somehow the strongly-typed nutjobs go "eep, eep" (thanks, Tim) and
  seem to ignore that their theories are all bunk if they cannot handle a type
  hierarchy.  Even before I started programming in Common Lisp, I found the
  desire to know /the/ (single) type of every expression to be suspect and the
  theories wanting when they made that premise.  As if you could not reason
  about types unless you had only one type!  As if you could not operate with
  union types created on the fly!  All bunk, I say.
  
| Of course, (truncate (/ x y)) feels inefficient. But it doesn't take a
| specially smart compiler to make that feeling an illusion.

  Not so.  Please remember the secondary return value.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Gareth McCaughan
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <slrnao65hc.12b.Gareth.McCaughan@g.local>
Erik Naggum wrote:

>   One of the major problems I have with strong typing is precisely that it
>   flies in the face of another pretentious theory, object-orientation, which
>   supposedly should have a type hierarchy and run-time dispatch on the type of
>   the actual object.  The two theories seem to be seriously at odds.  What we
>   have in Common Lisp is a thoroughly object-oriented approach.  This should be
>   good, but somehow the strongly-typed nutjobs go "eep, eep" (thanks, Tim) and
>   seem to ignore that their theories are all bunk if they cannot handle a type
>   hierarchy.  Even before I started programming in Common Lisp, I found the
>   desire to know /the/ (single) type of every expression to be suspect and the
>   theories wanting when they made that premise.  As if you could not reason
>   about types unless you had only one type!  As if you could not operate with
>   union types created on the fly!  All bunk, I say.

What statically typed languages with OO do, in effect,
is to say: There's a dividing line cutting more or less
"horizontally" across the type hierarchy; the compiler
needs to work out the upper portion of everything's type,
but runtime dispatch can cope with everything below the
line. (The picture I have in my head: there's some sort
of lattice of types, and every type corresponds to a set
of nodes in the lattice, namely those you can reach from
"the" type by going upwards in the lattice. The compiler
needs to know, for each object, the set of nodes-above-the-line
corresponding to its type.)

The idea, obviously, is that decisions "above the line"
correspond to bigger differences of representation than
decisions "below the line", in the implementation.

That isn't as neat as either "You don't need any types
at compile time" or "You need to know all types completely
at compile time", but it doesn't seem too unprincipled.

This view of static-typing-plus-OO makes me wonder whether
a language might want to allow the user to choose where
the line is drawn. So, you get to choose an "upward-closed"
set of types; in exchange for having to write your program
so that the compiler can identify every expression's type
to that extent, you get the assurance that decisions about
those types will be made efficiently, that you'll be told
of anything in your program that can be shown not to make
sense by considering those types, and maybe even that
objects whose types are completely represented by the
information "above the line" will be implemented in as
efficient a way as your compiler knows how.

Ideally, you'd be able to choose to have nothing at all
above the line, in which case you'd get CL-like dynamic
typing; or to have everything above the line, in which
case you get the promise of no dynamic dispatch of any
sort anywhere. Making the language *usable* in both
circumstances might prove tricky :-). (Specifically,
implementing its standard library would be interesting.)

I'm rambling. I'll stop. :-)

[I said:]
> | Of course, (truncate (/ x y)) feels inefficient. But it doesn't take a
> | specially smart compiler to make that feeling an illusion.
> 
>   Not so.  Please remember the secondary return value.

I hadn't forgotten it. I had, however, neglected to
consider exactly what it is. I was thinking:
(truncate <rational>) can return two values, the
truncated part and the remainder. But the remainder
isn't the same as the remainder you get from CL's
TRUNCATE, and it *would* take quite a smart compiler
to notice what's going on if you say something along
the lines of

    (multiple-value-bind (q r) (truncate (/ x y))
      (let ((r1 (* r y)))
        ...))

let alone cases where you do without the binding of R1.
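
Numerically, with x = 7 and y = 2, for the sake of concreteness:

    (truncate (/ 7 2))  =>  3, 1/2   ; R is the fractional part of the ratio
    (truncate 7 2)      =>  3, 1     ; the remainder proper; note (* 1/2 2) => 1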

So: I recant. CL's division operators are indeed
better than having a single division operator and
a range of single-argument real->integer converters.
(Though even that would be an improvement on what
most languages offer.)

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Ray Blaak
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <uvg57f57p.fsf@telus.net>
Erik Naggum <····@naggum.no> writes:
>   One of the major problems I have with strong typing is precisely that it
>   flies in the face of another pretentious theory, object-orientation, which
>   supposedly should have a type hierarchy and run-time dispatch on the type of
>   the actual object.  The two theories seem to be seriously at odds.  

They are not in actuality, however. Plenty of statically typed languages work
with type hierarchies and have sufficient run-time information to allow
dispatching. These concepts do not contradict each other.

All static typing is really about is looking at any point in the source and
asking "what do we know now?". The answers instruct what can be optimized and
what violations can be reported. Sometimes one knows more and sometimes less.

When moving from a more restricted to a more general type, less is known
statically, so information must instead necessarily be maintained at
runtime. This information can be used for error checking, dispatching,
debugging, reflection, etc.

There is nothing wrong or contradictory with a static system that uses CL's
approach of complete maintenance of type information at run-time. Static
languages like C/C++ that eschew (most) runtime information are not
representative of all static languages. That approach only makes sense if
one's sole purpose for using static type checking is efficiency.

>   What we have in Common Lisp is a thoroughly object-oriented approach.  This
>   should be good, but somehow the strongly-typed nutjobs go "eep, eep"
>   (thanks, Tim) and seem to ignore that their theories are all bunk if they
>   cannot handle a type hierarchy.

Their theories would indeed be bunk if they couldn't handle type hierarchies.
Decent static languages can handle them, though. CL's approach *is* good.

--
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
·····@telus.net                                The Rhythm has my soul.
From: Tim Bradshaw
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <ey3u1kr3vxl.fsf@cley.com>
* Ray Blaak wrote:

> They are not in actuality, however. Plenty of statically typed languages work
> with type hierarchies and have sufficient run-time information to allow
> dispatching. These concepts do not contradict each other.

> All static typing is really about is looking at any point in the source and
> asking "what do we know now?". The answers instruct what can be optimized and
> what violations can be reported. Sometimes one knows more and
> sometimes less.

My understanding of this whole thing (which may be incorrect) is that
the SSL - or as we might call them `eep eep' - languages need to know
the *actual concrete type* of things all the way through before they
are happy about compiling.  So consider something like:

(defmethod foo ((x number) (y number))
  (* (+ x y) (- x y) (/ x y) (* x y)))

Something like this pretty much can't be compiled at all by an eep eep
language because, clearly, without much more knowledge of the types
involved, you know almost nothing about whether you might get runtime
errors or not.  Instead, you'd have to build some vast compile-time
type signature for it, and then check for each call whether you knew
enough about things to compile a version for that call.

The other thing that these languages must rely on, it seems to me, is
an enormous closed-world assumption, because you need to know, at
compile time, the whole type signature of everything, or you can never
get anywhere.
the above GF without knowing that there's another method:

(defmethod foo ((x integer) y)
  (declare (ignore y))
  (/ x 0))

Of course a system which merely tries to make use of the information
it can get at compile time can cope fine, but the kind of claim that
seems to be made by the eep eep people is that you can really get
enough information about types such that the system can say `there
will never be runtime type errors' for realistic programs.  That's a
very strong claim, and if it's to be true without *complete*
compile-time type knowledge it needs to rely on all sorts of monotonic
behaviour of subtypes which isn't, I think, actually true for things
like numbers, let alone user-defined type lattices.

--tim
From: Ray Blaak
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <9183fc69.0209161251.7727c1fd@posting.google.com>
Tim Bradshaw <···@cley.com> wrote in message news:<···············@cley.com>...
> Of course a system which merely tries to make use of the information
> it can get at compile time can cope fine

Exactly. This is the way to do things.

> but the kind of claim that
> seems to be made by the eep eep people is that you can really get
> enough information about types such that the system can say `there
> will never be runtime type errors' for realistic programs.  That's a
> very strong claim

A very strong claim indeed. It is in fact false in general, and almost
immediately in realistic programs. "Real" static languages almost
always have some sort of runtime checking work to do, for otherwise
they would not be usable.

Here's a task: find me a non-toy static language that actually asserts
such a claim. I would be interested in finding a counter example that
breaks it, even with whole program analysis.

Cheers,
Ray Blaak
·····@telus.net
From: Tim Bradshaw
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <ey33cs9xhch.fsf@cley.com>
* Ray Blaak wrote:

> Here's a task: find me a non-toy static language that actually asserts
> such a claim. I would be interested in finding a counter example that
> breaks it, even with whole program analysis.

I haven't said that the *languages* make such a claim, I've said that
(some of) their supporters make such a claim.  This is obviously
foolish of me because in order to prove my claim I need to kill the
supporters to prevent them going back on (or `clarifying') their
claims when they become inconvenient.  However, this is OK - killing
aliens is allowed in this kind of movie, right?

--tim
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <lkadmg4xm4.fsf@pc022.bln.elmeg.de>
·····@telus.net (Ray Blaak) writes:

> Tim Bradshaw <···@cley.com> wrote in message news:<···············@cley.com>...

> > but the kind of claim that seems to be made by the eep eep people
> > is that you can really get enough information about types such
> > that the system can say `there will never be runtime type errors'
> > for realistic programs.
> 
> "Real" static languages almost always have some sort of runtime
> checking work to do, for otherwise they would not be usable.
> 
> Here's a task: find me a non-toy static language that actually
> asserts such a claim. I would be interested in finding a counter
> example that breaks it, even with whole program analysis.

Now I'm surprised.  Take SML or OCaml, for instance (at least SML is
certainly not a toy language, I think).  What runtime type error could
happen in these languages (assuming that you treat warnings as errors
and will not get a ``match failure'')?  Of course, the following could
be regarded as a runtime type error of sorts:

type blark = Foo of int
           | Bar of string;;

let some_fun = function
    Foo n -> <do something with n>
  | _ -> raise CrashTheSystem;;

Well, at least in my eyes it is just a runtime type error, but I think
most fans of static typing would say it isn't.  Or do you mean
something else?

Regards,
-- 
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Marco Antoniotti
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <y6c65x4yal0.fsf@octagon.mrl.nyu.edu>
Nils Goesche <······@cartan.de> writes:

> ·····@telus.net (Ray Blaak) writes:
> 
> > Tim Bradshaw <···@cley.com> wrote in message news:<···············@cley.com>...
> 
> > > but the kind of claim that seems to be made by the eep eep people
> > > is that you can really get enough information about types such
> > > that the system can say `there will never be runtime type errors'
> > > for realistic programs.
> > 
> > "Real" static languages almost always have some sort of runtime
> > checking work to do, for otherwise they would not be usable.
> > 
> > Here's a task: find me a non-toy static language that actually
> > asserts such a claim. I would be interested in finding a counter
> > example that breaks it, even with whole program analysis.
> 
> Now I'm surprised.  Take SML or OCaml, for instance (at least SML is
> certainly not a toy language, I think).  What runtime type error could
> happen in these languages (assuming that you treat warnings as errors
> and will not get a ``match failure'')?  Of course, the following could
> be regarded as a runtime type error of sorts:
> 
> type blark = Foo of int
>            | Bar of string;;
> 
> let some_fun = function
>     Foo n -> <do something with n>
>   | _ -> raise CrashTheSystem;;
> 
> Well, at least in my eyes it is just a runtime type error, but I think
> most fans of static typing would say it isn't.  Or do you mean
> something else?

The classic example of runtime error you can get in *ML languages is a
non-exhaustive match on a data type.

You get a warning in Ocaml and ML (and others as well, I suppose) but
not a compiler error.

# type zut = X of int | Y of int list | Z of string;;
type zut = X of int | Y of int list | Z of string

# let gnao (X y) = y + 4;;
Characters 9-22:
Warning: this pattern-matching is not exhaustive.
Here is an example of a value that is not matched:
(Z _|Y _)
  let gnao (X y) = y + 4;;
           ^^^^^^^^^^^^^
val gnao : zut -> int = <fun>

# gnao (X 4);;
- : int = 8

# gnao (Y [3]);;
Exception: Match_failure ("", 9, 22).

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group        tel. +1 - 212 - 998 3488
715 Broadway 10th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA                 http://bioinformatics.cat.nyu.edu
                    "Hello New York! We'll do what we can!"
                           Bill Murray in `Ghostbusters'.
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <lkwupk3d4n.fsf@pc022.bln.elmeg.de>
Marco Antoniotti <·······@cs.nyu.edu> writes:

> Nils Goesche <······@cartan.de> writes:
> 
> > Take SML or OCaml, for instance.  What runtime type error could
> > happen in these languages (assuming that you treat warnings as
> > errors and will not get a ``match failure'')?
> 
> The classic example of runtime error you can get in *ML languages is
> a non-exhaustive match on a data type.
> 
> You get a warning in Ocaml and ML (and others as well, I suppose) but
> not a compiler error.

Ok, but let's be fair.  Assuming you treat warnings as errors; then
you should get no match failures at all.  Any other kind of runtime
type errors?

Regards,
-- 
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Marco Antoniotti
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <y6celbswtpk.fsf@octagon.mrl.nyu.edu>
Nils Goesche <······@cartan.de> writes:

> Marco Antoniotti <·······@cs.nyu.edu> writes:
> 
> > Nils Goesche <······@cartan.de> writes:
> > 
> > > Take SML or OCaml, for instance.  What runtime type error could
> > > happen in these languages (assuming that you treat warnings as
> > > errors and will not get a ``match failure'')?
> > 
> > The classic example of runtime error you can get in *ML languages is
> > a non-exhaustive match on a data type.
> > 
> > You get a warning in Ocaml and ML (and others as well, I suppose) but
> > not a compiler error.
> 
> Ok, but let's be fair.  Assuming you treat warnings as errors; then
> you should get no match failures at all.  Any other kind of runtime
> type errors?

To be completely fair, I believe that the type inferencing provided by
the *ML class of languages is a great thing.  However, it comes at too
great a cost for *my* tastes.

As for the specific example, I honestly do not think you can get
runtime errors of the kind you get in CL with a *ML language if you
treat all warnings as errors.

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group        tel. +1 - 212 - 998 3488
715 Broadway 10th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA                 http://bioinformatics.cat.nyu.edu
                    "Hello New York! We'll do what we can!"
                           Bill Murray in `Ghostbusters'.
From: Ray Blaak
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <uelbr92k9.fsf@telus.net>
Nils Goesche <······@cartan.de> writes:
> ·····@telus.net (Ray Blaak) writes:
> > "Real" static languages almost always have some sort of runtime
> > checking work to do, for otherwise they would not be usable.
> 
> Now I'm surprised.  Take SML or OCaml, for instance (at least SML is
> certainly not a toy language, I think).  What runtime type error could
> happen in these languages

I don't know *ML very well, but the usual errors I had in mind involve some
attempt at converting a value of a general type to that of a more restricted
type, i.e., downcasting. Dividing by 0 is another classic one.

> let some_fun = fun
>     Foo n -> <do something with n>
>   | _ -> raise CrashTheSystem;;
> 
> Well, at least in my eyes it is just a runtime type error, but I think
> most fans of static typing would say it isn't.  Or do you mean
> something else?

Often the only difference between a language runtime error and a user defined
one such as above is that the language runtime error is thrown by a function
that happens to be defined in a language library instead of a user-defined
one.

That is, the difference is ultimately meaningless.

I suppose that people usually think of the compiler generating special "raw
code" to implement runtime checks, but that really should be thought of as an
implementation detail. One way or the other error situations can occur at
runtime that need to be checked for.

Anyway, my 30 second google search gave this page for ML error situations:

  http://www-2.cs.cmu.edu/People/rwh/introsml/core/exceptions.htm

--
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
·····@telus.net                                The Rhythm has my soul.
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <878z23w0x3.fsf@darkstar.cartan>
Ray Blaak <·····@telus.net> writes:

> Their theories would indeed be bunk if they couldn't handle
> type hierarchies.  Decent static languages can handle them,
> though. CL's approach *is* good.

So, Common Lisp is in fact statically typed with type t as a
default for everything? :)

Regards,
-- 
Nils Goesche
Ask not for whom the <CONTROL-G> tolls.

PGP key ID #xD26EF2A0
From: Ray Blaak
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <uwuplkhol.fsf@telus.net>
Nils Goesche <···@cartan.de> writes:
> Ray Blaak <·····@telus.net> writes:
> 
> > Their theories would indeed be bunk if they couldn't handle
> > type hierarchies.  Decent static languages can handle them,
> > though. CL's approach *is* good.
> 
> So, Common Lisp is in fact statically typed with type t as a
> default for everything? :)

You might be mocking, but the static vs dynamic typing divide is actually a
false one: there is only a typing continuum from one extreme to another.

It's all a matter of what one declares to the compiler/interpreter and what is
known by default. Your standard strict static language requires everything to
be specified. Your typical dynamic language does not. Some are in between.

In that sense, then yes, CL could be viewed that way.

My point, though, was that CL's object approach is not at all inconsistent with
static typing. Any strong static typing fanatic who objects to CL's typing
system on theoretical grounds doesn't properly understand how type hierarchies
can be dealt with.

Compare Ada, Java and Dylan for various ways of dealing with subtype
hierarchies in a static manner.  Dylan's approach in particular is quite
consistent and general: dynamic or static, according to the programmer's
choice, such that statically known types participate in a complete type
hierarchy. Java's interfaces let an object be considered as any one of a
truly disjoint set of types simultaneously. Ada does not have any root object
type at all, but has both builtin and programmer-defined subtype hierarchies.
All of them maintain the necessary runtime information to implement needed
runtime checks, to perform dispatching, etc.

I really don't understand what the freaks on either side of the debate are so
upset about.  Use what you wish, understand the tradeoffs, and most
importantly, get your work done without making too many stupid mistakes.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
·····@telus.net                                The Rhythm has my soul.
From: Ray Blaak
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <ur8fvf4tw.fsf@telus.net>
Erik Naggum <····@naggum.no> writes:
>   One of the major problems I have with strong typing is [...]

Just to clarify my reply, I was reading "strong" to mean "strong and *static*"
here, for otherwise the post does not make sense: CL is strongly typed, after
all.

--
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
·····@telus.net                                The Rhythm has my soul.
From: Matthew Danish
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <20020911035635.H23781@lain.res.cmu.edu>
On Tue, Sep 10, 2002 at 04:35:35PM -0400, Oleg wrote:
> The type checker is there for a reason: one is execution speed, another 
> reason is reliability. While a type checker can never guarantee that your 
> program will do what you wanted, it removes a *great* deal of bugs: in C++, 
> I would frequently have this bug when I divide one int by another and treat 
> the result as a float. That's a very nasty type of bug, especially in any 

Fortunately, in Lisp, dividing one integer by another results in a rational
number.  While doing so in O'Caml also results in a rational number, the
answer it does result in is not so "rational" :-).  int * int -> int is
somewhat of a mistake, don't you think?  What is 7/4?
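
From a CL listener, for what it's worth:

  (/ 7 4)         =>  7/4
  (truncate 7 4)  =>  1, 3   ; the int/int->int answer, when you actually ask for it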

> kind of scientific/numeric application. Lisp is such an extreme case that 
> wouldn't even prevent you from dividing an int by a string in some branch 
> of code! 

Material scientists tend to classify various substances into two
categories: "brittle" and "ductile".  Brittle materials are relatively
strong, but when a force is applied from particular directions (or an
unusually strong force is applied) they tend to experience extreme
plastic deformation and failure.  Ductile materials, on the other hand,
tend to accept far more abuse while only deforming elastically (they
return to their original shape).

In small structures, where tight specifications are needed, the
unyielding rigidity of brittle materials may prove useful.

But for large structures, or structures expected to experience
unpredictably applied forces, choosing a brittle material is
disastrous.

Did you know that all tall buildings and large bridges sway?  Or that
airplane wings flex, and the skin of the fuselage can tolerate a 3 foot
crack without catastrophic failure?  Why are car tires made of rubber,
instead of concrete?  Surely concrete will maintain a circular shape far
better than rubber would.  

I will leave as an exercise for the reader the application of this
metaphor to the current discussion.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: ····@sivos.rl.no
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <m3it1ce5oi.fsf@sivos.rl.no>
Matthew Danish:


> Brittle materials are relatively strong, but when a force is applied
> from particular directions (or an unusually strong force is applied)
> they tend to experience extreme plastic deformation and failure.
> Ductile materials, on the other hand, tend to accept far more abuse
> while only deforming elastically (they return to their original
> shape).

Hmm, I am very surprised.

I once learned that a ductile material can take a lot of _plastic_
deformation before fracturing. Plastic deformation does of course mean
that it does not at all return to its original shape once its ductile
property has been exploited. Gold is a very ductile material, it can
be squeezed leaf thin without fracturing.

Brittle materials fracture with no or very little plastic flow.  Their
main characteristic is that they respond to increasing load with an
elastic deformation and then fracture. In contrast, ductile materials
respond with an elastic deformation, then plastic deformation and then
finally, fracture. In general; the more plastic deformation prior to
fracture, the more ductile. (This is a bit simplified, cross sectional
area reduction during the plastic flow plays a role here)

Brittle materials being relatively strong may be a little
far-fetched. They often show relatively low tensile strength. The
compressive strength may, however, be an order of magnitude higher.

Brittle materials are often counted among the "hard" materials,
but hardness is a different and quite complex animal.

> In small structures, where tight specifications are needed, the
> unyielding rigidity of brittle materials may prove useful.

Rigidity (stiffness) is often a combination of elasticity and
geometry. Geometry may be more important than the material. In any
case, the Young's modulus is likely to be more important than the
ductility.


> But for large structures, or structures expected to experience
> unpredictably applied forces, choosing a brittle material is
> disasterous.

Quite a few large structures, bridges included, built in the 1800s
still stand, even those built from ordinary cast iron parts. Cast iron
(gray iron) is, as you know, not very ductile.  I absolutely agree on
ductility being an important property when it comes to preventing
cracks from advancing. The key factor is high energy absorption during
plastic deformation in the front area of a propagating crack.  Plastic
deformation that is, the material does not return to its original
shape.


> Did you know that all tall buildings and large bridges sway?  Or that
> airplane wings flex, and the skin of the fuselage can tolerate a 3 foot
> crack without catastrophic failure?  

Flexing is not (at least not directly) related to ductility.  Rubber,
as in rubber tires, is flexible but not very ductile. In the mid 1950s
brittle fractures in aluminium caused the loss of at least two (Comet)
aircraft. Brittle fractures also sent early steel ships to the sea
bed, even though they were built in mild, and supposedly ductile,
steel. Also, quite a few naval ships were lost during world war two
due to brittle fractures.

> I will leave as an exercise for the reader the application of this
> metaphor to the current discussion.

Giving Lisp illiterates like myself a hard time, right :-)

-- 
Odd Skjaeveland ····@sivos.rl.no
From: Tim Bradshaw
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <ey3fzwfekby.fsf@cley.com>
* odds  wrote:

> Quite a few large structures, bridges included, built in the 1800s
> still stand, even those built from ordinary cast iron parts. Cast iron
> (gray iron) is, as you know, not very ductile.  I absolutely agree on
> ductility being an important property when it comes to preventing
> cracks from advancing. The key factor is high energy absorption during
> plastic deformation in the front area of a propagating crack.  Plastic
> deformation that is, the material does not return to its original
> shape.

I live quite close to one famous example - the Forth rail bridge - and
I think that the reason it is still up is due to good victorian
engineering practices: `how thick do we think this needs to be?
right, so let's make it ten times as thick as that and then we'll be
OK'.  I think this construction style was quite influenced by the
earlier Tay bridge, which, almost equally famously, isn't there any
more.

(I liked your article BTW, it's good to hear actual proper engineering
use of these terms we bandy about.  The notion of brittleness is, I
think, important to programming, and I think it is quite similar to
the engineering term - programs tend to undergo sudden catastrophic
failure, and this is not a good feature.)

--tim
From: Matthew Danish
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <20020912205905.M23781@lain.res.cmu.edu>
On Wed, Sep 11, 2002 at 08:34:37PM +0200, ····@sivos.rl.no wrote:
> Matthew Danish:
> > Brittle materials are relatively strong, but when a force is applied
> > from particular directions (or an unusually strong force is applied)
> > they tend to experience extreme plastic deformation and failure.
> > Ductile materials, on the other hand, tend to accept far more abuse
> > while only deforming elastically (they return to their original
> > shape).
> 
> Hmm, I am very surprised.
> 
> I once learned that a ductile material can take a lot of _plastic_
> deformation before fracturing. Plastic deformation does of course mean
> that it does not at all return to its original shape once its ductile
> property has been exploited. Gold is a very ductile material, it can
> be squeezed leaf thin without fracturing.

I was thinking so much of failure that I neglected to mention the
plastic deformation stage of ductile materials!  Anyway, you are
correct, though the main point of this paragraph was that brittle
materials fracture quickly after excessive strain and ductile ones do not.

> Brittle materials fracture with no or very little plastic flow.  Their
> main characteristic is that they respond to increasing load with an
> elastic deformation and then fracture. In contrast, ductile materials
> respond with an elastic deformation, then plastic deformation and then
> finally, fracture. In general; the more plastic deformation prior to
> fracture, the more ductile. (This is a bit simplified, cross sectional
> area reduction during the plastic flow plays a role here)

This is a better explanation.

> Brittle materials being relatively strong may be a little
> far-fetched. They often show relatively low tensile strength. The
> compressive strength may, however, be an order of magnitude higher.

This disparity is what I meant to express using the phrase "forces
applied from particular directions".

> Brittle materials are often counted among the "hard" materials,
> but hardness is a different and quite complex animal.
> 
> > In small structures, where tight specifications are needed, the
> > unyielding rigidity of brittle materials may prove useful.
> 
> Rigidity (stiffness) is often a combination of elasticity and
> geometry. Geometry may be more important than the material. In any
> case, the Young's modulus is likely to be more important than the
> ductility.

OTOH, many materials with large Young's modulus are also brittle.
I'm no expert, but it would seem that a large jump in stress per unit
strain would indicate a material that is not going to experience plastic
deformation gracefully.

> > But for large structures, or structures expected to experience
> > unpredictably applied forces, choosing a brittle material is
> > disastrous.
> 
> Quite a few large structures, bridges included, built in the 1800s
> still stand, even those built from ordinary cast iron parts. Cast iron
> (gray iron) is, as you know, not very ductile.  I absolutely agree on
> ductility being an important property when it comes to preventing
> cracks from advancing. The key factor is high energy absorption during
> plastic deformation in the front area of a propagating crack.  Plastic
> deformation that is, the material does not return to its original
> shape.

But if one of those cast-iron bridges were to experience, say, an
extremely overweight truck, the bridge would be more likely to fracture
than to bend (even plastically).  Though, now that we are talking about
a more complicated structure, that is probably an oversimplification.

> > Did you know that all tall buildings and large bridges sway?  Or that
> > airplane wings flex, and the skin of the fuselage can tolerate a 3 foot
> > crack without catastrophic failure?  
> 
> Flexing is not (at least not directly) related to ductility.  Rubber,
> as in rubber tires, is flexible but not very ductile. In the mid 1950s

Aiee, I just threw that example in there without thinking about it.

> brittle fractures in aluminium caused the loss of at least two (Comet)

They use a particular alloy of aluminum nowadays, though I cannot recall
what it is.

> aircraft. Brittle fractures also sent early steel ships to the sea
> bed, even though they were built in mild, and supposedly ductile,
> steel. Also, quite a few naval ships were lost during world war two
> due to brittle fractures.

Temperature changes, I presume.


I've attempted comparisons between the mechanical and the computer
worlds a couple times, since I was taking mechanical engineering courses
for a year and a half before switching out.  But, I'd better get my
terms straight first!  Thanks for the input.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alpeut$epd$1@newsmaster.cc.columbia.edu>
Matthew Danish wrote:

> On Tue, Sep 10, 2002 at 04:35:35PM -0400, Oleg wrote:
>> The type checker is there for a reason: one is execution speed, another
>> reason is reliability. While a type checker can never guarantee that your
>> program will do what you wanted, it removes a great deal of bugs: in C++,
>> I would frequently have this bug when I divide one int by another and
>> treat
> 
>> the result as a float. That's a very nasty type of bug, especially in any
> 
> Fortunately, in Lisp, dividing one integer by another results in a
> rational number.  While doing so in O'Caml also results in a rational
> number, the answer it does result in is not so "rational" :-).  int * int
> -> int is somewhat of a mistake, don't you think?  What is 7/4?
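(For the record, a Common Lisp listener answers that question with an exact
rational, where O'Caml's integer division of 7 by 4 gives 1; a quick
illustrative transcript:)

    (/ 7 4)          ; => 7/4    -- an exact rational
    (float (/ 7 4))  ; => 1.75
    (floor 7 4)      ; => 1, 3   -- truncating quotient and remainder, if that is what you want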

Maybe people are different, but I don't remember ever needing rational 
numbers while writing programs.

Oleg
From: Greg Menke
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <m3fzwfs7vt.fsf@europa.pienet>
Oleg <············@myrealbox.com> writes:

> Matthew Danish wrote:
> 
> > On Tue, Sep 10, 2002 at 04:35:35PM -0400, Oleg wrote:
> >> The type checker is there for a reason: one is execution speed, another
> >> reason is reliability. While a type checker can never guarantee that your
> >> program will do what you wanted, it removes a great deal of bugs: in C++,
> >> I would frequently have this bug when I divide one int by another and
> >> treat
> > 
> >> the result as a float. That's a very nasty type of bug, especially in any
> > 
> > Fortunately, in Lisp, dividing one integer by another results in a
> > rational number.  While doing so in O'Caml also results in a rational
> > number, the answer it does result in is not so "rational" :-).  int * int
> > -> int is somewhat of a mistake, don't you think?  What is 7/4?
> 
> Maybe people are different, but I don't remember ever needing rational 
> numbers while writing programs.
> 

Try to do some accounting software in C.

Gregm
From: Bruce Hoult
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <bruce-069BC3.00491013092002@copper.ipg.tsnz.net>
In article <··············@europa.pienet>,
 Greg Menke <··········@toadmail.com> wrote:

> > Maybe people are different, but I don't remember ever needing rational 
> > numbers while writing programs.
> 
> Try to do some accounting software in C.

IEEE double precision floating point is perfectly fine for that purpose, 
assuming that you work with amounts denominated in cents and don't 
require totals over $90,071,992,547,409.91.
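(Where that figure comes from, and where exactness stops, checked at a Lisp
prompt purely for concreteness:)

    (/ (1- (expt 2 53)) 100)     ; => 9007199254740991/100, i.e. $90,071,992,547,409.91 in cents
    (= (+ (expt 2d0 53) 1d0)
       (expt 2d0 53))            ; => T -- one past 2^53, a double can no longer tell the difference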

-- Bruce
From: Joe Marshall
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <1y7zibuo.fsf@ccs.neu.edu>
Bruce Hoult <·····@hoult.org> writes:

> In article <··············@europa.pienet>,
>  Greg Menke <··········@toadmail.com> wrote:
> 
> > > Maybe people are different, but I don't remember ever needing rational 
> > > numbers while writing programs.
> > 
> > Try to do some accounting software in C.
> 
> IEEE double precision floating point is perfectly fine for that purpose, 
> assuming that you work with amounts denominated in cents and don't 
> require totals over $90,071,992,547,409.91.

But don't try to transfer funds between Cyprus and Turkey...
From: sv0f
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <none-1209021315450001@129.59.212.53>
In article <···························@copper.ipg.tsnz.net>, Bruce Hoult
<·····@hoult.org> wrote:

>In article <··············@europa.pienet>,
> Greg Menke <··········@toadmail.com> wrote:
>
>> > Maybe people are different, but I don't remember ever needing rational 
>> > numbers while writing programs.
>> 
>> Try to do some accounting software in C.
>
>IEEE double precision floating point is perfectly fine for that purpose, 
>assuming that you work with amounts denominated in cents and don't 
>require totals over $90,071,992,547,409.91.

Well, a single bit is sufficient if it's just you and Ogg
fighting over a proto-penny.
From: Greg Menke
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <m3hegv9fzz.fsf@europa.pienet>
Bruce Hoult <·····@hoult.org> writes:

> In article <··············@europa.pienet>,
>  Greg Menke <··········@toadmail.com> wrote:
> 
> > > Maybe people are different, but I don't remember ever needing rational 
> > > numbers while writing programs.
> > 
> > Try to do some accounting software in C.
> 
> IEEE double precision floating point is perfectly fine for that purpose, 
> assuming that you work with amounts denominated in cents and don't 
> require totals over $90,071,992,547,409.91.
> 

While a particular value might be accurately represented with floating
point, it doesn't mean all the values you come across will be.  If you
start doing amortizations of various kinds using floats, many will
work just fine but once the magic inflection point is reached, you
will experience an insidious loss of precision in the least
significant digits of your results.

Then you'll implement the round-the-next-lowest-significant-place
trick- which will fix some of the errors but expose others in
different parts of the system.  The errors will move around depending
on all sorts of factors which you will exhaustively track down one by
one and add more code to fix.  Ultimately you'll end up implementing
your own arithmetic routines & homebrew numeric-like types (at >= 10x
the overhead) that together accumulate enough crufty magic to <just>
manage to get the math to come out right and heaven help the person
who has to maintain it later.  Since you can't overload C operators,
you'll end up having done all your math using functions, arranged just
as you would have done in Lisp- but with performance crippled beyond
your wildest dreams.

Gregm
From: Bruce Hoult
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <bruce-4230BF.14044713092002@copper.ipg.tsnz.net>
In article <··············@europa.pienet>,
 Greg Menke <··········@toadmail.com> wrote:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > In article <··············@europa.pienet>,
> >  Greg Menke <··········@toadmail.com> wrote:
> > 
> > > > Maybe people are different, but I don't remember ever needing rational 
> > > > numbers while writing programs.
> > > 
> > > Try to do some accounting software in C.
> > 
> > IEEE double precision floating point is perfectly fine for that purpose, 
> > assuming that you work with amounts denominated in cents and don't 
> > require totals over $90,071,992,547,409.91.
> > 
> 
> While a particular value might be accurately represented with floating
> point, it doesn't mean all the values you come across will be.  If you
> start doing amortizations of various kinds using floats, many will
> work just fine but once the magic inflection point is reached, you
> will experience an insidious loss of precision in the least
> significant digits of your results.
> 
> Then you'll implement the round-the-next-lowest-significant-place
> trick- which will fix some of the errors but expose others in
> different parts of the system.  The errors will move around depending
> on all sorts of factors which you will exhaustively track down one by
> one and add more code to fix.  Ultimately you'll end up implementing
> your own arithmetic routines & homebrew numeric-like types (at >= 10x
> the overhead) that together accumulate enough crufty magic to <just>
> manage to get the math to come out right and heaven help the person
> who has to maintain it later.  Since you can't overload C operators,
> you'll end up having done all your math using functions, arranged just
> as you would have done in Lisp- but with performance crippled beyond
> your wildest dreams.

You are incorrect.

IEEE doubles guarantee perfectly exact results for all operations on 
integers up to 2^53.  (other, of course, than divisions that do not 
produce integer results)

Even better, you have a choice of checking for overflow or other 
inexactness in a particular operation, checking for overflow or other 
inexactness anywhere in a sequence of operations via the "sticky" bit, 
or having an exception raised if there is overflow or other inexactness.  
This is all supported for free by the hardware on any IEEE conforming 
machine.

It's really rather amusing how many people know just enough about FP to 
know that naive use of FP for money is a very bad idea, but fail to 
understand that used correctly IEEE FP is extremely useful.
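(Common Lisp gives no portable handle on the IEEE sticky flags, but the
exactness claim itself is easy to cross-check against the language's exact
integer arithmetic; a small sketch with arbitrary numbers:)

    (let* ((a 123456789012345)
           (b 987654321)
           (fp-sum (+ (float a 1d0) (float b 1d0))))
      (= fp-sum (+ a b)))        ; => T -- the double-float sum is exact below 2^53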

-- Bruce
From: Raymond Wiker
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <861y7y4a9v.fsf@raw.grenland.fast.no>
Bruce Hoult <·····@hoult.org> writes:

> IEEE doubles guarantee perfectly exact results for all operations on 
> integers up to 2^53.  (other, of course, than divisions that do not 
> produce integer results)
> 
> Even better, you have a choice of checking for overflow or other 
> inexactness in a particular operation, checking for overflow or other 
> inexactness anywhere in a sequence of operations via the "sticky" bit, 
> or having an exception raised if there is overflow or other inexactness.  
> This is all supported for free by the hardware on any IEEE conforming 
> machine.
> 
> It's really rather amusing how many people know just enough about FP to 
> know that naive use of FP for money is a very bad idea, but fail to 
> understand that used correctly IEEE FP is extremely useful.

        Hum. You mentioned earlier the number 90071992547409.91. This
corresponds to the integer 2^53 - 1, divided by 100.

        It may be true that you can use the FP hardware to operate on
integers up to 2^53. It may even be true that you can consider this as
a scaled value, with the two LSD representing cents (or pennies, or øre,
or eurocents, or whatever).

        Note, however, that this is not sufficient to satisfy various
standards for financial calculations. There is an EU standard that
specifies 6 digits for the fractional part, which gives you a maximum
of 9007199254, which is well within the "Enron Range".
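(The same arithmetic, at a Lisp prompt:)

    (floor (1- (expt 2 53)) (expt 10 6))   ; => 9007199254, 740991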

-- 
Raymond Wiker                        Mail:  ·············@fast.no
Senior Software Engineer             Web:   http://www.fast.no/
Fast Search & Transfer ASA           Phone: +47 23 01 11 60
P.O. Box 1677 Vika                   Fax:   +47 35 54 87 99
NO-0120 Oslo, NORWAY                 Mob:   +47 48 01 11 60

Try FAST Search: http://alltheweb.com/
From: Bruce Hoult
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <bruce-A8692E.00092714092002@copper.ipg.tsnz.net>
In article <··············@raw.grenland.fast.no>,
 Raymond Wiker <·············@fast.no> wrote:

>         Hum. You mentioned earlier the number 90071992547409.91. This
> corresponds to the integer (2^53 - 1) / 100.
> 
>         It may be true that you can use the FP hardware to operate on
> integers up to 2^53. It may even be true that you can consider this as
> a scaled value, with the two LSD representing cents (or pennies, or øre,
> or eurocents, or whatever).
> 
>         Note, however, that this is not sufficient to satisfy various
> standards for financial calculations. There is a EU standard that
> specifies 6 digits for the fractional part, which gives you a maximum
> of 9007199254, which is well within the "Enron Range".

Not being in the EU I'm not aware of such a law.  However I'm sure you 
are correct that certain transactions or companies in certain countries 
would not be able to use IEEE doubles for money because of the limited 
number of significant figures.  That however is *not* the reason 
normally given for not using FP for money, which is rounding errors.

-- Bruce
From: Russell Wallace
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3d8b2553.247254500@news.eircom.net>
On 12 Sep 2002 15:15:12 -0400, Greg Menke <··········@toadmail.com>
wrote:

>While a particular value might be accurately represented with floating
>point, it doesn't mean all the values you come across will be.

Rational numbers aren't a universal solution for financial
calculations either. Consider 1.00 / 3; rational arithmetic will give
you the result 1/3, which is correct in mathematics but wrong in
accounting. Worse, if printed in a report formatted to 2 decimals, it
will _look_ like the correct answer (0.33) so you won't notice the
problem until later.

When doing accounting, you have to be explicit about when and how you
want to round anyway, whatever type of numbers you use.
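(A small illustration of that trap at a Lisp prompt; ~,2F is just the
two-decimal report formatting:)

    (/ 1 3)                       ; => 1/3      -- exact, but not a sum of cents
    (format nil "~,2F" (/ 1 3))   ; => "0.33"   -- looks plausible on the report
    (round (* 100 (/ 1 3)))       ; => 33, 1/3  -- the explicit rounding step, remainder and all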

-- 
"Mercy to the guilty is treachery to the innocent."
Remove killer rodent from address to reply.
http://www.esatclear.ie/~rwallace
From: Tim Bradshaw
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <ey33cs4lo9j.fsf@cley.com>
* Russell Wallace wrote:


> When doing accounting, you have to be explicit about when and how you
> want to round anyway, whatever type of numbers you use.

Yes, but you are likely better off by using system-provided exact
numbers and then rounding yourself, rather than using system-provided
inexact numbers and then trying to second-guess the rounding that is
going on anyway.  Better, I think, to have all the rounding explicit.

--tim
From: Russell Wallace
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3d8b3944.252360342@news.eircom.net>
On 20 Sep 2002 15:38:32 +0100, Tim Bradshaw <···@cley.com> wrote:

>Yes, but you are likely better off by using system-provided exact
>numbers and then rounding yourself, rather than using system-provided
>inexact numbers and then trying to second-guess the rounding that is
>going on anyway.  Better, I think, to have all the rounding explicit.

I agree; floating point for accounting is only good if you can stick
to the range within which you won't have rounding going on in the
background, otherwise it's better to use something else.

-- 
"Mercy to the guilty is treachery to the innocent."
Remove killer rodent from address to reply.
http://www.esatclear.ie/~rwallace
From: Erik Naggum
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3240844763646745@naggum.no>
*  Greg Menke
| Try to do some accounting software in C.

* Bruce Hoult
| IEEE double precision floating point is perfectly fine for that purpose

  Wrong answer.  I cannot /believe/ that people still think floating point is
  usable for accounting purposes.  Nothing really beats binary coded decimal
  for this kind of task.  Realizing that should be part of your education as
  a programmer.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Christopher Browne
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alqre6$bdb1$1@ID-125932.news.dfncis.de>
Centuries ago, Nostradamus foresaw when Erik Naggum <····@naggum.no> would write:
> *  Greg Menke
> | Try to do some accounting software in C.
>
> * Bruce Hoult
> | IEEE double precision floating point is perfectly fine for that purpose
>
>   Wrong answer.  I cannot /believe/ that people still think floating point is
>   usable for accounting purposes.  Nothing really beats binary coded decimal
>   for this kind of task.  Realizing that should be part of your education as
>   a programmer.

.. And this is why COBOL persists, to this day, despite how much
people despise it.  

COBOL has, as the essential numeric data type, what amounts to BCD.

Is it fabulously efficient?  Well, consider that IBM has been tuning
hardware to do BCD stuff for probably 40 years.  And that the old Z-80
CPU had BCD instructions.  As did M68K and 80x86.  

It's doubtless a LOT simpler to get a verified-to-be-correct BCD
arithmetic unit than is the case for FP, and I'd be willing to bet
that BCD takes up a _whopping_ lot less hardware than does FP.

You don't need to do a lot of numerical analysis to be _certain_ that
BCD ops are doing the right thing.

Hmm.  I wonder if anyone has ever implemented BIGNUMs in Lisp using
BCD.  (The evident fly in the ointment would be base conversions.)

In a "BCD world," instead of having DSP-like instructions do do funky
and/or/xor/... operations, you might wind up with instructions
specifically intended to do arbitrary-precision BCD ops with
parameterized lengths...
-- 
(concatenate 'string "cbbrowne" ·@ntlug.org")
http://www.ntlug.org/~cbbrowne/nonrdbms.html
To quote from a friend's conference talk:  "they told me that their
network was physically secure, so I asked them `then what's with all
these do-not-leave-valuables-in-your-desk signs?'".
-- Henry Spencer
From: Joe Marshall
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <y9a79d4s.fsf@ccs.neu.edu>
Christopher Browne <········@acm.org> writes:

> Hmm.  I wonder if anyone has ever implemented BIGNUMs in Lisp using
> BCD.  (The evident fly in the ointment would be base conversions.)

I once implemented FIXNUMs in a toy lisp this way.  FIXNUMs were 12
BCD digits, six to the left of the decimal point, six to the right.
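(Not the original code, of course; just a minimal sketch of what packing a
value into BCD nibbles looks like, for anyone who has not met the
representation:)

    (defun integer->bcd (n &optional (digits 12))
      "Pack the low DIGITS decimal digits of N into 4-bit fields, least significant first."
      (loop with bcd = 0
            for i below digits
            do (setf (ldb (byte 4 (* 4 i)) bcd) (mod n 10)
                     n (floor n 10))
            finally (return bcd)))

    (defun bcd->integer (bcd &optional (digits 12))
      (loop for i below digits
            for scale = 1 then (* scale 10)
            sum (* scale (ldb (byte 4 (* 4 i)) bcd))))

    ;; (integer->bcd 1955)                 => 6485, which is #x1955: one decimal digit per nibble
    ;; (bcd->integer (integer->bcd 1955))  => 1955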
From: Christopher Browne
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alr0dv$cvfl$1@ID-125932.news.dfncis.de>
In the last exciting episode, Joe Marshall <···@ccs.neu.edu> wrote::
> Christopher Browne <········@acm.org> writes:
>
>> Hmm.  I wonder if anyone has ever implemented BIGNUMs in Lisp using
>> BCD.  (The evident fly in the ointment would be base conversions.)

> I once implemented FIXNUMs in a toy lisp this way.  FIXNUMs were 12
> BCD digits, six to the left of the decimal point, six to the right.

Interesting...

That has the feeling of not being totally ANSI compliant, and it's
certainly not what people would usually _expect_...  :-)

How was performance? / Was this coded using native BCD operators?
-- 
(concatenate 'string "aa454" ·@freenet.carleton.ca")
http://cbbrowne.com/info/languages.html
"The  problem might  possibly be  to do  with the  fact that  asm code
written for the x86 environment  is, on other platforms, about as much
use  as  a   pork  pie  at  a  Jewish   wedding."-  Andrew  Gierth  in
comp.unix.programmer
From: Joe Marshall
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <Zl9g9.249197$kp.845852@rwcrnsc52.ops.asp.att.net>
"Christopher Browne" <········@acm.org> wrote in message ··················@ID-125932.news.dfncis.de...
> In the last exciting episode, Joe Marshall <···@ccs.neu.edu> wrote::
> > Christopher Browne <········@acm.org> writes:
> >
> >> Hmm.  I wonder if anyone has ever implemented BIGNUMs in Lisp using
> >> BCD.  (The evident fly in the ointment would be base conversions.)
>
> > I once implemented FIXNUMs in a toy lisp this way.  FIXNUMs were 12
> > BCD digits, six to the left of the decimal point, six to the right.
>
> Interesting...
>
> That has the feeling of not being totally ANSI compliant, and it's
> certainly not what people would usually _expect_...  :-)

Fortunately, I didn't have to worry about ANSI compliance because
I wrote it before X3J13.

> How was performance?

Not too bad, but nothing to write home about.  The important thing
was that a useful range of integers and fractions were representable
without having to use floating point (which was hard to come by).

> Was this coded using native BCD operators?

Yes.  The amusing thing was discovering that some odd and apparently
useless opcodes were just the ticket for writing BCD code.  I think
one was `rotate nybble indirect and subtract' which would do a four
bit shift between the accumulator and a memory location with a conditional
decrement.  For the life of me I couldn't understand why such an
instruction existed until I implemented my BCD division routine.
From: Bruce Hoult
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <bruce-F30338.13551113092002@copper.ipg.tsnz.net>
In article <·············@ID-125932.news.dfncis.de>,
 Christopher Browne <········@acm.org> wrote:

> You don't need to do a lot of numerical analysis to be _certain_ that
> BCD ops are doing the right thing.

You also don't need any numerical analysis to be _certain_ that IEEE 
doubles are doing the correct thing for integers below the limit I 
mentioned earlier.

-- Bruce
From: Bruce Hoult
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <bruce-9FDC51.13532513092002@copper.ipg.tsnz.net>
In article <················@naggum.no>, Erik Naggum <····@naggum.no> 
wrote:

> *  Greg Menke
> | Try to do some accounting software in C.
> 
> * Bruce Hoult
> | IEEE double precision floating point is perfectly fine for that purpose
> 
>   Wrong answer.  I cannot /believe/ that people still think floating point is
>   usable for accounting purposes.  Nothing really beats binary coded decimal
>   for this kind of task.  Realizing that should be part of your education as
>   a programmer.

BCD makes conversion to and from printed form cheaper.  Other than that, 
what is wrong with a binary representation?

-- Bruce
From: Erik Naggum
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3240918387614328@naggum.no>
* Bruce Hoult
| BCD makes conversion to and from printed form cheaper.  Other than that, 
| what is wrong with a binary representation?

  Let me remind you that you did not favor just "binary representation", but
  IEEE double-precision floating point, of which the "floating-point" part is
  The Wrong Answer, not binary representation.  The difference lies in the
  exactness of fractional values, as I am sure you are aware at some level,
  and it is not sufficient simply to store dollar amounts in cents.  Exchange
  rates and stock prices have different fractional values (normally 100th and
  16th of a cent, although the latter is subject to some "modernization"), and
  computing with interest rates could require arbitrary precision.  Rounding
  in monetary matters also differs from mathematical rounding.  Most countries
  have laws that require rounding upward in the [.5, 1) range and down in the
  [0, .5) range, contrary to IEEE rounding with round-to-even, which is much
  fairer over time.  All in all, our cultures have decided long ago that the
  arithmetic properties of monetary amounts are different from the arithmetic
  properties of other mathematical values.  (This is one of many reasons why
  teaching children to do arithmetic operations on monetary values is harmful
  to their mathematical understanding of the number system.)
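(A concrete illustration of the two rules at a Lisp prompt; ROUND-HALF-UP is
just a throwaway name here for the commercial rule, nothing standard:)

    (round 1/2)   ; => 0, 1/2   -- ROUND, like IEEE, sends exact halves to the nearest even integer
    (round 5/2)   ; => 2, 1/2

    (defun round-half-up (x)
      "Commercial rounding for non-negative amounts: [.5, 1) goes up, [0, .5) goes down."
      (floor (+ x 1/2)))

    (round-half-up 1/2)   ; => 1, 0
    (round-half-up 5/2)   ; => 3, 0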

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Bruce Hoult
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <bruce-4F2A11.11050514092002@copper.ipg.tsnz.net>
In article <················@naggum.no>, Erik Naggum <····@naggum.no> 
wrote:

> * Bruce Hoult
> | BCD makes conversion to and from printed form cheaper.  Other than that, 
> | what is wrong with a binary representation?
> 
>   Let me remind you that you did not favor just "binary representation", but
>   IEEE double-precision floating point, of which the "floating-point" part is
> >   The Wrong Answer, not binary representation.  The difference lies in the
>   exactness of fractional values, as I am sure you are aware at some level,

The point is that you do *not* use fractional values.  If some quantity 
is specified in 100ths of a cent or 16ths of a cent then you use *that* 
as your unit.

I'm not sure why this is so hard to understand.

-- Bruce
From: Greg Menke
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <m3u1kt2udk.fsf@europa.pienet>
Bruce Hoult <·····@hoult.org> writes:

> In article <················@naggum.no>, Erik Naggum <····@naggum.no> 
> wrote:
> 
> > * Bruce Hoult
> > | BCD makes conversion to and from printed form cheaper.  Other than that, 
> > | what is wrong with a binary representation?
> > 
> >   Let me remind you that you did not favor just "binary representation", but
> >   IEEE double-precision floating point, of which the "floating-point" part is
> > >   The Wrong Answer, not binary representation.  The difference lies in the
> >   exactness of fractional values, as I am sure you are aware at some level,
> 
> The point is that you do *not* use fractional values.  If some quantity 
> is specified in 100ths of a cent or 16ths of a cent then you use *that* 
> as your unit.
> 
> I'm not sure why this is so hard to understand.

The problem isn't fractions, FP types can accurately represent many
fractions just like they can many integers.  The problem is unforeseen
& difficult to detect rounding which may occur in intermediate terms
of arithmetic- when the precision demands of the results of a
particular operation exceed the capacity of the type.  You'll probably
run into rounding trouble pretty quickly if you use single precision
FP- going to double precision only decreases the likelihood of the
problem for given values of operands and operators.

I completely agree that lots of financial arithmetic can be accurately
done in FP- but the potential for unexpected rounding is always there,
and when it shows up, you're screwed because you can't trust the
results of your arithmetic.  The likely result is lots of verification
of each step which is tricky because sometimes the rounding errors of
the original equation are compensated for by different rounding errors
in the verification, leading to your verification routines accepting
the wrong results.

Gregm
From: Paul Foley
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <m2lm658be7.fsf@mycroft.actrix.gen.nz>
On 13 Sep 2002 22:11:03 -0400, Greg Menke wrote:

> Bruce Hoult <·····@hoult.org> writes:

>> The point is that you do *not* use fractional values.  If some quantity 
>> is specified in 100ths of a cent or 16ths of a cent then you use *that* 
>> as your unit.
>> 
>> I'm not sure why this is so hard to understand.

> The problem isn't fractions, FP types can accurately represent many
> fractions just like they can many integers.  The problem is unforeseen
> & difficult to detect rounding which may occur in intermediate terms
> of arithmetic- when the precision demands of the results of a
> particular operation exceed the capacity of the type.

Integers in the range he's talking about don't have rounding problems.
Again, note that Bruce is talking about using floats only for integer
values, all of which can be represented perfectly accurately.

>                                                        You'll probably
> run into rounding trouble pretty quickly if you use single precision
> FP- going to double precision only decreases the likelihood of the
> problem for given values of operands and operators.

There's no bigger problem with single-floats than with double-floats;
the range is just smaller (16777216 instead of > 9 quadrillion).

When you try to store numbers bigger than that, you'll run into
problems, yes -- just as you do if you try to store a number bigger
than 2147483647 in a 32 bit (signed) integer (though the problem is
different; you lose bits off the bottom rather than off the top, so it
starts rounding rather than just going completely nuts)

> I completely agree that lots of financial arithmetic can be accurately
> done in FP- but the potential for unexpected rounding is always there,
> and when it shows up, you're screwed because you can't trust the
> results of your arithmetic.

"I completely agree that lots of financial arithmetic can be accurately 
done in int- but the potential for unexpected overflow is always there,
and when it shows up, you're screwed because you can't trust the
results of your arithmetic."


If you don't have bignums and you don't have a 64 bit integer type,
and 53 bits is enough, a double float is a good solution.

-- 
If that makes any sense to you, you have a big problem.
                                      -- C. Durance, Computer Science 234
(setq reply-to
  (concatenate 'string "Paul Foley " "<mycroft" '(··@) "actrix.gen.nz>"))
From: Greg Menke
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <m365x8dbjc.fsf@europa.pienet>
Paul Foley <·······@actrix.gen.nz> writes:

> On 13 Sep 2002 22:11:03 -0400, Greg Menke wrote:
> 
> > Bruce Hoult <·····@hoult.org> writes:
> 
> >> The point is that you do *not* use fractional values.  If some quantity 
> >> is specified in 100ths of a cent or 16ths of a cent then you use *that* 
> >> as your unit.
> >> 
> >> I'm not sure why this is so hard to understand.
> 
> > The problem isn't fractions, FP types can accurately represent many
> > fractions just like they can many integers.  The problem is unforeseen
> > & difficult to detect rounding which may occur in intermediate terms
> > of arithmetic- when the precision demands of the results of a
> > particular operation exceed the capacity of the type.
> 
> Integers in the range he's talking about don't have rounding problems.
> Again, note that Bruce is talking about using floats only for integer
> values, all of which can be represented perfectly accurately.

I agree.  It's not the representation per se that causes the problems,
it's the results of arithmetic.  If I have a $1,000,000
discount/premium and a 12 decimal place yield rate and I do an
amortization using them, the precision which the results and
intermediate terms need is right up at the limit of what doubles can
achieve.  A factor of 2 or 3, or even smaller difference at various
points in the math can push values into rounding without you being
aware of it except by carefully formulated cross-checking and
computation of the results by different methods.
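(Rough orders of magnitude at a Lisp prompt; the figures are invented, only
the scale matters:)

    (* 1000000 100 (expt 10 12))   ; => 100000000000000000000  -- $1,000,000 in cents carried to 12 more places
    (expt 2 53)                    ; => 9007199254740992       -- all a double can represent exactly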


> >                                                        You'll probably
> > run into rounding trouble pretty quickly if you use single precision
> > FP- going to double precision only decreases the likelihood of the
> > problem for given values of operands and operators.
> 
> There's no bigger problem with single-floats than with double-floats;
> the range is just smaller (16777216 instead of > 9 quadrillion).
> 
> When you try to store numbers bigger than that, you'll run into
> problems, yes -- just as you do if you try to store a number bigger
> than 2147483647 in a 32 bit (signed) integer (though the problem is
> different; you lose bits off the bottom rather than off the top, so it
> starts rounding rather than just going completely nuts)

You're exactly making my point.  It's the unexpected rounding behavior
that's the problem, not the use of integers or fractions in FP.

I've used doubles for financial systems before and they work just fine
in the majority of cases.  Even if you store fractions, a
round-the-next-lower-decimal-place function will fairly easily take
care of the rounding problems.  But in a significant minority of the
cases, rounding occurs that you can't control- and on a system that
requires accuracy down to the penny for all transactions all the time
(with regular audits, cross checks, etc..), this will become
apparent.  If all you're doing is adding up money, it's not a big deal-
it's the 17+ decimal place intermediate terms that get you.

 
> "I completely agree that lots of financial arithmetic can be accurately 
> done in int- but the potential for unexpected overflow is always there,
> and when it shows up, you're screwed because you can't trust the
> results of your arithmetic."
> 
> 
> If you don't have bignums and you don't have a 64 bit integer type,
> and 53 bits is enough, a double float is a good solution.

It's a solution that can work- but it's not good in the same way C ints
aren't good for plain integer arithmetic- you're always having to make
sure you don't exhaust the capabilities of the type, and what's worse,
the programming language won't help you detect the problem.

Gregm
From: Johan Ur Riise
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <87lm63rqsy.fsf@egg.topp.dyndns.com>
Paul Foley <·······@actrix.gen.nz> writes:

> 
> If you don't have bignums and you don't have a 64 bit integer type,
> and 53 bits is enough, a double float is a good solution.
> 

Next time I have to do financial computing in C++, I shall look for a
numeric package. Searching for "rational numbers library c" on Google
pointed me to CLN, a class library for numbers. In fact, it looks
like it is related to Common Lisp number type system.

-- 
Johan Ur Riise
From: Christopher Browne
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <am2pbv$25b6l$1@ID-125932.news.dfncis.de>
The world rejoiced as Johan Ur Riise <·@rsc.no> wrote:
> Paul Foley <·······@actrix.gen.nz> writes:
>> If you don't have bignums and you don't have a 64 bit integer type,
>> and 53 bits is enough, a double float is a good solution.
>
> Next time I have to do financial computing in C++, I shall look for a
> numeric package. Searching for "rational numbers library c" on Google
> pointed me to CLN, a class library for numbers. In fact, it looks
> like it is related to Common Lisp number type system.

It was written by some of the people involved with implementing CLISP.
-- 
(reverse (concatenate 'string ········@" "enworbbc"))
http://www.ntlug.org/~cbbrowne/spiritual.html
We are Pentium of Borg.  Division is futile. You will be approximated.
(seen in someone's .signature)
From: Erik Naggum
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3240949798321688@naggum.no>
* Bruce Hoult
| The point is that you do *not* use fractional values.

  Marvellous.

| If some quantity is specified in 100ths of a cent or 16ths of a cent then
| you use *that* as your unit.

  And if you multiply them, you get 1600th of a cent as the unit, then convert
  to another currency with 6-digit precision and you effectively compute with
  1,600,000,000th of a cent as the smallest unit.  Suddenly, you have only 22
  bits left for the cent value, and a transaction worth more than USD 42,000
  will no longer fit in your double-precision floating point number.  Great!
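(The arithmetic spelled out at a Lisp prompt:)

    (* 16 100 (expt 10 6))           ; => 1600000000  -- the effective denominator, in cents
    (ceiling (log 1600000000 2))     ; => 31, ...     -- bits of the 53 consumed by the unit
    (/ (expt 2 (- 53 31)) 100.0)     ; => 41943.04    -- the roughly USD 42,000 ceiling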

| I'm not sure why this is so hard to understand.

  Because, as several times in the past, you are just plain wrong, and you are
  the last person on the planet to admit it or figure it out.  So I shall do us
  all a huge favor and not attempt to convince you of the errors of your ways.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Erann Gat
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <gat-1309021711580001@192.168.1.50>
In article <················@naggum.no>, Erik Naggum <····@naggum.no> wrote:

> * Bruce Hoult
> | The point is that you do *not* use fractional values.
> 
>   Marvellous.
> 
> | If some quantity is specified in 100ths of a cent or 16ths of a cent then
> | you use *that* as your unit.
> 
>   And if you multiply them, you get 1600th of a cent as the unit,

Why would you ever need to multiply monetary quantities?

E.
From: Erik Naggum
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3240956245803780@naggum.no>
* Erann Gat
| Why would you ever need to multiply monetary quantities?

  Units, dude.  Units.  Just /add/ two numbers with different denominators,
  and you get into this problem right away.  Granted, you can make do with
  400th of a cent, but then you need to get into the whole least common
  denominator game, and you should have used rationals to begin with.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Erann Gat
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <gat-1309021854250001@192.168.1.50>
In article <················@naggum.no>, Erik Naggum <····@naggum.no> wrote:

> * Erann Gat
> | Why would you ever need to multiply monetary quantities?
> 
>   Units, dude.  Units.  Just /add/ two numbers with different denominators,
>   and you get into this problem right away.  Granted, you can make do with
>   400th of a cent, but then you need to get into the whole least common
>   denominator game, and you should have used rationals to begin with.

Sorry, but that answer makes no sense to me.  "Units, dude, units" is what
caused me to pose the question in the first place.  If I multiply dollars
by dollars I get dollars-squared, which I find mighty puzzling.  (I am
reminded of when I was twelve or so and my father tried to explain to me
that the acceleration of gravity was 9.8 meters per second squared, and I
couldn't wrap my brain around what a squared second would be.)

In my experience there are only two mathematical operations that one ever
actually performs on monetary values: multiplying a monetary value by a
scalar, or adding two monetary values with the same units.  But my
financial experience is pretty much limited to balancing my checkbook so
maybe I'm missing something here.

E.
From: Erik Naggum
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3240957881335279@naggum.no>
* ···@jpl.nasa.gov (Erann Gat)
| If I multiply dollars
| by dollars I get dollars-squared, which I find mighty puzzling.

  I conclude that you are illiterate.  Go take a remedial reading class and
  when you have passed it, try to understand the very /next/ sentence from the
  article you obviously stopped comprehending after the first few words and
  thought you had understood enough to form your stupid response.  You will
  have to fax me the graduation papers before I believe you can read and I want
  to respond to you again.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Marc Spitzer
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <slrnao57lg.2f3q.marc@oscar.eng.cv.net>
In article <····················@192.168.1.50>, Erann Gat wrote:
> In article <················@naggum.no>, Erik Naggum <····@naggum.no> wrote:
> 
>> * Erann Gat
>> | Why would you ever need to multiply monetary quantities?
>> 
>>   Units, dude.  Units.  Just /add/ two numbers with different denominators,
>>   and you get into this problem right away.  Granted, you can make do with
>>   400th of a cent, but then you need to get into the whole least common
>>   denominator game, and you should have used rationals to begin with.
> 
> Sorry, but that answer makes no sense to me.  "Units, dude, units" is what
> caused me to pose the question in the first place.  If I multiply dollars
> by dollars I get dollars-squared, which I find mighty puzzling.  (I am
> reminded of when I was twelve or so and my father tried to explain to me
> that the acceleration of gravity was 9.8 meters per second squared, and I
> couldn't wrap my brain around what a squared second would be.)

Here is a simple example: you have an integer type that can hold
4,000,000,000 units, and you have a max transaction size of $10,000,000.
No problem, right?  Now we need to take care of pennies as well, so your
new max transaction size is 1,000,000,000 cents (units).  But wait, there
are fractions of a cent; let's say you need to keep track of 1/16ths, so
now you have a max unit count of 16,000,000,000.  Do you see the problem
now?
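(The same arithmetic, checked at a Lisp prompt:)

    (* 10000000 100 16)   ; => 16000000000
    (1- (expt 2 32))      ; =>  4294967295  -- an unsigned 32-bit counter tops out well short of that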

Or were you just deliberately not getting it?

marc



> 
> In my experience there are only two mathematical operations that one ever
> actually performs on monetary values: multiplying a monetary value by a
> scalar, or adding two monetary values with the same units.  But my
> financial experience is pretty much limited to balancing my checkbook so
> maybe I'm missing something here.
> 
> E.
From: Bruce Hoult
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <bruce-5D8F74.14040414092002@copper.ipg.tsnz.net>
In article <····················@192.168.1.50>,
 ···@jpl.nasa.gov (Erann Gat) wrote:

> In article <················@naggum.no>, Erik Naggum <····@naggum.no> wrote:
> 
> > * Erann Gat
> > | Why would you ever need to multiply monetary quantities?
> > 
> >   Units, dude.  Units.  Just /add/ two numbers with different denominators,
> >   and you get into this problem right away.  Granted, you can make do with
> >   400th of a cent, but then you need to get into the whole least common
> >   denominator game, and you should have used rationals to begin with.
> 
> Sorry, but that answer makes no sense to me.  "Units, dude, units" is what
> caused me to pose the question in the first place.  If I multiply dollars
> by dollars I get dollars-squared, which I find mighty puzzling.  (I am
> reminded of when I was twelve or so and my father tried to explain to me
> that the acceleration of gravity was 9.8 meters per second squared, and I
> couldn't wrap my brain around what a squared second would be.)
> 
> In my experience there are only two mathematical operations that one ever
> actually performs on monetary values: multiplying a monetary value by a
> scalar, or adding two monetary values with the same units.  But my
> financial experience is pretty much limited to balancing my checkbook so
> maybe I'm missing something here.

No, you're not missing anything.  In fact you see the issues pretty 
clearly.

-- Bruce
From: Erann Gat
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <gat-1309022351590001@192.168.1.50>
In article <···························@copper.ipg.tsnz.net>, Bruce Hoult
<·····@hoult.org> wrote:

> In article <····················@192.168.1.50>,
>  ···@jpl.nasa.gov (Erann Gat) wrote:
> 
> > In article <················@naggum.no>, Erik Naggum <····@naggum.no> wrote:
> > 
> > > * Erann Gat
> > > | Why would you ever need to multiply monetary quantities?
> > > 
> > >   Units, dude.  Units.  Just /add/ two numbers with different
denominators,
> > >   and you get into this problem right away.  Granted, you can make do with
> > >   400th of a cent, but then you need to get into the whole least common
> > >   denominator game, and you should have used rationals to begin with.
> > 
> > Sorry, but that answer makes no sense to me.  "Units, dude, units" is what
> > caused me to pose the question in the first place.  If I multiply dollars
> > by dollars I get dollars-squared, which I find mighty puzzling.  (I am
> > reminded of when I was twelve or so and my father tried to explain to me
> > that the acceleration of gravity was 9.8 meters per second squared, and I
> > couldn't wrap my brain around what a squared second would be.)
> > 
> > In my experience there are only two mathematical operations that one ever
> > actually performs on monetary values: multiplying a monetary value by a
> > scalar, or adding two monetary values with the same units.  But my
> > financial experience is pretty much limited to balancing my checkbook so
> > maybe I'm missing something here.
> 
> No, you're not missing anything.  In fact you see the issues pretty 
> clearly.

Hm, thanks for the support Bruce, but I'm afraid you're wrong.  I wasn't
seeing this clearly (which I realized as I was framing a response to Marc
Spitzer).  I misinterpreted the antecedent in the phrase "if you multiply
*them*".  Erik's point is that if you add a quantity whose unit is 1/16 to
a quantity whose unit is 1/100 you get a quantity whose unit is 1/1600 (or
1/400 if you reduce to the LCD.)  It's the units that get multiplied, not
the quantities.

E.
From: Bruce Hoult
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <bruce-CA96D3.21363714092002@copper.ipg.tsnz.net>
In article <····················@192.168.1.50>,
 ···@jpl.nasa.gov (Erann Gat) wrote:

> Hm, thanks for the support Bruce, but I'm afraid you're wrong.  I wasn't
> seeing this clearly (which I realized as I was framing a response to Marc
> Spitzer).  I misinterpreted the antecedent in the phrase "if you multiply
> *them*".  Erik's point is that if you add a quantity whose unit is 1/16 to
> a quantity whose unit is 1/100 you get a quantity whose unit is 1/1600 (or
> 1/400 if you reduce to the LCD.)  It's the units that get multiplied, not
> the quantities.

But financial systems don't *do* that.  They only add things expressed 
in the same units as each other.  Precisions don't mount endlessly.

-- Bruce
From: Marc Spitzer
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <slrnao849h.2m3r.marc@oscar.eng.cv.net>
In article <···························@copper.ipg.tsnz.net>, Bruce Hoult wrote:
> In article <····················@192.168.1.50>,
>  ···@jpl.nasa.gov (Erann Gat) wrote:
> 
>> Hm, thanks for the support Bruce, but I'm afraid you're wrong.  I wasn't
>> seeing this clearly (which I realized as I was framing a response to Marc
>> Spitzer).  I misinterpreted the antecedent in the phrase "if you multiply
>> *them*".  Erik's point is that if you add a quantity whose unit is 1/16 to
>> a quantity whose unit is 1/100 you get a quantity whose unit is 1/1600 (or
>> 1/400 if you reduce to the LCD.)  It's the units that get multiplied, not
>> the quantities.
> 
> But financial systems don't *do* that.  They only add things expressed 
> in the same units as each other.  Precisions don't mount endlessly.
> 
> -- Bruce

and if those units are 1/16 of a cent then the max unit is 1/1600 of the 
max dollar amount.  That is all I said and it is what I read Erik's 
message to say on the subject.  So a double is not as big as it
appears at first glance, that is all.

marc
From: Marc Spitzer
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <slrnao84cj.2m3r.marc@oscar.eng.cv.net>
In article <····················@oscar.eng.cv.net>, Marc Spitzer wrote:
> In article <···························@copper.ipg.tsnz.net>, Bruce Hoult wrote:
>> In article <····················@192.168.1.50>,
>>  ···@jpl.nasa.gov (Erann Gat) wrote:
>> 
>>> Hm, thanks for the support Bruce, but I'm afraid you're wrong.  I wasn't
>>> seeing this clearly (which I realized as I was framing a response to Marc
>>> Spitzer).  I misinterpreted the antecedent in the phrase "if you multiply
>>> *them*".  Erik's point is that if you add a quantity whose unit is 1/16 to
>>> a quantity whose unit is 1/100 you get a quantity whose unit is 1/1600 (or
>>> 1/400 if you reduce to the LCD.)  It's the units that get multiplied, not
>>> the quantities.
>> 
>> But financial systems don't *do* that.  They only add things expressed 
>> in the same units as each other.  Precisions don't mount endlessly.
>> 
>> -- Bruce
> 
> and if those units are 1/16 of a cent then the max unit is 1/1600 of the 
> max dollar amount.  That is all I said and it is what I read Erik's 

err, switch dollar and unit for this to work.

marc

> message to say on the subject.  So a double is not as big as it
> appears at first glance, that is all.
> 
> marc
From: Bruce Hoult
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <bruce-1C3EDB.12310914092002@copper.ipg.tsnz.net>
In article <················@naggum.no>, Erik Naggum <····@naggum.no> 
wrote:

> * Bruce Hoult
> | The point is that you do *not* use fractional values.
> 
>   Marvellous.
> 
> | If some quantity is specified in 100ths of a cent or 16ths of a cent then
> | you use *that* as your unit.
> 
>   And if you multiply them, you get 1600th of a cent as the unit, 
>   then convert to another currency with 6-digit precision and you 
>   effectively compute with 1,600,000,000th of a cent as the smallest 
>   unit.  Suddenly, you have only 22 bits left for the cent value, and 
>   a transaction worth more than USD 42,000 will no longer fit in your 
>   double-precision floating point number.  Great!

I'm not sure what field you are thinking of, but numbers are *not* used 
in that way in either accounting or finance.  It's simply not a problem.

Think about it in terms of dimensional analysis.  Physics uses values 
with dimensions involving squared and higher powers of quantities.  
Accounting *doesn't*.


> | I'm not sure why this is so hard to understand.
> 
>   Because, as several times in the past, you are just plain wrong, 
>   and you are the last person on the planet to admit it or figure it 
>   out.  So I shall do us all a huge favor and not attempt to convince 
>   you of the errors of your ways.

Uh huh.

I worked in stockbroking and finance companies for a decade, writing 
software dealing with stock and FX and government stock calculations.  I 
never had any trouble meeting specs, using FP.

You're clearly outside your area of expertise here.

-- Bruce
From: Erik Naggum
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3240955460519515@naggum.no>
* Bruce Hoult <·····@hoult.org>
| I worked in stockbroking and finance companies for a decade, writing
| software dealing with stock and FX and government stock calculations.  I
| never had any trouble meeting specs, using FP.

  I have worked with Oslo Stock Exchange and related businesses since 1990,
  when I specified the protocol between the exchange and brokers' computer
  systems, and I have designed and specified several protocols since then.  We
  had several problems with software vendors who used floating-point to store
  numeric values.  It was, even back then, a well-established fact that people
  ran into problems when they chose to use floating-point for numeric values
  that were not, in fact, floating-point, but fixed-point.

| You're clearly outside your area of expertise here.

  Your competence in assessing mine is also remarkably lacking.  That you keep
  on fighting is even more puzzling.  I think you should try to talk to someone
  who still cares about you about your incessant desire to make a fool of huge
  yourself when you are simply /factually/ wrong about something.

  How do you know that you have been meeting specs with floating point?  I
  think you are the kind of programmer who makes things work.  I am the kind
  of programmer who makes sure things do not fail.  What "works" for you is
  not even relevant to me.  There are sufficient problems with floating-point
  that it cannot be used in software that has to be exactly right all the time.
  It does not matter that you can detect when you are no longer exact, because
  you have to do something when that happens to become exact again.  You
  could give up when you run out of precision in your floating-point format,
  but that is generally not an acceptable option.  So you have to have a Plan B
  when this happens.  There may be good reasons to work with a Plan A and a
  Plan B, but during my long career as a programmer, I have seen one thing
  again and again that makes me desire not to rely on Plan B: It is almost
  never debugged properly because it is not expected to be used.  This is in
  sharp contrast to military-style contingency planning, where you rely on
  your contingency plan to be failsafe when your primary plan may fail.  I am
  not a fully trained paranoid (i.e., lawyer), but I believe that understanding
  the need for and nature of contingency planning is a requirement for anyone
  who teaches planning, and that is, in effect, what programmers teach their
  computers.

  By the way, if you want 53-bit integers with double-precision floating-point,
  why not go for the full 64-bit integers that you get even on 32-bit machines
  with C99's new `long long int' type?  Or you could use the 80-bit floating-
  point that is used in the Intel Architecture.  Or perhaps the MMX registers.
  However, I would expect an implementation of a bignum library to make the
  most of the hardware.  If the implementation is not strong on bignums, that
  is, somewhat ironically, because bignums are also Plan B for most systems.
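
  For a feel of the numbers, a minimal Common Lisp sketch (names made up)
  of carrying amounts as exact integer multiples of the 1,600,000,000th-of-
  a-cent unit described above:

    ;; Sketch: amounts as exact integer multiples of the smallest unit.
    ;; CL integers grow into bignums as needed, so nothing is lost,
    ;; whereas a 53-bit double has already run out of bits here.
    (defconstant +units-per-cent+ 1600000000)

    (defun dollars->units (dollars)
      (* dollars 100 +units-per-cent+))

    (let ((amount (dollars->units 100000)))    ; USD 100,000.00
      (values amount                           ; => 16000000000000000, exact
              (> amount (expt 2 53))))         ; => T: beyond double precision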

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Bruce Hoult
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <bruce-15AFCA.14132914092002@copper.ipg.tsnz.net>
In article <················@naggum.no>, Erik Naggum <····@naggum.no> 
wrote:

> * Bruce Hoult <·····@hoult.org>
> | I worked in stockbroking and finance companies for a decade, writing
> | software dealing with stock and FX and government stock calculations.  I
> | never had any trouble meeting specs, using FP.
> 
>   I have worked with Oslo Stock Exchange and related businesses since 
>   1990, when I specified the protocol between the exchange and 
>   brokers' computer systems. and I have designed and specified 
>   several protocol since then.  We had several problems with software 
>   vendors who used floating-point to store numeric values.

I don't doubt it.  There are plenty of people out there totally ignorant 
of the properties of computer arithmetic.


>  It was, even back then, a well-established fact that people ran into 
>  problems when they chose to use floating-point for numeric values 
>  that were not, in fact, floating point, but fix-point.

In 1990?  I guess so, since I was taught the same thing around 1980.  
However, as with many things that are taught to you as an undergraduate 
(or ::shudder:: at highschool), it turns out to be a half-truth or 
approximation at best, designed to keep the ignorant out of trouble.

One point you are missing with your talk about Plan B's and bignums is 
that financial systems have specifications for their outputs as well as 
their inputs.  It is no use using bignums internally if you then have to 
feed the result into e.g. a stock exchange or Reuters/Bloomberg/whatever 
system that is specified with a fixed number of significant figures.

-- Bruce
From: Erik Naggum
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3240959537226670@naggum.no>
* Bruce Hoult
| However, as with many things that are taught to you as an undergraduate (or
| ::shudder:: at highschool), it turns out to be a half-truth or approximation
| at best, designed to keep the ignorant out of trouble.

  If you believe this is relevant, you assume too much.  Quit being so annoying.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Eric Naggum
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3D841BD9.7B332938@naggum.no>
Eric,

This is your good twin talking.  Take your Prozac.  You have obviously forgotten
it again.

Eric



Erik Naggum wrote:

> * Bruce Hoult
> | However, as with many things that are taught to you as an undergraduate (or
> | ::shudder:: at highschool), it turns out to be a half-truth or approximation
> | at best, designed to keep the ignorant out of trouble.
>
>   If you believe this is relevant, you assume too much.  Quit being so annoying.
>
> --
> Erik Naggum, Oslo, Norway
>
> Act from reason, and failure makes you rethink and study harder.
> Act from faith, and failure makes you blame someone and push harder.
From: Christopher Browne
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alu8ij$16dpr$1@ID-125932.news.dfncis.de>
Oops! Bruce Hoult <·····@hoult.org> was seen spray-painting on a wall:
> In article <················@naggum.no>, Erik Naggum <····@naggum.no> 
> wrote:
>
>> * Bruce Hoult <·····@hoult.org>
>> | I worked in stockbroking and finance companies for a decade, writing
>> | software dealing with stock and FX and government stock calculations.  I
>> | never had any trouble meeting specs, using FP.
>> 
>>   I have worked with Oslo Stock Exchange and related businesses since 
>>   1990, when I specified the protocol between the exchange and 
>>   brokers' computer systems. and I have designed and specified 
>>   several protocol since then.  We had several problems with software 
>>   vendors who used floating-point to store numeric values.
>
> I don't doubt it.  There are plenty of people out there totally ignorant 
> of the properties of computer arithmetic.
>
>
>>  It was, even back then, a well-established fact that people ran into 
>>  problems when they chose to use floating-point for numeric values 
>>  that were not, in fact, floating point, but fix-point.
>
> In 1990?  I guess so, since I was taught the same thing around 1980.  
> However, as with many things that are taught to you as an undergraduate 
> (or ::shudder:: at highschool), it turns out to be a half-truth or 
> approximation at best, designed to keep the ignorant out of trouble.
>
> One point you are missing with your talk about Plan B's and bignums is 
> that financial systems have specifications for their outputs as well as 
> their inputs.  It is no use using bignums internally if you then have to 
> feed the result into e.g. a stock exchange or Reuters/Bloomburg/whatever 
> system that is specified with a fixed number of significant figures.

No, I don't think that is being missed.

Using bignums and/or rationals internally that provide 'total
precision,' and well-trustable math predicates, means that you can be
quite certain that the _internal_ dealings with the values have been
handled faithfully.

If you then have to round it to three decimal places, for output
purposes, _that's fine_.

At the end of the day, what is likely to happen is that you'll have
sets of transactions that will be denominated in terms of having two
or three decimal places.  

Get them right and you've got the Important Thing right.  If using
bignums and rationals makes it easier to be confident that the values
are correct, then you've made life easier by using bignums/rationals.

And at the end of the day, the ultimate transactions will involve
things like:

  "Foo Paid $750.376 for 72.378 units of Bar."

And that's _exact_, whatever the interim values whilst doing
calculations may have been.
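
A minimal sketch of that, with PARSE-MILLI a hypothetical helper: keep
exact rationals internally, round only when printing:

  ;; "750.376" -> 750376/1000, an exact rational (hypothetical helper,
  ;; assumes a plain digits.digits string)
  (defun parse-milli (s)
    (let* ((dot   (position #\. s))
           (whole (parse-integer s :end dot))
           (frac  (parse-integer s :start (1+ dot))))
      (+ whole (/ frac (expt 10 (- (length s) dot 1))))))

  (let* ((paid     (parse-milli "750.376"))
         (units    (parse-milli "72.378"))
         (per-unit (/ paid units)))             ; exact, no rounding yet
    ;; round to three decimal places only at the output boundary
    (format t "~,3F per unit~%" (float per-unit 1.0d0)))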
-- 
(concatenate 'string "cbbrowne" ·@acm.org")
http://www.ntlug.org/~cbbrowne/rdbms.html
"The only ``intuitive'' interface is the nipple. After that, it's all
learned."  -- Bruce Ediger, ·······@teal.csn.org on X interfaces.
From: Christopher C. Stacy
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <ud6rh9o84.fsf@dtpq.com>
The accounting systems I'm familiar with just use integer
pennies for everything, never any kind of floating values.
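
For example (a sketch): the pennies stay integers, and only the printed
representation divides by 100:

  (let ((total-pennies 123456))                ; an exact integer count
    (format nil "~$" (/ total-pennies 100)))   ; => "1234.56"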
From: Hartmann Schaffer
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3d813c88@news.sentex.net>
In article <············@newsmaster.cc.columbia.edu>,
	Oleg <············@myrealbox.com> writes:
> ...
> Maybe people are different, but I don't remember ever needing rational 
> numbers while writing programs.

throughout most of the 1800s, most people didn't need internal
combustion engines either

hs

-- 

don't use malice as an explanation when stupidity suffices
From: Craig Brozefsky
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <87fzweppct.fsf@piracy.red-bean.com>
··@heaven.nirvananet (Hartmann Schaffer) writes:

> In article <············@newsmaster.cc.columbia.edu>,
> 	Oleg <············@myrealbox.com> writes:
> > ...
> > Maybe people are different, but I don't remember ever needing rational 
> > numbers while writing programs.
> 
> throutghout most of the 1800s, most people didn't need internal
> combustion engines either

Still, most don't need internal combustion engines.

Not to detract from the original intent of your comment.

-- 
Sincerely,
Craig Brozefsky <·····@red-bean.com>
Free Scheme/Lisp Software  http://www.red-bean.com/~craig
From: Tim Bradshaw
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <ey3it1dyn28.fsf@cley.com>
* oleg inconnu wrote:
> The type checker is there for a reason: one is execution speed, another 
> reason is reliability. While a type checker can never guarantee that your 
> program will do what you wanted, it removes a *great* deal of bugs: in C++, 
> I would frequently have this bug when I divide one int by another and treat 
> the result as a float. That's a very nasty type of bug, especially in any 
> kind of scientific/numeric application. Lisp is such an extreme case that 
> I'm surprised people are surprised that Lisp isn't used at JPL [1]. It 
> wouldn't even prevent you from dividing an int by a string in some branch 
> of code! 

Can anyone come up with a *real-world* example of a deployed Lisp
system which has failed due to runtime type problems?  Or is this all
just academic masturbation by the static type people?

--tim
From: Erann Gat
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <gat-1109020904370001@192.168.1.50>
In article <············@newsmaster.cc.columbia.edu>, Oleg
<············@myrealbox.com> wrote:

> I'm surprised people are surprised that Lisp isn't used at JPL [1]. It 
> wouldn't even prevent you from dividing an int by a string in some branch 
> of code! 

Lisp was used at JPL for decades (and it's actually still in use in at
least one fielded application, though there's currently no active Lisp
development to my knowledge).  The reason it fell out of favor had
absolutely nothing to do with typing, or any other technical reason; it
was purely political.

Erann Gat
Principal Scientist
Jet Propulsion Laboratory
California Institute of Technology
···@jpl.nasa.gov

Disclaimer: The views expressed in this posting are my own.
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alphrk$gdr$2@newsmaster.cc.columbia.edu>
Erann Gat wrote:

> In article <············@newsmaster.cc.columbia.edu>, Oleg
> <············@myrealbox.com> wrote:
> 
>> I'm surprised people are surprised that Lisp isn't used at JPL [1]. It
>> wouldn't even prevent you from dividing an int by a string in some branch
>> of code!
> 
> Lisp was used at JPL for decades (and it's actually still in use in at
> least one fielded application, though there's currently no active Lisp
> development to my knowledge).  The reason it fell out of favor had
> absolutely nothing to do with typing, or any other techincal reason; it
> was purely political.

Did you "lose your faith" for political reasons too?

Oleg
From: Erann Gat
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <gat-1209020858060001@192.168.1.50>
In article <············@newsmaster.cc.columbia.edu>, Oleg
<············@myrealbox.com> wrote:

> Erann Gat wrote:
> 
> > In article <············@newsmaster.cc.columbia.edu>, Oleg
> > <············@myrealbox.com> wrote:
> > 
> >> I'm surprised people are surprised that Lisp isn't used at JPL [1]. It
> >> wouldn't even prevent you from dividing an int by a string in some branch
> >> of code!
> > 
> > Lisp was used at JPL for decades (and it's actually still in use in at
> > least one fielded application, though there's currently no active Lisp
> > development to my knowledge).  The reason it fell out of favor had
> > absolutely nothing to do with typing, or any other techincal reason; it
> > was purely political.
> 
> Did you "lose your faith" for political reasons too?
> 
> Oleg

Hm, seems Nils was right about you.  You really are just a troll.

E.
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alqmke$dlp$1@newsmaster.cc.columbia.edu>
Erann Gat wrote:

> In article <············@newsmaster.cc.columbia.edu>, Oleg
> <············@myrealbox.com> wrote:
> 
>> Erann Gat wrote:
>> 
>> > In article <············@newsmaster.cc.columbia.edu>, Oleg
>> > <············@myrealbox.com> wrote:
>> > 
>> >> I'm surprised people are surprised that Lisp isn't used at JPL [1]. It
>> >> wouldn't even prevent you from dividing an int by a string in some
>> >> branch of code!
>> > 
>> > Lisp was used at JPL for decades (and it's actually still in use in at
>> > least one fielded application, though there's currently no active Lisp
>> > development to my knowledge).  The reason it fell out of favor had
>> > absolutely nothing to do with typing, or any other techincal reason; it
>> > was purely political.
>> 
>> Did you "lose your faith" for political reasons too?
>> 
>> Oleg
> 
> Hm, seems Nils was right about you.  You really are just a troll.

Don't like it when people point out holes in your story? Stop being a baby.
From: Erann Gat
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <gat-1209021153460001@k-137-79-50-101.jpl.nasa.gov>
In article <············@newsmaster.cc.columbia.edu>, Oleg
<············@myrealbox.com> wrote:

> Erann Gat wrote:
> 
> > In article <············@newsmaster.cc.columbia.edu>, Oleg
> > <············@myrealbox.com> wrote:
> > 
> >> Erann Gat wrote:
> >> 
> >> > In article <············@newsmaster.cc.columbia.edu>, Oleg
> >> > <············@myrealbox.com> wrote:
> >> > 
> >> >> I'm surprised people are surprised that Lisp isn't used at JPL [1]. It
> >> >> wouldn't even prevent you from dividing an int by a string in some
> >> >> branch of code!
> >> > 
> >> > Lisp was used at JPL for decades (and it's actually still in use in at
> >> > least one fielded application, though there's currently no active Lisp
> >> > development to my knowledge).  The reason it fell out of favor had
> >> > absolutely nothing to do with typing, or any other techincal reason; it
> >> > was purely political.
> >> 
> >> Did you "lose your faith" for political reasons too?
> >> 
> >> Oleg
> > 
> > Hm, seems Nils was right about you.  You really are just a troll.
> 
> Don't like it when people point out holes in your story? Stop being a baby.

I have no problems with someone pointing out holes in my story, but that's
not what you did.  You didn't point anything out.  You asked a question,
which was at once ad-hominem and non-sequitur.  (And then you followed up
with another ad-hominem attack.)  That's what trolls do.

If you want to point out a "hole in my story" I welcome that, but there
aren't any holes because what I said is true.  In fact I can document my
claim in excruciating detail.  Unlike many political processes, this one
actually played itself out in public view.  There were numerous witnesses
to all the key events.

E.
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alqpi9$fti$1@newsmaster.cc.columbia.edu>
Erann Gat wrote:

> In article <············@newsmaster.cc.columbia.edu>, Oleg
> <············@myrealbox.com> wrote:
> 
>> Erann Gat wrote:
>> 
>> > In article <············@newsmaster.cc.columbia.edu>, Oleg
>> > <············@myrealbox.com> wrote:
>> > 
>> >> Erann Gat wrote:
>> >> 
>> >> > In article <············@newsmaster.cc.columbia.edu>, Oleg
>> >> > <············@myrealbox.com> wrote:
>> >> > 
>> >> >> I'm surprised people are surprised that Lisp isn't used at JPL [1].
>> >> >> It wouldn't even prevent you from dividing an int by a string in
>> >> >> some branch of code!
>> >> > 
>> >> > Lisp was used at JPL for decades (and it's actually still in use in
>> >> > at least one fielded application, though there's currently no active
>> >> > Lisp
>> >> > development to my knowledge).  The reason it fell out of favor had
>> >> > absolutely nothing to do with typing, or any other techincal reason;
>> >> > it was purely political.
>> >> 
>> >> Did you "lose your faith" for political reasons too?
>> >> 
>> >> Oleg
>> > 
>> > Hm, seems Nils was right about you.  You really are just a troll.
>> 
>> Don't like it when people point out holes in your story? Stop being a
>> baby.
> 
> I have no problems with someone pointing out holes in my story, but that's
> not what you did.  You didn't point anything out.  You asked a question,
> which was at once ad-hominem and non-sequitur.  (And then you followed up
> with another ad-hominem attack.)  That's what trolls do.
> 
> If you want to point out a "hole in my story" I welcome that, but there
> aren't any holes because what I said is true.  In fact I can document my
> claim in excruciating detail.  Unlike many political processes, this one
> actually played itself out in public view.  There were numerous witnesses
> to all the key events.

1) you stated that JPL ditched Lisp for political reasons
2) you stated that you "lost your faith" in Lisp [for technical reasons, it 
seemed]

I reminded you of "2" and you got sensitive like a little boy and started 
"calling names".

If you "lost your faith" for reasons other than technical or political, 
then I'd like to know how you _classify_ them (without going into 
long-winded Noggum-style diatribes please)

Oleg

P.S. Not that JPL is worth looking up to. After all, they hired Mr. Tisdale
http://groups.google.com/groups?q=tisdale+troll&btnG=Search&meta=site%3Dgroups
The man was trolling C++ groups for decades trying to push his C++ linear 
algebra standard onto people who, unlike him actually KNOW linear algebra, 
and asking childish questions in sci.astro.
From: Erann Gat
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <gat-1209021345310001@k-137-79-50-101.jpl.nasa.gov>
In article <············@newsmaster.cc.columbia.edu>, Oleg
<············@myrealbox.com> wrote:

> 1) you stated that JPL ditched Lisp for political reasons

Yes.

> 2) you stated that you "lost your faith" in Lisp [for technical reasons, it 
> seemed]

Not in this discussion I didn't.  You are apparently making an oblique
reference to an article I posted here a very long time ago, of which you
apparently read only the title and not the body.  The "faith" to which I
was referring in that article has nothing to do with the topic at
hand.  (There are many reasons for this, but the most glaringly obvious
one is that at the time that I "lost my faith" I wasn't working at JPL. 
Duh!)

> I reminded you of "2" and you got sensitive like a little boy and started 
> "calling names".

I'm laughing too hard to frame a response to that.

> If you "lost your faith" for reasons other than technical or political, 
> then I'd like to know how you _classify_ them (without going into 
> long-winded Noggum-style diatribes please)

These questions are answered at length in the article to which you allude
and I see no reason to repeat myself, particularly since my personal views
are absolutely irrelevant to what is under discussion here.

Let's review.  You wrote:

> I'm surprised people are surprised that Lisp isn't used at JPL [1].
> It wouldn't even prevent you from dividing an int by a string in
> some branch of code!

Once you dig through the double negatives one infers that you believe 1)
JPL doesn't use Lisp, 2) people are surprised by this and 3) they should
not be.

(This leaves open to interpretation the question of whether you believe
JPL is a good example or a bad one, though you do clarify this in your
most recent post:

> P.S. Not that JPL is worth looking up to. After all, they hired Mr. Tisdale

How the hiring of a single person ought to reflect on an institution with
more than six thousand employees is open to dispute, but be that as it
may...)

All I am saying is that you are factually incorrect in your first implied
statement: JPL *does* use Lisp (or did), so whether people are surprised
by the "fact" that JPL doesn't is irrelevant, because JPL does.

E.

P.S.  Your claim that Lisp "wouldn't even prevent you from dividing an int
by a string in some branch of code" is also wrong.
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alr0qi$ldc$1@newsmaster.cc.columbia.edu>
Erann Gat wrote:

>> I'm surprised people are surprised that Lisp isn't used at JPL [1].
>> It wouldn't even prevent you from dividing an int by a string in
>> some branch of code!
> 
> Once you dig through the double negatives one infers that you believe 1)
> JPL doesn't use Lisp, 2) people are surprised by this and 3) they should
> not be.

There are no double negatives there. Learn to count.
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alr0ra$ldc$2@newsmaster.cc.columbia.edu>
Arghh... Of the two people I encountered who were posting to USENET from 
jpl.nasa.gov, both are complete idiots!
From: ilias
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alr232$omm$1@usenet.otenet.gr>
Oleg wrote:
> Arghh... Of the two people I encountered who were posting to USENET from 
> jpl.nasa.gov, both are complete idiots!

I think you've talked enough about O'Caml now, so that you should have 
clarified your picture about it.

As a pardon for blowing my topic, please give me now a quick overview 
with a reply to my initial post.
From: Thomas Stegen
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3D8115B1.7030704@cis.strath.ac.uk>
Oleg wrote:
> Erann Gat wrote:
> 
> 
>>In article <············@newsmaster.cc.columbia.edu>, Oleg
>><············@myrealbox.com> wrote:
>>
>>
>>>Erann Gat wrote:
>>>
>>>
>>>>In article <············@newsmaster.cc.columbia.edu>, Oleg
>>>><············@myrealbox.com> wrote:
>>>>
>>>>
>>>>>Erann Gat wrote:
>>>>>
>>>>>
>>>>>>In article <············@newsmaster.cc.columbia.edu>, Oleg
>>>>>><············@myrealbox.com> wrote:
>>>>>>
>>>>>>
>>>>>>>I'm surprised people are surprised that Lisp isn't used at JPL [1].
>>>>>>>It wouldn't even prevent you from dividing an int by a string in
>>>>>>>some branch of code!
>>>>>>>
>>>>>>Lisp was used at JPL for decades (and it's actually still in use in
>>>>>>at least one fielded application, though there's currently no active
>>>>>>Lisp
>>>>>>development to my knowledge).  The reason it fell out of favor had
>>>>>>absolutely nothing to do with typing, or any other techincal reason;
>>>>>>it was purely political.
>>>>>>
>>>>>Did you "lose your faith" for political reasons too?
>>>>>
>>>>>Oleg
>>>>>
>>>>Hm, seems Nils was right about you.  You really are just a troll.
>>>>
>>>Don't like it when people point out holes in your story? Stop being a
>>>baby.
>>>
>>I have no problems with someone pointing out holes in my story, but that's
>>not what you did.  You didn't point anything out.  You asked a question,
>>which was at once ad-hominem and non-sequitur.  (And then you followed up
>>with another ad-hominem attack.)  That's what trolls do.
>>
>>If you want to point out a "hole in my story" I welcome that, but there
>>aren't any holes because what I said is true.  In fact I can document my
>>claim in excruciating detail.  Unlike many political processes, this one
>>actually played itself out in public view.  There were numerous witnesses
>>to all the key events.
>>
> 
> 1) you stated that JPL ditched Lisp for political reasons
> 2) you stated that you "lost your faith" in Lisp [for technical reasons, it 
> seemed]


This might be hard for you to understand and I am probably just
wasting bandwidth on you. JPL ditching Lisp and Erann Gat losing
faith in Lisp do not necessarily have anything to do with
each other.

It is you who get defensive as a little child, by bringing up
bygones and irrelevant material.


-- 
Thomas Stegen
From: Paolo Amoroso
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <MZWBPXvEDLPX18FbpDvKHxQFvkzN@4ax.com>
On Wed, 11 Sep 2002 09:03:51 -0700, ···@jpl.nasa.gov (Erann Gat) wrote:

> Lisp was used at JPL for decades (and it's actually still in use in at
> least one fielded application, though there's currently no active Lisp

Which fielded application?


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://www.paoloamoroso.it/ency/README
From: Erann Gat
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <gat-1309021059390001@192.168.1.50>
In article <····························@4ax.com>, Paolo Amoroso
<·······@mclink.it> wrote:

> On Wed, 11 Sep 2002 09:03:51 -0700, ···@jpl.nasa.gov (Erann Gat) wrote:
> 
> > Lisp was used at JPL for decades (and it's actually still in use in at
> > least one fielded application, though there's currently no active Lisp
> 
> Which fielded application?
> 

It's called SDS (State Database System).  It's part of a project called
MDS (Mission Data System).  It's pretty much a vanilla Web application.

SDS is written in MCL, and it includes a lightweight HTTP server that may
still be of interest to people.  It's designed to present a programming
abstraction of interacting with Lisp objects rather than "pages" and
"cgi-scripts."  This lets you do some fairly wizzy things in very small
amounts of code.  An old (1999) version of the server code can be found at
http://alvin.jpl.nasa.gov/gat/ftp/http.lsp.  (This version runs in CLisp.)

E.
From: Bill Clementson
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <wk8z232etd.fsf@attbi.com>
···@jpl.nasa.gov (Erann Gat) writes:

> SDS is written in MCL, and it includes a lightweight HTTP server that may
> still be of interest to people.  It's designed to present a programming
> abstraction of interacting with Lisp objects rather than "pages" and
> "cgi-scripts."  This lets you do some fairly wizzy things in very small
> amounts of code.  An old (1999) version of the server code can be found at
> http://alvin.jpl.nasa.gov/gat/ftp/http.lsp.  (This version runs in CLisp.)

I downloaded http.lsp and had a play with it - very nice. However, I had
to make a couple of minor changes to the code to get it to work with a
recent version (2.29) of CLISP:

Line 302: change package of lisp:socket-server to socket:socket-server
Line 330: change #\CR to #\Return

Also, for beginners who are trying out your code, you might want to add
in the Quickstart comments the following comment:

3. Start up browser on the machine running the server and enter the
   following URL: http://localhost:1234 (or other port# if you didn't
   use the default). This will display the "Welcome to the CLisp http
   server example page" login screen. Any userid/pw works here.

Thanks for sharing the code.
--
Bill Clementson
From: Pascal Costanza
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3D7F0377.690CF365@cs.uni-bonn.de>
Oleg wrote:
> 

> The type checker is there for a reason: one is execution speed, another
> reason is reliability. While a type checker can never guarantee that your
> program will do what you wanted, it removes a *great* deal of bugs:
[...]

No, a type checker is there for _two_ reasons (speed and reliability),
and that's exactly the problem. It's not necessarily a good idea to
combine two different, sometimes even contradictory goals into a single
concept. If you do that you have to make some compromises and you never
know if these compromises are the best choices in a concrete setting.

Personally I think that type systems are not per se a bad idea, but
those I have seen are overloaded with different goals and therefore too
entangled. IMHO, it's necessary to do more fundamental research on these
issues, especially empirical studies.

Pascal

--
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Kaz Kylheku
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <cf333042.0209111150.4e0f6969@posting.google.com>
Oleg <············@myrealbox.com> wrote in message news:<············@newsmaster.cc.columbia.edu>...
> Nils Goesche wrote:
> 
> > Tim Bradshaw <···@cley.com> writes:
> > 
> >> I kind of suspect that I'm preaching to a congregation which is
> >> divided here: the Lisp people are already converted and are getting
> >> bored, and the O'Caml people will never be converted because `HOFs are
> >> all you need' and they will never understand why macros are useful.
> > 
> > Some of them do understand why macros are useful.  That's why they
> > invented CamlP4 for a substitute.  When I still used OCaml, I already
> > knew some Lisp and felt the need for macros when I became more fluent
> > in OCaml.  So I learned to use CamlP4.  But using it was so awkward
> > that I finally thought ``WTF am I doing here??'' and returned to Lisp.
> 
> I'm positive that a greater percentage of serious O'Caml users know Lisp 
> well than vice versa. Guys who created O'Caml, like Xavier Leroy et. al. 
> certainly were Lisp experts. So it's not the Lisp people who are 
> "converted".
>  
> As to Camlp4, I don't need it and I don't use it. You make it sound like 
> it's necessary.
> 
> > (The type checker was another reason.  
> 
> The type checker is there for a reason: one is execution speed, another 
> reason is reliability. While a type checker can never guarantee that your 
Type checking does nothing for execution speed. It verifies constraints.
An optimizer can use declared or inferred type information to improve
speed. Optimizing and type checking are distinct activities.
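
For instance, in CL the type information is an optional hint the compiler
may exploit, and whether calls are checked is governed separately by the
SAFETY setting -- a sketch:

  (defun sum-squares (xs)
    ;; The declarations only feed the optimizer; the code runs without
    ;; them, and run-time checking is controlled by SAFETY, not by them.
    (declare (type (simple-array double-float (*)) xs)
             (optimize (speed 3) (safety 1)))
    (loop for x of-type double-float across xs
          sum (* x x) of-type double-float))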

Requiring users to declare type information is not done for safety;
it is done purely to make it easier for the language implementor
to optimize code, while making it hard for everyone else to write it.

> program will do what you wanted, it removes a *great* deal of bugs: in C++, 
> I would frequently have this bug when I divide one int by another and treat 
> the result as a float. That's a very nasty type of bug, especially in any 
> kind of scientific/numeric application. Lisp is such an extreme case that 
> I'm surprised people are surprised that Lisp isn't used at JPL [1]. 

Lisp doesn't have this problem; the type is deduced dynamically from
the types of the operands. Dividing two integers will produce a ratio
if the division is inexact, or else an integer. The decision is dynamic,
depending on the run-time values. It cannot be type checked in the
general case.

> wouldn't even prevent you from dividing an int by a string in some branch 
> of code! 

Dividing an integer by a string would signal a condition. Of course, this
is not caught until it actually happens. A Lisp type checker could catch
only some statically obvious cases of this, but not every case.
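
At a REPL, a sketch of the behaviour just described:

  (/ 7 2)    ; => 7/2 -- an exact ratio, because the division is inexact
  (/ 6 2)    ; => 3   -- an integer, because it divides evenly
  (/ 6 "2")  ; signals an error at run time (typically a TYPE-ERROR);
             ; nothing is silently reinterpreted as an integer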

Compilers for static languages *must* catch this kind of error at compile
time, because they throw away the information. Dividing an integer by
a string, if it was not diagnosed, would have completely unpredictable
results---for example, the bits representing some part of the string
object, such as a pointer to its data, could be interpreted as an integer
value.

Programmers who only understand static languages think that this discarding
is necessary, and therefore static type checking is some essential thing
that every decent language must have, and therefore declarations are good
because only with declarations can everything be thoroughly checked.

But what happens is that these programmers invent second-order mechanisms
for introducing run-time typing. Things like integer type codes in structs.
For example, in the BSD operating system, a struct vnode type has
a v_type field, which can hold values like VREG (regular file),
VDIR (directory) and others. And there are explicit run-time checks to
make sure that the wrong operation isn't applied, such as trying to
readdir() on a regular file, or link() a directory, which must be
converted to -1/errno == EPERM results.

And of course, only programmer discipline ensures that the checks
are done properly and thoroughly. If you don't write the check,
you don't get the check.

If you don't have dynamic typing built into the language, programmers
will reinvent it using ad-hoc, inherently fragile programming
conventions.
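
A sketch, with made-up class and operation names, of letting the language
carry the type tag instead of a hand-rolled v_type field:

  (defclass vnode () ())
  (defclass regular-file   (vnode) ())
  (defclass directory-node (vnode) ())

  (defgeneric read-dir (node)
    (:documentation "Return the entries of a directory node."))

  (defmethod read-dir ((node directory-node))
    '())   ; ... real entries here ...

  ;; (read-dir (make-instance 'regular-file))
  ;; => NO-APPLICABLE-METHOD is signalled automatically; no explicit
  ;;    v_type check had to be written, or remembered, anywhere.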
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alpfg5$f39$1@newsmaster.cc.columbia.edu>
Kaz Kylheku wrote:

> For example, in the BSD operating system, a struct vnode type has
> a v_type field, which can hold values like VREG (regular file),
> VDIR (directory) and others. And there are explicit run-time checks to
> make sure that the wrong operation isn't applied, such as trying to
> readdir() on a regular file, or link() a directory, which must be
> converted to -1/errno == EPERM results.

Maybe that's the problem: people who aren't used to strictly typed 
languages never really learned to think in terms of types. I'm deducing 
this from the fact that you wrote the above as an argument against strict 
static typing, while in fact it is almost a textbook example for pattern 
matching.

Oleg
From: J.St.
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <878z27mo0c.fsf@jmmr.no-ip.com>
Oleg <············@myrealbox.com> writes:

> Kaz Kylheku wrote:
> 
> > For example, in the BSD operating system, a struct vnode type has
> > a v_type field, which can hold values like VREG (regular file),
> > VDIR (directory) and others. And there are explicit run-time checks to
> > make sure that the wrong operation isn't applied, such as trying to
> > readdir() on a regular file, or link() a directory, which must be
> > converted to -1/errno == EPERM results.
> 
> Maybe that's the problem: people who aren't used to strictly typed 
> languages never really learned to think in terms of types. I'm deducing 
> this from the fact that you wrote the above as an argument against strict 
> static typing, while in fact it is almost a textbook example for pattern 
> matching.

Could you explain this in more detail? I came from PASCAL (you know,
(for its time) a strictly typed language), but do not see the
relationship to pattern matching.

Regards,
Julian
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <lkvg5bxreq.fsf@pc022.bln.elmeg.de>
··········@web.de (J.St.) writes:

> Oleg <············@myrealbox.com> writes:
> 
> > Kaz Kylheku wrote:
> > 
> > > For example, in the BSD operating system, a struct vnode type has
> > > a v_type field, which can hold values like VREG (regular file),
> > > VDIR (directory) and others. And there are explicit run-time checks to
> > > make sure that the wrong operation isn't applied, such as trying to
> > > readdir() on a regular file, or link() a directory, which must be
> > > converted to -1/errno == EPERM results.
> > 
> > Maybe that's the problem: people who aren't used to strictly typed 
> > languages never really learned to think in terms of types. I'm deducing 
> > this from the fact that you wrote the above as an argument against strict 
> > static typing, while in fact it is almost a textbook example for pattern 
> > matching.
> 
> Could you explain this in more detail? I came from PASCAL (you know,
> (for its time) a strictly typed language), but do not see the
> relationship to pattern matching.

Kaz was talking about C.  In ML-like languages, the first thing you
typically do is invent a bunch of union types, like

type blark = Foo of string
           | Bar of int
           | Nil;;

Now a variable of type blark can hold both a string and an int, or
Nil.  Whenever you write a function now that gets a variable of type
blark, you have to handle all cases, like

let some_fun = fun Foo s -> <do something with the string s>
                 | Bar n -> <do something with the integer n>
                 | Nil -> <do something else>;;

You don't have to handle all cases; typically you do

let some_fun = fun Bar n -> <do something with the integer n>
                 | _ -> raise CrashTheSystem;;

This looks indeed like reintroducing a kind of dynamic typing, just
that you have to specify every time what to do in every typecase, even
if you (but not the type checker) can prove that some cases will never
happen.  To make life easier for the type checker (but not for you),
lists can only hold values of a fixed type.  If you want a list that
can hold both strings and integers, you first have to define a union
type like blark and then use a blark list.  Of course, now you have to
pattern match every time you access an element of a list.  For
instance, if you have a plist like

 (42 "foo" 17 "bar" 19 "blark")

and you want to do something with the first pair, you'd do

let some_other_fun = fun Bar n :: Foo s :: _ -> <do something with
                                                 n and s>
                       | _ -> raise CrashTheSystem;;

Now you're crashing your system in a type safe manner when something
unexpected happens, and that's why the static typers are superior to
us mere mortals.
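
For comparison, a sketch of the same sort of dispatch over a mixed list
in Lisp, with no union type declared up front:

  (defun some-fun (x)
    (etypecase x                 ; dispatch on the run-time type
      (string  (format t "a string: ~A~%" x))
      (integer (format t "an integer: ~D~%" x))
      (null    (format t "nothing here~%"))))

  (mapc #'some-fun (list 42 "foo" 17 "bar" 19 "blark"))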

Regards,
-- 
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <lkr8fzxqoi.fsf@pc022.bln.elmeg.de>
Nils Goesche <······@cartan.de> writes:

> Whenever you write a function now that gets a variable of type
> blark, you have to handle all cases, like
> 
> let some_fun = fun Foo s -> <do something with the string s>
>                  | Bar n -> <do something with the integer n>
>                  | Nil -> <do something else>;;
> 
> You don't have to handle all cases; typically you do

Make that: ``all cases explicitly''.

Sorry,
-- 
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: J.St.
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <87wuprqlmr.fsf@jmmr.no-ip.com>
Nils Goesche <······@cartan.de> writes:

> Nils Goesche <······@cartan.de> writes:
> 
> > Whenever you write a function now that gets a variable of type
> > blark, you have to handle all cases, like
> > 
> > let some_fun = fun Foo s -> <do something with the string s>
> >                  | Bar n -> <do something with the integer n>
> >                  | Nil -> <do something else>;;
> > 
> > You don't have to handle all cases; typically you do
> 
> Make that: ``all cases explicitly''.


Thanks for explaining. I do not want to judge, but at least it looks
awkward to me.

Regards,
Julian
From: Matthew Danish
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <20020912214513.N23781@lain.res.cmu.edu>
On Thu, Sep 12, 2002 at 05:21:32PM +0200, J.St. wrote:
> Nils Goesche <······@cartan.de> writes:
> 
> > Nils Goesche <······@cartan.de> writes:
> > 
> > > Whenever you write a function now that gets a variable of type
> > > blark, you have to handle all cases, like
> > > 
> > > let some_fun = fun Foo s -> <do something with the string s>
> > >                  | Bar n -> <do something with the integer n>
> > >                  | Nil -> <do something else>;;
> > > 
> > > You don't have to handle all cases; typically you do
> > 
> > Make that: ``all cases explicitly''.
> 
> 
> Thanks for explaining. I do not want to judge, but at least it looks
> awkward to me.

Having seen why simulating dynamic typing is clumsy in a static system
(and not possible when redefinition is involved) I would like to point
out that pattern-matching is actually one of the neater features of *ML
languages.

For example:

(* non-destructive reverse of a list *)
fun list_reverse nil = nil
  | list_reverse (first::rest) = (list_reverse rest) @ [first]

Could be compared, roughly, to

(defmethod list-reverse ((l list))
  (if (null l)
      nil
      (destructuring-bind (first &rest rest) l
        (append (list-reverse rest)
	        (list first)))))

with the additional note that the Lisp function can handle lists with
elements of varying types and the ML function, as shown by Nils, can
only handle lists of a single type (which can be a union type).

The ML notation is definitely more convenient for these sorts of things.
On the other hand, a macro can be defined in Lisp which gives most of
the same convenience, and could also potentially handle different types
(the ML function is restricted to 'a list -> 'a list).  Also the ML
notation tends to get a bit messy when function pattern-match clauses
are mixed together with case pattern-match clauses and exception
handling pattern-match clauses.  Not knowing where parentheses were
needed caused a great deal of trouble for me when I was learning SML.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Vassil Nikolov
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <f34a0f4f.0209152227.3fa8d706@posting.google.com>
Matthew Danish <·······@andrew.cmu.edu> wrote in message news:<·····················@lain.res.cmu.edu>...
[...]
> (* non-destructive reverse of a list *)
> fun list_reverse nil = nil
>   | list_reverse (first::rest) = (list_reverse rest) @ [first]
> 
> Could be compared, roughly, to
> 
> (defmethod list-reverse ((l list))
>   (if (null l)
>       nil
>       (destructuring-bind (first &rest rest) l
>         (append (list-reverse rest)
> 	        (list first)))))

Wouldn't

  (defmethod list-reverse ((l null))
    nil)

  (defmethod list-reverse ((l cons))
    (destructuring-bind (first &rest rest) l
      `(,@(list-reverse rest) ,first)))

be a more literal rendition into Common Lisp?

---Vassil.
From: Marco Antoniotti
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <y6cfzwf85le.fsf@octagon.mrl.nyu.edu>
Oleg <············@myrealbox.com> writes:

> Kaz Kylheku wrote:
> 
> > For example, in the BSD operating system, a struct vnode type has
> > a v_type field, which can hold values like VREG (regular file),
> > VDIR (directory) and others. And there are explicit run-time checks to
> > make sure that the wrong operation isn't applied, such as trying to
> > readdir() on a regular file, or link() a directory, which must be
> > converted to -1/errno == EPERM results.
> 
> Maybe that's the problem: people who aren't used to strictly typed 
> languages never really learned to think in terms of types.

What you call "thinking by types" is called by CLers "thinking
abstractly": i.e. setting up the appropriate set of linguistic
conventions that allow you to deal with the problem at hand.

The fact that you have your 'datatype' (in SML) does not mean that you
do not have 'defmacro' available.  All in all you achieve the same
effect, and, given a compiler like CMUCL/Python (I assume the
commercial compilers can also do that or similar), also in a type safe manner.

Granted, there are several nice things in the *ML family of language,
but this "thinking in types" is just a marketing spiel.

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group        tel. +1 - 212 - 998 3488
715 Broadway 10th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA                 http://bioinformatics.cat.nyu.edu
                    "Hello New York! We'll do what we can!"
                           Bill Murray in `Ghostbusters'.
From: Thien-Thi Nguyen
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <kk9hegvukga.fsf@glug.org>
Oleg <············@myrealbox.com> writes:

> Maybe that's the problem: people who aren't used to strictly typed
> languages never really learned to think in terms of types. I'm
> deducing this from the fact that you wrote the above as an argument
> against strict static typing, while in fact it is almost a textbook
> example for pattern matching.

depending on the type theorist quoted you could probably find argument
that all thinking is a form of typing; the state of non-discernment is
the only untyped perception.  given this, programming can be seen as
codification of thought and its practice would necessarily be in terms
of types (other thoughts).

it is like filling a jug of water from the stream; the water takes the
shape of the jug but the stream flows on anyway.  where a C programmer
douses the fire w/ the (laboriously transported) jug of water, a Lisp
programmer simply diverts a rivulet from the waterwheel to do the job.

thi
From: Christopher Browne
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <allmc0$1rua2g$1@ID-125932.news.dfncis.de>
Centuries ago, Nostradamus foresaw when Nils Goesche <······@cartan.de> would write:
> (The type checker was another reason.  Ever tried to define an
> Y-combinator in OCaml or SML?  Or missed PRINT?)

I obviously need to read the Little Schemer and successor again.  I
still don't "get" why one would really _care_ about the Y-combinator.
-- 
(reverse (concatenate 'string ··········@" "enworbbc"))
http://www3.sympatico.ca/cbbrowne/x.html
If one synchronized swimmer drowns, do the rest have to drown too?
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <lksn0ha747.fsf@pc022.bln.elmeg.de>
Christopher Browne <········@acm.org> writes:

> Centuries ago, Nostradamus foresaw when Nils Goesche <······@cartan.de> would write:
> > (The type checker was another reason.  Ever tried to define an
> > Y-combinator in OCaml or SML?  Or missed PRINT?)
> 
> I obviously need to read the Little Schemer and successor again.  I
> still don't "get" why one would really _care_ about the Y-combinator.

Just for the heck of it.  The point is, it is /trivial/ to write it in
Lisp.  It is also trivial to write an ``attempt'' in ML, but the type
checker won't let you.  You first have to invent a very clever trick
to beat the type checker.

And you get into such situations very easily, especially with modules
and functors.  You /know/ it works, you /know/ it is correct, only the
type checker doesn't.  I don't want that anymore.
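
For instance, a minimal applicative-order Y in CL (a sketch, nothing
clever) plus factorial through it:

  (defun y (f)
    ((lambda (x) (funcall f (lambda (&rest args)
                              (apply (funcall x x) args))))
     (lambda (x) (funcall f (lambda (&rest args)
                              (apply (funcall x x) args))))))

  (funcall (y (lambda (fact)
                (lambda (n)
                  (if (< n 2) 1 (* n (funcall fact (1- n)))))))
           5)
  ;; => 120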

Regards,
-- 
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Rob Warnock
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <untgci2bjh9rd1@corp.supernews.com>
Nils Goesche  <······@cartan.de> wrote:
+---------------
| Christopher Browne <········@acm.org> writes:
| > I obviously need to read the Little Schemer and successor again.  I
| > still don't "get" why one would really _care_ about the Y-combinator.
| 
| Just for the heck of it.
+---------------

Well, it's a *little* bit more than *that*!  ;-}  ;-}

Basically, it provides (one way of showing) a solid theoretical basis
for recursive functions, even in a *purely* functional (non-mutational)
world. That is, it gives you a way to talk about what CL DEFUN & LABELS
and Scheme "define" & "letrec" do [w.r.t. recursive functions, that is]
*without* assuming that they're primitive in the language(s).

But beyond that, there are actually a few somewhat-useful things
you can learn from studying it, at least once. First, having done
something like define the factorial "the hard way", with Y, most
people see that there is a *less* general but simpler way to write
*specific* recursive functions without using Y [or the built-in
recursion of DEFUN or LABELS], e.g., for factorial:

	> (defun fact (n)
	    (flet ((aux (f n)
		     (if (< n 2) 1 (* n (funcall f f (1- n))))))
	      (aux #'aux n))) 
	FACT
	> (fact 5)
	120
	> (fact 50)
	30414093201713378043612608166064768844377641568960512000000000000
	> 

Then from that one might get the notion of a tree-walker that
passes *itself* down as one of the arguments to recursive calls.
Why? Because if you pass down the function to recurse with instead
of hard-coding it into the algorithm, you can *change* that function
at some point in the walk and the rest of the walk of that sub-tree
will automatically use the new function as the default.

Note that this is similar to but subtly different from doing the
tree walk with a CLOS generic function that dispatches based on
node type, and in fact the two approaches are complementary and can
be mixed.

In short, one might not ever actually use the formal Y-combinator
in practical coding, but having studied it might have some positive
spinoffs (maybe).

+---------------
| The point is, it is /trivial/ to write it in Lisp.
+---------------

Yup, once you've seen it and walked through it enough to really grok it.
But the *first* time's sure a bitch, i'n'it...  ;-}  ;-}


-Rob

-----
Rob Warnock, PP-ASEL-IA		<····@rpw3.org>
627 26th Avenue			<URL:http://www.rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <lkfzwgab79.fsf@pc022.bln.elmeg.de>
····@rpw3.org (Rob Warnock) writes:

> Nils Goesche  <······@cartan.de> wrote:
> +---------------
> | The point is, it is /trivial/ to write it in Lisp.
> +---------------
> 
> Yup, once you've seen it and walk through it enough to really grok
> it.

I said trivial to /write/ :-)

> But the *first* time's sure a bitch, i'n'it...  ;-}  ;-}

Indeed.  I didn't get it at all when I first saw it in some Scheme
code, I think.  Later I met it again in some book on programming
language semantics, saw the LC formula and immediately got it, so I
went to the computer to try it out...

Regards,
-- 
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <lk65xdbpy2.fsf@pc022.bln.elmeg.de>
Oleg <············@myrealbox.com> writes:

> Nils Goesche wrote:
> 
> > Oleg <············@myrealbox.com> writes:
> > 
> >> Nils Goesche wrote:
> >> 
> >> > The point is that you can write
> >> > 
> >> >   (do-combinations (a b c) some-list
> >> >      <BODY>)
> >> > 
> >> > and then the code in BODY is called repeatedly with A, B and C bound
> >> > to the values of a ``combination''.  To do that with higher order
> >> > functions, you'd have to write something like
> >> > 
> >> >   (do-combinations (lambda (a b c)
> >> >                       <BODY>)
> >> >                    some-list)
> >> 
> >> More precisely, in O'Caml you would do
> >> 
> >> List.iter print_int_list (do_combinations 3 [1; 2; 3; 4])
> >> 
> >> What's wrong with that?

> > You don't get it.  The macro DO-COMBINATIONS does /not/ cons up a list
> > of lists.  It repeatedly executes some lines of code with A, B and C
> > bound to some values that depend on SOME-LIST.  Now read what I wrote
> > again.
> 
> Perhaps I'm not expressing myself very clearly. do_combinations is not 
> equivalent to DO-COMBINATIONS. I should have named it make_combinations 
> (and we'll call it that henceforth to avoid confusion)
> 
> You can define do_combinations _function_ in O'Caml that is similar to 
> DO-COMBINATIONS macro in Lisp:
> 
> val do_combinations: int -> 'a list -> ('a list -> unit) -> unit
> 
> and use it like this:
> 
> do_combinations 3 [1; 2; 3; 4] print_int_list

You could do that in Lisp without macros, too; then, you'd have to
call DO-COMBINATIONS as

  (do-combinations 3 some-list (lambda (combi-list)
                                 blablabla))

The point is, you write the macro if you don't /want/ to call it that
way.  First, what I want is having A B C bound to the values in the
combi-list, and no consing:  I want to pass a function with 3
arguments.

  (do-combinations 3 some-list (lambda (a b c)
                                  blablabla))

Then, do-combinations should be smart enough to count the number of
args itself:

  (do-combinations some-list (lambda (a b c)
                                blabla))

But I want even more: Having to write lambda all the time here is just
silly.  So, I want

  (do-combinations some-list (a b c)
        blabla)

where blabla might be

     (foo a b)
     (bar b c)
     (zap c a)

or whatever.  If you don't /have/ macros, you might say my desire to
do it that way is unreasonable.  But the fact is, Lisp /does/ have
macros and I /can/ do it that way whenever I want.

> 
> which would print
> 1 2 3
> 1 2 4 
> 1 3 4 
> 2 3 4
> 
> with appropriately defined print_int_list [1]. No need for macros.
> 
> Oleg
> 
> [1] let print_int_list lst = List.iter print_int lst; print_newline()

To come closer to our example, you'd pass an anonymous function.

Regards,
-- 
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Eduardo Muñoz
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <ubs75lfcy.fsf@jet.es>
Oleg <············@myrealbox.com> writes:

> Nils Goesche wrote:
> 

> > No, that's not the point.  The point is that you can write
> > 
> >   (do-combinations (a b c) some-list
> >      <BODY>)
> > 
> > and then the code in BODY is called repeatedly with A, B and C bound
> > to the values of a ``combination''.  To do that with higher order
> > functions, you'd have to write something like
> > 
> >   (do-combinations (lambda (a b c)
> >                       <BODY>)
> >                    some-list)
> 
> More precisely, in O'Caml you would do
> 
> List.iter print_int_list (do_combinations 3 [1; 2; 3; 4])
> 
> What's wrong with that? 

A problem that I see is that the number of
combinations may exceed your physical memory.  

As example:

(with-file-lines (line file)
  (with-line-tokens (tokens line)
    ;; Here a string and a list are consed
    do-stuff))

or

(dolist (tokens (mapcar (lambda (line) (split-line line)) 
                        (read-file file)))
        ;; If the file is big enough the system
        ;; will crash and burn
        do-stuff)

or the ol' way of open file, test for eof,
read-line, etc... close file.


-- 

Eduardo Muñoz
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alm9de$auv$1@newsmaster.cc.columbia.edu>
Eduardo Muñoz wrote:

>> List.iter print_int_list (do_combinations 3 [1; 2; 3; 4])
>> 
>> What's wrong with that?
> 
> A problem that I see is that the number of
> combinations may exceed you phisical memory.

I think I addressed this problem here:

(message id): ·················@newsmaster.cc.columbia.edu

You can avoid creating a list of all combinations if you define

val do_combinations: int -> 'a list -> ('a list -> unit) -> unit

OTOH it's conceivable that in some cases you might just need a list of all 
combinations. Since "make_combinations" can be created using 
"do_combinations" above, this is probably, in fact, the Right Way of doing 
it.

Cheers
Oleg
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <lkbs74aaq1.fsf@pc022.bln.elmeg.de>
Oleg <············@myrealbox.com> writes:

> Eduardo Muñoz wrote:
> 
> >> List.iter print_int_list (do_combinations 3 [1; 2; 3; 4])
> >> 
> >> What's wrong with that?
> > 
> > A problem that I see is that the number of
> > combinations may exceed you phisical memory.
> 
> I think I addressed this problem here:
> 
> (message id): ·················@newsmaster.cc.columbia.edu
> 
> You can avoid creating a list of all combinations if you define
> 
> val do_combinations: int -> 'a list -> ('a list -> unit) -> unit
> 
> OTOH it's conceivable that in some cases you might just need a list
> of all combinations. Since "make_combinations" can be created using
> "do_combinations" above, this is probably, in fact, the Right Way of
> doing it.

Nobody can possibly be so stupid.  You, Sir, are an ordinary troll.

Bye,
-- 
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alph18$g1o$1@newsmaster.cc.columbia.edu>
Nils Goesche wrote:

 
> Nobody can possibly be so stupid.  You, Sir, are an ordinary troll.
> 
> Bye,

I got it. You are Noggum's alternate Internet personality. Noggum, get a 
life already!
From: Software Scavenger
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <a6789134.0209110610.49cfd0d2@posting.google.com>
Oleg <············@myrealbox.com> wrote in message news:<············@newsmaster.cc.columbia.edu>...

> P.S. If you still want me to implement do_combinations in O'Caml, give me a 
> Lisp implementation (I don't want to think of an algorithm for it)

Here is a Common Lisp implementation of it I had posted in another
thread:

(defmacro do-combinations (syms list &rest body)
   (labels ((work (syms x)
               (let ((y (gensym)))
                  (if (cdr syms)
                        `(loop as (,(car syms) . ,y) on ,x
                               do ,(work (cdr syms) y))
                     `(loop as ,(car syms) in ,x do ,@body)))))
      (work syms list)))

It generates some code to use nested loops.  The two backquoted loop
forms are for outer and innermost levels of nesting.  Do you have a
Common Lisp handy to do macroexpansion etc. to make it easier to see
how it works?

Note that unlike a typical implementation, this implementation is not
a wrapper for a do-combinations function.  It's the actual algorithm,
in the macro, implemented by generating the nested loops at
macroexpansion time to do the work at runtime.  So if the list of
symbols, which was (a b c) in the original example, were to have 100
symbols in it instead of 3, in a particular call to this macro, it
would generate 100 levels of nested loop.  If it were desired to not
generate varying amounts of code, the more common implementation with
a function wrapped by a macro would be appropriate.
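
For reference, a call like

(do-combinations (a b c) '(1 2 3 4)
   (print (list a b c)))

expands into roughly the following (the #:G names stand in for whatever
gensyms your implementation produces):

(loop as (a . #:g1) on '(1 2 3 4)
      do (loop as (b . #:g2) on #:g1
               do (loop as c in #:g2
                        do (print (list a b c)))))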
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <alpij7$gvi$1@newsmaster.cc.columbia.edu>
Software Scavenger wrote:

> 
> (defmacro do-combinations (syms list &rest body)
> (labels ((work (syms x)
> (let ((y (gensym)))
> (if (cdr syms)
> `(loop as (,(car syms) . ,y) on ,x
> do ,(work (cdr syms) y))
> `(loop as ,(car syms) in ,x do ,@body)))))
> (work syms list)))

I'm sorry, this only rivals /usr/include/g++-3/stl_function.h in 
readability. No it's worse. I don't even know which tokens are variables 
and which ones are keywords. Perhaps it's a good thing there are no 
templates, err... I mean macros, in O'Caml. 

Oleg
From: Marco Antoniotti
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <y6cbs7385ff.fsf@octagon.mrl.nyu.edu>
Oleg <············@myrealbox.com> writes:

> Software Scavenger wrote:
> 
> > 
> > (defmacro do-combinations (syms list &rest body)
> > (labels ((work (syms x)
> > (let ((y (gensym)))
> > (if (cdr syms)
> > `(loop as (,(car syms) . ,y) on ,x
> > do ,(work (cdr syms) y))
> > `(loop as ,(car syms) in ,x do ,@body)))))
> > (work syms list)))
> 
> I'm sorry, this only rivals /usr/include/g++-3/stl_function.h in 
> readability. No it's worse. I don't even know which tokens are variables 
> and which ones are keywords. Perhaps it's a good thing there are no 
> templates, err... I mean macros, in O'Caml. 

What is so unreadable about

        (do-combinations (a b c) (list 1 2 3 4 5)
           (do-something a c b))

This is what you care about at the end of the day, and this is something
that is *much* more difficult to obtain in any *ML language.  And that is
because the identity between code and data was lost too long ago.

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group        tel. +1 - 212 - 998 3488
715 Broadway 10th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA                 http://bioinformatics.cat.nyu.edu
                    "Hello New York! We'll do what we can!"
                           Bill Murray in `Ghostbusters'.
From: Bruce Hoult
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <bruce-ACFBF3.14063513092002@copper.ipg.tsnz.net>
In article <···············@octagon.mrl.nyu.edu>,
 Marco Antoniotti <·······@cs.nyu.edu> wrote:

> Oleg <············@myrealbox.com> writes:
> 
> > Software Scavenger wrote:
> > 
> > > 
> > > (defmacro do-combinations (syms list &rest body)
> > > (labels ((work (syms x)
> > > (let ((y (gensym)))
> > > (if (cdr syms)
> > > `(loop as (,(car syms) . ,y) on ,x
> > > do ,(work (cdr syms) y))
> > > `(loop as ,(car syms) in ,x do ,@body)))))
> > > (work syms list)))
> > 
> > I'm sorry, this only rivals /usr/include/g++-3/stl_function.h in 
> > readability. No it's worse. I don't even know which tokens are variables 
> > and which ones are keywords. Perhaps it's a good thing there are no 
> > templates, err... I mean macros, in O'Caml. 
> 
> What is so unreadable about
> 
>         (do-combinations (a b c) (list 1 2 3 4 5)
>            (do-something a c b))
> 
> This is what you care at the end of the day and this is something it
> is *much* more difficult to obtain in any *ML language.

True.


> And that is because the identity between code and data was lost too 
> long ago.

False.  I showed how this can easily be done in Dylan, which does not 
share CL's compile-time identity of code and data.

-- Bruce
From: Marco Antoniotti
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <y6c1y7yez1i.fsf@octagon.mrl.nyu.edu>
Bruce Hoult <·····@hoult.org> writes:

> In article <···············@octagon.mrl.nyu.edu>,
>  Marco Antoniotti <·······@cs.nyu.edu> wrote:
> 
> > Oleg <············@myrealbox.com> writes:
> > 
> > > Software Scavenger wrote:
> > > 
> > > > 
> > > > (defmacro do-combinations (syms list &rest body)
> > > > (labels ((work (syms x)
> > > > (let ((y (gensym)))
> > > > (if (cdr syms)
> > > > `(loop as (,(car syms) . ,y) on ,x
> > > > do ,(work (cdr syms) y))
> > > > `(loop as ,(car syms) in ,x do ,@body)))))
> > > > (work syms list)))
> > > 
> > > I'm sorry, this only rivals /usr/include/g++-3/stl_function.h in 
> > > readability. No it's worse. I don't even know which tokens are variables 
> > > and which ones are keywords. Perhaps it's a good thing there are no 
> > > templates, err... I mean macros, in O'Caml. 
> > 
> > What is so unreadable about
> > 
> >         (do-combinations (a b c) (list 1 2 3 4 5)
> >            (do-something a c b))
> > 
> > This is what you care at the end of the day and this is something it
> > is *much* more difficult to obtain in any *ML language.
> 
> True.
> 
> 
> > And that is because the identity between code and data was lost too 
> > long ago.
> 
> False.  I showed how this can easily be done in Dylan, which does not 
> share CL's compile-time identity of code and data.

From the context of the thread it should have been clear that my
remark was not directed at Dylan.  I know very well that you can do
that in Dylan as well.

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group        tel. +1 - 212 - 998 3488
715 Broadway 10th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA                 http://bioinformatics.cat.nyu.edu
                    "Hello New York! We'll do what we can!"
                           Bill Murray in `Ghostbusters'.
From: Gareth McCaughan
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <slrnao1tsn.2vcs.Gareth.McCaughan@g.local>
Oleg Somethingorother wrote:
> Software Scavenger wrote:
> 
> > 
> > (defmacro do-combinations (syms list &rest body)
> > (labels ((work (syms x)
> > (let ((y (gensym)))
> > (if (cdr syms)
> > `(loop as (,(car syms) . ,y) on ,x
> > do ,(work (cdr syms) y))
> > `(loop as ,(car syms) in ,x do ,@body)))))
> > (work syms list)))
> 
> I'm sorry, this only rivals /usr/include/g++-3/stl_function.h in 
> readability. No it's worse. I don't even know which tokens are variables 
> and which ones are keywords. Perhaps it's a good thing there are no 
> templates, err... I mean macros, in O'Caml. 

Well, if you will delete all the indentation, you
should expect to get unreadable code. You'll find
that deleting all the inessential whitespace and
running it all together in a single line works
even better.

Obviously Lisp code is hard to read if you aren't
familiar with Lisp. And C++ code is hard to read
if you aren't familiar with C++. And both can be
hard to read even if you *are* familiar with the
language, because both languages (C++ more rarely
than CL) allow you to pack a lot of meaning into
a small volume of code.

But your comparison is quite an appropriate one.
Macro definitions are allowed to be ugly, just as
the headers for the C++ standard library are allowed
to be ugly. They are ugly so that your code can be
beautiful. As it happens, I think the DO-COMBINATIONS
macro is not difficult to read at all, but it
wouldn't matter all that much if it were as ugly
and unreadable as you say it is.

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Erik Naggum
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3240852292130093@naggum.no>
* Gareth McCaughan
| They are ugly so that your code can be beautiful.

  This is a frequently misunderstood point.  A lot of programmers who get
  transfixed by some blinding elegance never understand that what they have to
  work with is elegant because a lot of the dirty details have been wiped under
  the carpet.  They sort of get this requirement that from this point onward,
  /all/ code must be "elegant".  I have come to believe that for elegance to be
  achieved where it did not previously exist, you must do a lot of hard, dirty
  work.  The simpler and more elegant you want the abstraction to be, the more
  time and effort you must expend on its fundamentals and its implementation.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Craig Brozefsky
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <87k7lqppjb.fsf@piracy.red-bean.com>
Erik Naggum <····@naggum.no> writes:

> * Gareth McCaughan
> | They are ugly so that your code can be beautiful.
> 
>   This is a frequently misunderstood point.  A lot of programmers who get
>   transfixed by some blinding elegance never understand that what they have to
>   work with is elegant because a lot of the dirty details have been wiped under
>   the carpet.  They sort of get this requirement that from this point onward,
>   /all/ code must be "elegant".  I have come to believe that elegance, to be
>   achieved where it did not previously exist, you must do a lot of hard, dirty
>   work.  The simpler and more elegant you want the abstraction to be, the more
>   time and effort you must expend on its fundamentals and its implementation.

Well put!

I am reminded of the brutal, planar elegance of a machine polished
stone counter top in contrast to the sensual, living elegance of a
rock from the riverbed.

-- 
Sincerely,
Craig Brozefsky <·····@red-bean.com>
Free Scheme/Lisp Software  http://www.red-bean.com/~craig
From: Hartmann Schaffer
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <3d814051@news.sentex.net>
In article <············@newsmaster.cc.columbia.edu>,
	Oleg <············@myrealbox.com> writes:
> Software Scavenger wrote:
> 
>> 
>> (defmacro do-combinations (syms list &rest body)
>> (labels ((work (syms x)
>> (let ((y (gensym)))
>> (if (cdr syms)
>> `(loop as (,(car syms) . ,y) on ,x
>> do ,(work (cdr syms) y))
>> `(loop as ,(car syms) in ,x do ,@body)))))
>> (work syms list)))
> 
> I'm sorry, this only rivals /usr/include/g++-3/stl_function.h in 
> readability. No it's worse. I don't even know which tokens are variables 
> and which ones are keywords. Perhaps it's a good thing there are no 
> templates, err... I mean macros, in O'Caml. 

did you edit the code sample intentionally to make it look unreadable
so that you can complain about unreadability or was it simply a stupid
mistake?

hs

-- 

don't use malice as an explanation when stupidity suffices
From: Bruce Hoult
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <bruce-7F4644.11463812092002@copper.ipg.tsnz.net>
In article <····························@posting.google.com>,
 ··········@mailandnews.com (Software Scavenger) wrote:

> Oleg <············@myrealbox.com> wrote in message 
> news:<············@newsmaster.cc.columbia.edu>...
> 
> > P.S. If you still want me to implement do_combinations in O'Caml, give me a 
> > Lisp implementation (I don't want to think of an algorithm for it)
> 
> Here is a Common Lisp implementation of it I had posted in another
> thread:
> 
> (defmacro do-combinations (syms list &rest body)
>    (labels ((work (syms x)
>                (let ((y (gensym)))
>                   (if (cdr syms)
>                         `(loop as (,(car syms) . ,y) on ,x
>                                do ,(work (cdr syms) y))
>                      `(loop as ,(car syms) in ,x do ,@body)))))
>       (work syms list)))
> 
> It generates some code to use nested loops.  The two backquoted loop
> forms are for outer and innermost levels of nesting.  Do you have a
> Common Lisp handy to do macroexpansion etc. to make it easier to see
> how it works?
> 
> Note that unlike a typical implementation, this implementation is not
> a wrapper for a do-combinations function.  It's the actual algorithm,
> in the macro, implemented by generating the nested loops at
> macroexpansion time to do the work at runtime.  So if the list of
> symbols, which was (a b c) in the original example, were to have 100
> symbols in it instead of 3, in a particular call to this macro, it
> would generate 100 levels of nested loop.  If it were desired to not
> generate varying amounts of code, the more common implementation with
> a function wrapped by a macro would be appropriate.

Here's a Dylan version for you:

define macro do-combinations
  {do-combinations (?:name in ?:expression) ?:body end}
    => {for (?name in ?expression) ?body end};

  {do-combinations (?:name, ?names:* in ?:expression) ?:body end}
    => {for (y = ?expression then y.tail, while: y ~== #())
          let ?name = y.head;
          do-combinations (?names in y.tail) ?body end
       end};
end macro;

do-combinations (x,y,z in #("the", "quick", "brown", "fox", "jumps"))
  format-out("%= %= %=\n", x, y, z);
end;

·····@k7:~/programs/dylan/combinations > ./combinations 
"the" "quick" "brown"
"the" "quick" "fox"
"the" "quick" "jumps"
"the" "brown" "fox"
"the" "brown" "jumps"
"the" "fox" "jumps"
"quick" "brown" "fox"
"quick" "brown" "jumps"
"quick" "fox" "jumps"
"brown" "fox" "jumps"


Dylan's for() loop doesn't have the list destructuring feature of CL's 
loop macro, so I had to do that part by hand.

On the other hand, I can show you a slightly longer version that 
produces optimal code not only for lists but also for strings, vectors, 
hash tables etc (if the actual type is known at the point of 
macro-expansion, of course).

do-combinations (x,y,z in "qwerty")
  format-out("%= %= %=\n", x, y, z);
end;

·····@k7:~/programs/dylan/combinations > ./combinations 
'q' 'w' 'e'
'q' 'w' 'r'
'q' 'w' 't'
'q' 'w' 'y'
'q' 'e' 'r'
'q' 'e' 't'
'q' 'e' 'y'
'q' 'r' 't'
'q' 'r' 'y'
'q' 't' 'y'
'w' 'e' 'r'
'w' 'e' 't'
'w' 'e' 'y'
'w' 'r' 't'
'w' 'r' 'y'
'w' 't' 'y'
'e' 'r' 't'
'e' 'r' 'y'
'e' 't' 'y'
'r' 't' 'y'

Interested?

-- Bruce
From: Software Scavenger
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <a6789134.0209122046.1b071b3@posting.google.com>
Bruce Hoult <·····@hoult.org> wrote in message news:<···························@copper.ipg.tsnz.net>...

> Interested?

Only if it doesn't take much of your time.  If I were planning to
write a lot of code like this, variations of do-combinations and other
macros that build nested loops, I would probably build some kind of
foundation, something to build or implement the macros, such that each
of them would look very simple and could be implemented very quickly.

One variation of do-combinations that might be handy would be to
iterate through combinations of numbers from 1 to N, instead of a list
or vector.  Here is a macro to implement that, this time using a loop
instead of recursion for building the code.  It's interesting to
compare the two styles and think about which is better and why.  Also
note that if we don't have the vector version and don't want to bother
with it, we can easily use this version for it by using the numbers to
index the vector.  Of course your generic version would be the most
convenient of all, if we used this kind of functionality often enough
to make it worth the bother.

(defmacro do-combinations-n (syms n &rest body)
   (loop as x on (reverse syms)
         as i = (if (cadr x) `(1+ ,(cadr x))  1)
         as j downfrom n
         as u = `(loop as ,(car x) from ,i to ,j do)
         as v = `(,@u ,@body) then `(,@u ,v)
         finally (return v)))
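
For comparison, (do-combinations-n (a b c) 5 (print (list a b c)))
expands into roughly:

(loop as a from 1 to 3
      do (loop as b from (1+ a) to 4
               do (loop as c from (1+ b) to 5
                        do (print (list a b c)))))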
From: Bruce Hoult
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <bruce-965E4D.17322113092002@copper.ipg.tsnz.net>
In article <···························@posting.google.com>,
 ··········@mailandnews.com (Software Scavenger) wrote:

> Bruce Hoult <·····@hoult.org> wrote in message 
> news:<···························@copper.ipg.tsnz.net>...
> 
> > Interested?
> 
> Only if it doesn't take much of your time.  If I were planning to
> write a lot of code like this, variations of do-combinations and other
> macros that build nested loops, I would probably build some kind of
> foundation, something to build or implement the macros, such that each
> of them would look very simple and could be implemented very quickly.

Well, I already did it, although it may be able to be improved.  Still, 
as someone said today, it's worth a bit of ugliness behind the scenes in 
order to make an elegant interface.


> One variation of do-combinations that might be handy would be to
> iterate through combinations of numbers from 1 to N, instead of a list
> or vector.

I agree.  The following code allows that in addition to lists, vectors, 
strings etc by using Dylan's "range" collection type -- that is an 
arithmetic sequence, stored as start, end and increment values.

define macro do-combinations
  {do-combinations (?names:* in ?list:expression) ?:body end} =>
    {let list = ?list;
     let (init, limit, next-state, finished?,
          current-key, current-element,
          current-element-setter, copy-state)
       = forward-iteration-protocol(list);
     do-combinations-body
       (?names in list from init
          using limit next-state finished? current-element copy-state
          in ?body)}
end macro;

define macro do-combinations-body
  {do-combinations-body
     (?:name in ?list:name from ?init:expression
        using ?limit:name ?next-state:name ?finished?:name 
              ?current-element:name ?copy-state:name
        in ?:body)}
    => {for (state = ?init then ?next-state(?list, state),
             until: ?finished?(?list, state, ?limit))
          let ?name = ?current-element(?list, state);
          ?body;
       end};

  {do-combinations-body
     (?:name, ?names:* in ?list:name from ?init:expression
        using ?limit:name ?next-state:name ?finished?:name 
              ?current-element:name ?copy-state:name
        in ?:body)}
    => {for (state = ?init then ?next-state(?list, state),
             until: ?finished?(?list, state, ?limit))
          let ?name = ?current-element(?list, state);
          do-combinations-body
            (?names in ?list from
               ?next-state(?list, ?copy-state(?list, state))
               using ?limit ?next-state ?finished?
                     ?current-element ?copy-state
               in ?body)
        end};
end macro;


This uses Dylan's "forward-iteration-protocol" generic function.  Every 
collection class provides a method on this GF, and things such as map() 
and the for() loop use it.

This macro is used identically to the previous one e.g.


do-combinations (x,y,z in #("the", "quick", "brown", "fox", "jumps"))
  format-out("%= %= %=\n", x, y, z);
end;


do-combinations (x,y,z in "qwerty")
  format-out("%= %= %=\n", x, y, z);
end;


do-combinations (x,y in make(<range>, from: 10, below: 25, by: 3))
  format-out("%= %= %=\n", x, y, z);
end;

·····@k7:~/programs/dylan/combinations > ./combinations 
10 13
10 16
10 19
10 22
13 16
13 19
13 22
16 19
16 22
19 22


Questions?


> Here is a macro to implement that, this time using a loop instead of 
> recursion for building the code.  It's interesting to compare the two 
> styles and think about which is better and why.

You don't get the choice in Dylan.  Macros can only do recursion, not 
iteration.

>   Also note that if we don't have the vector version and don't want 
>   to bother with it, we can easily use this version for it by using 
>   the numbers to index the vector.  Of course your generic version 
>   would be the most convenient of all, if we used this kind of 
>   functionality often enough to make it worth the bother.

Indeed.

-- Bruce
From: Kaz Kylheku
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <cf333042.0209111130.620bf4e1@posting.google.com>
Oleg <············@myrealbox.com> wrote in message news:<············@newsmaster.cc.columbia.edu>...
> Software Scavenger wrote:
> 
> > Oleg <············@myrealbox.com> wrote in message
> > news:<············@newsmaster.cc.columbia.edu>...
> > 
> >> let silly_loop ?(increment = 1) ?(final_char = '\000') s =
> > 
> > I'm curious to know what do-combinations would look like in ocaml.
> > It works like this in lisp:
> > 
> > (do-combinations (a b c) '(1 2 3 4)
> >    (print (list a b c)))
> > and the output is:
> > (1 2 3)
> > (1 2 4)
> > (1 3 4)
> > (2 3 4)

> like this:
> 
> do_combinations 3 [1; 2; 3; 4]
> 
> which should return
> [[1; 2; 3]; [1; 2; 4]; [1; 3; 4]; [2; 3; 4]]

No, you don't get it. The object is not to return a list of the
combinations, but to bind some variables to each combination,
and then evaluate forms in a lexical environment in which those
bindings are visible.

Please don't change the problem specification to suit your language!

Do-combinations is in fact a control construct; a function that
returns combinations is not a control construct.

These are very different things. One trivial difference is that
you could do:

  (do-combinations (a b c) '(1 2 3 4)
    (when (.. some condition involving a b c)
      (return)))

In other words, stop generating combinations upon encountering
some condition.

The space of combinations could be huge, so that consing up a list
and then searching through it could be intractable.

I want to see an O'Caml construct written in O'Caml which 
binds the variables a b c to the combinations of 1 2 3 4,
and executes a body of arbitrary statements under those bindings.
From: Stephen J. Bevan
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <m3r8fywtgu.fsf@dino.dnsalias.com>
···@ashi.footprints.net (Kaz Kylheku) writes:
> I want to see an O'Caml construct written in O'Caml which 
> binds the variables a b c to the combinations of 1 2 3 4,
> and executes a body of arbitrary statements under those bindings.

Assuming you don't mind SML which is close enough so as not to make
any real difference ...

  fun listLazyFold (z, []) n c = n z
    | listLazyFold (z, (h::t)) n c =
        c (z, h, t, fn z => listLazyFold (z, t) n c)

  fun combs (n, l, z) f =
    let
      fun combs' (0, l, r, z, k) = 
            listLazyFold (z, r) k 
              (fn (z, h, t, k) => f (z, h::l, k))
        | combs' (n, l, r, z, k) =
            listLazyFold (z, r) k 
              (fn (z, h, t, k) => combs' (n-1, h::l, t, z, k))
    in
      combs' (n-1, [], l, z, fn id => id)
    end

The above calls f for each combination represented as a list (in
reverse order, throw a rev into the call to f if you don't like this)
and lets f decide whether to continue to the next combination (call
the continuation) or stop (don't call the continuation).

For example the following binds "a" and "b" (in reverse) to each
combination pair and adds them to the list of combinations ..

  - combs (2, ["heart", "spade", "diamond", "club"], [])
      (fn (cs, [b,a], k) => k ([a,b]::cs));
  = stdIn:18.6-18.40 Warning: match nonexhaustive
            (cs,b :: a :: nil,k) => ...

  val it =
    [["diamond","club"],["spade","club"],["spade","diamond"],["heart","club"],
     ["heart","diamond"],["heart","spade"]] : string list list
  - 

Here's an example in which only the first three combinations are
generated and returned as a list :-

  - combs (2, ["heart", "spade", "diamond", "club"], (0, []))
      (fn ((n, cs), [b,a], k) =>
        if (n < 3) then k (n+1, ([a,b]::cs)) else (0, cs));
  = stdIn:28.6-28.81 Warning: match nonexhaustive
            ((n,cs),b :: a :: nil,k) => ...
  
  = val it = (0,[["heart","club"],["heart","diamond"],["heart","spade"]])
    : int * string list list
  - 

To test that it did indeed stop after the first n I used :-

  fun iota n =
    let
      fun iota' (0, cs) = cs
        | iota' (n, cs) = iota' (n-1, n::cs)
    in
      iota' (n, [])
    end

  - val big = iota 1000000;
  val big = [1,2,3,4,5,6,7,8,9,10,11,12,...] : int list

to generate a big list, and then asked for the first three combinations :-

  - combs (2, big, (0, []))
      (fn ((n, cs), [b,a], k) =>
        if (n < 3) then k (n+1, ([a,b]::cs)) else (0, cs));
  = stdIn:30.6-30.81 Warning: match nonexhaustive
            ((n,cs),b :: a :: nil,k) => ...

  val it = (0,[[1,4],[1,3],[1,2]]) : int * int list list
  - 

This, as expected, returned quickly without generating all the combinations.
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <lksn0e5e8v.fsf@pc022.bln.elmeg.de>
·······@dino.dnsalias.com (Stephen J. Bevan) writes:

> ···@ashi.footprints.net (Kaz Kylheku) writes:
> > I want to see an O'Caml construct written in O'Caml which 
> > binds the variables a b c to the combinations of 1 2 3 4,
> > and executes a body of arbitrary statements under those bindings.
> 
> Assuming you don't mind SML which is close enough so as not to make
> any real difference ...
> 
>   fun listLazyFold (z, []) n c = n z
>     | listLazyFold (z, (h::t)) n c =
>         c (z, h, t, fn z => listLazyFold (z, t) n c)
> 
>   fun combs (n, l, z) f =
>     let
>       fun combs' (0, l, r, z, k) = 
>             listLazyFold (z, r) k 
>               (fn (z, h, t, k) => f (z, h::l, k))
>         | combs' (n, l, r, z, k) =
>             listLazyFold (z, r) k 
>               (fn (z, h, t, k) => combs' (n-1, h::l, t, z, k))
>     in
>       combs' (n-1, [], l, z, fn id => id)
>     end
> 
> The above calls f for each combination represented as a list (in
> reverse order, throw a rev into the call to f if you don't like this)
> and lets f decide whether to continue to the next combination (call
> the continuation) or stop (don't call the continuation).
> 
> For example the following binds "a" and "b" (in reverse) to each
> combination pair and adds them to the list of combinations ..
> 
>   - combs (2, ["heart", "spade", "diamond", "club"], [])
>       (fn (cs, [b,a], k) => k ([a,b]::cs));
>   = stdIn:18.6-18.40 Warning: match nonexhaustive
>             (cs,b :: a :: nil,k) => ...

And that's exactly why we like macros so much ;-)

Regards,
-- 
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Stephen J. Bevan
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <m38z25x5t0.fsf@dino.dnsalias.com>
Nils Goesche <······@cartan.de> writes:
> >   - combs (2, ["heart", "spade", "diamond", "club"], [])
> >       (fn (cs, [b,a], k) => k ([a,b]::cs));
> >   = stdIn:18.6-18.40 Warning: match nonexhaustive
> >             (cs,b :: a :: nil,k) => ...
> 
> And that's exactly why we like macros so much ;-)

What does "exactly" refer to?  Is the call :-

> >   - combs (2, ["heart", "spade", "diamond", "club"], [])
> >       (fn (cs, [b,a], k) => k ([a,b]::cs));

the warning from the compiler :-

> >   = stdIn:18.6-18.40 Warning: match nonexhaustive
> >             (cs,b :: a :: nil,k) => ...

or the whole code?
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <lkk7lp6etp.fsf@pc022.bln.elmeg.de>
·······@dino.dnsalias.com (Stephen J. Bevan) writes:

> Nils Goesche <······@cartan.de> writes:
> > >   - combs (2, ["heart", "spade", "diamond", "club"], [])
> > >       (fn (cs, [b,a], k) => k ([a,b]::cs));
> > >   = stdIn:18.6-18.40 Warning: match nonexhaustive
> > >             (cs,b :: a :: nil,k) => ...
> > 
> > And that's exactly why we like macros so much ;-)
> 
> What does "exactly" refer to?  Is the call :-
> 
> > >   - combs (2, ["heart", "spade", "diamond", "club"], [])
> > >       (fn (cs, [b,a], k) => k ([a,b]::cs));
> 
> the warning from the compiler :-
> 
> > >   = stdIn:18.6-18.40 Warning: match nonexhaustive
> > >             (cs,b :: a :: nil,k) => ...
> 
> or the whole code?

Everything, and especially the usage of combs.  I mean, look at

(do-combinations (a b c) '(1 2 3 4)
  (print a)
  (print b)
  (print c))

You could implement /something like/ do-combinations as a function, of
course, then its usage would look like

(do-combi-fun 3 '(1 2 3 4) (lambda (a b c)
                             (print a) (print b) (print c)))

which comes closer to the macro but, frankly, I still hate it
(especially the 3 in there).  In ML you'd probably already get into
trouble with the type checker if you tried to implement it that way.
Also, the macro version doesn't cons.
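
Such a do-combi-fun could be written along these lines (only a sketch,
not code taken from elsewhere in the thread; note that unlike the macro
it conses a fresh argument list for every combination):

(defun do-combi-fun (n list fn)
  ;; Call FN on every N-element combination of LIST, passing the
  ;; elements as separate arguments.
  (labels ((walk (n tail acc)
             (if (zerop n)
                 (apply fn (reverse acc))
                 (loop as (head . rest) on tail
                       do (walk (1- n) rest (cons head acc))))))
    (walk n list '())))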

In fact, ML had nothing to do with the whole question; the question
was what you can do with macros.  From the way it is used it is
absolutely clear that do-combinations can /not/ be a function.  How
many times do I have to spell this out?  (HintHint: a is possibly not
fbound, b and c are possibly not bound, and you probably do not want
to call do-combinations with the return value of PRINT) I have no idea
why Oleg thought he should give his examples in ML, apparently he
doesn't know that Lisp has higher order functions, too.

Regards,
-- 
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Stephen J. Bevan
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <m34rctx1kd.fsf@dino.dnsalias.com>
Nils Goesche <······@cartan.de> writes:
> Everything, and especially the usage of combs.  I mean, look at
> 
> (do-combinations (a b c) '(1 2 3 4)
>   (print a)
>   (print b)
>   (print c))
> 
> You could implement /something like/ do-combinations as a function, of
> course, then its usage would look like
> 
> (do-combi-fun 3 '(1 2 3 4) (lambda (a b c)
>                              (print a) (print b) (print c)))
> 

or, closer to the layout of the macro version :-

 (do-combi-fun 3 '(1 2 3 4)
   (lambda (a b c)
     (print a)
     (print b)
     (print c)))
 

> which comes closer to the macro but, frankly, I still hate it
> (especially the 3 in there).

Ok.


> In ML you'd probably already get into
> trouble with the type checker if you tried to implement it that way.

Indeed, that's why a, b, c have to be wrapped in a list in ML.
It is possible that in other typed functional languages the list would
not be necessary via the use of a polytypic function.  However, I'm
far behind in my reading so it is also possible that it would be of
no help in this situation.


> Also, the macro version doesn't cons.

Oh, you mean I'm not allowed to make the SCC appeal that would do away
with consing each combination :-)


> In fact, ML had nothing to do with the whole question; the question
> was what you can do with macros.  From the way it is used it is
> absolutely clear that do-combinations can /not/ be a function.  How
> many times do I have to spell this out?

I agree that do-combinations can not be a function as specified since
it clearly requires some form of syntactic abstraction.  What I think
Oleg was trying to show was that a solution using a HOF is visually
similar (not exactly the same, since it can't be).  Whether it is
similar enough to be practical is a judgement call.  Clearly in the
judgement of many (all?) here it is not similar enough.
From: Kaz Kylheku
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <cf333042.0209131323.4d80edd1@posting.google.com>
·······@dino.dnsalias.com (Stephen J. Bevan) wrote in message news:<··············@dino.dnsalias.com>...
> ···@ashi.footprints.net (Kaz Kylheku) writes:
> > I want to see an O'Caml construct written in O'Caml which 
> > binds the variables a b c to the combinations of 1 2 3 4,
> > and executes a body of arbitrary statements under those bindings.
> 
> Assuming you don't mind SML which is close enough so as not to make
> any real difference ...
> 
>   fun listLazyFold (z, []) n c = n z
>     | listLazyFold (z, (h::t)) n c =
>         c (z, h, t, fn z => listLazyFold (z, t) n c)
> 
>   fun combs (n, l, z) f =
>     let
>       fun combs' (0, l, r, z, k) = 
>             listLazyFold (z, r) k 
>               (fn (z, h, t, k) => f (z, h::l, k))
>         | combs' (n, l, r, z, k) =
>             listLazyFold (z, r) k 
>               (fn (z, h, t, k) => combs' (n-1, h::l, t, z, k))
>     in
>       combs' (n-1, [], l, z, fn id => id)
>     end
> 
> The above calls f for each combination represented as a list (in
> reverse order, throw a rev into the call to f if you don't like this)
> and lets f decide whether to continue to the next combination (call
> the continuation) or stop (don't call the continuation).
> 
> For example the following binds "a" and "b" (in reverse) to each
> combination pair and adds them to the list of combinations ..
> 
>   - combs (2, ["heart", "spade", "diamond", "club"], [])
>       (fn (cs, [b,a], k) => k ([a,b]::cs));

Holy carpal tunnel, Batman! But this isn't quite right; I wanted an
example where variables are bound for you, not where you open code it
yourself.

There is a gaping lack of abstraction here; the user knows that combs
generates the combinations in a certain form and then pattern matches
on it to do the destructuring.

I would like to see some utterance which is comprised only of the
constituents that are relevant to the user:
- the inevitable symbol which identifies the language construct;
- the list from which the combinations are drawn;
- the variables which are to be bound to the combination elements; and
- the user's list of expressions, evaluated in the scope of the
variables.
From: Stephen J. Bevan
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <m3u1ktuwtz.fsf@dino.dnsalias.com>
···@ashi.footprints.net (Kaz Kylheku) writes:
> >   - combs (2, ["heart", "spade", "diamond", "club"], [])
> >       (fn (cs, [b,a], k) => k ([a,b]::cs));
> 
> Holy carpal tunnel, Batman! But this isn't quite right; I wanted an
> example where variables are bound for you, not where you open code it
> yourself.
>
> There is a gaping lack of abstraction here; the user knows that combs
> generates the combinations in a certain form and then pattern matches
> on it to do the destructuring.
>
> I would like to see some utterance which is comprised only of the
> constituents that are relevant to the user:
> - the inevitable symbol which identifies the language construct;
> - the list from which the combinations are drawn;
> - the variables which are to be bound to the combination elements; and
> - the user's list of expressions, evaluated in the scope of the
> variables.

Switching from *ML (since this can't be typed in *ML) then the best I
could do as a HOF would be :-

   (combs 2 '(heart spade diamond club) '()
     (lambda (k cs a b)
       (funcall k (cons (list a b) cs))))

or if we remove the accumulator and instead use a side effect :-

   (let ((cs '()))
     (combs 2 '(heart spade diamond club)
       (lambda (k a b)
         (funcall k (push (list a b) cs)))))

However, other than the fact that the list generated by combs is
passed to apply so that it is automatically destructured there isn't
much of a difference between it and the ML version and so probably
still not acceptable.  That is, one still has to indicate the number of
combinations (2 in this case), a lambda is still required to wrap the
user's list of expressions, and one still has to explicitly call 'k'
to move to the next combination.
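
For completeness, a CL definition of combs matching the first
(accumulator-passing) call above might look like this; it is only a
sketch following the same conventions as the SML version: F receives
the continuation, the accumulated value and the combination's elements,
and combs returns whatever F returns once it stops calling K.

(defun combs (n list z f)
  ;; Walk the tails of LIST, building each N-combination in ACC.  When
  ;; a combination is complete, hand F the continuation K, the
  ;; accumulated value Z and the combination's elements; F either calls
  ;; K with a new accumulator to continue, or returns a value to stop.
  (labels ((walk (n tail acc z k)
             (cond ((zerop n) (apply f k z (reverse acc)))
                   ((null tail) (funcall k z))
                   (t (walk (1- n) (cdr tail) (cons (car tail) acc) z
                            (lambda (z)
                              (walk n (cdr tail) acc z k)))))))
    (walk n list '() z #'identity)))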
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <87ptvh2pas.fsf@darkstar.cartan>
·······@dino.dnsalias.com (Stephen J. Bevan) writes:

> ···@ashi.footprints.net (Kaz Kylheku) writes:
> > >   - combs (2, ["heart", "spade", "diamond", "club"], [])
> > >       (fn (cs, [b,a], k) => k ([a,b]::cs));
> > 
> > Holy carpal tunnel, Batman! But this isn't quite right; I
> > wanted an example where variables are bound for you, not
> > where you open code it yourself.
> >
> Switching from *ML (since this can't be typed in *ML) then the
> best I could do as a HOF would be :-
> 
>    (combs 2 '(heart spade diamond club) '()
>      (lambda (k cs a b)
>        (funcall k (cons (list a b) cs))))

Why do you insist on using the continuation?  I thought we'd
agreed already on a functional signature as close as possible to
the macro version.  Usage should look like this:

  (do-combis-in-a-func 2 '(heart spade diamond club)
    (lambda (a b)
      <do-something-with-a-and-b>))

Ok, it still sucks, but by far not as much as the continuation
version, right? ;-)

Regards,
-- 
Nils Goesche
Ask not for whom the <CONTROL-G> tolls.

PGP key ID #xD26EF2A0
From: Stephen J. Bevan
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <m3bs70v6fq.fsf@dino.dnsalias.com>
Nils Goesche <···@cartan.de> writes:
> Why do you insist on using the continuation?  I thought we'd
> agreed already on a functional signature as close as possible to
> the macro version.  Usage should look like this:
> 
>   (do-combis-in-a-func 2 '(heart spade diamond club)
>     (lambda (a b)
>       <do-something-with-a-and-b>))
> 
> Ok, it still sucks, but by far not as much as the continuation
> version, right? ;-)

One of Kaz's examples required the ability to exit after considering
only a finite number of combinations.  In a language with HOFs but not
necessarily any non-local escape mechanism then a continuation is a
way to achieve this.  If the language has a non-local escape mechanism
(e.g. catch+throw, raise+handle) then that could be used instead.  If
you prefer that version over the continuation version then that's
fine.  I didn't originally use it since a) it adds another piece of
machinery that is not in the macro version and b) using non-local
control transfers for situations you expect to happen is a contentious
topic so I attempted to avoid it.
From: Kaz Kylheku
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <cf333042.0209141711.31213e86@posting.google.com>
·······@dino.dnsalias.com (Stephen J. Bevan) wrote in message news:<··············@dino.dnsalias.com>...
> ···@ashi.footprints.net (Kaz Kylheku) writes:
> > >   - combs (2, ["heart", "spade", "diamond", "club"], [])
> > >       (fn (cs, [b,a], k) => k ([a,b]::cs));
> > 
> > Holy carpal tunnel, Batman! But this isn't quite right; I wanted an
> > example where variables are bound for you, not where you open code it
> > yourself.
> >
> > There is a gaping lack of abstraction here; the user knows that combs
> > generates the combinations in a certain form and then pattern matches
> > on it to do the destructuring.
> >
> > I would like to see some utterance which is comprised only of the
> > constituents that are relevant to the user:
> > - the inevitable symbol which identifies the language construct;
> > - the list from which the combinations are drawn;
> > - the variables which are to be bound to the combination elements; and
> > - the user's list of expressions, evaluated in the scope of the
> > variables.
> 
> Switching from *ML (since this can't be typed in *ML) then the best I
> could do as a HOF would be :-
> 
>    (combs 2 '(heart spade diamond club) '()
>      (lambda (k cs a b)
>        (funcall k (cons (list a b) cs))))

Exactly; and since that's the best you could do as HOF, and it's not
good enough, you would write a macro. The macro would eliminate the
irrelevant, distracting details, like having to explicitly specify
that you want the combinations to have 2 elements, and so forth. It
should be obvious by counting the number of variables that the user
specified that pairs are needed, triplets or whatever.

The point is that even a Lisp newbie who knows nothing about closures
or higher order functions can be taught how to use DO-COMBINATIONS!
Macros let you write custom language features that are useable to
someone who is knowledgeable in some problem domain, but not
necessarily Lisp. The macro allows one to work *exclusively* with the
ideal, abstract objects of the problem domain---the values from the
list being taken into combinations---through symbols that refer to
these objects.

Higher order functions don't hide the mechanism. The user has to
understand how to fit the pieces together; to know that a function
must be written with various parameters, some of which don't refer to
anything having to do with the problem domain, etc.
From: Kaz Kylheku
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <cf333042.0209111116.60b1815b@posting.google.com>
Oleg <············@myrealbox.com> wrote in message news:<············@newsmaster.cc.columbia.edu>...
> Matthew Danish wrote:
> 
> > 
> > If you want to have some fun, why not write a nice higher-order function
> > to do:
> > 
> > (defun silly-loop (string &optional (increment 1) (final-char nil))
> > (loop for n from 0 by increment
> > for char across string
> > until (eql char final-char)
> > collect char into char-bag
> > sum n into sum
> > finally (return (values char-bag sum n))))
> > 
> > Try to make it half as readable.  And as efficient.
> 
> let silly_loop ?(increment = 1) ?(final_char = '\000') s = 
>     let sum = ref 0 and i = ref 0 and char_bag = ref [] in
>     let _ = try while true do
>                     char_bag := s.[!i] :: !char_bag;
>                     sum := !sum + !i;
>                     if List.hd !char_bag = final_char then raise Exit;
>                     i := !i + increment;
>                 done

In Lisp, any special form can, in principle, be implemented by a
macro. If someone took away your DO loop, you could write it from
scratch.
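
For instance, a bare-bones WHILE (not a standard CL operator; just a
sketch of the point) takes a couple of lines:

(defmacro while (test &body body)
  ;; Re-evaluate TEST before each pass; run BODY until TEST goes false.
  `(loop (unless ,test (return))
         ,@body))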

Suppose that the construct ``while true do ... done'' did not exist
in O'Caml. How would you implement it, such that exactly the same
syntax is supported (like the above example, for instance).
Can you do it entirely with higher order functions? I'd like
to see how.
From: Nils Goesche
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <lkznuoxr05.fsf@pc022.bln.elmeg.de>
···@ashi.footprints.net (Kaz Kylheku) writes:

> In Lisp, any special form can, in principle, be implemented by a
> macro. If someone took away your DO loop, you could write it from
> scratch.

That's because DO is in fact a macro.  The special forms are rather
the exceptions to what you mean :-) See

  3.1.2.1.2.1 Special Forms

in the HyperSpec, and ``special form'' in the Glossary.

You can also get a list by evaluating

(do-external-symbols (sym "CL")
  (when (special-operator-p sym)
    (print sym)))

Regards,
-- 
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Kaz Kylheku
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <cf333042.0209131326.1ddf9851@posting.google.com>
···@ashi.footprints.net (Kaz Kylheku) wrote in message news:<····························@posting.google.com>...
> Suppose that the construct ``while true do ... done'' did not exist
> in O'Caml. How would you implement it, such that exactly the same
> syntax is supported (like the above example, for instance).
> Can you do it entirely with higher order functions? I'd like
> to see how.

Still waiting! :)
From: Stephen J. Bevan
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <m3y9a91xrk.fsf@dino.dnsalias.com>
Matthew Danish <·······@andrew.cmu.edu> writes:
> Considering that Common Lisp can also express the same higher-order
> function, perhaps you should consider why the LOOP macro is used.

Presumably because some (many?) people find that it better expresses
a solution to their problem.  Reasonable people can disagree about
whether it goes too far or alternately goes far enough.


> If you want to have some fun, why not write a nice higher-order function
> to do:
> 
> (defun silly-loop (string &optional (increment 1) (final-char nil))
>   (loop for n from 0 by increment 
>         for char across string
>         until (eql char final-char)
>         collect char into char-bag
>         sum n into sum
>         finally (return (values char-bag sum n))))
> 
> Try to make it half as readable.  And as efficient.

I'm not sure whether the following qualifies as being half as readable;
the efficiency would depend on whether the compiler will inline
the higher order function, and I've ignored the optionality of some of
the arguments, so with those provisos here's some SML (since I would
use loop in Common Lisp) :-

  fun sillyLoop (string, increment, finalChar) =
    stringFoldr (([], 0, 0), string)
     (fn ((charBag, sum, n), char, next) =>
       if char = finalChar
       then (charBag, sum, n)
       else next (char::charBag, sum+n, n+increment))

This relies on the following auxiliary (higher-order) function :-

  fun stringFoldr (z, s) f =
    let
      val l = String.size s;
      fun loop i z =
        if i = l
        then z
        else f (z, String.sub (s, i), loop (i+1))
    in
      loop 0 z
    end

This isn't part of SML but it is in my personal collection of string
utility functions.
From: Alexey Dejneka
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <m3bs76a58b.fsf@comail.ru>
Oleg <············@myrealbox.com> writes:

> I'm not an expert in macros, but IIRC I've seen an example of a "loop" 
> macro in Lisp that was used to demonstrate their usefulness. I'm not sure I 
> understand how using macros is any better than using higher-order functions 
> (HOFs) though.

Camlyacc is a stand-alone preprocessor. Zebu and Meta are Lisp
libraries; you can mix parsers written with them with ordinary Lisp
code.

-- 
Regards,
Alexey Dejneka
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <aljs3b$fc7$1@newsmaster.cc.columbia.edu>
Alexey Dejneka wrote:

> Oleg <············@myrealbox.com> writes:
> 
>> I'm not an expert in macros, but IIRC I've seen an example of a "loop"
>> macro in Lisp that was used to demonstrate their usefulness. I'm not sure
>> I understand how using macros is any better than using higher-order
>> functions (HOFs) though.
> 
> Camlyacc is a stand-alone preprocessor. Zebu and Meta are lisp
> libraries, you can mix parsers written with them with ordinary Lisp
> code.

Earlier in this thread, I was talking about Camlp4 (which is AFAIK 
different from camlyacc), and in this particular posting I was asking 
whether [Lisp] macros give you anything [O'Caml] HOFs don't [1]

As to changing the syntax in the same "compilation unit" (actually toplevel 
session), I vaguely remember doing it while tinkering with Camlp4 and 
following instructions in the tutorial. Interested parties should RTFM or 
ask in ocaml-list.

Cheers,
Oleg

[1] HOFs have nothing to do with Camlp4, camlyacc, etc.
From: Marco Antoniotti
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <y6clm69c29k.fsf@octagon.mrl.nyu.edu>
Oleg <············@myrealbox.com> writes:

> [1] HOFs have nothing to do with Camlp4, camlyacc, etc.

What is HOF?  (Note that I know about Camlp4)

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group        tel. +1 - 212 - 998 3488
715 Broadway 10th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA                 http://bioinformatics.cat.nyu.edu
                    "Hello New York! We'll do what we can!"
                           Bill Murray in `Ghostbusters'.
From: Oleg
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <all1uu$c9k$1@newsmaster.cc.columbia.edu>
Marco Antoniotti wrote:

> 
> Oleg <············@myrealbox.com> writes:
> 
>> [1] HOFs have nothing to do with Camlp4, camlyacc, etc.
> 
> What is HOF?  (Note that I know about Camlp4)

Higher-order function (a function that takes and/or returns functions).

Cheers
Oleg
From: Immanuel Litzroth
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <m2wuptn9w5.fsf@enfocus.be>
>>>>> "Oleg" == Oleg  <············@myrealbox.com> writes:

    Oleg> Dave Bakhash wrote:
    >> Syntax has a lot to do with the Lisp
    >> language.  Non-Lisp programmers under-estimate the
    >> importance because they program in other languages whose
    >> syntaxes are only minor variations of one another, and don't
    >> buy the programmer very
    >> much.  For CL programmers, the syntax, built-in
    >> runtime reader and evaluator, and more are what make it so
    >> usable to
    >> them.  That's why they "obsess" over it.

There is an interesting paper on Simon Peyton Jones's website:
"Template Meta-programming in Haskell".
Immanuel
From: Marco Antoniotti
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <y6c4rcxbt15.fsf@octagon.mrl.nyu.edu>
Immanuel Litzroth <·········@enfocus.be> writes:

> >>>>> "Oleg" == Oleg  <············@myrealbox.com> writes:
> 
>     Oleg> Dave Bakhash wrote:
>     >> Syntax has a lot to do with the Lisp
>     >> language.  Non-Lisp programmers under-estimate the
>     >> importance because they program in other languages whose
>     >> syntaxes are only minor variations of one another, and don't
>     >> buy the programmer very
>     >> much.  For CL programmers, the syntax, built-in
>     >> runtime reader and evaluator, and more are what make it so
>     >> usable to
>     >> them.  That's why they "obsess" over it.
> 
> There is an interesting paper on Simon Peyton Jones website:
> "Template Meta-programming in Haskell".

Isn't this something like the earliest "Aspect Oriented Programming"
work done by Kiczales in Common Lisp (or am I missing something?)

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group        tel. +1 - 212 - 998 3488
715 Broadway 10th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA                 http://bioinformatics.cat.nyu.edu
                    "Hello New York! We'll do what we can!"
                           Bill Murray in `Ghostbusters'.
From: Kaz Kylheku
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <cf333042.0209111111.12642147@posting.google.com>
Oleg <············@myrealbox.com> wrote in message news:<············@newsmaster.cc.columbia.edu>...
> I'm not an expert in macros, but IIRC I've seen an example of a "loop" 
> macro in Lisp that was used to demonstrate their usefulness. I'm not sure I 
> understand how using macros is any better than using higher-order functions 
> (HOFs) though.
> 
> E.g. to write a loop construct that increments its arument by 2 instead of 
> 1, in O'Caml, I would write
> 
> let rec loop2 start finish f = 
>   if finish < start then () else (f start; loop2 (start + 2) finish f)

By the way, is ``if'' a function in O'Caml?

> which is probably close to what one could do with HOFs in Lisp. No need for 
> macros. Now
> 
> loop2 1 9 print_int
>
> will print 13579. I guess the old saying that "if you don't know it, you 
> won't miss it" probably applies to me and Lisp macros here.

What you are missing is that macros open-code; that is, generate
code in place. That code can do things like create environments in
which bindings are established, or control the evaluation of
constituent forms. In fact, a macro can parse the raw form which
invoked it, and emit an arbitrary translation.

What if your loop construct is to declare a loop variable, which is
visible to the constituent form? There is a big difference between
what you did above and, say:

   (dotimes (i 10) (print i))

dotimes is a language feature which causes the variable i to be
visible to the form (print i). It evaluates that form ten times in
its lexically apparent environment, successively binding the
variable i to the values 0 through 9. Successively calling a
function with the arguments 0 through 9 is not the same thing.
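
To make the contrast concrete, a dotimes-like macro (a hypothetical
sketch, not how any particular implementation defines DOTIMES) simply
expands into code in which the variable really is bound:

(defmacro my-dotimes ((var count) &body body)
  ;; (my-dotimes (i 10) (print i)) turns into a loop in which I is a
  ;; genuine lexical variable visible to the body forms.
  `(loop for ,var from 0 below ,count
         do (progn ,@body)))

A higher-order function, by contrast, can only receive the
already-written (lambda (i) (print i)) and call it.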

Higher order functions do not replace macros; they do not have the
freedom to arbitrarily interpret the meaning of a piece of the
program. They are subject to evaluation rules which prevent access
to an unevaluated argument expression. If you call a function with
the argument i, it cannot, for instance, be interpreted as the name
of a variable to bind. At best, i can be lazily evaluated, at the
latest possible time before the value of that parameter is required,
but that is not the same thing as having control over evaluation.
From: Vassil Nikolov
Subject: Re: macros vs HOFs (was: O'Caml)
Date: 
Message-ID: <f34a0f4f.0209152335.7710dc3b@posting.google.com>
[on comparing the use of macros vs. the use of HOFs only]

(1) Can I ask for another example, namely, how would
    WITH-OUTPUT-TO-STRING look implemented with HOFs?

    Specifically, consider this simplified variant: if the
    expression (... OUT ...) prints the characters "foo" to the OUT
    output stream, then

      (with-output-to-string-simplified (out) (... out ...)) => "foo"

    It could be implemented essentially as:

      (defmacro with-output-to-string-simplified (var form)
        `(let ((,var (make-string-output-stream)))
           ,form
           (prog1 (get-output-stream-string ,var)
                  (close ,var))))

    (`essentially' means we ignore error handling, ensuring that
    the underlying string output stream is closed, etc., for
    simplicity).

    How would that look with HOFs?  (One possible shape is sketched below.)


(2) In a rather theoretical, intuitive, and fuzzy way, the
    fundamental difference between using macros and using HOFs
    lies, I think, in that with HOFs there is just a single
    stage in the computation---rewriting the data---while with
    macros there are two stages---rewriting the code and then
    rewriting the data.  (Considering computation as rewriting.)
    While these two approaches are equivalent in the sense of
    being Turing-complete, they are not equivalent in a more
    pragmatic sense that has to do with designing, writing,
    understanding, and maintaining programs.

    My 1e-2, I guess.
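
Returning to (1), one plausible HOF counterpart (only a sketch, with
the same simplifications as above) takes the body as a function of the
stream:

  (defun call-with-output-to-string-simplified (fn)
    (let ((out (make-string-output-stream)))
      (funcall fn out)
      (prog1 (get-output-stream-string out)
             (close out))))

  (call-with-output-to-string-simplified
    (lambda (out) (... out ...)))  => "foo"

The body must now be wrapped in an explicit LAMBDA, and OUT is an
ordinary parameter of that LAMBDA rather than a variable introduced by
the form itself.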

---Vassil.
From: synthespian
Subject: Re: OT: O'Caml (was: LISP - When you've seen it, what else can impress?)
Date: 
Message-ID: <pan.2002.09.13.23.52.34.934551.29061@debian-rs.org>
On Mon, 09 Sep 2002 16:40:30 -0300, Dave Bakhash wrote:

> Oleg <············@myrealbox.com> writes:
> 
>> > Must i generate such a module to add a syntax?  Or can i change the
>> > syntax directly in a standard-program on the fly?
>> 
>> Possibly. I use standard syntax. What is the nature of your obsession
>> with syntax?
> 
> Syntax has a lot to do with the Lisp language.  Non-Lisp programmers
> under-estimate the importance because they program in other languages
> whose syntaxes are only minor variations of one another, and don't buy
> the programmer very much.  For CL programmers, the syntax, built-in
> runtime reader and evaluator, and more are what make it so usable to
> them.  That's why they "obsess" over it.
> 
> dave

Yeah, you bet. It seems syntax in other programming languages is just a
matter of "taste", which is a braindead criterium. People just don't get the
rationale behind the syntax, and they usually come from the Computer
Science course...I'm a medical student, and I have had the exquisite
taste of hearing a lot of shit from the CS students regarding Lisp
(I remember one in particular who just freaked out when I told him about
OOP and *hash tables* in CL)...

And LISP still looks "avant-garde" decades later...:-) Did it look
"avant-garde" in the 50's ? I guess so, computers were avant-garde, now
their (almost) dirt cheap (hmmm, not where *I* live..)

And I don't feel guilty about my useless comment because this is one of
those looong threads in c.l.l. At least here I don't go screaming "why
don't you use Lisp and stop forcing C on me!!!!" in the hall (well, for
one thing, there is no hall). ;-)

Cheers
synthespian

_________________________________________________________________
Micro$oft-Free Human         100% Debian GNU/Linux
     KMFMS              "Bring the genome to the people!
www.debian.org - www.debian-br.cipsga.org.br - www.debian-rs.org
From: sv0f
Subject: Re: OT: O'Caml (was: LISP - When you've seen it, what else can impress?)
Date: 
Message-ID: <none-1709021302150001@129.59.212.53>
In article <····································@debian-rs.org>,
synthespian <···········@debian-rs.org> wrote:

>And LISP still looks "avant-garde" decades later...:-) Did it look
>"avant-garde" in the 50's ? I guess so, computers were avant-garde, now
>their (almost) dirt cheap (hmmm, not where *I* live..)

"Avant-garde"  I think you nailed it on the head.  Lisp looks like
an attempt from the distant past to anticipate the sleekness of
future design.  Alas, the sleekness never materialized, leaving us
syntactic abominations like C++ and Perl.  So Lisp both looks old
and hints at the future that might still be waiting for us.

(Forgive me, I spent Saturday and Sunday in an Art Deco post office
that's been converted into an art museum.  It was hosting a show on
the painting, sculpture, and graphic and industrial design of the
1940s and 1950s.  I know now that all that was missing was the Lisp
1.5 manual!)
From: ilias
Subject: Re: OT: O'Caml (was: LISP - When you've seen it, what else can impress?)
Date: 
Message-ID: <3D7CE87C.3090809@pontos.net>
Oleg wrote:
> ilias wrote:
> 
>>i'm quoting uncontrolled:
>>
>>>Of course O'Caml programs can generate O'Caml code
>>
>>can you give me an example please?

your example looked like a marketing gag.
have you a real-life example? With code?

comparison: very simple CL

(defmacro dummy_m (var1 var2) `(+ (* ,var1 10) (* ,var2 20)))
(defun    dummy_f (var1 var2)  (+ (*  var1 10) (*  var2 20)))

call of macro   : (dummy_m 1 2)
call of function: (dummy_f 1 2)

(shows that syntax is the same. macro makes no sense here)

macros can be nested.
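
for illustration, a quick REPL sketch of the code the macro generates
(using the definitions above):

(macroexpand-1 '(dummy_m 1 2))   ; => (+ (* 1 10) (* 2 20)), the generated code
(dummy_m 1 2)                    ; => 50, once that code is evaluated
(dummy_f 1 2)                    ; => 50, computed directly by the function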

>>Can make syntax-modules myself?
> 
> Yes.  http://caml.inria.fr/camlp4/manual/manual001.html
> 
>>Must i generate such a module to add a syntax?
>>Or can i change the syntax directly in a standard-program on the fly?

open question.

looks like no.

this might be good, to prevent programmers from making big disasters.

but i want to make disasters.

it looks like O'Caml cannot beat CL freedom.
From: Oleg
Subject: Re: OT: O'Caml (was: LISP - When you've seen it, what else can impress?)
Date: 
Message-ID: <aliqoj$kte$1@newsmaster.cc.columbia.edu>
ilias wrote:

> have you a real-life example? With code?

An example of WHAT?! You wanted an example of a program that generates 
code, I gave you the simplest one there is.
 
> comparison: very simple CL
> 
> (defmacro dummy_m (var1 var2) `(+ (* ,var1 10) (* ,var2 20)))
> (defun    dummy_f (var1 var2)  (+ (*  var1 10) (*  var2 20)))

let dummy_f var1 var2 = var1 * 10 + var2 * 20

> call of macro   : (dummy_m 1 2)
> call of function: (dummy_f 1 2)

dummy_f 1 2

> (shows that syntax is the same. macro makes no sense here)

If you want to change syntax, RTFM. I posted a direct link.

Oleg
From: ilias
Subject: Re: OT: O'Caml (was: LISP - When you've seen it, what else can impress?)
Date: 
Message-ID: <3D7CF6EB.1080702@pontos.net>
Oleg wrote:
> ilias wrote:
> 
> 
>>have you a real-life example? With code?
> 
> 
> An example of WHAT?! You wanted an example of a program that generates 
> code, I gave you the simplest one there is.
>  
> 
>>comparison: very simple CL
>>
>>(defmacro dummy_m (var1 var2) `(+ (* ,var1 10) (* ,var2 20)))
>>(defun    dummy_f (var1 var2)  (+ (*  var1 10) (*  var2 20)))
> 
> 
> let dummy_f var1 var2 = var1 * 10 + var2 * 20

this is the function.

where is the equivalent macro (code generator)?

>>call of macro   : (dummy_m 1 2)
>>call of function: (dummy_f 1 2)
> 
> dummy_f 1 2

again, where is the equivalent macro (code generator)?

>>(shows that syntax is the same. macro makes no sense here)

sorry, misleading! i meant:
macro makes no sense here (one can and *should* use the function). It's
only here for comparison reasons, to show the nearly identical syntax.

> If you want to change syntax RTFM. I posted a direct link.

yes. for the tables.
http://caml.inria.fr/camlp4/manual/manual001.html


you can show me (and everyone who's watching) the way of altering the 
syntax at program runtime. you know the language.

otherwise (for now) this becomes valid for me:

> Must i generate such a module to add a syntax?
> Or can i change the syntax directly in a standard-program on the fly?
> 
> open question.
> 
> looks like no.
> 
> this might be good, to prevent programmers from making big disasters.
> 
> but i want to make disasters.
> 
> it looks like O'Caml cannot beat CL freedom.
From: ilias
Subject: Re: OT: O'Caml (was: LISP - When you've seen it, what else can impress?)
Date: 
Message-ID: <alobro$gf1$1@usenet.otenet.gr>
can we continue where we left off?
From: Matthew Danish
Subject: Re: OT: O'Caml (was: LISP - When you've seen it, what else can impress?)
Date: 
Message-ID: <20020910031401.A23781@lain.res.cmu.edu>
On Mon, Sep 09, 2002 at 01:15:58PM -0400, Oleg wrote:
> Of course O'Caml programs can generate O'Caml code, and I don't see why it 
> should be any harder than in Lisp. Just like Lisp, O'Caml programs are 

Unlike Lisp, O'Caml programs aren't structured data.

> expressions, and you can even have prefix arithmetic operators: you can 
> write "(+) 5 7" or "((+) 5 7)" instead of "5 + 7", and also "(f (g (h x)))" 
> instead of "f (g (h x))" if you insist. Speaking of Lisp syntax in O'Caml, 
> I think O'Caml standard distribution even has a module for Lisp syntax that 
> lets it understand things like "(if a b c)" as "if a then b else c", but I 
> doubt that many people use it.

It amazes me that you could read a Lisp newsgroup, and presumably be
somewhat familiar with Lisp, and yet be so short-sighted as to be blind
to all but the first-level syntax.

You do know that there's more to Lisp syntax than prefix arithmetic
operators?  Or the ubiquitous use of parentheses?  Lisp syntax in a
language lacking the power to exploit it is a waste of time (although it
might make source code mungers easier to write).

> I personally never had much use for programs in language X that generate 
> programs in language X, especially if language X has higher-order 
> functions, but maybe others can't live wihtout it.

C programmers get by without higher-order functions. (somehow)

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Rob Warnock
Subject: Re: OT: O'Caml (was: LISP - When you've seen it, what else can impress?)
Date: 
Message-ID: <uns44bbhh26id6@corp.supernews.com>
Matthew Danish  <·······@andrew.cmu.edu> wrote:
+---------------
| > I personally never had much use for programs in language X that generate 
| > programs in language X, especially if language X has higher-order 
| > functions, but maybe others can't live wihtout it.
| 
| C programmers get by without higher-order functions. (somehow)
+---------------

C supports higher-order functions. In fact, they're used quite
heavily in the Unix kernel code (well, at least in Irix, though
AFAIK the code I'm thinking of also exists in BSD) to do stuff
like MzScheme's "hash-table-for-each" or CL's "maphash", not to
mention the ubiquitous use in GUIs of functions which accept other
"callback" functions as arguments.

What C doesn't have built-in is *closures*! As a result, HOFs in C
almost universally resort to faking it with a separate argument to
the HOF which is an opaque cookie to be passed to one of the other
arguments, itself a function. So instead of CL's "maphash" signatures:

    (maphash function hash-table) => nil
      where: (function key value) => nil

you end up with something like this:

    void
    walkhash (ht_t table,
	      void (*fcn)(ht_t table, void *cookie, void *key, void *value),
	      void *cookie);

where the "walkhash" HOF calls the argument function with not only
the key/value pair, but actually *two* (in this case) pieces of data
that would in Scheme or Lisp normally be closed-over lexicals. [Two,
since otherwise the caller would have to allocate some memory to pack
them into, to get a single handle.]

Note that with a lot of hackery on the cookie (i.e., effectively
making the cookie be a dynamically-allocated "environment" frame)
you *can* even get the effect of nested closures or closures over
larger numbers of variables, but it's ugly.
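
For contrast, a quick sketch of the Lisp side, where the closed-over
lexical plays exactly the role the cookie is faking:

    ;; The "cookie" is just a lexical variable captured by the callback.
    (let ((table (make-hash-table))
          (total 0))
      (setf (gethash :a table) 1
            (gethash :b table) 2)
      (maphash (lambda (key value)
                 (declare (ignore key))
                 (incf total value))   ; TOTAL is a closed-over lexical
               table)
      total)                           ; => 3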


-Rob

-----
Rob Warnock, PP-ASEL-IA		<····@rpw3.org>
627 26th Avenue			<URL:http://www.rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Matthew Danish
Subject: Re: OT: O'Caml (was: LISP - When you've seen it, what else can impress?)
Date: 
Message-ID: <20020911033250.G23781@lain.res.cmu.edu>
On Tue, Sep 10, 2002 at 03:32:27PM -0000, Rob Warnock wrote:
> Matthew Danish  <·······@andrew.cmu.edu> wrote:
> +---------------
> | > I personally never had much use for programs in language X that generate 
> | > programs in language X, especially if language X has higher-order 
> | > functions, but maybe others can't live wihtout it.
> | 
> | C programmers get by without higher-order functions. (somehow)
> +---------------
> 
> C supports higher-order functions. In fact, they're used quite

You are completely correct, and I neglected to mention closures.
However, C function pointers are such a pain in the ass that I regard
them as a somewhat lower form of higher-order functions :-) Due to their
painfulness, they tend to be used much less by C programmers than by
functional language programmers.  Also, as you point out, they are much
less useful without closures.  So in a sense, "C programmers get by
without higher-order functions" in many situations where they would be
useful.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Rob Warnock
Subject: Re: OT: O'Caml (was: LISP - When you've seen it, what else can impress?)
Date: 
Message-ID: <unusn5f223ppef@corp.supernews.com>
Matthew Danish  <·······@andrew.cmu.edu> wrote:
+---------------
| However C function pointers are such a pain in the ass, that I regard
| them as a somewhat lower form of higher-order functions :-) Due to their
| painfulness, they tend to be used much less by C programmers than a
| functional language programmer.  Also, as you point out, they are much
| less useful without closures.  So in a sense, "C programmers get by
| without higher-order functions" in many situations where they would be
| useful.
+---------------

No argument.

However, it's worth also pointing out that once one has had significant
exposure to (quasi-)functional languages (CL & Scheme count here), one
may find *much* greater use of higher-order functions and perhaps even
explicit closures (manually managed, or with a small auxiliary lib) popping
up in one's C code, with beneficial effects on coding time and correctness.

Heavens! I've even found myself occasionally using HOFs in *firmware*!! ;-}


-Rob

-----
Rob Warnock, PP-ASEL-IA		<····@rpw3.org>
627 26th Avenue			<URL:http://www.rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: ilias
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <ao4avp$f2m$8@usenet.otenet.gr>
follow-up thread:

·················································@newsmaster.cc.columbia.edu
From: Matthew Danish
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <20020910032529.B23781@lain.res.cmu.edu>
On Mon, Sep 09, 2002 at 12:00:32PM -0400, Oleg wrote:
> ilias wrote:
> 
> [...]
> > Which language should i take a look on?
> 
> ML, more specifically its O'Caml dialect. It's features include:
> 
> a) A langauge and an implementation in one (from the practical point of 
> view, this is an advantage, but see point "d"). Here I list features of 
> various importance for both.
> f) Type inference (no need to declare types in function definitions)
> g) Very strict type system: no overloading and no implicit conversions; 
> even printf is statically type checked!
[omitted many things that can be found on web sites]

a) From the practical point of view, this is an advantage...
   to the implementors.  But not to the users, who cannot be guaranteed
   a stable language even, who cannot be guaranteed that the latest
   whims of the compiler team won't break their code, who have no
   recourse in case of such breakage except to "go fork off".

f,g) Have you tried evaluating this, lately?: let rec f () = f;;

     But that's unfair, really.  It's good to experience a static type
     system; one never knows true freedom until it has been lost, after
     all. =)

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Oleg
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <alk7sm$n02$1@newsmaster.cc.columbia.edu>
Matthew Danish wrote:

> a) From the practical point of view, this is an advantage...
> to the implementors.  But not to the users, who cannot be guaranteed
> a stable language even, who cannot be guaranteed that the latest
> whims of the compiler team won't break their code, who have no
> recourse in case of such breakage except to "go fork off".

I think the core ML of O'Caml is reasonably stable. AFAIK most _users_ use core 
ML (the stable features), while the language developers tinker with things like 
OO and polymorphic constructors.

> f,g) Have you tried evaluating this, lately?: let rec f () = f;;

Did you mean "let rec f () = f ()" ? - infinite loop as intended.
What's the problem?

Oleg

> But that's unfair, really.  It's good to experience a static type
> system; one never knows true freedom until it has been lost, after
> all. =)
From: Matthew Danish
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <20020911032719.F23781@lain.res.cmu.edu>
On Tue, Sep 10, 2002 at 03:45:32AM -0400, Oleg wrote:
> > f,g) Have you tried evaluating this, lately?: let rec f () = f;;
> 
> Did you mean "let rec f () = f ()" ? - infinite loop as intended.

Nope.  I meant exactly what I posted.

The equivalent being (defun f () #'f) in CL or (define (f) f) in Scheme.
It's in a similar situation to the Y-combinator with regard to the
typechecker.
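
A short CL sketch to make that concrete:

    (defun f () #'f)

    (funcall (f))            ; F's own function object again
    (eq (f) (funcall (f)))   ; => T

An ML type for F would have to satisfy t = unit -> t, an infinite
(recursive) type, which the usual unifier rejects; that is the same
obstacle one hits when trying to type the Y combinator directly.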

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Oleg
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <alph9p$g1o$2@newsmaster.cc.columbia.edu>
Matthew Danish wrote:

> On Tue, Sep 10, 2002 at 03:45:32AM -0400, Oleg wrote:
>> > f,g) Have you tried evaluating this, lately?: let rec f () = f;;
>> 
>> Did you mean "let rec f () = f ()" ? - infinite loop as intended.
> 
> Nope.  I meant exactly what I posted.
> 
> The equivalent being (defun f () #'f) in CL or (define (f) f) in Scheme.

Assuming such a function is not insane to start with, how exactly would you 
use it?

Oleg
From: Joe Marshall
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <vg5bgx1s.fsf@ccs.neu.edu>
Oleg <············@myrealbox.com> writes:

> Matthew Danish wrote:
> 
> > On Tue, Sep 10, 2002 at 03:45:32AM -0400, Oleg wrote:
> >> > f,g) Have you tried evaluating this, lately?: let rec f () = f;;
> >> 
> >> Did you mean "let rec f () = f ()" ? - infinite loop as intended.
> > 
> > Nope.  I meant exactly what I posted.
> > 
> > The equivalent being (defun f () #'f) in CL or (define (f) f) in Scheme.
> 
> Assuming such a function is not insane to start with, how exactly would you 
> use it?

By calling it, perhaps?
From: Oleg
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <alqmqi$dlp$2@newsmaster.cc.columbia.edu>
Joe Marshall wrote:

> Oleg <············@myrealbox.com> writes:
> 
>> Matthew Danish wrote:
>> 
>> > On Tue, Sep 10, 2002 at 03:45:32AM -0400, Oleg wrote:
>> >> > f,g) Have you tried evaluating this, lately?: let rec f () = f;;
>> >> 
>> >> Did you mean "let rec f () = f ()" ? - infinite loop as intended.
>> > 
>> > Nope.  I meant exactly what I posted.
>> > 
>> > The equivalent being (defun f () #'f) in CL or (define (f) f) in
>> > Scheme.
>> 
>> Assuming such a function is not insane to start with, how exactly would
>> you use it?
> 
> By calling it, perhaps?

An what argument?
From: Oleg
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <alqnp0$ecb$2@newsmaster.cc.columbia.edu>
Oleg wrote:

> Joe Marshall wrote:
> 
>> Oleg <············@myrealbox.com> writes:
>> 
>>> Matthew Danish wrote:
>>> 
>>> > On Tue, Sep 10, 2002 at 03:45:32AM -0400, Oleg wrote:
>>> >> > f,g) Have you tried evaluating this, lately?: let rec f () = f;;
>>> >> 
>>> >> Did you mean "let rec f () = f ()" ? - infinite loop as intended.
>>> > 
>>> > Nope.  I meant exactly what I posted.
>>> > 
>>> > The equivalent being (defun f () #'f) in CL or (define (f) f) in
>>> > Scheme.
>>> 
>>> Assuming such a function is not insane to start with, how exactly would
>>> you use it?
>> 
>> By calling it, perhaps?
> 
> An what argument?

s/An/On/gc
From: Joe Marshall
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <4rcvauyp.fsf@ccs.neu.edu>
Oleg <············@myrealbox.com> writes:

> Joe Marshall wrote:
> 
> > Oleg <············@myrealbox.com> writes:
> > 
> >> Matthew Danish wrote:
> >> 
> >> > On Tue, Sep 10, 2002 at 03:45:32AM -0400, Oleg wrote:
> >> >> > f,g) Have you tried evaluating this, lately?: let rec f () = f;;
> >> >> 
> >> >> Did you mean "let rec f () = f ()" ? - infinite loop as intended.
> >> > 
> >> > Nope.  I meant exactly what I posted.
> >> > 
> >> > The equivalent being (defun f () #'f) in CL or (define (f) f) in
> >> > Scheme.
> >> 
> >> Assuming such a function is not insane to start with, how exactly would
> >> you use it?
> > 
> > By calling it, perhaps?
> 
> An what argument?

Silly.  It doesn't take any arguments.
From: Oleg
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <alqppn$fti$2@newsmaster.cc.columbia.edu>
Joe Marshall wrote:

> Oleg <············@myrealbox.com> writes:
> 
>> Joe Marshall wrote:
>> 
>> > Oleg <············@myrealbox.com> writes:
>> > 
>> >> Matthew Danish wrote:
>> >> 
>> >> > On Tue, Sep 10, 2002 at 03:45:32AM -0400, Oleg wrote:
>> >> >> > f,g) Have you tried evaluating this, lately?: let rec f () = f;;
>> >> >> 
>> >> >> Did you mean "let rec f () = f ()" ? - infinite loop as intended.
>> >> > 
>> >> > Nope.  I meant exactly what I posted.
>> >> > 
>> >> > The equivalent being (defun f () #'f) in CL or (define (f) f) in
>> >> > Scheme.
>> >> 
>> >> Assuming such a function is not insane to start with, how exactly
>> >> would you use it?
>> > 
>> > By calling it, perhaps?
>> 
>> An what argument?
> 
> Silly.  It doesn't take any arguments.

If it doesn't take any arguments, then it's not a function. You should have 
stayed in school, kid.
From: Joe Marshall
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <sn0f9cpd.fsf@ccs.neu.edu>
Oleg <············@myrealbox.com> writes:

> Joe Marshall wrote:
> 
> > Oleg <············@myrealbox.com> writes:
> > 
> >> Joe Marshall wrote:
> >> 
> >> > Oleg <············@myrealbox.com> writes:
> >> > 
> >> >> Matthew Danish wrote:
> >> >> 
> >> >> > On Tue, Sep 10, 2002 at 03:45:32AM -0400, Oleg wrote:
> >> >> >> > f,g) Have you tried evaluating this, lately?: let rec f () = f;;
> >> >> >> 
> >> >> >> Did you mean "let rec f () = f ()" ? - infinite loop as intended.
> >> >> > 
> >> >> > Nope.  I meant exactly what I posted.
> >> >> > 
> >> >> > The equivalent being (defun f () #'f) in CL or (define (f) f) in
> >> >> > Scheme.
> >> >> 
> >> >> Assuming such a function is not insane to start with, how exactly
> >> >> would you use it?
> >> > 
> >> > By calling it, perhaps?
> >> 
> >> An what argument?
> > 
> > Silly.  It doesn't take any arguments.
> 
> If it doesn't take any arguments, than it's not a function. You should have 
> stayed in school, kid.

Hmmm....

(defun f () #'f)
(functionp #'f) => T

This is comp.lang.lisp, not sci.math
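
If you want an actual use, here is one sketch (hypothetical names, but
the idiom is common enough): a little state machine where each state is
a zero-argument function returning the next state, and a state that
returns itself marks termination.

(defun done () #'done)                 ; fixed point: calling it yields itself

(defun countdown (n)
  (if (zerop n)
      #'done
      (lambda () (countdown (1- n)))))

(defun drive (state &optional (steps 0))
  (let ((next (funcall state)))
    (if (eq next state)                ; DONE returned itself, so stop
        steps
        (drive next (1+ steps)))))

(drive (countdown 3))                  ; => 3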
From: Tim Bradshaw
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <ey3n0qnorrp.fsf@cley.com>
* oleg inconnu wrote:

> If it doesn't take any arguments, than it's not a function. You
> should have stayed in school, kid.

If its type is a subtype of FUNCTION then it's a function.  This is a
Lisp newsgroup, not a mathematics one, or for that matter an ML one.
It's also far from the only Lisp function that takes no arguments.

--tim
From: Johannes Flieger
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <m3n0qmiqis.fsf@hazchem.anarchos.net>
Oleg <············@myrealbox.com> writes:

> Joe Marshall wrote:

> > Silly.  It doesn't take any arguments.
> 
> If it doesn't take any arguments, than it's not a function. You should have 
> stayed in school, kid.

For the record: 0-ary functions are often employed in
mathematical logic and type theory. The only requirement on a function
is that it be single-VALUED. 

Zero-argument functions are commonly employed to represent, e.g.,
logical constants. This has the advantage that it allows for a unified
treatment of constant and function symbols, which has the convenience
that some metatheorems are slightly less tedious to prove.

You might want to consult a decent elementary mathematical logic
textbook (Herbert B. Enderton's "A Mathematical Introduction to Logic"
should suffice [see p. 68, for example]).

You might also consider getting a refund from your school.

Regards,
        J.
-- 
From: Fernando Rodríguez
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <u31snuc8773nsqlgum9q38iqr7kb9cpogc@4ax.com>
On Mon, 09 Sep 2002 12:00:32 -0400, Oleg <············@myrealbox.com> wrote:

>ilias wrote:
>
>[...]
>> Which language should i take a look on?
>
>ML, more specifically its O'Caml dialect.

Can't tell about ocaml, but Erlang and Oz seem to have some interesting
features.  



-----------------------
Fernando Rodriguez
From: Kaz Kylheku
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <cf333042.0209091220.47f731de@posting.google.com>
When you've seen it, what can impress you is nice software that works.

Language *ideas* can still impress you, but not their manifestation
as some stupid new syntax, implemented from scratch over an inadequate
substrate.

People experimenting with new programming language semantics ideas
should be using Common Lisp rather than writing everything from
scratch.
From: Paolo Amoroso
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <za99PSD9yT6BJZhoG5MObC9C37s3@4ax.com>
On Mon, 09 Sep 2002 18:06:07 +0300, ilias <·······@pontos.net> wrote:

> This request is adressed to those people which have extensive knowledge 
> about different, if possible all avaiable historical and so called 
> modern language.
> 
> Which language should i take a look on?

Do you mean you are done with Lisp (and hopefully comp.lang.lisp)? Great!
Next stop: INTERCAL.


> Is there something, that can impress me?

Probably not.


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://www.paoloamoroso.it/ency/README
From: Software Scavenger
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <a6789134.0209111555.2c71790@posting.google.com>
ilias <·······@pontos.net> wrote in message news:<················@pontos.net>...

> I'm a LISP novice. But from what i've seen i can construct with this 
> language nearly everything i can imagine.
> 
> So the questions:
> 
> - which language out there gives me this freedom of construction?

A lot of us were in your boat.  We wanted to find the perfect
programming language, or if that failed, to invent our own.  After
years of efforts, we gradually started to understand that life is too
short to try to do everything and achieve perfection.  There never
will be a perfect programming language.  But, of all the languages I
have investigated and used over the years, Common Lisp comes closer to
perfection than any other.  That means, compared to Common Lisp, the
other languages don't measure up, but have defects and limitations.

The best way for you to proceed would be to learn and use Common Lisp
until you become expert at it, before you spend any time at all on
other languages.  Learning programming languages is an investment of
time with a return of learning not only the language but also a lot of
interesting ideas from that language.  Such investment would pay off
for any language you learned, but the rate of return is higher for
Common Lisp.  Each hour you spend with some other language is one hour
less spent with Common Lisp, causing your investment of time to be
impaired by a lower rate of return.  And becoming expert pays off with
synergy, so your return on investment is even better than when you
start learning the language.
From: ilias
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <alompn$o3c$1@usenet.otenet.gr>
Software Scavenger wrote:
> ilias <·······@pontos.net> wrote in message news:<················@pontos.net>...
> 
> 
>>I'm a LISP novice. But from what i've seen i can construct with this 
>>language nearly everything i can imagine.
>>
>>So the questions:
>>
>>- which language out there gives me this freedom of construction?
> 
> 
> A lot of us were in your boat.  We wanted to find the perfect
> programming language, or if that failed, to invent our own.  

The perfect language.

The nearly perfect 'middle-way'.

> After
> years of efforts, we gradually started to understand that life is too
> short to try to do everything and achieve perfection.

So little time, i agree.

That's why i don't want to build something new.

> There never
> will be a perfect programming language.

The universe is built by that.

> But, of all the languages I
> have investigated and used over the years, Common Lisp comes closer to
> perfection than any other.  That means, compared to Common Lisp, the
> other languages don't measure up, but have defects and limitations.

I see.

Have you investigated the very young languages, too? I heard of something 
like O'CAML. But the thread got too complex and i stopped reading.

If i understand right, there are some languages that fall into the 
"lisp-dialects", but they somehow 'hide' this. I mean, they don't call 
themselves DylanLisp or so.

> The best way for you to proceed would be to learn and use Common Lisp
> until you become expert at it, before you spend any time at all on
> other languages.  Learning programming languages is an investment of
> time with a return of learning not only the language but also a lot of
> interesting ideas from that language.  Such investment would pay off
> for any language you learned, but the rate of return is higher for
> Common Lisp.  Each hour you spend with some other language is one hour
> less spent with Common Lisp, causing your investment of time to be
> impaired by a lower rate of return.  And becoming expert pays off with
> synergy, so your return on investment is even better than when you
> start learning the language.

I understand your thoughts.
From: Oleg
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <alphlb$gdr$1@newsmaster.cc.columbia.edu>
Software Scavenger wrote:

> ilias <·······@pontos.net> wrote in message
> news:<················@pontos.net>...
> 
>> I'm a LISP novice. But from what i've seen i can construct with this
>> language nearly everything i can imagine.
>> 
>> So the questions:
>> 
>> - which language out there gives me this freedom of construction?
> 
> A lot of us were in your boat.  We wanted to find the perfect
> programming language, or if that failed, to invent our own.  After
> years of efforts, we gradually started to understand that life is too
> short to try to do everything and achieve perfection.  There never
> will be a perfect programming language.  But, of all the languages I
> have investigated and used over the years, Common Lisp comes closer to
> perfection than any other.  That means, compared to Common Lisp, the
> other languages don't measure up, but have defects and limitations.

You got it all wrong :) Compared to O'Caml, the other languages don't 
measure up, but have defects and limitations. CL is merely "interesting" 
(unlike, say, Java, Perl, Python, VB, etc., which are not even "interesting").

> The best way for you to proceed would be to learn and use Common Lisp
> until you become expert at it, before you spend any time at all on
> other languages.  Learning programming languages is an investment of
> time with a return of learning not only the language but also a lot of
> interesting ideas from that language.  Such investment would pay off
> for any language you learned, but the rate of return is higher for
> Common Lisp.  Each hour you spend with some other language is one hour
> less spent with Common Lisp, causing your investment of time to be
> impaired by a lower rate of return.  And becoming expert pays off with
> synergy, so your return on investment is even better than when you
> start learning the language.
From: Marco Antoniotti
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <y6c7khr8595.fsf@octagon.mrl.nyu.edu>
Oleg <············@myrealbox.com> writes:

> Software Scavenger wrote:
> 
> > ilias <·······@pontos.net> wrote in message
> > news:<················@pontos.net>...
> > 
> >> I'm a LISP novice. But from what i've seen i can construct with this
> >> language nearly everything i can imagine.
> >> 
> >> So the questions:
> >> 
> >> - which language out there gives me this freedom of construction?
> > 
> > A lot of us were in your boat.  We wanted to find the perfect
> > programming language, or if that failed, to invent our own.  After
> > years of efforts, we gradually started to understand that life is too
> > short to try to do everything and achieve perfection.  There never
> > will be a perfect programming language.  But, of all the languages I
> > have investigated and used over the years, Common Lisp comes closer to
> > perfection than any other.  That means, compared to Common Lisp, the
> > other languages don't measure up, but have defects and limitations.
> 
> You got it all wrong :) Compared to O'Caml, the other languages don't 
> measure up, 

How do you implement the Visitor Pattern in OCaml?  Code please.  You
show me yours, I'll show you mine.

> but have defects and limitations. CL is merely "interesting" 
> (Unlike, say Java, Perl, Python, VB, etc. that are not even
> "interesting")

I find OCaml "interesting".  It just does not measure up to *my*
expectations.

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group        tel. +1 - 212 - 998 3488
715 Broadway 10th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA                 http://bioinformatics.cat.nyu.edu
                    "Hello New York! We'll do what we can!"
                           Bill Murray in `Ghostbusters'.
From: ilias
Subject: Re: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <alprg7$h89$1@usenet.otenet.gr>
Oleg wrote:
> Software Scavenger wrote:
> 
> 
>>ilias <·······@pontos.net> wrote in message
>>news:<················@pontos.net>...
>>
>>
>>>I'm a LISP novice. But from what i've seen i can construct with this
>>>language nearly everything i can imagine.
>>>
>>>So the questions:
>>>
>>>- which language out there gives me this freedom of construction?
>>
>>A lot of us were in your boat.  We wanted to find the perfect
>>programming language, or if that failed, to invent our own.  After
>>years of efforts, we gradually started to understand that life is too
>>short to try to do everything and achieve perfection.  There never
>>will be a perfect programming language.  But, of all the languages I
>>have investigated and used over the years, Common Lisp comes closer to
>>perfection than any other.  That means, compared to Common Lisp, the
>>other languages don't measure up, but have defects and limitations.
> 
> 
> You got it all wrong :) Compared to O'Caml, the other languages don't 
> measure up, but have defects and limitations. CL is merely "interesting" 
> (Unlike, say Java, Perl, Python, VB, etc. that are not even "interesting")

how old is O'Caml?
From: Thomas Guettler
Subject: Python Was: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <alphsc$c4n$06$1@news.t-online.com>
ilias wrote:
> This request is adressed to those people which have extensive knowledge 
> about different, if possible all avaiable historical and so called 
> modern language.
> 
> Which language should i take a look on?
> 
> Is there something, that can impress me?

I am very impressed by python.

The code is much more readable than lisp or perl. It has
a rich set of modules. There is even an object-oriented
database (ZODB). It is open source and the newsgroup is
very helpful.

thomas
From: Marco Antoniotti
Subject: Re: Python Was: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <y6c3csf855n.fsf@octagon.mrl.nyu.edu>
Thomas Guettler <···········@thomas-guettler.de> writes:

> ilias wrote:
> > This request is adressed to those people which have extensive
> > knowledge about different, if possible all avaiable historical and
> > so called modern language.
> > Which language should i take a look on?
> > Is there something, that can impress me?
> 
> I am very impressed by python.
> 
> The code is much more readable than lisp or perl.

Except when you accidentally delete a space at the beginning of the line.

> It has a rich set of modules.

Who doesn't?

> There is even a object oriented database (ZODB).

So?

> It is open source and the newsgroup is very helpful.

Same here.

And CL is still more featureful than Greenspun's applications.

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group        tel. +1 - 212 - 998 3488
715 Broadway 10th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA                 http://bioinformatics.cat.nyu.edu
                    "Hello New York! We'll do what we can!"
                           Bill Murray in `Ghostbusters'.
From: Software Scavenger
Subject: Re: Python Was: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <a6789134.0209122136.61f3a3ac@posting.google.com>
Thomas Guettler <···········@thomas-guettler.de> wrote in message news:<···············@news.t-online.com>...

> The code is much more readable than lisp or perl. It has

I'm curious to know what do-combinations looks like in Python.
From: Marco Antoniotti
Subject: Re: Python Was: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <y6cwupqdkcw.fsf@octagon.mrl.nyu.edu>
··········@mailandnews.com (Software Scavenger) writes:

> Thomas Guettler <···········@thomas-guettler.de> wrote in message news:<···············@news.t-online.com>...
> 
> > The code is much more readable than lisp or perl. It has
> 
> I'm curious to know what do-combinations looks like in Python.

Does it look?

Cheers


-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group        tel. +1 - 212 - 998 3488
715 Broadway 10th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA                 http://bioinformatics.cat.nyu.edu
                    "Hello New York! We'll do what we can!"
                           Bill Murray in `Ghostbusters'.
From: Johann Hibschman
Subject: Re: Python Was: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <m2k7lpinx8.fsf@physics.berkeley.edu>
Marco Antoniotti <·······@cs.nyu.edu> writes:

> ··········@mailandnews.com (Software Scavenger) writes:
> > > The code is much more readable than lisp or perl. It has
> > 
> > I'm curious to know what do-combinations looks like in Python.
> 
> Does it look?

Well, I like Python.  I've used Python and C++ for at least one major
project that, in retrospect, I should have used Common Lisp for, but
it did teach me the language.

I'd write something like do-combinations as a generator in python:

def combinations(k, objs):
    "Iterate recursively over the objects"
    if k > len(objs) or k == 0:
        return
    elif k == 1:
        for o in objs:
            yield [o]
    else:
        items = range(k); items[0] = objs[0]
        for items[1:] in combinations(k-1, objs[1:]):
            yield items
        for items in combinations(k, objs[1:]):
            yield items

for a, b, c in combinations(3, [1, 2, 3, 4, 5]):
    print "Combination is: ", a, b, c


Sure, I should probably use a more clever algorithm that doesn't eat
quite as much stack space, but that's easily done with a bit more
effort.  The important part is that I never have to cons up a list of
all the combinations.

The do-combinations example just happens to fit within the class of
macro-like problems that are solvable with generators.  I'm not trying
to argue that Python has anything like the expressive power of the CL
macro system, just that this particular example is doable.


--Johann
From: Thomas F. Burdick
Subject: Re: Python Was: LISP - When you've seen it, what else can impress?
Date: 
Message-ID: <xcvwuppbsky.fsf@tornado.OCF.Berkeley.EDU>
··········@mailandnews.com (Software Scavenger) writes:

> Thomas Guettler <···········@thomas-guettler.de> wrote in message news:<···············@news.t-online.com>...
> 
> > The code is much more readable than lisp or perl. It has
> 
> I'm curious to know what do-combinations looks like in Python.

Starting /opt/local/bin/cmucl ...
CMU Common Lisp 18d, running on tornado.OCF.Berkeley.EDU
Send questions to ··········@cons.org. and bug reports to ·········@cons.org.
Loaded subsystems:
    Python 1.0, target SPARCstation/Solaris 2
    CLOS based on PCL version:  September 16 92 PCL (f)
* 
* (disassemble (defmacro do-combinations (syms list &rest body)
   (labels ((work (syms x)
               (let ((y (gensym)))
                  (if (cdr syms)
                        `(loop as (,(car syms) . ,y) on ,x
                               do ,(work (cdr syms) y))
                     `(loop as ,(car syms) in ,x do ,@body)))))
      (work syms list))))
100DB260:       .ENTRY "DEFUN (SETF MACRO-FUNCTION)"(&rest args) ; (FUNCTION
                                                                    (&REST T) ..)
     278:       ADD        -858, %CODE
     27C:       ADD        %CFP, 32, %CSP
     280:       CMP        %NARGS, %ZERO
     284:       MOV        %CSP, %NL0
     288:       BEQ        L2
     28C:       ADD        %NARGS, %CSP
     290:       SUBCC      %NARGS, 24, %NL1
     294:       BLE        L1
     298:       MOV        %CSP, %NL3
     29C:       ADD        %CFP, %NARGS, %NL2
     2A0: L0:   ADD        -4, %NL2
     2A4:       SUBCC      4, %NL1
     2A8:       ADD        -4, %NL3
     2AC:       LD         [%NL2], %L0
     2B0:       BGT        L0
     2B4:       ST         %L0, [%NL3]
     2B8: L1:   SUBCC      %NARGS, 0, %NL1
     2BC:       BEQ        L2
     2C0:       SUBCC      4, %NL1
     2C4:       ST         %A0, [%NL0]
     2C8:       BEQ        L2
     2CC:       SUBCC      4, %NL1
     2D0:       ST         %A1, [%NL0+4]
     2D4:       BEQ        L2
     2D8:       SUBCC      4, %NL1
     2DC:       ST         %A2, [%NL0+8]
     2E0:       BEQ        L2
     2E4:       SUBCC      4, %NL1
     2E8:       ST         %A3, [%NL0+12]
     2EC:       BEQ        L2
     2F0:       SUBCC      4, %NL1
     2F4:       ST         %A4, [%NL0+16]
     2F8:       BEQ        L2
     2FC:       SUBCC      4, %NL1
     300:       ST         %A5, [%NL0+20]
     304: L2:   LD         [%LEXENV+7], %A2
     308:       ST         %OCFP, [%CFP]     ; :OPTIONAL entry point
     30C:       ST         %LRA, [%CFP+4]    ; No-arg-parsing entry point
     310:       LD         [%CODE+69], %A0   ; 'KERNEL:SIMPLE-UNDEFINED-FUNCTION
     314:       LD         [%CODE+73], %A1   ; :NAME
     318:       LD         [%CODE+77], %A3   ; :FORMAT-CONTROL
     31C:       LD         [%CODE+81], %A4   ; "Cannot funcall macro functions."
     320:       LD         [%CODE+65], %CNAME ; #<FDEFINITION object for ERROR>
     324:       MOV        %CFP, %OCFP
     328:       ADD        %ZERO, 20, %NARGS
     32C:       LD         [%CNAME+5], %A5
     330:       ADD        %CODE, 1088, %LRA
     334:       MOV        %CSP, %CFP
     338:       J          %A5+23
     33C:       MOV        %A5, %CODE
     340:       .LRA
     344:       MOV        %OCFP, %CSP
     348:       NOP
     34C:       ADD        -1088, %CODE
     350:       UNIMP      10                ; Error trap
     354:       BYTE       #x04
     355:       BYTE       #x19              ; INVALID-ARGUMENT-COUNT-ERROR
     356:       BYTE       #xFE, #xED, #x01  ; NARGS
     359:       .ALIGN     4
     35C:       UNIMP      10                ; Error trap
     360:       BYTE       #x04
     361:       BYTE       #x01              ; OBJECT-NOT-FUNCTION-ERROR
     362:       BYTE       #xFE, #x0E, #x02  ; A0
     365:       .ALIGN     4
     368:       UNIMP      10                ; Error trap
     36C:       BYTE       #x04
     36D:       BYTE       #x16              ; OBJECT-NOT-SYMBOL-ERROR
     36E:       BYTE       #xFE, #x2E, #x02  ; A1
     371:       .ALIGN     4

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'