From: Henry Bigelow
Subject: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159299709.385833.102430@i3g2000cwc.googlegroups.com>
hi,

i am writing to get an idea whether lisp would be useful.  so far, i
think it would be ideal as a mode of expression, but i'm not sure
whether it has (or i would be able to write) memory efficient or
sufficiently fast code with it.

i have zero lisp experience.  i started with perl, then c++, then
o'caml.  a few days ago i read paul graham's summary of john mccarthy's
paper, and then started reading the original, and On Lisp.  i'm
completely wowed, and would love to use it to write my programs.

i'm a bioinformatics student, writing a bayesian network software with
a very demanding memory requirement, and potentially long running time
for training.  it predicts protein structure, and must have hundreds of
nodes for each protein, each with a matrix storing relationships
between amino acids and structural states.  if this is done
efficiently, it certainly fits in a 2GB machine.

but i'm not sure why certain lisp programs in the shootout are so
large.  how do i interpret the results of the shootout?  for example,
lisp fannkuch is 26 times slower than the c version, and lisp chameneos
is 120x larger.

i'm surprised to see such large discrepancy actually.  is it possible
to improve on these numbers?  so far, the idea of macros and
code-as-data and mini-compilers is completely fantastic and foreign to
me, and seems to hold much promise for writing code with an eye on
memory and speed efficiency.  but the shootout isn't encouraging in
certain cases.

TIA

henry


chameneos	-1.2	-120	1.2
fannkuch	  -26	-2.5	-1.2
k-nucleotide	  -8.1	-3.1	1.4
n-body	           -1.4	  -92	-1.2
pidigits	-5.7	-51	-1.1	
regex-dna	139	-7.7	2.0	 300,000  (yay)

From: ······@gmail.com
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159301938.768966.209040@e3g2000cwe.googlegroups.com>
Henry Bigelow wrote:
> i have zero lisp experience.  i started with perl, then c++, then
> o'caml.  a few days ago i read paul graham's summary of john mccarthy's
> paper, and then started reading the original, and On Lisp.  i'm
> completely wowed, and would love to use it to write my programs.

Welcome here!

> but i'm not sure why certain lisp programs in the shootout are so
> large.  how do i interpret the results of the shootout?  for example,
> lisp fannkuch is 26 times slower than the c version, and lisp chameneos
> is 120x larger.

It would be useful to post a link to the actual data page -- at least
to see what they count as "program size". Unless one does lisp -> C ->
executable conversion (which is totally possible with some less popular
lisp dialects), I'd assume that resulting lisp executable is a complete
image, which would include all of the library AND compiler.

> is it possible to improve on these numbers?

Minimize consing, use lisp arrays, make sure to declare your variables
(to let compiler know what it is allowed to optimize) -- at the end
memory requirements should not be much worse with lisp than for C or
Fortran (except for the above-mentioned constant additon of having
compiler and library "always there", which is a bonus ;-) ).

As to speed -- this is the required reading:
http://www.flownet.com/gat/papers/lisp-java.pdf ;-)

Of course one would want to use compiled version of lisp, check out
SBCL...

Just my $0.02,

Paul B.
From: Henry Bigelow
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159305072.957391.191970@k70g2000cwa.googlegroups.com>
······@gmail.com wrote:
> Welcome here!

thank you.

>
> > but i'm not sure why certain lisp programs in the shootout are so
> > large.  how do i interpret the results of the shootout?  for example,
> > lisp fannkuch is 26 times slower than the c version, and lisp chameneos
> > is 120x larger.

>
> It would be useful to post a link to the actual data page -- at least
> to see what they count as "program size". Unless one does lisp -> C ->
> executable conversion (which is totally possible with some less popular
> lisp dialects), I'd assume that resulting lisp executable is a complete
> image, which would include all of the library AND compiler.
>

sorry for the confusion, by "larger" i meant memory use.

here's the link:

http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=sbcl&lang2=gcc



> > is it possible to improve on these numbers?
>
> Minimize consing, use lisp arrays, make sure to declare your variables
> (to let compiler know what it is allowed to optimize) -- at the end
> memory requirements should not be much worse with lisp than for C or
> Fortran (except for the above-mentioned constant additon of having
> compiler and library "always there", which is a bonus ;-) ).
>
> As to speed -- this is the required reading:
> http://www.flownet.com/gat/papers/lisp-java.pdf ;-)

thanks for the link.  looks very encouraging.

a more important question is:

is this shootout really definitive?  are all the programs up there
written such that one would consider them both elegant and efficient,
or are some very poorly written?

is it a popular shootout, or are there other benchmarks that people
like better?

thanks,

henry


>
> Of course one would want to use compiled version of lisp, check out
> SBCL...
> 
> Just my $0.02,
> 
> Paul B.
From: ······@gmail.com
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159310650.133548.206290@i3g2000cwc.googlegroups.com>
There is this page on Shootout site:
http://shootout.alioth.debian.org/gp4/miscfile.php?file=benchmarking&title=Flawed%20Benchmarks
and the first link off it is quite educational... ;-)

> a more important question is:
>
> is this shootout really definitive?  are all the programs up there
> written such that one would consider them both elegant and efficient,
> or are some very poorly written?
> is it a popular shootout, or are there other benchmarks that people
> like better?

This is the first time I've heard of this particular competition (and
set of benchmarks) -- it looks rather cool! I doubt it has the same
clout as SPEC though... :-)

Hey, while you are at learning lisp, why not debug that 120x memory
program to see why this happens?  profile and time are your friends...
;-)

CL-USER>CL-USER> (time (main 5000000))
10000000
Evaluation took:
  34.466 seconds of real time
  12.708794 seconds of user run time
  21.369335 seconds of system run time
  [Run times include 0.044 seconds GC run time.]
  0 page faults and
  79,975,616 bytes consed.
NIL
CL-USER>

-- yes, almost 80 MB consed -- but why the heck threading overherad is
so high? (most of runtime is in system time, uh-huh...)

Paul B.
From: Henry Bigelow
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159372598.589924.51120@i42g2000cwa.googlegroups.com>
······@gmail.com wrote:
> There is this page on Shootout site:
> http://shootout.alioth.debian.org/gp4/miscfile.php?file=benchmarking&title=Flawed%20Benchmarks
> and the first link off it is quite educational... ;-)
>
> > a more important question is:
> >
> > is this shootout really definitive?  are all the programs up there
> > written such that one would consider them both elegant and efficient,
> > or are some very poorly written?
> > is it a popular shootout, or are there other benchmarks that people
> > like better?
>
> This is the first time I've heard of this particular competition (and
> set of benchmarks) -- it looks rather cool! I doubt it has the same
> clout as SPEC though... :-)
>
> Hey, while you are at learning lisp, why not debug that 120x memory
> program to see why this happens?  profile and time are your friends...
> ;-)
>
> CL-USER>CL-USER> (time (main 5000000))
> 10000000
> Evaluation took:
>   34.466 seconds of real time
>   12.708794 seconds of user run time
>   21.369335 seconds of system run time
>   [Run times include 0.044 seconds GC run time.]
>   0 page faults and
>   79,975,616 bytes consed.
> NIL
> CL-USER>
>
> -- yes, almost 80 MB consed -- but why the heck threading overherad is
> so high? (most of runtime is in system time, uh-huh...)

thanks paul!  by the way, i read the paper analyzing a lisp fannkuch
benchmark for efficiency, and it mentioned the use of 'aref' and
several other things.  i looked at the actual benchmark, and was hoping
to find some telltale sign that it didn't use any of these ideas, but i
didn't.  i have no idea whether the shootout fannkuch lisp benchmark is
written the way this paper describes.

can i interest you or anyone else to optimize one or more of these
benchmarks, so they aren't such negative publicity for lisp?

BTW for those reading, the shootout is:

http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=sbcl&lang2=gcc

and the offending benchmarks are called fannkuch and chameneos.


henry


> 
> Paul B.
From: Henry Bigelow
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159551404.921008.228960@i3g2000cwc.googlegroups.com>
hi paul,
  i took a look at SPEC--it seems definitive, indeed, but for comparing
computer systems, not languages.  is that right?  would you be able to
point me in a direction to compare languages?  i have a hunch that
there is something very wrong with the lisp benchmark which is about
100 times more memory/ or slower than a c program, but it'd be very
difficult for me to rewrite it more efficiently.

it's possible that for various reasons, people don't care enough about
the shootout

http://shootout.alioth.debian.org/

to fix the benchmarks.  or, it's possible that the lisp ones are indeed
written to be as memory/speed efficient as possible.

what do you think is going on here?

thanks,

henry


> This is the first time I've heard of this particular competition (and
> set of benchmarks) -- it looks rather cool! I doubt it has the same
> clout as SPEC though... :-)
>
> Hey, while you are at learning lisp, why not debug that 120x memory
> program to see why this happens?  profile and time are your friends...
> ;-)

you're overestimating my skill here!


>
> CL-USER>CL-USER> (time (main 5000000))
> 10000000
> Evaluation took:
>   34.466 seconds of real time
>   12.708794 seconds of user run time
>   21.369335 seconds of system run time
>   [Run times include 0.044 seconds GC run time.]
>   0 page faults and
>   79,975,616 bytes consed.
> NIL
> CL-USER>
From: Javier
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159564489.409528.167370@k70g2000cwa.googlegroups.com>
Henry Bigelow ha escrito:

> hi paul,
>   i took a look at SPEC--it seems definitive, indeed, but for comparing
> computer systems, not languages.  is that right?  would you be able to
> point me in a direction to compare languages?  i have a hunch that
> there is something very wrong with the lisp benchmark which is about
> 100 times more memory/ or slower than a c program, but it'd be very
> difficult for me to rewrite it more efficiently.
>
> it's possible that for various reasons, people don't care enough about
> the shootout
>
> http://shootout.alioth.debian.org/
>
> to fix the benchmarks.  or, it's possible that the lisp ones are indeed
> written to be as memory/speed efficient as possible.
>
> what do you think is going on here?

We have already said that:

1) Lisp loads the entire compiler/debugger/documentation among with the
program. As you can imagine, it is impossible to compite to C in memory
ussage in this situation (imagine that you would have to load GCC with
your C program, it would be much bigger, don't think so?). But please,
take also in count that, while in a program written in C it does not
count the memory shared with the C library (it is linked dinamically),
in Lisp you have the Lisp library loaded with you, so the C library is
already consuming memory from your system, but the program which you
use to see that memory doesn't take that in count.
2) It is not 100x slower. Please take a look at:
http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=sbcl&lang2=gcc
That demonstrates that, except for some bechmarks in which algorithms
differ a lot, SBCL is almost as fast as C.
3) SBCL is not the only implementation. There are other implementations
which even run faster, and which consumes less memory.
From: Henry Bigelow
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159577501.845665.30020@i3g2000cwc.googlegroups.com>
Javier wrote:

>
> We have already said that:
>
> 1) Lisp loads the entire compiler/debugger/documentation among with the
> program.

i didn't know that.  but, if so, why aren't all the other benchmarks
100x more memory?  k-nucleotide is only 3.1x more memory.

As you can imagine, it is impossible to compite to C in memory
> ussage in this situation (imagine that you would have to load GCC with
> your C program, it would be much bigger, don't think so?). But please,
> take also in count that, while in a program written in C it does not
> count the memory shared with the C library (it is linked dinamically),
> in Lisp you have the Lisp library loaded with you, so the C library is
> already consuming memory from your system, but the program which you
> use to see that memory doesn't take that in count.



> 2) It is not 100x slower. Please take a look at:
> http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=sbcl&lang2=gcc
> That demonstrates that, except for some bechmarks in which algorithms
> differ a lot, SBCL is almost as fast as C.

agreed.  fannkuch is 27x slower, and chameneos requires 120x more
memory.  i'm not contesting anything besides 'fannkuch' for speed.  i'm
not really concerned about 'startup' because it's amortized.

> 3) SBCL is not the only implementation. There are other implementations
> which even run faster, and which consumes less memory.

yes, but the use of SBCL here is obviously not why the fannkuch
benchmark is 26 times slower, instead of 3 or 5 times slower.


thanks in advance,

henry
From: Javier
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159579595.294789.7020@m7g2000cwm.googlegroups.com>
Henry Bigelow ha escrito:

> Javier wrote:
>
> >
> > We have already said that:
> >
> > 1) Lisp loads the entire compiler/debugger/documentation among with the
> > program.
>
> i didn't know that.  but, if so, why aren't all the other benchmarks
> 100x more memory?  k-nucleotide is only 3.1x more memory.

Because that algorithm uses a lot of memory, even in C.
If you don't believe me, download and install SBCL. Start it. Only
starting it, requires about 6Mb.
Now define a function:

(defun jj () (format t "hello"))

Now it requires about 11Mb, which means that the compiler has been
loaded and compiled your function in real time. If you start to do
things, like using the debugger, the documentation, extra libraries,
etc, your application can grow in memory very fast.
A similar thing happens with Java.
In C, you are not compiling code in real time, and your program uses
the libraries on your system, so this makes the ilussion that your C
program uses very little memory. But it probably no. What is happening
is that your C program uses libraries that are already loaded in memory
(for example stdlib).
You can see that this is always happening even in C. Take for example
Mozilla, Open Office, the X server, GCC... all of them are using a lot
of memory because they need libraries which are not already dinamically
loaded.

> > 2) It is not 100x slower. Please take a look at:
> > http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=sbcl&lang2=gcc
> > That demonstrates that, except for some bechmarks in which algorithms
> > differ a lot, SBCL is almost as fast as C.
>
> agreed.  fannkuch is 27x slower, and chameneos requires 120x more
> memory.  i'm not contesting anything besides 'fannkuch' for speed.  i'm
> not really concerned about 'startup' because it's amortized.

A good measure of how well is SBCL doing is disassembling:

(disassemble 'jj)

; 1160AD7A:       BA27000008       MOV EDX, 134217767         ;
no-arg-parsing entry point
;       7F:       8B3D14AD6011     MOV EDI, [#x1160AD14]      ; "hello"
;       85:       8B0518AD6011     MOV EAX, [#x1160AD18]      ;
#<FDEFINITION object for FORMAT>
;       8B:       B908000000       MOV ECX, 8
;       90:       FF75F8           PUSH DWORD PTR [EBP-8]
;       93:       FF6005           JMP DWORD PTR [EAX+5]
;       96:       90               NOP
;       97:       90               NOP
;       98:       0F0B0A           BREAK 10                   ; error
trap
;       9B:       02               BYTE #X02
;       9C:       18               BYTE #X18                  ;
INVALID-ARG-COUNT-ERROR
;       9D:       4D               BYTE #X4D                  ; ECX
;       9E:       0F0B0A           BREAK 10                   ; error
trap
;       A1:       02               BYTE #X02
;       A2:       18               BYTE #X18                  ;
INVALID-ARG-COUNT-ERROR
;       A3:       4D               BYTE #X4D                  ; ECX


If you understand some asemssembler you can see that SBCL is actually
very clever compiling things. :-)
(For example, it is using a jump in line 93 instead of function call to
avoid some overhead. Not all C compilers are able to do so.)
From: Jon Harrop
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <451f0c98$0$8747$ed2619ec@ptn-nntp-reader02.plus.net>
Javier wrote:
> 2) It is not 100x slower. Please take a look at:
>
http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=sbcl&lang2=gcc
> That demonstrates that, except for some bechmarks in which algorithms
> differ a lot, SBCL is almost as fast as C.

According to the page you cite SBCL is certainly not "almost as fast as C".
Just look at the graph.

Moreover, as a HLL, Lisp should be relatively better than C when algorithms
are allowed to differ (many algorithms are prohibitively complicated to
write in C).

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Juho Snellman
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <slrnehrlfb.bu5.jsnell@sbz-30.cs.Helsinki.FI>
Henry Bigelow <·········@gmail.com> wrote:
> hi paul,
>   i took a look at SPEC--it seems definitive, indeed, but for comparing
> computer systems, not languages.  is that right?  would you be able to
> point me in a direction to compare languages?  i have a hunch that
> there is something very wrong with the lisp benchmark which is about
> 100 times more memory/ or slower than a c program, but it'd be very
> difficult for me to rewrite it more efficiently.
>
> it's possible that for various reasons, people don't care enough about
> the shootout

As far as I can tell, the lifecycle of a shootout benchmark goes like
this:

  1. A new benchmark is added, nobody cares
  2. A newbie with 2 hours of Lisp experience notices that there's
     no implementation for the benchmark. He thinks that a crappy
     implementation is better than no implementation, and proceeds
     to write something embarrasingly slow and hideously ugly.
  3. People who mistakenly think that the Shootout has any validity
     bitch and moan about Lisp being slow on that benchmark.
  4. Someone who knows how to optimize Lisp code gets tired of the
     moaning, and either rewrites the whole program (if it was
     sufficiently hideous) or makes a few obvious but critical
     improvements (if the program was salvageable). A huge speedup
     results, the people from step 3 start whining about something else.
  5. The requirements for the benchmark are modified, and the optimized
     Lisp implementation gets deleted. There's no sign of it ever
     having existed.
  6. Return to step 2

The fannkuch benchmark is currently at step 3, having been at step 5
at least once. The old fannkuch program was about 10 times faster than
what is currently up on the shootout site.

-- 
Juho Snellman
From: Henry Bigelow
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159594164.286719.75430@i3g2000cwc.googlegroups.com>
Juho Snellman wrote:
> Henry Bigelow <·········@gmail.com> wrote:
> > hi paul,
> >   i took a look at SPEC--it seems definitive, indeed, but for comparing
> > computer systems, not languages.  is that right?  would you be able to
> > point me in a direction to compare languages?  i have a hunch that
> > there is something very wrong with the lisp benchmark which is about
> > 100 times more memory/ or slower than a c program, but it'd be very
> > difficult for me to rewrite it more efficiently.
> >
> > it's possible that for various reasons, people don't care enough about
> > the shootout
>
> As far as I can tell, the lifecycle of a shootout benchmark goes like
> this:
>
>   1. A new benchmark is added, nobody cares
>   2. A newbie with 2 hours of Lisp experience notices that there's
>      no implementation for the benchmark. He thinks that a crappy
>      implementation is better than no implementation, and proceeds
>      to write something embarrasingly slow and hideously ugly.
>   3. People who mistakenly think that the Shootout has any validity
>      bitch and moan about Lisp being slow on that benchmark.
>   4. Someone who knows how to optimize Lisp code gets tired of the
>      moaning, and either rewrites the whole program (if it was
>      sufficiently hideous) or makes a few obvious but critical
>      improvements (if the program was salvageable). A huge speedup
>      results, the people from step 3 start whining about something else.
>   5. The requirements for the benchmark are modified, and the optimized
>      Lisp implementation gets deleted. There's no sign of it ever
>      having existed.
>   6. Return to step 2
>
> The fannkuch benchmark is currently at step 3, having been at step 5
> at least once. The old fannkuch program was about 10 times faster than
> what is currently up on the shootout site.

juho, thanks for your response.  i took a look at the benchmarks, and
saw that you'd modified several of them.  is this your comment, by the
way, from the spectral-norm benchmark? :

Note that sbcl is at least 10 times faster than either clisp or gcl
;; on this program, running with an argument of 500.  It would be nice
;; to know why the others are so slow.


if so, did you ever figure out the cause of the discrepancy?  i don't
know whether it's true of c or c++ that different compilers can give
drastically different code speeds.

it looks like you have a lot of experience.  so, benchmark politics
aside, do you find lisp to be fast enough to do heavy-duty computing,
(in my case, a bayesian network with hundreds of nodes and lots of
floating point addition, but also written in a very extensible,
abstract style)

thanks,

henry

p.s.  what would you say to changing the shootout such that multiple
programs can be submitted, and the results shown based on whatever
selection criteria wanted?  who moderates it?



> 
> -- 
> Juho Snellman
From: Wade Humeniuk
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <5NoTg.46656$cz3.23033@edtnps82>
Speaking of which.  By modifying the original somewhat (I do not have SBCL)
I do not have the patience to code it like C, so....

(defvar *fannkuch-prints* nil)
(defvar *fannkuch-max-flips* 0)

(declaim (inline flip make-fannkuch init-fannkuch print-fannkuch
                  copy-fannkuch permute-fannkuch count-flips))

(defun make-fannkuch (n)
   (make-array n :element-type '(unsigned-byte 8)))

(defun init-fannkuch (fannkuch end)
   (loop for n from 1
         for i from 0 below end
         do (setf (aref fannkuch i) n))
   fannkuch)

(defun copy-fannkuch (from to start end)
   (declare (type (simple-array (unsigned-byte 8) (*)) from to)
            (optimize (speed 3) (safety 0) (debug 0) #+lispworks(fixnum-safety 0)))
   (loop for i from start below end
         do (setf (aref to i) (aref from i))))

(defun permute-fannkuch (fannkuch n)
   (declare (type (simple-array (unsigned-byte 8) (*)) fannkuch)
            (optimize (speed 3) (safety 0) (debug 0) #+lispworks(fixnum-safety 0)))
   (let ((first (aref fannkuch 0)))
     (loop for i from 0 below (1- n)
           do (setf (aref fannkuch i) (aref fannkuch (1+ i))))
     (setf (aref fannkuch (1- n)) first)
     fannkuch))

(defun print-fannkuch (permutation)
   (declare (type (simple-array (unsigned-byte 8) (*)) permutation)
            (optimize (speed 3) (safety 0) (debug 0) #+lispworks(fixnum-safety 0)))
   (when (< *fannkuch-prints* 30)
     (loop for i across permutation do
           (princ i))
     (terpri)
     (incf *fannkuch-prints*)))

(defun flip (permutation)
   (declare (type (simple-array (unsigned-byte 8) (*)) permutation)
            (optimize (speed 3) (safety 0) (debug 0) #+lispworks(fixnum-safety 0)))
   (let ((n (aref permutation 0)))
     (when (> n 1)
       (loop for start from 0
             for end downfrom (1- n)
             while (< start end) do
             (rotatef (aref permutation start)
                      (aref permutation end))))))

(defun count-flips (permutation)
   (declare (type (simple-array (unsigned-byte 8) (*)) permutation)
            (optimize (speed 3) (safety 0) (debug 0) #+lispworks(fixnum-safety 0)))
   (let ((flips 0))
     (loop until (= (aref permutation 0) 1)
           do (flip permutation)
              (incf flips))
     (setf *fannkuch-max-flips* (max flips *fannkuch-max-flips*))
     flips))

(defun fanndance (fannkuch len &aux (first (first fannkuch)) (next (second fannkuch)))
   (declare (type (simple-array (unsigned-byte 8) (*)) first next)
            (type fixnum len)
            (optimize (speed 3) (safety 0) (debug 0) #+lispworks(fixnum-safety 0)))
   (copy-fannkuch first next 0 (length first))
   (if (= len 1)
       (progn (count-flips next) (print-fannkuch first))
     (progn
       (dotimes (i (1- len))
         (fanndance (cdr fannkuch) (1- len))
         (permute-fannkuch next len))
       (fanndance (cdr fannkuch) (1- len)))))

(defun fannkuch (n)
   (let ((*fannkuch-prints* 0) (*fannkuch-max-flips* 0)
         (fannkuch (loop repeat (1+ n) collect (make-fannkuch n))))
     (init-fannkuch (car fannkuch) n)
     (format t "~A"
             (with-output-to-string (*standard-output*)
               (fanndance fannkuch n)))
     (format t "Pfannkuchen(~D) = ~D" n *fannkuch-max-flips*)))

#+ignore(defun main ()
	(fannkuch (parse-integer (second *posix-argv*))))

CL-USER 49 > (time (fannkuch 10))
Timing the evaluation of (FANNKUCH 10)
12345678910
21345678910
23145678910
32145678910
31245678910
13245678910
23415678910
32415678910
34215678910
43215678910
42315678910
24315678910
34125678910
43125678910
41325678910
14325678910
13425678910
31425678910
41235678910
14235678910
12435678910
21435678910
24135678910
42135678910
23451678910
32451678910
34251678910
43251678910
42351678910
24351678910
Pfannkuchen(10) = 38
user time    =      2.874
system time  =      0.000
Elapsed time =   0:00:03
Allocation   = 3792 bytes standard / 4554 bytes conses
0 Page faults
Calls to %EVAL    34
NIL

CL-USER 50 >
From: Henry Bigelow
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159717392.032041.172730@k70g2000cwa.googlegroups.com>
Wade Humeniuk wrote:
> Speaking of which.  By modifying the original somewhat (I do not have SBCL)
> I do not have the patience to code it like C, so....
>

wow, thanks wade!

i'm going to test this out and compare it to the original too.  so far
i've just downloaded slime and sbcl and am stumbling my way around it
all.

what sort of things do you use lisp for, by the way?  and do you use
other languages too?

cheers,

henry
From: Wade Humeniuk
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <n7STg.35430$Lb5.22648@edtnps89>
Henry Bigelow wrote:
> Wade Humeniuk wrote:
>> Speaking of which.  By modifying the original somewhat (I do not have SBCL)
>> I do not have the patience to code it like C, so....
>>
> 
> wow, thanks wade!

You're welcome.  The fastest version I have tried (so far) of fannkuch
is the algorithm  I translated (yesterday) from the GCC entry (most
others use the same approach).

(defun fannkuch (n)
   (declare (optimize (speed 3) (safety 0) (debug 0) #+lispworks(fixnum-safety 0))
	   (type (integer 0 128) n))
   (if (< n 1) (return-from fannkuch 0))
   (let ((perm (make-array n :element-type '(integer 0 128)))
         (perml (make-array n :element-type '(integer 0 128)))
         (count (make-array n :element-type '(integer 0 128)))
         (flips 0) (flipsmax 0) (r n) (nl (1- n)) (perm0 0)
         (check 0)
         (k 0) (i 0))
     (declare (type (simple-array (integer 0 128) (*)) perm perml count)
              (type (integer 0 256) flips flipsmax nl perm0 check k i)
	     (type (integer 0 128) r))

     (loop for i from 0 below n do (setf (aref perml i) i))

     (loop
      (when (< check 30)
        (loop for i from 0 below n do (princ (1+ (aref perml i))))
        (terpri)
        (incf check))

      (loop while (> r 1) do
            (setf (aref count (1- r)) r)
            (decf r))

      (unless (or (= (aref perml 0) 0) (= (aref perml nl) nl))
        (setf flips 0)
        (loop for i fixnum from 0 below n do (setf (aref perm i) (aref perml i)))
        (setf k (aref perml 0))
        (loop while (/= k 0) do
              (loop for j fixnum downfrom (1- k)
                    for i fixnum from 1
                    while (< i j) do (rotatef (aref perm i) (aref perm j)))
              (incf flips)
              (rotatef k (aref perm k)))
        (when (< flipsmax flips)
          (setf flipsmax flips)))

      (loop do
       (when (= r n) (return-from fannkuch flipsmax))
       (setf i 0 perm0 (aref perml 0))
       (loop while (< i r) do
             (setf k (1+ i)
                   (aref perml i) (aref perml k)
                   i k))
       (setf (aref perml r) perm0)
       (when (> (decf (aref count r)) 0) (loop-finish))
       (incf r)))))

I finally loaded SBCL 0.9.17 onto one of my FreeBSD machines.
On the lowly PIIIx2 500MHz (time (fannkuch 10))

SBCL 0.9.17        3.01s
LWL 4.3.7          4.75s
gcc                1.80s
(from the Shootout Site gcc -O3 -fomit-frame-pointer -march=pentium3)

Scaling that to the results on the shootout site puts gcc=0.47,
LW=1.24 and SBCL=0.79.

Why the shootout site would have removed the previous faster Lisp
version is beyond me.

Wade


> 
> i'm going to test this out and compare it to the original too.  so far
> i've just downloaded slime and sbcl and am stumbling my way around it
> all.
> 
> what sort of things do you use lisp for, by the way?  and do you use
> other languages too?
> 

Not anything numerically intensive.  Currently at work I use it to write
testers/emulators. At home, my first CL program written and delivered as
a complete app was... (I did it to learn CL).  If you want to learn CL
I would suggest you try something non-trivial like that.  Nothing like
getting your feet wet and to to see how CL makes development so much easier.

http://www.download.com/The-Runner-s-Log/3000-2136_4-6893549.html?tag=lst-0-1

Wade
From: Isaac Gouy
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159820991.730908.181370@h48g2000cwc.googlegroups.com>
Wade Humeniuk wrote:
> Henry Bigelow wrote:
> > Wade Humeniuk wrote:
> >> Speaking of which.  By modifying the original somewhat (I do not have SBCL)
> >> I do not have the patience to code it like C, so....
> >>
> >
> > wow, thanks wade!
>
> You're welcome.  The fastest version I have tried (so far) of fannkuch
> is the algorithm  I translated (yesterday) from the GCC entry (most
> others use the same approach).
>
> (defun fannkuch (n)
>    (declare (optimize (speed 3) (safety 0) (debug 0) #+lispworks(fixnum-safety 0))
> 	   (type (integer 0 128) n))
>    (if (< n 1) (return-from fannkuch 0))
>    (let ((perm (make-array n :element-type '(integer 0 128)))
>          (perml (make-array n :element-type '(integer 0 128)))
>          (count (make-array n :element-type '(integer 0 128)))
>          (flips 0) (flipsmax 0) (r n) (nl (1- n)) (perm0 0)
>          (check 0)
>          (k 0) (i 0))
>      (declare (type (simple-array (integer 0 128) (*)) perm perml count)
>               (type (integer 0 256) flips flipsmax nl perm0 check k i)
> 	     (type (integer 0 128) r))
>
>      (loop for i from 0 below n do (setf (aref perml i) i))
>
>      (loop
>       (when (< check 30)
>         (loop for i from 0 below n do (princ (1+ (aref perml i))))
>         (terpri)
>         (incf check))
>
>       (loop while (> r 1) do
>             (setf (aref count (1- r)) r)
>             (decf r))
>
>       (unless (or (= (aref perml 0) 0) (= (aref perml nl) nl))
>         (setf flips 0)
>         (loop for i fixnum from 0 below n do (setf (aref perm i) (aref perml i)))
>         (setf k (aref perml 0))
>         (loop while (/= k 0) do
>               (loop for j fixnum downfrom (1- k)
>                     for i fixnum from 1
>                     while (< i j) do (rotatef (aref perm i) (aref perm j)))
>               (incf flips)
>               (rotatef k (aref perm k)))
>         (when (< flipsmax flips)
>           (setf flipsmax flips)))
>
>       (loop do
>        (when (= r n) (return-from fannkuch flipsmax))
>        (setf i 0 perm0 (aref perml 0))
>        (loop while (< i r) do
>              (setf k (1+ i)
>                    (aref perml i) (aref perml k)
>                    i k))
>        (setf (aref perml r) perm0)
>        (when (> (decf (aref count r)) 0) (loop-finish))
>        (incf r)))))
>
> I finally loaded SBCL 0.9.17 onto one of my FreeBSD machines.
> On the lowly PIIIx2 500MHz (time (fannkuch 10))
>
> SBCL 0.9.17        3.01s
> LWL 4.3.7          4.75s
> gcc                1.80s
> (from the Shootout Site gcc -O3 -fomit-frame-pointer -march=pentium3)
>
> Scaling that to the results on the shootout site puts gcc=0.47,
> LW=1.24 and SBCL=0.79.
>
> Why the shootout site would have removed the previous faster Lisp
> version is beyond me.
>
> Wade

Incidentally, if you would like this program or any variant you might
author from Juho Snellman's program to appear on the computer language
shootout, you should follow the FAQ instructions
http://shootout.alioth.debian.org/gp4/faq.php#help


>
>
> >
> > i'm going to test this out and compare it to the original too.  so far
> > i've just downloaded slime and sbcl and am stumbling my way around it
> > all.
> >
> > what sort of things do you use lisp for, by the way?  and do you use
> > other languages too?
> >
>
> Not anything numerically intensive.  Currently at work I use it to write
> testers/emulators. At home, my first CL program written and delivered as
> a complete app was... (I did it to learn CL).  If you want to learn CL
> I would suggest you try something non-trivial like that.  Nothing like
> getting your feet wet and to to see how CL makes development so much easier.
>
> http://www.download.com/The-Runner-s-Log/3000-2136_4-6893549.html?tag=lst-0-1
> 
> Wade
From: Wade Humeniuk
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <xviUg.50412$cz3.43232@edtnps82>
Isaac Gouy wrote:
> Wade Humeniuk wrote:
>> Henry Bigelow wrote:
>>> Wade Humeniuk wrote:
>>>> Speaking of which.  By modifying the original somewhat (I do not have SBCL)
>>>> I do not have the patience to code it like C, so....
>>>>
>>> wow, thanks wade!
>> You're welcome.  The fastest version I have tried (so far) of fannkuch
>> is the algorithm  I translated (yesterday) from the GCC entry (most
>> others use the same approach).
>>
>> (defun fannkuch (n)
>>    (declare (optimize (speed 3) (safety 0) (debug 0) #+lispworks(fixnum-safety 0))
>> 	   (type (integer 0 128) n))
>>    (if (< n 1) (return-from fannkuch 0))
>>    (let ((perm (make-array n :element-type '(integer 0 128)))
>>          (perml (make-array n :element-type '(integer 0 128)))
>>          (count (make-array n :element-type '(integer 0 128)))
>>          (flips 0) (flipsmax 0) (r n) (nl (1- n)) (perm0 0)
>>          (check 0)
>>          (k 0) (i 0))
>>      (declare (type (simple-array (integer 0 128) (*)) perm perml count)
>>               (type (integer 0 256) flips flipsmax nl perm0 check k i)
>> 	     (type (integer 0 128) r))
>>
>>      (loop for i from 0 below n do (setf (aref perml i) i))
>>
>>      (loop
>>       (when (< check 30)
>>         (loop for i from 0 below n do (princ (1+ (aref perml i))))
>>         (terpri)
>>         (incf check))
>>
>>       (loop while (> r 1) do
>>             (setf (aref count (1- r)) r)
>>             (decf r))
>>
>>       (unless (or (= (aref perml 0) 0) (= (aref perml nl) nl))
>>         (setf flips 0)
>>         (loop for i fixnum from 0 below n do (setf (aref perm i) (aref perml i)))
>>         (setf k (aref perml 0))
>>         (loop while (/= k 0) do
>>               (loop for j fixnum downfrom (1- k)
>>                     for i fixnum from 1
>>                     while (< i j) do (rotatef (aref perm i) (aref perm j)))
>>               (incf flips)
>>               (rotatef k (aref perm k)))
>>         (when (< flipsmax flips)
>>           (setf flipsmax flips)))
>>
>>       (loop do
>>        (when (= r n) (return-from fannkuch flipsmax))
>>        (setf i 0 perm0 (aref perml 0))
>>        (loop while (< i r) do
>>              (setf k (1+ i)
>>                    (aref perml i) (aref perml k)
>>                    i k))
>>        (setf (aref perml r) perm0)
>>        (when (> (decf (aref count r)) 0) (loop-finish))
>>        (incf r)))))
>>
>> I finally loaded SBCL 0.9.17 onto one of my FreeBSD machines.
>> On the lowly PIIIx2 500MHz (time (fannkuch 10))
>>
>> SBCL 0.9.17        3.01s
>> LWL 4.3.7          4.75s
>> gcc                1.80s
>> (from the Shootout Site gcc -O3 -fomit-frame-pointer -march=pentium3)
>>
>> Scaling that to the results on the shootout site puts gcc=0.47,
>> LW=1.24 and SBCL=0.79.
>>
>> Why the shootout site would have removed the previous faster Lisp
>> version is beyond me.
>>
>> Wade
> 
> Incidentally, if you would like this program or any variant you might
> author from Juho Snellman's program to appear on the computer language
> shootout, you should follow the FAQ instructions
> http://shootout.alioth.debian.org/gp4/faq.php#help
> 

Sure,

I submitted a slightly modified version. It is only 30% slower than the gcc
version.

Wade

;;; The Computer Language Shootout
;;; http://shootout.alioth.debian.org/
;;; Contributed by Wade Humeniuk
;;;
;;; fannkuch for Lisp (SBCL)
;;;
;;; Compile: sbcl --load fannkuch.lisp --eval "(save-lisp-and-die \"fannkuch.core\" 
:purify t :toplevel (lambda () (main) (quit)))"
;;;
;;; Run: sbcl --noinform --core fannkuch.core %A

(defun write-permutation (perm)
   (loop for i across perm do
	(princ (1+ i)))
   (terpri))

(defun fannkuch (n)
   (declare (optimize (speed 3) (safety 0) (debug 0)) (fixnum n))
   (assert (< 1 n 128))
   (let ((perm (make-array n :element-type 'fixnum))
         (perm1 (make-array n :element-type 'fixnum))
         (count (make-array n :element-type 'fixnum))
         (flips 0) (flipsmax 0) (r n) (check 0) (k 0)
	(i 0) (perm0 0))

     (declare ((simple-array fixnum (*)) perm perm1 count)
              (fixnum flips flipsmax check k r i perm0))

     (dotimes (i n) (setf (aref perm1 i) i))

     (loop

      (when (< check 30)
        (write-permutation perm1)
        (incf check))

      (loop while (> r 1) do
            (setf (aref count (1- r)) r)
            (decf r))

      (unless (or (= (aref perm1 0) 0)
		 (= (aref perm1 (1- n)) (1- n)))
        (setf flips 0)
        (dotimes (i n) (setf (aref perm i) (aref perm1 i)))
        (setf k (aref perm1 0))
        (loop while (/= k 0) do
              (loop for j fixnum downfrom (1- k)
                    for i fixnum from 1
                    while (< i j) do (rotatef (aref perm i) (aref perm j)))
              (incf flips)
              (rotatef k (aref perm k)))
        (setf flipsmax (max flipsmax flips)))

      (loop do
	   (when (= r n)
	     (return-from fannkuch flipsmax))
	   (setf i 0 perm0 (aref perm1 0))
	   (loop while (< i r) do
		 (setf k (1+ i)
		       (aref perm1 i) (aref perm1 k)
		       i k))
	   (setf (aref perm1 r) perm0)
	   (when (> (decf (aref count r)) 0) (loop-finish))
	   (incf r)))))

(defun main ()
   (let ((n (parse-integer (second *posix-argv*))))
     (format t "Pfannkuchen(~D) = ~D~%" n (fannkuch n))))
From: Wade Humeniuk
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <YotUg.9358$N4.5262@clgrps12>
Thank you Isaac for getting the benchmark up last night.

Lisp SBCL #2 is now 5th on

http://shootout.alioth.debian.org/gp4/benchmark.php?test=fannkuch&lang=all

However there seems to be a problem getting it going on the
Debian AMD Sempron test.  Something is different about getting
the SBCL and CMUCL images exec'd (unexec'd) on those platforms.  I assume
it is Debian preventing execution.

http://shootout.alioth.debian.org/debian/benchmark.php?test=fannkuch&lang=sbcl&id=2
http://shootout.alioth.debian.org/debian/benchmark.php?test=fannkuch&lang=cmucl&id=2

Wade
From: Isaac Gouy
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159894274.636215.315940@i42g2000cwa.googlegroups.com>
Wade Humeniuk wrote:
> Thank you Isaac for getting the benchmark up last night.
>
> Lisp SBCL #2 is now 5th on
>
> http://shootout.alioth.debian.org/gp4/benchmark.php?test=fannkuch&lang=all
>
> However there seems to be a problem getting it going on the
> Debian AMD Sempron test.  Something is different about getting
> SBCL and CMUCL execed's (unexec'd) on those platforms.  I assume
> it is Debian preventing execution.
>
> http://shootout.alioth.debian.org/debian/benchmark.php?test=fannkuch&lang=sbcl&id=2
> http://shootout.alioth.debian.org/debian/benchmark.php?test=fannkuch&lang=cmucl&id=2
>
> Wade

We ask that everything go through the tracker, so we can, um, keep
track of things... There's a bug tracker
http://alioth.debian.org/tracker/?atid=411002&group_id=30402&func=browse
From: Juho Snellman
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <slrnehs83h.cl4.jsnell@sbz-30.cs.Helsinki.FI>
Henry Bigelow <·········@gmail.com> wrote:
> juho, thanks for your response.  i took a look at the benchmarks, and
> saw that you'd modified several of them.  is this your comment, by the
> way, from the spectral-norm benchmark? :
>
> Note that sbcl is at least 10 times faster than either clisp or gcl
> ;; on this program, running with an argument of 500.  It would be nice
> ;; to know why the others are so slow.
>
> if so, did you ever figure out the cause of the discrepancy?

No, that's not my comment. But different implementations are naturally
going to have different performance characteristics. As an obvious
example, clisp is a bytecode interpreter. Why would someone expect it
to perform as well as a native code compiler on computationally heavy
code?

Likewise the compiler technology that SBCL uses is completely
different from that of GCL. It'd be more surprising if they actually
did perform equally well on some program.

> i don't
> know whether it's true of c or c++ that different compilers can give
> drastically different code speeds.

It's true; there's just less room in C for a clever compiler to make a
difference over a mediocre one than there is in Lisp.

For example a few years ago Sun released a new version of their C
compiler which resulted in a 10x speedup on one of the SpecFP 2000
programs (179.art). It did that by statically detecting a memory
access pattern that was unfriendly to caches, and rearranging some
loops in the code to make the access pattern more efficient while
preserving the same semantics.
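A hypothetical sketch of that kind of loop rearrangement (not the actual 179.art code, just the general idea; shown in Python for brevity, where lists only illustrate the access order — in C the column-order version strides across memory and misses cache on nearly every access):

```python
# Hypothetical illustration of cache-oriented loop interchange.
N = 200
matrix = [[i * N + j for j in range(N)] for i in range(N)]

# Cache-unfriendly order: the inner loop walks down a column, touching
# a different row (a distant chunk of memory) on every step.
def sum_column_major(m):
    total = 0
    for j in range(len(m[0])):      # for each column
        for i in range(len(m)):     # walk down it, row by row
            total += m[i][j]
    return total

# Interchanged loops: the inner loop walks along a row, so in a C-style
# layout the accesses are sequential in memory.  Same elements visited,
# same sum, same semantics.
def sum_row_major(m):
    total = 0
    for i in range(len(m)):
        for j in range(len(m[0])):
            total += m[i][j]
    return total

assert sum_column_major(matrix) == sum_row_major(matrix)
```

The point of the transformation is exactly what Juho describes: the result is provably unchanged, only the memory access pattern differs.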

> it looks like you have a lot of experience.  so, benchmark politics
> aside, do you find lisp to be fast enough to do heavy-duty computing,
> (in my case, a bayesian network with hundreds of nodes and lots of
> floating point addition, but also written in a very extensible,
> abstract style)

Many people seem to find CMUCL and SBCL fast enough for real-world
numerical computing.

> p.s.  what would you say to changing the shootout such that multiple
> programs can be submitted, and the results shown based on whatever
> selection criteria wanted?

Sounds like a horrible idea. You want *more* crappy programs for
people to whine about? :-)

-- 
Juho Snellman
From: Henry Bigelow
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159719200.942163.85240@i42g2000cwa.googlegroups.com>
Juho Snellman wrote:
> Henry Bigelow <·········@gmail.com> wrote:
> > juho, thanks for your response.  i took a look at the benchmarks, and
> > saw that you'd modified several of them.  is this your comment, by the
> > way, from the spectral-norm benchmark? :
> >
> > Note that sbcl is at least 10 times faster than either clisp or gcl
> > ;; on this program, running with an argument of 500.  It would be nice
> > ;; to know why the others are so slow.
> >
> > if so, did you ever figure out the cause of the discrepancy?
>
> No, that's not my comment. But different implementations are naturally
> going to have different performance characteristics.

> As an obvious
> example, clisp is a bytecode interpreter. Why would someone expect it
> to perform as well as a native code compiler on computationally heavy
> code?

right.  i actually assumed we're talking all native, when comparing
speeds.  i didn't know that clisp doesn't generate native code!  so
then i'm not surprised.  but, did they mean sbcl native is 10x faster
than gcl native, or than gcl bytecode?


>
> Likewise the compiler technology that SBCL uses is completely
> different from that of GCL. It'd be more surprising if they actually
> did perform equally well on some program.



>
> > i don't
> > know whether it's true of c or c++ that different compilers can give
> > drastically different code speeds.
>
> It's true, there's just less room for a clever compiler to make a
> difference over a mediocre one than in Lisp.

that's interesting.  it makes intuitive sense that this is the case,
but i'd never thought about it.  so what's your favorite compiler?
more broadly, do you think theoretically, either gcl or sbcl or some
other compiler has approached the maximum speed possible, or is there a
lot more room for growth?


>
> For example a few years ago Sun released a new version of their C
> compiler which resulted in a 10x speedup on one of the SpecFP 2000
> programs (179.art). It did that by statically detecting a memory
> access pattern that was unfriendly to caches, and rearranging some
> loops in the code to make the access pattern more efficient while
> preserving the same semantics.

wow!  has anything like that happened with lisp compilers recently?  or
do you think anything like that is likely to happen?

>
> > it looks like you have a lot of experience.  so, benchmark politics
> > aside, do you find lisp to be fast enough to do heavy-duty computing,
> > (in my case, a bayesian network with hundreds of nodes and lots of
> > floating point addition, but also written in a very extensible,
> > abstract style)
>
> Many people seem to find CMUCL and SBCL fast enough for real-world
> numerical computing.
>
> > p.s.  what would you say to changing the shootout such that multiple
> > programs can be submitted, and the results shown based on whatever
> > selection criteria wanted?
>
> Sounds like a horrible idea. You want *more* crappy programs for
> people to whine about? :-)

yes!  :)

> 
> -- 
> Juho Snellman
From: Isaac Gouy
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159723050.798022.243860@e3g2000cwe.googlegroups.com>
Henry Bigelow wrote:
> Juho Snellman wrote:
-snip-

> > > p.s.  what would you say to changing the shootout such that multiple
> > > programs can be submitted, and the results shown based on whatever
> > > selection criteria wanted?
> >
> > Sounds like a horrible idea. You want *more* crappy programs for
> > people to whine about? :-)
>
> yes!  :)

Multiple programs can be submitted, and the results for all those
programs are shown. Occasionally we go through and throw-out slower
programs (unless they seem interesting).

And then there's "FAQ: Who's working on similar projects?"
http://shootout.alioth.debian.org/gp4/faq.php#similar
From: Henry Bigelow
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159742821.162851.303070@m7g2000cwm.googlegroups.com>
Isaac Gouy wrote:
> Henry Bigelow wrote:
> > Juho Snellman wrote:
> -snip-
>
> > > > p.s.  what would you say to changing the shootout such that multiple
> > > > programs can be submitted, and the results shown based on whatever
> > > > selection criteria wanted?
> > >
> > > Sounds like a horrible idea. You want *more* crappy programs for
> > > people to whine about? :-)
> >
> > yes!  :)
>
> Multiple programs can be submitted, and the results for all those
> programs are shown. Occasionally we go through and throw-out slower
> programs (unless they seem interesting).

hi isaac,
  thanks for your reply.  and thanks especially for maintaining the
shootout.  it was a very enlightening day when i first came across it a
few years ago.

i think the territory of benchmarks and contests is unavoidably
contentious.  and in some ways this is a positive thing.  (although in
others, unfortunately, negative)

to mitigate the frustration of having a benchmark deleted, (even if it
becomes invalid due to a change in requirements), would you consider a
CVS tree, one for each benchmark/language combination?  or would that
be impractical?

the branches or leaves could be labeled by benchmark criteria, and
whether they satisfy it, and the running statistics, and possibly even
the version of the compiler that compiled it.  that way, it would be
plain to everyone who wants to know whether a given version was
producing the correct output, and in certain cases, what tweaks made
something run faster.  that would be interesting to me at least.

of course, it would have the drawback of accumulating 'dead' leaves,
maybe even spam.  depending on how prevalent it was, i suppose this
approach would become impractical.  i don't know.

also, i feel the concepts of a shootout and a shootin are orthogonal.  i'd
be interested to follow the development of both.

thanks in advance,

henry
From: Isaac Gouy
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159749480.174771.227410@m73g2000cwd.googlegroups.com>
Henry Bigelow wrote:
> Isaac Gouy wrote:
> > Henry Bigelow wrote:
> > > Juho Snellman wrote:
> > -snip-
> >
> > > > > p.s.  what would you say to changing the shootout such that multiple
> > > > > programs can be submitted, and the results shown based on whatever
> > > > > selection criteria wanted?
> > > >
> > > > Sounds like a horrible idea. You want *more* crappy programs for
> > > > people to whine about? :-)
> > >
> > > yes!  :)
> >
> > Multiple programs can be submitted, and the results for all those
> > programs are shown. Occasionally we go through and throw-out slower
> > programs (unless they seem interesting).
>
> hi isaac,
>   thanks for your reply.  and thanks especially for maintaining the
> shootout.  it was a very enlightening day when i first came across it a
> few years ago.
>
> i think the territory of benchmarks and contests is unavoidably
> contentious.  and in some ways this is a positive thing.  (although in
> others, unfortunately, negative)
>
> to mitigate the frustration of having a benchmark deleted, (even if it
> becomes invalid due to a change in requirements), would you consider a
> CVS tree, one for each benchmark/language combination?  or would that
> be impractical?
>
> the branches or leaves could be labeled by benchmark criteria, and
> whether they satisfy it, and the running statistics, and possibly even
> the version of the compiler that compiled it.  that way, it would be
> plain to everyone who wants to know whether a given version was
> producing the correct output, and in certain cases, what tweaks made
> something run faster.  that would be interesting to me at least.
>
> of course, it would have the drawback of accumulating 'dead' leaves,
> maybe even spam.  depending on how prevalent it was, i suppose this
> approach would become impractical.  i don't know.
>
> also, i feel the concept of a shootout and shootin are orthgonal.  i'd
> be interested to follow the development of both.
>
> thanks in advance,
>
> henry

It's not clear to me what problem your suggestion is supposed to solve.


I think the place to understand "what tweaks made something run faster"
is within a particular language community like this:
http://www.haskell.org/hawiki/ShootoutEntry

The old Doug Bagley benchmarks were mostly replaced by new benchmarks
during autumn/winter 2005 - I haven't been keeping track but I don't
think the benchmarks have been changed this year.
From: Henry Bigelow
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159751035.948532.208030@i42g2000cwa.googlegroups.com>
>
> It's not clear to me what problem your suggestion is supposed to solve.
>
the problem CVS would solve was mentioned by jon, juho and wade in their
descriptions of the "life cycle" of a benchmark entry:

jon:  "the shootout maintainers claim the program never existed." etc.

juho:

>5. The requirements for the benchmark are modified, and the optimized
>    Lisp implementation gets deleted. There's no sign of it ever
>   having existed.

and wade:

>Why the shootout site would have removed the previous faster Lisp
>version is beyond me.

if several people could all contribute their versions, there might not
be any bone of contention as to which implementation of a given
benchmark was or is best.  they'd all be up there with the results side
by side.

but, my hope is not just to solve a 'problem' with the shootout, but to
make it even more useful.  it would be interesting to see what the
running time difference would be between an imperatively written lisp
fannkuch and a functional version, for example.

saying this, i realize it might be a lot more work for the maintainers,
and possibly create problems.  but i took a look at the 'shootin' site,
and the author seems to promote the idea of a 'wiki but with a
community-rated account system'.  perhaps commit privileges for CVS
could be regulated similarly, and it might then distribute the workload
and the accountability.

does this answer your question, or was it something else?

thanks,

henry


>
> I think the place to understand "what tweaks made something run faster"
> is within a particular language community like this:
> http://www.haskell.org/hawiki/ShootoutEntry


>
> The old Doug Bagley benchmarks were mostly replaced by new benchmarks
> during autumn/winter 2005 - I haven't been keeping track but I don't
> think the benchmarks have been changed this year.
From: Isaac Gouy
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159757623.688035.291380@i3g2000cwc.googlegroups.com>
Henry Bigelow wrote:
> >
> > It's not clear to me what problem your suggestion is supposed to solve.
> >
> the problem CVS will solve was mentioned by jon, juho and wade in their
> description of the "life cycle" of a benchmark entry:
>
> jon:  "the shootout maintainers claim the program never existed." etc.

As I said, for this go read the shootout mailing-list archive.


> juho:
>
> >5. The requirements for the benchmark are modified, and the optimized
> >    Lisp implementation gets deleted. There's no sign of it ever
> >   having existed.
>
> and wade:
>
> >Why the shootout site would have removed the previous faster Lisp
> >version is beyond me.

The shootout site removed "the previous faster Lisp version" of
Fannkuch because it no longer had any measured time to display, it no
longer gave the correct answer, it was broken, it stayed down at the
bottom of the table showing 'Error' (I don't know how many months
passed before it was finally removed - maybe we waited until someone
contributed a working program).

(When Fannkuch was changed back in 2005, I went through the contributed
Fannkuch programs in the tracker and emailed the people who had
provided an email address to let them know that the spec had been
changed. Within a short time we received fixed programs for most of the
programming languages.)


> if several people could all contribute their versions, there might not
> be any bone of contention as to which implementation of a given
> benchmark was or is best.  they'd all be up there with the results side
> by side.
>
> but, my hope is not just to solve a 'problem' with the shootout, but to
> make it even more useful.  it would be interesting to see what the
> running time difference would be between a lisp fannkuch imperatively
> written, and lisp fannkuch functional version would be, for example.

I guess you haven't noticed the 2 Scala programs we show for fannkuch
http://shootout.alioth.debian.org/gp4/benchmark.php?test=fannkuch&lang=scala&id=2
http://shootout.alioth.debian.org/gp4/benchmark.php?test=fannkuch&lang=scala&id=0

> saying this, i realize it might be a lot more work for the maintainers,
> and possibly create problems.

You seem to be suggesting that we should show every program that has
ever been contributed, forever. Do you understand that we continually
update the language implementations and re-measure the programs?

>  but i took a look at the 'shootin' site,
> and the author seems to promote the idea of a 'wiki but with a
> community-rated account system'.  perhaps commit privileges for CVS
> could be regulated similarly, and it might then distribute the workload
> and the accountability.

If you think "the shootin" has promise then help make something happen
with "the shootin".
From: Henry Bigelow
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159762042.608883.41140@m73g2000cwd.googlegroups.com>
Isaac Gouy wrote:
> Henry Bigelow wrote:
> > >
> > > It's not clear to me what problem your suggestion is supposed to solve.
> > >
> > the problem CVS will solve was mentioned by jon, juho and wade in their
> > description of the "life cycle" of a benchmark entry:
> >
> > jon:  "the shootout maintainers claim the program never existed." etc.
>
> As I said, for this go read the shootout mailing-list archive.
>
>
> > juho:
> >
> > >5. The requirements for the benchmark are modified, and the optimized
> > >    Lisp implementation gets deleted. There's no sign of it ever
> > >   having existed.
> >
> > and wade:
> >
> > >Why the shootout site would have removed the previous faster Lisp
> > >version is beyond me.
>
> The shootout site removed "the previous faster Lisp version" of
> Fannkuch because it no longer had any measured time to display, it no
> longer gave the correct answer, it was broken, it stayed down at the
> bottom of the table showing 'Error' (I don't know how many months
> passed before it was finally removed - maybe we waited until someone
> contributed a working program).
>
> (When Fannkuch was changed back in 2005, I went through the contributed
> Fannkuch programs in the tracker and emailed the people who had
> provided an email address to let them know that the spec had been
> changed. Within a short time we received fixed programs for most of the
> programming languages.)
>
>
> > if several people could all contribute their versions, there might not
> > be any bone of contention as to which implementation of a given
> > benchmark was or is best.  they'd all be up there with the results side
> > by side.
> >
> > but, my hope is not just to solve a 'problem' with the shootout, but to
> > make it even more useful.  it would be interesting to see what the
> > running time difference would be between a lisp fannkuch imperatively
> > written, and lisp fannkuch functional version would be, for example.
>
> I guess you haven't noticed the 2 Scala programs we show for fannkuch
> http://shootout.alioth.debian.org/gp4/benchmark.php?test=fannkuch&lang=scala&id=2
> http://shootout.alioth.debian.org/gp4/benchmark.php?test=fannkuch&lang=scala&id=0

i didn't notice that.  i'm just talking about eliminating the
controversy caused by deleting old submissions, even if they are
non-working.  the fact that they are non-working is important
information too.

>
> > saying this, i realize it might be a lot more work for the maintainers,
> > and possibly create problems.
>
> You seem to be suggesting that we should show every program that has
> ever been contributed, for ever. Do you understand that we continually
> update the language implementations and re-measure the programs?

i was suggesting keeping the history of edits for each program, but not
necessarily displaying the results for each.  and, not to remeasure the
entire history of all versions--that would obviously be impractical.

so, for each benchmark, a CVS history of edits.  and with each leaf of
the tree, whatever test results were performed, with whatever
language implementation, etc.  some versions would be retested with
newer compiler implementations, or against newer algorithm requirements
and assigned 'error', or whatever, just the way you normally do it.
the only difference would be that this information would be recorded,
and you wouldn't have to make a decision whether to delete it or not.

so, what i propose doesn't require any more computation than you
already do, just more diskspace.

i don't know, maybe there really isn't any way to remedy this
situation.  anyone have any other ideas?

henry
From: Isaac Gouy
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159765616.145635.276140@h48g2000cwc.googlegroups.com>
Henry Bigelow wrote:
> Isaac Gouy wrote:
> > Henry Bigelow wrote:
> > > >
> > > > It's not clear to me what problem your suggestion is supposed to solve.
> > > >
> > > the problem CVS will solve was mentioned by jon, juho and wade in their
> > > description of the "life cycle" of a benchmark entry:
> > >
> > > jon:  "the shootout maintainers claim the program never existed." etc.
> >
> > As I said, for this go read the shootout mailing-list archive.
> >
> >
> > > juho:
> > >
> > > >5. The requirements for the benchmark are modified, and the optimized
> > > >    Lisp implementation gets deleted. There's no sign of it ever
> > > >   having existed.
> > >
> > > and wade:
> > >
> > > >Why the shootout site would have removed the previous faster Lisp
> > > >version is beyond me.
> >
> > The shootout site removed "the previous faster Lisp version" of
> > Fannkuch because it no longer had any measured time to display, it no
> > longer gave the correct answer, it was broken, it stayed down at the
> > bottom of the table showing 'Error' (I don't know how many months
> > passed before it was finally removed - maybe we waited until someone
> > contributed a working program).
> >
> > (When Fannkuch was changed back in 2005, I went through the contributed
> > Fannkuch programs in the tracker and emailed the people who had
> > provided an email address to let them know that the spec had been
> > changed. Within a short time we received fixed programs for most of the
> > programming languages.)
> >
> >
> > > if several people could all contribute their versions, there might not
> > > be any bone of contention as to which implementation of a given
> > > benchmark was or is best.  they'd all be up there with the results side
> > > by side.
> > >
> > > but, my hope is not just to solve a 'problem' with the shootout, but to
> > > make it even more useful.  it would be interesting to see what the
> > > running time difference would be between an imperatively written
> > > lisp fannkuch and a functional version, for example.
> >
> > I guess you haven't noticed the 2 Scala programs we show for fannkuch
> > http://shootout.alioth.debian.org/gp4/benchmark.php?test=fannkuch&lang=scala&id=2
> > http://shootout.alioth.debian.org/gp4/benchmark.php?test=fannkuch&lang=scala&id=0
>
> i didn't notice that.  i'm just talking about eliminating the
> controversy caused by deleting old submissions, even if they are
> non-working.  the fact that they are non-working is important
> information too.

Why do you think broken programs are important information for the
computer language shootout? We deal in working programs - that's why we
provide example output files for people to diff against.

Did you notice that we accept programs contributed as file attachments
to our tracking system? We don't delete from the tracking system.


>
> >
> > > saying this, i realize it might be a lot more work for the maintainers,
> > > and possibly create problems.
> >
> > You seem to be suggesting that we should show every program that has
> > ever been contributed, for ever. Do you understand that we continually
> > update the language implementations and re-measure the programs?
>
> i was suggesting keeping the history of edits for each program, but not
> necessarily displaying the results for each.  and, not to remeasure the
> entire history of all versions--that would obviously be impractical.
>
> so, for each benchmark, a CVS history of edits.  and with each leaf of
> the version tree, whatever test results were performed, with whatever
> language implementation etc.  some versions would be retested with
> newer compiler implementations, or against newer algorithm requirements
> and assigned 'error', or whatever, just the way you normally do it.
> the only difference would be that this information would be recorded,
> and you wouldn't have to make a decision whether to delete it or not.

I think you are starting to define a project - go do! ;-)

>
> so, what i propose doesn't require any more computation than you
> already do, just more diskspace.
>
> i don't know, maybe there really isn't any way to remedy this
> situation.  anyone have any other ideas?
> 
> henry
From: Jon Harrop
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <452172c5$0$8730$ed2619ec@ptn-nntp-reader02.plus.net>
Isaac Gouy wrote:
>> saying this, i realize it might be a lot more work for the maintainers,
>> and possibly create problems.
> 
> You seem to be suggesting that we should show every program that has
> ever been contributed, for ever. Do you understand that we continually
> update the language implementations and re-measure the programs?

Why don't you reinstate the ray tracer benchmark? I still think it is the
best designed benchmark on the shootout (spectral-norm is my second
favourite).

PS: If your answer is going to be "because the different languages'
implementations use different algorithms" then I'll repeat "so do several
of your other benchmarks".

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Isaac Gouy
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159820684.718032.146400@c28g2000cwb.googlegroups.com>
Jon Harrop wrote:
> Isaac Gouy wrote:
> >> saying this, i realize it might be a lot more work for the maintainers,
> >> and possibly create problems.
> >
> > You seem to be suggesting that we should show every program that has
> > ever been contributed, for ever. Do you understand that we continually
> > update the language implementations and re-measure the programs?
>
> Why don't you reinstate the ray tracer benchmark? I still think it is the
> best designed benchmark on the shootout (spectral-norm is my second
> favourite).
>
> PS: If your answer is going to be "because the different languages'
> implementations use different algorithms" then I'll repeat "so do several
> of your other benchmarks".
>
> --
> Dr Jon D Harrop, Flying Frog Consultancy
> Objective CAML for Scientists
> http://www.ffconsultancy.com/products/ocaml_for_scientists

Let no one accuse you of modesty.
From: Jon Harrop
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <452170cc$0$8730$ed2619ec@ptn-nntp-reader02.plus.net>
Henry Bigelow wrote:
> it looks like you have a lot of experience.  so, benchmark politics
> aside, do you find lisp to be fast enough to do heavy-duty computing,
> (in my case, a bayesian network with hundreds of nodes and lots of
> floating point addition, but also written in a very extensible,
> abstract style)

You may be interested in my ray tracer benchmarks:

  http://www.ffconsultancy.com/free/ray_tracer/languages.html

On my computer, those benchmarks show Lisp to be roughly twice as slow and
twice as verbose as OCaml. However, the benchmark is quite specific. It
only tests data structures (trees) and floating point performance (ray
sphere). Also, timings vary considerably between architectures. Now that
Intel has a decent CPU, I'll be interested to see a Core Duo version of
this benchmark.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Henry Bigelow
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159840289.200717.60100@h48g2000cwc.googlegroups.com>
hi jon,

> You may be interested in my ray tracer benchmarks:
>
>   http://www.ffconsultancy.com/free/ray_tracer/languages.html

thank you!  very enlightening.  i read the analysis.  i have a few
questions about this excerpt

"However, the designers of the ML family of languages (including OCaml
and SML) deliberately avoided some of the functionality provided by
Lisp in order to facilitate static type checking and improve
performance."

i guess i misunderstood something.  does lisp not have any type
inference?  or does it have partial type inference if you explicitly
declare the types for some variables and not others?

"Specifically, Lisp provides macros to customise syntax and allows them
to be entwined with ordinary code, and provides run-time code
manipulation. SML provides neither macros nor run-time code
generation."


"OCaml provides camlp4 macros, a limited form that are separate from
the language, and the derived language MetaOCaml also provides run-time
code generation"

are camlp4 macros as powerful as lisp macros?

and, is the run-time code generation of MetaOCaml as powerful as lisp's
'compile' function?


in the bigger picture, do you foresee any advancements to either lisp
or ocaml that would improve either of them, maybe by sharing ideas?

thanks,

henry




>
> On my computer, those benchmarks show Lisp to be roughly twice as slow and
> twice as verbose as OCaml.

while i'm not surprised lisp is slower for this benchmark, i am
confused why it would be more verbose.


>However, the benchmark is quite specific. It
> only tests data structures (trees) and floating point performance (ray
> sphere). Also, timings vary considerably between architectures. Now that
> Intel has a decent CPU, I'll be interested to see a Core Duo version of
> this benchmark.
>
> --
> Dr Jon D Harrop, Flying Frog Consultancy
> Objective CAML for Scientists
> http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Rahul Jain
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <87k63h5m1m.fsf@nyct.net>
"Henry Bigelow" <·········@gmail.com> writes:

> hi jon,
>
>> You may be interested in my ray tracer benchmarks:
>>
>>   http://www.ffconsultancy.com/free/ray_tracer/languages.html
>
> thank you!  very enlightening.  i read the analysis.  i have a few
> questions about this excerpt
>
> "However, the designers of the ML family of languages (including OCaml
> and SML) deliberately avoided some of the functionality provided by
> Lisp in order to facilitate static type checking and improve
> performance."
>
> i guess i misunderstood something.  does lisp not have any type
> inference?  or does it have partial type inference if you explicitly
> declare the types for some variables and not others?

What do you mean by "Lisp"? The standard does not require type
inference, as it's merely a way to optimize code by eliding type
dispatches at run time and allowing the inlining of the implementation
of the operator for that specific type.

Lisp has a much richer type system than any of the MLs. CMUCL, for
example, can do type inference to determine whether the arcsin of a
value rounded off to the nearest integer will fit in a machine word or
whether it might need to have a bignum (MLs don't have bignums as part
of the language).

For example, if we compile the following:
(defun test (x y)
  (declare (type (integer 5 100) x)
           (type (integer 1 5) y))
  (expt x y))

CMUCL tells us (among other things):
Function: #<Function TEST {582F61C9}>
Function arguments:
  (x y)
Its defined argument types are:
  ((INTEGER 5 100) (INTEGER 1 5))
Its result type is:
  (INTEGER 5 10000000000)

Therefore, it has figured out that an integer from 5 to 100 raised to
the power of an integer from 1 to 5 gives us an integer from 5 to
10000000000. It can then use this information to determine whether to
return a boxed (with type information if the result can overflow a
register) or unboxed (raw value in a register) value.


As an example of this, we can compile the following:
(defun test (x y)
  (declare (optimize speed (safety 0))
           (type (integer 5 1000000) x)
           (type (integer 1 5000000) y))
  (* x y))

and the compiler tells us:
; Note: Forced to do GENERIC-* (cost 30).
;     Unable to do inline fixnum arithmetic (cost 4) because:
;     The result is a (INTEGER 5 5000000000000), not a FIXNUM.
;     Unable to do inline (signed-byte 32) arithmetic (cost 5) because:
;     The result is a (INTEGER 5 5000000000000), not a (SIGNED-BYTE 32).
;     etc.

Delete a few zeroes from each of the upper bounds and the note goes
away, as the * can be inlined. In the disassembly, we see:
      C9:       IMUL    EDX, EDI
instead of:
      7E:       CALL    #x100002D0           ; #x100002D0: GENERIC-*


This topic is VERY, VERY important to optimizing Lisp numeric code, so
be SURE to study it carefully.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Henry Bigelow
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159928560.760346.307390@i42g2000cwa.googlegroups.com>
Rahul Jain wrote:

> > "However, the designers of the ML family of languages (including OCaml
> > and SML) deliberately avoided some of the functionality provided by
> > Lisp in order to facilitate static type checking and improve
> > performance."
> >
> > i guess i misunderstood something.  does lisp not have any type
> > inference?  or does it have partial type inference if you explicitly
> > declare the types for some variables and not others?
>
> What do you mean by "Lisp"? The standard does not require type
> inference, as it's merely a way to optimize code by eliding type
> dispatches at run time and allowing the inlining of the implementation
> of the operator for that specific type.

i don't know what i mean by lisp. :)   or what the standard defines.
does the standard require that "lisp can be written in itself" as paul
graham says (whatever that means)?

and, is that why

somehow has a drawback that aggressive type inference can't be done,
with any compiler of lisp.

>
> Lisp has a much richer type system than any of the MLs. CMUCL, for
> example, can do type inference to determine whether the arcsin of a
> value rounded off to the nearest integer will fit in a machine word or
> whether it might need to have a bignum (MLs don't have bignums as part
> of the language).

yes, but it's been mentioned several times that, in order to get lisp
code to run fast, you have to "litter it with type declarations".  so,
it seems, there isn't much type inference going on.

also, i got that idea from this wikipedia page:

http://en.wikipedia.org/wiki/Type_system

about halfway down it mentions this:

Static typing usually results in compiled code that executes more
quickly. When the compiler knows the exact data types that are in use,
it can produce machine code that just does the right thing. Further,
compilers in statically typed languages can find shortcuts more easily.
Some dynamically-typed languages such as Common Lisp allow optional
type declarations for optimization for this very reason. Static typing
makes this pervasive. See optimization.


>
> For example, if we compile the following:
> (defun test (x y)
>   (declare (type (integer 5 100) x)
>            (type (integer 1 5) y))
>   (expt x y))
>
> CMUCL tells us (among other things):
> Function: #<Function TEST {582F61C9}>
> Function arguments:
>   (x y)
> Its defined argument types are:
>   ((INTEGER 5 100) (INTEGER 1 5))
> Its result type is:
>   (INTEGER 5 10000000000)
>
> Therefore, it has figured out that an integer from 5 to 100 raised to
> the power of an integer from 1 to 5 gives us an integer from 5 to
> 10000000000. It can then use this information to determine whether to
> return a boxed (with type information if the result can overflow a
> register) or unboxed (raw value in a register) value.
>

that is neat.  i'd never heard of that before.

>
> As an example of this, we can compile the following:
> (defun test (x y)
>   (declare (optimize speed (safety 0))
>            (type (integer 5 1000000) x)
>            (type (integer 1 5000000) y))
>   (* x y))
>
> and the compiler tells us:
> ; Note: Forced to do GENERIC-* (cost 30).
> ;     Unable to do inline fixnum arithmetic (cost 4) because:
> ;     The result is a (INTEGER 5 5000000000000), not a FIXNUM.
> ;     Unable to do inline (signed-byte 32) arithmetic (cost 5) because:
> ;     The result is a (INTEGER 5 5000000000000), not a (SIGNED-BYTE 32).
> ;     etc.
>
> Delete a few zeroes from each of the upper bounds and the note goes
> away, as the * can be inlined. In the disassembly, we see:
>       C9:       IMUL    EDX, EDI
> instead of:
>       7E:       CALL    #x100002D0           ; #x100002D0: GENERIC-*
>
>
> This topic is VERY, VERY important to optimizing Lisp numeric code, so
> be SURE to study it carefully.

i see.  that's pretty cool.  so, what is your programming style?  do
you first prototype something without any type declarations, and then
add in a few in the compute-intensive code?

thanks,

henry



>
> --
> Rahul Jain
> ·····@nyct.net
> Professional Software Developer, Amateur Quantum Mechanicist
From: Rahul Jain
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <87mz8bopvy.fsf@nyct.net>
"Henry Bigelow" <·········@gmail.com> writes:

> does the standard require that "lisp can be written in itself" as paul
> graham says (whatever that means)?

It doesn't explicitly require that, but it defines a Turing-complete
language, so yeah... you can write a lisp compiler and interpreter in
lisp, as you can in C or Perl. The difference, I think, that PG is
getting at is that the most basic lisp data is code itself. Thinking
about code in an abstract manner is just second nature when you're
programming in lisp.

> and, is that why
>
> somehow has a drawback that aggressive type inference can't be done,
> with any compiler of lisp.

I don't see how those two statements could possibly correlate, and later
I demonstrate more aggressive type inference being done than any other
language I've heard of.

>> Lisp has a much richer type system than any of the MLs. CMUCL, for
>> example, can do type inference to determine whether the arcsin of a
>> value rounded off to the nearest integer will fit in a machine word or
>> whether it might need to have a bignum (MLs don't have bignums as part
>> of the language).
>
> yes, but it's been mentioned several times that, in order to get lisp
> code to run fast, you have to "litter it with type declarations".  so,
> it seems, there isn't much type inference going on.

No, you "litter" it where needed. It varies from compiler to compiler.
C and Java code is so littered with type declarations, the highway
police would arrest the programmer if he hadn't already run into a
lamppost because his code crashed.

> also, i got that idea from this wikipedia page:
>
> http://en.wikipedia.org/wiki/Type_system
>
> about halfway down it mentions this:
>
> Static typing usually results in compiled code that executes more
> quickly. When the compiler knows the exact data types that are in use,
> it can produce machine code that just does the right thing. Further,
> compilers in statically typed languages can find shortcuts more easily.
> Some dynamically-typed languages such as Common Lisp allow optional
> type declarations for optimization for this very reason. Static typing
> makes this pervasive. See optimization.

Haha! I just modified that paragraph last month to fix the way it
described "Lisp". But yes, Common Lisp can be as static or dynamic as
you want it to be.

> i see.  that's pretty cool.  so, what is your programming style?  do
> you first prototype something without any type declarations, and then
> add in a few in the compute-intensive code?

That's the third rule of optimization applied to type declarations. So
yes. :)

Remember not to conflate type declarations with type checks. (Although
code declared to be safe will always check types when it calls standard
lisp functions.)

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Jon Harrop
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <45239ac3$0$24506$ed2e19e4@ptn-nntp-reader04.plus.net>
Rahul Jain wrote:
> Lisp has a much richer type system than any of the MLs.

I'm not sure what you mean by that. ML does lots of things that Lisp does
not and vice-versa.

> CMUCL, for 
> example, can do type inference to determine whether the arcsin of a
> value rounded off to the nearest integer will fit in a machine word or
> whether it might need to have a bignum

ML doesn't have a numeric tower. You have to explicitly state that you want
an int, float, bignum and so on.

> (MLs don't have bignums as part of the language).

OCaml, SML and F# all have bignums as part of the language.

> This topic is VERY, VERY important to optimizing Lisp numeric code, so
> be SURE to study it carefully.

In contrast, succinct ML is typically fast ML.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Rahul Jain
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <87irizop5m.fsf@nyct.net>
Jon Harrop <···@ffconsultancy.com> writes:

> Rahul Jain wrote:
>> Lisp has a much richer type system than any of the MLs.
>
> I'm not sure what you mean by that. ML does lots of things that Lisp does
> not and vice-versa.

Hmm. I'm not sure about that. All I can think of is inference of
parametrized array types (non-upgraded) across functions. Lisp just
doesn't _require_ those things to be done. If someone cared, they could
augment a compiler to do it.

What I'm talking about, to clarify, is for an array declared to have an
element type of (integer 1 100), a function that returns an element of
it should have an inferred return type of (integer 1 100).

For:
(defun test (x)
  (declare (type (array (integer 1 100) (*)) x))
  (aref x 1))
CMUCL infers a return type of:
Its result type is:
  (UNSIGNED-BYTE 8)

So it has upgraded the element type to what it's actually going to store
in memory. Dunno why it doesn't just propagate the element type
definition directly.

>> CMUCL, for 
>> example, can do type inference to determine whether the arcsin of a
>> value rounded off to the nearest integer will fit in a machine word or
>> whether it might need to have a bignum
>
> ML doesn't have a numeric tower. You have to explicitly state that you want
> an int, float, bignum and so on.
>
>> (MLs don't have bignums as part of the language).
>
> OCaml, SML and F# all have bignums as part of the language.

Interesting. Some self-proclaimed ocaml genius claimed that it didn't
have bignums. Maybe he meant that you need to explicitly... um... 
declare (?!?) a number to be a bignum? That would fit in with the lack
of a numeric tower.

>> This topic is VERY, VERY important to optimizing Lisp numeric code, so
>> be SURE to study it carefully.
>
> In contrast, succinct ML is typically fast ML.

Yeah, but you don't know if it's going to be correct unless you manually
perform the kind of type inference that CMUCL does. :)

On a more serious note, you still end up with the problem that your code
is either slow but tolerant of different architectures or it's fast on
some architectures and incorrect on others. Sure, you could maintain two
parallel versions of the codebase, but don't programmers like
abstraction? (yeah, I know, the mainstream abhors abstraction. They'd be
happy writing machine code if they could still get paid enough for their
time.)

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Javier
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1160006928.432227.289650@m73g2000cwd.googlegroups.com>
Rahul Jain wrote:

> Hmm. I'm not sure about that. All I can think of is inference of
> parametrized array types (non-upgraded) across functions. Lisp just
> doesn't _require_ those things to be done. If someone cared, they could
> augment a compiler to do it.
> What I'm talking about, to clarify, is for an array declared to have an
> element type of (integer 1 100), a function that returns an element of
> it should have an inferred return type of (integer 1 100).
>
> For:
> (defun test (x)
>   (declare (type (array (integer 1 100) (*)) x))
>   (aref x 1))
> CMUCL inferrs a return type of:
> Its result type is:
>   (UNSIGNED-BYTE 8)
>
> So it has upgraded the element type to what it's actually going to store
> in memory. Dunno why it doesn't just propagate the element type
> defintion directly.

One important thing I miss in Lisp compared to ML is the ability to
check the type of something at compile time. For example, you can
define a list type in the style of Lisp:

type 'a lisp_list = Nil | List of 'a list

so now every variable using this type can be either Nil or a list
itself. Or even binary trees:

type 'a btree = Empty | Node of 'a * 'a btree * 'a btree

I found this feature very useful.

> Interesting. Some self-proclaimed ocaml genius claimed that it didn't
> have bignums.

It does:

http://caml.inria.fr/pub/docs/manual-ocaml/libref/Big_int.html

If you don't like those function names, you can rename the operators:

let ( +$ ) = add_big_int

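As a small sketch of how the renamed operator might be used (the `*$` alias and the particular values are my own illustration, not from the post):

```ocaml
(* Arbitrary-precision integers via OCaml's Big_int module
   (in the "nums" library shipped with the OCaml distribution). *)
open Big_int

let ( +$ ) = add_big_int
let ( *$ ) = mult_big_int

let a = big_int_of_int 1_000_000

(* a^3 = 10^18 would overflow a 32-bit native int, but Big_int
   handles it transparently. *)
let b = a *$ a *$ a

let () = print_endline (string_of_big_int (b +$ a))
```

The contrast with Lisp's numeric tower is that the conversion into and out of big_int is explicit: the ordinary + and * still work only on native ints.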

> Maybe he meant that you need to explicitly... um...
> declare (?!?) a number to be a bignum? That would fit in with the lack
> of a numeric tower.

I don't understand what you're talking about, but you can always
convert any int or string to big_int and vice versa.

> >> This topic is VERY, VERY important to optimizing Lisp numeric code, so
> >> be SURE to study it carefully.
> >
> > In contrast, succinct ML is typically fast ML.
>
> Yeah, but you don't know if it's going to be correct unless you manually
> perform the kind of type inference that CMUCL does. :)

I think that if you are a good programmer, you must know the type of
anything you are writing, even if it is written in Lisp. And ML does
allow you to use "generic" or arbitrary types when you need them. Just
take the example of the binary tree above.

> On a more serious note, you still end up with the problem that your code
> is either slow but tolerant of different architectures or it's fast on
> some architectures and incorrect on others.
> Sure, you could maintain two
> parallel versions of the codebase, but don't programmers like
> abstraction? (yeah, I know, the mainstream abhors abstraction. They'd be
> happy writing machine code if they could still get paid enough for their
> time.)

Static inference always produces faster code compared with dynamic
inference, for an equivalent code base. But you usually need to declare
things in dynamic languages to help the inferer, while in statically
inferred ones you don't, so you save a lot of time on both writing and
profiling.
You don't need to maintain parallel versions of the codebase, or if you
do, you would also have to in Lisp (this is not language specific, but
depends on the kind of algorithm/implementation you choose).
From: Joe Marshall
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1160066460.212734.270010@e3g2000cwe.googlegroups.com>
Javier wrote:
>
> Static inference always produces faster code compared with dynamic
> inference, for an equivalent code base.

http://www.hpl.hp.com/techreports/1999/HPL-1999-77.pdf

``Contrary to intuition, we demonstrate that it is possible to
use a piece of software to improve the performance of a
native, statically optimized program binary, while it is
executing. Dynamo not only speeds up real application
programs, its performance improvement is often quite
significant. For example, the performance of many +O2
optimized SPECint95 binaries running under Dynamo is
comparable to the performance of their +O4 optimized
version running without Dynamo.''
From: Javier
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1160074756.560126.184620@i42g2000cwa.googlegroups.com>
Joe Marshall wrote:

> Javier wrote:
> >
> > Static inference always produces faster code compared with dynamic
> > inference, for an equivalent code base.
>
> http://www.hpl.hp.com/techreports/1999/HPL-1999-77.pdf
>
> ``Contrary to intuition, we demonstrate that it is possible to
> use a piece of software to improve the performance of a
> native, statically optimized program binary, while it is
> executing. Dynamo not only speeds up real application
> programs, its performance improvement is often quite
> significant. For example, the performance of many +O2
> optimized SPECint95 binaries running under Dynamo is
> comparable to the performance of their +O4 optimized
> version running without Dynamo.''

In practice, you can almost reach the speed of a statically compiled
program, but then the compiler must create multiple versions of the
functions to accommodate the various types it may use. This is the case
with Psyco (for Python).
SBCL is not, as far as I know, able to do so, and you must declare
everything that cannot easily be inferred at compile time.
Even in the case that some implementation is able to do so, you have
got the penalty of the compiler compiling everything in real time
for every new variable type encountered, so while it is compiling,
you are wasting CPU cycles. And of course, you also waste CPU cache
hits, because the code is more fragmented and much bigger. And of
course, you waste more memory too.
So I still believe that dynamic inference always has less
performance for a similar codebase and compilers of similar quality.
From: Rahul Jain
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <87d596lbl0.fsf@nyct.net>
"Javier" <·······@gmail.com> writes:

> And of course, you also waste CPU cache hits, because the code is
> more fragmented and much bigger.

Not so. Dynamo dynamically defragments the code.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Pascal Costanza
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <4ol9ijFevujrU1@individual.net>
Javier wrote:

> So I still believe that dynamic inference always has less
> performance for a similar codebase and compilers of similar quality.

Perform a few benchmarks, and you'll be surprised.


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Rahul Jain
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <87y7rulcy0.fsf@nyct.net>
"Javier" <·······@gmail.com> writes:

> One important thing I miss from Lisp compared to ML, is for example the
> ability to test at compile time about the type of something. For
> example you can define a list type in the style of Lisp:
>
> type 'a lisp_list = Nil | List of 'a list
>
> so now every variable using this type can be either Nil or a list
> itself. Or even binary trees:
>
> type 'a btree = Empty | Node of 'a * 'a btree * 'a btree
>
> I found this feature very useful.

How can you not declare types in Lisp? I don't see how this has anything
to do with "testing" anything at compile time.

>> Maybe he meant that you need to explicitly... um...
>> declare (?!?) a number to be a bignum? That would fit in with the lack
>> of a numeric tower.
>
> I don't understand what you're talking about, but you can always
> convert any int or string to big_int and viceversa.

Yes, but how do _I_ know when I need to use bignums or not?

> I think that if you are a good programmer, you must know the type of
> anything you are writing, even if it is writen in Lisp. And ML does
> allow you to use "generic" or arbitray types when you need. Just take
> the example of the binary tree above.

But you may not know what that type _means_ because that type is not
defined to mean anything. A machine word is a different size on
different processors. I can't tell ad-hoc whether a value will fit into
"the" machine word of "the" processor it might run on at any time in the
future. On the other hand, Lisp allows me to say what size the values
will have and the compiler figures out whether it will necessarily fit
into a machine word or not and emits the appropriate machine code for
that inference.

> Static inference always produces faster code compared with dynamic
> inference, for an equivalent code base.

Huh? Dynamic "inference" doesn't make sense in the context of any Lisp
system I know of. Are you talking about recompiling the codepath each
time it's run with different types... or what?

> But you usually need to declare things on dynamic languages to help
> the inferer, while in statically infered ones you don't, so you save a
> lot of time on both writing and profiling.

Huh? No way. You need (theoretically) to declare just as much in Lisp as
you do in ML. Some compilers aren't as good about type inference, and
some programmers want to keep their codebase dynamic so they can load
fixes into a live system that might change the datatypes involved in
some codepaths.

> You don't need to maintain parallel versions of the codebase, or if you
> do, you would also have to in Lisp (this is not language specific, but
> depends on the kind of algorithm/implementation you choose).

No. Because Lisp types are more closely related to the actual
mathematical types as opposed to ML types which are more closely related
to the things the computer happens to use at the moment.

I don't need to write one version with integer declarations and another
with fixnum declarations. I just define the ranges of the inputs and the
type inferer takes care of figuring out whether that architecture on
that implementation can use fixnums or needs to do generic arithmetic. 
Please refer to the examples I gave. If I compiled the code on a 64 bit
machine, I wouldn't get the efficiency note. If I wanted to have that
code optimized in ML, I'd need to write one bignum version, one int
version, write code for what the CMUCL type inferer would compute, and
have that run at compile-time to determine whether I want to use the
bignum or int version given the processor's machine word size. In Lisp,
I just write the code in the way I think about it, mathematically.
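To make that concrete, here is a sketch (the function name and bounds are invented for illustration; assuming a CMUCL/SBCL-style compiler with type inference):

```lisp
;; Declare the mathematical ranges and let the compiler pick the
;; representation.  (ADD-BOUNDED and the bounds are made up for the
;; example.)
(defun add-bounded (a b)
  (declare (type (integer 0 1000000) a b)
           (optimize (speed 3)))
  ;; The compiler derives the result type (integer 0 2000000) itself.
  ;; If that range fits in a fixnum on the target, it emits raw machine
  ;; addition; otherwise it falls back to generic arithmetic.  The same
  ;; source works either way.
  (+ a b))
```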

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Javier
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1160097355.899319.133850@h48g2000cwc.googlegroups.com>
Rahul Jain wrote:


> > type 'a btree = Empty | Node of 'a * 'a btree * 'a btree
> >
> > I found this feature very usefull.
>
> How can you not declare types in Lisp? I don't see how this has anything
> to do with "testing" anything at compile time.

Basically, in ML every type must be known at compile time. If you use
the binary tree definition above, which is an arbitrary one, then you
must provide a context for it. For example:

# let my_tree = Empty;;
=> val my_tree : 'a btree = Empty

but, if you assign a real value:

# let my_tree = Node (5, Empty, Empty);;
=> val my_tree : int btree = Node (5, Empty, Empty)

the compiler is able to create a new type called "int btree" from the
generic one.

The good thing about this is that, while you are able to use a
"dynamic" type like a btree for storing any kind of data you want, you
are also forced not to mix ints with strings or floats in that tree
unless you intentionally desire that. This means that your code will
always be type safe, avoiding some annoying bugs.

In Lisp, if you create a binary tree, it may contain data of several
types, which is not safe. Or you can start declaring things, but then
you end up with inelegant spaghetti code, much as if it were written
in C.

> >> Maybe he meant that you need to explicitly... um...
> >> declare (?!?) a number to be a bignum? That would fit in with the lack
> >> of a numeric tower.
> >
> > I don't understand what you're talking about, but you can always
> > convert any int or string to big_int and viceversa.
>
> Yes, but how do _I_ know when I need to use bignums or not?

You must know, you are a programmer. Even in Lisp you must know the
type you are using for every variable.

> > I think that if you are a good programmer, you must know the type of
> > anything you are writing, even if it is written in Lisp. And ML does
> > allow you to use "generic" or arbitrary types when you need. Just take
> > the example of the binary tree above.
>
> But you may not know what that type _means_ because that type is not
> defined to mean anything.

It does mean something. Even in mathematics it does. Doing arithmetic
with natural numbers is not the same as doing it with real or complex
ones. On computers, this is important too.

> A machine word is a different size on
> different processors. I can't tell ad-hoc whether a value will fit into
> "the" machine word of "the" processor it might run on at any time in the
> future.
> On the other hand, Lisp allows me to say what size the values
> will have and the compiler figures out whether it will necessarily fit
> into a machine word or not and emits the appropriate machine code for
> that inference.

With respect to word size, you are right: dynamic languages allow you
to write less and more standard code. But the cost of this is that the
rest of the system may be unsafe if you don't declare things.

> > Static inference always produces faster code compared with dynamic
> > inference, for an equivalent code base.
>
> Huh? Dynamic "inference" doesn't make sense in the context of any Lisp
> system I know of. Are you talking about recompiling the codepath each
> time it's run with different types... or what?

I meant dynamic languages in comparison with strongly typed ones having
static inference. Sorry if I was not clear.

> > But you usually need to declare things on dynamic languages to help
> > the inferer, while in statically inferred ones you don't, so you save a
> > lot of time on both writing and profiling.
>
> Huh? No way. You need (theoretically) to declare just as much in Lisp as
> you do in ML. Some compilers aren't as good about type inference, and
> some programmers want to keep their codebase dynamic so they can load
> fixes into a live system that might change the datatypes involved in
> some codepaths.

Normally, when a data type is changed, there are side effects somewhere
in your code. For example, you cannot change the type of an important
variable from an integer to a string and pretend that nothing would
happen in your code.

Your statement is only true for interchanging integers and floats of
different word sizes, for which I recognise Lisp is more concise than
ML. But not much more: you can still keep different types in one single
variable in ML:

# type number = Int of int | Float of float | Big of Big_int.big_int;;
type number = Int of int | Float of float | Big of Big_int.big_int
# let n = Float 9.4;;
val n : number = Float 9.4
# match n with
  | Float n -> print_string "It is a float"
  | Int n -> print_string "It is an integer"
  | Big n -> print_string "It is a big number";;
=>      It is a float
- : unit = ()
From: Rahul Jain
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <87ac49jq2e.fsf@nyct.net>
"Javier" <·······@gmail.com> writes:

> Rahul Jain wrote:
>
>
>> > type 'a btree = Empty | Node of 'a * 'a btree * 'a btree
>> >
>> > I found this feature very usefull.
>>
>> How can you not declare types in Lisp? I don't see how this has anything
>> to do with "testing" anything at compile time.
>
> Basically, in ML every type must be known at compile time. If you use
> the binary tree definition above, which is an arbitrary one, then you
> must provide a context for it. For example:
>
> # let my_tree = Empty;;
> => val my_tree : 'a btree = Empty
>
> but, if you assign a real value:
>
> # let my_tree = Node (5, Empty, Empty);;
> => val my_tree : int btree = Node (5, Empty, Empty)
>
> the compiler is able to create a new type called "int btree" from the
> generic one.

Lisp has parametrized types, too. You can't tell the compiler how to
propagate your parameters through various operators, because that's not
in the standard. But you can do it given a specific compiler.
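For instance, DEFTYPE takes parameters; how far a given compiler checks and propagates such types is implementation-specific, as noted:

```lisp
;; A parametrized type in standard CL, via DEFTYPE.
(deftype square-matrix (&optional (element-type '*) (size '*))
  `(array ,element-type (,size ,size)))

;; A declaration using it; how much the compiler infers from this
;; varies between implementations.
(defun matrix-trace (m)
  (declare (type (square-matrix double-float 3) m))
  (loop for i below 3 sum (aref m i i)))
```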

> The good thing about this is that, while being able to use a "dynamic"
> type like a btree for storing any kind of data you want to, you also
> are forced to not mix int with strings or floats in that tree if you
> don't intentionally desire that.

Yes, that can be done for arrays in Lisp. As I said before, custom data
types don't have their parameters inferred automatically because there's
no good way to tell what each parameter actually means.
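For example (a sketch; the exact upgraded types vary by implementation):

```lisp
;; An array whose elements are statically constrained.
(defparameter *bits*
  (make-array 16 :element-type '(unsigned-byte 8) :initial-element 0))

;; The implementation upgrades the requested element type to one it can
;; represent specially -- often exactly (unsigned-byte 8), stored one
;; byte per element:
(upgraded-array-element-type '(unsigned-byte 8))

;; Under safe settings, storing a string here signals a TYPE-ERROR:
;; (setf (aref *bits* 0) "oops")
```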

> This means that your code will always be type safe, avoiding some
> annoying bugs.

Oh jeez. Shouldn't this kind of discussion be in comp.programming?
I believe that's where these kinds of pointless discussions abound.

> In Lisp, if you create a binary tree, it may contain data of several
> types, which is not safe.

Wrong. By your logic, the Lisp code is not a "safe" data structure.

> Or you can start declaring things, but then
> you end up with inelegant spaghetti code, much as if it were written
> in C.

I don't see where the term "spaghetti" comes from. The code would be
easy enough to follow, and type inference allows you to not need to
declare much.

>> > I don't understand what you're talking about, but you can always
>> > convert any int or string to big_int and viceversa.
>>
>> Yes, but how do _I_ know when I need to use bignums or not?
>
> You must know, you are a programmer. Even in Lisp you must know the
> type you are using for every variable.

That doesn't jibe. I know the type, but that's only _because_ I'm using
Lisp, and that's the reason why the _actual_ type exists. I don't care
about how that needs to be implemented on different hardware.

>> > I think that if you are a good programmer, you must know the type of
>> > anything you are writing, even if it is written in Lisp. And ML does
>> > allow you to use "generic" or arbitrary types when you need. Just take
>> > the example of the binary tree above.
>>
>> But you may not know what that type _means_ because that type is not
>> defined to mean anything.
>
> It does mean something. Even in mathematics it does. Doing arithmetic
> with natural numbers is not the same as doing it with real or complex
> ones. On computers, this is important too.

Um, huh? The reals are an analytic continuation of the naturals (with
the integers as a waypoint). Same with complex and reals. That means
that arithmetic with the "complex" operators is the same as with the
"natural" operators, except for cases where you're using that analytic
continuation and the result is a complex number.

Give me the Lisp Integer type declaration that is the same as "int" in
SML.

> With respect to word size, you are right: dynamic languages allow you
> to write less and more standard code. But the cost of this is that the
> rest of the system may be unsafe if you don't declare things.

Or not. You seem to insist that CMUCL doesn't exist. You also seem to
insist that a Lisp compiler itself must be an unsafe program because it
can take arbitrary data.

>> > Static inference always produces faster code compared with dynamic
>> > inference, for an equivalent code base.
>>
>> Huh? Dynamic "inference" doesn't make sense in the context of any Lisp
>> system I know of. Are you talking about recompiling the codepath each
>> time it's run with different types... or what?
>
> I meant dynamic languages in comparison with strong ones having static
> inference. Sorry if I was not clear.

Strong languages? You're not getting clearer. :)

Do you mean strongly-typed? That's Lisp. And it has static inference if
the compiler so chooses.

Being a dynamic language does nothing to preclude anything from being
in the type system.

>> > But you usually need to declare things on dynamic languages to help
>> > the inferer, while in statically inferred ones you don't, so you save a
>> > lot of time on both writing and profiling.
>>
>> Huh? No way. You need (theoretically) to declare just as much in Lisp as
>> you do in ML. Some compilers aren't as good about type inference, and
>> some programmers want to keep their codebase dynamic so they can load
>> fixes into a live system that might change the datatypes involved in
>> some codepaths.
>
> Normally, when a data type is changed, some side effects are happening
> somewhere in your code. For example, you cannot change the type of an
> important variable from an integer to a string and pretend that nothing
> would happen in your code.

Sometimes I can. I could be dealing with generic operators that will
just adapt to the new types. I change what I need to change and don't
change what I don't need to. Lisp is not ML.

> Your statement is only true for interchanging integers and floats of
> different word size, in which I recognise Lisp is more concise than ML.
> But not much better, you can still use different types into one single
> variable in ML:
>
> # type number = Int of int | Float of float | Big of Big_int.big_int;;
> type number = Int of int | Float of float | Big of Big_int.big_int
> # let n = Float 9.4;;
> val n : number = Float 9.4
> # match n with
>   | Float n -> print_string "It is a float"
>   | Int n -> print_string "It is an integer"
>   | Big n -> print_string "It is a big number";;
> =>      It is a float
> - : unit = ()

But you still can't choose statically, for all architectures, which of
them your input arrives as. If you choose wrong, you get an incorrect
answer or a slow program. And since word sizes are different on
different processors, in some cases you will be wrong no matter what
you choose.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Thomas A. Russ
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <ymimz8913i8.fsf@sevak.isi.edu>
"Javier" <·······@gmail.com> writes:

> Rahul Jain wrote:
> >
> > Yes, but how do _I_ know when I need to use bignums or not?
> 
> You must know, you are a programmer. Even in Lisp you must know the
> type you are using for every variable.

Actually, this is not true in the case of certain distinctions such as
the (arbitrary) one between FIXNUMs and BIGNUMs.  Since the point at
which these values roll over from one type to another is
implementation-dependent, it is hardly reasonable to expect that one
needs to know in advance exactly which implementation the code is being
run on, let alone to have to produce different code for each such
implementation.

You should only need to know this if you are trying to do something very
specific where the exact type is important to you.

If all you care about is the more general type of INTEGER or NUMBER,
then it hardly seems convenient to need to declare or even know exactly
which type of number you are getting.  If you want to, say, write code
to find roots of equations using Newton's method, why do you need to
care whether the equation returns fixnums, bignums, single floats or
double floats?
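A sketch of that genericity (the function name is invented; convergence handling is simplified):

```lisp
;; Newton's method written once, over the generic numeric tower.
(defun newton-root (f df x0 &key (tolerance 1/100000) (max-steps 50))
  "Approximate a root of F near X0, given the derivative DF."
  (loop repeat max-steps
        for x = x0 then (- x (/ (funcall f x) (funcall df x)))
        when (< (abs (funcall f x)) tolerance)
          return x
        finally (return x)))

;; With rational input the iteration stays in exact rational
;; arithmetic; with a double-float X0 the same code runs in floats:
;; (newton-root (lambda (x) (- (* x x) 2))   (lambda (x) (* 2 x)) 1)
;; (newton-root (lambda (x) (- (* x x) 2d0)) (lambda (x) (* 2 x)) 1d0)
```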

> With respect to word size, you are right: dynamic languages allow you
> to write less and more standard code. But the cost of this is that the
> rest of the system may be unsafe if you don't declare things.

Well, safety is always relative.  Compile-time type safety doesn't help
you when there is interactive input.  Nor does it protect you against
algorithmic bugs or division by zero (unless you make rather heroic
assumptions about the cleverness of the type system and what it can
figure out).

> > > Static inference always produces faster code compared with dynamic
> > > inference, for an equivalent code base.
> >
> > Huh? Dynamic "inference" doesn't make sense in the context of any Lisp
> > system I know of. Are you talking about recompiling the codepath each
> > time it's run with different types... or what?
> 
> I meant dynamic languages in comparison with strong ones having static
> inference. Sorry if I was not clear.

Well, if we need to argue generalizations, then how about dynamic
languages always let you write your code faster than statically typed
languages?

Besides, a statically typed language is not always superior in execution
speed.  If what you are doing requires type dispatch, the compiler,
especially if it has specially-tweaked mechanisms for this, is probably a
lot more efficient than whatever you have to add to your program to get
the same effect and flexibility.  (Greenspun's Tenth Rule, etc.)

> # type number = Int of int | Float of float | Big of Big_int.big_int;;
> type number = Int of int | Float of float | Big of Big_int.big_int
> # let n = Float 9.4;;
> val n : number = Float 9.4
> # match n with
>   | Float n -> print_string "It is a float"
>   | Int n -> print_string "It is an integer"
>   | Big n -> print_string "It is a big number";;
> =>      It is a float
> - : unit = ()

So why would this sort of construct be any more efficient than a dynamic
dispatch, when the runtime system knows it needs to handle this?




-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Javier
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1160223445.754997.262920@i3g2000cwc.googlegroups.com>
Thomas A. Russ wrote:

> "Javier" <·······@gmail.com> writes:
>
> > Rahul Jain wrote:
> > >
> > > Yes, but how do _I_ know when I need to use bignums or not?
> >
> > You must know, you are a programmer. Even in Lisp you must know the
> > type you are using for every variable.
>
> Actually, this is not true in the case of certain distinctions such as
> the (arbitrary) one between FIXNUMs and BIGNUMs.

Yes, but you still need to know what type you are using for most of the
variables. For example, you must know that you are using a list and
cannot treat it as if it were an integer, or you'll get an error.


> > # type number = Int of int | Float of float | Big of Big_int.big_int;;
> > type number = Int of int | Float of float | Big of Big_int.big_int
> > # let n = Float 9.4;;
> > val n : number = Float 9.4
> > # match n with
> >   | Float n -> print_string "It is a float"
> >   | Int n -> print_string "It is an integer"
> >   | Big n -> print_string "It is a big number";;
> > =>      It is a float
> > - : unit = ()
>
> So why would this sort of construct be any more efficient than a dynamic
> dispatch, when the runtime system knows it needs to handle this?

It is not more efficient, but it is true that you only use dispatch when
you need it, while in Lisp the compiler is probably doing it almost all
the time.

CL-USER> (defun sum-to-nums (a b) (+ a b))
SUM-TO-NUMS
CL-USER> (disassemble 'sum-to-nums)
; 11CB50A1:       8B55F4           MOV EDX, [EBP-12]          ; no-arg-parsing entry point
;       A4:       8B7DF0           MOV EDI, [EBP-16]
;       A7:       E894B034EF       CALL #x1000140             ; GENERIC-+
;       AC:       7302             JNB L0
;       AE:       8BE3             MOV ESP, EBX
;       B0: L0:   8D65F8           LEA ESP, [EBP-8]
;       B3:       F8               CLC
;       B4:       8B6DFC           MOV EBP, [EBP-4]
;       B7:       C20400           RET 4

It is doing a call to GENERIC-+, which I don't have access to, but +
itself disassembles to this code:
CL-USER> (disassemble '+)
; 106DFB68:       .ENTRY +(&REST ARGS)                        ; (FUNCTION # *)
;      B80:       8F45F8           POP DWORD PTR [EBP-8]
;      B83:       E337             JECXZ L1
;      B85:       8D5DE0           LEA EBX, [EBP-32]
;      B88:       29CB             SUB EBX, ECX
;      B8A:       8BE3             MOV ESP, EBX
;      B8C:       8BD9             MOV EBX, ECX
;      B8E:       83E90C           SUB ECX, 12
;      B91:       7612             JBE L0
;      B93:       57               PUSH EDI
;      B94:       56               PUSH ESI
;      B95:       8D7C2408         LEA EDI, [ESP+8]
;      B99:       8BF5             MOV ESI, EBP
;      B9B:       29DE             SUB ESI, EBX
;      B9D:       C1E902           SHR ECX, 2
;      BA0:       FC               CLD
;      BA1:       F2               REPNE
;      BA2:       A5               MOVSD
;      BA3:       5E               POP ESI
;      BA4:       5F               POP EDI
;      BA5: L0:   8BCB             MOV ECX, EBX
;      BA7:       8955DC           MOV [EBP-36], EDX
;      BAA:       83F904           CMP ECX, 4
;      BAD:       7410             JEQ L2
;      BAF:       897DD8           MOV [EBP-40], EDI
;      BB2:       83F908           CMP ECX, 8
;      BB5:       7408             JEQ L2
;      BB7:       8975D4           MOV [EBP-44], ESI
;      BBA:       EB03             JMP L2
;      BBC: L1:   8D65E0           LEA ESP, [EBP-32]
;      BBF: L2:   894DF0           MOV [EBP-16], ECX
;      BC2:       8B4DF0           MOV ECX, [EBP-16]
;      BC5:       8D740CFC         LEA ESI, [ESP+ECX-4]
;      BC9:       BB0B000008       MOV EBX, 134217739
;      BCE:       E35F             JECXZ L7
;      BD0:       8D144D00000000   LEA EDX, [ECX*2]
;      BD7:       800D2403000804   OR BYTE PTR [#x8000324], 4 ; unboxed_region
;      BDE:       0315D0BB3100     ADD EDX, [#x31BBD0]        ; _boxed_region
;      BE4:       3B15D4BB3100     CMP EDX, [#x31BBD4]        ; boxed_region
;      BEA:       7607             JBE L3
;      BEC:       E80F4393EF       CALL #x13F00               ; _alloc_overflow_edx
;      BF1:       EB12             JMP L4
;      BF3: L3:   3315D0BB3100     XOR EDX, [#x31BBD0]        ; _boxed_region
;      BF9:       3115D0BB3100     XOR [#x31BBD0], EDX        ; _boxed_region
;      BFF:       3315D0BB3100     XOR EDX, [#x31BBD0]        ; _boxed_region
;      C05: L4:   8D5203           LEA EDX, [EDX+3]
;      C08:       C1E902           SHR ECX, 2
;      C0B:       FD               STD
;      C0C:       8BDA             MOV EBX, EDX
;      C0E:       EB06             JMP L6
;      C10: L5:   83C208           ADD EDX, 8
;      C13:       8952F9           MOV [EDX-7], EDX
;      C16: L6:   AD               LODSD
;      C17:       8942FD           MOV [EDX-3], EAX
;      C1A:       E2F4             LOOP L5
;      C1C:       C742010B000008   MOV DWORD PTR [EDX+1], 134217739
;      C23:       80352403000804   XOR BYTE PTR [#x8000324], 4 ; unboxed_region
;      C2A:       7403             JEQ L7
;      C2C:       0F0B09           BREAK 9                    ; pending interrupt trap
;      C2F: L7:   81FB0B000008     CMP EBX, 134217739
;      C35:       745A             JEQ L14
;      C37:       8B4B01           MOV ECX, [EBX+1]
;      C3A:       8B53FD           MOV EDX, [EBX-3]
;      C3D:       F6C203           TEST DL, 3
;      C40:       740F             JEQ L8
;      C42:       8BC2             MOV EAX, EDX
;      C44:       2407             AND AL, 7
;      C46:       3C07             CMP AL, 7
;      C48:       753A             JNE L13
;      C4A:       8A42F9           MOV AL, [EDX-7]
;      C4D:       3C22             CMP AL, 34
;      C4F:       7733             JNBE L13
;      C51: L8:   EB1F             JMP L11
;      C53: L9:   8BC1             MOV EAX, ECX
;      C55:       2407             AND AL, 7
;      C57:       3C03             CMP AL, 3
;      C59:       753D             JNE L15
;      C5B:       8BC1             MOV EAX, ECX
;      C5D:       8B4001           MOV EAX, [EAX+1]
;      C60:       8945F4           MOV [EBP-12], EAX
;      C63:       8B79FD           MOV EDI, [ECX-3]
;      C66:       E8D50492F0       CALL #x1000140             ; GENERIC-+
;      C6B:       7302             JNB L10
;      C6D:       8BE3             MOV ESP, EBX
;      C6F: L10:  8B4DF4           MOV ECX, [EBP-12]
;      C72: L11:  81F90B000008     CMP ECX, 134217739
;      C78:       75D9             JNE L9
;      C7A: L12:  8D65F8           LEA ESP, [EBP-8]
;      C7D:       F8               CLC
;      C7E:       8B6DFC           MOV EBP, [EBP-4]
;      C81:       C20400           RET 4
;      C84: L13:  8B0560FB6D10     MOV EAX, [#x106DFB60]      ; 'NUMBER
;      C8A:       0F0B0A           BREAK 10                   ; error trap
;      C8D:       03               BYTE #X03
;      C8E:       1F               BYTE #X1F                  ; OBJECT-NOT-TYPE-ERROR
;      C8F:       8E               BYTE #X8E                  ; EDX
;      C90:       0E               BYTE #X0E                  ; EAX
;      C91: L14:  31D2             XOR EDX, EDX
;      C93:       EBE5             JMP L12
;      C95:       90               NOP
;      C96:       90               NOP
;      C97:       90               NOP
;      C98: L15:  0F0B0A           BREAK 10                   ; error trap
;      C9B:       02               BYTE #X02
;      C9C:       02               BYTE #X02                  ; OBJECT-NOT-LIST-ERROR
;      C9D:       4E               BYTE #X4E                  ; ECX
NIL

which I think is the code to destructure the &rest list and then call
the GENERIC-+ function (which would contain even more code). This is
not very efficient.

Of course, you can declare sum-to-nums to sum only integers, but then
what is the benefit of using a dynamic language?
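The declared variant mentioned in the last sentence would look roughly like this (a sketch assuming SBCL; the name and bounds are invented):

```lisp
;; sum-to-nums restricted to small integers.  With these declarations,
;; DISASSEMBLE shows inline fixnum addition instead of the call to
;; GENERIC-+ above.
(defun sum-two-fixnums (a b)
  (declare (type (signed-byte 29) a b)
           (optimize (speed 3) (safety 0)))
  (+ a b))
```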
From: Javier
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1160226472.158431.255010@b28g2000cwb.googlegroups.com>
Thomas A. Russ wrote:

> > With respect to word size, you are right: dynamic languages allow you
> > to write less and more standard code. But the cost of this is that the
> > rest of the system may be unsafe if you don't declare things.
>
> Well, safety is always relative.  Just because there is compile-time
> type safety doesn't help you when there is interactive input.  Nor does
> it protect you against algorithmic bugs, divide by zero (unless you make
> rather heroic assumptions about the cleverness of the type system and
> what it can figure out.)

I'll put here an example:

(defun sum-list (lst)
  (cond
    ((null lst) nil)
    (t (+ (car lst) (sum-list (cdr lst))))))

This compiles on SBCL without giving a single warning. But if you try
to run it:

CL-USER> (sum-list (list 1 2 3))
=> Error Argument Y is not a NUMBER: NIL
   [Condition of type SIMPLE-TYPE-ERROR]

If you try to write the same function on ML:

# let rec sum_list lst =
  match lst with
      []->[]
    | h::t -> h + (sum_list t);;
      Characters 67-83:
      | h::t -> h + (sum_list t);;
                ^^^^^^^^^^^^^^^^
=> This expression has type int but is here used with type 'a list

The compiler immediately gives you an error and does not let you
compile and use it.

This error may be self-evident, as it is very simple. But imagine the
same kind of error in a big system, or in a function which is 100 lines
long with several conditions. In Lisp, your application may run fine
for ages until some day the bug appears. Just imagine what would happen
if people's lives depended on this kind of application (for example,
one controlling train or air traffic), or if it were used to manage
bank accounts...
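For reference, the bug in the Lisp version above is its base case: returning 0 (the identity for +) instead of NIL makes the function total over lists of numbers:

```lisp
(defun sum-list (lst)
  (cond
    ((null lst) 0)   ; 0, not NIL: the sum of no numbers is zero
    (t (+ (car lst) (sum-list (cdr lst))))))

;; (sum-list (list 1 2 3)) => 6
```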
From: Stefan Nobis
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <87k63cf62w.fsf@snobis.de>
"Javier" <·······@gmail.com> writes:

> This error may be self evident as it is very simple.

Yes.

> But imagine the same kind of error on a big system, or in a function
> which is 100 lines long with several conditions.

That's what Thomas tried to say: In these cases there are so many ways
to put errors in your code, of which only (very) few can be caught by a
static type system. So you have to do something about the other kinds
of errors anyway, and you can't rely only on your static type system.

In fact the difference is not very big (in Lisp you have to do a
little bit more to also catch type errors, but the hard work is test
cases or the like for all the other bugs); on the other hand, Lisp
is much more flexible and fun -- so why bother with static type
systems? :)

-- 
Stefan.
From: Javier
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1160246088.959811.151420@i3g2000cwc.googlegroups.com>
Stefan Nobis wrote:

> "Javier" <·······@gmail.com> writes:
>
> > This error may be self evident as it is very simple.
>
> Yes.
>
> > But imagine the same kind of error on a big system, or in a function
> > which is 100 lines long with several conditions.
>
> That's what Thomas tried to say: In these cases there are so many ways
> to put errors in your code of which only (very) few can be caught by a
> static type system. So you have to do something about the other types
> of errors anyway and you can't rely only on your static type system.

In fact it is not only the type system. :)
For example, in ML the null keyword does not exist, nor anything
equivalent. Everything must have a correct value. The combination of
static type checking and the absence of null makes for a safer system.
Your application may present incorrect solutions to the problems, but
at least it is going to keep running in most cases.

> In fact the difference is not very big (in Lisp you have to do a
> little bit more to also catch type errors but the hard work is test
> cases or the like for all the other bugs) but on the other hand Lisp
> is much more flexible and fun -- so why bother with static type
> systems. :)

Sure, there is probably more fun in Lisp (this is just a personal
taste).

I would like a mixture of both worlds. For example, I'd like Lisp to be
strict in most cases (for example, to not allow mixing integers and
lists), but still tolerant with compatible types (for example, mixing
bignums and integers). As far as I know, this doesn't exist at the
moment.
From: Jon Harrop
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <45290afd$0$16556$ed2619ec@ptn-nntp-reader01.plus.net>
Stefan Nobis wrote:
> That's what Thomas tried to say: In these cases there are so many ways
> to put errors in your code of which only (very) few can be caught by a
> static type system.

In my experience, the vast majority are caught by the static type system.
This is really what the static vs dynamic debate boils down to: a
difference in belief.

> Lisp is much more flexible and fun

What makes Lisp more flexible and fun compared to OCaml?

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Pascal Costanza
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <4osh4rFg3a4cU1@individual.net>
Jon Harrop wrote:
> Stefan Nobis wrote:
>> That's what Thomas tried to say: In these cases there are so many ways
>> to put errors in your code of which only (very) few can be caught by a
>> static type system.
> 
> In my experience, the vast majority are caught by the static type system.

How do you know that? Do you write all your programs both in 
dynamically- and statically-typed languages, and keep a log of how many 
errors have been caught in which way?

> This is really what the static vs dynamic debate boils down to: a
> difference in belief.

So what is it? Experience or belief?


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Jon Harrop
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <4529cf44$0$16552$ed2619ec@ptn-nntp-reader01.plus.net>
Pascal Costanza wrote:
> Jon Harrop wrote:
>> Stefan Nobis wrote:
>>> That's what Thomas tried to say: In these cases there are so many ways
>>> to put errors in your code of which only (very) few can be caught by a
>>> static type system.
>> 
>> In my experience, the vast majority are caught by the static type system.
> 
> How do you know that? Do you write all your programs both in
> dynamically- and statically-typed languages, and keep a log of how many
> errors have been caught in which way?

When I moved from dynamically typed languages to statically typed ones, I
noticed the compiler picking up lots of errors that I would have had to
debug at runtime (and hope that I happened to stumble upon them before
selling the software).

>> This is really what the static vs dynamic debate boils down to: a
>> difference in belief.
> 
> So what is it? Experience or belief?

Both I guess. In my experience, this is quite task specific. Certain tasks
(e.g. GUI work) are quite dynamic by nature, so static typing is less
useful and dynamic typing is likely to be more concise.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Ken Tilton
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <j_vWg.28$8I6.1@newsfe12.lga>
Jon Harrop wrote:
> Pascal Costanza wrote:
> 
>>Jon Harrop wrote:
>>
>>>Stefan Nobis wrote:
>>>
>>>>That's what Thomas tried to say: In these cases there are so many ways
>>>>to put errors in your code of which only (very) few can be caught by a
>>>>static type system.
>>>
>>>In my experience, the vast majority are caught by the static type system.
>>
>>How do you know that? Do you write all your programs both in
>>dynamically- and statically-typed languages, and keep a log of how many
>>errors have been caught in which way?
> 
> 
> When I moved from dynamically typed languages to statically typed ones, I
> noticed the compiler picking up lots of errors that I would have had to
> debug at runtime

Unresponsive! Of /course/ C++ (for example) nags you to death at 
compile-time, of /course/ CL frequently has to tell me '+ does not know 
what to do with "your mama!"'. We knooooooowwwwww /that/.

The question is specifically about bugs that make it into production. Of 
those that do in a Lisp app, how many would have been caught by a 
C++-like compiler? And vice versa: how often do C++ apps fail /because 
of/ type rigidity (see "Ariane 5")? And then we can look at trade-offs, 
bugs caught-not-caught vs cost of development, and consider how Lisp's 
overall lower cost of development leaves more $$$ for testing, and how 
Lisp's superior reflectivity makes possible better testing.

Now I concede that that would be a terribly difficult research 
undertaking, so it is probably easier to find a really good programmer 
and just ask them which language is better.

Hi. I am Kenny. Lisp is better.

I did enjoy having C++ slap me around when I ported Cells to C++, such 
that by the time I got a clean compile the thing worked, so I see the 
advantage. But that was porting something, not inventing/discovering it. 
The "discovering" is important. I simply would not have stumbled onto 
Cells (and stumble I did) in anything other than Lisp. And that is why 
Lisp is better.

hth, kt

-- 
Cells: http://common-lisp.net/project/cells/

"I'll say I'm losing my grip, and it feels terrific."
    -- Smiling husband to scowling wife, New Yorker cartoon
From: jayessay
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <m37iz9z26p.fsf@rigel.goldenthreadtech.com>
Ken Tilton <·········@gmail.com> writes:

> production. Of those that do in a Lisp app, how many would have been
> caught by a C++-like compiler.

Or even *ML compilers.  IME, such logic bugs are not caught at all by
type systems.  The two are orthogonal.

> And vice versa: how often do C++ apps fail /because of/ type
> rigidity (see "Ariane 5").

I agree with your basic position, but this is a category error in a
couple ways...


BTW: How 'bout those Yankees? <cough cough - choke> ;-)

/Jon

-- 
'j' - a n t h o n y at romeo/charley/november com
From: Ken Tilton
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <JrCWg.14$Fv5.5@newsfe10.lga>
jayessay wrote:
> Ken Tilton <·········@gmail.com> writes:
> 
> 
>>production. Of those that do in a Lisp app, how many would have been
>>caught by a C++-like compiler.
> 
> 
> Or even *ML compilers.  IME, such logic bugs are not caught at all by
> type systems.  The two are orthogonal.
> 
> 
>>And vice versa: how often do C++ apps fail /because of/ type
>>rigidity (see "Ariane 5").
> 
> 
> I agree with your basic position, but this is a category error in a
> couple ways...
> 
> 
> BTW: How 'bout those Yankees? <cough cough - choke> ;-)

Go Mets! Go Giants!

Anybody know a few pitchers George can buy? You can take A-Rod...please!

kt

-- 
Cells: http://common-lisp.net/project/cells/

"I'll say I'm losing my grip, and it feels terrific."
    -- Smiling husband to scowling wife, New Yorker cartoon
From: Jon Harrop
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <452aefda$0$8715$ed2619ec@ptn-nntp-reader02.plus.net>
Ken Tilton wrote:
> Jon Harrop wrote:
>> When I moved from dynamically typed languages to statically typed ones, I
>> noticed the compiler picking up lots of errors that I would have had to
>> debug at runtime
> 
> Unresponsive! Of /course/ C++ (for example) nags you to death at
> compile-time, of /course/ CL frequently has to tell me '+ does not know
> what do with "your mama!"'. We knooooooowwwwww /that/.
> 
> The question is specifically about bugs that make it into production. Of
> those that do in a Lisp app, how many would have been caught by a
> C++-like compiler...

Sure. The same is probably true of Lisp vs Java but that has nothing to do
with dynamic vs static typing.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Jon Harrop
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <45274633$0$8718$ed2619ec@ptn-nntp-reader02.plus.net>
Thomas A. Russ wrote:
> If you want to, say, write code
> to find equational roots using the Newton method, why do you need to
> care whether the equation returns fixnum, bignum, single floats or
> double floats?

Precision and computational complexity.

> Just because there is compile-time
> type safety doesn't help you when there is interactive input.

No. Look at the OCaml top-level, for example.

> Nor does 
> it protect you against algorithmic bugs,

Static type checking facilitates pattern match exhaustiveness checking, for
example.
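Jon's point can be made concrete with a small sketch (the `shape` type and `area` function below are illustrative, not from the thread): given a variant type, the OCaml compiler warns at compile time whenever a match omits a constructor.

```ocaml
(* Illustrative variant type. If the match below omitted the Rect case,
   OCaml would warn at compile time that the pattern-matching is not
   exhaustive and show an unmatched example such as Rect (_, _). *)
type shape = Circle of float | Square of float | Rect of float * float

let area = function
  | Circle r -> 3.14159 *. r *. r
  | Square s -> s *. s
  | Rect (w, h) -> w *. h

let () = Printf.printf "%f\n" (area (Rect (2.0, 3.0)))   (* prints 6.000000 *)
```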

> Well, if we need to argue generalizations, then how about dynamic
> languages always let you write your code faster than statically typed
> languages?

I see no evidence of that.

> Besides, a statically typed language is not always superior in execution
> speed.

Can you give an example of a program written in a dynamically typed language
for which there is no faster equivalent written in a statically typed
language?

> If what you are doing requires type dispatch, the compiler, 
> especially if it has specially-tweaked mechanisms for this is probably a
> lot more efficient than whatever you have to add to your program to get
> the same effect and flexibility.  (Greenspun's 10th rule, etc.)

Maybe. Can you give an example?

>> # type number = Int of int | Float of float | Big of Big_int.big_int;;
>> type number = Int of int | Float of float | Big of Big_int.big_int
>> # let n = Float 9.4;;
>> val n : number = Float 9.4
>> # match n with
>>   | Float n -> print_string "It is a float"
>>   | Int n -> print_string "It is an integer"
>>   | Big n -> print_string "It is a big number";;
>> =>      It is a float
>> - : unit = ()
> 
> So why would this sort of construct be any more efficient than a dynamic
> dispatch, when the runtime system knows it needs to handle this?

It is probably less efficient. It is certainly more verbose. Very few ML
programs use a numeric tower, e.g. very few OCaml programs use the numeric
tower provided by the Num module.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Javier
Subject: Lisp dinamism vs ML safety
Date: 
Message-ID: <egegib$cl5$1@wolfberry.srv.cs.cmu.edu>
Thomas A. Russ wrote:

> > In respect to word size, you are right, dynamic languages allow you
> > to write less and more standard code. But the cost of this is that the
> > rest of the system may be unsafe if you don't declare things.
>
> Well, safety is always relative.  Just because there is compile-time
> type safety doesn't help you when there is interactive input.  Nor does
> it protect you against algorithmic bugs, divide by zero (unless you make
> rather heroic assumptions about the cleverness of the type system and
> what it can figure out.)

I'll put here an example:

(defun sum-list (lst)
  (cond
    ((null lst) nil)
    (t (+ (car lst) (sum-list (cdr lst))))))

This compiles on SBCL without a single warning. But if you try
to execute it:

CL-USER> (sum-list (list 1 2 3))
=> Error Argument Y is not a NUMBER: NIL
   [Condition of type SIMPLE-TYPE-ERROR]

If you try to write the same function on ML:

# let rec sum_list lst =
  match lst with
      []->[]
    | h::t -> h + (sum_list t);;
      Characters 67-83:
      | h::t -> h + (sum_list t);;
                ^^^^^^^^^^^^^^^^
=> This expression has type int but is here used with type 'a list

The compiler immediately gives you an error and does not let you
compile and use it.
This error may be self-evident, as it is very simple. But imagine the
same kind of error in a big system, or in a function that is 100 lines
long with several conditions. In Lisp, your application may run fine
for ages until one day the bug appears. Just imagine what would happen
if people's lives depended on this kind of application (for example,
one controlling train or air traffic), or if it were used to manage
bank accounts...
From: Rahul Jain
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <8764eskcmt.fsf@nyct.net>
"Javier" <·······@gmail.com> writes:

> I'll put here an example:
>
> (defun sum-list (lst)
>   (cond
>     ((null lst) nil)
>     (t (+ (car lst) (sum-list (cdr lst))))))
>
> This compiles on SBCL without give up a single warning.

If you actually asked it to, it would.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Thomas A. Russ
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <ymiwt78yu8z.fsf@sevak.isi.edu>
"Javier" <·······@gmail.com> writes:

> I'll put here an example:
> 
> (defun sum-list (lst)
>   (cond
>     ((null lst) nil)
>     (t (+ (car lst) (sum-list (cdr lst))))))
> 
> This compiles on SBCL without give up a single warning. But if you try
> to exec it:
> 
> CL-USER> (sum-list (list 1 2 3))
> => Error Argument Y is not a NUMBER: NIL
>    [Condition of type SIMPLE-TYPE-ERROR]

Well, it looks like Common Lisp found the error pretty quickly.

> If you try to write the same function on ML:
> 
> # let rec sum_list lst =
>   match lst with
>       []->[]
>     | h::t -> h + (sum_list t);;
>       Characters 67-83:
>       | h::t -> h + (sum_list t);;
>                 ^^^^^^^^^^^^^^^^
> => This expression has type int but is here used with type 'a list
> 
> The compiler inmediatly gives you an error and do not let you to
> compile and use it.

So in this example, the only claim is really that ML flags the error at
compile time, whereas Common Lisp flags it at run time.  Now, perhaps if
one were using a language without dynamic function redefinition and
linking, there might be some large advantage to having such a detection
at compile time.

After all, if you have to go through a long edit-compile-link cycle,
then you really do need to have lots of errors caught at compile time,
since it is more efficient than finding them at run time.

In Common Lisp, though, you can very easily work on small pieces of the
code, so after finding the error in the SUM-LIST function, one just
needs to correct that function, re-compile just the function (and every
reasonable lisp development environment such as the commercial IDEs,
Slime, iLisp, etc. supports this nicely in the editor buffer) and
proceed with the testing.

In fact, with the standard debugger support, you can even recompile the
function and continue out of the error case without having to restart
the entire system.

Consider the following example interaction (ACL 5.0):

USER> (defun sum-list (lst)
        (cond ((null lst) nil)
              (t (+ (car lst) (sum-list (cdr lst))))))
SUM-LIST
USER> (defun test (sample-lists)
        (dolist (sample sample-lists)
          (format t "Sum of ~S = ~S~%" sample (sum-list sample))))
TEST

;; Run a test:
USER> (test '((1 1 1) (2 2 2) (1 2 3 4) (3 4 5)))
Error: `NIL' is not of the expected type `NUMBER'
  [condition type: TYPE-ERROR]

Restart actions (select using :continue):
 0: Return to Top Level (an "abort" restart)
 1: Abort #<PROCESS Initial Lisp Listener>

;; Look at where the error occurs.  It is in this expression.
[1] USER> :frame
Expression: (+ 1 NIL)

;; Find and fix bug in the code, adopting the convention that
;; the sum of the empty list is zero.  Matches semantics of the
;; built-in + function.
;; Redefine code here at run-time.  (In real operation done in
;; source through IDE, but it works the same way)
[1] USER> (defun sum-list (lst)
            (cond ((null lst) 0)
                  (t (+ (car lst) (sum-list (cdr lst))))))
SUM-LIST

;; Continue from the error.  Since the error frame was (+ 1 NIL)
;; at this point we substitute the proper value for NIL and have
;; our function return (+ 1 0) while still on the call stack in
;; the debugger.  Note that we only have to do this once, since
;; subsequent iterations in the test loop now use the redefined
;; SUM-LIST function, through the magic of dynamic linking.
[1] USER> :return (+ 1 0)
Sum of (1 1 1) = 3
Sum of (2 2 2) = 6
Sum of (1 2 3 4) = 10
Sum of (3 4 5) = 12
NIL
USER> 

The point of this is to show that there isn't really any big savings in
finding this sort of error at compile time.  The workflow is a bit
different in Common Lisp, since the overhead of editing, compiling and
linking is very low.  For compiling and linking single functions, it is
virtually non-existent, so it doesn't matter much that the error is
detected at run-time rather than compile-time.

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Joel Reymont
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160510128.422770.325000@m73g2000cwd.googlegroups.com>
What I always wanted to know:

How do you do this in SLIME?

    Thanks, Joel

Thomas A. Russ wrote:

> ;; Look at where the error occurs.  It is in this expression.
> [1] USER> :frame
> Expression: (+ 1 NIL)
>
> ;; Find and fix bug in the code, adopting the convention that
> ;; the sum of the empty list is zero.  Matches semantics of the
> ;; built-in + function.
> ;; Redefine code here at run-time.  (In real operation done in
> ;; source through IDE, but it works the same way)
> [1] USER> (defun sum-list (lst)
>             (cond ((null lst) 0)
>                   (t (+ (car lst) (sum-list (cdr lst))))))
> SUM-LIST
>
> ;; Continue from the error.  Since the error frame was (+ 1 NIL)
> ;; at this point we substitute the proper value for NIL and have
> ;; our function return (+ 1 0) while still on the call stack in
> ;; the debugger.  Note that we only have to do this once, since
> ;; subsequent iterations in the test loop know use the redefined
> ;; SUM-LIST function, through the magic of dynamic linking.
> [1] USER> :return (+ 1 0)
From: Rahul Jain
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <87bqojj30e.fsf@nyct.net>
"Joel Reymont" <······@gmail.com> writes:

> What I always wanted to know:
>
> How do you do this in SLIME?

I see a command called sldb-return-from-frame here.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Thomas A. Russ
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <ymi4pu9yr9z.fsf@sevak.isi.edu>
"Joel Reymont" <······@gmail.com> writes:

> What I always wanted to know:
> 
> How do you do this in SLIME?
> 
>     Thanks, Joel

Using what underlying lisp?

Slime's debugger and ability to restart are limited a bit by the
underlying lisp, at least as far as I understand it.  I don't use SLIME
all that much.

For the record, I don't happen to know what the incantation is for CMUCL
or SBCL....


> Thomas A. Russ wrote:
> 
> > ;; Look at where the error occurs.  It is in this expression.
> > [1] USER> :frame
> > Expression: (+ 1 NIL)
> >
> > ;; Find and fix bug in the code, adopting the convention that
> > ;; the sum of the empty list is zero.  Matches semantics of the
> > ;; built-in + function.
> > ;; Redefine code here at run-time.  (In real operation done in
> > ;; source through IDE, but it works the same way)
> > [1] USER> (defun sum-list (lst)
> >             (cond ((null lst) 0)
> >                   (t (+ (car lst) (sum-list (cdr lst))))))
> > SUM-LIST
> >
> > ;; Continue from the error.  Since the error frame was (+ 1 NIL)
> > ;; at this point we substitute the proper value for NIL and have
> > ;; our function return (+ 1 0) while still on the call stack in
> > ;; the debugger.  Note that we only have to do this once, since
> > ;; subsequent iterations in the test loop know use the redefined
> > ;; SUM-LIST function, through the magic of dynamic linking.
> > [1] USER> :return (+ 1 0)
> 

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Daniel Janus
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <slrneir0k6.hia.przesunmalpe@students.mimuw.edu.pl>
On 10.10.2006, Thomas A. Russ <···@sevak.isi.edu> wrote:

[snip]

This is perhaps a side-note on the topic rather than an actual reply, but
as an amateur OCaml programmer and one who writes Common Lisp code for a
living (though I'm rather new to Lisp), I'd like to throw in my three cents
worth.

First off, I must say that I enjoy both OCaml and Lisp, and I think they
both are excellent languages to program in.  And each of them has features
that I find myself sorely missing when I write in the other.  This feeling
is often related to the issue of static/dynamic typing and type
correctness, and here's why.

One of the first things I wrote in Lisp was a tree editor, and I had a hard
time trying to figure out how to represent trees in Lisp.  Back then, I
didn't grok CLOS yet, so I was reluctant to use that; instead, I found a
clue in "Practical Common Lisp", in the chapter about various uses of cons
cells: "Treating structures built from cons cells as trees is just about as
natural as treating them as lists.  What is a list of lists, after all, but
another way of thinking of a tree?"  It wasn't that natural for me, though:
I spent several hours over a sheet of paper, drawing bunches of cons cells
pointing at each other and trying to figure out how they represent trees in
a "left child/right sibling" way.  And I yearned for the OCaml type system
where I could simply write:

   type 'a tree = Leaf | Node of 'a * 'a tree list

which not only is simpler, but has the advantage of clearly describing what
a tree really is.
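One payoff of that one-line declaration can be sketched as follows (`size` is my own illustrative function, not from the post): code over the tree tends to mirror the type's two constructors, and the compiler checks that both are handled.

```ocaml
type 'a tree = Leaf | Node of 'a * 'a tree list

(* The function's shape follows the type: one case per constructor.
   Leaving either case out would draw a non-exhaustive-match warning. *)
let rec size = function
  | Leaf -> 0
  | Node (_, children) ->
      1 + List.fold_left (fun acc child -> acc + size child) 0 children

let () =
  let t = Node (1, [Node (2, []); Node (3, [Node (4, [])])]) in
  Printf.printf "%d\n" (size t)   (* prints 4 *)
```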

And this is one of the points I'd like to make:  When you program in ML,
you think about the type of your data.  You just don't need to declare it
yourself, since the compiler infers that for you; but the fact that you
think of it helps you better understand what you can and cannot do with the
data.  In fact, before I write an OCaml function, I often take the time to
think what type of data it should take and what type it should return, and
write that along with a short description of the function's expected
behaviour in a comment.  And it frequently turns out that it works as
expected the very first time it compiles.  (Of course, not every time,
but the ratio is significantly higher than in Lisp.)

A case-study, to better illustrate what I mean:  A friend once gave me
a chess riddle.  In a certain game, White performed the following moves:

   1. f3  2. Kf2  3. Kg3  4. Kh4

and was checkmated by Black's reply on the fourth move.  The task
is to re-create Black's moves.

It took me no more than ten hours to write from scratch an OCaml program
that finds both possible answers by means of brute-force game tree
searching.  I started with modelling the subset of chess that
can be played within the first four moves (without castling and
en passant, since I felt those wouldn't be necessary):

   type piece = King | Queen | Rook | Bishop | Knight | Pawn
   and field = (piece * who) option
   and chessboard = field array array
   and who = White | Black
   and position = {board : chessboard; who : who}
   and coord = int * int
   and move = coord * coord ;;

Obviously, I needed a function that makes a move: it takes a position
and a move, and returns the position after making this move.  Of
what type should it be?

   (* val make_move : position -> move -> position *)

And so on.  It turns out that for many problems, the constraints enforced 
by the type system make the solution not harder, but easier to write.

But not for *all* problems, and this is where Lisp comes into play.  First
off, there are macros, and macros can be used to create new abstractions in
a very powerful way, as we all know (for example, to convert a given chunk
of Lisp code into continuation-passing style, which is next to unthinkable
in most other languages).  The fact that code is data, and data can be
code, doesn't mix well with the straitjacket of strict static typing.

Besides, static typing sometimes can just be a pain in the arse.  
Think of an event library for event-driven programming (e.g. GUI),
where an event can be just about anything, from a keystroke to a
mouse move to arrival of a network packet to the finishing of a 
background computation to a system signal.  And each of these should be
handled in its own way.  It would be very cumbersome to maintain a
variant type for all these kinds of events and functions that
manipulate it; OTOH, this is a perfect application for Lisp's 
generic function system.

To sum up: for my chess program, I'd favour OCaml.  For a web application, 
I do not hesitate to choose Lisp.

Best regards,
-- 
Daniel 'Nathell' Janus, GG #1631668, ············@nathell.korpus.pl
"Though a program be but three lines long, someday it will have to be
maintained."
      -- The Tao of Programming
From: Stefan Nobis
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <878xjoy42l.fsf@snobis.de>
"Javier" <·······@gmail.com> writes:

> I'll put here an example:

That's not the point. Yes, in trivial examples there are quite a few
bugs that a static type system helps with.

But in non-trivial applications the (static) type system is not
enough; you need more tools (like unit/regression tests) to find and
eliminate all the bugs. So if you are writing extensive tests anyway,
such simple type errors would be caught and your static type system
buys you (nearly) nothing... or at least the advantages in this area
are much smaller than your example suggests, and the disadvantages
of the static type system may weigh more heavily.

-- 
Stefan.
From: Thomas A. Russ
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <ymiodskyt1x.fsf@sevak.isi.edu>
"Javier" <·······@gmail.com> writes:

> I'll put here an example:
> 
> (defun sum-list (lst)
>   (cond
>     ((null lst) nil)
>     (t (+ (car lst) (sum-list (cdr lst))))))
> 
> This compiles on SBCL without give up a single warning. But if you try
> to exec it:
> 
> CL-USER> (sum-list (list 1 2 3))
> => Error Argument Y is not a NUMBER: NIL
>    [Condition of type SIMPLE-TYPE-ERROR]
> 
> If you try to write the same function on ML:
> 
> # let rec sum_list lst =
>   match lst with
>       []->[]
>     | h::t -> h + (sum_list t);;
>       Characters 67-83:
>       | h::t -> h + (sum_list t);;
>                 ^^^^^^^^^^^^^^^^
> => This expression has type int but is here used with type 'a list
> 
> The compiler inmediatly gives you an error and do not let you to
> compile and use it.

OK, I must admit that I'm not really up on ML, but does the ML
equivalent of the following slightly modified version of SUM-LIST
compile properly, or does it give an error as well:

(defun sum-list (lst)
   (cond ((null lst) nil)
         ((null (cdr lst)) (car lst))
         (t (+ (car lst) (sum-list (cdr lst))))))


(sum-list '(1 2 3))  => 6
(sum-list '(2))      => 2
(sum-list '())       => NIL


-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Javier
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160515504.807134.206240@b28g2000cwb.googlegroups.com>
Thomas A. Russ wrote:
> OK, I must admit that I'm not really up on ML, but does the ML
> equivalent of the following slightly modified version of SUM-LIST
> compile properly, or does it give an error as well:
>
> (defun sum-list (lst)
>    (cond ((null lst) nil)
>          ((null (cdr lst)) (car lst))
>          (t (+ (car lst) (sum-list (cdr lst))))))
>
>
> (sum-list '(1 2 3))  => 6
> (sum-list '(2))      => 2
> (sum-list '())       => NIL

No, it doesn't, if you translate it expression by expression, because
you are still mixing lists with integers.
But if you want something similar, you can use variant types:

type lisp_value = Nil | Number of int

# let rec sum_list lst = match lst with
  | [] -> 0
  | h::t -> match h with
	Nil -> sum_list t
      | Number n -> n + (sum_list t)

=>         val sum_list : lisp_value list -> int = <fun>

# sum_list [Number 6; Number 3; Nil; Number 9];;
=> - : int = 18

But in ML, Nil makes no sense. It is never possible to accidentally mix
something with a null value, simply because a null value does not exist.
Still, you can simulate null as in the example above if you wish.
From: Jon Harrop
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <452c2084$0$8726$ed2619ec@ptn-nntp-reader02.plus.net>
Javier wrote:
> type lisp_value = Nil | Number of int
> 
> # let rec sum_list lst = match lst with
>   | [] -> 0
>   | h::t -> match h with
> Nil -> sum_list t
>       | Number n -> n + (sum_list t)

Leverage pattern matching:

# let rec sum_list = function
    | [] -> 0
    | None :: t -> sum_list t
    | Some n :: t -> n + sum_list t;;
val sum_list : int option list -> int = <fun>

> But on ML, Nil has nosense. It is never possible to accidentally mix
> something with a null value, just because a null value does not exists.
> Elsewhere, you can simulate null like the example above if you wish.

Yes. If you want an optional value then you use an option type (as I did).
Most of the time, you don't want an option value.

That's one of the things I hate about Java and C#...

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Isaac Gouy
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160600256.467186.325870@h48g2000cwc.googlegroups.com>
Jon Harrop wrote:
> Javier wrote:
> > type lisp_value = Nil | Number of int
> >
> > # let rec sum_list lst = match lst with
> >   | [] -> 0
> >   | h::t -> match h with
> > Nil -> sum_list t
> >       | Number n -> n + (sum_list t)
>
> Leverage pattern matching:
>
> # let rec sum_list = function
>     | [] -> 0
>     | None :: t -> sum_list t
>     | Some n :: t -> n + sum_list t;;
> val sum_list : int option list -> int = <fun>
>
> > But on ML, Nil has nosense. It is never possible to accidentally mix
> > something with a null value, just because a null value does not exists.
> > Elsewhere, you can simulate null like the example above if you wish.
>
> Yes. If you want an optional value then you use an option type (as I did).
> Most of the time, you don't want an option value.
>
> That's one of the things I hate about Java and C#...

Maybe you haven't caught up with Java annotations yet... things change.

>
> --
> Dr Jon D Harrop, Flying Frog Consultancy
> Objective CAML for Scientists
> http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Rahul Jain
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <87lknmft90.fsf@nyct.net>
"Isaac Gouy" <·····@yahoo.com> writes:

> Jon Harrop wrote:
>> Yes. If you want an optional value then you use an option type (as I did).
>> Most of the time, you don't want an option value.
>>
>> That's one of the things I hate about Java and C#...
>
> Maybe you haven't caught up with Java annotations yet... things change.

The compiler won't prevent you from passing in a null because of any
annotation I've seen.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Henry Bigelow
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160631930.793081.114760@m73g2000cwd.googlegroups.com>
hi everyone,
  just wanted to say thanks for the posts.  very enlightening!

would it be possible to write a system of macros, say, to make
available the abstract, recursive, inferred, variant etc. yet static
typesystem of O'Caml in lisp?  say, in the way CLOS was an extension of
lisp to objects.

or is there a reason why this is impossible to do in lisp?

why is it not possible to 'turn off' static typechecking for individual
functions while compiling o'caml?


it seems the debate centered a lot around the relative benefits of
representational power and efficiency costs, but there wasn't much
mention of the theoretical grounds against having a new language with
the best of both worlds.

thanks,

henry


Jon Harrop wrote:
> Javier wrote:
> > type lisp_value = Nil | Number of int
> >
> > # let rec sum_list lst = match lst with
> >   | [] -> 0
> >   | h::t -> match h with
> > Nil -> sum_list t
> >       | Number n -> n + (sum_list t)
>
> Leverage pattern matching:
>
> # let rec sum_list = function
>     | [] -> 0
>     | None :: t -> sum_list t
>     | Some n :: t -> n + sum_list t;;
> val sum_list : int option list -> int = <fun>
>
> > But on ML, Nil has nosense. It is never possible to accidentally mix
> > something with a null value, just because a null value does not exists.
> > Elsewhere, you can simulate null like the example above if you wish.
>
> Yes. If you want an optional value then you use an option type (as I did).
> Most of the time, you don't want an option value.
>
> That's one of the things I hate about Java and C#...
>
> --
> Dr Jon D Harrop, Flying Frog Consultancy
> Objective CAML for Scientists
> http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Raffael Cavallaro
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <2006101201553543658-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2006-10-12 01:45:30 -0400, "Henry Bigelow" <·········@gmail.com> said:

> would it be possible to write a system of macros, say, to make
> available the abstract, recursive, inferred, variant etc. yet static
> typesystem of O'Caml in lisp?  say, in the way CLOS was an extension of
> lisp to objects.

It's already been done; it's called Qi: <http://www.lambdassociates.org>
From: Pascal Costanza
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <4p680qFgbdmsU1@individual.net>
Henry Bigelow wrote:
> hi everyone,
>   just wanted to say thanks for the posts.  very enlightening!
> 
> would it be possible to write a system of macros, say, to make
> available the abstract, recursive, inferred, variant etc. yet static
> typesystem of O'Caml in lisp?  say, in the way CLOS was an extension of
> lisp to objects.
> 
> or is there a reason why this is impossible to do in lisp?

That's not related to Lisp per se. But in the general case, dynamic 
typing and static typing are not compatible. In Common Lisp, there are a 
lot of cases where you can very conveniently change types at runtime. So 
a static check of types would be a great burden. Here is my standard 
example for this:

(defclass person ()
   ((name :initarg :name)))

(let ((p (make-instance 'person :name "Pascal")))
   (eval (read))
   (setf (slot-value p 'address) "Brussels"))

There is no chance to see upfront whether this program will fail or not, 
since the user can redefine the class person when (eval (read)) is 
executed. At that time, the slot 'address could be added to the class 
person. This is a contrived example, but similar things are indeed 
actually quite useful in a certain class of programs. (Think of 
frame-based knowledge representation systems, for example.)

On the other hand, in a statically typed language (with a sufficiently 
interesting type system, like Hindley-Milner type systems), you can do 
things like dispatch on the expected return type of a function. In a 
dynamically typed language, you don't have any guarantees about future 
computations, but in a statically typed language you can ensure that only 
certain types (or a certain type) are returned from a function.

So in the extreme case, dynamically typed and statically typed languages 
offer different kinds of expressiveness, and depending on what you want 
to do, the one approach is indeed better suited for a problem at hand 
than the other. If you don't want to take advantage of the specific 
advantages, then it's rather just a matter of taste.

All other claims about static vs. dynamic types are bogus and unfounded, 
and based on anecdotal evidence at best. Especially the typical claims 
wrt efficiency and safety are unfounded. These claims are typically 
demonstrated on small toy programs, and this doesn't prove anything.

> why is it not possible to 'turn off' static typechecking for individual
> functions while compiling o'caml?

Because this would change the semantics of the language.


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Jon Harrop
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <453042e1$0$8751$ed2619ec@ptn-nntp-reader02.plus.net>
Henry Bigelow wrote:
> would it be possible to write a system of macros, say, to make
> available the abstract, recursive, inferred, variant etc. yet static
> typesystem of O'Caml in lisp?  say, in the way CLOS was an extension of
> lisp to objects.
> 
> or is there a reason why this is impossible to do in lisp?

That is entirely possible to do in Lisp. Indeed, the first ML
implementations were written in Lisp. However, you can't recover the
performance of OCaml without writing an OCaml compiler, in which case you
might as well write the compiler in OCaml and not Lisp.

It would probably be less work to write a Lisp compiler in OCaml that
targeted OCaml rather than assembler.

> why is it not possible to 'turn off' static typechecking for individual
> functions while compiling o'caml?

There are ways to circumvent the type system in OCaml. The Obj module
provides a function called "magic" with the type 'a -> 'b that lets you
alter the type of any value (including functions). However, it is very
difficult to use correctly without an intimate understanding of the
compiler and, consequently, is really only for use by compiler writers.
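A minimal sketch of what circumventing the type checker with Obj.magic looks like (the values and bindings here are illustrative, not from the thread):

```ocaml
(* Obj.magic : 'a -> 'b reinterprets a value at any type, with no check.
   The cast below is harmless only because the value really is an int
   underneath; nothing verifies this. *)
let x : int = Obj.magic 42

(* This also compiles, but y is now a string's heap representation read
   as an int -- exactly the unsoundness only compiler writers should
   risk. *)
let y : int = Obj.magic "hello"

let () = Printf.printf "x = %d\n" x
```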

> it seems the debate centered a lot around the relative benefits of
> representational power and efficiency costs, but there wasn't much
> mention of the theoretical grounds against having a new language with
> the best of both worlds.

Perhaps it would be beneficial to make OCaml and Lisp more interoperable.
From my point of view, Lisp doesn't save enough effort on anything that it
would be worth the effort to "drop into Lisp". I'm more interested in
interoperability with things like .NET, which the F# language provides...

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Pascal Costanza
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <4pbgcoFi3qujU1@individual.net>
Jon Harrop wrote:

> From my point of view, Lisp doesn't save enough effort on anything that it
> would be worth the effort to "drop into Lisp". I'm more interested in
> interoperability with things like .NET, which the F# language provides...

If this means that you are clearly not interested in Lisp, what are you 
doing here in comp.lang.lisp, then?


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Raffael Cavallaro
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <2006101618075911272-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2006-10-14 03:02:57 -0400, Pascal Costanza <··@p-cos.net> said:

> If this means that you are clearly not interested in Lisp, what are you 
> doing here in comp.lang.lisp, then?

Like most missionaries he's trying to civilize the natives. Problem is, 
the natives hereabouts are not benighted, are aware of the 
alternatives, and have made a conscious choice to use a dynamically 
typed language despite the supposed superiority of static typing.
From: Thomas A. Russ
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <ymi3b9nyi7z.fsf@sevak.isi.edu>
Raffael Cavallaro <················@pas-d'espam-s'il-vous-plait-mac.com> writes:

> On 2006-10-14 03:02:57 -0400, Pascal Costanza <··@p-cos.net> said:
> 
> > If this means that you are clearly not interested in Lisp, what are
> > you doing here in comp.lang.lisp, then?
> 
> 
> Like most missionaries he's trying to civilize the natives. Problem is,
> the natives hereabouts are not benighted, are aware of the alternatives,
> and have made a conscious choice to use a dynamically typed language
> despite the supposed superiority of static typing.

Oh dear.
Does this mean we are not just heathen, but heretical heathen?


-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Raffael Cavallaro
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <2006101700365727544-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2006-10-16 19:05:36 -0400, ···@sevak.isi.edu (Thomas A. Russ) said:

> Oh dear.
> Does this mean we are not just heathen, but heretical heathen?

A compiler inquisition is imminent. Apostates like yourself will be 
stretched on the rack until they concede the divine nature and 
universal applicability of Hindley-Milner type systems.
From: Alan Crowe
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <86iris2snp.fsf@cawtech.freeserve.co.uk>
"Javier" <·······@gmail.com> writes:

> Thomas A. Russ wrote:
> 
> > > In respect to word size, you are right, dynamic languages allow you
> > > to write less and more standard code. But the cost of this is that the
> > > rest of the system may be unsafe if you don't declare things.
> >
> > Well, safety is always relative.  Just because there is compile-time
> > type safety doesn't help you when there is interactive input.  Nor does
> > it protect you against algorithmic bugs, divide by zero (unless you make
> > rather heroic assumptions about the cleverness of the type system and
> > what it can figure out.)
> 
> I'll put here an example:
> 
> (defun sum-list (lst)
>   (cond
>     ((null lst) nil)
>     (t (+ (car lst) (sum-list (cdr lst))))))
> 

Imagine that the compiler spots this and you rush to fix it
as the telephone rings.

(defun sum-list (list)
  (cond
    ((null list) 1)
    (t (+ (car list)(sum-list (cdr list))))))

After the phone call you compile it; since it passes the
static type checks you press on to the next piece of code.
Have you won or lost? Why? 
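The same trap survives a Hindley-Milner type checker; an OCaml transliteration of the hasty "fix" (a sketch, not from the thread) compiles cleanly:

```ocaml
(* This typechecks -- inferred as int list -> int -- yet it is wrong:
   the base case should be 0, not 1. *)
let rec sum_list = function
  | [] -> 1
  | x :: rest -> x + sum_list rest

(* 5 + 7 should give 12, but the well-typed function returns 13. *)
let () = Printf.printf "%d\n" (sum_list [5; 7])
```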

I've been working on some code so that I can do

CL-USER> (defun sum-list (lst)
           (declare (test (((5 7)) 12)))
           (cond
             ((null lst) nil)
             (t (+ (car lst) (sum-list (cdr lst))))))

arglist = ((5 7)), required result = 12, actual result NIL.
((NESTUIT::UNEXPECTED-SIGNAL #<SIMPLE-TYPE-ERROR {481F15ED}>)) 
0.00% passed.
SUM-LIST

CL-USER> (defun sum-list (lst)
           (declare (test (((5 7)) 12)))
           (cond
             ((null lst) 1)
             (t (+ (car lst) (sum-list (cdr lst))))))

arglist = ((5 7)), required result = 12, actual result 13.
((NESTUIT::WRONG-ANSWER :EXPECTED 12 :RECEIVED 13)) 
0.00% passed.
SUM-LIST

CL-USER> (defun sum-list (lst)
           (declare (test (((5 7)) 12)))
           (cond
             ((null lst) 0)
             (t (+ (car lst) (sum-list (cdr lst))))))

arglist = ((5 7)), required result = 12, actual result 12.
All passed.
SUM-LIST

It shadows defun, uses the declaration declaration to tell
CL about TEST, and uses eval-when to get the declared tests
run at compile time.

The code is online at

   http://www.cawtech.demon.co.uk/nestuit/current.lisp

so that you can see that I really have conforming code that
does this, but I've been tinkering and it is in a mess with
bits broken, and I've got three bits of documentation, all
obsolete, all disagreeing.

If I press on with this, will I really use it, will I find
it worthwhile?

I don't think it is possible to think clearly about this
issue if you use the word "correct" for code that passes the
static type checks. I propose "typeck" as a neologism for
code that passes the static type checks. You need a
vocabulary that permits you to say that

    (defun sum-list (list)
      (cond
        ((null list) 1)
        (t (+ (car list)(sum-list (cdr list))))))

is typeck but incorrect.

If you are stuck with a terminology that forces you to say
that it is correct but incorrect, you will just get in a
muddle.

Alan Crowe
Edinburgh
Scotland
From: Javier
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160513967.116743.28120@m7g2000cwm.googlegroups.com>
Alan Crowe wrote:

> I've been working on some code so that I can do
> The code is online at
>
>    http://www.cawtech.demon.co.uk/nestuit/current.lisp

This is very nice, thanks for the code.

> I don't think it is possible to think clearly about this
> issue if you use the word "correct" for code that passes the
> static type checks. I propose "typeck" as a neologism for
> code that passes the static type checks. You need a
> vocabulary that permits you to say that
>
>     (defun sum-list (list)
>       (cond
>         ((null list) 1)
>         (t (+ (car list)(sum-list (cdr list))))))
>
> is typeck but incorrect.


Yes, you are right. But if you eliminate both null exceptions and type
errors, I think that you are probably going to avoid at least 50% of
the bugs.
Your test system can really help to avoid errors of this kind.
From: Wolfram Fenske
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160541696.700997.274850@e3g2000cwe.googlegroups.com>
"Javier" <·······@gmail.com> writes:

> Alan Crowe wrote:

[...]

>> I don't think it is possible to think clearly about this
>> issue if you use the word "correct" for code that passes the
>> static type checks. I propose "typeck" as a neologism for
>> code that passes the static type checks. You need a
>> vocabulary that permits you to say that
>>
>>     (defun sum-list (list)
>>       (cond
>>         ((null list) 1)
>>         (t (+ (car list)(sum-list (cdr list))))))
>>
>> is typeck but incorrect.
>
>
> Yes, you are right. But if you eliminate both null exceptions and type
> errors, I think that you are probably going to avoid at least 50% of
> the bugs.

Null-pointer exceptions and other memory-related errors certainly
account for a huge part of errors in C/C++ programs.  So having a
garbage collector is a huge win, IMO.  But that's a little beside the
point since both OCaml and Lisp (and many others) are garbage
collected.

When it comes to type errors I'm far from convinced they account for
that many errors in dynamically typed languages.  I do a lot of
programming in Python and I hardly ever find myself making type
errors.  Most errors I make are semantic errors and nothing can help
me with that.  In fact I found OCaml's type checking was getting in my
way most of the time because it didn't allow me to store objects of
different types in a list, for example.  But in languages like C or
Java it's much worse because a significant part of the code is just
declaring variables and their types.  It's really annoying.  I think
people put way too much stock in statically verifiable type-safety.

> Your test system can really help to avoid errors of this kind.

Yes, testing is the way to go.  In my experience type errors are
discovered rather quickly and can easily be solved; semantic errors
are more difficult to detect.  That's why you need tests, whether you
work in a statically typed language or not.


Wolfram
From: Javier
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160573714.998692.147990@m7g2000cwm.googlegroups.com>
Wolfram Fenske wrote:

> When it comes to type errors I'm far from convinced they account for
> that many errors in dynamically typed languages.  I do a lot of
> programming in Python and I hardly ever find myself making type
> errors.  Most errors I make are semantic errors and nothing can help
> me with that.

I don't fully agree. When programming in Lisp (and I don't actually do
a lot of programming in Lisp), I find that most of my bugs come from
using a value of one type where the algorithm expected a similar value
of a different type.
For example, I often pass a list of lists but forget to enclose the
data in that inner list.

CL-USER> (defun get-x (obj) (assoc 'x obj))
GET-X
CL-USER> (defun mul-x (obj) (* (get-x obj) 5))
MUL-X
CL-USER> (mul-x '((x 3)))
Argument X is not a NUMBER: (X 3)
   [Condition of type SIMPLE-TYPE-ERROR]


The bad thing here is that the compiler was not able to detect that
(get-x obj) is never going to return a number, so mul-x is broken. In
ML this is simply impossible; your code won't compile.
I think this kind of error is pretty common. At least I am continuously
making them, and it is very possible that some of them would slip past
any further check if the algorithm is longer and not so evident at
first glance.
Yes, the test system by Alan Crowe may be helpful in this case, but
then you must know what to test! There are times when the algorithm is
so complicated that it is very difficult to take into account all the
possible type combinations. In ML, the compiler does this for you, so
you only worry about the correct value and not the correct type.

> In fact I found OCaml's type checking was getting in my
> way most of the time because it didn't allow me to store objects of
> different types in a list, for example.

It does allow you if you intentionally desire it:

type to_store = Number of int | String of string | Anythingelse
let my_list = [Number 5; Anythingelse; Number 0; String "hello"];;
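For completeness, a sketch of how such a heterogeneous list is consumed: pattern matching on the constructors (the declaration is repeated so the fragment stands alone):

```ocaml
type to_store = Number of int | String of string | Anythingelse
let my_list = [Number 5; Anythingelse; Number 0; String "hello"]

(* Sum only the Number elements; the other constructors are skipped. *)
let total =
  List.fold_left
    (fun acc item -> match item with Number n -> acc + n | _ -> acc)
    0 my_list

let () = Printf.printf "total = %d\n" total
```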

> But in languages like C or
> Java it's much worse because a significant part of the code is just
> declaring variables and their types.  It's really annoying.  I think
> people put way too much stock in statically verifiable type-safety.

I don't think so. In the end, you are also typechecking your Lisp
program, or you'll find serious bugs sooner or later (probably sooner,
but sometimes too late). The fact that you don't need to declare
types doesn't mean that you are free to ignore them, in most cases.
(And in practice, you end up physically declaring things in Lisp for
increased performance, so OCaml is more concise, shorter and usually
faster, so I don't see any gain in this respect for Lisp).
From: Rahul Jain
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <87pscyhmrd.fsf@nyct.net>
"Javier" <·······@gmail.com> writes:

> (And in practice, you end up physically declaring things in Lisp for
> increased performance, so OCaml is more concise, shorter and usually
> faster, so I don't see any gain in this respect for Lisp).

You end up declaring just as much as you would in Ocaml, as I've demonstrated.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Javier
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160594076.591584.86120@e3g2000cwe.googlegroups.com>
Rahul Jain wrote:

> "Javier" <·······@gmail.com> writes:
>
> > (And in practice, you end up physically declaring things in Lisp for
> > increased performance, so OCaml is more concise, shorter and usually
> > faster, so I don't see any gain in this respect for Lisp).
>
> You end up declaring just as much as you would in Ocaml, as I've demonstrated.

For example, with the Fibonacci function, SBCL does not optimize unless
you declare this:

(defun fib (n)
  (declare (fixnum n))
  (the fixnum
    (if (< n 2)
        1
        (+ (fib (- n 1)) (fib (- n 2))))))


If you write this function instead:

(defun fib (n)
  (if (< n 2)
      1
      (+ (fib (- n 1)) (fib (- n 2)))))

and disassemble both, you can clearly see that the produced code
is much less optimized in comparison.
But if you write the same function in ML, you always get the optimized
one, and I have not declared a single type:

let rec fib n =  if n < 2 then 1 else (fib (n-1)) + (fib (n-2));;
From: Rahul Jain
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <87u02afu4x.fsf@nyct.net>
"Javier" <·······@gmail.com> writes:

> Rahul Jain wrote:
>
>> "Javier" <·······@gmail.com> writes:
>>
>> > (And in practice, you end up physically declaring things in Lisp for
>> > increased performance, so OCaml is more concise, shorter and usually
>> > faster, so I don't see any gain in this respect for Lisp).
>>
>> You end up declaring just as much as you would in Ocaml, as I've demonstrated.
>
> For example, the Fibonacci, SBCL does not optimize if you do not declare
> this:
>
> (defun fib (n)
>   (declare (fixnum n))
>   (the fixnum
>     (if (< n 2)
> 	1
>       (+ (fib (- n 1)) (fib (- n 2))))))

Because here you are tricking the compiler or making a promise that
cannot be inferred from the types involved. This shows that SBCL's type
inference is BETTER than what you get from your MLs.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Pascal Costanza
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <4p4uarFh7tjbU1@individual.net>
Javier wrote:
> Rahul Jain wrote:
> 
>> "Javier" <·······@gmail.com> writes:
>>
>>> (And in practice, you end up physically declaring things in Lisp for
>>> increased performance, so OCaml is more concise, shorter and usually
>>> faster, so I don't see any gain in this respect for Lisp).
>> You end up declaring just as much as you would in Ocaml, as I've demonstrated.
> 
> For example, the Fibonacci, SBCL does not optimize if you do not declare
> this:
> 
> (defun fib (n)
>   (declare (fixnum n))
>   (the fixnum
>     (if (< n 2)
> 	1
>       (+ (fib (- n 1)) (fib (- n 2))))))
> 
> 
> If you write this function instead:
> 
> (defun fib (n)
> 	   (if (< n 2)
> 	       1
> 	       (+ (fib (- n 1)) (fib (- n 2)))))
> 
> and disassemble both, you can clearly see that the produced code
> is much less optimized in comparison.
> But if you write the same function in ML, you always get the optimized
> one, and I have not declared a single type:
> 
> let rec fib n =  if n < 2 then 1 else (fib (n-1)) + (fib (n-2));;

What do you do in ML if you want the _correct_ one?


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Javier
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160595647.361073.230680@i3g2000cwc.googlegroups.com>
Pascal Costanza wrote:

> > let rec fib n =  if n < 2 then 1 else (fib (n-1)) + (fib (n-2));;
>
> What do you do in ML if you want the _correct_ one?

By "the _correct_ one", do you mean the one that does the
calculation for integers and floats with a single codebase?
If that is what you are asking, there are several solutions, but I admit
that they are not as generic as the Lisp one.
Well, this is the advantage of Lisp over ML.
From: Pascal Costanza
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <4p50t8Fha59bU1@individual.net>
Javier wrote:
> Pascal Costanza wrote:
> 
>>> let rec fib n =  if n < 2 then 1 else (fib (n-1)) + (fib (n-2));;
>> What do you do in ML if you want the _correct_ one?
> 
> By "the _correct_ one", do you mean the one that does the
> calculation for integers and floats with a single codebase?

Yes, especially for integers that don't arbitrarily wrap around or lead 
to runtime exceptions when they become too large. That's much more 
interesting than the purported safety you get from static type systems.
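The wraparound in question is easy to demonstrate: in OCaml the native int overflows silently (a sketch; Common Lisp would instead promote the result to a bignum):

```ocaml
(* Native OCaml ints wrap around silently at the word boundary;
   no exception is raised and no bignum promotion happens. *)
let () = assert (max_int + 1 = min_int)
let () = Printf.printf "max_int + 1 = %d\n" (max_int + 1)
```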

> If that is what you ask, there are several solutions, but I admit that
> they are not as generic as the Lisp one.
> Well, this is the advantage of Lisp over ML.


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Jon Harrop
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <452d548c$0$8734$ed2619ec@ptn-nntp-reader02.plus.net>
Pascal Costanza wrote:
> Javier wrote:
>> Pascal Costanza wrote:
>> By "the _correct_ one", do you mean the one that does the
>> calculation for integers and floats with a single codebase?
> 
> Yes, especially for integers that don't arbitrarily wrap around or lead
> to runtime exceptions when they become too large. That's much more
> interesting than the purported safety you get from static type systems.

No it isn't. A huge number of bugs get caught by static type systems and
very few bugs are caused by integer overflows.

>> If that is what you ask, there are several solutions, but I admit that
>> they are not as generic as the Lisp one.
>> Well, this is the advantage of Lisp over ML.

If you use SML and compile with MLton then it will use big ints if you give
your fib function a big int. With SML/NJ or F# you can add a single type
annotation for bigint.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Bill Atkins
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <m2bqoi7foj.fsf@bertrand.local>
Jon Harrop <···@ffconsultancy.com> writes:

> No it isn't. A huge number of bugs get caught by static type systems and
> very few bugs are caused by integer overflows.

But bugs of the former kind are trivially caught with testing, while
bugs of the latter type are frequently never caught at all...
From: Jon Harrop
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <452d5748$0$8734$ed2619ec@ptn-nntp-reader02.plus.net>
Bill Atkins wrote:
> Jon Harrop <···@ffconsultancy.com> writes:
>> No it isn't. A huge number of bugs get caught by static type systems and
>> very few bugs are caused by integer overflows.
> 
> But bugs of the former kind are trivially caught with testing,

Unit testing is a poor substitute for static analysis in the same way
that "plugging a few numbers in" is a poor substitute for mathematical
proof.

> while bugs of the latter type are frequently never caught at all...

They rarely arise.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Wolfram Fenske
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160613419.852950.295450@k70g2000cwa.googlegroups.com>
Jon Harrop <···@ffconsultancy.com> writes:

> Bill Atkins wrote:
>> Jon Harrop <···@ffconsultancy.com> writes:
>>> No it isn't. A huge number of bugs get caught by static type systems and
>>> very few bugs are caused by integer overflows.
>>
>> But bugs of the former kind are trivially caught with testing,
>
> Unit testing is a poor substitute for static analysis in the same way
> that "plugging a few numbers in" is a poor substitute for mathematical
> proof.

I'm not so sure about that.  But it's certainly true the other way
around: a large class of errors cannot be detected statically.  That's
where unit tests come in.


Wolfram
From: Jon Harrop
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <452da747$0$8719$ed2619ec@ptn-nntp-reader02.plus.net>
Wolfram Fenske wrote:
> I'm not so sure about that.  But it's certainly true the other way
> around: a large class of errors cannot be detected statically.  That's
> where unit tests come in.

Sure, a different large class of errors cannot be detected with unit tests.
That's why unit testing and static analysis are good complements.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Pascal Costanza
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <4p5894Fh2ouiU2@individual.net>
Jon Harrop wrote:
> Bill Atkins wrote:
>> Jon Harrop <···@ffconsultancy.com> writes:
>>> No it isn't. A huge number of bugs get caught by static type systems and
>>> very few bugs are caused by integer overflows.
>> But bugs of the former kind are trivially caught with testing,
> 
> Unit testing is a poor substitute for static analysis in the same way
> that "plugging a few numbers in" is a poor substitute for mathematical
> proof.
> 
>> while bugs of the latter type are frequently never caught at all...
> 
> They rarely arise.

But when they do, you're screwed.


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Bill Atkins
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <m2u029tuel.fsf@weedle-24.dynamic.rpi.edu>
Jon Harrop <···@ffconsultancy.com> writes:

> Unit testing is a poor substitute for static analysis in the same way
> that "plugging a few numbers in" is a poor substitute for mathematical
> proof.

I'm not necessarily talking about unit testing.  Lisp programmers will
almost always play around with functions after writing them, to make
sure that they work and to look for improvements.  The mistake in the
code we're discussing would be caught immediately.  Even scanning it
with your eyes would draw attention to the bug right away.

> They rarely arise.

Ah, well that settles it for me.

But when they do arise (which apparently happens "rarely," for reasons
yet unexplained), they can be pretty ugly: 

http://sunsolve.sun.com/search/document.do?assetkey=1-26-45525-1
From: Jon Harrop
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <452eaabb$0$8729$ed2619ec@ptn-nntp-reader02.plus.net>
Bill Atkins wrote:
> Jon Harrop <···@ffconsultancy.com> writes:
>> Unit testing is a poor substitute for static analysis in the same way
>> that "plugging a few numbers in" is a poor substitute for mathematical
>> proof.
> 
> I'm not necessarily talking about unit testing.  Lisp programmers will
> almost always play around with functions after writing them, to make
> sure that they work and to look for improvements.

That is also not a substitute for mathematical proof.

> The mistake in the code we're discussing would be caught immediately.

Testing a function manually using a REPL and finding an error is
not "immediate" compared to running it through a type checker that takes a
fraction of a second to find the bug.

> Even scanning it with your eyes would draw attention to the bug right
> away. 

If that were true, people would not have spent decades developing ever more
sophisticated forms of static analysis to catch errors.

> But when they do arise (which apparently happens "rarely," for reasons
> yet unexplained),
>
> they can be pretty ugly: 
> 
> http://sunsolve.sun.com/search/document.do?assetkey=1-26-45525-1

I've had more problems with floating point precision than with integer
overflow.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Bill Atkins
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <m2d58w3mcp.fsf@bertrand.local>
Jon Harrop <···@ffconsultancy.com> writes:

> Bill Atkins wrote:
>> Jon Harrop <···@ffconsultancy.com> writes:
>>> Unit testing is a poor substitute for static analysis in the same way
>>> that "plugging a few numbers in" is a poor substitute for mathematical
>>> proof.
>> 
>> I'm not necessarily talking about unit testing.  Lisp programmers will
>> almost always play around with functions after writing them, to make
>> sure that they work and to look for improvements.
>
> That is also not a substitute for mathematical proof.

And I'm not saying that it is.  Static typing is not a substitute for
a mathematical proof that your program will do what you intend -
merely that the compiler is happy with the way you're using types.

>> The mistake in the code we're discussing would be caught immediately.
>
> Testing a function manually using a REPL and finding an error is
> not "immediate" compared to running it through a type checker that takes a
> fraction of a second to find the bug.

Yes, but obviously the type checker will not find semantic bugs as
reliably as testing will.
From: Jon Harrop
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <452f4943$0$8736$ed2619ec@ptn-nntp-reader02.plus.net>
Bill Atkins wrote:
> Jon Harrop <···@ffconsultancy.com> writes:
>> Bill Atkins wrote:
>>> I'm not necessarily talking about unit testing.  Lisp programmers will
>>> almost always play around with functions after writing them, to make
>>> sure that they work and to look for improvements.
>>
>> That is also not a substitute for mathematical proof.
> 
> And I'm not saying that it is.  Static typing is not a substitute for
> a mathematical proof that your program will do what you intend -
> merely that the compiler is happy with the way you're using types.

It is a proof none the less.

>>> The mistake in the code we're discussing would be caught immediately.
>>
>> Testing a function manually using a REPL and finding an error is
>> not "immediate" compared to running it through a type checker that takes
>> a fraction of a second to find the bug.
> 
> Yes, but obviously the type checker will not find semantic bugs as
> reliably as testing will.

Again, type checking is a form of testing. I'd also contest that playing in
a REPL will find bugs "more reliably" than using static analysis.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Bill Atkins
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <m264eo42eq.fsf@weedle-24.dynamic.rpi.edu>
Jon Harrop <···@ffconsultancy.com> writes:

> Again, type checking is a form of testing. I'd also contest that playing in
> a REPL will find bugs "more reliably" than using static analysis.

Assuming you have perfect tests, you can catch any bug in a function.

Assuming you have perfect static type checking, you can only catch
type-related bugs in a function.

Which is a more reliable way of making sure your code does what you want?
From: Jon Harrop
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <45303901$0$8721$ed2619ec@ptn-nntp-reader02.plus.net>
Bill Atkins wrote:
> Jon Harrop <···@ffconsultancy.com> writes:
>> Again, type checking is a form of testing. I'd also contest that playing
>> in a REPL will find bugs "more reliably" than using static analysis.
> 
> Assuming you have perfect tests, you can catch any bug in a function.

Even if you have perfect tests and infinite time to run them, you cannot
catch every bug in a function.

> Assuming you have perfect static type checking, you can only catch
> type-related bugs in a function.

For an arbitrary definition of "type", yes. If you want, you can use theorem
provers to do static analysis.

> Which is a more reliable way of making sure your code does what you want?

Neither. That's my point: the two approaches complement one another.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: jayessay
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <m3y7rkxjdv.fsf@rigel.goldenthreadtech.com>
Jon Harrop <···@ffconsultancy.com> writes:

> Bill Atkins wrote:
> > Jon Harrop <···@ffconsultancy.com> writes:
> >> Bill Atkins wrote:
> >>> I'm not necessarily talking about unit testing.  Lisp programmers will
> >>> almost always play around with functions after writing them, to make
> >>> sure that they work and to look for improvments.
> >>
> >> That is also not a substitute for mathematical proof.
> > 
> > And I'm not saying that it is.  Static typing is not a substitute for
> > a mathematical proof that your program will do what you intend -
> > merely that the compiler is happy with the way you're using types.
> 
> It is a proof none the less.

Actually, depending on your POV, it may not be.  Even so, just because
something is a proof doesn't imply that either the proof or what
it is proving is interesting or nontrivial.


/Jon


> I'd also contest that playing in a REPL will find bugs "more
> reliably" than using static analysis.

Or vice-versa.  So?


/Jon

-- 
'j' - a n t h o n y at romeo/charley/november com
From: Jon Harrop
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <4530395b$0$8721$ed2619ec@ptn-nntp-reader02.plus.net>
jayessay wrote:
> Jon Harrop <···@ffconsultancy.com> writes:
>> It is a proof none the less.
> 
> Actually, depending on your POV, it may not be.  Even so, just because
> something is a proof doesn't imply that either the proof or what
> it is proving is interesting or nontrivial.

Right. The challenge is making the proof as relevant as possible.

>> I'd also contest that playing in a REPL will find bugs "more
>> reliably" than using static analysis.
> 
> Or vice-versa.  So?

I believe Bill's assertion is that manual testing obviates the need for
static analysis, and I disagree.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Rahul Jain
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <87pscyftvn.fsf@nyct.net>
Jon Harrop <···@ffconsultancy.com> writes:

> No it isn't. A huge number of bugs get caught by static type systems and
> very few bugs are caused by integer overflows.

You still don't understand that static type systems offer NOTHING with
regard to what you're talking about. I've shown how Lisp's dynamic
typing gets BETTER results than ML's static type system ever could,
even in theory.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Thomas Lindgren
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <87y7rmebfn.fsf@dev.null>
Jon Harrop <···@ffconsultancy.com> writes:

> Pascal Costanza wrote:
> > Javier wrote:
> >> Pascal Costanza ha escrito:
> >> May I understand for "the _correct_ one" the one that do the
> >> calculation for integers and floats with a single codebase?
> > 
> > Yes, especially for integers that don't arbitrarily wrap around or lead
> > to runtime exceptions when they become too large. That's much more
> > interesting than the purported safety you get from static type systems.
> 
> No it isn't. A huge number of bugs get caught by static type systems and
> very few bugs are caused by integer overflows.

I think I've heard the static-vs-dynamic argument about three hundred
times by now. So instead of ringing the bell for round 301, let me
suggest a new approach: Isn't it high time that someone actually
studied and measured this? Solid methodology and concrete numbers
rather than bright-eyed testimonials and overstretched anecdote?
Anyone? Gentlemen, start your studies. Please.

Best,
                        Thomas
-- 
Thomas Lindgren	

"Ever tried. Ever failed. No matter. Try again. Fail again. Fail better."
From: Javier
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160611574.804561.207930@c28g2000cwb.googlegroups.com>
Thomas Lindgren ha escrito:

> Jon Harrop <···@ffconsultancy.com> writes:
>
> > Pascal Costanza wrote:
> > > Javier wrote:
> > >> Pascal Costanza ha escrito:
> > >> May I understand for "the _correct_ one" the one that do the
> > >> calculation for integers and floats with a single codebase?
> > >
> > > Yes, especially for integers that don't arbitrarily wrap around or lead
> > > to runtime exceptions when they become too large. That's much more
> > > interesting than the purported safety you get from static type systems.
> >
> > No it isn't. A huge number of bugs get caught by static type systems and
> > very few bugs are caused by integer overflows.
>
> I think I've heard the static-vs-dynamic argument about three hundred
> times by now. So instead of ringing the bell for round 301, let me
> suggest a new approach: Isn't it high time that someone actually
> studied and measured this? Solid methodology and concrete numbers
> rather than bright-eyed testimonials and overstretched anecdote?
> Anyone? Gentlemen, start your studies. Please.

Well, there are at least two facts:

1) ML can't do NIL arithmetic.
2) ML can't accidentally mix types.

Yes, you can still have other bugs (like integer overflows). But it is
not true that merely using Lisp will let you avoid them. So using
Lisp you can have integer overflows and incorrect NIL references, and
can accidentally mix types, while using ML you can only have integer
overflows or other esoteric bugs that also arise in Lisp.
So there you have two things which make ML safer than Lisp. Are these
two advantages not so important? A lot of people think they are, and
that is why they are using ML, Ada, or other strongly typed
languages. There are even people who prefer middle-typed languages like
Java, C#, etc.
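To illustrate point 1 concretely (a hypothetical sketch; `get_x` and
`mul_x` are made-up names):

```ocaml
(* In Lisp, a lookup that returns NIL can flow into arithmetic and only
   fail at run time.  In OCaml the possible absence is part of the type
   (an option), and the compiler rejects code that ignores it. *)
let get_x obj = List.assoc_opt "x" obj

let mul_x obj =
  match get_x obj with
  | Some v -> v * 5
  | None -> 0  (* the compiler forces us to decide the missing case *)

let () =
  assert (mul_x [ ("x", 3) ] = 15);
  assert (mul_x [] = 0)
```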

But there are other facts too:

1) Lisp has macros. (OCaml has them too, but they are a pain and not
widely used.)
2) Lisp can mix types when convenient, for example mixing integers with
bignums or floats.

Some people prefer freedom, others security. Both have costs.
From: Rahul Jain
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <874puafhm6.fsf@nyct.net>
"Javier" <·······@gmail.com> writes:

> Yes, you still can have other bugs (like integer overflows). But it is
> not true that just for using Lisp you are going to avoid them. So using
> Lisp you can have integer overflows and NIL incorrect references, and
> can accidentally mix types [...]

No you can't! Please learn how to read. You can CHOOSE to have those
errors, but that's your own fault for asking for those bugs.

You're either trolling quite well or are completely closed-minded.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Rainer Joswig
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <joswig-EA4AFF.11094614102006@news-europe.giganews.com>
In article <························@c28g2000cwb.googlegroups.com>,
 "Javier" <·······@gmail.com> wrote:

> Thomas Lindgren ha escrito:
> 
> > Jon Harrop <···@ffconsultancy.com> writes:
> >
> > > Pascal Costanza wrote:
> > > > Javier wrote:
> > > >> Pascal Costanza ha escrito:
> > > >> May I understand for "the _correct_ one" the one that do the
> > > >> calculation for integers and floats with a single codebase?
> > > >
> > > > Yes, especially for integers that don't arbitrarily wrap around or lead
> > > > to runtime exceptions when they become too large. That's much more
> > > > interesting than the purported safety you get from static type systems.
> > >
> > > No it isn't. A huge number of bugs get caught by static type systems and
> > > very few bugs are caused by integer overflows.
> >
> > I think I've heard the static-vs-dynamic argument about three hundred
> > times by now. So instead of ringing the bell for round 301, let me
> > suggest a new approach: Isn't it high time that someone actually
> > studied and measured this? Solid methodology and concrete numbers
> > rather than bright-eyed testimonials and overstretched anecdote?
> > Anyone? Gentlemen, start your studies. Please.
> 
> Well, there are at least two facts:
> 
> 1) ML can't do NIL arithmetic.
> 2) ML can't accidentaly mix types.
> 
> Yes, you still can have other bugs (like integer overflows). But it is
> not true that just for using Lisp you are going to avoid them. So using
> Lisp you can have integer overflows and NIL incorrect references, and
> can accidentally mix types, while using ML you can only have integer
> overflows or other exoteric bugs that also arise on Lisp.
> So there you have 2 things which makes ML safer than Lisp. Are these
> two advantages not so important? A lot of people think they are, and
> that is because they are using ML, Ada, or other strongly typed
> languages. Even there are people who prefer middle-typed languages like
> Java, C#, etc.
> 
> But there are other facts too:
> 
> 1) Lisp has macros. (Ocaml does have too, but they are a pain and not
> widely used.)
> 2) Lisp can mix types when convenient. For example, mixing integer with
> big numbers or floats.
> 
> Some people prefer freedom, others security. Both have costs.

http://mitpress.mit.edu/sicp/full-text/sicp/book/node1.html

"Pascal is for building pyramids--imposing, breathtaking, static structures
built by armies pushing heavy blocks into place.
Lisp is for building organisms--imposing, breathtaking, dynamic
structures built by squads fitting fluctuating myriads of simpler
organisms into place."

-- 
http://lispm.dyndns.org/
From: Pascal Costanza
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <4p586kFh2ouiU1@individual.net>
Jon Harrop wrote:
> Pascal Costanza wrote:
>> Javier wrote:
>>> Pascal Costanza ha escrito:
>>> May I understand for "the _correct_ one" the one that do the
>>> calculation for integers and floats with a single codebase?
>> Yes, especially for integers that don't arbitrarily wrap around or lead
>> to runtime exceptions when they become too large. That's much more
>> interesting than the purported safety you get from static type systems.
> 
> No it isn't. A huge number of bugs get caught by static type systems and
> very few bugs are caused by integer overflows.

Interesting. Where is the statistical data to back this up? Or is this 
again one of those fuzzy "beliefs" based on "experience" things?

>>> If that is what you ask, there are several solutions, but I admit that
>>> they are not so much generic as the Lisp one.
>>> Well, this is the advantage of Lisp over ML.
> 
> If you use SML and compile with MLton then it will use big ints if you give
> your fib function a big int. With SML/NJ or F# you can add a single type
> annotation for bigint.

See, I don't have to worry about these things in Common Lisp.


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Camm Maguire
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <54d58x6xtg.fsf@intech19.enhanced.com>
Greetings!

"Javier" <·······@gmail.com> writes:

> Pascal Costanza ha escrito:
> 
> > > let rec fib n =  if n < 2 then 1 else (fib (n-1)) + (fib (n-2));;
> >
> > What do you do in ML if you want the _correct_ one?
> 
> May I understand for "the _correct_ one" the one that do the
> calculation for integers and floats with a single codebase?
> If that is what you ask, there are several solutions, but I admit that
> they are not so much generic as the Lisp one.
> Well, this is the advantage of Lisp over ML.
> 

I think he means one that will not overflow machine integers -- you
can't have 'fully optimized code' that works with arbitrary bignums.
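A small sketch of that trade-off (illustrative only, not a benchmark):

```ocaml
(* Naive Fibonacci on OCaml's native ints: fully optimized machine
   arithmetic, but results silently wrap once they exceed max_int.
   Common Lisp's default integers would promote to bignums instead. *)
let rec fib n = if n < 2 then 1 else fib (n - 1) + fib (n - 2)

let () =
  assert (fib 30 = 1346269);
  (* overflow wraps around; no error is raised: *)
  assert (max_int + 1 = min_int)
```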

Take care,

-- 
Camm Maguire			     			····@enhanced.com
==========================================================================
"The earth is but one country, and mankind its citizens."  --  Baha'u'llah
From: ··············@hotmail.com
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160778393.554781.327640@f16g2000cwb.googlegroups.com>
Camm Maguire wrote:

>
> I think he means one that will not overflow machine integers -- you
> can't have 'fully optimized code' that works with arbitrary bignums.

There's an assumption here that you are on a relatively modern,
"general purpose" architecture. If fixnum overflow traps are supported
in hardware with no visible cost for the fixnum-result case, then one
would not need to give up safety to get 'fully optimized' performance.

Of course, such an architecture would mean that "machine integers" are
pretty much "Lisp integers".
From: Wolfram Fenske
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160610128.680368.11040@e3g2000cwe.googlegroups.com>
"Javier" <·······@gmail.com> writes:

> Wolfram Fenske ha escrito:
>
>> When it comes to type errors I'm far from convinced they account for
>> that many errors in dynamically typed languages.  I do a lot of
>> programming in Python and I hardly ever find myself making type
>> errors.  Most errors I make are semantic errors and nothing can help
>> me with that.
>
> I don't fully agree. When programming on Lisp (and I don't actually do
> a lot of programming on Lisp), I found that most of my bugs come from
> the fact that I try to use one thing when the algorithm was trying to
> do a similar thing but with a different type.
> For example, I usually pass a list of lists, but forget to enclose the
> data on that second list.
>
> CL-USER> (defun get-x (obj) (assoc 'x obj))
> GET-X
> CL-USER> (defun mul-x (obj) (* (get-x obj) 5))
> MUL-X
> CL-USER> (mul-x '((x 3)))
> Argument X is not a NUMBER: (X 3)
>    [Condition of type SIMPLE-TYPE-ERROR]
>
>
> The bad thing here is that the compiler was not able to detect that
> (get-x obj) is never going to return a number, so mul-x is broken. In
> ML this simply cannot happen: your code won't compile.
> I think this kind of error is pretty common. At least I am continuously
> making them, and it is very possible that some of them would slip past
> any further check if the algorithm is longer and less evident at
> first glance.

Type errors may be pretty common, but in my experience they also tend
to show up quickly and can easily be fixed.  So if 50% of your bugs are
type errors and 50% are semantic errors, you'll probably end up
spending 90% of the time fixing the semantic errors.

> Yes, the test system by Alan Crowe may be helpful in this case, but
> then you must know what to test!

Huh?  What do you mean?

> There are times when the algorithm is so complicated that it is very
> difficult to take into account all the possible type combinations. On
> ML, the compiler does this for you, so you only worry about the
> correct value and not the correct type.

If your function is that difficult, figuring out the types is probably
the least of your problems.

>> In fact I found OCaml's type checking was getting in my
>> way most of the time because it didn't allow me to store objects of
>> different types in a list, for example.
>
> It does allow you if you intentionally desire it:
>
> type to_store = Number of int | String of string | Anythingelse
> let my_list = [Number 5; Anythingelse; Number 0; String "hello"];;

Right, but I don't like having to define new types all the time.
After all, not having to do that is *the* major advantage of a dynamic
type system over a static one, IMO.

[...]

>> I think people put way too much stock in statically verifiable
>> type-safety.
>
> I don't think so.

The reason I wrote that is because sometimes I get the feeling that
proponents of statically typed languages think that a program that
passes the type checks is basically bug-free.

> At the end, you are also typechecking your Lisp program or you'll
> find serius bugs soon or late (probably sooner than later, but
> sometimes too late).

If you develop critical systems, you had better have more sophisticated
tests than just a type checker.

> The fact that you don't need to declare types doesn't mean that you
> are free to ignore them, in most cases.

Believe me, I'm well aware of that.

> (And in practice, you end up physically declaring things in Lisp for
> increased performance,

In practice, you end up declaring types in performance critical
functions only.

> so Ocaml is more concise, shorter and

I highly doubt that because OCaml doesn't have macros.  (It does have
a mechanism for extending its syntax but it's not as easy to use as
Common Lisp's macros.)  You may have to write more code in Lisp
because you have to add type declarations but it's quite possible that
macros will more than make up for that.

> usually faster, so I don't see any gain in this respect for Lisp).

You're right, knowing types at compile time is a huge win
performance-wise.  But it only matters for functions that account for
a sufficiently large part of the runtime.  That's why I prefer a
dynamically typed language that lets me declare types when I really
need the performance and otherwise leaves me alone.  It's a trade-off,
really:

  1. How much debugging time does the type system spare you?
  2. How much development time do you have to expend working around
     the type system (e. g. writing your own version of a function
     like Lisp's ASSOC)?
  3. Also: how much development time do you need to spend improving
     the performance of your dynamically typed program?

I think 2. is a bigger factor than 1. so I prefer dynamically typed
languages.  3. might not be an issue at all if I find my program is
efficient enough as it is.


Wolfram

PS: OCaml's approach to static type systems is actually pretty cool.
Because of its type inference mechanism you hardly have to declare any
types, so a lot of the time it's like programming in a dynamically
typed language.  Still, I felt the type system was getting in my way a
lot of the time when it forced me to write more complicated versions
of some functions that would have been much easier (but still
type-safe) in Lisp.
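A tiny sketch of that inference at work (my own example, not from the
original post):

```ocaml
(* No type annotations anywhere; the compiler infers the most general
   ("principal") type on its own:
   val compose : ('a -> 'b) -> ('c -> 'a) -> 'c -> 'b *)
let compose f g x = f (g x)

let () =
  let add1 x = x + 1 in
  let double x = 2 * x in
  (* add1 (double 5) = 11 *)
  assert (compose add1 double 5 = 11)
```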
From: Javier
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160617546.193262.100200@i3g2000cwc.googlegroups.com>
Wolfram Fenske ha escrito:

> Type errors may be prettey common, but in my experience they also tend
> to show up quickly and can easily be fixed.  So if 50% of you bugs are
> type errors and 50% are semantic errors, you'll probably end up
> spending 90% of the time fixing the semantic errors.

Googling, I found these type errors, which are not so evident and did
not show up so quickly:

·······································@common-lisp.net/msg00039.html
http://common-lisp.net/pipermail/mcclim-devel/2005-January/003619.html
·············································@cons.org/msg02985.html
http://paste.lisp.org/display/21870

In fact, it seems to be very common, even on programs which are not in
"alpha" development state.


> > Yes, the test system by Alan Crowe may be helpful in this case, but
> > then you must know what to test!
>
> Huh?  What do you mean?

That you can fail to test every possible condition or value returned
by your functions, especially if it is a very big system.

> > The fact that you don't need to declare types doesn't mean that you
> > are free to ignore them, in most cases.
>
> Believe me, I'm well aware of that.

Sorry, but I don't believe you. :-)
Are you aware of classes? And methods managing types?
You are declaring things all the time. Yes, you are probably not
writing them down (as OCaml users do), but you are declaring them in
your mind. :-)
From: Wolfram Fenske
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160685441.974929.315230@i42g2000cwa.googlegroups.com>
"Javier" <·······@gmail.com> writes:

> Wolfram Fenske ha escrito:
>
>> Type errors may be pretty common, but in my experience they also tend
>> to show up quickly and can easily be fixed.  So if 50% of your bugs are
>> type errors and 50% are semantic errors, you'll probably end up
>> spending 90% of the time fixing the semantic errors.
>
> Googling, I found these type errors, which are not so evident and did
> not show up so quickly:
>
> ·······································@common-lisp.net/msg00039.html
> http://common-lisp.net/pipermail/mcclim-devel/2005-January/003619.html
> ·············································@cons.org/msg02985.html
> http://paste.lisp.org/display/21870

I can't tell how obvious they are because I don't know the code.  At
least in the first example, there are several warnings of the sort:

    WARNING: DEFUN/DEFMACRO: redefining function MAKE-PART

Seems to me the error is something else altogether.

[...]

>> > Yes, the test system by Alan Crowe may be helpfull in this case, but
>> > then you must know what to test!
>>
>> Huh?  What do you mean?
>
> That you can fail to test every possible condition or value returned
> by your functions,

Sure you can.  But what makes you think that a static analysis will
detect every possible failure mode?

> specially if it is a very big system.

But that's not how it works.  You don't write thousands of lines of
code and then run one test to convince yourself everything is OK.  Let
me quote Wikipedia [1]:

    In computer programming, a unit test is a procedure used to
    validate that a particular module of source code is working
    properly.  The procedure is to write test cases for all functions
    and methods so that whenever a change causes a regression, it can
    be quickly identified and fixed.

>> > The fact that you don't need to declare types doesn't mean that you
>> > are free to ignore them, in most cases.
>>
>> Believe me, I'm well aware of that.
>
> Sorry, but I don't believe you. :-)

Yes, you're right: I prefer dynamically typed languages, therefore I
must be stupid.

> Are you aware of classes? And methods managing types?
> You are all the time declaring things. Yes, you are probably not
> writing them (like Ocaml's users), but you are declaring them in your
> mind. :-)

Geez, what a brilliant insight!  Seriously, I was being polite the
first time but apparently you didn't get it.  Go lecture someone else!


Wolfram

Footnotes: 
[1]  <http://en.wikipedia.org/wiki/Unit_test>
From: Jon Harrop
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <452daa19$0$8720$ed2619ec@ptn-nntp-reader02.plus.net>
Wolfram Fenske wrote:
> PS: OCaml's approach to static type systems is actually pretty cool.
> Because of its type inference mechanism you hardly have to declare any
> types, so a lot of the time it's like programming in a dynamically
> typed language.  Still, I felt the type system was getting in my way a
> lot of the time when it forced me to write more complicated versions
> of some functions that would have been much easier (but still
> type-safe) in Lisp, e g..

I'm dying to hear this example... :-)

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Wolfram Fenske
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160683975.591585.11320@e3g2000cwe.googlegroups.com>
Jon Harrop <···@ffconsultancy.com> writes:

> Wolfram Fenske wrote:
>> PS: OCaml's approach to static type systems is actually pretty cool.
>> Because of its type inference mechanism you hardly have to declare any
>> types, so a lot of the time it's like programming in a dynamically
>> typed language.  Still, I felt the type system was getting in my way a
>> lot of the time when it forced me to write more complicated versions
>> of some functions that would have been much easier (but still
>> type-safe) in Lisp, e g..
>
> I'm dying to hear this example... :-)

It's been a while since I last played with OCaml, so I don't remember
a specific example (I know, what a lame thing to say).  But here is
something I have come up with:

Suppose I need a stack.  A generic class seems like a reasonable
choice:

--8<---------------cut here---------------start------------->8---
class ['a] stack =
  object (self)
    val mutable list = ( [] : 'a list )    (* instance variable *)
    method push x =                        (* push method *)
      list <- x :: list
    method pop =                           (* pop method *)
      let result = List.hd list in
      list <- List.tl list;
      result
    method peek =                          (* peek method *)
      List.hd list
    method size =                          (* size method *)
      List.length list
  end;;
--8<---------------cut here---------------end--------------->8---

Now, for debugging purposes I want to log each element I pop off the
stack.  In Python, e. g., I could just say:

--8<---------------cut here---------------start------------->8---
def pop(self):
    result = self.list[0]
    self.list = self.list[1:]
    print repr(result)  # repr returns the representation of its argument
    return result
--8<---------------cut here---------------end--------------->8---

(In Common Lisp, I'd write (prin1 result) where I wrote print
repr(result).)

My OCaml solution is not so easy because I don't know how to write
something like repr or prin1 in OCaml. (But then again, I'm not
really an OCaml hacker.  Maybe you know a good solution?) So I
modified the constructor to accept an additional function tos that
can convert an element into a string.

--8<---------------cut here---------------start------------->8---
class ['a] stack tos =
  object (self)
    val tos = tos
    val mutable list = ( [] : 'a list )    (* instance variable *)
    method push x =                        (* push method *)
      list <- x :: list
    method pop =                           (* pop method *)
      let result = List.hd list in
      list <- List.tl list;
      Printf.printf "%s\n" (tos result);
      result
    method peek =                          (* peek method *)
      List.hd list
    method size =                          (* size method *)
      List.length list
  end;;
--8<---------------cut here---------------end--------------->8---

It would be used like this:

--8<---------------cut here---------------start------------->8---
let _ =
  let i = new stack string_of_int in
  let ls =
    let string_of_tupel t =
      String.concat "" ["("; String.concat ", " [(fst t); (snd t)]; ")"]
    in
    new stack string_of_tupel
  in
  i#push 1;
  i#push 2;
  ls#push ("one", "1");
  ls#push ("two", "2");
  ls#pop;
  ls#pop;
  i#pop;
  i#pop
;;
--8<---------------cut here---------------end--------------->8---

This is not very elegant.

I guess this could be seen as an instance of wanting to do Lisp in
OCaml.  Maybe a seasoned OCaml programmer wouldn't want to do that
kind of thing in the first place.  The thing is, "polymorphic"
functions like repr/prin1 appear everywhere in a dynamically typed
language.  In Gnus, e. g., a news and mail client for Emacs, there are
lots of places where you can either use a regular expression, a list
of regular expressions or a function.  Here is an extract from the
Gnus docs:

--8<---------------cut here---------------start------------->8---
nnimap-split-rule

    New mail found in /nnimap-split-inbox/ will be split according to
    this variable.

    This variable contains a list of lists, where the first element in
    the sublist gives the name of the IMAP mailbox to move articles
    matching the regexp in the second element in the sublist. Got
    that? Neither did I, we need examples.

    (setq nnimap-split-rule
          '(("INBOX.nnimap"
             "^Sender: ············@vic20.globalcom.se")
            ("INBOX.junk"    "^Subject:.*MAKE MONEY")
            ("INBOX.private" "")))

    This will put all articles from the nnimap mailing list into
    mailbox INBOX.nnimap, all articles containing MAKE MONEY in the
    Subject: line into INBOX.junk and everything else in
    INBOX.private.

    [...]

    The first element can also be the symbol junk to indicate that
    matching messages should simply be deleted. Use with care.

    The second element can also be a function. In that case, it will
    be called with the first element of the rule as the argument, in a
    buffer containing the headers of the article. It should return a
    non-nil value if it thinks that the mail belongs in that group.
--8<---------------cut here---------------end--------------->8---

However, this style of programming goes against the grain of a
statically typed language.  I like programming this way, which is why
I prefer dynamically typed languages like Lisp.


Wolfram

PS: I had a look at your Ray tracer language comparison site.  I was
very surprised to read that the Lisp programs were much more verbose
than the OCaml versions.  Something to think about ...  I guess I
should write a pattern matching macro.
From: Jon Harrop
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <452eec56$0$8720$ed2619ec@ptn-nntp-reader02.plus.net>
Wolfram Fenske wrote:
> Suppose I need a stack.  A generic class seems like a reasonable
> choice:
> 
> --8<---------------cut here---------------start------------->8---
> class ['a] stack =
>   object (self)
>     val mutable list = ( [] : 'a list )    (* instance variable *)
>     method push x =                        (* push method *)
>       list <- x :: list
>     method pop =                           (* pop method *)
>       let result = List.hd list in
>       list <- List.tl list;
>       result
>     method peek =                          (* peek method *)
>       List.hd list
>     method size =                          (* size method *)
>       List.length list
>   end;;
> --8<---------------cut here---------------end--------------->8---

No reason to use a generic class when there is already a built-in Stack
module.

> Now, for debugging purposes I want to log each element I pop off the
> stack.  In Python, e. g., I could just say:
> 
> --8<---------------cut here---------------start------------->8---
> def pop(self):
>     result = self.list[0]
>     self.list = self.list[1:]
>     print repr(result)  # repr returns the representation of its argument
>     return result
> --8<---------------cut here---------------end--------------->8---
>
> (In Common Lisp, I'd write (prin1 result) where I wrote print
> repr(result).)
> 
> My OCaml solution is not so easy because I don't know how to write
> something like repr or prin1 in OCaml.

Absolutely. You can't write a generic print function in OCaml because it has
optimised away all of the type information by run time.

> (But then again, I'm not really an OCaml hacker.  Maybe you know a good
> solution?) 

There are several solutions. You can use higher-order functions:

module LoggedStack = struct
  include Stack
  let pop ?(print=fun _ -> ()) stack =
    let elt = Stack.pop stack in
    print elt;
    elt
end;;

or modules (functors) to pass in a suitable print function from another
module:

module type PRINTER = sig
  type t
  val print : t -> unit
end;;

module LoggedStack(Printer : PRINTER) = struct
  include Stack
  let pop stack =
    let elt = Stack.pop stack in
    Printer.print elt;
    elt
end;;
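For completeness, here is one way the functor version might be
instantiated (a sketch; `IntPrinter` and `IntStack` are made-up names,
and the functor is repeated so the snippet stands alone):

```ocaml
module type PRINTER = sig
  type t
  val print : t -> unit
end

(* A Stack whose pop logs each popped element via the supplied module. *)
module LoggedStack (Printer : PRINTER) = struct
  include Stack
  let pop stack =
    let elt = Stack.pop stack in
    Printer.print elt;
    elt
end

(* Instantiate the functor with a concrete element printer. *)
module IntPrinter = struct
  type t = int
  let print n = Printf.printf "popped %d\n" n
end

module IntStack = LoggedStack (IntPrinter)

let () =
  let s = IntStack.create () in
  IntStack.push 1 s;
  IntStack.push 2 s;
  assert (IntStack.pop s = 2);  (* also prints "popped 2" *)
  assert (IntStack.pop s = 1)
```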

> So I 
> modified the constructor to accept an additional function tos that
> can convert an element into a string.
> 
> --8<---------------cut here---------------start------------->8---
> class ['a] stack tos =
>   object (self)
>     val tos = tos
>     val mutable list = ( [] : 'a list )    (* instance variable *)
>     method push x =                        (* push method *)
>       list <- x :: list
>     method pop =                           (* pop method *)
>       let result = List.hd list in
>       list <- List.tl list;
>       Printf.printf "%s\n" (tos result);
>       result
>     method peek =                          (* peek method *)
>       List.hd list
>     method size =                          (* size method *)
>       List.length list
>   end;;
> --8<---------------cut here---------------end--------------->8---
> 
> It would be used like this:
> 
> --8<---------------cut here---------------start------------->8---
> let _ =
>   let i = new stack string_of_int in
>   let ls =
>     let string_of_tupel t =
>       String.concat "" ["("; String.concat ", " [(fst t); (snd t)]; ")"]
>     in
>     new stack string_of_tupel
>   in
>   i#push 1;
>   i#push 2;
>   ls#push ("one", "1");
>   ls#push ("two", "2");
>   ls#pop;
>   ls#pop;
>   i#pop;
>   i#pop
> ;;
> --8<---------------cut here---------------end--------------->8---
> 
> This is not very elegant.

The functor approach makes better use of static typing but a generic print
will not be as elegant in OCaml as it is in Lisp.

> I guess this could be seen as an instance of wanting to do Lisp in
> OCaml.  Maybe a seasoned OCaml programmer wouldn't want to do that
> kind of thing in the first place.

Other MLs (e.g. F#) carry run-time type information and implement print_any,
which does what you want. However, they are as slow as Lisp...

> The thing is, "polymorphic" 
> functions like repr/prin1 appear everywhere in a dynamically typed
> language.  In Gnus, e. g., a news and mail client for Emacs, there are
> lots of places where you can either use a regular expression, a list
> of regular expressions or a function.  Here is an extract from the
> Gnus docs:
> 
> --8<---------------cut here---------------start------------->8---
> nnimap-split-rule
> 
>     New mail found in /nnimap-split-inbox/ will be split according to
>     this variable.
> 
>     This variable contains a list of lists, where the first element in
>     the sublist gives the name of the IMAP mailbox to move articles
>     matching the regexp in the second element in the sublist. Got
>     that? Neither did I, we need examples.
> 
>     (setq nnimap-split-rule
>           '(("INBOX.nnimap"
>              "^Sender: ············@vic20.globalcom.se")
>             ("INBOX.junk"    "^Subject:.*MAKE MONEY")
>             ("INBOX.private" "")))
> 
>     This will put all articles from the nnimap mailing list into
>     mailbox INBOX.nnimap, all articles containing MAKE MONEY in the
>     Subject: line into INBOX.junk and everything else in
>     INBOX.private.
> 
>     [...]
> 
>     The first element can also be the symbol junk to indicate that
>     matching messages should simply be deleted. Use with care.
> 
>     The second element can also be a function. In that case, it will
>     be called with the first element of the rule as the argument, in a
>     buffer containing the headers of the article. It should return a
>     non-nil value if it thinks that the mail belongs in that group.
> --8<---------------cut here---------------end--------------->8---
> 
> However, this style of programming goes against the grain of a
> statically typed language.  I like programming this way, which is why
> I prefer dynamically typed languages like Lisp.

Not at all. When you said "a regular expression, a list of regular
expressions or a function", you were literally enumerating the constructors
of a variant type:

type ('a, 'b) t =
  | RegExp of regexp
  | RegExps of regexp list
  | Fn of ('a -> 'b);; 

Now that we've defined that type you can define your nnimap_split_rule
function to accept a value of this type (you can probably do more by
specifying the type of the function). Then the type is explicit, so you
don't need documentation that says things like "Got that? Neither did
I...", and your types will be automatically and statically checked for you.
Basically, you get machine-verified documentation.
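For instance, a rule-matching function over that type might look like
this (a sketch; the Str module is OCaml's bundled regexp library, and
the `matches`/`headers` names are my own assumptions about how the
Gnus-style rules would map over):

```ocaml
(* Requires linking against the Str library (str.cma). *)
type ('a, 'b) rule =
  | RegExp of Str.regexp
  | RegExps of Str.regexp list
  | Fn of ('a -> 'b)

(* Dispatch on the rule variant: one regexp, a list of them, or a
   predicate function over the headers. *)
let matches (headers : string) : (string, bool) rule -> bool = function
  | RegExp re -> Str.string_match re headers 0
  | RegExps res -> List.exists (fun re -> Str.string_match re headers 0) res
  | Fn f -> f headers

let () =
  let spam = RegExp (Str.regexp "^Subject:.*MAKE MONEY") in
  if matches "Subject: MAKE MONEY FAST" spam then print_endline "junk"
```

The compiler then warns if a caller forgets to handle one of the three
constructors, which is the machine-verified documentation in action.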

> PS: I had a look at your Ray tracer language comparison site.  I was
> very surprised to read that the Lisp programs were much more verbose
> than the OCaml versions.

Indeed, Lisp is about the slowest and most verbose language on that test.
However, I believe the results are quite specific to the task in that
respect. Lisp should do a lot better on tasks where performance is
irrelevant or not CPU-limited and where EVAL is useful. I don't believe
(decent) static typing is ever enough of a hindrance that it will offset
Lisp's missing features, so trying to find tasks well suited to dynamic
typing is a red herring (IMHO).

So if I wanted to make Lisp look relatively better then I'd try to write an
efficient interpreter. I'd like to see a Lisp equivalent of the interpreter
that I've put up here, for example:

  http://www.ffconsultancy.com/free/ocaml/interpreter.html

> Something to think about ...  I guess I should write a pattern matching
> macro. 

Yes. Having no form of pattern matching is just embarrassing. Without variant
types there isn't a lot you can do with pattern matching.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Wolfram Fenske
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160769578.174971.180760@f16g2000cwb.googlegroups.com>
Jon Harrop <···@ffconsultancy.com> writes:

> Wolfram Fenske wrote:
>> Suppose I need a stack.  A generic class seems like a reasonable
>> choice:
>>

[...] (source code snipped)

> No reason to use a generic class when there is already a built-in Stack
> module.

Yeah, yeah, I was only using stuff I found in an OCaml tutorial.
While we're at it, in Lisp I might have gotten away with simply using
CONS cells and the built-in PUSH and POP macros.  This might not be
ideal for a production quality application but it might do for a
prototype.  On a similar note, I see a lot of Python code where
dictionaries (Python-speak for hash tables) are used instead of
classes (Python doesn't have structs so classes would be the only
alternative).  This is probably because Python's syntax for
dictionaries is pretty concise and in a situation where all they want
to do is pass around two or three values between a handful of
functions, people think the overhead of writing a class is just not
worth it.

>> Now, for debugging purposes I want to log each element I pop off the
>> stack.  In Python, e. g., I could just say:

[...] (source code snipped)

>> (In Common Lisp, I'd write (prin1 result) where I wrote print
>> repr(result).)
>>
>> My OCaml solution is not so easy because I don't know how to write
>> something like repr or prin1 in OCaml.
>
> Absolutely. You can't write a generic print function in OCaml
> because it has optimised away all of the type information by run
> time.
>
>> (But then again, I'm not really an OCaml hacker.  Maybe you know a good
>> solution?)
>
> There are several solutions. You can use higher-order functions:
>
> module LoggedStack = struct
>   include Stack
>   let pop ?(print=fun _ -> ()) stack =
>     let elt = Stack.pop stack in
>     print elt;
>     elt
> end;;
>
> or modules (functors) to pass in a suitable print function from another
> module:
>
> module type PRINTER = sig
>   type t
>   val print : t -> unit
> end;;
>
> module LoggedStack(Printer : PRINTER) = struct
>   include Stack
>   let pop stack =
>     let elt = Stack.pop stack in
>     Printer.print elt;
>     elt
> end;;

[...]

> The functor approach makes better use of static typing but a generic print
> will not be as elegant in OCaml as it is in Lisp.

This is my point.  None of these is really "plug & play".  They have
other qualities, like higher efficiency and better support for static
checks.  But I don't value these as much as you do.  IMO, programmer
efficiency is more important a lot of the time.  However, in your
domain (your signature suggests it's scientific computing), things
might be different.  I imagine it involves a lot of number crunching,
using complicated algorithms.  Maybe I'd use OCaml instead of Lisp for
this, too.

>> I guess this could be seen as an instance of wanting to do Lisp in
>> OCaml.  Maybe a seasoned OCaml programmer wouldn't want to do that
>> kind of thing in the first place.
>
> Other MLs (e.g. F#) carry run-time type information and implement
> print_any, which does what you want. However, they are as slow as
> Lisp...

Efficiency again.  Yes, a static type system makes building an
optimizing compiler much easier.  But this is the only real advantage
I see, and it doesn't come for free.

I'd like to see an optimizing Lisp compiler that uses type inference
in the way OCaml does.  I know that SBCL infers types where it can and
uses this during optimization, but I don't know to what extent.
E. g., when OCaml compiles a function like List.fold_left, I assume it
duplicates it for each type signature the function is used on, like a
C++ template.  I wonder if any Lisp compilers do this.

>> The thing is, "polymorphic"
>> functions like repr/prin1 appear everywhere in a dynamically typed
>> language.  In Gnus, e. g., a news and mail client for Emacs, there are
>> lots of places where you can either use a regular expression, a list
>> of regular expressions or a function.  Here is an extract from the
>> Gnus docs:
>>

[...] (quote from Gnus docs snipped)

>> However, this style of programming goes against the grain of a
>> statically typed language.  I like programming this way, which is why
>> I prefer dynamically typed languages like Lisp.
>
> Not at all. When you said "a regular expression, a list of regular
> expressions or a function", you were literally enumerating the constructors
> of a variant type:
>
> type ('a, 'b) t =
>   | RegExp of regexp
>   | RegExps of regexp list
>   | Fn of ('a -> 'b);;
>
> Now that we've defined that type you can define your nnimap_split_rule
> function

(For the record: it's not *my function*.  It's from Gnus, a news
reader for Emacs.  I had nothing to do with it.)

> to accept a value of this type (you can probably do more by
> specifying the type of the function).

I know, but why would I want to write additional source code if I don't
have to?  I assume my programs would look like this:

--8<---------------cut here---------------start------------->8---
type variant =
    Str of string
  | Int of int
;;

let printv arg =
  match arg with
  | Str s -> Printf.printf "%s\n" s
  | Int i -> Printf.printf "%d\n" i
;;

let _ =
  printv (Str "foo");
  printv (Int 3)
;;
--8<---------------cut here---------------end--------------->8---

The explicit type declaration isn't so bad, and a Lisp equivalent of
printv would look very similar because it would also have to
dispatch on the type of its input.  But I imagine having to write (Str
"foo") and (Int 3) all the time would get on my nerves.

> Then the type is explicit, so you don't need documentation that says
> things like "Got that? Neither did I...", and your types will be
> automatically and statically checked for you.  Basically, you get
> machine-verified documentation.

Come on, putting OCaml type declarations into the *end user*
documentation (because this is where the quotation came from) wouldn't
really make things more understandable, would it?

>> PS: I had a look at your Ray tracer language comparison site.  I was
>> very surprised to read that the Lisp programs were much more verbose
>> than the OCaml versions.
>
> Indeed, Lisp is about the slowest and most verbose language on that test.
> However, I believe the results are quite specific to the task in that
> respect.

That's what I thought.

> Lisp should do a lot better on tasks where performance is irrelevant
> or not CPU-limited and where EVAL is useful. I don't believe
> (decent) static typing is ever enough of a hindrance that it will
> offset Lisp's missing features,

Missing features?  OK, it doesn't have pattern matching [1], but what
else?

> so trying to find tasks well suited to dynamic typing is a red
> herring (IMHO).

The way I see it, using a statically typed language instead of a
dynamically typed one has to be justified, not the other way around.
IMO, a static type system is an additional cost most of the time, not
a benefit.  So I'd need a good reason to use a statically typed
language, e. g.  performance is critical or much better library
support.

> So if I wanted to make Lisp look relatively better then I'd try to write an
> efficient interpreter. I'd like to see a Lisp equivalent of the interpreter
> that I've put up here, for example:
>
>   http://www.ffconsultancy.com/free/ocaml/interpreter.html

Using OCaml's built-in lexer and parser support is cheating a bit,
isn't it?  I could just use Common Lisp's READ (reads an s-expression
from stdin) and EVAL and be about done. :-)

>> Something to think about ...  I guess I should write a pattern matching
>> macro.
>
> Yes. Having no form of pattern matching is just embarrassing. Without variant
> types there isn't a lot you can do with pattern matching.

I'm not sure how variant types fit into Lisp.  Something else to think
about ...


Wolfram

Footnotes:
[1]  I'm pretty sure pattern matching could be added using macros.  In
     just under 50 lines I wrote a little macro that allowed me to
     define the good old flatten like this:

         (defun flatten (list)
           (match list
             (()            ())
             (((x . y) . z) (append (flatten (cons x y)) (flatten z)))
             ((x . y)       (cons x (flatten y)))))
     
     That was really fun!
From: David Golden
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <GzSXg.14795$j7.331971@news.indigo.ie>
Wolfram Fenske wrote:

> [1]  I'm pretty sure pattern matching could be added using macros.  

http://common-lisp.net/project/cl-unification/
might be interesting for you.
From: Wolfram Fenske
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160771632.504087.315810@i3g2000cwc.googlegroups.com>
David Golden <············@oceanfree.net> writes:

> Wolfram Fenske wrote:
>
>> [1]  I'm pretty sure pattern matching could be added using macros.
>
> http://common-lisp.net/project/cl-unification/
> might be interesting for you.

Thanks!  I knew somebody must have done this before.  I just didn't
find it right away.  This looks very promising.


Wolfram
From: David Golden
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <G4TXg.14797$j7.332417@news.indigo.ie>
Wolfram Fenske wrote:
> Thanks! 

Hey, don't thank me, thank Marco Antoniotti !
From: Jon Harrop
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <453042e0$0$8751$ed2619ec@ptn-nntp-reader02.plus.net>
Wolfram Fenske wrote:
> Yeah, yeah, I was only using stuff I found in an OCaml tutorial.
> While we're at it, in Lisp I might have gotten away with simply using
> CONS cells and the built-in PUSH and POP macros.  This might not be
> ideal for a production quality application but it might do for a
> prototype.

In ML, you could get away with list literals and pattern matching:

  let empty = [] and push h t = h :: t and pop l = List.hd l, List.tl l

However, they are purely functional.
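Used in practice, the purely functional flavour shows up in the return
values: pop hands back both the top element and the remaining stack
instead of mutating anything (a sketch):

```ocaml
(* A purely functional stack: pop returns (top, rest) rather than
   mutating the stack in place. *)
let empty = []
let push h t = h :: t
let pop l = (List.hd l, List.tl l)

let () =
  let s = push 2 (push 1 empty) in
  let top, rest = pop s in
  Printf.printf "%d %d\n" top (List.length rest)  (* prints "2 1" *)
```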

> On a similar note, I see a lot of Python code where 
> dictionaries (Python-speak for hash tables) are used instead of
> classes (Python doesn't have structs so classes would be the only
> alternative).  This is probably because Python's syntax for
> dictionaries is pretty concise and in a situation where all they want
> to do is pass around two or three values between a handful of
> functions, people think the overhead of writing a class is just not
> worth it.

Classes are generally the wrong tool for the job in OCaml.

>> The functor approach makes better use of static typing but a generic
>> print will not be as elegant in OCaml as it is in Lisp.
> 
> This is my point.  None of these is really "plug & play".  They have
> other qualities, like higher efficiency and better support for static
> checks.  But I don't value these as much as you do.  IMO, programmer
> efficiency is more important a lot of the time.  However, in your
> domain (your signature suggests it's scientific computing), things
> might be different.  I imagine it involves a lot of number crunching,
> using complicated algorithms.  Maybe I'd use OCaml instead of Lisp for
> this, too.

From my point of view, having to write type-specific functions for a tiny
number of generic functions (e.g. print) is a small price to pay for
getting all of your programs checked by the compiler.

>> Other MLs (e.g. F#) carry run-time type information and implement
>> print_any, which does what you want. However, they are as slow as
>> Lisp...
> 
> Efficiency again.  Yes, a static type system makes building an
> optimizing compiler much easier.  But this is the only real advantage
> I see, and it doesn't come for free.

Absolutely, except I'd add "correctness" along with "performance".

> I'd like to see an optimizing Lisp compiler that uses type inference
> in the way OCaml does.  I know that SBCL infers types where it can and
> uses this during optimization, but I don't know to what extent.

Stalin does this for Scheme but it requires whole-program compilation and is
incredibly slow, e.g. 2,000x slower.

> E. g., when OCaml compiles a function like List.fold_left, I assume it
> duplicates it for each type signature the function is used on, like a 
> C++ template.  I wonder if any Lisp compilers do this.

No, OCaml compiles to polymorphic assembler and does very little type
specialisation.

>> to accept a value of this type (you can probably do more by
>> specifying the type of the function).
> 
> I know, but why would I want to write additional source code if I don't
> have to?

You're going to write it in the docs. Why not have the docs checked against
the code by the machine?

> I assume my programs would look like this: 
> 
> --8<---------------cut here---------------start------------->8---
> type variant =
>     Str of string
>   | Int of int
> ;;
> 
> let printv arg =
>   match arg with
>   | Str s -> Printf.printf "%s\n" s
>   | Int i -> Printf.printf "%d\n" i
> ;;
> 
> let _ =
>   printv (Str "foo");
>   printv (Int 3)
> ;;
> --8<---------------cut here---------------end--------------->8---

Absolutely not! You're implementing dynamic typing, which weakens the type
system to the point that you might as well be writing Lisp code. Write:

  print_string "foo";
  print_int 3

instead.

> The explicit type declaration isn't so bad, and a Lisp equivalent of
> printv would look very similar because it would also have to
> dispatch on the type of its input.  But I imagine having to write (Str
> "foo") and (Int 3) all the time would get on my nerves.

Yes. This is bad style in OCaml.

>> Then the type is explicit, so you don't need documentation that says
>> things like "Got that? Neither did I...", and your types will be
>> automatically and statically checked for you.  Basically, you get
>> machine-verified documentation.
> 
> Come on, putting OCaml type declarations into the *end user*
> documentation (because this is where the quotation came from) wouldn't
> really make things more understandable, would it?

Absolutely. Almost all API documentation includes type information in
statically typed languages. In dynamically typed languages, it often
contains a waffly description of the allowed values in English that is
likely to be out of date.

>> Lisp should do a lot better on tasks where performance is irrelevant
>> or not CPU-limited and where EVAL is useful. I don't believe
>> (decent) static typing is ever enough of a hindrance that it will
>> offset Lisp's missing features,
> 
> Missing features?  OK, it doesn't have pattern matching [1], but what
> else?

Features related to pattern matching, like variant types. Features related
to static typing, like tuples vs arrays/lists. Features related to type
inference, like polymorphic variants and objects.

>> so trying to find tasks well suited to dynamic typing is a red
>> herring (IMHO).
> 
> The way I see it, using a statically typed language instead of a
> dynamically typed one has to be justified, not the other way around.
> IMO, a static type system is an additional cost most of the time, not
> a benefit.  So I'd need a good reason to use a statically typed
> language, e. g.  performance is critical or much better library
> support.

Reliability is the main reason to use static typing, IMHO.

>> So if I wanted to make Lisp look relatively better then I'd try to write
>> an efficient interpreter. I'd like to see a Lisp equivalent of the
>> interpreter that I've put up here, for example:
>>
>>   http://www.ffconsultancy.com/free/ocaml/interpreter.html
> 
> Using OCaml's built-in lexer and parser support is cheating a bit,
> isn't it?

I've used the built-in lexer generator that contains rules for ints and
floats but the parser is completely general. Using ocamllex to generate a
lexer that included rules for ints and floats explicitly would only add a
couple of LOC.

> I could just use Common Lisp's READ (reads an s-expression  
> from stdin) and EVAL and be about done. :-)

My interpreter contains a recursive descent parser for the grammar that I
chose. You can easily alter it to parse a different grammar (e.g. BASIC)
and it wouldn't require any extra code.

> [1]  I'm pretty sure pattern matching could be added using macros.  In
>      just under 50 lines I wrote a little macro that allowed me to
>      define the good old flatten like this:
> 
>          (defun flatten (list)
>            (match list
>              (()            ())
>              (((x . y) . z) (append (flatten (cons x y)) (flatten z)))
>              ((x . y)       (cons x (flatten y)))))
>      
>      That was really fun!

Absolutely. Even naive pattern matching is incredibly useful.
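For comparison, the OCaml version first needs a variant type to make
nested lists well-typed at all (a sketch; the `nested` type name is my
own, and List.concat_map needs OCaml 4.10 or later):

```ocaml
(* Nested lists must be an explicit variant type in OCaml. *)
type 'a nested = Atom of 'a | Nest of 'a nested list

let rec flatten = function
  | Atom x -> [x]
  | Nest l -> List.concat_map flatten l

let () =
  flatten (Nest [Atom 1; Nest [Atom 2; Atom 3]; Atom 4])
  |> List.iter (Printf.printf "%d ")  (* prints "1 2 3 4 " *)
```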

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <87k63235tc.fsf@qrnik.zagroda>
Jon Harrop <···@ffconsultancy.com> writes:

> From my point of view, having to write type-specific functions for
> a tiny number of generic functions (e.g. print) is a small price
> to pay for getting all of your programs checked by the compiler.

OCaml serialization is not checked by the compiler. It's not checked
at run time either.

$ ocaml
        Objective Caml version 3.09.0

# Marshal.from_string (Marshal.to_string 123 []) 0 ^ "xyz";;
zsh: segmentation fault  ocaml

Correctness?

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Jon Harrop
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <4531e04a$0$8729$ed2619ec@ptn-nntp-reader02.plus.net>
Marcin 'Qrczak' Kowalczyk wrote:

> Jon Harrop <···@ffconsultancy.com> writes:
> 
>> From my point of view, having to write type-specific functions for
>> a tiny number of generic functions (e.g. print) is a small price
>> to pay for getting all of your programs checked by the compiler.
> 
> OCaml serialization is not checked by the compiler. It's not checked
> at run time either.
> 
> $ ocaml
>         Objective Caml version 3.09.0
> 
> # Marshal.from_string (Marshal.to_string 123 []) 0 ^ "xyz";;
> zsh: segmentation fault  ocaml
> 
> Correctness?

Sure, you can get an OCaml program to segfault using Obj as well. If you
want your programs to be safe then you have to write your own (type
specific) serialisation code, or use one of the OCaml variants that
supports safe marshalling.
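A sketch of what I mean by type-specific serialisation (the function
names here are hypothetical): because the reader checks the shape of
the data, a mismatched payload yields an error value instead of a
segfault.

```ocaml
(* Type-specific (de)serialisation for int: safe because the expected
   type is fixed and the input is validated on the way back in. *)
let serialise_int (n : int) : string = string_of_int n

let deserialise_int (s : string) : (int, string) result =
  match int_of_string_opt s with
  | Some n -> Ok n
  | None -> Error ("not an int: " ^ s)

let () =
  match deserialise_int (serialise_int 123) with
  | Ok n -> Printf.printf "%d\n" n  (* prints "123" *)
  | Error e -> prerr_endline e
```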

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: ·········@aol.com
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160970176.482929.27040@i42g2000cwa.googlegroups.com>
Jon Harrop wrote:
> [...]
> Reliability is the main reason to use static typing, IMHO.
> [...]

I am more of a Ruby guy, but I have programmed in CL, Scheme, and SML.
Dynamic typing allows for less code and faster development. Static
typing's safety benefits are pointless because the number of bugs is
proportional to the lines of code: a line of code not needed is a
correct line of code! And how often do you get type errors anyway?
From: Holger Schauer
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <yxzy7rf9zk2.fsf@gmx.de>
On 4791 September 1993, Jon Harrop wrote:
>> Efficiency again.  Yes, a statical type system makes building an
>> optimizing compiler much easier.  But this is the only real advantage
>> I see, and it doesn't come for free.

> Absolutely, except I'd add "correctness" along with "performance".
[...]
> Reliability is the main reason to use static typing, IMHO.

Jon, I think, by now everybody reading cll, even including occasional
readers, has understood that you believe in the benefits of static
typing and Ocaml in particular, especially for the kind of problems
you've to tackle. What at least I haven't understood, though, is why
you think that this is of interest to the readers of comp.lang.*lisp*,
where we don't have static typing and where we usually are happy with
the general state of affairs in CL (and in particular with the state
of affairs regarding the type system) and where probably most of us
readers are not working in the same problem field as you do.

Put otherwise: Do you intend to post something interesting about lisp
in the future or should I just follow my intuition and lower the score
of your postings? 

Holger

PS: No, I'm not the watch guard of this group. I'm just tired of
(IMHO) pointless cross-language discussions. Which is also the reason
why I don't read any threads containing the keyword "Scheme" in its
subject. When reading cll, I'm far more interested in discussing (or
at least reading about) problem solving in Lisp than in other
languages.

-- 
---          http://www.coling.uni-freiburg.de/~schauer/            ---
Technical Terms of Computer Science - Simply Explained
280: Leisure time
       When you do something for the company at home (the place listed
       on your ID card. Right, the place where the dishes urgently need
       doing again). (Florian Kuehnert)
From: Javier
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1161077178.989503.119300@f16g2000cwb.googlegroups.com>
Holger Schauer ha escrito:

> What at least I haven't understood, though, is why
> you think that this is of interest to the readers of comp.lang.*lisp*,

It obviously is, judging by the number of messages this discussion
has generated.
From: Jon Harrop
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <4535cc07$0$8727$ed2619ec@ptn-nntp-reader02.plus.net>
Javier wrote:
> Holger Schauer ha escrito:
>> What at least I haven't understood, though, is why
>> you think that this is of interest to the readers of comp.lang.*lisp*,
> 
> It obviously is, judging by the number of messages this discussion
> has generated.

As several people have pointed out, real Lisp compilers do as much static
analysis as possible.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Josip Gracin
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <egje14$ef5$1@sunce.iskon.hr>
Javier wrote:
> For example, I usually pass a list of lists, but forget to enclose the
> data on that second list.
> 
> CL-USER> (defun get-x (obj) (assoc 'x obj))
> GET-X
> CL-USER> (defun mul-x (obj) (* (get-x obj) 5))
> MUL-X
> CL-USER> (mul-x '((x 3)))
> Argument X is not a NUMBER: (X 3)
>    [Condition of type SIMPLE-TYPE-ERROR]
>
> The bad thing here is that the compiler was not able to detect that
> (get-x obj) is never going to return a number, so mul-x is broken.


Here's what compiler reports in SBCL:

; in: DEFUN MUL-X
;     (* (GET-X OBJ) 5)
;
; note: deleting unreachable code
;
; caught WARNING:
;   Asserted type NUMBER conflicts with derived type
;   (VALUES LIST &OPTIONAL).
From: Jon Harrop
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <452cc071$0$8749$ed2619ec@ptn-nntp-reader02.plus.net>
Wolfram Fenske wrote:
> "Javier" <·······@gmail.com> writes:
>> Yes, you are right. But if you eliminate both null exceptions and type
>> errors, I think that you are probably going to avoid at least 50% of
>> the bugs.
> 
> Null-pointer exceptions and other memory-related errors certainly
> account for a huge part of errors in C/C++ programs.  So having a
> garbage collector is a huge win, IMO.  But that's a little beside the
> point since both OCaml and Lisp (and many others) are garbage
> collected.

I believe Javier was referring to everything being an option type in Lisp
and not in OCaml, i.e. you can pass NIL to a function that expects an int
in Lisp and get a run-time type error that OCaml would catch at compile
time.

>> Your test system can really help to avoid all of this kind of error.
> 
> Yes, testing is the way to go.  In my experience type errors are
> discovered rather quickly and can easily be solved; semantic errors
> are more difficult to detect.  That's why you need tests, whether you
> work in a statically typed language or not.

Static checking is a form of test, of course. Don't forget that type
checking goes way beyond int option vs int in OCaml. For example, it is
used to statically check pattern matches for exhaustiveness and redundancy.
That can save a lot of work during development.
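[The exhaustiveness check mentioned here can be seen with a hypothetical toy type -- a sketch, not from the original post:

```ocaml
(* Hypothetical toy variant type. Leaving out the Rect case makes the
   compiler warn at compile time that the match is not exhaustive,
   citing Rect (_, _) as an example of an unmatched value. *)
type shape = Circle of float | Square of float | Rect of float * float

let area s =
  match s with
  | Circle r -> 3.14159 *. r *. r
  | Square x -> x *. x
```
]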

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Rahul Jain
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <87u02ahmtq.fsf@nyct.net>
Jon Harrop <···@ffconsultancy.com> writes:

> I believe Javier was referring to everything being an option type in Lisp
> and not in OCaml, i.e. you can pass NIL to a function that expects an int
> in Lisp and get a run-time type error that OCaml would catch at compile
> time.

That is blatantly false. Please re-read my postings.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Javier
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160594322.164042.142580@i3g2000cwc.googlegroups.com>
Rahul Jain ha escrito:

> Jon Harrop <···@ffconsultancy.com> writes:
>
> > I believe Javier was referring to everything being an option type in Lisp
> > and not in OCaml, i.e. you can pass NIL to a function that expects an int
> > in Lisp and get a run-time type error that OCaml would catch at compile
> > time.
>
> That is blatantly false. Please re-read my postings.

This is "blatantly" true; we have demonstrated that it is the case.
Re-read our postings.
From: Rahul Jain
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <871wpeh93d.fsf@nyct.net>
"Javier" <·······@gmail.com> writes:

> Rahul Jain ha escrito:
>
>> Jon Harrop <···@ffconsultancy.com> writes:
>>
>> > I believe Javier was referring to everything being an option type in Lisp
>> > and not in OCaml, i.e. you can pass NIL to a function that expects an int
>> > in Lisp and get a run-time type error that OCaml would catch at compile
>> > time.
>>
>> That is blatantly false. Please re-read my postings.
>
> This is "blatantly" true; we have demonstrated that it is the case.
> Re-read our postings.

I was referring to the Lisp part. If you want the code to be optimized,
you can get these problems detected at compile-time. Of course, in CL,
they're not errors, because CL has a usable condition system.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Jon Harrop
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <452d6150$0$8735$ed2619ec@ptn-nntp-reader02.plus.net>
Rahul Jain wrote:
> I was referring to the Lisp part. If you want the code to be optimized,
> you can get these problems detected at compile-time.

This is about correctness, not optimisation.

> Of course, in CL, they're not errors, because CL has a usable condition
> system. 

Forcing everything to be an option type whether you want it to be or not is
not a "usable condition system".

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Rahul Jain
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <87hcyafmh5.fsf@nyct.net>
Jon Harrop <···@ffconsultancy.com> writes:

> Rahul Jain wrote:
>> I was referring to the Lisp part. If you want the code to be optimized,
>> you can get these problems detected at compile-time.
>
> This is about correctness, not optimisation.

A program signalling an error is a correct program. As long as safety is
not declared down, you'll get a correct program.

>> Of course, in CL, they're not errors, because CL has a usable condition
>> system. 
>
> Forcing everything to be an option type whether you want it to be or not is
> not a "usable condition system".

Where do you get these misconceptions?

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Bill Atkins
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <m2vemq5s76.fsf@bertrand.local>
Jon Harrop <···@ffconsultancy.com> writes:

> Forcing everything to be an option type whether you want it to be or not is
> not a "usable condition system".

Buh?  What does dynamic typing have to do with the condition system?
More to the point, what makes you think Rahul was equating the two?
From: Jon Harrop
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <452dabbf$0$8741$ed2619ec@ptn-nntp-reader02.plus.net>
Rahul Jain wrote:
> Jon Harrop <···@ffconsultancy.com> writes:
>> I believe Javier was referring to everything being an option type in Lisp
>> and not in OCaml, i.e. you can pass NIL to a function that expects an int
>> in Lisp and get a run-time type error that OCaml would catch at compile
>> time.
> 
> That is blatantly false. Please re-read my postings.

Note that SBCL does not complain about the definition of the function g:

* (defun f (list) (apply '+ list))

F
* (f '(1 2 3))

6
* (defun g () (f '(1 2 3 NIL)))

G
* (g)

debugger invoked on a SIMPLE-TYPE-ERROR in thread #<THREAD "initial thread"
{1002559B01}>:
  Argument Y is not a NUMBER: NIL

Type HELP for debugger help, or (SB-EXT:QUIT) to exit from SBCL.

restarts (invokable by number or by possibly-abbreviated name):
  0: [ABORT] Exit debugger, returning to top level.

(SB-KERNEL:TWO-ARG-+ 6 NIL)
0]

but OCaml does:

# let f = List.fold_left ( + ) 0;;
val f : int list -> int = <fun>
# f [1;2;3];;
- : int = 6
# let g() = f [Some 1; Some 2; Some 3; None];;
This expression has type 'a option but is here used with type int

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Pascal Costanza
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <4p677kFh4pusU1@individual.net>
Jon Harrop wrote:

> Note that SBCL does not complain about the definition of the function g:
> 
> * (defun f (list) (apply '+ list))
> 
> F
> * (f '(1 2 3))
> 
> 6
> * (defun g () (f '(1 2 3 NIL)))
> 
> G
> * (g)

That's a stupid program, and this is pretty obvious.


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Wolfram Fenske
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <1160611748.949612.245530@b28g2000cwb.googlegroups.com>
Jon Harrop <···@ffconsultancy.com> writes:

> Wolfram Fenske wrote:
>> "Javier" <·······@gmail.com> writes:

[...]

>>> Your test system can really help to avoid all of this kind of error.
>>
>> Yes, testing is the way to go.  In my experience type errors are
>> discovered rather quickly and can easily be solved; semantic errors
>> are more difficult to detect.  That's why you need tests, whether you
>> work in a statically typed language or not.
>
> Static checking is a form of test, of course. Don't forget that type
> checking goes way beyond int option vs int in OCaml. For example, it is
> used to statically check pattern matches for exhaustiveness and redundancy.
> That can save a lot of work during development.

But this is the question, isn't it?  You think the answer is yes.
This is no surprise, because given your signature, I assume you mostly
program in OCaml and enjoy it.  I OTOH think the answer is no.  But
this is no surprise, either, because I mostly program in Python and
Lisp and enjoy it very much, too.  And even if someone could give a
definitive answer, we'd probably still stick with our respective
choices.


Wolfram
From: Alan Crowe
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <861wpfm2ar.fsf@cawtech.freeserve.co.uk>
"Javier" <·······@gmail.com> writes:

> Alan Crowe ha escrito:
> 
> > I've been working on some code so that I can do
> > The code is online at
> >
> >    http://www.cawtech.demon.co.uk/nestuit/current.lisp
> 
> This is very nice, thanks for the code.
> 
> > I don't think it is possible to think clearly about this
> > issue if you use the word "correct" for code that passes the
> > static type checks. I propose "typeck" as a neologism for
> > code that passes the static type checks. You need a
> > vocabularly that permits you to say that
> >
> >     (defun sum-list (list)
> >       (cond
> >         ((null list) 1)
> >         (t (+ (car list)(sum-list (cdr list))))))
> >
> > is typeck but incorrect.
> 
> 
> Yes, you are right. But if you eliminate both null exceptions and type
> errors, I think that you are probably going to avoid at least 50% of
> the bugs.
> Your test system can really help to avoid all of this kind of error.

My experience is that most bugs are shallow: you notice
something is wrong, poke about for a minute or two to find
out why, fix it and press on. 

Every once in a while I hit a nasty bug and have to spend an
hour or two hunting it. Sometimes it takes longer. Studying
SICP my version of delay and force using CLOS kept going
into an infinite loop. I rewrote it from scratch, using cons
cells instead of CLOS objects, and it miraculously worked.

So there is some kind of Pareto law with 20% of the bugs
consuming 80% of the debugging effort.

When I hit a nasty bug I try to capture the
details in my log-book under the heading BUG NATURE.
I want to understand where hard bugs come from. If some hard
bugs are actually repeats of previous hard bugs, studying
bug-nature will save me hours of debugging time. 

There is also a language design perspective. Some people
want to improve CL or to write a new, better language. I
sympathise. However hard bugs are a core issue. The
improvements should help avoid some and find others. So I
see one route to a better programming language like this:

1)Study bug-nature. Produce a taxonomy of hard bugs.

2)Hypothesise improvements that combat hard bugs

3)Implement, most economically as macros on top of CL.

4)Tackle some meaty, difficult programming tasks with the
  improved tools, continuing to log bug-nature

5)Review: Were old problems cured? Were new problems
  introduced? Were the improved tools a drag on
  productivity? Was the time won back with fewer bugs and
  easier debugging?

I'm sceptical about static type checking because it seems to
be targetting shallow bugs that drop out of the easiest test
cases. Discussions about static type checking don't seem to
get to stage 1 of my program.

I'm also sceptical about my (declare (test ...)) work. I
feel that I don't understand bug-nature, and the test
declarations are just a punt.

There is clearly an issue that typing in the test cases is a
drag on productivity. You get greater benefits, in terms of
catching bugs early, by writing more tests, thus paying a
greater cost for the greater benefit. Does one come out
ahead?

On the other hand, the test declarations serve to document
the code by providing examples of what the functions do. So
comments are free to focus on "why" rather than
"what". Examples in comments are problematic because they are
powerful, and hence very harmful when out of synchronisation
with the code. Thinking of the test declarations as
example-comments that are guaranteed to be in
synchronisation with passing code is an intriguing
perspective.

The most distressing kind of hard bug is the kind in which
the fix depends on information that is discarded by an
earlier stage of processing or which is only generated later
by a subsequent stage of processing. The bug shows that the
design was wrong, and fixing it properly requires a major
rewrite.

One interesting idea is

(defun sum-list (args)
  (declare (stub sum-list))
  (declare (test ((0) 0
                  (1) 1
                  (0 1) 1
                  (1 0) 1
                  (1 1) 2))))

No actual code to make sum-list go. Nevertheless, the test
framework could generate a stub

(cl:defun sum-list (args)
  (cond ((equal args '(0)) 0)
        ...
        ((equal args '(1 1)) 2)))

sufficient to let other test cases run. One could sketch
code, putting in the detail of the tricky functions that
determine the design, while just sketching in the other
functions with examples of what they should do on simple
cases.

If the design looks promising then one can flesh out the
stubs with algorithms for how they should work in general.

I have a bad feeling that this will not actually help,
because I suspect that designing a program is a large scale
exercise in imagination and sketching at the level of
function stubs is too small scale.

Alan Crowe
Edinburgh
Scotland
From: Raffael Cavallaro
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <2006101014014616807-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2006-10-10 09:43:06 -0400, Alan Crowe <····@cawtech.freeserve.co.uk> said:

> I propose "typeck" as a neologism for
> code that passes the static type checks. You need a
> vocabulary that permits you to say that
> 
>     (defun sum-list (list)
>       (cond
>         ((null list) 1)
>         (t (+ (car list)(sum-list (cdr list))))))
> 
> is typeck but incorrect.


I think that "type correct" is a usage I've seen before and says what 
you mean maybe a little more clearly than a neologism.

> 
> If you are stuck with a terminology that forces you to say
> that it is correct but incorrect, you will just get in a
> muddle.

I am very much in agreement with you here. Maybe we need to distinguish 
between "type correct" on the one hand and "logically correct," or 
"algorithmically incorrect" on the other. Then we can point at programs 
which are "type correct" but "algorithmically incorrect."
From: Raffael Cavallaro
Subject: Re: Lisp dinamism vs ML safety
Date: 
Message-ID: <2006101017461975249-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2006-10-10 14:01:46 -0400, Raffael Cavallaro 
<················@pas-d'espam-s'il-vous-plait-mac.com> said:

> Maybe we need to distinguish between "type correct" on the one hand and 
> "logically correct," or "algorithmically incorrect" on the other.

That "incorrect" should be "correct" of course.
From: Jon Harrop
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <45250f4e$0$8747$ed2619ec@ptn-nntp-reader02.plus.net>
Rahul Jain wrote:
> Jon Harrop <···@ffconsultancy.com> writes:
>> Rahul Jain wrote:
>>> Lisp has a much richer type system than any of the MLs.
>>
>> I'm not sure what you mean by that. ML does lots of things that Lisp does
>> not and vice-versa.
> 
> Hmm. I'm not sure about that. All I can think of is inference of
> parametrized array types (non-upgraded) across functions. Lisp just
> doesn't _require_ those things to be done. If someone cared, they could
> augment a compiler to do it.

Sure, you can augment a compiler to do anything.

>> OCaml, SML and F# all have bignums as part of the language.
> 
> Interesting. Some self-proclaimed ocaml genius claimed that it didn't
> have bignums. Maybe he meant that you need to explicitly... um...
> declare (?!?) a number to be a bignum? That would fit in with the lack
> of a numeric tower.

Yes. More than just declare though, you have to annotate every literal and
use different operators.
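For the record, with the Num library that shipped with OCaml at the time, the annotation burden looks roughly like this (a sketch; compile with nums.cma):

```ocaml
(* Arbitrary-precision arithmetic via the Num library: integer
   literals must be injected with the Int constructor, and arithmetic
   uses Num's own operators such as */ and +/ rather than * and +. *)
open Num

let factorial n =
  let rec loop acc i =
    if i > n then acc
    else loop (acc */ Int i) (i + 1)
  in
  loop (Int 1) 1

let () = print_endline (string_of_num (factorial 30))
```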

>> In contrast, succinct ML is typically fast ML.
> 
> Yeah, but you don't know if it's going to be correct unless you manually
> perform the kind of type inference that CMUCL does. :)
>
> On a more serious note, you still end up with the problem that your code
> is either slow but tolerant of different architectures or it's fast on
> some architectures and incorrect on others. Sure, you could maintain two
> parallel versions of the codebase, but don't programmers like
> abstraction? (yeah, I know, the mainstream abhors abstraction. They'd be
> happy writing machine code if they could still get paid enough for their
> time.)

In practice, I have never had a problem caused by fixed-precision integers.
However, I have lots of problems caused by floating point precision. Does
CMUCL's type inference do anything to help with that?

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Rahul Jain
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <87u02ilck4.fsf@nyct.net>
Jon Harrop <···@ffconsultancy.com> writes:

> Rahul Jain wrote:
>> Jon Harrop <···@ffconsultancy.com> writes:
>>> Rahul Jain wrote:
>>>> Lisp has a much richer type system than any of the MLs.
>>>
>>> I'm not sure what you mean by that. ML does lots of things that Lisp does
>>> not and vice-versa.
>> 
>> Hmm. I'm not sure about that. All I can think of is inference of
>> parametrized array types (non-upgraded) across functions. Lisp just
>> doesn't _require_ those things to be done. If someone cared, they could
>> augment a compiler to do it.
>
> Sure, you can augment a compiler to do anything.

In ML, you _can't_ augment the compiler to do this without creating a
different dialect, because it doesn't _have_ the types needed to express
what I wrote in Lisp. If I'm wrong, please show me SML code that will
automatically switch to using bignums when the type inferer determines
that the values won't fit in a machine word.

> In practice, I have never had a problem caused by fixed-precision integers.

You've never needed a 2 GB file?

> However, I have lots of problems caused by floating point precision. Does
> CMUCL's type inference do anything to help with that?

Um, no. How could it change your hardware to conform to your idea of how
FP should work? Maybe it could, if there were a language for describing
your precision requirements. Hmm... actually, it might be able to figure
out based on the ranges of values whether you'd hit one of the FP
lossage situations and rearrange the expression until it doesn't. You'd
need to be _very_ careful in defining the ranges of values, tho, to
avoid it seeing a problem that won't exist when run on real data.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Dr Jon D Harrop
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1160092487.398722.315210@m73g2000cwd.googlegroups.com>
Rahul Jain wrote:
> Jon Harrop <···@ffconsultancy.com> writes:
> > Sure, you can augment a compiler to do anything.
>
> In ML, you _can't_ augment the compiler to do this without creating a
> different dialect, because it doesn't _have_ the types needed to express
> what I wrote in Lisp.

Yes. Doesn't that sword cut both ways though? If you supplement Lisp
with features of ML then you've created a "different dialect" as well.
Doesn't Qi do something like that?

> If I'm wrong, please show me SML code that will
> automatically switch to using bignums when the type inferer determines
> that the values won't fit in a machine word.

I don't know how to do that. MLton may already try to do this. I doubt
it though. I can't imagine that being a bottleneck for many people. ;-)

> > In practice, I have never had a problem caused by fixed-precision integers.
>
> You've never needed a 2 GB file?

Yes. Maybe I will in the future, but I'm already on 64-bit. :-)

> > However, I have lots of problems caused by floating point precision. Does
> > CMUCL's type inference do anything to help with that?
>
> Um, no. How could it change your hardware to conform to your idea of how
> FP should work?

I just want the same functionality that CMUCL already provides for
ints: use machine-precision where possible and arbitrary-precision
otherwise.

> Maybe it could, if there were a language for describing
> your precision requirements. Hmm... actually, it might be able to figure
> out based on the ranges of values whether you'd hit one of the FP
> lossage situations and rearrange the expression until it doesn't. You'd
> need to be _very_ careful in defining the ranges of values, tho, to
> avoid it seeing a problem that won't exist when run on real data.

Yes. I haven't really thought about how you could do it statically...

Cheers,
Jon.
From: Rahul Jain
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <87ejtljrex.fsf@nyct.net>
"Dr Jon D Harrop" <···@ffconsultancy.com> writes:

> Rahul Jain wrote:
>> Jon Harrop <···@ffconsultancy.com> writes:
>> > Sure, you can augment a compiler to do anything.
>>
>> In ML, you _can't_ augment the compiler to do this without creating a
>> different dialect, because it doesn't _have_ the types needed to express
>> what I wrote in Lisp.
>
> Yes. Doesn't that sword cut both ways though? If you supplement Lisp
> with features of ML then you've created a "different dialect" as well.
> Doesn't Qi do something like that?

Qi sounds familiar, but I don't know anything specific about it. What ML
features do you think you need to achieve what I just demonstrated? Type
inference? Nothing in the CL spec forbids that.

>> If I'm wrong, please show me SML code that will
>> automatically switch to using bignums when the type inferer determines
>> that the values won't fit in a machine word.
>
> I don't know how to do that. MLton may already try to do this. I doubt
> it though. I can't imagine that being a bottleneck for many people. ;-)
>
>> > In practice, I have never had a problem caused by fixed-precision integers.
>>
>> You've never needed a 2 GB file?
>
> Yes. Maybe I will in the future, but I'm already on 64-bit. :-)

Ah, so you don't want me to use your software because my processor is
too old. Fair enough. :)

>> > However, I have lots of problems caused by floating point precision. Does
>> > CMUCL's type inference do anything to help with that?
>>
>> Um, no. How could it change your hardware to conform to your idea of how
>> FP should work?
>
> I just want the same functionality that CMUCL already provides for
> ints: use machine-precision where possible and arbitrary-precision
> otherwise.

How would it know what precision you need? You _can_ introspect the
floating point types as far as their precision, etc. I don't know of
anyone that has come up with a language to express your precision
requirements that can then be used to compute which FP type is best for
you. Also, 80 bit floats are the fastest way to compute using an x87
FPU. Should that matter, too?

>> Maybe it could, if there were a language for describing
>> your precision requirements. Hmm... actually, it might be able to figure
>> out based on the ranges of values whether you'd hit one of the FP
>> lossage situations and rearrange the expression until it doesn't. You'd
>> need to be _very_ careful in defining the ranges of values, tho, to
>> avoid it seeing a problem that won't exist when run on real data.
>
> Yes. I haven't really thought about how you could do it statically...

Well, then what are you asking us for? :P

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Thomas A. Russ
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <ymifye63b5m.fsf@sevak.isi.edu>
"Henry Bigelow" <·········@gmail.com> writes:

> it looks like you have a lot of experience.  so, benchmark politics
> aside, do you find lisp to be fast enough to do heavy-duty computing,
> (in my case, a bayesian network with hundreds of nodes and lots of
> floating point addition, but also written in a very extensible,
> abstract style)

Well, I have one anecdote to relate, told to me by a professor who took
his sabbatical to work on Bayesian networks.  He and another programmer
began working on code to do Bayesian network analysis of genetic factors
for genetic counselors.  The professor used Lisp and the other
programmer C.

The Lisp version (no surprise) was completed much more quickly.  It also
exploited lisp's run-time compiler by taking a particular family tree
and generating a program for evaluating the resulting Bayesian network
analysis.  This program was then compiled and used for the analysis.

That alone made it faster than the C program.  And with the extra time
available, the professor was able to include some additional features
(like a nicer user interface) and some algorithmic improvements that the
C programmer never had time to get to, because they were still working
on getting the basic code for evaluating the Bayesian network for a
network defined at runtime to work.
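[The run-time compilation trick described above is easy to sketch in portable CL -- a toy illustration, not the professor's actual code; a weighted sum stands in for the real network evaluation:

```lisp
;; Toy sketch of run-time code generation: build a Lisp form
;; specialized to one particular "network" (here just a weighted sum
;; of the input), then hand it to COMPILE to get native code for
;; exactly that instance.
(defun make-evaluator (weights)
  (compile nil
           `(lambda (x)
              (declare (double-float x))
              (+ ,@(loop for w in weights
                         collect `(* ,(float w 1d0) x))))))

;; (funcall (make-evaluator '(1 2 3)) 2d0)  ; => 12.0d0
```
]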

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Jon Harrop
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <451f0ecf$0$8747$ed2619ec@ptn-nntp-reader02.plus.net>
Juho Snellman wrote:
>   1. A new benchmark is added, nobody cares
>   2. A newbie with 2 hours of Lisp experience notices that there's
>      no implementation for the benchmark. He thinks that a crappy
>      implementation is better than no implementation, and proceeds
>      to write something embarrassingly slow and hideously ugly.
>   3. People who mistakenly think that the Shootout has any validity
>      bitch and moan about Lisp being slow on that benchmark.
>   4. Someone who knows how to optimize Lisp code gets tired of the
>      moaning, and either rewrites the whole program (if it was
>      sufficiently hideous) or makes a few obvious but critical
>      improvements (if the program was salvageable). A huge speedup
>      results, the people from step 3 start whining about something else.
>   5. The requirements for the benchmark are modified, and the optimized
>      Lisp implementation gets deleted. There's no sign of it ever
>      having existed.
>   6. Return to step 2

It is interesting to compare the Lisp lifecycle with the OCaml lifecycle:

1. A new benchmark is added to the shootout.
2. An elite OCaml programmer knocks up a tiny, lightning-fast solution and
submits it to the shootout.
3. The shootout maintainers either ignore the submission or deem that its
Zen feels too dissimilar to the C program.
4. The shootout maintainers claim that the submission never existed, despite
the fact that it is still on their site.
5. Despite never having existed, the requirements for the benchmark change.
6. Repeat from step 1.

Don't even get me started on Wikipedia. ;-)

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Henry Bigelow
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159719640.738284.82020@m73g2000cwd.googlegroups.com>
Jon Harrop wrote:
> Juho Snellman wrote:
> >   1. A new benchmark is added, nobody cares
> >   2. A newbie with 2 hours of Lisp experience notices that there's
> >      no implementation for the benchmark. He thinks that a crappy
> >      implementation is better than no implementation, and proceeds
> >      to write something embarrassingly slow and hideously ugly.
> >   3. People who mistakenly think that the Shootout has any validity
> >      bitch and moan about Lisp being slow on that benchmark.
> >   4. Someone who knows how to optimize Lisp code gets tired of the
> >      moaning, and either rewrites the whole program (if it was
> >      sufficiently hideous) or makes a few obvious but critical
> >      improvements (if the program was salvageable). A huge speedup
> >      results, the people from step 3 start whining about something else.
> >   5. The requirements for the benchmark are modified, and the optimized
> >      Lisp implementation gets deleted. There's no sign of it ever
> >      having existed.
> >   6. Return to step 2
>
> It is interesting to compare the Lisp lifecycle with the OCaml lifecycle:
>
> 1. A new benchmark is added to the shootout.
> 2. An elite OCaml programmer knocks up a tiny, lightning-fast solution and
> submits it to the shootout.
> 3. The shootout maintainers either ignore the submission or deem that its
> Zen feels too dissimilar to the C program.
> 4. The shootout maintainers claim that the submission never existed, despite
> the fact that it is still on their site.
> 5. Despite never having existed, the requirements for the benchmark change.
> 6. Repeat from step 1.
>
> Don't even get me started on Wikipedia. ;-)
>

hi mr. harrop!  i was surprised to see your post over here on c.l.l!
did you start on lisp before o'caml?  so, is that shootout the best one
available, despite the grim lifecycle descriptions?    from your and
juho's descriptions, the owners sound a bit deceitful!

henry


> --
> Dr Jon D Harrop, Flying Frog Consultancy
> Objective CAML for Scientists
> http://www.ffconsultancy.com/products/ocaml_for_scientists
From: ·····@yahoo.com
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159720856.514486.162510@c28g2000cwb.googlegroups.com>
Henry Bigelow wrote:
> Jon Harrop wrote:
> > Juho Snellman wrote:
> > >   1. A new benchmark is added, nobody cares
> > >   2. A newbie with 2 hours of Lisp experience notices that there's
> > >      no implementation for the benchmark. He thinks that a crappy
> > >      implementation is better than no implementation, and proceeds
> > >      to write something embarrasingly slow and hideously ugly.
> > >   3. People who mistakenly think that the Shootout has any validity
> > >      bitch and moan about Lisp being slow on that benchmark.
> > >   4. Someone who knows how to optimize Lisp code gets tired of the
> > >      moaning, and either rewrites the whole program (if it was
> > >      sufficiently hideous) or makes a few obvious but critical
> > >      improvements (if the program was salvageable). A huge speedup
> > >      results, the people from step 3 start whining about something else.
> > >   5. The requirements for the benchmark are modified, and the optimized
> > >      Lisp implementation gets deleted. There's no sign of it ever
> > >      having existed.
> > >   6. Return to step 2
> >
> > It is interesting to compare the Lisp lifecycle with the OCaml lifecycle:
> >
> > 1. A new benchmark is added to the shootout.
> > 2. An elite OCaml programmer knocks up a tiny, lightning-fast solution and
> > submits it to the shootout.
> > 3. The shootout maintainers either ignore the submission or deem that its
> > Zen feels too dissimilar to the C program.
> > 4. The shootout maintainers claim that the submission never existed, despite
> > the fact that it is still on their site.
> > 5. Despite never having existed, the requirements for the benchmark change.
> > 6. Repeat from step 1.
> >
> > Don't even get me started on Wikipedia. ;-)
> >
>
> hi mr. harrop!  i was surprised to see your post over here on c.l.l!
> did you start on lisp before o'caml?  so, is that shootout the best one
> available, despite the grim lifecycle descriptions?


> from your and juho's descriptions, the owners sound a bit deceitful!
If you have a week to waste, you could go back and read through the old
shootout mailing list archives and try to match it to what Jon Harrop
now says.
From: Jon Harrop
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <45201063$0$16549$ed2619ec@ptn-nntp-reader01.plus.net>
Henry Bigelow wrote:
> hi mr. harrop!  i was surprised to see your post over here on c.l.l!
> did you start on lisp before o'caml?

I started on SML and moved to OCaml. Then I toyed with some of the other
FPLs, including Lisp.

> so, is that shootout the best one 
> available, despite the grim lifecycle descriptions?

Yes, AFAIK.

> from your and juho's descriptions, the owners sound a bit deceitful!

Only Isaac Gouy has tried to deceive people, AFAIK.

I have two fundamental disagreements with the shootout:

1. Benchmarks should be a function of a non-trivial input to obviate
arbitrary precomputation, e.g. computing the digits of pi is a stupid
benchmark.

2. Program selection should be objective to evade any notion of "cheating",
e.g. see the "Note"s in binary-trees.

If I were to create a shootout, I would use benchmarks with variable inputs
and I would try to remove all subjective criteria.

For example, to test the performance and implementation of balanced binary
trees in different languages I would try to define a benchmark that was
best solved using such trees, rather than trying to force programmers to
use trees. This might be something like: "use the given random number
generator with seed "x" to generate "n" sets of "m" random numbers in
different ranges and then compute the intersection of the sets,
with "x", "n" and "m" given on the command line". (When the overlap of two
sets is small, set-theoretic operations are faster with trees than with
hash tables.)
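
A rough Common Lisp sketch of that proposed benchmark (the function names and LCG constants here are purely illustrative, not any real shootout spec):

```lisp
;; Hypothetical sketch: build N sets of M pseudo-random integers from a
;; seeded linear-congruential generator, then intersect them.
(defun make-lcg (seed)
  "Return a closure producing pseudo-random integers below RANGE."
  (let ((state seed))
    (lambda (range)
      (setf state (mod (+ (* state 1103515245) 12345) (expt 2 31)))
      (mod state range))))

(defun random-set (rng m range)
  "Generate a set of up to M random numbers, represented as a hash table."
  (let ((set (make-hash-table)))
    (loop repeat m do (setf (gethash (funcall rng range) set) t))
    set))

(defun intersect-all (sets)
  "Fold pairwise intersection over a list of hash-table sets."
  (reduce (lambda (a b)
            (let ((out (make-hash-table)))
              (loop for k being the hash-keys of a
                    when (gethash k b) do (setf (gethash k out) t))
              out))
          sets))
```

Hash tables are used here only to keep the sketch short; the point of the benchmark would be that a tree-based set representation could legitimately win when the overlap is small.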

Unfortunately, my constructive criticisms fell on one of Isaac's deaf
ears. ;-)

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Isaac Gouy
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159738897.747201.102100@k70g2000cwa.googlegroups.com>
Jon Harrop wrote:
> Henry Bigelow wrote:
> > hi mr. harrop!  i was surprised to see your post over here on c.l.l!
> > did you start on lisp before o'caml?
>
> I started on SML and moved to OCaml. Then I toyed with some of the other
> FPLs, including Lisp.
>
> > so, is that shootout the best one
> > available, despite the grim lifecycle descriptions?
>
> Yes, AFAIK.
>
> > from your and juho's descriptions, the owners sound a bit deceitful!
>
> Only Isaac Gouy has tried to deceive people, AFAIK.

That is no more than malicious personal abuse - you contributed similar
baseless personal attacks to the shootout mailing-list.


> I have two fundamental disagreements with the shootout:
>
> 1. Benchmarks should be a function of a non-trivial input to obviate
> arbitrary precomputation, e.g. computing the digits of pi is a stupid
> benchmark.
>
> 2. Program selection should be objective to evade any notion of "cheating",
> e.g. see the "Note"s in binary-trees.
>
> If I were to create a shootout, I would use benchmarks with variable inputs
> and I would try to remove all subjective criteria.
>
> For example, to test the performance and implementation of balanced binary
> trees in different languages I would try to define a benchmark that was
> best solved using such trees, rather than trying to force programmers to
> use trees. This might be something like: "use the given random number
> generator with seed "x" to generate "n" sets of "m" random numbers in
> different ranges and then compute the intersection of the sets,
> with "x", "n" and "m" given on the command line". (When the overlap of two
> sets is small, set-theoretic operations are faster with trees than with
> hash tables.)
>
> Unfortunately, my constructive criticisms fell on one of Isaac's deaf
> ears. ;-)

I heard them the first and second time, but stopped listening as they
were repeated month after month.

Jack Andrews created "The shootin" partly in response to criticisms of
the shootout. I think it's sad that in the past year you haven't taken
the opportunity to be constructive and contribute to "The shootin".

http://shootin.sourceforge.net/


>
> --
> Dr Jon D Harrop, Flying Frog Consultancy
> Objective CAML for Scientists
> http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Jon Harrop
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <452170ce$0$8730$ed2619ec@ptn-nntp-reader02.plus.net>
Isaac Gouy wrote:
> Jon Harrop wrote:
>> Only Isaac Gouy has tried to deceive people, AFAIK.
> 
> That is no more than malicious personal abuse -
From my point of view, you implied that I had plagiarised the ray tracer
(following Alex Goldman) and then revoked my access.

> you contributed similar baseless personal attacks to the shootout
> mailing-list. 

I don't believe so.

> Jack Andrews created "The shootin" partly in response to criticisms of
> the shootout. I think it's sad that in the past year you haven't taken
> the opportunity to be constructive and contribute to "The shootin".

Fortunately, my business is sufficiently busy that I don't have spare time
to invest in such projects. Indeed, I have little time to improve my own
benchmarking site.

I think we'll all agree that benchmarking "across the board" is very
difficult. You've tried. I disagree with the fundamental approach taken by
your shootout (particularly the unnecessary subjectivity). I tried. Juho
kindly contributed a lot of Lisp code to my ray tracer benchmark but, I
believe, he disagrees with the way I presented the results. So it goes
on...

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: ·····@yahoo.com
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159720036.261392.271660@e3g2000cwe.googlegroups.com>
Juho Snellman wrote:
> Henry Bigelow <·········@gmail.com> wrote:
> > hi paul,
> >   i took a look at SPEC--it seems definitive, indeed, but for comparing
> > computer systems, not languages.  is that right?  would you be able to
> > point me in a direction to compare languages?  i have a hunch that
> > there is something very wrong with the lisp benchmark which is about
> > 100 times more memory/ or slower than a c program, but it'd be very
> > difficult for me to rewrite it more efficiently.
> >
> > it's possible that for various reasons, people don't care enough about
> > the shootout
>
> As far as I can tell, the lifecycle of a shootout benchmark goes like
> this:
>
>   1. A new benchmark is added, nobody cares
>   2. A newbie with 2 hours of Lisp experience notices that there's
>      no implementation for the benchmark. He thinks that a crappy
>      implementation is better than no implementation, and proceeds
>      to write something embarrasingly slow and hideously ugly.
>   3. People who mistakenly think that the Shootout has any validity
>      bitch and moan about Lisp being slow on that benchmark.
>   4. Someone who knows how to optimize Lisp code gets tired of the
>      moaning, and either rewrites the whole program (if it was
>      sufficiently hideous) or makes a few obvious but critical
>      improvements (if the program was salvageable). A huge speedup
>      results, the people from step 3 start whining about something else.
>   5. The requirements for the benchmark are modified, and the optimized
>      Lisp implementation gets deleted. There's no sign of it ever
>      having existed.
>   6. Return to step 2

That seems to be a more-or-less reasonable summary to me - although I
don't understand what you think the benefit of displaying optimized
Lisp implementations that don't give the expected result would be (#5).


> The fannkuch benchmark is currently at step 3, having been at step 5
> at least once. The old fannkuch program was about 10 times faster than
> what is currently up on the shootout site.

The expected result for the fannkuch benchmark was changed slightly
back in 2005, to print out the first 30 of the permutations.

> 
> -- 
> Juho Snellman
From: Juho Snellman
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <slrnei21gc.7f0.jsnell@sbz-32.cs.Helsinki.FI>
·····@yahoo.com <·····@yahoo.com> wrote:
> That seems to be a more-or-less reasonable summary to me - although I
> don't understand what you think the benefit of displaying optimized
> Lisp implementations that don't give the expected result would be (#5).

The benefit would obviously be that someone who decides to implement
that program can start working from the old version, instead of
writing it from scratch. I'd expect that it's going to be easier to
make the existing optimized program conform to the new specification
than re-implement an equally optimized one from scratch.

Also, please note that it was absolutely not my intention to accuse
Isaac, or anyone else involved with the Shootout, of being deceitful
or having an anti-Lisp agenda. Or actually to accuse them of anything.

My point was just that people really shouldn't be using the Shootout
for making decisions about what languages to use, since it's not
unlikely that a program for Blub has been submitted by someone who is
still a newbie at programming Blub.

-- 
Juho Snellman
From: Isaac Gouy
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159798647.551600.281650@e3g2000cwe.googlegroups.com>
Juho Snellman wrote:
> ·····@yahoo.com <·····@yahoo.com> wrote:
> > That seems to be a more-or-less reasonable summary to me - although I
> > don't understand what you think the benefit of displaying optimized
> > Lisp implementations that don't give the expected result would be (#5).
>
> The benefit would obviously be that someone who decides to implement
> that program can start working from the old version, instead of
> writing it from scratch. I'd expect that it's going to be easier to
> make the existing optimized program conform to the new specification
> than re-implement an equally optimized one from scratch.

[ #302423 ] Lisp SBCL fannkuch Juho Snellman 2005-10-26
http://alioth.debian.org/tracker/index.php?func=detail&aid=302423&group_id=30402&atid=411646


> Also, please note that it was absolutely not my intention to accuse
> Isaac, or anyone else involved with the Shootout, of being deceitful
> or having an anti-Lisp agenda. Or actually to accuse them of anything.
>
> My point was just that people really shouldn't be using the Shootout
> for making decisions about what languages to use, since it's not
> unlikely that a program for Blub has been submitted by someone who is
> still a newbie at programming Blub.

Oh, it's worse than that: sometimes we get programs that even I as a
newbie Blub programmer could write better ;-)
From: Jon Harrop
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <4521752c$0$8730$ed2619ec@ptn-nntp-reader02.plus.net>
Juho Snellman wrote:
> My point was just that people really shouldn't be using the Shootout
> for making decisions about what languages to use, since it's not
> unlikely that a program for Blub has been submitted by someone who is
> still a newbie at programming Blub.

Given that poor (slow) implementations get replaced, the lack of a good
implementation can be taken to imply either unwillingness or inability on
the part of that community.

I'm not saying that is a good idea, but it is all the reader has to go on.

On the other hand, maybe Lispers think that the shootout is biased and,
consequently, will not bother working on it whereas the D programmer might
love the shootout because it is a low-cost advertisement.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists
From: Ralph Richard Cook
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <kcu0i2h3ub0fa7skdr38n5bnmkmnmf2vjd@4ax.com>
Juho Snellman <······@iki.fi> wrote:

>> it's possible that for various reasons, people don't care enough about
>> the shootout
>
...

I tried submitting code to the shootout around June 2005, but
apparently I wasn't putting the right code and makes in the right
little dialog boxes, so it didn't go through their automatic build. I
tried to get clarifications but a reply to my e-mails took about a
week to get back, so I just gave up.
From: Henry Bigelow
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159757807.483481.252430@e3g2000cwe.googlegroups.com>
Ralph Richard Cook wrote:
> Juho Snellman <······@iki.fi> wrote:
>
> >> it's possible that for various reasons, people don't care enough about
> >> the shootout
> >
> ...
>
> I tried submitting code to the shootout around June 2005, but
> apparently I wasn't putting the right code and makes in the right
> little dialog boxes, so it didn't go through their automatic build. I
> tried to get clarifications but a reply to my e-mails took about a
> week to get back, so I just gave up.

hi ralph,
  i see.  well, i suppose it's possible that the maintainers are not
very motivated, since it's a free service after all.  or, they might be
overwhelmed with emails.  is there any way you can think to make the
submission process more reliable or easier?

and, what do you think of the idea of having a community-moderated CVS
submission of code?  

thanks,

henry
From: Isaac Gouy
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159765802.015860.14490@i3g2000cwc.googlegroups.com>
Ralph Richard Cook wrote:
> Juho Snellman <······@iki.fi> wrote:
>
> >> it's possible that for various reasons, people don't care enough about
> >> the shootout
> >
> ...
>
> I tried submitting code to the shootout around June 2005, but
> apparently I wasn't putting the right code and makes in the right
> little dialog boxes, so it didn't go through their automatic build. I
> tried to get clarifications but a reply to my e-mails took about a
> week to get back, so I just gave up.

Was it an implementation for Fasta?
http://shootout.alioth.debian.org/gp4/benchmark.php?test=fasta&lang=sbcl&id=0
From: Javier
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159303127.496985.170070@m73g2000cwd.googlegroups.com>
Henry Bigelow ha escrito:

> hi,
>
> i am writing to get an idea whether lisp would be useful.  so far, i
> think it would be ideal as a mode of expression, but i'm not sure
> whether it has (or i would be able to write) memory efficient or
> sufficiently fast code with it.

It depends on the implementation you choose and the platform you are
going to develop for, but it is usually very fast.

> i have zero lisp experience.  i started with perl, then c++, then
> o'caml.  a few days ago i read paul graham's summary of john mccarthy's
> paper, and then started reading the original, and On Lisp.  i'm
> completely wowed, and would love to use it to write my programs.

Why not first start learning with Practical Common Lisp? On Lisp is
more advanced and focused on macros.

http://www.gigamonkeys.com/book/

> i'm a bioinformatics student, writing a bayesian network software with
> a very demanding memory requirement, and potentially long running time
> for training.  it predicts protein structure, and must have hundreds of
> nodes for each protein, each with a matrix storing relationships
> between amino acids and structural states.  if this is done
> efficiently, it certainly fits in a 2GB machine.

Lisp helps when you need to develop complex algorithms and big
applications.

> but i'm not sure why certain lisp programs in the shootout are so
> large.  how do i interpret the results of the shootout?  for example,
> lisp fannkuch is 26 times slower than the c version, and lisp chameneos
> is 120x larger.

http://shootout.alioth.debian.org/gp4/benchmark.php?test=fannkuch&lang=sbcl&id=0

120x larger?
It occupies 1.2x more when gzipped, but actually the Lisp version is
shorter in number of lines. Bear in mind that Lisp uses long names,
for example:

(defun call-with-permutations (fun initial-permutation)
  (call-with-permutations-rec fun initial-permutation
    (array-dimension initial-permutation 0)))

But I believe that the Lisp version is more maintainable and clearer to
read (once you are used to Lisp, of course).
From what I can observe, the Lisp algorithm is very different from the C
one. If you wrote it the same way, I'm pretty sure that the SBCL version
would be very similar in speed.

> i'm surprised to see such large discrepancy actually.

I insist: this is due to the difference in the algorithm selected.
Using similar code, SBCL should not be more than 20% slower than C.
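
For illustration, the kind of declarations that let SBCL generate C-like code might look like this (a minimal sketch of my own, not the actual shootout entry):

```lisp
;; Sketch: with element-type and optimization declarations, SBCL can
;; compile this loop down to unboxed fixnum arithmetic.
(defun sum-squares (v)
  (declare (type (simple-array fixnum (*)) v)
           (optimize (speed 3) (safety 0)))
  (loop for x across v
        sum (the fixnum (* x x)) fixnum))
```

Without the declarations the same loop would go through generic arithmetic; with them, (disassemble #'sum-squares) shows a tight machine-code loop.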
From: Rahul Jain
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <87bqot5lms.fsf@nyct.net>
"Javier" <·······@gmail.com> writes:

> Henry Bigelow ha escrito:
>> i have zero lisp experience.  i started with perl, then c++, then
>> o'caml.  a few days ago i read paul graham's summary of john mccarthy's
>> paper, and then started reading the original, and On Lisp.  i'm
>> completely wowed, and would love to use it to write my programs.
>
> Why not first start learning with Practical Common Lisp? On Lisp is
> more advanced and focused on macros.
>
> http://www.gigamonkeys.com/book/

Personally, I would recommend PAIP for the subject matter he's dealing
with. PAIP deals more with information processing. PCL is much broader
and covers parts of what both PAIP and On Lisp cover, in addition to
general application building. That said, PCL is a great book, and would
probably complement PAIP nicely for him.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Henry Bigelow
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159304568.644542.315680@i42g2000cwa.googlegroups.com>
> > but i'm not sure why certain lisp programs in the shootout are so
> > large.  how do i interpret the results of the shootout?  for example,
> > lisp fannkuch is 26 times slower than the c version, and lisp chameneos
> > is 120x larger.
>
> http://shootout.alioth.debian.org/gp4/benchmark.php?test=fannkuch&lang=sbcl&id=0
>
> 120x larger?


sorry for the confusion.  by "larger" i meant memory use.  here are the
stats for Chameneos:

SBCL:  62,656 KB
GCC C:  524 KB

see:

http://shootout.alioth.debian.org/gp4/benchmark.php?test=chameneos&lang=sbcl
http://shootout.alioth.debian.org/gp4/benchmark.php?test=chameneos&lang=gcc
From: Javier
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159318211.407006.171940@h48g2000cwc.googlegroups.com>
Henry Bigelow ha escrito:

> > 120x larger?
>
>
> sorry for the confusion.  by "larger" i meant memory use.  here are the
> stats for Chameneos:
>
> SBCL:  62,656 KB
> GCC C:  524 KB

Let's see the differences (I welcome corrections from others if I'm
wrong):

1) SBCL loads the entire CL environment and extra libraries, including
documentation, the debugger and the compiler. Just starting SBCL eats
nearly 30 MB on my system (OS X); on Linux it may be even larger
(depending on the memory model used; I think OS X is better in this
regard). A similar thing happens with the Java virtual machine.
2) The memory reported may actually be misleading, because SBCL reserves
more memory than it really needs. For example, on my system it reserves
3 GB of virtual memory just at startup. Of course, that doesn't mean it
is ever going to use it all.
3) SBCL is not the only implementation. Try ECL, CLISP, and the
evaluation versions of Allegro and LispWorks. They are all probably
going to consume much less memory.
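
For example (a sketch; exact figures vary by platform and build), the reservation can be capped at startup and the actual usage inspected from the REPL:

```lisp
;; SBCL's reserved address space can be limited when it is launched, e.g.
;;   sbcl --dynamic-space-size 512    ; megabytes
;; and ROOM reports what is actually in use, as opposed to merely reserved:
(room)
```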
From: Robert Dodier
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159305695.807246.95510@i42g2000cwa.googlegroups.com>
Henry Bigelow wrote:

> i'm a bioinformatics student, writing a bayesian network software with
> a very demanding memory requirement, and potentially long running time
> for training.  it predicts protein structure, and must have hundreds of
> nodes for each protein, each with a matrix storing relationships
> between amino acids and structural states.  if this is done
> efficiently, it certainly fits in a 2GB machine.

My advice to you is to use R (http://www.r-project.org).
It is a pleasant programming language, and there is a lot of
contributed code, which includes Bayesian stuff and bioinformatics
(not sure if it includes the intersection of the two).
In any event R has a large, active user community and chances
are good you'll find people with similar interests.
I say this after having written a lot of statistical code (including
Bayesian inference) in a variety of languages.

It's not clear to me that Lisp's particular strength (code = data)
is going to be much of a win for you. If you were writing a general
purpose Bayesian inference package, probably so. But I'm guessing
that what you are going to do is derive some equations by hand,
code them, and then run them on enormous data sets. I don't
see much scope for code = data there. YMMV.

Lisp isn't a bad choice in this context; it is probably better than C
or Perl.

FWIW
Robert Dodier
From: ········@gmail.com
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159311453.470483.154660@e3g2000cwe.googlegroups.com>
Much as I appreciate Paolo's nod to BioBike, let me second Robert's
recommendation of R if all you need to do is crunch a bunch of numbers
that represent a bayes net. R has a quasi-reasonable quasi-Lisp
programming language and numerous packages to support mathematical and
statistical computing (incl. extensive bayesian machinery). BioBike is
designed for knowledge-based, not numbers-based biocomputing.
(Although, of course, it can do both, but why try to shoe-horn every
kind of computation into the same programming language even if you
could, esp. when it's so easy to make multiple languages live happily
together?) To do most "typical" biocomputing -- sequence processing,
stats, bayes nets, and those sorts of off-the-shelf numerical things --
we call down to non-Lisp (R-based or C-based) code and just interface
them up to BioBike's Lisp so that you can use Lisp to do the more
difficult knowledge processing that Lisp is good at.

If you are interested in using BioBike to do knowledge processing on
top of your bayes net machinery I'm sure that the BioBike support team
will be happy to help you out.
From: Rahul Jain
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <87fye55lre.fsf@nyct.net>
········@gmail.com writes:

> Much as I appreciate Paolo's nod to BioBike, let me second Robert's
> recommendation of R if all you need to do is crunch a bunch of numbers
> that represent a bayes net. R has a quasi-reasonable quasi-Lisp
> programming language and numerous packages to support mathematical and
> statistical computing (incl. extensive bayesian machinery).

I'm sure you know this, but the OP might find it interesting: it's not
by accident a quasi-Lisp. It's an open-source C port of S, which was
written in Lisp, according to what I gather.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Henry Bigelow
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159326469.403235.291070@i3g2000cwc.googlegroups.com>
> My advice to you is to use R (http://www.r-project.org).
> It is a pleasant programming language, and there is a lot of
> contributed code, which includes Bayesian stuff and bioinformatics
> (not sure if it includes the intersection of the two).
> In any event R has a large, active user community and chances
> are good you'll find people with similar interests.
> I say this after having written a lot of statistical code (including
> Bayesian inference) in a variety of languages.
>

thanks robert.  i'll check it out.  i've heard R mentioned many times,
and i'd tried 'bayes net toolbox' written in matlab, but it didn't look
fast enough for my purposes.  but mostly i just wanted to write one
myself, if just to get a better understanding of the algorithms.


> It's not clear to me that Lisp's particular strength (code = data)
> is going to be much of a win for you. If you were writing a general
> purpose Bayesian inference package, probably so. But I'm guessing
> that what you are going to do is derive some equations by hand,
> code them, and then run them on enormous data sets. I don't
> see much scope for code = data there. YMMV.
>

this is encouraging, but i would still like to see the huge memory and
speed differences in some of those benchmarks fixed, if possible.

thanks again,

henry

> Lisp isn't a bad choice in this context; it is probably better than C
> or Perl.
> 
> FWIW
> Robert Dodier
From: Paolo Amoroso
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <87wt7q1jkx.fsf@plato.moon.paoloamoroso.it>
"Henry Bigelow" <·········@gmail.com> writes:

> i'm a bioinformatics student, writing a bayesian network software with

See:

  BioBike
  http://nostoc.stanford.edu/Docs/

  KnowOS: A Knowledge Operating System
  http://nostoc.stanford.edu/Docs/KnowOS.html

  Bio-Informatics
  http://www.cl-user.net/asp/tags/bio-informatics


Paolo
-- 
Why Lisp? http://wiki.alu.org/RtL%20Highlight%20Film
The Common Lisp Directory: http://www.cl-user.net
From: Nicolas Neuss
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <87hcytraow.fsf@ortler.iwr.uni-heidelberg.de>
"Henry Bigelow" <·········@gmail.com> writes:

> but i'm not sure why certain lisp programs in the shootout are so
> large.  how do i interpret the results of the shootout?  for example,
> lisp fannkuch is 26 times slower than the c version, and lisp chameneos
> is 120x larger.
>

Lisp functions should be started and used from within a Lisp environment,
not from a C environment (Unix). Then compiled functions are usually as
small as or smaller than their C counterparts.
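
For example, one can inspect the compiled code of a single function directly from the REPL (a sketch; the output is implementation-specific):

```lisp
;; Compile a tiny function and look at the machine code it produced.
(defun add2 (x)
  (declare (fixnum x))
  (+ x 2))
(disassemble #'add2)
```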

Nicolas
From: Henry Bigelow
Subject: Re: a potential lisp convert, and interpreting the shootout
Date: 
Message-ID: <1159373704.601188.111790@b28g2000cwb.googlegroups.com>
Nicolas Neuss wrote:
> "Henry Bigelow" <·········@gmail.com> writes:
>
> > but i'm not sure why certain lisp programs in the shootout are so
> > large.  how do i interpret the results of the shootout?  for example,
> > lisp fannkuch is 26 times slower than the c version, and lisp chameneos
> > is 120x larger.
> >
>

> Lisp functions should be started and used in a Lisp environment and not
> inside a C environement (Unix).

what do you mean exactly?  how can you avoid running your program in an
operating system?


> Then compiled functions are usually as
> small or smaller as their C counterpart.

when i said 120x larger, i should have said 'uses 120x as much memory
during runtime'.  the actual binary is probably not much larger than
the c binary for these benchmarks, and the size of source code is all
within a factor of 2 or 3.

> 
> Nicolas