I wonder why most functional programming languages fail so badly at
what should be one of their primary strengths: recursive
functions.
In the well-known benchmarking game called "The Computer Language
Shootout", most such languages fail miserably in the face of blazing
fast imperative code during the benchmarking of recursive functions
such as ackermann or fibonacci:
http://shootout.alioth.debian.org/gp4/benchmark.php?test=recursive&lang=all
The best is Clean, at only 1.6x slower than gcc C.
The chart:
1.6 Clean
3.2 OCaml (just behind Fortran and Java!)
3.3 F# (running in mono)
3.4 SML on MLton (just after FreeBasic!)
3.4 Haskell GHC (just after Java on gcj! almost a tie)
3.6 CL on SBCL (almost as good as compiled lazy static Haskell)
3.7 Scheme on Chicken (just after sbcl)
4.0 SML/NJ (just after C# on mono)
4.6 Scheme on Ikarus (just after Java -client!)
8.1 Scheme PLT (just after Fortran, again)
Well, that's about it.
I know that in the grand scheme of things, such timings are not that bad,
not orders of magnitude slower like some of the dynamic scripting
stinkers at the bottom, especially given such classy and easy
syntax and semantics adding to programmer productivity. Still, it's kind
of a letdown, like buying a Playstation 3 and waiting for a killer
app that never comes...
Any thoughts?
namekuseijin wrote:
> I wonder why most functional programming languages fail so badly at
> what should be one of their primary strengths: recursive
> functions.
>
> In the well-known benchmarking game called "The Computer Language
> Shootout", most such languages fail miserably in the face of blazing
> fast imperative code during the benchmarking of recursive functions
> such as ackermann or fibonacci:
>
> http://shootout.alioth.debian.org/gp4/benchmark.php?test=recursive&lang=all
>
> The best is Clean, at only 1.6x slower than gcc C.
>
> The chart:
> 1.6 Clean
> 3.2 OCaml (just behind Fortran and Java!)
> 3.3 F# (running in mono)
> 3.4 SML on MLton (just after FreeBasic!)
> 3.4 Haskell GHC (just after Java on gcj! almost a tie)
> 3.6 CL on SBCL (almost as good as compiled lazy static Haskell)
> 3.7 Scheme on Chicken (just after sbcl)
> 4.0 SML/NJ (just after C# on mono)
> 4.6 Scheme on Ikarus (just after Java -client!)
> 8.1 Scheme PLT (just after Fortran, again)
>
> Well, that's about it.
>
> I know that in the grand scheme of things, such timings are not that bad,
> not orders of magnitude slower like some of the dynamic scripting
> stinkers at the bottom, especially given such classy and easy
> syntax and semantics adding to programmer productivity. Still, it's kind
> of a letdown, like buying a Playstation 3 and waiting for a killer
> app that never comes...
>
> Any thoughts?
Yes. These programs not only perform recursion but also perform
small numerical computations. Because of Scheme's numeric tower,
a system that supports bignums cannot compile these benchmarks as
fast as systems that have fixed-precision numbers (all the rest?).
The benchmark program itself can be converted to use fixnum-specific
or flonum-specific operators. This in turn gives the compiler a
better chance of recovering some information regarding the size and
representation of the arguments. For example, in the development
version of Ikarus, the run time on my machine for the given benchmark
program is:
real 0m6.529s
user 0m6.386s
sys 0m0.088s
Now modifying the program to use fixnum and flonum operations when
appropriate, the run time becomes:
real 0m4.508s
user 0m4.421s
sys 0m0.077s
If you scale these numbers against the ones you give in the
list above, Ikarus will be in the low 3.x range. Note that the
program is still fully safe and still performs overflow checks for
the fixnum arithmetic operations.
In short, the benchmark in Scheme measures how fast you do fixnum
and flonum computations more than how fast you do recursion.
Aziz,,,
On 31 jul, 12:54, Abdulaziz Ghuloum <········@gmail.com> wrote:
> Now modifying the program to use fixnum and flonum operations when
> appropriate, the run time becomes:
>
> real 0m4.508s
> user 0m4.421s
> sys 0m0.077s
>
> If you can interpolate these numbers to the ones you give in the
> list above, Ikarus will be in the low 3.x range. Note that the
> program is still fully safe and still performs overflow check for
> the fixnum arithmetic operations.
Wonderful! How about submitting that to the site? :)
On Thu, 31 Jul 2008, Abdulaziz Ghuloum wrote:
>
> Now modifying the program to use fixnum and flonum operations when
> appropriate, the run time becomes:
>
> real 0m4.508s
> user 0m4.421s
> sys 0m0.077s
>
In addition, you can probably shave off a few more ms by using
bitwise-ior in ack:
(define (ack x y)
  (if (= 0 x)
      (+ y 1)
      (ack (- x 1)
           (if (= 0 (bitwise-ior y 0))
               1
               (ack x (- y 1))))))
That's what the gcc version does. It made a significant
difference for Stalin as well.
Kjetil S. Matheussen wrote:
> On Thu, 31 Jul 2008, Abdulaziz Ghuloum wrote:
>> Now modifying the program to use fixnum and flonum operations when
>> appropriate, the run time becomes:
>>
>> real 0m4.508s
>> user 0m4.421s
>> sys 0m0.077s
>>
>
> In addition, you can probably shave off a few more ms by using
> bitwise-ior in ack:
>
> (define (ack x y)
>   (if (= 0 x)
>       (+ y 1)
>       (ack (- x 1)
>            (if (= 0 (bitwise-ior y 0))
>                1
>                (ack x (- y 1))))))
Why is (bitwise-ior y 0) not equal to y? Am I recalling my bitwise
operations incorrectly?
--
Simon Richard Clarkstone:
··············@dunelm.org.uk / ················@yahoo.co.uk
"August 9 - I just made my signature file. Its only 6 pages long.
I will have to work on it some more." -- _Diary of an AOL User_
On Sat, 2 Aug 2008, Simon Richard Clarkstone wrote:
> Kjetil S. Matheussen wrote:
> > On Thu, 31 Jul 2008, Abdulaziz Ghuloum wrote:
> >> Now modifying the program to use fixnum and flonum operations when
> >> appropriate, the run time becomes:
> >>
> >> real 0m4.508s
> >> user 0m4.421s
> >> sys 0m0.077s
> >>
> >
> > In addition, you can probably shave off a few more ms by using
> > bitwise-ior in ack:
> >
> > (define (ack x y)
> >   (if (= 0 x)
> >       (+ y 1)
> >       (ack (- x 1)
> >            (if (= 0 (bitwise-ior y 0))
> >                1
> >                (ack x (- y 1))))))
>
> Why is (bitwise-ior y 0) not equal to y? Am I recalling my bitwise
> operations incorrectly?
>
Very good question. :-)
I just took the gcc code and translated it to Scheme:
ack(x - 1, ((y | 0) ? ack(x, y - 1) : 1));
I am very curious, though, why the gcc version is not just:
ack(x - 1, (y ? ack(x, y - 1) : 1));
Anyone who knows?
On Sun, 3 Aug 2008, Kjetil S. Matheussen wrote:
> On Sat, 2 Aug 2008, Simon Richard Clarkstone wrote:
>
> > Kjetil S. Matheussen wrote:
> > > On Thu, 31 Jul 2008, Abdulaziz Ghuloum wrote:
> > >> Now modifying the program to use fixnum and flonum operations when
> > >> appropriate, the run time becomes:
> > >>
> > >> real 0m4.508s
> > >> user 0m4.421s
> > >> sys 0m0.077s
> > >>
> > >
> > > In addition, you can probably shave off a few more ms by using
> > > bitwise-ior in ack:
> > >
> > > (define (ack x y)
> > >   (if (= 0 x)
> > >       (+ y 1)
> > >       (ack (- x 1)
> > >            (if (= 0 (bitwise-ior y 0))
> > >                1
> > >                (ack x (- y 1))))))
> >
> > Why is (bitwise-ior y 0) not equal to y? Am I recalling my bitwise
> > operations incorrectly?
> >
>
> Very good question. :-)
Turned out bitwise-ior actually made Stalin run 70 ms slower.
This is the fastest version:
(define (ack x y)
  (if (= 0 x)
      (+ y 1)
      (ack (- x 1)
           (if (not (= 0 y))
               (ack x (- y 1))
               1))))
Stalin: 1.51s
Gcc: 2.18s
>
> I just took the gcc code and translated it to scheme:
> ack(x - 1, ((y | 0) ? ack(x, y - 1) : 1));
>
> I am very curious, though, why the gcc version is not just:
> ack(x - 1, (y ? ack(x, y - 1) : 1));
>
> Anyone who knows?
>
>
Tried it,
ack(x - 1, y ? ack(x, y - 1) : 1)
is about 10ms faster than
ack(x - 1, y|0 ? ack(x, y - 1) : 1);
:-)
Kjetil S. Matheussen wrote:
> On Sun, 3 Aug 2008, Kjetil S. Matheussen wrote:
>
>> On Sat, 2 Aug 2008, Simon Richard Clarkstone wrote:
>>
>>> Kjetil S. Matheussen wrote:
>>>> (define (ack x y)
>>>>   (if (= 0 x)
>>>>       (+ y 1)
>>>>       (ack (- x 1)
>>>>            (if (= 0 (bitwise-ior y 0))
>>>>                1
>>>>                (ack x (- y 1))))))
>>> Why is (bitwise-ior y 0) not equal to y? Am I recalling my bitwise
>>> operations incorrectly?
>>>
>> Very good question. :-)
[snip]
>> I just took the gcc code and translated it to scheme:
>> ack(x - 1, ((y | 0) ? ack(x, y - 1) : 1));
>>
>> I am very curious, though, why the gcc version is not just:
>> ack(x - 1, (y ? ack(x, y - 1) : 1));
>
> Tried it,
> ack(x - 1, y ? ack(x, y - 1) : 1)
> is about 10ms faster than
> ack(x - 1, y|0 ? ack(x, y - 1) : 1);
Did you turn on optimisations? Unless there is a nasty subtlety in C,
gcc should be translating the latter code into the former; it is a
simple and obvious translation that could assist any generated or
macro-using code. Given the number of language implementations that
compile through C, the amount of generated code compiled by gcc is
probably comparable to the amount of hand-written code.
--
Simon Richard Clarkstone:
··············@dunelm.org.uk / ················@yahoo.co.uk
"August 9 - I just made my signature file. Its only 6 pages long.
I will have to work on it some more." -- _Diary of an AOL User_
On Sun, 3 Aug 2008, Simon Richard Clarkstone wrote:
> Kjetil S. Matheussen wrote:
> > On Sun, 3 Aug 2008, Kjetil S. Matheussen wrote:
> >
> >> On Sat, 2 Aug 2008, Simon Richard Clarkstone wrote:
> >>
> >>> Kjetil S. Matheussen wrote:
> >>>> (define (ack x y)
> >>>>   (if (= 0 x)
> >>>>       (+ y 1)
> >>>>       (ack (- x 1)
> >>>>            (if (= 0 (bitwise-ior y 0))
> >>>>                1
> >>>>                (ack x (- y 1))))))
> >>> Why is (bitwise-ior y 0) not equal to y? Am I recalling my bitwise
> >>> operations incorrectly?
> >>>
> >> Very good question. :-)
> [snip]
> >> I just took the gcc code and translated it to scheme:
> >> ack(x - 1, ((y | 0) ? ack(x, y - 1) : 1));
> >>
> >> I am very curious, though, why the gcc version is not just:
> >> ack(x - 1, (y ? ack(x, y - 1) : 1));
> >
> > Tried it,
> > ack(x - 1, y ? ack(x, y - 1) : 1)
> > is about 10ms faster than
> > ack(x - 1, y|0 ? ack(x, y - 1) : 1);
>
> Did you turn on optimisations? Unless there is a nasty subtlety in C,
> gcc should be translating the latter code into the former; it is a
> simple and obvious translation that could assist any generated or
> macro-using code. Given the number of language implementations that
> compile through C, the amount of generated code compiled gcc is probably
> comparable with the amount of hand-written code.
>
Yes, optimisations were turned on. I used the same compilation flags
as on the benchmark page, except replacing pentium4 with athlon-xp.
But you are probably right: gcc probably optimised y|0 into y,
and the small difference I got was just coincidental.
(I ran "time ./recursive 11" a few times and took an approximate
average of the best of the successive results, so the measurement
was very rough.)
From: Kent M Pitman
Subject: Re: Why such poor recursive behaviour?
Date:
Message-ID: <uiquhmv41.fsf@nhplace.com>
[ comp.lang.lisp only; http://www.nhplace.com/kent/PFAQ/cross-posting.html ]
"Kjetil S. Matheussen" <··············@notam02.no> writes:
> On Sat, 2 Aug 2008, Simon Richard Clarkstone wrote:
>
> > Kjetil S. Matheussen wrote:
> > > On Thu, 31 Jul 2008, Abdulaziz Ghuloum wrote:
...
> > > In addition, you can probably shave off a few more ms by using
> > > bitwise-ior in ack:
> > >
> > > (define (ack x y)
> > >   (if (= 0 x)
> > >       (+ y 1)
> > >       (ack (- x 1)
> > >            (if (= 0 (bitwise-ior y 0))
> > >                1
> > >                (ack x (- y 1))))))
> >
> > Why is (bitwise-ior y 0) not equal to y? Am I recalling my bitwise
> > operations incorrectly?
> >
>
> Very good question. :-)
>
> I just took the gcc code and translated it to scheme:
> ack(x - 1, ((y | 0) ? ack(x, y - 1) : 1));
>
> I am very curious, though, why the gcc version is not just:
> ack(x - 1, (y ? ack(x, y - 1) : 1));
>
> Anyone who knows?
Wikipedia's definition [1] uses m and n, rather than x and y, but makes
a distinction based on n=0 in that position.
Ironically, later in that same section of the page it says:
    One interesting aspect of the Ackermann function is that the only
    arithmetic operations it ever uses are addition and subtraction
    of 1.
I wonder if bitwise-ior counts as an arithmetic operation. :)
[Of course, the reason it makes this observation is to make an argument
about runtime complexity, which isn't affected by the presence or
absence of bitwise-ior.]
My best guess about the (y|0), and I admit I'm really stretching
here because I'm starting from the shaky assumption that the
coder actually did mean something and didn't just make a blunder,
is that perhaps someone was defaulting y to 0 in a .h file [if this
were C++ anyway, that might work; does C let you do this in any
dialects? The Microsoft C compiler seems content to compile a call
with missing args, but supplies random values at runtime] and then
"reminding" you in the .c file that the argument was really optional.
As far as I know the following only works in C++, but maybe there is
some C where this works, too (?). Basically, a .h file containing:
int ax(int x, int y=0);
and a .cpp file containing:
int ax(int x, int y) {
  return x + (y|0);
}
Even if the y|0 [or for this, y||0 would work as well] did nothing
computationally, it might be the moral equivalent of the Lisp idiom
(NOT (NULL X)) where X would suffice. People write this in Lisp knowing
the compiler will optimize it, but wanting to show off that they know
that X contains a "list" and not a "boolean" ... as if these were ever
concretely different. I might be way off base here, but that's all I
could imagine... If I'm right, then in a sense (or y 0) might be a
slightly better rewrite than (= 0 (bitwise-ior y 0)), but it all amounts
to the same, since the Scheme code shown above does not have an optional
argument and it just depends on what placeholder object you want to use
to mean "missing"; in CL, of course, one could use &optional.
[1] http://en.wikipedia.org/wiki/Ackermann_function#Definition_and_properties
On 31 jul, 05:28, ·······@pc-003.diku.dk (Torben Ægidius Mogensen)
wrote:
> I think the difference is not due to the speed of recursive function
> calls but due to the speed of arithmetic. Most functional languages
> check for overflow at every arithmetic operation, which adds
> considerable overhead. C does not.
Hmm, I thought languages like Ada, Oberon-2 or Java did check for
arithmetic overflow as well.
From: Lasse Reichstein Nielsen
Subject: Re: Why such poor recursive behaviour?
Date:
Message-ID: <abfxf2ov.fsf@hotpop.com>
namekuseijin <············@gmail.com> writes:
> On 31 jul, 05:28, ·······@pc-003.diku.dk (Torben Ægidius Mogensen)
> wrote:
>
>> I think the difference is not due to the speed of recursive function
>> calls but due to the speed of arithmetic. Most functional languages
>> check for overflow at every arithmetic operation, which adds
>> considerable overhead. C does not.
>
> Hmm, I thought languages like Ada, Oberon-2 or Java did check for
> arithmetic overflow as well.
Can't speak for Ada and Oberon-2, but Java doesn't.
Its "int" type is just a signed 32-bit integer that overflows just like in C.
/L
--
Lasse Reichstein Nielsen
DHTML Death Colors: <URL:http://www.infimum.dk/HTML/rasterTriangleDOM.html>
'Faith without judgement merely degrades the spirit divine.'
> On 31 jul, 05:28, ·······@pc-003.diku.dk (Torben Ægidius Mogensen)
> wrote:
>
>> I think the difference is not due to the speed of recursive function
>> calls but due to the speed of arithmetic. Most functional languages
>> check for overflow at every arithmetic operation, which adds
>> considerable overhead. C does not.
>
> Hmm, I thought languages like Ada,
Ada requires overflow checks, but GNAT doesn't implement them by
default.
On Fri, 01 Aug 2008 16:28:33 +0200, Florian Weimer <··@deneb.enyo.de>
wrote:
>> On 31 jul, 05:28, ·······@pc-003.diku.dk (Torben Ægidius Mogensen)
>> wrote:
>>
>>> I think the difference is not due to the speed of recursive function
>>> calls but due to the speed of arithmetic. Most functional languages
>>> check for overflow at every arithmetic operation, which adds
>>> considerable overhead. C does not.
>>
>> Hmm, I thought languages like Ada,
>
>Ada requires overflow checks, but GNAT doesn't implement them by
>default.
But the SPARK real-time subset of Ada doesn't.
George
--
for email reply remove "/" from address
On Thu, 31 Jul 2008, namekuseijin wrote:
> I wonder why most functional programming languages fail so badly at
> what should be one of their primary strengths: recursive
> functions.
>
> In the well-known benchmarking game called "The Computer Language
> Shootout", most such languages fail miserably in the face of blazing
> fast imperative code during the benchmarking of recursive functions
> such as ackermann or fibonacci:
>
> http://shootout.alioth.debian.org/gp4/benchmark.php?test=recursive&lang=all
>
> The best is Clean, at only 1.6x slower than gcc C.
>
> The chart:
> 1.6 Clean
> 3.2 OCaml (just behind Fortran and Java!)
> 3.3 F# (running in mono)
> 3.4 SML on MLton (just after FreeBasic!)
> 3.4 Haskell GHC (just after Java on gcj! almost a tie)
> 3.6 CL on SBCL (almost as good as compiled lazy static Haskell)
> 3.7 Scheme on Chicken (just after sbcl)
> 4.0 SML/NJ (just after C# on mono)
> 4.6 Scheme on Ikarus (just after Java -client!)
> 8.1 Scheme PLT (just after Fortran, again)
>
> Well, that's about it.
>
Well, Stalin Scheme is not on the list. My results:
* GCC:
······@ttleush ~/c $ time ./recursive.gcc_run 11
Ack(3,11): 16381
Fib(38.0): 63245986.0
Tak(30,20,10): 11
Fib(3): 3
Tak(3.0,2.0,1.0): 2.0
real 0m2.112s
user 0m2.021s
sys 0m0.002s
* Stalin 0.11:
······@ttleush ~/c $ time ./recursive 11
Ack(3,11): 16381
Fib(3.7999999523162841796875e1.0): 6.32459926605224609375e7.0
Tak(30,20,10): 11
Fib(3): 3
Tak(3.0,2.0,1.0): 2.0.0
real 0m1.725s
user 0m1.635s
sys 0m0.003s
Which means that Stalin Scheme is about 22% faster than C
for this particular benchmark, and Stalin Scheme is a
dynamically typed functional language.
Source:
http://www.notam02.no/~kjetism/recursive.scm
Compile with
stalin -On -no-clone-size-limit -Ob -Om -Or -Ot -copt -O3 -copt
-fomit-frame-pointer -copt -march=athlon-xp -dC -dh -architecture
IA32-align-double -copt -malign-double -copt -freg-struct-return -d0 -d1
-d5 -d6 -k recursive.scm
(most of these options probably don't make any difference to
execution speed; I just tried various things to see if anything
improved, and kept most of the options even when they made no difference)
On 31 jul, 07:50, "Kjetil S. Matheussen" <··············@notam02.no>
wrote:
> real 0m1.725s
> user 0m1.635s
> sys 0m0.003s
>
> Which means that Stalin scheme is about 22% faster than C
> for this particular benchmark, and stalin scheme is a
> dynamic typed functional language.
Whoa! That's exactly the kind of impressive achievement a functional
language should provide, at least regarding recursive behaviour.
Just out of curiosity, is your implementation of fib tree-recursive or
tail-recursive? That should make a lot of difference, huh? The PLT
Scheme version there is a naive tree recursion:
(define (fib n)
  (cond ((< n 2) 1)
        (else (+ (fib (- n 2)) (fib (- n 1))))))
A tail-recursive one should fare much better:
(define (fib n)
  (let fb ((x n) (r 1) (r2 1))
    (cond ((= x 0) r2)
          ((= x 1) r)
          (else (fb (- x 1) (+ r2 r) r)))))
Is Stalin capable of automatically turning tree recursion into tail
recursion?
Very impressive, anyway. If only its name didn't spread so much
fear... ;)
On Thu, 31 Jul 2008, namekuseijin wrote:
> On 31 jul, 07:50, "Kjetil S. Matheussen" <··············@notam02.no>
> wrote:
> > real    0m1.725s
> > user    0m1.635s
> > sys     0m0.003s
> >
> > Which means that Stalin scheme is about 22% faster than C
> > for this particular benchmark, and stalin scheme is a
> > dynamic typed functional language.
>
> Whoa! That's exactly the kind of impressive achievment a functional
> language should provide, at least regarding recursive behaviour.
>
> Just out of curiosity, is your implementation of fib tree-recursive or
> tail-recursive? That should make a lot of difference, huh?
> The PLT Scheme version there is a naive tree recursion:
>
> (define (fib n)
>   (cond ((< n 2) 1)
>         (else (+ (fib (- n 2)) (fib (- n 1))))))
>
> A tail-recursive one should fare much better:
>
> (define (fib n)
>   (let fb ((x n) (r 1) (r2 1))
>     (cond ((= x 0) r2)
>           ((= x 1) r)
>           (else (fb (- x 1) (+ r2 r) r)))))
>
>
Maybe, but that would be cheating.
According to the benchmark page, fib must be
implemented like this:
Fib(n)
  n < 2     = 1
  otherwise = Fib(n-2) + Fib(n-1)
Apropos cheating, I'm not 100% sure the gcc version of Ack is playing
fair though...:
Ack(x,y)
  x = 0     = y+1
  y = 0     = Ack(x-1,1)
  otherwise = Ack(x-1, Ack(x,y-1))
vs.
static inline int ack(register int x, register int y) {
  if (x == 0) {
    return y + 1;
  }
  return ack(x - 1, ((y | 0) ? ack(x, y - 1) : 1));
}
> Is Stalin capable of automatically turning tree recursion into tail
> recursion?
At least not for Fib:
/* FIB[7] */
int f7(int a668)
{
  int t1297;
  int t1298;
  int t1299;
  int t1300;
  int t1301;
  int t1302;
  int t1303;
  int t1304;
  int t1305;
  int t1306;
  /* x60 recursive.scm:40:960 */
  /* x40 recursive.scm:40:967 */
  /* x38 recursive.scm:40:970 */
  t1297 = a668;
  /* x39 recursive.scm:40:972 */
  t1298 = 2;
  /* x18678 recursive.scm:40:968 */
  if (!(t1297 < t1298))
    goto l366;
  /* x43 */
  /* x42 */
  /* x41 recursive.scm:40:975 */
  return 1;
l366:
  /* x59 */
  /* x58 */
  /* x57 recursive.scm:41:992 */
  /* x50 recursive.scm:41:995 */
  /* x49 recursive.scm:41:1000 */
  /* x47 recursive.scm:41:1003 */
  t1302 = a668;
  /* x48 recursive.scm:41:1005 */
  t1303 = 2;
  /* x18676 recursive.scm:41:1001 */
  t1301 = t1302 - t1303;
  /* x45 recursive.scm:41:996 */
  t1299 = f7(t1301);
  /* x56 recursive.scm:41:1009 */
  /* x55 recursive.scm:41:1014 */
  /* x53 recursive.scm:41:1017 */
  t1305 = a668;
  /* x54 recursive.scm:41:1019 */
  t1306 = 1;
  /* x18675 recursive.scm:41:1015 */
  t1304 = t1305 - t1306;
  /* x51 recursive.scm:41:1010 */
  t1300 = f7(t1304);
  /* x18677 recursive.scm:41:993 */
  return t1299 + t1300;
}
>
> Very impressive, anyway. If only its name didn't spread so much
> fear... ;)
>
Yeah. ;)
On Thu, 31 Jul 2008, Kjetil S. Matheussen wrote:
> On Thu, 31 Jul 2008, namekuseijin wrote:
>
> > [snip]
>
> Well, Stalin scheme is not on the list. My results:
>
> * GCC:
> ······@ttleush ~/c $ time ./recursive.gcc_run 11
> Ack(3,11): 16381
> Fib(38.0): 63245986.0
> Tak(30,20,10): 11
> Fib(3): 3
> Tak(3.0,2.0,1.0): 2.0
>
> real 0m2.112s
> user 0m2.021s
> sys 0m0.002s
>
Options:
/usr/bin/gcc -pipe -Wall -O3 -fomit-frame-pointer -march=athlon-xp recursivec.c -o recursive.gcc_run
gcc version:
gcc (GCC) 4.2.4 (Gentoo 4.2.4 p1.0)
>
> * Stalin 0.11:
> ······@ttleush ~/c $ time ./recursive 11
> Ack(3,11): 16381
> Fib(3.7999999523162841796875e1.0): 6.32459926605224609375e7.0
> Tak(30,20,10): 11
> Fib(3): 3
> Tak(3.0,2.0,1.0): 2.0.0
Don't worry about the not quite accurate printing of numbers;
the calculations themselves are correct, with the same input
and output values as the C version.
>
> real 0m1.725s
> user 0m1.635s
> sys 0m0.003s
>
>
> Which means that Stalin scheme is about 22% faster than C
> for this particular benchmark, and stalin scheme is a
> dynamic typed functional language.
Well, the types are only dynamic when Stalin can't infer them,
which it obviously could in this example. :-)
Pah. This challenge screams for a Gordian-knot solution:
with the simplest memoization applied to the functions, the run
time of the Common Lisp version is down to a few milliseconds.
--
(espen)
On 31 Lug, 12:50, "Kjetil S. Matheussen" <··············@notam02.no>
wrote:
> On Thu, 31 Jul 2008, namekuseijin wrote:
> > [snip]
>
> Well, Stalin scheme is not on the list. My results:
>
> * GCC:
> ······@ttleush ~/c $ time ./recursive.gcc_run 11
> Ack(3,11): 16381
> Fib(38.0): 63245986.0
> Tak(30,20,10): 11
> Fib(3): 3
> Tak(3.0,2.0,1.0): 2.0
>
> real 0m2.112s
> user 0m2.021s
> sys 0m0.002s
>
> * Stalin 0.11:
> ······@ttleush ~/c $ time ./recursive 11
> Ack(3,11): 16381
> Fib(3.7999999523162841796875e1.0): 6.32459926605224609375e7.0
> Tak(30,20,10): 11
> Fib(3): 3
> Tak(3.0,2.0,1.0): 2.0.0
>
> real 0m1.725s
> user 0m1.635s
> sys 0m0.003s
>
> Which means that Stalin scheme is about 22% faster than C
> for this particular benchmark, and stalin scheme is a
> dynamic typed functional language.
>
> Source:http://www.notam02.no/~kjetism/recursive.scm
Can you post the parts of C code generated by Stalin that are faster
than the hand-coded C, please?
> Compile with
> stalin -On -no-clone-size-limit -Ob -Om -Or -Ot -copt -O3 -copt
> -fomit-frame-pointer -copt -march=athlon-xp -dC -dh -architecture
> IA32-align-double -copt -malign-double -copt -freg-struct-return -d0 -d1
> -d5 -d6 -k recursive.scm
>
> (most of these options probably don't make any difference regarding
> execution speed, I just tried various
> things to see if it improved, and kept most that didn't make a difference)
On Mon, 4 Aug 2008, Vend wrote:
> On 31 Lug, 12:50, "Kjetil S. Matheussen" <··············@notam02.no>
> wrote:
> > On Thu, 31 Jul 2008, namekuseijin wrote:
> > > [snip]
> >
> > Well, Stalin scheme is not on the list. My results:
> >
> > * GCC:
> > ······@ttleush ~/c $ time ./recursive.gcc_run 11
> > Ack(3,11): 16381
> > Fib(38.0): 63245986.0
> > Tak(30,20,10): 11
> > Fib(3): 3
> > Tak(3.0,2.0,1.0): 2.0
> >
> > real 0m2.112s
> > user 0m2.021s
> > sys 0m0.002s
> >
> > * Stalin 0.11:
> > ······@ttleush ~/c $ time ./recursive 11
> > Ack(3,11): 16381
> > Fib(3.7999999523162841796875e1.0): 6.32459926605224609375e7.0
> > Tak(30,20,10): 11
> > Fib(3): 3
> > Tak(3.0,2.0,1.0): 2.0.0
> >
> > real 0m1.725s
> > user 0m1.635s
> > sys 0m0.003s
> >
> > Which means that Stalin scheme is about 22% faster than C
> > for this particular benchmark, and stalin scheme is a
> > dynamic typed functional language.
> >
> > Source:http://www.notam02.no/~kjetism/recursive.scm
>
> Can you post the parts of C code generated by Stalin that are faster
> than the hand-coded C, please?
>
Okay.
Integer versions of Ack and Tak below.
The float versions are probably similar, I would think,
except that all "int"s are replaced with "float"s,
so I didn't include that code.
Stalin's version of Fibonacci probably performs similarly
to the gcc version, so I didn't include that code
either.
In this case, the code produced by Stalin isn't that
spectacular (I mean, it's pretty straightforward),
so I'm surprised no other languages have produced
something similar. The reason for the increased
performance compared to the gcc version seems
to be simply that a couple of function
calls are translated into gotos. But maybe Stalin
has done some more sophisticated, but less obvious,
optimization here as well; I don't know. :-)
Ack(x,y)
  | x == 0    = y+1
  | y == 0    = Ack(x-1, 1)
  | otherwise = Ack(x-1, Ack(x, y-1))
/* ACK[6] */
int f6(int a666, int a667)
{
    int t1307;
    int t1308;
    int t1309;
    int t1310;
    int t1311;
    int t1312;
    int t1313;
    int t1314;
    int t1315;
    int t1316;
    int t1317;
    int t1318;
    int t1319;
    int t1320;
h6:
    t1307 = 0;
    t1308 = a666;
    if (!(t1307 == t1308))
        goto l368;
    t1319 = a667;
    t1320 = 1;
    return t1319 + t1320;
l368:
    t1311 = a666;
    t1312 = 1;
    t1309 = t1311 - t1312;
    t1313 = 0;
    t1314 = a667;
    if (t1313 == t1314)
        goto l370;
    t1315 = a666;
    t1317 = a667;
    t1318 = 1;
    t1316 = t1317 - t1318;
    t1310 = f6(t1315, t1316);
    goto l371;
l370:
    t1310 = 1;
l371:
    a666 = t1309;
    a667 = t1310;
    goto h6;
}
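For comparison, here is a minimal hand-made sketch of that same transformation applied to the usual recursive C Ackermann (the function names are made up for illustration, not from the benchmark source): the outermost self-call is in tail position, so it can be replaced by a parameter update plus a goto, as in f6 above. Only the inner call still recurses.

```c
#include <assert.h>

/* Naive recursive Ackermann, as in the hand-written C benchmark. */
static int ack_rec(int x, int y)
{
    if (x == 0)
        return y + 1;
    if (y == 0)
        return ack_rec(x - 1, 1);
    return ack_rec(x - 1, ack_rec(x, y - 1));
}

/* The same function after the transformation Stalin performs: the
   outermost self-call is a tail call, so it becomes a parameter
   update plus a goto.  Only the inner ack(x, y-1) still recurses. */
static int ack_goto(int x, int y)
{
top:
    if (x == 0)
        return y + 1;
    y = (y == 0) ? 1 : ack_goto(x, y - 1);
    x = x - 1;
    goto top;
}
```

Both versions compute the same results; the goto form simply avoids one stack frame per outer step.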
Tak(x,y,z)
  | y < x     = Tak(Tak(x-1.0,y,z), Tak(y-1.0,z,x), Tak(z-1.0,x,y))
  | otherwise = z
/* TAK[13] */
int f13(int a670, int a671, int a672)
{
    int t1267;
    int t1268;
    int t1269;
    int t1270;
    int t1271;
    int t1272;
    int t1273;
    int t1274;
    int t1275;
    int t1276;
    int t1277;
    int t1278;
    int t1279;
    int t1280;
    int t1281;
    int t1282;
    int t1283;
    int t1284;
    int t1285;
    int t1286;
h13:
    t1267 = a671;
    t1268 = a670;
    if (!(t1267 < t1268))
        goto l362;
    t1275 = a670;
    t1276 = 1;
    t1272 = t1275 - t1276;
    t1273 = a671;
    t1274 = a672;
    t1269 = f13(t1272, t1273, t1274);
    t1280 = a671;
    t1281 = 1;
    t1277 = t1280 - t1281;
    t1278 = a672;
    t1279 = a670;
    t1270 = f13(t1277, t1278, t1279);
    t1285 = a672;
    t1286 = 1;
    t1282 = t1285 - t1286;
    t1283 = a670;
    t1284 = a671;
    t1271 = f13(t1282, t1283, t1284);
    a670 = t1269;
    a671 = t1270;
    a672 = t1271;
    goto h13;
l362:
    return a672;
}
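Written back in idiomatic C, f13 is just the following while-loop. This is a hand-made sketch equivalent to the generated code, not something the compiler emitted:

```c
#include <assert.h>

/* Equivalent of Stalin's f13: the outer self-call of Tak becomes a
   loop; the three inner calls still recurse.  All three new argument
   values are computed before any parameter is overwritten, which is
   exactly what the t12xx temporaries guarantee in the generated code. */
static int tak(int x, int y, int z)
{
    while (y < x) {
        int a = tak(x - 1, y, z);
        int b = tak(y - 1, z, x);
        int c = tak(z - 1, x, y);
        x = a;
        y = b;
        z = c;
    }
    return z;
}
```

For the benchmark's argument pattern Tak(3n,2n,n) this returns n+1, matching the Tak(30,20,10): 11 output above.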
> > Compile with
> > stalin -On -no-clone-size-limit -Ob -Om -Or -Ot -copt -O3 -copt
> > -fomit-frame-pointer -copt -march=athlon-xp -dC -dh -architecture
> > IA32-align-double -copt -malign-double -copt -freg-struct-return -d0 -d1
> > -d5 -d6 -k recursive.scm
> >
> > (most of these options probably don't make any difference regarding
> > execution speed, I just tried various
> > things to see if it improved, and kept most that didn't make a difference)
>
>
On 5 Ago, 11:42, "Kjetil S. Matheussen" <··············@notam02.no>
wrote:
<snip>
Thanks.
I noticed that Stalin seems to store all the values it uses, even
constants, in temporary local variables, which appear to be treated
under a single-assignment, single-reference policy. Is this just the
way it's wired, or is there a performance benefit in doing so? (I
suppose most of these variables are optimized away by GCC.)
On Tue, 5 Aug 2008, Vend wrote:
> On 5 Ago, 11:42, "Kjetil S. Matheussen" <··············@notam02.no>
> wrote:
> <snip>
>
> Thanks.
>
> I noticed that Stalin seems to store all the values it uses, even
> constants, in temporary local variables, which appear to be treated
> under a single-assignment, single-reference policy. Is this just the
> way it's wired, or is there a performance benefit in doing so? (I
> suppose most of these variables are optimized away by GCC.)
>
>
Probably just convenience... C compilers optimize those
away anyway, so there's no need for Stalin to do it.
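To make the point concrete, here is a made-up sketch (not Stalin output): the two functions below typically compile to the same machine code at -O2, because the register allocator folds single-use temporaries away.

```c
#include <assert.h>

/* Direct style, as a human would write it. */
static int sub_direct(int a, int b)
{
    return a - b;
}

/* Stalin's style: every operand goes through a temporary that is
   assigned once and read once, mirroring t1311/t1312/t1309 in f6. */
static int sub_temps(int a, int b)
{
    int t1 = a;
    int t2 = b;
    int t3 = t1 - t2;
    return t3;
}
```

The temporaries cost nothing at runtime; they only make the generated source more uniform and easier for the code generator to emit.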
On Jul 31, 11:50 am, "Kjetil S. Matheussen"
<··············@notam02.no> wrote:
> Well, Stalin scheme is not on the list. My results:
>
> * GCC:
> ······@ttleush ~/c $ time ./recursive.gcc_run 11
> Ack(3,11): 16381
> Fib(38.0): 63245986.0
> Tak(30,20,10): 11
> Fib(3): 3
> Tak(3.0,2.0,1.0): 2.0
>
> real 0m2.112s
> user 0m2.021s
> sys 0m0.002s
>
> * Stalin 0.11:
> ······@ttleush ~/c $ time ./recursive 11
> Ack(3,11): 16381
> Fib(3.7999999523162841796875e1.0): 6.32459926605224609375e7.0
> Tak(30,20,10): 11
> Fib(3): 3
> Tak(3.0,2.0,1.0): 2.0.0
>
> real 0m1.725s
> user 0m1.635s
> sys 0m0.003s
>
> Which means that Stalin scheme is about 22% faster than C
> for this particular benchmark, and stalin scheme is a
> dynamic typed functional language.
>
> Source:http://www.notam02.no/~kjetism/recursive.scm
Hi:
I ran your Scheme code under the latest Bigloo on my Mac OS X,
Intel 2.0 GHz dual-core machine:
for n=10:
The C version (downloaded from the link given by the original poster):
==
$ gcc -pipe -Wall -O3 -fomit-frame-pointer -march=pentium4 test33.c
$ time ./a.out
Ack(3,11): 16381
Fib(38.0): 63245986.0
Tak(30,20,10): 11
Fib(3): 3
Tak(3.0,2.0,1.0): 2.0
real 0m1.655s
user 0m1.631s
sys 0m0.008s
==
The Bigloo version (code enclosed: all I did was change + to Bigloo's
+fx, +fl, etc., and use bit-or):
==
$ bigloo test.scm -Obench
ld64: warning: option -s is obsolete and being ignored
$ time ./a.out
Ack(3,11): 16381
Fib(38.0): 63245986.0
Tak(30,20,10): 11
Fib(3): 3
Tak(3.0,2.0,1.0): 2.0
real 0m1.620s
user 0m1.601s
sys 0m0.009s
==
Note: in the Bigloo code I haven't used roundto for printing fib,
because that is always an annoying nightmare when using
Scheme implementations and shared code: what works fine in one
implementation does not work in another. However, this does not
affect the benchmark results.
==
;;; Code from Kjetil S. Matheussen
;;; I changed the operators +, -, etc. to Bigloo's native ones: +fl, +fx, etc.
;;; and made the code use bit-or

(module test)

(define (ack::bint x::bint y::bint)
  (if (=fx 0 x)
      (+fx y 1)
      (ack (-fx x 1)
           (if (=fx 0 (bit-or y 0)) ;;(not (= 0 y))
               1
               (ack x (-fx y 1))))))

(define (fib::bint n::bint)
  (cond ((<fx n 2) 1)
        (else (+fx (fib (-fx n 2)) (fib (-fx n 1))))))

;; TODO: optimize on ikarus, it's allocing around 400MB but the GC is fast
;; TODO: is the main problem that floats are always boxed in Ikarus?
(define (fibflt::double n::double)
  (cond ((<fl n 2.0) 1.0)
        (else (+fl (fibflt (-fl n 2.0))
                   (fibflt (-fl n 1.0))))))

(define (tak::bint x::bint y::bint z::bint)
  (if (<fx y x)
      (tak (tak (-fx x 1) y z)
           (tak (-fx y 1) z x)
           (tak (-fx z 1) x y))
      z))

(define (takflt::double x::double y::double z::double)
  (if (<fl y x)
      (takflt (takflt (-fl x 1.0) y z)
              (takflt (-fl y 1.0) z x)
              (takflt (-fl z 1.0) x y))
      z))

;; -------------------------------
;;; Stupid boiler-plate for formatting floating point values
(define (roundto digits n)
  (let* ((e (expt 10 digits))
         (num (round (abs (* e (inexact->exact n)))))
         (str (number->string (remainder num e))))
    ;(print "sr" str " " num " " e "dig " digits "n " n)
    (string-append
     (if (negative? n) "-" "")
     (number->string (/ num e))
     "."
     (make-string (- digits (string-length str)) #\0)
     str)))

(define (main args)
  (let ((n (inexact->exact (string->number (vector-ref args 1)))))
    (display "Ack(3,") (display n) (display "): ") (display (ack 3 n)) (newline)
    (display "Fib(") (display (roundto 1 (+ 27.0 n))) (display "): ")
    ;(print n) (exit)
    ;(display "Fib(") (display (+ 27.0 n)) (display "): ")
    ;(display (roundto 1 (fibflt (+ 27.0 n)))) (newline)
    (display (fibflt (+ 27.0 n))) (newline)
    (set! n (-fx n 1))
    (display "Tak(") (display (*fx n 3))
    (display ",") (display (*fx n 2))
    (display ",") (display n) (display "): ")
    (display (tak (*fx n 3) (*fx n 2) n)) (newline)
    (display "Fib(3): ") (display (fib 3)) (newline)
    (display "Tak(3.0,2.0,1.0): ") (display (roundto 1 (takflt 3.0 2.0 1.0)))
    (newline)))

;; -------------------------------
(main (vector "scheme-compatibiliy-sucks" "11"))
==
On Thu, 31 Jul 2008 10:01:37 +0200, namekuseijin
<············@gmail.com> wrote:
> I wonder why most functional programming languages fail so badly at
> what should be one of their primary main strengths: recursive
> functions.
>
> In the well-known benchmarking game called "The computer game
> shootout", most such languages fail miserably in the face of blazing
> fast imperative code during the benchmarking of recursive functions
> such as ackermann or fibonacci:
>
> http://shootout.alioth.debian.org/gp4/benchmark.php?test=recursive&lang=all
>
> The best is clean, only 1.6 slower than gcc C.
>
> The chart:
> 1.6 Clean
> 3.2 OCaml (just behind Fortran and Java!)
> 3.3 F# (running in mono)
> 3.4 SML on MLton (just after FreeBasic!)
> 3.4 Haskell GHC (just after Java on gcj! almost a tie-in)
> 3.6 CL on SBCL (almost as good as compiled lazy static Haskell)
> 3.7 Scheme on Chicken (just after sbcl)
> 4.0 SML/NJ (just after C# on mono)
> 4.6 Scheme on Ikarus (just after Java -client!)
> 8.1 Scheme PLT (just after Fortran, again)
>
> Well, that's about it.
>
> I know in the grand scheme of things, such timings are not that bad,
> not orders of magnitude slower like some of the dynamic scripting
> stinkers in the bottom, specially in the face of such classy and easy
> syntax and semantics adding to programmer productivity. Still, kind
> of a let down, like buying a Playstation 3 and waiting for a killer
> app that never comes...
>
> Any thoughts?
The fastest applicative code is in OCaml and Haskell.
For better performance in CL you might try the series library.
In a nutshell, the above-mentioned languages were based on experience
with, and reflection on, performance in Lisp, in particular experience
with type inference and lazy evaluation.
SBCL is better at type inference than other Lisps, so perhaps series under
SBCL would give CL a more favorable result.
Most compilers today (Allegro Common Lisp, LispWorks) have focused more on
OO performance under CLOS and on fast memory management, which matter more
to many Lisp users. Note that most Lisp users take a mixed-paradigm
approach, and that loop plays an important role in why applicative
performance has not been a priority. As in all compiled languages, it
depends more on the quality of the programmer than on the compiler.
Anyhow, short example code doesn't accurately reflect users' perceived
performance running real programs. Java's long loading time and memory use
tend to overshadow algorithmic performance in real programs. In this sense
CL scores a lot better.
--------------
John Thingstad
From: Jon Harrop
Subject: Re: Why such poor recursive behaviour?
Date:
Message-ID: <g773us$b85$1@aioe.org>
namekuseijin wrote:
> I wonder why most functional programming languages fail so badly at
> what should be one of their primary main strengths: recursive
> functions.
>
> In the well-known benchmarking game called "The computer game
> shootout", most such languages fail miserably in the face of blazing
> fast imperative code during the benchmarking of recursive functions
> such as ackermann or fibonacci:
>
>
http://shootout.alioth.debian.org/gp4/benchmark.php?test=recursive&lang=all
>
> The best is clean, only 1.6 slower than gcc C.
Actually the best is F#, only 1.2x slower than Visual C++.
> The chart:
> 1.6 Clean
> 3.2 OCaml (just behind Fortran and Java!)
> 3.3 F# (running in mono)
> 3.4 SML on MLton (just after FreeBasic!)
> 3.4 Haskell GHC (just after Java on gcj! almost a tie-in)
> 3.6 CL on SBCL (almost as good as compiled lazy static Haskell)
> 3.7 Scheme on Chicken (just after sbcl)
> 4.0 SML/NJ (just after C# on mono)
> 4.6 Scheme on Ikarus (just after Java -client!)
> 8.1 Scheme PLT (just after Fortran, again)
>
> Well, that's about it.
>
> I know in the grand scheme of things, such timings are not that bad,
I think 3x slower is bad. These languages should be at most 2x slower.
> not orders of magnitude slower like some of the dynamic scripting
> stinkers in the bottom, specially in the face of such classy and easy
> syntax and semantics adding to programmer productivity. Still, kind
> of a let down, like buying a Playstation 3 and waiting for a killer
> app that never comes...
>
> Any thoughts?
OCaml is slow on the int versions because integer arithmetic is slow in
OCaml: it uses tagged 31-bit integers for the GC, which require extra
shifts and tests. For example, OCaml is 3x slower than F# on the int-based
Monte Carlo part of the SciMark2 benchmark (see the OCaml Journal article
for details). OCaml is slow on the float versions because it is unable to
unbox floats passed through recursive function calls. The latter problem
can be solved without leaving OCaml by rewriting in Fortran 77 style, but
it is ugly.
.NET's JIT-time monomorphization solves these problems in F# and,
consequently, F# on .NET is often much faster than any other FPL,
including OCaml. Moreover, the designs of all of those FPLs predate VMs
like the JVM and .NET, which is why none of them are capable of these
kinds of optimization.
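The tagged-integer cost is easy to sketch in C (a simplified illustration only; a real OCaml value also uses the low bit to distinguish heap pointers): an int n is stored as 2n+1, and arithmetic has to compensate for the tag.

```c
#include <assert.h>

typedef long value;

/* OCaml-style tagging: integer n is represented as 2n+1, so the low
   bit distinguishes immediate ints (1) from heap pointers (0),
   which is what the GC relies on. */
static value tag(long n)    { return (n << 1) | 1; }
static long  untag(value v) { return v >> 1; }

/* Addition on tagged values: (2a+1) + (2b+1) - 1 == 2(a+b)+1.
   The extra subtraction is the kind of per-operation overhead that
   tagging costs; multiplication needs real shifts on top of that. */
static value add_tagged(value a, value b) { return a + b - 1; }
```

One cheap fix-up per addition looks harmless, but in a tight recursive integer benchmark it adds up, which is consistent with the gap reported above.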
This is not rocket science but, despite the availability of excellent
solutions like LLVM, there are no longer any modern open source functional
programming language implementations. This is a great shame because
functional programming is really taking off right now and a lot of people
would benefit from a decent foundation. The problem is that the programming
language researchers are the only people who know what they are doing but
they have no incentive to build decent foundations for the next generation
of functional programming languages. So they are all building separately on
sand, Greenspunning uninteroperable run-time environments and garbage
collectors. Mono is the nearest thing to an alternative (the JVM lacks tail
calls) but it is a joke compared to .NET, as you have shown.
So Microsoft now have a monopoly over modern functional programming
languages.
--
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
On Aug 4, 11:33 am, Jon Harrop <····@ffconsultancy.com> wrote:
> So Microsoft now have a monopoly over modern functional programming
> languages.
Well, at the very least, they have a monopoly on F# and Simon Peyton
Jones. ;)
On Aug 6, 12:28 am, ········@gmail.com" <·······@gmail.com> wrote:
> In any case, I think the big advantage of languages like ML is in GC,
> memory safety, sane type systems, interactive development, nice
> debuggers and macro systems. Tail recursion is nice to have, but it
> wouldn't be essential to me.
How would you iterate large datasets, then?
From: Jon Harrop
Subject: Re: Why such poor recursive behaviour?
Date:
Message-ID: <g7dd66$ibr$1@aioe.org>
namekuseijin wrote:
> On Aug 6, 12:28 am, ········@gmail.com" <·······@gmail.com> wrote:
>> In any case, I think the big advantage of languages like ML is in GC,
>> memory safety, sane type systems, interactive development, nice
>> debuggers and macro systems. Tail recursion is nice to have, but it
>> wouldn't be essential to me.
>
> How would you iterate large datasets, then?
By writing Fortran-style code.
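In C terms the trade-off looks like this (a sketch with made-up names): without guaranteed tail-call elimination, the tail-recursive version risks overflowing the stack on a large dataset, so you fall back to the explicit loop.

```c
#include <assert.h>
#include <stddef.h>

static const int sample[5] = {1, 2, 3, 4, 5};

/* Tail-recursive sum: only safe on big inputs if the compiler happens
   to perform tail-call elimination, which C does not guarantee. */
static long sum_rec(const int *a, size_t n, long acc)
{
    if (n == 0)
        return acc;
    return sum_rec(a + 1, n - 1, acc + a[0]);
}

/* "Fortran-style": the explicit loop you rewrite to in a language
   without guaranteed tail calls. */
static long sum_loop(const int *a, size_t n)
{
    long acc = 0;
    for (size_t i = 0; i < n; i++)
        acc += a[i];
    return acc;
}
```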
--
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
It should be noted that fa.caml and fa.haskell are read-only Google
interfaces to the mailing lists. So their respective communities
didn't read your message and aren't here to present their views.