From: ···@!!!
Subject: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <42d1d3ba-33d3-4d59-80dd-691170b2a615@r33g2000yqn.googlegroups.com>
I'm not trying to start a flame war about which one is the best. Could
anybody explain each of these languages' features and strong points?

From: Dimiter "malkia" Stanev
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h1rluc$c5v$1@malkia.motzarella.org>
···@!!! wrote:
> I'm not trying to start a flame war about which one is the best. Could
> anybody explain me each of these languages features and strong points ?

I know a bit of Common Lisp, but I'm still a Lisp toddler. I can barely
understand Haskell, and have trouble reading Prolog code.

Common Lisp is a multi-paradigm language: you can code in many
different ways. Its strong points are the powerful macro system, the
continuous development cycle (you can modify the system while it's
running), the multiple-dispatch Common Lisp Object System (CLOS),
special (dynamically scoped) global variables, lambdas, local variables
(let), closures ("let over lambda"), and many other things built in or
built on top of it.
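A minimal sketch of a few of these features (all the names here are made up for illustration, not from any particular library): a macro that rewrites code at compile time, a CLOS generic function dispatching on more than one argument, and a "let over lambda" closure:

```lisp
;; A macro: the compiler expands this into a LET before compiling.
(defmacro with-doubled ((var expr) &body body)
  `(let ((,var (* 2 ,expr)))
     ,@body))

;; CLOS multiple dispatch: the method is chosen on BOTH argument classes.
(defgeneric combine (a b))
(defmethod combine ((a number) (b number)) (+ a b))
(defmethod combine ((a string) (b string)) (concatenate 'string a b))

;; A closure: a lambda capturing a LET binding ("let over lambda").
(defun make-counter ()
  (let ((n 0))
    (lambda () (incf n))))

(with-doubled (x 21) (print x))  ; prints 42
(print (combine 1 2))            ; prints 3
(print (combine "foo" "bar"))    ; prints "foobar"
(let ((c (make-counter)))
  (print (funcall c))            ; prints 1
  (print (funcall c)))           ; prints 2
```

Because these definitions can be redefined at the REPL while the image keeps running, they also illustrate the continuous development cycle mentioned above.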

Common Lisp is not purely functional (unlike Haskell and, I think,
Prolog), and one could even say it is not functional at all, but you can
definitely code in a functional manner easily, provided you restrict
yourself to non-destructive operations and use facilities such as FSet
or similar.
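As a sketch of that functional style: the standard sequence functions are non-destructive by default, returning fresh results and leaving their input untouched:

```lisp
(let ((xs (list 1 2 3 4 5)))
  ;; Each call returns a fresh result; XS itself is never modified.
  (print (mapcar (lambda (x) (* x x)) xs)) ; prints (1 4 9 16 25)
  (print (remove-if #'evenp xs))           ; prints (1 3 5)
  (print (reduce #'+ xs))                  ; prints 15
  (print xs))                              ; still prints (1 2 3 4 5)
```

Their destructive counterparts (DELETE-IF, NREVERSE, SORT) also exist; coding functionally mostly means avoiding those, or reaching for a library like FSet for fully persistent collections.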

Not being purely functional is both a strong and a weak point. Strong,
because you can really fix performance problems with imperative code
where needed; weak, because allowing side effects in general limits the
compiler's abilities, mainly in that the compiler cannot assume that
data won't be modified. Not sure whether that's relevant :)

With Common Lisp you can quickly start prototyping code. Such code can
be 10x-1000x less efficient than what you can actually achieve, but by
iteratively profiling it and giving hints and type declarations to the
compiler, you can gain a lot. I'm still finding my way through this.
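A sketch of what such hints look like in practice (the function and its types are made up for illustration): DECLARE forms tell the compiler the argument types and the optimization policy, so an implementation like SBCL can replace generic arithmetic with specialized, unboxed float code:

```lisp
(defun sum-of-squares (v)
  ;; Promise the compiler a specialized double-float vector and ask it
  ;; to prioritize speed over safety checks.
  (declare (type (simple-array double-float (*)) v)
           (optimize (speed 3) (safety 1)))
  (let ((acc 0d0))
    (declare (type double-float acc))
    (dotimes (i (length v) acc)
      (incf acc (* (aref v i) (aref v i))))))

(sum-of-squares
 (make-array 3 :element-type 'double-float
               :initial-contents '(1d0 2d0 3d0))) ; => 14.0d0
```

The usual workflow is exactly as described above: profile first, then add declarations only to the hot spots the profiler points at.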

In general, there are reasonably good compilers out there that don't
need so many hints.

You can bend Common Lisp to your needs, once you become a good master
of it.

My weakest point with Common Lisp so far has been ME: I have to bend
myself to the language, rather than trying to use it the way I've used
past languages.

From the little I know of Haskell, it's a statically typed,
lazy-by-default, purely functional language (i.e. no side effects by
default).

Can't say much about Prolog.
From: ···@!!!
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <e359bbd4-b44d-4433-bf3e-8f4a7d8d3252@3g2000yqk.googlegroups.com>
> In general, there are reasonably good compilers there, where you won't
> need so much hints.

Can you mention some of the best compilers?

> My weakest point for Common Lisp so far has been ME - I have to mend
> myself to the language, rather than trying to use it, the way I've been
> using past languages

I believe that's true for me, because if the languages a programmer has
used in the past were mainly imperative, then a different paradigm, I
guess, would require them to change the way their mind thinks about
problems. I don't see programming just as a way of creating software,
but as a whole different way of thinking about problems and solutions
(sometimes even in everyday life!).

I remember when, about two years ago, I decided to start learning
Haskell. I couldn't even understand the Hello World example, and after
a week I finally gave up. Since then I retried three more times to
learn Haskell, and only the last time did I have that moment of bliss
where, strangely, in a matter of milliseconds, like a hammer to the
back of the head, the code changed completely in front of my eyes, and
there it was as it truly is: simple and beautiful and full of meaning
like never before. :)
The same thing happened to me with Prolog. I retried about three or
four times to learn it, but only after getting to understand Haskell
could I suddenly understand Prolog too, and strangely it seemed so
easy.

The reason for this, I believe, was that my mind was accustomed to the
imperative way of thinking, and it needed some time (and perhaps some
luck too!!!) to get the functional and logic ways as well.

Anyway, thank you for your detailed response, and good luck in
mastering Lisp and Haskell.

Best regards,
Elton.
From: chthon
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <cbef3a4f-282f-4077-8311-bced880e6ca5@r33g2000yqn.googlegroups.com>
On Jun 24, 1:23 am, ····@!!!" <·········@gmail.com> wrote:
> > In general, there are reasonably good compilers there, where you won't
> > need so much hints.
>
> Can you mention some of the best compilers?
>
> > My weakest point for Common Lisp so far has been ME - I have to mend
> > myself to the language, rather than trying to use it, the way I've been
> > using past languages
>
> I believe for me that's true, because if the past languages a
> programmer has used were mainly imperative, than a different paradigm,
> I guess, would require them to change the way mind thinks about
> problems. I don't see programming just as a way for creating software
> but as a whole different way of thinking about problems and solutions.
> (Sometimes even in every-day life!).
>
> I remember myself about two years ago when I decided to start learning
> Haskell. I couldn't even understand the Hello World example.
> Finally, after a week I gave up. Since then I have retried 3 other
> times to learn Haskell and only the last time I had that bliss where
> strangely, in a matter of milliseconds, like a hammer behind the head,
> the code changed completely in front of my eyes and there it was like
> truly it is, simple and beautiful and full with meaning like never
> before. :)
> Same thing happened me with Prolog too. I retried about 3 or 4 times
> to learn it. But only after getting to understand Haskell, I suddenly
> could understand Prolog too and strangely seemed so easy.
>
> The reason for this, I believe, was that my mind was accustomed with
> the imperative way of thinking and it needed some time (and perhaps
> luck too!!!) to get the functional and logic as well.
>
> Anyway thank you for your detailed response and god luck in mastering
> Lisp ans Haskell.
>
> Best regards,
> Elton.

A good way to remove the imperative way of thinking is to read "How To
Design Programs" and do the exercises. It uses Scheme and is a really
good book for transitioning from imperative languages to more
functional ones.

Regards,

Jurgen
From: Richard Fateman
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <4A4162A9.2040803@cs.berkeley.edu>
···@!!! wrote:
> I'm not trying to start a flame war about which one is the best. Could
> anybody explain me each of these languages features and strong points ?

Too late.
From: ···@!!!
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <49434d83-ec07-49d5-b2a1-f08e7254e727@w40g2000yqd.googlegroups.com>
On Jun 24, 1:18 am, Richard Fateman <·······@cs.berkeley.edu> wrote:
> ···@!!! wrote:
> > I'm not trying to start a flame war about which one is the best. Could
> > anybody explain me each of these languages features and strong points ?
>
> Too late.

Richard, I'm not trying to start a debate here about which language is
the best; I'm only looking to get informed about the strong and weak
points, similarities, and differences between these languages from a
programmer more experienced than me.
Well, maybe the thread title is a bit confusing, I admit.
From: Thomas M. Hermann
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h1roka$emr$1@news.eternal-september.org>
···@!!! wrote:
> I'm not trying to start a flame war about which one is the best. Could
> anybody explain me each of these languages features and strong points ?


This question reminds me of the Dilbert strip where the secretary asks
Dilbert to tell her everything she needs to know about engineering and
she doesn't care if it takes all day. Well, at least you didn't
cross-post it.

Have you heard of either Google or Wikipedia? Both are generally
considered good places to start when you want to expend as little effort
as possible to get some information.

http://en.wikipedia.org/wiki/Common_lisp
http://en.wikipedia.org/wiki/Haskell_(programming_language)
http://en.wikipedia.org/wiki/Prolog

Hope that helps,

Tom
From: ···@!!!
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <b4bf0e0d-a1c1-4bb9-8f4d-ba033f9233fd@z14g2000yqa.googlegroups.com>
On Jun 24, 1:29 am, "Thomas M. Hermann" <··········@gmail.com> wrote:
> ···@!!! wrote:
> > I'm not trying to start a flame war about which one is the best. Could
> > anybody explain me each of these languages features and strong points ?
>
> This question reminds me of the Dilbert strip where the secretary asks
> Dilbert to tell her everything she needs to know about engineering and
> she doesn't care if it takes all day. Well, at least you didn't
> cross-post it.
>
> Have you heard of either Google or Wikipedia? Both are generally
> considered good places to start when you want to expend as little effort
> as possible to get some information.
>
> http://en.wikipedia.org/wiki/Common_lisp
> http://en.wikipedia.org/wiki/Haskell_(programming_language)
> http://en.wikipedia.org/wiki/Prolog
>
> Hope that helps,
>
> Tom

Thank you, Tom, but I have already tried Google and Wikipedia, and they
have been more than helpful in other ways.
But if you read my last post, I'm looking to get information from
experience, in a more specific way, not in an encyclopedic, general
way. To be more precise, I want to get information about the position
of these languages/tools in the software industry, based on their
characteristics.

Elton.
From: Frank Buss
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <17an92vjc1b52$.138eybii0je6x.dlg@40tude.net>
···@!!! wrote:

> Thank you Tom, but I have already tried Google and Wikipedia and they
> have been more than helpful in some other way.
> But if you read my last post, I'm looking to get information from
> experiences in a more specific way, not from an encyclopedic-like
> general way. I want to get information about the position of these
> languages/tools in the software industry by their characteristics to
> be more correct.

OK, let's start a discussion; there are really too few language flame
wars in this newsgroup :-)

Wikipedia says a bit about applications in Common Lisp:

http://en.wikipedia.org/wiki/Common_lisp#Applications

It doesn't mention that this is the complete list of bigger applications
ever written in Lisp, and even Jak and Daxter was not Common Lisp.

Lisp isn't used very much in the software industry. Most of the time C,
Java, Python, PHP, and other mainstream languages are used; see e.g. the
projects on SourceForge. There are 197 Prolog projects, 492 Lisp
projects, and 120 Haskell projects, but, for example, 42832 Java
projects, 32420 C projects, 28828 PHP projects, and 13412 Python
projects:

http://sourceforge.net/softwaremap/trove_list.php?form_cat=177
http://sourceforge.net/softwaremap/trove_list.php?form_cat=170
http://sourceforge.net/softwaremap/trove_list.php?form_cat=451
http://sourceforge.net/softwaremap/trove_list.php?form_cat=198
http://sourceforge.net/softwaremap/trove_list.php?form_cat=164
http://sourceforge.net/softwaremap/trove_list.php?form_cat=183
http://sourceforge.net/softwaremap/trove_list.php?form_cat=178

I assume this reflects the number of programmers in the software
industry as well, so only about 2% of all programmers are using Lisp or
Haskell.

This is dangerous for big companies, because there are not enough people
who can write and maintain good software in these languages, which is
the reason why e.g. the Yahoo store was rewritten in C and Perl, or why
Naughty Dog is using C++ now:

http://www.gamasutra.com/features/20020710/white_03.htm

Personally, I know of a project for an insurance company that was
written in Prolog for risk analysis. After the programmer left the
company, nobody could understand or maintain the system, so it was
rewritten in Java.

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Rainer Joswig
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <a66683e9-35ea-4e1d-8ae0-2cbc1814b8c5@h18g2000yqj.googlegroups.com>
On 10 Jul., 09:42, Frank Buss <····@frank-buss.de> wrote:
> ···@!!! wrote:
> > Thank you Tom, but I have already tried Google and Wikipedia and they
> > have been more than helpful in some other way.
> > But if you read my last post, I'm looking to get information from
> > experiences in a more specific way, not from an encyclopedic-like
> > general way. I want to get information about the position of these
> > languages/tools in the software industry by their characteristics to
> > be more correct.
>
> Ok, let's start a discussion, there are really too few language flame wars
> in this newsgroup :-)
>
> Wikipedia says a bit about applications in Common Lisp:
>
> http://en.wikipedia.org/wiki/Common_lisp#Applications
>
> It doesn't mention that this is the complete list of bigger applications
> ever written in Lisp and even Jak and Daxter was not Common Lisp.

Nice try, Frank. Fine to get this discussion going for the nth time,
but please try not to follow 'Harrop' style: posting wrong things and
waiting for people to correct them.

You should come to the Lisp meeting organized by Edi Weitz and Arthur
Lemmens in Hamburg: http://weitz.de/eclm2009/

I'm pretty sure several people can tell you a bit about the
applications they are writing or have written. If you were a bit more
familiar with the domains where Common Lisp is mostly used, you would
know that there are thousands of applications written in Common Lisp.
That you don't know them does not mean they don't exist.

Btw.: the DEVELOPMENT ENVIRONMENT for Jak and Daxter was written in
Common Lisp. It included a compiler for a Scheme-like language, which
runs on the PlayStation 2.

...

> This is dangeraous for big companies, because there are not enough people
> who can write and maintain good software in these languages, which is the
> reason why e.g. Yahoo store was rewritten in C and Perl or why Naugthy Dog
> are using C++ now:
>
> http://www.gamasutra.com/features/20020710/white_03.htm

Naughty Dog says this about their decision not to use Lisp for the
Playstation 3:

"In all honesty, the biggest reason we're not using GOAL for next-gen
development is because we're now part of Sony. I can only imagine
Sony's shock when they purchased Naughty Dog a few years back, hoping
to be able to leverage some of our technology across other Sony
studios, and then realized that there was no way anyone else would be
able to use any of our codebase. :)

Sony wants us to be able to share code with other studios, and this
works both ways - both other studios using our code and vice versa.
Add this to the difficulty curve of learning a new language for new
hires, lack of support from external development tools (we had our own
compiler, linker, and debugger, and pretty much had to use Emacs as
our IDE), etc, means that there are clearly a lot of other factors
involved. Note, however, that these issues aren't really technical
problems, they're social ones.

I can definitively say that the investment in GOAL was worth it for
our PS2 titles, despite the initial setup time and maintenance. Our
productivity gains were huge, and were more than worth the time
investment. This time around, however, the circumstances aren't quite
the same. If we were still an independent studio, I'm almost positive
we'd be extending GOAL for the next-generation of development. As it
is, we are looking into alternative approaches (custom preprocessors,
assemblers, linkers, you name it) - but all of these approaches fall
short in many ways of the unified language and environment we had with
GOAL.

That said, if there was a serious effort on the part of the game
development community to develop and standardize a language for game
development, everyone could gain the benefits without suffering the
drawbacks (lack of code-sharing, learning curve, etc). And if there's
enough community support, it would only be a matter of time before
some really high-quality commercial tools came out to work with the
language."


But now Naughty Dog uses Scheme again.

  http://www.naughtydog.com/corporate/press/GDC%202008/AdventuresInDataCompilation.pdf

>
> Personally I know of a project for an insurance company which was written
> in Prolog for risk analysis. After the programmer left the company, nobody
> could understand and maintain the system, so it was rewritten in Java.
>
> --
> Frank Buss, ····@frank-buss.de, http://www.frank-buss.de, http://www.it4-systems.de
From: Frank Buss
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <1c921lqi2cofs.gqzxhm3nkw0w$.dlg@40tude.net>
Rainer Joswig wrote:

> You should come to the Lisp meeting organized by Edi Weitz and Arthur
> Lemmens
> in Hamburg: http://weitz.de/eclm2009/

Thanks, but my focus has shifted a bit. Currently I'm trying to learn
to play the piano in my spare time :-)

http://www.youtube.com/user/frankbuss

and sometimes doing some interesting computing stuff, e.g. magnetic flux
calculation with graphics cards:

http://groups.google.com/group/de.sci.electronics/browse_thread/thread/619614473c54d3df

> I'm pretty sure several people can tell you a bit about the
> applications
> they are writing or have written. If you were a bit more familiar with
> the
> domains Common Lisp is used mostly, you would know that there
> are thousands of applications written in Common Lisp. That you don't
> know them does not mean they don't exist.

You are right, that was a bit exaggerated on my part. But the point is
that there are millions of applications in C-like languages, so I think
it is at least true that there are fewer than 2% Lisp programmers and
programs. Of course, this doesn't mean that C is better than Lisp. It
is better if you want to get a programming job; but if you are both a
good Lisp and a good C programmer, I think it is easier to implement a
given problem in Lisp.

> But now Naughty Dog uses Scheme again.
> 
>   http://www.naughtydog.com/corporate/press/GDC%202008/AdventuresInDataCompilation.pdf

I don't see that they are using it again. They write that they are
planning to use it ("We will build DC in Scheme"), and the presentation
has some C++ examples too, so it looks like it will be a mix of Scheme
and C++, using the Scheme part for data definitions that are accessed
from C++.

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Rainer Joswig
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <d267bbd7-600e-412c-876b-e09acaa87919@n11g2000yqb.googlegroups.com>
On 10 Jul., 11:42, Frank Buss <····@frank-buss.de> wrote:
> Rainer Joswig wrote:
> > You should come to the Lisp meeting organized by Edi Weitz and Arthur
> > Lemmens
> > in Hamburg:http://weitz.de/eclm2009/
>
> Thanks, but my focus has shifted a bit. Currently I'm trying to learn piano
> playing in my spare time :-)

Hmm, so you post here even though your focus has shifted? You are not
interested in meeting Lisp people, your focus has shifted, but you tell
us at most five applications are written in Lisp? Strange.

> > I'm pretty sure several people can tell you a bit about the
> > applications
> > they are writing or have written. If you were a bit more familiar with
> > the
> > domains Common Lisp is used mostly, you would know that there
> > are thousands of applications written in Common Lisp. That you don't
> > know them does not mean they don't exist.
>
> You are right, this was a bit exaggerated by me. But the point is that
> there are millions of applications in C-like languages, so I think at least
> it is true, that there are less than 2% Lisp programmers and programs. Of
> course, this doesn't mean that C is better than Lisp. It is better, if you
> want to get a programming job, but if you are a good Lisp and a good C
> programmer, I think it is easier to implement a given problem in Lisp.

Hmm, what you say is trivially known. Everybody knows that there are
more C programmers than Lisp programmers. What is new? 2% of millions
of applications can still be quite a lot in absolute numbers, and it
does not say much about the importance and value of these applications
to their users.

There are also more VW Rabbits than Mercedes Unimogs. But what does
that really say? That there are more drivers able to drive a VW Rabbit?
Great, let's do some forest work with the VW Rabbit.

>
> > But now Naughty Dog uses Scheme again.
>
> >  http://www.naughtydog.com/corporate/press/GDC%202008/AdventuresInData...
>
> I don't see that they are using it again. They write that they are planning
> to use it ("We will build DC in Scheme") and the presentation has some C++
> examples, too, so looks like it will be a mix of Scheme and C++, using the
> Scheme part for data definitions, which are accessed from C++.

A mix of C++ and Scheme equals zero Scheme? I don't understand your
arithmetic. Is the use of Scheme valid only if all the software is
written in Scheme?

This describes where and how they use Scheme and what they use else:

http://www.naughtydog.com/corporate/press/GDC%202008/UnchartedTechGDC2008.pdf
From: Frank Buss
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <eigq1sh1p659.1k3dlw63bed6s.dlg@40tude.net>
Rainer Joswig wrote:

> Hmm, so you post here though your focus shifted? You are not
> interested
> in meeting Lisp people, your focus shifted, but you tell us at most
> five applications
> are written in Lisp? Strange.

There is nothing strange about it. I'm still interested in Lisp, but more
in piano playing :-)

> A mix of C++ and Scheme equals zero Scheme? I don't understand your
> arithmetic.

I didn't write that they don't use Scheme; I just added the missing
information from your post that they are using it in combination with
C++.

> Is use of Scheme valid only if all software is written in Scheme?

No. I think it is a good idea to mix different languages. Each language
has its pros and cons; e.g. Scheme could be nice for scripting and
representing data, and C is nice for graphics card shaders.

> This describes where and how they use Scheme and what they use else:
> 
> http://www.naughtydog.com/corporate/press/GDC%202008/UnchartedTechGDC2008.pdf

I don't see details of how they used Scheme, but on page 20 they write
that initially they used it for creating data structures, and later for
scripting and other things.

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Rainer Joswig
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <6a630f0e-a9fe-44cc-b083-a0426f50385c@k19g2000yqn.googlegroups.com>
On 10 Jul., 12:20, Frank Buss <····@frank-buss.de> wrote:

> Rainer Joswig wrote:
> > Hmm, so you post here though your focus shifted? You are not
> > interested
> > in meeting Lisp people, your focus shifted, but you tell us at most
> > five applications
> > are written in Lisp? Strange.
>
> There is nothing strange about it. I'm still interested in Lisp, but more
> in piano playing :-)

So you are going to be a professional piano player? Why piano?
There are many more people playing other instruments.
You can also get more music pieces for other instruments!
Isn't the guitar more popular in mainstream music?
Why not use a more modern instrument? If you used a
computer sequencer, you could check your notes before playing,
and that would greatly reduce the possibility of false
notes! I'd say the piano was superseded by more modern
instruments long ago! Btw, I have written a book:

  Synthesizer for Scientists.

You should read it!

> > A mix of C++ and Scheme equals zero Scheme? I don't understand your
> > arithmetic.
>
> I didn't wrote that they don't use Scheme, I just added the missing
> information from your post that they are using it in combination with C++.
>
> > Is use of Scheme valid only if all software is written in Scheme?
>
> No. I think it is a good idea to mix different languages. Each language has
> its pros and cons, e.g. Scheme could be nice for scripting and representing
> data and C is nice for graphic card shaders.
>
> > This describes where and how they use Scheme and what they use else:
>
> >http://www.naughtydog.com/corporate/press/GDC%202008/UnchartedTechGDC...
>
> I don't see details how they used Scheme, but on page 20 they write, that
> initially they used it for creating data structures, later scripting and
> other things.

Dan Liebgold says:

We build upon this basis to create many many things
  Particle definitions
  Animation states
  Gameplay scripts
  Scripted in-game cinematics
  Weapons tuning
  Sound and voice setup
  Overall game sequencing and control

I would say that's quite substantial...
From: Frank Buss
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <1o8wl2uov2l5h$.elvj211e1uj0$.dlg@40tude.net>
Rainer Joswig wrote:

> So you are going to be a professional piano player?

No, it's just a hobby, like Lisp programming :-)

> Why piano?
> There are many more people playing other instruments.
> You can also get more music pieces for other instruments!
> Isn't the guitar more popular in mainstream music?
> Why not use a more modern instrument? If you use a
> computer sequencer you could check your notes before playing
> and that would greatly reduce the possibility of false
> notes! I'd say the piano has been superseded by more modern
> instruments long ago!

Nice analogy! I have an electronic piano, and it has a MIDI interface
for connecting to a PC. There are even sequencers where you can set up
note and velocity quantizing, but the recording loses a lot of
expression. And no synthesizer can produce such a wide range of nuances
as a good old real grand piano, which I play every week at a music
school.

Compared to other instruments, a piano can produce lots of different
styles of music, which is impossible with other instruments, or at
least doesn't sound as good; and e.g. a guitar has far fewer octaves
than a grand piano.

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Rainer Joswig
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <9a9cc279-f125-444c-a829-819fb9bb49c2@q11g2000yqi.googlegroups.com>
On 10 Jul., 12:58, Frank Buss <····@frank-buss.de> wrote:
> Rainer Joswig wrote:
> > So you are going to be a professional piano player?
>
> No, it's just a hobby, like Lisp programming :-)
>
> > Why piano?
> > There are many more people playing other instruments.
> > You can also get more music pieces for other instruments!
> > Isn't the guitar more popular in mainstream music?
> > Why not use a more modern instrument? If you use a
> > computer sequencer you could check your notes before playing
> > and that would greatly reduce the possibility of false
> > notes! I'd say the piano has been superseded by more modern
> > instruments long ago!
>
> Nice analogy! I have an electronic piano and it has a MIDI interface for
> interfacing with a PC. And there are even sequencers, where you can setup
> note and velocity quantizing, but the recording looses lots of expression.
> And no synthesizer can produce such a wide range of nuances like a good old
> real grand piano, which I play every week at a music school.
>
> Compared to other instruments, a piano can produce lots of different styles
> of music, which is impossible with other instruments, or at least doesn't
> sound as good and e.g. a guitar has much less octaves than a grand piano.

The market for grand pianos is tiny compared to the market for
synthesizers. Also, Google returns many more hits for "synthesizer"
than for "grand piano". You can also see that Google Trends shows fewer
and fewer searches for "grand piano". Also, the best piano players
switched to more modern instruments long ago. The only hope I would
have is for a more modern version that is not bound by the historical
baggage of old music, old-fashioned piano design, and the limited range
of sounds it can produce.


>
> --
> Frank Buss, ····@frank-buss.de, http://www.frank-buss.de, http://www.it4-systems.de
From: Duane Rettig
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <3aae5479-01e4-414a-991a-10b348df66a8@c36g2000yqn.googlegroups.com>
On Jul 10, 4:28 am, Rainer Joswig <······@lisp.de> wrote:
> On 10 Jul., 12:58, Frank Buss <····@frank-buss.de> wrote:
>
> > Rainer Joswig wrote:
> > > So you are going to be a professional piano player?
>
> > No, it's just a hobby, like Lisp programming :-)
>
> > > Why piano?
> > > There are many more people playing other instruments.
> > > You can also get more music pieces for other instruments!
> > > Isn't the guitar more popular in mainstream music?
> > > Why not use a more modern instrument? If you use a
> > > computer sequencer you could check your notes before playing
> > > and that would greatly reduce the possibility of false
> > > notes! I'd say the piano has been superseded by more modern
> > > instruments long ago!
>
> > Nice analogy! I have an electronic piano and it has a MIDI interface for
> > interfacing with a PC. And there are even sequencers, where you can setup
> > note and velocity quantizing, but the recording looses lots of expression.
> > And no synthesizer can produce such a wide range of nuances like a good old
> > real grand piano, which I play every week at a music school.
>
> > Compared to other instruments, a piano can produce lots of different styles
> > of music, which is impossible with other instruments, or at least doesn't
> > sound as good and e.g. a guitar has much less octaves than a grand piano.
>
> The market for grand pianos is tiny compared to the market for
> synthesizers.
> Also Google returns many more hits for "synthesizer" than for "grand
> piano".
> You can also see that Google trends shows less and less searches
> for "grand piano". Also the best piano players have switched to more
> modern instruments long ago. The only hope I would have is with a
> more modern version that is not bound by the historical baggage
> of old music, old fashioned piano design, and the limited amount of
> sounds
> it can produce.

Don't forget non-portability.  And have you ever played on a good
grand piano after playing on a nice, cheap, synthesized piano?  The
keys are hard to press, and that you have to press the keys at all is
just ludicrous. Give me a good, cheap player piano any day.

:-)

Duane
From: Nicolas Neuss
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <87eisopxhx.fsf@ma-patru.mathematik.uni-karlsruhe.de>
Rainer Joswig <······@lisp.de> writes:

> The market for grand pianos is tiny compared to the market for
> synthesizers.  Also Google returns many more hits for "synthesizer" than
> for "grand piano".  You can also see that Google trends shows less and
> less searches for "grand piano". Also the best piano players have
> switched to more modern instruments long ago. The only hope I would have
> is with a more modern version that is not bound by the historical baggage
> of old music, old fashioned piano design, and the limited amount of
> sounds it can produce.

:-)  But you should still add a link to your "Flying Music Consultancy"...

Nicolas
From: Michael Livshin
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <87eisoohiz.fsf@colinux.kakpryg.net.cmm>
Nicolas Neuss <········@math.uni-karlsruhe.de> writes:

> :-)  But you should still add a link to your "Flying Music Consultancy"...

nah, we get it, no need to overdo things. :)

cheers,
--m
From: Vassil Nikolov
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <snzzlbdarr4.fsf@luna.vassil.nikolov.name>
On Fri, 10 Jul 2009 09:42:46 +0200, Frank Buss <··@frank-buss.de> said:
> ...
> Personally I know of a project for an insurance company which was written
> in Prolog for risk analysis. After the programmer left the company, nobody
> could understand and maintain the system, so it was rewritten in Java.

    He had bought a large map representing the sea
    Without the least vestige of land
    And the crew were all happy as they found it to be
    A map they could all understand.

  (Fit the Second, I think, quoting from memory.)

  ---Vassil,
  who couldn't resist...


-- 
"Even when the muse is posting on Usenet, Alexander Sergeevich?"
From: Slobodan Blazeski
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <46c8c8f1-5cae-4d4b-943b-ac44cff376a8@b14g2000yqd.googlegroups.com>
On Jun 24, 12:23 am, ····@!!!" <·········@gmail.com> wrote:
> I'm not trying to start a flame war about which one is the best. Could
> anybody explain me each of these languages features and strong points ?

When the student is ready, the answer will arrive.

Slobodan Blazeski
From: Lars Rune Nøstdal
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <d43e5ca0-f5a5-4a14-b090-569d39c15b8d@r3g2000vbp.googlegroups.com>
On Jun 24, 12:23 am, ····@!!!" <·········@gmail.com> wrote:
> I'm not trying to start a flame war about which one is the best. Could
> anybody explain me each of these languages features and strong points ?

http://groups.google.com/group/comp.lang.lisp/msg/a5d5c10f811b81e8
..?
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <57ednWbDQIJd4dzXnZ2dnUVZ8nSdnZ2d@brightview.co.uk>
···@!!! wrote:
> I'm not trying to start a flame war about which one is the best. Could
> anybody explain me each of these languages features and strong points ?

Common Lisp has strict evaluation and dynamic typing, with macros (to
rewrite code at compile time) and run-time code evaluation. It was a
pioneering functional language decades ago but has long since been
superseded. It has no particular strengths today and, hence, is very rare.
Its main weaknesses are baggage, poor performance, bad implementations and
a really backward community. The only notable development around Lisp for a
decade is the new programming language Clojure, which runs on the JVM. In
particular, Clojure addressed many of Lisp's major problems by dropping the
baggage, building upon a performant VM with a concurrent GC and stealing
all of the intelligent members of the Lisp community.

Haskell has non-strict evaluation and static typing. It is a research
language used to implement many radical ideas that are unlikely to be of
any immediate use. Its main strength is that it abstracts the machine away
entirely, allowing some solutions to be represented very concisely. Its
main weakness is that it abstracts the machine away entirely, rendering
performance wildly unpredictable (will my elegant program terminate in my
lifetime? who knows...).
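
For what it's worth, both sides of that trade-off fit in a few lines of
standard Haskell (my own toy sketch, nothing from any particular program):

```haskell
-- Non-strict evaluation: 'take' demands only five elements,
-- so the infinite list below is never built in full.
nats :: [Integer]
nats = [0 ..]

main :: IO ()
main = do
  print (take 5 nats)   -- prints [0,1,2,3,4]
  -- The unpredictability cuts the other way, too: the lazy foldl
  -- accumulates unevaluated thunks, so summing a large list this
  -- way can exhaust memory where the strict foldl' (from
  -- Data.List) runs in constant space.
  print (foldl (+) 0 [1 .. 1000 :: Integer])   -- prints 500500
```

The same program can be elegant and concise yet hide an asymptotic
space cost that is invisible in the source, which is exactly the
"will it terminate in my lifetime" complaint.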

I know little about Prolog except that it was designed specifically for
logic programming (i.e. solving problems by specifying relations and
searching for solutions) and that some of our customers use it.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: milanj
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <016a9693-7e10-41b8-bea9-b55f69f4a25d@j9g2000vbp.googlegroups.com>
On Jun 24, 4:18 am, Jon Harrop <····@ffconsultancy.com> wrote:
> stealing all of the intelligent members of the Lisp community.

I have not noticed anything like this; all the smart lispers whose blogs
I read are still active. In fact they are so smart that a bunch of them
don't post on c.l.l. ... Also, Clojure didn't steal you from us, and that
is a real pity.
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <79mdnZ5NBYCH89PXnZ2dnUVZ8kpi4p2d@brightview.co.uk>
milanj wrote:
> On Jun 24, 4:18 am, Jon Harrop <····@ffconsultancy.com> wrote:
>> stealing all of the intelligent members of the Lisp community.
> 
> I have not noticed anything like this; all the smart lispers whose blogs
> I read are still active. In fact they are so smart that a bunch of them
> don't post on c.l.l. ... Also, Clojure didn't steal you from us, and that
> is a real pity.

Actually Clojure did steal me: they are requesting that I write books on
Clojure. Obviously I'm not going to waste my time writing books for CL...

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Kenneth Tilton
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <4a4f8275$0$5929$607ed4bc@cv.net>
Jon Harrop wrote:
> milanj wrote:
>> On Jun 24, 4:18 am, Jon Harrop <····@ffconsultancy.com> wrote:
>>> stealing all of the intelligent members of the Lisp community.
>> I have not noticed anything like this; all the smart lispers whose blogs
>> I read are still active. In fact they are so smart that a bunch of them
>> don't post on c.l.l. ... Also, Clojure didn't steal you from us, and that
>> is a real pity.
> 
> Actually Clojure did steal me: they are requesting that I write books on
> Clojure. 

Reminds me of when my older brother would say he heard Mom calling me so 
I would go away and leave him and his friends alone.

Don't fall for it, Jon!

kzo
From: ACL
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <64b2ba5e-b2ce-46d6-b5cd-180c7f366e16@v15g2000prn.googlegroups.com>
On Jul 3, 5:55 pm, JH wrote:
> milanj wrote:
> > On Jun 24, 4:18 am, JH wrote:
> >> stealing all of the intelligent members of the Lisp community.
>
> > I have not noticed anything like this; all the smart lispers whose blogs
> > I read are still active. In fact they are so smart that a bunch of them
> > don't post on c.l.l. ... Also, Clojure didn't steal you from us, and that
> > is a real pity.
>
> Actually Clojure did steal me: they are requesting that I write books on
> Clojure. Obviously I'm not going to waste my time writing books for CL...
>
> --

Agreed, there's no money in writing books for CL when there are already
so many good ones that are in the public domain or otherwise already
written. I mean, anyone who reads this group or reddit regularly knows
that you have a real hard-on for CL but just couldn't cut it in our book
market.

But JH, you have to consider the consequences of this 'Clojure Book'.

1.) I will still /not/ buy it, because it is written by you.

2.) You will (potentially) be forced to make troll and spam posts that
are actually on topic.

3.) You might have to concede that there are good parts of Common
Lisp. (Stuff you cannot do easily in OCaml or F#.)
From: Larry Coleman
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <026e1e77-f9c1-4368-9e9d-76ec41b260d0@18g2000yqa.googlegroups.com>
On Jul 7, 7:11 am, ACL <··················@gmail.com> wrote:
> On Jul 3, 5:55 pm, JH wrote:
>
> > milanj wrote:
> > > On Jun 24, 4:18 am, JH wrote:
> > >> stealing all of the intelligent members of the Lisp community.
>
> > > I have not noticed anything like this; all the smart lispers whose blogs
> > > I read are still active. In fact they are so smart that a bunch of them
> > > don't post on c.l.l. ... Also, Clojure didn't steal you from us, and that
> > > is a real pity.
>
> > Actually Clojure did steal me: they are requesting that I write books on
> > Clojure. Obviously I'm not going to waste my time writing books for CL...
>
> > --
>
> Agreed, there's no money in writing books for CL when there are already
> so many good ones that are in the public domain or otherwise already
> written. I mean, anyone who reads this group or reddit regularly knows
> that you have a real hard-on for CL but just couldn't cut it in our book
> market.
>
I think he also likes Haskell more than he is willing to admit.
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <TOednVp6wq6Jj8jXnZ2dnUVZ8iMAAAAA@brightview.co.uk>
Larry Coleman wrote:
> I think he also likes Haskell more than he is willing to admit.

Sure, they all have their merits but none of them have what I want right
now.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Larry Coleman
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <7226cefc-7a67-4efb-ae2b-d6c20a69b4b3@t21g2000yqi.googlegroups.com>
On Jul 8, 6:48 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> Larry Coleman wrote:
> > I think he also likes Haskell more than he is willing to admit.
>
> Sure, they all have their merits but none of them have what I want right
> now.
>
> --
> Dr Jon D Harrop, Flying Frog Consultancy Ltd.http://www.ffconsultancy.com/?u

This raises a question of the kind that I generally don't ask because
I know in advance that I won't like the answer:

So what do you want from a programming language?
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <BvidnfDtPbRw8MvXnZ2dnUVZ8u-dnZ2d@brightview.co.uk>
Larry Coleman wrote:
> So what do you want from a programming language?

Excellent question. I've never really tried to enumerate it before...

Performance: the predictably-excellent performance of OCaml but with
monomorphization during JIT to remove the overhead of polymorphism and with
F#-style inlined HOFs to facilitate efficient abstraction. OCaml's x64
codegen is awesome but its x86 codegen leaves a lot of room for
improvement. However, MLton, SMLNJ, GHC, SBCL, Bigloo etc. don't seem to be
any better (except perhaps for the SSE support just added to SBCL but I
haven't tried it yet).

Data representation: should be as close to C as possible for easy interop.
For example, tuples should be C structs (at the expense of polymorphic
recursion).

Interoperability: use JIT compilation as Lisp and .NET do, with FFI code
defined entirely in the HLL.

Syntax: Indentation-sensitive syntax as in Python, Haskell and F# was a huge
design mistake because cut'n'paste from a browser can break code
semantically by altering the indentation, and it prohibits autoindentation
leaving you to reindent huge swaths of code by hand. Lisps got it wrong by
oversimplifying the syntax, making math particularly cumbersome. OCaml got
it wrong mainly by being too complex (e.g. a dozen different kinds of
brackets instead of consistent brackets and identifiers to distinguish
between lists, arrays, streams etc.) and by using unconventional operators
(e.g. ** for power and ^ for string concatenation).

Macros: can be nice but they're abused far more often than they're used.
Missing language features should be fixed at source in the compiler instead
of being retrofitted in an ad-hoc way using macros. OCaml and Lisp are both
bad for this but it is still better than being stuck with limitations as in
SML and Haskell.

Parallelism: wait-free work-stealing deques like Cilk and Microsoft's TPL.

Concurrent GC: OCaml and Python could not be worse in this respect. Haskell
(and SBCL?) are slightly better with parallel but non-concurrent
collectors. The JVM and CLR (especially in .NET 3.5 SP1) are the gold
standard, of course.

Libraries: Nothing on Linux comes close to Windows Presentation Foundation
for robustly-deployable hardware accelerated UIs but open source
computational libraries on Linux put even the most expensive commercial
offerings on Windows to shame. I'd like the best of both worlds in the
standard library and running reliably on *lots* of machines.

Interface: I want a Mathematica notebook interface with typesetting and
integrated visualization instead of plain text.

Type system: OCaml's static type system is invaluable but OCaml lacks
generic typesetting, type-safe serialization and type classes. Optional
dynamic typing can also be valuable, e.g. we're now using it in our F#
products to make the REPL easier to use:

http://fsharpnews.blogspot.com/2009/06/f-for-visualization-0400-released.html

FWIW, I'm going to try to implement as much of this as possible in HLVM:

http://hlvm.forge.ocamlcore.org/

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: fft1976
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <7218d55d-7ae0-4a2e-b83c-35fbdbcc6330@h8g2000yqm.googlegroups.com>
On Jul 9, 4:32 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> Larry Coleman wrote:
> > So what do you want from a programming language?
>
> Excellent question. I've never really tried to enumerate it before...

(snip)

This "being functional" is an unhealthy obsession. 3/4 of the time
imperative idioms are more convenient, and 1/4 of the time functional
ones are more convenient.

But because most popular languages are imperative, people are whining
about those 1/4 cases and some disturbed individuals even think
languages should be pure-functional.

What I want is a safe (or has an option to be safe), fast mostly
imperative language that doesn't suck.

C++ (if you are very good and very careful), Java and OCaml are
decent, but not quite what I want. Maybe D and Ada, although I don't
know about those.
From: Frank Buss
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <m5faj22j9f95$.vpwkne72gilz.dlg@40tude.net>
fft1976 wrote:

> What I want is a safe (or has an option to be safe), fast mostly
> imperative language that doesn't suck.
> 
> C++ (if you are very good and very careful), Java and OCaml are
> decent, but not quite what I want. Maybe D and Ada, although I don't
> know about those.

You could try Common Lisp: It is nice for imperative code and doesn't suck
as much as C++ for functional code :-)

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: fft1976
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <f6a16657-4f95-4746-bd16-4659d1f3161c@r33g2000yqn.googlegroups.com>
On Jul 10, 1:23 am, Frank Buss <····@frank-buss.de> wrote:
> fft1976 wrote:
> > What I want is a safe (or has an option to be safe), fast mostly
> > imperative language that doesn't suck.
>
> > C++ (if you are very good and very careful), Java and OCaml are
> > decent, but not quite what I want. Maybe D and Ada, although I don't
> > know about those.
>
> You could try Common Lisp: It is nice for imperative code and doesn't suck
> as much as C++ for functional code :-)
>

It's probably inconceivable to you, but I, like 99.9% of other
programmers, respectfully disagree.
From: Alessio Stalla
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <2752caa0-ae4d-491f-9959-5e96bf72e7bc@t13g2000yqt.googlegroups.com>
On Jul 10, 10:34 am, fft1976 <·······@gmail.com> wrote:
> On Jul 10, 1:23 am, Frank Buss <····@frank-buss.de> wrote:
>
> > fft1976 wrote:
> > > What I want is a safe (or has an option to be safe), fast mostly
> > > imperative language that doesn't suck.
>
> > > C++ (if you are very good and very careful), Java and OCaml are
> > > decent, but not quite what I want. Maybe D and Ada, although I don't
> > > know about those.
>
> > You could try Common Lisp: It is nice for imperative code and doesn't suck
> > as much as C++ for functional code :-)
>
> It's probably inconceivable to you, but I, like 99.9% of other
> programmers, respectfully disagree.

I hope you don't disagree about C++ sucking at functional
programming... so you must disagree about Lisp being nice at
imperative code. While for heavily imperative algorithms, C-like
languages are likely to be more compact than Lisp (whether this
implies "more readable" depends on the reader), I find in my
experience with Java, and C++ shouldn't be much different, that in
practice heavily imperative code tends to be scarcer than what people
commonly think. Most of my Java methods are less than 20 lines long,
many are shorter. And of those lines, half or so at least are calls to
other methods, not loops or assignments or increments etc.

Maybe your experience is different. But for me, Lisp is more than fine
for occasional imperative code. Of course, if you try to write C in
Lisp it will come out pretty horrible, just like if you try to write
Lisp in Java (sigh...).

Cheers,
Alessio
From: George Neuner
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <lete55he1430qp7l94h45lue9mlkm87lc2@4ax.com>
On Fri, 10 Jul 2009 02:19:45 -0700 (PDT), Alessio Stalla
<·············@gmail.com> wrote:

>On Jul 10, 10:34 am, fft1976 <·······@gmail.com> wrote:
>> On Jul 10, 1:23 am, Frank Buss <····@frank-buss.de> wrote:
>>
>> > fft1976 wrote:
>> > > What I want is a safe (or has an option to be safe), fast mostly
>> > > imperative language that doesn't suck.
>>
>> > > C++ (if you are very good and very careful), Java and OCaml are
>> > > decent, but not quite what I want. Maybe D and Ada, although I don't
>> > > know about those.
>>
>> > You could try Common Lisp: It is nice for imperative code and doesn't suck
>> > as much as C++ for functional code :-)
>>
>> It's probably inconceivable to you, but I, like 99.9% of other
>> programmers, respectfully disagree.
>
>I hope you don't disagree about C++ sucking at functional
>programming... so you must disagree about Lisp being nice at
>imperative code. While for heavily imperative algorithms, C-like
>languages are likely to be more compact than Lisp (whether this
>implies "more readable" depends on the reader), I find in my
>experience with Java, and C++ shouldn't be much different, that in
>practice heavily imperative code tends to be scarcer than what people
>commonly think. Most of my Java methods are less than 20 lines long,
>many are shorter. And of those lines, half or so at least are calls to
>other methods, not loops or assignments or increments etc.

    Lots of little functions != functional programming

C++ and Java make functional programming possible, but neither makes
it particularly easy.  It might get better in C++0X with the addition
of standard closures, but I doubt it ... the proposed syntax is too
messy and too similar to what Boost does now (which is horrible if
you've ever used a real FPL).  Java's closures are no better ... far
too messy to use anonymously.

C++ at least has a simple function object syntax which doesn't
distract the programmer when using higher level functions - Java can't
even get that right.  But the lack of multiple runtime dispatch in
both languages forces unnatural, imperative "pattern" coding (C++ has
multiple compile-time dispatch, but it's not the same).


>Maybe your experience is different. But for me, Lisp is more than fine
>for occasional imperative code. Of course, if you try to write C in
>Lisp it will come out pretty horrible, just like if you try to write
>Lisp in Java (sigh...).

A lot of Lisp programmers write what is essentially imperative code.
The only real difference is that they typically use binding more than
assignment.  Personally I hate LOOP ... I learned it so I can read
other people's code, but I never use it myself.

George
From: George Neuner
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <knqe55hess751p7vg2c2nqf95bf6rcehpb@4ax.com>
On Fri, 10 Jul 2009 01:34:49 -0700 (PDT), fft1976 <·······@gmail.com>
wrote:

>On Jul 10, 1:23 am, Frank Buss <····@frank-buss.de> wrote:
>
>> You could try Common Lisp: It is nice for imperative code and doesn't suck
>> as much as C++ for functional code :-)
>>
>
>It's probably inconceivable to you, but I, like 99.9% of other
>programmers, respectfully disagree.

You mean the 99.9% who have never tried Lisp?

Common Lisp (_not_ Scheme) offers assignment and imperative control
constructs - a variety of standard loops, case and typecase (like C
switch), labeled gotos, lexical block structures with named exit, etc.
Using named blocks, labels and gotos, you can define your own control
structures if you don't like the standard ones.
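
A hedged sketch of those constructs in portable Common Lisp (my own toy
examples, not taken from any real code generator):

```lisp
(defun classify (x)
  (typecase x               ; type dispatch, much like a C switch
    (integer :integer)
    (string  :string)
    (t       :other)))

(defun first-even (list)
  (block found                        ; lexical block with named exit
    (dolist (x list)
      (when (evenp x)
        (return-from found x)))))     ; early return from the block

(defun count-down (n)
  "Plain assignment plus labeled gotos via TAGBODY/GO."
  (let ((i n) (acc '()))
    (tagbody
     again
       (when (plusp i)
         (push i acc)                 ; destructive update
         (decf i)                     ; assignment
         (go again)))                 ; goto the AGAIN label
    (nreverse acc)))
```

For example, (count-down 3) evaluates to (3 2 1), and TAGBODY/GO is
exactly the raw material for rolling your own control structures.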

Although most programmers prefer to write in a more functional style,
Lisp's imperative constructs are used extensively for DSL code
generators and foreign code translators (C->Lisp, Pascal->Lisp,
SQL->Lisp, etc.).

That said, Lisp isn't for everyone ... if you don't like it - fine.
Just don't knock it without trying it.

George
From: fft1976
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <a35de7dd-6d9c-4111-b893-c9da735148cd@k19g2000yqn.googlegroups.com>
On Jul 10, 10:04 am, George Neuner <········@comcast.net> wrote:

> Common Lisp (_not_ Scheme) offers assignment and imperative control
> constructs

Scheme has SET!, VECTOR-SET!, SET-CDR! (in <=R5RS), etc.

Why would someone pretend to know something when they don't?
From: George Neuner
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <8clf551qli0li98ke6hlu7ada64jbfh6dd@4ax.com>
On Fri, 10 Jul 2009 16:09:37 -0700 (PDT), fft1976 <·······@gmail.com>
wrote:

>On Jul 10, 10:04 am, George Neuner <········@comcast.net> wrote:
>
>> Common Lisp (_not_ Scheme) offers assignment and imperative control
>> constructs
>
>Scheme has SET!, VECTOR-SET!, SET-CDR! (in <=R5RS), etc.

Scheme does not have imperative control structures.


>Why would someone pretend to know something when they don't?

You quoted a complete thought and took issue with only half of it. Why
don't you take the time to read and understand the thought before you
jump blindly into a discussion?

George
From: fft1976
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <77c997b0-6d1c-4734-99b5-aa3d50fd8422@c36g2000yqn.googlegroups.com>
On Jul 10, 5:03 pm, George Neuner <········@comcast.net> wrote:
> On Fri, 10 Jul 2009 16:09:37 -0700 (PDT), fft1976 <·······@gmail.com>
> wrote:
>
> >On Jul 10, 10:04 am, George Neuner <········@comcast.net> wrote:
>
> >> Common Lisp (_not_ Scheme) offers assignment and imperative control
> >> constructs
>
> >Scheme has SET!, VECTOR-SET!, SET-CDR! (in <=R5RS), etc.
>
> Scheme does not have imperative control structures.

You take back what you said about assignment, then?

By the way, BEGIN is strictly an imperative control structure. IF and
others subsume imperative control structures, if their arguments are
imperative.
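
For concreteness, a minimal R5RS-style sketch of that point (a toy of my
own devising): BEGIN exists purely to sequence expressions evaluated for
their side effects.

```scheme
;; SET-CAR! supplies the side effect; BEGIN sequences it before
;; the final expression, whose value is returned.
(define (tally! counter)
  (begin
    (set-car! counter (+ 1 (car counter)))   ; mutate first
    (car counter)))                          ; then read the result

;; (define c (list 0))
;; (tally! c)  ; => 1, and c is now (1)
```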

> >Why would someone pretend to know something when they don't?
>
> You quoted a complete thought and took issue with only half of it.

Huh? Most people don't like selective quoting, and you are asking for
it?! I hope Schemers will explain the error of your ways to you. My
patience ran out here.
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <7ImdnRnw0d6EFcrXnZ2dnUVZ8uidnZ2d@brightview.co.uk>
George Neuner wrote:
> On Fri, 10 Jul 2009 01:34:49 -0700 (PDT), fft1976 <·······@gmail.com>
> wrote:
>>On Jul 10, 1:23 am, Frank Buss <····@frank-buss.de> wrote:
>>> You could try Common Lisp: It is nice for imperative code and doesn't
>>> suck as much as C++ for functional code :-)
>>
>>It's probably inconceivable to you, but I, like 99.9% of other
>>programmers, respectfully disagree.
> 
> You mean the 99.9% who have never tried Lisp?

You say that as if Lisp had not been taught to millions of computer science
students, 99.9% of whom really do choose not to use Lisp.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: ···@!!!
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <b8363a5d-7f14-401e-af3b-2233b8f6968c@h31g2000yqd.googlegroups.com>
You guys really are amazing!!! You're so good at starting flame
wars!!! I don't understand why you have such useless debates about
which programming language is the best?!?! Haven't you realised yet
that you'll never come to a mutual opinion, because you have different
preferences and jobs that require different tools!?!?! You make
robotic arguments about these languages' features, sometimes without
even having tried programming in these languages, and you sound like
you're reading a definition from an encyclopedia with a communist-like
voice protecting his beloved mother language!!!
From: Frank Buss
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <sdb7trzm02fu.1h94ax93hpxqs$.dlg@40tude.net>
···@!!! wrote:

> You guys really are amazing!!! You're so good at starting flame
> wars!!! I don't understand why you have such useless debates about
> which programming language is the best?!?! Haven't you realised yet
> that you'll never come to a mutual opinion, because you have different
> preferences and jobs that require different tools!?!?! You make
> robotic arguments about these languages' features, sometimes without
> even having tried programming in these languages, and you sound like
> you're reading a definition from an encyclopedia with a communist-like
> voice protecting his beloved mother language!!!

Hello ···@!!!,

I know that many people discussing in this thread have much experience in
multiple languages and in the languages they are talking about. Maybe you
are talking to yourself regarding lack of experience?

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: ···@!!!
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <41bcdbe4-28a1-445a-9688-35757530c766@y17g2000yqn.googlegroups.com>
> Hello ···@!!!,
>
> I know that many people discussing in this thread have much experience in
> multiple languages and in the languages they are talking about. Maybe you
> are talking to yourself regarding lack of experience?


See???
This is what I was talking about.
These arguments are useless and a waste of time.
I am very thankful to you, Frank, that you troubled yourself to post
here and answer the questions of an 'inexperienced' guy like me, but if
you don't have any other piece of wisdom to share with us, I suggest
you not waste your time and tire your fingers writing here.
But I suppose you don't take suggestions from guys like me, do you?
From: Ertugrul =?UTF-8?B?U8O2eWxlbWV6?=
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <20090710213239.3c4e57b9@tritium.xx>
····@!!!" <·········@gmail.com> wrote:

> > I know that many people discussing in this thread have much
> > experience in multiple languages and in the languages they are
> > talking about. Maybe you are talking to yourself regarding lack of
> > experience?
>
> See???
> This is what I was talking about.
> These arguments are useless and a waste of time.
> I am very thankful to you, Frank, that you troubled yourself to post
> here and answer the questions of an 'inexperienced' guy like me, but if
> you don't have any other piece of wisdom to share with us, I suggest
> you not waste your time and tire your fingers writing here.  But I
> suppose you don't take suggestions from guys like me, do you?

http://xkcd.com/114/

=)


Greets,
Ertugrul.


-- 
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/
From: ···@!!!
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <5d2fd6da-2fc5-4d23-a671-1547d9bea3cc@x3g2000yqa.googlegroups.com>
> http://xkcd.com/114/
>
> =)
>
> Greets,
> Ertugrul.

Completely appropriate to the case here.
Nice one :D
From: Rob Warnock
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <Du-dnaraXsjtVcrXnZ2dnUVZ_uVi4p2d@speakeasy.net>
Ertugrul Söylemez  <··@ertes.de> wrote:
+---------------
| ····@!!!" <·········@gmail.com> wrote:
| > if you don't have any other piece of wisdom to share with us, I suggest
| > you not waste your time and tire your fingers writing here.  But I
| > suppose you don't take suggestions from guys like me, do you?
| 
| http://xkcd.com/114/
+---------------

Or more generally, <http://xkcd.com/386/>.


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <NLCdnRvMdejwTcrXnZ2dnUVZ8qdi4p2d@brightview.co.uk>
···@!!! wrote:
> You guys really are amazing!!! You're so good at starting flame
> wars!!! I don't understand why you have such useless debates about
> which programming language is the best?!?! Haven't you realised yet
> that you'll never come to a mutual opinion, because you have different
> preferences and jobs that require different tools!?!?! You make
> robotic arguments about these languages' features, sometimes without
> even having tried programming in these languages, and you sound like
> you're reading a definition from an encyclopedia with a communist-like
> voice protecting his beloved mother language!!!

I am implementing a VM and language. I appreciate peer review of my ideas.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: George Neuner
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <dfkf5519oe1t19us6tne6142p28bbrq313@4ax.com>
On Fri, 10 Jul 2009 20:36:55 +0100, Jon Harrop <···@ffconsultancy.com>
wrote:

>George Neuner wrote:
>> On Fri, 10 Jul 2009 01:34:49 -0700 (PDT), fft1976 <·······@gmail.com>
>> wrote:
>>>On Jul 10, 1:23 am, Frank Buss <····@frank-buss.de> wrote:
>>>> You could try Common Lisp: It is nice for imperative code and doesn't
>>>> suck as much as C++ for functional code :-)
>>>
>>>It's probably inconceivable to you, but I, like 99.9% of other
>>>programmers, respectfully disagree.
>> 
>> You mean the 99.9% who have never tried Lisp?
>
>You say that as if Lisp had not been taught to millions of computer science
>students, 99.9% of whom really do choose not to use Lisp.

I'd wager there haven't been even 1 million CS students total in the
60+ years since electronic computers were invented.  And Lisp wasn't
TAUGHT to the vast majority of them ... most received a simplistic 3
lecture introduction to the language, wrote a handful of recursive
functions and maybe did a semester project.  That hardly qualifies as
"being taught" the language.  

Besides which, for the last 30 years, most CS programs based on Lisp
have actually used Scheme rather than Lisp.

George
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <mKWdnYP7urYpf8rXnZ2dnUVZ8vqdnZ2d@brightview.co.uk>
George Neuner wrote:
> On Fri, 10 Jul 2009 20:36:55 +0100, Jon Harrop <···@ffconsultancy.com>
> wrote:
>>George Neuner wrote:
>>> On Fri, 10 Jul 2009 01:34:49 -0700 (PDT), fft1976 <·······@gmail.com>
>>> wrote:
>>>>On Jul 10, 1:23 am, Frank Buss <····@frank-buss.de> wrote:
>>>>> You could try Common Lisp: It is nice for imperative code and doesn't
>>>>> suck as much as C++ for functional code :-)
>>>>
>>>>It's probably inconceivable to you, but I, like 99.9% of other
>>>>programmers, respectfully disagree.
>>> 
>>> You mean the 99.9% who have never tried Lisp?
>>
>>You say that as if Lisp had not been taught to millions of computer
>>science students, 99.9% of whom really do choose not to use Lisp.
> 
> I'd wager there haven't been even 1 million CS students total in the
> 60+ years since electronic computers were invented.

According to this:

http://www.cra.org/taulbee/CRATaulbeeReport-StudentEnrollment-07-08.pdf

The US enrolls ~300 CS students at each of ~200 institutions every year.
So the US alone teaches roughly 600,000 CS students every decade, and the
US is only 5% of the world's population.

According to this:

http://www.forbes.com/2007/08/05/india-higher-education-oped-cx_prg_0813education.html

China produces more than twice as many CS PhDs as the US. So the US and
China together have probably produced over a million CS graduates in the
last decade alone.

> And Lisp wasn't 
> TAUGHT to the vast majority of them ... most received a simplistic 3
> lecture introduction to the language, wrote a handful of recursive
> functions and maybe did a semester project.  That hardly qualifies as
> "being taught" the language.
> 
> Besides which, for the last 30 years, most CS programs based on Lisp
> have actually used Scheme rather than Lisp.

Excuses excuses. :-)

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Vassil Nikolov
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <snz3a92c3o5.fsf@luna.vassil.nikolov.name>
On Sat, 11 Jul 2009 03:03:30 +0100, Jon Harrop <···@ffconsultancy.com> said:

> George Neuner wrote:
>> On Fri, 10 Jul 2009 20:36:55 +0100, Jon Harrop <···@ffconsultancy.com>
>> wrote:
>>> ...
>>> You say that as if Lisp had not been taught to millions of computer
>>> science students, 99.9% of whom really do choose not to use Lisp.
>> 
>> I'd wager there haven't been even 1 million CS students total in the
>> 60+ years since electronic computers were invented.

> According to this:
> http://www.cra.org/taulbee/CRATaulbeeReport-StudentEnrollment-07-08.pdf
> The US enrolls ~300 CS students at each of ~200 institutions every year.
> So the US alone teaches roughly 600,000 CS students every decade, and the
> US is only 5% of the world's population.

> According to this:
> http://www.forbes.com/2007/08/05/india-higher-education-oped-cx_prg_0813education.html
> China produces more than twice as many CS PhD as the US. So the US and China
> alone have probably produced over a million CS graduates in the last decade
> alone.

  It is rather more difficult to produce even merely decent
  statistics.  For one, not all computer science graduates write
  programs after graduation, in any programming languages; and then
  many of those who do write programs are not computer science
  graduates (so less likely, if at all, to have been taught Lisp by
  the above argument).  Then even the numbers above need adjustment,
  because enrolling 3e2 (students/institution)/year in 2e2
  institutions does not make 6e5 _distinct_ students each decade
  (probably somewhere between 1e5 and 2e5).  Note that I do not claim
  to know what the numbers are, merely that some more information must
  be supplied for a good argument.
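
  The adjustment can be made concrete with a line or two of arithmetic
  (a Python sketch; the four-year programme length is an assumed figure,
  not one from the survey):

```python
# Back-of-the-envelope check of the adjustment above.  The four-year
# programme length is an assumption, not a figure from the survey.
students_per_institution = 300      # ~3e2, as read from the Taulbee figure
institutions = 200                  # ~2e2
years_in_programme = 4              # assumed degree length
decade = 10

enrolled_at_any_time = students_per_institution * institutions       # 60,000
new_students_per_year = enrolled_at_any_time // years_in_programme   # 15,000
distinct_per_decade = new_students_per_year * decade                 # 150,000
print(distinct_per_decade)   # 150000, i.e. ~1.5e5 rather than 6e5
```

  Under that assumption the total lands inside the 1e5 to 2e5 range
  suggested above.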

  By the way, the choice of programming language is in many cases not
  the choice of the individual programmer, and those who make or
  influence such decisions are not necessarily a representative sample
  of all computer science graduates.

  ---Vassil.


-- 
"Even when the muse is posting on Usenet, Alexander Sergeevich?"
From: Vassil Nikolov
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <snzws6eaocd.fsf@luna.vassil.nikolov.name>
P.S. For how long has there actually been higher education in computer
science?  Not for six decades, I don't think; four seems more likely
(and that would vary by country, too; and then the numbers of both
institutions and students would have been much smaller initially, so
the growth rate is also important).  Furthermore, even assuming a lisp
presence in most computer science curricula today (I don't know a good
value for "most" here, but it may well be below the proverbial 99.9%),
when did that come into being?  From the very beginning of computer
science in higher education?

---Vassil.


-- 
"Even when the muse is posting on Usenet, Alexander Sergeevich?"
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <g8OdnWTnabqRgsTXnZ2dnUVZ8sOdnZ2d@brightview.co.uk>
Vassil Nikolov wrote:
> P.S. For how long has there actually been higher education in computer
> science?

  "The world’s first computer science course opened here in 1953." -
http://www.cam.ac.uk/admissions/undergraduate/courses/compsci/

> Not for six decades, I don't think;

That was 56 years ago.

> four seems more likely  
> (and that would vary by country, too; and then the numbers of both
> institutions and students would have been much smaller initially, so
> the growth rate is also important).

I already cited data for the US spanning 14 years.

> Furthermore, even assuming a lisp 
> presence in most computer science curricula today (I don't know a good
> value for "most" here, but it may well be below the proverbial 99.9%),
> when did that come into being?  From the very beginning of computer
> science in higher education?

Sure but you can easily account for over a million CS students just by
looking at the US and China over the past 10 years, let alone the rest of
the world and the preceding 46 years.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Richard Fateman
Subject: How many students learn Lisp? was Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <4A5925C4.5060504@cs.berkeley.edu>
Let's just say there are/were 1 million students of computer science in 
higher education worldwide.

Consider the statistics for teaching students the Scheme variant of Lisp 
in courses based on Abelson/Sussman SICP, or others.
This textbook, in its heyday was used at something like 120 institutions.

At Univ. Calif Berkeley alone, my guess is that 15,000 or more students 
took such courses in the last 2 decades. Probably a higher number at 
MIT. So 99.9% is wrong: if Berkeley were the only school teaching Lisp, 
you'd have at most 98.5%.  And if MIT has similar numbers, 97%.  And 
that is just counting 2 schools, not 100.  Do students eventually end up 
writing Lisp programs for a living? Probably not. That's grist for 
another troll-mill.


So far as Haskell and Prolog -- I don't have any numbers, but sometimes 
we teach students how to implement Prolog in that SICP course.

Time to change the subject line?
From: Vassil Nikolov
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <snzr5wmah82.fsf@luna.vassil.nikolov.name>
On Sun, 12 Jul 2009 01:33:16 +0100, Jon Harrop <···@ffconsultancy.com> said:
> ...
> Vassil Nikolov wrote:
>> ...
>> (and that would vary by country, too; and then the numbers of both
>> institutions and students would have been much smaller initially, so
>> the growth rate is also important).

> I already cited data for the US spanning 14 years.

>> Furthermore, even assuming a lisp 
>> presence in most computer science curricula today (I don't know a good
>> value for "most" here, but it may well be below the proverbial 99.9%),
>> when did that come into being?  From the very beginning of computer
>> science in higher education?

> Sure but you can easily account for over a million CS students just by
> looking at the US and China over the past 10 years, ...

  That may or may not have been the number; we haven't seen the
  statistics to demonstrate it.  Looking at Figure 1 in the cited
  report from the 2007-2008 CRA Taulbee Survey [*], for example, we
  can roughly estimate the average annual number of bachelor's degrees
  produced since 1995 as 15e3 [#].  Even assuming twice as many in
  China for _all_ of that time [+], it is hard to see how _from this_
  we can arrive at that million.

  _________
  [*] Reproducing the reference here for convenience:
      <http://www.cra.org/taulbee/CRATaulbeeReport-StudentEnrollment-07-08.pdf>.
  [#] In fact, computer science and computer engineering; do the
      latter curricula include lisp?
  [+] China has been changing very rapidly in the past decade or two.
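
  The same arithmetic, worked through (a Python sketch; the 1995-2008
  span and the uniform "twice as many in China" factor are the
  assumptions flagged above):

```python
# Rough total implied by the figures above: ~15e3 US bachelor's degrees
# per year since 1995, and (generously) twice as many in China throughout.
us_per_year = 15_000
years = 2008 - 1995                # span covered by the estimate, 13 years
us_total = us_per_year * years     # 195,000
china_total = 2 * us_total         # 390,000 under the doubling assumption
combined = us_total + china_total
print(combined)   # 585000, well short of a million
```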

  ---Vassil.


-- 
"Even when the muse is posting on Usenet, Alexander Sergeevich?"
From: toby
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <5e61f454-7ef1-4b49-b2c3-dff4a7aaf27f@33g2000vbe.googlegroups.com>
On Jul 10, 3:36 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> George Neuner wrote:
> > On Fri, 10 Jul 2009 01:34:49 -0700 (PDT), fft1976 <·······@gmail.com>
> > wrote:
> >>On Jul 10, 1:23 am, Frank Buss <····@frank-buss.de> wrote:
> >>> You could try Common Lisp: It is nice for imperative code and doesn't
> >>> suck as much as C++ for functional code :-)
>
> >>It's probably inconceivable to you, but I, like 99.9% of other
> >>programmers, respectfully disagree.
>
> > You mean the 99.9% who have never tried Lisp?
>
> You say that as if Lisp had not been taught to millions of computer science
> students, 99.9% of whom really do choose not to use Lisp.

It's just as ludicrous to say that 97% of PC users "really do choose"
to use Windows. You know perfectly well that these are not free
choices. They are governed by policy, fashion, "what you are exposed
to" (99.9% of programmers certainly have not been exposed to Lisp in a
way that they could make an informed choice*) - and in my example, of
course, a deeply entrenched monopoly.

* - besides, as far as I can tell, people rarely make choices or argue
based on first hand knowledge. Really, 99.9% of computer scientists
were properly taught Lisp? What % of those were also given the
impression that it was an academic exercise of no use in the real
world? By their teachers? Or by the received wisdom of the crowd? Or
both?

>
> --
> Dr Jon D Harrop, Flying Frog Consultancy Ltd.http://www.ffconsultancy.com/?u
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <x66dnUA0qvfW-cTXnZ2dnUVZ8rudnZ2d@brightview.co.uk>
toby wrote:
> On Jul 10, 3:36 pm, Jon Harrop <····@ffconsultancy.com> wrote:
>> George Neuner wrote:
>> > On Fri, 10 Jul 2009 01:34:49 -0700 (PDT), fft1976 <·······@gmail.com>
>> > wrote:
>> >>On Jul 10, 1:23 am, Frank Buss <····@frank-buss.de> wrote:
>> >>> You could try Common Lisp: It is nice for imperative code and doesn't
>> >>> suck as much as C++ for functional code :-)
>>
>> >>It's probably inconceivable to you, but I, like 99.9% of other
>> >>programmers, respectfully disagree.
>>
>> > You mean the 99.9% who have never tried Lisp?
>>
>> You say that as if Lisp had not been taught to millions of computer
>> science students, 99.9% of whom really do choose not to use Lisp.
> 
> It's just as ludicrous to say that 97% of PC users "really do choose"
> to use Windows. You know perfectly well that these are not free 
> choices. They are governed by policy, fashion, "what you are exposed
> to" (99.9% of programmers certainly have not been exposed to Lisp in a
> way that they could make an informed choice*) - and in my example, of
> course, a deeply entrenched monopoly.
> 
> * - besides, as far as I can tell, people rarely make choices or argue
> based on first hand knowledge. Really, 99.9% of computer scientists
> were properly taught Lisp? What % of those were also given the
> impression that it was an academic exercise of no use in the real
> world? By their teachers? Or my the received wisdom of the crowd? Or
> both?

You're just trying to change the definition of "choice".

Those users have tasks to complete. If a Linux-based solution starts
with "recreate all data in Linux-compatible formats" then they rule out
that choice. Similarly, programmers don't choose Lisp because it would
start them off on the wrong foot.

Either way, it is the suckiness of the OS/language that rules it out as a
viable option. The user still has complete freedom over their choice. They
just recognise that Linux/Lisp is not a viable choice because of
practicalities.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: David Formosa (aka ? the Platypus)
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <slrnh5jgji.l51.dformosa@localhost.localdomain>
On Sun, 12 Jul 2009 06:28:43 +0100, Jon Harrop <···@ffconsultancy.com> wrote:
[...]
> Those users have tasks to complete. If a Linux-based solution starts
> with "recreate all data in Linux-compatible formats"

What is a Linux-compatible format, and what is a Linux-incompatible format?
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <3dWdnfMEyfSNi8fXnZ2dnUVZ8lBi4p2d@brightview.co.uk>
David Formosa (aka ? the Platypus) wrote:
> On Sun, 12 Jul 2009 06:28:43 +0100, Jon Harrop <···@ffconsultancy.com>
> wrote:
> [...]
>> Those users have tasks to complete. If a Linux-based solution starts
>> with "recreate all data in Linux-compatible formats"
> 
> What is a linux-compatible format, what is a linux-incomptable format?

Say you have data in a proprietary format and the only software that can
read it is Windows only: your data is not in a Linux-compatible format.

For example, we have a fantastic new digital camera, and I am buying a
replacement netbook after my wife's Asus Eee PC died. I am choosing another
Windows-based one because it will be able to read the data.

Note that I had the choice and I chose Windows but my choice was dictated by
practical requirements and not the theoretical properties of the language.
Contrary to Toby's description, I really did choose.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Scott Burson
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <e10aaba6-36d4-45d4-a982-9a48ce1da0dc@j9g2000prh.googlegroups.com>
On Jul 12, 11:07 am, Jon Harrop <····@ffconsultancy.com> wrote:
>
> For example, we have a fantastic new digital camera and I am buying a
> replacement netbook after my wife's Asus Eee PC died. I choose a another
> Windows-based one because it will be able to read the data.

A digital camera that doesn't use an industry-standard method of
transferring data (such as one of the various formats of flash card,
or showing up on USB as a generic mass storage device)?  Please tell
me what brand it is so I can be sure not to buy one.

-- Scott
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <WYqdnSdP-Oez7cfXnZ2dnUVZ8tkAAAAA@brightview.co.uk>
Scott Burson wrote:
> On Jul 12, 11:07 am, Jon Harrop <····@ffconsultancy.com> wrote:
>>
>> For example, we have a fantastic new digital camera and I am buying a
>> replacement netbook after my wife's Asus Eee PC died. I choose a another
>> Windows-based one because it will be able to read the data.
> 
> A digital camera that doesn't use an industry-standard method of
> transferring data (such as one of the various formats of flash card,
> or showing up on USB as a generic mass storage device)?  Please tell
> me what brand it is so I can be sure not to buy one.

Canon.

For another example, part of my work entails writing articles about F#. I
could try to run F# on Mono under Linux but it is a nightmare compared
to .NET on Windows. So I chose a Windows netbook for myself.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Scott Burson
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <da694a5b-28ee-4123-8226-4220836507d7@q40g2000prh.googlegroups.com>
On Jul 12, 5:31 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> Scott Burson wrote:
> > A digital camera that doesn't use an industry-standard method of
> > transferring data (such as one of the various formats of flash card,
> > or showing up on USB as a generic mass storage device)?  Please tell
> > me what brand it is so I can be sure not to buy one.
>
> Canon.

Hmm, okay, I see that some claim that the open-source Linux driver
works okay; and failing that, one could use a CF card reader.  I think
I could live with that.

> For another example, part of my work entails writing articles about F#. I
> could try to run F# on Mono under Linux but it is a nightmare compared
> to .NET on Windows. So I chose a Windows netbook for myself.

I'm not questioning your choice of OS.  You have your requirements; I
have mine.

-- Scott
From: toby
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <3f2377d3-24db-4d7a-a304-7a1f556961d5@f16g2000vbf.googlegroups.com>
On Jul 12, 2:07 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> David Formosa (aka ? the Platypus) wrote:
>
> > On Sun, 12 Jul 2009 06:28:43 +0100, Jon Harrop <····@ffconsultancy.com>
> > wrote:
> > [...]
> >> Those users have tasks to complete. If a Linux-based solution starts
> >> with "recreate all data in Linux-compatible formats"
>
> > What is a linux-compatible format, what is a linux-incomptable format?
>
> Say you have data in a proprietary format and the only software that can
> read it is Windows only: your data is not in a Linux-compatible format.
>
> For example, we have a fantastic new digital camera and I am buying a
> replacement netbook after my wife's Asus Eee PC died. I choose a another
> Windows-based one because it will be able to read the data.
>
> Note that I had the choice and I chose Windows but my choice was dictated by
> practical requirements and not the theoretical properties of the language.
> Contrary to Toby's description, I really did choose.

No, you're *forced* to "choose" Windows. You can buy a computer with
Windows or OS X pre-installed, or you can buy Windows, wipe it and
install a free operating system. Only the computer with Windows is
compatible with your digital camera. That's not a choice*, is it?

Your refrigerator is empty but for a jar of pickled onions. Are you
going to "really choose" pickled onions for dinner? What if you'd
really rather eat something else?

* - "choice": According to my English dictionary: an act of selecting
or making a decision when faced with TWO OR MORE possibilities.

[ The fact that few ISVs bother porting their software beyond Windows
is an *effect* of the monopoly. Making it *impossible* to port beyond
Windows is the plain raison d'être of F# and all MS-proprietary
technologies. ]

>
> --
> Dr Jon D Harrop, Flying Frog Consultancy Ltd.http://www.ffconsultancy.com/?u
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <WYqdnSRP-OeI7cfXnZ2dnUVZ8tmdnZ2d@brightview.co.uk>
toby wrote:
> No, you're *forced* to "choose" Windows.

Nonsense. Nothing forced me to start using Windows. I did so entirely of my
own free will. In fact, I now regret not having chosen Windows earlier.

> You can buy a computer with 
> Windows or OS X pre-installed,

Or Linux. I am writing this on a Dell that came with Linux installed.

> or you can buy Windows, wipe it and 
> install a free operating system.

Or you can wipe Linux and install Windows, as I did on my old laptop.

> Only the computer with Windows is compatible with your digital camera.
> That's not a choice*, is it? 

Yes, it is a choice. Incompatibility with a camera does not prevent me from
choosing to install Linux. Moreover, I have more choices: I might choose to
battle with existing buggy Linux drivers, or I might choose to try to write my
own.

> Your refrigerator is empty but for a jar of pickled onions. Are you
> going to "really choose" pickled onions for dinner? What if you'd 
> really rather eat something else?

You're trying to say that making a choice means the choice never existed,
which is nonsensical.

> * - "choice": According to my English dictionary: an act of selecting
> or making a decision when faced with TWO OR MORE possibilities.
> 
> [ The fact that few ISVs bother porting their software beyond Windows
> is an *effect* of the monopoly. Making it *impossible* to port beyond 
> Windows is the plain raison d'être of F# and all MS-proprietary
> technologies. ]

Simply untrue. I port software from F# to OCaml all the time.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: toby
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <d9905be6-5559-40e6-87ed-c1f8871ef87f@d4g2000vbm.googlegroups.com>
On Jul 12, 8:31 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> toby wrote:
> > No, you're *forced* to "choose" Windows.
>
> Nonsense. Nothing forced me to start using Windows. I did so entirely of my
> own free will. In fact, I now regret not having chosen Windows earlier.

So why did you use the example of the camera, which I argue was a
non-choice?

>
> > You can buy a computer with
> > Windows or OS X pre-installed,
>
> Or Linux. I am writing this on a Dell that came with Linux installed.

That's good news, I didn't think Dell did that any more.

>
> > or you can buy Windows, wipe it and
> > install a free operating system.
>
> Or you can wipe Linux and install Windows, as I did on my old laptop.
>
> > Only the computer with Windows is compatible with your digital camera.
> > That's not a choice*, is it?
>
> Yes, it is a choice. Incompatibility with a camera does not prevent me from
> choosing to install Linux. Moreover, I have more choices: I might choose to
> battle with existing buggy Linux drivers or I might choose try to write my
> own.

Yes, that's a real choice, but not the one you said you made.

>
> > Your refrigerator is empty but for a jar of pickled onions. Are you
> > going to "really choose" pickled onions for dinner? What if you'd
> > really rather eat something else?
>
> You're trying to say that taking a choice means that choice never existed,
> which is nonsensical.
>
> > * - "choice": According to my English dictionary: an act of selecting
> > or making a decision when faced with TWO OR MORE possibilities.
>
> > [ The fact that few ISVs bother porting their software beyond Windows
> > is an *effect* of the monopoly. Making it *impossible* to port beyond
> > Windows is the plain raison d'être of F# and all MS-proprietary
> > technologies. ]
>
> Simply untrue. I port software from F# to OCaml all the time.

One single personal exception cancels all other cases, and the whole
issue, eh? Now port the other 1,500,000 apps built on Microsoft
("freely chosen" by the developers from all alternatives, mind you!)
which could have been built on portable systems to begin with.

--Toby

>
> --
> Dr Jon D Harrop, Flying Frog Consultancy Ltd.http://www.ffconsultancy.com/?u
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <tpqdnfiwf4KN_MbXnZ2dnUVZ8iGdnZ2d@brightview.co.uk>
toby wrote:
> On Jul 12, 8:31 pm, Jon Harrop <····@ffconsultancy.com> wrote:
>> toby wrote:
>> > No, you're *forced* to "choose" Windows.
>>
>> Nonsense. Nothing forced me to start using Windows. I did so entirely of
>> my own free will. In fact, I now regret not having chosen Windows
>> earlier.
> 
> So why did you use the example of the camera, which I argue was a non-
> choice?

The camera example is not a "non-choice".

>> > [ The fact that few ISVs bother porting their software beyond Windows
>> > is an *effect* of the monopoly. Making it *impossible* to port beyond
>> > Windows is the plain raison d'être of F# and all MS-proprietary
>> > technologies. ]
>>
>> Simply untrue. I port software from F# to OCaml all the time.
> 
> One single personal exception cancels all other cases, and the whole
> issue, eh?

Yes. Indeed, that is the sole purpose of a counterexample.

> Now port the other 1,500,000 apps built on Microsoft 
> ("freely chosen" by the developers from all alternatives, mind you!)
> which could have been built on portable systems to begin with.

No, there is no such thing as a "portable system".

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: George Neuner
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <cgpe55lm1fhrsbl0jom4j9r039hhf6m8ml@4ax.com>
On Fri, 10 Jul 2009 01:10:49 -0700 (PDT), fft1976 <·······@gmail.com>
wrote:

>On Jul 9, 4:32�pm, Jon Harrop <····@ffconsultancy.com> wrote:
>> Larry Coleman wrote:
>> > So what do you want from a programming language?
>>
>> Excellent question. I've never really tried to enumerate it before...
>
>(snip)
>
>This "being functional" is an unhealthy obsession. 3/4 of the time
>imperative idioms are more convenient, and 1/4 of the time functional
>ones are more convenient.

Only because that's what you are used to.


>But because most popular languages are imperative, people are whining
>about those 1/4 cases and some disturbed individuals even think
>languages should be pure-functional.

And they are wrong.  Strict, impure functional languages are easier to
use and easier to reason about.  Laziness is occasionally a good thing
to have, but it doesn't need to be built in - it can be implemented
reasonably on top of any language with closures.
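
A minimal sketch of laziness layered on closures, in Python for
concreteness (the delay/force names follow Scheme's promises; this is an
illustration, not any particular library's API):

```python
def delay(fn):
    # Wrap a computation in a thunk that runs at most once and
    # caches its result (call-by-need rather than call-by-name).
    cell = {"done": False, "value": None}
    def thunk():
        if not cell["done"]:
            cell["value"] = fn()
            cell["done"] = True
        return cell["value"]
    return thunk

def force(promise):
    # Forcing is just calling the closure.
    return promise()

# The deferred computation does not run until forced, and runs only once.
calls = []
p = delay(lambda: calls.append("ran") or 42)
assert calls == []            # nothing evaluated yet
assert force(p) == 42
assert force(p) == 42
assert calls == ["ran"]       # evaluated exactly once
```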


>What I want is a safe (or has an option to be safe), fast mostly
>imperative language that doesn't suck.

Then why are you posting in groups that promote functional languages?


>C++ (if you are very good and very careful), Java and OCaml are
>decent, but not quite what I want. Maybe D and Ada, although I don't
>know about those.

I've always liked Modula-3.  Ada is great if you are working on bare
hardware but IMO it is not a great general application language.

George
From: fft1976
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <1ea892f0-e8d7-4e8c-845a-19305296fad1@h11g2000yqb.googlegroups.com>
On Jul 10, 10:09 am, George Neuner <········@comcast.net> wrote:

> Then why are you posting in groups that promote functional languages?

To help you see the light.
From: George Neuner
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <93mf559h40ou1s5giufonc1r2vrb8821n9@4ax.com>
On Fri, 10 Jul 2009 15:17:45 -0700 (PDT), fft1976 <·······@gmail.com>
wrote:

>On Jul 10, 10:09�am, George Neuner <········@comcast.net> wrote:
>
>> Then why are you posting in groups that promote functional languages?
>
>To help you see the light.

That is a moronic statement given that you know nothing of my
background.  I'd wager I've programmed in more languages than you've
even heard of.

George
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <bMGdnQNE47vZWcrXnZ2dnUVZ8hti4p2d@brightview.co.uk>
George Neuner wrote:
> On Fri, 10 Jul 2009 01:10:49 -0700 (PDT), fft1976 <·······@gmail.com>
> wrote:
>>But because most popular languages are imperative, people are whining
>>about those 1/4 cases and some disturbed individuals even think
>>languages should be pure-functional.
> 
> And they are wrong.  Strict, impure functional languages are easier to
> use and easier to reason about.  Laziness is occasionally a good thing
> to have, but it doesn't need to be built in - it can be implemented
> reasonably on top of any language with closures.

But at what cost? F# is certainly better than OCaml, but it can still be a
long way from the clarity of Haskell.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: George Neuner
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <fnlf5515jqskd3n1s2h4k7r6ucm2hud0q1@4ax.com>
On Sat, 11 Jul 2009 00:53:38 +0100, Jon Harrop <···@ffconsultancy.com>
wrote:

>George Neuner wrote:
>> On Fri, 10 Jul 2009 01:10:49 -0700 (PDT), fft1976 <·······@gmail.com>
>> wrote:
>>>But because most popular languages are imperative, people are whining
>>>about those 1/4 cases and some disturbed individuals even think
>>>languages should be pure-functional.
>> 
>> And they are wrong.  Strict, impure functional languages are easier to
>> use and easier to reason about.  Laziness is occasionally a good thing
>> to have, but it doesn't need to be built in - it can be implemented
>> reasonably on top of any language with closures.
>
>But at what cost? F# is certainly better than OCaml but it can still be a
>long way from the clarify of Haskell.

Agreed.  But, apart from streams, how often are lazy constructs really
used?  Lazy data structures definitely have utility, but I think the
majority of general programming tasks do not need them.  Making lazy
evaluation the default, as in Haskell, has been a continual source of
problems.  Lazy data is just a closure to compute a value, and so it
can be implemented in any language with closures.  Perhaps it does
need to be a primitive for performance ... but not a default mode.

George
From: Paul Rubin
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <7xhbxk6o3r.fsf@ruckus.brouhaha.com>
George Neuner <········@comcast.net> writes:
> Lazy data is just a closure to compute a value and so it
> can be implemented on any language with closures. 

It's more complicated than that since it's usually implemented as
graph reduction.  Coding it directly as closures (call-by-name) can
cause exponential slowdown.
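
A toy illustration of that point (Python sketch; by_name re-evaluates its
closure on every use, while by_need caches the first result, standing in
for the sharing that graph reduction provides):

```python
def by_name(fn):
    # Call-by-name: every force re-runs the closure.
    return lambda: fn()

def by_need(fn):
    # Call-by-need: the first force caches the result.
    cache = []
    def thunk():
        if not cache:
            cache.append(fn())
        return cache[0]
    return thunk

def nested(n, mk, counter):
    # Builds t_0 = 1 and t_{i+1} = t_i + t_i, all deferred.  Forcing t_n
    # needs O(n) evaluations with sharing but O(2^n) without it.
    def base():
        counter[0] += 1
        return 1
    t = mk(base)
    for _ in range(n):
        prev = t
        t = mk(lambda p=prev: p() + p())
    return t

c1 = [0]
assert nested(10, by_name, c1)() == 1024
assert c1[0] == 1024          # base re-evaluated 2^10 times

c2 = [0]
assert nested(10, by_need, c2)() == 1024
assert c2[0] == 1             # base evaluated exactly once
```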
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <Ip6dnSUaT-M4fcrXnZ2dnUVZ8vCdnZ2d@brightview.co.uk>
George Neuner wrote:
> Agreed.  But, apart from streams, how often are lazy constructs really
> used?  Lazy data structures definitely have utility, but I think the
> majority of general programming tasks does not need them.  Making lazy
> evaluation the default, as in Haskell, has been a continual source of
> problems.

Yes.

> Lazy data is just a closure to compute a value and so it 
> can be implemented on any language with closures.

You need to overwrite the closure with the value it computed as well, of
course.

> Perhaps it does need to be a primitive for performance ... but not a
> default mode. 

Actually there is no decent implementation of thunks from a performance
perspective, because they can be forced from multiple threads simultaneously
and so, essentially, require locking. Obviously every subexpression in a
non-strict language is potentially a thunk, so the overhead of all those
locks is enormous. GHC has adopted some elaborate workarounds for its
"sparks", but they are fragile, e.g. seemingly innocuous tweaks can make the
GC leak indefinitely.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Ertugrul =?UTF-8?B?U8O2eWxlbWV6?=
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <20090711160308.624112b8@tritium.xx>
George Neuner <········@comcast.net> wrote:

> > But because most popular languages are imperative, people are
> > whining about those 1/4 cases and some disturbed individuals even
> > think languages should be pure-functional.
>
> And they are wrong.  Strict, impure functional languages are easier to
> use and easier to reason about.  Laziness is occasionally a good thing
> to have, but it doesn't need to be built in - it can be implemented
> reasonably on top of any language with closures.

No, it cannot, because lazy code needs to be aware of the laziness.  In
other words, you couldn't feed a lazy value to a strict function.  You
would need to specifically write a non-strict variant of it.  Laziness
is not a language feature, but a semantic property.
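
Concretely (a Python sketch, with a thunk represented as a zero-argument
closure; the function names are illustrative):

```python
def double(x):
    # A strict function, written against plain values.
    return x * 2

lazy_val = lambda: 21    # a deferred value represented as a closure

# The strict code is unaware of the thunk and breaks on it:
try:
    double(lazy_val)
except TypeError:
    pass                 # can't multiply a function by 2

# A laziness-aware variant must be written separately and force explicitly:
def double_nonstrict(thunk):
    return thunk() * 2

assert double_nonstrict(lazy_val) == 42
```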


Greets,
Ertugrul.


-- 
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/
From: Ertugrul =?UTF-8?B?U8O2eWxlbWV6?=
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <20090710184049.7bd52c02@tritium.xx>
fft1976 <·······@gmail.com> wrote:

> > > So what do you want from a programming language?
> >
> > Excellent question. I've never really tried to enumerate it
> > before...
>
> (snip)
>
> This "being functional" is an unhealthy obsession. 3/4 of the time
> imperative idioms are more convenient, and 1/4 of the time functional
> ones are more convenient.

Fortunately that's just your opinion and not the reality.  Even
Microsoft has found the value in functional programming.  See F#.  It's
scheduled to be packaged with Visual Studio in 2010, AFAIK.  Also many
imperative languages are employing more and more functional concepts.


> But because most popular languages are imperative, people are whining
> about those 1/4 cases and some disturbed individuals even think
> languages should be pure-functional.

What's wrong with purely functional languages?


> What I want is a safe (or has an option to be safe), fast mostly
> imperative language that doesn't suck.

If you want safety, go for functional programming.  As for speed, most
decent functional languages aren't considerably slower than C, and given
that you save a lot of development time, you'll get your result much
faster.

Also purely functional languages (e.g. Haskell, Clean) and languages
with immutable data (e.g. F#) allow clean and concise algorithm
implementations, which are still just as fast as imperative variants
with explicit reference/memory handling.  Finally, with these two classes
of languages, and additionally with Erlang, you get concurrency and
parallelism almost for free.  Multithreaded programming is a PITA in all
imperative languages.


> C++ (if you are very good and very careful), Java and OCaml are
> decent, but not quite what I want. Maybe D and Ada, although I don't
> know about those.

Just in case you didn't notice, OCaml is a functional language.
However, nobody is "very good" and "very careful".  C++ leaves too many
spots open for making mistakes.  D is much better at that, if you insist
on imperative programming.

All in all you're just showing that you don't have a clue about
functional programming.  Before making such unfounded claims, you'd
better try a functional language for more than 15 minutes.  If you still
think that functional programming sucks, you should provide reasons to
support your claims.


Greets,
Ertugrul.


-- 
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/
From: Vend
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <44285888-2fbc-4999-a3be-057f89c5c162@18g2000yqa.googlegroups.com>
On Jul 10, 6:40 pm, Ertugrul Söylemez <····@ertes.de> wrote:
> Fortunately that's just your opinion and not the reality.  Even
> Microsoft has found the value in functional programming.  See F#.  It's
> scheduled to be packaged with Visual Studio in 2010, AFAIK.  Also many
> imperative languages are employing more and more functional concepts.

F# is not purely functional.

> > But because most popular languages are imperative, people are whining
> > about those 1/4 cases and some disturbed individuals even think
> > languages should be pure-functional.
>
> What's wrong with purely functional languages?
>
> > What I want is a safe (or has an option to be safe), fast mostly
> > imperative language that doesn't suck.
>
> If you want safety, go for functional programming.

Why?
Type safety is orthogonal to the functional/imperative distinction.

>  About speed, most
> decent functional languages aren't considerably slower than C, but given
> that you save a lot of development time, you'll get your result much
> faster.

However, if I understand correctly, purely functional languages
typically have performance problems.

> Also purely functional languages (e.g. Haskell, Clean) and languages
> with immutable data (e.g. F#) allow clean and concise algorithm
> implementations, which are still just as fast as imperative variants
> with explicit reference/memory handling.

This makes no sense, since you can always use mutable variables as if
they were immutable.

What makes functional languages more concise is the availability of
first-class procedures.
Lazy evaluation can also improve expressivity, at the cost of
performance problems.

>  Finally with these two classes
> of languages and additionally Erlang you get concurrency and parallelism
> almost for free.  Multithreaded programming is a PITA in all imperative
> languages.

Again, these issues are orthogonal. Erlang's message passing model of
concurrency can also be used in imperative languages.

> > C++ (if you are very good and very careful), Java and OCaml are
> > decent, but not quite what I want. Maybe D and Ada, although I don't
> > know about those.
>
> Just in case you didn't notice, OCaml is a functional language.

But it is not pure.

> However, nobody is "very good" and "very careful".  C++ leaves too many
> spots open for making mistakes.  D is much better at that, if you insist
> on imperative programming.
>
> All in all you're just showing that you don't have a clue about
> functional programming.  Before making such unfounded claims, you'd
> better try a functional language for more than 15 minutes.  If you still
> think that functional programming sucks, you should provide reasons to
> support your claims.

He didn't say that functional programming sucks; he said that the
functional paradigm is useful 25% of the time.
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <7ImdnR7w0d4wGsrXnZ2dnUVZ8uidnZ2d@brightview.co.uk>
Vend wrote:
> Why?
> Type safety is orthogonal to the functional/imperative distinction.

Only in theory. In practice, functional languages (OCaml, Haskell) offer far
more expressive type systems than any available safe imperative language
implementation (Java, C#).

>> Also purely functional languages (e.g. Haskell, Clean) and languages
>> with immutable data (e.g. F#) allow clean and concise algorithm
>> implementations, which are still just as fast as imperative variants
>> with explicit reference/memory handling.
> 
> This makes no sense, since you can always use mutable variables as if
> they were immutable.

That does not recover the properties of purely functional data structures.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Vend
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <ef2b9978-1ae3-42a4-95dc-d28f321b8510@t13g2000yqt.googlegroups.com>
On Jul 10, 9:35 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> Vend wrote:
> > Why?
> > Type safety is orthogonal to the functional/imperative distinction.
>
> Only in theory. In practice, functional languages (OCaml, Haskell) offer far
> more expressive type systems than any available safe imperative language
> implementation (Java, C#).

The issue under discussion was type safety, not type expressiveness.
And anyway, while modern functional languages do have expressive type
systems, this isn't necessarily a property of functional languages.

Algebraic types and type inference could be used in the definition of
an imperative language.
On the other hand, early lisps were largely functional languages, yet
they had poor type systems.

> >> Also purely functional languages (e.g. Haskell, Clean) and languages
> >> with immutable data (e.g. F#) allow clean and concise algorithm
> >> implementations, which are still just as fast as imperative variants
> >> with explicit reference/memory handling.
>
> > This makes no sense, since you can always use mutable variables as if
> > they were immutable.
>
> That does not recover the properties of purely functional data structures.

What properties?
If by convention you assign to each variable only once per lifetime,
you get the same properties as immutable variables.

Having a way to specify immutability in the language can be useful to
detect bugs, but it doesn't change the base semantics of the programs:
if you take a correct program written using immutable variables and
replace them with mutable variables, it stays correct.
From: Andreas Rossberg
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <e30c7e67-2d3a-4fb8-8cb6-db13734700a8@o7g2000yqb.googlegroups.com>
On Jul 11, 10:50 am, Vend <······@virgilio.it> wrote:
>
> The issue under discussion was type safety, not type expressiveness.

These are interdependent. A more expressive type system requires fewer
work-arounds with unsafe operations such as downcasts, and allows
encoding domain-specific safety and correctness properties in types,
making the notion of type safety stronger.

> And anyway, while modern functional languages do have expressive type
> systems, this isn't necessarily a property of functional languages.
>
> Algebraic types and type inference could be used in the definition of
> an imperative language.

Hindley/Milner style polymorphic type inference - which is what all
those functional languages are based on - does not work for imperative
features. For an imperative language you are forced to use more ad-hoc
and less complete approaches.

> > > This makes no sense, since you can always use mutable variables as if
> > > they were immutable.
>
> > That does not recover the properties of purely functional data structures.
>
> What properties?
> If by convention you assign to each variable only once per lifetime,
> you get the same properties as immutable variables.

When you build an abstraction you cannot simply assume that client
code behaves well. Ensuring (never mind proving) abstraction safety is
orders of magnitude harder in the presence of mutable state. The whole
cottage industry around ownership types is one attempt to tame mutable
state in this respect.

> Having a way to specify immutability in the language can be useful to
> detect bugs, but it doesn't change the base semantics of the programs:
> if you take a correct program written using immutable variables and
> replace them with mutable variables, it stays correct.

That is patently false (unless you only consider whole programs, which
is not very useful). The presence of mutable state fundamentally
changes the semantic notion of program equivalence. In a language with
state, if you put your program P into the context C of another program
(ie. your "program" is a module or class that is used in a larger
program), C can generally distinguish more possible behaviours of P
than in a language with immutable variables - which is a technical way
of saying that it has more ways to mess with it. The flipside is that
you have to work harder to write P such that no C can break its
intended semantics. The most obvious (but by no means sufficient) rule
is that P should never pass out parts of its internal representation,
but always has to copy it first. That is no concern with immutable
data structures.

- Andreas
From: Vend
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <85272402-5ec8-47dd-af2b-a58e2b010767@h2g2000yqg.googlegroups.com>
On Jul 11, 12:38 pm, Andreas Rossberg <········@mpi-sws.org> wrote:
> On Jul 11, 10:50 am, Vend <······@virgilio.it> wrote:
>
>
>
> > The issue under discussion was type safety, not type expressiveness.
>
> These are interdependent. A more expressive type system requires fewer
> work-arounds with unsafe operations such as downcasts, and allows
> encoding domain-specific safety and correctness properties in types,
> making the notion of type safety stronger.

Ok, I meant the property of the language not to fall into unspecified
behavior when a type error occurs, as C and C++ can.

> Hindley/Milner style polymorphic type inference - which is what all
> those functional languages are based on - does not work for imperative
> features. For an imperative language you are forced to use more ad-hoc
> and less complete approaches.

Can you provide an example, please?

> > What properties?
> > If by convention you assign to each variable only once per lifetime,
> > you get the same properties as immutable variables.
>
> When you build an abstraction you cannot simply assume that client
> code behaves well. Ensuring (never mind proving) abstraction safety is
> orders of magnitude harder in the presence of mutable state. The whole
> cottage industry around ownership types is one attempt to tame mutable
> state in this respect.

As I said, immutable variables can be useful for program verification.
You can write in the documentation of your modules "don't mutate these
objects", and hope that the client complies, but if the language
allows you to express that contract in the type system then it is
useful to do so and avoid errors.

However, I see no point in making all variables immutable, or even in
making variables immutable by default.

> > Having a way to specify immutability in the language can be useful to
> > detect bugs, but it doesn't change the base semantics of the programs:
> > if you take a correct program written using immutable variables and
> > replace them with mutable variables, it stays correct.
>
> That is patently false (unless you only consider whole programs, which
> is not very useful). The presence of mutable state fundamentally
> changes the semantic notion of program equivalence. In a language with
> state, if you put your program P into the context C of another program
> (ie. your "program" is a module or class that is used in a larger
> program), C can generally distinguish more possible behaviours of P
> than in a language with immutable variables - which is a technical way
> of saying that it has more ways to mess with it.

No, since you can represent state with input and output values.

> The flipside is that
> you have to work harder to write P such that no C can break its
> intended semantics. The most obvious (but by no means sufficient) rule
> is that P should never pass out parts of its internal representation,
> but always has to copy it first. That is no concern with immutable
> data structures.
>
> - Andreas
From: Andreas Rossberg
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <da4b1a5b-b58d-434d-902f-e2ac37d24003@a7g2000yqk.googlegroups.com>
On Jul 11, 1:02 pm, Vend <······@virgilio.it> wrote:
>
> > Hindley/Milner style polymorphic type inference - which is what all
> > those functional languages are based on - does not work for imperative
> > features. For an imperative language you are forced to use more ad-hoc
> > and less complete approaches.
>
> Can you provide an example, please?

The simplest possible example: when you declare

  val x = nil

where nil denotes the empty list, then in case x is immutable,
inference can assign the polymorphic type

  x : List[A]

If x is mutable, it cannot, because that would be unsound. Similar
problems occur with subtyping polymorphism, essentially because
mutable variables always have to be invariant.


> However, I see no point in making all variables immutable, or even in
> making variables immutable by default.

The idea is that the language (and its type system) should cleanly
separate the notions of variable and mutable state. That makes the
intent clear, allows for more flexible typing, and most importantly,
avoids accident. That is what functional languages do (all of which
allow mutable state in one form or the other).


> > > Having a way to specify immutability in the language can be useful to
> > > detect bugs, but it doesn't change the base semantics of the programs:
> > > if you take a correct program written using immutable variables and
> > > replace them with mutable variables, it stays correct.
>
> > That is patently false (unless you only consider whole programs, which
> > is not very useful). The presence of mutable state fundamentally
> > changes the semantic notion of program equivalence. In a language with
> > state, if you put your program P into the context C of another program
> > (ie. your "program" is a module or class that is used in a larger
> > program), C can generally distinguish more possible behaviours of P
> > than in a language with immutable variables - which is a technical way
> > of saying that it has more ways to mess with it.
>
> No, since you can represent state with input and output values.

Not sure what you are saying. If your language has higher-order state
- i.e. pointers or closures - then you cannot do that (except as a
whole program transformation).

- Andreas
From: Vend
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <695c0852-52ed-4821-ac11-3cebe7b29791@h21g2000yqa.googlegroups.com>
On Jul 11, 2:39 pm, Andreas Rossberg <········@mpi-sws.org> wrote:
> On Jul 11, 1:02 pm, Vend <······@virgilio.it> wrote:
>
>
>
> > > Hindley/Milner style polymorphic type inference - which is what all
> > > those functional languages are based on - does not work for imperative
> > > features. For an imperative language you are forced to use more ad-hoc
> > > and less complete approaches.
>
> > Can you provide an example, please?
>
> The simplest possible example: when you declare
>
>   val x = nil
>
> where nil denotes the empty list, then in case x is immutable,
> inference can assign the polymorphic type
>
>   x : List[A]
>
> If x is mutable, it cannot, because that would be unsound.

You could find all the statements that assign to the variable and
choose a type that satisfies them all; if no such type exists, signal
a static type error.

> Similar
> problems occur with subtyping polymorphism, essentially because
> mutable variables always have to be invariant.

?

> > However, I see no point in making all variables immutable, or even in
> > making variables immutable by default.
>
> The idea is that the language (and its type system) should cleanly
> separate the notions of variable and mutable state. That makes the
> intent clear, allows for more flexible typing, and most importantly,
> avoids accident. That is what functional languages do (all of which
> allow mutable state in one form or the other).

Ok, but I think that the best way to do that is to default to mutable
variables and provide an optional "immutable" ("final", "const",
etc.) keyword.

The F# (Ocaml?) way of defaulting to immutable and providing an
optional "mutable" keyword is acceptable.

The Haskell way of resorting to state monads for simple sequential
computations seems to me like reinventing the square imperative wheel.
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <uZSdnXeAK5xAxfjXnZ2dnUVZ8h6dnZ2d@brightview.co.uk>
Vend wrote:
> On Jul 11, 2:39 pm, Andreas Rossberg <········@mpi-sws.org> wrote:
>> If x is mutable, it cannot, because that would be unsound.
> 
> You could find all the statements that assign to the variable and
> choose a type that satisfies them all; if no such type exists, signal
> a static type error.

That is essentially the relaxed value restriction used in OCaml:

# [|[]|];;
- : '_a list array = [|[]|]

It works fine there but cannot escape compilation units and does not sit
well with other approaches to compilation; e.g. F# cannot use it with its
JIT compilation.

>> Similar
>> problems occur with subtyping polymorphism, essentially because
>> mutable variables always have to be invariant.
> 
> ?

I believe Andreas is referring to something like Java's cock-up where they
made arrays covariant, allowing you to cast from an array of Foo up to an
array of Object and then write an Object into one of the elements of what
is actually a Foo array.

>> > However, I see no point in making all variables immutable, or even in
>> > making variables immutable by default.
>>
>> The idea is that the language (and its type system) should cleanly
>> separate the notions of variable and mutable state. That makes the
>> intent clear, allows for more flexible typing, and most importantly,
>> avoids accident. That is what functional languages do (all of which
>> allow mutable state in one form or the other).
> 
> Ok, but I think that the best way to do that is to default to mutable
> variables and provide an optional "immutable" ("final", "const",
> etc.) keyword.
> 
> The F# (Ocaml?) way of defaulting to immutable and providing an
> optional "mutable" keyword is acceptable.

That is F# only but it is a non-trivial language construct (I don't write
about it until well into my books) because you need to understand scope and
copying.

For example, something like:

  let mutable n = 0
  n <- n + 1
  List.map (fun m -> n + m) ms

gives a compiler error and you have to explicitly un-mutable "n":

  let mutable n = 0
  n <- n + 1
  let n = n
  List.map (fun m -> n + m) ms

The problem is that "n" is internally represented as an alloca on the stack
frame of the outer function, so it cannot safely be passed elsewhere
because the stack frame it is in might disappear. OCaml essentially turns
refs into mutables when it knows they stay in scope, e.g. to unbox floats
within function bodies.

> The Haskell way of resorting to state monads for simple sequential
> computations seems to me like reinventing the square imperative wheel.

Imperative will always be essential in the real world. As soon as you enter
into Haskell's world of high-level abstractions far removed from the
machinery that evaluates programs you completely lose control of time. Your
programs might very well be "right" in theory but if they will not
terminate before the end of the universe, who cares?

Haskell is history repeating itself. Lispers once spoke of a
mythical "sufficiently smart compiler" that could take their abstract
dynamic code and make it run at a decent speed. Nothing ever came of it.
Haskell just takes that idea even further in the wrong direction.

ML really hits the sweet spot in providing powerful abstractions with
predictable performance. Even in F#, with a JIT compiler, the performance
is extremely predictable once you've mastered a few simple rules.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Ertugrul Söylemez
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <20090710204252.6c1114c6@tritium.xx>
Vend <······@virgilio.it> wrote:

> > Fortunately that's just your opinion and not the reality.  Even
> > Microsoft has found the value in functional programming.  See
> > F#.  It's scheduled to be packaged with Visual Studio in 2010,
> > AFAIK.  Also many imperative languages are employing more and more
> > functional concepts.
>
> F# is not purely functional.

So...?


> > > What I want is a safe (or has an option to be safe), fast mostly
> > > imperative language that doesn't suck.
> >
> > If you want safety, go for functional programming.
>
> Why?
> Type safety is orthogonal to the functional/imperative distinction.

Type safety is one kind of safety.  The functional paradigm gives you
other kinds of safety, too, without requiring additional language
constructs like 'foreach'.


> >  About speed, most decent functional languages aren't considerably
> > slower than C, but given that you save a lot of development time,
> > you'll get your result much faster.
>
> However, if I understand correctly, purely functional languages
> typically have performance problems.

That statement wasn't restricted to purely functional languages, but
despite Dr. Harrop's usual noise purely functional languages perform
quite well.


> > Also purely functional languages (e.g. Haskell, Clean) and languages
> > with immutable data (e.g. F#) allow clean and concise algorithm
> > implementations, which are still just as fast as imperative variants
> > with explicit reference/memory handling.
>
> This makes no sense, since you can always use mutable variables as if
> they were immutable.

You cannot do so as conveniently.  With immutable data you can make
a function take a tree and return a slightly modified tree without much
programming effort.
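To illustrate with a minimal Haskell sketch (the `Tree` type and `insert` function are ours, not from any post in this thread): inserting into an immutable tree returns a new tree that shares every untouched subtree with the old one, and the old tree remains valid afterwards.

```haskell
-- Minimal binary search tree. 'insert' rebuilds only the path from the
-- root to the insertion point; all other subtrees are shared with the
-- original tree, which stays usable after the call.
data Tree a = Leaf | Node (Tree a) a (Tree a)

insert :: Ord a => a -> Tree a -> Tree a
insert x Leaf = Node Leaf x Leaf
insert x t@(Node l v r)
  | x < v     = Node (insert x l) v r
  | x > v     = Node l v (insert x r)
  | otherwise = t                      -- already present: reuse whole tree

-- In-order traversal, to observe the contents.
toList :: Tree a -> [a]
toList Leaf         = []
toList (Node l v r) = toList l ++ [v] ++ toList r
```

Simulating this with mutable variables requires explicit copying to keep the old tree alive, which is the convenience being pointed at here.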


> What makes functional languages more concise is the availability of
> first-class procedures.

Modern imperative languages have that as well.  Functional programming
is a paradigm, not a tool.


> Lazy evaluation can also improve expressivity, at the cost of
> performance problems.

Lazy evaluation is a semantic property.  What it compiles to is a
different question.  Particularly Haskell as the outrider in non-strict
semantics does quite well at that (with GHC).


> >  Finally with these two classes of languages and additionally Erlang
> > you get concurrency and parallelism almost for free.  Multithreaded
> > programming is a PITA in all imperative languages.
>
> Again, these issues are orthogonal. Erlang's message passing model of
> concurrency can also be used in imperative languages.

Erlang is inherently thread-safe.  The other languages I mentioned get
that property partly through immutable data and partly through excellent
synchronization constructs.


> > > C++ (if you are very good and very careful), Java and OCaml are
> > > decent, but not quite what I want. Maybe D and Ada, although I
> > > don't know about those.
> >
> > Just in case you didn't notice, OCaml is a functional language.
>
> But it is not pure.

So...?  The OP obviously doesn't like functional programming.  That's my
point.


> > However, nobody is "very good" and "very careful".  C++ leaves too
> > many spots open for making mistakes.  D is much better at that, if
> > you insist on imperative programming.
> >
> > All in all you're just showing that you don't have a clue about
> > functional programming.  Before making such unfounded claims, you'd
> > better try a functional language for more than 15 minutes.  If you
> > still think that functional programming sucks, you should provide
> > reasons to support your claims.
>
> He didn't say that functional programming sucks; he said that the
> functional paradigm is useful 25% of the time.

Yes, and he didn't support that statement by anything.


Greets,
Ertugrul.


-- 
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/
From: Vend
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <a6fb41c3-c40e-4f3d-8519-acb2e36e2cfd@g31g2000yqc.googlegroups.com>
On Jul 10, 8:42 pm, Ertugrul Söylemez <····@ertes.de> wrote:
> > F# is not purely functional.
>
> So...?

I think we were discussing purely functional languages.

> > Type safety is orthogonal to the functional/imperative distinction.
>
> Type safety is one kind of safety.  The functional paradigm gives you
> other types of safety, too, which doesn't require additional language
> constructs like 'foreach'.

What kind of safety?

> That statement wasn't restricted to purely functional languages, but
> despite Dr. Harrop's usual noise purely functional languages perform
> quite well.

Do they?
Functional update of a data structure carries both asymptotic and
constant-factor performance penalties over destructive update.

> > > Also purely functional languages (e.g. Haskell, Clean) and languages
> > > with immutable data (e.g. F#) allow clean and concise algorithm
> > > implementations, which are still just as fast as imperative variants
> > > with explicit reference/memory handling.
>
> > This makes no sense, since you can always use mutable variables as if
> > they were immutable.
>
> You cannot do so as conveniently.

Why not? You just have to assign each variable only once.
Language-level immutability can be useful as an optional constraint
for automatic program verification, but I don't see the point of
forcing it on all variables as purely functional languages do.

>  With immutable data you can make
> a function take a tree and return a slightly modified tree without much
> programming effort.

Yes, but it doesn't have to be whole-program, language-enforced
immutability.
The Java 'final' keyword suffices for that task.

> > What makes functional languages more concise is the availability of
> > first-class procedures.
>
> Modern imperative languages have that as well.  Functional programming
> is a paradigm, not a tool.

Right. As a paradigm, functional programming can be useful.
As a language constraint, like in Haskell and Clean, it is not.

> > Lazy evaluation can also improve expressivity, at the cost of
> > performance problems.
>
> Lazy evaluation is a semantic property.  What it compiles to is a
> different question.

Time and particularly space complexity are also semantic properties,
and they are affected by lazy evaluation.

>  Particularly Haskell as the outrider in non-strict
> semantics does quite well at that (with GHC).

I've read that Haskell has unpredictable space complexity, which
depends on the inner workings of the compiler.
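As a hedged illustration of that point (a textbook example, not one taken from this thread): the classic case is `foldl`, where lazy accumulation builds a linear-size chain of thunks, while the strict `foldl'` runs in constant space. Whether GHC's optimizer rescues the lazy version depends on compiler internals, which is exactly the unpredictability being described.

```haskell
import Data.List (foldl')

-- Lazy foldl: the accumulator becomes a growing chain of unevaluated
-- (+) thunks, so memory use is proportional to the list length.
lazySum :: [Int] -> Int
lazySum = foldl (+) 0

-- foldl' forces the accumulator at every step and runs in constant space.
strictSum :: [Int] -> Int
strictSum = foldl' (+) 0
```

Both compute the same result; only the space behaviour differs, and at -O GHC's strictness analysis can sometimes make the two coincide.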

> > > Just in case you didn't notice, OCaml is a functional language.
>
> > But it is not pure.
>
> So...?  The OP obviously doesn't like functional programming.  That's my
> point.

No.
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <x56dnXkUHeZoTcXXnZ2dnUVZ8sqdnZ2d@brightview.co.uk>
Vend wrote:
> On Jul 10, 8:42 pm, Ertugrul Söylemez <····@ertes.de> wrote:
>> > F# is not purely functional.
>> That statement wasn't restricted to purely functional languages, but
>> despite Dr. Harrop's usual noise purely functional languages perform
>> quite well.
> 
> Do they?
> Functional update of a data structure carries both asymptotic and
> constant-factor performance penalties over destructive update.

In theory, you can always fall back on monads to let you write imperative
code in a purely functional language.

In practice, the infrastructure is not there. Hence filling a hash table in
Haskell is 30x slower than in F#.

>> Particularly Haskell as the outrider in non-strict
>> semantics does quite well at that (with GHC).
> 
> I've read that Haskell has unpredictable space complexity, which
> depends on the inner workings of the compiler.

Look at Ertugrul's Haskell implementation of BWT in this thread. Aside from
being slow, it consumes asymptotically more memory than expected.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <Vpydney6j6HEUsXXnZ2dnUVZ8s9i4p2d@brightview.co.uk>
Ertugrul Söylemez wrote:
> Vend <······@virgilio.it> wrote:
>> However, if I understand correctly, purely functional languages
>> typically have performance problems.
> 
> That statement wasn't restricted to purely functional languages, but
> despite Dr. Harrop's usual noise purely functional languages perform
> quite well.

Your fastest BWT in Haskell is still over 100x slower than C.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Slobodan Blazeski
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <6dc9ed3c-c22c-465b-a5f4-c498f08ca7a1@b14g2000yqd.googlegroups.com>
On Jul 10, 6:40 pm, Ertugrul Söylemez <····@ertes.de> wrote:
> fft1976 <·······@gmail.com> wrote:

> > But because most popular languages are imperative, people are whining
> > about those 1/4 cases and some disturbed individuals even think
> > languages should be pure-functional.
>
> What's wrong with purely functional languages?
They suck when the problem is not purely functional.

Slobodan
From: Raffael Cavallaro
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h37vph$2ca$1@news.eternal-september.org>
On 2009-07-10 12:40:49 -0400, Ertugrul Söylemez <··@ertes.de> said:

> What's wrong with purely functional languages?

The "purely" part. Some solutions are naturally expressed in a pure 
functional manner. Others are not. The programmer should have a choice.

Moreover, there's abundant linguistic and cognitive scientific evidence 
that human beings naturally conceptualize the world in terms of 
stateful entities and their mutation, not functionally, so imperative 
languages fit our natural cognitive models better. This suggests that 
DSLs will often include an imperative component in their surface syntax 
at the very least.


-- 
Raffael Cavallaro
From: Larry D'Anna
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h382rk$bhd$1@news.eternal-september.org>
>> What's wrong with purely functional languages?
>
> The "purely" part. Some solutions are naturally expressed in a pure 
> functional manner. Others are not. The programmer should have a choice.

Uh, in Haskell you absolutely do have a choice.  Haskell gives the programmer
access to all the non-functional things you want: mutable cells, mutable arrays,
foreign functions, IO, etc.  You just have to use monads to get to it all.  You
can take any imperative algorithm you like and translate it into Haskell in a
totally straightforward way.  
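For what that translation can look like, here is a hedged sketch (the function and names are ours, not Larry's): an in-place array reversal, written with destructive updates inside the ST monad but exposed as an ordinary pure function.

```haskell
import Control.Monad (forM_)
import Data.Array.ST (newListArray, readArray, writeArray, runSTUArray)
import Data.Array.Unboxed (elems)

-- Load the list into a mutable unboxed array, swap elements in place
-- imperative-style, then freeze the array. runSTUArray guarantees the
-- mutation cannot be observed from outside, so the function stays pure.
reverseInPlace :: [Int] -> [Int]
reverseInPlace xs = elems $ runSTUArray $ do
  let n = length xs
  arr <- newListArray (0, n - 1) xs
  forM_ [0 .. n `div` 2 - 1] $ \i -> do
    a <- readArray arr i
    b <- readArray arr (n - 1 - i)
    writeArray arr i b
    writeArray arr (n - 1 - i) a
  return arr
```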


        --larry
From: Raffael Cavallaro
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h387ao$rpg$1@news.eternal-september.org>
On 2009-07-10 14:55:48 -0400, Larry D'Anna <·····@elder-gods.org> said:

> You just have to use monads to get to it all.  You
> can take any imperative algorithm you like and translate it into Haskell in a
> totally straightforward way.

monads. straightforward. <= contradiction in terms


-- 
Raffael Cavallaro
From: Larry D'Anna
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h38aml$655$1@news.eternal-september.org>
>> You just have to use monads to get to it all.  You
>> can take any imperative algorithm you like and translate it into Haskell in a
>> totally straightforward way.
>
> monads. straightforward. <= contradiction in terms

They're a bit difficult to learn, but once you understand them they're not hard
to use.


   --larry
From: Raffael Cavallaro
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h38g5v$pr9$1@news.eternal-september.org>
On 2009-07-10 17:09:41 -0400, Larry D'Anna <·····@elder-gods.org> said:

> They're a bit difficult to learn, but once you understand them they're not hard
> to use.

The point here is that it is completely unnecessary cognitive overhead. 
If, semantically, I mean to do mutation of state, I should be able to 
just mutate state in a direct fashion, not have to use a state monad.

Having said that, it may (or may not) be desirable to *implement* a 
syntax for mutation in terms of monads, but at the level of surface 
syntax it should just look like ordinary mutation.

-- 
Raffael Cavallaro
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <WrWdnXsFc5y9WsrXnZ2dneKdnZxi4p2d@brightview.co.uk>
Raffael Cavallaro wrote:
> On 2009-07-10 17:09:41 -0400, Larry D'Anna <·····@elder-gods.org> said:
>> They're a bit difficult to learn, but once you understand them they're
>> not hard to use.
> 
> The point here is that it is completely unnecessary cognitive overhead.

All HLL features are "unnecessary cognitive overhead".

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Raffael Cavallaro
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h393nj$mij$1@news.eternal-september.org>
On 2009-07-10 20:05:50 -0400, the amphibian formerly known as toad said:

> Raffael Cavallaro wrote:
>> On 2009-07-10 17:09:41 -0400, Larry D'Anna <·····@elder-gods.org> said:
>>> They're a bit difficult to learn, but once you understand them they're
>>> not hard to use.
>> 
>> The point here is that it is completely unnecessary cognitive overhead.
> 
> All HLL features are "unnecessary cognitive overhead".

A feature is unnecessary when it gets in the way of a direct mapping to 
the problem domain. High or low level has nothing to do with it. It's a 
matter of appropriateness to the domain. When the domain is about 
mutating state, then monads are inappropriate, not because they are 
"high level" but because they get in the way of a direct mapping from 
the problem domain to code.

-- 
Raffael Cavallaro
From: Ertugrul Söylemez
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <20090711160811.225a9dc3@tritium.xx>
Raffael Cavallaro <················@pas.espam.s.il.vous.plait.mac.com> wrote:

> >>> They're a bit difficult to learn, but once you understand them
> >>> they're not hard to use.
> >>
> >> The point here is that it is completely unnecessary cognitive
> >> overhead.
> >
> > All HLL features are "unnecessary cognitive overhead".
>
> A feature is unnecessary when it gets in the way of a direct mapping
> to the problem domain. High or low level has nothing to do with
> it. It's a matter of appropriateness to the domain. When the
> domain is about mutating state, then monads are inappropriate, not
> because they are "high level" but because they get in the way of a
> direct mapping from the problem domain to code.

State monads _are_ a direct mapping.  Monads allow you to create small
problem-specific languages.  The problem in this case is state and state
monads encode it in a concise and elegant way, particularly because you
don't lose referential transparency along the way.
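Concretely, the 'get'/'put' interface described here looks like this with mtl's Control.Monad.State (a minimal illustrative sketch; the library functions are real, the `tick` example itself is hypothetical):

```haskell
import Control.Monad (replicateM)
import Control.Monad.State

-- 'tick' returns the current count and increments the state.
-- Nothing is destructively updated; the state is threaded through,
-- so referential transparency is preserved.
tick :: State Int Int
tick = do
  n <- get       -- read the threaded state
  put (n + 1)    -- hand back a new state
  return n

main :: IO ()
main = print (runState (replicateM 3 tick) 0)   -- prints ([0,1,2],3)
```

Running three ticks from an initial state of 0 yields the results `[0,1,2]` together with the final state `3`.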


Greets,
Ertugrul.


-- 
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/
From: Raffael Cavallaro
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h3aat6$rub$1@news.eternal-september.org>
On 2009-07-11 10:08:11 -0400, Ertugrul Söylemez <··@ertes.de> said:

> State monads _are_ a direct mapping.

Introducing some intermediation, such as monads, is, by definition, not 
*direct*.

-- 
Raffael Cavallaro
From: Paul Rubin
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <7xvdlz9lkl.fsf@ruckus.brouhaha.com>
Raffael Cavallaro <················@pas.espam.s.il.vous.plait.mac.com> writes:
> Introducing some intermediation, such as monads, is, by definition,
> not *direct*.

That sentence would be equally true if you replaced the word "monads"
with "keystrokes". 
From: Raffael Cavallaro
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h3bmcg$keh$1@news.eternal-september.org>
On 2009-07-11 13:27:38 -0400, Paul Rubin <·············@NOSPAM.invalid> said:

> Raffael Cavallaro <················@pas.espam.s.il.vous.plait.mac.com> writes:
>> Introducing some intermediation, such as monads, is, by definition,
>> not *direct*.
> 
> That sentence would be equally true if you replaced the word "monads"
> with "keystrokes".

Only in a world where programs can be conveyed to the machine 
telepathically. But then I'm sure you'd chime in about replacing 
"keystrokes" with "thoughts."

It's kind of sad when people jump on the phrasing rather than trying to 
make actual sense. We're clearly talking about *unnecessary* additional 
means to accomplish a task, not the need to use keystrokes to type text.

I'll just say: "Introducing some *unnecessary* intermediation, such as 
monads, is, by definition, not *direct*." Is your inner pedant happy 
now?

-- 
Raffael Cavallaro
From: Paul Rubin
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <7xhbxia5dq.fsf@ruckus.brouhaha.com>
Raffael Cavallaro <················@pas.espam.s.il.vous.plait.mac.com> writes:
> I'll just say: "Introducing some *unnecessary* intermediation, such as
> monads, is, by definition, not *direct*." Is your inner pedant happy now?

I could say that traditional effectful computation, in whatever
language, is already done in a monad.  Haskell just makes this
explicit.
From: Raffael Cavallaro
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h3eebk$uh9$1@news.eternal-september.org>
On 2009-07-12 00:32:01 -0400, Paul Rubin <·············@NOSPAM.invalid> said:

> I could say that traditional effectful computation, in whatever
> language, is already done in a monad.

You could certainly say it, but it wouldn't be true. Your claim is 
rather like saying that everyone who has a traditional Thanksgiving 
dinner really craves Tofurky. Haskell is forced to make "effectful 
computation" an explicit monad because, having denied itself side 
effects, it had to add them back to the language so it could do useful 
things. In a "traditional effectful" language, these obviously useful 
features were never eschewed in the first place, so there was no need 
to jump through category theory hoops to get them back.


That pure functional languages conceptualize side effects as monads is 
just a reflection of their narrow view of computation, not a reflection 
of the inherent nature of side effects. It is perfectly legitimate (in 
fact, it is the human cognitive norm) to conceptualize the world as 
inherently stateful and constantly mutating. You only reach for monads 
when you insist that the world (or all computation) is a set of 
functions returning values.



-- 
Raffael Cavallaro
From: Andreas Rossberg
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <614f85c1-2262-4e0b-9543-f88940fc09d8@n30g2000vba.googlegroups.com>
On Jul 13, 6:48 am, Raffael Cavallaro wrote:
>
> That pure functional languages conceptualize side effects as monads is
> just a reflection of their narrow view of computation, not a reflection
> of the inherent nature of side effects. It is perfectly legitimate (in
> fact, it is the human cognitive norm) to conceptualize the world as
> inherently stateful and constantly mutating.

People used to conceptualize numbers by using their fingers. Do you
think modern math would have evolved that way? Is set theory or are
the Peano axioms a narrow view of numbers because they eschew fingers?

> You only reach for monads
> when you insist that the world (or all computation) is a set of
> functions returning values.

Try talking to a quantum physicist about the world not being a set of
functions. ;-)
From: Raffael Cavallaro
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h3fbng$8uo$1@news.eternal-september.org>
On 2009-07-13 03:21:37 -0400, Andreas Rossberg <········@mpi-sws.org> said:

> People used to conceptualize numbers by using their fingers.

This is false. Many higher animals have an innate sense of number, 
including some who don't even have digits. You're simply repeating a 
folk fallacy here.

> Do you
> think modern math would have evolved that way? Is set theory or are
> the Peano axioms a narrow view of numbers because they eschew fingers?
> 
>> You only reach for monads
>> when you insist that the world (or all computation) is a set of
>> functions returning values.
> 
> Try talking to a quantum physicist about the world not being a set of
> functions. ;-)

Which is tantamount to saying that to be a programmer, one must fully 
understand quantum physics. Thank you for providing the reductio ad 
absurdum of your own argument for me.

The issue isn't whether the universe is "really" functional or not. The 
issue is that programming languages are *not* an interface between 
hardware and the intrinsic nature of the universe. Programming 
languages are an interface between hardware and human cognitive 
abilities. As such they should leverage native human cognitive 
abilities, not physicists' most recent, and ever-changing estimate of 
the intrinsic nature of the universe, nor any other paradigm that is at 
odds with the way people most easily conceptualize a particular problem 
domain.

IOW, don't force people to change the way they think best in order to 
program; mold programming languages to fit the way people think best.


-- 
Raffael Cavallaro
From: Andreas Rossberg
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <f0e6f28e-545b-4dd3-b0a8-c4ec1afad173@j21g2000vbn.googlegroups.com>
On Jul 13, 3:04 pm, Raffael Cavallaro
<················@pas.espam.s.il.vous.plait.mac.com> wrote:
> > People used to conceptualize numbers by using their fingers.
>
> This is false. Many higher animals have an innate sense of number,
> including some who don't even have digits.

I don't see how that falsifies what I said. But anyway, the point was:
hardware and software are highly complex artefacts -- with a degree of
complexity far beyond what people usually cope with. Appealing to how
these people "normally" "conceptualize" things, or worse, summoning
the mythical "Real World", is useless when it comes to finding the
best ways of dealing with this complexity.

> >> You only reach for monads
> >> when you insist that the world (or all computation) is a set of
> >> functions returning values.
>
> > Try talking to a quantum physicist about the world not being a set of
> > functions. ;-)
>
> Which is tantamount to saying that to be a programmer, one must fully
> understand quantum physics. Thank you for providing the reductio ad
> absurdum of your own argument for me.

It is just saying that you should be careful when drawing conclusions
from what you perceive as "reality". And you didn't quite notice the
smiley.

- Andreas
From: Raffael Cavallaro
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h3fji9$2ja$1@news.eternal-september.org>
On 2009-07-13 10:07:53 -0400, Andreas Rossberg <········@mpi-sws.org> said:

> On Jul 13, 3:04 pm, Raffael Cavallaro
> <················@pas.espam.s.il.vous.plait.mac.com> wrote:
>>> People used to conceptualize numbers by using their fingers.
>> 
>> This is false. Many higher animals have an innate sense of number,
>> including some who don't even have digits.
> 
> I don't see how that falsifies what I said.

Because fingers are a mnemonic device, not how numbers are 
conceptualized. Does using a laptop mean that I conceptualize numbers 
as backlit liquid crystals? Does using arabic numerals mean that I 
conceptualize the number three as two stacked semi-circles open on the 
left? Hardly.

> But anyway, the point was:
> hardware and software are highly complex artefacts -- with a degree of
> complexity far beyond what people usually cope with. Appealing to how
> these people "normally" "conceptualize" things, or worse, summoning
> the mythical "Real World", is useless when it comes to finding the
> best ways of dealing with this complexity.

Again, I'm not the one appealing to notions of the "real world," you 
are. I'm appealing to notions of native cognitive skills. Whether or 
not our native cognitive abilities ultimately correspond to "reality" is 
a question for philosophers[1], and ultimately largely irrelevant to 
leveraging the tools we've been handed by evolution and millennia of 
cultural development.

We shouldn't discard native cognitive abilities merely because one 
paradigm (functional) has proven extremely useful in a particular 
domain (mathematics) and attempt to shoehorn all computation into that 
functional model. *Pure* functional programming falls into the trap of 
thinking that because a paradigm is useful in many circumstances, it 
should be universal. In doing so we discard cognitive abilities that 
human beings are excellent at, and which we've been honing for at least 
tens of thousands of years. Specifically, the ability to keep track of 
the intersecting timelines of numerous, constantly mutating, stateful 
entities.

> 
>>>> You only reach for monads
>>>> when you insist that the world (or all computation) is a set of
>>>> functions returning values.
>> 
>>> Try talking to a quantum physicist about the world not being a set of
>>> functions. ;-)
>> 
>> Which is tantamount to saying that to be a programmer, one must fully
>> understand quantum physics. Thank you for providing the reductio ad
>> absurdum of your own argument for me.
> 
> It is just saying that you should be careful with drawing conclusions
> from what you perceive as "reality". And you didn't quite notice the
> smiley.

Again I'm not the one appealing to some ultimate "reality," you are. I 
make no claims that native human cognitive abilities correspond to some 
ultimate "reality." I merely claim that they are powerful tools that we 
discard to our loss.



[1] Yes, philosophers, not physicists. The claim that our current best 
physical model of the universe corresponds to some underlying ultimate 
reality is a metaphysical claim, not a scientific one. Science itself 
is mute on this point.
-- 
Raffael Cavallaro
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <E42dne3o4eFNxsbXnZ2dnUVZ8oFi4p2d@brightview.co.uk>
Andreas Rossberg wrote:
> On Jul 13, 6:48 am, Raffael Cavallaro wrote:
>>
>> That pure functional languages conceptualize side effects as monads is
>> just a reflection of their narrow view of computation, not a reflection
>> of the inherent nature of side effects. It is perfectly legitimate (in
>> fact, it is the human cognitive norm) to conceptualize the world as
>> inherently stateful and constantly mutating.
> 
> People used to conceptualize numbers by using their fingers. Do you
> think modern math would have evolved that way? Is set theory or are
> the Peano axioms a narrow view of numbers because they eschew fingers?

Side effects are essential in programming but fingers are not in
mathematics. So how is that analogous?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Paul Rubin
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <7xtz1gb41w.fsf@ruckus.brouhaha.com>
Raffael Cavallaro <················@pas.espam.s.il.vous.plait.mac.com> writes:
> > I could say that traditional effectful computation, in whatever
> > language, is already done in a monad.
> 
> You could certainly say it, but it wouldn't be true.

You can certainly say it is false, but it is true anyway ;-).
Take a look at Harpy for example--an embedded assembler in which
every machine instruction is a monad action.

> It is perfectly legitimate (in fact, it is the human cognitive norm)
> to conceptualize the world as inherently stateful and constantly
> mutating.

The human cognitive norm is full of errors and inconsistencies (look
at the problems of concurrent programming for example).  Writing sound
code is most easily done using models with sound mathematical
properties.  Human cognition doesn't have those properties, but monads
do just fine.

Have you actually done any programming in Haskell?  Like most
languages, Haskell has many characteristics that are inconvenient or
confusing.  Modelling effectful computations as monad actions is fairly
far down on the list.
From: Ertugrul Söylemez
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <20090713221210.3bdc471c@tritium.xx>
Raffael Cavallaro <················@pas.espam.s.il.vous.plait.mac.com> wrote:

> > I could say that traditional effectful computation, in whatever
> > language, is already done in a monad.
>
> You could certainly say it, but it wouldn't be true. Your claim is
> rather like saying that everyone who has a traditional Thanksgiving
> dinner really craves Tofurky. Haskell is forced to make "effectful
> computation" an explicit monad because, having denied itself side
> effects, it had to add them back to the language so it could do useful
> things. In a "traditional effectful" language, these obviously useful
> features were never eschewed in the first place, so there was no need
> to jump through category theory hoops to get them back.
>
> That pure functional languages conceptualize side effects as monads is
> just a reflection of their narrow view of computation, not a reflection
> of the inherent nature of side effects. It is perfectly legitimate (in
> fact, it is the human cognitive norm) to conceptualize the world as
> inherently stateful and constantly mutating. You only reach for monads
> when you insist that the world (or all computation) is a set of
> functions returning values.

People love to use metaphors in such discussions.  I doubt that it's
useful in any way.  Let's discuss facts instead.

Firstly, monads are not an indirection.  Expressing side effects or state
monadically could hardly be any more direct.  You have state, you can
'get' it, use it, 'modify' it and/or 'put' it back.  The same with IO,
just that you have no 'get' function, because the state of the universe
has no distinct value in memory.  Instead you have a bunch of functions
like 'putStr', 'hGetLine', 'connect', etc., which modify the universe
state.

What's wrong with this?  It's not more to type than in C, in fact it's
less to type in most cases and even more intuitive to read.  What could
be more intuitive than

  forever (putStrLn "Hello world!")

for an endless loop printing a line repeatedly?  It's just that people
who don't like monads either never used them (properly) or never made
efforts to understand them.  Evidence for this comes from the frequent
but wrong claim that monads are there to "bring back" effectful
computation.  Monads are universal tools for many things and show their
true power, as soon as you start combining them through monad
transformers.
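The transformer combination mentioned here can be sketched with mtl's StateT layered over IO (an illustrative example of mine, not code from the thread; `step` is a hypothetical name):

```haskell
import Control.Monad.State

-- A counter (StateT Int) layered over IO: the transformer combines
-- state-passing with real side effects instead of hiding either.
step :: String -> StateT Int IO ()
step msg = do
  modify (+ 1)                              -- bump the counter
  n <- get
  lift (putStrLn (show n ++ ": " ++ msg))   -- run IO underneath

main :: IO ()
main = do
  final <- execStateT (mapM_ step ["get", "modify", "put"]) 0
  print final                               -- prints 3
```

Each `step` both updates the threaded counter and prints a line; the types make it explicit that both effects are in play.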

However, this post is probably pointless anyway.  Most people on Usenet
are totally obstinate.  I recommend you try out Haskell for more than
an hour.  After all I was a convinced C programmer a few years ago, so
it's not that I'm so fond of Haskell because it's the only thing I know.
And that's about all I can say at this point.


Greets,
Ertugrul.


-- 
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <WrWdnXoFc5zaWsrXnZ2dneKdnZxi4p2d@brightview.co.uk>
Larry D'Anna wrote:
>>> You just have to use monads to get to it all.  You
>>> can take any imperative algorithm you like and translate it into Haskell
>>> in a totally straightforward way.
>>
>> monads. straightforward. <= contradiction in terms
> 
> They're a bit difficult to learn, but once you understand them they're not
> hard to use.

But do the benefits outweigh the costs?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Nobody
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <pan.2009.07.11.16.51.17.735000@nowhere.com>
On Fri, 10 Jul 2009 21:09:41 +0000, Larry D'Anna wrote:

>>> You just have to use monads to get to it all.  You
>>> can take any imperative algorithm you like and translate it into
>>> Haskell in a totally straightforward way.
>>
>> monads. straightforward. <= contradiction in terms
> 
> They're a bit difficult to learn, but once you understand them they're
> not hard to use.

They're not difficult to learn in the slightest. You could teach someone
how to use Haskell's "IO" without ever mentioning either the word "monad"
or the concepts behind it.

What imperative programmers seem to dislike is that monads take the
fundamental nature of imperative languages and turn it into a "thing".
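The point about teaching IO without the word "monad" can be illustrated with a small sketch (the `greeting` helper is hypothetical, introduced here for the example):

```haskell
import Data.Char (toUpper)

-- Pure part: build the message as an ordinary function.
greeting :: String -> String
greeting name = "Hello, " ++ map toUpper name ++ "!"

-- The IO part reads like any imperative script; a beginner can use
-- 'do', '<-' and 'putStrLn' without hearing the word "monad" once.
main :: IO ()
main = do
  putStrLn (greeting "world")   -- prints Hello, WORLD!
  putStrLn "Goodbye."
```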
From: Raffael Cavallaro
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h3bmv4$lol$1@news.eternal-september.org>
On 2009-07-11 12:51:17 -0400, Nobody <······@nowhere.com> said:

> They're not difficult to learn in the slightest. You could teach someone
> how to use Haskell's "IO" without ever mentioning either the word "monad"
> or the concepts behind it.

But they wouldn't have "learned" monads in that case at all; they'd 
just be doing voodoo programming (i.e., going through the prescribed 
motions without understanding what it is they were doing).

> 
> What imperative programmers seem to dislike is that monads take the
> fundamental nature of imperative languages and turn it into a "thing".

No, what programmers who haven't drunk the *pure*-functional kool aid 
dislike about them is that they are just another form of unnecessary 
boiler plate that gets in the way of simply stating directly what the 
programmer wants to do. (That they're a pretentious form of boiler 
plate doesn't help their popularity much either. "Our language doesn't 
deal in dirty, impure mutation, no, no! We use category theory 
instead!")


-- 
Raffael Cavallaro
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <o6ydnTehiqpu-8TXnZ2dnUVZ8oCdnZ2d@brightview.co.uk>
Raffael Cavallaro wrote:
> On 2009-07-11 12:51:17 -0400, Nobody <······@nowhere.com> said:
>> What imperative programmers seem to dislike is that monads take the
>> fundamental nature of imperative languages and turn it into a "thing".
> 
> No, what programmers who haven't drunk the *pure*-functional kool aid
> dislike about them is that they are just another form of unnecessary
> boiler plate that gets in the way of simply stating directly what the
> programmer wants to do.

Indeed. We saw a nice example of that elsewhere on this very thread. Here is
a simple OCaml program that fills in an array by mutating elements in O(1):

let bench n =
  let t = Sys.time() in
  ignore(Array.init n (fun i -> [i]));
  Printf.printf "%gs\n%!" ((Sys.time() -. t) *. 1e6 /. float n)

let () =
  List.iter bench [10000; 100000; 1000000; 10000000]

Here is the Haskell equivalent that fills in an STArray by mutating elements
in O(n):

import CPUTime
import System.IO
import Prelude
import Control.Monad.ST
import Data.Array.ST
import Data.Array

fill a i n = if i > n
                then return ()
                else do writeArray a i [i]
                        fill a (i+1) n

count n = do a <- newArray (1,n) [0] :: ST s (STArray s Int [Int])
             fill a 1 n
             return ()

bench n = do t1 <- getCPUTime
             stToIO $ count n
             t2 <- getCPUTime
             print $ round $ toRational(t2 - t1) / toRational n

main = do bench 10000
          bench 100000
          bench 1000000
          bench 10000000

The presence of monads raises many immediate questions:

Why is this an STArray instead of an Array or STUArray or IOArray or
IOUArray?

Why must the programmer manually annotate the following type:

  ST s (STArray s Int [Int])

when other languages with type inference (e.g. SML, OCaml, F#) do not
require such type annotations?

Why is "stToIO" needed?

The Haskell is far more verbose than the OCaml and uses many functions. Is
this because each function requires a separate "do" because the imperative
sections of code must be wrapped individually in monadic "do" notation?

How could I refactor this to use a more conventional "time" combinator?

The answers to these questions are not at all obvious to me or, indeed, to
the vast majority of Haskell programmers who tinker with it without
realising important facts, such as that mutating an element in an array can
be O(n).

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Johannes Laire
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <e9c4fe18-aae1-4913-96b9-5acdbf1e4752@k26g2000vbp.googlegroups.com>
On Jul 12, 8:40 am, Jon Harrop <····@ffconsultancy.com> wrote:
> Raffael Cavallaro wrote:
> > On 2009-07-11 12:51:17 -0400, Nobody <······@nowhere.com> said:
> >> What imperative programmers seem to dislike is that monads take the
> >> fundamental nature of imperative languages and turn it into a "thing".
>
> > No, what programmers who haven't drunk the *pure*-functional kool aid
> > dislike about them is that they are just another form of unnecessary
> > boiler plate that gets in the way of simply stating directly what the
> > programmer wants to do.

They don't "get in the way" nearly as much as you think. But since
you know they are a bad idea (even without really understanding them),
I guess you will not spend your time learning about them (and thus will
never understand them), and will continue complaining about them
instead.

> Indeed. We saw a nice example of that elsewhere on this very thread. Here is
> a simple OCaml program that fills in an array by mutating elements in O(1):
>
> let bench n =
>   let t = Sys.time() in
>   ignore(Array.init n (fun i -> [i]));
>   Printf.printf "%gs\n%!" ((Sys.time() -. t) *. 1e6 /. float n)
>
> let () =
>   List.iter bench [10000; 100000; 1000000; 10000000]
>
> Here is the Haskell equivalent that fills in an STArray by mutating elements
> in O(n):
[snip]

Here's another Haskell version. The benchmarking part is *untested*,
because I don't have StrictBench installed and am lazy, but it should
work. If not, the problem is some small typo; the general idea is
correct.

import Data.Array
import Data.Array.ST
import Test.StrictBench

mkArray :: Int -> Array Int [Int]
mkArray size = runSTArray $ do
   arr <- newArray (0, size - 1) []
   mapM_ (\i -> writeArray arr i [i]) [0 .. size - 1]
   return arr

main :: IO ()
main = mapM_ (bench . mkArray) [10^4, 10^5, 10^6, 10^7]

> The presence of monads raises many immediate questions:
>
> Why is this an STArray instead of an Array or STUArray or IOArray or
> IOUArray?

The U's stand for Unboxed, and you obviously don't want to talk
about them.

Array is immutable, STArray is a mutable array in the ST monad and
IOArray a mutable array in the IO monad. Using IO can be sometimes
convenient and easier for beginners, but most seem to prefer ST.

> Why must the programmer manually annotate the following type:
>
>   ST s (STArray s Int [Int])
>
> when other languages with type inference (e.g. SML, OCaml, F#) do not
> require such type annotations?

This has nothing to do with monads. It's because newArray is
polymorphic, and if you never use the array for anything, it's just not
possible to know which type of array you want.

> Why is "stToIO" needed?

Because the code is bad.

> The Haskell is far more verbose than the OCaml and uses many functions. Is
> this because each function requires a separate "do" because the imperative
> sections of code must be wrapped individually in monadic "do" notation?

No. The "do" notation is just syntactic sugar and is as composable
as all other kinds of expressions.

> How could I refactor this to use a more conventional "time" combinator?

Depends on the exact combinator. A function like mkArray above
is probably good.

> The answers to these questions are not at all obvious to me or, indeed, to
> the vast majority of Haskell programmers who tinker with it without
> realising important facts like mutating an element in an array in O(n).

To not sound like a troll, I suggest not omitting the crucial fact
that it's O(n) for *boxed* arrays only. I have used mutable unboxed
arrays many times in Haskell (usually integers), but have never needed
boxed ones. I'm sure they have their uses and the bug in GHC is an
important one, but you *can* use arrays with O(1) mutation just as
you can in C.
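The O(1)-mutation claim for unboxed arrays can be sketched like this (an illustrative example of mine; `squares` is a hypothetical name, the array API is the standard `array` package):

```haskell
import Data.Array.ST (newArray, runSTUArray, writeArray)
import Data.Array.Unboxed (UArray, elems)

-- Fill an unboxed mutable array inside ST; each write is an O(1) store,
-- and runSTUArray freezes the result into an immutable UArray.
squares :: Int -> UArray Int Int
squares n = runSTUArray $ do
  arr <- newArray (0, n - 1) 0                      -- allocate, zeroed
  mapM_ (\i -> writeArray arr i (i * i)) [0 .. n - 1]
  return arr

main :: IO ()
main = print (elems (squares 5))   -- prints [0,1,4,9,16]
```

Because the element type is unboxed (Int), `runSTUArray` needs no manual type annotation on `newArray` here: the signature of `squares` pins it down.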

--
Johannes Laire
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <H8SdnYXjWbRcvcfXnZ2dnUVZ8kydnZ2d@brightview.co.uk>
Johannes Laire wrote:
> Here's another Haskell version. The benchmarking part is *untested*,
> because I don't have StrictBench installed and am lazy, but it should
> work. If not, the problem is some small typo; the general idea is
> correct.
> 
> import Data.Array
> import Data.Array.ST
> import Test.StrictBench
> 
> mkArray :: Int -> Array Int [Int]
> mkArray size = runSTArray $ do
>    arr <- newArray (0, size - 1) []
>    mapM_ (\i -> writeArray arr i [i]) [0 .. size - 1]
>    return arr
> 
> main :: IO ()
> main = mapM_ (bench . mkArray) [10^4, 10^5, 10^6, 10^7]

Thanks. That looks much nicer.

>> The presence of monads raises many immediate questions:
>>
>> Why is this an STArray instead of an Array or STUArray or IOArray or
>> IOUArray?
> 
> The U's stand for Unboxed, and you obviously don't want to talk
> about them.

Given the information you provide below, surely it is essential to talk
about them?

> To not sound like a troll, I suggest not omitting the crucial fact
> that it's O(n) for *boxed* arrays only.

What exactly does that mean? Will an STArray of Int be unboxed and O(1)
write? Can an STUArray of Lists exist?

> I have used mutable unboxed 
> arrays many times in Haskell (usually integers), but have never needed
> boxed ones. I'm sure they have their uses...

Most fast dictionary implementations, such as hash tables, are built upon
arrays of boxed types.

> and the bug in GHC is an 
> important one, but you *can* use arrays with O(1) mutation just as
> you can in C.

In C I can write a list into an array in O(1). Is that possible with
Haskell?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Raffael Cavallaro
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h3efdf$178$1@news.eternal-september.org>
On 2009-07-12 06:36:23 -0400, Johannes Laire <··············@gmail.com> said:

> On Jul 12, 8:40 am, Jon Harrop <····@ffconsultancy.com> wrote:
>> Raffael Cavallaro wrote:
>>> On 2009-07-11 12:51:17 -0400, Nobody <······@nowhere.com> said:
>>>> What imperative programmers seem to dislike is that monads take the
>>>> fundamental nature of imperative languages and turn it into a "thing".
>> 
>>> No, what programmers who haven't drunk the *pure*-functional kool aid
>>> dislike about them is that they are just another form of unnecessary
>>> boiler plate that gets in the way of simply stating directly what the
>>> programmer wants to do.
> 
> They don't "get in the way" nearly as much as you think. But since
> you know they are a bad idea (even without really understanding them),
> I guess you will not spend your time learning about them (and thus will
> never understand them), and will continue complaining about them
> instead.

Jon didn't write the part you're responding to here, I did.

It's condescending and inaccurate to claim that I don't understand 
monads. I just disagree with you regarding their utility. I think that 
programming idioms should leverage human cognitive strengths, not 
require the programmer to completely recast the world according to the 
paradigm of the programming language.

The benefits of functional programming are well known, particularly in 
this forum. But it does not follow that because a thing is beneficial 
it should be applied universally. In particular, one can tell when 
a metaphor is being overextended, when a paradigm is pushed beyond its 
utility, when it requires that practitioners discard their natural 
cognitive strengths and wholly reconceptualize the world. *Pure* 
functional programming is such an overextended metaphor, and monads are 
symptomatic.
-- 
Raffael Cavallaro
From: Larry D'Anna
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h3gacj$297$2@news.eternal-september.org>
On 2009-07-12, Jon Harrop <···@ffconsultancy.com> wrote:
> Raffael Cavallaro wrote:
>> On 2009-07-11 12:51:17 -0400, Nobody <······@nowhere.com> said:
>>> What imperative programmers seem to dislike is that monads take the
>>> fundamental nature of imperative languages and turn it into a "thing".
>> 
>> No, what programmers who haven't drunk the *pure*-functional kool aid
>> dislike about them is that they are just another form of unnecessary
>> boiler plate that gets in the way of simply stating directly what the
>> programmer wants to do.
>
> Indeed. We saw a nice example of that elsewhere on this very thread. Here is
> a simple OCaml program that fills in an array by mutating elements in O(1):
>
> let bench n =
>   let t = Sys.time() in
>   ignore(Array.init n (fun i -> [i]));
>   Printf.printf "%gs\n%!" ((Sys.time() -. t) *. 1e6 /. float n)
>
> let () =
>   List.iter bench [10000; 100000; 1000000; 10000000]
>
> Here is the Haskell equivalent that fills in an STArray by mutating elements
> in O(n):

[ large, inelegant program deleted ]

It's really not fair to take my Haskell code as an example of Haskell's
verbosity.  That's probably the longest Haskell program I've written.  I'm no
Haskell expert.  

            --larry
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <0YmdnY1lLutab8bXnZ2dnUVZ8lhi4p2d@brightview.co.uk>
Larry D'Anna wrote:
> It's really not fair to take my Haskell code as an example of Haskell's
> verbosity.  That's probably the longest Haskell program I've written.  I'm
> no Haskell expert.

Sorry, I retract my verbosity comparison.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Larry D'Anna
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h3ga2j$297$1@news.eternal-september.org>
On 2009-07-12, Raffael Cavallaro wrote:

> No, what programmers who haven't drunk the *pure*-functional kool aid 
> dislike about them is that they are just another form of unnecessary 
> boiler plate that gets in the way of simply stating directly what the 
> programmer wants to do. 

Do you really imagine monads serve no useful purpose whatsoever, and that Simon
Peyton-Jones is just too stupid to see how worthless they are?  Or is it
possible that there's something you've missed, that there are certain things
monads enable that aren't possible without them?

Let me give you a few hints:

Let's say you're writing a search engine.  What type of text file is easier to
handle: plain text, or PostScript?

Which Java function would you rather call:
   public void fooBar() {...
OR public void fooBar() throws Exception {...

All else being equal, which Haskell function would you rather use:

    foo :: a -> b
OR  bar :: a -> IO b

Do you get it yet?

http://en.wikipedia.org/wiki/Principle_of_Least_Power

Monads let the programmer specify which functions are pure functional, and which
ones are not.  In fact, they let the programmer specify with fine granularity
what sort of side effects each function might produce.  If you want "anything
goes" you use IO.  If you want mutable references and arrays you use MArray.  If
you want continuations you can use MonadCont.  And with the magic of typeclasses
and monad transformers, you can mix and match to get a monad that has exactly
the combination of functionality you need and no more.
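
As a minimal sketch of my own (not code from this thread), the three levels of
granularity look like this in Haskell: a pure function, a function that uses
local mutable state via ST yet is pure from the outside, and a function in IO
where anything goes:

```haskell
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Pure: the type promises no side effects at all.
double :: Int -> Int
double x = 2 * x

-- Local mutation only: ST s allows mutable references internally,
-- but runST guarantees the effect cannot leak, so sumTo is pure.
sumTo :: Int -> Int
sumTo n = runST $ do
  acc <- newSTRef 0
  mapM_ (\i -> modifySTRef' acc (+ i)) [1 .. n]
  readSTRef acc

-- Anything goes: the IO in the type advertises arbitrary effects.
greet :: String -> IO ()
greet name = putStrLn ("Hello, " ++ name)
```

A caller can see from the types alone that double and sumTo are safe to use
anywhere, while greet is not.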

You may still say "monads aren't worth the trouble".  But at least now you
won't go around idiotically saying things are worthless without having a clue
what they're for.
From: Madhu
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <m3bpnoxa8h.fsf@moon.robolove.meer.net>
* Larry D'Anna <············@news.eternal-september.org> :
Wrote on Mon, 13 Jul 2009 21:48:03 +0000 (UTC):

| On 2009-07-12, Raffael Cavallaro wrote:
|
|> No, what programmers who haven't drunk the *pure*-functional kool aid 
|> dislike about them is that they are just another form of unnecessary 
|> boiler plate that gets in the way of simply stating directly what the 
|> programmer wants to do. 
|
| Do you really imagine monads serve no useful purpose whatsoever, and
| that Simon Peyton-Jones is just too stupid to see how worthless they
| are?  Or maybe, is it possible that there's something you've missed,
| that there are certain things that monads enable that aren't possible
| without them.

The thing to consider is whether Simon Peyton-Jones is just too clever
in pulling off what he has done, and has been successfully distracting
the developer community from producing serious competition by limiting
"empowering technologies".

--
Madhu
From: Raffael Cavallaro
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h3i510$86k$1@news.eternal-september.org>
On 2009-07-13 17:48:03 -0400, Larry D'Anna <·····@elder-gods.org> said:

> Do you really imagine monads serve no useful purpose whatsoever, and that Simon
> Peyton-Jones is just too stupid to see how worthless they are?  Or 
> maybe, is it
> possible that there's something you've missed, that there are certain things
> that monads enable that aren't possible without them.

Monads can indeed accomplish a goal which cannot be achieved with 
traditional side effects, but that goal (maintaining *pure* functional 
semantics) is not a goal that I and many other programmers consider 
worthwhile given the cognitive overhead.

-- 
Raffael Cavallaro
From: David Formosa (aka ? the Platypus)
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <slrnh5jgej.l51.dformosa@localhost.localdomain>
On Fri, 10 Jul 2009 16:12:08 -0400, Raffael Cavallaro
<················@pas.espam.s.il.vous.plait.mac.com> wrote: 
> On 2009-07-10 14:55:48 -0400, Larry D'Anna <·····@elder-gods.org> said:
>
>> You just have to use monads to get to it all.  You
>> can take any imperative algorithm you like and translate it into Haskell in a
>> totally straightforward way.
>
> monads. straightforward. <= contradiction in terms

The thing is you don't have to understand monads to use them.  If you
just use the do-syntax you can use the monad model as an abstraction
and not worry about the model itself.
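
For instance (a made-up sketch of mine, not code from this thread), an
imperative-looking loop written with do-syntax and an IORef reads much like
its C counterpart, with the >>= plumbing hidden by the compiler:

```haskell
import Data.IORef (newIORef, readIORef, writeIORef)

-- Sum of squares, written imperatively: allocate a mutable cell,
-- loop over 1..n updating it, then read the result back.
sumSquares :: Int -> IO Int
sumSquares n = do
  total <- newIORef 0                      -- int total = 0;
  mapM_ (\i -> do                          -- for (i = 1; i <= n; i++)
            t <- readIORef total
            writeIORef total (t + i * i))  --   total += i * i;
        [1 .. n]
  readIORef total                          -- return total;
```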
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <3dWdnfIEyfT2i8fXnZ2dnUVZ8lD_fwAA@brightview.co.uk>
David Formosa (aka ? the Platypus) wrote:
> On Fri, 10 Jul 2009 16:12:08 -0400, Raffael Cavallaro
> <················@pas.espam.s.il.vous.plait.mac.com> wrote:
>> On 2009-07-10 14:55:48 -0400, Larry D'Anna <·····@elder-gods.org> said:
>>
>>> You just have to use monads to get to it all.  You
>>> can take any imperative algorithm you like and translate it into Haskell
>>> in a totally straightforward way.
>>
>> monads. straightforward. <= contradiction in terms
> 
> The thing is you don't have to understand monads to use them.  If you
> just use the do-syntax you can use the monad model as an abstraction
> and not worry about the model itself.

You cannot answer the questions I asked about the array benchmark code in
Haskell without explaining monads. So a programmer who does not understand
monads cannot even hope to accomplish the simplest of tasks.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <hNednb_7EsU9NsrXnZ2dnUVZ8odi4p2d@brightview.co.uk>
Larry D'Anna wrote:
>>> What's wrong with purely functional languages?
>>
>> The "purely" part. Some solutions are naturally expressed in a pure
>> functional manner. Others are not. The programmer should have a choice.
> 
> Uh, in Haskell you absolutely do have a choice.  Haskell gives the
> programmer access to all the non-functional things you want: mutable
> cells, mutable arrays,
> foreign functions, IO, etc.  You just have to use monads to get to it all.
>  You can take any imperative algorithm you like and translate it into
> Haskell in a totally straightforward way.

Here is a trivial counter example:

http://flyingfrogblog.blogspot.com/2009/04/f-vs-ocaml-vs-haskell-hash-table.html

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Vend
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <76f8c4f9-afe7-4cdc-826b-f2c32b338d15@k1g2000yqf.googlegroups.com>
On Jul 10, 8:55 pm, Larry D'Anna <·····@elder-gods.org> wrote:
> >> What's wrong with purely functional languages?
>
> > The "purely" part. Some solutions are naturally expressed in a pure
> > functional manner. Others are not. The programmer should have a choice.
>
> Uh, in Haskell you absolutely do have a choice.  Haskell gives the programmer
> access to all the non-functional things you want: mutable cells, mutable arrays,
> foreign functions, IO, etc.  You just have to use monads to get to it all.  You
> can take any imperative algorithm you like and translate it into Haskell in a
> totally straightforward way.  

Hence you need extra complexity to implement something you get for
free in imperative languages.
What is the point?
From: Ertugrul Söylemez
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <20090710205138.08f5bf8b@tritium.xx>
Raffael Cavallaro <················@pas.espam.s.il.vous.plait.mac.com> wrote:

> > What's wrong with purely functional languages?
>
> The "purely" part. Some solutions are naturally expressed in a pure
> functional manner. Others are not. The programmer should have a
> choice.

They have a choice.  A purely functional language has a theoretical
property, which is very useful.  That doesn't mean you can't use the
things you're used to.  You just don't -- usually.


> Moreover, there's abundant linguistic and cognitive scientific
> evidence that human beings naturally conceptualize the world in terms
> of stateful entities and their mutation, not functionally, so
> imperative languages fit our natural cognitive models better. This
> suggests that DSLs will often include an imperative component in their
> surface syntax at the very least.

And we all know that humans are far from perfect.  They make mistakes
all the time just because of that cognitive property.  Safety is all
about restricting this or finding alternatives.


Greets,
Ertugrul.


-- 
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/
From: Raffael Cavallaro
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h387c5$rpg$2@news.eternal-september.org>
On 2009-07-10 14:51:38 -0400, Ertugrul Söylemez <··@ertes.de> said:

> And we all know that humans are far from perfect.  They make mistakes
> all the time just because of that cognitive property.  Safety is all
> about restricting this or finding alternatives.

And expressiveness is all about *not* restricting the ability to write 
the solution in the language of the problem domain just because of some 
misplaced notion of "safety."


-- 
Raffael Cavallaro
From: Ertugrul Söylemez
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <20090711161352.536c3194@tritium.xx>
Raffael Cavallaro <················@pas.espam.s.il.vous.plait.mac.com> wrote:

> > And we all know that humans are far from perfect.  They make
> > mistakes all the time just because of that cognitive property.
> > Safety is all about restricting this or finding alternatives.
>
> And expressiveness is all about *not* restricting the ability to write
> the solution in the language of the problem domain just because of
> some misplaced notion of "safety."

You may want to show me something that couldn't be implemented in
Haskell, or where the additional safety gets in your way.


Greets,
Ertugrul.


-- 
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/
From: Raffael Cavallaro
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h3aamk$r37$1@news.eternal-september.org>
On 2009-07-11 10:13:52 -0400, Ertugrul Söylemez <··@ertes.de> said:

> You may want to show me something that couldn't be implemented in
> Haskell, or where the additional safety gets in your way.

"couldn't be implemented" is a Turing tarpit red herring. For "gets in 
your way," upthread we've just been discussing the pointless need to 
use monads just to get simple mutable state.

-- 
Raffael Cavallaro
From: Ertugrul Söylemez
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <20090713221345.29d3c4e9@tritium.xx>
Raffael Cavallaro <················@pas.espam.s.il.vous.plait.mac.com> wrote:

> > You may want to show me something that couldn't be implemented in
> > Haskell, or where the additional safety gets in your way.
>
> "couldn't be implemented" is a turing tarpit red herring. For "gets in
> your way," upthread we've just been discussing the pointless need to
> use monads just to get simple mutable state.

You still failed to show how this is "pointless".  It's not even a
"need", as there are many other ways to handle side effects in a purely
functional language.


Greets,
Ertugrul.


-- 
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <udOdnZomUMv9TsXXnZ2dnUVZ8l2dnZ2d@brightview.co.uk>
Ertugrul Söylemez wrote:
> Raffael Cavallaro <················@pas.espam.s.il.vous.plait.mac.com>
> wrote:
>> > And we all know that humans are far from perfect.  They make
>> > mistakes all the time just because of that cognitive property.
>> > Safety is all about restricting this or finding alternatives.
>>
>> And expressiveness is all about *not* restricting the ability to write
>> the solution in the language of the problem domain just because of
>> some misplaced notion of "safety."
> 
> You may want to show me something that couldn't be implemented in
> Haskell, or where the additional safety gets in your way.

A Haskell implementation of the BWT sort that gets within 2x the performance
of C.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Ertugrul Söylemez
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <20090713221458.5ceac7bb@tritium.xx>
Jon Harrop <···@ffconsultancy.com> wrote:

> > You may want to show me something that couldn't be implemented in
> > Haskell, or where the additional safety gets in your way.
>
> A Haskell implementation of the BWT sort that gets within 2x the
> performance of C.

I'll consider it when I have some time. =)


Greets,
Ertugrul.


-- 
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <cu6dnac3buTVCsbXnZ2dnUVZ8mRi4p2d@brightview.co.uk>
Ertugrul Söylemez wrote:
> Jon Harrop <···@ffconsultancy.com> wrote:
>> > You may want to show me something that couldn't be implemented in
>> > Haskell, or where the additional safety gets in your way.
>>
>> A Haskell implementation of the BWT sort that gets within 2x the
>> performance of C.
> 
> I'll consider it when I have some time. =)

So decent functional languages are only competitively performant after
infinite development time?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Scott Burson
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <499e82c2-2ade-4e18-9edf-aa2f1dd66bec@y10g2000prf.googlegroups.com>
On Jul 13, 2:30 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> Ertugrul Söylemez wrote:
> > Jon Harrop <····@ffconsultancy.com> wrote:
> >> > You may want to show me something that couldn't be implemented in
> >> > Haskell, or where the additional safety gets in your way.
>
> >> A Haskell implementation of the BWT sort that gets within 2x the
> >> performance of C.
>
> > I'll consider it when I have some time. =)
>
> So decent functional languages are only competitively performant after
> infinite development time?

Oh good grief.  You really think the C version was written in an
hour??  And you think Ertugrul doesn't have better things to do than
argue on Usenet?

Really, Jon, for a smart guy you say some amazingly stupid things
sometimes.

-- Scott
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <wvCdnWLs0c_CcsbXnZ2dnUVZ8jFi4p2d@brightview.co.uk>
Scott Burson wrote:
> And you think Ertugrul doesn't have better things to do than 
> argue on Usenet?

Ertugrul found plenty of time to post these baseless claims but, now that
I'm calling his bluff, he cannot find the time to optimize his 6-line
function. Sounds fishy to me...

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Paul Rubin
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <7x7hyc6l4p.fsf@ruckus.brouhaha.com>
Jon Harrop <···@ffconsultancy.com> writes:
> Ertugrul found plenty of time to post these baseless claims but, now that
> I'm calling his bluff, he cannot find the time to optimize his 6-line
> function. Sounds fishy to me...

By your logic, you should also admit that F# and Ocaml are also
worthlessly slow, until you can optimize that same function to the
level you have requested from Ertugrul.  Go on, admit it, or put up.
We're waiting.
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <qLWdnavp--NdnsHXnZ2dnUVZ8mudnZ2d@brightview.co.uk>
Paul Rubin wrote:
> Jon Harrop <···@ffconsultancy.com> writes:
>> Ertugrul found plenty of time to post these baseless claims but, now that
>> I'm calling his bluff, he cannot find the time to optimize his 6-line
>> function. Sounds fishy to me...
> 
> By your logic, you should also admit that F# and Ocaml are also
> worthlessly slow, until you can optimize that same function to the
> level you have requested from Ertugrul.  Go on, admit it, or put up.
> We're waiting.

I didn't come here for a fight but that doesn't mean I won't kick your
ass. ;-)

First up, here are my timings for encoding a 5.5Mb copy of the bible:

pbzip2:             ~1.0s
bzip2:              ~1.5s
My F#:               1.4s
Ertugrul's Haskell: 88s

Note that the timing for bzip2 is a rough approximation because its maximum
block size is only 900k whereas our code is sorting the whole 5.5Mb.
Suffice to say, my F# code is certainly getting performance comparable to
that of bzip2 and is around 60x faster than Ertugrul's Haskell.

FWIW, I'm happy to spend time on interesting projects like this because I
can turn them into OCaml and F#.NET Journal articles. :-)

Here's my parallelized F# code:

  open System.Threading
  
  let inline sort cmp (a: _ array) =
    let inline swap i j =
      let t = a.[i]
      a.[i] <- a.[j]
      a.[j] <- t
    let rec qsort l u =
      if l < u then
        swap l ((l + u) / 2)
        let mutable m = l
        for i=l+1 to u do
          if cmp a.[i] a.[l] < 0 then
            m <- m + 1
            swap m i
        swap l m
        if u-l > 1000 then
          let m = m
          let f = Tasks.Future.Create(fun () -> qsort l (m-1))
          qsort (m+1) u
          f.Value
        else
          qsort l (m-1)
          qsort (m+1) u
    qsort 0 (a.Length-1)
  
  let inline cmp (str: _ array) i j =
    let rec cmp i j =
      if i=str.Length then 1 else
        if j=str.Length then -1 else
          let c = compare str.[i] str.[j] in
          if c<>0 then c else
            cmp (i+1) (j+1)
    cmp i j
  
  let bwt (str: byte array) =
    let n = str.Length
    let a = Array.init n (fun i -> i)
    sort (fun i j -> cmp str i j) a
    Array.init n (fun i -> str.[(a.[i] + n - 1) % n])

Note that I have run this on the example from Wikipedia and it did get the
correct answer. A production quality version would probably be more frugal
about the choice of pivot.

Also of particular interest is the use of "inline" together with functional
abstraction, via the higher-order "sort" function, without sacrificing any
performance. This is one of the features that I really like about F# and it
is easily overlooked.
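
For reference, the transform under discussion can be sketched naively in
purely functional Haskell. This is my own illustration of the textbook
rotation-sorting definition (which differs from the sentinel-free suffix
comparison above), and at roughly O(n^2 log n) it is nowhere near bzip2's
speed:

```haskell
import Data.List (sort)

-- Naive BWT: form all n cyclic rotations of the input, sort them
-- lexicographically, and keep the last column.  Quadratic space and
-- worse time; purely illustrative, not a competitive implementation.
bwt :: String -> String
bwt s = map last (sort rotations)
  where
    n         = length s
    rotations = [take n (drop i (cycle s)) | i <- [0 .. n - 1]]
```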

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Paul Rubin
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <7xzlb7vq8p.fsf@ruckus.brouhaha.com>
Jon Harrop <···@ffconsultancy.com> writes:
> First up, here are my timings for encoding a 5.5Mb copy of the bible:
> pbzip2:             ~1.0s
> bzip2:              ~1.5s
> My F#:               1.4s

Nice job!  However, how much of your time did you spend writing that
code, just to prove something that you already knew?
 
> FWIW, I'm happy to spend time on interesting projects like this because I
> can turn them into OCaml and F#.NET Journal articles. :-)

Since Ertugrul isn't in the journal article business, perhaps his
suggestion of having you offer to pay him to write code would be a
workable alternative motivation.

Here is another interesting Haskell vs. C benchmark:

  http://augustss.blogspot.com/2009/02/is-haskell-fast-lets-do-simple.html

;-) 
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <cNidnXkmd8AescHXnZ2dnUVZ8tOdnZ2d@brightview.co.uk>
Paul Rubin wrote:
> Jon Harrop <···@ffconsultancy.com> writes:
>> First up, here are my timings for encoding a 5.5Mb copy of the bible:
>> pbzip2:             ~1.0s
>> bzip2:              ~1.5s
>> My F#:               1.4s
> 
> Nice job!  However, how much of your time did you spend writing that
> code, just to prove something that you already knew?

Took ~30mins to write but it completely shattered my prediction that F#
would be considerably slower than C.

Without my "inline" trick the time taken goes from 1.4s up to 39s.

>> FWIW, I'm happy to spend time on interesting projects like this because I
>> can turn them into OCaml and F#.NET Journal articles. :-)
> 
> Since Ertugrul isn't in the journal article business, perhaps his
> suggestion of having you offer to pay him to write code would be a
> workable alternative motivation.

Even if Ertugrul can produce the goods (which I honestly don't believe he or
anyone else can), I'd have no use for it.

> Here is another interesting Haskell vs. C benchmark:
> 
>   http://augustss.blogspot.com/2009/02/is-haskell-fast-lets-do-simple.html
> 
> ;-)

FWIW, I recently benchmarked OCaml vs HLVM on an FFT implementation and HLVM
is comparably performant: ~25% compared to optimized OCaml but ~25% faster
than OCaml on elegant code.

Here's my program for my HLVM compiler:

(* Radix-2 FFT *)

let rec zadd(((r1, i1), (r2, i2)) : (float * float) * (float * float)) :
float * float =
  r1 +. r2, i1 +. i2;;

let rec zmul(((r1, i1), (r2, i2)) : (float * float) * (float * float)) :
float * float =
  r1 *. r2 -. i1 *. i2, r1 *. i2 +. i1 *. r2;;

let rec aux1((i, n, a, a1, a2) : int * int * (float * float) array * (float
* float) array * (float * float) array) : unit =
  if i < n/2 then
    begin
      a1.(i) <- a.(2*i);
      a2.(i) <- a.(2*i+1);
      aux1(i+1, n, a, a1, a2)
    end;;

let rec aux2((k, n, a, a1, a2) : int * int * (float * float) array * (float
* float) array * (float * float) array) : unit =
  if k < n/2 then
    begin
      let t = 4. *. pi *. float_of_int k /. float_of_int n in
      a.(k) <- zadd(a1.(k), zmul(a2.(k), (cos t, -.sin t)));
      aux2(k+1, n, a, a1, a2)
    end;;

let rec aux3((k, n, a, a1, a2) : int * int * (float * float) array * (float
* float) array * (float * float) array) : unit =
  if k < n then
    begin
      let t = 4. *. pi *. float_of_int k /. float_of_int n in
      a.(k) <- zadd(a1.(k-n/2), zmul(a2.(k-n/2), (cos t, -.sin t)));
      aux3(k+1, n, a, a1, a2)
    end;;

let rec fft(a: (float * float) array) : (float * float) array =
  if length a = 1 then create(1, a.(0)) else
    begin
      let n = length a in
      let a1 = create(n/2, (0., 0.)) in
      let a2 = create(n/2, (0., 0.)) in
      aux1(0, n, a, a1, a2);
      let a1 = fft a1 in
      let a2 = fft a2 in
      aux2(0, n, a, a1, a2);
      aux3(n/2, n, a, a1, a2);
      a
    end;;

let rec ignore(a: (float * float) array) : unit = ();;

ignore(fft(create(1048576, (0.0, 0.0))));;

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <e-GdnQVQT-W5Z8LXnZ2dnUVZ8q1i4p2d@brightview.co.uk>
Jon Harrop wrote:
> Paul Rubin wrote:
>> Jon Harrop <···@ffconsultancy.com> writes:
>>> First up, here are my timings for encoding a 5.5Mb copy of the bible:
>>> pbzip2:             ~1.0s
>>> bzip2:              ~1.5s
>>> My F#:               1.4s
>> 
>> Nice job!  However, how much of your time did you spend writing that
>> code, just to prove something that you already knew?
> 
> Took ~30mins to write but it completely shattered my prediction that F#
> would be considerably slower than C.

After a lot more optimization effort I have managed to create a much uglier
OCaml implementation that is still 10x slower than my F# and has had all
abstractions removed by hand because they incur such severe performance
overheads (my first OCaml port was 146x slower than F#!):

  open Bigarray
  
  type int_array = (int, int_elt, c_layout) Array1.t
  
  type byte_array = (int, int8_unsigned_elt, c_layout) Array1.t
  
  exception Cmp of int
  
  let cmp (str: byte_array) i j =
    let n = Array1.dim str in
    let i = ref i and j = ref j in
    try
      while true do
        if !i = n then raise(Cmp 1) else
          if !j = n then raise(Cmp(-1)) else
            let si = str.{!i} and sj = str.{!j} in
            if si < sj then raise(Cmp(-1)) else
              if si > sj then raise(Cmp 1) else
                begin
                  incr i;
                  incr j
                end
      done;
      0
    with Cmp c -> c
  
  let swap (a: int_array) i j =
    let t = a.{i} in
    a.{i} <- a.{j};
    a.{j} <- t
  
  let sort str (a: int_array) =
    let rec qsort l u =
      if l < u then
        begin
          swap a l ((l + u) / 2);
          let m = ref l in
          for i=l+1 to u do
            if cmp str a.{i} a.{l} < 0 then
              begin
                incr m;
                swap a !m i
              end
          done;
          swap a l !m;
          qsort l (!m - 1);
          qsort (!m + 1) u
        end in
    qsort 0 (Array1.dim a - 1)
  
  let () =
    let file = try Sys.argv.(1) with _ -> "input.txt" in
    let desc = Unix.openfile file [] 777 in
    let str = Array1.map_file desc int8_unsigned c_layout false (-1) in
    let n = Array1.dim str in
  
    let a = Array1.create int c_layout n in
    for i=0 to n-1 do
      a.{i} <- i
    done;
    sort str a;
    for i=0 to n-1 do
      print_char(Char.chr str.{(a.{i} + n - 1) mod n})
    done;
  
    Unix.close desc

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Erik de Castro Lopo
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <20090717171941.80366a5f.erikd@mega-nerd.com>
Jon Harrop wrote:

> After a lot more optimization effort I have managed to create a much uglier
> OCaml implementation that is still 10x slower than my F# and has had all
> abstractions removed by hand because they incur such severe performance
> overheads (my first OCaml port was 146x slower than F#!):

Is this comparison being done on the same platform? Same machine?

Your Ocaml code uses the Unix module. I thought that wasn't available
on the windows version of Ocaml while I assume your F# code was
running on windows.

What gives?

Erik
-- 
----------------------------------------------------------------------
Erik de Castro Lopo
http://www.mega-nerd.com/
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <YPmdnZUZQMk5xf3XnZ2dnUVZ8m2dnZ2d@brightview.co.uk>
Erik de Castro Lopo wrote:
> Jon Harrop wrote:
>> After a lot more optimization effort I have managed to create a much
>> uglier OCaml implementation that is still 10x slower than my F# and has
>> had all abstractions removed by hand because they incur such severe
>> performance overheads (my first OCaml port was 146x slower than F#!):
> 
> Is this comparison being done on the same platform? Same machine?

That comparison was 32-bit Debian with OCaml and 32-bit Windows Vista with
F#. The machine is a Dell Precision T5400 with two Intel Xeon E5405 2.0GHz
quadcores (i.e. 8 cores in total) and 4Gb RAM.

> Your Ocaml code uses the Unix module. I thought that wasn't available
> on the windows version of Ocaml while I assume your F# code was
> running on windows.

The Unix module and Bigarray work fine from the OCaml top-level on Windows
here but I've tried both the MSVC and MinGW installs of OCaml and cannot
get either to compile to native code.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Erik de Castro Lopo
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <20090717213206.d5e565c6.erikd@mega-nerd.com>
Jon Harrop wrote:

> That comparison was 32-bit Debian with OCaml

Debian stable? testing? unstable? Ocaml version? Linux kernel
version? Filesystem?

Erik
-- 
----------------------------------------------------------------------
Erik de Castro Lopo
http://www.mega-nerd.com/
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <q8KdneaT6dQ2-P3XnZ2dnUVZ8jVi4p2d@brightview.co.uk>
Erik de Castro Lopo wrote:
> Jon Harrop wrote:
>> That comparison was 32-bit Debian with OCaml
> 
> Debian stable? testing? unstable?

Debian testing.

> Ocaml version?

3.11.0

> Linux kernel version?

$ uname -a
Linux leper 2.6.24-etchnhalf.1-amd64 #1 SMP Fri Dec 26 05:32:46 UTC 2008
x86_64 GNU/Linux

> Filesystem? 

Ext3.

However, I've found a perf bug in my parallelization of the OCaml code and
can now get a speedup for parallelism in OCaml. Specifically, the time
taken falls from 14s to 4s, which is only 2.9x slower than F#!

Here's my latest OCaml code:

  open Bigarray
  
  type int_array = (int, int_elt, c_layout) Array1.t
  
  type byte_array = (int, int8_unsigned_elt, c_layout) Array1.t
  
  exception Cmp of int
  
  let cmp (str: byte_array) i j =
    let n = Array1.dim str in
    let i = ref i and j = ref j in
    try
      while true do
        if !i = n then raise(Cmp 1) else
          if !j = n then raise(Cmp(-1)) else
            let si = str.{!i} and sj = str.{!j} in
            if si < sj then raise(Cmp(-1)) else
              if si > sj then raise(Cmp 1) else
                begin
                  incr i;
                  incr j
                end
      done;
      0
    with Cmp c -> c
  
  let swap (a: int_array) i j =
    let t = a.{i} in
    a.{i} <- a.{j};
    a.{j} <- t
  
  let invoke (f : 'a -> 'b) x : unit -> 'b =
    let input, output = Unix.pipe() in
    match Unix.fork() with
    | -1 -> (let v = f x in fun () -> v)
    | 0 ->
        Unix.close input;
        let output = Unix.out_channel_of_descr output in
        Marshal.to_channel output (try `Res(f x) with e -> `Exn e) [];
        close_out output;
        exit 0
    | pid ->
        Unix.close output;
        let input = Unix.in_channel_of_descr input in
        fun () ->
          let v = Marshal.from_channel input in
          ignore (Unix.waitpid [] pid);
          close_in input;
          match v with
          | `Res x -> x
          | `Exn e -> raise e;;
  
  let sort str (a: int_array) =
    let rec qsort l u =
      if l < u then
        begin
          swap a l ((l + u) / 2);
          let m = ref l in
          for i=l+1 to u do
            if cmp str a.{i} a.{l} < 0 then
              begin
                incr m;
                swap a !m i
              end
          done;
          swap a l !m;
          if u - l > 30000 then
            begin
              let f () =
                qsort l (!m - 1);
                Array1.sub a l (!m-l-1) in
              let a' = invoke f () in
              qsort (!m + 1) u;
              let a' = a'() in
              for i=l to !m-2 do
                a.{i} <- a'.{i - l}
              done
            end
          else
            begin
              qsort l (!m - 1);
              qsort (!m + 1) u
            end
        end in
    ignore(qsort 0 (Array1.dim a - 1))
  
  let () =
    let file = try Sys.argv.(1) with _ -> "input.txt" in
    let desc = Unix.openfile file [] 777 in
    let str = Array1.map_file desc int8_unsigned c_layout false (-1) in
    let n = Array1.dim str in
  
    let a = Array1.create int c_layout n in
    for i=0 to n-1 do
      a.{i} <- i
    done;
    sort str a;
    for i=0 to n-1 do
      print_char(Char.chr str.{(a.{i} + n - 1) mod n})
    done;
  
    Unix.close desc

To be fair, I've optimized the comparison function in the OCaml but not in
the F# now...

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: fft1976
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <b658af70-af1a-49df-bc37-f9f29d68bc6d@i4g2000prm.googlegroups.com>
On Jul 13, 8:46 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> Scott Burson wrote:
> > And you think Ertugrul doesn't have better things to do than
> > argue on Usenet?
>
> Ertugrul found plenty of time to post these baseless claims but, now that
> I'm calling his bluff, he cannot find the time to optimize his 6-line
> function. Sounds fishy to me...

Haskell is a pretty good language for prototyping though, as I think
anyone preferring static typing will agree.
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <0YmdnYxlLuufacbXnZ2dnUVZ8lhi4p2d@brightview.co.uk>
fft1976 wrote:
> On Jul 13, 8:46 pm, Jon Harrop <····@ffconsultancy.com> wrote:
>> Scott Burson wrote:
>> > And you think Ertugrul doesn't have better things to do than
>> > argue on Usenet?
>>
>> Ertugrul found plenty of time to post these baseless claims but, now that
>> I'm calling his bluff, he cannot find the time to optimize his 6-line
>> function. Sounds fishy to me...
> 
> Haskell is a pretty good language for prototyping though, as I think
> anyone preferring static typing will agree.

No, I would not agree. OCaml's type inference puts Haskell to shame and is
maximally beneficial during prototyping. Indeed, OCaml's inferred
structural types (polymorphic variants and objects) are the only features
that have closed the gap between static and dynamic typing for me, to the
extent that I no longer use dynamic languages at all.

They are also the features I miss the most in F#.
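
For instance (a toy sketch of my own, not from any real code), polymorphic
variant constructors need no prior type declaration: the compiler infers an
open structural type from the constructors you actually use, so a prototype's
data representation can grow incrementally, much as in a dynamic language,
while staying statically checked:

```ocaml
(* Toy example: there is no `type shape = ...` declaration anywhere.
   The compiler infers a recursive structural type for `area` from its
   uses alone, roughly
   ([< `Circle of float | `Scaled of float * 'a | `Square of float ] as 'a)
   -> float *)
let rec area = function
  | `Circle r -> 3.14159 *. r *. r
  | `Square s -> s *. s
  | `Scaled (k, shape) -> k *. k *. area shape

let () = Printf.printf "%g\n" (area (`Scaled (2.0, `Square 3.0)))
```

Adding a new `Triangle case later means editing only the functions that must
handle it; there is no central type definition to change.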

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: fft1976
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <4b7ea84f-ad60-419c-a4ce-175f33fef99e@x5g2000prf.googlegroups.com>
On Jul 13, 9:06 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> fft1976 wrote:
> > On Jul 13, 8:46 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> >> Scott Burson wrote:
> >> > And you think Ertugrul doesn't have better things to do than
> >> > argue on Usenet?
>
> >> Ertugrul found plenty of time to post these baseless claims but, now that
> >> I'm calling his bluff, he cannot find the time to optimize his 6-line
> >> function. Sounds fishy to me...
>
> > Haskell is a pretty good language for prototyping though, as I think
> > anyone preferring static typing will agree.
>
> No, I would not agree. OCaml's type inference puts Haskell to shame and is
> maximally beneficial during prototyping. Indeed, OCaml's inferred
> structural types (polymorphic variants and objects) are the only features
> that have closed the gap between static and dynamic typing for me, to the
> extent that I no longer use dynamic languages at all.
>
> They are also the features I miss the most in F#.

The goal of prototyping, for me, is getting at some conceptual
clarity, which polymorphic variants, due to their wicked power, just
don't give me.
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <qLWdnaXp--PRmMHXnZ2dnUVZ8mti4p2d@brightview.co.uk>
fft1976 wrote:
> The goal of prototyping, for me, is getting at some conceptual
> clarity, which polymorphic variants, due to their wicked power, just
> don't give me.

Ok. For me, prototyping means writing in a dynamic style without the benefit
of a coherent design. OCaml is the only statically typed language that
allows me to do that thanks to its very powerful inference. In contrast,
Haskell has very weak inference.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: fft1976
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <922f067a-527c-4a5d-8878-b3227558489e@v23g2000pro.googlegroups.com>
On Jul 13, 10:20 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> fft1976 wrote:
> > The goal of prototyping, for me, is getting at some conceptual
> > clarity, which polymorphic variants, due to their wicked power, just
> > don't give me.
>
> Ok. For me, prototyping means writing in a dynamic style without the benefit
> of a coherent design. OCaml is the only statically typed language that
> allows me to do that thanks to its very powerful inference. In contrast,
> Haskell has very weak inference.
>
> --
> Dr Jon D Harrop, Flying Frog Consultancy Ltd. http://www.ffconsultancy.com/?u

By the way, someone is posting on Clojure ML under "JoHn
Harrop" (singing praises to Clojure). So I have to ask, is that poster
an impostor?
From: Larry Coleman
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <4bc5b6a2-c6ec-4f3f-8fae-07aef2ef7094@h18g2000yqj.googlegroups.com>
On Jul 16, 6:24 am, fft1976 <·······@gmail.com> wrote:
> On Jul 13, 10:20 pm, Jon Harrop <····@ffconsultancy.com> wrote:
>
> > fft1976 wrote:
> > > The goal of prototyping, for me, is getting at some conceptual
> > > clarity, which polymorphic variants, due to their wicked power, just
> > > don't give me.
>
> > Ok. For me, prototyping means writing in a dynamic style without the benefit
> > of a coherent design. OCaml is the only statically typed language that
> > allows me to do that thanks to its very powerful inference. In contrast,
> > Haskell has very weak inference.
>
> > --
> > Dr Jon D Harrop, Flying Frog Consultancy Ltd. http://www.ffconsultancy.com/?u
>
> By the way, someone is posting on Clojure ML under "JoHn
> Harrop" (singing praises to Clojure). So I have to ask, is that poster
> an impostor?

I've looked at some of JoHn Harrop's posts on the Clojure ML. He seems
to be actively coding in Clojure, and in addition, seems to have some
understanding of the Lisp Way. I think he just happens to have the
same last name as our amphibian friend.
From: fft1976
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <be21f129-94c9-4d2e-b50b-0a3d5c8692ee@x5g2000prf.googlegroups.com>
On Jul 16, 5:34 am, Larry Coleman <············@yahoo.com> wrote:
> On Jul 16, 6:24 am, fft1976 <·······@gmail.com> wrote:
>
>
>
> > On Jul 13, 10:20 pm, Jon Harrop <····@ffconsultancy.com> wrote:
>
> > > fft1976 wrote:
> > > > The goal of prototyping, for me, is getting at some conceptual
> > > > clarity, which polymorphic variants, due to their wicked power, just
> > > > don't give me.
>
> > > Ok. For me, prototyping means writing in a dynamic style without the benefit
> > > of a coherent design. OCaml is the only statically typed language that
> > > allows me to do that thanks to its very powerful inference. In contrast,
> > > Haskell has very weak inference.
>
> > > --
> > > Dr Jon D Harrop, Flying Frog Consultancy Ltd. http://www.ffconsultancy.com/?u
>
> > By the way, someone is posting on Clojure ML under "JoHn
> > Harrop" (singing praises to Clojure). So I have to ask, is that poster
> > an impostor?
>
> I've looked at some of JoHn Harrop's posts on the Clojure ML. He seems
> to be actively coding in Clojure, and in addition, seems to have some
> understanding of the Lisp Way. I think he just happens to have the
> same last name as our amphibian friend.

I think Lisp programmers (especially in comp.lang.lisp) like to
flatter themselves. They think others disagree with them because they
weren't smart enough to understand what needed to be understood.

"Lisp Way" is by far not the most complicated thing to understand.
OCaml is basically Scheme with static types. Camlp4 is dirty macros
for a more complicated syntax (if you grok the former, the latter are
trivial).

Whether it's better or worse is a totally separate question. IMHO
OCaml requires more brains though.
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <0fudnXgw-YPfQMLXnZ2dnUVZ8uSdnZ2d@brightview.co.uk>
fft1976 wrote:
> I think Lisp programmers (especially in comp.lang.lisp) like to
> flatter themselves. They think others disagree with them because they
> weren't smart enough to understand what needed to be understood.

To be fair, a lot of clever people used to use Lisp before the modern FPLs
branched off.

Ironically, the Lispers who taunted C programmers about Greenspunning 20
years ago are now on the receiving end: today's programmers have to explain
to Lispers why it is not feasible to Greenspun a decent pattern matcher or
concurrent garbage collector.

Multicore seems to have forced the issue though, and most Lispers are
jumping ship to Clojure even if it does use slightly different kinds of
superfluous parentheses. ;-)

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: fft1976
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <0843ed78-f319-4c10-a19c-b084333d276b@i4g2000prm.googlegroups.com>
On Jul 16, 8:18 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> fft1976 wrote:
> > I think Lisp programmers (especially in comp.lang.lisp) like to
> > flatter themselves. They think others disagree with them because they
> > weren't smart enough to understand what needed to be understood.
>
> To be fair, a lot of clever people used to use Lisp before the modern FPLs
> branched off.

I'm not saying Lispers are dumber than Ocamlers, just that the
concepts you need to learn to understand OCaml are almost a superset
of those needed to understand Lisp.
From: Alessio Stalla
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <fc398076-136e-4549-a45c-4b1b72fcf3f0@d4g2000yqa.googlegroups.com>
On Jul 17, 5:40 am, fft1976 <·······@gmail.com> wrote:
> On Jul 16, 8:18 pm, Jon Harrop <····@ffconsultancy.com> wrote:
>
> > fft1976 wrote:
> > > I think Lisp programmers (especially in comp.lang.lisp) like to
> > > flatter themselves. They think others disagree with them because they
> > > weren't smart enough to understand what needed to be understood.
>
> > To be fair, a lot of clever people used to use Lisp before the modern FPLs
> > branched off.
>
> I'm not saying Lispers are dumber than Ocamlers, just that the
> concepts you need to learn to understand OCaml are almost a superset
> of those needed to understand Lisp.

I don't agree with this. OCaml (and functional languages in general)
shares some concepts with Lisp, but differs substantially in many
respects. E.g. I can't expect someone to work with OCaml without
understanding type inference and pattern matching, while you can use
Lisp just fine without knowing about either. Conversely, Lisp often
forces you to think in terms of abstract syntax (first-class symbols,
macros), while OCaml does not. And Lisp has multimethods, etc. etc.

Of course, whichever language you use, you should be clear on at least
the basic concepts from programming language theory, even if your
language of choice does not provide some of them natively. But this is
completely orthogonal to Lisp, OCaml, C, Java, or whatever.

Alessio
From: Larry Coleman
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <fd68b5a2-33d9-4bf7-82ec-a6c4c66d403f@24g2000yqm.googlegroups.com>
On Jul 16, 11:18 pm, Jon Harrop <····@ffconsultancy.com> wrote:
>
> Multicore seems to have forced the issue though, and most Lispers are
> jumping ship to Clojure even if it does use slightly different kinds of
> superfluous parentheses. ;-)
>

This is a perfect example of the unsupported and content-free
assertions that have made you such an esteemed and popular member of
the Lisp community.
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <X8qdnelvurYF8_3XnZ2dnUVZ8rxi4p2d@brightview.co.uk>
Larry Coleman wrote:
> On Jul 16, 11:18 pm, Jon Harrop <····@ffconsultancy.com> wrote:
>> Multicore seems to have forced the issue though, and most Lispers are
>> jumping ship to Clojure even if it does use slightly different kinds of
>> superfluous parentheses. ;-)
> 
> This is a perfect example of the unsupported and content-free
> assertions that have made you such an esteemed and popular member of
> the Lisp community.

Esteemed indeed. Here's some "content" though:

  http://www.google.com/trends?q=common+lisp%2Cclojure

You have to admit, Rich Hickey is doing a bloody good job getting Clojure
off the ground.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Larry Coleman
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <f3cf977d-c74c-4e2f-b81e-27007173d99f@26g2000yqk.googlegroups.com>
On Jul 16, 9:49 pm, fft1976 <·······@gmail.com> wrote:
> I think Lisp programmers (especially in comp.lang.lisp) like to
> flatter themselves. They think others disagree with them because they
> weren't smart enough to understand what needed to be understood.

I'm definitely no Lisp expert, but I have read ANSI Common Lisp and On
Lisp and seen enough to decide to start my next project in CL. BTW,
the tendency to conclude that people who disagree with you just don't
understand is a lot more prevalent than you may think.

>
> "Lisp Way" is by far not the most complicated thing to understand.

I never said (or meant to imply) that it was. It's actually quite
simple, but it requires letting go of your old ways if you came from
an imperative or object-oriented environment.

> OCaml is basically Scheme with static types.
> Camlp4 is dirty macros
> for a more complicated syntax (if you grok the former, the latter are
> trivial).
>
> Whether it's better or worse is a totally separate question. IMHO
> OCaml requires more brains though.

I'm not sure about this point. You can write imperative code in OCaml
and it won't complain or get in your way as long as the types are
correct. Lisp and Scheme will also let you write imperative code, but
anyone who sees it will make fun of you behind your back. Haskell
forces you to change.
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <f9udnbwotdOXh_zXnZ2dnUVZ8jGdnZ2d@brightview.co.uk>
Larry Coleman wrote:
> I'm not sure about this point. You can write imperative code in OCaml
> and it won't complain or get in your way as long as the types are
> correct. Lisp and Scheme will also let you write imperative code, but
> anyone who sees it will make fun of you behind your back. Haskell
> forces you to change.

Writing imperative code is a critically-important part of using impure
functional languages because purely functional programming is incapable of
solving practically-important problems.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Larry Coleman
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <0c2ca480-f1f1-4610-bff6-3c72aaa6dbb2@y19g2000yqy.googlegroups.com>
On Jul 17, 9:50 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> Larry Coleman wrote:
> > I'm not sure about this point. You can write imperative code in OCaml
> > and it won't complain or get in your way as long as the types are
> > correct. Lisp and Scheme will also let you write imperative code, but
> > anyone who sees it will make fun of you behind your back. Haskell
> > forces you to change.
>
> Writing imperative code is a critically-important part of using impure
> functional languages because purely functional programming is incapable of
> solving practically-important problems.
>
That's a strong claim and could do with some clarification. What would
you consider to be a pure functional programming language?

What counts as a "practically-important problem"? One thing I have to
do at work is update stored procedures that archive our production
database. I had to manually verify that each table and column was
being archived, and that all tables that referenced archived tables
were also archived. The process was tedious and error-prone. The
program that I wrote in Haskell solved this problem. Does that count?

Once you've clarified those two points, I'd be interested to know how
you could prove that purely functional programming is incapable of
solving "practically-important problems" as you will have defined
them. Or maybe not. It will depend on your definitions.

Nice use of the straw man, BTW. I was really only questioning
fft1976's statement that OCaml requires more brains by making the
point that it doesn't if you only write imperative code in it.
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <NJidnUZTzbud6P_XnZ2dnUVZ8iednZ2d@brightview.co.uk>
Larry Coleman wrote:
> On Jul 17, 9:50 pm, Jon Harrop <····@ffconsultancy.com> wrote:
>> Larry Coleman wrote:
>> > I'm not sure about this point. You can write imperative code in OCaml
>> > and it won't complain or get in your way as long as the types are
>> > correct. Lisp and Scheme will also let you write imperative code, but
>> > anyone who sees it will make fun of you behind your back. Haskell
>> > forces you to change.
>>
>> Writing imperative code is a critically-important part of using impure
>> functional languages because purely functional programming is incapable
>> of solving practically-important problems.
>
> That's a strong claim and could do with some clarification. What would
> you consider to be a pure functional programming language?

Haskell.

> What counts as a "practically-important problem"?

The BWT challenge.

> One thing I have to 
> do at work is update stored procedures that archive our production
> database. I had to manually verify that each table and column was
> being archived, and that all tables that referenced archived tables
> were also archived. The process was tedious and error-prone. The
> program that I wrote in Haskell solved this problem. Does that count?

Yes.

> Once you've clarified those two points, I'd be interested to know how
> you could prove that purely functional programming is incapable of
> solving "practically-important problems" as you will have defined
> them. Or maybe not. It will depend on your definitions.

By counter example. I've yet to see a BWT written in Haskell that can
compress 20Mb of data on my 4Gb machine without running out of RAM, let
alone get within a few times the performance of C.
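
For reference, the purely functional BWT everyone writes first looks like
this (a deliberately naive sketch: build every rotation, sort them, keep the
last column). It needs O(n^2) space before the sort even starts, which is
exactly why it falls over long before 20Mb:

```ocaml
(* Naive purely functional Burrows-Wheeler transform (no sentinel):
   materialize all n rotations of the input, sort them lexicographically,
   and concatenate the last character of each sorted rotation. *)
let bwt s =
  let n = String.length s in
  let rotation i = String.sub s i (n - i) ^ String.sub s 0 i in
  let sorted = List.sort compare (List.init n rotation) in
  String.concat "" (List.map (fun r -> String.sub r (n - 1) 1) sorted)

let () = print_endline (bwt "banana")  (* nnbaaa *)
```

The usable implementations instead sort an array of indices into the input
in place, which is precisely the imperative part.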

Just to clarify, I'm not talking about "Haskell" that is just C code written
using the FFI and/or unsafe features. Frag was once hailed as an example of
Haskell being relevant to games programming but the code just segfaulted on
my 64-bit machine because it wasn't really written in Haskell (because real
Haskell was unusably slow).

> Nice use of the straw man, BTW. I was really only questioning
> fft1976's statement that OCaml requires more brains by making the
> point that it doesn't if you only write imperative code in it.

That is a circular truism that applies to all languages including Haskell.
Except perhaps that Haskell requires less brains because you can do less
with it.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Larry Coleman
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <7d8ebd17-c9fa-4946-bb79-41776de548f9@i6g2000yqj.googlegroups.com>
On Jul 18, 10:30 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> Larry Coleman wrote:
> > On Jul 17, 9:50 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> >> Larry Coleman wrote:
> >> > I'm not sure about this point. You can write imperative code in OCaml
> >> > and it won't complain or get in your way as long as the types are
> >> > correct. Lisp and Scheme will also let you write imperative code, but
> >> > anyone who sees it will make fun of you behind your back. Haskell
> >> > forces you to change.
>
> >> Writing imperative code is a critically-important part of using impure
> >> functional languages because purely functional programming is incapable
> >> of solving practically-important problems.
>
> > That's a strong claim and could do with some clarification. What would
> > you consider to be a pure functional programming language?
>
> Haskell.
>
> > What counts as a "practically-important problem"?
>
> The BWT challenge.
>
> > One thing I have to
> > do at work is update stored procedures that archive our production
> > database. I had to manually verify that each table and column was
> > being archived, and that all tables that referenced archived tables
> > were also archived. The process was tedious and error-prone. The
> > program that I wrote in Haskell solved this problem. Does that count?
>
> Yes.
>

So you've admitted that a practical problem has been solved using
Haskell. I assume that next you'll demand that I provide a copy of the
code and imply that I'm lying if I won't or can't.

> > Once you've clarified those two points, I'd be interested to know how
> > you could prove that purely functional programming is incapable of
> > solving "practically-important problems" as you will have defined
> > them. Or maybe not. It will depend on your definitions.
>
> By counter example. I've yet to see a BWT written in Haskell that can
> compress 20Mb of data on my 4Gb machine without running out of RAM, let
> alone get within a few times the performance of C.

Did someone sleep through Philosophy 101?

If you had been alive in 1900, you might have said: "I'll prove that
heavier-than-air flight is impossible by counter-example. I've yet to
see a glider with attached propellers that flew successfully. QED."

>
> Just to clarify, I'm not talking about "Haskell" that is just C code written
> using the FFI and/or unsafe features. Frag was once hailed as an example of
> Haskell being relevant to games programming but the code just segfaulted on
> my 64-bit machine because it wasn't really written in Haskell (because real
> Haskell was unusably slow).
>
> > Nice use of the straw man, BTW. I was really only questioning
> > fft1976's statement that OCaml requires more brains by making the
> > point that it doesn't if you only write imperative code in it.
>
> That is a circular truism that applies to all languages including Haskell.
> Except perhaps that Haskell requires less brains because you can do less
> with it.
>

Your unrequited love for Lisp and Haskell is now common knowledge. The
game's basically up, but you can probably still harass the newbies.
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <4MednfwL377qc__XnZ2dnUVZ8uudnZ2d@brightview.co.uk>
Larry Coleman wrote:
> If you had been alive in 1900, you might have said: "I'll prove that
> heavier-than-air flight is impossible by counter-example. I've yet to
> see a glider with attached propellers that flew successfully. QED."

You have been civil until now so I'm going to give you the benefit of the
doubt and assume that you just misunderstood my statement.

I was saying that it is not possible to solve important practical problems
(a usable BWT being one example) using purely functional programming and,
consequently, impure programming is an essential part of using most
functional languages.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Larry Coleman
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <c1177353-d9df-4124-bd91-fb5e2ecc98ef@t13g2000yqt.googlegroups.com>
On Jul 19, 7:08 am, Jon Harrop <····@ffconsultancy.com> wrote:
> Larry Coleman wrote:
> > If you had been alive in 1900, you might have said: "I'll prove that
> > heavier-than-air flight is impossible by counter-example. I've yet to
> > see a glider with attached propellers that flew successfully. QED."
>
> You have been civil until now so I'm going to give you the benefit of the
> doubt and assume that you just misunderstood my statement.

I understood your statement as well as could be expected given that
it's slightly ambiguous. It could be read as "there are some practical
problems that purely functional programming cannot solve" or it could
be read as "purely functional programming cannot solve any practical
problem."

If I stopped being civil, it's because I expect that someone who uses
the letters "Dr" in front of his name should know what does or does
not constitute a valid proof. I should not have let the fact that you
obviously don't know what a valid proof is cause me to be uncivil, and
for that I apologize.

>
> I was saying that it is not possible to solve important practical problems
> (a usable BWT being one example) using purely functional programming and,
> consequently, impure programming is an essential part of using most
> functional languages.
>
Yes, that's what you said last time. I asked for proof and am still
waiting.

If you do have such a proof, it would be worthwhile reading for
readers of any of the groups to which this thread was posted, and
would enhance Computer Science in general. I suspect that you don't
have such a proof, and therefore will not waste any more time on this
thread unless something changes.
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <LOadnQfldpbor_7XnZ2dnUVZ8hSdnZ2d@brightview.co.uk>
Larry Coleman wrote:
> On Jul 19, 7:08 am, Jon Harrop <····@ffconsultancy.com> wrote:
>> Larry Coleman wrote:
>> > If you had been alive in 1900, you might have said: "I'll prove that
>> > heavier-than-air flight is impossible by counter-example. I've yet to
>> > see a glider with attached propellers that flew successfully. QED."
>>
>> You have been civil until now so I'm going to give you the benefit of the
>> doubt and assume that you just misunderstood my statement.
> 
> I understood your statement as well as could be expected given that
> it's slightly ambiguous. It could be read as "there are some practical
> problems that purely functional programming cannot solve" or it could
> be read as "purely functional programming cannot solve any practical
> problem."
>
> If I stopped being civil, it's because I expect that someone who uses
> the letters "Dr" in front of his name should know what does or does
> not constitute a valid proof. I should not have let the fact that you
> obviously don't know what a valid proof is cause me to be uncivil, and
> for that I apologize.
> 
>> I was saying that it is not possible to solve important practical
>> problems (a usable BWT being one example) using purely functional
>> programming and, consequently, impure programming is an essential part of
>> using most functional languages.
>>
> Yes, that's what you said last time. I asked for proof and am still
> waiting.
> 
> If you do have such a proof, it would be worthwhile reading for
> readers of any of the groups to which this thread was posted, and
> would enhance Computer Science in general. I suspect that you don't
> have such a proof, and therefore will not waste any more time on this
> thread unless something changes.

Congratulations! That is probably the most inventive cop-out I have ever
seen.

When you're asking venture capitalists for funding and they request a
business plan, do you demand that they prove that your idea cannot make
money whilst insulting their credentials too?

There is, of course, no relevant notion of "proof" in this context so I
obviously cannot provide a proof. Nor did I ever pretend to be able to. Nor
is it of any relevance.

Best of luck though getting your PhD in "implementing a usable quicksort in
Haskell". Or BWT. Or LZW. Or...

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: ACL
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <4c8dc73f-bcfb-4421-b5dd-f5e08d5860fd@24g2000yqm.googlegroups.com>
On Jul 19, 11:58 am, Jon Harrop <····@ffconsultancy.com> wrote:
> Larry Coleman wrote:
> > On Jul 19, 7:08 am, Jon Harrop <····@ffconsultancy.com> wrote:
> >> Larry Coleman wrote:
> >> > If you had been alive in 1900, you might have said: "I'll prove that
> >> > heavier-than-air flight is impossible by counter-example. I've yet to
> >> > see a glider with attached propellers that flew successfully. QED."
>
> >> You have been civil until now so I'm going to give you the benefit of the
> >> doubt and assume that you just misunderstood my statement.
>
> > I understood your statement as well as could be expected given that
> > it's slightly ambiguous. It could be read as "there are some practical
> > problems that purely functional programming cannot solve" or it could
> > be read as "purely functional programming cannot solve any practical
> > problem."
>
> > If I stopped being civil, it's because I expect that someone who uses
> > the letters "Dr" in front of his name should know what does or does
> > not constitute a valid proof. I should not have let the fact that you
> > obviously don't know what a valid proof is cause me to be uncivil, and
> > for that I apologize.
>
> >> I was saying that it is not possible to solve important practical
> >> problems (a usable BWT being one example) using purely functional
> >> programming and, consequently, impure programming is an essential part of
> >> using most functional languages.
>
> > Yes, that's what you said last time. I asked for proof and am still
> > waiting.
>
> > If you do have such a proof, it would be worthwhile reading for
> > readers of any of the groups to which this thread was posted, and
> > would enhance Computer Science in general. I suspect that you don't
> > have such a proof, and therefore will not waste any more time on this
> > thread unless something changes.
>
> Congratulations! That is probably the most inventive cop-out I have ever
> seen.
>
> When you're asking venture capitalists for funding and they request a
> business plan, do you demand that they prove that your idea cannot make
> money whilst insulting their credentials too?
>
> There is, of course, no relevant notion of "proof" in this context so I
> obviously cannot provide a proof. Nor did I ever pretend to be able to. Nor
> is it of any relevance.
>
> Best of luck though getting your PhD in "implementing a usable quicksort in
> Haskell". Or BWT. Or LZW. Or...
>
> --
> Dr Jon D Harrop, Flying Frog Consultancy Ltd. http://www.ffconsultancy.com/?u

"Ridiculous claim"
"You don't have proof"
"Asking me for proof is a cop out"

Toot toot, troll train rolling into station.

--
Dr. A.C.L, Troll Train Consulting.
From: Paul Rubin
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <7xhbx88pfg.fsf@ruckus.brouhaha.com>
Jon Harrop <···@ffconsultancy.com> writes:
> I was saying that it is not possible to solve important practical
> problems (a usable BWT being one example) using purely functional
> programming and, consequently, impure programming is an essential
> part of using most functional languages.

I would say that real-time programming is a more important practical
problem that's difficult to handle with functional languages
(including Lisp and *ML) than BWT.  There exist some problems FPL's
are not particularly suited to.  Real-time programming is one.  It's
conceivable (though dubious) that BWT is another.  If you study any
logic, you should know that "there exist" and "for all" are not the
same thing.  The idea is that we as programmers, to some extent,
actually tend to be involved in specific problem classes and use the
appropriate tools for them.  We don't care so much about issues
related to problems that we're not working on.

GHC certainly supports various impure constructions, e.g. STUArray,
the seq combinator, and so forth.  And in practical terms, Alioth
shows that for quite a few practical problems, GHC beats Ocaml and F#
rather soundly and is at least competitive in pretty much all.  If you
think BWT is so bloody important, maybe you should suggest that they
add it to the benchmarks.

Uvector.Algorithms contains a nice Haskell quicksort implementation,
by the way.
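
The whole quicksort argument, for anyone who hasn't followed it, is about the
short purely functional version (here in OCaml to match the code upthread;
the famous Haskell two-liner is the same idea): elegant, but it allocates two
fresh lists at every partition step, which is the efficiency complaint.

```ocaml
(* Purely functional list quicksort: pick the head as pivot, partition
   the rest, recurse.  Correct, but each step allocates two new lists,
   so it is far slower than an in-place array sort. *)
let rec qsort = function
  | [] -> []
  | pivot :: rest ->
      let smaller, larger = List.partition (fun x -> x < pivot) rest in
      qsort smaller @ (pivot :: qsort larger)
```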
From: Keith H Duggar
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <fd04a7c0-28f5-4060-b9c8-a71de007f921@p36g2000vbn.googlegroups.com>
On Jul 19, 5:41 am, Larry Coleman <············@yahoo.com> wrote:
> On Jul 18, 10:30 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> > Larry Coleman wrote:
> > > On Jul 17, 9:50 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> > >> Larry Coleman wrote:
> > >> > I'm not sure about this point. You can write imperative code in OCaml
> > >> > and it won't complain or get in your way as long as the types are
> > >> > correct. Lisp and Scheme will also let you write imperative code, but
> > >> > anyone who sees it will make fun of you behind your back. Haskell
> > >> > forces you to change.
>
> > >> Writing imperative code is a critically-important part of using impure
> > >> functional languages because purely functional programming is incapable
> > >> of solving practically-important problems.
>
> > > That's a strong claim and could do with some clarification. What would
> > > you consider to be a pure functional programming language?
>
> > Haskell.
>
> > > What counts as a "practically-important problem"?
>
> > The BWT challenge.
>
> > > One thing I have to
> > > do at work is update stored procedures that archive our production
> > > database. I had to manually verify that each table and column was
> > > being archived, and that all tables that referenced archived tables
> > > were also archived. The process was tedious and error-prone. The
> > > program that I wrote in Haskell solved this problem. Does that count?
>
> > Yes.
>
> So you've admitted that a practical problem has been solved using
> Haskell. I assume that next you'll demand that I provide a copy of the
> code and imply that I'm lying if I won't or can't.
>
> > > Once you've clarified those two points, I'd be interested to know how
> > > you could prove that purely functional programming is incapable of
> > > solving "practically-important problems" as you will have defined
> > > them. Or maybe not. It will depend on your definitions.
>
> > By counter example. I've yet to see a BWT written in Haskell that can
> > compress 20Mb of data on my 4Gb machine without running out of RAM, let
> > alone get within a few times the performance of C.
>
> Did someone sleep through Philosophy 101?

Did someone sleep through Logic and Discourse 101?

First, it's patently obvious from context and his answer to your
question that by

   "functional languages because purely functional programming is
   incapable of solving practically-important problems."

Jon means

   EXISTS[x](
     PracticallyImportant(x) AND
     FunctionalLanguagesIncapableSolving(x) )

not

   FORALL[x](
     NOT(PracticallyImportant(x)) OR (
        PracticallyImportant(x) AND
        !FunctionalLanguagesIncapableSolving(x) ) )

therefore your database problem was simply irrelevant to his claim.

Secondly, when a statement is ambiguous, a clarification request
and/or generous interpretation is important for efficient dialectic.
Therefore, when you interpreted

Jon Harrop wrote:
> Writing imperative code is a critically-important part of using impure
> functional languages because purely functional programming is incapable
> of solving practically-important problems.

you should have 1) asked for clarification or 2) interpreted
generously as

   "Writing imperative code is a critically-important part of using
   impure functional languages [in practice today] because purely
   functional programming [has so far been] incapable of solving
   [some] practically-important problems."

rather than trotting out burden-of-proof-fallacy nonsense

   http://en.wikipedia.org/wiki/Burden_of_proof_(logical_fallacy)

over the truth of FunctionalLanguagesIncapableSolving(x).

Now, GOTO 10 and CONTINUE ...

KHD
From: Keith H Duggar
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <994ae028-a41d-49fd-83e9-0040c359ae29@g31g2000yqc.googlegroups.com>
On Jul 19, 5:11 pm, Keith H Duggar <······@alum.mit.edu> wrote:
> On Jul 19, 5:41 am, Larry Coleman <············@yahoo.com> wrote:
> First, it's patently obvious from context and his answer to your
> question that by
>
>    "functional languages because purely functional programming is
>    incapable of solving practically-important problems."
>
> Jon means
>
>    EXISTS[x](
>      PracticallyImportant(x) AND
>      FunctionalLanguagesIncapableSolving(x) )
>
> not
>
>    FORALL[x](
>      NOT(PracticallyImportant(x)) OR (
>         PracticallyImportant(x) AND
>         !FunctionalLanguagesIncapableSolving(x) ) )

There is a typo in the FORALL clause. It should be

   FORALL[x](
     NOT(PracticallyImportant(x)) OR (
        PracticallyImportant(x) AND
        FunctionalLanguagesIncapableSolving(x) ) )

and if someone was thinking of attacking that typo and jumping
around in circles of joy, grow up.

KHD
From: Vend
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <3e2e3861-9392-473c-a648-bbb29266dc48@r2g2000yqm.googlegroups.com>
On Jul 19, 4:30 am, Jon Harrop <····@ffconsultancy.com> wrote:
> Larry Coleman wrote:
> > On Jul 17, 9:50 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> >> Larry Coleman wrote:
> >> > I'm not sure about this point. You can write imperative code in OCaml
> >> > and it won't complain or get in your way as long as the types are
> >> > correct. Lisp and Scheme will also let you write imperative code, but
> >> > anyone who sees it will make fun of you behind your back. Haskell
> >> > forces you to change.
>
> >> Writing imperative code is a critically-important part of using impure
> >> functional languages because purely functional programming is incapable
> >> of solving practically-important problems.
>
> > That's a strong claim and could do with some clarification. What would
> > you consider to be a pure functional programming language?
>
> Haskell.
>
> > What counts as a "practically-important problem"?
>
> The BWT challenge.

If I understand correctly, the problem with that algorithm in Haskell
is that in Haskell array update is O(n), right?

If that's the case, I suppose that Clean admits a reasonably efficient
implementation of that algorithm without using too much black magic.
From: Paul Rubin
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <7xeisc1by4.fsf@ruckus.brouhaha.com>
Vend <······@virgilio.it> writes:
> > The BWT challenge.
> If I understand correctly, the problem with that algorithm in Haskell
> is that in Haskell array update is O(n), right?

That is not a problem for BWT which can use unboxed arrays.

> If that's the case, I suppose that Clean admits a reasonably efficient
> implementation of that algorithm without using too much black magic.

The issue with GHC boxed arrays is an implementation shortcoming in
the GHC garbage collector.  It has nothing to do with the language.
Clean's GC might or might not have the same issue.
From: Vend
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <d8e83d21-8129-4ebc-8975-c9f23d841343@j32g2000yqh.googlegroups.com>
On Jul 20, 1:37 am, Paul Rubin <·············@NOSPAM.invalid> wrote:
> Vend <······@virgilio.it> writes:
> > > The BWT challenge.
> > If I understand correctly, the problem with that algorithm in Haskell
> > is that in Haskell array update is O(n), right?
>
> That is not a problem for BWT which can use unboxed arrays.
>
> > If that's the case, I suppose that Clean admits a reasonably efficient
> > implementation of that algorithm without using too much black magic.
>
> The issue with GHC boxed arrays is an implementation shortcoming in
> the GHC garbage collector.  It has nothing to do with the language.

Why does that happen?

> Clean's GC might or might not have the same issue.
From: Paul Rubin
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <7xhbx7gunz.fsf@ruckus.brouhaha.com>
Vend <······@virgilio.it> writes:
> > The issue with GHC boxed arrays is an implementation shortcoming in
> > the GHC garbage collector.  It has nothing to do with the language.
> 
> Why does that happen?

We just had a discussion of this.  Basically if you update a boxed
array element, the entire array is marked "dirty" and is scanned at GC
time.  There are ways around this in the GC literature but none of
them are pretty.  Upthread there is a url of the relevant item in the
GHC bug tracker.  I don't feel like finding it again.
From: Nobody
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <pan.2009.07.20.23.58.36.437000@nowhere.com>
On Mon, 20 Jul 2009 09:08:25 -0700, Vend wrote:

>> > If I understand correctly, the problem with that algorithm in Haskell
>> > is that in Haskell array update is O(n), right?
>>
>> That is not a problem for BWT which can use unboxed arrays.
>>
>> > If that's the case, I suppose that Clean admits a reasonably efficient
>> > implementation of that algorithm without using too much black magic.
>>
>> The issue with GHC boxed arrays is an implementation shortcoming in
>> the GHC garbage collector.  It has nothing to do with the language.
> 
> Why does that happen?

GHC's boxed arrays are either clean or dirty as a whole. If you modify one
element, the entire array becomes dirty and needs to be re-scanned during
gc.
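The practical escape hatch discussed upthread is unboxed ST arrays: their elements are raw values the collector never has to trace, so writes stay O(1) and the array is never rescanned. A minimal sketch of in-place update, using the `array` package that ships with GHC:

```haskell
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, getElems, newArray, writeArray)

-- Fill an unboxed Int array in place.  Each writeArray is O(1), and
-- because the elements are unboxed the GC has nothing to trace, so the
-- dirty-array rescanning described above does not apply.
doubles :: Int -> [Int]
doubles n = runST $ do
    arr <- newArray (0, n - 1) 0 :: ST s (STUArray s Int Int)
    mapM_ (\i -> writeArray arr i (i * 2)) [0 .. n - 1]
    getElems arr
```

`doubles 5` yields `[0,2,4,6,8]`.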
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <_tydnbfYqrf5ucLXnZ2dnUVZ8opi4p2d@brightview.co.uk>
fft1976 wrote:
> By the way, someone is posting on Clojure ML under "JoHn
> Harrop" (singing praises to Clojure). So I have to ask, is that poster
> an impostor?

Yes, I noticed that. I think he's just another functional programmer with a
very similar name to mine.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <F7idnXdxSLVpNsDXnZ2dnUVZ8tqdnZ2d@brightview.co.uk>
Scott Burson wrote:
> And you think Ertugrul doesn't have better things to do than
> argue on Usenet?

Having lost two benchmark challenges here, Ertugrul confined the discussion
to comp.lang.haskell only and posted another challenge the next day:

http://groups.google.com/group/comp.lang.haskell/msg/c86dd206a7b1e112?hl=en

He lost that one too.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Scott Burson
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <7701ce4c-bc41-422d-999d-005caa957cd2@m7g2000prd.googlegroups.com>
On Jul 15, 4:23 am, Jon Harrop <····@ffconsultancy.com> wrote:
> Scott Burson wrote:
> > And you think Ertugrul doesn't have better things to do than
> > argue on Usenet?
>
> Having lost two benchmark challenges here, Ertugrul confined the discussion
> to comp.lang.haskell only and posted another challenge the next day:
>
> http://groups.google.com/group/comp.lang.haskell/msg/c86dd206a7b1e112...
>
> He lost that one too.

This is one of the times you remind me of a high-school student
heckling the students of a rival school because your team won the
football game.  It's thoroughly adolescent behavior.  ("You suck!  We
rule!" -- it's the same thing even though you don't use those words.)

I had no horse in this race.  I've never even used Haskell.  I was
just pointing out that you said something ridiculous (and also very
adolescent in its black-and-white quality -- infinite development
time, indeed!).

I thank Ertugrul for giving of his time to contribute to the state of
our knowledge of programming languages.

-- Scott
From: GPS
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h3mh86$kll$1@news.xmission.com>
Scott Burson wrote:

> On Jul 15, 4:23 am, Jon Harrop <····@ffconsultancy.com> wrote:
>> Scott Burson wrote:
>> > And you think Ertugrul doesn't have better things to do than
>> > argue on Usenet?
>>
>> Having lost two benchmark challenges here, Ertugrul confined the
>> discussion to comp.lang.haskell only and posted another challenge the
>> next day:
>>
>> http://groups.google.com/group/comp.lang.haskell/msg/c86dd206a7b1e112...
>>
>> He lost that one too.
> 
> This is one of the times you remind me of a high-school student
> heckling the students of a rival school because your team won the
> football game.  It's thoroughly adolescent behavior.  ("You suck!  We
> rule!" -- it's the same thing even though you don't use those words.)
> 
> I had no horse in this race.  I've never even used Haskell.  I was
> just pointing out that you said something ridiculous (and also very
> adolescent in its black-and-white quality -- infinite development
> time, indeed!).
> 
> I thank Ertugrul for giving of his time to contribute to the state of
> our knowledge of programming languages.
> 
> -- Scott

For every man that told him he sucked, he built a wall to echo the sucking 
more.

Wine and honey are quite nice, but vinegar is quicker when viewed out the 
side window of the door.

To those that learn to taste, and choose, the fragrance and effects of honey 
are something that amuse.

Both men liked honey, and both sought to find more.  The men could not 
agree, on which path was better, and what the paths were for.

The men disagreed until their days came to an end.  The path by this time, 
was nothing but sand.  Blended, and molded into where it came from, nothing 
but an illusion, of what it once was.

-GPS
From: Slobodan Blazeski
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <ebb041b2-d0aa-4f5e-8d0b-365a646835d6@s15g2000yqs.googlegroups.com>
On Jul 16, 12:08 am, Scott Burson <········@gmail.com> wrote:
> I thank Ertugrul for giving of his time to contribute to the state of
> our knowledge of programming languages.
>
> -- Scott

I would thank them even more if they move this Haskell/F# brawl
outside of cll.

Bobi
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <hNednb77EsVtNsrXnZ2dnUVZ8odi4p2d@brightview.co.uk>
Ertugrul Söylemez wrote:
> Raffael Cavallaro <················@pas.espam.s.il.vous.plait.mac.com>
> wrote:
>> > What's wrong with purely functional languages?
>>
>> The "purely" part. Some solutions are naturally expressed in a pure
>> functional manner. Others are not. The programmer should have a
>> choice.
> 
> They have a choice.  A purely functional language has a theoretical
> property, which is very useful.  That doesn't mean you can't use the
> things you're used to.

Here is a trivial counter example: mutate an element in an array in O(1).

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Larry D'Anna
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h38atk$655$2@news.eternal-september.org>
["Followup-To:" header set to comp.lang.lisp.]
On 2009-07-10, Jon Harrop <···@ffconsultancy.com> wrote:
> Here is a trivial counter example: mutate an element in an array in O(1).

easy.

http://cvs.haskell.org/Hugs/pages/libraries/base/Data-Array-ST.html


        --larry
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <bMGdnQBE47v1XsrXnZ2dnUVZ8hti4p2d@brightview.co.uk>
Larry D'Anna wrote:
> ["Followup-To:" header set to comp.lang.lisp.]
> On 2009-07-10, Jon Harrop <···@ffconsultancy.com> wrote:
>> Here is a trivial counter example: mutate an element in an array in O(1).
> 
> easy.
> 
> http://cvs.haskell.org/Hugs/pages/libraries/base/Data-Array-ST.html

That is O(n) with GHC.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Vend
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <9725b0b2-5474-4b2a-bdee-e7f6f7bdc5f6@26g2000yqk.googlegroups.com>
On Jul 11, 1:49 am, Jon Harrop <····@ffconsultancy.com> wrote:
> Larry D'Anna wrote:
> > ["Followup-To:" header set to comp.lang.lisp.]
> > On 2009-07-10, Jon Harrop <····@ffconsultancy.com> wrote:
> >> Here is a trivial counter example: mutate an element in an array in O(1).
>
> > easy.
>
> >http://cvs.haskell.org/Hugs/pages/libraries/base/Data-Array-ST.html
>
> That is O(n) with GHC.

And anyway using state monads to re-implement imperative programming
on the top of a purely functional language running on an imperative
machine and interacting with a stateful world is a convoluted
abstraction inversion.
From: Johannes Laire
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <16915695-0930-4d24-9033-3f1fedcf104d@s31g2000yqs.googlegroups.com>
Jon Harrop wrote:
> Larry D'Anna wrote:
> > ["Followup-To:" header set to comp.lang.lisp.]
> > On 2009-07-10, Jon Harrop <····@ffconsultancy.com> wrote:
> >> Here is a trivial counter example: mutate an element in an array in O(1).
>
> > easy.
>
> >http://cvs.haskell.org/Hugs/pages/libraries/base/Data-Array-ST.html
>
> That is O(n) with GHC.

It is O(1) for unboxed arrays.

--
Johannes Laire
From: Larry D'Anna
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h3al3g$1k6$1@news.eternal-september.org>
On 2009-07-11, Johannes Laire <··············@gmail.com> wrote:
> Jon Harrop wrote:
>> Larry D'Anna wrote:
>> > ["Followup-To:" header set to comp.lang.lisp.]
>> > On 2009-07-10, Jon Harrop <····@ffconsultancy.com> wrote:
>> >> Here is a trivial counter example: mutate an element in an array in O(1).
>>
>> > easy.
>>
>> >http://cvs.haskell.org/Hugs/pages/libraries/base/Data-Array-ST.html
>>
>> That is O(n) with GHC.
>
> It is O(1) for unboxed arrays.

I thought it was O(1) for boxed arrays too, but I tested it and it definitely
is not.  Any idea why boxed vs. unboxed should make a difference?

      --larry
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <vfGdnUqBHsu7Q8XXnZ2dnUVZ8gednZ2d@brightview.co.uk>
Larry D'Anna wrote:
> On 2009-07-11, Johannes Laire <··············@gmail.com> wrote:
>> Jon Harrop wrote:
>>> Larry D'Anna wrote:
>>> > ["Followup-To:" header set to comp.lang.lisp.]
>>> > On 2009-07-10, Jon Harrop <····@ffconsultancy.com> wrote:
>>> >> Here is a trivial counter example: mutate an element in an array in
>>> >> O(1).
>>>
>>> > easy.
>>>
>>> >http://cvs.haskell.org/Hugs/pages/libraries/base/Data-Array-ST.html
>>>
>>> That is O(n) with GHC.
>>
>> It is O(1) for unboxed arrays.
> 
> I thought it was O(1) for boxed arrays too, but I tested it and it definitely
> is not.  Any idea why boxed vs. unboxed should make a difference?

The write barrier incurred when mutating a boxed element in an array dirties
the whole array and that causes the GC to traverse the whole array at every
minor/gen0 GC, which is periodic (separated by constant amounts of time for
the purposes of this description). That dirtying is not necessary for
unboxed elements so you do not see the asymptotic performance degradation
with them, e.g. the STUArray of Int in your example.

My question is: can you always get the O(1) for all types of array (e.g. an
array of lists)? I had assumed that inherently boxed types such as lists
required the use of STArray instead of STUArray.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Larry D'Anna
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h3g74v$vfa$1@news.eternal-september.org>
On 2009-07-11, Jon Harrop <···@ffconsultancy.com> wrote:
> Larry D'Anna wrote:
>> On 2009-07-11, Johannes Laire <··············@gmail.com> wrote:
>>> Jon Harrop wrote:
>>>> Larry D'Anna wrote:
>>>> > ["Followup-To:" header set to comp.lang.lisp.]
>>>> > On 2009-07-10, Jon Harrop <····@ffconsultancy.com> wrote:
>>>> >> Here is a trivial counter example: mutate an element in an array in
>>>> >> O(1).
>>>>
>>>> > easy.
>>>>
>>>> >http://cvs.haskell.org/Hugs/pages/libraries/base/Data-Array-ST.html
>>>>
>>>> That is O(n) with GHC.
>>>
>>> It is O(1) for unboxed arrays.
>> 
>> I thought it was O(1) for boxed arrays too, but I tested it and it definitely
>> is not.  Any idea why boxed vs. unboxed should make a difference?
>
> The write barrier incurred when mutating a boxed element in an array dirties
> the whole array and that causes the GC to traverse the whole array at every
> minor/gen0 GC, which is periodic (separated by constant amounts of time for
> the purposes of this description). That dirtying is not necessary for
> unboxed elements so you do not see the asymptotic performance degradation
> with them, e.g. the STUArray of Int in your example.

Why in the world does it have to dirty the whole array?  Is there any sane
reason for that or is this just an unfortunate deficiency of GHC?  I just can't
believe that an array update is O(n).  But my tests show updating an STArray
really is O(n) in GHC.  I'm a bit shocked.

          --larry
From: Rob Warnock
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <z-ydnR_NeaXUScbXnZ2dnUVZ_hWdnZ2d@speakeasy.net>
[Cross-posting trimmed to groups I read...]

Larry D'Anna  <·····@elder-gods.org> wrote:
+---------------
| Jon Harrop <···@ffconsultancy.com> wrote:
| > Larry D'Anna wrote:
| >> Johannes Laire <··············@gmail.com> wrote:
| >>>> That is O(n) with GHC.
| >>>
| >>> It is O(1) for unboxed arrays.
| >> 
| >> I thought it was O(1) for boxed arrays too, but I tested it and it definitely
| >> is not.  Any idea why boxed vs. unboxed should make a difference?
| >
| > The write barrier incurred when mutating a boxed element in an array dirties
| > the whole array and that causes the GC to traverse the whole array at every
| > minor/gen0 GC, which is periodic (separated by constant amounts of time for
| > the purposes of this description). That dirtying is not necessary for
| > unboxed elements so you do not see the asymptotic performance degradation
| > with them, e.g. the STUArray of Int in your example.
| 
| Why in the world does it have to dirty the whole array?  Is there any sane
| reason for that or is this just an unfortunate deficiency of GHC?  I just
| can't believe that an array update is O(n).  But my tests show updating an
| STArray really is O(n) in GHC.  I'm a bit shocked.
+---------------

If GHC were using a generational GC with a software card-marking
write barrier, then updating a single element in an array of boxed
elements should at most dirty a card's worth of elements. According to
papers I've read on this, good "card" sizes these days are ~256 bytes
[or at least somewhere in the 128-1024 byte range], which on a 32-bit
machine would mean an extra 64 elements to scan during the first GC
following the update [but possibly no extra work thereafter, if the
updated value gets promoted into the same generation as the array].
This would therefore be an O(1) cost w.r.t. the size of the array,
though O(c) for "c" the number of distinct cards updated. [That is,
doing a zillion updates to all of the elements of one card costs
no more GC overhead than doing a single update to that same card.]
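That arithmetic can be made concrete with a toy cost model. The card size, word size, and cost formula below are illustrative assumptions for a 32-bit machine, not GHC's (or any real collector's) actual numbers:

```haskell
import qualified Data.Set as Set

-- Hypothetical card-marking parameters: 256-byte cards, 4-byte words.
cardBytes, wordBytes :: Int
cardBytes = 256
wordBytes = 4

-- Which card a given element index falls into.
cardOf :: Int -> Int
cardOf i = (i * wordBytes) `div` cardBytes

-- Extra elements the GC must rescan after a batch of updates: one
-- card's worth per *distinct* dirty card, independent of array length.
rescanCost :: [Int] -> Int
rescanCost updates =
    Set.size (Set.fromList (map cardOf updates)) * (cardBytes `div` wordBytes)
```

A zillion updates inside one card cost the same as a single update there: `rescanCost [0..63]` is 64, while touching two cards doubles it: `rescanCost [0, 64]` is 128.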

So all you're saying is that GHC isn't using that kind of GC.  ;-}


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <0YmdnZJlLuvOb8bXnZ2dnUVZ8lidnZ2d@brightview.co.uk>
Larry D'Anna wrote:
> Why in the world does it have to dirty the whole array?  Is there any sane
> reason for that or is this just an unfortunate deficiency of GHC?

Of course there is a sane reason! The *only* way you can justify the
following claim is by crippling your hash table implementation:

  "Compared to a hash table, a well-implemented purely functional tree data
structure will perform competitively. You should not approach trees with
the assumption that your code will pay a performance penalty." - Real World
Haskell

Had they done a decent job implementing imperative constructs in GHC, their
impure hash table implementation would thrash all purely functional tree
implementations (just as the hash table implementations in decent languages
like OCaml and F# do). That would make purely functional programming look
like a really bad idea so they had to "work around" the problem and
O(n^2) "hash tables" are the perfect hoax.

I'm being facetious, of course. I suspect the real reason is not conspiracy
but incompetence. Whoever wrote that nonsense in Real World Haskell hadn't
done their homework properly. I bet they just benchmarked hash tables vs
trees using GHC and concluded that trees were much faster without doing any
objective tests with other languages. After all, decent benchmarking is
hardcore science.

Anyway, these long-standing performance bugs in the GHC run-time are the
least of your worries. Wait until you get on to the memory leaks in the
run-time...

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Stephen J. Bevan
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <867hyafwa4.fsf@dino.dnsalias.com>
Jon Harrop <···@ffconsultancy.com> writes:
> Of course there is a sane reason! The *only* way you can justify the
> following claim is by crippling your hash table implementation:
>
>   "Compared to a hash table, a well-implemented purely functional tree data
> structure will perform competitively. You should not approach trees with
> the assumption that your code will pay a performance penalty." - Real World
> Haskell

That's not the only way.  If one is interested in worst-case
performance then hash tables that use an O(n) mechanism to deal with
collisions (as does the version in the O'Caml standard library) can be
orders of magnitude worse than even a purely functional
(well-implemented aka balanced) tree, as your own O'Caml test programs
show.  The only way to avoid the disastrous hash table performance is
to keep n small, but for some reason you don't mention that despite
the fact that we went over this twice within the last few months in
comp.lang.functional.
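The worst case described here is easy to reproduce: a chained hash table whose hash function sends every key to the same bucket degrades into a linear scan of an association list. A deliberately pathological sketch (the type and function names are invented for illustration):

```haskell
-- A one-bucket "hash table": every key collides, so insert and lookup
-- must scan the whole chain -- the O(n) worst case under discussion.
type DegenerateTable k v = [(k, v)]

insertD :: Eq k => k -> v -> DegenerateTable k v -> DegenerateTable k v
insertD k v chain = (k, v) : filter ((/= k) . fst) chain

lookupD :: Eq k => k -> DegenerateTable k v -> Maybe v
lookupD = lookup

-- A balanced tree such as Data.Map stays O(log n) on the same keys,
-- which is the point of the "well-implemented aka balanced" caveat.
```

Replacing the per-bucket list with a balanced tree caps the damage at O(log n), but then the worst case is no better than using the tree alone.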
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <FdCdndmqoIArPsDXnZ2dnUVZ8vWdnZ2d@brightview.co.uk>
Stephen J. Bevan wrote:
> Jon Harrop <···@ffconsultancy.com> writes:
>> Of course there is a sane reason! The *only* way you can justify the
>> following claim is by crippling your hash table implementation:
>>
>>   "Compared to a hash table, a well-implemented purely functional tree
>>   data
>> structure will perform competitively. You should not approach trees with
>> the assumption that your code will pay a performance penalty." - Real
>> World Haskell
> 
> That's not the only way.  If one is interested in worst-case
> performance

If one is *only* interested in worst-case performance.

> then hash tables that use an O(n) mechanism to deal with
> collisions

If the hash table does and if the tree does not.

> (as does the version in the O'Caml standard library)

Yes.

> can be  
> orders of magnitude worse than even a purely functional
> (well-implemented aka balanced) tree

If he meant balanced when he said "well implemented".

> as your own O'Caml test programs show.

That is incorrect.

> The only way to avoid the disastrous hash table performance is
> to keep n small,

That is also incorrect.

> but for some reason you don't mention that despite 
> the fact that we went over this twice within the last few months in
> comp.lang.functional.

You took data about batch resizing in OCaml's implementation and incorrectly
extrapolated to all hash tables and even to completely different forms of
worst case. Then you also believed that all purely functional trees were
balanced. I referenced counter examples in every case. Nothing remains to
be debated.

If whoever wrote that part of Real World Haskell meant to refer only to the
worst case, only to O(n) hash tables and only to O(log n) trees then they
should have written that, very different, statement. As it is, what they
wrote is grossly misleading at best.

You get what you pay for.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Stephen J. Bevan
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <863a8yf3cf.fsf@dino.dnsalias.com>
Jon Harrop <···@ffconsultancy.com> writes:
> Stephen J. Bevan wrote:
>> Jon Harrop <···@ffconsultancy.com> writes:
> If one is *only* interested in worst-case performance.

I am.


>> as your own O'Caml test programs show.
>
> That is incorrect.

You snipped away the full context which is :-

  If one is interested in worst-case performance then hash tables that
  use an O(n) mechanism to deal with collisions (as does the version in the
  O'Caml standard library) can be orders of magnitude worse than even a
  purely functional (well-implemented aka balanced) tree, as your own
  O'Caml test programs show.

In <··············@dnsalias.com> the graphs of your O'Caml programs
(which use a hash table with O(n) worst case) are shown and the worst
case performance is obvious.  If you believe there is something
incorrect in what I wrote then perhaps you'd do me
the courtesy of giving a reason.


>> The only way to avoid the disastrous hash table performance to 
>> to keep n small,
>
> That is also incorrect.

In <································@posted.plusnet> you wrote :-

  > > 2. The maximum time for any of the map runs is < 16,000.  The maximum
  > >    run time for the hash table is > 1,000,000.
  > 
  > For large hash tables or maps, yes. That justifies your original assertion
  > that insertion time for large hash tables is often bad.

So for me to be incorrect I assume that you have some solution
other than keeping n small for O(n) hash tables to avoid the
performance issue?  Obviously if one replaces the list in the hash
table by a balanced tree then one gets O(log n) worst case but now the
worst case is no better than the balanced tree.


> You took data about batch resizing in OCaml's implementation and
> incorrectly extrapolated to all hash tables and even to completely
> different forms of worst case.

In <································@posted.plusnet> you wrote :-

  > > Can I assume at this point you contend that a hash table will give the
  > > best performance for this problem -- with or without having to deal
  > > with the issue of re-sizing.  Did you code it and measure the
  > > performance or is this based on results using a hash table in other
  > > problem domains, particularly ones where best/average case performance
  > > is measured and not worst case performance?

  > The relevant performance measurements are already graphed in both OCaml for
  > Scientists (page 80) and F# for Scientists (page 88).

Thus you provided the O'Caml code from which you extrapolated a claim
for all hash tables.  I simply used your data but looked at the worst
case performance.  I agree that the performance of the O'Caml hash
table isn't necessarily the same as all hash tables but since the
you used it to make claims about best-case performance why am I not
allowed to make a claim about worst case performance?


> Then you also believed that all purely functional trees were
> balanced.

Care to quote me writing that?  What I believed and still believe is
that "well-implemented trees" implies balanced as I stated in
<··············@dino.dnsalias.com>.


> I referenced counter examples in every case.

Citing a non-balanced tree isn't a counter example when non-balanced
trees were never an issue.


> If whoever wrote that part of Real World Haskell meant to refer only
> to the worst case, only to O(n) hash tables and only to O(log n)
> trees then they should have written that, very different,
> statement. As it is, what they wrote is grossly misleading at best.

Unlike your even handed statements about hash tables and trees?
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <c6mdnXWNBq0K3MPXnZ2dnUVZ8gSdnZ2d@brightview.co.uk>
Stephen J. Bevan wrote:
> Jon Harrop <···@ffconsultancy.com> writes:
>> Stephen J. Bevan wrote:
>>> Jon Harrop <···@ffconsultancy.com> writes:
>> If one is *only* interested in worst-case performance.
> 
> I am.

Then you are off topic.

>>> as your own O'Caml test programs show.
>>
>> That is incorrect.
> 
> You snipped away the full context which is :-
> 
>   If one is interested in worst-case performance then hash tables that
>   use an O(n) mechanism to deal with collisions (as does the version in the
>   O'Caml standard library) can be orders of magnitude worse than even a
>   purely functional (well-implemented aka balanced) tree, as your own
>   O'Caml test programs show.
> 
> In <··············@dnsalias.com> the graphs of your O'Caml programs
> (which use a hash table with O(n) worst case) are shown and the worst
> case performance is obvious.  If you believe there is something
> incorrect in what I wrote then perhaps you'd do me
> the courtesy of giving a reason.

You are describing one worst case scenario (collisions) and citing
irrelevant data about another (resizing).

>>> The only way to avoid the disastrous hash table performance is
>>> to keep n small,
>>
>> That is also incorrect.
> 
> In <································@posted.plusnet> you wrote :-
> 
>   > > 2. The maximum time for any of the map runs is < 16,000.  The
>   > > maximum
>   > >    run time for the hash table is > 1,000,000.
>   > 
>   > For large hash tables or maps, yes. That justifies your original
>   > assertion that insertion time for large hash tables is often bad.
> 
> So for me to be incorrect I assume that you have some solution
> other than keeping n small for O(n) hash tables to avoid the
> performance issue?  Obviously if one replaces the list in the hash
> table by a balanced tree then one gets O(log n) worst case but now the
> worst case is no better than the balanced tree.

Yes.

>> You took data about batch resizing in OCaml's implementation and
>> incorrectly extrapolated to all hash tables and even to completely
>> different forms of worst case.
> 
> In <································@posted.plusnet> you wrote :-
> 
>   > > Can I assume at this point you contend that a hash table will give
>   > > the best performance for this problem -- with or without having to
>   > > deal
>   > > with the issue of re-sizing.  Did you code it and measure the
>   > > performance or is this based on results using a hash table in other
>   > > problem domains, particularly ones where best/average case
>   > > performance is measured and not worst case performance?
> 
>   > The relevant performance measurements are already graphed in both
>   > OCaml for Scientists (page 80) and F# for Scientists (page 88).
> 
> Thus you provided the O'Caml code from which you extrapolated a claim
> for all hash tables.

What claim have I made about all hash tables?

> I simply used your data but looked at the worst 
> case performance,

Which worst case performance?

> I agree that the performance of the O'Caml hash 
> table isn't necessarily the same as all hash tables but since
> you used it to make claims about best-case performance why am I not
> allowed to make a claim about worst case performance?

Logic.

>> Then you also believed that all purely functional trees were
>> balanced.
> 
> Care to quote me writing that?  What I believed and still believe is
> that "well-implemented trees" implies balanced as I stated in
> <··············@dino.dnsalias.com>.

Splay trees?

>> I referenced counter examples in every case.
> 
> ...non-balanced trees were never an issue.

Not true.

>> If whoever wrote that part of Real World Haskell meant to refer only
>> to the worst case, only to O(n) hash tables and only to O(log n)
>> trees then they should have written that, very different,
>> statement. As it is, what they wrote is grossly misleading at best.
> 
> Unlike your even handed statements about hash tables and trees?

Stop wasting my time.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Stephen J. Bevan
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <86y6qpe5ae.fsf@dino.dnsalias.com>
Jon Harrop <···@ffconsultancy.com> writes:
> Stephen J. Bevan wrote:
>> Jon Harrop <···@ffconsultancy.com> writes:
>>> Stephen J. Bevan wrote:
>>>> Jon Harrop <···@ffconsultancy.com> writes:
>>> If one is *only* interested in worst case performance.
>> 
>> I am.
>
> Then you are off topic.

The topic is your claim that :-

> Of course there is a sane reason! The *only* way you can justify the
> following claim is by crippling your hash table implementation:
>
>  "Compared to a hash table, a well-implemented purely functional tree data
> structure will perform competitively. You should not approach trees with
> the assumption that your code will pay a performance penalty." - Real World
> Haskell

One does not have to cripple the hash table to justify the claim, your
own O'Caml test program is justification enough.


>> In <··············@dnsalias.com> the graphs of your O'Caml programs
>> (which use a hash table with O(n) worst case) are shown and the worst
>> case performance is obvious.  If you believe there is something
>> incorrect in what I wrote then perhaps you'd do me
>> the courtesy of giving a reason.
>
> You are describing one worst case scenario (collisions) and citing
> irrelevant data about another (resizing).

I was quite clear in my response that I'm interested in worst case
performance so of course I'm going to pick scenarios that show worst
case performance.  If you don't want to consider worst case
performance then you should take your own advice about writing
misleading claims.  As to irrelevant data, it is from your test
program.  If the results are irrelevant then why did you point to
them?  If you want to argue that some parts of the results are
meaningful and some aren't then re-write your test so that it doesn't
display any negative results.


>> So for me to be incorrect I assume that you have some other
>> solution than keeping n small for O(n) hash tables to avoid the
>> performance issue?  Obviously if one replaces the list in the hash
>> table by a balanced tree then one gets O(log n) worst case but now the
>> worst case is no better than the balanced tree.
>
> Yes.

Is that yes you have another way, or yes you use balanced trees?  If
the former, that was a stunningly unenlightening answer, and if the
latter then why didn't you use that when writing your O'Caml test
programs that you claimed as evidence of good performance of hash
tables?


>>> You took data about batch resizing in OCaml's implementation and
>>> incorrectly extrapolated to all hash tables and even to completely
>>> different forms of worst case.
>> 
>> In <································@posted.plusnet> you wrote :-
>> 
>>   > > Can I assume at this point you contend that a hash table will give
>>   > > the best performance for this problem -- with or without having to
>>   > > deal
>>   > > with the issue of re-sizing.  Did you code it and measure the
>>   > > performance or is this based on results using a hash table in other
>>   > > problem domains, particularly ones where best/average case
>>   > > performance is measured and not worst case performance?
>> 
>>   > The relevant performance measurements are already graphed in both
>>   > OCaml for Scientists (page 80) and F# for Scientists (page 88).
>> 
>> Thus you provided the O'Caml code from which you extrapolated a claim
>> for all hash tables.
>
> What claim have I made about all hash tables?

Back in April in <································@posted.plusnet> you
wrote :-

> Hash tables are one of the most important data structures and are the only
> performant way to implement a dictionary in most cases.

That led to the above quote where rather than argue about "most" I
challenged you about one specific problem and you were free to choose
the hash table implementation.  Instead you quoted the results of your
tests as evidence.  I can hardly be faulted for pointing out the worst
case performance of the hash table that you claimed showed that hash
tables would give the best performance for the specified problem.
You then modified your approach to a trie of hash tables but with a
trie depth of > 20 and eventually in
<································@posted.plusnet> you wrote :-

> For IPv6, I agree that a balanced tree looks best (better than any
> kind of trie or hash table).

which speaks for itself. 


>> I simply used your data but looked at the worst 
>> case performance,
>
> Which worst case performance?

The performance of insert/lookup/delete as n varies.


>> I agree that the performance of the O'Caml hash 
>> table isn't necessarily the same as all hash tables but since
>> you used it to make claims about best-case performance why am I not
>> allowed to make a claim about worst case performance?
>
> Logic.

"Logic" isn't an argument.  Either we can both extrapolate results for
_all_ hash tables from your test or neither of us can.


>>> Then you also believed that all purely functional trees were
>>> balanced.
>> 
>> Care to quote me writing that?  What I believed and still believe is
>> that "well-implemented trees" implies balanced as I stated in
>> <··············@dino.dnsalias.com>.
>
> Splay trees?

What do splay trees have to do with the above?  They aren't balanced.


>>> I referenced counter examples in every case.
>> 
>> ...non-balanced trees were never an issue.
>
> Not true.

I should have written "not an issue for me".  Clearly you think they are
an issue because you apparently attached no meaning to "well
implemented" and so assumed any tree was valid.  That was all covered
in <································@brightview.co.uk> and my response
<··············@dino.dnsalias.com>.  My news server does not register
any followup from you so quite why you are bringing it up again isn't
clear to me.


>>> If whoever wrote that part of Real World Haskell meant to refer only
>>> to the worst case, only to O(n) hash tables and only to O(log n)
>>> trees then they should have written that, very different,
>>> statement. As it is, what they wrote is grossly misleading at best.
>> 
>> Unlike your even handed statements about hash tables and trees?
>
> Stop wasting my time.

If you aren't willing to defend what you write then the simple answer
is to not write anything.
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <j_adnZU86qcmPsPXnZ2dnUVZ8t2dnZ2d@brightview.co.uk>
Stephen J. Bevan wrote:
> Jon Harrop <···@ffconsultancy.com> writes:
>> Stephen J. Bevan wrote:
>>> Jon Harrop <···@ffconsultancy.com> writes:
>>>> Stephen J. Bevan wrote:
>>>>> Jon Harrop <···@ffconsultancy.com> writes:
>>>> If one is *only* interested in worst case performance.
>>> 
>>> I am.
>>
>> Then you are off topic.
> 
> The topic is your claim that :-
> 
>> Of course there is a sane reason! The *only* way you can justify the
>> following claim is by crippling your hash table implementation:
>>
>>  "Compared to a hash table, a well-implemented purely functional tree
>>  data
>> structure will perform competitively. You should not approach trees with
>> the assumption that your code will pay a performance penalty." - Real
>> World Haskell
> 
> One does not have to cripple the hash table to justify the claim,

Consider the task of filling your dictionary (hash table or purely
functional tree) with ten million machine-precision integers. The only way
you can possibly make a purely functional tree "perform competitively" is
by crippling the hash table.

That is a counter example to the statement made in Real World Haskell, i.e.
their statement was factually incorrect.

> your own O'Caml test program is justification enough.

My OCaml test program does not undermine the above counter example.

>>> In <··············@dnsalias.com> the graphs of your O'Caml programs
>>> (which use a hash table with O(n) worst case) are shown and the worst
>>> case performance is obvious.  If you believe there is something
>>> incorrect in what I wrote then perhaps you'd do me
>>> the courtesy of giving a reason.
>>
>> You are describing one worst case scenario (collisions) and citing
>> irrelevant data about another (resizing).
> 
> I was quite clear in my response that I'm interested in worst case
> performance so of course I'm going to pick scenarios that show worst
> case performance.

That's fine but you must clarify which worst case you are referring to.

> If you don't want to consider worst case 
> performance then you should take your own advice about writing
> misleading claims.

I am more than happy to change the subject completely and talk about worst
case performance instead of the statement from Real World Haskell which did
not mention worst case performance.

> As to irrelevant data, it is from your test 
> program.  If the results are irrelevant then why did you point to
> them?

I did not point to them, you did.

> If you want to argue that some parts of the results are 
> meaningful and some aren't then re-write your test so that it doesn't
> display any negative results.

Negative results?

>>> So for me to be incorrect I assume that you have some other
>>> solution than keeping n small for O(n) hash tables to avoid the
>>> performance issue?  Obviously if one replaces the list in the hash
>>> table by a balanced tree then one gets O(log n) worst case but now the
>>> worst case is no better than the balanced tree.
>>
>> Yes.
> 
> Is that yes you have another way, or yes you use balanced trees?

Yes, you can implement the buckets of the hash table as mutable balanced
trees to get O(log n) worst case performance.

> If 
> the former, that was a stunningly unenlightening answer, and if the
> latter then why didn't you use that when writing your O'Caml test
> programs that you claimed as evidence of good performance of hash
> tables?

Firstly, I never claimed that. Secondly, my OCaml test was completely
unrelated to this worst case.

>>>> You took data about batch resizing in OCaml's implementation and
>>>> incorrectly extrapolated to all hash tables and even to completely
>>>> different forms of worst case.
>>> 
>>> In <································@posted.plusnet> you wrote :-
>>> 
>>>   > > Can I assume at this point you contend that a hash table will give
>>>   > > the best performance for this problem -- with or without having to
>>>   > > deal
>>>   > > with the issue of re-sizing.  Did you code it and measure the
>>>   > > performance or is this based on results using a hash table in
>>>   > > other problem domains, particularly ones where best/average case
>>>   > > performance is measured and not worst case performance?
>>> 
>>>   > The relevant performance measurements are already graphed in both
>>>   > OCaml for Scientists (page 80) and F# for Scientists (page 88).
>>> 
>>> Thus you provided the O'Caml code from which you extrapolated a claim
>>> for all hash tables.
>>
>> What claim have I made about all hash tables?
> 
> Back in April in <································@posted.plusnet> you
> wrote :-
> 
>> Hash tables are one of the most important data structures and are the
>> only performant way to implement a dictionary in most cases.

That is not a claim about all hash tables.

> That led to the above quote where rather than argue about "most" I
> challenged you about one specific problem and you were free to choose
> the hash table implementation.  Instead you quoted the results of your
> tests as evidence.  I can hardly be faulted for pointing out the worst
> case performance of the hash table that you claimed showed that hash
> tables would give the best performance for the specified problem.

That never happened.

> You then modified your approach to a trie of hash tables but with a
> trie depth of > 20 and eventually in
> <································@posted.plusnet> you wrote :-
> 
>> For IPv6, I agree that a balanced tree looks best (better than any
>> kind of trie or hash table).
> 
> which speaks for itself.

Irrelevant. I was talking about impure balanced trees there and am talking
about pure "well implemented" trees here.

>>> I simply used your data but looked at the worst
>>> case performance,
>>
>> Which worst case performance?
> 
> The performance of insert/lookup/delete as n varies.

Which one?

>>> I agree that the performance of the O'Caml hash
>>> table isn't necessarily the same as all hash tables but since
>>> you used it to make claims about best-case performance why am I not
>>> allowed to make a claim about worst case performance?
>>
>> Logic.
> 
> "Logic" isn't an argument.  Either we can both extrapolate results for
> _all_ hash tables from your test or neither of us can.

My concrete example constitutes a counter example. Yours does not.

>>>> Then you also believed that all purely functional trees were
>>>> balanced.
>>> 
>>> Care to quote me writing that?  What I believed and still believe is
>>> that "well-implemented trees" implies balanced as I stated in
>>> <··············@dino.dnsalias.com>.
>>
>> Splay trees?
> 
> What do splay trees have to do with the above?  They aren't balanced.

Only for some definition of "balanced" that makes your argument circular.

>>>> I referenced counter examples in every case.
>>> 
>>> ...non-balanced trees were never an issue.
>>
>> Not true.
> 
> I should have written "not an issue for me". Clearly you think they are 
> an issue because you apparently attached no meaning to "well
> implemented" and so assumed any tree was valid.

I did not presume that Real World Haskell meant "balanced" when it
said "well implemented".

>>>> If whoever wrote that part of Real World Haskell meant to refer only
>>>> to the worst case, only to O(n) hash tables and only to O(log n)
>>>> trees then they should have written that, very different,
>>>> statement. As it is, what they wrote is grossly misleading at best.
>>> 
>>> Unlike your even handed statements about hash tables and trees?
>>
>> Stop wasting my time.
> 
> If you aren't willing to defend what you write then the simple answer
> is to not write anything.

You are trying to get me to defend things that I have not written.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Stephen J. Bevan
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <86prbzrf9i.fsf@dino.dnsalias.com>
Jon Harrop <···@ffconsultancy.com> writes:
> Stephen J. Bevan wrote:
>> Jon Harrop <···@ffconsultancy.com> writes:
>>> Stephen J. Bevan wrote:
>>>> Jon Harrop <···@ffconsultancy.com> writes:
>> The topic is your claim that :-
>> 
>>> Of course there is a sane reason! The *only* way you can justify the
>>> following claim is by crippling your hash table implementation:
>>>
>>>  "Compared to a hash table, a well-implemented purely functional tree
>>>  data
>>> structure will perform competitively. You should not approach trees with
>>> the assumption that your code will pay a performance penalty." - Real
>>> World Haskell
>> 
>> One does not have to cripple the hash table to justify the claim,
>
> Consider the task of filling your dictionary (hash table or purely
> functional tree) with ten million machine-precision integers. The only way
> you can possibly make a purely functional tree "perform competitively" is
> by crippling the hash table.

Rather than consider it I decided to measure it using a red-black tree :-

1  tree is 34x  faster than hash using fixed size table of size N/1000
2  tree is 3x   faster than hash using fixed size table of size N/100
3  tree is 2.3x slower than hash using fixed size table of size N/10
4  tree is 5-7x slower than hash using fixed size table of size N
5  tree is 3x   slower than hash using resizing table with start size of N/10

Note the quote is "perform competitively" which is why I tested
various values to show the best case and worst case performance.  The
worst case here is meant to simulate the type of problems with real
world programs reported in http://www.cs.rice.edu/~scrosby/hash/.

If one concentrates on best case performance then whether trees
"perform competitively" depends on whether one considers 2-7x
competitive.

If one concentrates on worst case performance then being 3-34x faster
is clearly competitive.

Note I wrote the tests in C to avoid GC issues but while the actual
ratios may vary a bit based on the language or the exact hash function
chosen (I used the hash that O'Caml uses for doubles) it is always
possible to produce an arbitrarily bad result for a hash table by
inducing lots of collisions as I did in the first test.  Re-sizing or
using a balanced tree rather than a linked list can cure the problem, but
the former isn't always possible (locking the table too long) and the
latter uses a balanced tree which would contradict your claim.
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <aPadnRNOfuVr3f3XnZ2dnUVZ8u-dnZ2d@brightview.co.uk>
Stephen J. Bevan wrote:
> Jon Harrop <···@ffconsultancy.com> writes:
>> Stephen J. Bevan wrote:
>>> Jon Harrop <···@ffconsultancy.com> writes:
>>>> Stephen J. Bevan wrote:
>>>>> Jon Harrop <···@ffconsultancy.com> writes:
>>> The topic is your claim that :-
>>> 
>>>> Of course there is a sane reason! The *only* way you can justify the
>>>> following claim is by crippling your hash table implementation:
>>>>
>>>>  "Compared to a hash table, a well-implemented purely functional
>>>>  tree... 
>
> ...Note I wrote the tests in C...

Can you give us the code? I am particularly interested in your purely
functional tree implementation in C.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Keith H Duggar
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <12ba1c6a-b910-445d-8863-a7bd1987a207@h18g2000yqj.googlegroups.com>
On Jul 17, 6:24 am, Jon Harrop <····@ffconsultancy.com> wrote:
> Stephen J. Bevan wrote:
> > Jon Harrop <····@ffconsultancy.com> writes:
> >> Stephen J. Bevan wrote:
> >>> Jon Harrop <····@ffconsultancy.com> writes:
> >>>> Stephen J. Bevan wrote:
> >>>>> Jon Harrop <····@ffconsultancy.com> writes:
> >>> The topic is your claim that :-
>
> >>>> Of course there is a sane reason! The *only* way you can justify the
> >>>> following claim is by crippling your hash table implementation:
>
> >>>>  "Compared to a hash table, a well-implemented purely functional
> >>>>  tree...
>
> > ...Note I wrote the tests in C...
>
> Can you give us the code? I am particularly interested in your purely
> functional tree implementation in C.

Yes me too. Stephen, please provide the code. I very much want
to see the purely functional tree implementation in C as well.
Thanks in advance!

KHD
From: Paul Rubin
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <7xhbxbdqd7.fsf@ruckus.brouhaha.com>
Keith H Duggar <······@alum.mit.edu> writes:
> Yes me too. Stephen, please provide the code. I very much want
> to see the purely functional tree implementation in C as well.
> Thanks in advance!

http://www.google.com/search?q=red-black+tree+c

works for me.
From: Keith H Duggar
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <42097079-b1cb-4064-a971-6b1f77243dcc@p15g2000vbl.googlegroups.com>
On Jul 17, 4:08 pm, Paul Rubin <·············@NOSPAM.invalid> wrote:
> Keith H Duggar <······@alum.mit.edu> writes:
>
> > Yes me too. Stephen, please provide the code. I very much want
> > to see the purely functional tree implementation in C as well.
> > Thanks in advance!
>
> http://www.google.com/search?q=red-black+tree+c
>
> works for me.

Your post indicates that you either

1) ignored the "purely functional" qualifier

2) incorrectly interpreted "purely functional" to mean
   "functions properly"

3) you believe that those implementations are "purely
   functional"

4) you believe that any "purely functional" implementation
   will do if we happen to find one

5) believe we are morons

Which is it? 1) would mean you need to read more carefully and
try to genuinely understand what you read before responding.
In the case of 2), please understand that "purely functional"
means "functional" in the comp.lang.functional programming
sense. If 3) then I would suggest you look again at the search
results to see they are not purely functional or post a specific
link to what you consider a "purely functional" implementation.
As to 4), we need Stephen's implementation since he made a claim
specifically about his implementation. 5) is unproductive and
would show you to be acting like a jerk.

I hope it is 2) or 4) but I'm not very optimistic.

KHD
From: Alan Mackenzie
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h3sjk0$1tsb$1@colin2.muc.de>
In comp.lang.haskell Keith H Duggar <······@alum.mit.edu> wrote:
> On Jul 17, 4:08 pm, Paul Rubin <·············@NOSPAM.invalid> wrote:
>> Keith H Duggar <······@alum.mit.edu> writes:

>> > Yes me too. Stephen, please provide the code. I very much want
>> > to see the purely functional tree implementation in C as well.
>> > Thanks in advance!

>> http://www.google.com/search?q=red-black+tree+c

>> works for me.

> Your post indicates that you either

> 1) ignored the "purely functional" qualifier
> 
> 2) incorrectly interpreted "purely functional" to mean
>   "functions properly"

What, exactly, does "purely functional" mean in the context of functional
programming languages?  What would a piece of haskell have to look like
to fail to be "purely functional"?  Is it possible at all for a piece of
C to be "purely functional", and if so, how?

> 3) you believe that those implementations are "purely
>   functional"

> 4) you believe that any "purely functional" implementation
>   will do if we happen to find one

[ .... ]

> Which is it? 1) would mean you need to read more carefully and
> try to genuinely understand what what you read before responding.
> In the case of 2), please understand that "purely functional"
> means "functional" in the comp.lang.functional programming
> sense.

Maybe I'm misunderstanding, but I always thought that "functional"
programming was more of a stylistic emphasis, rather than a rigid
definition - calculating functions: good - setting variables and
specifying procedural aspects: considered harmful.  If so, there won't
be any rigid definition.

[ .... ]

> KHD

-- 
Alan Mackenzie (Nuremberg, Germany).
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <56OdnSFQu7ch7P_XnZ2dnUVZ8jpi4p2d@brightview.co.uk>
Alan Mackenzie wrote:
> Maybe I'm misunderstanding, but I always thought that "functional"
> programming was more of a stylistic emphasis, rather than a rigid
> definition - calculating functions: good - setting variables and
> specifying procedural aspects: considered harmful.  If so, there won't
> be any rigid definition.

Read "Purely functional data structures" by Chris Okasaki.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Alan Mackenzie
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h3vpus$8ck$1@colin2.muc.de>
In comp.lang.haskell Jon Harrop <···@ffconsultancy.com> wrote:
> Alan Mackenzie wrote:
>> Maybe I'm misunderstanding, but I always thought that "functional"
>> programming was more of a stylistic emphasis, rather than a rigid
>> definition - calculating functions: good - setting variables and
>> specifying procedural aspects: considered harmful.  If so, there won't
>> be any rigid definition.

> Read "Purely functional data structures" by Chris Okasaki.

Hmm.  By some strange coincidence, that book doesn't seem to be on my
bookshelf.

Quick excursion via Google.  It seems what Dr. Okasaki is talking about
in his texts is not data structures which, in some sense, are purely
functional - it's about using bog standard data structures in languages
(such as ML and Haskell) which are functional, to whatever degree of
purity.  Compressing phrases with prepositions and relative pronouns into
sequences of adjectives and nouns often doesn't work too well in English
(as contrasted to German, where it's usually OK).

Which has got precisely what to do with the recent flame fest?  Probably
not a lot.  Which might be generically similar to what the flame fest had
to do with.  Or more precisely, people using arcane terminology talking
crossly at cross purposes with each other, because they tacitly disagree
about what the words mean.  Maybe.

Now given the disadvantages of programming in C (I'm sure we'd agree
they're substantial), why would anybody want to add the additional
handicap of doing things "purely functionally" in C?  If you're going to
accept the shackles of "pure functionalness", you'd be crazy to do it
in a language where you don't get back more than it costs.

-- 
Alan Mackenzie (Nuremberg, Germany).
From: Mark T.B. Carroll
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <86ab3049i1.fsf@ixod.org>
Alan Mackenzie <···@muc.de> writes:

> Quick excursion via Google.  It seems what Dr. Okasaki is talking about
> in his texts is not data structures which, in some sense, are purely
> functional - it's about using bog standard data structures in languages
> (such as ML and Haskell) which are functional, to whatever degree of
> purity.

The algorithms for operating upon the data structures are the purely
functional part. (The fine details of the data structure design may vary
depending on how the algorithms need to work with them, of course.)

(snip)
> Which has got precisely what to do with the recent flame fest?

The question of whether the purely functional tree implementation in C really
was using purely functional algorithms. A tree implementation tends to
include algorithms for finding, inserting, removing data, etc.

> Now given the disadvantages of programming in C (I'm sure we'd agree
> they're substantial), why would anybody want to add the additional
> handicap of doing things "purely functionally" in C?

To test the conjecture that, compared to a hash table, a
well-implemented purely functional tree data structure will perform
competitively?

Mark
From: Paul Rubin
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <7xy6qk6lo5.fsf@ruckus.brouhaha.com>
Alan Mackenzie <···@muc.de> writes:
> > Read "Purely functional data structures" by Chris Okasaki.
> Hmm.  By some strange coincidence, that book doesn't seem to be on my
> bookshelf.
> 
> Quick excursion via Google.  It seems what Dr. Okasaki is talking about
> in his texts is not data structures which, in some sense, are purely
> functional - it's about using bog standard data structures in languages
> (such as ML and Haskell) which are functional, to whatever degree of
> purity. 

He is talking about what are sometimes called "persistent data
structures" (google that).  It means structures that efficiently
support operations like "update" without mutation, i.e. by creating a
new structure that shares most of its content with the old.

The book is very good, and a lot of the stuff in it is in the author's
PhD thesis, which might be a good place to get started:

  http://www-2.cs.cmu.edu/~rwh/theses/okasaki.pdf

Related, here is a good article about the benefits of ditching a
mutable structure for a functional one in a particular program: "An
Applicative Control-Flow Graph Based on Huet's Zipper" by Norman
Ramsey and Joao Dias,

  http://www.cs.tufts.edu/~nr/pubs/zipcfg-abstract.html .
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <6amdnRMctNnqdv7XnZ2dnUVZ8r2dnZ2d@brightview.co.uk>
Alan Mackenzie wrote:
> In comp.lang.haskell Jon Harrop <···@ffconsultancy.com> wrote:
>> Alan Mackenzie wrote:
>>> Maybe I'm misunderstanding, but I always thought that "functional"
>>> programming was more of a stylistic emphasis, rather than a rigid
>>> definition - calculating functions: good - setting variables and
>>> specifying procedural aspects: considered harmful.  If so, there won't
>>> be any rigid definition.
> 
>> Read "Purely functional data structures" by Chris Okasaki.
> 
> Hmm.  By some strange coincidence, that book doesn't seem to be on my
> bookshelf.

Buy it. Read it.

> Quick excursion via Google.  It seems what Dr. Okasaki is talking about
> in his texts is not data structures which, in some sense, are purely
> functional - it's about using bog standard data structures...

Purely functional data structures have properties like persistence that are
not bog standard.

> in languages 
> (such as ML and Haskell) which are functional, to whatever degree of
> purity.  Compressing phrases with prepositions and relative pronouns into
> sequences of adjectives and nouns often doesn't work too well in English
> (as contrasted to German, where it's usually OK).
> 
> Which has got precisely what to do with the recent flame fest?  Probably
> not a lot.  Which might be generically similar to what the flame fest had
> to do with.  Or more precisely, people using arcane terminology talking
> crossly at cross purposes with each other, because they tacitly disagree
> about what the words mean.  Maybe.
> 
> Now given the disadvantages of programming in C (I'm sure we'd agree
> they're substantial), why would anybody want to add the additional
> handicap of doing things "purely functionally" in C?  If you're going to
> accept the shackles of "pure functionalness", you'd be crazy to do it
> in a language where you don't get back more than it costs.

Yes. That is precisely why I do not believe Stephen J Bevan's claims about
his purely functional tree implementations in C that allegedly contradict
my criticism of Real World Haskell.

Given that he hasn't provided the implementations for us to verify his
claims I think it is obvious by now that he was trying to fake it (just
like he did last time around).

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Stephen J. Bevan
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <868wikdjfw.fsf@dino.dnsalias.com>
Jon Harrop <···@ffconsultancy.com> writes:
> Yes. That is precisely why I do not believe Stephen J Bevan's claims
> about his purely functional tree implementations in C that allegedly
> contradict my criticism of Real World Haskell.

There is no reason you should believe without testing it yourself
(something you could do without my C code) but then ...

> Given that he hasn't provided the implementations for us to verify
> his claims I think it is obvious by now that he was trying to fake it

There is no reason to claim someone is trying to fake it because the
weather was too nice to be stuck inside reading Usenet.  The code has
been posted in <··············@dino.dnsalias.com>.  

> (just like he did last time around).

And precisely what does that refer to?
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <CdydnWsMJsnCmfzXnZ2dnUVZ8nKdnZ2d@brightview.co.uk>
Paul Rubin wrote:
> Keith H Duggar <······@alum.mit.edu> writes:
>> Yes me too. Stephen, please provide the code. I very much want
>> to see the purely functional tree implementation in C as well.
>> Thanks in advance!
> 
> http://www.google.com/search?q=red-black+tree+c
> 
> works for me.

That looks imperative and not purely functional to me.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Stephen J. Bevan
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <86ljmkdjvo.fsf@dino.dnsalias.com>
Jon Harrop <···@ffconsultancy.com> writes:
> Can you give us the code? I am particularly interested in your purely
> functional tree implementation in C.

The code is mundane and I would have used email but at least one other
person asked for it so it is attached at the end of this message.  The
following puts the code in some context :-

Claim I:

  Compared to a hash table, a well-implemented purely functional tree
  data structure will perform competitively.  You should not approach
  trees with the assumption that your code will pay a performance
  penalty.

    Real World Haskell page 289.

Some questions :-

Q1. What does "well-implemented" mean?

Q2. What does "purely functional" mean?

Q3. What does "perform competitively" mean?  Does it mean memory, time
    or both?  Does it mean best case, average case, worst case?
    Average of all three?  What does "competitively" mean?  Within an
    order of magnitude?  Within a factor of 2?  Within a factor of
    1.1?

Q4. Is the claim meant to be true for all values of n?

I think a strong hint at the answer to Q1 is the following which
precedes Claim I on page 289 :-

  Implementing a naive tree type is particularly easy in Haskell.
  Beyond that, more useful tree types are also unusually easy to
  implement.  Self-balancing structures, such as red-black trees, have
  struck fear into generations of undergraduate computer science
  students, because the balancing algorithms are notoriously hard to
  get right.

  ...

  Haskell's standard libraries provide two collection types that are
  implemented using balanced trees behind the scenes.

Thus a reasonable interpretation of "well-implemented" is a balanced
tree, such as a red-black tree.  A basic binary tree would not be
considered "well-implemented".  Whether a splay tree with its O(log n)
amortized time counts as "well-implemented" is not so clear.  I'm
going to stick to considering trees that don't need the "amortized"
qualifier.

Regarding Q2, given the context of the claim is a book on Haskell then
a likely interpretation of "purely functional" is "persistent".
However, that's not the only possible interpretation.  If one is
programming in a functional language with linear/unique types
(cf. Clean) then the compiler can (though may not :-<) make use of
linearity/uniqueness to perform update in place even though the
algorithm is written in a persistent manner.  The result is much less
memory allocation.

There is no hint for Q3 but for the sake of argument we'll concentrate
on time.  There is no hint as to best, average, or worst case
performance so we'll look at best and worst.  There is no hint on
"competitively" so we'll have to assume there is some factor F and
then try to agree if F is small enough to qualify as "competitively".

There is no hint for Q4 so we either have :-

Strong Claim I

  forall n in Z . perf(hash(n)) <= perf(tree(n))*F

Weak Claim I

  exists (x,y) in (ZxZ) . forall n in {x..y} . perf(hash(n)) <= perf(tree(n))*F

Obviously if F is large enough the strong claim is true.  However, it
is then a meaningless claim.  Similarly if we go with the weak claim
and find that the set {x..y} only has a few elements then that is also
a meaningless claim.  Thus we want the cardinality of {x..y} to be
such that the weak claim applies to practical applications.
Unfortunately that pushes the issue to a definition of "practical
applications".  Page 289 uses 10,000 nodes in an example about
sharing; whether or not that was meant to be a "practical application"
size, it would be somewhat confusing to use that example if it was
not.  So let's assume the weak claim fails unless it is true for at
least {1..10000}, and preferably an even larger range.

This brings us to :-

Claim II

  The *only* way you can justify [Claim I] is by crippling your hash
  table implementation.

    comp.lang.functional <································@brightview.co.uk>

I'm going to assume this refers to the weak claim rather than the
strong claim.  If that's not the case then there is no need to go any
further.

To test weak claim I and hence II I tested :-

  ht   - fixed size hash table using a linked list to handle collision,

  prb  - a monomorphic persistent red black tree using reference
         counting for memory management.

  pavl - a monomorphic persistent AVL tree using reference counting
         for memory management.

  rb   - a (polymorphic) red black tree (rb) with no reference counting.
         This is an approximation of what a good compiler for a
         language with linear/unique typing _may_ be able to generate.
         
In all cases I used C to avoid any effect GC may have on the results.
I also avoided using malloc(3) so that its quality of implementation
could not affect the results.

I've omitted the re-sizing hash table because :-

a) most of the hash table tests are done with the fixed size hash table
   having exactly the same number of buckets as data points (thus if
   the hash is perfect there would be only one value in each bucket)
   and so the resizing version actually performs slightly worse due to
   the extra work done when the table is re-sized.

b) I want to avoid any possible complaint that re-sizing is the cause
   of any performance issues with the hash table.

For reference the tests were done on a Lenovo T60 :-

  $ cat /proc/cpuinfo | egrep "model name|MHz"
  model name	: Intel(R) Core(TM)2 CPU         T5500  @ 1.66GHz
  cpu MHz		: 1000.000
  model name	: Intel(R) Core(TM)2 CPU         T5500  @ 1.66GHz
  cpu MHz		: 1000.000

  $ cat /proc/meminfo | grep MemTotal
  MemTotal:        1017404 kB
  $ cc -v 2>&1 | tail -1
  gcc version 4.3.3 (Ubuntu 4.3.3-5ubuntu4) 

  $ cc -O3 john.c -o john
  $ ./john > r

I've put john.c at the end of the message since it is quite long
and its relevance is only that it will allow a third party to attempt
to reproduce the results I obtained; the important part is the results
themselves.  But before we look at any results I'd like to ask anyone
who believes in imperative languages which have "predictable
performance" to state what a graph of the results will look like for
values of n varying from 100 to 10,000,000?  Will the hash table
results be linear in n, polynomial in n, exponential in n?  Same
question for the tree results?  Note I'm not asking for a linear
constant or the polynomial degree just whether the results will fall
into one of three very broad categories.  If you can predict a
constant or degree all the better.

So, keeping your predictions in mind, here are the results I get :-

  $ cat r
  ht	       100        100 0.16
  rb	       100          0 0.23
  prb	       100          0 0.58
  pavl	       100          0 0.63
  ht	       250        250 0.24
  rb	       250          0 0.39
  prb	       250          0 0.143
  pavl	       250          0 0.158
  ht	       500        500 0.36
  rb	       500          0 0.71
  prb	       500          0 0.304
  pavl	       500          0 0.339
  ht	       750        750 0.52
  rb	       750          0 0.106
  prb	       750          0 0.485
  pavl	       750          0 0.540
  ht	      1000       1000 0.66
  rb	      1000          0 0.142
  prb	      1000          0 0.633
  pavl	      1000          0 0.747
  ht	      1000        100 0.81
  ht	      1000         10 0.188
  ht	      1000          1 0.1281
  ht	      2500       2500 0.158
  rb	      2500          0 0.374
  prb	      2500          0 0.1820
  pavl	      2500          0 0.2135
  ht	      5000       5000 0.314
  rb	      5000          0 0.817
  prb	      5000          0 0.3918
  pavl	      5000          0 0.4659
  ht	      7500       7500 0.474
  rb	      7500          0 0.1318
  prb	      7500          0 0.6254
  pavl	      7500          0 0.7344
  ht	     10000      10000 0.634
  rb	     10000          0 0.1827
  prb	     10000          0 0.8720
  pavl	     10000          0 0.10372
  ht	     10000       1000 0.1003
  ht	     10000        100 0.4892
  ht	     10000         10 0.42870
  ht	     25000      25000 0.1606
  rb	     25000          0 0.5273
  prb	     25000          0 0.26102
  pavl	     25000          0 0.29457
  ht	     50000      50000 0.3347
  rb	     50000          0 0.11625
  prb	     50000          0 0.55170
  pavl	     50000          0 0.64388
  ht	     75000      75000 0.5174
  rb	     75000          0 0.18590
  prb	     75000          0 0.86948
  pavl	     75000          0 0.102214
  ht	    100000     100000 0.7993
  rb	    100000          0 0.26667
  prb	    100000          0 0.123833
  pavl	    100000          0 0.142501
  ht	    100000      10000 0.12723
  ht	    100000       1000 0.71383
  ht	    100000        100 0.630548
  ht	    250000     250000 0.34738
  rb	    250000          0 0.108903
  prb	    250000          0 0.431491
  pavl	    250000          0 0.476282
  ht	    500000     500000 0.87062
  rb	    500000          0 0.319648
  prb	    500000          0 1.162610
  pavl	    500000          0 1.256194
  ht	    750000     750000 0.135599
  rb	    750000          0 0.576284
  prb	    750000          0 2.17145
  pavl	    750000          0 2.114378
  ht	   1000000    1000000 0.186973
  rb	   1000000          0 0.854929
  prb	   1000000          0 2.964288
  pavl	   1000000          0 3.75486
  ht	   1000000     100000 0.510172
  ht	   1000000      10000 4.988925
  ht	   1000000       1000 50.173156
  ht	   2500000    2500000 0.500639
  rb	   2500000          0 2.804329
  prb	   2500000          0 9.243149
  pavl	   2500000          0 9.639243
  ht	   5000000    5000000 1.24295
  rb	   5000000          0 6.887620
  prb	   5000000          0 21.45345
  pavl	   5000000          0 22.77399
  ht	   7500000    7500000 1.570430
  rb	   7500000          0 10.801139
  prb	   7500000          0 33.974976
  pavl	   7500000          0 35.311648
  ht	  10000000   10000000 2.129251
  rb	  10000000          0 15.223801
  prb	  10000000          0 47.628965
  pavl	  10000000          0 49.485987
  ht	  10000000    1000000 6.225125
  ht	  10000000     100000 60.995422
  ht	  10000000      10000 599.944561

The third column is only relevant for ht and there it indicates the
size of the hash table.  The fourth column is the time in seconds to
insert n integers where n is the value in the second column.  While
the numbers are great for precise detail, it is hard to see
patterns, so let's plot some graphs :-

  $ cat r2p
  #!/bin/sh

  limit=${2:-10000000}

  for t in ht rb prb pavl
  do
          awk -v t=$t -v l=$limit '($1 == t) && ($1 != "ht" || $2 == $3) && ($2 <= l) { printf("%s %s\n", $2, $4); }' $1 > $t
  done 

  for n in 10 100 1000
  do
          awk -v n=$n -v l=$limit '($1 == "ht" && ($2 == n*$3) && ($2 <= l)) { printf("%s %s\n", $2, $4); }' $1 > ht.$n
  done
  $ ./r2p r
  $ cat plot.all
  set terminal dumb
  set logscale x
  set logscale y
  set key left
  plot 'ht'
  replot 'prb'
  replot 'pavl'
  replot 'rb'
  replot 'ht.10'
  replot 'ht.100'
  replot 'ht.1000'
  $ gnuplot plot.all

    1000 ++--+-----+-+-+---+----+--++---+-----+-+-+---+----+--++---+-----+--++
         +     'ht'   A+            +             +            +             G
         +    'prb'   B                                                      +
         +   'pavl'   C                                                      +
         |     'rb'   D                                                      |
     100 ++ 'ht.10'   E                                                     ++
         + 'ht.100'   F                                        G             F
         +'ht.1000'   G                                                    B B
         +                                                               B   +
         |                                                                   D
      10 ++                                                          B     D++
         +                                                               D   E
         +                                                     C             +
         +                                                    BB     D       A
         |                                                                 A |
       1 ++            E        D   B           B A        A   D         A  ++
         B         D A A           BA     D   B A F           DE             +
         +    D    A B      D   B  AF         A        A   D         A       +
         D    A    B        C   A         B       D                          +
         A    B        D    A      DD     A     D C           AA             +
     0.1 ++--+-----+-D-+---+----+--+C---+-----D-C-B---+D---+--++---+-----+--++
        100           1000        10000         100000       1e+06        1e+07

The plot is somewhat cluttered but some general trends are that :-

1. for n <= 100,000 all the times at this scale are around the same
   order of magnitude and there is a sawtooth pattern to the results
   for all algorithms.  I bow down to anyone who predicted a sawtooth
   pattern and can explain why it occurs.  It is not hard to suggest
   some possible reasons for the sawtooth but I haven't used any tools to
   confirm/deny any possibilities.

2. for n >= 100,000 and more obviously n >= 1,000,000 the results
   favour a hash table that has sufficient buckets to keep collisions
   from coming into effect (A), with the trees being at least
   an order of magnitude worse.

3. If you can make out E, F and G amid all those points you'll see
   that the performance of the hash table when collisions dominate can
   be an order of magnitude or more worse than a tree.

There is not much more to say about point 2.  Unless we are generous
with F, it is clear that {x..y} is at best {1..100000} if we
ignore collisions.

So, let's look at point 1 in some more detail.  If we look at n <=
100,000 in isolation and ignore the ht.X variants :-

  $ ./r2p r 100000
  $ cat plot.low
  set terminal dumb
  set logscale x
  set key left
  plot 'ht'
  replot 'prb'
  replot 'pavl'
  replot 'rb'
  $ gnuplot plot.low

    0.9 ++-----+---+--+-+-++-+++------+---+--+-+-++-+++------+---+--+-+-++-+++
        +  'ht'   A            +                      B                   B  +
        | 'prb'   B                            D                             |
    0.8 +'pavl'   C                                                         +A
        |  'rb'   D            C                   C                         |
    0.7 ++              D                                                   ++
        |                      A                                      C      |
        C                      B                   B  A                      |
    0.6 B+                                                                  ++
        |                   A                                  D      B      |
    0.5 ++                  B                                             A ++
        |                                      C   A                         |
    0.4 ++                                                                  ++
        |        D                      D      B                             |
        |               A                                             A      |
    0.3 ++              B                      A               C            ++
        |                                                      B             D
    0.2 D+       A                      C                                   ++
        |                               B             D        A          D  |
        A        B             D        A          D  +                      B
    0.1 ++-----+---+--+-+-++D+++------+---+--+-+-++-++C------+---+--+-D-++C+++
       100                    1000                  10000                100000

Now the sawtooth pattern is more obvious.  The fact the sawtooth for each
algorithm is different means that no implementation is consistently
faster than all the others.  Instead you either have to pick a set of
values of interest or take an average over all values to try and pick
the fastest.  

For F = 10 the weak claim is true in the range {1..100000}.  I'm using
F = 10 here because while the hash table is up to 5x faster than
the trees for some values of n, the hash table is also up to 6x slower
than the trees for some values of n.

If we reduce the clutter and concentrate specifically on hash table
and persistent red black tree for n <= 100,000 :-

  $ cat plot.low.ht.prb
  set terminal dumb
  set logscale x
  set logscale y
  set key left
  plot 'ht'
  replot 'prb'
  $ gnuplot plot.low.ht.prb

      1 ++-----+---+--+-+-++-+++------+---+--+-+-++-+++------+---+--+-+-++-+++
        + 'ht'   A             +                      B                   B  +
        +'prb'   B                                                           A
        +                                                                    +
        +                      A                   B  A                      +
        B                                                             B      |
        +                   A                      A                      A  +
        |                                                                    |
        +                                      B                             +
        |               A                                                    |
        +               B                      A                      A      +
        |                                                                    |
        |        A                                             B             |
        |                                                                    |
        +                                                                    +
        |                               B                                    |
        A                               A                      A             |
        |        B                                                           |
        |                                                                    B
        +                      +                      +                      +
    0.1 ++-----+---+--+-+-++-+++------+---+--+-+-++-+++------+---+--+-+-++-+++
       100                    1000                  10000                100000

If we ignore n = 100 (bad for tree) and n = 100000 (bad for hash) then
for many of the other points we can use a value of F ~= 2 and meet the
weak claim.  Note for n = 750 and n = 1000 the results are very close
(persistent red-black tree is marginally quicker in both cases) but
gnuplot can only plot one character in "dumb" mode and the hash table
value is shown.

To show that there is nothing special about a persistent red-black
tree, here's the comparison of a hash table with a persistent AVL tree :-

  $ cat plot.low.ht.pavl
  set terminal dumb
  set logscale x
  set logscale y
  set key left
  plot 'ht'
  replot 'pavl'
  $ gnuplot plot.low.ht.pavl

      1 ++-----+---+--+-+-++-+++------+---+--+-+-++-+++------+---+--+-+-++-+++
        +  'ht'   A            +                      +                      +
        +'pavl'   B                                                          A
        +                      B                   B                         +
        B                      A                      A               B      +
        |                   B                                                |
        +                   A                      A                      A  +
        |                                      B                             |
        +                                                                    +
        |               A                                                    |
        +                                      A                      A      +
        |                                                      B             |
        |        A                                                           |
        |                               B                                    |
        +                                                                    +
        |                                                                    |
        A        B                      A                      A             |
        |                                                                    B
        |                                                                    |
        +                      +                      +                      +
    0.1 ++-----+---+--+-+-++-+++------+---+--+-+-++-++B------+---+--+-+-++B+++
       100                    1000                  10000                100000

The results are similar though here the sawtooth overlaps are more
advantageous for the AVL than they were for the red black tree such
that there are four values (250, 1000, 75,000, 100,000) where the AVL
tree is more than 2x faster and up to 5x faster than the hash table
whereas the hash table is rarely 2x faster than the AVL.

Finally comparing the hash table with a non-persistent red-black tree :-

  $ cat plot.low.ht.rb
  set terminal dumb
  set logscale x
  set logscale y
  set key left
  plot 'ht'
  replot 'rb'
  $ gnuplot plot.low.ht.rb

      1 ++-----+---+--+-+-++-+++------+---+--+-+-++-+++------+---+--+-+-++-+++
        +'ht'   A              +                      +                      +
        +'rb'   B                              B                             A
        +               B                                                    +
        +                      A                      A                      +
        |                                                                    |
        +                   A                      A           B          A  +
        |                                                                    |
        +        B                                                           +
        |               A               B                                    |
        +                                      A                      A      +
        |                                                                    B
        |        A                                                           |
        B                                                                    |
        +                                                                    +
        |                                             B                   B  |
        A                               A                      A             |
        |                      B                                             |
        |                                          B                         |
        +                   B  +                      +               B      +
    0.1 ++-----+---+--+-+-++-+++------+---+--+-+-++-+++------+---+--+-+-++-+++
       100                    1000                  10000                100000

Again the sawtooth pattern comes through clearly and there is no clear
winner across the whole range.

Looking now at point 3 I'm only going to use the non-persistent
red-black tree for comparison since I want to compare worst case
behaviour.  This is a practical concern when choosing an algorithm and
in practice one would not be comparing persistent X and non-persistent Y.

  $ cat plot.high.ht.rb
  set terminal dumb
  set logscale x
  set logscale y
  set key left
  plot 'rb.high'
  replot 'ht.100'
  replot 'ht.1000'

where rb.high just restricts rb to those values of x that are in ht.X :-

  $ cat rb.high
  1000 0.142
  10000 0.1827
  100000 0.26667
  1000000 0.854929
  10000000 15.223801
  $ gnuplot plot.high.ht.rb

    1000 ++--+-----+-+-+---+----+--++---+-----+-+-+---+----+--++---+-----+--++
         +'rb.high'   A+            +             +            +             C
         + 'ht.100'   B                                                      +
         +'ht.1000'   C                                                      +
         |                                                                   |
     100 ++                                                                 ++
         +                                                     C             B
         +                                                                   +
         +                                                                   +
         |                                                                   A
      10 ++                                                                 ++
         +                                                                   +
         +                                                     B             +
         +                                                                   +
         |                                                                   |
       1 ++                                                    A            ++
         +                                        B                          +
         +                          B                                        +
         +                                        A                          +
         +             A            A             +            +             +
     0.1 ++--+-----+-+-+---+----+--++---+-----+-+-+---+----+--++---+-----+--++
        100           1000        10000         100000       1e+06        1e+07

Notice that the red-black tree is consistently better, and as
n > 1,000,000 the red-black tree is 10-100x better due to the massive
number of collisions causing long lists and so poor insert times.

As has been discussed earlier in this thread the poor hash table
performance can be fixed by either resizing the hash table or using
something other than a linked list to deal with collisions.  If one
chooses a balanced tree then the worst case performance is that of a
balanced tree, which makes the weak claim true for worst case
performance.

In the context I'm interested in (kernel) re-sizing a 10,000,000 entry
hash table is not really an option because it assumes :-

a) one can at any time allocate a 40MB chunk of physically
   contiguous memory to hold the new buckets.  Such an allocation is
   unlikely to succeed.

b) in the unlikely event one has the memory, one can afford to take a
   write lock on the table for the time it takes to rehash
   (anywhere from 0.1 - 2 seconds depending on the size of the table).

Thus kernels often use fixed size hash tables and :-

a) limit the number of elements to avoid the above behaviour thus
   requiring re-compilation if you actually need a bigger limit.
   Somewhat problematic if you are trying to sell a single product to
   multiple customers.

b) allocate the maximum number of buckets statically -- somewhat
   problematic when you have multiple tables (session table, route
   cache, ARP table, bridge table, IPsec SA map) and different
   customers want to maximize the size of different tables.

c) use some kind of keyed hash to thwart attackers from causing a
   problem even when the number of nodes is << the number of buckets.

d) do nothing and exhibit dramatic performance problems.

Finally here's the code :-

  $ cat john.c
  #include <stdlib.h>
  #include <stdio.h>
  #include <time.h>
  #include <math.h>


  /*
   * Maximum number of values that will be used in any test.
   * It is declared so that we can statically allocate memory
   * for the various node types that will be used and thus
   * avoid memory allocation affecting the performance.
   */

  enum {
          N = 10*1000*1000
  };

  /*
   * The keys are stored in an array so that we can use the same
   * values for testing all data structures.
   */
  static int keys[N];



  /*
   * Generic container as used in Linux kernel though this is not
   * the version that is used in 2.6, it probably comes from 2.4.
   */

  #define container_of(ptr, type, member) \
          ((type *)((char *)(ptr)-(unsigned long)(&((type *)0)->member)))


  /*
   * List taken from Linux kernel include/linux/list.h
   */

  #define list_entry(ptr, type, member) \
          container_of(ptr, type, member)

  struct list_head {
          struct list_head *next, *prev;
  };

  static inline void INIT_LIST_HEAD(struct list_head *list)
  {
          list->next = list;
          list->prev = list;
  }

  #define list_for_each(pos, head) \
          for (pos = (head)->next; pos->next, pos != (head); \
                  pos = pos->next)


  static inline void __list_add(struct list_head *new,
                                struct list_head *prev,
                                struct list_head *next)
  {
          next->prev = new;
          new->next = next;
          new->prev = prev;
          prev->next = new;
  }


  static inline void list_add(struct list_head *new, struct list_head *head)
  {
          __list_add(new, head, head->next);
  }


  /*
   * Red Black Tree taken from Linux kernel 2.6 include/linux/rbtree.h
   * This code is under GPL
   */

  struct rb_node
  {
          unsigned long  rb_parent_color;
  #define	RB_RED		0
  #define	RB_BLACK	1
          struct rb_node *rb_right;
          struct rb_node *rb_left;
  };

  struct rb_root
  {
          struct rb_node *rb_node;
  };

  #define rb_parent(r)   ((struct rb_node *)((r)->rb_parent_color & ~3))
  #define rb_color(r)   ((r)->rb_parent_color & 1)
  #define rb_is_red(r)   (!rb_color(r))
  #define rb_is_black(r) rb_color(r)
  #define rb_set_red(r)  do { (r)->rb_parent_color &= ~1; } while (0)
  #define rb_set_black(r)  do { (r)->rb_parent_color |= 1; } while (0)

  static inline void rb_set_parent(struct rb_node *rb, struct rb_node *p)
  {
          rb->rb_parent_color = (rb->rb_parent_color & 3) | (unsigned long)p;
  }

  static inline void rb_set_color(struct rb_node *rb, int color)
  {
          rb->rb_parent_color = (rb->rb_parent_color & ~1) | color;
  }

  #define	rb_entry(ptr, type, member) container_of(ptr, type, member)

  static inline void rb_link_node(struct rb_node * node, struct rb_node * parent,
                                  struct rb_node ** rb_link)
  {
          node->rb_parent_color = (unsigned long )parent;
          node->rb_left = node->rb_right = 0;
          *rb_link = node;
  }

  /*
   * Red Black insert tree taken from Linux kernel 2.6 lib/rbtree.c
   * This code is also under GPL.
   */

  static void __rb_rotate_left(struct rb_node *node, struct rb_root *root)
  {
          struct rb_node *right = node->rb_right;
          struct rb_node *parent = rb_parent(node);

          if ((node->rb_right = right->rb_left))
                  rb_set_parent(right->rb_left, node);
          right->rb_left = node;

          rb_set_parent(right, parent);

          if (parent)
          {
                  if (node == parent->rb_left)
                          parent->rb_left = right;
                  else
                          parent->rb_right = right;
          }
          else
                  root->rb_node = right;
          rb_set_parent(node, right);
  }


  static void __rb_rotate_right(struct rb_node *node, struct rb_root *root)
  {
          struct rb_node *left = node->rb_left;
          struct rb_node *parent = rb_parent(node);

          if ((node->rb_left = left->rb_right))
                  rb_set_parent(left->rb_right, node);
          left->rb_right = node;

          rb_set_parent(left, parent);

          if (parent)
          {
                  if (node == parent->rb_right)
                          parent->rb_right = left;
                  else
                          parent->rb_left = left;
          }
          else
                  root->rb_node = left;
          rb_set_parent(node, left);
  }


  static void rb_insert_color(struct rb_node *node, struct rb_root *root)
  {
          struct rb_node *parent, *gparent;

          while ((parent = rb_parent(node)) && rb_is_red(parent))
          {
                  gparent = rb_parent(parent);

                  if (parent == gparent->rb_left)
                  {
                          {
                                  register struct rb_node *uncle;

                                  uncle = gparent->rb_right;
                                  if (uncle && rb_is_red(uncle))
                                  {
                                          rb_set_black(uncle);
                                          rb_set_black(parent);
                                          rb_set_red(gparent);
                                          node = gparent;
                                          continue;
                                  }
                          }

                          if (parent->rb_right == node)
                          {
                                  register struct rb_node *tmp;
                                  __rb_rotate_left(parent, root);
                                  tmp = parent;
                                  parent = node;
                                  node = tmp;
                          }

                          rb_set_black(parent);
                          rb_set_red(gparent);
                          __rb_rotate_right(gparent, root);
                  } else {
                          {
                                  register struct rb_node *uncle;

                                  uncle = gparent->rb_left;
                                  if (uncle && rb_is_red(uncle))
                                  {
                                          rb_set_black(uncle);
                                          rb_set_black(parent);
                                          rb_set_red(gparent);
                                          node = gparent;
                                          continue;
                                  }
                          }

                          if (parent->rb_left == node)
                          {
                                  register struct rb_node *tmp;
                                  __rb_rotate_right(parent, root);
                                  tmp = parent;
                                  parent = node;
                                  node = tmp;
                          }

                          rb_set_black(parent);
                          rb_set_red(gparent);
                          __rb_rotate_left(gparent, root);
                  }
          }

          rb_set_black(root->rb_node);
  }


  /*
   * monomorphic non-persistent Red Black Tree
   */

  struct rb_d_node {
          int n;
          union {
                  struct rb_node rb_node;
                  struct rb_d_node *free;
          };
  };

  static struct rb_d_node *rb_d_nodes_free_list;
  static struct rb_root rb_d_root;


  static struct rb_d_node *rb_d_node_new(void)
  {
          struct rb_d_node *t;

          t = rb_d_nodes_free_list;
          if (t)                  /* return 0 when the pool is exhausted */
                  rb_d_nodes_free_list = t->free;
          return t;
  }


  static void rb_d_node_free(struct rb_d_node *n)
  {
          n->free = rb_d_nodes_free_list;
          rb_d_nodes_free_list = n;
  }


  static void rb_d_tree_insert(int d)
  {
          struct rb_d_node *n, *o;
          struct rb_root *root = &rb_d_root;
          struct rb_node **p = &rb_d_root.rb_node;
          struct rb_node *parent = 0;

          n = rb_d_node_new();
          if (!n)
                  return;
          n->n = d;
          while (*p) {
                  parent = *p;
                  o = rb_entry(parent, struct rb_d_node, rb_node);

                  if (d < o->n) {
                          p = &(*p)->rb_left;
                  } else if (d > o->n) {
                          p = &(*p)->rb_right;
                  } else {
                          /* dup */
                          rb_d_node_free(n);
                          return;
                  }
          }
          rb_link_node(&n->rb_node, parent, p);
          rb_insert_color(&n->rb_node, root);
  }


  static int rb_d_tree_find(int d)
  {
          struct rb_node *p = rb_d_root.rb_node;
          struct rb_d_node *n;

          while (p) {
                  n = rb_entry(p, struct rb_d_node, rb_node);

                  if (d < n->n)
                          p = p->rb_left;
                  else if (d > n->n)
                          p = p->rb_right;
                  else 
                          return 1;
          }
          return 0;
  }



  static void rb_d_tree_insert_test(unsigned int n)
  {
          unsigned int i;

          for (i = 0; i != n; i += 1)
                  rb_d_tree_insert(keys[i]);
  }


  static void rb_d_tree_find_test(unsigned int n)
  {
          unsigned int i;

          for (i = 0; i != n; i += 1) {
                  if (!rb_d_tree_find(keys[i]))
                          printf("rb tree lookup %d failed\n", keys[i]);
          }
  }


  /*
   * monomorphic non-persistent fixed size hash table
   */

  struct hash_node {
          int n;
          unsigned int hash;
          union {
                  struct list_head hash_link;
                  struct hash_node *free;
          };
  };

  static struct list_head hash_table[N];
  static struct hash_node *hash_nodes_free;


  /*
   * Hopefully a faithful implementation of the hash function for ints
   * that O'Caml 3.10 uses, see ocaml-3.11.0/byterun/hash.c:hash_aux().
   */
  static inline unsigned int ocaml_hash_int(int d)
  {
          unsigned int hash_accu = 0;

          /* with the accumulator starting at 0 this reduces to (unsigned int)d */
          hash_accu = hash_accu*65599 + d;
          return hash_accu;
  }


  static void init_hash_table(void)
  {
          int i;

          for (i = 0; i != N; i += 1)
                  INIT_LIST_HEAD(&hash_table[i]);
  }


  static inline struct hash_node *hash_node_new(void)
  {
          struct hash_node *t;

          t = hash_nodes_free;
          if (t)                  /* return 0 when the pool is exhausted */
                  hash_nodes_free = t->free;
          return t;
  }


  static void hash_insert(unsigned int max, int d) 
  {
          unsigned int h;
          unsigned int b;
          struct list_head *l;
          struct hash_node *hn;

          h = ocaml_hash_int(d);
          b = h%max;
          list_for_each(l, &hash_table[b]) {
                  hn = list_entry(l, struct hash_node, hash_link);
                  if (h != hn->hash)
                          continue;
                  if (d == hn->n)
                          return;
          }
          hn = hash_node_new();
          if (!hn)
                  return;
          hn->n = d;
          hn->hash = h;
          list_add(&hn->hash_link, &hash_table[b]);
  }


  static int hash_find(unsigned int max, int d)
  {
          unsigned int b;
          unsigned int h;
          struct hash_node *hn;
          struct list_head *head, *l;

          h = ocaml_hash_int(d);
          b = h%max;
          head = &hash_table[b];
          list_for_each(l, head) {
                  hn = list_entry(l, struct hash_node, hash_link);
                  if (h != hn->hash)
                          continue;
                  if (d == hn->n)
                          return 1;
          }
          return 0;
  }


  static void hash_insert_test(unsigned int n, unsigned int max)
  {
          unsigned int i;

          for (i = 0; i != n; i += 1)
                  hash_insert(max, keys[i]);
  }


  static void hash_find_test(unsigned int n, unsigned int max)
  {
          unsigned int i;

          for (i = 0; i != n; i += 1) {
                  if (!hash_find(max, keys[i]))
                          printf("hash find %d failed\n", keys[i]);
          }
  }



  /*
   * Monomorphic Persistent Red Black Tree.
   * Reference counts are used to support persistence.
   * The code is based on a purely functional exposition of Red-Black trees
   * in Scheme or Haskell, I forget which.  Since I wrote the C code,
   * should anyone actually want to use it for something there
   * is no need to ask; you can use it under the BSD/Berkeley license
   * http://www.openbsd.org/policy.html.  Supporting deletion is left
   * as an exercise for the reader :-)
   */

  enum {
          BLACK = 0,
          RED = 1
  };

  struct prbt {
          struct prbt *l, *r;
          unsigned short u;
          unsigned short c;
          int d;
  };


  static struct prbt *prbt_free_list;


  static inline struct prbt *prbt_new(void)
  {
          struct prbt *t;

          t = prbt_free_list;
          if (t)                  /* return 0 when the pool is exhausted */
                  prbt_free_list = t->l;
          return t;
  }


  static inline struct prbt *prbt_get(struct prbt *n)
  {
          if (n)
                  n->u += 1;
          return n;
  }

  /*
   * The noinline is to stop gcc diverging as it tries to recursively
   * inline prbt_free and prbt_put.
   */
  static void prbt_free(struct prbt *) __attribute__((noinline));

  /*
   * prbt_put is small enough and called frequently enough that if
   * the code was being generated by a compiler for a language that uses
   * reference counts then it should inline it.  That said, the only non-trivial
   * functional language that I'm aware of that uses reference counts
   * is Opal (http://user.cs.tu-berlin.de/~opal/) and I've never investigated
   * whether it does or not.  The inlining can be removed and it doesn't
   * make a significant difference to the results.
   */

  static inline void prbt_put(struct prbt *) __attribute__((always_inline));


  static void prbt_put(struct prbt *n)
  {
          if (!n)
                  return;
          n->u -= 1;
          if (n->u)
                  return;
          prbt_free(n);
  }


  static void prbt_free(struct prbt *n)
  {
          if (n->l)
                  prbt_put(n->l);
          if (n->r)
                  prbt_put(n->r);
          n->l = prbt_free_list;
          prbt_free_list = n;
  }


  struct prbt *prbt_bal(struct prbt *l, int d, struct prbt *r)
  {
          struct prbt *nl, *nr, *p, *ll, *lr, *rr, *rl;

          if (l && (l->c == RED)) {
                  ll = l->l;
                  lr = l->r;
                  if (r && (r->c == RED)) {
                          nl = prbt_new();
                          if (!nl)
                                  return 0;
                          nr = prbt_new();
                          if (!nr) {
                                  nl->u = 1;
                                  nl->l = 0;
                                  nl->r = 0;
                                  prbt_put(nl);
                                  return 0;
                          }
                          p = prbt_new();
                          if (!p) {
                                  nl->u = 1;
                                  nl->l = 0;
                                  nl->r = 0;
                                  prbt_put(nl);
                                  nr->u = 1;
                                  nr->l = 0;
                                  nr->r = 0;
                                  prbt_put(nr);
                                  return 0;
                          }
                          nl->l = prbt_get(ll);
                          nl->r = prbt_get(lr);
                          nl->c = BLACK;
                          nl->u = 1;
                          nl->d = l->d;
                          nr->l = prbt_get(r->l);
                          nr->r = prbt_get(r->r);
                          nr->c = BLACK;
                          nr->u = 1;
                          nr->d = r->d;
                          p->l = nl;
                          p->r = nr;
                          p->c = RED;
                          p->d = d;
                          p->u = 1;
                          return p;
                  }
                  if (ll && (ll->c == RED)) {
                          nl = prbt_new();
                          if (!nl)
                                  return 0;
                          nr = prbt_new();
                          if (!nr) {
                                  nl->u = 1;
                                  nl->l = 0;
                                  nl->r = 0;
                                  prbt_put(nl);
                                  return 0;
                          }
                          p = prbt_new();
                          if (!p) {
                                  nl->u = 1;
                                  nl->l = 0;
                                  nl->r = 0;
                                  prbt_put(nl);
                                  nr->u = 1;
                                  nr->l = 0;
                                  nr->r = 0;
                                  prbt_put(nr);
                                  return 0;
                          }
                          nl->l = prbt_get(ll->l);
                          nl->r = prbt_get(ll->r);
                          nl->c = BLACK;
                          nl->u = 1;
                          nl->d = ll->d;
                          nr->l = prbt_get(lr);
                          nr->r = prbt_get(r);
                          nr->c = BLACK;
                          nr->u = 1;
                          nr->d = d;
                          p->l = nl;
                          p->r = nr;
                          p->c = RED;
                          p->d = l->d;
                          p->u = 1;
                          return p;
                  }
                  if (lr && (lr->c == RED)) {
                          nl = prbt_new();
                          if (!nl)
                                  return 0;
                          nr = prbt_new();
                          if (!nr) {
                                  nl->u = 1;
                                  nl->l = 0;
                                  nl->r = 0;
                                  prbt_put(nl);
                                  return 0;
                          }
                          p = prbt_new();
                          if (!p) {
                                  nl->u = 1;
                                  nl->l = 0;
                                  nl->r = 0;
                                  prbt_put(nl);
                                  nr->u = 1;
                                  nr->l = 0;
                                  nr->r = 0;
                                  prbt_put(nr);
                                  return 0;
                          }
                          nl->l = prbt_get(ll);
                          nl->r = prbt_get(lr->l);
                          nl->c = BLACK;
                          nl->u = 1;
                          nl->d = l->d;
                          nr->l = prbt_get(lr->r);
                          nr->r = prbt_get(r);
                          nr->c = BLACK;
                          nr->u = 1;
                          nr->d = d;
                          p->l = nl;
                          p->r = nr;
                          p->c = RED;
                          p->d = lr->d;
                          p->u = 1;
                          return p;
                  }
          }
          if (r && (r->c == RED)) {
                  rr = r->r;
                  rl = r->l;
                  if (rr && (rr->c == RED)) {
                          nl = prbt_new();
                          if (!nl)
                                  return 0;
                          nr = prbt_new();
                          if (!nr) {
                                  nl->u = 1;
                                  nl->l = 0;
                                  nl->r = 0;
                                  prbt_put(nl);
                                  return 0;
                          }
                          p = prbt_new();
                          if (!p) {
                                  nl->u = 1;
                                  nl->l = 0;
                                  nl->r = 0;
                                  prbt_put(nl);
                                  nr->u = 1;
                                  nr->l = 0;
                                  nr->r = 0;
                                  prbt_put(nr);
                                  return 0;
                          }
                          nl->l = prbt_get(l);
                          nl->r = prbt_get(rl);
                          nl->c = BLACK;
                          nl->u = 1;
                          nl->d = d;
                          nr->l = prbt_get(rr->l);
                          nr->r = prbt_get(rr->r);
                          nr->c = BLACK;
                          nr->u = 1;
                          nr->d = rr->d;
                          p->l = nl;
                          p->r = nr;
                          p->c = RED;
                          p->d = r->d;
                          p->u = 1;
                          return p;
                  }
                  if (rl && (rl->c == RED)) {
                          nl = prbt_new();
                          if (!nl)
                                  return 0;
                          nr = prbt_new();
                          if (!nr) {
                                  nl->u = 1;
                                  nl->l = 0;
                                  nl->r = 0;
                                  prbt_put(nl);
                                  return 0;
                          }
                          p = prbt_new();
                          if (!p) {
                                  nl->u = 1;
                                  nl->l = 0;
                                  nl->r = 0;
                                  prbt_put(nl);
                                  nr->u = 1;
                                  nr->l = 0;
                                  nr->r = 0;
                                  prbt_put(nr);
                                  return 0;
                          }
                          nl->l = prbt_get(l);
                          nl->r = prbt_get(rl->l);
                          nl->c = BLACK;
                          nl->u = 1;
                          nl->d = d;
                          nr->l = prbt_get(rl->r);
                          nr->r = prbt_get(rr);
                          nr->c = BLACK;
                          nr->u = 1;
                          nr->d = r->d;
                          p->l = nl;
                          p->r = nr;
                          p->c = RED;
                          p->d = rl->d;
                          p->u = 1;
                          return p;
                  }
          }
          p = prbt_new();
          if (!p)
                  return 0;
          p->l = prbt_get(l);
          p->r = prbt_get(r);
          p->c = BLACK;
          p->d = d;
          p->u = 1;
          return p;
  }



  struct prbt *prbt_ins(struct prbt *r, int d)
  {
          struct prbt *n, *t;

          if (!r) {
                  n = prbt_new();
                  if (!n)
                          return 0;
                  n->l = 0;
                  n->r = 0;
                  n->u = 1;
                  n->c = RED;
                  n->d = d;
                  return n;
          }
          switch (r->c) {
          case RED:
                  if (d < r->d) {
                          t = prbt_ins(r->l, d);
                          if (!t)
                                  return 0;
                          n = prbt_new();
                          if (!n) {
                                  prbt_put(t);
                                  return 0;
                          }
                          n->l = t;
                          n->r = prbt_get(r->r);
                          n->c = RED;
                          n->u = 1;
                          n->d = r->d;
                          return n;
                  } else if (d > r->d) {
                          t = prbt_ins(r->r, d);
                          if (!t)
                                  return 0;
                          n = prbt_new();
                          if (!n) {
                                  prbt_put(t);
                                  return 0;
                          }
                          n->l = prbt_get(r->l);
                          n->r = t;
                          n->c = RED;
                          n->u = 1;
                          n->d = r->d;
                          return n;
                  } else {
                          r->u += 1;
                          return r;
                  }
                  break;
          case BLACK:
                  if (d < r->d) {
                          t = prbt_ins(r->l, d);
                          if (!t)
                                  return 0;
                          n = prbt_bal(t, r->d, r->r);
                          prbt_put(t);
                          return n;
                  } else if (d > r->d) {
                          t = prbt_ins(r->r, d);
                          if (!t)
                                  return 0;
                          n = prbt_bal(r->l, r->d, t);
                          prbt_put(t);
                          return n;
                  } else {
                          r->u += 1;
                          return r;
                  }
                  break;
          }
          return 0;		/* never get here */
  }


  struct prbt *prbt_add(struct prbt *r, int d)
  {
          struct prbt *nr, *n;

          nr = prbt_ins(r, d);
          if (!nr)
                  return nr;
          if (nr->u == 1) {
                  nr->c = BLACK;
                  return nr;
          }
          n = prbt_new();
          if (!n)
                  return 0;
          n->l = prbt_get(nr->l);
          n->r = prbt_get(nr->r);
          n->u = 1;
          n->c = BLACK;
          n->d = nr->d;
          prbt_put(nr);
          return n;
  }


  static struct prbt *prbt_find(struct prbt *r, int d)
  {
          if (!r)
                  return 0;
          if (d < r->d)
                  return prbt_find(r->l, d);
          else if (d > r->d)
                  return prbt_find(r->r, d);
          else 
                  return r;
  }
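

  /*
   * Illustrative sketch, not part of the benchmark: persistence means
   * that adding a key leaves earlier versions of the tree intact.  Only
   * the path from the root down to the insertion point is copied; the
   * subtrees hanging off that path are shared, with the reference
   * counts tracking the sharing.  This function is my own addition for
   * illustration and assumes the free list has been seeded with
   * init_prb_mem() first.
   */
  static void prbt_persistence_demo(void)
  {
          struct prbt *v1, *v2;

          v1 = prbt_add(0, 1);            /* version 1 holds {1} */
          v2 = prbt_add(v1, 2);           /* version 2 holds {1,2}; v1 unchanged */
          if (prbt_find(v1, 2))
                  printf("persistence violated: 2 leaked into v1\n");
          if (!prbt_find(v2, 1) || !prbt_find(v2, 2))
                  printf("v2 lookup failed\n");
          prbt_put(v1);                   /* shared nodes live on until v2 goes */
          prbt_put(v2);
  }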

  static struct prbt *prbt_root;


  static void prbt_insert_test(unsigned int n)
  {
          unsigned int i;
          struct prbt *nr, *or = prbt_root;

          for (i = 0; i != n; i += 1) {
                  nr = prbt_add(or, keys[i]);
                  prbt_put(or);
                  or = nr;
          }
          prbt_root = or;
  }


  static void prbt_find_test(unsigned int n)
  {
          struct prbt *r = prbt_root;
          struct prbt *f;
          unsigned int i;

          for (i = 0; i != n; i += 1) {
                  f = prbt_find(r, keys[i]);
                  if (!f)
                          printf("prb find %d failed\n", keys[i]);
          }
  }


  /*
   * Persistent AVL tree.
   * C code is based on Scheme code I wrote some time ago.
   * http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/lang/scheme/code/ext/avl/
   * As with the Persistent Red Black tree, if for any reason you want to
   * use this code then you can, under the BSD/Berkeley license whose terms
   * are available at http://www.openbsd.org/policy.html.
   */

  struct pavl_node {
          struct pavl_node *l, *r;
          int d;
          unsigned int h;
          unsigned int u;
  };



  static struct pavl_node *pavl_free_list;
  static struct pavl_node *pavl_root;


  static struct pavl_node *pavl_node_new(void)
  {
          struct pavl_node *t;

          t = pavl_free_list;
          if (t)                  /* return 0 when the pool is exhausted */
                  pavl_free_list = t->l;
          return t;
  }

  static inline struct pavl_node *pavl_node_get(struct pavl_node *n)
  {
          if (n)
                  n->u += 1;
          return n;
  }

  static void pavl_node_free(struct pavl_node *) __attribute__((noinline));

  static inline void pavl_node_put(struct pavl_node *) __attribute__((always_inline));

  static void pavl_node_put(struct pavl_node *n)
  {
          if (!n)
                  return;
          n->u -= 1;
          if (n->u)
                  return;
          pavl_node_free(n);
  }


  static void pavl_node_free(struct pavl_node *n)
  {
          if (n->l)
                  pavl_node_put(n->l);
          if (n->r)
                  pavl_node_put(n->r);
          n->l = pavl_free_list;
          pavl_free_list = n;
  }


  struct pavl_node *
  pavl_bal(struct pavl_node *l, int a, 
           struct pavl_node *m, int c, struct pavl_node *r)
  {
          struct pavl_node *nl, *nr, *n;
          unsigned int lh = l ? l->h : 0;
          unsigned int mh = m ? m->h : 0;
          unsigned int rh = r ? r->h : 0;

          if ((mh > lh) && (mh > rh)) {
                  nl = pavl_node_new();
                  if (!nl)
                          return 0;
                  nr = pavl_node_new();
                  if (!nr) {
                          nl->u = 1;
                          nl->l = 0;
                          nl->r = 0;
                          pavl_node_put(nl);
                          return 0;
                  }
                  n = pavl_node_new();
                  if (!n) {
                          nl->u = 1;
                          nl->l = 0;
                          nl->r = 0;
                          pavl_node_put(nl);
                          nr->u = 1;
                          nr->l = 0;
                          nr->r = 0;
                          pavl_node_put(nr);
                          return 0;
                  }
                  nl->l = pavl_node_get(l);
                  nl->r = pavl_node_get(m->l);
                  nl->d = a;
                  nl->h = 1+lh;
                  nl->u = 1;
                  nr->l = pavl_node_get(m->r);
                  nr->r = pavl_node_get(r);
                  nr->d = c;
                  nr->h = 1+rh;
                  nr->u = 1;
                  n->l = nl;
                  n->r = nr;
                  n->d = m->d;
                  n->h = 2+lh;
                  n->u = 1;
                  return n;
          } else if ((lh >= mh) && (lh >= rh)) { /* rotate right */
                  unsigned int mrm = 1+(mh > rh ? mh : rh);
                  unsigned int lmrm = 1+(mrm > lh ? mrm : lh);

                  nr = pavl_node_new();
                  if (!nr)
                          return 0;
                  n = pavl_node_new();
                  if (!n) {
                          nr->u = 1;
                          nr->l = 0;
                          nr->r = 0;
                          pavl_node_put(nr);
                          return 0;
                  }
                  nr->l = pavl_node_get(m);
                  nr->r = pavl_node_get(r);
                  nr->d = c;
                  nr->h = mrm;
                  nr->u = 1;
                  n->l = pavl_node_get(l);
                  n->r = nr;
                  n->d = a;
                  n->h = lmrm;
                  n->u = 1;
                  return n;
          } else {		/* rotate left */
                  unsigned int lmm = 1+(lh > mh ? lh : mh);
                  unsigned int lmrm = 1+(lmm > rh ? lmm : rh);

                  nl = pavl_node_new();
                  if (!nl)
                          return 0;
                  n = pavl_node_new();
                  if (!n) {
                          nl->u = 1;
                          nl->l = 0;
                          nl->r = 0;
                          pavl_node_put(nl);
                          return 0;
                  }
                  nl->l = pavl_node_get(l);
                  nl->r = pavl_node_get(m);
                  nl->d = a;
                  nl->h = lmm;
                  nl->u = 1;
                  n->l = nl;
                  n->r = pavl_node_get(r);
                  n->d = c;
                  n->h = lmrm;
                  n->u = 1;
                  return n;
          }
  }


  static struct pavl_node *pavl_add(struct pavl_node *r, int d)
  {
          struct pavl_node *n, *t;

          if (!r) {
                  n = pavl_node_new();
                  if (!n)
                          return 0;
                  n->l = 0;
                  n->r = 0;
                  n->d = d;
                  n->h = 1;
                  n->u = 1;
                  return n;
          }
          if (d < r->d) {
                  t = pavl_add(r->l, d);
                  if (!t)
                          return 0;
                  n = pavl_bal(t->l, t->d, t->r, r->d, r->r);
                  pavl_node_put(t);
                  return n;
          } else if (d > r->d) {
                  t = pavl_add(r->r, d);
                  if (!t)
                          return 0;
                  n = pavl_bal(r->l, r->d, t->l, t->d, t->r);
                  pavl_node_put(t);
                  return n;
          } else {
                  /* dup */
                  r->u += 1;
                  return r;
          }
  }


  static struct pavl_node *pavl_find(struct pavl_node *r, int d)
  {
          if (!r)
                  return 0;
          if (d < r->d)
                  return pavl_find(r->l, d);
          else if (d > r->d)
                  return pavl_find(r->r, d);
          else 
                  return r;
  }
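

  /*
   * Illustrative only, again not part of the benchmark: inserting a key
   * that is already present copies nothing at all; pavl_add simply bumps
   * the reference count on the existing root and returns it.  This demo
   * function is my own addition and assumes init_pavl_mem() has been
   * called.
   */
  static void pavl_dup_demo(void)
  {
          struct pavl_node *v1, *v2;

          v1 = pavl_add(0, 42);
          v2 = pavl_add(v1, 42);  /* duplicate: v2 == v1 and v1->u is now 2 */
          if (v1 != v2)
                  printf("expected the same root on duplicate insert\n");
          pavl_node_put(v2);
          pavl_node_put(v1);
  }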


  static void pavl_insert_test(unsigned int n)
  {
          unsigned int i;
          struct pavl_node *nr, *or = pavl_root;

          for (i = 0; i != n; i += 1) {
                  nr = pavl_add(or, keys[i]);
                  pavl_node_put(or);
                  or = nr;
          }
          pavl_root = or;
  }


  static void pavl_find_test(unsigned int n)
  {
          struct pavl_node *r = pavl_root;
          struct pavl_node *f;
          unsigned int i;

          for (i = 0; i != n; i += 1) {
                  f = pavl_find(r, keys[i]);
                  if (!f)
                          printf("pavl find %d failed\n", keys[i]);
          }
  }


  /*
   *
   */

  /*
   * If the algorithm requires allocation, the memory is allocated from
   * this block.  This avoids timing variations due to varying malloc(3)
   * implementations
   */

  union {
          struct hash_node hash;
          struct prbt p_rb_node;
          struct rb_d_node rb_d_node;
          struct pavl_node pavl_node;
  } mem[N+(N/4)];


  static void init_hash_mem(void)
  {
          int i;

          for (i = 0; i != N-1; i += 1)
                  mem[i].hash.free = &mem[i+1].hash;
          mem[N-1].hash.free = 0;
          hash_nodes_free = &mem[0].hash;
  }


  static void init_prb_mem(void)
  {
          int i;

          for (i = 0; i != (N+(N/4))-1; i += 1)
                  mem[i].p_rb_node.l = &mem[i+1].p_rb_node;
        mem[(N+(N/4))-1].p_rb_node.l = 0;
          prbt_free_list = &mem[0].p_rb_node;
  }


  static void init_rb_d_mem(void)
  {
          int i;

          for (i = 0; i != N-1; i += 1)
                  mem[i].rb_d_node.free = &mem[i+1].rb_d_node;
          mem[N-1].rb_d_node.free = 0;
          rb_d_nodes_free_list = &mem[0].rb_d_node;
  }


  static void init_pavl_mem(void)
  {
          int i;

          for (i = 0; i != (N+(N/4))-1; i += 1)
                  mem[i].pavl_node.l = &mem[i+1].pavl_node;
        mem[(N+(N/4))-1].pavl_node.l = 0;
          pavl_free_list = &mem[0].pavl_node;
  }



  static void init_keys(void)
  {
          int i;

          for (i = 0; i != N; i += 1) {
                  keys[i] = random();
          }
  }


  static void timersub(struct timeval *a, struct timeval *b, struct timeval *diff)
  {
          diff->tv_sec = a->tv_sec - b->tv_sec;
          diff->tv_usec = a->tv_usec - b->tv_usec;
          if (diff->tv_usec < 0) {
                  --diff->tv_sec;
                  diff->tv_usec += 1000000;
          }
  }

  struct {
          int test_tree;
          unsigned int n;
          unsigned int table_size;
  } tests[] = {
          { 1,      100,      100 },
          { 1,      250,      250 },
          { 1,      500,      500 },
          { 1,      750,      750 },
          { 1,     1000,     1000 },
          { 0,     1000,      100 },
          { 0,     1000,       10 },
          { 0,     1000,        1 },
          { 1,     2500,     2500 },
          { 1,     5000,     5000 },
          { 1,     7500,     7500 },
          { 1,    10000,    10000 },
          { 0,    10000,     1000 },
          { 0,    10000,      100 },
          { 0,    10000,       10 },
          { 1,    25000,    25000 },
          { 1,    50000,    50000 },
          { 1,    75000,    75000 },
          { 1,   100000,   100000 },
          { 0,   100000,    10000 },
          { 0,   100000,     1000 },
          { 0,   100000,      100 },
          { 1,   250000,   250000 },
          { 1,   500000,   500000 },
          { 1,   750000,   750000 },
          { 1,  1000000,  1000000 },
          { 0,  1000000,   100000 },
          { 0,  1000000,    10000 },
          { 0,  1000000,     1000 },
          { 1,  2500000,  2500000 },
          { 1,  5000000,  5000000 },
          { 1,  7500000,  7500000 },
          { 1, 10000000, 10000000 },
          { 0, 10000000,  1000000 },
          { 0, 10000000,   100000 },
          { 0, 10000000,    10000 },
  };


  int main(int argc, char **argv)
  {
          struct timeval start, stop, diff;
          /*
           * Set to 1 to run the tests to confirm that the insert
           * does indeed insert all the keys.
           */
          int find = 0;
          unsigned int i;

          init_keys();

          for (i = 0; i != sizeof(tests)/sizeof(tests[0]); i += 1) {

                  init_hash_mem();
                  init_hash_table();

                  gettimeofday(&start);
                  hash_insert_test(tests[i].n, tests[i].table_size);
                  gettimeofday(&stop);
                  timersub(&stop, &start, &diff);
                printf("ht\t%10u %10u %lu.%06lu\n", tests[i].n, tests[i].table_size,
                         diff.tv_sec, diff.tv_usec);

                  if (find) {
                          gettimeofday(&start);
                          hash_find_test(tests[i].n, tests[i].table_size);
                          gettimeofday(&stop);
                          timersub(&stop, &start, &diff);
                        printf("htf\t%10u %10u %lu.%06lu\n",
                                 tests[i].n, tests[i].table_size,
                                 diff.tv_sec, diff.tv_usec);
                  }

                  if (tests[i].test_tree) {
                          init_rb_d_mem();
                          rb_d_root.rb_node = 0;

                          gettimeofday(&start);
                          rb_d_tree_insert_test(tests[i].n);
                          gettimeofday(&stop);
                          timersub(&stop, &start, &diff);
                        printf("rb\t%10u %10u %lu.%06lu\n",
                                 tests[i].n, 0, diff.tv_sec, diff.tv_usec);

                          if (find) {
                                  gettimeofday(&start);
                                  rb_d_tree_find_test(tests[i].n);
                                  gettimeofday(&stop);
                                  timersub(&stop, &start, &diff);
                                printf("rbf\t%10u %10u %lu.%06lu\n",
                                         tests[i].n, 0, 
                                         diff.tv_sec, diff.tv_usec);
                          }

                          init_prb_mem();
                          prbt_root = 0;

                          gettimeofday(&start);
                          prbt_insert_test(tests[i].n);
                          gettimeofday(&stop);
                          timersub(&stop, &start, &diff);
                        printf("prb\t%10u %10u %lu.%06lu\n",
                                 tests[i].n, 0, diff.tv_sec, diff.tv_usec);

                          if (find) {
                                  gettimeofday(&start);
                                  prbt_find_test(tests[i].n);
                                  gettimeofday(&stop);
                                  timersub(&stop, &start, &diff);
                                printf("prbf\t%10u %10u %lu.%06lu\n",
                                         tests[i].n, 0, 
                                         diff.tv_sec, diff.tv_usec);
                          }

                          init_pavl_mem();
                          pavl_root = 0;

                          gettimeofday(&start);
                          pavl_insert_test(tests[i].n);
                          gettimeofday(&stop);
                          timersub(&stop, &start, &diff);
                        printf("pavl\t%10u %10u %lu.%06lu\n",
                                 tests[i].n, 0, diff.tv_sec, diff.tv_usec);

                          if (find) {
                                  gettimeofday(&start);
                                  pavl_find_test(tests[i].n);
                                  gettimeofday(&stop);
                                  timersub(&stop, &start, &diff);
                                printf("pavlf\t%10u %10u %lu.%06lu\n",
                                         tests[i].n, 0, 
                                         diff.tv_sec, diff.tv_usec);
                          }
                  }
          }

          exit(0);
  }
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <mo6dnRKjOOaOivnXnZ2dnUVZ8hSdnZ2d@brightview.co.uk>
On 64-bit Debian etch'n'half on a twin quadcore Xeon workstation, your code
just segfaults:

$ gcc -g -Wall -O3 jon.c -o jon
jon.c: In function ‘rb_d_tree_insert_test’:
jon.c:329: warning: unused variable ‘dup’
jon.c: In function ‘rb_d_tree_find_test’:
jon.c:339: warning: unused variable ‘found’
jon.c: In function ‘ocaml_hash_int’:
jon.c:372: warning: unused variable ‘i’
jon.c: In function ‘hash_insert’:
jon.c:407: warning: left-hand operand of comma expression has no effect
jon.c:402: warning: unused variable ‘head’
jon.c: In function ‘hash_find’:
jon.c:433: warning: left-hand operand of comma expression has no effect
jon.c: In function ‘hash_find_test’:
jon.c:460: warning: unused variable ‘found’
jon.c:459: warning: unused variable ‘l’
jon.c:459: warning: unused variable ‘head’
jon.c:458: warning: unused variable ‘e’
jon.c:457: warning: unused variable ‘h’
jon.c:456: warning: unused variable ‘b’
jon.c: In function ‘main’:
jon.c:1316: warning: implicit declaration of function ‘gettimeofday’
jon.c:1207: warning: array subscript is above array bounds
jon.c:1229: warning: array subscript is above array bounds
$ ./jon
Segmentation fault

I can run your tests in 32-bit and the results do seem to confirm what I
said. Specifically, hash tables are 10x faster than purely functional trees
for a wide range of tasks.

However, I note that swapping the order of the tests over to place the hash
table last, after the trees have warmed the cache, makes the hash table run
many times faster in the initial tests.

Stephen J. Bevan wrote:
> b) I want to avoid any possible complaint that re-sizing is the cause
>    of any performance issues with the hash table.

I think there is a much stronger complaint against not resizing the hash table.
Moreover, you're not using enough buckets per element in any of the tests.

On the other hand, I think the sorted ints you are inserting into balanced
trees are a worst case for them: you might try random ints to make the trees
faster. Also, you're using (untested?) reference counting which is
notoriously slow compared to a real GC.

> 2. for n >= 100,000 and more obviously n >= 1,000,000 the results
>    favour a hash table that has sufficient buckets to avoid the effect
>    of collisions coming in to effect (A) with the trees being at least
>    an order of magnitude worse.

This is the effect I was referring to.

> For F = 10 the weak claim is true in the range {1..100000}.  I'm using
> F = 10 here because while the hash table is up to 5x faster than
> the trees for some values of n, the hash table is also up to 6x slower
> than the trees for some values of n.

So we agree that hash tables are up to 10x faster than purely functional
trees and, therefore, that the claim in Real World Haskell is wrong?

> a) one can at any time allocate a 40MB chunk of physically
>    contiguous memory to hold the new buckets.  Such an allocation is
>    unlikely to succeed.

I regularly rely upon allocations orders of magnitude larger than that. Why
do you think you cannot reliably allocate 40Mb?

> Thus kernels often use fixed size hash tables and :-
> 
> a) limit the number of elements to avoid the above behaviour thus
>    requiring re-compilation if you actually need a bigger limit.
>    Somewhat problematic if you are trying to sell a single product to
>    multiple customers.
> 
> b) allocate the maximum number of buckets statically -- somewhat
>    problematic when you have multiple tables (session table, route
>    cache, ARP table, bridge table, IPsec SA map) and different
>    customers want to maximize the size of different tables.
> 
> c) use some kind of keyed hash to thwart attackers from causing a
>    problem even when the number of nodes is << the number of buckets.
> 
> d) do nothing and exhibit dramatic performance problems.

Why not just use one of the hash table implementations that guarantees O(1)
for every insertion?

Finally, I object to the idea of crippling the hash table in order to
represent worst case behaviour. If you want to analyze worst case
behaviour, choose an input to the *same implementation* that exhibits the
worst case behaviour.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Phil Armstrong
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <impdj6-t1b.ln1@mail.kantaka.co.uk>
Jon Harrop <···@ffconsultancy.com> wrote:
> Stephen J. Bevan wrote:
>> a) one can at any time allocate a 40MB chunk of physically
>>    contiguous memory to hold the new buckets.  Such an allocation is
>>    unlikely to succeed.
> 
> I regularly rely upon allocations orders of magnitude larger than that. Why
> do you think you cannot reliably allocate 40Mb?

Pay attention Jon:

  "In the context I'm interested in (kernel) re-sizing a 10,000,000
  entry hash table is not really an option because it assumes :-

  a) one can at any time allocate a 40MB chunk of physically
     contiguous memory to hold the new buckets.  Such an allocation is
     unlikely to succeed. "

He's writing OS kernel code where you can never assume the
availability of significant amounts of contiguous memory: Even if it's
available it's probably fragmented.

> Finally, I object to the idea of crippling the hash table in order to
> represent worst case behaviour. If you want to analyze worst case
> behaviour, choose an input to the *same implementation* that exhibits the
> worst case behaviour.

Kernels have to care about worst case behaviour. You might not: choose
your algorithms appropriately. Surely this isn't controversial?

Phil

-- 
http://www.kantaka.co.uk/ .oOo. public key: http://www.kantaka.co.uk/gpg.txt
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <W9GdnQADeIzQ0vnXnZ2dnUVZ8iadnZ2d@brightview.co.uk>
Phil Armstrong wrote:
> Jon Harrop <···@ffconsultancy.com> wrote:
>> Stephen J. Bevan wrote:
>>> a) one can at any time allocate a 40MB chunk of physically
>>>    contiguous memory to hold the new buckets.  Such an allocation is
>>>    unlikely to succeed.
>> 
>> I regularly rely upon allocations orders of magnitude larger than that.
>> Why do you think you cannot reliably allocate 40Mb?
> 
> Pay attention Jon:
> 
>   "In the context I'm interested in (kernel) re-sizing a 10,000,000
>   entry hash table is not really an option because it assumes :-
> 
>   a) one can at any time allocate a 40MB chunk of physically
>      contiguous memory to hold the new buckets.  Such an allocation is
>      unlikely to succeed. "
> 
> He's writing OS kernel code where you can never assume the
> availability of significant amounts of contiguous memory: Even if it's
> available it's probably fragmented.

1. Fragmenting the hash table will solve that problem and still be vastly
faster than any purely functional tree.

2. His tree-based code does not handle fragmentation either.

>> Finally, I object to the idea of crippling the hash table in order to
>> represent worst case behaviour. If you want to analyze worst case
>> behaviour, choose an input to the *same implementation* that exhibits the
>> worst case behaviour.
> 
> Kernels have to care about worst case behaviour. You might not: choose
> your algorithms appropriately. Surely this isn't controversial?

Neither controversial nor relevant to my statement about experimental
methodology.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Adrian Hey
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <P4GdncLCz7_35PnXnZ2dnUVZ8uednZ2d@pipex.net>
Hello Jon

Jon Harrop wrote:
> On the other hand, I think the sorted ints you are inserting into balanced
> trees is worst case for them:

FWIW with AVL trees sorted data sets are pretty much best case. This is
another good thing about AVL trees.

If you play about with this applet..

  http://www.site.uottawa.ca/~stan/csi2514/applets/avl/BT.html

..you can see that with sorted insertions you always get a perfectly
balanced tree for (2^N)-1 elements. It's a pity there's not a similar
demo for other balancing algorithms (not sure how they perform with
the all too common "accidentally sorted" data sets).

Regards
--
Adrian Hey
From: Adrian Hey
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <kvmdnfmfH40dWvnXnZ2dnUVZ8kadnZ2d@pipex.net>
Hello Folks,

Adrian Hey wrote:
> FWIW with AVL trees sorted data sets are pretty much best case. This is
> another good thing about AVL trees.
> 
> If you play about with this applet..
> 
>  http://www.site.uottawa.ca/~stan/csi2514/applets/avl/BT.html
> 
> ..you can see that with sorted insertions you always get a perfectly
> balanced tree for (2^N)-1 elements. It's a pity there's not a similar
> demo for other balancing algorithms (not sure how they perform with
> the all too common "accidentally sorted" data sets).

OK, googling around reveals quite a few red-black demo applets, most of
which don't work at all AFAICT. But this one does..

http://www.ibr.cs.tu-bs.de/courses/ss98/audii/applets/BST/RedBlackTree-Example.html

If you try sorted insertion of 15 entries with it you get an insertion path
of 6 nodes (vs. 4 with AVL). So on this (admittedly tiny) test red-black
trees don't look very attractive.

But now I remember I benchmarked this myself a few years ago..

http://groups.google.co.uk/group/comp.lang.functional/msg/74a422ea04ff1217

.. it looks like for growing a tree from large sorted data sets
red-black search paths are about 40% longer than AVL.

Regards
--
Adrian Hey
From: Keith H Duggar
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <7c4f9704-525f-4f03-947b-1582f74095a5@o7g2000yqb.googlegroups.com>
On Jul 20, 1:05 am, ·······@dino.dnsalias.com (Stephen J. Bevan)
wrote:
> Jon Harrop <····@ffconsultancy.com> writes:
> > Can you give us the code? I am particularly interested in your purely
> > functional tree implementation in C.
>
> The code is mundane and I would have used email but at least one other
> person asked for it so it is attached at the end of this message.  The
> following puts the code in some context :-

Stephen, first thanks for the code and the beautiful write up!

Second, gettimeofday takes 2 arguments

     int
     gettimeofday(struct timeval *tp, struct timezone *tzp);

so you need to explicitly pass 0 for the tzp argument. If
you don't it will crash on some systems, as it did on mine, and
perhaps this is also why it crashed on Jon's 64-bit system.

Third, and unfortunately, gettimeofday is fundamentally flawed
for profiling of this nature. gettimeofday measures calendar
(wall-clock) time and therefore will include time spent running
other processes, at least in a preemptive OS such as you (we) are
using. So sadly all your timing results (particularly for the
faster runs, i.e. the ones with smaller n) are corrupted by
whatever arbitrary stuff was happening on your system at the time
you ran the tests.

You need to either use a profiler or switch to clock() timing.
However, there are many caveats when using clock(). For one,
many systems have a slow clock() of say 100Hz so you will need
to make sure you run a timed code fragment for long enough to
capture a reasonable number of samples. Another problem is
that this scaling will vary depending on n because obviously
the running time is a function of n. So you need to scale the number
of timing samples roughly with the inverse of the expected running
time for a given n. So you need something like this rough code

#define MIN_SAMPLES_PER_N 1000000.0

double timesomething ( int samples, int n, yada yada)
{
   int normSamples = (int)(
      samples * MIN_SAMPLES_PER_N / n + 0.5) ;

   clock_t start = clock() ;

   for ( int i = 0 ; i < normSamples ; ++i ) {
      something(n, yada yada) ;
   }

   clock_t stop = clock();
   double delta = stop - start ;      /* in clock ticks, not seconds */
   delta *= (double)samples / normSamples ;
   return delta ;
}

where that MIN_SAMPLES_PER_N is whatever scale factor gives a
reasonable clock() delta for the iterated something(n) to ensure
reasonable reliability and low systematic bias.

KHD
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <udOdnZQmUMtDScXXnZ2dnUVZ8l1i4p2d@brightview.co.uk>
Johannes Laire wrote:
> Jon Harrop wrote:
>> Larry D'Anna wrote:
>> > ["Followup-To:" header set to comp.lang.lisp.]
>> > On 2009-07-10, Jon Harrop <····@ffconsultancy.com> wrote:
>> >> Here is a trivial counter example: mutate an element in an array in
>> >> O(1).
>>
>> > easy.
>>
>> >http://cvs.haskell.org/Hugs/pages/libraries/base/Data-Array-ST.html
>>
>> That is O(n) with GHC.
> 
> It is O(1) for unboxed arrays.

O(n) in general.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Larry D'Anna
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h3akb3$kp2$1@news.eternal-september.org>
On 2009-07-10, Jon Harrop <···@ffconsultancy.com> wrote:
> Larry D'Anna wrote:
>> ["Followup-To:" header set to comp.lang.lisp.]
>> On 2009-07-10, Jon Harrop <···@ffconsultancy.com> wrote:
>>> Here is a trivial counter example: mutate an element in an array in O(1).
>> 
>> easy.
>> 
>> http://cvs.haskell.org/Hugs/pages/libraries/base/Data-Array-ST.html
>
> That is O(n) with GHC.

$ cat stupid.hs 

import CPUTime
import System.IO
import Prelude
import Control.Monad.ST
import Data.Array.ST
import Data.Array

fill a i n = if i > n 
                then return ()
                else do writeArray a i i 
                        fill a (i+1) n 

count n = do a <- newArray (1,n) 0 :: ST s (STUArray s Int Int)
             fill a 1 n 
             return ()

bench n = do t1 <- getCPUTime
             stToIO $ count n 
             t2 <- getCPUTime 
             print $ round $ toRational(t2 - t1) / toRational n

main = do bench 10000
          bench 100000
          bench 1000000
          bench 10000000

$ ghc stupid.hs
$ ./a.out
1200000
1000070
1016063
1002063
$ echo Jon Harrop is a loud-mouthed idiot.
Jon Harrop is a loud-mouthed idiot.
$
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <wcqdndigbo52QcXXnZ2dnUVZ8vKdnZ2d@brightview.co.uk>
Larry D'Anna wrote:
> On 2009-07-10, Jon Harrop <···@ffconsultancy.com> wrote:
>> That is O(n) with GHC.
> 
> $ cat stupid.hs
> 
> import CPUTime
> import System.IO
> import Prelude
> import Control.Monad.ST
> import Data.Array.ST
> import Data.Array
> 
> fill a i n = if i > n
>                 then return ()
>                 else do writeArray a i i
>                         fill a (i+1) n
> 
> count n = do a <- newArray (1,n) 0 :: ST s (STUArray s Int Int)
>              fill a 1 n
>              return ()
> 
> bench n = do t1 <- getCPUTime
>              stToIO $ count n
>              t2 <- getCPUTime
>              print $ round $ toRational(t2 - t1) / toRational n
> 
> main = do bench 10000
>           bench 100000
>           bench 1000000
>           bench 10000000
> 
> $ ghc stupid.hs
> $ ./a.out
> 1200000
> 1000070
> 1016063
> 1002063

Fill the array with the list [i] instead of the integer i and you get:

0
240010
552034
3161398

That is clearly not O(1).

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Larry D'Anna
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <h3g79h$vfa$2@news.eternal-september.org>
On 2009-07-11, Jon Harrop <···@ffconsultancy.com> wrote:
> Fill the array with the list [i] instead of the integer i and you get:
>
> 0
> 240010
> 552034
> 3161398
>
> That is clearly not O(1).

Wow you're right.  Only the unboxed arrays are O(1).  I'm a bit shocked.

    --larry
From: Ertugrul =?UTF-8?B?U8O2eWxlbWV6?=
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <20090711161201.2b5117ac@tritium.xx>
Larry D'Anna <·····@elder-gods.org> wrote:

> > Here is a trivial counter example: mutate an element in an array in
> > O(1).
>
> easy.
>
> http://cvs.haskell.org/Hugs/pages/libraries/base/Data-Array-ST.html

There is also DiffArray which doesn't require any monad.  Unfortunately
it's slower than STArray and STUArray, which give you C-like speed, so
if you want high performance, STArrays/STUArrays are the way to go.


Greets,
Ertugrul.


-- 
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <JdKdncPNiLH55crXnZ2dnUVZ8l-dnZ2d@brightview.co.uk>
Ertugrul Söylemez wrote:
> fft1976 <·······@gmail.com> wrote:
>> But because most popular languages are imperative, people are whining
>> about those 1/4 cases and some disturbed individuals even think
>> languages should be pure-functional.
> 
> What's wrong with purely functional languages?

. Unpredictable performance.

. Unreliable.

. Uninteroperable.

>> What I want is a safe (or has an option to be safe), fast mostly
>> imperative language that doesn't suck.
> 
> If you want safety, go for functional programming.

If you want to interoperate with code written in other languages then
functional languages like OCaml and Haskell are *less* safe than the JVM
and CLR because their FFIs are comparatively poorly designed and poorly
implemented.

> About speed, most decent functional languages aren't considerably slower
> than C, 

That assertion is uselessly subjective. What are "decent" functional
languages? How much is "considerably" slower? Why are you comparing with C?
Are you assuming GCC and not Intel C++ (which often generates code that is
several times faster)?

The fastest BWT implementation written in Haskell, after weeks of
optimization by several experts, remained over 200x slower than C++:

http://www.mail-archive.com/haskell-cafe%40haskell.org/msg25645.html

So either Haskell is not "decent" or 200x slower is "not considerably
slower".

> but given that you save a lot of development time, you'll get your result
> much faster.

Not if your "result" is efficient code, which is often the case for
professional developers because their end users demand speed.

> Also purely functional languages (e.g. Haskell, Clean) and languages
> with immutable data (e.g. F#) allow clean and concise algorithm
> implementations, which are still just as fast as imperative variants
> with explicit reference/memory handling.  Finally with these two classes
> of languages and additionally Erlang you get concurrency and parallelity
> almost for free.

GHC's stop-the-world GC does not scale so Haskell programmers obviously do
not get "parallelism almost for free".

> Multithreaded programming is a PITA in all imperative languages.

Cilk makes parallelism easy.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: fft1976
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <ce56a124-8908-414a-8cf0-5bdf13b6a3d4@t33g2000yqe.googlegroups.com>
On Jul 10, 11:29 am, Jon Harrop <····@ffconsultancy.com> wrote:

> Are you assuming GCC and not Intel C++ (which often generates code that is
> several times faster)?

[citation_needed]

IME they are neck to neck. Same for C++ vs Fortran (IFC and G77). ICC
gave me more problems, so I  dumped it.
From: Ertugrul =?UTF-8?B?U8O2eWxlbWV6?=
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <20090710212840.776e617b@tritium.xx>
Jon Harrop <···@ffconsultancy.com> wrote:

> > What's wrong with purely functional languages?
>
> . Unpredictable performance.

Wrong.


> . Unreliable.

Wrong (although right to some extent on PPC).


> . Uninteroperable.

Right.  That's something that could use some improvement.


> > About speed, most decent functional languages aren't considerably
> > slower than C,
>
> That assertion is uselessly subjective. What are "decent" functional
> languages? How much is "considerably" slower? Why are you comparing
> with C?  Are you assuming GCC and not Intel C++ (which often generates
> code that is several times faster)?
>
> The fastest BWT implementation written in Haskell, after weeks of
> optimization by several experts, remained over 200x slower than C++:
>
> http://www.mail-archive.com/haskell-cafe%40haskell.org/msg25645.html
>
> So either Haskell is not "decent" or 200x slower is "not considerably
> slower".

Hmm.  This one took me about 10 minutes:

  bwt :: B.ByteString -> B.ByteString
  bwt str =
    let sorted = map fst . sortBy (compare `on` snd) $ zip [0, 1..] (B.unpack str)
        slen   = B.length str
    in B.pack . map (\i -> B.index str ((i-1) `mod` slen)) $ sorted

It's not optimal, because it uses list sorting.  Replace it by an
in-place sort and you're set.


> GHC's stop-the-world GC does not scale so Haskell programmers
> obviously do not get "parallelism almost for free".

Right.  It's not that it makes such a big difference anyway, but it
could be improved a lot.  When using explicit threading in Haskell I
usually get c*n*100% of the performance of a single thread, where c is
some constant between 0.8 and 0.9 and n is the number of CPUs and
threads.  Unfortunately this is not true for implicit parallelism using
strategies, but I don't believe that this can be improved a lot.


Greets,
Ertugrul.


-- 
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <hNednbz7EsV5N8rXnZ2dnUVZ8oednZ2d@brightview.co.uk>
Ertugrul Söylemez wrote:
> Jon Harrop <···@ffconsultancy.com> wrote:
>> > What's wrong with purely functional languages?
>>
>> . Unpredictable performance.
> 
> Wrong.

There is overwhelming evidence to the contrary. Indeed, you provide more
below...

>> So either Haskell is not "decent" or 200x slower is "not considerably
>> slower".
> 
> Hmm.  This one took me about 10 minutes:
> 
>   bwt :: B.ByteString -> B.ByteString
>   bwt str =
>     let sorted = map fst . sortBy (compare `on` snd) $ zip [0, 1..]
>     (B.unpack str)
>         slen   = B.length str
>     in B.pack . map (\i -> B.index str ((i-1) `mod` slen)) $ sorted

Your incomplete implementation is over 100x slower than bzip2. Moreover,
your Haskell runs out of memory when trying to compress only 8Mb.

The poor performance of your Haskell code is a direct consequence of
Haskell's unpredictability. Obviously you failed to predict how bad your
code was or you would not have posted an example that contradicts your own
assertions.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: ACL
Subject: META: Proper trolling technique and methodology
Date: 
Message-ID: <16cd3b1b-ed46-4280-bb4b-5c31c467077e@h11g2000yqb.googlegroups.com>
A quick analysis so that we can better understand the question  'what
is trolling?'

On Jul 10, 6:05 pm,  wrote:
> Ertugrul Söylemez wrote:
> >  wrote:
> >> > What's wrong with purely functional languages?
>
> >> . Unpredictable performance.
>
> > Wrong.
>
> There is overwhelming evidence to the contrary. Indeed, you provide more
> below...
>
> >> So either Haskell is not "decent" or 200x slower is "not considerably
> >> slower".
>
You see here, the wily troll makes a claim of comparison as if it is
fact. 'Haskell is 200x slower' than ...? He doesn't specify what it is
200x slower than.

> > Hmm.  This one took me about 10 minutes:
>
> >   bwt :: B.ByteString -> B.ByteString
> >   bwt str =
> >     let sorted = map fst . sortBy (compare `on` snd) $ zip [0, 1..]
> >     (B.unpack str)
> >         slen   = B.length str
> >     in B.pack . map (\i -> B.index str ((i-1) `mod` slen)) $ sorted
>

The trolled party posts something to prove that his language is productive.

> Your incomplete implementation is over 100x slower than bzip2. Moreover,
> your Haskell runs out of memory when trying to compress only 8Mb.
>

The troll cleverly compares the 6-line Haskell implementation to an
industry-quality implementation of zip compression, one specifically
well known for its speed, the source of which is nearly 1 megabyte of
information compressed. We should note that it is only 100x slower
than the high-quality implementation, while the claim was that Haskell
would be 200x slower...

> The poor performance of your Haskell code is a direct consequence of
> Haskell's unpredictability. Obviously you failed to predict how bad your
> code was or you would not have posted an example that contradicts your own
> assertions.

He omits the fact that the poor performance is a result of not
comparing apples to apples in any sort of reasonable manner. He also
omits the fact that the author admitted that it wasn't the highest
quality implementation of zip compression.

Here's to you Doctor Harrop, a fine specimen of the Trollish race, now
please get back under your bridge.
From: Jon Harrop
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <DeWdnQGu_6n3e8rXnZ2dnUVZ8tOdnZ2d@brightview.co.uk>
ACL wrote:
>> >> So either Haskell is not "decent" or 200x slower is "not considerably
>> >> slower".
>
> You see here, the wily troll makes a claim of comparison as if it were
> fact.

Err, it is fact and I cited the source.

> 'Haskell is 200x slower' than ...? he doesn't specify what it is 
> 200x slower than.

I explicitly wrote "200x slower than C++".

>> > Hmm.  This one took me about 10 minutes:
>>
>> > bwt :: B.ByteString -> B.ByteString
>> > bwt str =
>> >   let sorted = map fst . sortBy (compare `on` snd) $
>> >                  zip [0, 1..] (B.unpack str)
>> >       slen   = B.length str
>> >   in B.pack . map (\i -> B.index str ((i-1) `mod` slen)) $ sorted
> 
> The trollee posts something to prove that his language is productive.
> 
>> Your incomplete implementation is over 100x slower than bzip2. Moreover,
>> your Haskell runs out of memory when trying to compress only 8Mb.
> 
> The troll cleverly compares the 6-line Haskell implementation to an
> industry-quality implementation of zip compression,

Bzip2, not zip.

> specifically well known for its speed.

Did you want to compare Haskell with badly written inefficient C?

> The source of which is nearly 1 megabyte of information, compressed.

No, the source code to bzip2 is 170kB in under 9kLOC. The "blocksort.c"
file that contains the equivalent of this code (and a lot more) is only
729 LOC.

> We should note that it is only 100x slower than the high quality
> implementation, while the claim was that haskell would be 200x
> slower...

No, the partially-implemented Haskell is already 100x slower than the
complete C implementation, so a full Haskell implementation based upon it
must be over 100x slower.

>> The poor performance of your Haskell code is a direct consequence of
>> Haskell's unpredictability. Obviously you failed to predict how bad your
>> code was or you would not have posted an example that contradicts your
>> own assertions.
> 
> He omits the fact that the poor performance is a result of not
> comparing apples to apples in any sort of reasonable manner.

You misunderstood the comparison.

> He also omits the fact that the author admitted that it wasn't the highest
> quality implementation of zip compression.

Ertugrul claimed he could write performant Haskell and then didn't.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Ertugrul =?UTF-8?B?U8O2eWxlbWV6?=
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <20090711155837.01078e5a@tritium.xx>
Jon Harrop <···@ffconsultancy.com> wrote:

> > He also omits the fact that the author admitted that it wasn't the
> > highest quality implementation of zip compression.
>
> Ertugrul claimed he could write performant Haskell and then didn't.

Or maybe you should learn to use the compiler properly.  The code works
well for me.  High memory usage comes from the sorting algorithm
(because it uses lists), which I said can be improved.


Greets,
Ertugrul.


-- 
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/
From: Jon Harrop
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <68SdnYU5hLiCUMXXnZ2dnUVZ8hVi4p2d@brightview.co.uk>
Ertugrul Söylemez wrote:
> Jon Harrop <···@ffconsultancy.com> wrote:
>> > He also omits the fact that the author admitted that it wasn't the
>> > highest quality implementation of zip compression.
>>
>> Ertugrul claimed he could write performant Haskell and then didn't.
> 
> Or maybe you should learn to use the compiler properly.

You claimed "most decent functional languages aren't considerably slower
than C" and then provided Haskell that is over 100x slower than C.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: ACL
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <73786fb6-bae0-4232-b0ba-10fd2d0cb5b5@l31g2000vbp.googlegroups.com>
On Jul 10, 10:19 pm, wrote:
> ACL wrote:
> >> >> So either Haskell is not "decent" or 200x slower is "not considerably
> >> >> slower".
>
> > You see here, the wily troll makes a claim of comparison as if it were
> > fact.
>
> Err, it is fact and I cited the source.

No you didn't. I just scrolled up. I see no study cited.

> > 'Haskell is 200x slower' than ...? he doesn't specify what it is
> > 200x slower than.
>
> I explicitly wrote "200x slower than C++".
>
Who cares about C++?

It seems that any language you defame has to go up against 3 or 4
different languages of your choosing depending on the test. I thought
you were the F#/OCaml shill?

Besides, the Haskell was *according to you* only 100x slower than C
code (not C++ code). You claimed 200x.

>
> >> > Hmm.  This one took me about 10 minutes:
>
> >> > bwt :: B.ByteString -> B.ByteString
> >> > bwt str =
> >> >   let sorted = map fst . sortBy (compare `on` snd) $
> >> >                  zip [0, 1..] (B.unpack str)
> >> >       slen   = B.length str
> >> >   in B.pack . map (\i -> B.index str ((i-1) `mod` slen)) $ sorted
>
> > The trollee posts something to prove that his language is productive.
>
> >> Your incomplete implementation is over 100x slower than bzip2. Moreover,
> >> your Haskell runs out of memory when trying to compress only 8Mb.
>
> > The troll cleverly compares the 6-line Haskell implementation to an
> > industry-quality implementation of zip compression,
>
> Bzip2, not zip.
>

Nitpicking doesn't make you more right.

> > specifically well known for its speed.
>
> Did you want to compare Haskell with badly written inefficient C?
>
If it is badly written and inefficient Haskell... then yes.

> > The source of which is nearly 1 megabyte of information, compressed.
>
> No, the source code to bzip2 is 170kB in under 9kLOC. The "blocksort.c" file
> that contains the equivalent of this code (and a lot more) is only 729
> LOC.
>



That's weird, I just downloaded the source and, zipped, it came in at
~800kB.

Even if what you say is right, you are still comparing 100x more LOC.
Only 730 LOC? Seriously?

> > We should note that it is only 100x slower than the high quality
> > implementation, while the claim was that haskell would be 200x
> > slower...
>
> No, the partially-implemented Haskell is already 100x slower than the
> complete C implementation, so a full Haskell implementation based upon it
> must be over 100x slower.
>

That doesn't make sense. Adding code does not equal adding extra run
time. Sometimes adding code makes things faster, you know.

> >> The poor performance of your Haskell code is a direct consequence of
> >> Haskell's unpredictability. Obviously you failed to predict how bad your
> >> code was or you would not have posted an example that contradicts your
> >> own assertions.
>
> > He omits the fact that the poor performance is a result of not
> > comparing apples to apples in any sort of reasonable manner.
>
> You misunderstood the comparison.
>

It seems it is a bad comparison....

> > He also omits the fact that the author admitted that it wasn't the highest
> > quality implementation of zip compression.
>
> Ertugrul claimed he could write performant Haskell and then didn't.
>

I think he claimed it could be written, not that he was writing it.
From: Jon Harrop
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <HaidnRV--OLh48TXnZ2dnUVZ8vqdnZ2d@brightview.co.uk>
ACL wrote:
> On Jul 10, 10:19 pm, wrote:
>> ACL wrote:
>> >> >> So either Haskell is not "decent" or 200x slower is "not
>> >> >> considerably slower".
>>
>> > You see here, the wily troll makes a claim of comparison as if it were
>> > fact.
>>
>> Err, it is fact and I cited the source.
> 
> No you didn't. I just scrolled up. I see no study cited.

Here's that citation again:

http://www.mail-archive.com/haskell-cafe%40haskell.org/msg25645.html

>> > 'Haskell is 200x slower' than ...? he doesn't specify what it is
>> > 200x slower than.
>>
>> I explicitly wrote "200x slower than C++".
>
> Who cares about C++?
> 
> It seems that any language you defame has to go up against 3 or 4
> different languages of your choosing depending on the test. I thought
> you were the F#/OCaml shill?
> 
> Besides, the Haskell was *according to you* only 100x slower than C
> code (not C++ code). You claimed 200x.

The post I cited claimed 200.

>> > specifically well known for its speed.
>>
>> Did you want to compare Haskell with badly written inefficient C?
>
> if it is badly written and inefficient haskell... then yes.

We are trying to determine whether or not Haskell code can be efficient.

>> > The source of which is nearly 1 megabyte of information, compressed.
>>
>> No, the source code to bzip2 is 170kB in under 9kLOC. The "blocksort.c"
>> file that contains the equivalent of this code (and a lot more) is
>> only 729 LOC.
> 
> That's weird i just downloaded the source and zipped it came in at
> ~800kb.

The BZip2 distro includes the manual in several different formats, e.g. a
1Mb PostScript file.

> Even if what you say is right, you are still comparing 100x more LOC.
> Only 730 loc? Seriously?

Sure. I won't contest that the C is far more verbose but this discussion was
solely about performance.

>> > We should note that it is only 100x slower than the high quality
>> > implementation, while the claim was that haskell would be 200x
>> > slower...
>>
>> No, the partially-implemented Haskell is already 100x slower than the
>> complete C implementation, so a full Haskell implementation based upon it
>> must be over 100x slower.
> 
> That doesn't make sense. Adding code is not equal to adding extra run
> time. Sometimes adding code makes things faster, you know.

How could adding code that implements the subsequent phases of the
compression algorithm possibly decrease the total running time?

>> >> The poor performance of your Haskell code is a direct consequence of
>> >> Haskell's unpredictability. Obviously you failed to predict how bad
>> >> your code was or you would not have posted an example that contradicts
>> >> your own assertions.
>>
>> > He omits the fact that the poor performance is a result of not
>> > comparing apples to apples in any sort of reasonable manner.
>>
>> You misunderstood the comparison.
> 
> It seems it is a bad comparison....

Indeed, it turns out that Ertugrul's Haskell implementation is actually
broken.

>> > He also omits the fact that the author admitted that it wasn't the
>> > highest quality implementation of zip compression.
>>
>> Ertugrul claimed he could write performant Haskell and then didn't.
> 
> I think he claimed it could be written, not that he was writing it.

Indeed, this is the second time he has made a claim about Haskell only to
prove himself wrong.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Richard Heathfield
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <WOednegtWpxWE8TXnZ2dnUVZ8sudnZ2d@bt.com>
Jon Harrop said:

> ACL wrote:
>> On Jul 10, 10:19 pm, wrote:
>>> ACL wrote:
>>> >> >> So either Haskell is not "decent" or 200x slower is "not
>>> >> >> considerably slower".
>>>
>>> > You see here, the wily troll makes a claim of comparison as
>>> > if it were fact.
>>>
>>> Err, it is fact and I cited the source.
>> 
>> No you didn't. I just scrolled up. I see no study cited.
> 
> Here's that citation again:
> 
http://www.mail-archive.com/haskell-cafe%40haskell.org/msg25645.html

Hardly a "study" - just a single data point. Note that I don't doubt 
the claim. It is obvious to me that C++, being a bit of a 
cart-horse, is not going to manage to be much more than 200x faster 
than Haskell (which is more of a barge-horse, with a missing leg). 
Nevertheless, that single data point is only the tiniest indicator 
that this is so, and most certainly does not deserve to be 
dignified by the name of "study". To extrapolate an argument from a 
single datum is rarely wise.

<snip>

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. ····@
Forged article? See 
http://www.cpax.org.uk/prg/usenet/comp.lang.c/msgauth.php
"Usenet is a strange place" - dmr 29 July 1999
From: Ertugrul =?UTF-8?B?U8O2eWxlbWV6?=
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <20090713211233.3599683c@tritium.xx>
Jon Harrop <···@ffconsultancy.com> wrote:

> >>>> The poor performance of your Haskell code is a direct consequence
> >>>> of Haskell's unpredictability. Obviously you failed to predict
> >>>> how bad your code was or you would not have posted an example
> >>>> that contradicts your own assertions.
> >>>
> >>> He omits the fact that the poor performance is a result of not
> >>> comparing apples to apples in any sort of reasonable manner.
> >>
> >> You misunderstood the comparison.
> >
> > It seems it is a bad comparison....
>
> Indeed, it turns out that Ertugrul's Haskell implementation is
> actually broken.

Its output is shifted compared to the original BWT transform.  You can
correct that without adding extra code, although there is no reason to
do that.  And I say it again:  Replace the list sorting by an in-place
sort and you get orders of magnitude better performance.  I was just too
lazy to implement it.
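For concreteness, the rotation-sorting BWT being alluded to can be sketched as follows. This is a textbook version, not the code from the thread: it uses plain String and Data.List.sort, makes no performance claims, and the '\0' sentinel is an illustrative assumption (production implementations track the index of the original rotation instead).

```haskell
module Main where

import Data.List (sort)

-- Naive Burrows-Wheeler transform: append a sentinel, sort every
-- rotation of the string, and take the last column.  Sorting whole
-- rotations (rather than single characters) is what keeps the output
-- from coming out shifted.
bwtNaive :: String -> String
bwtNaive s = map last (sort rotations)
  where
    s'        = s ++ "\0"
    n         = length s'
    rotations = [ take n (drop i (cycle s')) | i <- [0 .. n - 1] ]

main :: IO ()
main = putStrLn (filter (/= '\0') (bwtNaive "banana"))  -- prints "annbaa"
```

This is quadratic in both time and space; the in-place suffix sort mentioned above is what a serious implementation (like bzip2's blocksort.c) would use instead.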


> >>> He also omits the fact that the author admitted that it wasn't the
> >>> highest quality implementation of zip compression.
> >>
> >> Ertugrul claimed he could write performant Haskell and then didn't.
> >
> > I think he claimed it could be written, not that he was writing it.
>
> Indeed, this is the second time he has made a claim about Haskell only
> to prove himself wrong.

This is about the hundredth time you claim that I have proven myself
wrong.  You said that the best Haskell implementation some people from
the mailing list could come up with was 200 times slower than equivalent
C code.  I wrote in a matter of 10 minutes a (nowhere near optimal)
variant, which is, according to you, only 100 times slower.  I disregard
the fact that you have repeatedly shown that you're unable to use the
compiler properly.  Anyway, the bad performance comes from the fact that
I have used list sorting.  See above for how to fix this.


Greets,
Ertugrul.


-- 
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/
From: Paul Rubin
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <7xy6qstmb7.fsf@ruckus.brouhaha.com>
Ertugrul Söylemez <··@ertes.de> writes:
> Replace the list sorting by an in-place sort and you get orders of
> magnitude better performance.  I was just too lazy to implement it.

I wonder if the sorting code in uvector could handle it.

http://hackage.haskell.org/package/uvector-algorithms
From: Jon Harrop
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <cu6dnaQ3buRbC8bXnZ2dnUVZ8mRi4p2d@brightview.co.uk>
Ertugrul Söylemez wrote:
> Its output is shifted compared to the original BWT transform.  You can
> correct that without adding extra code, although there is no reason to
> do that.  And I say it again:  Replace the list sorting by an in-place
> sort and you get orders of magnitude better performance.  I was just too
> lazy to implement it.

Put up or shut up, Ertugrul. It's as simple as that. Show us the Haskell
implementation that gives the correct answer and is "not considerably
slower" than the C, or retract your claim. Your choice.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Ertugrul =?UTF-8?B?U8O2eWxlbWV6?=
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <20090713224433.4edbe0f1@tritium.xx>
Jon Harrop <···@ffconsultancy.com> wrote:

> > Its output is shifted compared to the original BWT transform.  You
> > can correct that without adding extra code, although there is no
> > reason to do that.  And I say it again:  Replace the list sorting by
> > an in-place sort and you get orders of magnitude better performance.
> > I was just too lazy to implement it.
>
> Put up or shut up, Ertugrul. It's as simple as that. Show us the
> Haskell implementation that gives the correct answer and is "not
> considerably slower" than the C, or retract your claim. Your choice.

Unfortunately I don't have the time to prove my claims all the time,
just because some hair-splitter like you insists on it.  But as said in
another subthread, I'll consider it.

If you're impatient, you may want to pay me for doing it.


Greets,
Ertugrul.


-- 
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/
From: Richard Harter
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <4a5babc6.280890828@text.giganews.com>
On Mon, 13 Jul 2009 22:44:33 +0200, Ertugrul
=?UTF-8?B?U8O2eWxlbWV6?= <··@ertes.de> wrote:

>Jon Harrop <···@ffconsultancy.com> wrote:
>
>> > Its output is shifted compared to the original BWT transform.  You
>> > can correct that without adding extra code, although there is no
>> > reason to do that.  And I say it again:  Replace the list sorting by
>> > an in-place sort and you get orders of magnitude better performance.
>> > I was just too lazy to implement it.
>>
>> Put up or shut up, Ertugrul. It's as simple as that. Show us the
>> Haskell implementation that gives the correct answer and is "not
>> considerably slower" than the C, or retract your claim. Your choice.
>
>Unfortunately I don't have the time to prove my claims all the time,
>just because some hair splitter like you insists on it.  But as said in
>another subthread, I'll consider it.
>
>If you're impatient, you may want to pay me for doing it.

While we all sympathize with the limits on your time, it remains
that you made a major claim about the efficiency of Haskell that
remains unsubstantiated.


Richard Harter, ···@tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
If I do not see as far as others, it is because
I stand in the footprints of giants. 
From: Paul Rubin
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <7xeisk2q7o.fsf@ruckus.brouhaha.com>
···@tiac.net (Richard Harter) writes:
> While we all sympathize with the limits on your time, it remains
> that you made a major claim about the efficiency of Haskell that
> remains unsubstantiated.

The Alioth shootout results are reasonable substantiation for general
claims about Haskell performance.  Jon's specific challenge is pretty
worthless unless he can put up OCaml or F# code that has the
performance that he's demanding from Ertugrul's Haskell code.
From: Richard Harter
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <4a5bf3cf.13899906@text.giganews.com>
On 13 Jul 2009 15:06:03 -0700, Paul Rubin
<·············@NOSPAM.invalid> wrote:

>···@tiac.net (Richard Harter) writes:
>> While we all sympathize with the limits on your time, it remains
>> that you made a major claim about the efficiency of Haskell that
>> remains unsubstantiated.
>
>The Alioth shootout results are reasonable substantiation for general
>claims about Haskell performance.  

Fair enough.  The value of shootouts is what it is, but they are
something.  However it is you that is offering substantiation
rather than Ertugrul.

>Jon's specific challenge is pretty
>worthless unless he can put up OCaml or F# code that has the
>performance that he's demanding from Ertugrul's Haskell code.

Why should he have to put up OCaml or F# code?  The issue at hand is
the performance of Haskell vs C.


Richard Harter, ···@tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
If I do not see as far as others, it is because
I stand in the footprints of giants. 
From: Paul Rubin
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <7xvdlvapnw.fsf@ruckus.brouhaha.com>
···@tiac.net (Richard Harter) writes:
> >Jon's specific challenge is pretty
> >worthless unless he can put up OCaml or F# code that has the
> >performance that he's demanding from Ertugrul's Haskell code.
> 
> Why should he have to put up OCaml or F# code?  The issue at hand is
> the performance of Haskell vs C.

Jon tries to infer something about Haskell vs. C's performance from
Ertugrul's having better things to do than respond to Jon's mindless
challenges.  If that is a valid form of inference, Jon should show the
validity by responding to a similar challenge regarding his own
favorite languages.  If he can't do that, following Jon's logic, we
should draw the same conclusion about OCaml and F# that Jon draws
about Haskell.  Heck, since there is not even a 100x slower-than-C F#
version of that function, maybe the ugly truth is that F# is 1000x
slower than C rather than 100x slower.
From: Jon Harrop
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <qLWdnarp--NBmcHXnZ2dnUVZ8mti4p2d@brightview.co.uk>
Paul Rubin wrote:
> ···@tiac.net (Richard Harter) writes:
>> >Jon's specific challenge is pretty
>> >worthless unless he can put up OCaml or F# code that has the
>> >performance that he's demanding from Ertugrul's Haskell code.
>> 
>> Why should he have to put up OCaml or F# code?  The issue at hand is
>> the performance of Haskell vs C.
> 
> Jon tries to infer something about Haskell vs. C's performance from
> Ertugrul's having better things to do than respond to Jon's mindless
> challenges.

I'm sorry if you don't value this challenge but I (and many other people)
find it very interesting indeed.

> If that is a valid form of inference, Jon should show the
> validity by responding to a similar challenge regarding his own
> favorite languages. If he can't do that, following Jon's logic, we
> should draw the same conclusion about OCaml and F# that Jon draws
> about Haskell.  Heck, since there is not even a 100x slower-than-C F#
> version of that function, maybe the ugly truth is that F# is 1000x
> slower than C rather than 100x slower.

Absolutely. I could not agree more. Which is why I just posted F# code that
matches the performance of highly-optimized C and thrashes Haskell.

I did not try to compare with bad C++ code or make excuses as Ertugrul did.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Jon Harrop
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <qLWdnaTp--MDm8HXnZ2dnUVZ8mti4p2d@brightview.co.uk>
Paul Rubin wrote:
> ···@tiac.net (Richard Harter) writes:
>> While we all sympathize with the limits on your time, it remains
>> that you made a major claim about the efficiency of Haskell that
>> remains unsubstantiated.
> 
> The Alioth shootout results are reasonable substantiation for general
> claims about Haskell performance.

The Alioth shootout is bad science and says nothing useful about Haskell at
all.

> Jon's specific challenge is pretty
> worthless unless he can put up OCaml or F# code that has the
> performance that he's demanding from Ertugrul's Haskell code.

I've posted F# code that far exceeds the performance I demanded of
Ertugrul's Haskell.

Perhaps now would be a good time for you to stop whining.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: ACL
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <526c60c0-b829-4179-9ed5-6e81e8122cf5@p36g2000vbn.googlegroups.com>
On Jul 14, 1:25 am, Jon Harrop <····@ffconsultancy.com> wrote:
> Paul Rubin wrote:
> > ····@tiac.net (Richard Harter) writes:
> >> While we all sympathize with the limits on your time, it remains
> >> that you made a major claim about the efficiency of Haskell that
> >> remains unsubstantiated.
>
> > The Alioth shootout results are reasonable substantiation for general
> > claims about Haskell performance.
>
> The Alioth shootout is bad science and says nothing useful about Haskell at
> all.
>

Your posts are just as much bad science.


> > Jon's specific challenge is pretty
> > worthless unless he can put up OCaml or F# code that has the
> > performance that he's demanding from Ertugrul's Haskell code.
>
> I've posted F# code that far exceeds the performance I demanded of
> Ertugrul's Haskell.
>
> Perhaps now would be a good time for you to stop whining.
>

Are you secretly Rush Limbaugh?

'Shut up! Shut up! Shut up!'
'Stop whining!'

> --
> Dr Jon D Harrop, Flying Frog Consultancy Ltd.
> http://www.ffconsultancy.com/?u
From: Scott Burson
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <97d3a26e-99e5-421c-b11d-be815b5d71f9@k13g2000prh.googlegroups.com>
On Jul 13, 2:53 pm, ····@tiac.net (Richard Harter) wrote:
>
> While we all sympathize with the limits on your time, it remains
> that you made a major claim about the efficiency of Haskell that
> remains unsubstantiated.

I'm curious about the outcome too, but whether Ertugrul can come
within a factor of 2 of the C implementation is, to me, not the most
interesting question.  I'd be more interested to know the ratio of the
time it takes him to the time the C implementation took to write.  It
seems unlikely, though, that the latter number can be obtained.

-- Scott
From: Paul Rubin
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <7x1vok2icg.fsf@ruckus.brouhaha.com>
Scott Burson <········@gmail.com> writes:
> I'd be more interested to know the ratio of the time it takes him to
> the time the C implementation took to write.  It seems unlikely,
> though, that the latter number can be obtained.

This might be of interest:

  http://www.cse.iitb.ac.in/~as/fpcourse/jfp.ps

See the chart on page 9.
From: ACL
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <81c528ce-0674-4274-bbda-28c7cd14c51e@l31g2000vbp.googlegroups.com>
On Jul 12, 3:20 am, JH wrote:
> ACL wrote:
> > On Jul 10, 10:19 pm, JH wrote:
> >> ACL wrote:
> >> >> >> So either Haskell is not "decent" or 200x slower is "not
> >> >> >> considerably slower".
>
> >> > You see here, the wily troll makes a claim of comparison as if it were
> >> > fact.
>
> >> Err, it is fact and I cited the source.
>
> > No you didn't. I just scrolled up. I see no study cited.
>
> Here's that citation again:
>
> http://www.mail-archive.com/haskell-cafe%40haskell.org/msg25645.html
>

One Usenet post doth not a study make.

What type of 'scientist' are you exactly?
(that was rhetorical)
> >> > 'Haskell is 200x slower' than ...? he doesn't specify what it is
> >> > 200x slower than.
>
> >> I explicitly wrote "200x slower than C++".
>
> > Who cares about C++?
>
> > It seems that any language you defame has to go up against 3 or 4
> > different languages of your choosing depending on the test. I thought
> > you were the F#/OCaml shill?
>
> > Besides, the Haskell was *according to you* only 100x slower than C
> > code (not C++ code). You claimed 200x.
>
> The post I cited claimed 200.
>
Indeed.

> >> > specifically well known for its speed.
>
> >> Did you want to compare Haskell with badly written inefficient C?
>
> > if it is badly written and inefficient haskell... then yes.
>
> We are trying to determine whether or not Haskell code can be efficient.
>

OK, we have 2 data points; now to be scientists and actually be able to
responsibly claim this as a 'likelihood' we need many more than 2
data points.

> >> > The source of which is nearly 1 megabyte of information, compressed.
>
> >> No, the source code to bzip2 is 170kB in under 9kLOC. The "blocksort.c"
> >> file that contains the equivalent of this code (and a lot more) is
> >> only 729 LOC.
>
> > That's weird i just downloaded the source and zipped it came in at
> > ~800kb.
>
> The BZip2 distro includes the manual in several different formats, e.g. a
> 1Mb PostScript file.
>
> > Even if what you say is right, you are still comparing 100x more LOC.
> > Only 730 loc? Seriously?
>
> Sure. I won't contest that the C is far more verbose but this discussion was
> solely about performance.
>
> >> > We should note that it is only 100x slower than the high quality
> >> > implementation, while the claim was that haskell would be 200x
> >> > slower...
>
> >> No, the partially-implemented Haskell is already 100x slower than the
> >> complete C implementation, so a full Haskell implementation based upon it
> >> must be over 100x slower.
>
> > That doesn't make sense. Adding code is not equal to adding extra run
> > time. Sometimes adding code makes things faster, you know.
>
> How could adding code that implements the subsequent phases of the
> compression algorithm possibly decrease the total running time?
>

He could implement a more memory-efficient sorting algorithm and maybe
play with the data structures he is using to represent things a bit.
His implementation is clearly a naive one (not that he is naive, only
you). Maybe there is an algorithmic improvement that becomes obvious
when using Haskell that is not obvious with the C code. Who knows?

> >> >> The poor performance of your Haskell code is a direct consequence of
> >> >> Haskell's unpredictability. Obviously you failed to predict how bad
> >> >> your code was or you would not have posted an example that contradicts
> >> >> your own assertions.
>
> >> > He omits the fact that the poor performance is a result of not
> >> > comparing apples to apples in any sort of reasonable manner.
>
> >> You misunderstood the comparison.
>
> > It seems it is a bad comparison....
>
> Indeed, it turns out that Ertugrul's Haskell implementation is actually
> broken.
>
So we can throw out the data point then?
You are now claiming that Haskell is slower than C and basing the
claim on 1 Usenet post.

Awesome.

> >> > He also omits the fact that the author admitted that it wasn't the
> >> > highest quality implementation of zip compression.
>
> >> Ertugrul claimed he could write performant Haskell and then didn't.
>
> > I think he claimed it could be written, not that he was writing it.
>
> Indeed, this is the second time he has made a claim about Haskell only to
> prove himself wrong.
>

'prove himself wrong' vs. 'not make his point'.

Nothing has been proven here.
From: Jon Harrop
Subject: Re: META: Proper trolling technique and methodology
Date: 
Message-ID: <1padnUX5B7UsV8HXnZ2dnUVZ8j5i4p2d@brightview.co.uk>
ACL wrote:
>> Indeed, this is the second time he has made a claim about Haskell only to
>> prove himself wrong.
> 
> 'prove himself wrong' vs. 'not make his point'.
> 
> Nothing has been proven here.

Yes, of course. We are trying to disprove something.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <Y82dnShZqvipNMDXnZ2dnUVZ8iZi4p2d@brightview.co.uk>
Ertugrul Söylemez wrote:
> Jon Harrop <···@ffconsultancy.com> wrote:
> 
>> > What's wrong with purely functional languages?
>>
>> . Unpredictable performance.
> 
> Wrong.

Ertugrul just disproved this for a third time on comp.lang.haskell, BTW.

He posted this prime sieve and claimed that moving to boxed arrays did not
make the program scale worse:

  {-# LANGUAGE BangPatterns #-}
  module Main where

  import Control.Monad
  import Control.Monad.ST
  import Data.Array.ST
  import Data.Array.Unboxed
  import System.Environment


  soe :: Int -> ST s (STUArray s Int Bool)
  soe n = do
    sieve <- newArray (2,n) True
    let !m = div n 2
    forM_ [2..m] $ \i -> do
      b <- readArray sieve i
      when b $ forM_ [i+i, i+i+i .. n] $ \j -> writeArray sieve j False
    return sieve


  soeList :: (Enum a, Num a) => Int -> [a]
  soeList n = map fst . filter snd . zip [2..] . elems . runSTUArray $ soe n


  main :: IO ()
  main = getArgs >>= mapM_ (print . length . soeList . read)

Compiled with ghc --make -O2, ratios of running times on my machine are:

0.5M:  2.52
1.0M:  4
2.0M:  7.3
3.0M:  9.4
4.0M: 13.4

The boxed version is clearly relatively slower on larger inputs. The
reason is that writing a single element into a boxed array is O(n) due
to a long-standing perf bug in GHC's run-time.
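For reference, the boxed variant under discussion differs from the posted sieve by a single type: STArray in place of STUArray. A minimal reconstruction along those lines (my sketch, not the exact code posted; the bang pattern is dropped so it needs no language extension):

```haskell
module Main where

import Control.Monad (forM_, when)
import Control.Monad.ST (ST)
import Data.Array (elems)
import Data.Array.ST (STArray, newArray, readArray, writeArray, runSTArray)

-- Boxed variant of the sieve: each cell is a pointer to a heap-allocated
-- Bool rather than a raw bit/byte, so every writeArray also goes through
-- the run-time's mutable-array write tracking -- the cost being measured.
soeBoxed :: Int -> ST s (STArray s Int Bool)
soeBoxed n = do
  sieve <- newArray (2, n) True
  let m = n `div` 2
  forM_ [2 .. m] $ \i -> do
    b <- readArray sieve i
    when b $ forM_ [i + i, i + i + i .. n] $ \j -> writeArray sieve j False
  return sieve

primesUpTo :: Int -> [Int]
primesUpTo n =
  map fst . filter snd . zip [2 ..] . elems $ runSTArray (soeBoxed n)

main :: IO ()
main = print (length (primesUpTo 100))  -- prints 25
```

Swapping STArray back to STUArray (and runSTArray to runSTUArray) recovers the unboxed version, which is what makes the two directly comparable in the ratios above.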

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <oJydnXaHrO5MO_jXnZ2dnUVZ8vdi4p2d@brightview.co.uk>
Ertugrul Söylemez wrote:
> Jon Harrop <···@ffconsultancy.com> wrote:
>> > What's wrong with purely functional languages?
>>
>> . Unpredictable performance.
> 
> Wrong.

I just found another interesting example of this in the form of the
interpreters written for the ICFP 2006:

  http://www.cse.unsw.edu.au/~dons/um.html

The Haskell solutions range from 4x to 60x slower than C.

In fact, half of the Haskell solutions are even slower than the C
interpreter interpreting an interpreter!

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Vassil Nikolov
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <snz63e1c952.fsf@luna.vassil.nikolov.name>
On Fri, 10 Jul 2009 00:32:57 +0100, Jon Harrop <···@ffconsultancy.com> said:
> ...
> unconventional operators (e.g. ** for power ...)

  That is an at-least-55-year-old convention.  How much more
  conventional can it get?

  ---Vassil.


-- 
"Even when the muse is posting on Usenet, Alexander Sergeevich?"
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <l-mdncqjqv31xsrXnZ2dnUVZ8vGdnZ2d@brightview.co.uk>
Vassil Nikolov wrote:
> On Fri, 10 Jul 2009 00:32:57 +0100, Jon Harrop <···@ffconsultancy.com>
> said:
>> ...
>> unconventional operators (e.g. ** for power ...)
> 
>   That is an at-least-55-year-old convention.  How much more
>   conventional can it get?

Superscript is more conventional.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Vassil Nikolov
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <snzab3bat16.fsf@luna.vassil.nikolov.name>
On Fri, 10 Jul 2009 17:26:14 +0100, Jon Harrop <···@ffconsultancy.com> said:

> Vassil Nikolov wrote:
>> On Fri, 10 Jul 2009 00:32:57 +0100, Jon Harrop <···@ffconsultancy.com>
>> said:
>>> ...
>>> unconventional operators (e.g. ** for power ...)
>> 
>> That is an at-least-55-year-old convention.  How much more
>> conventional can it get?

> Superscript is more conventional.

  Oh I see.  I thought we were talking about programming language
  conventions for writing expressions with a keyboard.  Silly me.

  ---Vassil.


-- 
"Even when the muse is posting on Usenet, Alexander Sergeevich?"
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <gKWdndkDQPcOvsTXnZ2dnUVZ8sKdnZ2d@brightview.co.uk>
Vassil Nikolov wrote:
> On Fri, 10 Jul 2009 17:26:14 +0100, Jon Harrop <···@ffconsultancy.com>
> said:
>> Vassil Nikolov wrote:
>>> On Fri, 10 Jul 2009 00:32:57 +0100, Jon Harrop <···@ffconsultancy.com>
>>> said:
>>>> ...
>>>> unconventional operators (e.g. ** for power ...)
>>> 
>>> That is an at-least-55-year-old convention.  How much more
>>> conventional can it get?
>>
>> Superscript is more conventional.
> 
>   Oh I see.  I thought we were talking about programming language
>   conventions for writing expressions with a keyboard.  Silly me.

I was talking about conventions for writing expressions with a keyboard.
LaTeX and Mathematica can both use superscript to denote power and in both
cases it is typed with ^.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Vassil Nikolov
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <snzljmuag4x.fsf@luna.vassil.nikolov.name>
On Sun, 12 Jul 2009 01:52:34 +0100, Jon Harrop <···@ffconsultancy.com> said:

> Vassil Nikolov wrote:
>> On Fri, 10 Jul 2009 17:26:14 +0100, Jon Harrop <···@ffconsultancy.com>
>> said:
>>> Vassil Nikolov wrote:
>>>> On Fri, 10 Jul 2009 00:32:57 +0100, Jon Harrop <···@ffconsultancy.com>
>>>> said:
>>>>> ...
>>>>> unconventional operators (e.g. ** for power ...)
>>>> 
>>>> That is an at-least-55-year-old convention.  How much more
>>>> conventional can it get?
>>> 
>>> Superscript is more conventional.
>> 
>> Oh I see.  I thought we were talking about programming language
>> conventions for writing expressions with a keyboard.  Silly me.

> I was talking about conventions for writing expressions with a keyboard.
> LaTeX and Mathematica can both use superscript to denote power and in both
> cases it is typed with ^.

  Then we are not comparing `**' and superscripting for
  exponentiation, but `**' and `^'.  The latter convention (which
  originated in the form of an upward-pointing arrow) is more recent
  by a few years.

  ---Vassil.


-- 
"Even when the muse is posting on Usenet, Alexander Sergeevich?"
From: Larry Coleman
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <1189301e-ef9a-4ced-96aa-3757d2a6f4d9@x3g2000yqa.googlegroups.com>
On Jul 9, 7:32 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> Larry Coleman wrote:
> > So what do you want from a programming language?
>
> Excellent question. I've never really tried to enumerate it before...
>
In short, the One Ring to rule them all. I've been there. The thing
is, everyone has a different idea about how the One Ring should look
and work. As a result, we either complain and annoy those who are
basically happy with the status quo and are actually writing stuff, or
we implement yet another ultimate programming language.

I'm sure you'll be happy with HLVM when it's done. However, the next
guy will look at it, say "WTF?", and implement his own personal Ring.

I think a better approach for the rest of us is to find one or more
platforms that are "good enough" (CL and Haskell both work for me) and
start coding apps and libraries.
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <39ednXdWANMn-MrXnZ2dnUVZ8rWdnZ2d@brightview.co.uk>
Larry Coleman wrote:
> I think a better approach for the rest of us is to find one or more
> platforms that are "good enough" (CL and Haskell both work for me) and
> start coding apps and libraries.

I think that is an extremely bad idea because you're building on sand. The
very foundation of your applications and libraries is rotting.

Microsoft were right on the money when they built .NET as a rock solid
foundation for everyone to use going forward and they are right to put a
huge amount of effort into migrating as much old code to run on top of .NET
as possible. They are so far ahead of Linux now that it is just ridiculous.

For a start, you're advocating using at least two separate and
uninteroperable garbage collectors in CL and Haskell. That is obviously a
really bad idea, not to mention that no implementations of either CL or
Haskell have GCs comparable to those found in the JVM and .NET.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Larry Coleman
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <9cbb350d-ffe6-402f-8b82-039d913eee31@r33g2000yqn.googlegroups.com>
On Jul 10, 1:10 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> Larry Coleman wrote:
> > I think a better approach for the rest of us is to find one or more
> > platforms that are "good enough" (CL and Haskell both work for me) and
> > start coding apps and libraries.
>
> I think that is an extremely bad idea because you're building on sand. The
> very foundation of your applications and libraries is rotting.

What does this even mean?

>
> Microsoft were right on the money when they built .NET as a rock solid
> foundation for everyone to use going forward and they are right to put a
> huge amount of effort into migrating as much old code to run on top of .NET
> as possible. They are so far ahead of Linux now that it is just ridiculous.
>
And for all that the .NET languages are so far inadequate to meet your
needs that you feel compelled to create yet another programming
environment.

> For a start, you're advocating using at least two separate and
> uninteroperable garbage collectors in CL and Haskell. That is obviously a
> really bad idea, not to mention that no implementations of either CL or
> Haskell have GCs comparable to those found in the JVM and .NET.
>

Actually, I'm only advocating picking something that works reasonably
well and getting on with it. I use either CL or Haskell according to
the task.

Also, I don't think you really got the point I was trying to make
earlier. So, let's ask this question: Don't you think the creators of
Ruby, Arc, Clojure, etc., all thought they were creating the ultimate
programming environment?
From: Larry Coleman
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <cfc9adfd-a01c-493d-8802-3b3172c2666e@h2g2000yqg.googlegroups.com>
On Jul 10, 12:19 pm, Larry Coleman <············@yahoo.com> wrote:
> On Jul 10, 1:10 pm, Jon Harrop <····@ffconsultancy.com> wrote:
>
> > Larry Coleman wrote:
> > > I think a better approach for the rest of us is to find one or more
> > > platforms that are "good enough" (CL and Haskell both work for me) and
> > > start coding apps and libraries.
>
> > I think that is an extremely bad idea because you're building on sand. The
> > very foundation of your applications and libraries is rotting.
>
> What does this even mean?
>
I should clarify this, as it implies that your statement about
building on sand was meaningless. What I meant to imply was that your
statement was content-free.

Also, in relation to my earlier point, a future Dr. Harrop will say
exactly the same thing about HLVM (that using it is like building on
sand and a rotting foundation).
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <lOKdnZT98IVIScrXnZ2dnUVZ8l1i4p2d@brightview.co.uk>
Larry Coleman wrote:
> On Jul 10, 1:10 pm, Jon Harrop <····@ffconsultancy.com> wrote:
>> Larry Coleman wrote:
>> > I think a better approach for the rest of us is to find one or more
>> > platforms that are "good enough" (CL and Haskell both work for me) and
>> > start coding apps and libraries.
>>
>> I think that is an extremely bad idea because you're building on sand.
>> The very foundation of your applications and libraries is rotting.
> 
> What does this even mean?

No attempt is being made to factor out the enormous amount of commonality
between those code bases (e.g. SBCL and GHC). For example, the garbage
collectors and the foreign function interfaces.

Hence Linux has a dozen different uninteroperable functional language
implementations each with completely separate GC implementations, none of
which are up to date with respect to important functionality like
concurrent collection.

>> Microsoft were right on the money when they built .NET as a rock solid
>> foundation for everyone to use going forward and they are right to put a
>> huge amount of effort into migrating as much old code to run on top of
>> .NET as possible. They are so far ahead of Linux now that it is just
>> ridiculous.
>
> And for all that the .NET languages are so far inadequate to meet your
> needs that you feel compelled to create yet another programming
> environment.

I was talking about the VM (CLR) and framework (e.g. WPF) and not the
languages. I am creating a new VM on Linux because Linux does not have a
suitable VM and not because Windows-only languages are inadequate.

>> For a start, you're advocating using at least two separate and
>> uninteroperable garbage collectors in CL and Haskell. That is obviously a
>> really bad idea, not to mention that no implementations of either CL or
>> Haskell have GCs comparable to those found in the JVM and .NET.
> 
> Actually, I'm only advocating picking something that works reasonably
> well and getting on with it. I use either CL or Haskell according to
> the task.

Sure.

> Also, I don't think you really got the point I was trying to make
> earlier. So, let's ask this question: Don't you think the creators of
> Ruby, Arc, Clojure, etc., all thought they were creating the ultimate
> programming environment?

This discussion has never been about creating the ultimate programming
environment. I am advocating factoring out the commonality between today's
language implementations so that we can build upon a shared VM in the
future in order to develop better language implementations more easily.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Larry Coleman
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <66d06b70-5083-4df0-a656-413178471525@y19g2000yqy.googlegroups.com>
On Jul 10, 9:04 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> Larry Coleman wrote:
> > On Jul 10, 1:10 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> >> Larry Coleman wrote:
> >> > I think a better approach for the rest of us is to find one or more
> >> > platforms that are "good enough" (CL and Haskell both work for me) and
> >> > start coding apps and libraries.
>
> >> I think that is an extremely bad idea because you're building on sand.
> >> The very foundation of your applications and libraries is rotting.
>
> > What does this even mean?

Apparently this was the question I shouldn't have asked because I
should have known I wouldn't like the answer.

>
> No attempt is being made to factor out the enormous amount of commonality
> between those code bases (e.g. SBCL and GHC). For example, the garbage
> collectors and the foreign function interfaces.
>
> Hence Linux has a dozen different uninteroperable functional language
> implementations each with completely separate GC implementations, none of
> which are up to date with respect to important functionality like
> concurrent collection.
>
OK, hands up, everyone who guessed in advance that this is what was
meant by "rotting foundation" and "building on sand."
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <udOdnZUmUMs-ScXXnZ2dnUVZ8l1i4p2d@brightview.co.uk>
Larry Coleman wrote:
> OK, hands up, everyone who guessed in advance that this is what was
> meant by "rotting foundation" and "building on sand."

Do you not agree that the state of FPL implementations on Linux would be
vastly better if we had a decent VM to build upon?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Larry Coleman
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <6d34f5a0-24c5-4845-9eab-6afba46838b0@p23g2000vbl.googlegroups.com>
On Jul 11, 3:15 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> Larry Coleman wrote:
> > OK, hands up, everyone who guessed in advance that this is what was
> > meant by "rotting foundation" and "building on sand."
>
> Do you not agree that the state of FPL implementations on Linux would be
> vastly better if we had a decent VM to build upon?
>

I'm not in the business of implementing functional languages, so I
don't know whether that's true, and frankly don't care.

Also, it takes some serious flexibility to make "rotting foundation"
and "building on sand" mean the same thing as "would be vastly
better." Most people can't stretch that far.
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <ksydnYqWH8zGhcTXnZ2dnUVZ8iti4p2d@brightview.co.uk>
Larry Coleman wrote:
> On Jul 11, 3:15 pm, Jon Harrop <····@ffconsultancy.com> wrote:
>> Larry Coleman wrote:
>> > OK, hands up, everyone who guessed in advance that this is what was
>> > meant by "rotting foundation" and "building on sand."
>>
>> Do you not agree that the state of FPL implementations on Linux would be
>> vastly better if we had a decent VM to build upon?
> 
> I'm not in the business of implementing functional languages, so I
> don't know whether that's true, and frankly don't care.

Ok. I care.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Frank Buss
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <8bs0bm1kfsmt$.ckcqcurzdlue.dlg@40tude.net>
Jon Harrop wrote:

> Libraries: Nothing on Linux comes close to Windows Presentation Foundation
> for robustly-deployable hardware accelerated UIs 

Do you know Qt and wxWidgets? Both libraries are really nice, and the GUI
editor for Qt is as complete as the one for WPF. And if you use layout
managers, you can't get bugs like this one with the buttons in the bottom
area, found in Microsoft Visual Studio 2008:

http://www.frank-buss.de/tmp/gui-bug.jpg

I don't know whether Microsoft practices "eat your own dogfood" and uses
WPF itself, but this is not good publicity for their framework.

wxWidgets has some nice GUI editors, too, and it uses the native controls of
the operating system, so I assume they are hardware accelerated. But why is
this important for a GUI if you just want to display some text and edit
fields, and not realtime 3D animation?

Qt is available as LGPL now that Nokia has bought Trolltech, including on
Windows (last year it was available for Windows commercially only).
wxWidgets is available for Win32, Mac OS X, GTK+, X11, Motif, WinCE, and
more, and has an LGPL-like licence.

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <hoadndI63trp-crXnZ2dnUVZ8h6dnZ2d@brightview.co.uk>
Frank Buss wrote:
> Jon Harrop wrote:
>> Libraries: Nothing on Linux comes close to Windows Presentation
>> Foundation for robustly-deployable hardware accelerated UIs
> 
> Do you know Qt and wxWidgets?

Yes, of course. Neither even attempt to provide hardware accelerated GUIs as
WPF does.

> Both libraries are really nice

They could barely be any worse. Even their own demo programs segfault
because the foundations they are built upon are rotten to the core.

Look at the use of first-class lexical closures in WPF, for example. Suffice
to say, C++ is nowhere near competitive in this respect.

> and the GUI editor for Qt is as complete as for WPF,

Nowhere near. WPF's GUI editor generates high-level safe interoperable code
that runs on .NET. I can call it directly from F# to generate binary
executables that millions of people can run directly.

Qt is a generation behind, generating code in a proprietary dialect of C++
that could barely be less safe or interoperable. I cannot even use Qt
directly from OCaml and, if I could, any compiled binaries would be brittle
with respect to the exact versions of all libraries on my machine, so they
would segfault on the vast majority of other machines. So the only
practical option is to distribute source code and ask users to recompile it
or distro maintainers to create custom packages for it. That throws up
barriers to commercial software on Linux. Moreover, if you even try to
discuss making Linux more commerce friendly, you get trampled by millions
of freeloaders who want to impose "freedom" upon people by taking that
choice away from them. Hence the quality of software on Linux is already
bad and is getting worse.

And I suppose I would be expected to go back to battling with bugs in the
g++ toolchain? No thanks. I dropped that crappy language and implementation
years ago for good reasons and have no intention of going backwards.

> but if you are using layout 
> managers, you can't do bugs like this one with the buttons in the bottom
> area, found in Microsoft Visual Studio 2008:
> 
> http://www.frank-buss.de/tmp/gui-bug.jpg

Firstly, that software isn't even using WPF. Secondly, KDE is riddled with
similar rendering bugs.

> I don't know if Microsoft practices "eat your own dogfood" and if they are
> using WPF, but this is not a good publicity for their framework.

WPF is in VS 2010.

> wxWidgets has some nice GUI editors, too and it uses the native controls
> of the operating system, so I assume they are hardware accelerated, but 
> why is this important for a GUI, if you just want to display some text and
> edit fields and not realtime 3D animation?

Are you seriously asking why someone developing a *graphical* user interface
might be interested in decent graphics?

> Qt is available as LGPL, now that Nokia bought Trolltech for Windows (last
> year it was available for Windows as commercial, only). wxWidget is
> available for Win32, Mac OS X, GTK+, X11, Motif, WinCE, and more, and has
> a LPGL-like licence.

Sure. Neither are remotely close to providing what I was asking for.

GUI libraries should be written in safe languages and they should use a
highly-reliable subset of the low-level rendering libraries for hardware
accelerated graphics. Qt and wxWidgets are both written in unsafe low-level
languages and neither make any attempt to hardware accelerate the contents
of windows. You can make completely uninteroperable use of the whole of
OpenGL but it is entirely unsafe, e.g. you can screw up the rendering
context for the next Window and cause widespread corruption.

GUI libraries should be interoperable so that many programming languages can
reap the benefits of a single mature GUI framework. Qt is written in a
proprietary dialect of C++ which is extremely difficult to interoperate
with. This essentially requires some kind of common language run-time but,
of course, Linux doesn't have any that support basic features like TCO.

GUI libraries should be written in high-level languages. The idea of trying
to write a decent framework without garbage collection is now laughable.

Finally, writing a GUI library is actually comparatively easy. The hard part
is building the foundation upon which everything is built, i.e. the safe
interoperable VM. Linux desperately needs such a thing but, so far, we're
stuck with the JVM and Mono.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Frank Buss
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <mpg8hacvgqv4$.197u5r9jukqhl$.dlg@40tude.net>
Jon Harrop wrote:

> They could barely be any worse. Even their own demo programs segfault
> because the foundations they are built upon are rotten to the core.

Are we talking about the same library? Qt is a mature library: KDE and lots
of programs are based on it, and QTopia and QTE run on lots of embedded
devices. I've never seen a segfault in one of the Qt demo programs. I've
tried version 4.4 on an x86 Linux box and the QTE version on an ARM embedded
device, for which I developed my own graphics driver that translates the Qt
calls into accelerated calls to an attached graphics chip. Qt has a nice
architecture that makes such things very easy.

> Look at the use of first-class lexical closures in WPF, for example. Suffice
> to say, C++ is nowhere near competitive in this respect.

Yes, this could be a nice feature.

> Nowhere near. WPF's GUI editor generates high-level safe interoperable code
> than runs on .NET. I can call it directly from F# to generate binary
> executables that millions of people can run directly.
> 
> Qt is a generation behind, generating code in a proprietary dialect of C++
> that could barely be less safe or interoperable. 

This is only half of the truth: you can generate C++ code, but for a
commercial project, for example, I save the GUI forms as XML, which is
loaded by JavaScript, which is integrated in Qt and has access to all GUI
elements. The GC and the prototype-based programming style of JavaScript
are very nice for GUI programming; e.g. you can use anonymous functions as
button handlers with a very simple syntax.

> I cannot even use Qt
> directly from OCaml and, if I could, any compiled binaries would be brittle
> with respect to the exact versions on all libraries on my machine so they
> would segfault on the vast majority of other machines. So the only
> practical option is to distribute source code and ask users to recompile it
> or distro maintainers to create custom packages for it. That throws up
> barriers to commercial software on Linux.

This is wrong. There are many ways to use Qt. On Linux there is no problem
with different versions of libraries, because it has decent version
management for shared libraries and you link your programs against the
right version, which you can ship with your application if it is not
already installed on the system. To avoid any problems, you can even link
it statically into your application.

I've just installed
http://get.qtsoftware.com/qtsdk/qt-sdk-win-opensource-2009.03.exe
This release even includes the compiler (based on MinGW) and, with Qt
Creator, an IDE, so you don't need any other program to start programming.
There is a demo which just works, with no "brittle" problems at all.

> And I suppose I would be expected to go back to battling with bugs in the
> g++ toolchain? No thanks. I dropped that crappy language and implementation
> years ago for good reasons and have no intention of going backwards.

Does WPF even exist for Linux? On Windows, Qt works with Visual Studio,
whose C++ compiler is not perfect, but still of good quality.

>> wxWidgets has some nice GUI editors, too and it uses the native controls
>> of the operating system, so I assume they are hardware accelerated, but 
>> why is this important for a GUI, if you just want to display some text and
>> edit fields and not realtime 3D animation?
> 
> Are you seriously asking why someone developing a *graphical* user interface
> might be interested in decent graphics?

I was thinking of warehouse applications, databases, etc., but you are
right that you need accelerated graphics for multimedia applications and
the like. Thanks to Trolltech for supporting it :-)

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <Ip6dnSoaT-NeQsrXnZ2dnUVZ8vCdnZ2d@brightview.co.uk>
Frank Buss wrote:
> KDE and lots of programs are based on it,

We experience dozens of segfaults from KDE apps every day on all of our
machines running KDE, because they are written in C++, because they are
built upon Qt. That is a huge mistake.

>> Qt is a generation behind, generating code in a proprietary dialect of
>> C++ that could barely be less safe or interoperable.
> 
> This is only half of the truth: You can generate C++ code, but for a
> commercial project e.g. I save the GUI forms as XML, which are loaded by
> JavaScript, which is integrated in Qt and which has access to all GUI
> elements. The GC and prototype based programming style of JavaScript is
> very nice for GUI programming, e.g. you can use anonymous functions as
> button handlers with a very simple syntax.

What KDE applications are written in Javascript?

>> I cannot even use Qt
>> directly from OCaml and, if I could, any compiled binaries would be
>> brittle with respect to the exact versions on all libraries on my machine
>> so they would segfault on the vast majority of other machines. So the
>> only practical option is to distribute source code and ask users to
>> recompile it or distro maintainers to create custom packages for it. That
>> throws up barriers to commercial software on Linux.
> 
> This is wrong. There are many way how to use Qt: On Linux there is no 
> problem with different versions of libraries, because it has a descent
> version managment for shared libraries and you link your programs to the
> right version, which you can deliver with your application, if not already
> installed on the system.

No, Linux will happily dynamically load a binary incompatible library and
then segfault.

> To avoid any problems, you can even link it statically to your
> application. 

No, you can link in a ton of useless junk that you don't need, bloat your
downloadable executables by a factor of 1,000 to drive customers away, and
spend all of your remaining profits on bandwidth, but you'll always be left
with some dynamic linking.

Have you even looked at what you're pulling in:

$ ldd /usr/lib/libqt-mt.so.3.3.8
        linux-gate.so.1 =>  (0xffffe000)
        libfontconfig.so.1 => /usr/lib/libfontconfig.so.1 (0xf7808000)
        libaudio.so.2 => /usr/lib/libaudio.so.2 (0xf77f2000)
        libXt.so.6 => /usr/lib/libXt.so.6 (0xf77a1000)
        libjpeg.so.62 => /usr/lib/libjpeg.so.62 (0xf7782000)
        libpng12.so.0 => /usr/lib/libpng12.so.0 (0xf775e000)
        libz.so.1 => /usr/lib/libz.so.1 (0xf7749000)
        libXi.so.6 => /usr/lib/libXi.so.6 (0xf7741000)
        libXrender.so.1 => /usr/lib/libXrender.so.1 (0xf7738000)
        libXrandr.so.2 => /usr/lib/libXrandr.so.2 (0xf7730000)
        libXcursor.so.1 => /usr/lib/libXcursor.so.1 (0xf7727000)
        libXinerama.so.1 => /usr/lib/libXinerama.so.1 (0xf7724000)
        libXft.so.2 => /usr/lib/libXft.so.2 (0xf7711000)
        libfreetype.so.6 => /usr/lib/libfreetype.so.6 (0xf769a000)
        libXext.so.6 => /usr/lib/libXext.so.6 (0xf768c000)
        libX11.so.6 => /usr/lib/libX11.so.6 (0xf756d000)
        libSM.so.6 => /usr/lib/libSM.so.6 (0xf7565000)
        libICE.so.6 => /usr/lib/libICE.so.6 (0xf754d000)
        libdl.so.2 => /lib/i686/cmov/libdl.so.2 (0xf7549000)
        libpthread.so.0 => /lib/i686/cmov/libpthread.so.0 (0xf7530000)
        libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0xf743f000)
        libm.so.6 => /lib/i686/cmov/libm.so.6 (0xf7418000)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xf73ec000)
        libc.so.6 => /lib/i686/cmov/libc.so.6 (0xf728c000)
        libexpat.so.1 => /usr/lib/libexpat.so.1 (0xf7266000)
        libXau.so.6 => /usr/lib/libXau.so.6 (0xf7263000)
        libXfixes.so.3 => /usr/lib/libXfixes.so.3 (0xf725d000)
        libxcb.so.1 => /usr/lib/libxcb.so.1 (0xf7244000)
        libuuid.so.1 => /lib/libuuid.so.1 (0xf7240000)
        /lib/ld-linux.so.2 (0x56555000)
        libXdmcp.so.6 => /usr/lib/libXdmcp.so.6 (0xf723b000)

Have you considered what statically linking X11 would do?

What are you going to suggest next, that I bundle an OS with my application?

> I've just installed
> http://get.qtsoftware.com/qtsdk/qt-sdk-win-opensource-2009.03.exe
> Included in this release was even the compiler (based on MinGW) and with
> Qt Creator an IDE you don't need any other program to start programming.
> There is a demo, which just works, no "brittle" problems at all.

One of the demos works on another OS. Great.

>> And I suppose I would be expected to go back to battling with bugs in the
>> g++ toolchain? No thanks. I dropped that crappy language and
>> implementation years ago for good reasons and have no intention of going
>> backwards.
> 
> Does WPF even exists for Linux?

No, Linux has nothing comparable. Linux does not even have a VM or graphics
libraries capable of hosting WPF reliably. That was my point.

> On Windows Qt works with Visual Studio, which has not a perfect C++
> compiler, but still good quality.

So I can go back to a dying language *and* benefit from yesteryear's GUIs?
Yipee!

>>> wxWidgets has some nice GUI editors, too and it uses the native controls
>>> of the operating system, so I assume they are hardware accelerated, but
>>> why is this important for a GUI, if you just want to display some text
>>> and edit fields and not realtime 3D animation?
>> 
>> Are you seriously asking why someone developing a *graphical* user
>> interface might be interested in decent graphics?
> 
> I was thinking of warehouse applications, databases etc., but you are
> right that you need accelerated graphics for multimedia applications etc.
> Thanks Trolltech that they support it :-)

In a nice unsafe unintegrated unreliable out-of-date kind of way.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Frank Buss
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <1uylpyp2qgvzx$.1ja7k34io0zom.dlg@40tude.net>
If you don't want to install Qt, I've found a video channel with some
demos:

http://www.youtube.com/user/QtStudios

I didn't know that they pronounce it "cute" :-)


Jon Harrop wrote:

> We experience dozens of segfaults from KDE apps every day on all of our
> machines running KDE because they are written in C++ because they are
> building upon Qt. That is a huge mistake.

You are right, C++ is not the best language for building higher level
software.

> What KDE applications are written in Javascript?

I don't know, maybe most are written in C++. At least there are some
projects for providing Qt and KDE bindings for scripting languages,
including e.g. C# :

http://techbase.kde.org/Development/Languages/Smoke

> No, Linux will happily dynamically load a binary incompatible library and
> then segfault.

This depends on how it is configured. In Linux there is usually a library
called foo.so.3.4.5 and then symlinks like foo.so.3.4 and foo.so.3, and
maybe even foo.so.2 pointing at foo.so.3.4.5, if it is compatible. But you
don't have to create these links if it is not compatible, so chances are
very good that it won't load an incompatible library.
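The symlink chain can be sketched like this (a minimal sketch; "libfoo"
and the version numbers are made up, following the usual ldconfig/packaging
conventions):

```shell
# Hypothetical soname symlink chain for a library "libfoo".
# The real file carries the full version; everything else is a symlink.
DEMO=/tmp/libdemo
mkdir -p "$DEMO"
touch "$DEMO/libfoo.so.3.4.5"                # the actual library file
ln -sf libfoo.so.3.4.5 "$DEMO/libfoo.so.3"   # soname link: the runtime loader resolves this
ln -sf libfoo.so.3 "$DEMO/libfoo.so"         # dev link: the compile-time linker uses this
ls -l "$DEMO"/libfoo.so*
```

An incompatible libfoo.so.4.x would install alongside with its own
libfoo.so.4 link; as long as nothing repoints libfoo.so.3, programs linked
against soname 3 keep loading the old, compatible file.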

> No, you can link it a ton of useless junk that you don't need, bloat your
> downloadable executables by a factor of a 1,000 to drive customers away and
> spend all of your remaining profits on bandwidth but you'll always be left
> with some dynamic linking.
> 
> Have you even looked at what you're pulling in:
> 
> $ ldd /usr/lib/libqt-mt.so.3.3.8

[lots of standard libraries]

> Have you considered what statically linking X11 would do?

Of course, I don't want to link statically to libc6 or X11 :-) Only the Qt
libraries; the rest is standard on every modern Linux system. But in general
this is a problem. Unlike Windows, sometimes there are major incompatible
updates in Linux, e.g. if you have linked against libc5, it doesn't work
with libc6. But Microsoft has caught up in this direction: some XP
applications don't work anymore in Vista :-)

For a typical application you have an overhead of about 12 MB for Qt
(measured as the unpacked size of the DLLs on Windows). This is big, but
small for modern applications. It means the user has to wait about a minute
to download the application on a modern internet connection. Bandwidth is
no problem these days: e.g. in Germany there are hosting services with
unlimited traffic for 80 Euro per month, or you can distribute via
BitTorrent or SourceForge for open source applications.

But you are right, this is not perfect. Please suggest to Microsoft that
they bundle the Qt DLLs with Windows.

> No, Linux has nothing comparable. Linux does not even have a VM or graphics
> libraries capable of hosting WPF reliably. That was my point.

Linux has many VMs, maybe you know this one:

http://www.mono-project.com

Looks like at least WinForms applications work with it. It should not be
too difficult to extend it for WPF.

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Frank Buss
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <1fvq0vbzt4fz9$.12zxt3y9vysfw$.dlg@40tude.net>
Jon Harrop wrote:

> Syntax: Indentation-sensitive syntax as in Python, Haskell and F# was a huge
> design mistake because cut'n'paste from a browser can break code
> semantically by altering the indentation, and it prohibits autoindentation
> leaving you to reindent huge swaths of code by hand. Lisps got it wrong by
> oversimplifying the syntax, making math particularly cumbersome. OCaml got
> it wrong mainly by being too complex (e.g. a dozen different kinds of
> brackets instead of consistent brackets and identifiers to distinguish
> between lists, arrays, streams etc.) and by using unconventional operators
> (e.g. ** for power and ^ for string concatenation).

I've written some code in Python some time ago, e.g. this one:

http://www.frank-buss.de/shapeways/dome.py

It can be executed in Blender to create platonic solids and geodesic domes:

http://www.shapeways.com/model/28414/platonic_solids.html
http://www.shapeways.com/model/28981/geodesic_dome.html

Python is not that bad, and I like the indentation, because you see
immediately how the code is meant. There is no wrong indentation, as is
possible in other languages, because then your code doesn't work. Fix the
web pages from which you want to paste, not the language :-)
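What "wrong indentation doesn't work" means can be seen in a tiny sketch
(mine, not from the post above; the function names are made up): the
dedent alone decides whether a statement belongs to the loop or follows it.

```python
# Indentation alone decides the meaning: the `return` inside the loop
# exits on the first element; dedented, it runs only after the loop ends.
def first(xs):
    for x in xs:
        return x          # indented: inside the loop body

def last(xs):
    result = None
    for x in xs:
        result = x
    return result         # dedented: after the loop

print(first([1, 2, 3]))   # -> 1
print(last([1, 2, 3]))    # -> 3
```

So misleadingly indented code in Python is not merely ugly; it is a
different program, which is exactly the point.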

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Nicolas Neuss
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <874otomr04.fsf@ma-patru.mathematik.uni-karlsruhe.de>
ACL <··················@gmail.com> writes:

> Agreed, no money in writing books for CL when there are already so
> many good ones that are in the public domain or otherwise already
> written.

However, one book that I eagerly await is Conrad Barski's "Land of Lisp"
which seems to linger in some limbo state.  I think I'll preorder it in the
hope that this will speed things up a bit.

Nicolas

[To ACL: Sorry for replying by email first.]
From: Slobodan Blazeski
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <3857df0c-cb4f-4b3c-845a-7df9fe907ce1@b15g2000yqd.googlegroups.com>
On Jul 7, 5:50 pm, Nicolas Neuss <········@math.uni-karlsruhe.de>
wrote:
> ACL <··················@gmail.com> writes:
> > Agreed, no money in writing books for CL when there are already so
> > many good ones that are in the public domain or otherwise already
> > written.
>
> However, one book that I eagerly await is Conrad Barski's "Land of Lisp"
> which seems to linger in some limbo state.  I think I'll preorder it in the
> hope that this will speed things up a bit.
>
> Nicolas
>
> [To ACL: Sorry for replying by email first.]

Don't forget Nick Levine's Lisp outside the Box  http://lisp-book.org/

Slobodan
http://www.linkedin.com/in/slobodanblazeski
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <TOednVt6wq5HjMjXnZ2dnUVZ8iOdnZ2d@brightview.co.uk>
Nicolas Neuss wrote:
> ACL <··················@gmail.com> writes:
>> Agreed, no money in writing books for CL when there are already so
>> many good ones that are in the public domain or otherwise already
>> written.
> 
> However, one book that I eagerly await is Conrad Barski's "Land of Lisp"
> which seems to linger in some limbo state.

Why do you eagerly await it?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Nicolas Neuss
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <87my7eqj9u.fsf@ma-patru.mathematik.uni-karlsruhe.de>
Jon Harrop <···@spammershome.com> writes:

> Nicolas Neuss wrote:

>> However, one book that I eagerly await is Conrad Barski's "Land of Lisp"
>> which seems to linger in some limbo state.
>
> Why do you eagerly await it?

Hmm, good question.  I think I like his Lisp alien, the "Lisp is different"
cartoon, and the macro short course on http://www.lisperati.com/casting.html
enough that I would like to buy the book and take a look at it.

Nicolas
From: neptundancer
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <5532830f-4d4a-4f3e-bb06-8646d54d1893@n21g2000vba.googlegroups.com>
On Jun 24, 4:18 am, Jon Harrop <····@ffconsultancy.com> wrote:
> ···@!!! wrote:
> > I'm not trying to start a flame war about which one is the best. Could
> > anybody explain me each of these languages features and strong points ?
>
> Common Lisp is strict evaluation, dynamic typing with macros (to rewrite
> code at compile-time) and run-time code evaluation. Was a pioneering
> functional language decades ago but has long since been superceded. Has no
> particular strengths today and, hence, is very rare. Main weaknesses are
> baggage, poor performance, bad implementations and a really backward
> community. The only notable development around Lisp for a decade is the new
> programming language Clojure that runs on the JVM. In particular, Clojure
> addressed many of Lisp's major problems by dropping the baggage, building
> upon a performant VM with a concurrent GC and stealing all of the
> intelligent members of the Lisp community.
>
> Haskell is non-strict evaluation and static typing. Is a research language
> used to implement many radical ideas that are unlikely to be of any
> immediate use. Main strength is that it abstracts the machine away
> entirely, allowing some solutions to be represented very concisely. Main
> weakness is that it abstracts the machine away entirely, rendering
> performance wildly unpredictable (will my elegant program terminate in my
> lifetime? who knows...).
>
> I know little about Prolog except that it was designed specifically for
> logic programming (i.e. solving problems by specifying relations and
> searching for solutions) and that some of our customers use it.
>
> --
> Dr Jon D Harrop, Flying Frog Consultancy Ltd.http://www.ffconsultancy.com/?u

I feel so sorry for you.
From: Xah Lee
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <60d9c518-4dff-4dac-982c-a5263d411da1@t11g2000prh.googlegroups.com>
On Jun 23, 7:18 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> ···@!!! wrote:
> > I'm not trying to start a flame war about which one is the best. Could
> > anybody explain me each of these languages features and strong points ?
>
> Common Lisp is strict evaluation, dynamic typing with macros (to rewrite
> code at compile-time) and run-time code evaluation. Was a pioneering
> functional language decades ago but has long since been superceded. Has no
> particular strengths today and, hence, is very rare. Main weaknesses are
> baggage, poor performance, bad implementations and a really backward
> community. The only notable development around Lisp for a decade is the new
> programming language Clojure that runs on the JVM. In particular, Clojure
> addressed many of Lisp's major problems by dropping the baggage, building
> upon a performant VM with a concurrent GC and stealing all of the
> intelligent members of the Lisp community.
>
> Haskell is non-strict evaluation and static typing. Is a research language
> used to implement many radical ideas that are unlikely to be of any
> immediate use. Main strength is that it abstracts the machine away
> entirely, allowing some solutions to be represented very concisely. Main
> weakness is that it abstracts the machine away entirely, rendering
> performance wildly unpredictable (will my elegant program terminate in my
> lifetime? who knows...).
>
> I know little about Prolog except that it was designed specifically for
> logic programming (i.e. solving problems by specifying relations and
> searching for solutions) and that some of our customers use it.

i heartily disagree.

In my opinion:

• lisp = useless old shit.

• haskell = useless academic shit.

• prolog = useless old and academic shit.

• Qi Lisp = Academically confined. Esoteric. Abstruse. Unreadable.
Doomed.

• Clojure lisp = of little future due to competition.

• Mathematica = Widely used. Wildly successful. Witness A New Kind of
Science & Wolfram Alpha.

• OCaml = Widely used. Wildly successful. Almost all formal proof
systems that are actively used are written in OCaml. And the Microsoft
giant's version of OCaml, F#, with .NET, is about to wipe the planet
clean of imperative monkeys.

References:

• Language, Purity, Cult, and Deception
  http://xahlee.org/UnixResource_dir/writ/lang_purity_cult_deception.html

• Proliferation of Computing Languages
  http://xahlee.org/UnixResource_dir/writ/new_langs.html

• What Languages to Hate
  http://xahlee.org/UnixResource_dir/writ/language_to_hate.html

• Xah's OCaml Tutorial
  http://xahlee.org/ocaml/ocaml.html

  Xah
∑ http://xahlee.org/

☄
From: Don Geddis
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <87y6qxzjwv.fsf@geddis.org>
Xah Lee <······@gmail.com> wrote on Thu, 9 Jul 2009 :
> In my opinion:
> lisp = useless old shit.

So, why do you post on this newsgroup, c.l.l?  It's clearly for people who
are interested in Lisp.  If you're not, why are you here?
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
Men occasionally stumble over the truth, but most of them pick themselves up
and hurry off as if nothing had happened.  -- Winston Churchill (1874-1965)
From: Xah Lee
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <2f397d70-06bb-47d4-9668-9528ba6731c9@r33g2000yqn.googlegroups.com>
Don Geddis wrote:
> So, why do you post on this newsgroup, c.l.l?  It's clearly for people who
> are interested in Lisp.  If you're not, why are you here?

See:

• Why do I Rant In comp.lang.lisp?
  http://xahlee.org/UnixResource_dir/writ/why_comp_lang_lisp.html

In the past, i wouldn't have answered your idiotic question, because
it is off topic and all. But now i do, taking it as a chance to speak
for myself. Why? See:

• How Shall I Respond?
  http://xahlee.org/Netiquette_dir/how_shall_i_respond.html

In retrospect of the article “Why do I Rant In comp.lang.lisp?”, i ask
myself whether i have achieved something about creating a book length
collection of my ideas and opinions. I must say yes i have. For
example, especially in the past few years, when i argue with the tech
geekers online, i can usually pull out a collection of some 5 or 10
essays of my opinions or tech expositions on some particular argued
topic. A recent example of this: in the debate about lisp1 vs lisp2
in this very thread, i pulled out 10 essays about jargons:

• The Importance of Terminology's Quality In Computer Languages
  http://xahlee.org/UnixResource_dir/writ/naming_functions.html

• Jargons of Info Tech Industry
  http://xahlee.org/UnixResource_dir/writ/jargons.html

• Why You should Not Use The Jargon Lisp1 and Lisp2
  http://xahlee.org/emacs/lisp1_vs_lisp2.html

• The Term Currying In Computer Science
  http://xahlee.org/UnixResource_dir/writ/currying.html

• What Is Closure In A Programing Language
  http://xahlee.org/UnixResource_dir/writ/closure.html

• What are OOP's Jargons and Complexities
  http://xahlee.org/Periodic_dosage_dir/t2/oop.html

• Interface in Java
  http://xahlee.org/java-a-day/interface.html

• Math Terminology and Naming of Things
  http://xahlee.org/cmaci/notation/math_namings.html

• Politics and the English Language
  http://xahlee.org/p/george_orwell_english.html

btw, your idiotic reply:
> So, why do you post on this newsgroup, c.l.l?  It's clearly for people who
> are interested in Lisp.  If you're not, why are you here?

is actually wrong, and has been a common response to me in the past 10
years. People say that to me on the Perl and Python groups as well. I
guess i could spend a few hours now and write another essay focused on
expounding this issue... and create an addition to my essay
collection. Not sure i feel like that now though. But to quip: why are
YOU not interested in lisp?? if not, why r u here?? arn't u trolling?

Thanks for your troll, cause, without trolls like u, i wouldn't have
the chance, the spur, to have created my essays in the past decade.

  Xah
∑ http://xahlee.org/

☄
From: Don Geddis
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <87vdlz9cib.fsf@geddis.org>
Xah Lee <······@gmail.com> wrote on Fri, 10 Jul 2009:
> i ask myself whether i have achieved something about creating a book length
> collection of my ideas and opinions. I must say yes i have.

You've got the length, that's for sure.  But usually a successful book is
about something more than just total word count.

I'd love to see you attempt to turn your collection into an actual published
book.  Most people here don't want to read your writings when you offer them
for free!  It would be amusing to see you attempt to get people to _pay_ for
the experience.

> But to quip, why are YOU not interested in lisp?? if not, why r u here?? 
> arn't u trolling?

But I _am_ interested in Lisp (unlike you).  That's (one of the very many)
differences between us.

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
Not what we have but what we enjoy, constitutes our abundance.
	-- Epicurus (Greek philosopher, BC 341-270)
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <NLCdnRrMdehcTcrXnZ2dnUVZ8qdi4p2d@brightview.co.uk>
Don Geddis wrote:
> Xah Lee <······@gmail.com> wrote on Thu, 9 Jul 2009 :
>> In my opinion:
>> lisp = useless old shit.
> 
> So, why do you post on this newsgroup, c.l.l?  It's clearly for people who
> are interested in Lisp.  If you're not, why are you here?

Is c.l.lisp only for pro-Lisp misinformation?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Don Geddis
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <87r5wn9c1t.fsf@geddis.org>
Jon Harrop <···@ffconsultancy.com> wrote on Sat, 11 Jul 2009:
> Don Geddis wrote:
>> Xah Lee <······@gmail.com> wrote on Thu, 9 Jul 2009 :
>>> In my opinion:
>>> lisp = useless old shit.
>> So, why do you post on this newsgroup, c.l.l?  It's clearly for people who
>> are interested in Lisp.  If you're not, why are you here?
> Is c.l.lisp only for pro-Lisp misinformation?

One would generally think it's for people that are at least invested in the
future of Lisp, if not the present.  Constructive criticism is fine.  Notice
that I said "interested in Lisp", not just "pro-Lisp" as you suggested.
Those phrases are not the same, and that distinction is critical.

If you think there's no value here, what are you achieving by entering this
community of people who _do_ think there is value, and constantly emitting
your boring refrain of, "there is no value here"?  You're just being
annoying, and making people angry to no constructive purpose.  Why bother?

Why not just go find a community of people who share the values that you
have, and be constructive with them?

The answer, of course, is obvious.  It's because adding negative energy to a
constructive conversation is actually the goal of such people.  They enjoy
seeing the existing community of well-intentioned people waste their time and
effort attempting to combat the troll.  There is no higher purpose to the
troll.  Being a troll is the whole point.

Most trolls gain at least some of their power, such as it is, by trying to
pretend that they're constructive members of the community.  I enjoy
occasionally pointing out the hypocrisy and deception in their writings.

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
If the bark is the skin of the tree, then what are the acorns?  You don't want
to know.  -- Deep Thoughts, by Jack Handey [1999]
From: Mark Tarver
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <b08feeb5-7dba-43b4-81fc-6932124e0a5b@l32g2000vba.googlegroups.com>
On 23 June, 23:23, ····@!!!" <·········@gmail.com> wrote:
> I'm not trying to start a flame war about which one is the best. Could
> anybody explain me each of these languages features and strong points ?

In Qi (www.lambdassociates.org) all these, in a certain way, live
harmoniously together.  You would probably answer your question by
experimenting with Qi.

Mark
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <-4CdnT2rKrKvEN_XnZ2dnUVZ8nNi4p2d@brightview.co.uk>
Mark Tarver wrote:
> On 23 June, 23:23, ····@!!!" <·········@gmail.com> wrote:
>> I'm not trying to start a flame war about which one is the best. Could
>> anybody explain me each of these languages features and strong points ?
> 
> In Qi (www.lambdassociates.org) all these, in a certain way, live
> harmoniously together.  You would probably answer your question by
> experimenting with Qi.

How is Qi like Haskell?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Mark Tarver
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <09860076-682f-480d-a28f-c013667c5f51@t21g2000yqi.googlegroups.com>
On 24 June, 22:40, Jon Harrop <····@ffconsultancy.com> wrote:
> Mark Tarver wrote:
> > On 23 June, 23:23, ····@!!!" <·········@gmail.com> wrote:
> >> I'm not trying to start a flame war about which one is the best. Could
> >> anybody explain me each of these languages features and strong points ?
>
> > In Qi (www.lambdassociates.org) all these, in a certain way, live
> > harmoniously together.  You would probably answer your question by
> > experimenting with Qi.
>
> How is Qi like Haskell?
>
> --
> Dr Jon D Harrop, Flying Frog Consultancy Ltd.http://www.ffconsultancy.com/?u

Qi is close enough to Haskell in enough salient ways for this guy to
get an answer to his question by experiment.  And he can mix and match
all these paradigms in one environment to get an answer he will find
convincing - which is probably the best way to learn.
From: Tamas K Papp
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <7ae67rF1ulfvuU1@mid.individual.net>
On Tue, 23 Jun 2009 15:23:09 -0700, ···@!!! wrote:

> I'm not trying to start a flame war about which one is the best. Could
> anybody explain me each of these languages features and strong points ?

Your question is still too specific.  Please try to be more general.
For example,

"Which one of Haskell, Prolog or CL is the best?"

gets a bonus mark for giving a vague criterion ("best") and no
context, but still constrains the answer to one of these three
languages.  It is better to ask

"Which is the best computer language?"

But we are still not there: your domain of meaningless optimization is
still too restricted.  Instead, ask:

"What is the best?"

Now the final touch: drop the criterion.  Ask it this way:

"What?"

HTH,

Tamas
From: Christoph Senjak
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <85ef26d1-5aa9-41b4-a04a-230d9ed5d24e@f16g2000vbf.googlegroups.com>
On Jun 24, 12:23 am, ····@!!!" <·········@gmail.com> wrote:
> I'm not trying to start a flame war about which one is the best. Could
> anybody explain me each of these languages features and strong points ?

Haskell is purely functional, very "formal", etc.; Common Lisp has a
mighty object system and you can fit it to your needs in almost any
aspect; Prolog is a logic programming language. Comparing them is
like comparing mustard, apples and coffee.

Ask 10 people and you will get 11 opinions on which one is the best and
why. As this is a Common Lisp usenet group, it is to be expected that
most people here prefer Common Lisp, for whatever reason.
From: ACL
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <edaa7aa0-29c8-406f-9fdf-0ee2fa057d20@z34g2000vbl.googlegroups.com>
On Jun 23, 6:23 pm, ····@!!!" <·········@gmail.com> wrote:
> I'm not trying to start a flame war about which one is the best. Could
> anybody explain me each of these languages features and strong points ?

The best way is to try each and read a lot about them, as everyone who
responds will have a favorite language they are touting... Also,
certain languages will have varying amounts of power depending on the
person that you ask. Some people love Common Lisp and will tell you
all about how great it is; others get a headache from the parens and
haven't even written a line of code in it (but will tell you how
awful it is anyway...).

A better thing to ask each language community: 'what are your strong
points?' and 'what are your weak points?'

CL,
Pros:
+ old stable language,
+ multi-paradigm. (OO, functional, imperative, all in one).
+ macros
+ prototyping language (you can find the best algorithm fastest).
+ dynamic
+ faster than other dynamic languages (ruby/python) generally.

Cons:
- Not likely to change/improve
- Giving multi-paradigm language with macros to a bad programmer is
like giving a light sabre to a non-jedi, more likely to cut his arm
off and slice a hole in the side of the Millennium Falcon than save
Leia. (There is an amount of 'taste' involved, and not everyone has
it, even if they believe they do).
- Package system is crappy. (The package system is the real reason
people say 'there are no libraries for Common Lisp'. There are plenty
of libraries for Common Lisp; the problem is that they are a pain in
the ass to find/maintain and get working properly with other packages,
so you end up writing your own).
- Not quite as fast as statically typed Java/C most of the time

Neutrals:
+- If the library genuinely doesn't exist, you'll have a fun time
creating it.
   (seriously I enjoy writing lisp code and would probably do it even
if I weren't paid... just don't tell my boss).
+- Through macros you get an entirely new way of development through
DSLs and bottom up programming.
+- Community has some of the most well established and talented trolls
around.
+- A few months in you'll get this completely mind blowing experience
of 'getting it'.
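To make the macro point above concrete, here is a minimal sketch (mine,
not part of the original post; `repeat` is a made-up example, and any
conforming Common Lisp should accept it). A macro receives the caller's
code as data and rewrites it before the compiler ever sees it, which is
the basis of the DSL / bottom-up style mentioned above:

```lisp
;; A minimal macro: REPEAT rewrites the caller's code into a DOTIMES
;; at macro-expansion time, something no plain function can do.
(defmacro repeat (n &body body)
  (let ((counter (gensym)))           ; fresh symbol, avoids variable capture
    `(dotimes (,counter ,n) ,@body)))

;; (repeat 3 (write-line "hello"))
;; expands into something like: (DOTIMES (#:G42 3) (WRITE-LINE "hello"))
(repeat 3 (write-line "hello"))
```

The `gensym` is the "taste" part: forget it and the expansion can
silently capture a variable in the caller's code, which is exactly the
light-sabre hazard described above.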
From: Pillsy
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <95e63329-8c7c-41ff-b4b6-8c17796ecba3@s16g2000vbp.googlegroups.com>
On Jun 23, 6:23 pm, ····@!!!" <·········@gmail.com> wrote:
> I'm not trying to start a flame war about which one is the best. Could
> anybody explain me each of these languages features and strong points ?

Common Lisp is so much more useful than either Prolog or Haskell that
it isn't even funny.

Why, if I want to do something in Common Lisp, I can open up a
terminal window, type "clisp", open another terminal window and type
"vim"[1] and start putting together something useful out of the
standard functions and a few essential libraries, all of which I'm
pretty familiar with.

OTOH, if I wanted to do something in Prolog, I'd have to download a
Prolog implementation, install it, read a book or two on Prolog,
probably do a bunch of exercises and little warm-up projects, and then
maybe I'd be able to put together something useful. Haskell would be
the same.

So, yeah, Common Lisp: way better than those other languages you're
talking about.

Cheers,
Pillsy

[1] That's on my work computer; at home I use SLIME + SBCL and things
are even more pleasant.
From: ···@!!!
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <4a941947-e69e-4032-9e26-72a05fa8c572@o36g2000vbi.googlegroups.com>
On Jun 25, 3:53 pm, Pillsy <·········@gmail.com> wrote:
> On Jun 23, 6:23 pm, ····@!!!" <·········@gmail.com> wrote:
>
> > I'm not trying to start a flame war about which one is the best. Could
> > anybody explain me each of these languages features and strong points ?
>
> Common Lisp is so much more useful than either Prolog or Haskell that
> it isn't even funny.
>
> Why, if I want to do something in Common Lisp, I can open up a
> terminal window, type "clisp", open another terminal window and type
> "vim"[1] and start putting together something useful out of the
> standard functions and a few essential libraries, all of which I'm
> pretty familiar with.
>
> OTOH, if I wanted to do something in Prolog, I'd have to download a
> Prolog implementation, install it, read a book or two on Prolog,
> probably do a bunch of exercises and little warm-up projects, and then
> maybe I'd be able to put together something useful. Haskell would be
> the same.
>
> So, yeah, Common Lisp: way better than those other languages you're
> talking about.
>
> Cheers, 0
> Pillsy
>
> [1] That's on my work computer; at home I use SLIME + SBCL and things
> are even more pleasant.

Yeah, somehow I feel the same about CL. But there's something about
Haskell that attracts me and makes me believe that maybe there'll be
something promising.

I use the same setup SLIME + SBCL and I'm really starting to love
Emacs.
From: chthon
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <fecc6ab5-d379-4bf6-b389-deac15327e34@l31g2000yqb.googlegroups.com>
On Jun 25, 4:52 pm, ····@!!!" <·········@gmail.com> wrote:
> On Jun 25, 3:53 pm, Pillsy <·········@gmail.com> wrote:
>
>
>
> > On Jun 23, 6:23 pm, ····@!!!" <·········@gmail.com> wrote:
>
> > > I'm not trying to start a flame war about which one is the best. Could
> > > anybody explain me each of these languages features and strong points ?
>
> > Common Lisp is so much more useful than either Prolog or Haskell that
> > it isn't even funny.
>
> > Why, if I want to do something in Common Lisp, I can open up a
> > terminal window, type "clisp", open another terminal window and type
> > "vim"[1] and start putting together something useful out of the
> > standard functions and a few essential libraries, all of which I'm
> > pretty familiar with.
>
> > OTOH, if I wanted to do something in Prolog, I'd have to download a
> > Prolog implementation, install it, read a book or two on Prolog,
> > probably do a bunch of exercises and little warm-up projects, and then
> > maybe I'd be able to put together something useful. Haskell would be
> > the same.
>
> > So, yeah, Common Lisp: way better than those other languages you're
> > talking about.
>
> > Cheers, 0
> > Pillsy
>
> > [1] That's on my work computer; at home I use SLIME + SBCL and things
> > are even more pleasant.
>
> Yeah, somehow I feel the same about CL. But there's something with
> Haskell that atracts me and makes me believe that maybe there'll be
> something promising.
>
> I use the same setup SLIME + SBCL and I'm really starting to love
> Emacs.

There used to be 'Two Dozen Short Lessons in Haskell' on the Internet,
but now it has become a book and you cannot find it anymore. I do have
an old version though, if you are interested.

Regards,

Jurgen
From: ACL
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <db3011a0-091e-4664-824d-eb36ecca5f0f@g20g2000vba.googlegroups.com>
On Jun 25, 9:53 am, Pillsy <·········@gmail.com> wrote:
> On Jun 23, 6:23 pm, ····@!!!" <·········@gmail.com> wrote:
>
> > I'm not trying to start a flame war about which one is the best. Could
> > anybody explain me each of these languages features and strong points ?
>
> Common Lisp is so much more useful than either Prolog or Haskell that
> it isn't even funny.
>
> Why, if I want to do something in Common Lisp, I can open up a
> terminal window, type "clisp", open another terminal window and type
> "vim"[1] and start putting together something useful out of the
> standard functions and a few essential libraries, all of which I'm
> pretty familiar with.
>
> OTOH, if I wanted to do something in Prolog, I'd have to download a
> Prolog implementation, install it, read a book or two on Prolog,
> probably do a bunch of exercises and little warm-up projects, and then
> maybe I'd be able to put together something useful. Haskell would be
> the same.
>
> So, yeah, Common Lisp: way better than those other languages you're
> talking about.
>
> Cheers, 0
> Pillsy
>
> [1] That's on my work computer; at home I use SLIME + SBCL and things
> are even more pleasant.

"Common Lisp: I know it and it's already installed, therefore it's
better than everything else!"

:-P

(Not that I don't agree with you) :-)
From: A.L.
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <0gr065tklbstcshqgspsld58c3jii5eva6@4ax.com>
On Thu, 25 Jun 2009 06:53:41 -0700 (PDT), Pillsy <·········@gmail.com>
wrote:

>On Jun 23, 6:23�pm, ····@!!!" <·········@gmail.com> wrote:
>> I'm not trying to start a flame war about which one is the best. Could
>> anybody explain me each of these languages features and strong points ?
>
>Common Lisp is so much more useful than either Prolog or Haskell that
>it isn't even funny.
>

Then, why Lisp is dead?...

A.L.
From: Pillsy
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <f89a0772-9b73-4db1-907d-fc5e49e8c204@h18g2000yqj.googlegroups.com>
On Jul 17, 8:27 am, A.L. <········@aol.com> wrote:

> On Thu, 25 Jun 2009 06:53:41 -0700 (PDT), Pillsy <·········@gmail.com>
> wrote:

> >On Jun 23, 6:23 pm, ····@!!!" <·········@gmail.com> wrote:

> >> I'm not trying to start a flame war about which one is the best. Could
> >> anybody explain me each of these languages features and strong points ?

> >Common Lisp is so much more useful than either Prolog or Haskell that
> >it isn't even funny.

> Then, why Lisp is dead?...

I blame the breakdown in the educational system that left you unable
to read for comprehension.

Later,
Pillsy
From: Jon Harrop
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <X8qdnehvurZV8_3XnZ2dnUVZ8rxi4p2d@brightview.co.uk>
A.L. wrote:
> Then, why Lisp is dead?...

Lisp isn't dead. It's undead!

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
From: Friedrich Dominicus
Subject: Re: LISP vs HASKELL vs PROLOG
Date: 
Message-ID: <87fxd5kmxw.fsf@q-software-solutions.de>
····@!!!" <·········@gmail.com> writes:

> I'm not trying to start a flame war about which one is the best. Could
> anybody explain me each of these languages features and strong
 > points ?
Then the question should not be Lisp vs Haskell vs Prolog, but
Lisp and Haskell and Prolog: which to use when?



-- 
Please remove just-for-news- to reply via e-mail.