From: Jon Harrop
Subject: New Computer Language Shootout?
Date: 
Message-ID: <469972a3$0$1634$ed2619ec@ptn-nntp-reader02.plus.net>
There seems to be a great deal of interest from the functional programming
community in benchmarking. Is there enough interest to create a new
computer language shootout that showcases more relevant/suitable tasks for
FP?

I think the ray tracer and symbolic simplifier are very valuable benchmarks
but I would also like to see regexps, program evaluators (rewriters,
interpreters and compilers) and other benchmarks such as the "n"th-nearest
neighbour example from my book:

  http://www.ffconsultancy.com/products/ocaml_for_scientists/complete.html

Perhaps even collating example programs where performance is of little
relevance, such as GUI Sudoku solvers, Wade's Game of Concentration, OpenGL
examples, web programming and so on.

I think such a resource would be both valuable and interesting to a wide
variety of readers but, most of all, I see it as an opportunity to
demonstrate applications where functional programming shines.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
The OCaml Journal
http://www.ffconsultancy.com/products/ocaml_journal/?usenet

From: Ken Tilton
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <uqgmi.1578$Kf1.175@newsfe12.lga>
Jon Harrop wrote:
> There seems to be a great deal of interest from the functional programming
> community in benchmarking. Is there enough interest to create a new
> computer language shootout that showcases more relevant/suitable tasks for
> FP?

Nope.

hth,kt
From: Kent M Pitman
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <ud4yub34y.fsf@nhplace.com>
Ken Tilton <···········@optonline.net> writes:

> Jon Harrop wrote:
> > There seems to be a great deal of interest from the functional programming
> > community in benchmarking. Is there enough interest to create a new
> > computer language shootout that showcases more relevant/suitable tasks for
> > FP?
> 
> Nope.

Not me either.

Personally, I don't really classify CL as an FP language, even though
we have some operators that are related to those used in FP languages.
Those operators are in some ways specifically designed to not work
like other languages--being a Lisp2 is a common example.  Another is
the deliberate lack of functions to combine other functions [I proposed
CONJOIN and DISJOIN to take predicate functions and return functions
that did the AND and OR of their effects, respectively, and it was
rejected as something our users didn't want].  The absence of tail
call optimization is another example, encouraging people to write
loops rather than recursions--that's probably more practically
profound than the silly syntax debate over Lisp1/Lisp2.  One can go on.

More often than not, I have personally found that people who refer to
CL as an FP language end up confusing themselves by thinking CL is
secretly Scheme with bad syntax.  That usually holds them back rather
than helping them.  CL certainly permits a lot of programming using
functions, but "functional programming" has taken on a kind of
political correctness that transcends the sum of the individual
meanings of those terms.  The expression part of the CL language isn't
really designed for that kind of narrow understanding of what it is to
program with functions.

Incidentally, following on another item from upthread, I also think
that the "symbolic processing" part of CL isn't really an FP task, so
it's an especially weird thing to be benchmarking.  IMO, symbols in
the sense CL means them are really not used like they are in other
languages.  That is, they are places to hang semantics out for view,
more like URLs, where it's just plain ridiculous on the web to talk
about a URL being private... and the same is true of CL's symbols.  In
my experience, FP languages use symbols in a more "pure" way like
Scheme does, with no real attached semantics and discarding the sense
in which CL allows you to use symbols as a way of gaining a debugging
foothold.  The uses of symbols in CL are subtle and take
a while to see, but the ability to do things like CLOS/MOP
introspection by just giving some symbol names (no need to name a
lexical environment) is an example.  In other languages, symbols name
variables that are still not intelligible without access to a lexical
environment--in short, CL isn't really working hard to implement the
kind of strict encapsulation that other languages have, and yet it
achieves a level of comfort that makes it easy to program with
reasonable confidence that you won't clobber others' programs and yet
reasonable confidence that you won't find yourself locked out without a
key either.
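A small sketch of the symbol-driven introspection meant here, using only standard CL operators (the class and slot names are illustrative):

```lisp
;; Introspection driven by nothing but symbols -- no lexical
;; environment in hand.
(defclass point ()
  ((x :initarg :x)
   (y :initarg :y)))

;; The symbol POINT alone recovers the class object, and a slot's
;; symbolic name alone reads the slot back:
(find-class 'point)                ; => the class object itself
(let ((p (make-instance 'point :x 1 :y 2)))
  (slot-value p 'x))               ; => 1
```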

I don't like big public discussions of benchmarks not because
benchmarks themselves aren't sometimes useful, but because I think
these things change like the wind and it's hard to get rid of invalid
information fast enough.  They are quite an opportunity to just whip
stale data into the wind and start a lot of viral misinformation
making the rounds.

Also, Lisp is EXPLICITLY designed to ALLOW the same language to behave
in different ways performance-wise, trading efficiency for other
properties in some implementations--like smallness, like the ability
to deliver compiled and interpreted implementations at different
dollar cost, etc.  Lisp also has a zillion more optimization settings
with widely differing meanings than other languages have; the notion
that a single optimization setting is "obviously the right one" is
very hard to establish.  The total number of optimization settings
under which you could test Lisp is huge, not even counting
opportunities to inline things, the possible use of compiler macros,
etc., and I doubt there's any commitment to actually allow, encourage,
or report on that huge number of possibilities.  The notion that there is
a canonical "fast setting" which is "good enough" seems doubtful in my
mind.  Certainly a simple (declare (optimize (speed 3))) isn't likely
to do it.
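As an illustration (a sketch only; neither profile is canonically "right"), the same function can be written under two very different declaration sets:

```lisp
;; Two of the many possible optimization profiles for the same task.
;; Neither is "the fast setting" for every implementation.
(defun dot-product-fast (a b)
  (declare (optimize (speed 3) (safety 0) (debug 0))
           (type (simple-array double-float (*)) a b))
  (let ((sum 0d0))
    (declare (type double-float sum))
    (dotimes (i (length a) sum)
      (incf sum (* (aref a i) (aref b i))))))

(defun dot-product-safe (a b)
  (declare (optimize (speed 1) (safety 3) (debug 3)))
  (reduce #'+ (map 'vector #'* a b)))
```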

The notion that by comparing the efficiency of a certain number of
specific function calls you're going to gain insight into whether Lisp
is a good language to use is a bit suspect.  When anyone designing
such tests has a profit motive in selling consulting in one of the
languages being tested, that seriously compromises the likelihood of a
useful outcome in my eyes.

I'd rather work with a language that lets me say things as I want to
say them, and work to speed up the things that seem too slow where
that turns out to matter, than begin by tying my hands and saying the
only expressional paradigms I may consider are those that are known on
the day I start programming to be a particular speed.

All just my opinion, of course.  Others may do as they please.  I
wanted to just say that my non-participation in this is not an
accident.  And I didn't want anyone who is mesmerized by benchmark
numbers to think there was no minority report on the significance
(or lack thereof) of such numbers.

I do think metering real programs that are about to be really used
is useful, btw.  I just don't think that's what this is.
From: Matthew D Swank
Subject: Common Lisp and FP
Date: 
Message-ID: <pan.2007.07.15.17.06.38.648677@c.net>
On Sun, 15 Jul 2007 02:19:09 -0400, Kent M Pitman wrote:

> More often than not, I have personally found that people who refer to
> CL as an FP language end up confusing themselves by thinking CL is
> secretly Scheme with bad syntax.  That usually holds them back rather
> than helping them.  CL certainly permits a lot of programming using
> functions, but "functional programming" has taken on a kind of
> political correctness that transcends the sum of the individual
> meanings of those terms.  The expression part of the CL language isn't
> really designed for that kind of narrow understanding of what it is to
> program with functions.

A lot of time and care has gone into pushing the limits of what is
practical to do in a (pure) functional language.  One of lisp's selling
points is paradigmatic adaptability.  I've always tried to use lisp in a
way that brings some of those interesting ideas into my standard set of
idioms.

However, you seem to imply that programmers who use Common Lisp
functionally will end up selecting different languages. If I like the kind
of structure and reasoning that FP supports, are you saying I should look
elsewhere?

Matt

-- 
"You do not really understand something unless you can
 explain it to your grandmother." — Albert Einstein.
From: Tim Bradshaw
Subject: Re: Common Lisp and FP
Date: 
Message-ID: <1184613670.850143.107940@w3g2000hsg.googlegroups.com>
On Jul 15, 6:06 pm, Matthew D Swank
>
> However, you seem to imply that programmers who use Common Lisp
> functionally will end up selecting different languages. If I like the kind
> of structure and reasoning that FP supports, are you saying I should look
> elsewhere?

If you do not value the vast amount of other stuff that CL has (which
entirely dwarfs any FP component of the language) then yes.  If you
*do* value it, then no.
From: Kent M Pitman
Subject: Re: Common Lisp and FP
Date: 
Message-ID: <uabtvd1vk.fsf@nhplace.com>
Tim Bradshaw <··········@tfeb.org> writes:

> On Jul 15, 6:06 pm, Matthew D Swank
> >
> > However, you seem to imply that programmers who use Common Lisp
> > functionally will end up selecting different languages. If I like
> > the kind of structure and reasoning that FP supports, are you saying
> > I should look elsewhere?
> 
> If you do not value the vast amount of other stuff that CL has (which
> entirely dwarfs any FP component of the language) then yes.  If you
> *do* value it, then no.

In my case, I was speaking mostly to people who are standing outside
the language and trying to figure out what to make of it.  If such a
person, who doesn't know CL, thinks that it is most easily approached
as an FP language, then I'm guessing that will lead
to confusion.  That has been my experience anyway.  Often such people are
happier with Scheme.

I didn't in any way suggest that anyone using the language should
leave it.  I'm not out to dissuade anyone who's happy with CL that 
they shouldn't be.  I was just explaining why some people are sometimes 
not happy with and/or just confused by it.

Mismatched expectations are often a source of discontent.
From: Matthew D Swank
Subject: Re: Common Lisp and FP
Date: 
Message-ID: <pan.2007.07.18.01.31.44.999689@c.net>
On Tue, 17 Jul 2007 01:40:15 -0400, Kent M Pitman wrote:

> Tim Bradshaw <··········@tfeb.org> writes:
> 
>> On Jul 15, 6:06 pm, Matthew D Swank
>> >
>> > However, you seem to imply that programmers who use Common Lisp
>> > functionally will end up selecting different languages. If I like
>> > the kind of structure and reasoning that FP supports, are you saying
>> > I should look elsewhere?
>> 
>> If you do not value the vast amount of other stuff that CL has (which
>> entirely dwarfs any FP component of the language) then yes.  If you
>> *do* value it, then no.
> 
> In my case, I was speaking mostly to people who are standing outside
> the language and trying to figure out what to make of it.  If such a
> person, who doesn't know CL, thinks that it is most easily approached
> as an FP language, then I'm guessing that will lead
> to confusion.  That has been my experience anyway.  Often such people are
> happier with Scheme.
> 
> I didn't in any way suggest that anyone using the language should
> leave it.  I'm not out to dissuade anyone who's happy with CL that 
> they shouldn't be.  I was just explaining why some people are sometimes 
> not happy with and/or just confused by it.
> 
> Mismatched expectations are often a source of discontent.

Well, I count myself (on the whole) as pleased by Common Lisp.
However, it is so malleable that I sometimes forget there are some constraints
on what is practical.

Matt

-- 
"You do not really understand something unless you can
 explain it to your grandmother." — Albert Einstein.
From: Cesar Rabak
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <f7eg6v$3sa$1@aioe.org>
Kent M Pitman escreveu:
[snipped]

> The notion that by comparing the efficiency of a certain number of
> specific function calls you're going to gain insight into whether Lisp
> is a good language to use is a bit suspect.  When anyone designing
> such tests has a profit motive in selling consulting in one of the
> languages being tested, that seriously compromises the likelihood of a
> useful outcome in my eyes.

I think there is a sort of trade-off here. IIRC, some years ago a 
dispute between Lisp and C ended up with a commercial Lisp 
implementor finding an opportunity to improve the Lisp compiler.

[snipped]

> 
> I do think metering real programs that are about to be really used
> is useful, btw.  I just don't think that's what this is.

Most problems solvable with computer languages are of use to some kind 
of user. Some may be so specialized that we believe they have no 
practical value, while some other user may find them his daily bread and butter.

my .01999....

--
Cesar Rabak
From: Mark Tarver
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <1184490429.506532.215460@q75g2000hsh.googlegroups.com>
On 15 Jul, 01:59, Jon Harrop <····@ffconsultancy.com> wrote:
> There seems to be a great deal of interest from the functional programming
> community in benchmarking. Is there enough interest to create a new
> computer language shootout that showcases more relevant/suitable tasks for
> FP?
>
> I think the ray tracer and symbolic simplifier are very valuable benchmarks
> but I would also like to see regexps, program evaluators (rewriters,
> interpreters and compilers) and other benchmarks such as the "n"th-nearest
> neighbour example from my book:
>
>  http://www.ffconsultancy.com/products/ocaml_for_scientists/complete.html
>
> Perhaps even collating example programs where performance is of little
> relevance, such as GUI Sudoku solvers, Wade's Game of Concentration, OpenGL
> examples, web programming and so on.
>
> I think such a resource would be both valuable and interesting to a wide
> variety of readers but, most of all, I see it as an opportunity to
> demonstrate applications where functional programming shines.
>
> --
> Dr Jon D Harrop, Flying Frog Consultancy
> The OCaml Journal
> http://www.ffconsultancy.com/products/ocaml_journal/?usenet

Benchmark programs for speed comparisons should be very simple or
trivial to write.  If you pose significant programming challenges in
your benchmarks then what you end up doing is comparing algorithms and
not languages.

Mark
From: Frank Buss
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <cvkyv42quq7q$.1o7ps7f7v210c$.dlg@40tude.net>
Mark Tarver wrote:

> Benchmark programs for speed comparisons should be very simple or
> trivial to write.  If you pose significant programming challenges in
> your benchmarks then what you end up doing is comparing algorithms and
> not languages.

First, you can't test languages at all, only implementations. And if the
benchmarks are too simple, you are testing only one aspect of the
implementation, e.g. floating point and boxing/unboxing performance. This
may be useful if you have different tests to compare different aspects.
But another idea would be one big task, like a ray tracer, and then writing
it multiple times with different algorithms and optimizations for each
language implementation, and comparing these to derive some conclusions
about which algorithms and optimizations work best for which language.

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Mark Tarver
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <1184502572.239998.138940@n60g2000hse.googlegroups.com>
On 15 Jul, 10:50, Frank Buss <····@frank-buss.de> wrote:
> And if the
> benchmarks are too simple, you are testing only one aspect of the
> implementation, e.g. floating point and boxing/unboxing performance.

No; the correct approach is to have a battery of simple benchmarks
aimed at different aspects of the language - symbol processing,
numerics etc.  If you have large complex problems then you end up
comparing programmers and not languages.

Mark
From: Richard M Kreuter
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <87tzs5isc9.fsf@tan-ru.localdomain>
Mark Tarver <··········@ukonline.co.uk> writes:
> On 15 Jul, 10:50, Frank Buss <····@frank-buss.de> wrote:

>> And if the benchmarks are too simple, you are testing only one
>> aspect of the implementation, e.g. floating point and
>> boxing/unboxing performance.
>
> No; the correct approach is to have a battery of simple benchmarks
> aimed at different aspects of the language - symbol processing,
> numerics etc.

I think you've missed Buss's point.  In Lisp, a function tested in
isolation may have to perform argument list checks, type checks,
boxing of values to be returned, which can dominate the computation
time for simple routines.  In realistic programs, inlining and other
mechanisms for skipping parts of the full function call protocol can
reduce or entirely remove that overhead.
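For instance (a sketch; the function names are illustrative), an inline declaration invites the compiler to skip exactly the call-protocol overhead that an isolated micro-benchmark would measure:

```lisp
;; Called in isolation through the full function-call protocol, a tiny
;; function like SQUARE mostly measures that protocol.  Declared
;; inline, its body can be spliced into callers and the overhead
;; dropped entirely.
(declaim (inline square))
(defun square (x)
  (* x x))

(defun sum-of-squares (v)
  ;; In a realistic caller, a compiler honoring the declaration
  ;; can inline SQUARE and skip per-call argument and type checks.
  (loop for x across v
        sum (square x)))
```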

> If you have large complex problems then you end up comparing
> programmers and not languages.

I'd say you end up comparing the ways that programmers and language
implementations combine to make programs, which might be the right
thing to compare, after all.

--
RmK
From: Mark Tarver
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <1184581180.559148.54520@n60g2000hse.googlegroups.com>
On 15 Jul, 16:45, Richard M Kreuter <·······@progn.net> wrote:
> Mark Tarver <··········@ukonline.co.uk> writes:
> > On 15 Jul, 10:50, Frank Buss <····@frank-buss.de> wrote:
> >> And if the benchmarks are too simple, you are testing only one
> >> aspect of the implementation, e.g. floating point and
> >> boxing/unboxing performance.
>
> > No; the correct approach is to have a battery of simple benchmarks
> > aimed at different aspects of the language - symbol processing,
> > numerics etc.
>
> I think you've missed Buss's point.  In Lisp, a function tested in
> isolation may have to perform argument list checks, type checks,
> boxing of values to be returned, which can dominate the computation
> time for simple routines.  In realistic programs, inlining and other
> mechanisms for skipping parts of the full function call protocol can
> reduce or entirely remove that overhead.
>
> > If you have large complex problems then you end up comparing
> > programmers and not languages.
>
> I'd say you end up comparing the ways that programmers and language
> implementations combine to make programs, which might be the right
> thing to compare, after all.
>
> --
> RmK

I understood him well enough, and it's wrong for the purpose intended
and for the rest quite impractical.

If you want to compare solutions then it's fine.  If you want to use
his method to compare performance w.r.t. languages, then it's not
fine.  Complex open-ended problems (like Sudoku) admitting different
algorithms for solution - offering many choices of implementation
along the way are *bad choices* for benchmarks.  You're supposed to be
benchmarking *languages* - not algorithms or programmers.

The point you are making is that a single benchmark function may
misrepresent the truth about a language because it will capitalise on
a few features.  That's right - a single benchmark is a single point
on the graph.  The trick is to accumulate many such points to get a
better picture.

Benchmarks are trivial and even boring to code but not trivial to
choose.  A good suite of benchmarks should systematically profile all
the key aspects of the language in isolation - numerics, garbage
collection, list handling etc.
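In that spirit, a minimal harness for such a battery might look like this (the names and the particular aspects chosen are illustrative only):

```lisp
;; A tiny battery of targeted micro-benchmarks, one aspect each,
;; timed with the standard internal run-time clock.
(defun bench (name thunk &optional (iterations 100000))
  (let ((start (get-internal-run-time)))
    (dotimes (i iterations)
      (funcall thunk))
    (format t "~A: ~,3F s~%" name
            (float (/ (- (get-internal-run-time) start)
                      internal-time-units-per-second)))))

;; Each entry isolates one vital sign of the implementation:
(bench "consing"  (lambda () (make-list 100)))
(bench "numerics" (lambda () (loop for i below 100 sum (* i i))))
(bench "hashing"  (lambda () (gethash 'foo (make-hash-table))))
```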

A good way to think of benchmarks is to compare them to the fitness
tests that sport doctors do of athletes.  Simple but scientific tests
aimed at profiling vital fitness signs like lung capacity or
heartbeat.  They're not exciting or challenging to watch or do -
nobody is going to pay to watch Beckham take a BP test; but that is
not the point of them.

Now if you want to follow Frank's method you'll probably have a lot
more fun and a more interesting discussion - but with no firm
conclusion.   The degree of organisation required to get many of the
best programmers to work together devoting many hours to this project
(and how do we decide who is 'best'?) is not realistic.  And many of
the questions (How easy is it to implement such a program?) are not
scientific questions (e.g. How easy is it to change a light bulb?
Answer: not hard if you know what you're doing).  You'll have a jolly
good chin wag and things will be learnt no doubt.  But it's not
benchmarking as I understand it.

That's why I recommend the Gabriel benchmarks to Jon as the obvious
route to choose if he wishes to prove his thesis.  The benchmarks are
famous, were largely designed on the philosophy I endorse here and
have a wealth of timing results attached to them.  Best of all, the Lisp
code is already there.  So rather than reinvent the wheel, why not
begin there?

Mark
From: Jon Harrop
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <469b4acd$0$1628$ed2619ec@ptn-nntp-reader02.plus.net>
Mark Tarver wrote:
> If you want to compare solutions then it's fine.  If you want to use
> his method to compare performance w.r.t. languages, then it's not
> fine.  Complex open-ended problems (like Sudoku) admitting different
> algorithms for solution - offering many choices of implementation
> along the way are *bad choices* for benchmarks.  You're supposed to be
> benchmarking *languages* - not algorithms or programmers.

I think the plot of verbosity vs performance for different ray tracers is
very enlightening. I would like to see similar things for Sudoku solvers
and so on.

> That's why I recommend the Gabriel benchmarks to Jon as the obvious
> route to choose if he wishes to prove his thesis.  The benchmarks are
> famous, were largely designed on the philosophy I endorse here and
> have a wealth of timing results attached to them.  Best of all, the Lisp
> code is already there.  So rather than reinvent the wheel, why not
> begin there?

They've already been done in all of these languages but I'm not terribly
impressed with the Gabriel benchmarks themselves. Time is distributed over
a tiny amount of code in each benchmark (that I've studied).

-- 
Dr Jon D Harrop, Flying Frog Consultancy
The OCaml Journal
http://www.ffconsultancy.com/products/ocaml_journal/?usenet
From: ···············@gmail.com
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <1184589341.516465.308710@k79g2000hse.googlegroups.com>
On 16 Jul, 11:33, Jon Harrop <····@ffconsultancy.com> wrote:
> Mark Tarver wrote:
> > Complex open-ended problems (like Sudoku) admitting different
> > algorithms for solution - offering many choices of implementation
> > along the way are *bad choices* for benchmarks.
>
> I think the plot of verbosity vs performance for different ray tracers is
> very enlightening. I would like to see similar things for Sudoku solvers
> and so on.

As someone who's interested in real programming problems (matching
derivative prices, compressing terabytes of trading data, serving web
pages, syntax highlighting program code, ...), when I see raw
benchmarks I find it helpful to know which are relevant to the sort of
programs I want to write. So in this respect I see a purpose in the
less granular approach. But obviously going down this route you risk
'benchmarks' that are only relevant to that *exact* problem, which is
clearly useless.

As with most programming problems, it seems the best approach is to
tackle the smaller problem as Mark is suggesting but then have some
way of tagging and grouping those results to give them meaning in real
world situations.

--
Phil
http://phil.nullable.eu/
From: Jon Harrop
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <46a30b7d$0$1635$ed2619ec@ptn-nntp-reader02.plus.net>
 ···············@gmail.com wrote:
> As with most programming problems, it seems the best approach is to
> tackle the smaller problem as Mark is suggesting but then have some
> way of tagging and grouping those results to give them meaning in real
> world situations.

I think that is extremely difficult to do accurately. For example, someone
asked how much time was spent in floating point and integer arithmetic in
my ray tracer. I would not have faith in any measurements because the two
are so entwined that pipelining would spoil the results if nothing else.

You can still draw a lot of interesting conclusions though. From the Sudoku
solver, I discovered that the OCaml compilers do not optimize integer mod
by a constant, which made the whole program several times slower.

From the ray tracer: SBCL has the performance characteristics of Java when
it comes to rapid allocation and collection of short-lived objects.

From the latest benchmark, the interpreter, I found it particularly
interesting that the lexing and parsing code is shorter in OCaml even though
the input program is an s-expr!

The fact that trivial interpreters can match the performance of much more
powerful optimizing interpreters like ocamlc and clisp when the target
language and program are trivial was also surprising to me, although it is
a lesson I learned a few years ago when benchmarking against ocamlc.

So I believe you can certainly draw some very useful conclusions from these
benchmarks.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
OCaml for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists/?usenet
From: Paul Wallich
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <f7g3b5$er0$1@reader2.panix.com>
Mark Tarver wrote:

> A good way to think of benchmarks is to compare them to the fitness
> tests that sport doctors do of athletes.  Simple but scientific tests
> aimed at profiling vital fitness signs like lung capacity or
> heartbeat.  They're not exciting or challenging to watch or do -
> nobody is going to pay to watch Beckham take a BP test; but that is
> not the point of them.

That's a very good way to think of the kinds of benchmarks you're 
talking about, but I don't think the logical conclusion is the one 
you're implying. No one is going to look at two sports teams or two 
individual athletes and say, "Well, A's oxygen-saturation and 
reaction-time numbers are all better than B's, so obviously A will beat 
B." They're going to looking instead at whether A beats B in actual 
contests, or how A and B perform relative to their competition in actual 
contests.

The simple tests can tell you where an athlete's strengths and 
weaknesses might be, but it's well understood that they don't tell the 
whole story. In the same way, simple benchmarks will produce answers you 
can measure easily, but the relationship between those answers and 
actual usefulness will be unclear. (In other fields of endeavor this is 
sometimes called "looking under the lamp post.")

paul
From: Christopher Browne
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <60myxwnxre.fsf@dba2.int.libertyrms.com>
Paul Wallich <··@panix.com> writes:
> Mark Tarver wrote:
>
>> A good way to think of benchmarks is to compare them to the fitness
>> tests that sport doctors do of athletes.  Simple but scientific tests
>> aimed at profiling vital fitness signs like lung capacity or
>> heartbeat.  They're not exciting or challenging to watch or do -
>> nobody is going to pay to watch Beckham take a BP test; but that is
>> not the point of them.
>
> That's a very good way to think of the kinds of benchmarks you're
> talking about, but I don't think the logical conclusion is the one
> you're implying. No one is going to look at two sports teams or two
> individual athletes and say, "Well, A's oxygen-saturation and
> reaction-time numbers are all better than B's, so obviously A will
> beat B." They're going to look instead at whether A beats B in
> actual contests, or how A and B perform relative to their competition
> in actual contests.
>
> The simple tests can tell you where an athlete's strengths and
> weaknesses might be, but it's well understood that they don't tell the
> whole story. In the same way, simple benchmarks will produce answers
> you can measure easily, but the relationship between those answers and
> actual usefulness will be unclear. (In other fields of endeavor this
> is sometimes called "looking under the lamp post.")

Well, then that indicates that there's more than one factor affecting
an athlete's performance.

- Physical condition is one.

- Technique is another; if one sprinter is controlling his legs to
  better effect than another, he may win, despite inferior condition.

- In team sports, there are "wisdom" effects which are related neither
  to condition nor to technique.  Passing at the right time to the
  right person is what gets a goal in football.

Each of these likely has its own subcriteria, so that by the time
you're done, you probably have a whole array of measurements to test,
where results fall out of the combination of them.

You almost certainly won't be able to predict outcomes based on one or
two of the factors, but the whole set of factors is likely to be
somewhat suggestive.  "World Cup" teams will probably have better
numbers, across the board, than teams that aren't in that league...
-- 
output = ("cbbrowne" ·@" "acm.org")
http://linuxdatabases.info/info/sgml.html
?OM ERROR
From: Frank Buss
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <1t8xzon46j0fp$.1rvmi4ubx6hmf$.dlg@40tude.net>
Mark Tarver wrote:

> No; the correct approach is to have a battery of simple benchmarks
> aimed at different aspects of the language - symbol processing,
> numerics etc.  If you have large complex problems then you end up
> comparing programmers and not languages.

This depends on what you want to measure. I agree that small benchmarks,
like matrix multiplication, are useful for comparing different aspects of a
language. But implementing a ray tracer could "benchmark" some more things:
How easy is it to implement such a program? Is it easy to add a GUI or some
network support for splitting parts of the rendering to a render farm? How
easy is it to change the program, if requirements change? And last but not
least: How fast is the ray tracer?

You are right that there is the danger of comparing programmers. To avoid
this, multiple programmers should work on the problem, if possible some of
the best for each language, or a group of programmers should discuss and
enhance the implementation, e.g. in this newsgroup for Lisp.

The process of implementing the solution should be monitored (blogs from
the programmers) and the time needed should be summed. This would be more
useful than just some simple benchmarks and might show some differences.
E.g. for OCaml it might be true that it is faster than some Lisp
implementations.

But I think developing programs in OCaml or Haskell is slower than with
Lisp. At least I have problems with functional programming languages like
Haskell: Once a solution is found, usually it is very smart and works
reasonably fast. But developing a solution in Haskell is more like
developing a mathematical proof: trying to describe the problem in
mathematical terms, then trying to find axioms and theorems (functions)
which solve the problem.

Developing a solution in Lisp for me is more like painting: First some
sketch, like defining some lists with test cases of the problem and some
functions which handle them, without caring too much about types or
correctness. Then doing some top-down design, mixed with OO, functional or
plain imperative programming to finish the painting.

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Kent M Pitman
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <u1wf9kyrr.fsf@nhplace.com>
Frank Buss <··@frank-buss.de> writes:

> Developing a solution in Lisp for me is more like painting: First some
> sketch, like defining some lists with test cases of the problem and some
> functions which handle them, without caring too much about types or
> correctness. Then doing some top-down design, mixed with OO, functional or
> plain imperative programming to finish the painting.

Well said.
From: Slobodan Blazeski
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <1184573028.769713.165340@o61g2000hsh.googlegroups.com>
On Jul 15, 3:59 pm, Frank Buss <····@frank-buss.de> wrote:

> Developing a solution in Lisp for me is more like painting: First some
> sketch, like defining some lists with test cases of the problem and some
> functions which handle them, without caring too much about types or
> correctness. Then doing some top-down design, mixed with OO, functional or
> plain imperative programming to finish the painting.
>
> --
> Frank Buss, ····@frank-buss.de
> http://www.frank-buss.de, http://www.it4-systems.de

I would add this to my list of favourite quotes, if you don't mind. It
describes very well my attitude towards lisp, programming and the kind of
person I am.

bobi
From: Frank Buss
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <1k2zfrfdjnmn4.rb3sl11fqajd$.dlg@40tude.net>
Slobodan Blazeski wrote:

> I would add this to my list of favourite quotes if you don't mind . 

Yes, no problem. But I don't know if I have read something similar in one
of Paul Graham's essays. But it is true: I think Lisp supports the use of
both sides of the brain, while the complex syntax, type system and
programming style of Haskell requires lots of power of the logical half of
the brain and no power is left over for the visual part of the brain.

With Java there is another problem: You have a hard time keeping your brain
awake and not wandering away, while your fingers are running hot, trying to
implement some simple ideas with line after line of redundant code :-)

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Slobodan Blazeski
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <1184660298.632688.160850@m37g2000prh.googlegroups.com>
On Jul 16, 8:55 pm, Frank Buss <····@frank-buss.de> wrote:
> Slobodan Blazeski wrote:
> > I would add this to my list of favourite quotes if you don't mind .
>
> Yes, no problem. But I don't know if I have read something similar in one
> of Paul Graham's essays. But it is true: I think Lisp supports the use of
> both sides of the brain, while the complex syntax, type system and
> programming style of Haskell requires lots of power of the logical half of
> the brain and no power is left over for the visual part of the brain.
>
> With Java there is another problem: You have a hard time keeping your brain
> awake and not wandering away, while your fingers are running hot, trying to
> implement some simple ideas with line after line of redundant code :-)

Tell me about it. Did you read "How to Crash and Burn your Java
project" by Pete McBreen?
 http://www.mcbreen.ab.ca/papers/CrashAndBurnJava.html
My favourite part is:

The best place to put all of the code is either behind the OK button
(you only have to go to one place to see what is happening) or in
stored procedures in the database (since these give optimal database
performance).


That's exactly what happens to me. I don't give a damn if lisp is an
order of magnitude slower as, from my own humble experience, most apps
spend most of their time waiting for user input, or 80% of slow
reaction comes from database & networking, so I would choose something
that is straightforward to understand and easy to maintain / improve even
if it's 10x slower; the user probably won't even notice a difference.
(Unless you're doing something like games.)  If someone wants to
develop with haskell or ocaml, let them try it for themselves; if it
makes them happy, great, else stick with lisp.
From: Tim Bradshaw
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <1184674910.087119.100020@e9g2000prf.googlegroups.com>
On Jul 17, 9:18 am, Slobodan Blazeski <·················@gmail.com>
wrote:

>
> That's exactly what happens to me. I don't give a damn if lisp is an
> order of magnitude slower as, from my own humble experience, most apps
> spend most of their time waiting for user input, or 80% of slow
> reaction comes from database & networking, so I would choose something
> that is straightforward to understand and easy to maintain / improve even
> if it's 10x slower; the user probably won't even notice a difference.
> (Unless you're doing something like games.)  If someone wants to
> develop with haskell or ocaml, let them try it for themselves; if it
> makes them happy, great, else stick with lisp.

Of course, it's not just your experience, what you're saying is borne
out by a huge amount of evidence.  Performance is a much more complex
domain than many people understand it to be.  In particular, outside
some limited, if high-profile (games, numerical simulations, a few
others) domains, raw compute performance is rather seldom on the
critical path.  Instead things like I/O performance of various kinds,
IPC performance of various kinds (often tangled up with I/O) and just
plain good design matter.  And in many contexts programmer time is
more important than any of these.

You can see this in the wholesale rush to higher-level, non-native
compiled languages like Java and Perl, which claim to reduce
programmer time at the cost of some performance.  And where
performance really does matter, if you've spent time trying to improve
performance of substantial programs written in these languages (or
Lisp) you'll rather seldom find that there's something which better
code generation would help significantly with.  In relatively recent
history I've spent time trying to improve the performance of a large
Java program and a smaller but long-running Perl program.  For the
Java application the problem was really awful memory access patterns
which was improved by better data structure design, and for the Perl
program (written by me, so not entirely a fair example) the big
improvement was to aggressively re-use the results of system calls to
reduce the number made, as virtually all the time was spent waiting
for them.

--tim
From: Ken Tilton
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <Z%2ni.91$ip4.83@newsfe12.lga>
Slobodan Blazeski wrote:
> On Jul 16, 8:55 pm, Frank Buss <····@frank-buss.de> wrote:
> 
>>Slobodan Blazeski wrote:
>>
>>>I would add this to my list of favourite quotes if you don't mind .
>>
>>Yes, no problem. But I don't know if I have read something similar in one
>>of Paul Graham's essays. But it is true: I think Lisp supports the use of
>>both sides of the brain, while the complex syntax, type system and
>>programming style of Haskell requires lots of power of the logical half of
>>the brain and no power is leftover for the visual part of the brain.
>>
>>With Java there is another problem: You have a hard time keeping your brain
>>awake and not wandering away, while your fingers are running hot, trying to
>>implement some simple ideas with lines after lines of redundant code :-)
> 
> 
> Tell me about it. Did you  read  How to Crash and Burn your Java
> project by  Pete McBreen ?
>  http://www.mcbreen.ab.ca/papers/CrashAndBurnJava.html
> My favourite part is :
> 
> The best place to put all of the code is either behind the OK button
> (you only have to go to one place to see what is happening) or in
> stored procedures in the database (since these give optimal database
> performance).
> 
> 
> That's exactly what happens to me. I don't give a damn if lisp is  an
> order of magnitude slower ...

It is? I thought it was 20% slower than C. I saw Doctor H claiming 
OhCamel was 10x faster than Lisp, but that just made me curious about 
OakAml being eight times faster than C.

kt
From: George Neuner
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <fvrp93dbh977lug1tqllg7i2l6a3hutkit@4ax.com>
On Tue, 17 Jul 2007 08:58:27 -0400, Ken Tilton
<···········@optonline.net> wrote:

>Slobodan Blazeski wrote:
>> 
>> I don't give a damn if lisp is  an order of magnitude slower ...
>
>It is? I thought it was 20% slower than C. I saw Doctor H claiming 
>OhCamel was 10x faster than Lisp, but that just made me curious about 
>OakAml being eight times faster than C.

Ocaml's memory allocator is much faster than the default allocator in
most C implementations.  If the program uses dynamic allocation and
Ocaml has enough head room to avoid lots of GC cycles it will
outperform C.

However, using statically allocated data structures - mutable
structures in Ocaml - C is faster, though the difference can be
reduced to just a few percent by hand optimizing the Ocaml.

Ocaml's dynamic memory allocation advantage can be mostly negated by
replacing C's default linked list allocator with a better one.
Real-time C implementations, which use region and/or fixed order buddy
allocators, can closely approximate Ocaml's performance and may better
it when GC is taken into account.

George
--
for email reply remove "/" from address
From: Giorgos Keramidas
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <87vechxtz3.fsf@kobe.laptop>
On Tue, 17 Jul 2007 13:16:06 -0400, George Neuner <·········@comcast.net> wrote:
> Ocaml's memory allocator is much faster than the default allocator in
> most C implementations.  If the program uses dynamic allocation and
> Ocaml has enough head room to avoid lots of GC cycles it will
> outperform C.
>
> However, using statically allocated data structures - mutable
> structures in Ocaml - C is faster, though the difference can be
> reduced to just a few percent by hand optimizing the Ocaml.
>
> Ocaml's dynamic memory allocation advantage can be mostly negated by
> replacing C's default linked list allocator with a better one.
> Real-time C implementations, which use region and/or fixed order buddy
> allocators, can closely approximate Ocaml's performance and may better
> it when GC is taken into account.

(somewhat-off-topic
  There's no such thing as "the default linked list allocator" in C.  An
  allocator _may_ be implemented in such a simplistic manner, but it's
  not something "default" or "standard" in C.

  Having said that, the rest of what is said above sounds reasonable
  enough so it may be a fair approximation of Ocaml vs. C allocators.)
From: George Neuner
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <okrt93pt65u465ovlusrbvpkuqlvgfimuo@4ax.com>
On Wed, 18 Jul 2007 08:51:32 +0200, Matthias Buelow <···@incubus.de>
wrote:

>George Neuner wrote:
>
>> by replacing C's default linked list allocator with a better one.
>
>There's no "default" allocator these days and the malloc implementations
>I know don't use simple linked lists.

The "default" allocator is the one the C runtime provides.  Most are
first-fit or best-fit list based implementations.  IME only real-time
C implementations for embedded systems typically come with better
allocators (for use in soft real time code where dynamic allocation
can be tolerated).

George
--
for email reply remove "/" from address
From: Andrew Reilly
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <pan.2007.07.19.06.01.44.398394@areilly.bpc-users.org>
On Thu, 19 Jul 2007 01:03:06 -0400, George Neuner wrote:

> The "default" allocator is the one the C runtime provides.  Most are
> first-fit or best-fit list based implementations.  IME only real-time
> C implementations for embedded systems typically come with better
> allocators (for use in soft real time code where dynamic allocation
> can be tolerated).

On which platforms?  I know that the BSDs have been using fairly
sophisticated virtual-memory-aware slab allocators for years, and are in
the process of changing to even more sophisticated multi-threaded
node-affinity-foo versions.  I'd be enormously surprised if contemporary
Linux or Solaris aren't in the same ballpark. I think that the only place
you'd have a hope of finding a simple first-fit allocator would be in a
cut-down non-vm embedded development system.

Of course, that's not to say that any of these are in the same performance
ballpark as the first generation allocator of a modern garbage-collected
system, but they're solving a somewhat different problem.

-- 
Andrew
From: George Neuner
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <fndv93lms3am2ij5eq0lv86ilbalrc18u1@4ax.com>
On Thu, 19 Jul 2007 16:01:46 +1000, Andrew Reilly
<···············@areilly.bpc-users.org> wrote:

>On Thu, 19 Jul 2007 01:03:06 -0400, George Neuner wrote:
>
>> The "default" allocator is the one the C runtime provides.  Most are
>> first-fit or best-fit list based implementations.  IME only real-time
>> C implementations for embedded systems typically come with better
>> allocators (for use in soft real time code where dynamic allocation
>> can be tolerated).
>
>On which platforms?  I know that the BSDs have been using fairly
>sophisticated virtual-memory-aware slab allocators for years, and are in
>the process of changing to even more sophisticated multi-threaded
>node-afinity-foo versions.  I'd be enormously surprised if contemporary
>Linux or Solars aren't in the same ballpark. 

The C runtime on most systems uses a two level allocator: a low level
large block allocator grabs memory from the OS when necessary and then
a list based algorithm is used for splitting the blocks into smaller
pieces.  In the beginning the low level allocator just used (s)brk -
now it is tied into the VMM system.  The high level allocator is not
so simple either because it works hard to coalesce freed blocks and is
usually neither pure best-fit nor first-fit but rather an optimized
combination of both algorithms.

However, it can still be improved upon.


>I think that the only place
>you'd have a hope of finding a simple first-fit allocator would be in a
>cut-down non-vm embedded development system.

That's actually the last place you'd find one.  I worked many years in
real-time software and I can say from personal experience that good
commercial embedded C development systems (as opposed to quick GCC
ports to get a new chip out the door) routinely offer several choices
of allocators including optimized __-fit, region and variations on
neighbor/buddy.  They offer different trade-offs in speed vs
fragmentation vs overhead and it's left up to the developer to decide
which is best for the purpose.

George
--
for email reply remove "/" from address
From: Slobodan Blazeski
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <1184847069.296879.41980@w3g2000hsg.googlegroups.com>
On Jul 18, 11:24 am, Ken Tilton <···········@optonline.net> wrote:
> Slobodan Blazeski wrote:
> >  The only bad thing about lisp is that I
> > went out of shape, I use to make push ups when c++ code was compiling
>
> That would be in the RtL Highlight Film if you answered the survey.

I just edited my 2003 survey, though I feel a little nostalgic about it.
You could include it in the RtL Highlight Film if you want, but my favourite
quote is:
"Knowing Lisp makes programming in other languages painful." Kristian
Sørensen - Road to Lisp
From: André Thieme
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <f7ok1v$lul$1@registered.motzarella.org>
Frank Buss schrieb:
> Slobodan Blazeski wrote:
> 
>> I would add this to my list of favourite quotes if you don't mind . 
> 
> Yes, no problem. But I don't know if I have read something similar in one
> of Paul Graham's essays. But it is true: I think Lisp supports the use of
> both sides of the brain, while the complex syntax, type system and
> programming style of Haskell requires lots of power of the logical half of
> the brain and no power is leftover for the visual part of the brain.
> 
> With Java there is another problem: You have a hard time keeping your brain
> awake

Hehe, if the other thing you said is one of Slobodan's favourite quotes
then I should add this last sentence to mine :-)

But yes, I agree with what you originally said. It starts with something
simple. And it has to, because I often don't understand the problem well
enough. And during this painting process, where I basically have a cloud
of loosely coupled datasets, some understanding emerges and I learn more
about the problem at hand to express myself more concretely (and Lisp
allows me to do that). So, for an exploratory programming style Lisp is
maybe the best choice.
As soon as the problem is well understood it can be easily solved in a
language like OCaml or Haskell.


André
-- 
From: jayessay
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <m3vecls3c3.fsf@sirius.goldenthreadtech.com>
Mark Tarver <··········@ukonline.co.uk> writes:

> On 15 Jul, 10:50, Frank Buss <····@frank-buss.de> wrote:
> > And if the
> > benchmarks are too simple, you are testing only one aspect of the
> > implementation, e.g. floating point and boxing/unboxing performance.
> 
> No; the correct approach is to have a battery of simple benchmarks
> aimed at different aspects of the language - symbol processing,
> numerics etc.  If you have large complex problems then you end up
> comparing programmers and not languages.

I'd say this is pretty much dead wrong, as it doesn't account for
interactions of constructs, and a) that is the area which will throw
the most wrenches into things[1], and b) the very area which anything
even approaching the character of a real program/application will need
to make the most use of.



/Jon

1. It's pretty easy to optimize different constructs in isolation from
   one another.

-- 
'j' - a n t h o n y at romeo/charley/november com
From: Jon Harrop
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <469ab82b$0$1605$ed2619ec@ptn-nntp-reader02.plus.net>
Mark Tarver wrote:
> On 15 Jul, 10:50, Frank Buss <····@frank-buss.de> wrote:
>> And if the
>> benchmarks are too simple, you are testing only one aspect of the
>> implementation, e.g. floating point and boxing/unboxing performance.
> 
> No; the correct approach is to have a battery of simple benchmarks
> aimed at different aspects of the language - symbol processing,
> numerics etc.  If you have large complex problems then you end up
> comparing programmers and not languages.

I think both of your statements have merit but, given finite time, I suspect
we shall end up with more smaller and fewer larger benchmarks.

A really important point in this context though: complicated benchmarks in
languages like Java and C# can be trivial in OCaml or Lisp.

For example, a parser and evaluator for a simple dynamically typed
functional language with first-class lexical closures that can interpret
the following program:

let rec fib n =
  if n=0 then 0 else
    if n=1 then 1 else
      fib(n - 1) + fib(n - 2) in
fib 35

may be written in only 49 lines of OCaml:

open Camlp4.PreCast;;

let expr = Gram.Entry.mk "expr";;
EXTEND Gram
  expr:
  [ "prog" NONA
      [ "if"; p = expr; "then"; t = expr; "else"; f = expr -> `If(p, t, f)
      | "let"; "rec"; f=LIDENT; x=LIDENT; "="; body=expr; "in"; rest=expr ->
          `LetRec(f, x, body, rest) ]
  | "equal" LEFTA
      [ e1 = expr; "="; e2 = expr -> `Equal(e1, e2) ]
  | "sum" LEFTA
      [ e1 = expr; "+"; e2 = expr -> `Add(e1, e2)
      | e1 = expr; "-"; e2 = expr -> `Add(e1, `Mul(`Int(-1), e2)) ]
  | "product" LEFTA
      [ e1 = expr; "*"; e2 = expr -> `Mul(e1, e2) ]
  | "apply" LEFTA
      [ e1 = expr; e2 = expr -> `Apply(e1, e2) ]
  | "simple" NONA
      [ v = LIDENT -> `Var(v)
      | n = INT -> `Int(int_of_string n)
      | "("; e = expr; ")" -> e ] ];
END;;

(* Evaluator *)
let int = function `Int n -> n | _ -> invalid_arg "int";;
let bool = function `Bool b -> b | _ -> invalid_arg "bool";;

let rec eval vars = function
  | `Apply(func, arg) -> apply (eval vars func) (eval vars arg)
  | `Add(e1, e2) -> `Int (int(eval vars e1) + int(eval vars e2))
  | `Mul(e1, e2) -> `Int (int(eval vars e1) * int(eval vars e2))
  | `Equal(e1, e2) -> `Bool (eval vars e1 = eval vars e2)
  | `If(p, t, f) -> eval vars (if bool (eval vars p) then t else f)
  | `Int i -> `Int i
  | `LetRec(var, arg, body, rest) ->
      let rec vars' = (var, `Closure(arg, vars', body)) :: vars in
      eval vars' rest
  | `Var s -> List.assoc s vars
and apply func arg = match func with
  | `Closure(var, vars, body) -> eval ((var, arg) :: vars) body
  | _ -> invalid_arg "Attempt to apply a non-function value";;

(* Top level *)
let string_of_value = function
  | `Int n -> string_of_int n
  | `Bool true -> "true"
  | `Bool false -> "false"
  | `Closure _ -> "<fun>";;

let () =
  let program = Gram.parse expr Loc.ghost (Stream.of_channel stdin) in
  let vars = ["true", `Bool true; "false", `Bool false] in
  print_endline(string_of_value(eval vars program));;

Could we make another benchmark test out of this? Maybe make the target
language BASIC and have a more sophisticated target program?

Personally, I'd like to see how fast and easy this is using EVAL in Lisp. I
could code up MetaOCaml and F# implementations as well...

-- 
Dr Jon D Harrop, Flying Frog Consultancy
The OCaml Journal
http://www.ffconsultancy.com/products/ocaml_journal/?usenet
From: Kent M Pitman
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <uwsx1jj6g.fsf@nhplace.com>
Jon Harrop <···@ffconsultancy.com> writes:

> Personally, I'd like to see how fast and easy this is using EVAL in Lisp. I
> could code up MetaOCaml and F# implementations as well...

This reflects a fundamental misunderstanding about how Lisp works.

EVAL makes no pretense whatsoever of being fast and is not in any way
related to the execution speed of Lisp since Lisp programs ordinarily
do not pass through EVAL.
From: André Thieme
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <f7ose3$fg0$1@registered.motzarella.org>
Jon Harrop schrieb:

> For example, a parser and evaluator for a simple dynamically typed
> functional language with first-class lexical closures that can interpret
> the following program:
> 
> let rec fib n =
>   if n=0 then 0 else
>     if n=1 then 1 else
>       fib(n - 1) + fib(n - 2) in
> fib 35
> 
> may be written in only 49 lines of OCaml:

When making a line count, Lisp often is not the best.
This has several reasons:
In Lisp, functions, variables and macros usually have longish names.
These have to fit into 80-char-wide lines. This is a style decision.
If you want to write to a file it could, for example, look like this in
Lisp:
(with-open-file (out "/path/to/file"
                      :direction :output
                      :if-does-not-exist :create
                      :if-exists :overwrite)
   ...)

It takes up 4 LOC and it looks like a 4 liner, while it really is just
one line. The arguments (that start with a colon ":") could well have
been compressed into a cryptic string:
(w/file (out "oco" "/path/to/file")
   ..)

and suddenly it looks like a one-liner.

Also other style elements will make Lisp code using up more lines.
Let's take this simple accumulator producing function:
(defun make-accumulator (n)
   (lambda (i)
     (incf n i)))

In some other languages the same solution could look like
fun makeAccumulator n = (i -> n inc i)

To factor out these style implications you could try to compare
the source code when it is packed. The best packers for code should be
compilers, I think.

For example you can look at the file size of a compiled OCaml function
and the file size of a (with clisp) compiled function.


Andr�
-- 
From: Slobodan Blazeski
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <1184930654.351862.265130@57g2000hsv.googlegroups.com>
On Jul 20, 1:34 am, André Thieme <address.good.until.
···········@justmail.de> wrote:
> Jon Harrop schrieb:
>
> > For example, a parser and evaluator for a simple dynamically typed
> > functional language with first-class lexical closures that can interpret
> > the following program:
>
> > let rec fib n =
> >   if n=0 then 0 else
> >     if n=1 then 1 else
> >       fib(n - 1) + fib(n - 2) in
> > fib 35
>
> > may be written in only 49 lines of OCaml:
>
> When making a line count then Lisp often is not the best.
> This has several reasons:
> In Lisp functions, variables and macros usually have longish names.
> These have to fit into 80 chars wide lines. This is a style decision.
> If you want to write to a file it could for example look like this in
> Lisp:
> (with-open-file (out "/path/to/file"
>                       :direction :output
>                       :if-does-not-exist :create
>                       :if-exists :overwrite)
>    ...)
>
> It takes up 4 LOC and it looks like a 4 liner, while it really is just
> one line. The arguments (that start with a colon ":") could well have
> been compressed into a cryptic string:
> (w/file (out "oco" "/path/to/file")
>    ..)
>
> and suddenly it looks like a one-liner.
>
> Also other style elements will make Lisp code using up more lines.
> Let's take this simple accumulator producing function:
> (defun make-accumulator (n)
>    (lambda (i)
>      (incf n i)))
>
> In some other languages the same solution could look like
> fun makeAccumulator n = (i -> n inc i)
>
> To factor out these style implications you could try to compare
> the source code when it is packed. The best packers for code should be
> compilers, I think.
>
> For example you can look at the file size of a compiled OCaml function
> and the file size of a (with clisp) compiled function.
>
> André
> --

Very-long-and-descriptive-names are one of the strengths that I
learned from the lisp culture, and I use it in all the languages I
encounter. As soon as your program becomes non-trivial there is more
interest in reading & understanding code than in line count,
especially if there are other programmers involved, or even for
yourself after a few months. For good naming style look at:
http://www.cs.northwestern.edu/academics/courses/325/readings/names.html
from
http://www.cs.northwestern.edu/academics/courses/325/readings/
From: Tamas Papp
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <87zm1rrmll.fsf@pu100877.student.princeton.edu>
Slobodan Blazeski <·················@gmail.com> writes:

> Very-long-and-descriptive-names are one of the strengths that I
> learned from the lisp culture, and I use it in all the languages I
> encounter. As soon as your program becomes non-trivial there is more

One small thing I love about Lisp is the possibility of including
characters in names which would be special in other languages, e.g.
vector+.

Tamas
From: Cesar Rabak
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <f7egeq$4h4$1@aioe.org>
Mark Tarver escreveu:
> On 15 Jul, 01:59, Jon Harrop <····@ffconsultancy.com> wrote:
[snipped]
> 
> Benchmark programs for speed comparisons should be very simple or
> trivial to write.  If you pose significant programming challenges in
> your benchmarks then what you end up doing is comparing algorithms
> and not languages.
> 
Mark,

I feel you're oversimplifying... certain languages allow for
approaches which differ radically from others; this is one of the selling
points of FP languages.

--
Cesar Rabak
From: Mark Tarver
Subject: Re: Implement the Gabriel benchmarks in OCaml
Date: 
Message-ID: <1184502930.391382.91130@r34g2000hsd.googlegroups.com>
On 15 Jul, 01:59, Jon Harrop <····@ffconsultancy.com> wrote:
> There seems to be a great deal of interest from the functional programming
> community in benchmarking. Is there enough interest to create a new
> computer language shootout that showcases more relevant/suitable tasks for
> FP?
>
> I think the ray tracer and symbolic simplifier are very valuable benchmarks
> but I would also like to see regexps, program evaluators (rewriters,
> interpreters and compilers) and other benchmarks such as the "n"th-nearest
> neighbour example from my book:
>
>  http://www.ffconsultancy.com/products/ocaml_for_scientists/complete.html
>
> Perhaps even collating example programs where performance of of little
> relevance, such as GUI Sudoku solvers, Wade's Game of Concentration, OpenGL
> examples, web programming and so on.
>
> I think such a resource would be both valuable and interesting to a wide
> variety of readers but, most of all, I see it as an opportunity to
> demonstrate applications where functional programming shines.
>
> --
> Dr Jon D Harrop, Flying Frog Consultancy
> The OCaml Journalhttp://www.ffconsultancy.com/products/ocaml_journal/?usenet

If you want to compare OCaml with Lisp why don't you choose the
Gabriel benchmarks and write them in OCaml?  They've been intensively
done in Lisp and are designed to test various aspects of an FPL.

Mark
From: Nicolas Neuss
Subject: Re: Implement the Gabriel benchmarks in OCaml
Date: 
Message-ID: <87y7hgt6if.fsf@ma-patru.mathematik.uni-karlsruhe.de>
Mark Tarver <··········@ukonline.co.uk> writes:

> On 15 Jul, 01:59, Jon Harrop <the_flying_frog.com> wrote:
> > ...
> >  http://www....
> >
> > The ... Journal http://www...
> 
> If you want to compare OCaml with Lisp why don't you choose the
> Gabriel benchmarks and write them in OCaml?  They've been intensively
> done in Lisp and are designed to test various aspects of an FPL.
> 
> Mark

Because he then would have to work on something which would not grow the
sales of his books?

Nicolas

P.S.: I would suggest that at least the link spam should not be repeated
when citing him.
From: Jon Harrop
Subject: Re: Implement the Gabriel benchmarks in OCaml
Date: 
Message-ID: <469be852$0$1593$ed2619ec@ptn-nntp-reader02.plus.net>
Nicolas Neuss wrote:
> Because he then would have to work on something which would not grow the
> sales of his books?

Actually, I spent yesterday doing this. Most of the Gabriel benchmarks have
already been ported to OCaml. One of the more interesting ones is the Boyer
benchmark, which is a tiny theorem prover that works by repeated symbolic
rewriting. However, it only solves one problem and takes only 0.01s to
complete.

An OCaml translation is actually part of the benchmark suite of the OCaml
distribution:

  http://stuff.mit.edu/afs/sipb/project/ocaml/src/current/test/

The implementation is quite poor and, consequently, it is 4x slower than the
Lisp. However, this benchmark is trivially reducible, rendering it useless:

  http://home.pipeline.com/~hbaker1/BoyerB.ps.gz

I started optimizing boyer.ml before I realised this, and gave up when it
became obvious that I could make the program take any amount of time to
complete, so it measures nothing.

I've also ported Boehm's GC benchmark but this is (unsurprisingly) a very
poor benchmark of GC performance that stresses the GC so little that it
actually makes Boehm's GC look average. Moreover, it is only 10 lines long.

I would say that the symbolic simplifier supersedes both of these tests, as
it performs more sophisticated term-rewriting in a way that stresses the
GC.
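
For concreteness, here is a toy sketch (my own illustration, not the
benchmark itself) of the kind of rewriting involved: local rules applied
bottom-up over an expression tree, allocating a fresh node at every step,
which is exactly what exercises the GC:

```ocaml
(* Minimal term rewriter: simplify an arithmetic expression by applying
   local rules (x+0 -> x, x*1 -> x, x*0 -> 0, constant folding) after
   recursively simplifying the subterms. *)
type expr =
  | Int of int
  | Var of string
  | Add of expr * expr
  | Mul of expr * expr

let rec simplify = function
  | Add (a, b) ->
      (match simplify a, simplify b with
       | Int 0, e | e, Int 0 -> e
       | Int m, Int n -> Int (m + n)
       | a, b -> Add (a, b))
  | Mul (a, b) ->
      (match simplify a, simplify b with
       | Int 0, _ | _, Int 0 -> Int 0
       | Int 1, e | e, Int 1 -> e
       | Int m, Int n -> Int (m * n)
       | a, b -> Mul (a, b))
  | e -> e

let () =
  (* (x * 1) + (0 + 3) rewrites to x + 3 *)
  match simplify (Add (Mul (Var "x", Int 1), Add (Int 0, Int 3))) with
  | Add (Var "x", Int 3) -> print_endline "ok"
  | _ -> print_endline "unexpected"
```

A realistic benchmark would just apply many more rules to much bigger
terms, but the allocation pattern is the same.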

So, I would advocate the selection of a better benchmark that is practically
irreducible but measures the same things. In this context, two tests spring
to mind:

1. A program to evaluate BASIC programs.

2. A program to generate FFT programs.

I would also like to see Haskell implementations.

Incidentally, the Haskell guys recently translated my ray tracer benchmark
and Haskell's performance is similar to Lisp's with GHC and SBCL.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
OCaml for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists/?usenet
From: Mark Tarver
Subject: Re: Implement the Gabriel benchmarks in OCaml
Date: 
Message-ID: <1184675413.485068.92460@e16g2000pri.googlegroups.com>
> So, I would advocate the selection of a better benchmark that is practically
> irreducible but measures the same things. In this context, two tests spring
> to mind:
>
> 1. A program to evaluate BASIC programs.

Do you mean an emulator for a Basic-like language?

Mark
From: Pascal Bourguignon
Subject: Re: Implement the Gabriel benchmarks in OCaml
Date: 
Message-ID: <87bqebku77.fsf@thalassa.lan.informatimago.com>
Mark Tarver <··········@ukonline.co.uk> writes:

>> So, I would advocate the selection of a better benchmark that is practically
>> irreducible but measures the same things. In this context, two tests spring
>> to mind:
>>
>> 1. A program to evaluate BASIC programs.
>
> Do you mean an emulator for a Basic-like language?

I bet there are people doing meta-programming with Visual BASIC,
generating Visual BASIC programs, storing them in databases, and
loading and running them...

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
From: Cesar Rabak
Subject: Re: Implement the Gabriel benchmarks in OCaml
Date: 
Message-ID: <f7ijpm$tip$1@aioe.org>
Pascal Bourguignon escreveu:
> Mark Tarver <··········@ukonline.co.uk> writes:
> 
>>> So, I would advocate the selection of a better benchmark that is practically
>>> irreducible but measures the same things. In this context, two tests spring
>>> to mind:
>>>
>>> 1. A program to evaluate BASIC programs.
>> Do you mean an emulator for a Basic-like language?
> 
> I bet there are people doing meta-programming with Visual BASIC,
> generating Visual BASIC programs, storing them in databases, and
> loading and running them...
> 
Using only a palette in a GUI tool with a handful of widgets for 'visual 
meta-programming'?

Couldn't resist... :-)

--
Cesar Rabak
From: Ken Tilton
Subject: Re: Implement the Gabriel benchmarks in OCaml
Date: 
Message-ID: <Zzani.10399$xe1.6041@newsfe12.lga>
Pascal Bourguignon wrote:
> Mark Tarver <··········@ukonline.co.uk> writes:
> 
> 
>>>So, I would advocate the selection of a better benchmark that is practically
>>>irreducible but measures the same things. In this context, two tests spring
>>>to mind:
>>>
>>>1. A program to evaluate BASIC programs.
>>
>>Do you mean an emulator for a Basic-like language?
> 
> 
> I bet there are people doing meta-programming with Visual BASIC,
> generating Visual BASIC programs, storing them in databases, and
> loading and running them...
> 

The word you are looking for is "greenspunning".

hth,kzo
From: Jon Harrop
Subject: Re: Implement the Gabriel benchmarks in OCaml
Date: 
Message-ID: <469d5eeb$0$1612$ed2619ec@ptn-nntp-reader02.plus.net>
Mark Tarver wrote:
>> So, I would advocate the selection of a better benchmark that is
>> practically irreducible but measures the same things. In this context,
>> two tests spring to mind:
>>
>> 1. A program to evaluate BASIC programs.
> 
> Do you mean an emulator for a Basic-like language?

Yes. Just a minimal example.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
OCaml for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists/?usenet
From: Nicolas Neuss
Subject: Re: Implement the Gabriel benchmarks in OCaml
Date: 
Message-ID: <87bqebw3rd.fsf@ma-patru.mathematik.uni-karlsruhe.de>
Jon Harrop <···@ffconsultancy.com> writes:
> So, I would advocate the selection of a better benchmark that is practically
> irreducible but measures the same things. In this context, two tests spring
> to mind:
> 
> 1. A program to evaluate BASIC programs.
> 
> 2. A program to generate FFT programs.

As we know, concerning point 2, the FFTW project did the work for you
already.  And probably you have also an OCaml BASIC evaluator lying around
somewhere.

Let's add some interesting tasks in favor of the CL side, too.  I hope you
will do them before you expect us to work on your little tasks from above.

3. An OCaml program for evaluating FORTRAN programs

4. An OCaml CAS system (about the size of Maxima or Axiom)

5. Perl-like regexps in OCaml (as good and fast as cl-ppcre)

etc, etc


Nicolas
From: Jon Harrop
Subject: Re: Implement the Gabriel benchmarks in OCaml
Date: 
Message-ID: <469d60ef$0$1614$ed2619ec@ptn-nntp-reader02.plus.net>
Nicolas Neuss wrote:
> Jon Harrop <···@ffconsultancy.com> writes:
>> 1. A program to evaluate BASIC programs.
>> 
>> 2. A program to generate FFT programs.
> 
> As we know, concerning point 2, the FFTW project did the work for you
> already.

Certainly relevant but too big to ask anyone to translate into Lisp. We can
take the most trivial reductions that FFTW implements and perform those.

> And probably you have also an OCaml BASIC evaluator lying around 
> somewhere.

I have the following interpreter for a tiny dynamically-typed pure
functional language.

> 3. An OCaml program for evaluating FORTRAN programs

This might be feasible for F77. Is there already a Lisp implementation?

> 4. An OCaml CAS system (about the size of Maxima or Axiom)

That is too big and I can't do any more work on CAS as I'm under NDA from
Wolfram Research.

> 5. Perl-like regexps in OCaml (as good and fast as cl-ppcre)

Already done. Might be interesting to benchmark but I've never had a
regexp-limited program.

Here's that interpreter:

(* Parser: uses a Camlp4 grammar, so this file must be compiled with the
   Camlp4 preprocessor enabled. *)
open Camlp4.PreCast;;

let expr = Gram.Entry.mk "expr";;
EXTEND Gram
  expr:
  [ "prog" NONA
      [ "if"; p = expr; "then"; t = expr; "else"; f = expr -> `If(p, t, f)
      | "let"; "rec"; f=LIDENT; x=LIDENT; "="; body=expr; "in"; rest=expr ->
          `LetRec(f, x, body, rest) ]
  | "equal" LEFTA
      [ e1 = expr; "="; e2 = expr -> `Equal(e1, e2) ]
  | "sum" LEFTA
      [ e1 = expr; "+"; e2 = expr -> `Add(e1, e2)
      | e1 = expr; "-"; e2 = expr -> `Add(e1, `Mul(`Int(-1), e2)) ]
  | "product" LEFTA
      [ e1 = expr; "*"; e2 = expr -> `Mul(e1, e2) ]
  | "apply" LEFTA
      [ e1 = expr; e2 = expr -> `Apply(e1, e2) ]
  | "simple" NONA
      [ v = LIDENT -> `Var(v)
      | n = INT -> `Int(int_of_string n)
      | "("; e = expr; ")" -> e ] ];
END;;

(* Evaluator *)
let int = function `Int n -> n | _ -> invalid_arg "int";;
let bool = function `Bool b -> b | _ -> invalid_arg "bool";;

let rec eval vars = function
  | `Apply(func, arg) -> apply (eval vars func) (eval vars arg)
  | `Add(e1, e2) -> `Int (int(eval vars e1) + int(eval vars e2))
  | `Mul(e1, e2) -> `Int (int(eval vars e1) * int(eval vars e2))
  | `Equal(e1, e2) -> `Bool (eval vars e1 = eval vars e2)
  | `If(p, t, f) -> eval vars (if bool (eval vars p) then t else f)
  | `Int i -> `Int i
  | `LetRec(var, arg, body, rest) ->
      let rec vars' = (var, `Closure(arg, vars', body)) :: vars in
      eval vars' rest
  | `Var s -> List.assoc s vars
and apply func arg = match func with
  | `Closure(var, vars, body) -> eval ((var, arg) :: vars) body
  | _ -> invalid_arg "Attempt to apply a non-function value";;

(* Top level *)
let string_of_value = function
  | `Int n -> string_of_int n
  | `Bool true -> "true"
  | `Bool false -> "false"
  | `Closure _ -> "<fun>";;

let () =
  let program = Gram.parse expr Loc.ghost (Stream.of_channel stdin) in
  let vars = ["true", `Bool true; "false", `Bool false] in
  print_endline(string_of_value(eval vars program));;

This can be made into a BASIC interpreter easily enough. Might also be
interesting to make the target language lazy.
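
Here is a minimal standalone sketch (deliberately not Camlp4-based) of one
way to do that: delay each argument as a thunk and force it only at
variable lookup, which gives call-by-need since Lazy.force memoizes.

```ocaml
(* Lazy variant of the evaluator: arguments are delayed as thunks and
   only forced when a variable is actually looked up. *)
type expr =
  | Int of int
  | Var of string
  | Add of expr * expr
  | Lam of string * expr
  | Apply of expr * expr

type value =
  | VInt of int
  | VClosure of (value Lazy.t -> value)

let int = function VInt n -> n | _ -> invalid_arg "int"

let rec eval env = function
  | Int n -> VInt n
  | Var x -> Lazy.force (List.assoc x env)  (* force on use *)
  | Add (e1, e2) -> VInt (int (eval env e1) + int (eval env e2))
  | Lam (x, body) -> VClosure (fun thunk -> eval ((x, thunk) :: env) body)
  | Apply (f, arg) ->
      (match eval env f with
       | VClosure apply -> apply (lazy (eval env arg))  (* delay argument *)
       | _ -> invalid_arg "apply")

let () =
  (* The argument refers to an unbound variable, but the unused "x" is
     never forced, so this prints 1 under lazy evaluation. *)
  match eval [] (Apply (Lam ("x", Int 1), Var "boom")) with
  | VInt n -> print_endline (string_of_int n)
  | VClosure _ -> print_endline "<fun>"
```

Under a strict evaluator the same program would fail with Not_found when
the argument is evaluated.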

-- 
Dr Jon D Harrop, Flying Frog Consultancy
OCaml for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists/?usenet
From: Jon Harrop
Subject: Re: Implement the Gabriel benchmarks in OCaml
Date: 
Message-ID: <469eb6f5$0$1618$ed2619ec@ptn-nntp-reader02.plus.net>
Jeronimo Pellegrini wrote:
> On 2007-07-18, Nicolas Neuss <········@mathematik.uni-karlsruhe.de> wrote:
>> Jon Harrop <···@ffconsultancy.com> writes:
>>> http://shootout.alioth.debian.org/sandbox/benchmark.php?test=regexdna&lang=all
> 
> If you look at the code, it has no type declarations. And the logs show
> that SBCL complained about not being able to optimize exactly because of
> that.

Would type declarations make much difference for a program that spends most
of its time in the regex library?

-- 
Dr Jon D Harrop, Flying Frog Consultancy
OCaml for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists/?usenet
From: Nicolas Neuss
Subject: Re: Implement the Gabriel benchmarks in OCaml
Date: 
Message-ID: <87lkdcua2s.fsf@ma-patru.mathematik.uni-karlsruhe.de>
Jeronimo Pellegrini <···@aleph0.info> writes:

> On 2007-07-18, Nicolas Neuss <········@mathematik.uni-karlsruhe.de> wrote:
> > Apparently we are living in very different worlds[1].  I see the
> > shootout as a waste of time and I won't participate in it any more
> > (having done it before).  The reasons are described by Juho Snellman
> > better than I could do it:
> >
> > http://groups.google.com/group/comp.lang.lisp/msg/5489247d2f56a848
> 
> But it's not supposed to be taken too seriously:
> 
> http://shootout.alioth.debian.org/sandbox/faq.php#means

Yes, as I said I have finally understood that the shootout is bogus.  So
please understand that I deny Harrop's wish to drag me into that idiocy
again.  Concerning your link: I prefer the falcon.

Nicolas

--

The "Flying Frog Pattern" or
"Report from a visit at the Flying Frog Consultancy":

JH:      "Hey, look here, this frog is called Ocaml.  Wanna see it fly?"

Visitor: "Hmm, it looks interesting.  But...
          Oh my god!  You smashed it against the wall!"

JH:      "But it did fly, didn't it?  Wanna see another one flying?
          Here, this one is called F#."

Visitor: "It looks a little treacherous, nevertheless I would prefer if...
          Oh, what a mess, you did it again!"

JH:      "Yep.  Wanna see a third one?"

Visitor: "By no means!  Please, try this one instead."

JH:      "Hey, what's happening here?  This one is really flying..."

Visitor: "Yes, it is a bird called Common Lisp.  Look how nicely it
          circles up into the sky!"

JH:      "How nasty!  I _want_ them smashed against the wall!"

Visitor: "Hmm, a living Common Lisp won't let you do that.  But maybe you
          can catch yourself a free Common Lisp and kill it.  Then you
          could fling it to your wall."

JH:      "See?  Common Lisp is no good at flying!"
From: William D Clinger
Subject: Re: Implement the Gabriel benchmarks in OCaml
Date: 
Message-ID: <1184874000.792953.4400@r34g2000hsd.googlegroups.com>
Jon Harrop wrote:
> Actually, I spent yesterday doing this. Most of the Gabriel benchmarks have
> already been ported to OCaml. One of the more interesting ones is the Boyer
> benchmark, which is a tiny theorem prover that works by repeated symbolic
> rewriting. However, it only solves one problem and takes only 0.01s to
> complete.

The original Boyer benchmark is obsolete, but the nboyer
and sboyer benchmarks are excellent replacements for it
[1,2].

> I've also ported Boehm's GC benchmark but this is (unsurprisingly) a very
> poor benchmark of GC performance that stresses the GC so little that it
> actually makes Boehm's GC look average. Moreover, it is only 10 lines long.

Again, much better gc benchmarks are available [3,4].

Will

[1] http://www.ccs.neu.edu/home/will/Twobit/src/nboyer.scm
[2] http://www.ccs.neu.edu/home/will/Twobit/src/sboyer.scm
[3] http://www.ccs.neu.edu/home/will/Twobit/src/gcold.scm
[4] http://www.ccs.neu.edu/home/will/GC/Benchmarks/perm.sch
    Note that MpermNKL-benchmark is the main entry point;
    tenperm-benchmark gives an example of its use.  Note
    also that the Gambit version of this program is a
    terrible gc benchmark [5].
[5] http://www.ccs.neu.edu/home/will/Twobit/src/perm9.scm
    This is a terrible gc benchmark.  Use [4] instead.
From: Stefan Scholl
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <0T48fnl2If4nNv8%stesch@parsec.no-spoon.de>
Jon Harrop <···@ffconsultancy.com> wrote:
> There seems to be a great deal of interest from the functional programming
> community in benchmarking. Is there enough interest to create a new
> computer language shootout that showcases more relevant/suitable tasks for
> FP?

Does this sell books? Should I write one?
From: ······@gmail.com
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <1184538237.210517.176730@k79g2000hse.googlegroups.com>
On Jul 15, 1:59 am, Jon Harrop <····@ffconsultancy.com> wrote:
.. bullshit snipped

Time and time again you've been outed as a troll and spammer.
You constantly misrepresent what others say, twist words,
lie and deceive. You only accept what fits your crafted
agenda and furthers your nefarious purposes. In so many words,
you are a scumbag plain and simple.

Somehow i don't see many interested in your "benchmarks" and
"language comparisons".
From: Matthias Buelow
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <5g1qopF3cpj10U1@mid.dfncis.de>
Jon Harrop wrote:

> There seems to be a great deal of interest from the functional programming
> community in benchmarking. Is there enough interest to create a new
> computer language shootout that showcases more relevant/suitable tasks for
> FP?

It would be more interesting to test the language for programming speed,
not program speed (even though my inner nerd protests).

For most areas, being able to turn your ideas into executable code with
a minimum of effort is the most important feature of all and is the very
point of high-level programming languages. That's why scripting
languages like perl, python, php etc. are so popular; they make writing
certain programs very easy, even though I've never heard anyone claim
that perl was fast per se (ok, its regex and string matching stuff is
apparently quite optimized but ordinary execution is the typical
interpreter speed).

I think I remember the wise Kent Pitman (*wink*) say on here, "the inner
loop of a programming language is end-user programming", or something
like that. So that's what should be improved first. Arguably, languages
like Lisp are "more optimized" in that regard than languages that impose
a lot of bureaucracy on the programmer.
From: Jon Harrop
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <469bb793$0$1597$ed2619ec@ptn-nntp-reader02.plus.net>
Matthias Buelow wrote:
> Arguably, languages like Lisp are "more optimized" in that regard than
> languages that impose a lot of bureaucracy on the programmer. 

That is certainly a belief held by some people. However, I think it is an
unfalsifiable hypothesis. How can we test programming speed without pitting
Jo Lisper against the OCaml elite?

-- 
Dr Jon D Harrop, Flying Frog Consultancy
The OCaml Journal
http://www.ffconsultancy.com/products/ocaml_journal/?usenet
From: Matthias Buelow
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <5g1u8bF37cremU1@mid.dfncis.de>
Jon Harrop wrote:

>> Arguably, languages like Lisp are "more optimized" in that regard than
>> languages that impose a lot of bureaucracy on the programmer. 
> 
> That is certainly a belief held by some people. However, I think it is an
> unfalsifiable hypothesis. How can we test programming speed without pitting
> Jo Lisper against the OCaml elite?

I'd think you'd have to use an extensive case study. It's amazing that the
"Software Engineering" crowd hasn't really come up with such stuff yet,
instead of just churning out one visual-modelling-for-OO-practices
fooblah after another, preferably in combination with simplistic,
bureaucratic programming languages (read: Java, C#.NET, etc.), which
effectively increases the unproductive bureaucracy instead of making
programming more efficient. But then again, the focus of Software
Engineering isn't efficient programming but making the use of cheap,
low-skilled programmers feasible for large projects. We'll see if that
will work out in the long range, I'm rather doubtful.
From: Jon Harrop
Subject: Re: New Computer Language Shootout?
Date: 
Message-ID: <469bcf7a$0$1635$ed2619ec@ptn-nntp-reader02.plus.net>
Matthias Buelow wrote:
> Jon Harrop wrote:
>>> Arguably, languages like Lisp are "more optimized" in that regard than
>>> languages that impose a lot of bureaucracy on the programmer.
>> 
>> That is certainly a belief held by some people. However, I think it is an
>> unfalsifiable hypothesis. How can we test programming speed without
>> pitting Jo Lisper against the OCaml elite?
> 
> I'd think you'd have to use extensive case study.

Are the OCaml and Lisp communities big enough for such a study to even be
theoretically possible?

Consider a control experiment where two groups of people:

1. Programmers
2. Lisp programmers

were asked to solve benchmark problems in the C programming language. I
believe group 2 would be significantly better but this does not reflect
upon the capability of C vs C, of course, but simply reflects the fact
that only the programming elite bother looking at non-mainstream languages.

While I agree that a software engineering case study would be great, I just
cannot see how we can even attempt it in a way that the results can be at
all meaningful.

> It's amazing that the 
> "Software Engineering" crowd hasn't really come up with such stuff yet,
> instead of just churning out one visual-modelling-for-OO-practices
> fooblah after another, preferably in combination with simplistic,
> bureaucratic programming languages (read: Java, C#.NET, etc.), which
> effectively increases the unproductive bureaucracy instead of making
> programming more efficient. But then again, the focus of Software
> Engineering isn't efficient programming but making the use of cheap,
> low-skilled programmers feasible for large projects. We'll see if that
> will work out in the long range, I'm rather doubtful.

Agreed.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
OCaml for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists/?usenet