Here's an example that is being touted as a programming study
which provides evidence that "Common Lisp is slow".
http://emr.cs.iit.edu/~reingold/calendars.shtml
From: Sam Steingold
Subject: Re: performance challenge
Date:
Message-ID: <uzn4qq1x6.fsf@gnu.org>
> * Christopher C. Stacy <······@arjf.qgcd.pbz> [2004-08-19 20:11:27 +0000]:
>
> Here's an example that is being touted as a programming study
> which provides evidence that "Common Lisp is slow".
>
> http://emr.cs.iit.edu/~reingold/calendars.shtml
just looking at the first screenful:
1. they use lists to represent dates (instead of defstruct)
2. they use a list - a freshly consed up list!! - as a lookup table:
(defun last-day-of-gregorian-month (month year)
  ;; Last day in Gregorian $month$ during $year$.
  (if ;; February in a leap year
      (and (= month 2)
           (= (mod year 4) 0)
           (not (member (mod year 400) (list 100 200 300))))
      ;; Then return
      29
      ;; Else return
      (nth (1- month)
           (list 31 28 31 30 31 30 31 31 30 31 30 31))))
(and then they re-cons this same list in `last-day-of-julian-month'!)
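For comparison, a minimal sketch of the obvious fix (a hypothetical rewrite of mine, not the authors' code): hoist the table into a vector built once at load time, so nothing is consed per call and indexing is O(1):

```lisp
;; Month-length table allocated once, not re-consed on every call.
(defparameter *gregorian-month-lengths*
  #(31 28 31 30 31 30 31 31 30 31 30 31))

(defun last-day-of-gregorian-month (month year)
  ;; Last day in Gregorian MONTH during YEAR.
  (if (and (= month 2)                      ; February in a leap year
           (zerop (mod year 4))
           (not (member (mod year 400) '(100 200 300))))
      29
      (svref *gregorian-month-lengths* (1- month))))
```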
--
Sam Steingold (http://www.podval.org/~sds) running w2k
<http://www.camera.org> <http://www.iris.org.il> <http://www.memri.org/>
<http://www.mideasttruth.com/> <http://www.honestreporting.com>
Any connection between your reality and mine is purely coincidental.
······@news.dtpq.com (Christopher C. Stacy) writes:
> Here's an example that is being touted as a programming study
> which provides evidence that "Common Lisp is slow".
>
> http://emr.cs.iit.edu/~reingold/calendars.shtml
So?
Christophe
--
http://www-jcsu.jesus.cam.ac.uk/~csr21/ +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%") (pprint #36rJesusCollegeCambridge)
On Thu, 19 Aug 2004 20:11:27 GMT, ······@news.dtpq.com (Christopher C.
Stacy) wrote:
>Here's an example that is being touted as a programming study
>which provides evidence that "Common Lisp is slow".
>
>http://emr.cs.iit.edu/~reingold/calendars.shtml
So where's the study itself? It doesn't seem to be mentioned anywhere
on this page.
Tell you what, I challenge whoever did the study to write the C++
code without any type declarations (just like the Lisp code is
written) and then compare how fast each is.
(-:
I'd say there's even a reasonable chance the study is comparing
interpreted Lisp vs the C++.
>>>>> On Thu, 19 Aug 2004 22:57:23 GMT, JP Massar ("JP") writes:
JP> On Thu, 19 Aug 2004 20:11:27 GMT, ······@news.dtpq.com (Christopher C.
JP> Stacy) wrote:
>> Here's an example that is being touted as a programming study
>> which provides evidence that "Common Lisp is slow".
>>
>> http://emr.cs.iit.edu/~reingold/calendars.shtml
JP> So where's the study itself? It doesn't seem to be mentioned
JP> anywhere on this page.
The link points to his book, "Calendrical Calculations" from Cambridge
University Press. It's a high-profile work on calendar science that
is referenced all over the Internet. Browsing around a little yields
this link, which is better:
http://emr.cs.iit.edu/home/reingold/calendar-book/papers/
In article <·············@news.dtpq.com>, Christopher C. Stacy wrote:
> Here's an example that is being touted as a programming study
> which provides evidence that "Common Lisp is slow".
>
> http://emr.cs.iit.edu/~reingold/calendars.shtml
Touted where? The page itself doesn't seem to say anything of the
sort.
>>>>> On Fri, 20 Aug 2004 02:17:45 GMT, Larry Clapp ("Larry") writes:
Larry> In article <·············@news.dtpq.com>, Christopher C. Stacy wrote:
>> Here's an example that is being touted as a programming study
>> which provides evidence that "Common Lisp is slow".
>>
>> http://emr.cs.iit.edu/~reingold/calendars.shtml
Larry> Touted where? The page itself doesn't seem to
Larry> say anything of the sort.
The page is just a link to their papers and books, which talk
about how they calculated the calendars. One comparison in
particular is the Hindu Lunar calendar taking 34 hours to run
in Common Lisp. Some folks on other newsgroups are suggesting
that it would only take a few minutes to do the calculation in C.
They are suggesting that this is a good Lisp example because it
was written by mathematicians concerned with algorithms.
The paper doesn't seem to support that, though.
It says
"We have chosen Common Lisp as the vehicle for
implementation because it encourages functional programming,
and has a trivial syntax, nearly self-evident semantics,
historical durability, and wide distribution."
So I guess the claim is that this is an example of how to
write perspicuous code in the Lisp way, and that it must
be manifestly slow.
It's a widely cited work.
······@news.dtpq.com (Christopher C. Stacy) writes:
>>>>>> On Fri, 20 Aug 2004 02:17:45 GMT, Larry Clapp ("Larry") writes:
>
> Larry> In article <·············@news.dtpq.com>, Christopher C. Stacy wrote:
> >> Here's an example that is being touted as a programming study
> >> which provides evidence that "Common Lisp is slow".
> >>
> >> http://emr.cs.iit.edu/~reingold/calendars.shtml
>
> Larry> Touted where? The page itself doesn't seem to
> Larry> say anything of the sort.
>
> The page is just a link to their papers and books, which talk about
> how they calculated the calendars. One comparison in particular is
> the Hindu Lunar calendar taking 34 hours to run in Common Lisp. Some
> folks on other newsgroups are suggesting that it would only take a
> few minutes to do the calculation in C. They are suggesting that
> this is a good Lisp example because it was written by mathematicians
> concerned with algorithms.
What newsgroups is this discussion occurring in? I couldn't find
anything via Google except one mention of the code in comp.lang.cobol.
> The paper doesn't seem to support that, though. It says
>
> "We have chosen Common Lisp as the vehicle for implementation
> because it encourages functional programming, and has a trivial
> syntax, nearly self-evident semantics, historical durability, and
> wide distribution."
>
> So I guess the claim is that this is an example of how to write
> perspicuous code in the Lisp way, and that it must be manifestly
> slow.
Hmmm. I wasn't blown away by the perspicuousness of the code either.
Not that it was terrible. After a bit of tinkering around with it I
found some refactorings that made it both more clear (to me anyway)
and quite likely more efficient.
-Peter
--
Peter Seibel ·····@javamonkey.com
Lisp is the red pill. -- John Fraser, comp.lang.lisp
>>>>> On Fri, 20 Aug 2004 20:28:56 GMT, Peter Seibel ("Peter") writes:
Peter> What newsgroups is this discussion occurring in?
It's in "alt.folklore.computers" but the subject lines on threads
there are always totally random. In this case, it's one of the
"Vintage computers are better than modern crap !" threads; do a
full-text search for "Lisp" postings by me in the last week.
Here's a numerical challenge submitted by someone over there.
Perhaps someone who likes to tweak CMUCL programs will write
this program and benchmark it in answer to their query below.
(Be very careful not to cross-post, or else the resulting thread,
which will soon have nothing to do with Lisp, will persist here
for probably 5 months to a year. They never start new threads
over there in that newsgroup; they just reuse existing ones!)
The following is from ·········@aol.com:
----------------------------------------------------------------------
How long does it take to add 10^7 double precision numbers in Lisp?
Using Compaq Visual Fortran on a 2.8 GHz computer it takes 40-50
milliseconds.
program xsum_double
  ! sum 1e7 doubles
  implicit none
  integer, parameter :: n = 10000000
  real(kind=8) :: xx(n)
  real :: t1,t2,xsum
  call random_seed()
  call random_number(xx)
  call cpu_time(t1)
  xsum = sum(xx)
  call cpu_time(t2)
  print*,1000*(t2-t1),xsum
end program xsum_double
Another simple benchmark -- how long does it take to multiply two
1000*1000 matrices and then compute the sum of the elements of the
resulting matrix?
It takes 1632-1642 milliseconds with CVF.
program xmatmul_time
  implicit none
  integer, parameter :: n = 1000
  real(kind=8) :: xx(n,n),yy(n,n)
  real :: t1,t2,xsum
  call random_seed()
  call random_number(xx)
  call random_number(yy)
  call cpu_time(t1)
  xsum = sum(matmul(xx,yy))
  call cpu_time(t2)
  print*,1000*(t2-t1),xsum
end program xmatmul_time
----------------------------------------------------------------------
Here are my own comments about writing the program.
1. ARRAY-TOTAL-SIZE. Hmmmm.
2. The SUM and MATMUL functions, which are what is actually being
benchmarked in the competing FORTRAN examples, are "intrinsics".
They are not written in FORTRAN, but are hand-tweaked assembler code.
(a) Since Lisp doesn't have these functions built-in, but it is the
suitability of the language that is being compared, the Lisp version
should be written in Lisp. The Lisp version would be Lisp's natural
way of expressing the same result in Lisp (rather than calling out).
Apparently not all FORTRAN implementations have MATMUL.
Although the author used a compiler that does have it,
GNU Fortran doesn't have it, for example. But there are
FORTRAN implementations of MATMUL.
So the speed comparison being made here is between DEC FORTRAN
on some machine versus some Common Lisp compiler on some
machine. Not exactly a "language comparison", but what is?
(b) It might also be a little interesting to see the hand-coded version,
just to illustrate that this can be easily done and called from Lisp,
and also to see if there's really any significant speed difference.
This code would be non-portable across machines, but the competition's
benchmark results are also non-portable.
(c) Maybe we should run the Lisp version against the GNU FORTRAN version
on the same computer. That would be easier to interpret than the
unknown machine and the DEC compiler that we've been given.
(d) I wonder how the commercial Lisp implementations would do on this?
Also how would CLISP do?
(e) Wasn't there something a lot like this posted here in the last year?
I don't have any experience tuning numerical Lisp programs.
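For what it's worth, the first challenge might be sketched in CL roughly as follows (my code, not a tuned answer; the function name and the use of TIME are my own choices, and the numbers will depend entirely on the implementation and how well it honors the declarations):

```lisp
;; Sum 1e7 random doubles, analogous to the Fortran xsum_double.
(defun sum-doubles (n)
  (let ((xx (make-array n :element-type 'double-float)))
    (declare (type (simple-array double-float (*)) xx))
    ;; Fill the array with random doubles in [0,1).
    (dotimes (i n)
      (setf (aref xx i) (random 1.0d0)))
    ;; Time only the summation, as the Fortran version does.
    (let ((sum 0.0d0))
      (declare (type double-float sum))
      (time (dotimes (i n)
              (incf sum (aref xx i))))
      sum)))

(sum-doubles 10000000)
```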
······@news.dtpq.com (Christopher C. Stacy) wrote in message news:<·············@news.dtpq.com>...
> 2. The SUM and MATMUL functions, which are what is actually being
> benchmarked in the competing FORTRAN examples, are "intrinsics".
> They are not written in FORTRAN, but are hand-tweaked assembler code.
Many Fortran programmers, including me, don't care what language their
compiler is written in. What matters is program speed and code
readability.
> (a) Since Lisp doesn't have these functions built-in, but it is the
> suitability of the language that is being compared, the Lisp version
> should be written in Lisp. The Lisp version would be Lisp's natural
> way of expressing the same result in Lisp (rather than calling out).
>
> Apparently not all FORTRAN implementations have MATMUL.
> Although the author used a compiler that does have it,
> GNU Fortran doesn't have it, for example. But there are
> FORTRAN implementations of MATMUL.
The "GNU Fortran" compiler you are referring to is probably g77, a
Fortran 77 compiler. There have been two standards since, Fortran 90
and Fortran 95, and the Fortran 2003 standard will be made official
soon. Array intrinsic functions such as MATMUL are part of Fortran 90
and must be present in all F90/95/2003 compilers.
> So the speed comparison being made here is between DEC FORTRAN
> on some machine versus some Common Lisp compiler on some
> machine. Not exactly a "language comparison", but what is?
The platform I used is hardly exotic -- a Dell PC with a 2.8 GHz
Pentium 4 processor and 512 MB of RAM, running Windows XP Pro.
>
> (b) It might also be a little interesting to see the hand-coded version,
> just to illustrate that this can be easily done and called from Lisp,
> and also to see if there's really any significant speed difference.
> This code would be non-portable across machines, but the competition's
> benchmark results are also non-portable.
The Fortran code I posted is standard Fortran 95 and is portable to
any platform with an F95 compiler.
(message (Hello ··········@aol.com)
(you :wrote :on '(22 Aug 2004 13:57:28 -0700))
(
>> 2. The SUM and MATMUL functions, which are what is actually being
>> benchmarked in the competing FORTRAN examples, are "intrinsics".
>> They are not written in FORTRAN, but are hand-tweaked assembler
>> code.
b> Many Fortran programmers, including me, don't care what language
b> their compiler is written in. What matters is program speed and code
b> readability.
but it looks like that benchmark measured not the speed of ordinary
Fortran operators but some function written in C, which has nothing to
do with Fortran itself; you can bind such a function to any other
language. It's definitely more interesting to test code that was not
specially optimized. For example, a function that finds Pi by summing
some numbers would be a more correct test, I think.
)
(With-best-regards '(Alex Mizrahi) :aka 'killer_storm)
(prin1 "Jane dates only Lisp programmers"))
"Alex Mizrahi" <········@hotmail.com> wrote:
> It's definitely
> more interesting to test code that was not specially optimized. For
> example, a function that finds Pi by summing some numbers would be a
> more correct test, I think.
Using 1E7 double-precision random samples to approximate pi, my Fortran 95
program, using the same compiler and hardware described earlier, takes about
720 ms.
program xpi_sim
  ! compute pi with simulation, using a loop
  ! compare with xpi_sim.f
  implicit none
  integer :: i
  integer, parameter :: n = 10000000
  real (kind=kind(1.0d0)) :: xx,pi
  real :: t1,t2
  call cpu_time(t1)
  pi = 0.0
  call random_seed()
  do i=1,n
    call random_number(xx)
    pi = pi + sqrt(1-xx**2)
  end do
  pi = 4*pi/n
  call cpu_time(t2)
  print*,1000*(t2-t1),pi,n,"with_loop"
end program xpi_sim
sample output:
721.0368 3.14156808225164 10000000 with_loop
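A rough CL counterpart (my translation of the Fortran above, not benchmarked; the same caveats about implementation-dependent timings apply):

```lisp
;; Monte Carlo pi: average sqrt(1 - x^2) over [0,1) and multiply by 4.
(defun approx-pi (n)
  (let ((sum 0.0d0))
    (declare (type double-float sum))
    (dotimes (i n)
      (let ((x (random 1.0d0)))
        (incf sum (sqrt (- 1.0d0 (* x x))))))
    (/ (* 4.0d0 sum) n)))

(time (approx-pi 10000000))
```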
In article <··············@uni-berlin.de>, Alex Mizrahi wrote:
>
> but it looks like that benchmark measured not the speed of ordinary
> Fortran operators but some function written in C, which has nothing to
> do with Fortran itself; you can bind such a function to any other
> language. It's definitely more interesting to test code that was not
> specially optimized. For example, a function that finds Pi by summing
> some numbers would be a more correct test, I think.
>
The counter-argument would be that if all you care about is matrix
multiplication, you should either 1) use fortran, which comes with matrix
operations built in, or 2) spend time writing these operations in Lisp
instead of concentrating on developing your application. If someone
came up with an efficient matrix library for Lisp, the argument would
of course change.
This is why such arguments about language performance are often useless
without discussing the application domain. And comparing languages "in
general" is even more pointless since there is no such thing as a general
program.
--
Eric Daniel
On Sun, Aug 22, 2004 at 11:11:17PM -0000, Eric Daniel wrote:
> If someone came up with an efficient matrix library for Lisp, the
> argument would of course change.
Isn't that what MatLisp is?
http://matlisp.sourceforge.net/
--
;;;; Matthew Danish -- user: mrd domain: cmu.edu
;;;; OpenPGP public key: C24B6010 on keyring.debian.org
In article <·····················@mapcar.org>, Matthew Danish wrote:
> On Sun, Aug 22, 2004 at 11:11:17PM -0000, Eric Daniel wrote:
> > If someone came up with an efficient matrix library for Lisp, the
> > argument would of course change.
>
> Isn't that what MatLisp is?
> http://matlisp.sourceforge.net/
>
Looks like it. Case closed then :-)
--
Eric Daniel
>>>>> "Christopher" == Christopher C Stacy <······@news.dtpq.com> writes:
Christopher> (e) Wasn't there something a lot like this posted here in
Christopher> the last year?
You may be thinking about the open challenge that the notorious
"nobody" put to me sometime earlier this year. That benchmark
was primarily concerned with floating-point performance on some
astronomical algorithms.
The result was that the Lisp community could quickly come up with a
version that beat GCC, although GCC was only half as fast as the
Intel C++ compiler.
So our claim still stands that Lisp compilers are able to come within
spitting distance of C compilers.
------------------------+-----------------------------------------------------
Christian Lynbech | christian ··@ defun #\. dk
------------------------+-----------------------------------------------------
Hit the philistines three times over the head with the Elisp reference manual.
- ·······@hal.com (Michael A. Petonic)
Christian Lynbech wrote:
[snipped]
> So our claim still stands that Lisp compilers are able to come within
> spitting distance of C compilers.
Your phrase above has been sampled for my random tagline :-)
--
Cesar Rabak
Has anyone written a C compiler in Lisp?
Christian Lynbech wrote:
> The result was that the Lisp community could quickly come up with a
> version that beat GCC, although GCC was only half as fast as the
> Intel C++ compiler.
>
Jim Newton <·····@rdrop.com> writes:
> Has anyone written a C compiler in Lisp?
Such a compiler was shipped with Symbolics Lisp Machines, and was used
to compile the X server.
Paolo
--
Why Lisp? http://alu.cliki.net/RtL%20Highlight%20Film
Recommended Common Lisp libraries/tools (Google for info on each):
- ASDF/ASDF-INSTALL: system building/installation
- CL-PPCRE: regular expressions
- UFFI: Foreign Function Interface
Paolo Amoroso wrote:
> Jim Newton <·····@rdrop.com> writes:
>
>
>>Has anyone written a C compiler in Lisp?
>
>
> Such a compiler was shipped with Symbolics Lisp Machines, and was used
> to compile the X server.
>
>
> Paolo
That was Zeta-C, wasn't it?
http://www.cliki.net/Zeta-C
>>>>> On Sun, 22 Aug 2004 21:37:55 +0200, Engelke Eschner ("Engelke") writes:
Engelke> Paolo Amoroso wrote:
>> Jim Newton <·····@rdrop.com> writes:
>>
>>> Has anyone written a C compiler in Lisp?
>> Such a compiler was shipped with Symbolics Lisp Machines, and was
>> used
>> to compile the X server.
>> Paolo
Engelke> That was Zeta-C, wasn't it?
No, that was Symbolics ANSI C.
Zeta-C was from a different company.
Is there a public-domain one that I could look at? I did a quick Google
search but did not find anything interesting.
Christopher C. Stacy wrote:
>>>>>>On Sun, 22 Aug 2004 15:07:09 +0200, Jim Newton ("Jim") writes:
>
> Jim> Has anyone written a C compiler in Lisp?
>
> Yes, several.
On Sun, 22 Aug 2004 22:02:56 +0200, Jim Newton <·····@rdrop.com> wrote:
> Is there a public-domain one that I could look at? I did a quick
> Google search but did not find anything interesting.
<http://groups.google.com/groups?selm=877jy4ty0y.fsf%40nyct.net>
Edi.
--
"Lisp doesn't look any deader than usual to me."
(David Thornley, reply to a question older than most languages)
Real email: (replace (subseq ·········@agharta.de" 5) "edi")
(message (Hello 'Christopher)
(you :wrote :on '(Sat, 21 Aug 2004 06:17:12 GMT))
(
CCS> Here's a numerical challenge submitted by someone over there.
Once I did some benchmarks on adding and multiplying. GCL was able to
generate C code that compiled the loop correctly, and compiled with gcc
it was the fastest (there was a pure C program in the benchmark compiled
with other C/C++ compilers; it looks like the precision differed). CMUCL
produced quite ugly code, but on an Intel P4 with the NetBurst
architecture it behaved quite well, with not much difference from the C
version.
By the way, that Fortran benchmark is bad for processors like the
Pentium 4: memory there is much slower than the processor (I saw figures
like memory reads being about 30 times slower than FPU operations on
that data), so most of the time it will just be reading memory, and it
makes no difference which instructions are run, unless there's memory
overhead or really a lot of instructions.
)
(With-best-regards '(Alex Mizrahi) :aka 'killer_storm)
(prin1 "Jane dates only Lisp programmers"))
······@news.dtpq.com (Christopher C. Stacy) writes:
> Here's an example that is being touted as a programming study
> which provides evidence that "Common Lisp is slow".
>
> http://emr.cs.iit.edu/~reingold/calendars.shtml
Touted where and by whom? Reingold's & Dershowitz's book
makes no such claim, and their account of why they chose
Lisp doesn't mention speed as a consideration and makes
it quite clear that they were going for simplicity and
clarity above all else.
--
Gareth McCaughan
.sig under construc
Gareth McCaughan <················@pobox.com> writes:
> Touted where and by whom? Reingold's & Dershowitz's book makes no
> such claim, and their account of why they chose Lisp doesn't mention
> speed as a consideration and makes it quite clear that they were
> going for simplicity and clarity above all else.
Well, their code contains comments like "this code in principle works,
but might fail on some 32-bit lisps because there are temporary
integer values greater than that, so here's some much more complicated
code that gives correct results regardless". So it seems to me they
are aiming for portability (across versions and generations) of lisps,
above all else.
--
Frode Vatvedt Fjeld
Frode Vatvedt Fjeld <······@cs.uit.no> writes:
> Gareth McCaughan <················@pobox.com> writes:
>
> > Touted where and by whom? Reingold's & Dershowitz's book makes no
> > such claim, and their account of why they chose Lisp doesn't mention
> > speed as a consideration and makes it quite clear that they were
> > going for simplicity and clarity above all else.
>
> Well, their code contains comments like "this code in principle works,
> but might fail on some 32-bit lisps because there are temporary
> integer values greater than that, so here's some much more complicated
> code that gives correct results regardless". So it seems to me they
> are aiming for portability (across versions and generations) of lisps,
> above all else.
Um, but CL has bignums.
Björn
On 20 Aug 2004 12:45:29 +0200, Björn Lindberg <·······@nada.kth.se> wrote:
>> Well, their code contains comments like "this code in principle works,
>> but might fail on some 32-bit lisps because there are temporary
>> integer values greater than that, so here's some much more complicated
>> code that is gives correct results regardless". So it seems to me they
>> are aiming for portability (across versions and generations) of lisps,
>> above all else.
>
> Um, but CL has bignums.
That code was written initially in Emacs Lisp, and then
modified slightly to make it run in CL.
The calendar-related functions of Emacs were written by
the two authors.
P.
·······@nada.kth.se (Björn Lindberg) writes:
>
> Frode Vatvedt Fjeld <······@cs.uit.no> writes:
> > Well, their code contains comments like "this code in principle works,
> > but might fail on some 32-bit lisps because there are temporary
> > integer values greater than that, so here's some much more complicated
> > code that gives correct results regardless". So it seems to me they
> > are aiming for portability (across versions and generations) of lisps,
> > above all else.
>
> Um, but CL has bignums.
>
> Björn
True, but if the code is declared to use FIXNUMs and the speed
optimization setting is high enough with safety low enough, then the
code can still fail miserably, because no bignums will be created.
--
Thomas A. Russ, USC/Information Sciences Institute
···@sevak.isi.edu (Thomas A. Russ) writes:
> ·······@nada.kth.se (Björn Lindberg) writes:
>
>>
>> Frode Vatvedt Fjeld <······@cs.uit.no> writes:
>
>> > Well, their code contains comments like "this code in principle works,
>> > but might fail on some 32-bit lisps because there are temporary
>> > integer values greater than that, so here's some much more complicated
>> > code that gives correct results regardless". So it seems to me they
>> > are aiming for portability (across versions and generations) of lisps,
>> > above all else.
>>
>> Um, but CL has bignums.
>>
>> Björn
>
> True, but if the code is declared to use FIXNUMs and the speed
> optimization setting is high enough with safety low enough, then the
> code can still fail miserably, because no bignums will be created.
But with a Common Lisp the programmer has the option of not
providing such declarations [1]. With a non Common Lisp, the
programmer might only have the option of switching to Common Lisp...
(It has of course already been noted that that code was originally
written for Elisp.)
[1] And we hope that the programmer would have the sense not to
provide such declarations...
---Vassil.
--
Vassil Nikolov <········@poboxes.com>
Hollerith's Law of Docstrings: Everything can be summarized in 72 bytes.
···@sevak.isi.edu (Thomas A. Russ) writes:
> ·······@nada.kth.se (Björn Lindberg) writes:
>
> >
> > Frode Vatvedt Fjeld <······@cs.uit.no> writes:
>
> > > Well, their code contains comments like "this code in principle works,
> > > but might fail on some 32-bit lisps because there are temporary
> > > integer values greater than that, so here's some much more complicated
> > > code that gives correct results regardless". So it seems to me they
> > > are aiming for portability (across versions and generations) of lisps,
> > > above all else.
> >
> > Um, but CL has bignums.
> >
> > Björn
>
> True, but if the code is declared to use FIXNUMs and the speed
> optimization setting is high enough with safety low enough, then the
> code can still fail miserably, because no bignums will be created.
Yes, but the code in question didn't have any declarations. However,
apparently it was originally written in Elisp, which explains why they
made the effort to stick to fixnums. In any case, the authors were
well aware that the calculations might overflow a fixnum, which is why
they did the more complex rewrite.
Björn
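To make the fixnum/bignum point concrete, a toy example (mine, not from the book): with no declarations, CL transparently promotes results that overflow a fixnum to bignums, so the naive code stays exact.

```lisp
;; With no type declarations, overflowing a fixnum just yields a bignum.
(defun double-it (n)
  (+ n n))

;; (double-it most-positive-fixnum) returns an exact integer larger than
;; any fixnum; with FIXNUM declarations under high speed and low safety,
;; the same arithmetic could silently wrap instead.
```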