From: ·····@labs-n.bbn.com
Subject: RE: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <330d8n$s5o@info-server.bbn.com>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
--> In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
--> 
--> Not "necessarily slower", but in my experience it is more common in
--> real-world programming projects to encounter cases where C's
--> memory-management philosophy results in higher throughput and fewer
--> problems for interactivity and real-time response than one which
--> relies on GC.

isn't this a result comparable to that of declaring types?  if I took a
casually written lisp program and converted it to C, I'd have to declare
lots more variable types than i did in lisp.  I'd also have to use lots
of malloc/free calls, which I didn't in lisp.  let us say that the
programs get tested with the exact same test suite...I'd expect the C
version to not exhibit the GC-related slowdowns that might be apparent
in the lisp version because that behavior would be distributed
differently, even though the total would likely be exactly the same
amount of RAM allocated and GC'd.

my currently used Lisp dialects do the typical thing--generational GC,
which waits until a threshold is reached before GC'ing gen 0. a C
program GCs every time you let go of something (free it), and so the
time distribution is different.  the deferred GC is the most painful
thing preventing "real-time" behavior--it kicks in more-or-less
randomly, and has unpredictable duration.

some years ago, when i was porting Macsyma to the Explorer, I fiddled
with cons-areas, wondering if I could do a better job of controlling GC
occurrences myself.  the idea was that I would allocate an area, work in
it, copy a result out of it and immediately free the area, avoiding
having to GC that area, cutting the cost of a GC when it finally did
occur.  this was before generational GC had become available.  Macsyma
is/was horrible about generating garbage.

 -- clint

From: Thomas M. Breuel
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <TMB.94Aug20205223@arolla.idiap.ch>
In article <··········@info-server.bbn.com> ·····@labs-n.bbn.com writes:
|isn't this a result comparable to that of declaring types?  if I took a
|casually written lisp program and converted it to C, I'd have to declare
|lots more variable types than i did in lisp.

The problem is that even if you add the same number (or more)
declarations to your Lisp program, on many Lisp implementations it
would still not run as fast as the corresponding C program.  And
there is no standard way of figuring out why.

For example, in CMU CL, at some point, FLOOR was much slower than
TRUNCATE because FLOOR didn't get inlined.  In Lucid, adding redundant
declarations could slow down your program.  Trying to track down the
sources of such problems is just as hard as trying to track down a
compiler bug or a pointer bug in C (actually, harder than a pointer
bug if you have Purify for C...).

Another problem is that if you want to get memory utilization in
CommonLisp that is as efficient as in C, you often have to change your
program logic radically, in ways that are much less abstract than in
C.  For example, arrays of structures end up having one pointer and
one structure header overhead for each element ("struct { int x,y; }
foo[N];").  Structures having multiple "small" objects do not get
packed by most CommonLisp implementations ("struct { char x,y,z; }
foo[N];").  Structures of fixed-size arrays have a pointer plus an
array header overhead for each element ("struct { float
vect1[3],vect2[3]; } foo[N];").

The only way to program around those limitations is to convert
everything to FORTRAN-style code, where you don't use structures and
rely on monotyped arrays of characters, bytes, and floating point
numbers and use array indexes as pointers.  If you do program
FORTRAN-style, it is usually relatively easy to get good performance
in CommonLisp (there are still some gotchas to watch out for), but
most people choose Lisp in order to have convenient and powerful means
of expressing their algorithms at their disposal, not to be squeezed
into FORTRAN-style programming.

				Thomas.
From: William G. Dubuque
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <WGD.94Aug21181020@martigny.ai.mit.edu>
  From: ·····@labs-n.bbn.com
  Date: 18 Aug 1994 19:36:55 GMT
  ...
  some years ago, when i was porting Macsyma to the Explorer, I fiddled
  with cons-areas, wondering if I could do a better job of controlling GC
  occurrences myself.  the idea was that I would allocate an area, work in
  it, copy a result out of it and immediately free the area, avoiding
  having to GC that area, cutting the cost of a GC when it finally did
  occur.  this was before generational GC had become available.  Macsyma
  is/was horrible about generating garbage.

The problem of "intermediate expression swell" is innate in many
computer algebra calculations and is not necessarily specific to
Macsyma. In fact Macsyma was one of the driving forces behind Moon's
original ephemeral GC implementation for MIT Lispms, as well as many
other features of MacLisp/Lispm-lisp that made their way into CLtL2.
Of course one can optimize using temporary consing areas, but if you
go too far you are almost doing the same thing as you would in
C with malloc/free.