From: ·······@LoyalistC.ON.CA
Subject: Re: (lisp vs c++ performance) Rick Graham, Master of Hackology in C++
Date: 
Message-ID: <se3035ee.002@LoyalistC.ON.CA>

    (major portion deleted)

    I've worked in lisp for 15 years, but I think that when I 
    start my next project I'll give serious consideration to 
    starting in c++ rather than lisp.

    |Bryan M. Kramer, Ph.D.  416-978-7569, fax 416-978-1455
    |Department of Computer Science, University of Toronto      
    |6 King's College Road, Room 265A		
    |Toronto, Ontario, Canada      M5S 1A4                      

  I program in C++ and I'm writing a LISP interpreter into my product
to give it flexibility.  I think LISP is a wonderful method for
adding a robust malleability to the type of software that benefits
from extensibility.  AutoLISP is a prime example of this.  AutoCAD
started out life as a generic CAD program.  The AutoLISP extension
put it on top of the heap.  But could you imagine AutoCAD written in
LISP?  No way.  The only reason I could think of for writing a
program in a language that takes 20 megs of space when another
language will do it in 2 is for the sake of using the language. 

  I will regret having said this here, but I cannot think of a major
process that I would wish to write in LISP.  I don't think that's
where its strength is.  LISP is a tool best used to intelligently
link other tools.  The space it takes up cannot be justified by cheap
memory.  If someone gave me a house with 200 rooms, how would I keep
it clean?

                                          Rick

              Rick Graham, the Binary Workshop
·······@loyalistc.on.ca  Data:(613)476-4898  Fax:(613)476-1516

From: Lawrence G. Mayka
Subject: Re: (lisp vs c++ performance) Rick Graham, Master of Hackology in C++
Date: 
Message-ID: <LGM.94Jul23170511@polaris.ih.att.com>
In article <············@LoyalistC.ON.CA> ·······@LoyalistC.ON.CA writes:

     I will regret having said this here, but I cannot think of a major
   process that I would wish to write in LISP.  I don't think that's
   where its strength is.  LISP is a tool best used to intelligently
   link other tools.  The space it takes up cannot be justified by cheap
   memory.  If someone gave me a house with 200 rooms, how would I keep
   it clean?

Your question begs for the obvious answer, following your own analogy:
the House of Lisp cleans itself automatically, and indeed offers the
option of removing any rooms that you've decided, after settling in,
that you don't want.

Seriously, real-life Common Lisp applications typically require
image-trimming (e.g., via a treeshaker) in order to be competitive in
space with similar applications written in more parsimonious
languages.  We simply must include the image-trimming effort in the
total productivity equation.  I still think we come out way ahead.
--
        Lawrence G. Mayka
        AT&T Bell Laboratories
        ···@ieain.att.com

Standard disclaimer.
From: Kirk Rader
Subject: Re: (lisp vs c++ performance) Rick Graham, Master of Hackology in C++
Date: 
Message-ID: <CtI6Mo.6tG@triple-i.com>
In article <·················@polaris.ih.att.com> ···@polaris.ih.att.com (Lawrence G. Mayka) writes:

[...]

>
>Seriously, real-life Common Lisp applications typically require
>image-trimming (e.g., via a treeshaker) in order to be competitive in
>space with similar applications written in more parsimonious
>languages.  We simply must include the image-trimming effort in the
>total productivity equation.  I still think we come out way ahead.
>--
>        Lawrence G. Mayka
>        AT&T Bell Laboratories
>        ···@ieain.att.com
>
>Standard disclaimer.


I agree that using lisp you come out way ahead in productivity and
without paying too large a cost in executable size and performance if:

1. You are sufficiently aware of the performance and memory
implications of common operations and idioms to know how to design for
a sufficient degree of efficiency up front without having to spend too
much time finding and fixing "performance leaks" after the fact.  This
will be highly dependent on the particular implementation in use,
since in my experience most (commercial or otherwise) lisp
implementations have their own idiosyncratic patterns of which
operations cons excessively and which do not, and which functions are
coded in an optimally efficient manner and which are better avoided in
favor of home-grown lisp or foreign code.

2.  You are working in a problem domain which is well-suited to lisp
in the first place.  Some problem domains are best addressed using the
features of a lisp-like language because they actually make good use
of lisp's semantic bells-and-whistles.  Note that the more "lispish"
features one uses, the less tree-shaking is likely to actually find
substantial amounts of unused code to eliminate but, since it is
already conceded in this case that those features "pay their way" in
the application's executable, this is not an issue.

The considerations cited in 1, together with the quality of modern
C/C++ integrated development and debugging environments, are the
reason I feel that most claims of lisp's "productivity enhancing"
features are overblown, if not simply false.  There are valid reasons
for using lisp for certain kinds of applications.  There are also
valid reasons for avoiding it in others.  When starting a new project,
it is a good idea to spend at least some time considering the
trade-offs involved before making a choice as basic as what language
to use in implementing it.  As someone whose job it has been to make
piles of performance-critical lisp-machine code run on general-purpose
machines in commercial Common Lisp implementations, I have had this
confirmed through bitter experience.  The more that tree-shaking,
declarations, application-specific foreign or simplified versions of
standard functions, etc. are required for acceptable performance and
actually succeed in achieving it, the more evidence it is that lisp
was not really the best choice for that particular application (even
though there are applications for which it is, in fact, the best
choice due to considerations as in 2, above) and the more likely it is
that lisp will make one less, rather than more, productive.

Several people have suggested in this thread that one code one's
mainline application modules in C or C++, and use a small, easily
extensible version of lisp as a "shell" or "macro" language to glue
the pieces together and provide the end-user programmability features
which are one of lisp's greatest assets.  This seems to me to be an
ideal compromise for applications where lisp's performance and
resource requirements are unacceptable to the main application but
where it is still desired to retain at least some of lisp's superior
features.

------------------------------------------------------------
Kirk Rader                                 ····@triple-i.com
From: Ken Anderson
Subject: Re: (lisp vs c++ performance) Rick Graham, Master of Hackology in C++
Date: 
Message-ID: <KANDERSO.94Jul25164855@wheaton.bbn.com>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:

   ...

   I agree that using lisp you come out way ahead in productivity and
   without paying too large a cost in executable size and performance if:

   1. You are sufficiently aware of the performance and memory
   implications of common operations and idioms to know how to design for
   a sufficient degree of efficiency up front without having to spend too
   much time finding and fixing "performance leaks" after the fact.  This
   will be highly dependent on the particular implementation in use,
   since in my experience most (commercial or otherwise) lisp
   implementations have their own idiosyncratic patterns of which
   operations cons excessively and which do not, and which functions are
   coded in an optimally efficient manner and which are better avoided in
   favor of home-grown lisp or foreign code.

I sympathize with this.  I just replaced three calls to FIND with
hand-written loops.  However, my other uses of FIND seem fine in terms
of the amount of time they take.  I find Lisp to be a little like a
shell (just type stuff and things start happening), a little like
Mathematica (you can express a fairly complicated program easily and
let it take care of the details), and a little like C (when you need
performance you need to be precise).  Performance tuning is an expert
activity in any language.  So, I keep thinking there should be an
expert system out there to help us.

   2.  You are working in a problem domain which is well-suited to lisp
   in the first place.  Some problem domains are best addressed using the
   features of a lisp-like language because they actually make good use
   of lisp's semantic bells-and-whistles.  Note that the more "lispish"
   features one uses, the less tree-shaking is likely to actually find
   substantial amounts of unused code to eliminate but, since it is
   already conceded in this case that those features "pay their way" in
   the application's executable, this is not an issue.

   The considerations cited in 1, together with the quality of modern
   C/C++ integrated development and debugging environments, are the
   reason I feel that most claims of lisp's "productivity enhancing"
   features are overblown, if not simply false.  There are valid reasons
   for using lisp for certain kinds of applications.  There are also
   valid reasons for avoiding it in others.  When starting a new project,
   it is a good idea to spend at least some time considering the
   trade-offs involved before making a choice as basic as what language
   to use in implementing it.  As someone whose job it has been to make
   piles of performance-critical lisp-machine code run on general-purpose
   machines in commercial Common Lisp implementations, I have had this
   confirmed through bitter experience.  The more that tree-shaking,
   declarations, application-specific foreign or simplified versions of
   standard functions, etc. are required for acceptable performance and
   actually succeed in achieving it, the more evidence it is that lisp
   was not really the best choice for that particular application (even
   though there are applications for which it is, in fact, the best
   choice due to considerations as in 2, above) and the more likely it is
   that lisp will make one less, rather than more, productive.

It sounds like you've had a lot of experience.  Can you tell us what you
found Lisp not to be appropriate for, and why?

Thanks,
k
--
Ken Anderson 
Internet: ·········@bbn.com
BBN ST               Work Phone: 617-873-3160
10 Moulton St.       Home Phone: 617-643-0157
Mail Stop 6/4a              FAX: 617-873-2794
Cambridge MA 02138
USA
From: Kirk Rader
Subject: Re: (lisp vs c++ performance) Rick Graham, Master of Hackology in C++
Date: 
Message-ID: <CtpM0I.BM7@triple-i.com>
In article <······················@wheaton.bbn.com> ········@wheaton.bbn.com (Ken Anderson) writes:

[...]

>
>It sounds like you've had a lot of experience.  Can you tell us what you
>found Lisp not to be appropriate for, and why?
>
>Thanks,
>k
>--
>Ken Anderson 
>Internet: ·········@bbn.com
>BBN ST               Work Phone: 617-873-3160
>10 Moulton St.       Home Phone: 617-643-0157
>Mail Stop 6/4a              FAX: 617-873-2794
>Cambridge MA 02138
>USA


I have found that applications which require "real-time" performance,
either in the traditional sense used by embedded-systems programmers
or in the related sense implied by the requirements of a highly
interactive application like the paint program on which I currently
work, are almost, if not entirely, impossible to achieve in lisp.  We
have achieved it (to the extent we have, yet) primarily through the
most drastic kinds of optimizations, discussion of which was the basis
of this thread.  The reason is that lisp's freedom to "involuntarily"
allocate memory, and the consequent need to invoke the garbage
collector at what amount to random times, produce a system that does
not satisfy the basic requirement of a real-time system: that any
given operation can be guaranteed to complete within some known length
of time, short enough to keep up with the asynchronous events with
which the system must interact.  Beyond this specific issue, any time
you must
consider memory utilization or computational horse-power constraints,
you must carefully decide whether the "one-size fits all" philosophy
of lisp is really a good match for your application.  A garbage
collector is an excellent and ideally efficient model for
memory-management if your application requires (by its nature, not as
a side-effect of its implementation language) a large number of small
"anonymous" dynamic memory allocations and it can afford the small or
large "hiccups" that result from it.  If, as is actually more common,
your application needs to make few if any allocations "off the heap",
in C jargon, or can easily control the circumstances in which they
occur, e.g. using the C++ constructor/destructor model, then the
garbage-collector's overhead is pure loss.  Whether it is fatally so
is a consequence of considerations like that cited above.

Memory-management is only one area in which lisp is optimized for a
particular kind of problem.  Common Lisp's model of function-calling,
including lexical-closures and method-combination, require an
implementation to typically expend a great deal more effort just
dispatching to and returning from a subroutine than in a language like
C or C++.  If you really need, or at least can make good use of, that
kind of power there is nothing comparable in more "conventional"
languages.  As with memory-management, however, I have found that
C++'s model of overloaded functions is more than sufficient in most
instances (no pun intended), and it is actually rather rare that I
want or need to use CLOS's more elaborate object-oriented features.
If you really need to use a true mix-in style of programming, C++
can't cut it - but how often do you need it to an extent that
justifies the overhead that is imposed on _every_ call of _every_
function by a typical real-world implementation in order to support
it?  Despite all that, my experience and intuition suggest that while
only a minority of applications really benefit from lisp's more
sophisticated features, only a minority really suffer an unacceptable
performance penalty due to them.  The latter is primarily due to the
kernel of truth there is in the "cheap RAM and fast processors"
argument.

But these kinds of performance issues represent only one aspect of the
problems to be faced when using a non-mainstream (whether it is
deservedly so, or not) programming language.  In the application
domain in which I work, computer graphics, there are any number of
platform-vendor-supplied and third-party libraries and tools that we
either can not use at all or can only use in a less than satisfactory
way because they assume C/C++ as the application language and a
conventional suite of software-engineering support tools which
specifically does not include a lisp environment.  I have worked in
enough other fields besides graphics to know that the same is true in
many other application domains, as well.  To quote myself from a
private email message generated as a result of this same thread, I
personally have come to the conclusion that using lisp on a
"C-machine" like a PC or Unix workstation should be regarded as being
as anomalous (but not therefore necessarily inappropriate) as using
anything _but_ lisp would be on a lisp-machine.

------------------------------------------------------------
Kirk Rader                                 ····@triple-i.com
From: Alan Gunderson
Subject: Re: (lisp vs c++ performance) Rick Graham, Master of Hackology in C++
Date: 
Message-ID: <31jtkv$aal@iris.mbvlab.wpafb.af.mil>
In article <··················@qobi.ai>
           ····@CS.Toronto.EDU "Jeffrey Mark Siskind" writes:

> There is no reason why a compiler couldn't do global analysis to determine the
> possible types of all expressions and compile code as efficient as C even
> without any declarations. It is possible to do this without losing safety. It is
> even possible to do it in an incremental fashion within a classic
> read-eval-print loop based development environment. 

The CLiCC (Common Lisp to C Compiler) by Goerigk, Hoffmann, and Knutzen
at Christian-Albrechts-University of Kiel performs global type
analysis as it generates a C program equivalent of a Lisp program.
With only a few days of modifications to make 13,000 lines of Lisp 
code compliant with the Common Lisp subset supported by CLiCC, the
system successfully transformed the code into an 860,000 byte executable.
The CLiCC static libraries rather than the shared libraries were used 
in the linking.  The generated C code was 15,200 lines of code.

Thus, with a system like CLiCC, the Lisp development environment can
be used to develop a program and then it can be converted to C for
deployment. Thus, the "best" of each language's capabilities can be
exploited.

> But rather than working on such a compiler, the Lisp community has three
> classes of people:
> - those who are building yet another byte-code interpreter to add to the
> dozens already available
> - those adding creeping featurism to the language (like MOP, DEFSYSTEM,
> CLIM, ...)
> - those adding kludge upon kludge on top of existing compilers rather than
> building one from the ground up on solid up-to-date principles.

CLiCC was built from the ground up on solid up-to-date principles.  The
CLiCC documentation contains a nice discussion of compiler research and
techniques that influenced the design of CLiCC.  These include work on
the Scheme to C compiler done by DEC CRL and compiler techniques used
in functional languages such as ML. 

Thus, there are people in the Lisp community working on some great
post-development tools to support Lisp programmers.  Tools such as
CLiCC should encourage the Lisp community to internalize some of
Siskind's points and to further develop and extend such tools.

--- AlanG
From: John B. Plevyak
Subject: Re: (lisp vs c++ performance) Rick Graham, Master of Hackology in C++
Date: 
Message-ID: <31k2mo$8ld@vixen.cso.uiuc.edu>
Henry G. Baker (······@netcom.com) wrote:

: This isn't quite true.  There are very good theoretical reasons why
: global analysis in Lisp is much less satisfactory than in other
: languages.  But these stem from the _power_ of Lisp, so one wouldn't
: want to give it up.  The consensus seems to be that on-the-fly
: incremental compilation is probably the best near-term answer, where
: the compilation of (a version of) a function is delayed until we get
: some idea of what types of arguments it should expect.  The SELF
: language/implementation is a good example of this style of
: compilation.

On the contrary, I would say that there are no reasons why global
analysis need be less "satisfactory" for Lisp.   It is true that
the language contains features that are difficult to analyze, but
I would be satisfied with less from an analysis of a program which
uses them.  In most cases (excluding things like eval) static
analysis and compile time optimizations (and possibly profiling
feedback) can do a better job than on-the-fly incremental compilation.
This is because the compiler can globally restructure control flow and
change data representations.

In particular, if the global analysis builds interprocedural control
and data flow graphs, it can clone subtrees and specialize these with
respect to the data they operate on.  In addition, the global data
flow information can be used to specialize the physical layout of
structures/objects and then to replicate and specialize all code
which operates on them.

My paper describing such an analysis and applications will appear
in OOPSLA:

  Precise Concrete Type Inference for Object-Oriented Languages
  http://www-csag.cs.uiuc.edu/papers/ti-oopsla94.ps

This analysis and the optimizations have been implemented for a
language with some of the difficult bits from Lisp: untyped,
first class selectors (functions), continuations, and 
messages (essentially apply).

: >There is also no
: >reason why a compiler couldn't determine at compile time the method to be
: >dispatched by a given generic function call. 

: There are very good theoretical and practical reasons why this is not
: to be.  See the discussion above.  Also, if one is intent on composing
: old compiled code with new compiled code, then there have to be some
: facts about the new compiled code that the old compiled code wasn't
: privy to, and therefore can't take advantage of.

Again, the above analysis computes a safe approximation which is very
accurate.  Combined with cloning of subtrees to specialize for classes
containing instance variables of different types, we have been able to
statically bind the vast majority of call sites (>99% in many cases).

Such analyses are expensive, and require the entire program on which to
operate, but they are not that much more expensive than g++ -O2 :)

Incremental on-the-fly compilation is the best bet for incremental
development, debugging and fast turn around, but when you are ready
to build the final version, good global analysis can enable many
optimizations.

--
John Plevyak (·······@uiuc.edu)  2233 Digital Computer Lab, (217) 244-7116
             Concurrent Systems Architecture Group 
             University of Illinois at Urbana-Champaign
             1304 West Springfield
             Urbana, IL              61801
<A HREF="http://www-csag.cs.uiuc.edu">CSAG Home Page</A>
From: Martin Rodgers
Subject: Re: (lisp vs c++ performance) Rick Graham, Master of Hackology in C++
Date: 
Message-ID: <775833093snz@wildcard.demon.co.uk>
In article <··········@triple-i.com> ····@triple-i.com "Kirk Rader" writes:

> And shouldn't there have been a smiley beside the description of ML as
> being "widely used"? :-)

The statement would certainly amuse (or bemuse) most C/C++
programmers. :-) I can only guess at what a vendor might say.

-- 
Future generations are relying on us
It's a world we've made - Incubus	
We're living on a knife edge, looking for the ground -- Hawkwind
From: Kris Karas
Subject: Re: (lisp vs c++ performance) Rick Graham, Master of Hackology in C++
Date: 
Message-ID: <31lht2$imv@hsdndev.harvard.edu>
Henry G. Baker writes:
>In article <··················@qobi.ai> ····@CS.Toronto.EDU writes:
>>But the community is full of people who try to justify garbage
>>collection as performing better than C's static allocation.
>In many cases GC _does_ perform better than C's static allocation.
>This advantage is over and above the important service of avoiding
>dangling references.  Most C/C++ programmers are quite amazed by this --
>as if finding out about sex for the first time after being kept ignorant
>about it by their parents...

Heh heh.  Good point.  :-)

Full garbage collection takes a long time, but it has zero overhead
when memory is allocated.  Thus lots of little creates don't suffer
any memory-subsystem-invocation penalties.  C's memory allocator
doesn't take a lunchbreak every so often as Lisp's GC does, but it
spends lots of time making small, incremental re-organizings whenever
a small chunk of memory is allocated or deallocated; add to that the
function calling overhead, and the total time spent in C's allocator
can be greater.

And for those lisps that support ephemeral ("incremental" for lack of
a better lay term) garbage collection, frequently-manipulated pieces
of data get placed adjacent to one another in the machine's memory
space, greatly increasing the hit rate on the disk cache (and thus
reducing page thrashing) on the virtual memory system.
-- 
Kris Karas <···@enterprise.bih.harvard.edu> for fun, or @aviion-b... for work.
(setq *disclaimer* "I barely speak for myself, much less anybody else."
      *conformist-numbers* '((AMA-CCS 274) (DoD 1236))
      *bikes* '((RF900RR-94) (NT650-89 :RaceP T) (CB750F-79 :SellP T)))
From: Kirk Rader
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Ctwynz.MIv@triple-i.com>
In article <··········@hsdndev.harvard.edu> ···@enterprise.bih.harvard.edu (Kris Karas) writes:

[...]

>
>Oh, rubbish.  This is true in an apples to oranges comparison;
>specifically, if you allow one language to use machine-specific
>libraries and code but not the other, it's hard to have a fair
>comparison.  I can't think of many C programs that don't begin with
>#include <sys/xxxx.h> where xxxx is some un*x-specific hack.  Device
> control.  Asynchronous interrupts.  Process scheduling.  Low-level file
>system mangling.  These tasks are difficult (e.g. slow) if not
>impossible to deal with when the language you're using doesn't have
>any standard set of libraries you can call upon.  C programs can
>practically be made machine-specific too, in the sense that you can
>code your programs to generate just a line or two of assembler for
>each C subexpression.  (Heck, I remember when C was referred to as a
>high-level assembler.)  "a += b" easily converts to a single
>instruction in many implementations.
>
>So, fighting apples to apples now, and using some machine-specific
>code, let's see how "slow" this lisp program is compared to its C
>counterpart.  The task: copy from one wired array to another (wired
>means that the storage management system has been told to not swap the
>array to disk [virtual memory]).  To make the program slightly smaller
>for the sake of net.bandwidth, a lot of the setup code will be
>simplified (no multi-dimensional arrays, etc), we'll assume the arrays
>are integer multiples of 4 words long and that they're the same size.
>

[...]

>
>Embedded-application lisp code is easy to write, it's very fast if you
>allow yourself some knowledge of the machine upon which it is being
>compiled, and is often available if you allow yourself to use
>platform-specific code.  The above was, of course, the code for a
>Symbolics Ivory processor, something of which I've dealt with a lot:
>I wrote the embedded SCSI driver for the NXP1000, a large and complex
>driver that has its own mini lisp compiler for the microcoded NCR
>SCSI hardware.
>-- 
>Kris Karas <···@enterprise.bih.harvard.edu> for fun, or @aviion-b... for work.
>(setq *disclaimer* "I barely speak for myself, much less anybody else."
>      *conformist-numbers* '((AMA-CCS 274) (DoD 1236))
>      *bikes* '((RF900RR-94) (NT650-89 :RaceP T) (CB750F-79 :SellP T)))


"Rubbish" yourself!  Comparing the use of highly
implementation-specific lisp in a device driver for a machine which
was _designed_ to use lisp _as_ its assembly language to writing
application-level code on general-purpose hardware is your idea of an
"apples to apples" comparison?  Do you really suggest implementing,
for example, a Unix device driver in lisp?  Did you really not
understand my point that a language with a garbage-collector based
memory-management scheme with no (hardware assisted or otherwise)
real-time programming support is, by definition, not terribly useful
for applications which require continuous real-time response?

It seems to me that you actually make my point for me.  C was designed
expressly for system-level hacking or anything else requiring intimate
interaction with the hardware.  That is why C compilers typically do
compile to the degree of tightness to which you refer above.  While it
may be possible, with sufficient effort and expertise, to achieve
nearly the same efficiency in a particular lisp implementation on a
particular platform using a sufficient degree of implementation- and
platform-specific hackery, it is ludicrous to suggest that the typical
application programmer will achieve that degree of efficiency using
lisp for the typical application in the typical implementation on the
typical platform.  Or if they do manage, eventually, to create such
optimal code it will only be after having expended more rather than
less effort to do so than would have been expended using C - precisely
because of the existence of all of C's hardware-level libraries and
optimizations to which you refer.  Proposing as counter-examples of
"embedded programming in lisp" lisp-machine device-drivers is, putting
it mildly, missing the point.

One more time: If you want or need lisp's higher-level features for a
particular application and can afford the performance trade-offs they
entail, lisp is an ideal choice.  If you don't really need lisp's
power for a particular application but still find lisp's performance
acceptable, then there is no real reason not to use it in that case,
either.  Since in the case of an Ivory-based machine the performance
trade-offs almost _always_ favor using lisp, the premise is vacuously
true.  For most applications running on non-lisp-machines there _is_ a
real performance trade-off to using lisp, and so programmers should
weigh the costs and benefits of using a variety of different
implementation languages before choosing one.

------------------------------------------------------------

Kirk Rader
From: Flier
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cu0w0s.1DL@rci.ripco.com>
Kris Karas (···@enterprise.bih.harvard.edu) wrote:
: In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
: >I have found that applications which require "real-time" performance,
: >[...] are almost if not impossible to achieve in lisp.

: Oh, rubbish.  

: So, fighting apples to apples now, and using some machine-specific
: code, let's see how "slow" this lisp program is compared to its C
: counterpart.  The task: copy from one wired array to another (wired
: means that the storage management system has been told to not swap the
: array to disk [virtual memory]).  To make the program slightly smaller
: for the sake of net.bandwidth, a lot of the setup code will be
: simplified (no multi-dimensional arrays, etc), we'll assume the arrays
: are integer multiples of 4 words long and that they're the same size.

: (defun copy-a-to-b (a b)
:   (sys:with-block-registers (1 2)
:     (setf (sys:%block-register 1) (sys:%set-tag (locf (aref a 0))
: 						sys:dtp-physical)
: 	  (sys:%block-register 2) (sys:%set-tag (locf (aref b 0))
: 						sys:dtp-physical))
:     (loop repeat (ash (length a) -2) do
:       (let ((a (sys:%block-read 1 :prefetch t))
: 	    (b (sys:%block-read 1 :prefetch t))
: 	    (c (sys:%block-read 1 :prefetch nil))
: 	    (d (sys:%block-read 1 :prefetch nil)))
: 	(sys:%block-write 2 a)
: 	(sys:%block-write 2 b)
: 	(sys:%block-write 2 c)
: 	(sys:%block-write 2 d)))))

: So, we've used lisp and not assembler.  But tell me, how big is this
: program?  How long does it take to execute?  Does it cons?  Answer:
: It's 16 machine instructions long, 9 of which are within the loop.
: For a 4096 byte array (1024 32-bit words) it takes 128 microseconds to
: execute.  You can embed it within an interrupt handler, if so desired,
: as it does not allocate any static memory; it uses five locations on
: the data stack.

And why, pray tell, would I wish to write this nearly indecipherable
mess of Lisp code instead of 16 lines of perfectly readable assembler?
This does seem like the wrong tool for a simple task.

Greg
From: Paul F. Snively
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <31sjpj$lum@news1.svc.portal.com>
In article <··········@rci.ripco.com>
····@ripco.com (Flier) writes:

> Kris Karas (···@enterprise.bih.harvard.edu) wrote:
> : In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
> : >I have found that applications which require "real-time" performance,
> : >[...] are almost if not impossible to achieve in lisp.

[Response demonstrating a very fast machine-specific array fill
deleted]

> : So, we've used lisp and not assembler.  But tell me, how big is this
> : program?  How long does it take to execute?  Does it cons?  Answer:
> : It's 16 machine instructions long, 9 of which are within the loop.
> : For a 4096 byte array (1024 32-bit words) it takes 128 microseconds to
> : execute.  You can embed it within an interrupt handler, if so desired,
> : as it does not allocate any static memory; it uses five locations on
> : the data stack.
> 
> And why, pray tell, would I wish to write this nearly indecipherable
> mess of Lisp code instead of 16 lines of perfectly readable assembler?
> This does seem like the wrong tool for a simple task.

It's an existence proof that the original assertion--that writing
`real-time' code in Lisp is nearly or truly impossible--is a bald-faced
falsehood.  If you are willing to use Lisp in the fashion that you
_must_ use C (that is, get down and dirty with the hardware, use (and
declare) types that are machine-word-size-and-byte-order specific,
etc.) then there's nothing to prevent you from writing `real-time'
code.

Now, you may find the case of writing pieces of device-driver code for
a Lisp Machine a contrived example.  Since I happen to find that
argument fairly compelling myself, let me just point out that there is
a commercial real-time expert system shell, called G2 if memory serves
me correctly, written in Common Lisp and running on stock hardware. 
It's being used, among other things, to control the Biosphere 2
environment.

> Greg


-----------------------------------------------------------------------
Paul F. Snively          "Just because you're paranoid, it doesn't mean
·····@shell.portal.com    that there's no one out to get you."
From: Kirk Rader
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cu83AG.8Dy@triple-i.com>
In article <··········@news1.svc.portal.com> ·····@shell.portal.com (Paul F. Snively) writes:
>In article <··········@rci.ripco.com>
>····@ripco.com (Flier) writes:
>
>> Kris Karas (···@enterprise.bih.harvard.edu) wrote:
>> : In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>> : >I have found that applications which require "real-time" performance,
>> : >[...] are almost if not impossible to achieve in lisp.
>
>[Response demonstrating a very fast machine-specific array fill
>deleted]
>
>> : So, we've used lisp and not assembler.  But tell me, how big is this
>> : program?  How long does it take to execute?  Does it cons?  Answer:
>> : It's 16 machine instructions long, 9 of which are within the loop.
>> : For a 4096 byte array (1024 32-bit words) it takes 128 microseconds to
>> : execute.  You can embed it within an interrupt handler, if so desired,
>> : as it does not allocate any static memory; it uses five locations on
>> : the data stack.
>> 
>> And why, pray tell, would I wish to write this nearly indecipherable
>> mess of Lisp code instead of 16 lines of perfectly readable assembler?
>> This does seem like the wrong tool for a simple task.
>
>It's an existence proof that the original assertion--that writing
>`real-time' code in Lisp is nearly or truly impossible--is a bald-faced
>falsehood.  If you are willing to use Lisp in the fashion that you
>_must_ use C (that is, get down and dirty with the hardware, use (and
>declare) types that are machine-word-size-and-byte-order specific,
>etc.) then there's nothing to prevent you from writing `real-time'
>code.
>
>Now, you may find the case of writing pieces of device-driver code for
>a Lisp Machine a contrived example.  Since I happen to find that
>argument fairly compelling myself, let me just point out that there is
>a commercial real-time expert system shell, called G2 if memory serves
>me correctly, written in Common Lisp and running on stock hardware. 
>It's being used, among other things, to control the Biosphere 2
>environment.
>
>> Greg
>
>
>-----------------------------------------------------------------------
>Paul F. Snively          "Just because you're paranoid, it doesn't mean
>·····@shell.portal.com    that there's no one out to get you."


The bald-faced falsehood is the assertion that it is easy or even
possible to achieve real-time behavior in any of the popular
commercial Common Lisp implementations available on stock hardware,
especially while retaining the features of lisp that are usually
presented as its advantages - abstraction, ease of use, ease of
debuggability, etc.  I don't find the example of a lisp-machine device
driver being written in lisp "contrived", I merely consider it
irrelevant to the issues being discussed in the thread at the point at
which it was introduced.  By definition, the architecture of a
lisp-machine is going to favor using lisp for most aspects of both
application and system-level code.  It should be equally obvious that
both the hardware and the software architecture of systems designed
primarily to run Unix or other popular OS's themselves written in C
using the standard C libraries will generally favor using C-like
languages, other things being equal.  Where other things are _not_
equal, such as applications which require the greater expressive power
of lisp more than they need optimal performance, lisp is a good
choice.  But suggesting that lisp is just as reasonable a choice as
assembler or C for implementing things like device-drivers on typical
hardware / software configurations is simply ludicrous.

Note also that Unix itself is not particularly well-suited to
real-time applications.  Adding the overhead of supporting the typical
lisp implementation's run-time system (especially its
memory-management mechanisms) to the problems already inherent in Unix
for the type of application under discussion only exacerbates the
problems.

Kirk Rader
From: Tim Bradshaw
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <TFB.94Aug9172856@sorley.cogsci.ed.ac.uk>
* Kirk Rader wrote:
> By definition, the architecture of a
> lisp-machine is going to favor using lisp for most aspects of both
> application and system-level code.  It should be equally obvious that
> both the hardware and the software architecture of systems designed
> primarily to run Unix or other popular OS's themselves written in C
> using the standard C libraries will generally favor using C-like
> languages, other things being equal.  

This turns out not to be true as far as I can tell.  I'm not a
compiler-design (Lisp or otherwise) expert, but Lisp compilers can do
very well on RISC machines, and many of the old `lisp'
architectures turned out to be not so good after all, or rather, they
mesh well with the way people wrote lisp systems in the 70s, but they
don't write them like that any more, they write them better.

--tim
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <CuA8uv.JvJ@cogsci.ed.ac.uk>
In article <················@sorley.cogsci.ed.ac.uk> ···@cogsci.ed.ac.uk (Tim Bradshaw) writes:
>* Kirk Rader wrote:
>> By definition, the architecture of a
>> lisp-machine is going to favor using lisp for most aspects of both
>> application and system-level code.  It should be equally obvious that
>> both the hardware and the software architecture of systems designed
>> primarily to run Unix or other popular OS's themselves written in C
>> using the standard C libraries will generally favor using C-like
>> languages, other things being equal.  
>
>This turns out not to be true as far as I can tell.  I'm not a
>compiler-design (Lisp or otherwise) expert, but Lisp compilers can do
>very well on RISC machines, and many of the old `lisp'
>architectures turned out to be not so good after all, or rather, they
>mesh well with the way people wrote lisp systems in the 70s, but they
>don't write them like that any more, they write them better.

I would agree that Lisp can do reasonably well on RISC machines,
but the point of Lisp machines was not just to make Lisp fast
but also to make it fast and safe at the same time and fast 
without needing lots of declarations.

Recent Lisp implementations (especially CMU CL) have gone a fair
way towards making it easy to have safe, efficient code on RISC
machines, but it may always require a somewhat different way of
thinking.  (Not a bad way, IMHO, but different from LM thinking
nonetheless.)

But what is this about "the way people wrote lisp systems in the 70s"?
What sort of thing do you have in mind?  Lisps written in assembler
that could run in 16K 36-bit words?  (Presumably not.)
From: Kirk Rader
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <CuDKK7.LFM@triple-i.com>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <················@sorley.cogsci.ed.ac.uk> ···@cogsci.ed.ac.uk (Tim Bradshaw) writes:
>>* Kirk Rader wrote:
>>> By definition, the architecture of a
>>> lisp-machine is going to favor using lisp for most aspects of both
>>> application and system-level code.  It should be equally obvious that
>>> both the hardware and the software architecture of systems designed
>>> primarily to run Unix or other popular OS's themselves written in C
>>> using the standard C libraries will generally favor using C-like
>>> languages, other things being equal.  
>>
>>This turns out not to be true as far as I can tell.  I'm not a
>>compiler-design (Lisp or otherwise) expert, but Lisp compilers can do
>>very well on RISC machines, and many of the old `lisp'
>>architectures turned out to be not so good after all, or rather, they
>>mesh well with the way people wrote lisp systems in the 70s, but they
>>don't write them like that any more, they write them better.


I agree that if you focus on line-by-line treatment of individual
translation units, lisp compilers can be made to be quite efficient.
How common it is for real-world implementations to attain that degree
of efficiency is another matter.  In any case, this misses the point I
was making that you must look at the hardware and software
architecture of the system as a whole.  RISC CPU's are only the
starting point in the design of a modern (non-lisp-machine)
work-station.  The system as a whole - its I/O, memory-management,
and other hardware and software substrates - was designed and
optimized with Unix in mind.  Any software system such as Common Lisp
which has its own memory-management, I/O, etc. models that are
sufficiently different from that of Unix to prevent the implementor or
application programmer from simply calling the standard libraries in
the same way that a C program would is by definition not only
re-inventing the wheel from the point of view of the work-station's
designers but also risks incurring (and in typical implementations
does incur) serious performance problems.  Every brand of work-station
with which I am familiar comes with performance metering tools which
can be used to easily verify the kinds of ill-effects to which I
refer.  As a concrete example, it is an enlightening experience to
watch the output of gr_osview on an SGI while a complex lisp
application is running using one of the popular commercial Common Lisp
implementations.  One can easily see where the lisp implementation's
I/O model, memory-management model, lightweight process model, and so
on cause really awful behavior of Irix' built-in I/O,
memory-management, and scheduling mechanisms.

All of the above applies equally to desktop PC's, of course, except
that the OS for which the system was optimized is different.


>
>I would agree that Lisp can do reasonably well on RISC machines,
>but the point of Lisp machines was not just to make Lisp fast
>but also to make it fast and safe at the same time and fast 
>without needing lots of declarations.


And to build the kind of "holistically lisp friendly" environment that
would make the kind of misbehavior to which I refer above impossible.
A lisp machine has no other memory-management mechanism than lisp's.
Ditto for its scheduler.  Ditto for its I/O substrate.  There is no
possibility of conflict or misoptimization.


>
>Recent Lisp implementations (especially CMU CL) have gone a fair
>way towards making it easy to have safe, efficient code on RISC
>machines, but it may always require a somewhat different way of
>thinking.  (Not a bad way, IMHO, but different from LM thinking
>nonetheless.)


I would disagree with the description that writing efficient
applications for typical RISC-based workstations in lisp is "easy",
for all of the reasons I refer to above, regardless of the number of
machine instructions to which a simple expression may compile, or
whatever other micro-level measure of compiler efficiency you wish to
use.  The ultimate performance of the application as a whole will be
determined by much more than just the tightness of the code emitted by
the compiler.


[...]


Kirk Rader
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <CuL3JD.Lv7@cogsci.ed.ac.uk>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>>In article <················@sorley.cogsci.ed.ac.uk> ···@cogsci.ed.ac.uk (Tim Bradshaw) writes:
>>>* Kirk Rader wrote:
>>>> By definition, the architecture of a
>>>> lisp-machine is going to favor using lisp for most aspects of both
>>>> application and system-level code.  It should be equally obvious that
>>>> both the hardware and the software architecture of systems designed
>>>> primarily to run Unix or other popular OS's themselves written in C
>>>> using the standard C libraries will generally favor using C-like
>>>> languages, other things being equal.  

>             Every brand of work-station
>with which I am familiar comes with performance metering tools which
>can be used to easily verify the kinds of ill-effects to which I
>refer.  As a concrete example, it is an enlightening experience to
>watch the output of gr_osview on an SGI while a complex lisp
>application is running using one of the popular commercial Common Lisp
>implementations.  One can easily see where the lisp implementation's
>I/O model, memory-management model, lightweight process model, and so
>on cause really awful behavior of Irix' built-in I/O,
>memory-management, and scheduling mechanisms.

I find this rather strange.  What is the mismatch in I/O models?
And why would Lisp's lightweight processes be a problem?  Is the
OS not expecting to give timer interrupts?  Memory management I
can almost see, but what exactly is going wrong?  Berkeley Unix
tried to take Franz Lisp into account.  Have things moved backwards
since then?

>>I would agree that Lisp can do reasonably well on RISC machines,
>>but the point of Lisp machines was not just to make Lisp fast
>>but also to make it fast and safe at the same time and fast 
>>without needing lots of declarations.
>
>And to build the kind of "holistically lisp friendly" environment that
>would make the kind of misbehavior to which I refer above impossible.
>A lisp machine has no other memory-management mechanism than lisp's.
>Ditto for its scheduler.  Ditto for its I/O substrate.  There is no
>possibility of conflict or misoptimization.

Sure there is.  It might be fine for Lisp A but not for Lisp B.
Besides, I/O and scheduling and much of memory management is
the OS, not the hardware.  The OS on ordinary non-Lisp machines
could change to work better with Lisp.

>>Recent Lisp implementations (especially CMU CL) have gone a fair
>>way towards making it easy to have safe, efficient code on RISC
>>machines, but it may always require a somewhat different way of
>>thinking.  (Not a bad way, IMHO, but different from LM thinking
>>nonetheless.)
>
>I would disagree with the description that writing efficient
>applications for typical RISC-based workstations in lisp is "easy",

I didn't say it was easy.  But it's true that I don't think it's
as hard as some other people seem to.  Moreover, I blame particular
Lisp implementations rather than "Lisp".

>for all of the reasons I refer to above, regardless of the number of
>machine instructions to which a simple expression may compile, or
>whatever other micro-level measure of compiler efficiency you wish to
>use.  The ultimate performance of the application as a whole will be
>determined by much more than just the tightness of the code emitted by
>the compiler.

Well, I (at least) have never made any claim about the number of
machine instructions or indeed any "micro-level measure of compiler
efficiency".  So why are you saying this in response to me?

-- jeff
From: William Paul Vrotney
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <vrotneyCus107.Dy1@netcom.com>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:

>
>   In article <·················@netcom.com> ·······@netcom.com (William Paul Vrotney) writes:
>
>   [...]
>
>   >
>   >Instead of all this complex analysis, which I'm not sure is going anywhere,
>   >lets try some simple stuff for a change.  Lets try this mind experiment. IF
>   >there was a Lisp compiler that compiled as efficiently as C (or even close
>   >to) and IF your boss said that you can program in either Lisp or C. What
>   >would your choice be? Case closed (one way or the other). I hope.
>   >
>   >There are so many more interesting aspects of Lisp that this news group can
>   >be used for.
>   >
>   >-- 
>   >Bill Vrotney - ·······@netcom.com
>
>
>   I agree that this is far from the most interesting topic that could be
>   discussed in this newsgroup.  I also see from the above that you have
>   missed my point entirely.  Oh well.
>
>   Kirk Rader

What is your point, in one sentence?

-- 
Bill Vrotney - ·······@netcom.com
From: Kirk Rader
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cuvvr5.HJ5@triple-i.com>
In article <·················@netcom.com> ·······@netcom.com (William Paul Vrotney) writes:
>In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>
>>
>>   In article <·················@netcom.com> ·······@netcom.com (William Paul Vrotney) writes:
>>

[...]

>
>What is your point, in one sentence?
>
>-- 
>Bill Vrotney - ·······@netcom.com


That different kinds of languages are best suited to different kinds
of tasks, so saying that "language X is better than language Y" is
false or meaningless without specifying better for _what_.

Your previous posting amounted to saying "lisp is better because,
other things being equal, wouldn't you rather use it?"  I am
paraphrasing, but that is what I understood you to be saying.  I have
been explicitly talking all along about cases for which other things
are _not_ equal.  If you really know that the cost of implementing
lisp's more powerful features relative to a C-like language will not
in fact result in unacceptable performance for a given application
then it is true either that lisp is better (if you also intend to use
those features) or at least, as you stated, the choice of language is
simply a matter of taste, or organizational requirements, or whatever.
In my experience, however, there is a significant class of
applications for which the cost of implementing lisp's features has
proven to be too high, such that C-like languages are in fact better
in those cases just as the existence of lisp's more powerful features
make it a better choice in others.

Kirk Rader
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cuy8Jz.2Bs@cogsci.ed.ac.uk>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>In article <·················@netcom.com> ·······@netcom.com (William Paul Vrotney) writes:
>>
>>What is your point, in one sentence?
>>
>That different kinds of languages are best suited to different kinds
>of tasks, so saying that "language X is better than language Y" is
>false or meaningless without specifying better for _what_.

If you stopped there, few, if any, would disagree.  However,
most of your time has been spent on this:

>In my experience, however, there is a significant class of
>applications for which the cost of implementing lisp's features has
>proven to be too high, such that C-like languages are in fact better
>in those cases just as the existence of lisp's more powerful features
>make it a better choice in others.

In fact, few would disagree with that either, modulo some quibbles
about "significant", if it were confined to an observation about
particular existing implementations.  For in fact the costs in some
implementations have proven to be too high in some cases.

But claims about Lisp in general have to consider that implementations
could be very different from the typical ones we see today and that
there have been a number of cases in which it appeared that Lisp would
be less suitable than some other language (even C) but where Lisp
actually performed as well or better than the alternative.

In short, you have to show that a Lisp-family language is necessarily
slower, a notoriously tricky thing to get right, rather than reasoning
from existing or even likely implementations.

-- jd
From: Kirk Rader
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cuzw9q.6Gw@triple-i.com>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:

[...]

>
>If you stopped there, few, if any, would disagree.  However,
>most of your time has been spent on this:
>

[...]

If others, including yourself, have repeatedly excerpted the most
negative-sounding quotes about lisp from my messages and perpetuated
the argument by focusing on an attempt at refuting specific examples
of poor performance, while editing out and ignoring my repeated
statements that I in fact like and use lisp for many applications for
which I consider it better suited than other languages, I can hardly
be blamed for the appearance of my "attacking" lisp that is the
result.

>
>But claims about Lisp in general have to consider that implementations
>could be very different from the typical ones we see today and that
>there have been a number of cases in which it appeared that Lisp would
>be less suitable than some other language (even C) but where Lisp
>actually performed as well or better than the alternative.
>
>In short, you have to show that a Lisp-family language is necessarily
>slower, a notoriously tricky thing to get right, rather than reasoning
>from existing or even likely implementations.
>
>-- jd


I feel that you have this just backwards.  A priori, a richer and more
featureful language will have made performance compromises to achieve
that additional functionality.  A posteriori, the vast majority of
actual implementations of one of the oldest still-current family of
languages have in fact exhibited the kinds of performance compromises
that would be expected.  The burden of proof, therefore, rests on
those who argue that it is merely poor implementation which has made
it necessary to not use lisp for a variety of types of applications.

And before you jump too hard on the "vast majority" waffle in the
above, note that in every case of which I am aware, the lisp
implementations which have avoided the kinds of problems to which I
have referred did so either at the cost of excluding features or of
imposing the requirement that they run only on specialized hardware
optimized to run lisp as its native OS, or both.  While there is
nothing theoretically wrong with such approaches per se, in the one
case (excluding features) it begins to stretch the definition of what
a "lisp" is and makes one wonder what advantage there is left in using
such dialects or techniques for optimizing conventional dialects other
than possibly taste in syntax, and in the other (specialized hardware)
the market has demonstrated that such approaches are difficult to make
commercially viable either for the hardware vendor or for developers
who wish to create and sell software to run on it.

So, until someone comes up with a demonstration of how a
general-purpose lisp with at least nearly the same features as
existing popular implementations can be made to run on off-the-shelf
platforms running widely-used OS's without sacrificing anything in
performance for system-level, real-time, and similar applications, I
will persist in my belief, and continue to espouse, that engineers
should exercise care when choosing a language for a given project to
make sure that the tool is a good match to the problem to be solved.
From: Richard A. O'Keefe
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <33eoms$dll@goanna.cs.rmit.oz.au>
····@triple-i.com (Kirk Rader) writes in reply to Jeff Dalton:
>If others, including yourself, have repeatedly excerpted the most
>negative-sounding quotes about lisp from my messages and perpetuated
>the argument by focusing on an attempt at refuting specific examples
>of poor performance, while editing out and ignoring my repeated
>statements that I in fact like and use lisp for many applications for
>which I consider it better suited than other languages, I can hardly
>be blamed for the appearance of my "attacking" lisp that is the
>result.

I have been following this thread with some interest.
I find that Kirk Rader's *own* postings (not isolated chunks quoted by
Jeff Dalton or anyone else) come across as extremely hostile to Lisp.
I am not saying that he _is_ hostile to Lisp, only that he contrives
to give that impression _in his postings in this thread_ without help.
The "I like and use lisp" bit, given the other things Rader has kept on
saying, sounds a lot like "some of my best friends are <group X>".

Rader keeps on making strong but rather fuzzy claims.  I for one am
extremely puzzled how Common Lisp I/O can be a bad fit to SGI Irix.
To be concrete, let me refer to CLtL1.
    File system interface:
	I have implemented Lisp-like pathnames on top of several operating
	systems for a couple of languages.  It isn't hard.  They fit UNIX ok.
	(open FileName &key :direction :element-type
		:if-exists :if-does-not-exist)
	:probe is like access(), otherwise you get something like a C
	"r", "w", "r+", or "w+" stream (minus a rather nasty bug in the
	C design--hint fseek()).
	:element-type _is_ a problem.  signed-byte and unsigned-byte are
	like C "binary" streams, string-char is like C "text" streams.
	The others are a cost *IF YOU USE THEM*, which I never have.
	(The cost is the size of the library; no *run-time* cost is implied.)
	rename-file is C rename()
	delete-file is UNIX unlink()
	probe-file is almost access() but there's name normalising
	file-write-date comes from stat()
	file-author comes from stat()
	file-position is like fseek()
	file-length comes from stat()
	directory maps onto the <dirent.h> functions

    Input/output
	The primitive operations are
	read-char	like getc()
	read-byte	like getc() (maybe some overhead%)
	write-char	like putc()
	write-byte	like putc() (maybe some overhead%)
	terpri		like putc()
	fresh-line	almost like putc()  It _can_ be implemented quite
			easily *without* slowing down any other operation
	force-output	like fflush()
	read-char-no-hang	tricky; ALSO tricky in C.
	listen		very UNIX-version-dependent (ditto in C)
	peek-char	like getc() + ungetc() only cleaner
	unread-char	like ungetc()
    (% The overhead is that if several byte sizes are supported a check
    may be needed to find out which.)
    There are also
	read-line	like fgetline() if you have it, or gets()
	write-line	like puts()
	write-string	like fwrite()

This is so very close to the C model that it isn't funny.  Any operating
system and library that is bad at *this* has got to be bad at C too.
There is certainly nothing here to confuse an operating system's buffering
policies that would not also be present in C.
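[To make the closeness concrete, here is the canonical stdio loop that a read-char/write-char copy maps onto, sketched from the correspondences listed above rather than taken from any particular Lisp runtime.]

```c
#include <stdio.h>

/* Character-by-character copy using getc()/putc(), the C analogue
   of a read-char/write-char loop over two Lisp streams.
   Returns 0 on success, -1 on a read or write error. */
int copy_stream(FILE *in, FILE *out)
{
    int c;                      /* int, not char: must be able to hold EOF */
    while ((c = getc(in)) != EOF) {
        if (putc(c, out) == EOF)
            return -1;          /* write error */
    }
    return ferror(in) ? -1 : 0; /* distinguish EOF from read error */
}
```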

I have seen "Lisp" (not exactly "Common Lisp") systems in which Lisp
streams _were_ C stdio streams (+ a header for GC) and in which Lisp
character I/O _was_ C stdio.  The Lisp system I liked best (Interlisp-D)
had no C system underneath, but character I/O was built on primitives
implemented almost identically to getc()/putc().

The consequence of this is that when someone tells me that "Lisp's"
I/O model is a bad fit to IRIX, and that it is all "Lisp's" fault,
I need _detail_ before it is possible for me to believe it.  Was the
Lisp system in question using I/O operations _not_ in CLtL1?  Can we
be shown a small Lisp program that exhibits this bad behaviour?  How
would a program coded using stdio (and also, because stdio is so bad,
a program using sfio) perform doing the same task?  How about C++
using C++ iostreams?

>So, until someone comes up with a demonstration of how a
>general-purpose lisp with at least nearly the same features as
>existing popular implementations can be made to run on off-the-shelf
>platforms running widely-used OS's without
>sacrificing anything in performance for system-level, real-time, and
                                         ^^^^^^^^^^^^^^^^^^^^^^^
>similar applications I will persist in my belief, and continue to
>espouse, that engineers should exercise care when choosing a language
>for a given project to make sure that the tool is a good match to the
>problem to be solved.

I'm currently teaching a 3rd-year course on operating systems.
The students have to use C on PCs.  You wouldn't *believe* just how bad
a fit C is to the PC until you try.  (Pascal and Modula-2 would be
pretty good fits, actually.)  I can find nothing in the C standard that
qualifies it as a real-time language.  I have had several painful
experiences of *gross* inefficiency in C, particularly in malloc/free.
(You certainly would _not_ want to use any C library function in real-time
code.)  You not only have to choose a language, you have to choose an
implementation.  Ada was *designed* for real-time systems, but the
*implementation* I mostly use is an interpreter (Ada/Ed).
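[The usual way around malloc/free on a real-time path, in C as in any language, is to reserve all memory up front. A minimal fixed-size-block pool sketch; the sizes here are illustrative, not from any system mentioned in this thread:]

```c
#include <stddef.h>

/* A fixed-size-block pool: every block is reserved at startup, so
   allocate and release are O(1) free-list pointer swaps, with no
   call to malloc/free on the real-time path. */
#define POOL_BLOCKS 64
#define BLOCK_WORDS 32

typedef union block {
    union block *next;          /* free-list link while unused */
    long payload[BLOCK_WORDS];  /* client data while in use */
} block_t;

static block_t pool[POOL_BLOCKS];
static block_t *free_list;

void pool_init(void)
{
    free_list = NULL;
    for (int i = 0; i < POOL_BLOCKS; i++) {  /* thread all blocks */
        pool[i].next = free_list;
        free_list = &pool[i];
    }
}

void *pool_alloc(void)
{
    if (free_list == NULL)
        return NULL;            /* pool exhausted: caller must cope */
    block_t *b = free_list;
    free_list = b->next;
    return b;
}

void pool_release(void *p)
{
    block_t *b = p;             /* block goes back on the free list */
    b->next = free_list;
    free_list = b;
}
```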

-- 
30 million of Australia's 140 million sheep
suffer from some form of baldness.  -- Weekly Times.
From: Kirk Rader
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cv5D47.F65@triple-i.com>
In article <··········@goanna.cs.rmit.oz.au> ··@goanna.cs.rmit.oz.au (Richard A. O'Keefe) writes:

[...]

>
>I have been following this thread with some interest.
>I find that Kirk Rader's *own* postings (not isolated chunks quoted by
>Jeff Dalton or anyone else) come across as extremely hostile to Lisp.
>I am not saying that he _is_ hostile to Lisp, only that he contrives
>to give that impression _in his postings in this thread_ without help.
>The "I like and use lisp" bit, given the other things Rader has kept on
>saying, sounds a lot like "some of my best friends are <group X>".


You are entitled to your opinion, of course.  I believe, on the other
hand, that many people who prefer to use lisp over any other language,
and who, for a variety of reasons, some good and some bad, have found
their opportunities to do so while still being paid for their effort,
have become so defensive and hostile about "lisp vs C" (or any other
particular language) that any discussion which includes observations
from experience in applications for which lisp proved itself not well
suited is perceived as "anti-lisp".  Such discussions are necessary,
however, if lisp is to continue to evolve so as to avoid those very
problems.  For that reason I consider myself to be more "pro-lisp"
than many of those who have responded to my posts with such
antagonism.


>
>Rader keeps on making strong but rather fuzzy claims.  I for one am
>extremely puzzled how Common Lisp I/O can be a bad fit to SGI Irix.
>To be concrete, let me refer to CLtL1.

[...]

I agree with almost all of the excised observations.  However, for the
most part, they do not have any relevance to the problems I observed
and used as an example.  In the few cases where they are relevant, I
can only say that my experience has been different using a number of
different implementations of several dialects of lisp.  As I have said
several times now, the specific problem I reported as an example, and
which has contributed to our decision to do all of our large-object
I/O (multi-megabyte graphic files, for example) using foreign
functions, was the result of a conflict between the buffering
strategies used by the lisp run-time layer and the OS filesystem
layer, resulting in much more memory being allocated than necessary,
with a consequent drop in throughput due to poor cache performance and
unnecessary paging.  Avoiding lisp in that case avoided the problem,
precisely because the vendor-supplied library routines took account of
the filesystem's "intelligence".  What is "fuzzy" about that?

Kirk Rader
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cv9qCJ.8EG@cogsci.ed.ac.uk>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>In article <··········@goanna.cs.rmit.oz.au> ··@goanna.cs.rmit.oz.au (Richard A. O'Keefe) writes:
>
>[...]
>
>>
>>I have been following this thread with some interest.
>>I find that Kirk Rader's *own* postings (not isolated chunks quoted by
>>Jeff Dalton or anyone else) come across as extremely hostile to Lisp.
>>I am not saying that he _is_ hostile to Lisp, only that he contrives
>>to give that impression _in his postings in this thread_ without help.
>>The "I like and use lisp" bit, given the other things Rader has kept on
>>saying, sounds a lot like "some of my best friends are <group X>".
>
>
>You are entitled to your opinion, of course.  I believe, on the other
>hand, that many people who prefer to use lisp over any other language,
>and who, for a variety of reasons, some good and some bad, have found
>their opportunities to do so while still being paid for their effort
>have become so defensive and hostile about "lisp vs C" (or any other
>particular language) that any discussion which includes observations
>from experience in applications for which lisp proved itself not well
>suited is perceived as "anti-lisp".

If this is supposed to include me, it shows how little you know
about me and how little you've understood me.  I use and prefer C
for a number of tasks, and I not only agree with many criticisms
of Lisp, I make them myself.

-- jd
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cv3uK5.BIw@cogsci.ed.ac.uk>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>
>[...]
>
>>
>>If you stopped there, few, if any, would disagree.  However,
>>most of your time has been spent on this:
>>
>
>[...]
>
>If others, including yourself, have repeatedly excerpted the most
>negative-sounding quotes about lisp from my messages and perpetuated
>the argument by focusing on an attempt at refuting specific examples
>of poor performance, while editing out and ignoring my repeated
>statements that I in fact like and use lisp for many applications for
>which I consider it better suited than other languages, I can hardly
>be blamed for the appearance of my "attacking" lisp that is the
>result.

Do you claim that the excerpted parts meant something else in context?

Do you actually agree with most (any?) of what I say about Lisp,
Lisp implementation, etc?

You seem to be suggesting now that there's no real disagreement.

>>But claims about Lisp in general have to consider that implementations
>>could be very different from the typical ones we see today and that
>>there have been a number of cases in which it appeared that Lisp would
>>be less suitable than some other language (even C) but where Lisp
>>actually performed as well or better than the alternative.
>>
>>In short, you have to show that a Lisp-family language is necessarily
>>slower, a notoriously tricky thing to get right, rather than reasoning
>>from existing or even likely implementations.
>
>I feel that you have this just backwards.  A priori, a richer and more
>featureful language will have made performance compromises to achieve
>that additional functionality.  

A priori, it can all be handled at compile-time.  :->

>A posteriori, the vast majority of
>actual implementations of one of the oldest still-current family of
>languages have in fact exhibited the kinds of performance compromises
>that would be expected.  The burden of proof, therefore, rests on
>those who argue that it is merely poor implementation which has made
>it necessary to not use lisp for a variety of types of applications.

You make it a mystery why Lisp is ever faster than C.

Now, the simple fact is that if you want to claim something about
all Lisps, you have to show something about all Lisps.  If the best
you can do is: "things point in my direction, so prove I'm wrong"
then you have not in fact justified your claims.

Reasoning from "the vast majority of actual implementations"
fails if implementations could be different.  And they can be.
Despite Lisp being an old language family, as programming language
families go, the vast majority of implementations followed a
few common patterns.  Such things change.  Most C's have been
compiled, but eventually interpreters came along.  

Your a priori argument lacks all detail.  I've seen lots of 
fallacious "Lisp must be slower" arguments.  Why is yours any
different?

>And before you jump too hard on the "vast majority" waffle in the
>above, note that in every case I am aware of where lisp
>implementations which have avoided the kinds of problems to which I
>have referred, it was either at the cost of excluding features or of
>imposing the requirement that it only run on specialized hardware
>optimized to run lisp as its native OS, or both. 

Since you still haven't said exactly what the problems are, it's
difficult to answer in any detail.

But, to pick just one point, should we care about excluding features?
Surely if there's some feature that's causing excessive problems,
the rational thing is to exclude it.  I'll even give an example.
Common Lisp has multiple values.  There is, so far as I know, a
cost associated with this, although it can be a small one, in
all existing implementations.  Moreover, there's a cost even
when multiple values aren't used.  But most Lisp-family languages
don't have multiple values, and if the cost is excessive, the
solution is to omit that feature or to give it a different
semantics that costs less.

So far as I can tell from what you've said, excluding features
is a perfectly good solution.

> While there is
>nothing theoretically wrong with such approaches per se, in the one
>case (excluding features) it begins to stretch the definition of what
>a "lisp" is 

It depends on what features are in question.

In another article, you argued that a feature (closures) that had
no cost when not used was still unacceptable.  I've encountered a 
lot of arguments against Lisp and Common Lisp over the years, and
yours seems to be one of the most extreme!  I assume, from what
you say here and there about liking and using Lisp, that you don't 
intend this, but that really is how it seems to me.

>So, until someone comes up with a demonstration of how a
>general-purpose lisp with at least nearly the same features as
>existing popular implementations can be made to run on off-the-shelf
>platforms under widely-used OS's without sacrificing anything in
>performance for system-level, real-time, and similar applications, I
>will persist in my belief, and continue to espouse, that engineers
>should exercise care when choosing a language for a given project to
>make sure that the tool is a good match to the problem to be solved.

But of course they should exercise care!  Here again you say a bunch
of strong stuff and then go back to a conclusion that's not controversial.
If you're not attacking Lisp, why do you say all those other things?

-- jd
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <CvF292.HAM@cogsci.ed.ac.uk>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:

>>You seem to be suggesting now that there's no real disagreement.
>
>I came to the conclusion long ago that there is little real
>disagreement.  

Sure, but that conclusion may be wrong.  I agree with many of the things
you say.  For instance, you're right to point out that different
tools are better suited to different tasks, that different languages
are optimized for different kinds of applications, that greater
expressive power has costs, etc, and that an implementation might
use a feature the programmer avoids, thus negating the programmer's
efforts.

I also agree that there are classes of applications for which current
C is better than current Lisps and that it's wrong to offer as advice
"that lisp is well suited to all or nearly all kinds of applications,
that it is really not necessary to worry too much about performance
issues since tools exist for tuning executables, etc."

But in almost every article in which you say things I think are
reasonable you couple them with stronger claims that I find less so.
However, many times when you seem to be making a strong claim it's
not clear exactly what it is.  Perhaps if it were clearer, I might
agree with you.  I don't know.  

For instance, you mention "the kinds of applications for which I feel
that C is better suited than current lisp implementations".  But what
kinds of applications are those?  You wrote:

  It is easy enough to say that if this feature is not useful to your
  application and making use of it imposes too great a performance
  burden then just don't use it.  How many such features must one
  avoid, however, at what cost in effort and care before it becomes
  obvious that using a language for which such issues simply do not
  arise would have been better for that particular application?  
  And how can one control uses of such features that are made in 
  the run-time system unless one is also the lisp implementor?

In general, you seem to be saying that when Lisp is a worse choice
than C it's because Lisp has lower run-time performance, and at a
number of points you've employed the "intuition that more powerful
features impose run-time costs".  Then, the reasoning quoted as a
block above is designed to counter the argument that Lisp should be
judged by the speed obtainable when high-cost features are avoided.

So it looks like you might be saying Lisp loses to C in every case 
in which run-time performance is the dominant factor, even when it's
possible to write a Lisp program that's just as fast as the C program,
because you'd have to artificially avoid the very features that can
give Lisp an advantage over C in the first place.

Of course, you might argue that all you're really saying is that
Lisp loses when it's *difficult* to avoid higher-level features.  But
then I would ask this: 

  How much scope does Lisp have to do better against C than it does
  now?

This is the key question, and it's behind everything I've said in
this exchange.

I've tried to suggest that the scope is fairly large, at least larger
than current experience would suggest.  Current implementations are
misleading, better Lisp-family languages can be devised, and so on.
You never seem to agree with me when I say such things, not even a
qualified agreement.  On the contrary, you typically present 
counterarguments.

For instance, you say I have "not shown how implementations could be
different in a way that mattered to any of the issues that are the
subject of this discussion."  Against the suggestion that better
languages could be devised, you say that when implementors do what's
necessary you "would expect that such a Lisp would probably be as
unsuitable for the kinds of applications for which current Lisps are
well-suited as C is today, for all the same reasons."

In short, it appears that there is a fundamental disagreement
between us and that our aims are directly opposed.

I think it's wrong to suggest that we know right now which
applications are for Lisp and which are for C or that any attempt
to make Lisp better in one way will make it worse in another.
I will therefore mention some things again:

In some cases, we can develop techniques that are strictly better,
rather than offering a different set of tradeoffs (or with the
cost being only some thinking time).

Many of the current limitations of Lisp are historical accidents.
They are not intrinsic to the Lisp family of languages.  Had some
different trends in Lisp development -- trends that existed -- been 
more powerful, our impression of Lisp would now be very different.

When people devise new languages in the Lisp family, they may give 
up some features present in some other Lisps in order to gain other
advantages.  EuLisp and Scheme don't have "eval", EuLisp makes
redefinition an implementation and environment issue rather than part
of the language and brings in multiple-inheritance in a different way,
and so on.  But the resulting languages are still significantly
different from C and they are rightly considered varieties of Lisp.
They won't be "be as unsuitable for the kinds of applications for
which current Lisps are well-suited as C is today".  (Though of
course one variety of Lisp may be more suitable than is another.)

Many of the things people often see as properties of (some language
in the) Lisp (family) are actually properties of implementations.
The same is true of C.  C _could_ be implemented inefficiently in
a variety of ways (e.g. as an interpreter).  And Lisp often _could_
be implemented more efficiently than it is.  Moreover, even when 
many Lisps have the same inefficiencies, this is often due to the 
implementation tradition (common techniques and approaches are
reused, etc) rather than to the language.

Now, what exactly are the costs of higher-level features?
Some costs can be moved to compile-time.  (Not all.  It's not
for nothing that I said "a priori" before and put a ":->" after
"it can all be handled at compile-time.")  Other costs are small.
There are plenty of cases where C is now better than Lisp but
where Lisp might be preferred if the cost was, say, 10% rather
than a factor of 2.

Finally, if Lisp is sometimes faster than C, how do you know your
application isn't one of those cases?  It's not obvious which cases
these are.  People have often been wrong about them.  And how is it
that Lisp ever manages to *be* faster?  Doesn't it give a different
impression than Kirk's "intuition that more powerful features impose
run-time costs"?

A few closing details:

>  It is also true that a language which is lacking in any built-in
>mechanism is likely to be a better fit for applications in which
>performance is best under some different, application-specific
>strategy.

Only if it's not too hard to discover that strategy and implement it
and if Lisp's built-in mechanism imposes excessive costs when you try
to implement the application-specific strategy in Lisp.

>I have never stated that closures or any other feature of lisp were
>"unacceptable". 

I meant this to be clear in context, but I've looked back on it,
and it probably wasn't.  What I objected to was the idea that it
wasn't good enough that closures had no costs when not used.

---

About the burden of proof: I'm not claiming that Lisp is potentially
as good as C in all application areas, or even that the particular
problems Kirk's encountered can be removed.  (I don't know enough
*about* those problems for that.)  In many cases, the only acceptable
proof would be to produce an implementation that actually had sufficient
performance without becoming C in all but name, and that's too much
to expect from a few news articles.

My aims are merely to suggest that it's too soon to say which
application areas can be handled effectively by Lisp and which
cannot and to explain why even direct, practical experience
with existing Lisps can be misleading.  

I am trying to suggest why the scope for Lisp to do better against
C than it does now might be greater than people think, even when
they have fairly good evidence and arguments on their side.

-- jeff
From: Tim Bradshaw
Subject: High-level features can be fast (was Re: C is faster than lisp)
Date: 
Message-ID: <TFB.94Sep1015231@oliphant.cogsci.ed.ac.uk>
Note that this is a small quote from a large article, but this is
something that I just thought of.  It also *isn't* meant to be
specific to lisp, although the example is from common lisp, that being
the language I am most familiar with.

* Jeff Dalton wrote:

> Now, what exactly are the costs of higher-level features?
> Some costs can be moved to compile-time.  (Not all.  It's not
> for nothing that I said "a priori" before and put a ":->" after
> "it can all be handled at compile-time.")  Other costs are small.
> There are plenty of cases where C is now better than Lisp but
> where Lisp might be preferred if the cost was, say, 10% rather
> than a factor of 2.

And sometimes the costs can be negative.  For instance consider LABELS
in Common Lisp.  A good compiler (even a dumb one I expect) ought to
be able to compile very fast calls to the local functions so
established and very fast references to the data they share.  This is
certainly true of CMUCL -- the difference between two simple
self-recursive functions to count to a number seems to be about a
factor of 5.

--tim
From: Kirk Rader
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cvny60.K8J@triple-i.com>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>
>>>You seem to be suggesting now that there's no real disagreement.
>>
>>I came to the conclusion long ago that there is little real
>>disagreement.  
>
>Sure, but that conclusion may be wrong.

[...]

I think that the most significant point of disagreement between
us is about what I have been arguing for and against.

>
>But in almost every article in which you say things I think are
>reasonable you couple them stronger claims that I find less so.
>However, many times when you seem to be making a strong claim it's
>not clear exactly what it is.  Perhaps if it were clearer, I might
>agree with you.  I don't know.  

I do not know how to present my claims any more clearly.  I am also
completely baffled as to why they seem so vague to you.

>
>For instance, you mention "the kinds of applications for which I feel
>that C is better suited than current lisp implementations".  But what
>kinds of applications are those?  You wrote:
>
>  It is easy enough to say that if this feature is not useful to your
>  application and making use of it imposes too great a performance
>  burden then just don't use it.  How many such features must one
>  avoid, however, at what cost in effort and care before it becomes
>  obvious that using a language for which such issues simply do not
>  arise would have been better for that particular application?  
>  And how can one control uses of such features that are made in 
>  the run-time system unless one is also the lisp implementor?

I did indeed write the above, but I did not intend it to be (and do
not see how it could be interpreted as) a description of a kind of
application.  Rather it is one of the _reasons_ that lisp is a worse
choice, when it is a worse choice.  The _kinds_ of applications where
the above-cited consideration typically causes lisp to be a worse
choice are, as I have frequently stated, system-level, real-time, and
other applications with similar requirements for continuous
high-bandwidth interaction with hardware or the user, which can only
be achieved by the kind of application- and hardware-specific
optimization that is C's forte.

>
>In general, you seem to be saying that when Lisp is a worse choice
>than C it's because Lisp has lower run-time performance, and at a
>number of points you've employed the "intuition that more powerful
>features impose run-time costs".  Then, the reasoning quoted as a
>block above is designed to counter the argument that Lisp should be
>judged by the speed obtainable when high-cost features are avoided.
>
>So it looks like you might be saying Lisp loses to C in every case 
>in which run-time performance is the dominant factor, even when its
>possible to write a Lisp program that's just as fast as the C program,
>because you'd have to artificially avoid the very features that can
>give Lisp an advantage over C in the first place.
>
>Of course, you might argue that all you're really saying is that
>Lisp loses when it's *difficult* to avoid higher-level features.  

Exactly.  Or it can be impossible to avoid them in some cases (mainly
the cases I assume you would attribute to poor implementation
strategies.)

>                                                                   But
>then I would ask this: 
>
>  How much scope does Lisp have to do better against C than it does
>  now?

I agree that there is room for vast improvement in both the design and
implementation of most current lisp dialects.

>
>This is the key question, and it's behind everything I've said in
>this exchange.
>
>I've tried to suggest that the scope is fairly large, at least larger
>than current experience would suggest.  Current implementations are
>misleading, better Lisp-family languages can be devised, and so on.
>You never seem to agree with me when I say such things, not even a
>qualified agreement.  On the contrary, you typically present 
>counterarguments.

But I do, and have, agreed.  I disagree that it is terribly relevant
to the topic about which I have been writing.  Again, the arguments to
which yours were a response were presented as reasons why I consider
it wrong to suggest that criticisms of _current_ implementations of
_current_ dialects on such performance grounds are unfounded, in the
context of someone being given what I considered bad advice as to what
criteria to use in choosing a programming language for any given
project.

>For instance, you say I have "not shown how implementations could be
>different in a way that mattered to any of the issues that are the
>subject of this discussion."  Against the suggestion that better
>languages could be devised, you say that when implementors do what's
>necessary you "would expect that such a Lisp would probably be as
>unsuitable for the kinds of applications for which current Lisps are
>well-suited as C is today, for all the same reasons."
>
>In short, there appears that there is a fundamental disagreement
>between us and that our aims are directly opposed.

I said that I thought there was little, not no, real disagreement.
Here is, indeed, a significant point on which we differ.  I am much
more satisfied with the current state of affairs - using a variety of
different tools for the different tasks for which each is well-suited.
I am less concerned with, and less optimistic about, the possibility
of developing some "universal" language which would be well-suited to
all or almost all kinds of applications.

>I think it's wrong to suggest that we know right now which
>applications are for Lisp and which are for C or that any attempt
>to make Lisp better in one way will make it worse in another.

Again, we are just talking about different things.  I disagree with
very little in the specific points you raise below.  But the fact is
that all of these things are issues for research and future
development.  If and when lisp dialects are available which have the
properties you describe, I will be eager to apply them in areas for
which they are well suited.  In the meantime I will continue to be
quite happy using current lisp dialects where they are appropriate, C
or C++ where they are appropriate, Hypercard or Visual Basic where
they are appropriate, awk where it is appropriate, etc.

>I will therefore mention some things again:
>
>In some cases, we can develop techniques that are strictly better,
>rather than offering a different set of tradeoffs (or with the
>cost being only some thinking time).
>
>Many of the current limitations of Lisp are historical accidents.
>They are not intrinsic to the Lisp family of languages.  Had some
>different trends in Lisp development -- trends that existed -- been 
>more powerful, our impression of Lisp would now be very different.
>
>When people devise new languages in the Lisp family, they may give 
>up some features present in some other Lisps in order to gain other
>advantages.  EuLisp and Scheme don't have "eval", EuLisp makes
>redefinition an implementation and environment issue rather than part
>of the language and brings in multiple-inheritance in a different way,
>and so on.  But the resulting languages are still significantly
>different from C and they are rightly considered varieties of Lisp.
>They won't be "be as unsuitable for the kinds of applications for
>which current Lisps are well-suited as C is today".  (Though of
>course one variety of Lisp may be more suitable than is another.)
>
>Many of the things people often see as properties of (some language
>in the) Lisp (family) are actually properties of implementations.
>The same is true of C.  C _could_ be implemented inefficiently in
>a variety of ways (e.g. as an interpreter).  And Lisp often _could_
>be implemented more efficiently than it is.  Moreover, even when 
>many Lisps have the same inefficiencies, this is often due to the 
>implementation tradition (common techniques and approaches are
>reused, etc) rather than to the language.
>
>Now, what exactly are the costs of higher-level features?
>Some costs can be moved to compile-time.  (Not all.  It's not
>for nothing that I said "a priori" before and put a ":->" after
>"it can all be handled at compile-time.")  Other costs are small.
>There are plenty of cases where C is now better than Lisp but
>where Lisp might be preferred if the cost was, say, 10% rather
>than a factor of 2.
>
>Finally, if Lisp is sometimes faster than C, how do you know your
>application isn't one of those cases?  It's not obvious which cases
>these are.  People have often been wrong about them.  And how is it
>that Lisp ever manages to *be* faster?  Doesn't it give a different
>impression than Kirk's "intuition that more powerful features impose
>run-time costs"?

I know from experience.

And as for "how is it that Lisp ever manages to *be* faster?"  Those
more powerful features may be necessary to some required functionality
in the application, such that it would be necessary for the
application programmer to implement them if they were missing from the
language (with the obvious likelihood of doing a worse job than the
more powerful language's implementor would.)  Also, it is often the
case that poor performance of some particular feature is highly
application dependent - memory-management is the most frequent culprit
in my experience - such that the very same feature can either enhance
or degrade performance in different applications.

>A few closing details:
>
>>  It is also true that a language which is lacking in any built-in
>>mechanism is likely to be a better fit for applications in which
>>performance is best under some different, application-specific
>>strategy.
>
>Only if it's not too hard to discover that strategy and implement it
>and if Lisp's built-in mechanism imposes excessive costs when you try
>to implement the application-specific strategy in Lisp.
>
>>I have never stated that closures or any other feature of lisp were
>>"unacceptable". 
>
>I meant this to be clear in context, but I've looked back on it,
>and it probably wasn't.  What I objected to was the idea that it
>wasn't good enough that closures had no costs when not used.
>
>---
>
>About the burden of proof: I'm not claiming that Lisp is potentially
>as good as C in all application areas, or even that the particular
>problems Kirk's encountered can be removed.  (I don't know enough
>*about* those problems for that.)  In many cases, the only acceptable
>proof would be to produce an implementation that actually had sufficient
>performance without becoming C in all but name, and that's too much
>to expect from a few news articles.

But it is just as unreasonable to expect a complete analysis of the
run-time behavior of a variety of large, complex applications in this
context.  Nor do I feel that such analysis is really required, since
even those who have felt it their duty to "defend" lisp against my
"attacks" have rarely disagreed that the kinds of performance problems
to which I have referred actually do occur for some real-world
applications written in real-world lisp implementations.

>My aims are merely to suggest that it's too soon to say which
>application areas can be handled effectively by Lisp and which
>cannot and to explain why even direct, practical experience
>with existing Lisps can be misleading.  

There is an old joke about the difference between Asian and
Western-European attitudes regarding time scales and the scope of
long-range planning:

"A conference was once held at which representatives of various
governments were asked to present their views on the effect of the
French Revolution on their countries.  The US representative delivered
a speech on the close ties between the two revolutionary movements,
and the influence on both the American Revolutionary War and the early
development of the US government by French people like the Marquis de
Lafayette.  The British representative presented a White Paper on the
profound impact that the Napoleonic Wars had on the history and
development of the UK as a maritime and colonial power.  The Chinese
representative refused even to bother getting up and going to the
podium, just saying 'It's too soon to tell.'"

I do not dispute the importance and the likely positive practical
benefits of continuing to do research and development in language
design and implementation.  However, I cannot afford to take the
"mature" attitude that "it's too soon to tell" when I am required to
choose or recommend a programming language for a project that must be
developed _today_.

>I am trying to suggest why the scope for Lisp to do better against
>C than it does now might be greater than people think, even when
>they have fairly good evidence and arguments on their side.
>
>-- jeff

In the mean time, until and unless the kind of research to which you
refer succeeds in producing the kinds of dialects and implementations
to which you refer, I will continue to advocate and practice using
care when choosing or recommending what language to use for any given
project.

Kirk Rader
From: Mike Haertel
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <MIKE.94Aug26105212@pdx399.intel.com>
In article <·····················@sunset.huji.ac.il> ·······@sunset.huji.ac.il (Harvey J. Stein) writes:
>This just isn't true.  For example, C++ is richer & more
>feature-filled than C, but it contains C, so anything that can be done
>in C can be done in C++ with exactly the same performance (by using
>exactly the same code, more or less). [...]

This is true only because the designers of C++ were careful not
to add features that would slow things down even when not used.
In general you have to be very careful when adding features.
--
Mike Haertel <····@ichips.intel.com>
Not speaking for Intel.
From: Ken Anderson
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <KANDERSO.94Aug26151600@wheaton.bbn.com>
In article <··················@pdx399.intel.com> ····@ichips.intel.com (Mike Haertel) writes:

   In article <·····················@sunset.huji.ac.il> ·······@sunset.huji.ac.il (Harvey J. Stein) writes:
   >This just isn't true.  For example, C++ is richer & more
   >feature-filled than C, but it contains C, so anything that can be done
   >in C can be done in C++ with exactly the same performance (by using
   >exactly the same code, more or less). [...]

   This is true only because the designers of C++ were careful not
   to add features that would slow things down even when not used.
   In general you have to be very careful when adding features.

You have to be a bit careful here.  I have a benchmark that shows that
coding in a C++ style, using classes and nonvirtual methods has about a 30%
overhead compared to a direct C style.  This is for cases where you would
expect no overhead at all.  I don't know why.

k
--
Ken Anderson 
Internet: ·········@bbn.com
BBN ST               Work Phone: 617-873-3160
10 Moulton St.       Home Phone: 617-643-0157
Mail Stop 6/4a              FAX: 617-873-2794
Cambridge MA 02138
USA
From: Harvey J. Stein
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <HJSTEIN.94Aug31213301@sunset.huji.ac.il>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader)
writes:

   In article <·····················@sunset.huji.ac.il>
   ·······@sunset.huji.ac.il (Harvey J. Stein) writes:
   >In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
   >
   >   I feel that you have this just backwards.  A priori, a richer and more
   >   featureful language will have made performance compromises to achieve
   >   that additional functionality.  A posteriori, the vast majority of
   >
   >This just isn't true.  For example, C++ is richer & more
   >feature-filled than C, but it contains C, so anything that can be done
   >in C can be done in C++ with exactly the same performance (by using
   >exactly the same code, more or less).  It might be argued that adding
   >features must a priori slow down compilation, but you can't argue that
   >adding features a priori slows down execution.  And, the effects of
   >adding features on compilation time could very well be negligible for
   >compiling the subset language.

   Evidently I wasn't explicit enough.  As you say yourself, C++'s
   semantics are for the most part identical to those of C (that was a
   critical design goal of the language.)  As such, those features it
   adds are mainly a matter of syntactic sugar so could hardly be
   expected to affect run-time performance.  In those few cases where the
   run-time semantics of C++ actually are different from C,
   exception-handling leaps to mind, current implementations are
   typically severely lacking and often have performance problems
   (although I suspect that has more to do with the youth of the language
   than any inherent difficulty in implementing those particular
   features.)

Why are you arguing with me?  First you say that a "more featureful
(sic)" language must be "a priori" more inefficient.  I gave a
counter-example, and now you say that some features need not impact
efficiency.  So, which is it going to be?

   Lisp, on the other hand, has a much richer run-time semantics than
   either C or C++ which cannot be interpreted as mere syntactic sugar.
   I do not believe it possible to obtain those rich run-time semantics
   without paying a cost for them in executable size or speed.

"I do not believe" is not good enough.  Since, as you now admit,
additional features do not "a priori" imply overall language
inefficiency, you must, to prove that lisp is more inefficient than C
(the original conjecture), take particular features of lisp, and prove
that they must, regardless of implementation, force lisp to be
more inefficient than C, even when these features aren't used.  Good
luck.

   >Furthermore, some features could enhance performance.  For example,
   >suppose one were to add some set of data structures to a language, and
   >implement them extremely carefully with optimal performance.  Further
   >suppose that it would be difficult to code an efficient implementation
   >of these data structures in the original language.  Then, you would
   >have enhanced the language, and made it more efficient in the cases
   >that these data structures are used in the optimal implementation.

   Your argument is circular.  Of course a more powerful language will be
   better for applications which make use of those extra features in an
   optimal way.  I have been explicitly talking about cases where the
   extra features of the more powerful language do not pay for themselves
   either because they are not well implemented or are not necessary to
   the particular application.

Well, then, what happened to the performance hit that these features
were supposed to cause?  What's your argument now - that the misuse of
powerful features can cause inefficiencies?  Sure - in ALL languages!
And, no, you haven't been explicitly talking about cases where...  You
said several paragraphs back that high level features "a priori" make
a language inefficient, WITHOUT EVEN QUALIFYING THAT THEY MUST BE USED,
let alone be improperly implemented or improperly used.

   >Moreover, one would hope that all language feature are implemented
   >carefully and as efficiently as possible, and thus one would expect
   >that using these features would produce faster code than implementing
   >them oneself.  So, using these features in the cases where the
   >alternative is to code them oneself probably leads to higher
   >performance, not lower performance.

   One would hope so, but sadly that is not always true.  Besides you
   have still ignored the case where the additional functionality is just
   not needed or imposes too great a run-time burden on a particular
   application.

Here we go again...  Once more.  Yes.  It can be the case that using a
particular feature in a particular application is inefficient.  This
does not mean that lisp is inefficient, just that the programmer is an
idiot.

   >Of course, adding features often leads to a bloated, ugly, inelegant
   >language.  And, using extremely general features for very specific
   >tasks can easily lead to poor performance.  However, the existence of
   >features has nothing to do with performance (except, of course,
   >possibly for compilation performance).  If they're slow, just don't
   >use them.  Or, to paraphrase a character from "A Fish Called Wanda":
   >
   >   *Having* features doesn't make a program slow, *using* them makes
   >   it slow.
   >
   >So, you can't argue that LISP must be slower than C just because it
   >has more features.
   >

   This last point is just untrue.  Some features impose run-time costs
   whether they are in use by a particular application or not.  Jeff
   Dalton, whom you would presumably consider to be on your "side" in
   this, explicitly referred to such in one of his recent posts.  And in
   addition, the presence of features implies that you and the language's
   implementors must exercise care to not use them when they are not
   appropriate.  It is this last which can frequently result in it being
   more trouble than it is worth to use a more powerful language than is
required for a particular application.


Your remark is just untrue.  If you paused long enough to read what I
wrote, you'd see that I didn't say that all features of lisp are
optimally implemented in all implementations, and I never asserted
that all the features of all lisp implementations never cause overall
performance hits (multiple return values comes to mind).  This is all
you're refuting.  What I said, which I'll restate on the off chance
that you might actually read it, is that powerful features do not
imply inefficiency.  And I said it in response to your comment that
they do.

Now, if you'll excuse me, I'm not going to reply to this thread again
because it's gotten too inane.

Enjoy yourself,

--
Harvey J. Stein
Berger Financial Research
·······@math.huji.ac.il
From: Kirk Rader
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cvo01H.KLI@triple-i.com>
In article <·····················@sunset.huji.ac.il> ·······@sunset.huji.ac.il (Harvey J. Stein) writes:
>Why are you arguing with me?  First you say that a "more featureful
>(sic)" language must be "a priori" more inefficient.  I gave a
>counter-example, and now you say that some features need not impact
>efficiency.  So, which is it going to be?

I am arguing with you because I believe you to be in error in your
criticism of my previous posting.  The more powerful features of C++
which you posed as a counter-example do not impose a greater run-time
burden because they are just syntactic conveniences that by the design
of the language have no effect on the run-time behavior of the
language.

>"I do not believe" is not good enough.  Since, as you now admit,
>additional features do not "a priori" imply overall language
>inefficiency, you must, to prove that lisp is more inefficient than C
>(the original conjecture), take particular features of lisp, and prove
>that they must, regardless of implementation, force lisp to be
>more inefficient than C, even when these features aren't used.  Good
>luck.

I admit that purely syntactic features do not impose a run time
overhead.  I have no intention of starting this whole interminable
thread over again by repeating the many examples of lisp's features
imposing an unacceptable overhead on certain kinds of applications.  I
repeat, the burden of proof is on those who claim that it is possible
to retain all or even most of current lisp implementations' features
while eliminating all of the run-time performance problems that they
entail for real-time, system-level, or highly interactive
applications.

[...]

>Well, then, what happened to the performance hit that these features
>were supposed to cause?  What's your argument now - that the misuse of
>powerful features can cause inefficiencies?  Sure - in ALL languages!
>And, no, you haven't been explicitly talking about cases where...  You
>said several paragraphs back that high level features "a priori" make
>a language inefficient, WITHOUT EVEN QUALIFYING THAT THEY MUST BE USED,
>let alone be improperly implemented or improperly used.

Experience has shown that some applications perform better written in
lisp than in C.  This does not imply that lisp's more powerful
features do not result in higher run-time performance overhead
relative to C, just that there are applications for which the
advantages those features confer more than outweigh any cost they
impose.

[...]

>
>Here we go again...  Once more.  Yes.  It can be the case that using a
>particular feature in a particular application is inefficient.  This
>does not mean that lisp is inefficient, just that the programmer is an
>idiot.

Or that the design of the language includes requirements which it is
impossible to "compile away" even if the application does not make
explicit use of them.  Common Lisp's argument-list handling and its
multiple return values are frequently cited examples (including by
yourself, below.)

[...]

>Your remark is just untrue.  If you paused long enough to read what I
>wrote, you'd see that I didn't say that all features of lisp are
>optimally implemented in all implementations, and I never asserted
>that all the features of all lisp implementations never cause overall
>performance hits (multiple return values comes to mind).  This is all
>you're refuting.  What I said, which I'll restate on the off chance
>that you might actually read it, is that powerful features do not
>imply inefficiency.  And I said it in response to your comment that
>they do.

I will continue to believe that more powerful features do in general
imply more overhead, until someone succeeds in showing how such
overhead could always be avoided, which you have not.  What you said
in the preceding paragraph is that you can reduce run-time overhead by
eliminating features, which I have never disputed.  If you reduce
lisp's features to the point where it has no additional run-time
overhead, what makes you think it will have retained any additional
features (other than purely syntactic ones such as those which C++ has
over C?)

>
>Now, if you'll excuse me, I'm not going to reply to this thread again
>because it's gotten too inane.
>
>Enjoy yourself,
>
>--
>Harvey J. Stein
>Berger Financial Research
>·······@math.huji.ac.il

Same to you.

Kirk Rader
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cw367s.8I0@cogsci.ed.ac.uk>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:

>>There are several problems with this point, of which I will mention
>>two:
>>
>>  * We don't always know what can be compiled away, and what's
>>    actually compiled away changes as new methods are developed.
>>    (BTW, multiple-value overheads-when-unused could be compiled away 
>>    by a global analysis.  Unfortunately, CL has some properties
>>    that make that difficult.)
>
>This is not a "problem" with the point I was making, since I have been
>making a point about the state of the art _today_.

What you wrote was:

  Or that the design of the language includes requirements which it is
  impossible to "compile away" even if the application does not make
  explicit use of them.

"Impossible", not "currently beyond the state of the art".

>>  * Even when something isn't compiled away, the cost may be trivial.
>>    Since you (Kirk) don't say what the costs are, it's hard to say
>>    whether it makes sense to care.
>
>Anyone who has spent time wringing a few processor cycles out of a
>real-time application should be able to understand why even trivial
>costs of high-level features can often be of great concern.

Exactly, *can* be.  I want to know how great the cost is so I
can tell when it will matter for myself.

>>And I will continue to believe that what matters is how great
>>and exactly what the overheads are.  That there are bound to be
>>some overheads somewhere is not a point worth making.
>
>Then why have you considered it so important to dispute its truth?

Because it hasn't been shown.

>>For many applications it's already the case.  Or do you want to
>>say Lisp is always slower than C?

>I have repeatedly said that I do not believe that lisp is always
>slower than C.  I just think that the opposite is at least as often
>the case.

You wrote:

  If you reduce lisp's features to the point where it has no
  additional run-time overhead, what makes you think it will have
  retained any additional features (other than purely syntactic ones
  such as those which C++ has over C?)

Lisp can be as fast as C for some things despite retaining
features.  What makes you think it will have to give up the
features in other areas?  (I assumed, BTW, that you *agreed*
Lisp wasn't always slower.)

-- jeff
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <CvGM8E.Hsy@cogsci.ed.ac.uk>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>In article <·····················@sunset.huji.ac.il> ·······@sunset.huji.ac.il (Harvey J. Stein) writes:

>Lisp, on the other hand, has a much richer run-time semantics than
>either C or C++ which cannot be interpreted as mere syntactic sugar.
>I do not believe it possible to obtain those rich run-time semantics
>without paying a cost for them in executable size or speed.

Which features and how much cost?  1%?  10%?  Small integer factors?

Of course, "rich run-time semantics" is aimed at taking the claim
closer to tautological, by suggesting that things handled at compile
time aren't part of the "richer" semantics.  But in fact the semantics
of Lisp allow a range of implementations in which costs can often be
moved from run-time to compile-time (if they'd even occur at run-time
in the first place.)

It's pointless to continue at this abstract level.  All we get then
is that it looks like there will be unavoidable costs.  But are they
really unavoidable, and how much can they be reduced?  How central
are the features to Lisp?  How much trouble will it be to avoid
them, if that's what's needed?

>>Furthermore, some features could enhance performance.  For example,
>>suppose one were to add some set of data structures to a language, and
>>implement them extremely carefully with optimal performance.  Further
>>suppose that it would be difficult to code an efficient implementation
>>of these data structures in the original language.  Then, you would
>>have enhanced the language, and made it more efficient in the cases
>>that these data structures are used in the optimal implementation.
>
>Your argument is circular.  Of course a more powerful language will be
>better for applications which make use of those extra features in an
>optimal way.  I have been explicitly talking about cases where the
>extra features of the more powerful language do not pay for themselves
>either because they are not well implemented or are not necessary to
>the particular application.

He said optimal implementation, not applications which use the
features in an optimal way.

In any case, he has a perfectly valid point.  Imagine an application
A that could make good use of a data structure D but where the people
writing C programs for A don't use D because it's too hard to implement
D efficiently in C.  Then along comes HJS with an extended C that
includes D and implements D efficiently.

Now for the cases Kirk is "explicitly talking about".  If the
extra features are not well implemented, perhaps they could be
well implemented.  Then we have an implementation problem, not
a language problem.  If the extra features are not necessary,
then perhaps they won't be used or will be used only in parts
of the application that are not performance-critical.

Without knowing what the applications, features and costs are, we
can't say whether it has to be a problem that a language has those
features.

>>Moreover, one would hope that all language feature are implemented
>>carefully and as efficiently as possible, and thus one would expect
>>that using these features would produce faster code than implementing
>>them oneself.  So, using these features in the cases where the
>>alternative is to code them oneself probably leads to higher
>>performance, not lower performance.
>
>One would hope so, but sadly that is not always true.  

Of course it's not *always* true!

>>   *Having* features doesn't make a program slow, *using* them makes
>>   it slow.
>>
>>So, you can't argue that LISP must be slower than C just because it
>>has more features.
>
>This last point is just untrue.  Some features impose run-time costs
>whether they are in use by a particular application or not. 

Which features?  How much cost?

>Jeff Dalton, whom you would presumably consider to be on your "side" in
>this, explicitly referred to such in one of his recent posts.

Multiple-values, perhaps?  The cost is fairly small, and most
Lisp-family languages don't have multiple-values.  I therefore
conclude that we can remove that feature from the language without
ceasing to be Lisp or losing important Lispish advantages.  It may
also be possible to change the semantics to reduce the costs further
while not giving up benefits anyone cares about.

>  And in
>addition, the presence of features implies that you and the language's
>implementors must exercise care to not use them when they are not
>appropriate.  It is this last which can frequently result in it being
>more trouble than it is worth to use a more powerful language than is
>required for a particular application.

Well, there is a significant Lisp problem in this area that we might
agree about, namely that it's often not clear what will be efficient.
If it were clearer, and if implementations were more consistent (with
each other), then it would be easier for "good practice" to develop
and be taught, which would increase the range of applications for
which Lisp was not too much trouble.

-- jeff
From: J W Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <CusLs7.JyE@festival.ed.ac.uk>
····@triple-i.com (Kirk Rader) writes:

>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>>In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>>>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>>>>In article <················@sorley.cogsci.ed.ac.uk> ···@cogsci.ed.ac.uk (Tim Bradshaw) writes:

>[...]
>>
>>I find this rather strange.  What is the mismatch in I/O models?
>>And why would Lisp's lightweight processes be a problem?  Is the
>>OS not expecting to give timer interrupts?  Memory management I
>>can almost see, but what exactly is going wrong?  Berkeley Unix
>>tried to take Franz Lisp into account.  Have things moved backwards
>>since then?

>As one concrete example of the kind of thing to which I referred, SGI's
>filesystem substrate maintains its recently-deallocated file cache
>buffers in a "semi-free" state on the theory that they will quickly be
>reallocated for the same purpose.  Watching the output of gr_osview as
>an I/O intensive lisp application executes you can easily see conflict
>between the different buffering and memory-management policies being
>used by the OS's filesystem and virtual-memory management mechanisms
>and by the I/O and memory-management mechanisms of at least two
>commercial Common Lisp implementations and one shareware Scheme
>implementation with which I am familiar.

What exactly is the problem?  I don't have an SGI system, so I
can't watch the output of gr_osview and easily see what's happening.
Is there any reason to suppose it's an inherent Lisp problem rather
than a poorly tuned implementation?

>[...]

>>Sure there is.  It might be fine for Lisp A but not for Lisp B.
>>Besides, I/O and scheduling and much of memory management is
>>the OS, not the hardware.  The OS on ordinary non-Lisp machines
>>could change to work better with Lisp.

>All these arguments about "lisp A vs lisp B" are red-herrings.  

Not when someone says a Lisp machine is bound to be fine for Lisp.

>On a given lisp-machine you could only be described as perverse for using
>any other dialect than one for which the machine was designed. 

Why is that any more perverse than using gcc rather than the compiler
from the hardware manufacturer?  Who says a Lisp machine has to be for
only one Lisp and an ordinary machine can be for any C and indeed a
whole range of languages?  Even existing Lisp machines weren't
that restricted!

>I understand your real argument to be that it is in principle possible
>to design lisp-friendly OS's and Unix- (or other particular OS-)
>friendly lisps.  That is undeniably true, but market forces have yet
>to produce viable examples of such implementations, so far as I can
>tell.

Strictly speaking, market forces don't produce implementations?
Surely they select among them instead.  Moreover, they don't have to
respect technical merit or performance or anything else.

Now, it was fairly easy on VAXes to find cases where Franz Lisp was
faster than C.  Moreover, for some time after Common Lisps appeared,
Franz fit better with C and Unix than they did.  Some CLs have passed
Franz in some ways, but then development of Franz stopped years ago.
Now we're starting to see Lisps that compile to C and pass arguments
on the C stack (which KCL did to some extent in the mid 80s), that are
packaged as shared libraries, that can be linked into C programs on a
more or less equal basis, etc.  There are also roles for Lisp that
don't involve competing with C but complementing C instead.  What we
now think of as a typical Lisp didn't have to be typical and may
not be typical in the future.  Whether market forces will take any
notice is a different question.

>>I didn't say it was easy.  But it's true that I don't think it's
>>as hard as some other people seem to.  Moreover, I blame particular
>>Lisp implementations rather than "Lisp".

>"Lisp", as opposed to particular lisp implementations, can only be
>understood to refer to the concept of any member of a particular
>family of programming languages. 

Sure, but it's not confined to already existing members.

>    The common usage of "lisp" implies a
>language with certain features that make it difficult to implement in
>a way that won't conflict with a Unix workstation's or PC's model of
>the universe. 

Well, so you say.  (Actually, there's not a very good fit between
C and some PC operating systems.)  The standard machine for Lisp 
used to be the PDP-10.  I suspect that Lisp machines have distorted
perceptions of how well Lisp fits with "mainstream" machines.
There are often actual problems with linkers, memory management,
etc, but I don't think there's a fundamental mismatch with the
hardware or even, in most cases, with the software.

>    The richer and more featureful the lisp dialect, the
>more difficult it has proven to be.

Common Lisp didn't have to be implemented the way it typically
was.

>>Well, I (at least) have never made any claim about the number of
>>machine instructions or indeed any "micro-level measure of compiler
>>efficiency".  So why are you saying this in response to me?

>Because your earlier contribution to this thread came in the context
>of and both quoted and added various statements about implementation
>details and compiler efficiency that simply ignored the more general
>issues that cause them to only address one small aspect of the whole
>problem. 

I still don't know what from my messages you have in mind.

> My central point all along has been that the choice of which
>language to use for a particular project is a complex one due to the
>many trade-offs entailed no matter which language is used.  

But I agree with that.

>  In the
>absence of any contextual information about the nature of the
>platform, the particular language implementations, and the
>requirements of the application both of the statements "Lisp is better
>than C" and "C is better than lisp" are either meaningless or false,
>depending on one's semantic theory.

Sure, but I don't say either of those things.

-- jd
From: Kirk Rader
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cuvy9o.I13@triple-i.com>
In article <··········@festival.ed.ac.uk> ····@festival.ed.ac.uk (J W Dalton) writes:

>····@triple-i.com (Kirk Rader) writes:
>
>>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>>>In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>>>>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:

[...]

>
>>As one concrete example of the kind of thing to which I referred, SGI's
>>filesystem substrate maintains its recently-deallocated file cache
>>buffers in a "semi-free" state on the theory that they will quickly be
>>reallocated for the same purpose.  Watching the output of gr_osview as
>>an I/O intensive lisp application executes you can easily see conflict
>>between the different buffering and memory-management policies being
>>used by the OS's filesystem and virtual-memory management mechanisms
>>and by the I/O and memory-management mechanisms of at least two
>>commercial Common Lisp implementations and one shareware Scheme
>>implementation with which I am familiar.

>
>What exactly is the problem?  I don't have an SGI system, so I
>can't watch the output of gr_osview and easily see what's happening.
>Is there any reason to suppose it's an inherent Lisp problem rather
>than a poorly tuned implementation?

As I stated in the quote which I left in, above, the problem is with
conflicts between the buffering strategies used by the OS and by the
implementation.  The result is a wasteful amount of memory being
allocated, poor cache behavior, etc.  Some such cases of poor
performance of particular implementations could, no doubt, be
eliminated or ameliorated by better tuning on the part of the
implementor.  The fact remains that lisp's semantics include
requirements for any implementation to support features that have a
high likelihood of causing such conflicts, unless the underlying OS is
designed to accommodate those semantics.  The same is true for the
run-time model of C-like languages and the standard library, but since
most modern OS's were specifically designed with that model in mind,
the possibility of conflict in actual implementations is much lower.

[...]

>
>Why is that any more perverse than using gcc rather than the compiler
>from the hardware manufacturer?  Who says a Lisp machine has to be for
>only one Lisp and an ordinary machine can be for any C and indeed a
>whole range of languages?  Even existing Lisp machines weren't
>that restricted!

I never suggested that they were.  There is nothing perverse about
using gcc, particularly when in many cases it performs better than the
vendor's compiler.  Similarly, there would be nothing perverse about
using different lisps on the same lisp-machine, so long as they all
performed about equally.  What I explicitly said would be perverse
would be using a dialect of any language on any machine that had
significant mis-optimizations for the architecture, OS, or
application.

[...]

>
>Strictly speaking, market forces don't produce implementations?
>Surely they select among them instead.  Moreover, they don't have to
>respect technical merit or performance or anything else.

Market forces produce implementations in exactly the same way that
environmental forces produce species - by natural selection.  I don't
understand the distinction you are trying to make, unless it is just a
word game?

>
>Now, it was fairly easy on VAXes to find cases where Franz Lisp was
>faster than C.  Moreover, for some time after Common Lisps appeared,
>Franz fit better with C and Unix than they did.  Some CLs have passed
>Franz in some ways, but then development of Franz stopped years ago.
>Now we're starting to see Lisps that compile to C and pass arguments
>on the C stack (which KCL did to some extent in the mid 80s), that are
>packaged as shared libraries, that can be linked into C programs on a
>more or less equal basis, etc.  There are also roles for Lisp that
>don't involve competing with C but complementing C instead.  What we
>now think of as a typical Lisp didn't have to be typical and may
>not be typical in the future.  Whether market forces will take any
>notice is a different question.

How many times do I have to repeat that I agree 100% with everything
you say in the preceding paragraph?  Here are all things I have said
any number of times before in this thread: There are applications for
which lisp is ideally suited, such that an application written in a
C-like language is likely to perform worse than one written in a
lisp-like language.  A good way of achieving a compromise between
performance and features for many applications which cannot afford to
be written entirely in lisp is either embedding a significant amount
of foreign code in a lisp application or embedding a lisp
implementation in an application written in some other language.  None
of the above alters the fact that some specific kinds of applications
neither need nor can afford lisp's more powerful features.

[...]

>
>Sure, but it's not confined to already existing members.

Again, I have already said that I would be unsurprised to find that
both OS's and lisps evolve over time so that they become a better
match for one another.  In the mean time, those of us trying to write
commercially viable products using today's dialects and
implementations must choose our tools with care.

[...]

>
>Well, so you say.  (Actually, there's not a very good fit between
>C and some PC operating systems.)  The standard machine for Lisp 
>used to be the PDP-10.  I suspect that Lisp machines have distorted
>perceptions of how well Lisp fits with "mainstream" machines.
>There are often actual problems with linkers, memory management,
>etc, but I don't think there's a fundamental mismatch with the
>hardware or even, in most cases, with the software.

Well, so you say.

[...]

>
>I still don't know what from my messages you have in mind.
>

Your arguments that the problems with lisp for certain applications I
(and many others) have encountered are implementation issues rather
than language-intrinsic, especially coming in the context of this
thread at the point of the message to which you took exception,
imply that with enough tuning of the compiler and the run-time
system there simply is no performance penalty for lisp's richer
semantics.  I disagree.

[...]
>
>But I agree with that.

I honestly don't understand how anyone could fail to.

[...]

>
>>  In the
>>absence of any contextual information about the nature of the
>>platform, the particular language implementations, and the
>>requirements of the application both of the statements "Lisp is better
>>than C" and "C is better than lisp" are either meaningless or false,
>>depending on one's semantic theory.
>
>Sure, but I don't say either of those things.
>
>-- jd

The whole thrust of your argument has been that if one encounters a
performance problem using lisp then that is either the implementor's or
the OS's problem.  This seems to me to directly contradict this last
assertion.

Kirk Rader
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cv3rtH.AD5@cogsci.ed.ac.uk>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>In article <··········@festival.ed.ac.uk> ····@festival.ed.ac.uk (J W Dalton) writes:
>
>>····@triple-i.com (Kirk Rader) writes:
>>
>>>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>>>>In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>>>>>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>
>[...]
>
>>
>>>As one concrete example of the kind thing to which I referred, SGI's
>>>filesystem substrate maintains its recently-deallocated file cache
>>>buffers in a "semi-free" state on the theory that they will quickly be
>>>reallocated for the same purpose.  Watching the output of gr_osview as
>>>an I/O intensive lisp application executes you can easily see conflict
>>>between the different buffering and memory-management policies being
>>>used by the OS's filesystem and virtual-memory management mechanisms
>>>and by the I/O and memory-management mechanisms of at least two
>>>commercial Common Lisp implementations and one shareware Scheme
>>>implementation with which I am familiar.
>
>>
>>What exactly is the problem?  I don't have an SGI system, so I
>>can't watch the output of gr_osview and easily see what's happening.
>>Is there any reason to suppose it's an inherent Lisp problem rather
>>than a poorly tuned implementation?
>
>As I stated in the quote which I left in, above, the problem is with
>conflicts between the buffering strategies used by the OS and by the
>implementation. 

Yeah, and what *is* the conflict?  What is Lisp doing wrong?
What scope is there for reducing the problem?

(For example, some GC techniques access pages in a random order,
which is not a good case for some VM policies.  So what Lisp is
doing wrong is accessing pages in an effectively random order,
and then we could fill in details about why that GC technique
has that effect and consider ways to reduce the problem, such
as a different approach to GC or a system call to warn the VM
system.)

> The result is a wasteful amount of memory being
>allocated, poor cache behavior, etc.  Some such cases of poor
>performance of particular implementations could, no doubt, be
>eliminated or ameliorated by better tuning on the part of the
>implementor.  The fact remains that lisp's semantics include
>requirements for any implementation to support features that have a
>high likelihood of causing such conflicts, unless the underlying OS is
>designed to accommodate those semantics.

Why can't Lisp implementors move to techniques that don't have
such problems or have them to a lesser degree?  Surely changing
the OS is not the only way to reduce the problem.

>>Strictly speaking, market forces don't produce implementations?
>>Surely they select among them instead.  Moreover, they don't have to
>>respect technical merit or performance or anything else.
>
>Market forces produce implementations in exactly the same way that
>environmental forces produce species - by natural selection.  I don't
>understand the distinction you are trying to make, unless it is just a
>word game?

What point were you trying to make?  Mine is that people produce
implementations.  If no one produces an implementation with certain
properties, the market will not "produce" such an implementation
either.  

>>Now, it was fairly easy on VAXes to find cases where Franz Lisp was
>>faster than C.  Moreover, for some time after Common Lisps appeared,
>>Franz fit better with C and Unix than they did.  Some CLs have passed
>>Franz in some ways, but then development of Franz stopped years ago.
>>Now we're starting to see Lisps that compile to C and pass arguments
>>on the C stack (which KCL did to some extent in the mid 80s), that are
>>packaged as shared libraries, that can be linked into C programs on a
>>more or less equal basis, etc.  There are also roles for Lisp that
>>don't involve competing with C but complementing C instead.  What we
>>now think of as a typical Lisp didn't have to be typical and may
>>not be typical in the future.  Whether market forces will take any
>>notice is a different question.
>
>How many times do I have to repeat that I agree 100% with everything
>you say in the preceding paragraph? 

I don't know.  So far you've said it only once.

Indeed, I'm not sure you do agree with me.  For one thing, there's
your point about the cost of higher-level features.  How did Franz
Lisp ever end up _faster_ than C?  I know you say (e.g. below) that
Lisp can be faster, but you also offer general arguments against
that.  This suggests that you're not really considering how it is
that Lisp can end up faster.

Moreover, I'm saying Lisp implementations could be very different from
the ones we have now.  If things had gone differently, typical Lisp
implementations might have been much more competitive with C for
application areas where C now wins.  Performance differences between
languages can go "against" the semantic differences, because more
effort is put into implementations, or into parts of implementations,
and because of other constraints (e.g. maybe C has to use a particular
calling mechanism because it has to be freely linkable with certain
other languages.)

>   Here are all the things I have said
>any number of times before in this thread: There are applications for
>which lisp is ideally suited, such that an application written in a
>C-like language is likely to perform worse than one written in a
>lisp-like language.

But you keep saying that higher-level features can't be had for free,
must have overheads, and so on.

>  A good way of achieving a compromise between
>performance and features, for many applications which cannot afford to
>be written entirely in lisp, is either to embed a significant amount
>of foreign code in a lisp application or to embed a lisp
>implementation in an application written in some other language.  None
>of the above alters the fact that some specific kinds of applications
>neither need nor can afford lisp's more powerful features.

But which kinds are they?  Dividing according to current
implementations is useful but we can't say that the necessary
divisions are the same.  So why not stick to points about
existing implementations?  It's useful information and 
we're on more solid ground when making such claims.

>>Sure, but it's not confined to already existing members.
>
>Again, I have already said that I would be unsurprised to find that
>both OS's and lisps evolve over time so that they become a better
>match for one another.  In the mean time, those of us trying to write
>commercially viable products using today's dialects and
>implementations must choose our tools with care.

Which is back to something no one disagrees with.

>>Well, so you say.  (Actually, there's not a very good fit between
>>C and some PC operating systems.)  The standard machine for Lisp 
>>used to be the PDP-10.  I suspect that Lisp machines have distorted
>>perceptions of how well Lisp fits with "mainstream" machines.
>>There are often actual problems with linkers, memory management,
>>etc, but I don't think there's a fundamental mismatch with the
>>hardware or even, in most cases, with the software.
>
>Well, so you say.

If Lisp machines didn't supply an easier target, I suspect we'd
have people arguing that the PDP-10 was a peculiar machine, unusually
well-suited to Lisp, and that current machines were very different.
(It's possible to make an argument along those lines that's more
reasonable than much of what we've seen here.)

>>I still don't know what from my messages you have in mind.
>
>Your arguments that the problems with lisp for certain applications I
>(and many others) have encountered are implementation issues rather
>than language-intrinsic, especially coming in the context of this
>thread at the point of the message to which you took exception,
>imply that with enough tuning of the compiler and the run-time
>system there simply is no performance penalty for lisp's richer
>semantics.  I disagree.

That does nothing to show I'm using Lisp as "any language based on
the lambda-calculus or any one that has generic features to support
higher-order functions and a functional programming paradigm".

You keep eliding that part, but that's what's at issue here.
Moreover, your claim about "any language based on the lambda calculus
..." is doubly strange.  For surely such languages, with support
for higher-order functions, are perfectly good cases for your
general arguments about the costs of semantic extras.  Do you
think there are some lc-based languages that support higher-order
functions that do not have language-intrinsic costs?  If
so, please say which they are, because it sounds like I should
try using them.

However, since you don't seem to want to discuss that, let's
turn to something you are inclined to defend.

Now, I think it's possible, perhaps even likely, that many of
the problems *you've* encountered *are* due to implementation
factors and are not language-intrinsic.  Other problems people
run into may be due to properties of Common Lisp that are not 
shared by all languages in the Lisp family.  If there is a
language-intrinsic cost, no one has managed to show what it
is, much less quantify it so we can see whether we should care.

>>>  In the
>>>absence of any contextual information about the nature of the
>>>platform, the particular language implementations, and the
>>>requirements of the application both of the statements "Lisp is better
>>>than C" and "C is better than lisp" are either meaningless or false,
>>>depending on one's semantic theory.
>>
>>Sure, but I don't say either of those things.
>
>The whole thrust of your argument has been that if one encounters a
>performance problem using lisp then that is either the implementor's or
>the OS's problem.  This seems to me to directly contradict this last
>assertion.

Well, suppose I *did* say all problems are either the implementor's
or the OS's.  How does that have me saying Lisp is better than C or
C is better than Lisp?

-- jd
From: J W Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cuso2x.243@festival.ed.ac.uk>
·······@netcom.com (William Paul Vrotney) writes:

>In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:

>> ...
>> problem.  My central point all along has been that the choice of which
>> language to use for a particular project is a complex one due to the
>> many trade-offs entailed no matter which language is used.  [...]

>Instead of all this complex analysis, which I'm not sure is going anywhere,
>lets try some simple stuff for a change.  Lets try this mind experiment. IF
>there was a Lisp compiler that compiled as efficiently as C (or even close
>to) and IF your boss said that you can program in either Lisp or C. What
>would your choice be? Case closed (one way or the other). I hope.

>There are so many more interesting aspects of Lisp that this news group can
>be used for.

I agree.  I wish people would stop using it to attack Lisp.
Many of the same criticisms of Lisp could be made in a more
constructive fashion and could include something to let us see
exactly what and how bad the problems are.
From: Kirk Rader
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cuw3A2.JAn@triple-i.com>
In article <··········@festival.ed.ac.uk> ····@festival.ed.ac.uk (J W Dalton) writes:
>·······@netcom.com (William Paul Vrotney) writes:
>
>>In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>
>>> ...
>>> problem.  My central point all along has been that the choice of which
>>> language to use for a particular project is a complex one due to the
>>> many trade-offs entailed no matter which language is used.  [...]
>
>>Instead of all this complex analysis, which I'm not sure is going anywhere,
>>lets try some simple stuff for a change.  Lets try this mind experiment. IF
>>there was a Lisp compiler that compiled as efficiently as C (or even close
>>to) and IF your boss said that you can program in either Lisp or C. What
>>would your choice be? Case closed (one way or the other). I hope.
>
>>There are so many more interesting aspects of Lisp that this news group can
>>be used for.
>
>I agree.  I wish people would stop using it to attack Lisp.
>Many of the same criticisms of Lisp could be made in a more
>constructive fashion and could include something to let us see
>exactly what and how bad the problems are.
>
>

Since you included a quote from me in this, I assume you consider me
to be one of those who has been attacking lisp.  I do not feel that to
be the case.  I also think that it is a little hypocritical to
criticize the same line of argument for being both "too dependent on
illustrative examples" and "not including anything to see what the
problems are" (my paraphrase) as you have in this set of related
messages.

My repeatedly emphasized point is not deep, mysterious, hard to grasp,
or even requiring much in the way of complex justification or
analysis.  A higher level language like lisp will have made
performance trade-offs making it a better choice of language for some
applications and a worse choice for others relative to a lower-level
language like C.  All of the lengthy thread which has ensued is, I
believe, the result of people who feel so threatened by any suggestion
that there can be any application for which lisp is not the best
choice of implementation language that they have felt it necessary to
vehemently attack what were mild observations of (to me, at least)
obvious truths.

Kirk Rader
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cv3p9r.8Ao@cogsci.ed.ac.uk>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>In article <··········@festival.ed.ac.uk> ····@festival.ed.ac.uk (J W Dalton) writes:

>>>There are so many more interesting aspects of Lisp that this news group can
>>>be used for.

>>I agree.  I wish people would stop using it to attack Lisp.
>>Many of the same criticisms of Lisp could be made in a more
>>constructive fashion and could include something to let us see
>>exactlky what and how bad the problems are.

>Since you included a quote from me in this, I assume you consider me
>to be one of those who has been attacking lisp. 

That's right.

>I do not feel that to be the case.

You alternate between claims that virtually no one would disagree
with and much stronger claims that, if correct, would mean Lisp
was a poor choice for a vast range of applications.  In between
is some useful stuff about particular problems for particular
applications.  I think the useful stuff could be more useful,
by giving us more of the details, and that the stronger claims
have not been justified.

>  I also think that it is a little hypocritical to
>criticize the same line of argument for being both "too dependent on
>illustrative examples" and "not including anything to see what the
>problems are" (my paraphrase) as you have in this set of related
>messages.

Why hypocritical?  Inconsistent, perhaps.  But no.  The examples
don't justify the claims they illustrate and neither do they show
just what it is that Lisp is doing wrong.  They support the impression
that Lisp is losing without telling us enough for us to tell just
how great the costs are and what scope there is for reducing them.

>My repeatedly emphasized point is not deep, mysterious, hard to grasp,
>or even requiring much in the way of complex justification or
>analysis. 

And if you'd confined yourself to the reasonable claims no one has
much problem with, this whole discussion would have ended long ago.
But you also say things that at least give the impression that Lisp
is losing in a fairly general way.  (Not just for real-time
applications or numeric ones.  Instead, Lisp I/O will clash
with operating systems, closures and other high-level features
will trap the unwary into writing inefficient code, and so on.)

> A higher level language like lisp will have made
>performance trade-offs making it a better choice of language for some
>applications and a worse choice for others relative to a lower-level
>language like C.

Well, you're now posting articles that pretty much confine themselves
to that kind of general point, without trying to argue that Lisp I/O
will be losing, and so on.  But do you now accept that Lisp I/O
needn't clash with ordinary operating systems, or what?

>  All of the lengthy thread which has ensued is, I
>believe, the result of people who feel so threatened by any suggestion
>that there can be any application for which lisp is not the best
>choice of implementation language that they have felt it necessary to
>vehemently attack what were mild observations of (to me, at least)
>obvious truths.

Nonsense.  For instance, I'm not only not threatened by the suggestion
that there can be any application for which Lisp is not the best choice
of implementation language, I'd say so myself.  I'd even list
applications for which current Lisps are unsuited.

What I don't do is suppose that Lisp is necessarily losing, that we
can identify cases where C will win no matter what we do in the way of
implementation and language design for a language in the Lisp family.
Sure, there might be such cases, and maybe we can even tell which
they are.  But not from what's been said here.

-- jeff
From: Tim Bradshaw
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <TFB.94Aug15180058@oliphant.cogsci.ed.ac.uk>
* Kirk Rader wrote:
> The system as a whole - its I/O, memory-management,
> etc. hardware and software substrates - were all designed and
> optimized with Unix in mind.  Any software system such as Common Lisp
> which has its own memory-management, I/O, etc. models that are
> sufficiently different from that of Unix to prevent the implementor or
> application programmer from simply calling the standard libraries in
> the same way that a C program would is by definition not only
> re-inventing the wheel from the point of view of the work-station's
> designers but also risks incurring (and in typical implementations
> does incur) serious performance problems.  

[Talking about Common Lisp here]

CL's I/O model is basically buffered streams.  Sounds pretty much
identical to that of C to me.  A lot of CL implementations seem to
have rather poor I/O but that's because the I/O systems aren't well
written.

Since C doesn't have any built-in memory management, I fail to see how
CL's can be different from it.  There are, or have been, problems with
Unix's VM and typical Lisp programs' poor locality behaviour.  However these
problems are in fact just as bad for other programs (for instance X
servers exhibit many of the same characteristics, and thrash VM
systems the same way as Lisp does on resource-starved machines).

> Every brand of work-station
> with which I am familiar comes with performance metering tools which
> can be used to easily verify the kinds of ill-effects to which I
> refer.  As a concrete example, it is an enlightening experience to
> watch the output of gr_osview on an SGI while a complex lisp
> application is running using one of the popular commercial Common Lisp
> implementations.  One can easily see where the lisp implementation's
> I/O model, memory-management model, lightweight process model, and so
> on cause really awful behavior of Irix' built-in I/O,
> memory-management, and scheduling mechanisms.

Can you give details?  I have spent some time watching large CL (CMUCL)
programs on Suns, and other than VM problems (and CMUCL's garbage
collection is not exactly `state of the art') I find they do fine.
And they weren't even very well written.  Lightweight processes are
not part of CL BTW.

> And to build the kind of "holistically lisp friendly" environment that
> would make the kind of misbehavior to which I refer above impossible.
> A lisp machine has no other memory-management mechanism than lisp's.
> Ditto for its scheduler.  Ditto for its I/O substrate.  There is no
> possibility of conflict or misoptimization.

Sure we'd all like the integrated environment back.  But we can't have
it.  And it didn't make that kind of misbehaviour impossible, believe
me!

> I would disagree with the description that writing efficient
> applications for typical RISC-based workstations in lisp is "easy",
> for all of the reasons I refer to above, regardless of the number of
> machine instructions to which a simple expression may compile, or
> whatever other micro-level measure of compiler efficiency you wish to
> use.  The ultimate performance of the application as a whole will be
> determined by much more than just the tightness of the code emitted by
> the compiler.

Again, can you give some examples here?  I fully agree that
crappily-written programs don't perform well, and you can write badly
in Lisp.  But well-written ones do, and you don't need to resort to
arcana to write well.

--tim
From: J W Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <CusK83.GK5@festival.ed.ac.uk>
···@arolla.idiap.ch (Thomas M. Breuel) writes:

>In article <·················@oliphant.cogsci.ed.ac.uk> ···@cogsci.ed.ac.uk (Tim Bradshaw) writes:
>|Sounds pretty much
>|identical to that of C to me.  A lot of CL implementations seem to
>|have rather poor I/O but that's because the I/O systems aren't well
>|written.

>CL's I/O system is also lacking some rather important primitives,
>like block read/block write.

1. You say "like".  What other examples are there?

2. What exactly is the problem with lacking user-visible block
   read and write?  The underlying I/O system will still use block r/w.

>|However these
>|problems are in fact just as bad for other programs (for instance X
>|servers exhibit many of the same characteristics, and thrash VM
>|systems the same way as Lisp on resource-starved machines.

>Sure, but CommonLisp systems will resource-starve a machine much more
>quickly than an equivalent C program. 

That depends, in part, on what you count as equivalent.

> The reason is that CL lacks
>important primitives for expressing some fundamental kinds of data
>abstractions in a space-efficient way (think about how much space your
>typical "struct { int x; double y; char z;};" takes as a CommonLisp
>DEFSTRUCT).

How much?  Do you have some actual numbers?

(There are cases where Lisps typically represent things more
efficiently than C -- consider e.g. a naive implementation of cons in
C using malloc.)

>  Also, rampant consing by the standard libraries is a real
>problem in CL. 

What standard libraries are those?  Which Common Lisps?

Do you know that a number of Lisps never cons unless required to
do so by the user's program?

> And, most systems still don't have efficient floating
>point arguments/return values for separately compiled functions.

Is this an inherent problem or just a matter of implementation
traditions?

>Yes, you are right, neither VM behavior of CommonLisp nor garbage
>collection are the problem.  However, VM thrashing and excessive
>garbage collections are frequently observed symptoms of problems with
>the CommonLisp language.

I wonder what people are doing so that this is so.  I don't usually
have much problem with excessive GC and when I do it's often easy to
fix.

>|CL's I/O model is basically buffered streams.

>CL has _typed_ buffered streams.  Whatever the CL I/O design was
>supposed to achieve, it fails to achieve in practice.  Those stream
>types at best cause lots of headaches.

Do you have any examples?

The problem with this kind of account is that it assumes everyone
knows what you're talking about.  It lines up with what lots of
people who dislike Lisp already believe, but doesn't say just how
bad the problem actually is.  People therefore assume it's as
bad as they always thought it was, or worse, since they may not
have considered all your examples (what exactly is wrong with
typed streams, for instance?).

-- jeff
From: Tim Bradshaw
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <TFB.94Aug22191119@grieve.cogsci.ed.ac.uk>
* J W Dalton wrote:

> 2. What exactly is the problem with lacking user-visible block
>    read and write?  The underlying I/O system will still use block r/w.

The `problem' is that every CL implementation I've seen conses
*appallingly* when you do input.  So much so that, given efficient
user block IO you'd be sorely tempted to invent special purpose
reading stuff which is a bit more efficient.  But of course this isn't a
problem with Lisp but with poor implementations.

I read somewhere that some kind of user block IO was in the ANSI CL spec
BTW, is this wrong?

To change the subject slightly.  Do *any* of the anti-Lisp arguments
that have been raised not fall into one of these 4 classes:

	1. I hate all those parens.

	2. Many Lisp implementations are not good, and many programs
	   written in Lisp are poorly written.

	3. High-level languages are a bad thing.  For instance the
	   `Lisp memory management is bad' arguments basically come
	   down to saying that languages which don't allow you to have
	   dangling pointers (ie that don't have free) are bad.

	4. It won't run comfortably on my souped-up CP/M OS with no
	   proper VM and a window system that is 3 times as big as
	   X.

--tim

The last one is a joke BTW.
From: Lawrence G. Mayka
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <LGM.94Aug22174729@polaris.ih.att.com>
In article <·················@grieve.cogsci.ed.ac.uk> ···@cogsci.ed.ac.uk (Tim Bradshaw) writes:

   I read somewhere that some kind of user block IO was in the ANSI CL spec
   BTW, is this wrong?

You are quite correct.  The ANSI CL spec includes READ-SEQUENCE and
WRITE-SEQUENCE.

   To change the subject slightly.  Do *any* of the anti-Lisp arguments
   that have been raised not fall into one of these 4 classes:

	   1. I hate all those parens.

	   2. Many Lisp implementations are not good, and many programs
	      written in Lisp are poorly written.

	   3. High-level languages are a bad thing.  For instance the
	      `Lisp memory management is bad' arguments basically come
	      down to saying that languages which don't allow you to have
	      dangling pointers (ie that don't have free) are bad.

	   4. It won't run comfortably on my souped-up CP/M OS with no
	      proper VM and a window system that is 3 times as big as
	      X.

Perhaps the best argument we've seen is that the behavior of garbage
collectors in commercial CL implementations is unsuitable for
applications requiring consistent real-time response.  But the recent
press release from Harlequin Inc., distributed at the Lisp Users and
Vendors Conference, gives one reason for hope.  ("Harlequin's
world-leading team of software developers have produced a 'real-time'
version of LispWorks, called Hercules, to meet the special
requirements of AT&T's ATM switched virtual circuit application.")
--
        Lawrence G. Mayka
        AT&T Bell Laboratories
        ···@ieain.att.com

Standard disclaimer.
From: Lawrence G. Mayka
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <LGM.94Aug23173633@polaris.ih.att.com>
In article <··········@pulitzer.eng.sematech.org> ······@swim1.eng.sematech.org (Bill Gooch on SWIM project x7151) writes:

   In article <·················@polaris.ih.att.com>, ···@polaris.ih.att.com (Lawrence G. Mayka) writes:
   |> 
   |> Perhaps the best argument we've seen is that the behavior of garbage
   |> collectors in commercial CL implementations is unsuitable for
   |> applications requiring consistent real-time response.  But the recent
   |> press release from Harlequin Inc., distributed at the Lisp Users and
   |> Vendors Conference, gives one reason for hope....

   I wouldn't be extremely hopeful that some new software can address
   and resolve the issue of GC for realtime operation.  Unless the 
   application code itself takes care not to create too much garbage, 
   it seems highly unlikely that some new GC technology would somehow 
   enable realtime performance (for a reasonably small epsilon) with 
   the GC turned on and getting enough cycles to complete its work.

Do you doubt that GC can be made highly incremental (i.e., real-time)?
Research literature is rife with various algorithms for incremental
GC.  They vary principally in

(a) the average and maximum intervals taken by GC;

(b) any limitations on GC's effectiveness (e.g., susceptibility to
memory fragmentation); and

(c) the effect on system performance, at a micro- and macro-level.

What is currently lacking is a commercial Common Lisp implementation
that employs such an algorithm.
--
        Lawrence G. Mayka
        AT&T Bell Laboratories
        ···@ieain.att.com

Standard disclaimer.
From: Bill Gooch on SWIM project x7151
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <33ftdp$eb6@pulitzer.eng.sematech.org>
In article <·················@polaris.ih.att.com>, ···@polaris.ih.att.com (Lawrence G. Mayka) writes:
|> 
|> Do you doubt that GC can be made highly incremental (i.e., real-time)?

No, and I may have been over-generalizing based on my experience
with current GC technology.  Also, the realtime applications I've
had contact with have involved some pretty small epsilons, and I'm
doubtful that any GC approach would be effective in those cases.
I'd be happy to be proved wrong.
From: Scott Nettles
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <33e36g$sle@cantaloupe.srv.cs.cmu.edu>
In article <··········@pulitzer.eng.sematech.org>,
Bill Gooch on SWIM project x7151 <··········@sematech.org> wrote:
>
>I wouldn't be extremely hopeful that some new software can address
>and resolve the issue of GC for realtime operation.  Unless the 
>application code itself takes care not to create too much garbage, 
>it seems highly unlikely that some new GC technology would somehow 
>enable realtime performance (for a reasonably small epsilon) with 
>the GC turned on and getting enough cycles to complete its work.

This shows an important misunderstanding of how modern garbage collectors
work.  The problem isn't the creation of too much garbage, but rather
creating too much non-garbage.  Copying collectors don't expend ANY cycles on
garbage, so if your program did nothing but generate garbage GC would be
very cheap, just the overhead to get into the collector, set everything up,
and then do nothing.  In theory mark-and-sweep has a cost component related
to garbage, but in practice it's dominated by the same costs as copying.

It also shows a lack of understanding of what "realtime" really means.  It
doesn't mean fast, but rather predictable.  The goal is to make sure that
things get the cycles they need predictably.  By the way, as far as I know
there are no malloc/free implementations (or designs!) which are (hard)
realtime.

There has been a lot of good work recently which I think may well lead to
practical hard realtime collection in the next few years.  If people are
keen on it, I can provide some references and a bit more discussion.

Scott
·······@cs.cmu.edu
From: Bill Gooch on SWIM project x7151
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <33fsol$eb6@pulitzer.eng.sematech.org>
In article <··········@cantaloupe.srv.cs.cmu.edu>, ········@cs.cmu.edu (Scott Nettles) writes:
|> In article <··········@pulitzer.eng.sematech.org>,
|> Bill Gooch on SWIM project x7151 <··········@sematech.org> wrote:
|> >
|> >I wouldn't be extremely hopeful that some new software can address
|> >and resolve the issue of GC for realtime operation....
|> 
|> This shows an important misunderstanding of how modern garbage collectors
|> work.  The problem isn't the creation of too much garbage, but rather
|> creating too much non-garbage.  Copying collectors don't expend ANY cycles on
|> garbage, so if your program did nothing but generate garbage GC would be
|> very cheap, just the overhead to get into the collector, set everything up,
|> and then do nothing.  In theory mark-and-sweep has a cost component related
|> to garbage, but in practice it's dominated by the same costs as copying.

In practice, most lisp programs create noticeable amounts of both 
garbage and non-garbage, which is why GC is important.  I don't 
claim to be very knowledgeable about the details of GC operation,
but I have a question for you: if garbage collectors don't spend a
significant portion of time dealing with garbage, then why doesn't
everyone simply write garbage-intensive code, avoiding allocation 
of non-garbage?   And the corollary: "if your program did nothing 
but generate garbage," why would there be any need for GC? 

My experience has been that, in general, GC overhead goes up with 
the amount of garbage created.  Are you claiming that this is not 
true, or just that there are other factors involved?  I would not 
disagree with the latter.  All of this is rather beside the point 
I was making, however.

Anyway, I didn't intend to indirectly slam your research.  Perhaps
I should have said something like "your time might be better spent 
exploring currently viable solutions than in waiting for new GC 
technology."

|> It also shows a lack of understanding of what "realtime" really means.  It
|> doesn't mean fast, but rather predictable.  The goal is to make sure that
|> things get the cycles they need predictably.  

I really can't see where you read such a "misunderstanding" into
my post.  My use of the phrase "enable realtime performance (for
a reasonably small epsilon)" should have made it clear that I do 
understand the distinction between "realtime" and just plain "fast."  
But "a reasonably small epsilon" was meant to indicate that in some 
cases "fast" (as in raw performance) is a major consideration.  In 
such cases, which I believe may be fairly common, giving the realtime 
part of the system first priority means deferring GC work, which 
may mean that the total cycles available for GC will not be enough 
to get the GC work done.
From: Rob MacLachlan
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <33h1u8$58b@cantaloupe.srv.cs.cmu.edu>
Scott was being a bit pedantic.  It is true that for many collectors, the
cost of a collection cycle is determined primarily by the amount of recently
allocated non-garbage, but the *frequency* of collection cycles is determined
by the total rate of allocation (garbage and non-garbage.)

It's true that it's better to allocate garbage than non-garbage, but it's even
better to not allocate anything.  As long as you can reduce garbage allocation
without changing memory reference patterns in bad ways (such as heavily
modifying old objects), then it is a good thing to reduce garbage allocation.

Many of the garbage-reduction tweaks that people do in Common Lisp also just
plain reduce the amount of work even when GC costs are ignored.  For example,
allocating numbers in registers instead of on the heap eliminates the memory
references that initialize and fetch the number.

As to the reality of real-time GC, I think that the answer is that it is
starting to get there.  There are degrees of real-time-ness, and the less real
the time, the more tractable GC becomes.  Hard real-time (where deadlines must
not be missed) is difficult enough without throwing GC in.  But the game
changes if you're willing to set a "pause goal" where small overruns are
allowed and extremely rare large overruns are tolerable.

I believe that there is GC technology which allows solid "interactive"
applications with mouse-tracking, window-popping, etc.  Multi-media
applications may cause more difficulty.  I have been told by GC researchers
that they have collectors which can fit all their pauses between frames of a
60hz video display, but that doing sound output is harder.

One fundamental problem with hard real-time garbage collection is that today's
algorithms have been optimized for typical-case behavior, not worst-case.  For
example, generational GC is no help in meeting real-time goals unless you can
prove that the application reliably demonstrates the sort of memory-use
patterns that generational GC optimizes.

These concerns about hard real-time should be put in perspective.  Virtual
memory is not a hard real-time technology.  Caches are not a hard real-time
technology.  In practice, it is almost always a colossally big win to have a
couple of levels in your memory hierarchy, but it is hard to prove that this
is always so.

If someone ever does come up with a collector that "solves" the hard-real-time
problem, it will probably be irrelevant for the vast majority of interactive
workstation/PC applications, since a hard real-time collector would sacrifice
lots of normal-case performance optimizations in order to get the desired
worst-case performance.

  Rob
From: Scott Nettles
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <33leok$fnn@cantaloupe.srv.cs.cmu.edu>
First a few comments on Rob MacLachlan's excellent post.

Rob MacLachlan <···@cs.cmu.edu> wrote:
>Scott was being a bit pedantic.  It is true that for many collectors, the
>cost of a collection cycle is determined primarily by the amount of recently
>allocated non-garbage, but the *frequency* of collection cycles is determined
>by the total rate of allocation (garbage and non-garbage.)

Assuming fixed amounts of memory, this is of course completely true.
This means there is some overhead for garbage as well as non-garbage.
Just to get some feel for this, let's look at some numbers from my
favorite system SML/NJ.  Getting in and out of the collector is pretty
expensive in older versions of the compiler like the one I use.  A
conservative (i.e. on the high side) estimate would be 1ms on a 20 mips
machine, or 20,000 cycles. Also conservatively we might GC every 1Mb of
allocation.  This means that the overhead of allocating a byte is about
0.02 cycles.  On the other hand on the same machine the collector
copies live data at a rate of about 2Mb/sec so that's 10 cycles/byte.
This means that live data is about 1000 times more expensive to
allocate than garbage.  Furthermore, it's going to be pretty tough to
lower the cost of copying by much, but it should be pretty easy to make
the cost of getting into and out of the collector much lower.
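The figures above can be checked with quick arithmetic.  The sketch below
just restates the numbers quoted in the paragraph (rough estimates from the
post, not measurements of any particular SML/NJ release):

```python
# Back-of-the-envelope check of the per-byte GC costs quoted above.
CYCLES_PER_SEC = 20_000_000   # a 20 MIPS machine
ENTRY_EXIT_CYCLES = 20_000    # ~1 ms to get into and out of the collector
BYTES_PER_GC = 1_000_000      # conservatively, a GC every 1 MB of allocation
COPY_RATE = 2_000_000         # live data copied at about 2 MB/sec

# A garbage byte pays only its share of collector entry/exit.
garbage_cost = ENTRY_EXIT_CYCLES / BYTES_PER_GC   # cycles per byte
# A live byte additionally pays to be copied by the collector.
copy_cost = CYCLES_PER_SEC / COPY_RATE            # cycles per byte
live_cost = garbage_cost + copy_cost

print(garbage_cost)   # 0.02
print(copy_cost)      # 10.0
print(live_cost / garbage_cost)   # roughly 500: same order as the 1000:1 claim
```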

>
>It's true that it's better to allocate garbage than non-garbage, but it's even
>better to not allocate anything.  As long as you can reduce garbage allocation
>without changing memory reference patterns in bad ways (such as heavily
>modifying old objects), then it is a good thing to reduce garbage allocation.

I agree, with one exception.  If you are using generational collection
it may not be good to trade allocation for mutation.

>the time, the more tractable GC becomes.  Hard real-time (where deadlines must
>not be missed) is difficult enough without throwing GC in.  But the game

On the other hand, it looks like the hard real-time folks may be
willing to tell us some things about the application which we don't
always know, such as the maximum allocation rate, rate of live data
production, etc.  This may help constrain the problem so it's tractable.

>These concerns about hard real-time should be put in perspective.  Virtual
>memory is not a hard real-time technology.  Caches are not a hard real-time

And in fact I know of NO dynamic storage facility, free- or GC-based,
which is hard real-time on stock systems.

Now for a response to Bill Gooch.

Bill Gooch on SWIM project x7151 <··········@sematech.org> wrote:
>significant portion of time dealing with garbage, then why doesn't
>everyone simply write garbage-intensive code, avoiding allocation 
>of non-garbage?   

If they are trying to reduce the cost of garbage collection, that would
be a good strategy.  In practice when you compute you often want to
produce results which are long lived and thus "never" garbage.

> And the corollary: "if your program did nothing 
>but generate garbage," why would there be any need for GC? 
Because you have to run the garbage collector to PROVE that all that
memory you allocated is in fact garbage before you can reuse it.
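The point that a tracing collector touches only live data, yet the trace is
what certifies everything else as garbage, can be seen in a toy mark phase
(a hypothetical sketch over a dict-based heap, not any real collector):

```python
# Toy mark phase: the heap maps each object to the objects it references.
# Tracing starts from the roots and never visits garbage at all.
def mark(heap, roots):
    live = set()
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if obj not in live:
            live.add(obj)
            stack.extend(heap[obj])   # follow outgoing references
    return live

heap = {
    "a": ["b"],        # reachable from the root
    "b": [],           # reachable via "a"
    "g1": ["g2"],      # garbage: no path from any root
    "g2": [],
}
live = mark(heap, roots=["a"])
# Only "a" and "b" are ever touched; "g1" and "g2" cost nothing to trace,
# yet the trace is what PROVES they are garbage and reclaimable.
print(sorted(live))    # ['a', 'b']
```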

>
>My experience has been that, in general, GC overhead goes up with 
>the amount of garbage created.  Are you claiming that this is not 
>true, or just that there are other factors involved? 

I don't know what collectors you have been using, so I can't really say
anything about them.  What I am claiming is that the asymptotic cost of
collection has to do with the amount of live data, not garbage.  As
noted above there is some cost for garbage caused by the need to call
the collector more frequently.  For the collectors that I have
measured, there is no question that the amount of garbage is NOT a
predictor of GC overhead while amount of live data is. 

>Anyway, I didn't intend to indirectly slam your research.  Perhaps
That's OK, my research can hold its own.

>I should have said something like "your time might be better spent 
>exploring currently viable solutions than in waiting for new GC 
>technology."
Personally I like the idea of creating the new technology myself :-)
It's easy and you can do it in the privacy of your own home.

>my post.  My use of the phrase "enable realtime performance (for
>a reasonably small epsilon)" should have made it clear that I do 

What is epsilon?  Just mentioning Greek symbols doesn't really tell me
much.  Maybe I was wrong about you, but there is a common misconception
that real-time means "fast" rather than predictable.

>cases "fast" (as in raw performance) is a major consideration.  In 
>such cases, which I believe may be fairly common, giving the realtime 
>part of the system first priority means deferring GC work, which 
>may mean that the total cycles available for GC will not be enough 
>to get the GC work done.

All this means is that if you want to use GC in such a system you have
to include it in the cycle budget, not just defer GC work.  It's true
that some applications won't have enough budget to do collection, but
then it's also true that some applications won't have the cycle budget
to let people code in C either.  Big deal. 

Scott
·······@cs.cmu.edu
From: Bill Gooch on SWIM project x7140
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <33lvrr$ghu@pulitzer.eng.sematech.org>
In article <··········@cantaloupe.srv.cs.cmu.edu>, ········@cs.cmu.edu (Scott Nettles) writes:
|>
|> What is epsilon?  Just mentioning greek symbols doesn't really tell me
|> much.  Maybe I was wrong about you, but there is a common misconception
|> that real-time means "fast" rather than predictable.

By epsilon, I meant what you are calling "cycle budget."
From: Scott Fahlman
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <33nhqg$29i@cantaloupe.srv.cs.cmu.edu>
In article <··········@cantaloupe.srv.cs.cmu.edu> ········@cs.cmu.edu (Scott Nettles) writes:

   This means there is some overhead for garbage as well as non-garbage.
   Just to get some feel for this, let's look at some numbers from my
   favorite system SML/NJ.  Getting in and out of the collector is pretty
   expensive in older versions of the compiler like the one I use.  A
   conservative (i.e. on the high side) estimate would be 1ms on a 20 mips
   machine, or 20,000 cycles. Also conservatively we might GC every 1Mb of
   allocation.  This means that the overhead of allocating a byte is about
   0.02 cycles.  On the other hand on the same machine the collector
   copies live data at a rate of about 2Mb/sec so that's 10 cycles/byte.
   This means that live data is about 1000 times more expensive to
   allocate than garbage.  Furthermore, it's going to be pretty tough to
   lower the cost of copying by much, but it should be pretty easy to make
   the cost of getting into and out of the collector much lower.

I think that there's a bug in this argument.  In the presence of a lot
of live data, the marginal cost of that byte of garbage you create
must incur not only its share of the cost of getting into and out of
the GC, but also its share of the cost of tracking down and moving all
the live stuff (in the current generation, at least).  A byte of live
stuff incurs the same cost, plus the cost it adds to this and future
GC's.  This cost of moving live stuff will dominate everything else,
so while it costs a bit more to allocate live data, it is nothing like
a 1000:1 ratio.

That piece of live data will cost you more in the long run, since it
shows up in the cost of everything you allocate from now on, at least
until it gets promoted to a seldom-collected generation.  Presumably
that's OK because it's something you want to keep.  But allocating
garbage should not be considered to be practically free, as you seem
to be suggesting.

-- Scott (the other one)

===========================================================================
Scott E. Fahlman			Internet:  ····@cs.cmu.edu
Principal Research Scientist		Phone:     412 268-2575
School of Computer Science              Fax:       412 268-5576 (new!)
Carnegie Mellon University		Latitude:  40:26:46 N
5000 Forbes Avenue			Longitude: 79:56:55 W
Pittsburgh, PA 15213			Mood:      :-)
===========================================================================
From: Scott Nettles
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <33o3gf$e8o@cantaloupe.srv.cs.cmu.edu>
In article <··········@cantaloupe.srv.cs.cmu.edu>,
Scott Fahlman <···@CS.CMU.EDU> wrote:
>
>I think that there's a bug in this argument.  In the presence of a lot
>of live data, the marginal cost of that byte of garbage you create
>must incur not only its share of the cost of getting into and out of
>the GC, but also its share of the cost of tracking down and moving all

I don't agree.  I see no reason to "charge" the garbage bytes for the cost
of copying the live bytes.  Consider a heap with some live data in it.  The
cost of collecting it is the cost of getting in and out and of copying the
live data.  If you add one more byte of garbage the cost does not change. If
you add one byte of live data it changes by the cost to copy one byte of
live data.  Thus the cost of adding a garbage byte really is just the cost
of doing more frequent collections and that of a live byte has that cost
plus the copy cost.
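The marginal-cost reasoning can be made concrete with a tiny cost model; the
parameters are the rough SML/NJ figures quoted earlier in the thread, and
the function is purely illustrative:

```python
# A tiny model of total GC cycles, using the rough figures from the thread.
ENTRY_EXIT = 20_000   # cycles to get into and out of the collector
COPY = 10             # cycles to copy one live byte

def gc_cost(alloc_bytes, live_per_gc, nursery=1_000_000):
    collections = alloc_bytes / nursery
    return collections * (ENTRY_EXIT + COPY * live_per_gc)

base = gc_cost(10_000_000, live_per_gc=100_000)
# One more garbage byte only nudges the collection frequency:
plus_garbage = gc_cost(10_000_001, live_per_gc=100_000)
# One more live byte is also copied at every collection:
plus_live = gc_cost(10_000_000, live_per_gc=100_001)
print(plus_garbage - base)   # about 1 cycle
print(plus_live - base)      # 100.0: 10 copy cycles at each of 10 collections
```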

There are two effects I did not get into since they are very benchmark
dependent, although Scott (the other one) picked up on one of them.  The
first cost comes because allocating more means that live data will have less
time to die before a collection occurs. This makes garbage bytes more
expensive.  The other cost is that live data may well be copied more than
once.  This increases the cost of live bytes.  I don't know of a way to
estimate these effects without knowing a good deal more about the
application and compiler.  Of course these costs are measurable.  My guess
is that these secondary effects don't change the result much.  

In addition to these effects, there are probably some issues WRT locality,
both at the VM and cache level.  These are harder to measure, but again I
don't think they would change things enough to make a difference.
However frequent allocation DOES put heavy demands on the write oriented
parts of the memory subsystem and may be expensive regardless of the GC
issues.

>that's OK because it's something you want to keep.  But allocating
>garbage should not be considered to be practically free, as you seem
>to be suggesting.

Well of course if one can avoid allocating something, that's to be preferred,
especially since, as Rob suggests, avoiding allocation often also saves cycles.
Remember that my analysis applied only to the GC related costs.  The cost of
the actual allocation will be something like 1. (maybe) a test to see if
there is space 2. increment a register for the allocation pointer 3. write a
header word 4. initialize the slots of the object.  For a fast in-line
allocation this probably comes to about 5+n cycles and 1+n writes where n is
the size of the object.  For garbage this cost will almost certainly
dominate any GC related cost.  For live data the cost of GC will dominate.
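Putting these rough allocation figures side by side with the earlier
per-byte GC figures shows which cost dominates in each case (an illustrative
sketch using the thread's numbers, not a measurement):

```python
# Rough cost of allocating an n-byte object, per the figures above:
def alloc_cycles(n):
    # space test, bump the allocation pointer, write a header, init n slots
    return 5 + n

GARBAGE_GC = 0.02   # cycles/byte: only a share of collector entry/exit
LIVE_GC = 10        # cycles/byte: copied by the collector at GC time

n = 32
print(alloc_cycles(n), GARBAGE_GC * n)  # allocation dominates (37 vs ~0.6)
print(alloc_cycles(n), LIVE_GC * n)     # GC dominates (37 vs 320)
```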

Thus my claim is that for a system with a good generational collector, the
cost of allocating a garbage byte is essentially just the cost of
allocation, while the cost of a live byte will be dominated by the cost of
GC.  I'm sure this is true of SML/NJ and it generates garbage as fast as any
system I know of.

Scott
·······@cs.cmu.edu
From: Scott Fahlman
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <33o7ef$grc@cantaloupe.srv.cs.cmu.edu>
In article <··········@cantaloupe.srv.cs.cmu.edu> ········@cs.cmu.edu (Scott Nettles) writes:

   I don't agree.  I see no reason to "charge" the garbage bytes for the cost
   of copying the live bytes.  Consider a heap with some live data in it.  The
   cost of collecting it is the cost of getting in and out and of copying the
   live data.  If you add one more byte of garbage the cost does not change.

Ah, but the frequency of doing the collections changes.  Just for
illustration, consider a heap with a lot (say a terabyte) of live data
that is live and that stays live, and a copying GC that collects after
some fixed amount of consing (say a megabyte).  The marginal cost of
allocating any new byte, whatever its lifetime, is going to be
dominated by the fact that it brings you one byte closer to the Big
Expensive Copy of all this stuff.  The cost of creating and
initializing the new datum will be small by comparison.

Yes, allocating a byte that is still alive when the GC triggers is a
tiny bit more expensive, because it adds one byte to the terabyte that
you are going to copy at GC time, and because that one new byte may be
copied in several future GC's.  But aside from that tiny difference,
the marginal cost of creating a garbage byte or a long-lived byte is
identical.  Both must allocate and initialize the datum, and both must
pay their 10^-6 share of the cost of entering and leaving the GC and
of identifying and copying a terabyte of live stuff.
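Plugging numbers into this thought experiment shows why, for a
non-generational copying collector, every allocated byte is expensive (a
sketch; the terabyte heap and the per-byte copy cost are illustrative
figures from the thread, not measurements):

```python
# Thought experiment: a huge stable live set, fully copied every megabyte.
LIVE_BYTES = 10**12   # a terabyte of live data, copied at every GC
NURSERY = 10**6       # collect after every megabyte of allocation
COPY = 10             # cycles per live byte copied

# Every allocated byte, garbage or live, pays 1/NURSERY of the full copy:
per_byte_share = COPY * LIVE_BYTES / NURSERY
print(per_byte_share)   # 10000000.0 cycles per allocated byte(!)
# Against that, the ~10 extra cycles a live byte pays to be copied itself
# are invisible -- the marginal costs of garbage and live bytes converge.
```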

Granted, if your generational GC technology is so good that there's
not normally a large amount of live stuff in the generation to be
GC'ed, then the difference between allocating a garbage byte and a
long-lived byte is larger.  But I bet that in real systems the ratio
of marginal costs is hardly ever close to the 1000:1 ratio you
suggested earlier.  It may be 2:1 with average generational GC
technology and 10:1 with extremely well-tuned generational GC.

-- Scott

From: Scott Nettles
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <33oaor$j27@cantaloupe.srv.cs.cmu.edu>
In article <··········@cantaloupe.srv.cs.cmu.edu>,
Scott Fahlman <···@CS.CMU.EDU> wrote:
>dominated by the fact that it brings you one byte closer to the Big
>Expensive Copy of all this stuff.  The cost of creating and

My analysis assumed that you are using some kind of generational collector
and it focused on the costs of collecting the first generation.  Only live
data increases the size of older generations and leads to the need to
collect those parts of the heap.  But that just makes live bytes more
expensive, so it doesn't change the conclusion.

If you are using a simple stop-and-copy collector your point is correct, but
if you're doing that you aren't that worried about GC costs anyway.

>pay their 10^-6 share of the cost of entering and leaving the GC and
>of identifying and copying a terabyte of live stuff.

These terabyte collections will occur infrequently and as I noted above the
frequency of these will depend on the rate of live data creation NOT the
rate of garbage creation.  Adding a garbage byte to the youngest generation
brings you one byte closer to collecting the youngest generation, but no
closer for any other.

>suggested earlier.  It may be 2:1 with average generational GC
>technology and 10:1 with extremely well-tuned generational GC.

Do you have any real evidence for this?  Or is it just a guess?  For the
system I mentioned (SML/NJ) I would be very surprised if it was less than a
factor of 100.  And with SML/NJ's new, more efficient entry and exit to the
collector I suspect the factor of 1000 is pretty conservative.  Does anyone
else have any other numbers to bring to bear?

Scott
·······@cs.cmu.edu
From: Scott Fahlman
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <33okv7$q5d@cantaloupe.srv.cs.cmu.edu>
In article <··········@cantaloupe.srv.cs.cmu.edu> ········@cs.cmu.edu (Scott Nettles) writes:

   >suggested earlier.  It may be 2:1 with average generational GC
   >technology and 10:1 with extremely well-tuned generational GC.

   Do you have any real evidence for this?  Or is it just a guess?  For the
   system I mentioned (SML/NJ) I would be very surprised if it was less than a
   factor of 100.

Mostly a guess, based on some generational GC figures I've seen in the
Lisp world.  That was a while ago -- maybe modern generational GC's do
a better job of getting all the live stuff out of the way.  Of course,
if you push objects to older generations too fast, you either bloat
them or end up having to scavenge them, and then we have the same
argument at the next level of GC.

-- Scott

From: Scott Nettles
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <33qgl0$8m2@cantaloupe.srv.cs.cmu.edu>
In article <··········@cantaloupe.srv.cs.cmu.edu>,
Scott Fahlman <···@CS.CMU.EDU> wrote:
>
>Mostly a guess, based on some generational GC figures I've seen in the
>Lisp world.

I'd guess that those numbers reflected both the cost of allocation and GC.
When you do that, my numbers are in basically the same range.  If the
savings for garbage collection alone were only 1/2, there would be little
reason to use generational collection, since the best you could possibly do
is a factor of 2 for the collection.  In reality you wouldn't do nearly that
well, once you added in the overheads associated with generational
collection.  For SML/NJ, I see speedups of about 100, which agrees pretty
well with the fact that for collections of the first generation only about
1/100th of the bytes are live.
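The agreement between the observed speedup and the survival rate is simple
arithmetic, under the simplification that copy work is proportional to
surviving bytes (an illustrative sketch, not a measurement):

```python
# Copy work tracks bytes that survive the generation being collected.
bytes_allocated = 1_000_000
survivors = bytes_allocated // 100     # only ~1/100th is still live
worst_case_copy = bytes_allocated      # cost if everything survived
actual_copy = survivors                # cost when 1/100th survives
print(worst_case_copy / actual_copy)   # 100.0: matches the observed speedup
```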

>a better job of getting all the live stuff out of the way.  Of course,
>if you push objects to older generations too fast, you either bloat
>them or end up having to scavenge them, and then we have the same
>argument at the next level of GC.

Absolutely.  This is closely connected to one of the issues I specifically
ignored, which is that more frequent collections will give live data less
time to die, and thus increase the survival rate out of the generation being
considered.  This has two bad effects.  One, the live data has to be copied.
Two, it increases the frequency of the next generation collections.  Both of
these issues can be dealt with by increasing the size of the generation in
question, if that is feasible.

I think part of the controversy here is that it seems like I'm saying one
should feel free to allocate garbage at will.  That's not my point at all.
My point is that the cost of allocating garbage is dominated by the cost of
allocation, not the costs of collection.  However if you allocate enough
garbage, it adds up.  I'd guess that for SML/NJ around 1% of its execution
time is GC overhead, mostly due to frequent collections.

Scott
·······@cs.cmu.edu
From: Scott Nettles
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <3409d8$gbo@cantaloupe.srv.cs.cmu.edu>
Before I start my reply, I want to encourage anyone who cares about this
topic to reread both of Rob MacLachlan's posts.  They have a wealth of good
information.

In article <··········@cantaloupe.srv.cs.cmu.edu>,
Rob MacLachlan <···@cs.cmu.edu> wrote:
>One thing that hasn't been defined in this thread is what it means to
>"allocate garbage".  Scott Nettles seems to consider this to mean allocating
>an object which becomes garbage very soon. 

Garbage is data which isn't live.  Garbage is only discovered by doing a
GC.  We were talking about the cost of allocating garbage, so it seems
natural to consider the things that are garbage at the first collection
after allocation.

>SML/NJ's generational collector is well suited to minimizing the cost of this
>ephemeral allocation. 

I'd debate that point.  In particular, at least older versions are far from
optimal in the following areas: (1) the implementation of the write barrier;
(2) the time it takes to enter and exit the collector; (3) the copy inner
loop.  All of these things make the costs of GC in SML/NJ non-optimal.  It really
isn't a highly bummed collector and I don't think there is anything special
about it compared to other generational collectors.  There are some language
related issues that you bring up later, which do make SML more suited to
generational collectors.

> In fact, some have argued that it actually *saves* time
>to allocate contexts on the heap, since a stack pointer would have to be
>decremented [probably not true with cache effects.]

Note that these cache effects show up in looking at the cost of allocation, NOT
collection.  If your memory subsystem can support fast allocation, heaps and
stacks look pretty comparable.  If it doesn't then stacks probably win big.

>The major failing of generational collectors is that not all data destined to

I would call this an issue, not a failing.  But it certainly is an important
issue, and one which the GC community needs to do more work on.

>activation records, SML/NJ also tends to help generational GC by having a very
>functional (side-effect-free) programming style.  Side-effect avoidance:

Good point.  But, understanding the performance issues may cause people to
code more functionally in other languages too.  Certainly these days
avoiding allocation like the plague may not be the best thing to do,
especially if you do a lot of mutations instead.

>The effect of [1] shows up in statistics such as the average work needed to
>reclaim a word of storage.  If you add in a lot of "unnecessary but easily

Good point.  This is why I was trying to look at the cost of "garbage" and
live data separately.

>Of course, it would be silly to *not* do generational collection, but results
>from ML tend to make it look simpler and cheaper than it is in general.

I buy this point WRT the mutation related issues, but not so much WRT the
issues we've been talking about so far.

>    I don't believe that GC is "too inefficient" for most X, but the costs of
>    GC are highly context dependent.  The commonly perceived problems with GC
>    are pause times and VM thrashing, and not the instructions-per-word
>    collection costs discussed above.  Pause times can be controlled with
>    incremental collection, and generational collection tends to make large
>    (potentially thrashing) collections rarer.  Non-copying old-generation
>    collectors probably also help paging behavior.

This is a very good point.  In fact my work on GC has focused exactly on the
issue of pauses, which I think is the thing that stops most people from
using GC, at least on machines with ample memory.  I was focusing on the
instructions-per-word issues because I wanted to make it clear that for the
collector, garbage wasn't the issue.  In my work on concurrent GC, I have
found that high survival rates out of the first generation can be a BIG
problem.

Scott
·······@cs
From: Henry G. Baker
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <hbakerCv2JEp.BnF@netcom.com>
In article <··········@cantaloupe.srv.cs.cmu.edu> ········@cs.cmu.edu (Scott Nettles) writes:
>In article <··········@pulitzer.eng.sematech.org>,
>Bill Gooch on SWIM project x7151 <··········@sematech.org> wrote:
>>
>>I wouldn't be extremely hopeful that some new software can address
>>and resolve the issue of GC for realtime operation.  Unless the 
>>application code itself takes care not to create too much garbage, 
>>it seems highly unlikely that some new GC technology would somehow 
>>enable realtime performance (for a reasonably small epsilon) with 
>>the GC turned on and getting enough cycles to complete its work.
>
>This shows an important misunderstanding of how modern garbage collectors
>work.  The problem isn't the creation of too much garbage, but rather
>creating too much non-garbage.  Copying collectors don't expend ANY cycles on
>garbage, so if your program did nothing but generate garbage GC would be
>very cheap, just the overhead to get into the collector, set everything up,
>and then do nothing.  In theory mark-and-sweep has a cost component related
>to garbage, but in practice it's dominated by the same costs as copying.
>
>It also shows a lack of understanding of what "realtime" really means.  It
>doesn't mean fast, but rather predictable.  The goal is to make sure that
>things get the cycles they need predictably.  By the way, as far as I know
>there are no malloc/free implementations (or designs!) which are (hard)
>realtime.
>
>There has been a lot of good work recently which I think may well lead to
>practical hard realtime collection in the next few years.  If people are
>keen on it, I can provide some references and a bit more discussion.

I second Scott's comments.

I might also point out that at least two commercial 'multimedia'
systems are successfully using my copying-but-not-really-copying
real-time GC scheme described in:

"The Treadmill: Real-Time Garbage Collection without Motion Sickness".
ACM Sigplan Not. 27,3 (Mar 1992), 66-70.
From: Thomas M. Breuel
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <TMB.94Aug24183159@arolla.idiap.ch>
In article <··········@pulitzer.eng.sematech.org> ······@swim1.eng.sematech.org (Bill Gooch on SWIM project x7151) writes:
|Since garbage collection is a feature rather than a requirement,
|you can simply turn it off and do your own memory management. My
|experience has been that explicit memory management in Common Lisp
|is straightforward, especially when compared with (for example) C.

If that were true, it would be great.  Unfortunately, you can't just
turn off GC in CommonLisp because there is no requirement that the
compiler or runtime won't start generating garbage behind your back.

I think an important step would be to put in just such a requirement
into the CL language definition and to make explicit in the language
definition which operations are allowed to generate garbage, which
operations are allowed to allocate memory, and which ones aren't.

				Thomas.
From: Bill Gooch on SWIM project x7151
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <33g6v9$ee8@pulitzer.eng.sematech.org>
In article <·················@arolla.idiap.ch>, ···@arolla.idiap.ch (Thomas M. Breuel) writes:
|> In article <··········@pulitzer.eng.sematech.org> ······@swim1.eng.sematech.org (Bill Gooch on SWIM project x7151) writes:
|> |Since garbage collection is a feature rather than a requirement,
|> |you can simply turn it off and do your own memory management. My
|> |experience has been that explicit memory management in Common Lisp
|> |is straightforward, especially when compared with (for example) C.
|> 
|> If that were true, it would be great.  Unfortunately, you can't just
|> turn off GC in CommonLisp because there is no requirement that the
|> compiler or runtime don't start generating garbage behind your back.

Good point.  If the runtime environment generates garbage independent
of what your code does, that would pose a difficult problem.  I suspect 
that this is controllable, but that depends on the specific environment.

|> I think an important step would be to put in just such a requirement
|> into the CL language definition and to make explicit in the language
|> definition which operations are allowed to generate garbage, which
|> operations are allowed to allocate memory, and which ones aren't.

This is a good suggestion.  Anyone from x3j13 care to comment?
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cv3wz8.CGB@cogsci.ed.ac.uk>
In article <··········@pulitzer.eng.sematech.org> ··········@sematech.org (Bill Gooch) writes:
>
>In article <·················@arolla.idiap.ch>, ···@arolla.idiap.ch (Thomas M. Breuel) writes:
>|> In article <··········@pulitzer.eng.sematech.org> ······@swim1.eng.sematech.org (Bill Gooch on SWIM project x7151) writes:
>|> |Since garbage collection is a feature rather than a requirement,
>|> |you can simply turn it off and do your own memory management. My
>|> |experience has been that explicit memory management in Common Lisp
>|> |is straightforward, especially when compared with (for example) C.
>|> 
>|> If that were true, it would be great.  Unfortunately, you can't just
>|> turn off GC in CommonLisp because there is no requirement that the
>|> compiler or runtime don't start generating garbage behind your back.
>
>Good point.  If the runtime environment generates garbage independent
>of what your code does, that would pose a difficult problem.  I suspect 
>that this is controllable, but that depends on the specific environment.

Well, it's sort of a good point.  There's no _requirement_ that C not
be implemented via an interpreter written in Lisp, but the performance
problems of such an implementation would not be counted against C.
From: Thomas M. Breuel
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <TMB.94Aug27174837@arolla.idiap.ch>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
|>Good point.  If the runtime environment generates garbage independent
|>of what your code does, that would pose a difficult problem.  I suspect 
|>that this is controllable, but that depends on the specific environment.
|
|Well, it's sort of a good point.  There's no _requirement_ that C not
|be implemented via an interpreter written in Lisp, but the performance
|problems of such an implementation would not be counted against C.

That's because the de-facto standard for C is not to do this (I didn't
claim that this was standardized in C).  Nobody would mind some Lisp
implementations generating garbage like mad if most of them didn't.

				Thomas.
From: J W Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cusnu3.1Ar@festival.ed.ac.uk>
····@triple-i.com (Kirk Rader) writes:

>In article <·················@oliphant.cogsci.ed.ac.uk> ···@cogsci.ed.ac.uk (Tim Bradshaw) writes:

>[...]

>>
>>[Talking about Common Lisp here]
>>
>>CL's I/O model is basically buffered streams.  Sounds pretty much
>>identical to that of C to me.  A lot of CL implementations seem to
>>have rather poor I/O but that's because the I/O systems aren't well
>>written.

>C's "minimalist" philosophy makes it much easier for the
>implementation to provide alternative library entry points [open() vs
>fopen(), etc.]  and greater opportunity for the programmer to
>circumvent whatever problems there are with a given library
>implementation.  

I've found that Lisp typically gives more opportunity to do that.

Anyway, in Franz Lisp, "ports" are almost directly FILE *s.
Nothing prevents there being fd-based operations as well.
(Indeed, there's an fdopen.)

>It is silly to suggest that C and Common Lisp are
>really on a par when it comes to the amount semantic "baggage" they
>carry for I/O or anything else.  If that were true, what advantage
>would lisp _ever_ have?

That Lisp has something extra doesn't mean it must always have
excess baggage.

>C's memory-management philosophy is to rely on the standard-library's
>interface to the OS.  I do not see how this could be more different
>from lisp's reliance on a built-in memory-management scheme.  

What do you mean?  Lisp uses the same OS operations C does.

>And what possible relevance could it have that other software systems also
>exhibit similar performance problems to those of lisp?

It suggests that the problems people observe with Lisp may not be
due to Lisp, since they can (evidently) have other causes.

>>Can you give details?  I have spent some time watching large CL (CMUCL)
>>programs on Suns, and other than VM problems (and CMUCL's garbage
>>collection is not exactly `state of the art') I find they do fine.
>>And they weren't even very well written.  

>I specifically referred to using gr_osview on SGI's.  In particular,
>it is easy to observe conflicts between lisp's memory management and
>I/O mechanisms and Irix's filesystem and memory-management mechanisms.

Can you say something more about this for those of us who can't
use gr_osview on SGIs?

>>                                           Lightweight processes are
>>not part of CL BTW.

>But they are part of almost every "serious" implementation of every
>lisp dialect with which I am familiar, and I was not talking just about
>some particular implementation of some particular dialect.

But they're not part of every serious implementation of every
Lisp dialect, unless you make it true by definition of "serious".

And what is the problem for Lisp lw processes anyway?

>The question isn't whether it is possible to write programs in any
>particular language that perform well, but rather for any given
>programming task do the features of the language make it easier or
>harder to achieve acceptable performance?  The semantics of a language
>which includes GC style memory management, lexical closures, so-called
>"weak types", etc. has consciously chosen expressive power in favor of
>highest-possible performance.  In many cases that is an appropriate
>choice, but in many cases it isn't.

The semantics of Lisp say objects (normally) have indefinite extent.
Implementations typically use GC as a way to reuse storage.  Whether
this has to make it harder for programmers to obtain acceptable
performance is not clear.  It depends on the implementation and
the application.

Inclusion of lexical closures in a language does not slow down
cases that don't use them.  So how is this part of a consistent
choice of expressive power over highest possible performance?

-- jd
From: Kirk Rader
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cuw26u.J1A@triple-i.com>
In article <··········@festival.ed.ac.uk> ····@festival.ed.ac.uk (J W Dalton) writes:
>····@triple-i.com (Kirk Rader) writes:
>
>>In article <·················@oliphant.cogsci.ed.ac.uk> ···@cogsci.ed.ac.uk (Tim Bradshaw) writes:
>

[...]

>
>I've found that Lisp typically gives more opportunity to do that.

Well then, I can only say our experiences have been different.

>
>Anyway, in Franz Lisp, "ports" are almost directly FILE *s.
>Nothing prevents there being fd-based operations as well.
>(Indeed, there's an fdopen.)

Nothing prevents it, perhaps, but nothing encourages it particularly
either.  The fact that particular implementations may provide
particular hooks into the underlying OS just seems to me to confirm
the necessity from time to time of bypassing lisp's higher level
functionality.  QED

[...]

>
>That Lisp has something extra doesn't mean it must always have
>excess baggage.

How do you get the "something extra" for free?

[...]

>
>What do you mean?  Lisp uses the same OS operations C does.

And it imposes a significantly more complex additional layer of
functionality on top of it.  If this additional complexity pays for
itself, as it does in many cases, well and good.  But in many other
cases, the additional complexity doesn't pay for itself.  In those
cases I would consider it "excess baggage" (as opposed to "useful or
necessary baggage".)

[...]

>
>It suggests that the problems people observe with Lisp may not be
>due to Lisp, since they can (evidently) have other causes.

Or, as I believe is actually the case, that other systems sometimes
suffer symptoms due to the same or similar causes as lisp.

>

[...]

>
>Can you say something more about this for those of us who can't
>use gr_osview on SGIs?

Irix has a fairly complex buffer-management scheme of its own which is
implemented at the lowest level of the filesystem and VM substrates.
Due to the multiple layers of functionality referred to above, the I/O
mechanisms of the particular lisp implementation that I was using in
this example were causing unnecessary cache and page thrashing due to
more memory being allocated per I/O operation than was necessary, or
is typical in applications that use the standard library calls
directly.

[...]

>
>But they're not part of every serious implementation of every
>Lisp dialect, unless you make it true by definition of "serious".

If you want to exclude that particular example from your consideration
on such grounds, by all means do so.  For the particular application
on which I work and which I was using as an example, we could not use
any implementation of any dialect that did not include such a
facility, since it is a requirement of the design of our suite of
integrated applications.

>
>And what is the problem for Lisp lw processes anyway?

The same as with lisp memory-management.  The additional layer of
OS-like functionality on top of the real OS.

[...]

>
>The semantics of Lisp say objects (normally) have indefinite extent.
>Implementations typically use GC as a way to reuse storage.  Whether
>this has to make it harder for programmers to obtain acceptable
>performance is not clear.  It depends on the implementation and
>the application.
>
>Inclusion of lexical closures in a language does not slow down
>cases that don't use them.  So how is this part of a consistent
>choice of expressive power over highest possible performance?
>
>-- jd

Because one can only avoid the overhead inherent in using
lexical-closures by choosing not to use them (without even raising the
issue of whether they get used in the run-time system, outside of the
application programmer's control.)  If one must exercise greater
awareness of performance issues in order to specifically avoid using
exactly those features of lisp that make it more powerful in order to
achieve acceptable performance for a particular application, then how
could it be considered other than a handicap?  For such an application
I would expect the programmer to be more productive and the
application to require less debugging and performance tuning if it had
been developed in C to start with.  For other applications, where the
nature of the application is such that lisp's more powerful features
are an asset rather than a liability, the advantage would clearly go
to lisp.

Kirk Rader
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cv3n70.716@cogsci.ed.ac.uk>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>In article <··········@festival.ed.ac.uk> ····@festival.ed.ac.uk (J W Dalton) writes:
>>····@triple-i.com (Kirk Rader) writes:
>>
>>>In article <·················@oliphant.cogsci.ed.ac.uk> ···@cogsci.ed.ac.uk (Tim Bradshaw) writes:
>>
>
[Re: opportunity for the programmer to circumvent whatever problems 
there are with a given library implementation.]
>
>>I've found that Lisp typically gives more opportunity to do that.
>
>Well then, I can only say our experiences have been different.

Perhaps you haven't considered all the opportunities in Lisp, or
maybe you've used implementations that are excessively opaque.

>>Anyway, in Franz Lisp, "ports" are almost directly FILE *s.
>>Nothing prevents there being fd-based operations as well.
>>(Indeed, there's an fdopen.)
>
>Nothing prevents it, perhaps, but nothing encourages it particularly
>either.  

Nonsense.  It's encouraged by the desire to fit well with C and
Unix, to have efficient I/O, and so on.

>The fact that particular implementations may provide
>particular hooks into the underlying OS just seems to me to confirm
>the necessity from time to time of bypassing lisp's higher level
>functionality.  QED

What are you talking about?  There is no particular higher level
functionality associated with files in Lisp.

>>That Lisp has something extra doesn't mean it must always have
>>excess baggage.
>
>How do you get the "something extra" for free?

Who said anything about getting it for free?  Obviously you have to
implement it.  It doesn't just appear magically out of the air.  But
there are many extras that incur no extra run-time cost.

>>What do you mean?  Lisp uses the same OS operations C does.
>
>And it imposes a significantly more complex additional layer of
>functionality on top of it. 

It does?  Like what?

>>It suggests that the problems people observe with Lisp may not be
>>due to Lisp, since they can (evidently) have other causes.
>
>Or, as I believe is actually the case, that other systems sometimes
>suffer symptoms due to the same or similar causes as lisp.

How can they, if the causes in Lisp are due to Lisp's semantics?

>>Can you say something more about this for those of us who can't
>>use gr_osview on SGIs?
>
>Irix has a fairly complex buffer-management scheme of its own which is
>implemented at the lowest level of the filesystem and VM substrates.
>Due to the multiple layers of functionality referred to above, the I/O
>mechanisms of the particular lisp implementation that I was using in
>this example were causing unnecessary cache and page thrashing due to
>more memory being allocated per I/O operation than was necessary, or
>is typical in applications that use the standard library calls
>directly.

That doesn't sound like an inherent mismatch to me, especially
since a number of Lisps use stdio for their I/O.  It's poor
tuning or perhaps a poorly balanced OS.  Do C programs that
try to accomplish the same thing as the Lisp programs have
similar problems?  (There are always *some* C programs that
have the same problems as Lisp programs, because Lisp can
be translated into C, but I'm assuming we can set that aside.)

>>But they're not part of every serious implementation of every
>>Lisp dialect, unless you make it true by definition of "serious".

You often delete just a little too much context.  For readers who
have lost track, this is about lightweight processes/threads.

>If you want to exclude that particular example from your consideration
>on such grounds, by all means do so.  For the particular application
>on which I work and which I was using as an example, we could not use
>any implementation of any dialect that did not include such a
>facility, since it is a requirement of the design of our suite of
>integrated applications.

Ok.

>>And what is the problem for Lisp lw processes anyway?
>
>The same as with lisp memory-management.  The additional layer of
>OS-like functionality on top of the real OS.

And what does that layer do wrong?

>>The semantics of Lisp say objects (normally) have indefinite extent.
>>Implementations typically use GC as a way to reuse storage.  Whether
>>this has to make it harder for programmers to obtain acceptable
>>performance is not clear.  It depends on the implementation and
>>the application.
>>
>>Inclusion of lexical closures in a language does not slow down
>>cases that don't use them.  So how is this part of a consistent
>>choice of expressive power over highest possible performance?

>Because one can only avoid the overhead inherent in using
>lexical-closures by choosing not to use them (without even raising the
>issue of whether they get used in the run-time system, outside of the
>application programmer's control.) 

So to avoid complaints from people like you we can't add anything to
a language that might have a cost if it were used.  I'm having trouble
taking this seriously.  Next you'll be telling me we can't have various
built-in or library procedures because -- oh no! -- there's a cost
associated with calling them.

BTW, what do you think *is* the run-time overhead associated with
closures?

> If one must exercise greater
>awareness of performance issues in order to specifically avoid using
>exactly those features of lisp that make it more powerful in order to
>achieve acceptable performance for a particular application, then how
>could it be considered other than a handicap?

In many applications, only certain parts must be maximally efficient.
If Lisp makes it easier to write the other parts, and if it's not too
hard to either get the required efficiency in Lisp or to call procedures
written in some other language, then Lisp's still worth using.

Now when I write Lisp that needs to be efficient, I don't find myself
specifically avoiding features, much less "exactly those features of
lisp that make it more powerful".  Instead I use things that are
efficient.  Sure, sometimes, I have to avoid something that I
might otherwise be inclined to use, because of peculiarities of
certain implementations.  (For instance, sequence functions in KCL
are very slow.)

But the real problem with your position is that it supposes more
powerful features must have run-time costs.  That's just not so.

>  For such an application
>I would expect the programmer to be more productive and the
>application to require less debugging and performance tuning if it had
>been developed in C to start with. 

There are many cases where Lisp will be more efficient than C for
less work even though it might look the other way around.  If C
really is better in some cases, it would be useful to know what
they are, and likewise for the cases where Lisp is better.  (I'm
talking now about actual Lisps available at whatever time we're
considering.)  Your kind of attack on Lisp will give a misleading
impression of which cases are which.

> For other applications, where the
>nature of the application is such that lisp's more powerful features
>are an asset rather than a liability, the advantage would clearly go
>to lisp.

It's hard to disagree with a tautology.

-- jd
From: J W Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cuso8J.2J1@festival.ed.ac.uk>
···@arolla.idiap.ch (Thomas M. Breuel) writes:

>In article <··················@qobi.ai> ····@qobi.ai (Jeffrey Mark Siskind) writes:
>|In article <·················@arolla.idiap.ch> ···@arolla.idiap.ch (Thomas M. Breuel) writes:
>|
>|   The reason is that CL lacks
>|   important primitives for expressing some fundamental kinds of data
>|   abstractions in a space-efficient way (think about how much space your
>|   typical "struct { int x; double y; char z;};" takes as a CommonLisp
>|   DEFSTRUCT).
>|
>|What primitives does CL lack? The Scheme compiler that I am writing provides
>|a DEFINE-STRUCTURE that is essentially a subset of the CL DEFSTRUCT. And it
>|can produce *exactly* the code "struct { int x; double y; char z;};" for
>|(DEFINE-STRUCTURE FOO X Y Z) when type inference determines that the X slot
>|will only hold exact integers, the Y slot only inexact reals, and the Z slot
>|characters. Note that it does this without any declarations at all. I presume
>|that the same thing can be done for CL.

>Sorry, I mixed two arguments into one.

>Yes, for putting scalars into structures, you can generate reasonably
>efficient code, but few CL compilers actually do.

>However, when you nest those structures, put them into arrays, or pass
>them as arguments, you have extra pointer and heap overhead.

Which is what, exactly?

>Furthermore, the compiler cannot statically decide to optimize the
>pointer overhead away, since that is sometimes wrong (i.e., less
>efficient) and can also change semantics.

>It is incredibly useful to be able to specify reference vs. value
>semantics, and CL lacks the primitives for doing that.

In many cases, to an extent, ...
From: Tim Bradshaw
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <TFB.94Aug22193754@grieve.cogsci.ed.ac.uk>
* Kirk Rader wrote:
> C's "minimalist" philosophy makes it much easier for the
> implementation to provide alternative library entry points [open() vs
> fopen(), etc.]  and greater opportunity for the programmer to
> circumvent whatever problems there are with a given library
> implementation.  It is silly to suggest that C and Common Lisp are
> really on a par when it comes to the amount semantic "baggage" they
> carry for I/O or anything else.  If that were true, what advantage
> would lisp _ever_ have?

I suppose this is true.  If you are dealing with a crap implementation
of various standard libraries then C does allow you to rewrite them in
terms of system calls.

>> Since C doesn't have any built in memory management, I fail to see how
>> CL's can be different from it. [...]

> C's memory-management philosophy is to rely on the standard-library's
> interface to the OS.  I do not see how this could be more different
> from lisp's reliance on a built-in memory-management scheme.  And what
> possible relevance could it have that other software systems also
> exhibit similar performance problems to those of lisp?

Large applications written in C with C's `efficient' interface to the
OS turn out to spend as much time doing memory management as, or more
than, large programs written in Lisp using `inefficient' automatic
memory management.  Given fancy generational GCs it is also often the
case that Lisp has *better* locality than the behaviour given by the
ad hoc memory management of a given C program.  I can find references
for this if you want.

> I specifically referred to using gr_osview on SGI's.  In particular,
> it is easy to observe conflicts between lisp's memory management and
> I/O mechanisms and Irix's filesystem and memory-management
> mechanisms.

So your argument is that, on SGIs (with unspecified hardware
configurations), certain (unspecified) programs in (unspecified)
commercial CL implementations are slow.  I'm sure they are, but this
argument should not be taken to be an anti Lisp-in-general argument
without a *lot* of further evidence.  Telling us some details might be
a start.

>> Lightweight processes are
>> not part of CL BTW.

> But they are part of almost every "serious" implementation of every
> lisp dialect with which I am familiar, and I was not talking just about
> some particular implementation of some particular dialect.

No, you were talking about several implementations of several dialects,
evidently.  But not, for instance of anything that is inherent in
Lisp.

I think this discussion is all worn out now.  Obviously nothing I or
anyone can say, let alone the existence of counterexamples (again,
references on request since I have to get them from home) will
persuade you.

--tim
From: Thomas M. Breuel
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <TMB.94Aug24222912@arolla.idiap.ch>
In article <··········@pulitzer.eng.sematech.org> ······@swim1.eng.sematech.org (Bill Gooch on SWIM project x7151) writes:
|I am curious how you would find the presence of GC capability 
|a problem.  GC does not prevent you from doing your own memory 
|management if that's appropriate, it merely allows you to avoid 
|doing it until you see the need and have the opportunity.  

I think the problem with the presence of GC in CL is more subtle: CL
implementors rely on it "because it's there".  So, in one of those
run-of-the-mill CL implementations, you can't escape garbage
collection, even if you meticulously avoid consing in your own code,
because simple standard functions or even compiled code that has no
business even allocating heap memory will generate garbage.  And the
programmer can't even control it.  He is lucky if he can even figure
out where and why it is happening.

I have used GC in languages where the libraries and compilers were not
originally designed with GC in mind, and they are much better behaved
wrt. GC.  As I stated before, I think the CL language definition
should define clearly what can and must not cons under what
conditions.

				Thomas.
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cv5DsM.9q8@cogsci.ed.ac.uk>
In article <·················@arolla.idiap.ch> ···@idiap.ch writes:
>In article <··········@pulitzer.eng.sematech.org> ······@swim1.eng.sematech.org (Bill Gooch on SWIM project x7151) writes:
>|I am curious how you would find the presence of GC capability 
>|a problem.  GC does not prevent you from doing your own memory 
>|management if that's appropriate, it merely allows you to avoid 
>|doing it until you see the need and have the opportunity.  
>
>I think the problem with the presence of GC in CL is more subtle: CL
>implementors rely on it "because it's there".  So, in one of those
>run-of-the-mill CL implementations, you can't escape garbage
>collection, even if you meticulously avoid consing in your own code,
>because simple standard functions or even compiled code that has no
>business even allocating heap memory will generate garbage.  And the
>programmer can't even control it.  He is lucky if he even can figure
>out where and why it is happening.

Well, there used to be a view that the implementation shouldn't
allocate unless the user's code did something that required
allocation.  It sounds like we've moved backwards since the 70s!

-- jeff
From: Harley Davis
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <DAVIS.94Sep1110221@passy.ilog.fr>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:

   >I think the problem with the presence of GC in CL is more subtle: CL
   >implementors rely on it "because it's there".  So, in one of those
   >run-of-the-mill CL implementations, you can't escape garbage
   >collection, even if you meticulously avoid consing in your own code,
   >because simple standard functions or even compiled code that has no
   >business even allocating heap memory will generate garbage.  And the
   >programmer can't even control it.  He is lucky if he even can figure
   >out where and why it is happening.

   Well, there used to be a view that the implementation shouldn't
   allocate unless the user's code did something that required
   allocation.  It sounds like we've moved backwards since the 70s!

Good Lisp implementations will document the allocation behavior of
library functions.  (Well, ours does anyway.)

However, sometimes unexpected allocations might occur: For example, I
think all major generic-function based object system implementations
use method caches, and the first time you call a generic function it
might allocate the cache, or it might allocate effective methods, and
so on.  If this allocation didn't occur, you would either have very
slow generic function calls or massive pre-allocation leading to large
memory images.  (This behavior should also be documented for a given
implementation so it isn't completely unexpected.  You can also manage
this sort of storage manually to avoid any chance of a GC but I'm not
sure if any implementations actually do this.) As far as I know, the
method caching technology is post-70's.

-- Harley Davis
-- 

------------------------------------------------------------------------------
motto: Use an integrated object-oriented dynamic language today.
       Write to ····@ilog.com and ask about Ilog Talk.
------------------------------------------------------------------------------
nom: Harley Davis			ILOG S.A.
net: ·····@ilog.fr			2 Avenue Galliéni, BP 85
tel: +33 1 46 63 66 66			94253 Gentilly Cedex, France
From: Rob MacLachlan
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <345du6$g4h@cantaloupe.srv.cs.cmu.edu>
In article <··················@passy.ilog.fr>,
Harley Davis <·····@ilog.fr> wrote:
>
>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk writes:
>
>   >I think the problem with the presence of GC in CL is more subtle: CL
>   >implementors rely on it "because it's there".  [...] you can't escape
>   >garbage collection [...] even compiled code that has no business even
>   >allocating heap memory will generate garbage.
>
>   Well, there used to be a view that the implementation shouldn't
>   allocate unless the user's code did something that required
>   allocation.  It sounds like we've moved backwards since the 70s!
>
>Good Lisp implementations will document the allocation behavior of library
>functions.  (Well, ours does anyway.)  However, sometimes unexpected
>allocations might occur [...]

Agreed --- library functions are not the biggest problem.  There is often a
big difference between where implementors find consing necessary and where
users perceive it as obviously necessary.  In Common Lisp, the three big
offenders are:
 -- floating-point numbers,
 -- closures, and
 -- rest args.

With rest args, sophisticated users will expect the consing, but will be
helpless to do anything about it, since there is no other way to receive
variable numbers of arguments.

In a way, clever implementation makes floats and closure efficiency worse
(less predictable), since sometimes the consing will be optimized away, and
sometimes it won't.

Fortunately, this frivolous consing is easily reclaimed by generational
garbage collection.  The DYNAMIC-EXTENT declaration also provides a way to
allow implementations to stack-allocate rest args and closures (but many don't
take advantage of this declaration.)

  Rob
From: Thomas M. Breuel
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <TMB.94Sep2023108@arolla.idiap.ch>
|In a way, clever implementation makes floats and closure efficiency worse
|(less predictable), since sometimes the consing will be optimized away, and
|sometimes it won't.
|
|Fortunately, this frivolous consing is easily reclaimed by generational
|garbage collection.

This "frivolous consing" is absolutely unacceptable in numerical
code.  Even if collecting the floating point garbage were completely
free, just the cost of allocating the values and dereferencing them is
often much too expensive.  And, in fact, even systems with excellent
generational GC become very slow for numerical code whenever there is
any consing going on in the numerical code.

I really don't see why CL implementors are having so much trouble with
passing floating point arguments in registers.  Sure, you have to
worry about functions being redefined dynamically and all that, but
those don't seem like big challenges.

				Thomas.
From: Lawrence G. Mayka
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <LGM.94Sep4131854@polaris.ih.att.com>
In article <················@arolla.idiap.ch> ···@arolla.idiap.ch (Thomas M. Breuel) writes:

   This "frivolous consing" is absolutely unacceptable in numerical
   code.  Even if collecting the floating point garbage were completely
   free, just the cost of allocating the values and dereferencing them is
   often much too expensive.  And, in fact, even systems with excellent
   generational GC become very slow for numerical code whenever there is
   any consing going on in the numerical code.

I've sometimes wondered: Does floating-point numerical code, of the
kind you describe, typically require IEEE precision (double or
single)?  Or can such code often make do with, say, the same 8 bits of
exponent as IEEE single-precision but only 19 bits of significand
(FLOAT-DIGITS) instead of 24?  My real question is: how often can a
cons-less SHORT-FLOAT meet this need for fast floating-point numerics?
--
        Lawrence G. Mayka
        AT&T Bell Laboratories
        ···@ieain.att.com

Standard disclaimer.
From: Thomas M. Breuel
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <TMB.94Sep5044731@arolla.idiap.ch>
In article <················@polaris.ih.att.com> ···@polaris.ih.att.com (Lawrence G. Mayka) writes:
|I've sometimes wondered: Does floating-point numerical code, of the
|kind you describe, typically require IEEE precision (double or
|single)?

Single-precision is already pretty tight for many applications.

|Or can such code often make do with, say, the same 8 bits of
|exponent as IEEE single-precision but only 19 bits of significand
|(FLOAT-DIGITS) instead of 24?  My real question is, How often can a
|cons-less SHORT-FLOAT meet this need for fast floating-point numerics?

I suspect that you'd have a hard time supporting IEEE semantics if you
try to steal a few bits from either the mantissa or the exponent.
Also, even if it is cons-less, there is still overhead associated
with tagging and untagging.

Supporting passing unboxed numbers in registers is easy for declared
functions, even in an interactive, dynamically typed Lisp
environment.  There is no need for kludges like chopping bits off
an IEEE number.

				Thomas.
From: Henry G. Baker
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <hbakerCvnzvE.LJw@netcom.com>
In article <················@polaris.ih.att.com> ···@polaris.ih.att.com (Lawrence G. Mayka) writes:
>In article <················@arolla.idiap.ch> ···@arolla.idiap.ch (Thomas M. Breuel) writes:
>
>   This "frivolous consing" is absolutely unacceptable in numerical
>   code.  Even if collecting the floating point garbage were completely
>   free, just the cost of allocating the values and dereferencing them is
>   often much too expensive.  And, in fact, even systems with excellent
>   generational GC become very slow for numerical code whenever there is
>   any consing going on in the numerical code.
>
>I've sometimes wondered: Does floating-point numerical code, of the
>kind you describe, typically require IEEE precision (double or
>single)?  Or can such code often make do with, say, the same 8 bits of
>exponent as IEEE single-precision but only 19 bits of significand
>(FLOAT-DIGITS) instead of 24?  My real question is, How often can a
>cons-less SHORT-FLOAT meet this need for fast floating-point numerics?

IEEE 'single precision' (32 bits) is considered essentially worthless by
most numerical analysts.  Even John von Neumann's original article on the
'von Neumann architecture' argued for a minimum of 38-40 bits.

I've been told that IEEE single precision is only good for glorified
signal processing applications, and even then, the lack of a decent
double precision accumulator limits its usefulness.

Even IEEE double (64 bits) is minimal, with many microprocessor FPUs
computing internally in an 80-bit extended format.

If storage for these boxed floats is managed using some of the techniques
of 'linear logic', the cost of storage management should still be dominated
by (or overlapped with) the computational cost, on all but the fastest
implementations.

References on linear-logic storage management:

'Sparse Polynomials and Linear Logic'.  ACM SIGSAM Bull. 27, 4 (Dec. 1993),
10-14.

'A Linear Logic Quicksort'.  ACM SIGPLAN Notices 29, 2 (Feb. 1994), 13-18.
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cvo0zM.HBA@cogsci.ed.ac.uk>
In article <················@arolla.idiap.ch> ···@idiap.ch writes:
>|In a way, clever implementation makes floats and closure efficiency worse
>|(less predictable), since sometimes the consing will be optimized away, and
>|sometimes it won't.
>|
>|Fortunately, this frivolous consing is easily reclaimed by generational
>|garbage collection.
>
>This "frivolous consing" is absolutely unacceptable in numerical
>code.  Even if collecting the floating point garbage were completely
>free, just the cost of allocating the values and dereferencing them is
>often much too expensive.

But just how expensive is it?  10%?  A factor of 3?

>  And, in fact, even systems with excellent
>generational GC become very slow for numerical code whenever there is
>any consing going on in the numerical code.

How slow is "very slow"?

-- jd
From: Thomas M. Breuel
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <TMB.94Sep5232556@arolla.idiap.ch>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
|>This "frivolous consing" is absolutely unacceptable in numerical
|>code.  Even if collecting the floating point garbage were completely
|>free, just the cost of allocating the values and dereferencing them is
|>often much too expensive.
|
|But just how expensive is it?  10%?  A factor of 3?
|
|>  And, in fact, even systems with excellent
|>generational GC become very slow for numerical code whenever there is
|>any consing going on in the numerical code.
|
|How slow is "very slow"?

That depends very much on the code and the compiler.  And, of course,
I have only bothered to investigate when I noticed that some numerical
inner loop wasn't performing even close to the way it should.

So, I don't know what the average cost is, but I have experienced
cases where the slowdown due to tagging, dereferencing, and/or consing
was a factor of up to 10 (I would guess a typical near-worst case is
more like a factor of 4).

Using block compilation and adding the correct declarations or
expressing the operation in terms of some other function would usually
fix it, even if it took a while to figure out the right incantation.

Note that some of the problems are probably also due to other
optimizations being blocked by using the general calling sequence.
But those are real costs that one has to worry about when getting code
to run fast.

				Thomas.
From: Thomas M. Breuel
Subject: Re: GC vs. malloc WRT locality
Date: 
Message-ID: <TMB.94Aug24230759@arolla.idiap.ch>
In article <··········@cantaloupe.srv.cs.cmu.edu> ········@cs.cmu.edu (Scott Nettles) writes:
|I'd love to see these!  But as far as I know (and I'm pretty up to date,
|these kinds of issues form the basis of much of my research) they don't
|exist.  There has been a lot of speculation that this is true, but I don't
|know of any direct evidence that it's true.  There is good evidence that
|suggests that locality for Lispy things (scheme and SML are the ones I know
|of) is no WORSE than C.  

Generally, the way people control locality in C is by explicitly
forcing things to be near one another.  For example, they will create
an array of objects and use that as a storage pool to allocate objects
out of.  Because of the way pointers work in C, that kind of
optimization can be added to existing code easily and is very
popular.  Deallocation also becomes very simple and extremely
efficient.  In some situations, a topological sort of the data
structures in the array may also be of benefit.  You really can't get
much better than that.

Of course, any of these require a bit of effort to implement.  But I
think the reason why C is popular is that at least you can do those
things when you have to.

On the other hand, I'm also not so sure whether locality really
matters as much anymore as it used to.  Locality for the benefit of
the VM is largely uninteresting: VM is so slow compared to memory and
CPUs these days that you can't afford to page.  What you still do have
to worry about is cache effects, but they are often more complicated
than just locality.

				Thomas.
From: Barry Margolin
Subject: Re: GC vs. malloc WRT locality
Date: 
Message-ID: <barmarCv3B9B.Fwu@netcom.com>
In article <·················@arolla.idiap.ch> ···@idiap.ch writes:
>On the other hand, I'm also not so sure whether locality really
>matters as much anymore as it used to.  Locality for the benefit of
>the VM is largely uninteresting: VM is so slow compared to memory and
>CPUs these days that you can't afford to page.

That's the whole point of locality -- reducing paging.  Increasing
locality reduces your working set, making it more likely that most of
the important data will fit (and stay) in memory.
-- 
Barry Margolin                                                ······@netcom.com
From: David Gadbois
Subject: Re: GC vs. malloc WRT locality
Date: 
Message-ID: <33lub6$3lf@peaches.cs.utexas.edu>
Barry Margolin <······@netcom.com> wrote:
>In article <·················@arolla.idiap.ch> ···@idiap.ch writes:
>>Locality for the benefit of the VM is largely uninteresting: VM is
>>so slow compared to memory and CPUs these days that you can't afford
>>to page.
>
>That's the whole point of locality -- reducing paging.  Increasing
>locality reduces your working set, making it more likely that most of
>the important data will fit (and stay) in memory.

And, even in this day of big, cheap main memories, there are still
applications out there where: 1) It is not economically feasible to
have enough main memory to hold all the data at once; and 2) Manual
tertiary storage management is impractical.  In these cases, it is
worth it to make Herculean efforts to avoid a zillion-cycle page
fault.

Folks have been down on demand-paged VM lately, and, to be sure, the
real thing is always better, but it is much worse to be limited in
what you can do solely by the amount of real memory you have.

--David Gadbois
From: Martin Rodgers
Subject: Re: GC vs. malloc WRT locality
Date: 
Message-ID: <777934455snz@wildcard.demon.co.uk>
In article <·················@arolla.idiap.ch>
           ···@idiap.ch "Thomas M. Breuel" writes:

> On the other hand, I'm also not so sure whether locality really
> matters as much anymore as it used to.  Locality for the benefit of
> the VM is largely uninteresting: VM is so slow compared to memory and
> CPUs these days that you can't afford to page.  What you still do have
> to worry about is cache effects, but they are often more complicated
> than just locality.

This is my own feeling, after watching the poor performance of apps
on my machine, and reading users' comments about the low performance
on their machines.  In the early 80s I read about P-System developers
using machines (like the Sage-II) several times more powerful than
the machines their users ran the apps on (like the Apple-II).

I can't see any difference today, except that we now have VM to make
a machine thrash and run even slower.

-- 
Future generations are relying on us
It's a world we've made - Incubus	
We're living on a knife edge, looking for the ground -- Hawkwind
This space is reserved for my home page URL.
From: Kirk Rader
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cv3IJK.460@triple-i.com>
In article <··········@pulitzer.eng.sematech.org> ··········@sematech.org (Bill Gooch) writes:

[...]

>
>I am curious how you would find the presence of GC capability 
>a problem.  GC does not prevent you from doing your own memory 
>management if that's appropriate, it merely allows you to avoid 
>doing it until you see the need and have the opportunity.  

I am torn between a desire to see this seemingly endless thread
finally die, and an unwillingness to simply ignore a reasonable
question.  Oh well...

One problem is that even if you recognize an opportunity to improve
performance for a given application by taking some aspect of memory
management out of the purview of the GC, there will still be many
aspects of the run-time system over which you have no control unless
you are the implementor of the lisp as well as the application
developer.

Another problem is that it is often the case that even where an
off-the-shelf implementation gives one the ability to write foreign
code which calls malloc and free, these are non-standard versions of
the library functions which impose their own set of inefficiencies
relative to what you would obtain using the platform vendor supplied
routines.

Both of the above can be considered implementation issues.  The most
general problem is that the more time one must spend dealing with
these sorts of issues, and the more such higher-level features one
must choose not to use or take pains to work around, the less obvious
it is that using lisp has retained the very advantages that do, in
fact, make it a better choice of language for those applications which
make good use of those features.  This is why I believe that it is a
fairly complex problem to determine for any given application what
language is objectively better.

Kirk Rader
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cv3vu4.C03@cogsci.ed.ac.uk>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>In article <·················@grieve.cogsci.ed.ac.uk> ···@cogsci.ed.ac.uk (Tim Bradshaw) writes:
>>
>>So is your argument that, on SGIs (with unspecified hardware
>>configurations), certain (unspecified) programs in (unspecified)
>>commercial CL implementations are slow.  I'm sure they are, but this
>>argument should not be taken to be an anti Lisp-in-general argument
>>without a *lot* of further evidence.  Telling us some details might be
>>a start.
>
>You have understood my argument more or less exactly, except to think
>that it is "anti-lisp".  How many times need I repeat that I think
>lisp is an ideal choice of a programming language for many
>applications?  I just happen to think that it is also a poor choice
>for many others.  The same is true of C and every other language.

C is a language, but Lisp isn't.  What's the syntax of Lisp?
Is it statically or dynamically scoped?  Is it object-oriented
or not?  Is "everything a list" or is there a wide range of
data structures not represented as lists?  Can lists beginning
with "lambda" be used as functions?  Is it possible to look at
source code and tell for sure what functions might be called?
Is there a way to define macros?  Are arguments evaluated when
a function is called, or is it "lazy"?

These questions can all be answered for particular varieties of
Lisp but not for Lisp.

Even if we confine ourselves to Common Lisp, there are many
implementation strategies in the C-ward direction (and in other
directions) that people have only begun to explore.

It's hard to draw conclusions about Lisp in general.  I'm not trying
to quibble about the meaning of "Lisp".  If someone says "C is better
than Lisp for writing portable interpreters", then I don't say "oh,
no, there might be some potential Lisp that's better than C".  But
if someone says "Lisp is inherently worse than C for writing portable
interpreters", then I might well say something like this: "it's
turned out that way, in part for historical reasons, but it might
have turned out differently" or "existing implementations (or, in
some cases, Lisp-family languages) impose too much `environment' 
that may not be desired, but there are other ways of implementing
Lisp that would not have that problem".

>>No, you were talking about several implementations of several dialects,
>>evidently.  But not, for instance of anything that is inherent in
>>Lisp.
>
>No, I was talking about a particular instance of performance problems
>in response to someone else's request for specific examples of the
>kind of issues about which I was talking.  I know of no way to provide
>empirical examples other than in reference to the use of particular
>features of actual implementations.  However, I do not believe that
>the basic fact that different languages are optimized by their design
>for different types of applications is really so controversial or
>counter-intuitive a premise as to need much defence.

And if you'd confined yourself to such non-controversial claims,
I, for one, wouldn't have posted anything at all!

But how can you tell to what extent a problem is intrinsic to Lisp
rather than to the peculiarities of certain varieties of Lisp or
of particular implementations?  An implementation that tried to
do as well as possible for real-time processing might be very
different from one that had different goals.

Or suppose someone tries to design a Lisp that's suitable for
an application area where Lisp has not been a good choice.  Is
this doomed to failure?  Or could it be, instead, that Lisp
is a sufficiently inclusive category that a different kind of
Lisp might do the trick?

-- jeff
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <CvGIz3.FsJ@cogsci.ed.ac.uk>
Just when I think the discussion may have taken a reasonable turn,
I get this.

I hope that some day comp.lang.lisp is used to discuss how Lisp can be
improved for applications where it performs poorly now rather than why
such attempts are bound to sacrifice features or irritate Lisp "purists".

In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>
>[...]
>
>>It's hard to draw conclusions about Lisp in general.  I'm not trying
>>to quibble about the meaning of "Lisp".  
>
>It seems to me that is exactly what you are doing.

But every time you say what I include in "Lisp" you get it wrong,
supposing that I mean something ridiculously inclusive that I can
exploit to reach absurd conclusions.

Sure, *you* interpret me that way, but it's not what I'm doing.

Lisp is not confined to existing languages and implementations.  What
I've said about possibilities is based on existing or past Lisps, on
trends neglected while the 80s "big Lisps" ruled, on the aims and
desires of a number of people who have been developing new varieties
of Lisp, and on current research directions.

>>                                         If someone says "C is better
>>than Lisp for writing portable interpreters", then I don't say "oh,
>>no, there might be some potential Lisp that's better than C".  But
>>if someone says "Lisp is inherently worse than C for writing portable
>>interpreters", then I might well say something like this: "it's
>>turned out that way, in part for historical reasons, but it might
>>have turned out differently" or "existing implementations (or, in
>>some cases, Lisp-family languages) impose too much `environment' 
>>that may not be desired, but there are other ways of implementing
>>Lisp that would not have that problem".
>
>I believe this equally to be a meaningless quibble,

That *what* is?  That there are other ways of implementing Lisp?

>   But for the record
>I agree that how the term "lisp" has come to be understood the way it
>has is a matter of historical accident.

What a concession.  Gee, I guess you're a reasonable sort after all.

>[...]
>
>>But how can you tell to what extent a problem is intrinsic to Lisp
>>rather than to the peculiarities of certain varieties of Lisp or 
>>of particular implementations?  An implementation that tried to
>>do as well as possible for real-time processing might be very
>>different from one that had different goals.
>
>And would, undoubtedly, result in a dialect which lisp "purists" on
>comp.lang.lisp would decry as being some sort of bastard hybrid which
>excluded all of the features that make lisp so great (have you paid
>attention to any of the dylan lists, by any chance?)

Bull.  You think that because you don't believe Lisp can be
better in relevant ways.  You are wrong.  It easily can be,
and some Lisps already are.

Have *you* paid attention to comp.lang.dylan?  If you had, you'd
have noticed my articles there and not tried this ploy.  Or is
this just phase one, so you can exploit what I've said there
(which, if so, you are bound to get wrong).

>>Or suppose someone tries to design a Lisp that's suitable for
>>an application area where Lisp has not been a good choice.  Is
>>this doomed to failure?  Or could it be, instead, that Lisp
>>is a sufficiently inclusive category that a different kind of
>>Lisp might do the trick?
>
>Again, I suspect that such an attempt is doomed to failure only if it
>tried to solve all of the real-time and similar problems of current
>lisps while retaining all of their semantics and features. 

An absurd requirement.  No Lisp has all the features of current Lisps.

Now, if you were to say Common Lisp, or some other particular
Lisps, couldn't do it and retain all their features, you might
well be right.  

> A variant
>of lisp which was as suitable as C for the applications for which C is
>better would, undoubtedly, be as bad as C for applications for which
>what is currently understood as "lisp" is better.

Nonsense.  There's plenty of scope for Lisp to do better relative
to C without becoming C, something you are remarkably reluctant to
acknowledge.

Now, if you were to argue that no one Lisp-family language could do
_everything_ C does just as well as C, you might have a valid point.
Other semi-tautological claims will likewise meet little argument.

-- jeff
From: Kirk Rader
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cvo0vJ.Kwu@triple-i.com>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>Just when I think the discussion may have taken a reasonable turn,
>I get this.
>
>I hope that some day comp.lang.lisp is used to discuss how Lisp can be
>improved for applications where it performs poorly now rather than why
>such attempts are bound to sacrifice features or irritate Lisp "purists".

[...]

How lisp can be improved is one valid use for comp.lang.lisp.  Another
valid use is in requesting and offering advice on how to use lisp, or
how to avoid the potential problems one might encounter in using it.
One such request for information and advice came, and the response
amounted to "criticisms of lisp as being too big or too slow are all
just lisp bashing."  I responded that this was not true and that for
some kinds of applications the costs of using lisp outweigh its
benefits, while explicitly stating that for other applications the
opposite is true, so that care should be taken when choosing a
language.  _You_ were the one who chose to interpret this as some sort
of general denunciation of all possible lisp dialects, past or future.
Use whatever rhetorical devices you wish, but I believe my position in
this has been more moderate and reasonable than yours.
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <CvqDDB.1FA@cogsci.ed.ac.uk>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>>Just when I think the discussion may have taken a reasonable turn,
>>I get this.
>>
>>I hope that some day comp.lang.lisp is used to discuss how Lisp can be
>>improved for applications where it performs poorly now rather than why
>>such attempts are bound to sacrifice features or irritate Lisp "purists".
>
>[...]
>
>How lisp can be improved is one valid use for comp.lang.lisp.  Another
>valid use is in requesting and offering advice on how to use lisp, or
>how to avoid the potential problems one might encounter in using it.

Something I have never disagreed with.

However, it helps to get it right.

>One such request for information and advice came, and the response
>amounted to "criticisms of lisp as being too big or too slow are all
>just lisp bashing."  I responded that this was not true and that for
>some kinds of applications the costs of using lisp outweigh its
>benefits, while explicitly stating that for other applications the
>opposite is true, so that care should be taken when choosing a
>language.  _You_ were the one who chose to interpret this as some sort
>of general denunciation of all possible lisp dialects, past or future.
>Use whatever rhetorical devices you wish, but I believe my position in
>this has been more moderate and reasonable than yours.

Bull.  You introduced claims about inherent problems of Lisp
and you offered _general_ arguments about the costs of richer
semantics.

The simple fact is that if you'd confined your remarks to the sort
you make here, I wouldn't have posted any reply.

Moreover, if you make claims about Lisp, I can certainly point
out other facts about Lisp that give a different impression.
You could have ended any difficulties there by agreeing with me.
But, in fact, you don't agree, despite your repeated assertions
that you're making only uncontroversial claims like the one above
and that there's no real disagreement between us.

-- jd
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cw35tA.8AE@cogsci.ed.ac.uk>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>>In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>>>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>>>>Just when I think the discussion may have taken a reasonable turn,
>>>>I get this.
>>>>
>>>>I hope that some day comp.lang.lisp is used to discuss how Lisp can be
>>>>improved for applications where it performs poorly now rather than why
>>>>such attempts are bound to sacrifice features or irritate Lisp "purists".
>>>
>>>[...]
>>>
>>>How lisp can be improved is one valid use for comp.lang.lisp.  Another
>>>valid use is in requesting and offering advice on how to use lisp, or
>>>how to avoid the potential problems one might encounter in using it.
>>
>>Something I have never disagreed with.
>>
>>However, it helps to get it right.
>
>That is, obviously, a matter of point of view.  I believe myself to
>have "gotten it right" and have received any number of private email
>messages indicating that some people, at least, agree that I have done
>so.  I don't doubt that you have received the same in support of your
>positions -

But I do doubt it.  I have received exactly one.

> so simply making this sort of unsupported assertion that

So I have to repeat the support in every article?

BTW, when in this have you offered advice on how to avoid potential
problems?  What you've said is Lisp is unsuitable.

>I'm wrong does nothing but exacerbate the ill-will in this thread that
>I thought we each were making efforts to ameliorate.

I don't have any ill will towards you personally, but for some
reason the line you took in the message I was answering really
annoyed me.

Anyway, I will also make a "point of view" remark.  What we're really
told about Lisp and applications is not that Lisp is right (or
wrong) for application X but that some person (not always you) finds 
it so.  That's why I want enough information to be posted so that
everyone can make up their own minds.  What are the costs?  
What would have to change in the Lisp implementation to reduce
them?  And so on.

Compare this thread to the one on "data bloat".  There, the problem
was quantified, and various ways to reduce it were discussed.  That's the
kind of critical thread I'd like to see.  This one is just frustrating
and annoying.

>>>One such request for information and advice came, and the response
>>>amounted to "criticisms of lisp as being too big or too slow are all
>>>just lisp bashing."  I responded that this was not true and that for
>>>some kinds of applications the costs of using lisp outweigh its
>>>benefits, while explicitly stating that for other applications the
>>>opposite is true, so that care should be taken when choosing a
>>>language. 

I believe that I could have made that point without seeming to suggest
anything more general, or could quickly have amended my presentation.
Since I do not believe that I am especially skilled at such things, I
have to believe that you could do so as well.  It seems to me that you
think Lisp will inherently have problems in certain application areas
and not just that implementations currently have such problems.
For otherwise, I am baffled as to why you said anything along those
lines at all.

>>>        _You_ were the one who chose to interpret this as some sort
>>>of general denunciation of all possible lisp dialects, past or future.
>>>Use whatever rhetorical devices you wish, but I believe my position in
>>>this has been more moderate and reasonable than yours.

>>Bull.  You introduced claims about inherent problems of Lisp
>>and you offered _general_ arguments about the costs of richer
>>semantics.
>
>Again: _only in the context of a discussion of application-specific
>performance issues_.

But about *Lisp*, not about particular implementations.  Is it
still not clear that that's what's at issue here?  I know you
don't say Lisp loses for *all* applications.  That was never
in dispute!

>  When people have suggested that my remarks could
>be interpreted as making more general claims I have repeatedly both
>agreed that it is possible, though not currently common-place, to deal
>with these issues via language design and implementation and pointed
>out that the appearance of my even making such general claims was (I
>still believe) mainly the result of selective quoting out of context.

What did they mean in context that was different?  How does (for
instance) the following not mean what I take it to mean, namely that 
*Lisp* has unavoidable problems and that higher-level features 
(unless "trivial") have them as well?

  I do feel that lisp, i.e. the family of languages commonly regarded
  as lisp dialects, does have as a group certain features that make
  it well suited to certain tasks and ill suited to others.  I do feel
  that higher-level features necessarily have costs, with the specific
  stipulation that it is possible to have trivial additional features
  or purely syntactic conveniences with so little extra cost that they
  can be regarded as free.

If the problems were avoidable, how could Lisp be ill suited?
It would be only languages and implementations that failed to
avoid the problems that were ill suited.

>Having said all that, I still have seen no convincing argument from
>you on how you think it possible that a language could both retain a
>significant percentage of the additional features of present lisps and
>also be suitable for a significant percentage of the kinds of
>applications for which present lisps are not well suited.  Until I see
>any such convincing arguments, I will persist in my belief that
>additional features don't come for free.

So how have I distorted what you've said?

Now the simple fact is that a number of past and current Lisps have
retained a significant fraction of the usual features of Lisp while
becoming suitable for a much greater range of applications.  I see
no reason why that cannot continue.  There may be limits, but why
suppose we can say now what they are?

Moreover, no one Lisp has to do everything, and a number of
features of current Lisps (esp Common Lisp) could well be
dropped.

I am not claiming Lisps *are* suitable for anything they're not
suitable for.  But you are claiming (or persisting in the belief
that) they cannot be.  You haven't shown that to be so or how
great the cost will be.  You won't even say how great are the
costs you've observed.  Moreover, you've done nothing to show
it will be difficult to avoid the cases that are "expensive"
when it's necessary to do so.

My arguments are all aimed at leaving open possibilities that are
in fact open.  There's nothing more that I've claimed or need
to prove (apart from the meta-junk which I hope we can eventually
drop).

>I have never said that there is _no_ real disagreement between us.
>I do think that the substantive issues on which we disagree are not
>worth the length or heatedness of this thread, and have proposed
>ending it any number of times.  I am not willing, however, to simply
>let this sort of message stand unanswered.

And I am not willing to let it stand that the behavior of
current implementations tells us what's possible for Lisp,
because it's not true.

BTW, I know of a number of applications for which all Lisps I've
looked at (a great many) are unsuitable.

-- jd

PS I'll try to answer your longer article with less heat,
but the dictionary quoting at the start was more than I could
face today.
From: J W Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cusnxr.1n6@festival.ed.ac.uk>
····@triple-i.com (Kirk Rader) writes:

>In article <·················@arolla.idiap.ch> ···@idiap.ch writes:

>>So, if Irix can't cope with Lisp, that's a problem with Irix.  And
>>that problem will not just bite you with Lisp, but also with many
>>other kinds of applications.

>As someone who makes his living creating software for SGI's I cannot
>afford to use any tool that does not run well under Irix, whether or
>not Irix is particularly well designed.  The fact is that there is a
>whole SGI software industry, and if lisp evangelists would like to see
>more cases of lisp being used there, they would have a greater
>likelihood of success by suggesting that the lisp vendors do a better
>job of accommodating the platform than that the platform vendor
>accommodate lisp.  It's a simple matter of economics.

That I agree with.  But do you accept that Lisp vendors could
do a better job, or are there inherent properties of Lisp that
prevent them from doing so (or from doing it to a sufficient extent)?
From: Kirk Rader
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cuw2nv.J5E@triple-i.com>
In article <··········@festival.ed.ac.uk> ····@festival.ed.ac.uk (J W Dalton) writes:

[...]

>
>That I agree with.  But do you accept that Lisp vendors could
>do a better job, or are there inherent properties of Lisp that
>prevent them from doing so (or from doing it to a sufficient extent)?
>


Well, both.  I know that some vendors, at least, are working very hard
to correct many of the particular kinds of performance issues I have
raised.  But I just don't see how it can be thought that one language
which is inherently more powerful than another won't necessarily have
made some performance trade-offs in order to achieve that extra power.
So, I expect that it will always be the case that there will be some
applications that are best done in a lower-level language.  I don't
anticipate a time when some "one size fits all" language will have
been developed that is suitable for every type of application.

Kirk Rader
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cv3nx9.7Kr@cogsci.ed.ac.uk>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>In article <··········@festival.ed.ac.uk> ····@festival.ed.ac.uk (J W Dalton) writes:
>
>[...]
>
>>
>>That I agree with.  But do you accept that Lisp vendors could
>>do a better job, or are there inherent properties of Lisp that
>>prevent them from doing so (or from doing it to a sufficient extent)?
>
>Well, both.  I know that some vendors, at least, are working very hard
>to correct many of the particular kinds of performance issues I have
>raised.  But I just don't see how it can be thought that one language
>which is inherently more powerful than another won't necessarily have
>made some performance trade-offs in order to achieve that extra power.

It's pretty easy: just think of the costs moving to compile-time.

Of course, maybe you're trying to make your claim true by definition
(of inherently more powerful).  If so, then read me as saying that
Lisp isn't inherently more powerful (at least not so far as anyone
has shown in this discussion).

>So, I expect that it will always be the case that there will be some
>applications that are best done in a lower-level language.  I don't
>anticipate a time when some "one size fits all" language will have
>been developed that is suitable for every type of application.

I have no quarrel with claims like that.  But I do with, say, articles
that seem to suggest that Lisp's I/O is bound to be inefficient or
that a language that includes closures should be used only for
applications where closures are really needed:

  Because one can only avoid the overhead inherent in using
  lexical-closures by choosing not to use them [...]  If one must
  exercise greater awareness of performance issues in order to
  specifically avoid using exactly those features of lisp that make it
  more powerful in order to achieve acceptable performance for a
  particular application, then how could it be considered other than a
  handicap?

-- jd
From: Tim Bradshaw
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <TFB.94Aug11164906@sorley.cogsci.ed.ac.uk>
* Jeff Dalton wrote:
> I would agree that Lisp can do reasonably well on RISC machines,
> but the point of Lisp machines was not just to make Lisp fast
> but also to make it fast and safe at the same time and fast 
> without needing lots of declarations.

> Recent Lisp implementations (especially CMU CL) have gone a fair
> way towards making it easy to have safe, efficient code on RISC
> machines, but it may always require a somewhat different way of
> thinking.  (Not a bad way, IMHO, but different from LM thinking
> nonetheless.)

Could any of the lisp machines do fast floating point, without
declarations?  I know maclisp was rumoured to be able to (on
stock hardware even!) but did it use declarations?

I'd be interested in knowing how fast modern stock-hardware lisps do
per `MIPS' cf the special-architecture things.  Of course this is
probably a seriously hard comparison to do meaningfully for all sorts
of reasons.

> But what is this about "the way people wrote lisp systems in the 70s"?
> What sort of thing do you have in mind?  Lisps written in assembler
> that could run in 16K 36-bit words?  (Presumably not.)

Well the MIT lispms and I think the Xerox dmachines are basically
`70s technology' to my mind, that's what I meant.

--tim
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <CuDu8v.3xp@cogsci.ed.ac.uk>
In article <·················@sorley.cogsci.ed.ac.uk> ···@cogsci.ed.ac.uk (Tim Bradshaw) writes:
>* Jeff Dalton wrote:
>> I would agree that Lisp can do reasonably well on RISC machines,
>> but the point of Lisp machines was not just to make Lisp fast
>> but also to make it fast and safe at the same time and fast 
>> without needing lots of declarations.
>
>> Recent Lisp implementations (especially CMU CL) have gone a fair
>> way towards making it easy to have safe, efficient code on RISC
>> machines, but it may always require a somewhat different way of
>> thinking.  (Not a bad way, IMHO, but different from LM thinking
>> nonetheless.)
>
>Could any of the lisp machines do fast floating point, without
>declarations? 

I don't know how fast their floating point was but, so far as I
know, declarations made no difference.

> I know maclisp was rumoured to be able to (on
>stock hardware even!) but did it use declarations?

Yes.

>> But what is this about "the way people wrote lisp systems in the 70s"?
>> What sort of thing do you have in mind?  Lisps written in assembler
>> that could run in 16K 36-bit words?  (Presumably not.)
>
>Well the MIT lispms and I think the Xerox dmachines are basically
>`70s technology' to my mind, that's what I meant.

Well, when you talk about "the way people wrote Lisp systems in
the 70s" that sounds like you're talking about pretty much everyone
and about most of the 70s.  There were plenty of non-LM Lisps in
the 70s, and Lisp machines didn't really take off until the early
80s when lots of people (mistakenly) thought specialized hardware
was the way to go.

-- jeff
From: Bill Gooch on SWIM project x7151
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <32e7jg$fad@pulitzer.eng.sematech.org>
In article <·················@sorley.cogsci.ed.ac.uk>, ···@cogsci.ed.ac.uk (Tim Bradshaw) writes:
|> Could any of the lisp machines do fast floating point, without
|> declarations? 

The following comments pertain to Symbolics machines:

For single precision, yes, because it doesn't require boxing.  
Symbolics sold some decent floating point accelerator boards.

Double precision is a horse of a different color, because of boxing. 
Accelerators didn't help noticeably because the boxing overhead was
heavily dominant over computation time anyway.  We were able to get 
very substantial improvements in double precision with a single pre-
cision FPA by keeping all the numbers unboxed and using subprimitives
that operate on unboxed args and return unboxed results.  This is of
course very specialized and hard-to-maintain code, but it did give us
good results performance-wise.  I don't remember the exact benchmark
results, but suffice it to say that they were quite competitive with 
using C on alternative stock hardware with floating point acceleration.

Symbolics later came out with fast DP floating point acceleration, but 
they somehow missed the point because they never put the hooks into 
their compiler so that one could just use declarations to get decent 
performance by avoiding boxing.  This meant that their FPA hardware
was essentially useless to anyone wanting to do DP who didn't want to
get into the kind of coding we did (for which, I should add, we needed
help and some compiler-macro code from David Plummer at Symbolics).
From: Anthony Berglas
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <3295lh$ig0@uqcspe.cs.uq.oz.au>
Clearly one should not require low-level tricks to write fast code.
Below are two examples in C (not C++, reportedly slightly slower) and
Lisp.  The first calculates prime numbers using a crude algorithm; CMUCL
beats Sun C and gcc (it kills Prolog and, I suspect, most other
"interpretive" languages).  The second is a neural net test, in which
CMUCL just beats Sun C, and is just beaten by gcc (optimized, of
course).

Note that neither of these applications is a traditional Lisp one ---
there is not a list in sight.

The syntax for declarations is *Awful*, but they only need to be added
to the 10% of code that takes 90% of the time.  In fact, most of the
declarations I have used make little difference.  C programmers might prefer

(Fixnum ((I 10)) ...) or (Let+ ((I Fixnum 10)) ...)
to
(Let ((I 10)) (Declare (Fixnum I)) ...)

Personally I prefer Let+, allowing an optional third argument, but it is
easy to write your own macro.  (Try doing that in C++!)

The advantage of Lisp over other interpretive languages is that it can
be, and IS, compiled efficiently.  However, Microsoft has dictated that
some programs are to be written in Visual Basic, and others in C++,
impedance mismatch being good for the soul, so who are we to argue?
IMHO the lisp community has only itself to blame for not providing a
standard option of a conventional syntax --- syntax is always more
important than semantics.

Anyway here's the code.

-------- Primes ---------

#include <stdio.h>
#include <stdlib.h>  /* atoi */
#include <assert.h>

int prime (n)
int n;
/*"Returns t if n is prime, crudely.  second result first divisor."*/
{ int divor;
  for (divor=2;; divor++)
  {  /*printf("divor %d n/divor %d n%%divor %d    ", divor, n/divor, n%divor);*/
     if (n / divor < divor)  return 1;
     if (n % divor == 0) return 0;
}  }

main(argc, argv)
int argc;
char **argv;
{ int n, sum=0, p, i;
  assert(argc == 2);
  n = atoi(argv[1]); printf("n %d, ", n);
for(i=1; i<=10; i++) { sum =0;
  for (p=0; p<n; p++)
  { /*printf("\nprime(%d): %d ", p, prime(p));*/
    if (prime(p))
      sum += p * p;
  }}
  printf("Sum  %d\n", sum);
}

(declaim (optimize (safety 0) (speed 3)))

(declaim (start-block primes))
(declaim (inline prime))
(deftype unum () '(unsigned-byte 29))

(defun prime (nn)
"Returns t if n is prime, crudely.  second result first divisor."
  (declare (type unum nn))
  (do ((divor 2 (+ divor 1)))
      (())
     (declare (type unum divor) (inline Floor))
     (multiple-value-bind (quot rem) (floor nn divor)
			  (declare (type unum quot rem))
       (when (< quot divor) (return t)) ; divor > sqrt
       (when (= rem 0) (return (values nil divor))) )))

(defun primes (n)
  "Returns sum of square of primes < n, basic algorithm."
  (declare (type unum n))
  (let ((sum 0))
;   (declare (integer sum))
    (dotimes (i 10)
;      (print sum)
      (setf sum 0)
      (do ((p 0 (the t (1+ p))))
	  ((>= p n))
	(declare (type unum p))
        (when (prime p)
	  (incf sum (* p p)) )))
    sum))

(declaim (end-block))



% crude prime tester.
prime(N):- test(N, 2), !, fail.
prime(N).
test(N, M):- N mod M =:= 0.
test(N, M):- J is M + 1, J * J =< N, test(N, J).

primes(P, S):- prime(P), Q is P - 1, Q > 0, primes(Q, T), S is T + P * P.
primes(P, S):- Q is P - 1, Q > 0, primes(Q, S).
primes(P, 0).

------------ Neural Net ---------

#include <stdio.h>
#include <stdlib.h>  /* random */
#include <math.h>

#define size 5

float w[size][size];
float a[size];
float sum;

main()
{
	int epoch, i,j ;

	  for (i=0; i< size; i++)
	    for (j=0; j< size; j++)
	      w[i][j] = 0.0;

	for (epoch=0; epoch < 10000; epoch++){
	
	  for (i=0; i< size; i++)
	    a[i] = (float) (random()%32000)/(float) 32000 * 0.1;

	  for (i=0; i< size; i++)
	    for (j=0; j< size; j++)
	      w[i][j] += a[i] * a[j];

	  for (i=0; i< size; i++){
	    sum = 0.0;
	    for (j=0; j< size; j++)
	      sum += a[i] * w[i][j]; 
	     a[i] = 1.0/(1.0 - exp(sum)); 
	  };
	  
	  
	}
      }

;;; simon.lisp -- test of neural simulations
;;;
;;;  Simon Dennis & Anthony Berglas

;;; Library stuff -- Example of simple language extensions.
;;; *NOT* NEEDED FOR EFFICIENCY, JUST CONVENIENT

(defmacro doftimes ((var max &rest result) &body body)
  "Like DoTimes but var declared Fixnum."
  `(DoTimes (,Var ,Max ,@result)
      (Declare (Fixnum ,Var))
      ,@Body))
   ;; Note that this macro could expand code for fixed loops.

(Eval-When (eval load compile)
  ;; [a b c] -> (Aref a b c)
  (defun AREF-READER (Stream Char)
    (declare (ignore char))
    (Cons 'AREF (Read-Delimited-List #\] Stream T)) )
  (set-macro-character #\[ #'aref-reader Nil)
  (set-macro-character #\] (get-macro-character #\)) Nil) )


;;; The program.

(declaim (optimize (safety 0) (speed 3)))


(defconstant size 5)


(defvar *seed* *random-state*)
(defun main()

 ;; initialize the weight matrix
 (let ((w (make-array '(5 5) :element-type 'SHORT-FLOAT :initial-element 0s0))
       (a (make-array 5 :element-type 'SHORT-FLOAT)) )
  (setf *random-state* (make-random-state *seed*))
  (doftimes (epoch 10000)

   ;; make new activation vector
   (doftimes (i size)
     (setf [a i] (random 0.1)))

   ;; update the weights
   (doftimes (i size)
      (doftimes (j size)
         (setf [w i j] (+ [w i j] (* [a i] [a j]))) ))

   ;; update the activations
   (doftimes (i size)
      (let ((sum 0s0))
      (declare (short-float sum) (inline exp))
	 (doftimes (j size)
	   (incf sum (the short-float (* [a i] [w i j] ))) )
	 (setf [a i] (/ 1 (- 1 (exp sum)))) )))
  w))

--
Anthony Berglas
Rm 312a, Computer Science, Uni of Qld, 4072, Australia.
Uni Ph +61 7 365 4184,  Home 391 7727,  Fax 365 1999
From: Stephen J Bevan
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <BEVAN.94Aug10145335@lemur.cs.man.ac.uk>
In article <··········@uqcspe.cs.uq.oz.au> ·······@cs.uq.oz.au (Anthony Berglas) writes:
   Clearly one should not require low-level tricks to write fast code.
   Below are two examples in C (not C++, reportedly slightly slower) and
   Lisp.  The first calculates prime numbers using a crude algorithm; CMUCL
   beats Sun C and gcc (it kills Prolog and, I suspect, most other
   "interpretive" languages). ...

The way I read the parenthetical remark it implies that Prolog is an
"interpretive" language.  Is that what you really meant?
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <CuA8As.JKv@cogsci.ed.ac.uk>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:

>>> : So, we've used lisp and not assembler. [...]

>>> And why, pray tell, would I wish to write this nearly indecipherable
>>> mess of Lisp code instead of 16 lines of perfectly readable assembler?
>>> This does seem like the wrong tool for a simple task.

>>It's an existence proof that the original assertion--that writing
>>`real-time' code in Lisp is nearly or truly impossible--is a bald-faced
>>falsehood.  [...]

>The barefaced falsehood is the assertion that it is easy or even
>possible to achieve real-time behavior in any of the popular
>commercial Common Lisp implementations available on stock hardware,
>especially while retaining the features of lisp that are usually
>presented as its advantages - abstraction, ease of use, ease of
>debuggability, etc. 

Let's be clear about this.  Lisp is not the same as "the popular
commercial Common Lisp implementations available on stock hardware".

>   But suggesting that lisp is just as reasonable a choice as
>assembler or C for implementing things like device-drivers on typical
>hardware / software configurations is simply ludicrous.

You could say "suggesting that any of the popular commercial
Common Lisp implementations available on stock hardware is just
as reasonable a choice ..."

I suspect no one would disagree with _that_.

>Note also that Unix itself is not particularly well-suited to
>real-time applications.  Adding the overhead of supporting the typical
>lisp implementation's run-time system (especially its
>memory-management mechanisms) to the problems already inherent in Unix
>for the type of application under discussion only exacerbates the
>problems.

Now it's "the typical Lisp implementation" that's said to be losing.

The various terms are not interchangeable.
From: Kirk Rader
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <CuDLCo.Ln5@triple-i.com>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>

[...]

>
>Let's be clear about this.  Lisp is not the same as "the popular
>commercial Common Lisp implementations available on stock hardware".
>
>>   But suggesting that lisp is just as reasonable a choice as
>>assembler or C for implementing things like device-drivers on typical
>>hardware / software configurations is simply ludicrous.
>
>You could say "suggesting that any of the popular commercial
>Common Lisp implementations available on stock hardware is just
>as reasonable a choice ..."
>
>I suspect no one would disagree with _that_.
>
>>Note also that Unix itself is not particularly well-suited to
>>real-time applications.  Adding the overhead of supporting the typical
>>lisp implementation's run-time system (especially its
>>memory-management mechanisms) to the problems already inherent in Unix
>>for the type of application under discussion only exacerbates the
>>problems.
>
>Now it's "the typical Lisp implementation" that's said to be losing.
>
>The various terms are not interchangeable.
>


From the point of view of this thread the terms are interchangeable to
this extent: "Lisp" (no "typical" or "commercial Common Lisp"
qualifiers) connotes not just any language based on the
lambda-calculus or any one that has generic features to support
higher-order functions and a functional-programming paradigm.  "Lisp"
ordinarily refers to a member of a specific family of languages that
all have certain features in common to which this thread has been
referring, such as automatic dynamic memory-allocation with a
garbage-collector based deallocation scheme.  The set of all "lisps"
obviously has fuzzy boundaries, but not more so than many other terms
about which it is possible to have meaningful discussions.  If you
want to say "I consider XYZ a dialect of lisp, and it avoids the
problems to which I refer in the following ways...." that would be
perfectly valid (if true.)  But that does not alter the fact that the
majority of implementations of what most people would consider lisp
dialects do in fact suffer the kinds of performance problems which are
the subject of this thread.
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <CuL4nH.MCr@cogsci.ed.ac.uk>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>>Let's be clear about this.  Lisp is not the same as "the popular
>>commercial Common Lisp implementations available on stock hardware".

My point here is that if a claim says "Lisp", the supporting
evidence should be more general than popular commercial CLs.

>>>Note also that Unix itself is not particularly well-suited to
>>>real-time applications.  Adding the overhead of supporting the typical
>>>lisp implementation's run-time system [...]

>>Now it's "the typical Lisp implementation" that's said to be losing.
>>
>>The various terms are not interchangeable.

>From the point of view of this thread the terms are interchangeable to
>this extent: "Lisp" (no "typical" or "commercial Common Lisp"
>qualifiers) connotes not just any language based on the
>lambda-calculus or any one that has generic features to support
>higher-order functions and a functional-programming paradigm.

A straw man, since no one has said otherwise.

>"Lisp" ordinarily refers to a member of a specific family of languages that
>all have certain features in common to which this thread has been
>referring, such as automatic dynamic memory-allocation with a
>garbage-collector based deallocation scheme. 

Do you count reference counting as GC?  (It used to be considered
an alternative to GC, but a few years ago it looked like the
distinction hadn't been maintained, at least not in many people's
minds).  There are Lisps that use reference counting, and there
are Lisps that don't collect at all.  But this is really a matter
of the implementations, not the languages.

Besides, there are a number of cases where Lisp's alloc + GC will be
faster than "manual" alloc and dealloc.

> The set of all "lisps"
>obviously has fuzzy boundaries, but not more so than many other terms
>about which it is possible to have meaningful discussions.

But I am not attempting to exploit any "fuzzy boundaries".

>  If you
>want to say "I consider XYZ a dialect of lisp, and it avoids the
>problems to which I refer in the following ways...." that would be
>perfectly valid (if true.) 

We got into this because someone posted some example real-time
code.  So far the complaints about the example have been that
it's messy [true] and that it's implementation-specific [so what?].
Sure, someone came right out and said it's "not Lisp", but their 
only argument to that effect was that it couldn't be found anywhere
in CLtL.

Now, if someone wants to argue that (say) a language with GC is
necessarily slower than one that works like C, let them do so.

>                            But that does not alter the fact that the
>majority of implementations of what most people would consider lisp
>dialects do in fact suffer the kinds of performance problems which are
>the subject of this thread.

Sure, and if you went around saying "the majority of implementations"
you'd receive no complaints from me.

-- jd
From: J W Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <CusMtp.M8C@festival.ed.ac.uk>
····@triple-i.com (Kirk Rader) writes:

>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:

>>My point here is that if a claim says "Lisp", the supporting
>>evidence should be more general than popular commercial CLs.

>I have also used various commercial and shareware non-Common Lisp
>dialects on both Unix workstations and PC-class machines with similar
>results.  My point is not about any particular implementation of any
>particular dialect, but about language-intrinsic features of lisp as a
>class of programming language.  

Yes, but I disagree with that point, to a fair extent, and even
more with the way it has been argued.  That you have observed cases
that you've understood in a certain way doesn't show there's a
language-intrinsic problem.  You may well be right about typical
implementations or most applications or something like that.

Of course, there will be some applications for which Lisp works
less well than C.  But so far as I can tell, there's no language-
intrinsic feature that prevents there being cases where Lisp works
as well as or better than C.  Of course, maybe there's some lesser
consequence we should consider.

>   I did draw some particular examples
>from a particular project implemented using a particular commercial
>Common Lisp, but I also very explicitly stated that these were
>intended to illustrate points about "lisp", as the term is commonly
>understood, in general.

Sure, but illustration and demonstration are two different things.
Illustrations would be useful if they let one see how the general
claim was correct.  So far, yours have not done so, at least not
for me.

>>A straw man, since no one has said otherwise.

>Your continuing focus on "lisp" as opposed to "particular lisp
>implementations" shows that you are saying otherwise.

Bull.  I have never thought or claimed that Lisp is (you said
"connotes") "any language based on the lambda-calculus or any
one that has generic features to support higher-order functions
and a functional programming paradigm".  That is, I agree that
"Lisp connotes not just" any such language.

It's possible, I suppose, that someone else disagrees, thus
making it not a straw man, but it's not me.

>>Do you count reference counting as GC? 

>I am indifferent as to how you choose to categorize different
>memory-management strategies, other than the basic difference between
>languages which automatically allocate and deallocate memory and those
>which only do so under explicit programmer control. 

So you do count reference counting in your claim.

>Some form of
>automatic memory allocation and recovery strategy is central to most
>people's idea of what a lisp dialect entails.  

It's also pretty essential to proper implementation of strings
in Basic, but people don't go around saying Basic is inherently
unsuited to workstations and PCs.

>    If you choose to
>include in the set of "lisps" some language which has a malloc() /
>free() style of memory management, then that dialect would, of course,
>be much less prone to memory-management conflicts with widely-used
>OS's.

Well, I certainly include implementations that use reference counting
and even ones that don't reclaim at all (see e.g. JonL White's paper
in the 1980 Lisp conference), since I'm not planning to define "Lisp"
so as to do violence to past usage.  Now if you want to show that
automatic reclamation inherently causes a mismatch with workstations
and PCs, or whatever your actual claim is, please do so.  I would
like to understand what the problem is.

>>Besides, there are a number of cases where Lisp's alloc + GC will be
>>faster than "manual" alloc and dealloc.

>And such cases are among those for which I have explicitly advocated
>using lisp earlier in this and related threads.

So what _is_ supposed to be the problem with Lisp, then?

>>Now, if someone wants to argue that (say) a language with GC is
>>necessarily slower than one that works like C, let them do so.

>Not "necessarily slower", but in my experience it is more common in
>real-world programming projects to encounter cases where C's
>memory-management philosophy results in higher throughput and fewer
>problems for interactivity and real-time response than one which
>relies on GC.

Since this is probably not clear, let me say it explicitly.
If someone came along in comp.lang.lisp and said

   in my experience it is more common in real-world programming
   projects to encounter cases where C's memory-management philosophy
   results in higher throughput and fewer problems for interactivity and
   real-time response than one which relies on GC.

I would not disagree with them.

-- jd
From: Kirk Rader
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cuw0rL.Io8@triple-i.com>
In article <··········@festival.ed.ac.uk> ····@festival.ed.ac.uk (J W Dalton) writes:
>····@triple-i.com (Kirk Rader) writes:
>
>>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>

[...]

>
>Yes, but I disagree with that point, to a fair extent, and even
>more with the way it has been argued.  That you have observed cases
>that you've understood in a certain way doesn't show there's a
>language-intrinsic problem.  You may well be right about typical
>implementations or most applications or something like that.
It is hardly fair to excerpt only those quotes from a long thread
which refer to specific examples and then complain that the argument
is based solely on particular implementation details.  This thread has
been going on for weeks now (hopefully it won't for weeks more!) and
most of the concrete examples to which you refer were offered as
"existence proofs" to people who claimed that arguments based solely
on general principles were unconvincing.  I do not blame someone who
has never (presumably because they have been working in a problem
domain to which lisp is well-suited) encountered these kinds of
unacceptable performance problems for requiring concrete examples.  I
consider it false to say that in this thread I have relied solely on
such specific examples.

>
>Of course, there will be some applications for which Lisp works
>less well than C.  But so far as I can tell, there's no language-
>intrinsic feature that prevents there being cases where Lisp works
>as well as or better than C.  Of course, maybe there's some lesser
>consequence we should consider.

I have repeatedly agreed that there are applications for which
lisp is better suited than C.

[...]

>
>Sure, but illustration and demonstration are two different things.
>Illustrations would be useful if they let one see how the general
>claim was correct.  So far, yours have not done so, at least not
>for me.

But you seem to have ignored the many other quotes in many messages in
this same thread that did refer to the kind of general principles that
you claim are lacking in my argument.  Rather than repeat a large
number of them here, let me ask the following question.  Since lisp's
semantics are unarguably richer and more powerful than C's, how do you
expect any implementation to obtain them for free?  It seems
elementary to me that if a language is intrinsically more powerful, it
will be intrinsically more complex and have more overhead in its
implementation.  For many applications, the greater expressive power
of lisp more than pays for itself.  For many others, it doesn't.  It
is an open question for any given application into which class it
falls.

[...]

>
>Bull.  I have never thought or claimed that Lisp is (you said
>"connotes") "any language based on the lambda-calculus or any
>one that has generic features to support higher-order functions
>and a functional programming paradigm".  That is, I agree that
>"Lisp connotes not not just" any such language.

Bull yourself.  You have several times tried to argue lisp's
performance "problems" (the quotes around "problems" are to emphasize
that I consider them real problems only for some applications, since
you seem to be missing that point fairly consistently) are not real
based on how it could run on hypothetical platforms or how it could
evolve so as to avoid the features that have been performance
stumbling blocks.  I have been explicitly referring to existing
dialects running on current platforms, and used the word "connotes"
deliberately so as to emphasize the particular usage of the word
"lisp" I was referring to.

>>>Do you count reference counting as GC? 
>
>>I am indifferent as to how you choose to categorize different
>>memory-management strategies, other than the basic difference between
>>languages which automatically allocate and deallocate memory and those
>>which only do so under explicit programmer control. 
>
>So you do count reference counting in your claim.

As I said in the quote which I left in, above, I do not count
reference counting per se either in or out of my "claim".  There are,
for example, a number of standard idioms used in C and C++ that use
reference counting, sometimes to good effect and sometimes not, but
which still never allocate anything "behind the programmer's back", as
it were.  I do not count this as a GC-based approach, even though
reference counting can be used to implement a GC.  The critical
difference is that in the non-GC use of reference counting, even when
a "deallocation" is deferred due to a positive count, the block of
memory by definition is still not "garbage".
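The non-GC use of reference counting described above can be sketched in C. The `Buffer` type and function names here are illustrative, not taken from any code discussed in the thread:

```c
/* A minimal sketch of non-GC reference counting in C: the count can
   defer an explicit free, but nothing is ever allocated or freed
   "behind the programmer's back".  Names are illustrative. */
#include <assert.h>
#include <stdlib.h>

typedef struct {
    int refcount;
    char data[64];
} Buffer;

Buffer *buffer_new(void) {
    Buffer *b = malloc(sizeof *b);   /* explicit allocation */
    if (b)
        b->refcount = 1;             /* the creator holds one reference */
    return b;
}

void buffer_retain(Buffer *b) {
    b->refcount++;                   /* another owner takes a reference */
}

/* Deallocation may be deferred by a positive count, but the block is
   never "garbage": some owner still holds it. */
void buffer_release(Buffer *b) {
    if (--b->refcount == 0)
        free(b);                     /* explicit deallocation */
}
```

Every allocation and deallocation is an explicit call the programmer writes; a positive count merely defers the `free`, which is the distinction drawn above between this idiom and using reference counting to implement a GC.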

[...]

>
>It's also pretty essential to proper implementation of strings
>in Basic, but people don't go around saying Basic is inherently
>unsuited to workstations and PCs.

"People" may not, but I certainly would say that the class of
applications for which Basic is well-suited is probably smaller than
the classes of applications for which either lisp or C is well-suited.
I also am not among those "people" who say that "lisp is inherently
unsuited to workstations and PCs."  I have only ever claimed that
there exist applications for which lisp is not particularly
well-suited, just as there are applications for which C or any other
particular language is not well-suited.

[...]

>
>Well, I certainly include implementations that use reference counting
>and even ones that don't reclaim at all (see e.g. JonL White's paper
>in the 1980 Lisp conference), since I'm not planning to define "Lisp"
>so as to do violence to past usage.  Now if you want to show that
>automatic reclamation inherently causes a mismatch with workstations
>and PCs, or whatever your actual claim is, please do so.  I would
>like to understand what the problem is.

This particular problem is that for some applications GC is not the
optimum memory-management strategy, even though there are applications
for which it is.  My rule of thumb is that if an application needs to
make many small allocations, GC is likely to be the ideal memory
management strategy.  If an application needs to make only a few
large allocations, malloc / free is likely to be more efficient for
the application as a whole.
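The "few large allocations" style that this rule of thumb says favors malloc / free can be sketched in C; the `Point` record and helper names are hypothetical:

```c
/* Illustrative sketch of the allocation style where explicit
   malloc / free tends to win: one large block for many records,
   freed once, instead of many small per-record allocations of the
   kind a GC handles well.  Names are hypothetical. */
#include <assert.h>
#include <stdlib.h>

typedef struct { double x, y; } Point;

/* One large allocation covering n records. */
Point *alloc_points(size_t n) {
    return malloc(n * sizeof(Point));
}

/* One explicit deallocation for the whole block. */
void free_points(Point *pts) {
    free(pts);
}
```

The lifetime of all n records is managed by a single `malloc` and a single `free`, so there is almost no bookkeeping for the allocator to do and nothing for a collector to trace.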

[...]

>
>So what _is_ supposed to be the problem with Lisp, then?

Again, "the problem with Lisp" is only a problem for those
applications for which lisp's semantics are not a good fit.  Of
course, there are also applications for which lisp's semantics are a
better fit than, say, C's, so in those cases "the problem" would be
with C.  What started all this was my taking exception with claims
that lisp was always or almost always as good or better a choice than
C or C++, and that all claims of it being "too big" or "too slow" for
any particular application were ill-founded.

[...]

>
>Since this is probably not clear, let me say it explicitly.
>If someone came along in comp.lang.lisp and said
>
>   in my experience it is more common in real-world programming
>   projects to encounter cases where C's memory-management philosophy
>   results in higher throughput and fewer problems for interactivity and
>   real-time response than one which relies on GC.
>
>I would not disagree with them.
>
>-- jd

Such sagacity!  :-)

Kirk Rader
From: Jeff Dalton
Subject: Do general principles show C is faster than lisp?
Date: 
Message-ID: <CuzxJD.105@cogsci.ed.ac.uk>
This is part 2 of my response, and (I hope) more interesting.

In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>In article <··········@festival.ed.ac.uk> ····@festival.ed.ac.uk (J W Dalton) writes:
>>Sure, but illustration and demonstration are two different things.
>>Illustrations would be useful if they let one see how the general
>>claim was correct.  So far, yours have not done so, at least not
>>for me.
>
>But you seem to have ignored the many other quotes in many messages in
>this same thread that did refer to the kind of general principles that
>you claim are lacking in my argument.  Rather than repeat a large
>number of them here, let me ask the following question.  Since lisp's
>semantics are unarguably richer and more powerful than C's, how do you
>expect any implementation to obtain them for free?  It seems
>elementary to me that if a language is intrinsically more powerful, it
>will be intrinsically more complex and have more overhead in its
>implementation.  For many applications, the greater expressive power
>of lisp more than pays for itself.  For many others, it doesn't.  It
>is an open question for any given application into which class it
>falls.

First, I don't expect an implementation to obtain something extra 
for free.  But the costs don't have to occur at run-time.  For instance,
lexically scoped Lisps may perform a "closure analysis" to determine
which variables are "closed over".  This can be done by the compiler,
and there's no run-time cost in functions that don't make closures.

Now, some Lisp-family languages are very simple.  Their implementations
can be much smaller and simpler than C implementations.  I'm not sure
their semantics is richer and more powerful than C's.  Rather, I'm
inclined to say it's richer and more powerful in some areas and
poorer and simpler in others.  In any case, Lisp-family semantics 
don't require more _complex_ implementations, and many of the
"semantics induced costs" can be at compile-time rather than run-time.

Moreover, a richer and more powerful semantics doesn't _have_ to
result in run-time costs.  For instance, we could add a simple kind
of closure-like object to C.  Code that didn't use this feature 
wouldn't have to be any slower.  (C++ might be another example
of a semantically richer language that doesn't have to be slower.)
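The "simple kind of closure-like object" imagined here can be sketched as a C struct pairing a function pointer with its captured environment; all names are hypothetical, and this is only one way such a feature might look:

```c
/* A hedged sketch of a closure-like object added to C: code plus a
   captured environment.  Ordinary functions that never build one
   pay nothing for the feature's existence.  Names are hypothetical. */
#include <assert.h>

typedef struct {
    int (*fn)(void *env, int arg);  /* the code */
    void *env;                      /* the captured variables */
} Closure;

/* Invoke a closure on an argument. */
int closure_call(Closure *c, int arg) {
    return c->fn(c->env, arg);
}

/* Example behavior: add a captured increment to the argument. */
int add_captured(void *env, int arg) {
    return *(int *)env + arg;
}
```

Plain C functions remain exactly as fast as before; only code that constructs and calls a `Closure` pays for the richer semantics, which is the point being made about run-time costs falling only where the feature is used.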

[If you want to say these aren't examples of richer and more powerful
semantics, then I don't think the point remains "elementary".  It
depends on the details, and you need to say exactly what does count
and why it must lead to greater costs.]

It's a commonplace that different languages are better at different
things, and there's surely a fair amount of truth to this.  But people
often get it wrong when dividing up application areas.  Lisp was long
thought to be unsuited to a number of tasks that it has turned out to
be able to handle fairly well.  Generally, people had good reasons
to suppose that Lisp was unsuited, typically their experience with
existing Lisps and some reasoning about the consequences of certain
implementation techniques.  The Prolog community has produced several
arguments that Prolog is inherently faster than Lisp "at what Lisp
does".  They were pretty good arguments, as such things go.
I feel that a degree of skepticism about such arguments is called for.

There clearly are application areas where existing Cs are better than
existing Lisps.  For instance, if you want a certain kind of
implementation for a programming language and want it to be fairly
portable, C is a better choice.  That's why so many Lisps are
implemented at least partly in C.  But it would be wrong to
suppose that Lisp was incapable of implementing Lisp efficiently.
A number of Lisps are implemented entirely in the same variety
of Lisp plus a little assembler.  (Typically the same stuff that
might be in assembler in a C-based implementation.)  Such
implementations needn't be any slower as a result.  In fact, 
existing Lisps can work fairly well for a number of "systems 
programming" tasks even though that seems unlikely _a priori_.

Indeed, there have been many cases in which it seemed pretty certain
that Lisp would lose to C but which didn't turn out that way.
The currently favorite candidates seem to be real-time applications
and applications that heavily use floating-point in a way that must
be optimized across separately compiled procedures.  But note that
the characterization of the floating-point case already has to
be fairly sophisticated in order to avoid counterexamples.
In the real time case, we've just seen a dispute about whether
a purported counterexample counted as Lisp or not.  So these cases
are already in some trouble, and it's far from clear that they'd
survive a determined implementation and language-design effort.

So I don't feel that the kind of conclusion you're after is
at all straightforward.  Perhaps there are application areas where
the best possible implementation of the most suitable language
in the Lisp family must have greater run-time overheads than
the best possible C implementation, but this is something that
has to be worked out in detail, and no one has done so.
Nor have they shown that there must be excessive compile-time
costs.

On the other hand, there's no question that many existing
implementations (at least) have serious problems for a number of
application areas, which is why I don't always use Lisp.

-- jeff
From: Kirk Rader
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cv3JuF.4G6@triple-i.com>
This discussion has gone on long past the point where it is useful.
Neither of us is likely to convince the other of the correctness of
his position.  I will not respond point-by-point to all of the cases
where I believe you to be, putting it mildly, somewhat disingenuous,
but one point I cannot let pass silently:

In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff
Dalton) writes:

[...]

>It's
>an old net technique to respond as if what's quoted was all that was
>considered.

This is exactly what I consider you to have been doing.

Kirk Rader
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cv9q23.89q@cogsci.ed.ac.uk>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>This discussion has gone on long past the point where it is useful.
>Neither of us is likely to convince the other of the correctness of
>his position.  I will not respond point-by-point to all of the cases
>where I believe you to be, putting it mildly, somewhat disingenuous,
>but one point I cannot let pass silently:

I have not been disingenuous at any point.  Either you've
misunderstood me, or you need to look up what the word means.

>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff
>Dalton) writes:
>
>[...]
>
>>It's
>>an old net technique to respond as if what's quoted was all that was
>>considered.
>
>This is exactly what I consider you to have been doing.

But *I* have not responded to *you* as if what you quoted was all you
considered.

I think you may be misunderstanding what you just quoted.  What I am
saying is that when A quotes passage X and responds, A may have
considered more than X.  *You* have reacted as if I considered only
what I quoted.  In fact, I have considered everything that you said
that reached me.  It's possible that I've misunderstood you, but if so
it's not because I haven't paid attention to everything you've said.

Now, I suspect what you're actually trying to accuse me of is
distorting you by quoting out of context.  If you or anyone else
can point to any case where something I quoted means something
different in context, I will be glad to retract what I said
in reply and to apologize.

-- jd
From: Kirk Rader
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <CvEMGF.570@triple-i.com>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:

[...]

>I have not been disingenuous at any point.  Either you've
>misunderstood me, or you need to look up what the word means.

I used the word I meant.

[...]

>But *I* have not responded to *you* as if what you quoted was all you
>considered.
>
>I think you may be misunderstanding what you just quoted.  What I am
>saying is that when A quotes passage X and responds, A may have
>considered more than X.  *You* have reacted as if I considered only
>what I quoted.  In fact, I have considered everything that you said
>that reached me.  It's possible that I've misunderstood you, but if so
>it's not because I haven't paid attention to everything you've said.

No, you accused me of using "an old net technique" in claiming
unjustifiably that you were including only the most negative sounding
quotes and responding as if that were all I had said.  I still think
that is what you have _in fact_ been doing, whether that is "an old
net technique" or not.  (Perhaps it has become "an old net technique"
because the tactic of which it complains is actually rather widely
used?)  I also think that your responses even to many of those quotes
you have left in have missed the mark, since your replies often seemed
to me to have gotten what I said wrong or not to really be relevant to
what I had said.  Since I credit you with the knowledge, skill, and
intelligence to know what you were doing in all this, I feel justified
in speculating that you have been disingenuous.  But perhaps it really
is just a prolonged misunderstanding, after all.

>Now, I suspect what you're actually trying to accuse me of is
>distorting you by quoting out of context.  If you or anyone else
>can point to any case where something I quoted means something
>different in context, I will be glad to retract what I said
>in reply and to apologize.
>
>-- jd

What I am actually accusing you of is either through honest
misunderstanding or deliberate distortion responding to arguments I
never made in support of positions I do not hold, and making this seem
not altogether ridiculous through selected excerpting of my messages
in your replies.

Once more for the record, and it is to be hoped finally, let me state
the position I have taken in this thread.  Different tools are better
for different tasks.  Asking whether lisp (or any other language) is
too big or too slow is not well-posed without some information about
application requirements.  Given that, it is false, or at least
misleading, to say that lisp (or any other language) is unequivocally
_not_ too big or too slow, as was done in the message to which my
original post was a reply.  Is Fred too short?  Maybe he is too short
to be a basketball player, but he may be just the right height to be a
jockey.  Does that make being a basketball player better or worse than
being a jockey?  Of course not; it does, however, recognize that the
requirements of the two occupations are different such that different
people are likely to have better natural aptitude for one than the
other.  The same is true for programming languages, so it is just
silly to argue endlessly about which language is "better" in some
absolute sense.  But in my opinion it is equally silly, and in the
particular case which prompted me to comment, dangerously misleading
to pretend that lisp _must_ be better than C or some other particular
language for all or even almost all applications simply in virtue of
the fact that it is more powerful, or makes one more productive, or
anything else.  It seems to me that the state of the art in the design
and implementation of programming languages has not advanced to the
point where any one language is suitable for the majority of
applications, so prudence is required when choosing, or more to the
point recommending, what language to use for a particular application.

Now this is not the first time I have said any of the above, and you
have already stated on several occasions that you agree, which is why
I see no point in dragging this on endlessly.  I predict that you will
now respond, however, by taking some particularly inflammatory sounding
quote out of context from some previous posting and say, "if that is
all you meant, why did you say this other?"  The answer, again as I
have said before, is that I would not feel it fair or honest to simply
let stand unchallenged what have seemed to me to be mis-statements of
fact about the relative performance of different programming languages
for particular kinds of applications.  You and others have accused me
of trying to discourage people from using lisp, despite my frequent
references to the fact that I consider lisp well-suited to a variety
of kinds of applications.  What I _have_ been trying to discourage is
people, probably through over-compensating for what they perceive as
lisp-bashing, pretending that any language is actually good at those
specific kinds of tasks at which it is really bad.
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cvo0E7.Gws@cogsci.ed.ac.uk>
Go to the dashed line for the non-meta part.

In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>>I have not been disingenuous at any point.  Either you've
>>misunderstood me, or you need to look up what the word means.
>
>I used the word I meant.

But did it mean what you thought it did?

>>But *I* have not responded to *you* as if what you quoted was all you
>>considered.
>>
>>I think you may be misunderstanding what you just quoted.  What I am
>>saying is that when A quotes passage X and responds, A may have
>>considered more than X.  *You* have reacted as if I considered only
>>what I quoted.  In fact, I have considered everything that you said
>>that reached me.  It's possible that I've misunderstood you, but if so
>>it's not because haven't paid attention to everything you've said.
>
>No, you accused me of using "an old net technique" in claiming
>unjustifiably that you were including only the most negative sounding
>quotes and responding as if that were all I had said.

No, that's not it.  I accused you of responding as if I'd considered
only the parts I quoted.  And you in fact did just that, or something
very near.  You even accused me of quoting out of context, though
neither you nor anyone else (on comp.lang.lisp or in e-mail -- hint
to some who've e-mailed me) has offered anything I quoted that means
something different in context.  If someone ever does, I will be 
glad to retract what I said and apologize.

For it may be that I've misunderstood you.  That's entirely possible.
But I'm not deliberately distorting what you say, and I'm not being
disingenuous.

>  I still think
>that is what you have _in fact_ been doing, whether that is "an old
>net technique" or not.  

That's because you're not willing to believe me when I tell you
otherwise.

What I am _in fact_ doing is considering all you write.  I even save
your messages, because I find much of value in them.  However, for the
most part, I have responded only to the aspects I disagree with.

> I also think that your reponses even to many of those quotes
>you have left in have missed the mark, since your replies often seemed
>to me to have gotten what I said wrong or not to really be relevant to
>what I had said. 

That may be, but there must be better ways to straighten such things out.
At this point, we are both reacting w/ excess hostility, it seems to me.

As for relevance, I am sometimes trying to counter impressions rather
than to refute your points.  

>    Since I credit you with the knowledge, skill, and
>intelligence to know what you were doing in all this, I feel justified
>in speculating that you have been disingenuous.  But perhaps it really
>is just a prolonged misunderstanding, after all.

Well, I don't think it's _only_ a misunderstanding.  There seem to
be genuine disagreements as well.

>What I am actually accusing you of is either through honest
>misunderstanding or deliberate distortion responding to arguments I
>never made in support of positions I do not hold, and making this seem
>not altogether ridiculous through selected excerpting of my messages
>in your replies.

So what is your position then...

----------------------------------------------------------------------

>Once more for the record, and it is to be hoped finally, let me state
>the position I have taken in this thread. 

Thanks.

> Different tools are better
>for different tasks. 

I agree.  However, programming languages are rather flexible and
general tools.  When people try to assign tools (languages,
techniques, etc) to applications (e.g. OOP is good for this,
logic programming for that, etc), they very often get it wrong.
This has been a serious problem for the proponents of certain
languages.  For instance, the logic programming community had to
spend a lot of time showing that Prolog could be used effectively
in cases where people claimed it was unsuitable and that Lisp or
C or some other language would be better.

> Asking whether lisp (or any other language) is too big
>or too slow is not well-posed without some information about
>application requirements.

Ok.  But thinking of Lisp as "a [single] language" can be very
misleading.  Moreover, languages aren't too slow directly, only
implementations (or the code they produce) are.  Languages might 
be too big directly, perhaps making them hard to learn or to master, 
but if we're talking about memory requirements at run-time it's 
again only implementations or the compiled code that can be too big.

So asking whether Lisp is too big or slow is not well-posed without
some information about the possible variations in languages and
implementations.

>  Given that, it is false, or at least
>misleading, to say that lisp (or any other language) is unequivocally
>_not_ too big or too slow, as was done in the message to which my
>original post was a reply.  

But you often can conclude that language A needn't be any worse
than language B (or narrower conclusions along similar lines).

Now, why do you find it so important to make all these points about
Lisp rather than about particular Lisp-family languages and particular
implementations?  Why spend all this time and effort?  What's so
bad about someone saying Lisp needn't ever be too large or slow
and *getting away with it*?  Especially since very strong points
can be made about current implementations and the ones that are
likely to be available in the near future.

>    Is Fred too short?  Maybe he is too short
>to be a basketball player, but he may be just the right height to be a
>jockey.  Does that make being a basketball player better or worse than
>being a jockey?  Of course not; it does, however, recognize that the
>requirements of the two occupations are different such that different
>people are likely to have better natural aptitude for one than the
>other.  The same is true for programming languages, 

Programming languages don't line up neatly with application areas,
and when people try to line them up they often get it wrong.

Moreover, there's little point in concluding that Lisp will be worse
than C *somewhere* (though we don't know where).  We need to have an
idea of where and how much worse and what scope there is to make Lisp
do better.  So this general point about "natural aptitude" wouldn't
tell us anything very useful even if it were completely correct.

>        so it is just
>silly to argue endlessly about which language is "better" in some
>absolute sense.  

But I (at least) don't argue about which language is better in
an absolute sense.  (Just in case this wasn't clear.)

>  But in my opinion it is equally silly, and in the
>particular case which prompted me to comment, dangerously misleading
>to pretend that lisp _must_ be better than C or some other particular
>language for all or even almost all applications simply in virtue of
>the fact that it is more powerful, or makes one more productive, or
>anything else. 

That is again not a position I hold.  (I'm again trying to counter
possible confusions and misunderstandings just in case they exist.)

>   It seems to me that the state of the art in the design
>and implementation of programming languages has not advanced to the
>point where any one language is suitable for the majority of
>applications, so prudence is required when choosing, or more to the
>point recommending, what language to use for a particular application.

There I agree completely, as I've tried to make clear from time
to time.

>Now this is not the first time I have said any of the above, and you
>have already stated on several occasions that you agree,

But I don't agree completely with *all* of the above.  I agree
with some of the things you say.  Others I think are misleading
or potentially misleading.  Still others I think are wrong.

> which is why
>I see no point in dragging this on endlessly. 

Ok.

>   I predict that you will
>now respond, however, by taking some particularly inflammatory sounding
>quote out of context from some previous posting and say, "if that is
>all you meant, why did you say this other?" 

Well, why *did* you say those other things?  Presumably you did
feel Lisp, and not just particular implementations, had problems
with I/O; that higher-level features necessarily had costs; and
so on.

>   The answer, again as I
>have said before, is that I would not feel it fair or honest to simply
>let stand unchallenged what have seemed to me to be mis-statements of
>fact about the relative performance of different programming languages
>for particular kinds of applications. 

Then why didn't you confine yourself to facts instead of including
not-fully-justified claims about Lisp in general and making _a priori_
arguments about higher-level features?

Look, this is not, especially now, some kind of coded accusation or
claim about your real aims.  I don't know why you said many of the
things you did, especially since you could -- and did -- offer much
stronger arguments on narrower, less inflammatory, claims.  Moreover, 
I don't understand why you think I shouldn't pay any attention to
these other points.

>  You and others have accused me
>of trying to discourage people from using lisp, despite my frequent
>references to the fact that I consider lisp well-suited to a variety
>of kinds of applications. 

I think what you've said may have that effect (discouraging people
from using Lisp) and may well reinforce a number of prejudices besides.

I suspect that most people think Lisp is well-suited to some things.
That doesn't say very much in itself.  When I point out that Lisp
could do much better, that current implementations are misleading,
etc, you don't agree with me.  Nor do you (at least not so far)
say just how bad the problems you've observed are.  You give the
impression that they're so bad as to disqualify Lisp, but for
all we know Lisp is only 10% slower than C or the problems are
due to poorly tuned implementations.  (That a number of different
Lisps have the same problem shows very little, BTW.)

>   What I _have_ been trying to discourage is
>people, probably through over-compensating for what they perceive as
>lisp-bashing, pretending that any language is actually good at those
>specific kinds of tasks at which it is really bad.

I think it's counterproductive to engage in that kind of psychological
speculation/analysis: it says not only that they are wrong, but that
they're over-compensating for a mistaken perception and *pretending*
that the facts are different than they are.  You must know how
annoying and insulting that will be.

Anyway, just what tasks *are* those at which Lisp is really bad?  
And just how bad is it?  Will you now say?

-- jeff
From: Kirk Rader
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <CvvGsB.2AG@triple-i.com>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:

[...]

I used the word disingenuous in the following sense (quoting from
Merriam's New Collegiate Dictionary 1981 edition):

"disingenuous - lacking in candor; also: giving a false appearance of
simple frankness : CALCULATING" [capitalization in the source.]

>No, that's not it.  I accused you of responding as if I'd considered
>only the parts I quoted.  And you in fact did just that, or something
>very near.  You even accuesed me of quoting out of context, though
>neither you nor anyone else (on comp.lang.lisp or in e-mail -- hint
>to some who've e-mailed me) has offered anything I quoted that means
>something different in context.  If someone ever does, I will be 
>glad to retract what I said and apologize.

But by leaving out the surrounding statements which modified and
softened the position I was taking, it seemed to me that you gave the
appearance of my taking positions stronger than, and ultimately with a
different meaning from, those I in fact took.  You then responded to the
arguments it seemed that you had carefully constructed the appearance
of my making, rather than the arguments I in fact made.  Also, you
failed to include or take account of the fact that all of the
strong-sounding statements you quoted were in the context of concrete
examples of specific applications where I had concluded from
experience that lisp was worse suited than C, while leaving out the
passages that acknowledged that for other kinds of applications I feel
that the opposite result holds.  Thus it seemed to me that you made it
appear as if I were stating general principles about all languages or
all kinds of applications that I never, in fact, made.

>For it may be that I've misunderstood you.  That's entirely possible.
>But I'm not deliberately distorting what you say, and I'm not being
>disingenuous.

I accept what you say.  But can you understand from the above why I
have felt at times that you may have been?

[...]

>That may be, but there must be better ways to straighten such things out.
>At this point, we are both reacting w/ excess hostility, it seems to me.

I agree.

>As for relevance, I am sometimes trying to counter impressions rather
>than to refute your points.

But by leaving out or belittling the aspects of my own messages which
acknowledge lisp's strengths and explicitly state my real belief that
lisp is better suited to many tasks than languages like C, it seems to
me that you have reinforced the negative impression rather than
diminished it.

>Well, I don't think it's _only_ a misunderstanding.  There seem to
>be genuine disagreements as well.

[...]

Absolutely.  But I am of the impression that we actually agree on more
than we disagree, and that the areas of disagreement are hardly worth
the amount of vitriol we have both expended.

>----------------------------------------------------------------------
>
[...]

>Now, why do you find it so important to make all these points about
>Lisp rather than about particular Lisp-family languages and particular
>implementations?  Why spend all this time and effort?  What's so
>bad about someone saying Lisp needn't ever be too large or slow
>and *getting away with it*?  Especially since very strong points
>can be made about current implementations and the ones that are
>likely to be available in the near future.

I think that our disagreement here is mainly a matter of our each
having perceived a different "spin" to this thread.  I have never
perceived myself to have been, and have several times specifically
disavowed, making theoretical claims about all possible lisp dialects.
I have consistently tried to make it clear that I have been talking
about practical programming issues using real-world lisp
implementations for real-world programming projects _today_.  I use
the word "lisp" rather than "Acme Common Lisp" or "Joe's Shareware
Scheme" both because I wish to make it clear that I am not talking
about issues relevant to just one implementation but to features that
all current implementations seem to share, and because I do not think
that it would be fair to any particular vendor to give the appearance
of singling them out as being particularly bad at some given task when
in fact the issues I have raised are equally applicable to their
competitors.

[...]

>Programming languages don't line up neatly with application areas,
>and when people try to line them up they often get it wrong.
>
>Moreover, there's little point in concluding that Lisp will be worse
>than C *somewhere* (though we don't know where).  We need to have an
>idea of where and how much worse and what scope there is to make Lisp
>do better.  So this general point about "natural aptitude" doesn't
>tell us anything very useful even if it were completely correct.

But why do you assume I have gotten it wrong?  I have tried to give
concrete examples of areas (e.g. GC being a poor choice of
memory-management for a particular kind of application which makes
few-in-number-but-large-in-size dynamic allocations) where it has been
demonstrated sufficiently to satisfy me that C's features (malloc /
free) are better suited than lisp's (make-array / GC).  But this isn't
a matter of "application areas" in the sense I think you mean but
rather a matter of specific application requirements, (although it is
true that in many application areas, most or all applications'
requirements will have a family resemblance.)
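A minimal C sketch of the allocation pattern described above (illustrative only, not code from the applications in question): a few large, long-lived buffers whose lifetimes the programmer knows exactly. With malloc/free the cost is one call each way and release is deterministic, whereas a GC would have to trace or move the same megabytes on every collection.

```c
#include <stdlib.h>

/* Hypothetical sketch of the few-in-number-but-large-in-size pattern:
   one big buffer, explicitly acquired and released. */
double *acquire_frame(size_t n) {
    return malloc(n * sizeof(double));   /* one large allocation */
}

void release_frame(double *frame) {
    free(frame);                          /* immediate, predictable release */
}
```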

[...]

>But I (at least) don't argue about which language is better in
>an absolute sense.  (Just in case this wasn't clear.)

I wasn't accusing you of doing so.  I was trying to clearly restate
what _my_ position is.

[...]

>That is again not a position I hold.  (I'm again trying to counter
>possible confusions and misunderstandings just in case they exist.)

Again, I was stating _my_ position without reference to yours.

[...]

>Well, why *did* you say those other things?  Presumably you did
>feel Lisp, and not just particular implementations, had problems
>with I/O; that higher-level features necessarily had costs; and
>so on.

I do feel that lisp, i.e. the family of languages commonly regarded as
lisp dialects, does have as a group certain features that make it
well suited to certain tasks and ill suited to others.  I do feel that
higher-level features necessarily have costs, with the specific
stipulation that it is possible to have trivial additional features or
purely syntactic conveniences with so little extra cost that they can
be regarded as free.  I do not put any of the features to which I have
been referring in that "trivial" category, however.  And that the cost
of supporting some additional feature may in particular instances be
more than outweighed by the benefits that it confers does not make it
free.  It does mean that sometimes it is worth trading off some
incremental increase in size or decrease in performance in one area so
as to obtain some advantage in another.  The higher-bandwidth
allocation and deallocation that result from using GC, traded off
against the fact that, in general, where there is a GC there will
also be garbage, is an example of a feature which can either benefit
or hurt an application depending on its requirements and design.

[...]

>Then why didn't you confine yourself to facts instead of including
>not-fully-justified claims about Lisp in general and making _a priori_
>arguments about higher-level features?

I disagree with your characterization of my arguments.

>Look, this is not, especially now, some kind of coded accusation or
>claim about your real aims.  I don't know why you said many of the
>things you did, especially since you could -- and did -- offer much
>stronger arguments on narrower, less inflammatory, claims.  Moreover, 
>I don't understand why you think I shouldn't pay any attention to
>these other points.

[...]

I don't think that you "shouldn't pay any attention to these other
parts."  I disagree with your interpretation of them.  If that is
because I haven't expressed myself clearly enough, then that is my
fault but I do not know how to express myself more clearly so I
suggest that we simply stop arguing in circles around one another, and
agree to disagree (where we do.)

>I think what you've said may have that effect (discouraging people
>from using Lisp) and may well reinforce a number of prejudices besides.

If that is the case, it was not my intention.

>I suspect that most people think Lisp is well-suited to some things.
>That doesn't say very much in itself.  When I point out that Lisp
>could do much better, that current implementations are misleading,
>etc, you don't agree with me.

But I _have_ agreed.  What I have objected to was what seemed to me to
be responses to a different argument than one which I was making.  I
have never disputed that lisp implementations could be different than
they are now, perhaps making them better suited to some of the tasks
at which I consider them presently to be rather poor.  But I have
never been talking about programming language theory, rather about
application programming practice (which I consider as valid a topic of
discourse for comp.lang.lisp.)  In addition, (and here _is_ a
substantive point of disagreement between us, I think) even from a
theoretical point of view I seriously question the assumption that
these future hypothetical lisp implementations could both be
well-suited to the kinds of applications to which C is currently
better suited _and_ still be well-suited to the kinds of applications
for which lisp is currently better suited.  I am willing to be shown
how they could, but I have seen no arguments that have convinced me
that it is even plausible.

>                              Nor do you (at least not so far)
>say just how bad the problems you've observed are.  You give the
>impression that they're so bad as to disqualify Lisp, but for
>all we know Lisp is only 10% slower than C or the problems are
>due to poorly tuned implementations.

It is true that I have not presented, and do not feel it necessary or
appropriate to present, the output of specific analytical tools.  I
have recommended that people use such tools on their own applications
to test whether or not they are suffering similar sorts of performance
problems.  I have stated that I do in fact consider that lisp has been
disqualified for certain applications on the basis of such analyses.
While I have observed differences in performance much greater than
10%, even 10% (or less) can make the difference between acceptable
performance and unacceptable performance for certain kinds of
applications.  Such criteria are so application specific that I do not
feel that it is useful or meaningful to even try to go into the level
of detail on which you seem to insist.

>                                      (That a number of different
>Lisps have the same problem shows very little, BTW.)

It shows that lisps in general and not just some particular
implementation have problems for the kinds of applications to which I
have referred.

[...]

>
>I think it's counterproductive to engage in that kind of psychological
>speculation/analysis: not only are they wrong, they're over-compensating
>for a mistaken perception and *pretending* that the facts are different
>than they are.  --  You must know how annoying and insulting that will
>be.

You have no idea how annoying and insulting I have found most of your
messages?

>Anyway, just what tasks *are* those at which Lisp is really bad?  
>And just how bad is it?  Will you now say?
>
>-- jeff

Since you ask, I will again repeat the kinds of tasks I feel lisp to
be bad at: real-time tasks and system- or application-level tasks that
are not "real-time" in the traditional sense but which require
continuous high-bandwidth interaction with hardware or the user.  And
let me again point out that I only think that lisp is bad in those
instances where the requirements of the application are at odds with
the design and implementation of the language.  It may be possible
that some application that falls into the above categories actually
would benefit from some current or future lisp's memory-management
philosophy and other features, but I have yet to encounter one.  As
for "how bad", all that I can meaningfully say without going into much
greater detail than I am prepared to about the design and
implementation of the specific applications on which I have worked is
"bad enough".

Kirk Rader
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <Cw4rEu.2sB@cogsci.ed.ac.uk>
In article <·················@polaris.ih.att.com> ···@polaris.ih.att.com (Lawrence G. Mayka) writes:
>In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
>
>   discourse for comp.lang.lisp.)  In addition, (and here _is_ a
>   substantive point of disagreement between us, I think) even from a
>   theoretical point of view I seriously question the assumption that
>   these future hypothetical lisp implementations could both be
>   well-suited to the kinds of applications to which C is currently
>   better suited _and_ still be well-suited to the kinds of applications
>   for which lisp is currently better suited.  I am willing to be shown
>   how they could, but I have seen no arguments that have convinced me
>   that it is even plausible.
>
>Common Lisp implementations often contain extensions that "add back
>in" most-all the features for which C is praised: operations that
>compile to a single machine instruction, for example.  Theoretically,
>one could go to the trouble of standardizing such extensions, and
>defining their relationship to the rest of Common Lisp, the result
>being essentially a new language that is a union of Common Lisp and C
>functionality, but with regular syntax.

What puzzles me in Kirk's view is why someone would think that the
process of Lisp becoming "well-suited to the kinds of applications to
which C is currently better suited _and_ still be well-suited to the
kinds of applications for which lisp is currently better suited"
must have *stopped*.

It's happened for a number of applications so far, but for some
reason cannot go further.  I find such views bizarre.

(Indeed, it can easily go further for some applications just by
producing smaller images.  But I suppose Kirk really means real-time
and like applications and not in fact "the kinds of applications to
which C is currently better suited" in general.)

Anyway, in addition to the techniques LGM mentions, we might note
that (1) GC technology is still improving, (2) type inference has not
been exploited as much as it could be, (3) other forms of compile-time
analysis are still becoming more sophisticated -- e.g. compilers that
look at the whole program, (4) more use could be made of the kinds of
type information already available (see e.g. the thread on "data
bloat"), (5) techniques for limiting the scope of redefinition
(freezing or whatever it is in Dylan, confining redefinable classes 
to a separate metaclass as in some EuLisp-related stuff), (6) 
different ways to package implementations (e.g. as libraries or
shared libraries).

No doubt a determined skeptic can find ways to seemingly dismiss all
of this (e.g. by characterizing things that can be compiled away as
"trivial" and "syntactic sugar", or by claiming that the resulting
languages won't be "well-suited to the kinds of applications for which
lisp is currently better suited"; and again by talking of "Lisp"
as if it were a single language).  N.B. e.g. not i.e.

-- jeff
From: Thomas M. Breuel
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <TMB.94Aug8201139@arolla.idiap.ch>
In article <··········@rci.ripco.com> ····@ripco.com (Flier) writes:
|Kris Karas (···@enterprise.bih.harvard.edu) wrote:
| In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
| >I have found that applications which require "real-time" performance,
| >[...] are almost if not impossible to achieve in lisp.
|
| Oh, rubbish.  
|
| So, fighting apples to apples now, and using some machine-specific
| code, let's see how "slow" this lisp program is compared to its C
| counterpart.  The task: copy from one wired array to another (wired
| means that the storage management system has been told to not swap the
| array to disk [virtual memory]).  To make the program slightly smaller
| for the sake of net.bandwidth, a lot of the setup code will be
| simplified (no multi-dimensional arrays, etc), we'll assume the arrays
| are integer multiples of 4 words long and that they're the same size.
|
| (defun copy-a-to-b (a b)
|   (sys:with-block-registers (1 2)
|     (setf (sys:%block-register 1) (sys:%set-tag (locf (aref a 0))
| 						sys:dtp-physical)
| 	  (sys:%block-register 2) (sys:%set-tag (locf (aref b 0))
| 						sys:dtp-physical))
|     (loop repeat (ash (length a) -2) do
|       (let ((a (sys:%block-read 1 :prefetch t))
| 	    (b (sys:%block-read 1 :prefetch t))
| 	    (c (sys:%block-read 1 :prefetch nil))
| 	    (d (sys:%block-read 1 :prefetch nil)))
| 	(sys:%block-write 2 a)
| 	(sys:%block-write 2 b)
| 	(sys:%block-write 2 c)
| 	(sys:%block-write 2 d)))))
|
|
| So, we've used lisp and not assembler.

Looks like assembler to me.  Worse yet, it's completely unportable and
requires intimate familiarity with the particular system used as well
as details of the hardware and software environment that most users
just don't care about.

The need to write code like this in Lisp in order to achieve good
performance is precisely the reason why people prefer languages like C
for many tasks: writing equivalent, efficient C code is
straightforward and needs to rely only on portable primitives.
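As an illustration of the "equivalent, efficient C" being alluded to (a hedged sketch, not code from the original discussion): the whole wired-array copy reduces to one portable library call, which good implementations turn into a tuned block move, with no machine-specific primitives in sight.

```c
#include <string.h>
#include <stddef.h>

/* Portable C counterpart to the block-register copy: one memcpy,
   which the C library is free to implement with prefetching,
   wide loads, or whatever the hardware offers. */
void copy_a_to_b(const double *a, double *b, size_t len) {
    memcpy(b, a, len * sizeof(double));
}
```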

				Thomas.
From: Marco Antoniotti
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <MARCOXA.94Aug8183946@mosaic.nyu.edu>
In article <················@arolla.idiap.ch> ···@arolla.idiap.ch (Thomas M. Breuel) writes:


   From: ···@arolla.idiap.ch (Thomas M. Breuel)
   Newsgroups: comp.lang.lisp
   Date: 08 Aug 1994 18:11:39 GMT
   Organization: IDIAP (Institut Dalle Molle d'Intelligence Artificielle
	   Perceptive)
   Lines: 47


   In article <··········@rci.ripco.com> ····@ripco.com (Flier) writes:
   |Kris Karas (···@enterprise.bih.harvard.edu) wrote:
   | In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
   | >I have found that applications which require "real-time" performance,
   | >[...] are almost if not impossible to achieve in lisp.
   |
   | Oh, rubbish.  
   |
   | So, fighting apples to apples now, and using some machine-specific
   | code, let's see how "slow" this lisp program is compared to its C
   | counterpart.  The task: copy from one wired array to another (wired
   | means that the storage management system has been told to not swap the
   | array to disk [virtual memory]).  To make the program slightly smaller
   | for the sake of net.bandwidth, a lot of the setup code will be
   | simplified (no multi-dimensional arrays, etc), we'll assume the arrays
   | are integer multiples of 4 words long and that they're the same size.
   |
   | (defun copy-a-to-b (a b)
   |   (sys:with-block-registers (1 2)
   |     (setf (sys:%block-register 1) (sys:%set-tag (locf (aref a 0))
   | 						sys:dtp-physical)
   | 	  (sys:%block-register 2) (sys:%set-tag (locf (aref b 0))
   | 						sys:dtp-physical))
   |     (loop repeat (ash (length a) -2) do
   |       (let ((a (sys:%block-read 1 :prefetch t))
   | 	    (b (sys:%block-read 1 :prefetch t))
   | 	    (c (sys:%block-read 1 :prefetch nil))
   | 	    (d (sys:%block-read 1 :prefetch nil)))
   | 	(sys:%block-write 2 a)
   | 	(sys:%block-write 2 b)
   | 	(sys:%block-write 2 c)
   | 	(sys:%block-write 2 d)))))
   |
   |
   | So, we've used lisp and not assembler.

   Looks like assembler to me.  Worse yet, it's completely unportable

Never seen such a thing as a "portable" assembler. Try to run MIPS
code on your 486 :)

   and
   requires intimate familiarity with the particular system used as well
   as details of the hardware and software environment that most users
   just don't care about.

   The need to write code like this in Lisp in order to achieve good
   performance is precisely the reason why people prefer languages like C
   for many tasks: writing equivalent, efficient C code is
   straightforward and needs to rely only on portable primitives.

Like the stream class of C++? It took the major C++ vendors three
releases to get out something that resembled the AT&T stream class
and would run the examples in Lippman's book.

Moreover, the intermediate language used by the GNU GCC compiler (RTL)
looks a lot like...guess what?

Cheers
--
Marco Antoniotti - Resistente Umano
-------------------------------------------------------------------------------
Robotics Lab		| room: 1220 - tel. #: (212) 998 3370
Courant Institute NYU	| e-mail: ·······@cs.nyu.edu

...e` la semplicita` che e` difficile a farsi.
...it is simplicity that is difficult to make.
				Bertholdt Brecht
From: Mike Haertel
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <MIKE.94Aug9004110@pdx399.intel.com>
In article <····················@mosaic.nyu.edu> ·······@mosaic.nyu.edu (Marco Antoniotti) writes:
>   From: ···@arolla.idiap.ch (Thomas M. Breuel)
>   The need to write code like in Lisp in order to achieve good
>   performance is precisely the reason why people prefer languages like C
>   for many tasks: writing equivalent, efficient C code is
>   straightforward and needs to rely only on portable primitives.
>
>Like the stream class of C++? It took the major C++ vendors three
>releases before getting out something that it would resemble the AT&T
>stream class, and that would run the examples on Lippman's book.

There is a big difference between C++ and C.  C is not a semantic nightmare.

Anyway, you seem to have completely missed Thomas' point, that
the original poster's example of "efficient Lisp" wasn't Lisp at all.
I sure can't find it anywhere in CLtL.

I believe it's possible to make Lisp efficient, but dressing up
assembly language with lots of irritating silly parentheses is
*not* the way to go.

>Moreover, the intermediate language used by the GNU GCC compiler (RTL)
>looks a lot like...guess what?

GCC's RTL syntax may have lots of parentheses, but that's its only
connection with Lisp.  So it's not clear to me why you made that remark.
It sounds to me like you're saying "GCC's RTL looks a lot like Lisp,
therefore using assembler dressed up in parentheses must be a
reasonable way to write efficient Lisp code."  Bzzt!  Time for a
reality check.
--
Mike Haertel <····@ichips.intel.com>
Not speaking for Intel.
From: Paul F. Snively
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <3282ut$p97@news1.svc.portal.com>
In article <·················@pdx399.intel.com>
····@ichips.intel.com (Mike Haertel) writes:

> In article <····················@mosaic.nyu.edu> ·······@mosaic.nyu.edu (Marco Antoniotti) writes:
> Anyway, you seem to have completely missed Thomas' point, that
> the original poster's example of "efficient Lisp" wasn't Lisp at all.
> I sure can't find it anywhere in CLtL.

The original poster's point seemed to me to have been that any
real-world Lisp implementation will allow you to do what is generally
necessary in order to talk directly to the hardware, which is what many
people are concerned about when they talk about `real-time'
constraints.  It was also to point out that it's not necessarily any
harder to use registers and stack-frames to avoid garbage-collection in
cases where that's important.

To me, it's not a valid argument to say `C is great because it's
essentially a quasi-portable assembler that lets you efficiently bang
on the hardware,' and then yell `foul' when someone else says `Lisp is
great because it's a high-level language that will let you, if/when you
want, efficiently bang on the hardware.'  I would say that the fact
that C _forces_ you to bang on the hardware, while Lisp merely allows
you to if you're willing to bypass all the abstraction away from the
hardware, is a compelling case in Lisp's favor _for most
non-device-driver one-megabyte-or-more application tasks_.

> I believe it's possible to make Lisp efficient, but dressing up
> assembly language with lots of irritating silly parentheses is
> *not* the way to go.

So far, no one has commented on G2, the hard real-time real-Lisp expert
system development tool.  I may have to dig up old AI Expert/PC AI
magazine articles about it.  From what I understand, in a nutshell,
they make heavy use of `resources' (that is, they maintain their own
free-lists of various types in order to avoid garbage collection) and
they use lots of type declarations in their code--which languages like
C again _force_ you to do anyway.  So by using C-like memory-management
and type declarations, they win, and still get to use Lisp's other
great features.
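The `resource' technique described above can be sketched in a few lines (an illustrative sketch of the general idea, not G2's actual code): recycle fixed-size nodes through a private free list, so that steady-state operation allocates nothing and never triggers the collector.

```c
#include <stdlib.h>

/* A resource pool: freed nodes go back on a private free list
   instead of to the allocator, so reuse costs two pointer moves. */
typedef struct node { struct node *next; double payload; } node;

static node *free_list = NULL;

node *node_alloc(void) {
    if (free_list) {                  /* reuse a recycled node */
        node *n = free_list;
        free_list = n->next;
        return n;
    }
    return malloc(sizeof(node));      /* fall back to the heap only once */
}

void node_free(node *n) {             /* return to the pool, not to free() */
    n->next = free_list;
    free_list = n;
}
```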

> >Moreover, the intermediate language used by the GNU GCC compiler (RTL)
> >looks a lot like...guess what?
> 
> GCC's RTL syntax may have lots of parentheses, but that's its only
> connection with Lisp.

I'm afraid that that's simply untrue.  The connection with Lisp is on a
deep mathematical/logical level.  Remember, RMS (Richard Stallman) is
one of the old MIT Lisp hackers.  One of his significant claims to fame
is that when the Lisp Machines that MIT created went commercial in the
form of Symbolics, Inc. and Lisp Machines, Inc. being spun off by
various denizens of the MIT AI Lab, RMS would take every new release of
Genera that Symbolics put out, reverse engineer it, and reimplement the
results, which he would then provide to LMI.  He sincerely believes
that technology should be freely available to anyone who wants it.

But I digress.  RTL is related to Lisp inasmuch as they both derive
directly from the Lambda Calculus.  As a theoretical point, it's been
understood for some time that if you're willing to express everything
about a program in terms of the Lambda Calculus, some wonderful
optimization opportunities arise.  One reason that this remained a
theoretical consideration for so long is the difficulty inherent in
representing side-effects in the Lambda Calculus, and popular languages
such as C are notorious for their reliance upon side-effects.  C adds
insult to injury by allowing aliases--that is, indirect side-effecting
through pointers.

Nevertheless, GCC manages to compile C to RTL, a Lambda-Calculus
derivative, at which point it can do many of the optimizations that one
would expect for a language based on the Lambda Calculus.  It then
translates the RTL to assembly language for the target processor, and
hands the results off to the system's assembler.

Note that _all optimizations that GCC does are done to the RTL, not the
assembly language_.  And GCC is still considered one of the best
optimizing C compilers around--generally better than the C compiler
that many vendors include with their system!  The wiser vendors have
taken to including the latest GCC with their system--Sun Microsystems
and NeXT come to mind.

> So it's not clear to me why you made that remark.
> It sounds to me like you're saying "GCC's RTL looks a lot like Lisp,
> therefore using assembler dressed up in parentheses must be a
> reasonable way to write efficient Lisp code."  Bzzt!  Time for a
> reality check.

I believe that the point was that it's possible for a lisp-like
language (RTL) to generate efficient code, and that GCC is an existence
proof.

> --
> Mike Haertel <····@ichips.intel.com>
> Not speaking for Intel.


-----------------------------------------------------------------------
Paul F. Snively          "Just because you're paranoid, it doesn't mean
·····@shell.portal.com    that there's no one out to get you."
From: Rob MacLachlan
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <3289mb$b20@cantaloupe.srv.cs.cmu.edu>
>From: ·····@shell.portal.com (Paul F. Snively)
>
>RTL is related to Lisp inasmuch as they both derive directly from the Lambda
>Calculus.

So far as I know, this is not true.  I do know that the early papers about the
use of "Register Transfer Language" in portable back-ends make no mention of
any such semantic tie-in, and that the actual implementation was also quite un-lispy
(based on string processing of a fairly conventional assembly format.)

I'm not familiar with GCC internals, but Stallman's decision to use the term
RTL doesn't suggest a radical new semantic foundation.

There has been some discussion (such as in Guy Steele's Lambda papers) of
using lambda and continuation passing style as the ultimate target independent
intermediate format (even for non-Lisp languages), but so far as I know CPS
has only been used in basically non-optimizing Scheme and ML compilers.

>I believe that the point was that it's possible for a lisp-like
>language (RTL) to generate efficient code, and that GCC is an existence
>proof.

If you consider Lambda to be what makes a language "Lisp-like", then I would
agree that efficient code could easily be generated for C-with-Lambda (or
C-with-lambda-and-parens) especially if upward closures were illegal.
Performance-wise, the big problems with Lisp variants such as Common Lisp and
Scheme are that:
 -- Basic operations such as arithmetic are semantically far removed
    from the hardware,
 -- Dynamic memory allocation is done by the runtime system, and is thus
    not under effective programmer control, and
 -- A run-time binding model tends to be used for variable references and
    function calls. 

Inefficiency is a natural consequence of programming using high-level
operations, though this inefficiency can be overcome with a fair degree of
success by compiler technology and iterative tuning.

I don't hold out much hope for the idea of overcoming the inefficiency of
high-level operations by the explicit use of scads of low-level operations,
especially when the operations are implementation dependent.  It just gives up
too many of the reasons for wanting to use a high-level language.

I believe that the primary key to adequate performance in dynamic languages
like Lisp is adequate tuning tools and educational materials.  Getting good
performance in dynamic languages currently requires far too much knowledge
about low-level implementation techniques.  Wizards have too long claimed that
"Lisp is just as efficient as C" --- although Lisp may be highly efficient in
the hands of the wizards, the vast majority of programmers who attempt Lisp
system development don't come anywhere near that level of efficiency.

  Rob
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <CuDv0w.49I@cogsci.ed.ac.uk>
In article <··········@cantaloupe.srv.cs.cmu.edu> ···@cs.cmu.edu (Rob MacLachlan) writes:
>
>If you consider Lambda to be what makes a language "Lisp-like", then I would
>agree that efficient code could easily be generated for C-with-Lambda (or
>C-with-lambda-and-parens) especially if upward closures were illegal.
>Performance-wise, the big problems with Lisp variants such as Common Lisp and
>Scheme are that:
> -- Basic operations such as arithmetic are semantically far removed
>    from the hardware,
> -- Dynamic memory allocation is done by the runtime system, and is thus
>    not under effective programmer control, and
> -- A run-time binding model tends to be used for variable references and
>    function calls. 

I know you know what you're talking about, but this doesn't make all
that much sense to me.

I can see the problem with arithmetic, though it's not that hard to
get arithmetic that's fairly close to the hardware.

But I don't see why "programmer control" over allocation has to
be more efficient.  In many cases, it won't be, and it requires
a fair amount of skill to get efficient storage management when
using malloc directly would be too slow.  It's also far more
error-prone.

And what is this "run-time binding" and why is it worse than,
say, shared libraries?  _Most_ variables will be ordinary lexical
variables and correspond directly to stack locations just as in
C.  Using ordinary 70s-style technology, global variable and
function names can be looked up once to get the required address.
Thereafter, the cost is one level of indirection on each reference
or call.  Moreover, it's easy to do better in some cases.  For
instance, KCL uses direct calls to functions in the same file.
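The 70s-style binding scheme described here can be sketched in C (a hypothetical illustration; the cell and function names are invented): the global function name is resolved once to a cell, every later call pays one indirection through that cell, and redefinition is just a store into it.

```c
/* Two candidate definitions of the "global function". */
static int add1(int x) { return x + 1; }
static int add2(int x) { return x + 2; }

/* The name is "looked up once" to this cell. */
static int (*fcell)(int) = add1;

int call_through_cell(int x) {
    return fcell(x);                  /* one indirection per call */
}

void redefine(void) {                 /* redefinition swaps the cell */
    fcell = add2;
}
```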

>I believe that the primary key to adequate performance in dynamic languages
>like Lisp is adequate tuning tools and educational materials.  Getting good
>performance in dynamic languages currently requires far too much knowledge
>about low-level implementation techniques.  Wizards have too long claimed that
>"Lisp is just as efficient as C" --- although Lisp may be highly efficient in
>the hands of the wizards, the vast majority of programmers who attempt Lisp
>system development don't come anywhere near that level of efficiency.

I agree that performance tools and educational materials are
important, but I don't think it's all that hard to get reasonably
efficient Lisp.

-- jeff
From: Marco Antoniotti
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <MARCOXA.94Aug9123734@mosaic.nyu.edu>
In article <··········@news1.svc.portal.com> ·····@shell.portal.com (Paul F. Snively) writes:

   From: ·····@shell.portal.com (Paul F. Snively)
   Newsgroups: comp.lang.lisp
   Date: 9 Aug 1994 14:13:48 GMT
   Organization: tumbolia.com

   In article <·················@pdx399.intel.com>
   ····@ichips.intel.com (Mike Haertel) writes:

	...

   So far, no one has commented on G2, the hard real-time real-Lisp expert
   system development tool.  I may have to dig up old AI Expert/PC AI
   magazine articles about it.  From what I understand, in a nutshell,
   they make heavy use of `resources' (that is, they maintain their own
   free-lists of various types in order to avoid garbage collection) and
   they use lots of type declarations in their code--which languages like
   C again _force_ you to do anyway.  So by using C-like memory-management
   and type declarations, they win, and still get to use Lisp's other
   great features.

If I remember correctly, G2 basically goes to great lengths to provide
GC-free arithmetic and carefully uses resources to avoid "consing".
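The resource technique being described can be sketched in C (a hypothetical illustration, not G2's actual code): a free list of preallocated fixed-size nodes, recycled by hand, so steady-state allocation never creates garbage and never reaches the collector.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of a "resource": a hand-managed free list of
   fixed-size nodes.  Allocate/deallocate just relink pointers, so no
   garbage is created and no GC pause can occur in the steady state. */
typedef struct node { double payload; struct node *next; } node;

enum { POOL_SIZE = 64 };
static node pool[POOL_SIZE];
static node *free_list = NULL;
static int pool_ready = 0;

node *allocate_node(void) {            /* cf. "allocate-resource" */
    if (!pool_ready) {                 /* lazy one-time pool setup */
        for (int i = 0; i < POOL_SIZE; i++) {
            pool[i].next = free_list;
            free_list = &pool[i];
        }
        pool_ready = 1;
    }
    node *n = free_list;
    if (n) free_list = n->next;
    return n;                          /* NULL when the pool is exhausted */
}

void deallocate_node(node *n) {        /* cf. "deallocate-resource" */
    n->next = free_list;
    free_list = n;
}
```

A freed node is handed right back on the next allocation (LIFO order), which is also good for locality; the price is that the programmer, not the GC, must get the deallocations right.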

As usual, the definition of "Real Time" must be taken into account. NYU
is a notorious Ada stronghold and there are many stories going around
about it. One of the most succulent is that of an Ada system
failing to meet its real-time specs - i.e. the tasks were missing their
deadlines. Well, it turned out that no matter how good the compilation
was (and Ada can potentially be optimized better than C), the
system still missed the deadlines. Of course the problem was the
scheduling policy. Hardly a matter of "efficiency of the language".

BTW. There is a portable implementation of Common Lisp Resources in
the Lisp Repository maintained by the never thanked enough Mark
Kantrowitz.

	...

   > So it's not clear to me why you made that remark.
   > It sounds to me like you're saying "GCC's RTL looks a lot like Lisp,
   > therefore using assembler dressed up in parentheses must be a
   > reasonable way to write efficient Lisp code."  Bzzt!  Time for a
   > reality check.

   I believe that the point was that it's possible for a lisp-like
   language (RTL) to generate efficient code, and that GCC is an existence
   proof.

Pretty much it. Thanks

Happy Lisping
--
Marco Antoniotti - Resistente Umano
-------------------------------------------------------------------------------
Robotics Lab		| room: 1220 - tel. #: (212) 998 3370
Courant Institute NYU	| e-mail: ·······@cs.nyu.edu

...e` la semplicita` che e` difficile a farsi.
...it is simplicity that is difficult to make.
				Bertholdt Brecht
From: Lawrence G. Mayka
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <LGM.94Aug10124715@polaris.ih.att.com>
In article <····················@mosaic.nyu.edu> ·······@mosaic.nyu.edu (Marco Antoniotti) writes:

   BTW. There is a portable implementation of Common Lisp Resources in
   the Lisp Repository maintained by the never thanked enough Mark
   Kantrowitz.

CLIM also includes one (possibly the same one, I don't know) in the
CLIM-SYS package.

--
        Lawrence G. Mayka
        AT&T Bell Laboratories
        ···@ieain.att.com

Standard disclaimer.
From: Mike Haertel
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <MIKE.94Aug10140823@pdx399.intel.com>
In article <··········@news1.svc.portal.com> ·····@shell.portal.com (Paul F. Snively) writes:
>>[Mike Haertel wrote this, Snively incorrectly attributed it.]
>> GCC's RTL syntax may have lots of parentheses, but that's its only
>> connection with Lisp.
>
>I'm afraid that that's simply untrue.  The connection with Lisp is on a
>deep mathematical/logical level.  Remember, RMS (Richard Stallman) is
>one of the old MIT Lisp hackers.  One of his significant claims to fame
>is that when the Lisp Machines that MIT created went commercial in the
>form of Symbolics, Inc. and Lisp Machines, Inc. being spun off by
>various denizens of the MIT AI Lab, RMS would take every new release of
>Genera that Symbolics put out, reverse engineer it, and reimplement the
>results, which he would then provide to LMI.  He sincerely believes
>that technology should be freely available to anyone who wants it.

Ok, you're saying "RMS is an old time Lisp hacker, therefore GCC's RTL
*must* have some deep connection with Lisp."  A stupid thing to say.

The anecdote about his anti-Symbolics crusade is cute, true, and irrelevant
to the matter at hand.

>But I digress.  RTL is related to Lisp inasmuch as they both derive
>directly from the Lambda Calculus.

RTL has nothing whatever to do with the Lambda calculus.  Every
single RTL statement includes a side effect.  RTL makes no pretense
whatever of referential transparency.

>As a theoretical point, it's been
>understood for some time that if you're willing to express everything
>about a program in terms of the Lambda Calculus, some wonderful
>optimization opportunities arise.

This is called continuation passing style.  Using CPS as your
intermediate representation has some benefits.  However, other
representations, notably static single assignment, wherein each
temporary is assigned exactly once, have the same benefits.
Anyway, as far as I know CPS was invented by Steele for his
scheme compiler "Rabbit" around 1975.  It's been used in a
handful of compilers; New Jersey ML is a notable recent example,
and Appel wrote an oft-cited book about it.

But GCC doesn't use CPS.  In fact, as far as I know no compiler for
any conventional imperative language uses CPS.  There might be an
interesting research project in that.
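To make the CPS idea concrete, here is a hypothetical rendering of factorial in C: nothing ever "returns" upward; each step hands its result to an explicit continuation record that remembers the pending work. (C is used purely for illustration; a CPS-based compiler builds these continuations itself, as Rabbit did.)

```c
#include <assert.h>

/* Hypothetical illustration of continuation-passing style.  Each
   recursive step, instead of returning, passes its result to a
   continuation that holds the multiplication still to be done. */
typedef struct cont cont;
struct cont {
    int (*fn)(cont *self, int value);  /* what to do with the value */
    int pending;                       /* factor saved for later */
    cont *next;                        /* rest of the continuation chain */
};

static int halt(cont *self, int value) {
    (void)self;
    return value;  /* final continuation: just deliver the answer */
}

static int mul_step(cont *self, int value) {
    /* perform the saved multiplication, pass the result onward */
    return self->next->fn(self->next, self->pending * value);
}

int fact_cps(int n, cont *k) {
    if (n <= 1)
        return k->fn(k, 1);
    cont step = { mul_step, n, k };  /* "multiply by n later" */
    return fact_cps(n - 1, &step);
}
```

Called with the `halt` continuation, `fact_cps(5, ...)` threads 1 back through the chain of pending multiplications and yields 120. The analogy to SSA is that each `value` is bound exactly once.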

>[...]

>Nevertheless, GCC manages to compile C to RTL, a Lambda-Calculus
>derivative, at which point it can do many of the optimizations that one
>would expect for a language based on the Lambda Calculus.  It then
>translates the RTL to assembly language for the target processor, and
>hands the results off to the system's assembler.

RTL is not a lambda-calculus derivative.  Various "register transfer
languages" have been around for a long time.  GCC's is based on the
one used in Davidson & Fraser's "Portable Optimizer", done at Arizona
around 1980.  Davidson & Fraser's RTL was in turn based on something,
I forget the name, invented at CMU.

>Note that _all optimizations that GCC does are done to the RTL, not the
>assembly language_.  And GCC is still considered one of the best
>optimizing C compilers around--generally better than the C compiler
>that many vendors include with their system!  The wiser vendors have
>taken to including the latest GCC with their system--Sun Microsystems
>and NeXT come to mind.

The purpose of RTL is to provide a semantic representation of machine
instruction effects.  For example, the 68020 instruction "addl ··@, d0"
maps (approximately) to the RTL instruction

  (set (reg 0) (add (reg 0)
		    (mem (reg 8))))

Therefore, almost exactly the opposite of what you said is true.
GCC optimizes directly on the target machine's assembly language.
It does not have an abstract intermediate language that it first
goes through.

>I believe that the point was that it's possible for a lisp-like
>language (RTL) to generate efficient code, and that GCC is an existence
>proof.

In case it's not yet blatantly clear to you, RTL is not a lisp-like
language:

1.  It is statically typed.
2.  It is heavily side-effect oriented.
3.  It has no structured data types.
4.  It has no implicit memory allocation.

GCC's RTL's *only* connection with Lisp is that RMS chose a
parenthesis-heavy syntax for it.  The syntax is entirely superficial.
From: Henry G. Baker
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <hbakerCuDrvD.E9J@netcom.com>
In article <··················@pdx399.intel.com> ····@ichips.intel.com (Mike Haertel) writes:
>>As a theoretical point, it's been
>>understood for some time that if you're willing to express everything
>>about a program in terms of the Lambda Calculus, some wonderful
>>optimization opportunities arise.
>
>This is called continuation passing style.  Using CPS as your
>intermediate representation has some benefits.  However, other
>representations, notably static single assignment, wherein each
>temporary is assigned exactly once, have the same benefits.
>Anyway, as far as I know CPS was invented by Steele for his
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^
>scheme compiler "Rabbit" around 1975.  It's been used in a
>handful of compilers; New Jersey ML is a notable recent example,
>and Appel wrote an oft-cited book about it.

WRONG!  Continuation-passing style was apparently known & used by
lambda calculus mathematicians (Curry?, Church?) before computers.
(Unfortunately, I don't have a reference.)  CPS was popularized by
Michael Fischer in a 1972 paper that was recently reprinted in Lisp &
Symbolic Computation.  A great deal of Carl Hewitt's 1972-74 Actor
stuff is CPS++.  Steele's Rabbit compiler showed how CPS could be used
in a compiler to simplify and generalize many important optimizations.
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <CuA8Et.Jo5@cogsci.ed.ac.uk>
In article <·················@pdx399.intel.com> ····@ichips.intel.com (Mike Haertel) writes:

>Anyway, you seem to have completely missed Thomas' point, that
>the original poster's example of "efficient Lisp" wasn't Lisp at all.
>I sure can't find it anywhere in CLtL.

Since when is Lisp the same as what's in CLtL?

>I believe it's possible to make Lisp efficient, but dressing up
>assembly language with lots of irritating silly parentheses is
>*not* the way to go.

I agree.

>>Moreover, the intermediate language used by the GNU GCC compiler (RTL)
>>looks a lot like...guess what?
>
>GCC's RTL syntax may have lots of parentheses, but that's its only
>connection with Lisp. 

But what is Lisp here?  What's in CLtL?
From: Mike Haertel
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <MIKE.94Aug10142220@pdx399.intel.com>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <·················@pdx399.intel.com> ····@ichips.intel.com (Mike Haertel) writes:
>>Anyway, you seem to have completely missed Thomas' point, that
>>the original poster's example of "efficient Lisp" wasn't Lisp at all.
>>I sure can't find it anywhere in CLtL.
>
>Since when is Lisp the same as what's in CLtL?

Since I was trying to make a point, didn't feel like going into the
never-ending philosophical discussion of exactly what characteristics
of a language make it "Lisp".  CLtL was a convenient scapegoat.

The point I was trying to make was that the original poster's
example was just as vendor-specific as assembly language.

>But what is Lisp here?  What's in CLtL?

I refuse to get sucked into this discussion.  This newsgroup has
seen it too many times before.  Are you bored, or what?

p.s. "Lisp" includes Scheme.
--
Mike Haertel <····@ichips.intel.com>
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <CuFqwK.69w@cogsci.ed.ac.uk>
In article <··················@pdx399.intel.com> ····@ichips.intel.com (Mike Haertel) writes:
>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>>In article <·················@pdx399.intel.com> ····@ichips.intel.com (Mike Haertel) writes:
>>>Anyway, you seem to have completely missed Thomas' point, that
>>>the original poster's example of "efficient Lisp" wasn't Lisp at all.
>>>I sure can't find it anywhere in CLtL.
>>
>>Since when is Lisp the same as what's in CLtL?
>
>Since I was trying to make a point, didn't feel like going into the
>never-ending philosophical discussion of exactly what characteristics
>of a language make it "Lisp".  CLtL was a convenient scapegoat.

You don't have to go into any philosophical discussion to say
"Common Lisp" if you mean Common Lisp or to say "not portable
Lisp" or "vendor specific" or whatever if _that's_ what you mean.

>The point I was trying to make was that the original poster's
>example was just as vendor-specific as assembly language.

And he was making a point about Lisp, not about standardized,
portable Lisp or whatever it is you're talking about.

>>But what is Lisp here?  What's in CLtL?
>
>I refuse to get sucked into this discussion.  This newsgroup has
>seen it too many times before.  Are you bored, or what?

I'm not trying to suck you into any discussion.  I'm just tired of
people saying "Lisp" when they mean Common Lisp (or whatever they
mean -- it's usually unclear).  

Actually, I'm more than tired of it.  People are forming false
conclusions about all Lisps because of what they've seen of Common
Lisp.  This is a serious problem for anyone who wants to work in Lisp.
Claims about "Lisp" that are true only of some subset are part of
the problem.

-- jd
From: Jeff Dalton
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <CuA7ux.JFE@cogsci.ed.ac.uk>
In article <················@arolla.idiap.ch> ···@idiap.ch writes:

>| (defun copy-a-to-b (a b)
>|   (sys:with-block-registers (1 2)
>|     (setf (sys:%block-register 1) (sys:%set-tag (locf (aref a 0))
>| 						sys:dtp-physical)
>| 	  (sys:%block-register 2) (sys:%set-tag (locf (aref b 0))
>| 						sys:dtp-physical))
>|     (loop repeat (ash (length a) -2) do
>|       (let ((a (sys:%block-read 1 :prefetch t))
>| 	    (b (sys:%block-read 1 :prefetch t))
>| 	    (c (sys:%block-read 1 :prefetch nil))
>| 	    (d (sys:%block-read 1 :prefetch nil)))
>| 	(sys:%block-write 2 a)
>| 	(sys:%block-write 2 b)
>| 	(sys:%block-write 2 c)
>| 	(sys:%block-write 2 d)))))
>|
>| So, we've used lisp and not assembler.
>
>Looks like assembler to me. 

Really?  You must be used to pretty fancy macro assemblers (with
loop macros, etc).

>The need to write code like in Lisp in order to achieve good
>performance 

But it's not necessary to write code like that in order to
achieve _good_ performance in Lisp.
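For comparison: the quoted routine is, at bottom, just a block copy. A portable version of the same operation (sketched here in C rather than Lisp) needs none of the machine-specific machinery:

```c
#include <assert.h>
#include <stddef.h>

/* A portable block copy, equivalent in effect to the quoted
   sys:%block-register routine.  Sketch only; a decent compiler will
   unroll or otherwise optimize this loop on its own. */
void copy_a_to_b(const int *a, int *b, size_t n) {
    for (size_t i = 0; i < n; i++)
        b[i] = a[i];
}
```

The portable form gives up the prefetch hints, of course; the point is that acceptable (if not peak) performance doesn't require the vendor-specific primitives.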
From: Kris Karas
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <32dtcr$l3p@hsdndev.harvard.edu>
In article <················@arolla.idiap.ch> ···@idiap.ch writes:
>|     (loop repeat (ash (length a) -2) do
>|       (let ((a (sys:%block-read 1 :prefetch t))
>| 	    (b (sys:%block-read 1 :prefetch t))..........
>|
>| So, we've used lisp and not assembler.
>
>Looks like assembler to me.  Worse yet, it's completely unportable and
>requires intimate familiarity with the particular system used

Fair enough.  C looks like assembler to me.

What many C programmers fail to notice, however, is that what they are
programming has little to do with C by itself, and very much to do
with "the C syntax" layered on top of a vast library particular to
one specific system.  To wit: most of the C environments I use assume
that the computer they're running on supports a file system with
subdirectories, that there just happens to be a top-level directory
called "usr", that there just happens to be a subdirectory under that
called "include", and so on ad nauseam.

Let's have fun.  Take a large system snatched from net.sources or
something, copy it over to a non-Unix platform which has a C compiler
and a library for the functions described in K&R, and compile the thing.
Does it run?  Better yet, does it even compile?  I have a DeSmet C
compiler on my PC that implements all of K&R, and I can find few
programs indeed that will actually compile and run successfully.
In short, C is *not* portable.  Most programs written in it depend
heavily upon knowledge of the platform (Unix) on which they run.
This is no different than the lisp/assembler program above depending
upon knowledge of its particular platform.

The problem with portability of C code was solved for non-Unix
platforms by making the compiler have knowledge of the Unix platform
layout, wrapping this environment around the compiling program while
it compiles; if the program asks for "/usr/include/time.h" the
compiler will provide it, even if there isn't a directory called
"/usr/include" on the actual machine.

Portability problems for Lisp could be solved in a similar fashion.
Emulate enough of the system-specific functions of a popular
environment in other environments so that any lisp program could
depend upon those functions.  Add support for fast, low level I/O to
disk, keyboard, video, and so forth; process manipulation,
synchronization, and scheduler control; asynchronous
events/interrupts including being a network server; standardized calls
to editors, print managers, and so forth and so on.
-- 
Kris Karas <···@enterprise.bih.harvard.edu> for fun, or @aviion-b... for work.
(setq *disclaimer* "I barely speak for myself, much less anybody else."
      *conformist-numbers* '((AMA-CCS 274) (DoD 1236))
      *bikes* '((RF900RR-94) (NT650-89 :RaceP T) (TSM-U3 :Freebie-P T)))
From: Steven Rezsutek
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <STEVE.94Aug11155517@baloo.gsfc.nasa.gov>
···@enterprise.bih.harvard.edu (Kris Karas) writes:

   Portability problems for Lisp could be solved in a similar fashion.
   Emulate enough of the system-specific functions of a popular
   environment in other environments so that any lisp program could
   depend upon those functions.  Add support for fast, low level I/O to
   disk, keyboard, video, and so forth; process manipulation,
   synchronization, and scheduler control; asynchronous
   events/interrupts including being a network server; standardized calls
   to editors, print managers, and so forth and so on.

Cool.  A LispM-ulator. ;-)


Steve
-- 
Steven Rezsutek			      ···············@gsfc.nasa.gov
Nyma / NASA GSFC 	
Code 735.2			      Vox: +1 301 286 0897
Greenbelt, MD 20771	
From: John R. Bane
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <32fuhl$4ca@tove.cs.umd.edu>
In article <···················@baloo.gsfc.nasa.gov> ·····@baloo.gsfc.nasa.gov (Steven Rezsutek) writes:
>···@enterprise.bih.harvard.edu (Kris Karas) writes:
>
>   Portability problems for Lisp could be solved in a similar fashion.
>   Emulate enough of the system-specific functions of a popular
>   environment in other environments so that any lisp program could
>   depend upon those functions.....
>
>Cool.  A LispM-ulator. ;-)
>
It's been done, several times, and commercially to boot.  Medley from Venue
runs D-machine images on top of a C emulator that essentially fakes a
D-machine environment on top of Unix or MS-DOS.  You can build an image on
a Dorado, dump it and restart it on a Unix box without a hiccup.  You can't
go the other way, because the emulator doesn't do a D-machine perfectly
(doesn't maintain the D-machine page tables, for one thing).

Hasn't Symbolics done an emulator-based port to the Alpha?
-- 
Internet: ····@tove.cs.umd.edu
UUCP:...uunet!mimsy!bane
Voice: 301-552-4860
From: Thomas M. Breuel
Subject: Re: C is faster than lisp (lisp vs c++ / Rick Graham...)
Date: 
Message-ID: <TMB.94Aug12042824@arolla.idiap.ch>
In article <··········@hsdndev.harvard.edu> ···@enterprise.bih.harvard.edu (Kris Karas) writes:
|The problem with portability of C code was solved for non-Unix
|platforms by making the compiler have knowledge of Unix platform
|layout, wrapping this environment around the compiling program while
|it compiles; if the program asks for a "/usr/include/time.h" the
|compiler will provide it, even if there isn't a directory called
|"usr/include" on the actual machine.
|
|Portability problems for Lisp could be solved in a similar fashion.

Sure, a lot of C portability is based on de-facto standards, rather
than on codified standards.  But those de-facto standards exist.  And
where they don't exist, there exist easy, de-facto standards for
interfacing to system libraries.

|Emulate enough of the system-specific functions of a popular
|environment in other environments so that any lisp program could
|depend upon those functions.  Add support for fast, low level I/O to
|disk, keyboard, video, and so forth; process manipulation,
|synchronization, and scheduler control; asynchronous
|events/interrupts including being a network server; standardized calls
|to editors, print managers, and so forth and so on.

I would be happy if I could write efficient numerical functions in
CommonLisp that I could move to different CommonLisp implementations
on the same machine without having to redo all the declarations, and
if I could write foreign-function interface code that would work under
different CommonLisp implementations on the same machine.

Most of the other stuff you mentioned is operating system specific and
not language specific.  You wouldn't get portability for that in C or
any other language, nor do I see why you would expect to.

Of course, with the amazing shrinking number of CommonLisp vendors,
we may soon have all the de-facto standards we want...

				Thomas.
From: 25381-geller
Subject: Re: (lisp vs c++ performance) Rick Graham, Master of Hackology in C++
Date: 
Message-ID: <ASG.94Jul28101843@xyzzy.gamekeeper.bellcore.com>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:

   In article <·················@polaris.ih.att.com> ···@polaris.ih.att.com (Lawrence G. Mayka) writes:

   [...]

   >
   >Seriously, real-life Common Lisp applications typically require
   >image-trimming (e.g., via a treeshaker) in order to be competitive in
   >space with similar applications written in more parsimonious
   >languages.  We simply must include the image-trimming effort in the
   >total productivity equation.  I still think we come out way ahead.
   >--
   >        Lawrence G. Mayka
   >        AT&T Bell Laboratories
   >        ···@ieain.att.com
   >
   >Standard disclaimer.


   I agree that using lisp you come out way ahead in productivity and
   without paying too large a cost in executable size and performance if:

   1. You are sufficiently aware of the performance and memory
   implications of common operations and idioms to know how to design for
   a sufficient degree of efficiency up front without having to spend too
   much time finding and fixing "performance leaks" after the fact.  This
   will be highly dependent on the particular implementation in use,
   since in my experience most (commercial or otherwise) lisp
   implementations have their own idiosyncratic patterns of which
   operations cons excessively and which do not, and which functions are
   coded in an optimally efficient manner and which are better avoided in
   favor of home-grown lisp or foreign code.

   2.  You are working in a problem domain which is well-suited to lisp
   in the first place.  Some problem domains are best addressed using the
   features of a lisp-like language because they actually make good use
   of lisp's semantic bells-and-whistles.  Note that the more "lispish"
   features one uses, the less tree-shaking is likely to actually find
   substantial amounts of unused code to eliminate but, since it is
   already conceded in this case that those features "pay their way" in
   the application's executable, this is not an issue.

   The considerations cited in 1, together with the quality of modern
   C/C++ integrated development and debugging environments, are the
   reason I feel that most claims of lisp's "productivity enhancing"
   features are overblown, if not simply false.  There are valid reasons
   for using lisp for certain kinds of applications.  There are also
   valid reasons for avoiding it in others.  When starting a new project,
   it is a good idea to spend at least some time considering the
   trade-offs involved before making a choice as basic as what language
   to use in implementing it.  As someone whose job it has been to make
   piles of performance-critical lisp-machine code run on general-purpose
   machines in commercial Common Lisp implementations, I have had this
   confirmed through bitter experience.  The more that tree-shaking,
   declarations, application-specific foreign or simplified versions of
   standard functions, etc. are required for acceptable performance and
   actually succeed in achieving it, the more evidence it is that lisp
   was not really the best choice for that particular application (even
   though there are applications for which it is, in fact, the best
   choice due to considerations as in 2, above) and the more likely it is
   that lisp will make one less, rather than more, productive.

Having just finished my first major development effort in C++, after
years of C programming and lots of Lisp, I have to take exception to
some of the assertions Mr. Rader is making here, both stated and
implied:

- My experience using SoftBench, ObjectCenter, and GNU emacs, compared
to MCL, is that I figure I'm at least twice as productive using MCL --
and I'm much more familiar with C than Lisp. Once a project grows to
any 'reasonable' size, tools like ObjectCenter become unmanageable,
and slow as molasses.
  The other issue here is that it takes us over 5 hours, on an
otherwise empty Sparc 10, to rebuild our application. Even if only a
single source file is changed, a rebuild will often take over 3 hours,
since (for reasons that we don't really understand) many of our
templates will wind up getting rebuilt. This makes it very difficult
to fix (and unit test) more than a couple of small bugs in a single
day!

- The implication that C++ optimization is machine independent is just
plain wrong. Even across different compilers on the same machine, or
different libraries, there can often be significant variations in the
relative performance of operations such as heap allocation and
deallocation, function calls vs. inlined functions, single-
vs. double-precision arithmetic, switch/case vs. if/else, stdio
vs. streams vs. read/write, virtual vs. normal functions, etc. While
you can certainly argue that optimization is less necessary for C or
C++, I'm not sure that this is true in many commercial applications,
which usually spend most of their user CPU on string copying and
string scanning, rather than number crunching.
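A hypothetical micro-benchmark skeleton for just one of those axes, repeated heap allocation versus buffer reuse, makes the point: what varies across compilers and C libraries is the ratio between the two, which is exactly why "C++ optimization" is not implementation-independent.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical micro-benchmark skeleton.  Both routines compute the
   same checksum; one churns through malloc/free, the other reuses a
   single stack buffer.  Timing them (e.g. with clock()) on different
   compilers and C libraries shows how much the relative cost of heap
   allocation varies from one implementation to another. */
enum { ITERS = 100000, BYTES = 256 };

unsigned long churn_malloc(void) {
    unsigned long sum = 0;
    for (int i = 0; i < ITERS; i++) {
        unsigned char *p = malloc(BYTES);   /* fresh heap block each time */
        p[0] = (unsigned char)i;
        sum += p[0];
        free(p);
    }
    return sum;
}

unsigned long churn_reuse(void) {
    unsigned long sum = 0;
    unsigned char buf[BYTES];               /* one buffer, reused */
    for (int i = 0; i < ITERS; i++) {
        buf[0] = (unsigned char)i;
        sum += buf[0];
    }
    return sum;
}
```

Both functions return the same checksum, so any timing difference is attributable to the allocator; rerunning the comparison under another compiler or malloc implementation gives a different ratio.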

- One other major advantage of Lisp over C++ is the relative maturity
of the languages. We developed our application in C++ on SunOS 4.1.3
initially, using ObjectCenter's C++ compiler. When we moved to HP-UX
9.0, still with OC C++, we found numerous bugs related to 'edge'
conditions -- basically, all related to the fact that the order of
static object construction and destruction is completely undefined by
the ARM, and thus THE SAME COMPILER generated different orders on two
different systems.
  And yes, it's true, we shouldn't have coded anything with any
implicit assumptions about the order of static object construction or
destruction, but what we've been reduced to is a combination of hacks
and backing away from objects in favor of 'built-in' types, whose
static construction is always done first-thing (i.e., const char *
instead of const String). To me, this is a bug in the language
definition, and a very serious one.

   Several people have suggested in this thread that one code one's
   mainline application modules in C or C++, and use a small, easily
   extensible version of lisp as a "shell" or "macro" language to glue
   the pieces together and provide the end-user programmability features
   which are one of lisp's greatest assets.  This seems to me to be an
   ideal compromise for applications where lisp's performance and
   resource requirements are unacceptable to the main application but
   where it is still desired to retain at least some of lisp's superior
   features.

I'd argue for the reverse of this -- build one's mainline in Lisp, and
build the performance-critical pieces in C or C++. The less
statically-compiled, statically-linked stuff in the delivered
application, the better -- quicker edit/compile/build/test cycles,
easier patching in the field, and easier for your users to change or
extend what you've given them.

   ------------------------------------------------------------
   Kirk Rader                                 ····@triple-i.com

--
----------------------------------------------------------------------------- 
 Alan Geller                                            phone: (908)699-8285 
 Bell Communications Research                             fax: (908)336-2953 
 444 Hoes Lane                                   e-mail: ···@cc.bellcore.com 
 RRC 5G-110
 Piscataway, NJ 08855-1342
From: 25381-geller
Subject: Re: (lisp vs c++ performance) Rick Graham, Master of Hackology in C++
Date: 
Message-ID: <ASG.94Aug5110506@xyzzy.gamekeeper.bellcore.com>
In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:

   Newsgroups: comp.lang.lisp
   From: ····@triple-i.com (Kirk Rader)
   Organization: Information International Inc., Culver City, CA
   Date: Sat, 30 Jul 1994 19:57:55 GMT

   In article <·················@xyzzy.gamekeeper.bellcore.com> ···@xyzzy.gamekeeper.bellcore.com (25381-geller) writes:
   >In article <··········@triple-i.com> ····@triple-i.com (Kirk Rader) writes:
   >


   [...]

   >Having just finished my first major development effort in C++, after
   >years of C programming and lots of Lisp, I have to take exception to
   >some of the assertions Mr. Rader is making here, both stated and
   >implied:
   >
   >- My experience using SoftBench, ObjectCenter, and GNU emacs, compared
   >to MCL, is that I figure I'm at least twice as productive using MCL --
   >and I'm much more familiar with C than Lisp. Once a project grows to
   >any 'reasonable' size, tools like ObjectCenter become unmanageable,
   >and slow as molasses.
   >  The other issue here is that it takes us over 5 hours, on an
   >otherwise empty Sparc 10, to rebuild our application. Even if only a
   >single source file is changed, a rebuild will often take over 3 hours,
   >since (for reasons that we don't really understand) many of our
   >templates will wind up getting rebuilt. This makes it very difficult
   >to fix (and unit test) more than a couple of small bugs in a single
   >day!


   This sounds like a case either of a novice C++ programmer's
   predictable problems with using templates effectively or a broken
   implementation.  Either way, my experience has been that Common Lisp
   implementations are more susceptible to both kinds of problems than
   C++ implementations, primarily because C++ attempts so much less than
   CL.  I have listed any number of specific "horror stories" encountered
   when trying to develop commercial-quality software using third-party
   commercial CL development platforms in previous postings and private
   email that they generated, but I will repeat them if requested.

Well, I'd certainly agree that I was, at least, a novice C++
programmer, but I've been programming in C for about 13 years now, and
I've been doing OOP development in a variety of languages (Smalltalk,
CLOS, C, and PASCAL) for over 5 years. I didn't have nearly this level
of difficulty with my first significant Lisp project.

We also have a few very experienced C++ developers on our staff, so
it's not just me.
   >
   >- The implication that C++ optimization is machine independent is just
   >plain wrong. Even across different compilers on the same machine, or
   >different libraries, there can often be significant variations in the
   >relative performance of operations such as heap allocation and
   >deallocation, function calls vs. inlined functions, single-
   >vs. double-precision arithmetic, switch/case vs. if/else, stdio
   >vs. streams vs. read/write, virtual vs. normal functions, etc. While
   >you can certainly argue that optimization is less necessary for C or
   >C++, I'm not sure that this is true in many commercial applications,
   >which usually spend most of their user CPU on string copying and
   >string scanning, rather than number crunching.


   I never suggested that C++ optimization was "machine independent" (or
   implementation independent, either.)  I do state emphatically that one
   must typically spend less time optimizing C++ code than lisp code
   since 1) the typical C++ compiler does a better job of optimizing on
   its own than the typical lisp compiler and 2) the much smaller
   run-time infrastructure assumed by C++ just doesn't give nearly as
   much room for implementations and naive programmers using them to do
   things in spectacularly unoptimal ways.  Since, to paraphrase
   Heinlein, it is never safe to underestimate the power of human
   stupidity :-), that doesn't mean that either the C++ vendor or the
   programmer can't still manage to make it so that, for example,
   changing one small line in one function causes a 3-hour rebuild.  In
   general, though, my experience has been that this sort of problem is
   more common in implementations of lisp-like languages than C-like ones,
   where the size of the feature space and the myth that it is easy to go
   back after the fact and just tweak the slow parts of an application
   conspire to encourage sloppiness on the part of implementors and
   programmers alike.

Here again, I'd disagree. For one thing, the wide availability of
tools in C and C++ means that one is constantly running into someone's
class library, theoretically production-proven code, that uses (say) a
yacc/lex-based compiler to parse a simple data structure. Recoding
this parser by hand provided a 4-fold speed-up in a speed-critical
part of our application; of course, it also forced us to completely
redevelop the entire class, since we didn't have source (the speed-up
required direct access to private member variables and adding a new
class variable, and you can't do that without source, even in a
derived class).

Or alternatively, we use three different vendor class libraries that
use three different string representations (char *, USL String, and a
private vstr class). The time (both programmer and CPU) involved in
converting between these representations is not trivial. If C++ had
more run-time infrastructure, then this wouldn't be an issue.

Or again, part of why it takes us over 5 hours to rebuild our
application is that it's big -- over 8 meg (stripped). If a base class
gets changed, every child class has to get recompiled -- and every
other file that includes the header. If we were using a dynamic
language, rebuilding the application is a non-event.

   The trade-off to using this minimalist approach to language and
   run-time features is, of course, that if you find you really need a
   feature that is built in to a more elaborate language like CL, then
   you must either find an existing library implementation or write one
   yourself.  One factor frequently left out of the debate on this issue is
   recognition of the fact that due to the greater general acceptance
   (deserved or not) of C and C++, there is a greater availability of
   both commercial and shareware libraries for everything from very
   application-level activities like serial communications to more
   system-level activities like memory-management.  When you add all of
   the features that are easily obtainable in this way to those built in
   to C++, it is not really as poor and benighted an environment as many
   regular contributors to this newsgroup would have one believe.

One of the biggest problems with C and C++ is the availability of
large numbers of incompatible, non-portable commercial and shareware
libraries that almost do everything you'd want, but not quite, and not
quite how you want, and optimized for a usage pattern different than
yours. If C++ allowed more flexible class modification, without
requiring source access, or if the subclassing facility were a bit
nicer, or if there were a MOP, then things would be much easier. But
what is the likelihood of this happening?
   >
   >- One other major advantage of Lisp over C++ is the relative maturity
   >of the languages. We developed our application in C++ on SunOS 4.1.3
   >initially, using ObjectCenter's C++ compiler. When we moved to HP-UX
   >9.0, still with OC C++, we found numerous bugs related to 'edge'
   >conditions -- basically, all related to the fact that the order of
   >static object construction and destruction is completely undefined by
   >the ARM, and thus THE SAME COMPILER generated different orders on two
   >different systems.
   >  And yes, it's true, we shouldn't have coded anything with any
   >implicit assumptions about the order of static object construction or
   >destruction, but what we've been reduced to is a combination of hacks
   >and backing away from objects in favor of 'built-in' types, whose
   >static construction is always done first-thing (i.e., const char *
   >instead of const String). To me, this is a bug in the language
   >definition, and a very serious one.


   While I can sympathize with the frustration that could be caused from
   discovering this as the cause of a seemingly mysterious bug, possibly
   after some considerable effort chasing blind alleys, I cannot really
   take seriously the assertion that this is as major a bug in the
   language definition as you make it out to be.  First of all, the ARM
   does state quite explicitly that once the default initialization of all
   static objects to zero is complete, "No further order is imposed on
   the initialization of objects from different translation units."  (pg.
   19, near the bottom) It then goes on to give references to the
   sections which define the ordering rules on which you can depend for
   local static objects.  So your observation that you shouldn't have
   relied on any particular order is quite correct.  Interspersed
   throughout the annotation sections of the ARM, and throughout the C++
   literature in general, are discussions of why that particular class
   of features - deferring to the implementation exactly what steps are
   taken, and in what order, when launching the application - exists and
   how to avoid the problems it can create.

If the ARM specified that the result of adding two integers was
implementation-dependent, would that make it OK? If a language
requires major hacks and kludging in order to provide deterministic
start-up and shut-down behavior, then that language has a serious bug
in its specification.

This is not the only place where C++'s relative immaturity is
evident. The ARM more or less throws up its hands over the treatment
of temporary values. There is no ability to extend
existing classes by adding new methods without subclassing, and
subclassing introduces major new hackery:

	class String;	// defined in, say, USL standard components

	class MyString : public String {
	public:
		// various constructors and whatnot removed, to
		// protect the innocent
	
		MyString& rtrim();
		MyString& pad(int length, char padChar = ' ');
	};

This looks easy, right? Except that you can't do the following:

	MyString a("hello "), b("world");
	MyString s = a + b;

unless you've written a MyString(String&) constructor, and even then
you're going to wind up copying the String in order to create a
MyString. Now, copying Strings isn't that big a deal (reference
counts!), but for some classes this can become quite expensive. (Yes,
it is possible to write a MyString::operator+(), etc., but this is a
hack, and messy, and bug-prone, and a maintenance nightmare).

Not that CL is perfect, either, of course. The lack of finalization (a
destructor method) is somewhat annoying, although using macros like
with-open-file helps a bit. The existence of set-value can be a
temptation to do truly evil things. The existence of eight different,
slightly incompatible ways to do any single operation is a real
problem when learning (reminiscent of X windows), although I've found
that I actually program in a CL subset, of say maybe 35-45% of the
language.

   There can be any amount of debate about the merits of any particular
   feature of any particular language.  Consider the recent long thread
   in this newsgroup on NIL vs '() vs boolean false in various dialects
   of lisp.  I believe the assertion that there are more such problems
   with the C++ specification than with the CL specification is simply
   false, again primarily due to the fact that it is so much smaller.

I disagree strongly. Again, Lisp and CL and Scheme are not perfect,
but they have been in use for much longer than C++ (specifically),
and far more people have had input into the structure and semantics
of the language than for C/C++. As a result, the problem issues that
remain are generally minor, peripheral ones (or religious debates
between dialects). For C to an extent, and for C++ very strongly,
there are many users who feel the language has serious if not fatal
flaws in its fundamental design and conception.

   What I believe you actually are objecting to is the fundamentally
   different set of principles guiding the initial design and subsequent
   evolution of the two families of languages.  Lisp began as an academic
   exercise in how to apply a certain technique from formal logic -
   the lambda calculus - to the design of a programming language as a way
   of exploring the possibilities inherent in the use of higher-order
   functions.  As such, theoretical purity and semantic "correctness" were
   more important than practical considerations like performance and
   conservation of machine resources.  C, on the other hand, was
   originally conceived as a special-purpose "middle-level" programming
   language designed for the express purpose of writing a portable
   operating system - namely Unix.  Retaining assembly language's
   affinity for highly optimal hardware-oriented "hackery" was considered
   more important than theoretical rigor or syntactic elegance.  Decades
   later, the two families of languages have evolved considerably but
   they each still retain an essentially dissimilar "flavor" that is the
   result of their very different origins, and each retains features that
   make it a better choice of an implementation language for certain
   kinds of applications.

I don't think John McCarthy was really concerned with lambda calculus,
at least not from the articles I've read. On the other hand, I was
certainly not there -- I was born the same year as Lisp, 1958 -- so
it's not unlikely that I'm way off base on this.

C, at least, has to me an intellectual coherence that makes it a
useful and usable language. I've written hundreds of thousands of
lines of C code, and while it has its warts -- and while I'd rather
write CL or Scheme -- it's OK. C++, on the other hand, seems to me to
be a hack pasted on top of a kludge, with some spaghetti thrown on
top. It wants to be all things to all people, without admitting to
any flaws -- it's an OOPL, it's procedural, it's dynamic, it's static,
it's a mess.
   >
   >   Several people have suggested in this thread that one code one's
   >   mainline application modules in C or C++, and use a small, easily
   >   extensible version of lisp as a "shell" or "macro" language to glue
   >   the pieces together and provide the end-user programmability features
   >   which are one of lisp's greatest assets.  This seems to me to be an
   >   ideal compromise for applications where lisp's performance and
   >   resource requirements are unacceptable to the main application but
   >   where it is still desired to retain at least some of lisp's superior
   >   features.
   >
   >I'd argue for the reverse of this -- build one's mainline in Lisp, and
   >build the performance-critical pieces in C or C++. The less
   >statically-compiled, statically-linked stuff in the delivered
   >application, the better -- quicker edit/compile/build/test cycles,
   >easier patching in the field, and easier for your users to change or
   >extend what you've given them.

   Again, only assuming that your application can afford the performance
   penalties that these features entail - in which case it would not be
   in the class of applications to which I explicitly referred.  In our
   own case, using exactly the strategy you and so many others advocate
   has failed spectacularly in achieving comparable size and performance
   behavior in our products to those of our competitors, which are
   universally written in some combination of C and C++.

Why are there more performance penalties in one approach than in the
other? Build the mainline in Scheme, if you don't want the run-time
footprint of CL. Or in Dylan, or EuLisp, or whatever. I guess the
bottom line for me, though, whatever language your main line happens
to be written in, is do as much as you can dynamically, in Lisp or
Scheme or Python or Smalltalk or Self or SML or whatever.



   ------------------------------------------------------------
   Kirk Rader                                 ····@triple-i.com

--
----------------------------------------------------------------------------- 
 Alan Geller                                            phone: (908)699-8285 
 Bell Communications Research                             fax: (908)336-2953 
 444 Hoes Lane                                   e-mail: ···@cc.bellcore.com 
 RRC 5G-110
 Piscataway, NJ 08855-1342
From: Lawrence G. Mayka
Subject: Re: (lisp vs c++ performance) Rick Graham, Master of Hackology in C++
Date: 
Message-ID: <LGM.94Aug5165629@polaris.ih.att.com>
In article <················@xyzzy.gamekeeper.bellcore.com> ···@xyzzy.gamekeeper.bellcore.com (25381-geller) writes:

   Not that CL is perfect, either, of course. The lack of finalization (a
   destructor method) is somewhat annoying, although using macros like
   with-open-file helps a bit. The existence of set-value can be a

I think most commercial CL implementations now have a finalization
capability, but I agree that we need a de facto standard interface to
it.
--
        Lawrence G. Mayka
        AT&T Bell Laboratories
        ···@ieain.att.com

Standard disclaimer.
From: Kelly Murray
Subject: Re: (lisp vs c++ performance) Rick Graham, Master of Hackology in C++
Date: 
Message-ID: <31rr2g$g01@sand.cis.ufl.edu>
In article <·····················@mosaic.nyu.edu>, ·······@mosaic.nyu.edu (Marco Antoniotti) writes:
|> In article <··················@qobi.ai> ····@qobi.ai (Jeffrey Mark Siskind) writes:
|> 
|> ...lots of stuff deleted about compilers.
|> 
|> I agree that a Common Lisp compiler built from scratch and based upon
|> such principles would be desirable. But, given the current
|> circumstances, I am very happy to stick with CMUCL.
|> 
|> 	...
|> 
|>    But rather than working on such a compiler, the Lisp community has three
|>    classes of people:
|> 	...
|> 
|>    - those adding creeping featurism to the language (like MOP, DEFSYSTEM,
|>      CLIM, ...)
|> 
|> I cannot agree on this statement. Common Lisp LOST the lead in the GUI
|> field because the user community (and, above all, the vendors) did not
|> agree on CLUE/CLIO when it first became available. CLIM, which is not
|> available (that is: I cannot run it on CMUCL or GCL, hence it is not
|> available) could be a good thing.

I guess I'll bore people again, too, though perhaps I'll say something
different this time.  

I agree with Siskind that Lisp could do much better to compete with C
by actually working on the areas that C people care about, rather
than just telling C people they care about the wrong things.

But I think the first mistake of Lisp users and vendors was (and is)
to compete against C at all.  Lisp is a great high-level language, 
suitable for doing high-level programming.  It allows one to implement stuff 
quickly and not worry about low-level details.

Who needs what Lisp offers???  Not people who implement file systems,
device drivers, graphics libraries, or other "low-level" code.
The right target is BUSINESS PEOPLE.  These people use COBOL,
and now relational databases, SQL, 4GLs, FORMS, and god-knows-what-else  
high-level, expensive, proprietary package, which lets them
quickly implement an application the Boss needed yesterday, 
like how many hula hoops were sold in California last month, etc.

A Lisp-based system could serve this market very well.

- Kelly Murray  (···@prl.ufl.edu) <a href="http://www.prl.ufl.edu">
-University of Florida Parallel Research Lab </a> 96-node KSR1, 64-node nCUBE