From: ········@bayou.uh.edu
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <5lati6$h7k$5@Masala.CC.UH.EDU>
Scott Schwartz (········@galapagos.cse.psu.edu.NO-SPAM) wrote:
: ·············@Alcatel.com.au (Chris Bitmead uid(x22068)) writes:
: | I'm still a bit vague on why you couldn't do what you wanted in Lisp,
: | efficiency issues put aside for a moment...

: Bzzzzt.  Efficiency is everything.  You can't put it aside.

Bzzzzzt!  Efficiency isn't even something until you know _FOR A FACT_
that your program doesn't have it.  Keeping your eye on nothing but
efficiency while developing your program leads to code that is
unmaintainable, unreadable, and unreliable -- and all this for
a benefit that your program may have had to begin with.

And if your program isn't efficient enough?  Well remember the
80/20 rule (some even go so far as to say 90/10).  The majority
of the processing time of your program is spent in a relatively
small area, so when the time comes to save cycles, optimize that
portion.  That way if you have to resort to monkey business to
get your program running faster, at least you would have limited
the extent of the changes.

That is not to say that you shouldn't code intelligently -- you
should.  Do things like pick algorithms that are good performers --
you know, instead of, say, a linear search, do a binary search,
etc.  But don't even play with pointers or multiply by bit shifting,
or any of that stupid garbage until you know it is needed.  All the
efficiency in the world won't help you if your program doesn't work!
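
To make that concrete, here's a toy sketch of my own (in C++, since
that's what everyone is arguing about; not anybody's production code)
of the same lookup done both ways -- the second needs sorted data but
no cleverness at all:

#include <algorithm>
#include <cstddef>
#include <vector>

/* Hypothetical example: membership test written two ways. */
bool contains_linear(const std::vector<int>& v, int key)
{
    for (std::size_t i = 0; i < v.size(); ++i)   /* O(n) scan */
        if (v[i] == key)
            return true;
    return false;
}

bool contains_sorted(const std::vector<int>& v, int key)
{
    /* assumes v is kept sorted; O(log n) and just as readable */
    return std::binary_search(v.begin(), v.end(), key);
}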

I truly wish people would get out of this ridiculous mindset.
I'm so sick of seeing code that uses pointer arithmetic instead
of array indexing because some clown without a clue figured it
would be efficient, only to see that this code gets called
once during the life of a program.  Drop the C/C++ mindset and
use your head.
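
And just so we're clear about what I mean, a made-up example -- any
decent compiler emits the same code for both, so the "clever" version
buys you nothing but confusion:

#include <cstddef>

long sum_indexed(const int* a, std::size_t n)
{
    long s = 0;
    for (std::size_t i = 0; i < n; ++i)
        s += a[i];                  /* plain indexing: says what it means */
    return s;
}

long sum_pointer(const int* a, std::size_t n)
{
    long s = 0;
    for (const int* p = a; p != a + n; ++p)
        s += *p;                    /* "hand-optimized" version */
    return s;
}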

Now all I need is a nice sounding catch phrase... hey how about
"Don't let the three U's happen to U!"?  


--
Cya,
Ahmed

From: Harley Davis
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <wkpvutimhx.fsf@laura.ilog.com>
········@Bayou.UH.EDU (········@bayou.uh.edu) writes:

> Scott Schwartz (········@galapagos.cse.psu.edu.NO-SPAM) wrote:
> : ·············@Alcatel.com.au (Chris Bitmead uid(x22068)) writes:
> : | I'm still a bit vague on why you couldn't do what you wanted in Lisp,
> : | efficiency issues put aside for a moment...
> 
> : Bzzzzt.  Efficiency is everything.  You can't put it aside.
> 
> Bzzzzzt!  Efficiency isn't even something until you know _FOR A FACT_
> that your program doesn't have it.  Keeping your eye on nothing but
> efficiency while developing your program leads to code that is
> unmaintainable, unreadable, and unreliable -- and all this for
> a benefit that your program may have had to begin with.
> 
> And if your program isn't efficient enough?  Well remember the
> 80/20 rule (some even go so far as to say 90/10).  The majority
> of the processing time of your program is spent in a relatively
> small area, so when the time comes to save cycles, optimize that
> portion.  That way if you have to resort to monkey business to
> get your program running faster, at least you would have limited
> the extent of the changes.

In my experience, the real killer with Lisp is not so much raw speed,
but rather excess memory use.  Lisp by nature is something of a memory
hog, requiring lots of extra indirections and extra runtime
information, in addition to not-very-compact code and lots of runtime
libraries that are loaded pretty often.  The problem with excess
memory use is that it doesn't fit in well with multi-tasking operating
systems.  Once you start swapping for whatever reason performance is
killed completely and users are unhappy.

C++ for all of its numerous flaws does allow you to use a lot less
memory than dynamic competitors like SmallTalk, Lisp, and even Java,
and this boosts performance for real apps.

-- Harley

-------------------------------------------------------------------
Harley Davis                            net: ·····@ilog.com
Ilog, Inc.                              tel: (415) 944-7130
1901 Landings Dr.                       fax: (415) 390-0946
Mountain View, CA, 94043                url: http://www.ilog.com/
From: Erik Naggum
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <3072689658775455@naggum.no>
* Harley Davis
| In my experience, the real killer with Lisp is not so much raw speed, but
| rather excess memory use.

in my experience, Lisp systems start out with a couple megabytes of memory
to hold the entire run-time system, then grow very gradually and maintain a
low system profile as far as memory goes.  C programs that do any useful
amount of work grow rapidly and remain large after the allocated memory
has outlived its usefulness.  they even leak memory when run over long
periods of time.

the real difference between Lisp systems and C programs under Unix is that
the Lisp systems usually do their work in one long-lived process, whereas
the Unix way is lots of short-lived processes.  C's memory allocation is
well suited to this mode of operation, since the operating system does the
garbage collection when the process terminates.  however, the C++ way with
Windows is once again large, long-lived processes.  not surprisingly, C++
does _much_ worse in this department than Lisp systems generally do.

furthermore, if memory usage was a "killer", Microsoft would never have
been able to foist its disgusting inferiority on the world.  even simple
tools like e-mail clients for Windows use more memory than computers _had_
only a few years ago.  Lisp systems _were_ relatively large, but since they
haven't grown very much, they are now _small_ compared to the cancerous
growth of software written by inferior programmers in inferior languages.

#\Erik
-- 
if we work harder, will obsolescence be farther ahead or closer?
From: ······@brightware.com
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <5lfrjf$2b74@news.brightware.com>
In article <················@naggum.no>, Erik Naggum <····@naggum.no> wrote:
;; * Harley Davis
;; | In my experience, the real killer with Lisp is not so much raw speed, but
;; | rather excess memory use.
;; 
;; only a few years ago.  Lisp systems _were_ relatively large, but since they
;; haven't grown very much, they are now _small_ compared to the cancerous
;; growth of software written by inferior programmers in inferior languages.
;;
;; #\Erik

To give a concrete example of the truth in this statement, on my Windows NT 
machine at work, Microsoft Outlook weighs in at 11.3 MB memory usage. 11 megs 
just to read email! Internet Explorer is sitting ugly at 13 MB.

Moving to my other desk, Macintosh Common LISP 4.0 with 2 medium-sized apps 
loaded (a blackboard system for improvising Jazz in real-time and a visual 
programming language a lot like Max) weighs in at just over 8 megabytes.

To contrast development environments -- when doing a compile in MS DevStudio, 
the IDE plus the compiler and preprocessor eat up about 36 megs for 
medium-sized jobs, and up to 64 megs or more to compile our full system.  
Similar-sized LISP projects in MCL usually compile in the default 8 MB memory 
partition, and never more than 12-16 MB.

-Adam
From: Henry Baker
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <hbaker-1505971840540001@10.0.2.1>
In article <··············@laura.ilog.com>, Harley Davis
<·····@laura.ilog.com> wrote:
> These modern apps we whine about are bloated not because of their
> programming language, but simply because they try to do too much at
> once, implementing far more features (and Easter eggs, and scripting
> languages, etc.) than any normal mortal ever uses.  [For instance,
> there is a complete flight simulator program in Excel97.  See
> www.cnet.com under the Easter Egg section for details.] They are huge,
> complicated monsters, which, I'm willing to bet, would be equally
> bloated in any programming language - probably even more bloated in
> Lisp if written by the same quality of programmers.

I still disagree.  I've _looked_ at a lot of C code, and it often has miles and
miles of duplicated code, pulling apart 10,000 different structures for
which the differences are completely non-functional.  The same functionality
in Lisp would utilize a list, and the same set of access functions for all
of them.  Every loop in C is hand-crafted, and in any good sized program, you
have 100 different versions of a list-searching function, 10 bubble-sorts, and
a partridge in a pair tree.
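
To make the duplication concrete, here is a made-up C++ sketch (mine,
not code from any real program I've looked at): the hand-crafted style
repeats the search loop for every struct, while one generic routine --
which is roughly what a Lisp list plus one shared set of access
functions gives you for free -- covers all of them.

#include <algorithm>
#include <list>
#include <string>

/* Hypothetical structs standing in for the 10,000 different ones. */
struct Employee { int id; std::string name; };
struct Invoice  { int id; double amount; };

/* Hand-crafted style: one search loop per struct, duplicated everywhere. */
Employee* find_employee(std::list<Employee>& emps, int id)
{
    for (std::list<Employee>::iterator it = emps.begin();
         it != emps.end(); ++it)
        if (it->id == id)
            return &*it;
    return 0;
}

/* Factored style: one predicate, one shared search for any record type. */
template <class Rec>
struct HasId {
    int id;
    explicit HasId(int i) : id(i) {}
    bool operator()(const Rec& r) const { return r.id == id; }
};

/* usage:  std::find_if(invoices.begin(), invoices.end(),
                        HasId<Invoice>(42));                              */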

In my previous posting, I forgot to mention several sources of 'bloat' in
the LispM, if you want to call it that.  I once estimated the total size of
all the _symbols_ in the system, and it was many megabytes.  For those of
you in the Unix world, this is the equivalent of keeping symbols in all the
 .o files (i.e., not stripping them), and allowing the programmer to actually
do something intelligent when a trap into the system occurs.

Another major source of 'bloat' in the LispM was the fact that it had full
word pointers at a time when microcomputers were still crawling around
with 16-bit pointers and wetting themselves (the Purify diaper hadn't
come on the scene yet).
From: ······@brightware.com
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <5lido9$2b75@news.brightware.com>
In article <·······················@10.0.2.1>, ······@netcom.com (Henry Baker) wrote:
;; In article <··············@laura.ilog.com>, Harley Davis
;; <·····@laura.ilog.com> wrote:
;; > These modern apps we whine about are bloated not because of their
;; > programming language, but simply because they try to do too much at
;; > once, implementing far more features (and Easter eggs, and scripting
;; 
;; I still disagree.  I've _looked_ at a lot of C code, and it often has miles and
;; miles of duplicated code, pulling apart 10,000 different structures for
;; which the differences are completely non-functional.  The same functionality
;; in Lisp would utilize a list, and the same set of access functions for all
;; of them.  Every loop in C is hand-crafted, and in any good sized program, you
;; have 100 different versions of a list-searching function, 10 bubble-sorts, and
;; a partridge in a pair tree.

Add templates to this, which are a feature that was bloody *designed* to 
duplicate code, and you've just compounded the problem. A project here 
dropped using templates for basic data structures because they bloated not 
only the object code size, but the run-time memory requirements ballooned  
because every template specialization on about 50 different types each had its 
own allocator which was happily gobbling up large "reserves" of memory, 
irrespective of each other.
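
Schematically, the pattern looked something like this (a stripped-down,
hypothetical reconstruction, not our actual code):

#include <cstddef>

/* Allocator template whose state is static, so every specialization -- */
/* PoolAlloc<int>, PoolAlloc<Foo>, ... -- keeps its own free list and    */
/* hoards its own chunk of "reserve" memory.                             */
template <class T>
struct PoolAlloc {
    union Slot { Slot* next; char storage[sizeof(T)]; }; /* alignment niceties omitted */

    static Slot* free_list_;            /* one per specialization */

    static void* allocate()
    {
        if (!free_list_)
            refill();                   /* grab a reserve for this T only */
        Slot* s = free_list_;
        free_list_ = s->next;
        return s;                       /* caller placement-news a T here */
    }

    static void deallocate(void* p)
    {
        Slot* s = static_cast<Slot*>(p);
        s->next = free_list_;           /* parked on this T's list forever */
        free_list_ = s;
    }

    static void refill()
    {
        const std::size_t count = 256;
        Slot* chunk = new Slot[count];  /* this specialization's private reserve */
        for (std::size_t i = 0; i + 1 < count; ++i)
            chunk[i].next = &chunk[i + 1];
        chunk[count - 1].next = 0;
        free_list_ = chunk;
    }
};

template <class T>
typename PoolAlloc<T>::Slot* PoolAlloc<T>::free_list_ = 0;

With ~50 element types that's ~50 private free lists, each hoarding its
own "reserve" and none of them ever shared or given back.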

- Adam
From: Hans-Juergen Boehm
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <337CDC61.446B@mti.sgi.com>
······@brightware.com wrote:
> 
> Add templates to this, which are a feature that was bloody *designed* to
> duplicate code, and you've just compounded the problem. A project here
> dropped using templates for basic data structures because they bloated not
> only the object code size, but the run-time memory requirements ballooned
> because every template specialization on about 50 different types each had its
> own allocator which was happily gobbling up large "reserves" of memory,
> irrespective of each other.
> 
C++ templates have their problems.  I would have much preferred a notion
of template that could truly be compiled separately, or at least type
checked separately.

But blaming an allocator-per-class library design on them isn't fair.
Some library implementations did that, but there's no good reason for
it.  (In the case of HP STL, I think that was an understood weakness.)
Recent versions of the draft standard have, if anything, discouraged it.
SGI's version of the Standard Template Library
(http://www.sgi.com/Technology/STL) does not.  As far as I could
determine, there is absolutely no performance advantage for the
allocator-per-class design.  And it causes nonperformance problems as
well.

-- 
Standard disclaimer ...
Hans-Juergen Boehm
·····@mti.sgi.com
From: Fergus Henderson
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <5lkrvq$26u@mulga.cs.mu.OZ.AU>
Hans-Juergen Boehm <·····@mti.sgi.com> writes:

>······@brightware.com wrote:
>> 
>> Add templates to this, which are a feature that was bloody *designed* to
>> duplicate code, and you've just compounded the problem. A project here
>> dropped using templates for basic data structures because they bloated not
>> only the object code size, but the run-time memory requirements ballooned
>> because every template specialization on about 50 different types each had its
>> own allocator which was happily gobbling up large "reserves" of memory,
>> irrespective of each other.
>
>C++ templates have their problems.  [...]
>But blaming an allocator-per-class library design on them isn't fair.

I think it _is_ fair to say that C++ encourages an allocator-per-class
library design.  The feature that encourages this is not templates, it is
class-specific operator new and delete methods.

(I think there's an example in "The C++ Programming Language" 2nd Ed
that encourages this sort of design.  [However, it is possible that I'm
misremembering that -- I've lent my copy to a friend, so I can't check it.])
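
The sort of thing I have in mind looks roughly like this (my own
generic illustration, not a quote from any particular book):

#include <cstddef>
#include <new>

class Widget {
public:
    /* Class-specific allocation: Widget keeps its own private free list. */
    void* operator new(std::size_t size)
    {
        if (size == sizeof(Widget) && free_list_) {
            void* p = free_list_;
            free_list_ = free_list_->next_free_;
            return p;
        }
        return ::operator new(size);   /* derived classes, first calls */
    }

    void operator delete(void* p, std::size_t size)
    {
        if (p == 0)
            return;
        if (size != sizeof(Widget)) {  /* not ours: give it back */
            ::operator delete(p);
            return;
        }
        Widget* w = static_cast<Widget*>(p);
        w->next_free_ = free_list_;    /* parked on Widget's private list */
        free_list_ = w;
    }

private:
    Widget* next_free_;                /* link reused while on the free list */
    /* ... real data members ... */
    static Widget* free_list_;
};

Widget* Widget::free_list_ = 0;

Every class written this way fences memory off on its own free list,
which is exactly the allocator-per-class effect under discussion.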

--
Fergus Henderson <···@cs.mu.oz.au>   |  "I have always known that the pursuit
WWW: <http://www.cs.mu.oz.au/~fjh>   |  of excellence is a lethal habit"
PGP: finger ···@128.250.37.3         |     -- the last words of T. S. Garp.
From: Hans-Juergen Boehm
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <33807951.41C6@mti.sgi.com>
Fergus Henderson wrote:
>
> Hans-Juergen Boehm <·····@mti.sgi.com> writes:
>
> >C++ templates have their problems.  [...]
> >But blaming an allocator-per-class library design on them isn't fair.
>
> I think it _is_ fair to say that C++ encourages an allocator-per-class
> library design.  The feature that encourages this is not templates, it is
> class-specific operator new and delete methods.
>
> (I think there's an example in "The C++ Programming Language" 2nd Ed
> that encourages this sort of design.  [However, it is possible that I'm
> misremembering that -- I've lent my copy to a friend, so I can't check it.])
>
Class specific new and delete methods encourage a per class choice of
allocator (which is also a feature that needs to be used with extreme
caution).  But this is still different from encouraging each class to
keep its own free list, etc.  It seems perfectly reasonable to define a
per-class new and delete that use a different global allocator, shared
by many such classes.  And this largely avoids the original heap
fragmentation problem.
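
Schematically, something like this (a hypothetical sketch; the
SharedPool here just forwards to the global allocator, where a real one
might pool small blocks):

#include <cstddef>
#include <new>

/* One allocator shared by many otherwise unrelated classes. */
struct SharedPool {
    static void* allocate(std::size_t n) { return ::operator new(n); }
    static void  deallocate(void* p)     { ::operator delete(p); }
};

class Message {
public:
    void* operator new(std::size_t n) { return SharedPool::allocate(n); }
    void  operator delete(void* p)    { SharedPool::deallocate(p); }
    /* ... data members ... */
};

class Event {
public:
    void* operator new(std::size_t n) { return SharedPool::allocate(n); }
    void  operator delete(void* p)    { SharedPool::deallocate(p); }
    /* ... data members ... */
};

Freed memory goes back to one place instead of being parked per class,
which is what avoids the fragmentation problem.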

I will agree that many books (not to mention many news messages) suggest
per-class free lists.  But you shouldn't believe everything you read. 
This is bad advice for many reasons:

1) It automatically makes the class thread-unsafe for any reasonable
definition of thread-safety.

2) It sometimes gets in the way of adding a garbage collector, since any
memory reachable from the free lists will look live to the collector.

3) It usually results in repeated code that should be factored out.

4) It results in unnecessary heap fragmentation for no measurable
performance gain that I have seen.  (There is sometimes a substantial
performance gain from inlining allocation code.  But that has nothing to
do with per-class free lists.)

--
Standard disclaimer ...
Hans-Juergen Boehm
·····@mti.sgi.com
From: Jim Veitch
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <337C9F80.425D@franz.com>
Henry Baker wrote:
> 
> In article <··············@laura.ilog.com>, Harley Davis
> <·····@laura.ilog.com> wrote:
> > These modern apps we whine about are bloated not because of their
> > programming language, but simply because they try to do too much at
> > once, implementing far more features (and Easter eggs, and scripting
> > languages, etc.) than any normal mortal ever uses.  [For instance,
> > there is a complete flight simulator program in Excel97.  See
> > www.cnet.com under the Easter Egg section for details.] They are huge,
> > complicated monsters, which, I'm willing to bet, would be equally
> > bloated in any programming language - probably even more bloated in
> > Lisp if written by the same quality of programmers.

The argument here assumes that Excel97 is written by average programmers.
Microsoft has a practice of going after the best, doing lots of code
review, and doing a large (very large!) amount of testing.  Hardly your
average coding environment.
 
> I still disagree.  I've _looked_ at a lot of C code, and it often has miles and
> miles of duplicated code, pulling apart 10,000 different structures for
> which the differences are completely non-functional.  The same functionality
> in Lisp would utilize a list, and the same set of access functions for all
> of them.  Every loop in C is hand-crafted, and in any good sized program, you
> have 100 different versions of a list-searching function, 10 bubble-sorts, and
> a partridge in a pair tree.

I agree with Henry's assessment, even in commercial-grade code.  The
problem is that local C libraries tend to be written and then not used
because somehow they aren't quite general enough, or they are too general,
or, in my personal opinion, too hard to figure out.  My personal feeling
is that it's hard to understand C/C++ code because design decisions get
sprayed out throughout the code rather than packaged up in Lisp-style
constructs such as macros or mixins.  So the programmers just end up
reinventing functionality.  The upshot often is:

1. The application becomes so hard to maintain or develop that it gets
   abandoned, maybe even before it is finished.
2. The application runs slow because no one can figure out or change
   the code.
3. The application gets big because functionality that should be generic
   is repeated in multiple places.

Now of course all these things can happen with Lisp applications as
well!  But Lisp makes it a lot easier to avoid.

Allegro CL for Windows has a disk footprint of 6 MB.  The entire
development environment runs without paging in 16 MB of RAM (on Windows
95).  This includes widgets, browsers, debugger, editor, all sorts of
help and cross-reference features, and of course the Lisp itself.  A
"hello world" application weighs in at around 1 MB, which of course
includes the runtime kernel libraries.
From: Erik Naggum
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <3072791535069493@naggum.no>
* Harley Davis
| These modern apps we whine about are bloated not because of their
| programming language, but simply because they try to do too much at once,
| implementing far more features (and Easter eggs, and scripting languages,
| etc.) than any normal mortal ever uses.  ... They are huge, complicated
| monsters, which, I'm willing to bet, would be equally bloated in any
| programming language - probably even more bloated in Lisp if written by
| the same quality of programmers.

well, how much are you willing to bet?  how would we settle this?  should I
make unfounded conjectures, too, and somebody in the audience toss a coin?

| Writing high-quality, efficient, non-memory-intensive code in Lisp (and,
| of course, in any language, but in Lisp the temptation to write
| inefficient code is great, as Siskind pointed out) is a challenging task,
| requiring high skill levels and an excellent understanding of underlying
| implementation issues.

what utter nonsense!  there is no "temptation to write inefficient code"!
where do you _get_ such silly ideas?  a temptation is something people are
too weak to resist.  nobody walks up to Lisp programmers and says "hey,
let's make this inefficient!" and when they answer "gee, that sounds cool,
but is it really a good idea?" go on with "aw, nobody will notice, anyway".
there's temptation to be lazy in any language, but there's no _temptation_
in Lisp to focus on efficiency _before_ correctness.  in C/C++, there is,
because it takes so much _more_ effort to write _correct_ and efficient
code in C++ than it does in Common Lisp.  40% of effort will give you
fast and "correctable" C++ or correct and "optimizable" Common Lisp.

| Few programmers are up to the task.  For many years, Ilog was the largest
| Lisp vendor in Europe, and believe me, I saw mounds of absolutely
| horrendous Lisp code written by supposedly well-educated and
| knowledgeable programmers, the cream of the crop of European programmers.

yeah, funny how 350 million people can produce so few good programmers when
250 million people can produce so many, isn't it?

| If these guys had written Microsoft Word, it wouldn't be a 20 megabyte
| brontosaurus - it would be a 50 megabyte brachiosaurus.

there is no evidence to suggest this, other than your own disappointment
and perhaps bitterness.  in fact, there is much evidence to suggest that
when a project grows large, there will be much duplication of effort if the
abstractions are wrong or if they fit the original purpose too tightly.

the most expensive error a designer or programmer can make is premature
optimization.  given this, it is ill-conceived to argue against those who
do _not_ make this mistake on the grounds that those who do make it write
faster code, because it will be doing the wrong thing faster, not the right
thing faster.

until you have hard evidence that 50M Lisp applications would take 20M in
C/C++, or that 20M C/C++ applications would take 50M in Lisp, I suggest you
accept the probability that 20M Lisp application would take 50M in C/C++
and that 50M C/C++ applications would take 20M in Lisp as _equally_ large
(or small).

#\Erik
-- 
if we work harder, will obsolescence be farther ahead or closer?
From: Harley Davis
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <wkhgg3hs9c.fsf@laura.ilog.com>
Erik Naggum <····@naggum.no> writes:

> * Harley Davis
> | These modern apps we whine about are bloated not because of their
> | programming language, but simply because they try to do too much at once,
> | implementing far more features (and Easter eggs, and scripting languages,
> | etc.) than any normal mortal ever uses.  ... They are huge, complicated
> | monsters, which, I'm willing to bet, would be equally bloated in any
> | programming language - probably even more bloated in Lisp if written by
> | the same quality of programmers.
> 
> well, how much are you willing to bet?  how would we settle this?  should I
> make unfounded conjectures, too, and somebody in the audience toss a
> coin?

My prima facie evidence is that Ilog rewrote several of its Lisp
libraries into C++ back in 92-93.  We consistently added features
during the transition, and the result was always faster and smaller
code and dynamic memory use, usually by several factors.  These were
the same programmers, mind you, all highly trained in Lisp who moved
to C++ very rapidly.

What metric would you propose for the opposite point of view?

> | Writing high-quality, efficient, non-memory-intensive code in Lisp (and,
> | of course, in any language, but in Lisp the temptation to write
> | inefficient code is great, as Siskind pointed out) is a challenging task,
> | requiring high skill levels and an excellent understanding of underlying
> | implementation issues.
> 
> what utter nonsense!  there is no "temptation to write inefficient code"!
> where do you _get_ such silly ideas?

Perhaps you feel that I abused the word "temptation", so let me
restate my point:  The path of least resistance (ie what code is
easiest to write based on built-in functionality and datatypes) in
Lisp often leads to slow code which allocates lots of unnecessary
memory.  The equivalent path of least resistance in C++, while far
from optimal, is somewhat better in this regard.

As far as my source for my silly ideas, they are purely based on my
own experience in seeing myself and my colleagues make the transition
from Lisp to C++, and many years of reviewing Lisp code written by
pretty decent programmers.

If you would like, I'd be happy to dig up an old email I have from a
prominent American Common Lisp luminary who made the same point a few
years back based on his experience with a lot of Common Lisp
programmers.

It's too bad that someone as passionate in their beliefs and as
articulate in their expression as you should be deceived on this
point.  However, I think it is an inescapable conclusion: In its
average incarnations, Lisp, for all its other benefits as a productive
tool, is simply not a shining example of an easy way to produce
efficient applications for most programmers.

> | Few programmers are up to the task.  For many years, Ilog was the largest
> | Lisp vendor in Europe, and believe me, I saw mounds of absolutely
> | horrendous Lisp code written by supposedly well-educated and
> | knowledgeable programmers, the cream of the crop of European programmers.
> 
> yeah, funny how 350 million people can produce so few good programmers when
> 250 million people can produce so many, isn't it?

Now perhaps it's time for *you* to propose a metric, or stop insulting
European programmers (of whom I'm not one, but for whom I've a great
deal of respect).

-- Harley

-------------------------------------------------------------------
Harley Davis                            net: ·····@ilog.com
Ilog, Inc.                              tel: (415) 944-7130
1901 Landings Dr.                       fax: (415) 390-0946
Mountain View, CA, 94043                url: http://www.ilog.com/
From: Alaric B. Williams
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <337d8b21.6361417@news.demon.co.uk>
On 16 May 1997 17:11:27 -0700, Harley Davis <·····@laura.ilog.com>
wrote:

>> well, how much are you willing to bet?  how would we settle this?  should I
>> make unfounded conjectures, too, and somebody in the audience toss a
>> coin?

>My prima facie evidence is that Ilog rewrote several of its Lisp
>libraries into C++ back in 92-93.  We consistently added features
>during the transition, and the result was always faster and smaller
>code and dynamic memory use, usually by several factors.  These were
>the same programmers, mind you, all highly trained in Lisp who moved
>to C++ very rapidly.

Then I doubt that was a very nice Lisp /implementation/.  CMUCL (a free
Lisp implementation) is certainly in gcc's ballpark when it comes to
execution times - by "the same ballpark" I mean that the compiled code
is sometimes faster and sometimes slower than gcc's, not merely "of the
same order of magnitude" - and IIRC space usage for single applications
was about 1.2 to 1.5 times what gcc created, with better potential for
sharing that between applications, so I'm not sure how they compare as
the application size tends to infinity.

>What metric would you propose for the opposite point of view?

The above one :-)

>Perhaps you feel that I abused the word "temptation", so let me
>restate my point:  The path of least resistance (ie what code is
>easiest to write based on built-in functionality and datatypes) in
>Lisp often leads to slow code which allocates lots of unnecessary
>memory.  The equivalent path of least resistance in C++, while far
>from optimal, is somewhat better in this regard.

This is something I am fixing in my own Lisp dialect. However, 
"programmer misuse" is no reason to call the language slow; it is a
fault, yes, but not one that affects fully competent programmers.

>-- Harley

ABW
--

Limited Warranty: Macrosoft Corporation cannot accept
any responsibility for geological, ecological, biological, sociological,
political, or nuclear disasters caused by Windows for Early Warning
and Defence. The software is supplied as is, with no express or
implied warranty, and with no guarantee of fitness of purpose,
or sufficient reliability to be placed in a situation to threaten
millions, if not billions, of innocent lives.

FUN: http://www.abwillms.demon.co.uk/alaric/wfewad.htm
INTERESTING: http://www.abwillms.demon.co.uk/os/
OTHER: http://www.abwillms.demon.co.uk/
From: Rainer Joswig
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <joswig-ya023180001705971258020001@news.lavielle.com>
In article <··············@laura.ilog.com>, Harley Davis
<·····@laura.ilog.com> wrote:

> My prima facie evidence is that Ilog rewrote several of its Lisp
> libraries into C++ back in 92-93.  We consistently added features
> during the transition, and the result was always faster and smaller
> code and dynamic memory use, usually by several factors.  These were
> the same programmers, mind you, all highly trained in Lisp who moved
> to C++ very rapidly.

If I have the chance to review a program, completely rewrite it
without any need for historical baggage - guess what happens?

-- 
http://www.lavielle.com/~joswig/
From: Henry Baker
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <hbaker-1705971016050001@10.0.2.1>
In article <··············@laura.ilog.com>, Harley Davis
<·····@laura.ilog.com> wrote:

> My prima facie evidence is that Ilog rewrote several of its Lisp
> libraries into C++ back in 92-93.  We consistently added features
> during the transition, and the result was always faster and smaller
> code and dynamic memory use, usually by several factors.  These were
> the same programmers, mind you, all highly trained in Lisp who moved
> to C++ very rapidly.

I'd be curious to see some of the comparative examples of code in both
languages.

There is one significant difference between LispMs and Lisps on C-oriented
microprocessors -- LispM list cells were typically 1/2 the size.  Considering
that list cells are typically the single most popular datatype, this could
make Lisp not look so good on C-oriented microprocessors.

In any case, I'd be curious if you or anyone else has done an in-depth
analysis to find out exactly where the differences in memory usage were.

My experience has been the opposite of yours -- the C/C++/Ada programs
come out much larger than the Lisp versions.  But I wasn't doing graphics --
I was doing things like compilers.

The Symbolics graphics stuff I believe _has_ been converted to run on
Lisp/Unix workstations.  I'd be curious to know the comparative sizes and
speeds of the resulting systems.
From: Erik Naggum
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <3072942706778740@naggum.no>
* Harley Davis
| My prima facie evidence is that Ilog rewrote several of its Lisp
| libraries into C++ back in 92-93.  We consistently added features during
| the transition, and the result was always faster and smaller code and
| dynamic memory use, usually by several factors.  These were the same
| programmers, mind you, all highly trained in Lisp who moved to C++ very
| rapidly.

I have done my share of developing in Lisp and deploying in C.  there's no
doubt that once I understand a problem well enough to avoid backtracking while
programming, I also write much more compact code.  this is not unique to C,
however.  several times, I have rewritten a set of functions many times in
the course of a project, and they always get better with time.  I view
programming as an exploration of the solution space, and this means that I
know a lot more about the problem when I have working code than when I set
out to solve the problem, no matter how much design work I did up front.
C++ seems to be good at implementing the _final_ version of something.

however, when I wrote "original code" in C, my first shots were much more
buggy and "efficient" in the wrong places than the first shots are in Lisp.
I very seldom debug my Lisp code, but I also do a lot of exploratory work
in the Lisp listener.

there is also a question of how much time was spent writing the libraries
you refer to in C++ and Lisp.  I find myself writing Lisp code 5 times
faster than I ever did C.  (I don't do C++.)  somehow, I stop writing when
the code fits some idea of "complete", which is very different for C and
for Lisp.  if I spent equal amounts of time with the Lisp code as I do with
the C code, I can assure you that the Lisp code would be very efficient
indeed, profiled for time and space and tweaked where necessary, etc.

| What metric would you propose for the opposite point of view?

I thought my point was well communicated with "toss a coin" and "unfounded
conjecture".  you're the one to offer suggestions that cry out for metrics.
I don't see any from you, though.  what you provide is an excellent example
of a very different point.  if your libraries had to be grown from nothing
into their present stage by a _new_ set of people who had to make all the
design choices and all the mistakes that your legacy Lisp code required to
get to where it got, we would have a relevant comparison for your claim
that C++ applications would be a factor of 2.5 larger if written in Lisp,
but that is still an unfounded conjecture on your part.  all you have
evidence of is that something that grew in Lisp could be rewritten smaller
in C++ after the fact.  I claim that this is true of _any_ rewriting,
almost no matter which languages are involved.  as designs solidify, they
can almost always be expressed much more compactly.  at the end of the
journey lies simplicity achieved.

| Perhaps you feel that I abused the word "temptation", so let me restate
| my point: The path of least resistance (ie what code is easiest to write
| based on built-in functionality and datatypes) in Lisp often leads to
| slow code which allocates lots of unnecessary memory.  The equivalent
| path of least resistance in C++, while far from optimal, is somewhat
| better in this regard.

that's what I'm saying, too.  but once on the path of least resistance, it
is important to realize that the resulting C code is buggy and while it may
require little memory or CPU time, it requires all the more time debugging
and correcting, possibly rewriting to remove design deficiencies, while the
Lisp code requires much less time to write, and albeit simple-minded and
wasteful in memory and CPU, it works correctly and can be optimized when
there is time and need.  also, from all of my experience with C and Lisp,
the argument list in a C function goes through several changes that affect
all callers, while the lambda list in Lisp stays stable, adorned with optional
or keyword arguments added with suitable default values, affecting only
callers that need the changes.  data structures grow, too, and require
recompiles of all user code in C.  changes in Lisp structures don't require
recompiles, unless the code is very heavily optimized, which there is no
need to request until you actually need it.
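
a rough C++ rendering of that contrast (my sketch, hypothetical names;
the Lisp &optional/&key case is stronger still, since not even a
recompile of the callers is forced):

/* version 1 -- every caller writes price(base, rate).                 */
double price_v1(double base, double tax_rate);

/* C-style evolution: a new required argument.  every call site in     */
/* the program must now be edited and recompiled.                      */
double price_v2(double base, double tax_rate, double discount);

/* default-argument evolution: existing callers compile unchanged,     */
/* new callers may pass the extra argument.  Lisp keyword arguments    */
/* behave like this without even forcing a recompile.                  */
double price_v3(double base, double tax_rate, double discount = 0.0);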

however, and this is the important human factor of programming, if a Lisp
programmer spent as much time on his code as a C programmer does on his
path of least resistance, it wouldn't be the path of least resistance in
Lisp, anymore, it would be approaching production quality code.

| It's too bad that someone as passionate in their beliefs and as
| articulate in their expression as you should be deceived on this point.

I don't think I'm deceived, of course, and you've done nothing to show me I
am, so: many thanks for the compliments!

| However, I think it is an inescapable conclusion: In its average
| incarnations, Lisp, for all its other benefits as a productive tool, is
| simply not a shining example of an easy way to produce efficient
| applications for most programmers.

if we allow ourselves to focus on a single aspect of a language and its
use, it is possible to argue that any and all languages suck big time.  you
seem to have chosen "efficiency" of the code (regardless of its quality).
I choose efficiency of the programmer (which includes "correctness" of the
unoptimized code).  I won't argue that the first shot at a function that a
programmer who has made it to the "average" mark for C programmers would
write in Lisp would suck _more_ in Lisp than in C (because it would not be
useful later, and could only be a horizontal building block, not a vertical
one, all other differences between C and Lisp programmers being equal), but
I strongly disagree that one should argue from this to the merits of
programming languages.

I have used C since 1981, but I first saw Lisp in 1978.  the influence of
The Little Lisper has been remarkable, although I had limited access to
Lisp systems.  when I finally got a computer on which I could use Lisp for
real (after a harsh meeting with C++, I decided "there must be a better way
than this" and found it), I had a long transition period when I cared
overly much about the performance of my code.  many of my questions to this
group (and to my Lisp vendor) have been concerned with the efficiency of
the compiled code.  I used to be willing to spend days keeping some piece
of code as fast as it used to be while fixing design flaws in it, while it
obviously would take billions of runs to recover _my_ time in keeping it
fast.  I would be more than willing to wait several minutes for a compile
so I could pride myself on having shaved less time off the execution of an
_interactive_ program than an extra context switch or page fault would
cost.  this just wasn't smart.  the problem with the obsession with local
"efficiency" that so easily inflicts C and C++ programmers is that it was
once valid when small local improvements in the source code led to linear
improvements in performance.  this is no longer true.  achieving optimal
execution conditions is a very different task these days from what it used to
be, and it is counter-productive to argue for programmer control over CPU
registers and "I know what I'm doing" because, in fact, programmers _don't_
know what the computer is doing when running a process, and the kinds of
issues that _matter_ to performance can only be handled by the compiler or
the run-time system.

| > | Few programmers are up to the task.  For many years, Ilog was the
| > | largest Lisp vendor in Europe, and believe me, I saw mounds of
| > | absolutely horrendous Lisp code written by supposedly well-educated
| > | and knowledgeable programmers, the cream of the crop of European
| > | programmers.
| > 
| > yeah, funny how 350 million people can produce so few good programmers
| > when 250 million people can produce so many, isn't it?
| 
| Now perhaps it's time for *you* to propose a metric, or stop insulting
| European programmers (of whom I'm not one, but for whom I've a great deal
| of respect).

well, actually, I thought _you_ were insulting all European programmers:
if the cream of the crop of European programmers exposed you to mounds of
absolutely horrendous Lisp code, what is one to believe of the rest?

BTW, Europe is suffering from more than countries and languages.  people
who don't know this continent might be appalled to learn that research goes
on in a whole bunch of languages and that only when something major is done
is it published in today's lingua franca of science, English.  all the
grunt work is repeated in all the languages, fruitlessly if nobody else in
the same language group reads it.  the cream of the crop of European
scientists (and programmers) often learn English well and move to the US
because of sheer exhaustion with this system.  now that the European Union
is going more protectionist than any bloc in recent human history, even my
cat has to eat inferior food because EU can't import foodstuff from the US,
anymore.  the manufacturer had to set up a "European Headquarters" with a
brand new plant, all of European make, and has to make food according to
European regulations, which are _way_ below the scientific results my pet
food manufacturer has discovered on its own and uses as its forte compared
to other brands.  hooray for Europe!  let's give communism a second chance!

#\Erik
-- 
if we work harder, will obsolescence be farther ahead or closer?
From: Benjamin Franksen
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <338D62DA.554B@bii.bessy.de>
Erik Naggum wrote:
> ..., even my
> cat has to eat inferior food because EU can't import foodstuff from the US,
> anymore.  the manufacturer had to set up a "European Headquarters" with a
> brand new plant, all of European make, and has to make food according to
> European regulations, which are _way_ below the scientific results my pet
> food manufacturer has discovered on its own and uses as its forte compared
> to other brands.  hooray for Europe!  let's give communism a second chance!

You talking about genetically manipulated stuff? Or about the usage of
hormones on cattle? Superior indeed. Hooray for America! Some day the
better technology will make us even better pets!?

	Ben
-- 
// snail: BESSY II, Rudower Chaussee 5, D-12489 Berlin, Germany
// email: ········@bii.bessy.de
// phone: +49 (30) 6392-4865
// fax:   +49 (30) 6392-4859
From: Erik Naggum
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <3073906412580136@naggum.no>
* Benjamin Franksen
| You talking about genetically manipulated stuff?

no.

| Or about the usage of hormones on cattle?

no.

#\Erik
-- 
if we work harder, will obsolescence be farther ahead or closer?
From: Mukesh Prasad
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <337C9D91.6779@polaroid.com>
Erik Naggum wrote:
> | Lisp vendor in Europe, and believe me, I saw mounds of absolutely
> | horrendous Lisp code written by supposedly well-educated and
> | knowledgeable programmers, the cream of the crop of European programmers.
> 
> yeah, funny how 350 million people can produce so few good programmers when
> 250 million people can produce so many, isn't it?

Is that supposed to be sarcasm?  Or wonder?
(I can't quite tell, it reads ok either way.)

If it is sarcasm, I am sure somebody from
the other side of the Atlantic from you will
have a good response...

If it's wonder -- here's an IMHO:  Europe is a lot
of little countries with boundaries and very little immigration,
hence a Von Neumann closeted in Europe is just a fertile mind
with very few seeds of concepts.  Out here, he gets to
deal with, be challenged by and be stimulated by the best
of the full 250 million (actually, if you count immigration,
the best of a few billions), and doesn't stop growing
just after mingling with the best of a measly 5-10 millions.
So we have the computer.

The Internet might change things a little, since prospective
Von Neumanns in Europe can now fully interact with
a much larger world.

Of course, the Scots might have different opinions
on that (You only need to cultivate one really
good programmer...)
From: Darin Johnson
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <slrn5npq8o.5s5.darin@connectnet1.connectnet.com>
>| These modern apps we whine about are bloated not because of their
>| programming language, but simply because they try to do too much at once,
>| implementing far more features (and Easter eggs, and scripting languages,
>| etc.) than any normal mortal ever uses.  ...

Scripting languages can be SMALL.  Forget Visual Basic, it's a bad
example.  On the Amiga with ARexx, adding scripting to a program took
a very small bit of code; and the run time portion of ARexx was less
than 4K.  And because it was widely used by applications, it was vastly
more useful than VB or Tcl.  Why aren't such things done anymore?

-- 
Darin Johnson
·····@usa.net.delete_me
From: Chris Bitmead uid(x22068)
Subject: Re: C++ briar patch (Was: Object IDs are bad)
Date: 
Message-ID: <BITMEADC.97May19110011@Alcatel.com.au>
In article <··············@laura.ilog.com> Harley Davis <·····@laura.ilog.com> writes:

>········@Bayou.UH.EDU (········@bayou.uh.edu) writes:
>
>> Scott Schwartz (········@galapagos.cse.psu.edu.NO-SPAM) wrote:
>> : ·············@Alcatel.com.au (Chris Bitmead uid(x22068)) writes:
>> : | I'm still a bit vague on why you couldn't do what you wanted in Lisp,
>> : | efficiency issues put aside for a moment...
>> 
>> : Bzzzzt.  Efficiency is everything.  You can't put it aside.
>> 
>> Bzzzzzt!  Efficiency isn't even something until you know _FOR A FACT_
>> that your program doesn't have it.  Keeping your eye on nothing but
>> efficiency while developing your program leads to code that is
>> unmaintainable, unreadable, and unreliable -- and all this for
>> a benefit that your program may have had to begin with.
>> 
>> And if your program isn't efficient enough?  Well remember the
>> 80/20 rule (some even go so far as to say 90/10).  The majority
>> of the processing time of your program is spent in a relatively
>> small area, so when the time comes to save cycles, optimize that
>> portion.  That way if you have to resort to monkey business to
>> get your program running faster, at least you would have limited
>> the extent of the changes.
>
>In my experience, the real killer with Lisp is not so much raw speed,
>but rather excess memory use.  Lisp by nature is something of a memory
>hog, requiring lots of extra indirections and extra runtime
>information, in addition to not-very-compact code and lots of runtime
>libraries that are loaded pretty often.  The problem with excess
>memory use is that it doesn't fit in well with multi-tasking operating
>systems.  Once you start swapping for whatever reason performance is
>killed completely and users are unhappy.
>
>C++ for all of its numerous flaws does allow you to use a lot less
>memory than dynamic competitors like SmallTalk, Lisp, and even Java,
>and this boosts performance for real apps.

ROTFL.

I wonder why my C++ executables are up to 90megs before I even start
to run them then?