From: Richard Villanueva
Subject: Why garbage collection?
Date: 
Message-ID: <rvillDL4v3n.I8r@netcom.com>
I understand the classical Lisp method of automatic garbage collection,
and it is very elegant.  It reserves only one bit per cons cell (the
mark bit).  However, on large systems, the long pause for garbage
collection is bad, so people look for more sophisticated methods.

My question is, why not plain old reference counts?  Couldn't you
reserve one byte per cons for the count, and if the count reaches 255,
the cell becomes immortal?  I know that circular lists are a problem,
but the system could easily find all CDR-loops, say when all other
space was exhausted.
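
To make this concrete, here is roughly the scheme I have in mind (a
made-up cons layout, sketched as code; not any real implementation):

    /* A cons cell with a one-byte, saturating reference count.
       A count of 255 means "immortal": the cell is never counted
       down and never freed. */
    struct Cons {
        unsigned char refcount;   /* 1..254 = live count, 255 = immortal */
        Cons *car;
        Cons *cdr;
    };

    void incref(Cons *c) {
        if (c && c->refcount != 255)
            ++c->refcount;        /* saturates at 255 */
    }

    void decref(Cons *c) {
        if (!c || c->refcount == 255)
            return;               /* immortal cells are never freed */
        if (--c->refcount == 0) {
            decref(c->car);       /* release the children... */
            decref(c->cdr);
            /* ...then return c to the free list */
        }
    }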

In addition to being simpler and having fewer long pauses, this method
is probably faster in the long run in spite of the overhead for
tallying reference counts.

So am I missing something?  Or are there other methods that are even
better?

--

+===============================================================+
| Richard Villanueva  |  Art and science cannot exist but in    |
| San Diego, Calif.   |  minutely organized particulars.        |
| ·····@netcom.com    |                        -- William Blake |
+===============================================================+

From: Timothy Larkin
Subject: Re: Why garbage collection?
Date: 
Message-ID: <30F838F6.2AE0@cornell.edu>
> My question is, why not plain old reference counts? 

From http://www.cs.rochester.edu/u/miller/ai-koans.html:

One day a student came to Moon and said, "I understand how to make a 
better garbage collector. We must keep a reference count of the 
pointers to each cons." Moon patiently told the student the 
following story-

       "One day a student came to Moon and said, "I understand how 
to make a better garbage collector...
From: Erik Naggum
Subject: Re: Why garbage collection?
Date: 
Message-ID: <19960114T050927Z@arcana.naggum.no>
[Richard Villanueva]

|   I understand the classical Lisp method of automatic garbage collection,
|   and it is very elegant.  It reserves only one bit per cons cell (the
|   mark bit).  However, on large systems, the long pause for garbage
|   collection is bad, so people look for more sophisticated methods.

there is no "long pause" in modern systems.  numerous brilliant minds have
worked on garbage collection for many years.  that is nearly a guarantee
that you will need to have in-depth knowledge of the prior art in garbage
collection techniques to be able to provide useful suggestions.

|   My question is, why not plain old reference counts?

One day a student came to Moon and said, "I understand how to make a better
garbage collector.  We must keep a reference count of the pointers to each
cons."  Moon patiently told the student the following story-

       "One day a student came to Moon and said, "I understand how to
       make a better garbage collector...

(from the AI koans collection, found, among other places, at
http://www.cs.rochester.edu/u/miller/ai-koans.html.)

|   So am I missing something?  Or are there other methods that are even
|   better?

I have misplaced the address of the mother of all sites on garbage
collection, but there is one that has an excellent survey, and there are
many papers available.  the literature was sufficiently, um, extensive,
that I figured that I had work to do and news to read before I would embark
on this long journey.  if my plan to become immortal succeeds, I'll
certainly study garbage collection.  in the meantime, I know only that
anything I could come up with on my own is bound to be discarded as too
inefficient already, unless I luck out and strike 18 aces in a row.

one of the most well-known inefficient and wasteful techniques is reference
counting.  most C++ programmers invent their own particularly ugly breed of
inefficient reference counting.  this is why C++ is so popular -- the
probability that each and every C++ programmer will actually be able to
improve something in that language and feel great about it is exactly 1.

btw, your "one byte" stuff won't work.  modern computers are no longer byte
addressable, but instead waste two or three bits of the precious machine
word and address range because some inferior languages think that byte
addressability is a win.  the smallest efficiently addressable unit on RISC
processors is usually 4 bytes, sometimes 8 bytes.  even on CISCs, you may
pay a hefty penalty for misaligning your data.
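
the flip side is that those otherwise wasted low bits are exactly where
Lisp implementations traditionally hide their type tags.  a sketch of
the trick (tag names and values made up, no particular system):

    /* with conses aligned on 8-byte boundaries, the low 3 bits of a
       pointer are always zero, so they can carry a type tag for free. */
    typedef unsigned long word;     /* assumed wide enough for a pointer */

    enum { TAG_FIXNUM = 0, TAG_CONS = 1, TAG_SYMBOL = 2, TAG_MASK = 7 };

    word  tag_cons(void *p) { return (word)p | TAG_CONS; }
    int   tag_of(word w)    { return (int)(w & TAG_MASK); }
    void *untag(word w)     { return (void *)(w & ~(word)TAG_MASK); }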

#<Erik 3030584967>
-- 
the problem with this "information superhighway" is mainly that if you ask
people to go play in it, they don't even understand when they get run over.
From: Richard Villanueva
Subject: Re: Why garbage collection?
Date: 
Message-ID: <rvillDL6u4q.KHL@netcom.com>
Erik Naggum (····@naggum.no) wrote:

: there is no "long pause" in modern systems.  numerous brilliant minds have
: worked on garbage collection for many years.  that is nearly a guarantee
: that you will need to have in-depth knowledge of the prior art in garbage
: collection techniques to be able to provide useful suggestions.

I realize how unlikely it is that I would break new ground on the subject.
Hence my puzzlement.  A friend of mine who has worked in support of AI
researchers for many years told me that garbage collection was one of the
obstacles that was hindering the acceptance of Lisp.  This got me wondering.
I take it that he must be ill-informed.

: I have misplaced the address of the mother of all sites on garbage
: collection, but there is one that has an excellent survey, and there are
: many papers available.

Someone else provided the site info.  I'll FTP it.

--

+===============================================================+
| Richard Villanueva  |  Art and science cannot exist but in    |
| San Diego, Calif.   |  minutely organized particulars.        |
| ·····@netcom.com    |                        -- William Blake |
+===============================================================+
From: Erik Naggum
Subject: Re: Why garbage collection?
Date: 
Message-ID: <19960115T003635Z@arcana.naggum.no>
[Richard Villanueva]

|   I realize how unlikely it is that I would break new ground on the
|   subject.  Hence my puzzlement.  A friend of mine who has worked in
|   support of AI researchers for many years told me that garbage
|   collection was one of the obstacles that was hindering the acceptance
|   of Lisp.  This got me wondering.  I take it that he must be
|   ill-informed.

most people are ill-informed about the issues of memory management in
general and garbage collection in particular.  as I hinted: it is a very
complicated topic.  e.g., people think that manual memory management is
always faster than garbage collection, and on top of this think that C's
malloc and free are time- and space-efficient.  _none_ of this is true.
it is no accident that none of the standard Unix programs use dynamic
memory unless absolutely necessary, and instead are rife with arbitrary
(small) limits.  the arguments used against garbage collection today
were probably used against dynamic memory as a whole not too long ago.

it is, however, very true that the acceptance of Lisp was hindered by a
particular brand of ill-informed misunderstanding, namely prejudice towards
automating one particular complex manual task.  for some bizarre reason,
programmers have been taught to automate complex manual tasks and have
consistently made them several orders of magnitude faster, but memory
management is somehow exempt from this rule.  why?  of course, it isn't,
but automatic storage reclamation was perceived as wasteful ("what?  there
are _dead_ objects out there, wasting _expensive_ memory?"), time-consuming
(it is actually much faster to pause infrequently (perhaps never) to do
garbage collection than to slow down the program all through its lifetime
with part-time garbage collection, but most people care about small and
immediate things, not big issues), even unnecessary ("relax, I can handle
this.  besides, I know better what my program needs than some newfangled
automatic algorithm ever could.").  some programmers also feel an acute
lack of control, and will argue about everything but their actual reasons
for not automating their memory management, preferring to micromanage
individual bytes instead of managing memory.

so your AI research supporter (?) was not ill-informed, but what he told
you was not what you thought you heard.  garbage collection hindered the
acceptance, but garbage collection was not at fault, the _perception_ of
garbage collection was the actual cause.  lack of understanding of what it
was made for a scapegoat and an easy excuse.  also, popularity and lack of
it are both self-propelling.  (how often have you heard that somebody
doesn't want to learn Lisp because nobody uses it?  that's why!)

|   : I have misplaced the address of the mother of all sites on garbage
|   : collection, but there is one that has an excellent survey, and there
|   : are many papers available.
|   
|   Someone else provided the site info.  I'll FTP it.

it is customary to supplement such vague information with harder
information if you get it.  I would still like to know where it is.

#<Erik 3030654995>
-- 
the problem with this "information superhighway" is mainly that if you ask
people to go play in it, they don't even understand when they get run over.
From: Jeff Dalton
Subject: Re: Why garbage collection?
Date: 
Message-ID: <DLC45w.CAD.0.macbeth@cogsci.ed.ac.uk>
In article <···············@netcom.com> ·····@netcom.com (Richard Villanueva) writes:
>Erik Naggum (····@naggum.no) wrote:
>
>: there is no "long pause" in modern systems.  numerous brilliant minds have
>: worked on garbage collection for many years.  that is nearly a guarantee
>: that you will need to have in-depth knowledge of the prior art in garbage
>: collection techniques to be able to provide useful suggestions.
>
>I realize how unlikely it is that I would break new ground on the subject.
>Hence my puzzlement.  A friend of mine who has worked in support of AI
>researchers for many years told me that garbage collection was one of the
>obstacles that was hindering the acceptance of Lisp.  This got me wondering.
>I take it that he must be ill-informed.

That (that he's ill-informed) does not follow.

GC _is_ an obstacle that hinders the acceptance of Lisp.

That doesn't mean it does so *for good reasons*.

(There may be some real-time applications where GC is still a
problem, though a Lisp program should be able to avoid generating
garbage if it's necessary to avoid it.  But most of the time,
at least, modern garbage collectors are fast enough.)

-- jd
From: Tim Bradshaw
Subject: Re: Why garbage collection?
Date: 
Message-ID: <TFB.96Jan15133738@scarp.ed.ac.uk>
* Erik Naggum wrote:

[Heavily edited]

> [Richard Villanueva]
> |   My question is, why not plain old reference counts?

> one of the most well-known inefficient and wasteful techniques is reference
> counting.  

Is this (inefficient) true?  The Xerox Lisp machines had a
reference-counting GC which seemed to do OK.  I remember it taking 10%
of the time or something, which seemed like a reasonable overhead.

--tim
From: Jeff Dalton
Subject: Re: Why garbage collection?
Date: 
Message-ID: <DL89vq.9oI.0.macbeth@cogsci.ed.ac.uk>
In article <···············@netcom.com> ·····@netcom.com (Richard Villanueva) writes:
>I understand the classical Lisp method of automatic garbage collection,
>and it is very elegant.  It reserves only one bit per cons cell (the
>mark bit).  However, on large systems, the long pause for garbage
>collection is bad, so people look for more sophisticated methods.

These days, GC does not usually involve a long pause and is
probably more efficient than reference counts.

-- jd
From: Richard Villanueva
Subject: Re: Why garbage collection?
Date: 
Message-ID: <rvillDLA5o5.CD1@netcom.com>
I would just like to state that I reconstructed my previous
experiment in Allegro CL, and discovered that I made a mistake.
So I retract my statement that Allegro has slow garbage collection.

--

+===============================================================+
| Richard Villanueva  |  Art and science cannot exist but in    |
| San Diego, Calif.   |  minutely organized particulars.        |
| ·····@netcom.com    |                        -- William Blake |
+===============================================================+
From: Bruno Haible
Subject: Re: Why garbage collection?
Date: 
Message-ID: <4eae5s$66p@nz12.rz.uni-karlsruhe.de>
Prof. Henry Baker <······@netcom.com> wrote:
>
> The other thing I'd like to do is to ask the ACM to embargo any textbooks
> that continue to prattle on about how bad GC is.  Unfortunately, these
> textbooks lend credence to the old adage "If you can, do; if you can't,
> write a bad textbook".

Please start with the textbook "C++ for C programmers", by Ira Pohl,
2nd edition. On page 268, he writes:

  " Other complexity issues are fundamental to the C++ language design,
  such as the lack of garbage collection (GC). Several proposals exist
  [4][5][6], and their implementations support the contention that they
  can be done without degrading performance in most applications. Most
  other major OOP languages, such as Smalltalk, CLOS, and Eiffel, support
  GC. The argument for GC is that it makes the programmer's task distinctly
  easier. Memory leaks and pointer errors are common when each class
  provides for its own storage management. These are very hard errors to
  find and debug. GC is a well-understood technology, so why not?

  " The argument against GC is that it extracts a hidden cost from all
  users when employed universally. Also, GC manages memory but not other
  resources. This would require destructors for _finalization_.
  Finalization is the return of resources and other behavior when an
  object's lifetime is over. For example, the object might be a file,
  and finalization might require closing the file. Finally, it is not
  in the tradition of the C community to have free store managed
  automatically."

  [4] Hans-J. Boehm and Mark Weiser. "Garbage Collection in an Uncooperative
      Environment." Software - Practice and Experience, Sept. 1988,
      pp. 807-820.

  [5] Daniel Edelson and Ira Pohl. "A Copying Collector for C++." In
      Usenix C++ Conf. Proc. 1991, pp. 85-102.

  [6] Daniel Edelson. "A Mark and Sweep Collector for C++." In Proc. Princ.
      Prog. Lang., January 1992.


Just look at the technical strength of the argument that GC is not
"in the tradition of the C community"...

                   Bruno Haible
From: Cyber Surfer
Subject: Re: Why garbage collection?
Date: 
Message-ID: <822675271snz@wildcard.demon.co.uk>
In article <··········@nz12.rz.uni-karlsruhe.de>
           ······@ma2s2.mathematik.uni-karlsruhe.de "Bruno Haible" writes:

> Just look at the technical strength of the argument that GC is not
> "in the tradition of the C community"...

Yeah, I love it. ;-)

BTW, I've been asked to review a GC for C++, so I guess now is a
good time to grab a few authoritative documents on the subject, like
the docs at cs.utexas.edu:/pub/garbage/ or the files in Henry
Baker's ftp space. I'll also have to find a Ghostscript viewer
(preferably for NT), as I don't currently have a Postscript viewer,
unless you count a laser printer. In this case, I don't.

Mind you, I'm very happy with a mark/compact GC, and I found one
in a computer science book, Fundamentals of Data Structures, by
E Horowitz and S Sahni. While they're not anti-GC, they refer to
Knuth and his belief that specialist languages such as Lisp and
SNOBOL are not necessary, and that list and string processing can
be done in any language. The languages that seem to have interested
them tend to be PL/I, Pascal, and Fortran. Not at all like Lisp.

The mark/compact algorithms they give are obviously not the best
available today, but they are at least simple enough to implement
easily. For programmers uncomfortable with relocating possibly
every pointer in a heap any time the GC runs, this could be
important. I was surprised to find that when I coded a GC based on
their algorithms in C, it worked the first time. I've been using
that code for the last 8 years without trouble, and with what I
find to be acceptable performance.
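
For anyone who hasn't seen one, the general shape of such a collector
is something like the sketch below. This is my own reconstruction over
an array of cells, not the book's code:

    /* Mark/compact over a heap of cons-like cells.  "Pointers" are
       indices into heap[]; NIL marks an empty field. */
    enum { HEAP_SIZE = 10000, NIL = -1 };

    struct Cell {
        int car, cdr;      /* indices into heap[], or NIL */
        int marked;
        int forward;       /* new index, computed during compaction */
    };

    Cell heap[HEAP_SIZE];
    int  heap_top = 0;     /* cells 0..heap_top-1 are in use */

    void mark(int i) {                /* phase 1: mark what is reachable */
        if (i == NIL || heap[i].marked) return;
        heap[i].marked = 1;
        mark(heap[i].car);            /* recursion for brevity; a real    */
        mark(heap[i].cdr);            /* collector uses an explicit stack */
    }

    int relocate(int i) {             /* old index -> new index */
        return i == NIL ? NIL : heap[i].forward;
    }

    void collect(int *roots, int nroots) {
        int i, r, dst;
        for (r = 0; r < nroots; r++) mark(roots[r]);

        dst = 0;                      /* phase 2: assign new addresses */
        for (i = 0; i < heap_top; i++)
            if (heap[i].marked) heap[i].forward = dst++;

        for (r = 0; r < nroots; r++)  /* phase 3: fix all references */
            roots[r] = relocate(roots[r]);
        for (i = 0; i < heap_top; i++)
            if (heap[i].marked) {
                heap[i].car = relocate(heap[i].car);
                heap[i].cdr = relocate(heap[i].cdr);
            }

        dst = 0;                      /* phase 4: slide live cells down */
        for (i = 0; i < heap_top; i++)
            if (heap[i].marked) {
                heap[dst] = heap[i];
                heap[dst].marked = 0;
                dst++;
            }
        heap_top = dst;               /* free space is one contiguous block */
    }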

It was published in 1976, so it's one of a number of books from
around that time which I love. Sadly, many of the techniques in this
particular book seem to have been forgotten. Is anyone still using
coral rings, multilists and inverted multilists? Have these data
structures become "obsolete", like reference counting, cylinder-surface
indexing, etc?

Anyway, we can be sure that garbage collectors will be around for
a while yet, as a fair number of popular (that's probably a relative
issue - um, relative to VB or Perl? (-; ) languages using a GC of
some kind are still kicking around. I think that Henry Baker made
a very good point about how people perceive delays in software.

I'm currently seeing very long delays when I try to access most
Internet sites in the US. :-( Still, that'll improve eventually.
Then I may be able to grab some useful files, like some GC docs. ;-)
Meanwhile, I should read the Java docs I have here, as I'll be
using that soon. Doesn't Java use a GC...? I think it does!
-- 
<URL:http://www.demon.co.uk/community/index.html>
<URL:http://www.enrapture.com/cybes/namaste.html>
Po-Yeh-Pao-Lo-Mi | "You can never GC enough."
From: Tim Hollebeek
Subject: Re: Why garbage collection?
Date: 
Message-ID: <4eb75f$l2t@cnn.Princeton.EDU>
Cyber Surfer (············@wildcard.demon.co.uk) wrote:

: It was published in 1976, so it's one of a number of books from
: around that time which I love. Sadly, many of the techniques in this
: particular book seem to have been forgotten. Is anyone still using
: coral rings, multilists and inverted multilists? Have these data
: structures become "obsolete", like reference counting, cylinder-surface
: indexing, etc?

Reference counting is obsolete?  I better go rush over and tell that
to all the C++ programmers who use it extensively because they don't
have GC in the language :-)  Heck, Byte even published an article on
how to write a refcounting pointer class in the last few months.  Not
that Byte is exactly at the Leading Edge of programming, but it does
show that plenty of Real World programmers still do things that way.

Whether they _should_ is another story :-)
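
For anyone who missed those articles, the usual shape of such a class
is roughly the following (my own sketch, not the Byte article's code):

    template <class T>
    class RefPtr {
        T   *obj;
        int *count;                    // count shared by all copies
    public:
        explicit RefPtr(T *p = 0) : obj(p), count(new int(1)) {}
        RefPtr(const RefPtr &o) : obj(o.obj), count(o.count) { ++*count; }
        ~RefPtr() { release(); }
        RefPtr &operator=(const RefPtr &o) {
            if (this != &o) {
                ++*o.count;            // increment first: safe if counts alias
                release();
                obj = o.obj; count = o.count;
            }
            return *this;
        }
        T *operator->() const { return obj; }
        T &operator*()  const { return *obj; }
    private:
        void release() {
            if (--*count == 0) { delete obj; delete count; }
        }
    };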

--
Tim Hollebeek      | Everything above is a true statement, for sufficiently
PChem Grad Student | false values of true.
Princeton Univ.    | ···@wfn-shop.princeton.edu
-------------------| http://wfn-shop.princeton.edu/~tim
From: Cyber Surfer
Subject: Re: Why garbage collection?
Date: 
Message-ID: <822848307snz@wildcard.demon.co.uk>
In article <··········@cnn.Princeton.EDU> ···@franck "Tim Hollebeek" writes:

> Cyber Surfer (············@wildcard.demon.co.uk) wrote:
> 
> : It was published in 1976, so it's one of a number of books from
> : around that time which I love. Sadly, many of the techniques in this
> : particular book seem to have been forgotten. Is anyone still using
> : coral rings, multilists and inverted multilists? Have these data
> : structures become "obsolete", like reference counting, cylinder-surface
> : indexing, etc?
> 
> Reference counting is obsolete?  I better go rush over and tell that
> to all the C++ programmers who use it extensively because they don't
> have GC in the language :-)  Heck, Byte even published an article on
> how to write a refcounting pointer class in the last few months.  Not
> that Byte is exactly at the Leading Edge of programming, but it does
> show that plenty of Real World programmers still do things that way.

I stopped reading Byte a few years ago, easily 5 years after they
lost interest in programmers. ;-) If you want to know about basic
garbage collecting, I'd recommend a computer science book, as it'll
probably be more up to date. Beware of many programming books, as
I've lost count of the number of tutorials that use examples without
much (if any) error checking. _That's_ a very bad way to code!
 
> Whether they _should_ is another story :-)

Do Real World programmers know anything about computer science?
I dunno. "If it works, it must be ok" could be enough for some.
I'm still using code I wrote 10 years ago, so I need to take a
little bit more care...
-- 
<URL:http://www.demon.co.uk/community/index.html>
<URL:http://www.enrapture.com/cybes/namaste.html>
Po-Yeh-Pao-Lo-Mi | "You can never browse enough."
From: Paul Wilson
Subject: GC & traditional allocators & textbooks
Date: 
Message-ID: <4f2ila$6p8@jive.cs.utexas.edu>
In article <············@wildcard.demon.co.uk>,
Cyber Surfer  <············@wildcard.demon.co.uk> wrote:
>In article <··········@jive.cs.utexas.edu>
>           ······@cs.utexas.edu "Paul Wilson" writes:
>
>> >If you want to know about basic
>> >garbage collecting, I'd recommend a computer science book, as it'll
>> >probably be more up to date.
>> 
>> I have to disagree here.  I know of no textbooks with even a decent
>> discussion of garbage collection. [...]
>
>Was I referring to modern GC? I'm not sure. I don't know of any books
>on modern GC, but a book 20 years old seems to contain GC techniques
>that many C/C++ programmers are unaware of. Even if that's the best
>book on the subject, it could still enlighten a few programmers.

OK, agreed.  There are some fundamental algorithms that *are* in some
textbooks and should be better known.

On the other hand, even some of the textbooks that do discuss the
fundamental algorithms often propagate naive views about GC that are rather
damaging.  (Henry Baker, Hans Boehm, and I have all put a fair amount
of effort into trying to slow the spread of those myths on the net.)

This is also true of traditional allocators.  The history of allocator
research has been a big mess---the literature is a bit of a disaster
area---and the textbooks reflect this.  The analyses in the books are 
shallow and largely wrong.  (This is less attributable to the textbook
authors than the weak discussions of GC.  It's not their fault that they
can't summarize the gist of the literature and get it right, because
the literature in general is disorganized, inconsistent, and often wrong.)

One of the problems in the area of traditional memory allocators is that
people have taken one particular textbook far too seriously---Volume 1
of Knuth's _The_Art_of_Computer_Programming_.  It was written in 1968,
and some of it has turned out to be less than helpful.  It's still the
standard reference, though, and other textbook writers often regurgitate
its discussion of memory allocators.  Implementors often look at it and
go and implement things that have been known to be bad since the early
1970's.  (Knuth is still tremendously influential in allocator work,
despite the fact that he doesn't appear to have written anything about it
in over 25 years.  This is not Knuth's fault, of course---inertia makes
the world go 'round.)

"Modified First Fit" with the roving pointer is the clearest example.  It
was a bad idea, and it was quickly shown to be bad, but some other textbook
writers keep mindlessly cribbing from Knuth, and even some implementors still
use it.

Obligatory positive comment:  the best textbook discussion of allocators
that I know of is Tim Standish's in _Data_Structure_Techniques_.  He doesn't
recognize the near-universal methodological problems with allocator studies,
but he's unique in recognizing the basic data structure and algorithm issues
in implementing allocators.

>> I suggest looking at the papers on our web site (in my .sig, below) which
>> include two surveys (long and medium-sized) on GC.  (The long version will
>> appear in Computing Surveys after some revision.)

This site also has our big survey on memory allocators from IWMM '95, which
I hope will influence future textbooks.  It talks about empirical methodology
as well as giving a fairly exhaustive treatment of implementation techniques.

>> There are also several
>> other papers there by my research group and a bunch by other people
>> (from the '91 and '93 OOPSLA GC workshops), and a big bibliography in
>> LaTeX .bib format.  The web page also has links to Henry Baker's and Hans
>> Boehm's web pages.


-- 
| Paul R. Wilson, Comp. Sci. Dept., U of Texas @ Austin (······@cs.utexas.edu)
| Papers on memory allocators, garbage collection, memory hierarchies,
| persistence and  Scheme interpreters and compilers available via ftp from 
| ftp.cs.utexas.edu, in pub/garbage (or http://www.cs.utexas.edu/users/wilson/)      
From: Cyber Surfer
Subject: Re: GC & traditional allocators & textbooks
Date: 
Message-ID: <823455623snz@wildcard.demon.co.uk>
In article <··········@jive.cs.utexas.edu>
           ······@cs.utexas.edu "Paul Wilson" writes:

> >> >If you want to know about basic
> >> >garbage collecting, I'd recommend a computer science book, as it'll
> >> >probably be more up to date.
> >> 
> >> I have to disagree here.  I know of no textbooks with even a decent
> >> discussion of garbage collection. [...]
> >
> >Was I referring to modern GC? I'm not sure. I don't know of any books
> >on modern GC, but a book 20 years old seems to contain GC techniques
> >that many C/C++ programmers are unaware of. Even if that's the best
> >book on the subject, it could still enlighten a few programmers.
> 
> OK, agreed.  There are some fundamental algorithms that *are* in some
> textbooks and should be better known.

I agree. I just suspect that those who think about these issues a lot
underestimate how a programmer who is unfamiliar with even basic GC
will react to the more advanced techniques. Disbelief is a common
reaction, which I'm sure you've also seen. Arthur C Clarke's third
law is worth quoting here, "Any sufficiently advanced technology is
indistinguishable from magic". Confront a programmer with some "magic"
and they'll disbelieve it.
 
> On the other hand, even some of the textbooks that do discuss the
> fundamental algorithms often propagate naive views about GC that are rather
> damaging.  (Henry Baker, Hans Boehm, and I have all put a fair amount
> of effort into trying to slow the spread of those myths on the net.)

And I hope you have a lot of success in your efforts, some of which I've
witnessed here on UseNet. However, you first have to kill the idea that
"successful GC" is magic. If it does work, then you'll have to offer
examples of "real world" uses of GC before a typical programmer will be
convinced. Most of the programmers I know are very suspicious of computer
science, which doesn't help. There's no way they'll ever bother reading
a paper about GC. On the other hand, they probably won't have read any
books on the subject, either.

This only leaves the word of other programmers, and the masses of code
that apparently have been written _without_ the use of GC. As I've said,
C/C++ programmers see the cost of everything and the value of nothing.
Please note that I'm a C/C++ programmer myself, and I also make
that mistake from time to time. Some languages let you focus on the
very small picture, instead of the big picture, where GC can help.
 
> This is also true of traditional allocators.  The history of allocator
> research has been a big mess---the literature is a bit of a disaster
> area---and the textbooks reflect this.  The analyses in the books are 
> shallow and largely wrong.  (This is less attributable to the textbook
> authors than the weak discussions of GC.  It's not their fault that they
> can't summarize the gist of the literature and get it right, because
> the literature in general is disorganized, inconsistent, and often wrong.)

I won't argue with that! ;-) I'd love to see a good summary.

> One of the problems in the area of traditional memory allocators is that
> people have taken one particular textbook far too seriously---Volume 1
> of Knuth's _The_Art_of_Computer_Programming_.  It was written in 1968,
> and some of it has turned out to be less than helpful.  It's still the
> standard reference, though, and other textbook writers often regurgitate
> its discussion of memory allocators.  Implementors often look at it and
> go and implement things that have been known to be bad since the early
> 1970's.  (Knuth is still tremendously influential in allocator work,
> despite the fact that he doesn't appear to have written anything about it
> in over 25 years.  This is not Knuth's fault, of course---inertia makes
> the world go 'round.)

I also have that book, and the other 3 volumes. However, they didn't
stop me from writing and using a GC. If I wanted to write a floating
point package (very unlikely, even 10 years ago), then I might turn
to Knuth, but not for anything like memory management.

Perhaps I've been brainwashed by those evil people at Xerox PARC,
when I read the August 1981 issue of Byte. ;-) I doubt it. I just
don't have a blind spot that prevents me from seeing problems for
which a GC can help. As you said, Knuth's book is very old. In it
he refers to decimal computers! <ahem> Very dated, now.

> "Modified First Fit" with the roving pointer is the clearest example.  It
> was a bad idea, and it was quickly shown to be bad, but some other textbook
> writers keep mindlessly cribbing from Knuth, and even some implementors still
> use it.

Is it really worse than Best Fit? I've wondered about that ever
since I first read that book. You seem like a good person to ask. ;)
 
> Obligatory positive comment:  the best textbook discussion of allocators
> that I know of is Tim Standish's in _Data_Structure_Techniques_.  He doesn't
> recognize the near-universal methodological problems with allocator studies,
> but he's unique in recognizing the basic data structure and algorithm issues
> in implementing allocators.

I'll see if I can find that book. Thanks.

> This site also has our big survey on memory allocators from IWMM '95, which
> I hope will influence future textbooks.  It talks about empirical methodology
> as well as giving a fairly exhaustive treatment of implementation techniques.

I FTP'd it a week or two ago. I intend to read it, as soon as I
get a (binary) copy of Ghostscript for NT. Thanks.
-- 
<URL:http://www.demon.co.uk/community/index.html>
<URL:http://www.enrapture.com/cybes/namaste.html>
Po-Yeh-Pao-Lo-Mi | "You can never browse enough."
From: Paul Wilson
Subject: allocator studies (was Re: GC & traditional allocators & textbooks)
Date: 
Message-ID: <4f59c3$7il@jive.cs.utexas.edu>
In article <············@wildcard.demon.co.uk>,
Cyber Surfer  <············@wildcard.demon.co.uk> wrote:
>In article <··········@jive.cs.utexas.edu>
>           ······@cs.utexas.edu "Paul Wilson" writes:
>
>> The history of allocator
>> research has been a big mess---the literature is a bit of a disaster
>> area---and the textbooks reflect this.  The analyses in the books are 
>> shallow and largely wrong.  (This is less attributable to the textbook
>> authors than the weak discussions of GC.  It's not their fault that they
>> can't summarize the gist of the literature and get it right, because
>> the literature in general is disorganized, inconsistent, and often wrong.)
>
>I won't argue with that! ;-) I'd love to see a good summary.

Well, very briefly:

   1. Traditional allocator research has largely missed the point, because
      the true causes of fragmentation haven't been studied much.  Most
      studies (a recent exception being Zorn, Grunwald, and Barrett's
      work at the U. of Colorado at Boulder, and Phong Vo's latest paper
      from Bell Labs) have used synthetic traces whose realism has never
      been validated, rather than real program behavior.

   2. The standard synthetic traces of "program" behavior are unrealistic,
      because they're based on pseudo-random sequences, even if the
      object size and lifetime distributions are realistic.  For good
      allocator policies, this causes an unrealistically high degree
      of fragmentation, because you're shooting holes in the heap at
      random.  For some programs and some allocators, though, you get
      unrealistically low fragmentation.  In general, the random trace
      methodology makes most allocators' performance look pretty similar,
      when in fact the regularities in real traces make some allocators
      work *much* better than others.  (A caricature in code follows
      this list.)

   3. People have often focused on the speed of allocator mechanisms,
      at the expense of studying the effects of policy in a realistic
      context.  As it turns out, some of the best policies are amenable
      to fast implementations if you just think about it as a normal
      data-structures-and-algorithms problem.  The best policies have
      often been neglected because people confused policy and mechanism,
      and the best-known mechanisms were slow.

   4. People have usually failed to separate out "true" fragmentation
      costs---due to the allocator's placement policy and its interactions
      with real program behavior---from simple overheads of simplistic
      mechanisms.  Straightforward tweaks can reduce these costs easily.
     
   5. The best-known policies (best fit and address-ordered first fit)
      work better than anyone ever knew, while some policies (Knuth's
      "Modified First Fit" or "Next Fit") work worse than anyone ever
      knew, for some programs.  This is because randomized traces tend
      probabilistically to ensure that certain important realistic
      program behaviors will never happen in traditional experiments.

   6. None of the analytical results in the literature (beginning with
      Knuth's "fifty percent rule") are valid.  They're generally based
      on two or three assumptions that are all systematically false for
      most real programs.  (Random allocation order, steady-state behavior,
      and independence of sizes and lifetimes.  To make mathematical
      analyses tractable, people have sometimes made stronger and even
      more systematically false assumptions, like exponentially-distributed
      random lifetimes.)

   7. Because of these problems, most allocator research over the last
      35 years has been a big waste of time, and we still don't know
      much more than we knew in 1968, and almost nothing we didn't
      know in the mid-70's.
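
To make point 2 concrete, the traditional experiment is a caricature of
real program behavior, on the order of the following (my sketch, not any
particular study's code):

    /* Sizes and lifetimes come from plausible distributions, but
       births and deaths are interleaved at random -- precisely the
       property real program traces do not have. */
    #include <stdlib.h>

    enum { STEPS = 100000, MAX_LIVE = 1000 };

    int main() {
        static void *live[MAX_LIVE];        /* starts out all null */
        for (int t = 0; t < STEPS; t++) {
            int slot = rand() % MAX_LIVE;   /* random death order...     */
            free(live[slot]);               /* (free(0) is a no-op)      */
            long size = 16 + rand() % 1000; /* ...random size, chosen    */
            live[slot] = malloc(size);      /* independently of lifetime */
        }
        for (int i = 0; i < MAX_LIVE; i++) free(live[i]);
        return 0;
    }

Real programs allocate in phases and in clusters of identical sizes;
nothing remotely like this loop.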

> [...]
>> "Modified First Fit" with the roving pointer is the clearest example.  It
>> was a bad idea, and it was quickly shown to be bad, but some other textbook
>> writers keep mindlessly cribbing from Knuth, and even some implementors still
>> use it.
>
>Is it really worse than Best Fit? I've wondered about that ever
>since I first read that book. You seem like a good person to ask. ;)

Yes, it is significantly worse, and some versions are often far worse, even
pathological.  If you keep the free list in address order, it's just worse.
If you keep the free list in LIFO order, it can be pathological.  (LIFO
order is attractive because it's cheaper than address order, and if all
other things were equal, it would improve locality.)  For some programs,
it systematically takes little bites out of big free blocks, making those
blocks unusable for allocation requests of the same big size in the future.
Many traditional experiments have masked this effect by using smooth
distributions of object sizes, so that odd-sized blocks are less of a
problem.

(Given random sizes, you're likely to have a request that's a good fit
for the block size pretty soon, even after taking a bite out of it.  For
real traces, you're likely *not* to get any requests for
comparable-but-smaller sizes anytime soon.  And the randomization of
order tends to eliminate the potential pathological interactions due
to particular sequence behavior.  You eliminate the extreme cases.)
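
For reference, the policy itself is trivial: first fit, except that each
search resumes where the last one stopped.  A sketch (block splitting and
unlinking omitted):

    struct Block { long size; Block *next; };

    Block *rover;     /* roving pointer into a circular free list */

    Block *next_fit(long request) {
        if (!rover) return 0;
        Block *start = rover, *b = start;
        do {
            if (b->size >= request) {  /* first big-enough block wins;  */
                rover = b->next;       /* no attempt at a *good* fit -- */
                return b;              /* hence the bites out of big    */
            }                          /* blocks described above        */
            b = b->next;
        } while (b != start);
        return 0;                      /* nothing on the list fits */
    }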

>> Obligatory positive comment:  the best textbook discussion of allocators
>> that I know of is Tim Standish's in _Data_Structure_Techniques_.  He doesn't
>> recognize the near-universal methodological problems with allocator studies,
>> but he's unique in recognizing the basic data structure and algorithm issues
>> in implementing allocators.
>
>I'll see if I can find that book. Thanks.
>
>> [Our] site [http://www.cs.utexas.edu/users/oops/] also has our big survey 
>> on memory allocators from IWMM '95, which I hope will influence future
>> textbooks.  It talks about empirical methodology as well as giving a
>> fairly exhaustive treatment of implementation techniques.

-- 
| Paul R. Wilson, Comp. Sci. Dept., U of Texas @ Austin (······@cs.utexas.edu)
| Papers on memory allocators, garbage collection, memory hierarchies,
| persistence and  Scheme interpreters and compilers available via ftp from 
| ftp.cs.utexas.edu, in pub/garbage (or http://www.cs.utexas.edu/users/wilson/)      
From: Phong Vo
Subject: Re: allocator studies (was Re: GC & traditional allocators & textbooks)
Date: 
Message-ID: <3117E708.41C67EA6@research.att.com>
Paul Wilson wrote:
> 

>    1. Traditional allocator research has largely missed the point, because
>       the true causes of fragmentation haven't been studied much.  Most
>       studies (a recent exception being Zorn, Grunwald, and Barrett's
>       work at the U. of Colorado at Boulder, and Phong Vo's latest paper
>       from Bell Labs) have used synthetic traces whose realism has never
             ^^^^^^^^^
>       been validated, rather than real program behavior.
> 
Just a quick note that I am now with the new AT&T Research.
The main topics of the paper that Paul mentioned are a new API for memory allocation
called Vmalloc and a performance study comparing it against a number of
well-known malloc implementations. The code is currently available for
non-commercial use, for anyone interested, at the URL below:
	http://www.research.att.com/orgs/ssr/reuse/
This address also has pointers to many other neat software tools from our group.

Phong Vo, ···@research.att.com
From: Paul Wilson
Subject: Re: Why garbage collection?
Date: 
Message-ID: <4f56t6$7i0@jive.cs.utexas.edu>
In article <············@wildcard.demon.co.uk>,
Cyber Surfer  <············@wildcard.demon.co.uk> wrote:
>In article <··········@jive.cs.utexas.edu>
>           ······@cs.utexas.edu "Paul Wilson" writes:
>
>Was I referring to modern GC? I'm not sure. I don't know of any books
>on modern GC, but a book 20 years old seems to contain GC techniques
>that many C/C++ programmers are unaware of. Even if that's the best
>book on the subject, it could still enlighten a few programmers.

True.  Simple GC techniques are acceptable for a fair number of applications.
My favorite example is scripting languages.  Most scripting languages are
so slow that the cost of GC is negligible, even if it's implemented badly
by the standards of the state of the art.  People often use reference counting
because that's what the C++ books give examples of, when mark-sweep would
work like a charm, and often when copying collection wouldn't be very
hard.  Reference counting works well in the sense that it wastes little
space most of the time---getting space back immediately in most cases,
rather than waiting until the next GC---but for a slow language implementation,
mark sweep is just about as efficient if you crank the GC frequency up.

With a simple non-generational GC, you may pay a fair amount of CPU time
to get space back quickly, but for scripting languages that cost is still
usually swamped by the cost of interpretation.  The real cost may be in
locality, because a simple GC touches most or all of the live data at
each GC cycle.  So you'd rather have a generational GC.

And a generational GC is pretty easy to implement, too.  Appel implemented
a decent generational GC for ML in 500 lines of C code.  For fast implemen-
tations of general-purpose languages, you may want something a little
fancier (his write barrier is probably too simple), but not much fancier.
For a scripting language, a 500 line GC is probably plenty fast.
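
For the curious, the simple kind of write barrier is just a few
instructions on every pointer store, on the order of this sketch (my
illustration with made-up names, not Appel's code):

    /* A generational collector must notice stores that create
       old->young pointers, so that it can treat them as extra
       roots when collecting only the young generation. */
    struct Obj;

    extern char *young_lo, *young_hi;   /* bounds of the young generation */

    Obj **remembered[10000];            /* slots that may point young     */
    int   nremembered;                  /* (overflow handling omitted)    */

    void write_field(Obj **slot, Obj *value) {
        char *v = (char *)value, *s = (char *)slot;
        if (value
            && v >= young_lo && v < young_hi       /* value is young... */
            && !(s >= young_lo && s < young_hi))   /* ...slot is not    */
            remembered[nremembered++] = slot;      /* remember the slot */
        *slot = value;
    }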

-- 
| Paul R. Wilson, Comp. Sci. Dept., U of Texas @ Austin (······@cs.utexas.edu)
| Papers on memory allocators, garbage collection, memory hierarchies,
| persistence and  Scheme interpreters and compilers available via ftp from 
| ftp.cs.utexas.edu, in pub/garbage (or http://www.cs.utexas.edu/users/wilson/)      
From: Bruno Haible
Subject: Re: Why garbage collection?
Date: 
Message-ID: <4fjfvl$nps@nz12.rz.uni-karlsruhe.de>
Jeff Dalton <····@cogsci.ed.ac.uk> wrote:

>>> Just look at the technical strength of the argument that GC is not
>>> "in the tradition of the C community"...
>>
>>Yeah, I love it. ;-)
>
> But it _is_ true that GC is not in the tradition of the C community.
> The argument that it's a "hidden cost" is key here.  C programmers
> feel that they know what everything will do in machine terms, and
> to a fair extent they are right.  (That's so despite a number of
> difficulties and exceptions.)
>
> So when a allocation might do lots of collecting as well (or
> whatever), and you don't really know when, that seems to move
> C into the higher-level / less-in-touch-with-the-machine camp.

But apparently more and more C programmers learn and use C++. This
language also has hidden costs here and there:

  - An assignment may involve more than moving around memory words,

  - A method (a.k.a. "member function") call usually involves two
    pointer accesses and two branches in addition to the C function
    call,

  - Virtual inheritance introduces hidden pointers within the objects,

  - Exception handling adds tables whose size is comparable to the
    size of the code,

and this is silently accepted by the C++ community. Exceptions
haven't been in the C tradition either, yet they are very welcome
in the C++ community. In fact, the C++ programmers are pushing the
C++ compiler vendors to implement these things.
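
(The member function cost is easy to see if you write out what a virtual
call roughly compiles to.  A sketch with made-up names, not any particular
compiler's output:)

    /* Roughly what obj->f(x) turns into when f is virtual: two memory
       loads plus an indirect call, versus one direct call in C. */
    typedef void (*Method)(void *self, int arg);

    struct Object { Method *vptr; /* ...data members... */ };

    void call_virtual(Object *obj, int slot_of_f, int x) {
        Method *vtbl = obj->vptr;        /* pointer access 1: the vtable */
        Method  fn   = vtbl[slot_of_f];  /* pointer access 2: the slot   */
        fn(obj, x);                      /* indirect branch, plus return */
    }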

So: why did GC have to wait for the arrival of Java before it became
widely accepted?


Bruno Haible                                     email: <······@ilog.fr>
Software Engineer                                phone: +33-1-49083585
From: Vlastimil Adamovsky
Subject: Re: Why garbage collection?
Date: 
Message-ID: <4ecmfo$as9@news2.ios.com>
······@ma2s2.mathematik.uni-karlsruhe.de (Bruno Haible) wrote:

>Just look at the technical strength of the argument that GC is not
>"in the tradition of the C community"...

The technical strength of the argument is that C++ is still a low-level
language with high-level advantages, and as such it should do all
low-level operations itself and, where needed, implement high-level
features on top of its own low-level facilities.




*******************************************
*    Vlastimil Adamovsky                  *
* Smalltalk, C++ and Envelop development  *
*******************************************
From: Omar Othman
Subject: How to contain a object with different attribute...
Date: 
Message-ID: <11c7cc$a24.1a2@mercury>
Hi. C++ers.

This question keeps bothering me.  I know it can be done.  I just need 
some example to get through the concept:

If I want to hold a lot of objects with different attributes in one 
array or tree, so that I can walk through all of them one by one, how 
can I do it?  If your answer is too long, please point me to a book 
with some examples.

thanks a lot.

You can post or send an e-mail to ·····@simsci.com

Jerry
From: Stefan Monnier
Subject: Re: Why garbage collection?
Date: 
Message-ID: <4ei4og$la1@info.epfl.ch>
In article <··········@news2.ios.com>,
Vlastimil Adamovsky <····@gramercy.ios.com> wrote:
] ······@ma2s2.mathematik.uni-karlsruhe.de (Bruno Haible) wrote:
] >Just look at the technical strength of the argument that GC is not
] >"in the tradition of the C community"...
] The technical strength of the argument is that C++ is still a low-level
] language with high-level advantages, and as such it should do all
] low-level operations itself and, where needed, implement high-level
] features on top of its own low-level facilities.

You mean like the virtual table stuff that could be implemented with
low-level operations but instead is well hidden?  At least well enough
to make C++ objects very inconvenient for representing any low-level
data structure.

Little example: the vtable can be considered a class-pointer, but C++ has been
well enough designed to make it impossible for you to explicitly add any kind
of info to the vtable.

C++ is not low-level enough in this respect.
Furthermore, C++ is too low-level to make it reasonable to implement a copying
GC, thanks to all the nasty casts the programmer might feel like doing.

Basically, C++ is a mix between low and "high"-level language and I'm not
sure it's a good idea, since its high-level features are not really usable
when you want to use the low-level features and the low-level features make it
hard to take advantage of several aspects of the high-level features.
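
The classic offender: nothing prevents a program from disguising a pointer,
e.g. (a contrived sketch, assuming a long can hold a pointer):

    long  hide(void *p)  { return (long)p ^ 0x55555555L; }
    void *unhide(long h) { return (void *)(h ^ 0x55555555L); }

A collector that moves objects can neither find nor update the disguised
copy, so every object has to stay put.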


        Stefan
From: Marco Antoniotti
Subject: Re: Why garbage collection?
Date: 
Message-ID: <s08spgxh3r1.fsf@lox.ICSI.Berkeley.EDU>
In article <··········@research.att.com> ··@research.att.com (Bjarne Stroustrup <9758-26353> 0112760) writes:

   From: ··@research.att.com (Bjarne Stroustrup <9758-26353> 0112760)
   Newsgroups: comp.lang.lisp,comp.lang.c++
   Date: Tue, 30 Jan 1996 00:07:29 GMT
   Organization: Info. Sci. Div., AT&T Bell Laboratories, Murray Hill, NJ
   Lines: 66
   Xref: agate comp.lang.lisp:20712 comp.lang.c++:172072

   ·······@lox.icsi.berkeley.edu (Marco Antoniotti) writes:

    > [discussion about C++ by someone else]
    > 
    > This is probably off-track, but, as a diversion, please bear this last
    > gripe of mine on the language which is going to be swapped away by
    > Java (no, it is not Lisp :) ).

   You are indeed way off track, and I think the announcements of C++'s
   imminent demise are rather premature.

Well, I think so too.

    > One of the things that bothered me most with C++, was this sort of
    > "newspeak" which it introduced.  For years people had been working in
    > Flavors, Clos, Smalltalk etc, and they pretty much shared a common
terminology.  Then suddenly, we did not have "methods" any more, we
    > had "member functions", we lost the "inheritance" (pun intended) and
    > started "deriving classes".

   I think you have your dates wrong. The C++ terminology was picked in
   1979. Then, the work on CLOS hadn't yet started, Smalltalk-80 hadn't
   been completed, and its predecessor was not well known outside a small
   circle of researchers. I don't recall the dates for Flavors and Loops,
   but again these languages were not known outside the AI community for
   quite a few years.

   The C++ terminology is based on that of Simula (1967) and partly on that
   of C (1972). The base- and derived class terminology was indeed invented
   for C++ - based on rather negative experience teaching using the Simula
   super- and subclass terminology.

   A good source for dates and other historical facts about these languages is:

	   Preprint of Proc. ACM History of Programming Languages
	   Conference (HOPL-2).
	   April 1993.
	   ACM SIGPLAN Notices, March 1993.

   The C++ paper there is

	   Stroustrup: The History of C++: 1979-1991.

   A more thorough description of the design of C++ is:

	   Stroustrup: The Design and Evolution of C++.
	   Addison-Wesley. ISBN 0-201-54330-3.

    > Of course, the argument is that C++ wanted to "clarify" such things
    > and the choice of new terminology was a "good thing".

   You got the motivation wrong as well. There wasn't an accepted terminology
   to "clarify." I stuck to the most widely used terminology at the time
   (Simula's) as far as I could, and introduced new terms that fitted that
   and the terminology of C only where I saw no alternative.

    > Well, I must say that I am very pleased to see that Java somewhat
    > reintroduced the "old" terminology and that Lisp, (as well as Dylan)
    > is not yet dead.
    > 
    > Half seriously yours

   It is a good idea to be at least half accurate even if only half serious.

Of course (this time seriously), I cannot contest Bjarne's account of
the history of C++ and the motivations that lead him to the choices
he made.  In partial justification I can only say that the earliest
document of Bjarne's on C++ that was widely available dates to 1983
("Adding Classes to C...", Software Practice and Experience 13), and that I
doubt that many people actually saw the AT&T C++ preprocessor until
87/88 (I might, of course, be wrong on this.)  Of course, not many
people had a Xerox or a Symbolics to play around with either.

What I find very interesting in Bjarne's post is the reference to
Simula.  I never programmed in it and have only memories of the
chapters in Ghezzi's book (1st edition).  It would seem that my
accusation of inventing a "newspeak" must then fall on the
Smalltalk/Loops/Flavors people. :)

Cheers
-- 
Marco Antoniotti - Resistente Umano
===============================================================================
International Computer Science Institute	| ·······@icsi.berkeley.edu
1947 Center STR, Suite 600			| tel. +1 (510) 643 9153
Berkeley, CA, 94704-1198, USA			|      +1 (510) 642 4274 x149
===============================================================================
	...it is simplicity that is difficult to make.
	...e` la semplicita` che e` difficile a farsi.
				Bertolt Brecht
From: Cyber Surfer
Subject: Re: Why garbage collection?
Date: 
Message-ID: <823382258snz@wildcard.demon.co.uk>
In article <··········@news2.ios.com>
           ····@gramercy.ios.com "Vlastimil Adamovsky" writes:

> Do I need Garbage Collection? NO! Never ever was it needed in my
> programs. People crying for GC should see DB Tools.h++ written by
> RogueWave and then they would understand why it is not necessary.

My experience is that programming in C, you develop a blind spot.
Because some things are hard to do, you don't see them as possible
ways of coding something.
 
> I would wish to our software industry more real programmers in the New
> Year 1996.

More programmers with blind spots? I won't argue with that. If I find
an edge, the last thing I'm going to do is tell everyone else about it.
If I call one of the "tricks" I use GC, and you tell me that it can't
be an edge, then I won't disillusion you.

I would wish to our software industry better tools and more productive
programmers - so long as their software isn't competing with mine,
that is. ;-) That's unlikely, so I'm not worried...
-- 
<URL:http://www.demon.co.uk/community/index.html>
<URL:http://www.enrapture.com/cybes/namaste.html>
Po-Yeh-Pao-Lo-Mi | "You can never browse enough."
From: Ed Kaulakis
Subject: Re: Why garbage collection?
Date: 
Message-ID: <31148ACE.458A@mcig.com>
If Vlastimil has never orphaned storage or followed a dead pointer and if CyberSurfer has never 
needed a program to run in predictable real time with minimal hardware resources, we should all pay 
attention to this thread. Otherwise, not.
From: Vlastimil Adamovsky
Subject: Re: Why garbage collection?
Date: 
Message-ID: <4fc11m$i6u@news2.ios.com>
Ed Kaulakis <···@mcig.com> wrote:

>If Vlastimil has never orphaned storage or followed a dead pointer and if CyberSurfer has never 
>needed a program to run in predictable real time with minimal hardware resources, we should all pay 
>attention to this thread. Otherwise, not.

It could happen maybe three or four times (dead pointers), but I never
felt the need for GC.
I compile a program, run some memory-checking programs, and that's it.



*******************************************
*    Vlastimil Adamovsky                  *
* Smalltalk, C++ and Envelop development  *
*******************************************
From: Vlastimil Adamovsky
Subject: Re: Why garbage collection?
Date: 
Message-ID: <4fc11k$i6u@news2.ios.com>
Cyber Surfer <············@wildcard.demon.co.uk> wrote:

>My experience is that programming in C, you develop a blind spot.

I don't program in C.

>I would wish to our software industry better tools and more productive
>programmers

It can be expressed this way, yes...



*******************************************
*    Vlastimil Adamovsky                  *
* Smalltalk, C++ and Envelop development  *
*******************************************
From: Bruno Haible
Subject: Re: Why garbage collection?
Date: 
Message-ID: <4fjjk0$r9m@nz12.rz.uni-karlsruhe.de>
Daniel Barlow <······@xserver.sjc.ox.ac.uk> wrote:
>
> I don't know lisp.  I've done little bits of elisp (but I understand that
> that Doesn't Count) and I've played with scheme (actually guile), but 
> that's about all.  I'd like to learn (curiosity value) but I'm
> holding off until I can think of something to write in it.  And until
> I have more time.  

You can write any kind of non-trivial text/data/representation processing
in Lisp. But you have to wait until you have _less_ time.

For example, a couple of days ago I wrote a smallie which converts a
unidiff (output of "diff -u") to the equivalent context diff (output
of "diff -c3"), for readability.

First I wrote it in Lisp, within 2 hours. It worked and still works fine.

Then I rewrote it in C++. It took me more than a whole day. It mostly works,
but the output contains garbled lines in rare cases. Hence the resulting
program is useless unless I put several more hours of debugging into it.

Needless to say, the C++ code is three times as large as the Lisp
code, and therefore doesn't reflect the overall algorithm as well.

Conclusion: The time you save by using a high-level language like Lisp and
an interactive development environment is amazing. I wish to invite you:
try the experiment yourself, with a non-trivial program of yours.

> The limiting factor is surely not cost.  I have gcc and gcl on my 
> computer; they were both entirely free.  In fact, I understand that gcl
> comes precompiled as part of the popular Slackware Linux distribution.
> I know of approximately one linux user who actually installed it (except
> by accident).  Why the low takeup?

Maybe because gcl, as present in Slackware, is pretty spartan:
no command-line history (unless you use Emacs), CLOS not built-in,
slow compiler. You should complement it with CLISP.

                   Bruno

----------------------------------------------------------------------------
Bruno Haible                            net: <······@ilog.fr>
ILOG S.A.                               tel: +33 1 4908 3585
9, rue de Verdun - BP 85                fax: +33 1 4908 3510
94253 Gentilly Cedex                    url: http://www.ilog.fr/
France                                  url: http://www.ilog.com/
From: Matthias Blume
Subject: Re: Why garbage collection?
Date: 
Message-ID: <BLUME.96Feb3182051@zayin.cs.princeton.edu>
In article <············@wildcard.demon.co.uk> Cyber Surfer <············@wildcard.demon.co.uk> writes:

   In article <··········@news2.ios.com>
	      ····@gramercy.ios.com "Vlastimil Adamovsky" writes:

   > Do I need Garbage Collection? NO! Never ever was it needed in my
   > programs. People crying for GC should see DB Tools.h++ written by
   > RogueWave and then they would understand why it is not necessary.

   My experience is that programming in C, you develop a blind spot.
   Because some things are hard to do, you don't see them as possible
   ways of coding something.

   > I would wish to our software industry more real programmers in the New
   > Year 1996.

   More programmers with blind spots?

Well, I also think that the world would be a better place if it were
full of people like Vlastimil.  We wouldn't be inhibited in our work
by unnecessary knowledge, we would be so full of ourselves... it would
be awesome.  We would all be so smart that real-life problems would
be so infinitely small that, in order to spice things up, we would use
all the wrong tools (or no tools at all) to get things done.  Only
this way would we find excitement and satisfaction in life.

I wish I were a Real Guy, with a macho attitude, just like Vlastimil.
BTW, women just *love* macho programmers.

Cheers,
--
-Matthias
From: Vlastimil Adamovsky
Subject: Re: Why garbage collection?
Date: 
Message-ID: <4fl5b0$9k0@news2.ios.com>
·····@zayin.cs.princeton.edu (Matthias Blume) wrote:


>I wish I were a Real Guy, with a macho attitude, just like Vlastimil.
I am happy I am your idol. 

>BTW, women just *love* macho programmers.
I know.



*******************************************
*    Vlastimil Adamovsky                  *
* Smalltalk, C++ and Envelop development  *
*******************************************
From: Bruno Haible
Subject: Re: Why garbage collection?
Date: 
Message-ID: <4fjk9p$17e@nz12.rz.uni-karlsruhe.de>
Vlastimil Adamovsky <····@gramercy.ios.com> wrote:
> Why would you need to modify the vtable?  Create a pointer to your
> table and modify it as you wish.  By the way, can you modify a
> C-language compiler so it will behave as YOU want it to?

Yes. gcc and g++ come with source. You want to add a warning, add a
built-in or fix a bug? You can.

                   Bruno
From: Richard Pitre
Subject: Re: Why garbage collection?
Date: 
Message-ID: <4eipuh$3nk@ra.nrl.navy.mil>
In article <··········@nz12.rz.uni-karlsruhe.de>  
······@ma2s2.mathematik.uni-karlsruhe.de (Bruno Haible) writes:
> Prof. Henry Baker <······@netcom.com> wrote:
> >
> > The other thing I'd like to do is to ask the ACM to embargo any textbooks
> > that continue to prattle on about how bad GC is.  Unfortunately, these
> > textbooks lend credence to the old adage "If you can, do; if you can't,
> > write a bad textbook".
> 
> Please start with the textbook "C++ for C programmers", by Ira Pohl,
> 2nd edition. On page 268, he writes:
> 
>   " Other complexity issues are fundamental to the C++ language design,
>   such as the lack of garbage collection (GC). Several proposals exist
>   [4][5][6], and their implementations support the contention that they
>   can be done without degrading performance in most applications. Most
>   other major OOP languages, such as Smalltalk, CLOS, and Eiffel, support
>   GC. The argument for GC is that it makes the programmer's task distinctly
>   easier. Memory leaks and pointer errors are common when each class
>   provides for its own storage management. These are very hard errors to
>   find and debug. GC is a well-understood technology, so why not?
> 
>   " The argument against GC is that it extracts a hidden cost from all
>   users when employed universally. Also, GC manages memory but not other
>   resources. This would require destructors for _finalization_.
>   Finalization is the return of resources and other behavior when an
>   object's lifetime is over. For example, the object might be a file,
>   and finalization might require closing the file. Finally, it is not
>   in the tradition of the C community to have free store managed
>   automatically."
> 
>   [4] Hans-J. Boehm and Mark Weiser. "Garbage Collection in an Uncooperative
>       Environment." Software - Practice and Experience, Sept. 1988,
>       pp. 807-820.
> 
>   [5] Daniel Edelson and Ira Pohl. "A Copying Collector for C++." In
>       Usenix C++ Conf. Proc. 1991, pp. 85-102.
> 
>   [6] Daniel Edelson. "A Mark and Sweep Collector for C++." In Proc. Princ.
>       Prog. Lang., January 1992.
> 

Everyone understands that it's a "good thing" to reduce the effort required to
generate and maintain a program, and everyone appreciates the fact that bugs are
a "bad thing", especially when they cannot be overcome by statistical
arguments (BS), or by legal, marketing, and insurance departments.  But until
everyone appreciates the actual cost of software development and maintenance,
and until everyone has a close relative who has suffered grievous bodily harm
because of a software error, originless, destination-free discussions, like
the one in this book, will continue.  What worries me is that, at least in the
good ole US of A, there is precedent for a solution to the software quality
problem based on marketing, insurance, and legal mechanisms.  This type of
solution does not necessarily promote technological innovation in computer
products or a need for any more computer science.

If you prioritize speed over quality and reliability in the right way, then you
can have all your code written in C or C++ on a small budget.  In time, software
optimization technology will remove this problem.  Otherwise, only critical code
at the bottom layers of software systems can sometimes be written in C or C++.
Even so, for small enough amounts of code the advantages of C++ over assembler
are, in my incredible mind, questionable, because you have to rely on one of
today's compilers and because C++ is complex.  Today's compilers are written in
a psychological ambiance containing a special "warm fuzzy".  This "warm fuzzy"
suggests that the inevitability of bugs implies that we don't have to worry
too much about generating them, or about developing serious fundamental
technology to avoid them.  This insanity has progressed to the point that many
people will casually acknowledge that their CPU may have documented and
undocumented bugs.  To the extent that software consumers are insensitive to
differences in quality, only junk will survive in the marketplace.  Based on my
recent shopping extravaganzas, there is a serious need for that kind of
sensitivity training in the software-consuming public.

richard 


 
From: Marco Antoniotti
Subject: Re: Why garbage collection?
Date: 
Message-ID: <s08wx6akhlt.fsf@lox.ICSI.Berkeley.EDU>
In article <··········@info.epfl.ch> "Stefan Monnier" <··············@lia.di.epfl.ch> writes:

   From: "Stefan Monnier" <··············@lia.di.epfl.ch>
   Newsgroups: comp.lang.lisp,comp.lang.c++
   Date: 29 Jan 1996 09:41:36 GMT
   Organization: Ecole Polytechnique Federale de Lausanne
   Lines: 30
   Originator: ·······@lia.di.epfl.ch (Stefan Monnier)
   Xref: agate comp.lang.lisp:20701 comp.lang.c++:171908

   In article <··········@news2.ios.com>,
   Vlastimil Adamovsky <····@gramercy.ios.com> wrote:
   ] ······@ma2s2.mathematik.uni-karlsruhe.de (Bruno Haible) wrote:
   ] >Just look at the technical strength of the argument that GC is not
   ] >"in the tradition of the C community"...
   ] The technical strength of the argument is that C++ is still a low-level
   ] language with a high-level language advantage, and as such it should
   ] do all the low-level operations itself and, where needed, implement
   ] the high-level features by using its own low-level implementations.

   You mean like the virtual table stuff that could be implemented with
   low-level operations but instead is well hidden?  At least well enough
   to make C++ objects very inconvenient for representing any low-level
   data structure.

   Little example: the vtable can be considered a class pointer, but C++
   has been well enough designed to make it impossible for you to
   explicitly add any kind of info to the vtable.

This is probably off-track, but, as a diversion, please bear with this
last gripe of mine about the language that is going to be swept away by
Java (no, it is not Lisp :) ).

One of the things that bothered me most with C++, was this sort of
"newspeak" which it introduced.  For years people had been working in
Flavors, Clos, Smalltalk etc, and they pretty much shared a common
terminology.  Then suddenly, we did not have "methods" any more, we
had "member functions", we lost the "inheritance" (pun intended) and
started "deriving classes".

Of course, the argument is that C++ wanted to "clarify" such things
and the choice of new terminology was a "good thing".

Well, I must say that I am very pleased to see that Java somewhat
reintroduced the "old" terminology and that Lisp, (as well as Dylan)
is not yet dead.

Half seriously yours
-- 
Marco Antoniotti - Resistente Umano
===============================================================================
International Computer Science Institute	| ·······@icsi.berkeley.edu
1947 Center STR, Suite 600			| tel. +1 (510) 643 9153
Berkeley, CA, 94704-1198, USA			|      +1 (510) 642 4274 x149
===============================================================================
	...it is simplicity that is difficult to make.
	...e` la semplicita` che e` difficile a farsi.
				Bertholdt Brecht
From: Bjarne Stroustrup <9758-26353> 0112760
Subject: Re: Why garbage collection?
Date: 
Message-ID: <DLywCI.LDG@research.att.com>
·······@lox.icsi.berkeley.edu (Marco Antoniotti) writes:

 > [discussion about C++ by someone else]
 > 
 > This is probably off-track, but, as a diversion, please bear with this
 > last gripe of mine about the language that is going to be swept away by
 > Java (no, it is not Lisp :) ).

You are indeed way off track, and I think the announcements of C++'s
imminent demise are rather premature.

 > One of the things that bothered me most with C++, was this sort of
 > "newspeak" which it introduced.  For years people had been working in
 > Flavors, Clos, Smalltalk etc, and they pretty much shared a common
 > terminology.  Then suddenly, we did not have "methods" any more, we
 > had "member functions", we lost the "inheritance" (pun intended) and
 > started "deriving classes".

I think you have your dates wrong. The C++ terminology was picked in
1979. Then, the work on CLOS hadn't yet started, Smalltalk-80 hadn't
been completed, and its predecessor was not well known outside a small
circle of researchers. I don't recall the dates for Flavors and Loops,
but again these languages were not known outside the AI community for
quite a few years.

The C++ terminology is based on that of Simula (1967) and partly on that
of C (1972). The base- and derived class terminology was indeed invented
for C++ - based on rather negative experience teaching using the Simula
super- and subclass terminology.

A good source for dates and other historical facts about these languages is:

	Preprint of Proc. ACM History of Programming Languages
	Conference (HOPL-2).
	April 1993.
	ACM SIGPLAN Notices, March 1993.

The C++ paper there is

	Stroustrup: The History of C++: 1979-1991.

A more thorough description of the design of C++ is:

	Stroustrup: The Design and Evolution of C++.
	Addison-Wesley. ISBN 0-201-54330-3.
	
 > Of course, the argument is that C++ wanted to "clarify" such things
 > and the choice of new terminology was a "good thing".

You got the motivation wrong as well. There wasn't an accepted terminology
to "clarify." I stuck to the most widely used terminology at the time
(Simula's) as far as I could, and introduced new terms that fitted that
and the terminology of C only where I saw no alternative.

 > Well, I must say that I am very pleased to see that Java somewhat
 > reintroduced the "old" terminology and that Lisp, (as well as Dylan)
 > is not yet dead.
 > 
 > Half seriously yours

It is a good idea to be at least half accurate even if only half serious.

	- Bjarne
From: Vlastimil Adamovsky
Subject: Re: Why garbage collection?
Date: 
Message-ID: <4f2kom$q5a@news2.ios.com>
····@cogsci.ed.ac.uk (Jeff Dalton) wrote:

>In article <··········@news2.ios.com> ····@gramercy.ios.com (Vlastimil Adamovsky) writes:
>>·······@lox.icsi.berkeley.edu (Marco Antoniotti) wrote:
>>
>>>This is probably off-track, but, as a diversion, please bear with this
>>>last gripe of mine about the language that is going to be swept away by
>>>Java (no, it is not Lisp :) ).
>>
>>I wonder what language the Java language is implemented in?

>So what language _is_ it implemented in?  C may be better than C++
>for such things (I prefer it anyway).

You cannot compare C++ and C.  The philosophy behind each language is
quite different.  It is as if you wanted to compare assembler and
Eiffel.  Which is better?

*******************************************
*    Vlastimil Adamovsky                  *
* Smalltalk, C++ and Envelop development  *
*******************************************
From: Bruno Haible
Subject: Common Lisp is dead (was: Re: Why garbage collection?)
Date: 
Message-ID: <4fjhu5$o2s@nz12.rz.uni-karlsruhe.de>
Marco Antoniotti <·······@lox.icsi.berkeley.edu> wrote:
>
> But since we are at it, I want to complete the list of penances that
> have been required from me. (Please assume the warning is repeated here).
>
> ...
>
> 3 - Common Lisp is dead.

The king is dead, who's the next king? Let's try it:

"Hurray! Long live ISLisp!"

Well, ISLisp is not accepted because it doesn't come with C++ syntax.
So let's try it this way:

"Hurray! Long live Dylan!"

But Dylan is suddenly already dead as well. How come? Never mind, let's try
it this way:

"Hurray! Long live Java!"

Seems to be for real this time. But: Where are the macros?

Dazed and confused.

---

Half seriously yours,

                            Bruno
From: William Paul Vrotney
Subject: Re: Common Lisp is dead (was: Re: Why garbage collection?)
Date: 
Message-ID: <vrotneyDMn59o.526@netcom.com>
In article <··········@nz12.rz.uni-karlsruhe.de>
······@ma2s2.mathematik.uni-karlsruhe.de (Bruno Haible) writes:

> Marco Antoniotti <·······@lox.icsi.berkeley.edu> wrote:

> >
> > But since we are at it, I want to complete the list of penances that
> > have been required from me. (Please assume the warning is repeated here).
> >
> > ...
> >
> > 3 - Common Lisp is dead.
> 
> The king is dead, who's the next king? Let's try it:
> 
> "Hurray! Long live ISLisp!"
> 
> Well, ISLisp is not accepted because it doesn't come with C++ syntax.
> So let's try it this way:
> 
> "Hurray! Long live Dylan!"
> 
> But Dylan is suddenly already dead as well. How come? Never mind, let's try
> it this way:
> 
> "Hurray! Long live Java!"
> 
> Seems to be for real this time. But: Where are the macros?
> 
> Dazed and confused.
> 

Please don't be dazed and confused to the point of giving up on Lisp, we
need your dedication.  And thank you for your contributions.

I look at it this way.  Lisp is the ONLY language that is STILL ALIVE (after
all these years)!  I am still making a living writing Lisp programs, as are
a number of other people I know.  Not a lot, but Lisp certainly is not
dead by any means, and there is a good chance that it will be rediscovered
and thrive in the 21st century.  The reason for this is that most currently
popular languages have discovered, and are still discovering, ideas that Lisp
has already (not to make a pun) addressed, but have not combined all of the
ideas as cohesively as Lisp has.  I see most of these popular languages as addressing a
specific need and not a more general programming problem as Lisp has
addressed.  If you solve somebody's specific problem with a specific
solution you are guaranteed short term success with that solution but not
necessarily long term success.

Perhaps not directly addressing your bedazzlement with Dylan and Java but
maybe some help to someone, I predict the "Static Typing" argument, which in
my opinion is the root of the current practical argument against Lisp with
any substance, will peter out in the near future.  The reason being that it
is probably too hard for code analyzers to do a perfect job.  And if it is
not perfect then it is not good enough to merit dealing with the
difficulties associated with it.  The best written complex C++ programs,
from my experience, will eventually break unpredictably.  Code "purifiers"
do a pretty good job, but once again they are not and probably can not be
made perfect, and hence the same argument applies.  One of the difficulties
with Static Typing is that it impedes natural programming constructs that
support generality and hence further abstraction.  Programmers have now seen
complex C++ projects fizzle due to becoming unmanageable from the standpoint
of building higher level abstractions simply because of the weight of the
artificial tricks built into the class hierarchy.  These artificial tricks
are there simply to support the Static Typing pedantics, i.e., in a
Dynamically Typed system they would not be there.  C++ programmers,
including myself, are quite proud of these tricks when we discover them.
But basically we are all fooling ourselves and avoiding the higher level
view of things.  Even the definition of the C++ language itself has to use
some artificial tricks for the language to be usable, but these make the
language itself more complex and harder to learn.  Virtual Functions and
Templates are two good examples of such tricks.  Clever, but unnatural and
necessary only to satisfy the Static Typing pedantics.  Virtual Function
implementation in Lisp, for example, is just one of many ways to do dynamic
type dispatching, but it is not required in the definition of the language.
And Templates can be completely avoided.  I'm still waiting for a GOOD
solution for heterogeneous container classes in C++, something you don't
even have to think about in Lisp.
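
For concreteness, here is a minimal sketch in Common Lisp (illustrative
names only, not from any particular system) of a heterogeneous container
plus dynamic dispatch; there is no template or virtual-function machinery
to maintain:

;; A list may freely mix objects of different types.
(defparameter *things* (list 42 "hello" 'some-symbol 3.14))

;; A generic function dispatches on the runtime class of its argument.
(defgeneric describe-thing (x))
(defmethod describe-thing ((x number)) (format t "a number: ~a~%" x))
(defmethod describe-thing ((x string)) (format t "a string: ~s~%" x))
(defmethod describe-thing ((x t))      (format t "something else: ~a~%" x))

;; Walk the heterogeneous list; each element selects its own method.
(mapc #'describe-thing *things*)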

So to sum up, I say

        "Hurray, long live Lisp!  But put up with specific solutions in
         the mean time."


-- 

William P. Vrotney - ·······@netcom.com
From: Philip Jackson
Subject: Re: Common Lisp is dead (was: Re: Why garbage collection?)
Date: 
Message-ID: <4fmhie$t43@condor.ic.net>
William Paul Vrotney (·······@netcom.com) wrote:

: I look at it this way.  Lisp is the ONLY language that is STILL ALIVE (after
: all these years)!  

Isn't Fortran still alive, and in widespread use for "scientific 
programs"?  If so, it's my understanding it originated about the same time
as Lisp.

Cheers,

Phil Jackson
------------------
"...for the word is the sole sign and the only certain mark of the
presence of thought hidden and wrapt up in the body..." -- Descartes
------------------
Standard Disclaimers. <········@ic.net>
From: William Paul Vrotney
Subject: Re: Common Lisp is dead (was: Re: Why garbage collection?)
Date: 
Message-ID: <vrotneyDMnLyw.FoL@netcom.com>
In article <··········@condor.ic.net> ········@falcon.ic.net (Philip
Jackson) writes:

   William Paul Vrotney (·······@netcom.com) wrote:

   : I look at it this way.  Lisp is the ONLY language that is STILL ALIVE (after
   : all these years)!  

   Isn't Fortran still alive, and in widespread use for "scientific 
   programs"?  If so, it's my understanding it originated about the same time
   as Lisp.

Are Roman numerals still alive?  They are still in widespread use.

Yes, FORTRAN and Lisp originated around the same time but there was no Ada,
C, C++, Mathematica, Smalltalk ... etc back then to compete with.  My
understanding is that most of the FORTRAN used these days is old existing
libraries or engineering applications.  Probably few programmers today would
start a new non-engineering project in FORTRAN.  However this is not true of
Lisp.  Engineers would be more apt to choose one of the Math/Engineering
interpreters/Lab instead of FORTRAN for numerical computation.  Whether, if
this is true, qualifies as being either dead or alive is a matter of
semantics.  I know of no one starting a brand new non-engineering project in
FORTRAN today, if someone does, let us know, and I will either revise my
statement, or advise the person doing such of other alternatives.  However
even if we put FORTRAN and Lisp in the same category, if Lisp is ONLY ONE OF
TWO oldest programming languages STILL ALIVE today, that is still pretty
remarkable.

-- 

William P. Vrotney - ·······@netcom.com
From: Richard Pitre
Subject: Re: Common Lisp is dead (was: Re: Why garbage collection?)
Date: 
Message-ID: <4fnmcd$pjm@ra.nrl.navy.mil>
In article <·················@netcom.com> ·······@netcom.com (William Paul  
Vrotney) writes:
> In article <··········@condor.ic.net> ········@falcon.ic.net (Philip
> Jackson) writes:
> 
>    William Paul Vrotney (·······@netcom.com) wrote:
> 
>    : I look at it this way.  Lisp is the ONLY language that is STILL ALIVE
>    : (after all these years)!
> 
>    Isn't Fortran still alive, and in widespread use for "scientific 
>    programs"?  If so, it's my understanding it originated about the same time
>    as Lisp.
> 
> Are Roman numerals still alive?  They are still in widespread use.
> 
> Yes, FORTRAN and Lisp originated around the same time but there was no Ada,
> C, C++, Mathematica, Smalltalk ... etc back then to compete with.  My
> understanding is that most of the FORTRAN used these days is old existing
> libraries or engineering applications.  Probably few programmers today would
> start a new non-engineering project in FORTRAN.  However this is not true of
> Lisp.  Engineers would be more apt to choose one of the Math/Engineering
> interpreters/Lab instead of FORTRAN for numerical computation.  Whether, if
> this is true, that qualifies it as either dead or alive is a matter of
> semantics.  I know of no one starting a brand new non-engineering project in
> FORTRAN today; if someone does, let us know, and I will either revise my
> statement or advise the person of other alternatives.  However
> even if we put FORTRAN and Lisp in the same category, if Lisp is ONLY ONE OF
> TWO oldest programming languages STILL ALIVE today, that is still pretty
> remarkable.
> 
> -- 
> 
> William P. Vrotney - ·······@netcom.com

I don't have specific instances lined up for you, but I believe that on big
parallel engines FORTRAN is becoming one of, if not the, primary programming
languages.  THE Fortran data structure, arrays of fixed-size numbers, finds
nirvana in the happy space of the hardware.  Fortran90 supports the construction
of other data structures, but I predict that this will never be popular, because
processing these other data structures will never be "efficient" on the
TVN/Fortran/C hardware.  There is even a High Performance Fortran (HPF) standard.
As far as I'm concerned they might as well EPROM the godforsaken
Fortran90/C/C++ compiler and make the standard shell an interactive
editor/debugger.  The hardware is designed for the language and vice versa.
Don't get me wrong.  There are lots of good uses for high-speed
array/parallel-array processing, and good ole Fortran is about all that you need
for this.

Programming in C is the Boss Hoss thing to do these days.  Most people
programming in C++ would have much greater success programming in Visual Basic.
Programmers who distribute their programs written in C++ should be forced to
include a warning.  The warning should display when the program starts up, and
the user should, at a minimum, be required to press Ctrl-Alt-Del in order to
proceed with the execution of the program.  The warning should state that the
code was written in C++, that the programmer has no credentials to support
any assumption of quality, and that to the extent that you rely on the code for
anything important, the code is dangerous.  A meta-level warning of this nature
should be imposed on C++ compilers.

As with any toy, C++ will eventually lose its popularity.  Eventually the
association of sexual prowess with the ability to write the Hello World program
in C++ will fade.  Then everyone will be left with the reality that C++ is
indispensable for a narrow range of problems, but otherwise it is a millstone.
Just in case the reader is toying with the idea that I'm crazy, I will give
confirmation by saying that the notion of object-oriented programming is
primarily necessitated by limitations of procedural programming.  It's almost a
good idea.

Lisp was a good idea, and then a big committee got hold of it and it got big.
With standardization it quit evolving to take advantage of the fruits of
research.  While Lisp will always be a great tool and it will not die, it will
eventually be supplanted in its domain.

richard
From: T. Kurt Bond
Subject: Re: Common Lisp is dead (was: Re: Why garbage collection?)
Date: 
Message-ID: <311F90ED.449B@sol.newnet.navy.mil>
Richard Pitre wrote:
> Lisp was a good idea, and then a big committee got hold of it and it got big.

Remember, Lisp is not just Common Lisp.  What about the other interesting dialects,
such as EuLisp or ILOG Talk?

> With standardization it quit evolving to take advantage of the fruits of
> research. 

Hmm.  The standardization of another Lisp dialect, Scheme, appears to have slowed 
considerably after its latest standard; perhaps because many of its implementors 
use it as a research vehicle?

I really don't see Lisp in general as no longer evolving; I'm not even sure that 
Common Lisp isn't still evolving.

--
T. Kurt Bond, ···@sol.newnet.navy.mil
From: Richard Pitre
Subject: Re: Common Lisp is dead (was: Re: Why garbage collection?)
Date: 
Message-ID: <4fq52q$fhj@ra.nrl.navy.mil>
In article <·············@sol.newnet.navy.mil> "T. Kurt Bond"  
<···@sol.newnet.navy.mil> writes:
> Richard Pitre wrote:
> > Lisp was a good idea, and then a big committee got hold of it and it got big.
> 
> Remember, Lisp is not just Common Lisp.  What about the other interesting
> dialects, such as EuLisp or ILOG Talk?
> 
> > With standardization it quit evolving to take advantage of the fruits of
> > research. 
> 
> Hmm.  The standardization of another Lisp dialect, Scheme, appears to have
> slowed considerably after its latest standard; perhaps because many of its
> implementors use it as a research vehicle?
> 
> I really don't see Lisp in general as no longer evolving; I'm not even sure
> that Common Lisp isn't still evolving.
> 
> --
> T. Kurt Bond, ···@sol.newnet.navy.mil

Lisp is probably one of the best environments for experimenting with new
ideas, languages, and algorithms.  It takes another level of effort to actually
extend the Lisp development environment, and the experimenter usually doesn't
have the background or the time to do so.  For example, it's easy to implement
Prolog in Lisp, but it's quite another matter to make the unification algorithm
use your Lisp function definitions (e.g., see the language Escher).  Addressing
fundamental semantic issues, completeness and soundness, and a slew of
practical problems is too much for me and most people that I know.  Of all the
languages that have been standardized, Lisp is probably the one that is least
restricted by that standardization.  I haven't been distinguishing between
dialects of Lisp.
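
To make "easy" concrete, here is a toy unifier in Common Lisp (a minimal
sketch: no occurs check, illustrative names, not taken from Escher or any
real Prolog system).  Hooking something like this into your actual Lisp
function definitions is where the hard work starts:

;; Variables are symbols whose names start with ? (e.g. ?x).
(defun variablep (x)
  (and (symbolp x) (char= (char (symbol-name x) 0) #\?)))

;; Unify two s-expressions, threading an alist of bindings.
;; Returns the symbol FAIL when the terms cannot be unified.
(defun unify (a b &optional (bindings '()))
  (cond ((eq bindings 'fail) 'fail)
        ((variablep a) (unify-var a b bindings))
        ((variablep b) (unify-var b a bindings))
        ((eql a b) bindings)
        ((and (consp a) (consp b))
         (unify (cdr a) (cdr b)
                (unify (car a) (car b) bindings)))
        (t 'fail)))

(defun unify-var (var val bindings)
  (let ((binding (assoc var bindings)))
    (if binding
        (unify (cdr binding) val bindings) ; already bound: unify the values
        (cons (cons var val) bindings))))  ; otherwise record a new binding

;; Example: (unify '(f ?x b) '(f a ?y))  =>  ((?Y . B) (?X . A))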

It would be a great thing if some recognized Lisp language experts got together
and began work on a specification of a new standard for Lisp that incorporated
the more useful functionality of logic and constraint logic programming.
Standardized graphical Lisp readers and writers would also be useful.
A committee of one person, or of three very like-minded persons, might
accomplish something really good for Lisp.  A bigger committee would guarantee
the need for several gigabytes of memory just to handle the standard function
library, and an expert in the intricacies of the new Lisp Canon Law to
interpret programs.

richard
From: Philip Jackson
Subject: Re: Common Lisp is dead (was: Re: Why garbage collection?)
Date: 
Message-ID: <4fooft$8ac@condor.ic.net>
Richard Pitre (·····@n5160d.nrl.navy.mil) wrote:
: In article <·················@netcom.com> ·······@netcom.com (William Paul  
: Vrotney) writes:
: > In article <··········@condor.ic.net> ········@falcon.ic.net (Philip
: > Jackson) writes:
: > 
: >    William Paul Vrotney (·······@netcom.com) wrote:
: > 
: >    : I look at it this way.  Lisp is the ONLY language that is STILL ALIVE  
: >    : (after all these years)!  
: > 
: >    Isn't Fortran still alive, and in widespread use for "scientific 
: >    programs"?  If so, it's my understanding it originated about the same 
: >    time as Lisp.
: > 
: > Are Roman numerals still alive?  They are still in widespread use.

Are Roman numerals a programming language? Answer = No, of course.

: > 
: > Yes, FORTRAN and Lisp originated around the same time but there was no Ada,
: > C, C++, Mathematica, Smalltalk ... etc back then to compete with.  My
: > understanding is that most of the FORTRAN used these days is old existing
: > libraries or engineering applications.  Probably few programmers today would
: > start a new non-engineering project in FORTRAN.  However this is not true of
: > Lisp.  Engineers would be more apt to choose one of the Math/Engineering
: > interpreters/Lab instead of FORTRAN for numerical computation.  Whether, if
: > this is true, that qualifies it as either dead or alive is a matter of
: > semantics.  I know of no one starting a brand new non-engineering project in
: > FORTRAN today; if someone does, let us know, and I will either revise my
: > statement or advise the person of other alternatives.

: > However
: > even if we put FORTRAN and Lisp in the same category, if Lisp is ONLY ONE OF
: > TWO oldest programming languages STILL ALIVE today, that is still pretty
: > remarkable.

Agreed.  Lisp is a great language and I hope it stays around much longer.
Fortran is a great language also, for what it does, though of course there's
no comparison with Lisp. Perhaps Lisp:Fortran::Fortran:Roman Numerals? :-)

: I don't have specific instances lined up for you, but I believe that on big
: parallel engines FORTRAN is becoming one of, if not the, primary programming
: languages.  THE Fortran data structure, arrays of fixed-size numbers, finds
: nirvana in the happy space of the hardware.  Fortran90 supports the construction
: of other data structures, but I predict that this will never be popular, because
: processing these other data structures will never be "efficient" on the
: TVN/Fortran/C hardware.  There is even a High Performance Fortran (HPF) standard.
: As far as I'm concerned they might as well EPROM the godforsaken
: Fortran90/C/C++ compiler and make the standard shell an interactive
: editor/debugger.  The hardware is designed for the language and vice versa.
: Don't get me wrong.  There are lots of good uses for high-speed
: array/parallel-array processing, and good ole Fortran is about all that you need
: for this. [...]


Thanks Richard for providing this information.

Cheers,

Phil Jackson
------------------
"...for the word is the sole sign and the only certain mark of the
presence of thought hidden and wrapt up in the body..." -- Descartes
------------------
Standard Disclaimers. <········@ic.net>
From: William Paul Vrotney
Subject: Re: Common Lisp is dead (was: Re: Why garbage collection?)
Date: 
Message-ID: <vrotneyDMp2wt.BH7@netcom.com>
In article <··········@condor.ic.net> ········@falcon.ic.net (Philip
Jackson) writes:
> Richard Pitre (·····@n5160d.nrl.navy.mil) wrote:
> : In article <·················@netcom.com> ·······@netcom.com (William Paul  
> : Vrotney) writes:
> : > In article <··········@condor.ic.net> ········@falcon.ic.net (Philip
> : > Jackson) writes:
> : > 
> : >    Isn't Fortran still alive, and in widespread use for "scientific 
> : >    programs"?  If so, it's my understanding it originated about the same 
> : >    time as Lisp.
> : > 
> : > Are Roman numerals still alive?  They are still in widespread use.
> 
> Are Roman numerals a programming language? Answer = No, of course.
> 

The point that I was trying to make here is to raise the question

        "What do we mean when we say non-bio something is alive?"

And in particular what do we mean when we say that a programming language is
alive.  In my last post I suggested that it might mean that programmers
would use it to write NEW programs in a NEW project.  I didn't think that
this was widely true of FORTRAN anymore, much as I don't think that it is
widely true that people would use Roman numerals to compute anymore.

I didn't intend to get off on this FORTRAN tangent, sorry.  If people are in
fact using FORTRAN to start new projects then I stand corrected and change
my statement to "Lisp is ONE of the oldest programming languages STILL
ALIVE", although that doesn't have quite the impact.  The idea here was to
compare Lisp with all other programming languages and not just the oldest
like FORTRAN.  The intent of this idea was to point out that Lisp is still
being used to start NEW projects combined with the fact that it has been
around for a long time.  People who say "Lisp is dead" need to answer to
this.  I didn't mean to put down FORTRAN, I meant to uplift Lisp.


-- 

William P. Vrotney - ·······@netcom.com
From: ······@user1.channel1.com
Subject: Re: Common Lisp is dead (was: Re: Why garbage collection?)
Date: 
Message-ID: <KASPER.96Feb21105550@user1.channel1.com>
My $.02 on all of this: My company sells a CL-based software system
that is used primarily by mechanical engineers.  There is a fair
number of CL programmers here.  The users must write CL code (with
our extensions) to use the tools we provide.  In addition to using CL,
the users often use Fortran, simply because so many engineering
problems have already been coded in Fortran.  Both languages are alive
and well.

-- 
Rich Kasperowski                   Cambridge, MA, USA   
home:   ······@user1.channel1.com
work:   ·····@concentra.com        +1-617-229-4637
From: Justin Cormack
Subject: Re: Common Lisp is dead (was: Re: Why garbage collection?)
Date: 
Message-ID: <4g00le$g1o@oak77.doc.ic.ac.uk>
In article <·················@netcom.com>
·······@netcom.com (William Paul Vrotney) wrote

> Perhaps not directly addressing your bedazzlement with Dylan and Java but
> maybe some help to someone, I predict the "Static Typing" argument, which in
> my opinion is the root of the current practical argument against Lisp with
> any substance, will peter out in the near future.  The reason being that it
> is probably too hard for code analyzers to do a perfect job.  And if it is
> not perfect then it is not good enough to merit dealing with the
> difficulties associated with it.  The best written complex C++ programs,
> from my experience, will eventually break unpredictably.  Code "purifiers"
> do a pretty good job, but once again they are not and probably can not be
> made perfect, and hence the same argument applies.  One of the difficulties
> with Static Typing is that it impedes natural programming constructs that
> support generality and hence further abstraction. ...

Actually it is *not* too hard for code analysers to do the job of static
typing: there are algorithms, based on unification, that provably work.
Languages based on this include Miranda and Haskell, which allow polymorphic
functions (that is, functions that can accept any of several types; see the
example below) to be defined without losing the benefits of static,
compile-time type checking, without any run-time overheads, and with
reusable higher-order functions.  This is fairly recent work (the earliest
implementations were around 1985), so it came too late for Lisp.  In the
long term, the fact that this can be done is probably the best argument
against Lisp, and also against the unduly restrictive static typing of
C/Pascal-like languages, which, as you rightly say, makes reusability very
difficult.

As an example (in Miranda), the function map, which applies a provided
function to each element of a list, can be defined as follows.  (Notation:
':' is infix cons, [] is nil, and function application is just a space, so
Miranda's f x is Lisp's (f x).)

map f []     = []
map f (x:xs) = f x : map f xs

The compiler determines that map has type

map :: (* -> **) -> [*] -> [**]  || funny Miranda notation for types...

i.e., take a function from one type to another and a list of items of the
first type, and return a list of items of the second type.  Then it can
give you a compile-time type error if you map the function "add 2" over
a list of characters, but not if it is given a list of numbers.

Essentially the idea is that any function that does not make any use
of the types of its arguments can be given arguments of any type, and
it can also be guaranteed that no run-time type checking is needed
either.
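
For comparison, the direct Lisp counterpart (a sketch; mapcar is the
built-in equivalent) is nearly identical, it just carries no compile-time
type, so the mistake below surfaces at run time instead of compile time:

;; The same function in Common Lisp, with no static type attached.
(defun my-map (f list)
  (if (null list)
      '()
      (cons (funcall f (car list))
            (my-map f (cdr list)))))

;; (my-map (lambda (x) (+ x 2)) '(1 2 3))    =>  (3 4 5)
;; (my-map (lambda (x) (+ x 2)) '(#\a #\b))  =>  run-time type error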

Have not got the references to hand - email me or try
http://www.lpac.ac.uk/SEL-HPC/Articles/GeneratedHtml/functional.type.html

Justin Cormack
·········@ic.ac.uk
From: Jeffrey Mark Siskind
Subject: Re: Common Lisp is dead (was: Re: Why garbage collection?)
Date: 
Message-ID: <QOBI.96Feb18091017@ee.technion.ac.il>
In article <··········@oak77.doc.ic.ac.uk> ····@doc.ic.ac.uk (Justin Cormack) writes:

   Languages based on this include Miranda and Haskell, which allow polymorphic
   functions (that is, functions that can accept any of several types; see the
   example below) to be defined without losing the benefits of static,
   compile-time type checking, without any run-time overheads, and with
   reusable higher-order functions.  This is fairly recent work (the earliest
   implementations were around 1985), so it came too late for Lisp.

See Stalin, an implementation of R4RS Scheme, available for free from my
home page.  It does exactly this kind of type inference (and a lot more).

   In the long term, the fact that this can be done is probably the best
   argument against Lisp, and also against the unduly restrictive static
   typing of C/Pascal-like languages, which, as you rightly say, makes
   reusability very difficult.

Quite to the contrary. The fact that this can be done is probably the best
argument for languages like Lisp that omit type declarations and allow the
compiler to infer them, and against languages that require the user to
manually provide type declarations.

--

    Jeff (home page http://www.cs.toronto.edu/~qobi)
From: Justin Cormack
Subject: Re: Common Lisp is dead (was: Re: Why garbage collection?)
Date: 
Message-ID: <4g7cfu$fg4@oak21.doc.ic.ac.uk>
In article <··················@ee.technion.ac.il>, ····@ee.technion.ac.il (Jeffrey Mark Siskind) writes:
|> See Stalin, an implementation of R4RS Scheme, available for free from my
|> home page.  It does exactly this kind of type inference (and a lot more).
|> 
|>    In the long term, the fact that this can be done is probably the best
|>    argument against Lisp, and also against the unduly restrictive static
|>    typing of C/Pascal-like languages, which, as you rightly say, makes
|>    reusability very difficult.
|> 
|> Quite to the contrary. The fact that this can be done is probably the best
|> argument for languages like Lisp that omit type declarations and allow the
|> compiler to infer them, and against languages that require the user to
|> manually provide type declarations.

I think we both agree here, actually.  If Lisp evolves in this direction
it will be fine.  It would also need the ability to optionally declare
types (otherwise the error messages are too unhelpful...)
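
For what it's worth, Common Lisp already has the optional half of this;
a minimal sketch (the function and its declarations are illustrative) of
how type declarations can be added incrementally to otherwise dynamically
typed code:

;; Without the DECLARE forms this is ordinary dynamically typed code;
;; with them, a compiler may check the types and/or exploit them for speed.
(defun dot-product (xs ys)
  (declare (type (simple-array double-float (*)) xs ys)
           (optimize (speed 3) (safety 1)))
  (let ((sum 0d0))
    (declare (type double-float sum))
    (dotimes (i (length xs) sum)
      (incf sum (* (aref xs i) (aref ys i))))))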
From: Marco Antoniotti
Subject: Re: Why garbage collection?
Date: 
Message-ID: <s08pwbr67hp.fsf@lox.ICSI.Berkeley.EDU>
In article <··········@news2.ios.com> ····@gramercy.ios.com (Vlastimil Adamovsky) writes:

   From: ····@gramercy.ios.com (Vlastimil Adamovsky)
   Newsgroups: comp.lang.lisp,comp.lang.c++
   Date: Fri, 02 Feb 1996 14:49:15 GMT
   Organization: Internet Online Services
   Lines: 13
   X-Newsreader: Forte Free Agent 1.0.82
   Xref: agate comp.lang.lisp:20745 comp.lang.c++:172712

   ·······@lox.icsi.berkeley.edu (Marco Antoniotti) wrote:

   >I assume that Vlastimil advocates the use of INTERCAL as the proper
   >language for Smalltalk and C++ environments construction. :)

   Would you let me know in what newsgroup INTERCAL is discussed?
   I don't work in academia, so maybe I'm missing something..

I am surprised.  How come a "real quiche-hating programmer" does not
know where INTERCAL is discussed?  alt.lang.intercal, of course. :)

You can also check the INTERCAL entry in the "New Hacker's Dictionary"
and its references in the RETROCOMPUTING entry. :)

Not bad for somebody "who works in academia", isn't it? :)

Cheers
-- 
Marco Antoniotti - Resistente Umano
===============================================================================
International Computer Science Institute	| ·······@icsi.berkeley.edu
1947 Center STR, Suite 600			| tel. +1 (510) 643 9153
Berkeley, CA, 94704-1198, USA			|      +1 (510) 642 4274 x149
===============================================================================
	...it is simplicity that is difficult to make.
	...e` la semplicita` che e` difficile a farsi.