I was looking through some Garnet info the other day and found the
following in an FAQ:
[12] Why the change to C++?
<snip>
* Speed: We spent 5 years and lots of effort optimizing our Lisp
code, but it was still pretty slow on "conventional" machines.
The initial version of the C++ version, with similar
functionality, appears to be about THREE TIMES FASTER than the
current Lisp version without any tuning at all.
When I look at lisp code versus C++, I really don't see how compiled
C++ can run 3x faster than compiled lisp. But I have to believe that
the people on the garnet project knew what the hell they were doing so
this statement seems to be outrageous - (5 year) optimized lisp code
is 3x slower than the first try at C++ ?!
Can anybody elaborate on the FAQ statement ?
I'm just interested in performance issues. C++ is a mess and I really
don't see why an optimizing lisp compiler wouldn't be able to keep up
with an optimizing C++ compiler.
When it comes to OO sorts of things, I CAN see that CLOS introduces a
nasty amount of overhead as opposed to C++ where you just toss
function pointers around.
Any reasonable, insightful, comments ?
Brian
Brian Denheyer <······@deldotd.com> writes:
< I was looking through some Garnet info the other day and found the
< following in an FAQ:
<
<
< [12] Why the change to C++?
<
< <snip>
<
< * Speed: We spent 5 years and lots of effort optimizing our Lisp
< code, but it was still pretty slow on "conventional" machines.
< The initial version of the C++ version, with similar
< functionality, appears to be about THREE TIMES FASTER than the
< current Lisp version without any tuning at all.
<
< When I look at lisp code versus C++, I really don't see how compiled
< C++ can run 3x faster than compiled lisp. But I have to believe that
< the people on the garnet project knew what the hell they were doing so
< this statement seems to be outrageous - (5 year) optimized lisp code
< is 3x slower than the first try at C++ ?!
Is it really possible for code compiled from one language to run
slower than code compiled from another language? The idea is a bit
silly if you think about it.
The problem is probably that the Lisp code was doing more work than
the C++ code. The perceived difference in speed may come from the
distribution of work; some people think the compiler should do it, and
other people think they have to do it.
If I were to write 300 lines of code in language x versus 1000 lines
of code in language z, I would like to think that the 1000 lines of
code were doing more work than the 300 lines. I don't know if other
people feel similarly; this is just how I would like to feel.
Consider the following C code.
[ I would show C++ but I don't have a C++ compiler - I saw `cout' being
shifted "Hello world" times to the left and stopped right there (I was
told that C++ was compatible with C, hence the assumption). ]
#include <stdio.h>
#include <string.h>

/* Return the i-th element of s -- no bounds check, just an indexed load. */
char
index_array (char *s, int i)
{
  return s[i];
}

int
main (void)
{
  int i = 1;
  char array[100];

  memset (array, 'A', 100);
  printf ("%d\n", index_array (array, i));   /* prints 65 */
  return 0;
}
Note that the following lisp code seems to do the same thing.
(defun index-array (string index)
  (write (aref string index)))

(let ((i 1)
      (array (make-array 100)))
  (fill array 65)
  (index-array array i))
Both versions take about the same amount of code, and they yield the
same result.
The lisp compilers I am familiar with generate _extremely_ efficient
code (from what I understand of how my machine works at least). My
experience is that C code seems to execute in less time because it
does less work, not because it can do the same work in less time.
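To be fair, you can also ask a Lisp compiler to do less work. A
minimal sketch (the declarations are the point; this is my variant,
not the code above):

(defun index-array-fast (vec index)
  ;; With these declarations a good compiler can turn this into an
  ;; unchecked indexed load, much like the C version.
  (declare (type simple-vector vec)
           (type fixnum index)
           (optimize (speed 3) (safety 0)))
  (svref vec index))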
The real question is probably one involving your value of time, but I
imagine people tend to get upset when thinking about this.
< When it comes to OO sorts of things, I CAN see that CLOS introduces a
< nasty amount of overhead as opposed to C++ where you just toss
< function pointers around.
Can you do the same things in C++ as CLOS? I imagine you could
probably get the same results, but I don't know enough about either to
make this judgment.
Brian Denheyer <······@deldotd.com> writes:
> When I look at lisp code versus C++, I really don't see how compiled
> C++ can run 3x faster than compiled lisp. But I have to believe that
> the people on the garnet project knew what the hell they were doing so
> this statement seems to be outrageous - (5 year) optimized lisp code
> is 3x slower than the first try at C++ ?!
Well, there might be several reasons, most of which have nothing to do
with C++ or Lisp:
a) The "first try" at C++ of course wasn't really a first try. It had
the experience of at least 5 years of Lisp optimization and usage
of Garnet. With that amount of experience with a system, any kind
of complete rewrite is going to fare much better in terms of speed,
since you already know many of the bottlenecks, and you know which
seldom-used features you can sacrifice to gain much in speed.
Many of these optimizations are out of the question for normal
maintenance, because they either break compatibility, and/or it
would require too much work to rewrite large portions of your
code-base.
b) C++ _forces_ you to micro-optimize most things, because you are
directly involved in nearly all low-level decisions. In writing
new code, this often gobbles up so much of your time, resources and
attention, that it leads to micro-overoptimization, whilst
macro-optimization, code quality, progress, maintainability,
etc. suffer.
When you are rewriting a well understood, stable toolkit, some of
the downsides of this decrease, whereas advantages increase.
c) The C++ version is _very_ unlikely to be really comparable in terms
of functionality, clarity or extendability. For example I don't
see any way they could have implemented their prototype-based
object system on top of C++ in any kind of reasonable way. Doesn't
mean there is no way, just that I can't imagine one, which could
just be my missing imagination again ;). So most likely they will
have switched to C++ built-in object-system, which of course
changes things quite a bit.
Of course, not knowing the C++ version of Garnet, and not knowing
Garnet in-depth, all of my comments should be taken in perspective.
OTOH most of this is generic reasoning...
Regs, Pierre.
--
Pierre Mai <····@acm.org> http://home.pages.de/~trillian/
"One smaller motivation which, in part, stems from altruism is Microsoft-
bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
There was a thread on this sometime back. (1-2 years? Try deja-news.)
I seem to remember someone (Barmar?) pointing out some really bad Lisp
code in the Lisp version of Garnet.
I agree with what you say here as well. I'll take your point about the
low level of C++ as an opportunity to point people to
http://www.elwood.com/alu/table/performance.htm#language (and indeed,
the whole of that page). As a teaser, I'll include here what I think
are the most important points about performance in Lisp versus other
languages -- points which I'd like to know that everyone who reads this
newsgroup understands (and I'm happy to think that practically all do):
"...
If it is easier and faster to program arbitrary applications in Lisp
than in some other languages, then, too, it is easier to program
inefficient programs in Lisp. ...
In Lisp, it is easy to obscure implementation details while making the
programmer's intent clear. This can lead to productive, clear, easily
maintained code, which may, sometimes, be inefficient.
In other languages, it is easy to obscure the programmer's intent while
making the implementation details clear. This can lead to efficient
code which may, sometimes, be unclear, unproductive, or even incorrect."
Pierre Mai wrote:
> Well, there might be several reasons, most of which have nothing to do
> with C++ or Lisp:
> <snip>
It was Ken Anderson at BBN. As I recall, he invested some time in looking over the
system and concluded that almost none of the "optimizations" done had any effect
at all, and there were very few declarations (most of the ones he found were
ineffectual). Additionally, when converting to C++ there was likely considerable
work done on the algorithms in the system. Since they used their own object
system, I wonder if there was much optimization done with it. Certainly, the
compiler would not be able to help much. Actually, it would be an interesting
exercise in optimization techniques to see what could be done with the code under
"modern" conditions with ACL or Harlequin.
"Howard R. Stearns" wrote:
> There was a thread on this sometime back. (1-2 years? Try deja-news.)
> I seem to remember someone (Barmar?) pointing out some really bad Lisp
> code in the Lisp version of Garnet.
>
Pierre Mai <····@acm.org> writes:
> Brian Denheyer <······@deldotd.com> writes:
>
> > When I look at lisp code versus C++, I really don't see how compiled
> > C++ can run 3x faster than compiled lisp. But I have to believe that
> > the people on the garnet project knew what the hell they were doing so
> > this statement seems to be outrageous - (5 year) optimized lisp code
> > is 3x slower than the first try at C++ ?!
I suspect that if they tried to rewrite the C++ version in lisp,
they would get another 2x or 3x speedup.
> Well, there might be several reasons, most of which have nothing to do
> with C++ or Lisp:
>
> a) The "first try" at C++ of course wasn't really a first try. It had
> the experience of at least 5 years of Lisp optimization and usage
> of Garnet. With that amount of experience with a system, any kind
> of complete rewrite is going to fare much better in terms of speed,
> since you already know many of the bottlenecks, and you know which
> seldom-used features you can sacrifice to gain much in speed.
> Many of these optimizations are out of the question for normal
> maintenance, because they either break compatibility, and/or it
> would require too much work to rewrite large portions of your
> code-base.
In other words, the lisp version was a prototype.
Coming from a hardware background myself, I grew up learning the
importance of the prototype, and have always been aware of the truth
in the adage "Always throw the first one away". I am a bit surprised
at how few programmers buy in to this truth, especially given how
easy it is to change software relative to hardware (yes, even C++)
[perhaps not as true as it once was, in this throw-the-old-chip-away
era, but it was certainly true 20 years ago; the equivalent of the
chip filled a room back then, and it was certainly not an option to
throw it away].
I believe that lisp's reputation for being slow comes about for
precisely this reason; it is so easy to prototype in lisp, and the
result looks so deceptively like the desired final results that it
is easy to just skip the step of rewriting it, and start heading
for production. Thus, it is easy to ship prototype code. And,
somewhat ironically, it is easy to write high-quality code using
a terrible algorithm (or, not knowing the problem space, the wrong
algorithm), so it is usually harder to recognize a prototype, or
for the developer to recognize that it must be re-written. So a
lisp project may go into production for 3, 5, 10, or more years in
the prototype stage.
> b) ...
>
> c) ...
>
> Of course, not knowing the C++ version of Garnet, and not knowing
> Garnet in-depth, all of my comments should be taken in perspective.
> OTOH most of this is generic reasoning...
As it is with me. I know little about Garnet, and my words are
more generalizations, and the Garnet discussion just gave me a
soap-box from which to preach ...
--
Duane Rettig Franz Inc. http://www.franz.com/ (www)
1995 University Ave Suite 275 Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253 ·····@Franz.COM (internet)
>>>>> "Duane" == Duane Rettig <·····@franz.com> writes:
Duane> Pierre Mai <····@acm.org> writes:
Duane> In other words, the lisp version was a prototype.
Duane> Coming from a hardware background myself, I grew up
Duane> learning the importance of the prototype, and have always
Duane> been aware of the truth in the adage "Always throw the
Duane> first one away". I am a bit surprised at how few
Duane> programmers buy in to this truth, especially given how easy
Duane> it is to change software relative to hardware (yes, even
Duane> C++)
You make some very good points, IMHO. The fact that the C++ version
was developed AFTER 5 years of tweaking lisp code had to have had a
major impact.
Along these lines, it's interesting that I haven't really seen any
good guidelines for coding for speed in lisp or scheme. Anybody have
some good pointers to books or webpages ?
Brian
--
Brian Denheyer ······@deldotd.com
Deldot Design, Inc. (503) 788-1607
__
\/.d = Electronic design and development
Well, if you want the word of someone with VERY limited experience, I would
recommend Norvig's "Paradigms of Artificial Intelligence Programming." He
has a couple of excellent chapters on efficiency issues. Actually most of the
book could be considered something of a style guide, or Guide to Writing Good
Lisp, because of the way that Dr. Norvig takes complex problems (from the
AI community) and describes the process of writing solutions in Lisp. I had
warm fuzzy feelings about Lisp before reading this book, but came away
feeling nearly evangelical about Lisp's place in the world. Luckily my fine
sense of pragmatism and irony keeps me from going medieval on my Visual C++
and Visual Basic (shudder) fellow workers.
Not so sure about web pages, but if there is such a page, I would betcha
that the ALU has one (http://www.elwoodcorp.com/alu/).
John Miller
········@enteract.com
Brian Denheyer <······@deldotd.com> writes:
>
> Along these lines, it's interesting that I haven't really seen any
> good guidelines for coding for speed in lisp or scheme. Anybody
> have some good pointers to books or webpages ?
> Brian
>
>
> Brian Denheyer <······@deldotd.com> writes:
> > Along these lines, it's interesting that I haven't really seen any
> > good guidelines for coding for speed in lisp or scheme. Anybody
> > have some good pointers to books or webpages ?
John Michael Miller wrote:
....
> Not so sure about web pages, but if there is a page I would betcha that ALU
> has one (http://www.elwoodcorp.com/alu/).
http://www.elwood.com/alu/table/style.htm
As always, comments on how to make things easier to find are always
welcome.
* Brian Denheyer <······@deldotd.com>
| Along these lines, it's interesting that I haven't really seen any
| good guidelines for coding for speed in lisp or scheme. Anybody have
| some good pointers to books or webpages ?
Norvig has already been mentioned, and Graham also includes a chapter on
various aspects of efficiency.
however, different things require different forms of attention in
different implementations of Common Lisp, so you really need to profile
your code to see where the hot spots are.
#:Erik
--
Y2K conversion simplified: Januark, Februark, March, April, Mak, June,
Julk, August, September, October, November, December.
* Brian Denheyer <······@deldotd.com>
| Along these lines, it's interesting that I haven't really seen any
| good guidelines for coding for speed in lisp or scheme. Anybody have
| some good pointers to books or webpages ?
In addition to the books and other good advice, you might want to check out
past articles concerning efficiency in this newsgroup. I remember two
instances where a bunch of people improved/rewrote pieces of code
for efficiency. Here are the dejanews pointers to the starting articles:
http://search.dejanews.com/getdoc.xp?AN=332449950
http://search.dejanews.com/getdoc.xp?AN=268473223
BM
Bulent Murtezaoglu said:
check out past articles concerning efficiency in this newsgroup. I
remember two instances where a bunch of people improved/rewrote pieces of
code for efficiency. Here are the dejanews pointers to the starting
articles:
http://search.dejanews.com/getdoc.xp?AN=332449950
http://search.dejanews.com/getdoc.xp?AN=268473223
For some reason, dejanews seems not to have indexed my followup summary to
the latter. Here it is. I learned a lot from this exercise.
Larry
Newsgroups: comp.lang.lisp,comp.lang.lisp.franz
Subject: Re: Optimization hints?
References: <··············@work.nlm.nih.gov> <················@naggum.no>
From: Larry Hunter <······@nlm.nih.gov>
Date: 27 Aug 1997 10:31:42 -0400
Thanks very much to Luca Pisati, Erik Naggum, Rainer Joswig & others who
made suggestions about optimizing the function I described. I thought I
would summarize my results in case others found them useful.
By far, the most important thing was Luca's suggestion to change
MISSING-VALUE-P from a function to a macro. Nearly all of the consing was
because the floats were being boxed for that function call, which was made
very many times. That change reduced the amount of consing by 3 orders of
magnitude, and the consequent speedup was nearly 2 orders of magnitude
(counting reductions in GC).
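To illustrate the kind of change involved (a sketch only -- the
sentinel value and the exact test are my reconstruction, not the
actual code): as a function, the SINGLE-FLOAT argument had to be boxed
at every call; as a macro, the comparison compiles inline and the
float never leaves its unboxed representation.

(defconstant +missing-value+ -9999.0)  ; hypothetical sentinel value

;; The original was a full function call, boxing its float argument:
;; (defun missing-value-p (x) (= x +missing-value+))

;; The macro version expands in place, so no boxing is needed:
(defmacro missing-value-p (x)
  `(= (the single-float ,x) +missing-value+))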
The next most important suggestion was to break up the calculation so that I
could declare the types in detail. This led to about a 30% speedup.
The final change that made any difference was to move the range calculation
out of the function to avoid redundancy; that shaved off an additional 10%
or so.
None of the other suggested changes (using NaN to represent missing values,
using a bit vector mask instead of a list of ignore positions, getting rid
of the ABS call and testing for negatives explicitly, eliminating the
instances when 0 is added to the result, etc.) made any significant
difference. Some actually slowed things down!
Thanks to your help, the time it takes to calculate the roughly 30 million
applications of this function has gone from about 4 days to under an hour.
The final version of the function is at the end of this message.
My old professor Alan Perlis was not entirely right when he said that lisp
programmers know the value of everything and the cost of nothing; some of
you really do know where the costs are hiding....
Thanks again.
Larry
(defun distance (v1 v2 &key (types *sb-types*) (ignore nil) (ranges nil))
  (declare (optimize (speed 3) (safety 0) (debug 0))
           (simple-vector v1 v2 types))
  (let ((result 0.0))
    (declare (single-float result)
             (simple-vector ranges))
    (unless ranges
      (setq ranges (map 'vector
                        #'(lambda (type) (- (third type) (second type)))
                        types)))
    (dotimes (i (length v1) result)
      (let ((val1 (svref v1 i))
            (val2 (svref v2 i)))
        (unless (or (member i ignore :test #'=)
                    (missing-value-p val1)
                    (missing-value-p val2))
          (incf result
                (the single-float
                  (case (first (svref types i))
                    (nominal (if (eq val1 val2) 0.0 1.0))
                    (real (abs (/ (the single-float
                                    (- (the single-float val1)
                                       (the single-float val2)))
                                  (the single-float (svref ranges i)))))
                    (integer (float (abs (/ (the fixnum
                                              (- (the fixnum val1)
                                                 (the fixnum val2)))
                                            (the fixnum (svref ranges i))))))))))))))
--
Lawrence Hunter, PhD.
National Library of Medicine phone: +1 (301) 496-9303
Bldg. 38A, 9th fl, MS-54 fax: +1 (301) 496-0673
Bethesda. MD 20894 USA email: ······@nlm.nih.gov
Erik Naggum <····@naggum.no> writes:
> however, different things require different forms of attention in
> different implementations of Common Lisp, so you really need to profile
> your code to see where the hot spots are.
And, actually, isn't that the only really useful advice on writing
fast code in CL? Profile and then fix the bottlenecks.
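E.g. the standard TIME macro is a reasonable first step before
reaching for an implementation-specific profiler; it reports run time
and, in most implementations, bytes consed (a trivial sketch):

(defun sum-vector (v)
  (reduce #'+ v))

(time (sum-vector (make-array 1000000 :initial-element 1)))
;; Watch the consing figure as much as the run time: unexpected
;; allocation is the most common Lisp bottleneck.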
--tim
* Duane Rettig <·····@franz.com>
| Coming from a hardware background myself, I grew up learning the
| importance of the prototype, and have always been aware of the truth in
| the adage "Always throw the first one away". I am a bit surprised at how
| few programmers buy in to this truth, especially given how easy it is to
| change software relative to hardware (yes, even C++) [perhaps not as true
| as it once was, in this throw-the-old-chip-away era, but it was certainly
| true 20 years ago; the equivalent of the chip filled a room back then,
| and it was certainly not an option to throw it away].
since I recently said I don't believe in prototypes, let me clarify and
expand on it: I don't think one should write software with the conscious
intent to throw it away. instead, you should be prepared for that
outcome, but work to make it the real system. after a while, you _may_
find that certain areas of the application domain were underresearched
and find it easier to start over, but people who work on software that is
known to become junk get quite disillusioned, and managers never give
them the resources they need to make the prototype useful to learn from.
instead, the first _real_ version will be the prototype that they wanted
the prototype to be: the one from which they learn what they were really
trying to do, so the prototype is a waste of time and money.
add to this that many a manager feels this way about paying for software
that isn't going to be the real thing and associates bad things with
"prototype". since so many are quick to point out that Lisp is great for
prototyping, the message is that Lisp is a great choice if you want to
waste time and money. I think it's the other way around: if you are free
to make all sorts of changes to the design with little impact on the
system, what you end up with has probably learned all there is to learn
from the iterative process that otherwise would have involved several
languages, teams, and project managers.
#:Erik
--
Y2K conversion simplified: Januark, Februark, March, April, Mak, June,
Julk, August, September, October, November, December.
Erik Naggum <····@naggum.no> writes:
> * Duane Rettig <·····@franz.com>
> | Coming from a hardware background myself, I grew up learning the
> | importance of the prototype, and have always been aware of the truth in
> | the adage "Always throw the first one away". I am a bit surprised at how
> | few programmers buy in to this truth, especially given how easy it is to
> | change software relative to hardware (yes, even C++) [perhaps not as true
> | as it once was, in this throw-the-old-chip-away era, but it was certainly
> | true 20 years ago; the equivalent of the chip filled a room back then,
> | and it was certainly not an option to throw it away].
>
> since I recently said I don't believe in prototypes, let me clarify and
> expand on it:
Sorry, I didn't see that statement or its context. And because of
your cogent arguments below, I think I need to do some clarifying
and explaining of my own. I think you'll find we are both touching
opposite sides of the same elephant. Your arguments sound like
the "quality" argument (i.e. "do it right the first time", "strive
for zero defects"), and mine comes from a slightly different
point of view that I believe is in harmony with the quality
perspective, but which is often forgotten.
Of course like any adage or truism, the one above is a generalization,
and one can always find circumstances in which the adage is false.
However, it is intended as a way to shake people out of an opposite
state of thinking. Another way to put it is "Don't hang on to the
first thing you did."
As for the definition of "prototype", my dictionary says "1. An original
model on which something is patterned". In other words, the first
time you build anything, it is by definition a prototype. However,
I submit that lispers prototype constantly, and that every time you
type "(defun foo () ...)" to the prompt, you are prototyping something
that you _intend_ to throw away (otherwise you would have put it into
a file!). This prototyping becomes so ingrained into lisp programmers
that they may not even realize they are doing it (or may find it hard
to explain how they can be so productive to a C++ programmer, who in
general doesn't have this instant-prototyping capability).
> I don't think one should write software with the conscious
> intent to throw it away. instead, you should be prepared for that
==================================================^^^^^^^^^^^^^^^^^^^^
> outcome, but work to make it the real system. after a while, you _may_
====^^^^^^^
> find that certain areas of the application domain were underresearched
====^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> and find it easier to start over, but people who work on software that is
> known to become junk get quite disillusioned, and managers never give
> them the resources they need to make the prototype useful to learn from.
> instead, the first _real_ version will be the prototype that they wanted
> the prototype to be: the one from which they learn what they were really
> trying to do, so the prototype is a waste of time and money.
This whole paragraph is true, but for a junior programmer (or electronics
engineer, or building contractor, or anyone else who builds _things_) the
flagged sections tend to be extremely hard to do. Everyone needs to be able
to take pride in what they do, but they need to understand exactly what it
is they are doing. Instead what happens is that the programmer (for example)
places all of his ego into the program he is building, instead of the ideas
which he is learning as a result of building his program. Thus it is
impossible to wrench him away from his "baby" either to rebuild it into
something much better, or else to move him on to something more interesting.
The world is full of programmers stuck on their own dinosaurs because
they won't let go.
> add to this that many a manager feels this way about paying for software
> that isn't going to be the real thing and associates bad things with
> "prototype". since so many are quick to point out that Lisp is great for
> prototyping, the message is that Lisp is a great choice if you want to
> waste time and money. I think it's the other way around: if you are free
> to make all sorts of changes to the design with little impact on the
> system, what you end up with has probably learned all there is to learn
> from the iterative process that otherwise would have involved several
> languages, teams, and project managers.
In which case you have already rewritten (at least parts of) your
application several times over. But to be free to do this rewriting,
you have to have made a correct estimate of how long the project
will take. If you factor in the prototyping (call it "learning curve",
or "concept development") time, you will have the time to do this.
[I observe that programmers, myself included, tend to be overly
optimistic in the way they estimate times for a project; it is
not that we are bad planners - a bad planner might estimate two
weeks for a two month project and 4 months for a three-week
project; it is instead the case that we are consistently over-
optimistic. And of course no manager wants to hear the bad news
regarding schedules; they always want to hear good news. But if
they hear bad news from someone they know can and has done the job
on time in the past, they tend to be much happier than when they
hear good news from someone that they know they are going to have
to plan "slop" time for. And for those programmers who gain the
reputation of getting work done on time, the subject of prototyping
should rarely even come up.]
--
Duane Rettig Franz Inc. http://www.franz.com/ (www)
1995 University Ave Suite 275 Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253 ·····@Franz.COM (internet)
* Duane Rettig <·····@franz.com>
| Your arguments sound like the "quality" argument (i.e. "do it right the
| first time", "strive for zero defects"), and mine comes from a slightly
| different point of view that I believe is in harmony with the quality
| perspective, but which is often forgotten.
um, no, not quite. I don't believe it is possible to do anything right
until you know what "right" means, and this is not available a priori,
despite what all the lectures in programming methodology and systems
design would have poor students believe. (if it was, we would have had a
perfect society, too -- law is nothing more than programming society, so
if it were possible to plan for everything, it would have been worked out
many millennia ago. I'm amazed that people think large-scale experiments
in top-down design like dictatorships and planned economies would fare
any better in any other endeavor involving creative people.) therefore,
you have to go through various iterations, and your design must be
prepared for this, but that is not to say that you should _not_ expect
success, or actively prohibit it.
the issue I want to raise is whether people should approach the first
version with "let's try to get this right" vs "we're going to throw this
one out, so let's not waste too much resources on it". the whole point
with wasting the first version is to waste _only_ the first version, at
worst. people who set out to do something that will be a waste in the
eyes of managers who cannot understand why their employees do not know
enough to get it right the first time, will _not_ learn all there is to
learn and will _not_ ensure that only the first version gets wasted, but
will waste version after version.
now, if they do get the first version right, and don't need to waste it at
all, that's a very commendable result, but if you only set out to
"prototype", this is no longer an option, either by choice of language or
tools or something else that everyone knows was intended to be wasted,
and would feel bad if it were to go into production. that's what I think
is wrong about using specific languages to prototype things and talk
about a language as "good for prototyping". it is invariably understood
as "not good for production code". I have to fight this impression.
in conclusion, it is hard enough work to do it right the second time when
you have all the data from the first try available if you should not also
have to do it a third time because you weren't _allowed_ to go all the
way in the first attempt.
| Another way to put it is "Don't hang on to the first thing you did."
my argument is that the reverse is equally valid: don't _always_ throw
out the first thing you did.
| However, I submit that lispers prototype constantly, and that every time
| you type "(defun foo () ...)" to the prompt, you are prototyping
| something that you _intend_ to throw away (otherwise you would have put
| it into a file!).
ok, that's the gist of your disagreement. I think intent to throw away
is the bad part about prototyping. throwing away is something you should
be prepared to do without much concern, but you should strive to make it
usable, too. if you intend to throw it away, the threshold to keep it is
set too high. now, what if it actually does work the way you wanted it
to? do you still throw it away? if you prototype in Lisp and write
production code in C, you would have to, right? that's bad. not only
because doing something you have already done over again is stifling, the
problems in C are different from the problems in Lisp and you have to do
a lot of redundant work.
| The world is full of programmers stuck on their own dinosaurs because
| they won't let go.
well, letting go is as much a part of caring about something as not
letting go. sadly, our cultures are built by people who couldn't let go,
so everything around us is shaped by an unhealthy conservatism, and
because people are afraid of having to suffer as much pain every time
they manage to get something to work somewhat right, we have a lot of
stuff that were shaped by a trial-and-error process that stopped as soon
as the errors were tolerably small. had people been able to let go, and
use their knowledge from what failed, perhaps we could move on. in my
view, the patent system was supposed to let inventors capitalize on the
works of others and make _significant_ improvements, while the patent
effectively forbade the meager insignificant improvements. a wonderful
mechanism that should be applauded and made much stronger, it has turned
into a monster that stifles innovation by admitting overly broad and
senseless patents, instead because the wrong kind of people approve
patents. this is just another example of how society is shaped by
conservatives who have no clue _what_ they are conserving, only that
change is bad.
| In which case you have already rewritten (at least parts of) your
| application several times over. But to be free to do this rewriting, you
| have to have made a correct estimate of how long the project will take.
| If you factor in the prototyping (call it "learning curve", or "concept
| development") time, you will have the time to do this.
ironically, you always have time to do that, despite what everyone says.
the key is to get something to the point where it does not fail, and then
you can change everything as long as it continues not to fail. people
always have time to improve things, and they embrace things that get improved
in oh so minor ways.
| But if they hear bad news from someone they know can and has done the job
| on time in the past, they tend to be much happier than when they hear
| good news from someone that they know they are going to have to plan
| "slop" time for. And for those programmers who gain the reputation of
| getting work done on time, the subject of prototyping should rarely even
| come up.
well, there's often a "fuzz factor" you can add to or multiply with what
people say. a good manager learns the fuzz factor and doesn't want to
change it. some programmers get all the big work done in the first few
days and then spend a lot of time fixing minor bugs, while others tinker
with fundamentals 95% of the time and whip up a marvelous piece of art in
the last few days. the key to managing is to understand what you deal
with, and programmers are predictable _one_ at a time, even when they are
(supposedly) part of a team.
incidentally, getting something to work is child's play. where you need
the expertise is in not making it fail, or fail gracefully. and I don't
think a prototype that only shows how it works is useful at all, since
you learn nothing about surviving the failure modes by having something
that only "works".
anyway, I like Common Lisp because I can write enough serious code in it
to watch how it actually deals with a hostile world (specifically, socket
code that distributes functionality across hundreds of computers over
thousands of little stretches of cable and routers that people upgrade
and add bogus routes to and all kinds of shit) and then rewrite it before
the hostilities resulted in harming anything. also, Allegro CL is the
first environment I have used which allows me to use all I know and learn
about the hostile world without writing tons and tons of code. when
something causes a crash only once a year, you ignore it in C because it
might change your whole design and basically send you back to the drawing
board for a year to deal with it, and then you suffer a crash once a
year, so it isn't worth the costs. it might take a week or a month to
redesign to survive such a problem in Common Lisp, but you can actually
afford to take care of it. this means that the next time you have to
deal with this kind of problem, the competent Common Lisp programmer gets
it right, while the competent C programmer still doesn't know what it
would take to solve that particular problem because he hasn't been there.
#:Erik
--
Y2K conversion simplified: Januark, Februark, March, April, Mak, June,
Julk, August, September, October, November, December.
Pierre Mai <····@acm.org> writes:
> c) The C++ version is _very_ unlikely to be really comparable in terms
> of functionality, clarity or extendability. For example I don't
> see any way they could have implemented their prototype-based
> object system on top of C++ in any kind of reasonable way. Doesn't
> mean there is no way, just that I can't imagine one, which could
> just be my missing imagination again ;). So most likely they will
> have switched to C++ built-in object-system, which of course
> changes things quite a bit.
I looked at it once, they have rewritten their object system in
C++. It wasn't even all that large in comparison to the lisp version.
--
Lieven Marchand <···@bewoner.dma.be>
------------------------------------------------------------------------------
Few people have a talent for constructive laziness. -- Lazarus Long
In article <··············@soggy.deldotd.com>, Brian Denheyer <······@deldotd.com> wrote:
> When it comes to OO sorts of things, I CAN see that CLOS introduces a
> nasty amount of overhead as opposed to C++ where you just toss
> function pointers around.
>
> Any reasonable, insightful, comments ?
Garnet did not really use CLOS. They had their own object system
(prototype based, IIRC). I checked it out in CMU CL years ago -
it was slow also because CMU CL has/had a weak GC implementation.
The LOOM project, btw., makes a similar statement about
their new "language" Stella, which can be compiled to
Lisp or C++. The C++ code runs ten times faster, they claim:
http://www.isi.edu/isd/LOOM/Stella/index.html
----
I remember the situation with MCL.
Years ago people started to develop big Lisp software in MCL
on 68k Macs. You really needed a high-end Mac (68040, lots of
memory) to get something done with these *big* (prototype)
software systems - and it was really expensive.
Then came the PowerPC. I got a machine with a PowerPC 603e.
It was slow, slow, slow. Two years later I got a new one.
*Ten* times faster -> Lisp screams.
There are only a few excuses not to use Lisp nowadays.
Speed is not one of them.
--
http://www.lavielle.com/~joswig
Rainer Joswig wrote:
> In article <··············@soggy.deldotd.com>, Brian Denheyer <······@deldotd.com> wrote:
>
> > When it comes to OO sorts of things, I CAN see that CLOS introduces a
> > nasty amount of overhead as opposed to C++ where you just toss
> > function pointers around.
> >
> > Any reasonable, insightful, comments ?
>
> Garnet did not really use CLOS. They had their own object system
> (prototype based, IIRC). I checked it out in CMU CL years ago -
> it was slow also because CMU CL has/had a weak GC implementation.
>
I seem to remember that Garnet built using CLISP with Bruno Haible's
homebrew CLX is about twice as fast as using standard CLX. I'm sure the
optimized CLX implementation would be portable to CMUCL. Combine that
with better GC in CMUCL, and you've probably covered most of the 3X
performance gap stated in previous posts.
> <snip>
--
Alias|Wavefront e-mail: ·····@aw.sgi.com
210 King St East phone: +1 (416) 362-8558 x8359
Toronto, Ontario M5A 1J7 FAX: +1 (416) 369-6151
Canada
----------------------------------------------------------------
An idea is a greater monument than a cathedral.
Spencer Tracy, "Inherit the Wind"
>>>>> "John" == John Casu <·····@aw.sgi.com> writes:
John> Rainer Joswig wrote:
>> In article <··············@soggy.deldotd.com>, Brian Denheyer <······@deldotd.com> wrote:
>>
>> > When it comes to OO sorts of things, I CAN see that CLOS introduces a
>> > nasty amount of overhead as opposed to C++ where you just toss
>> > function pointers around.
>> >
>> > Any reasonable, insightful, comments ?
>>
>> Garnet did not really use CLOS. They had their own object system
>> (prototype based, IIRC). I checked it out in CMU CL years ago -
>> it was slow also because CMU CL has/had a weak GC implementation.
>>
John> I seem to remember that Garnet built using CLISP with Bruno
John> Haible'shomebrew CLX is about twice as fast as using
John> standard CLX. I'm sure the optimized CLX implementation
John> would be portable to CMUCL.
I'm pretty sure that the optimized CLX for CLISP is not portable. It
was rewritten in "C" expressly for CLISP. I think someone has used
the FFI in GCL to hook up directly to X. That would be pretty fast, I
guess.
John> Combine that with better GC in CMUCL, and you've probably
John> coveredmost of the 3X performance gap stated in previous
John> posts.
On x86, there's a gengc. I guess it's good, but I don't know how
good. I find Garnet to run fast enough on a 200 MHz x86 box. It's
fine on a Sparc 20 too. Well, at least the demos run reasonably
fast. I don't have any "real" heavy-duty Garnet apps to test.
Ray
>>>>> "Rainer" == Rainer Joswig <······@lavielle.com> writes:
Rainer> The LOOM project, btw., makes a similar statement about
Rainer> their new "language" Stella, which can be compiled to
Rainer> Lisp or C++. The C++ code runs ten times faster, they claim:
Rainer> http://www.isi.edu/isd/LOOM/Stella/index.html
But aren't some of the things they claim also wrong:
Common Lisp provides a very supportive environment for the
development of symbolic processing applications (e.g.,
``intelligent'' applications). However, Common Lisp has
several drawbacks: Its performance is slow (as compared to a
C++ program); it does not interface well to tools and
applications in commercial environments (e.g., to C++ and
CORBA); and it fails to incorporate recent object system
trends (e.g., collections, iterators, and parameterized
types). As a result, customer demand and vendor support for
Lisp is declining, creating yet another reason to abandon
Lisp.
Lisp has CORBA, right?
I also think interfacing anything to C++ except C++ is quite hard, so
that's a moot point.
Doesn't lisp have collections and iterators (of some sort)?
I'm not sure about "parameterized types".
Ray
Raymond Toy wrote:
> Doesn't lisp have collections and iterators (of some sort)?
>
> I'm not sure about "parameterized types".
I tried to get opinions on these issues 1-2 months back. I did not get
messages other than a couple that agreed that it would be neat to be
able to specialise, for example, arrays or trees with elements of a
certain type.
In my previous posting, I was not complaining, just looking for opinion
of more experienced people.
You can obviously create classes where the actual contents are in a
slot, or write functions that perform iterations on arrays beyond
vectors.
Still, it is difficult to say that CL has explicit and thorough support
for collections and iterations. I agree with those saying that in CL
you do not need this as much as in other languages due to its power and
flexibility, and it is often trivial to create collections atop of
classes, structures, arrays, hash tables and lists. It is not a
show-stopper by any means.
Robert
* Raymond Toy <···@rtp.ericsson.se>
| But aren't some of the things they claim also wrong:
|
| However, Common Lisp has several drawbacks:
| Its performance is slow (as compared to a C++ program);
my experience is that well-written Common Lisp is more space and time
efficient than well-written C++, but not so with well-written C in which
you basically hard-code every assumption and thereby win big. enter a
lot of dynamic memory, and C also loses to Common Lisp, not the least
because malloc/free are hopelessly inefficient in both time and space.
| it does not interface well to tools and applications in commercial
| environments (e.g., to C++ and CORBA);
I'm sure this was true once.
| and it fails to incorporate recent object system trends
| (e.g., collections, iterators, and parameterized types).
these are specific implementations of "inventions" required in C++
because it doesn't have what people _actually_ need.
| As a result, customer demand and vendor support for Lisp is
| declining, creating yet another reason to abandon Lisp.
I think "creating" here should be read literally, they literally had to
create a reason to abandon Lisp...
| Lisp has CORBA right?
there's the ILU, and Allegro CL ORBlink.
| I also think interfacing anything to C++ except C++ is quite hard, so
| that's a moot point.
well, Allegro CL does that, too.
| Doesn't lisp have collections and iterators (of some sort)?
we have mapping functions, which are much more powerful than iterators.
collections? what's a list if _not_ a collection?
| I'm not sure about "parameterized types".
I see STRING in (coerce <whatever> 'string) as a parameterized type.
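concretely (all ordinary ANSI Common Lisp; just a sketch):

(coerce '(#\H #\i) 'string)                  ; => "Hi"
(map '(vector character) #'char-upcase "hi") ; => "HI"
(make-array 10 :element-type 'single-float   ; specialized storage,
               :initial-element 0.0)         ; i.e. a "vector of float"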
there are still some valid reasons to reject Common Lisp. I find it
rather curious that people don't use them.
#:Erik
--
Y2K conversion simplified: Januark, Februark, March, April, Mak, June,
Julk, August, September, October, November, December.
Erik Naggum <····@naggum.no> writes:
>
> | Lisp has CORBA right?
>
> there's the ILU, and Allegro CL ORBlink.
>
Harlequin also offer a CORBA ORB (The Harlequin Common Lisp ORB).
Clive Tong Email: ·····@harlequin.co.uk
Harlequin Ltd, Barrington Hall, TEL: +44 1223 873834
Barrington, Cambridge CB2 5RG, England. FAX: +44 1223 873873
These opinions are not necessarily those of Harlequin Ltd.
Erik Naggum wrote:
> | I'm not sure about "parameterized types".
>
> I see STRING in (coerce <whatever> 'string) as a parameterized type.
Haven't you forgotten to attach a smiley at the end of your sentence?
:-) If not, what did you mean by it? To me, a parameterized type
would be something like "vector of integers" or "list of bank
transaction objects" where you parameterize LIST or VECTOR with your
choice of element type. You can specialize methods on STRING, but not
with something you create with DEFTYPE.
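For example (FROB and SMALL-INT are names I just made up to sketch the
distinction):

(defmethod frob ((s string)) :string)   ; fine: STRING names a class

(deftype small-int () '(integer 0 255))
;; (defmethod frob ((n small-int)) ...) ; rejected: DEFTYPE does not
;;                                      ; create a class to dispatch on
(defun frob-by-type (x)
  ;; TYPECASE, however, accepts any type specifier:
  (typecase x
    (small-int :small-int)
    (t :other)))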
Robert
* Robert Monfera <·······@fisec.com>
| Haven't you forgotten to attach a smiley at the end of your sentence?
no.
| If not, what have you meant by it? To me, a parameterized type would be
| something like "vector of integers" or "list of bank transaction objects"
| where you parameterize LIST or VECTOR with your choice of element type.
this entire issue is a joke on the explicit typing crowd. by refusing to
understand that lists and vectors and other generic collections shouldn't
_have_ to be specialized on the type, they _invent_ a stupid need that
smart people don't have.
the whole parameterized type nonsense is a testament to bad language
design. it is completely devoid of meaning.
incidentally, (make-array <dimensions> :element-type <type>) is your
parameterized type if you didn't like my COERCE example, which I use to
show these explicit typing doofuses who drop type information at runtime
what they _really_ wanted, but didn't know how to ask for. imagine a
simple function that accepts a type _name_ as argument and tries its best
to convert an object into that type. it's template hell in C++. the
names of its types are _gone_ at runtime, for starters.
if you want a list of bank transaction objects, just don't put anything
else on it. how hard can it be? I fail to see the need to construct a
bogus language construct just to make that _harder_. if you want a typed
container, define a class that has a type specifier and a sequence or
whatever you want to hold them, and provide general functions to add
things to it after having checked the type. the problem is that this
stuff is so seldom _needed_ in a language that remembers the types of its
objects apart from whatever variable is used to hold them.
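a minimal sketch of such a checked container (the class and function
names are mine, nothing standard):

(defclass checked-bag ()
  ((element-type :initarg :element-type :reader bag-element-type)
   (items :initform '() :accessor bag-items)))

(defun bag-add (bag item)
  ;; check the element against the stored type specifier at run time
  (unless (typep item (bag-element-type bag))
    (error "~S is not of type ~S" item (bag-element-type bag)))
  (push item (bag-items bag))
  bag)

;; (defvar *transactions*
;;   (make-instance 'checked-bag :element-type 'bank-transaction))
;; (bag-add *transactions* tx)  ; errors on anything but transactions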
| You can specialize methods on STRING, but not with something you create
| with DEFTYPE.
if you're talking about CLOS methods, you're being disingenuous. CLOS
was designed to dispatch on classes, not types. if you don't like that,
nothing whatsoever stops you from running down a TYPECASE or making a
more general mechanism. yes, you have to roll your own. that's true for
just about everything people want. "I do not want what I haven't got"
does not apply to programmers. incidentally, PPRINT-DISPATCH has a nice
type-dispatching system that does a lot more than CLOS dispatch does.
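for instance, you can dispatch on a compound type specifier that no
CLOS method could specialize on (a toy example; copying the table
keeps the change local):

(let ((*print-pprint-dispatch* (copy-pprint-dispatch)))
  (set-pprint-dispatch '(integer 1000 9999)
                       (lambda (stream n)
                         ;; print four-digit integers as, e.g., 1,234
                         (format stream "~D,~3,'0D"
                                 (floor n 1000) (mod n 1000))))
  (let ((*print-pretty* t))
    (prin1 1234)))  ; prints 1,234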
just because something is a good thing to do when you regularly shoot
yourself in the foot, doesn't mean it's a good thing to do if you can
avoid shooting yourself in the foot.
enough comparisons with C++. I get sick to my stomach thinking about the
talent wasted in inventing ways that braindamaged language can hurt less.
#:Erik
--
Y2K conversion simplified: Januark, Februark, March, April, Mak, June,
Julk, August, September, October, November, December.
In article <················@naggum.no>,
Erik Naggum <····@naggum.no> writes:
EN> the whole parameterized type nonsense is a testament to bad language
EN> design. it is completely devoid of meaning.
I beg to disagree.
Unless you really care for efficiency, the main reason to have types
in a language is that they allow you to reason about your code. They
allow you to express simple invariants of the form
x always holds an integer;
y always holds an object on which the generic function `foo' has an
applicable method.
Cardelli (?) coined the term `typeful programming' for this approach.
Whether type declarations are compulsory in the code, optional, or not
written at all, and whether they are statically or dynamically checked
is more or less irrelevant to this discussion.
In this view, being able to express notions such as `list of integers'
is a perfectly natural request. A number of people (basically the
dependent types crowd) are even working on calculi in which you have
the full power of the lambda-calculus at the level of types -- very
natural if you've been brought up on Lisp, and like to have full
access to the language at compile-time. You may then express types
such as `vector of int of length 42' or `cons is for any n:int a
function from vectors of length n to vectors of length n+1'. (I don't
think the latter can be expressed in Common Lisp, not even with
SATISFIES.)
(No, those calculi are not ready for general consumption by
programmers, although they've been remarkably successful in theorem
proving.)
J.
P.S. Of course, this has nothing to do with classes and generic
function dispatch.
* Erik Naggum <····@naggum.no> writes:
| the whole parameterized type nonsense is a testament to bad language
| design. it is completely devoid of meaning.
* Juliusz Chroboczek <···@dcs.ed.ac.uk>
| I beg to disagree.
um, what part of it do you disagree with?
in case it hasn't been clear, I have been trying to show that Common Lisp
_has_ what C++ calls parameterized types, and has had it since way before
C++ was conceived. the overall context here is "why reject Common Lisp?"
and one argument was: "it doesn't have parameterized types", where
"parameterized type" is a language-specific feature in C++, much hyped
because the language lacks everything that would make this a _natural_
thing in C++. the whole idea of adding parameterized types to a language
that doesn't have a useful way of representing type information at
run-time is just silly.
so, I'm not arguing against _types_. criminy. I'm not arguing against
the ability to reason about code. how could I be strongly in favor of type
inference and author of code analysis tools if I were? I'm not arguing
about the usefulness of type calculus. geez, my University education was
all about static typing and verification and all that stuff that has yet
to prove itself because it's just too hard on programmers to tell these
things all they need to know.
I'm arguing against the stupid disdain for dynamic typing and run-time
type information (in C++ terms) except as an inefficient last resort when
it's too hard to do it statically.
| Whether type declarations are compulsory in the code, optional, or not
| written at all, and whether they are statically or dynamically checked
| is more or less irrelevant to this discussion.
I think they are irrelevant only to your own discussion, which doesn't
address anything people here have actually argued against. my argument
is that you cannot both wish to retain type information _and_ drop it.
if your language can't speak about its types, any benefit of this type
business is in efficiency in the compiler. if you want to reason about
types, you just can't do it in C++, and it is entirely irrelevant to any
abstract meaning of "parameterized type" -- C++ does not instantiate it
in a useful way, while Common Lisp does.
| In this view, being able to express notions such as `list of integers' is
| a perfectly natural request.
there is a point at which you have to ask of any theory what it will
change for the better if people adopted it. theories that are very nice
and clean and powerful which change nothing in the ways people think or
act in practice are ineffectual in the extreme. I want theories that are
so powerful that people change their ways _completely_, because what
people do today is mostly stupid, especially in the language design area.
do we have the concept of a "list of integers" in Common Lisp? yes. is
it different from any other list, operationally and notationally? no.
did C++ have the concept of a "list of integers"? _no_. it then chose
to add a list implementation that _had_ to be specialized on the type.
of course, this is a pain. so naturally the non-solution "parameterized
types" had to be invented. this is not because of any lack of usefulness
of typeful languages, not because people do not want "list of integers",
but because C++ has no concept of a _type_hierarchy_ that would allow
"list of any-type" to be expressed usefully. all C++ can do is "list of
some-type", and since its types do not form a hierarchy, but are disjoint
(even the classes, there is no "universal superclass") so instantiation
of the type becomes a _necessity_. the concept of "parameterized types"
were born out of this necessity.
now, in a type calculus, you don't talk about types as any special kind
of parameter, it's your _favorite_ kind of parameter. you compute types
and of course it takes types as arguments. making up "parameterized
type" as a phrase means you _don't_ usually compute or specify types
abstractly. in other words, you need the terminology because it is
something people need to be aware that they can do, and there is no space
for it in their existing vocabulary.
| A number of people ... are even working on calculi in which you have the
| full power of the lambda-calculus at the level of types -- very natural
| if you've been brought up on Lisp, and like to have full access to the
| language at compile-time.
well, here's my biggest gripe with the explicit-typing crowd: they have a
notion of a dichotomy between run-time and compile-time, and it's a moral
dichotomy: "what's good at compile-time is bad at runtime, and what's
good at run-time is bad at compile-time". macros in Lisp aren't
different from any other function; they are just called under slightly
different conditions. the _same_ language is used. why did they need a
_new_ language to get a lambda-calculus for types? well, I think it's
because the language in question is so badly designed that they had to.
in all the literature I have read about types, and it's considerable,
this notion is hidden under a lot of consequences of holding it, but
never made explicit. I think the reason it is not explicit is that it is
genuinely stupid, and like so many other things in our cultures and
traditions that are stupid, smart people find smart ways around them, and
it's fundamentally risky to stand up and say "this is stupid" without
having a much better idea, because millions of people will flock to the
defense of the stupid old traditions without even considering the other
idea unless it's so intuitively superior that people go "ah, of course",
instead. so real progress happens only when some genius is willing to
ignore the stupid ways and do something that opens up entirely new paths.
in this case, inventing parameterized types has done nothing to help us
get out of the stupid ways. if people were somewhat less insistent on
obeying the stupid ways of their ancestors, maybe we also could have more
respect for their _smart_ ways. but I digress. (and don't get me
started on foreign aid to countries where a family's prestige is
determined by how many male children they have.)
| You may then express types such as `vector of int of length 42'
(vector integer 42)
| or `cons is for any n:int a function from vectors of length n to vectors
| of length n+1'. (I don't think the latter can be expressed in Common
| Lisp, not even with SATISFIES.)
I assume some function like ADJUST-ARRAY was intended, not CONS, since
CONS doesn't do that and it's good that it cannot be expressed in Common
Lisp. (why do people who want to make theoretical points so often miss
the details in ways that show that they have genuine disdain for them?)
recently, an implementation of core concepts in programming by contract
was posted here, in Common Lisp. you can certainly express the above in
such a system. (I know the argument to follow: "but does it help the
compiler produce better code?" -- more on that just below.)
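for instance, a minimal sketch of the n -> n+1 contract as a run-time
postcondition. (EXTEND-VECTOR is a made-up example, not the code that
was posted.)
(defun extend-vector (vector new-element)
  ;; contract: a vector of length n yields a vector of length n+1
  (let* ((n (length vector))
         (result (concatenate 'vector vector (vector new-element))))
    (assert (= (length result) (1+ n)))
    result))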
| (No, those calculi are not ready for general consumption by programmers,
| although they've been remarkably successful in theorem proving.)
I think you underestimate practitioners, but hey, that's the norm when
people invent theories that have _zero_ effect on the way they work.
here's my view on this type business: it makes no difference whatsoever
whether you compute types at run-time or at compile-time, you can reason
about the code just the same. (people who scream and shout that they
can't, have yet to discover the simple concept of a type hierarchy with
embracive, abstract types like NUMBER instead of "integer of 16 bits"
being disjoint from "integer of 32 bits".) it makes no difference to
anything except performance whether the system has to figure out the type
of an object at run-time or at compile-time, and since we have a type
hierarchy for everything, the way to move something into the compile-time
arena is either by type inference or hints from the programmer. and if
the programmer wants to ensure that he does not see type mismatches, he
can either write code for it or declarations for it, which are identical
in conceptual terms. (in particular, if you check for a type, you know
that whatever was checked has that type afterwards, just like you would
if you had a declaration -- so why do the strong-typing crowd get so anal
about this?)
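in code, the check and the declaration convey the same information.
(the DOUBLE-* functions are made-up examples.)
(defun double-checked (x)
  (check-type x integer)       ; run-time check; signals a correctable error
  (* 2 x))
(defun double-declared (x)
  (declare (type integer x))   ; compile-time promise the compiler may trust
  (* 2 x))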
despite all the hype and noise about parameterized types in C++, you
still can't use it in any useful way at run-time. all you can hope for is
that the compiler did all the useful things by the time the code was cast
in stone, and hopefully didn't make any mistakes (which the C++ template
cruft has been suffering from in all implementations for years). in my
view, this restriction on type calculus is phenomenally stupid, and
anyone who works with strong typing after accepting the dichotomy between
run-time and compile-time is unlikely to contribute anything of serious
value, simply because the mind-set is too narrow. in particular, if it
is necessary to study how something behaves in a run-time situation in
order to come up with a breakthrough idea, the person who believes in
compile-time will have to invent his own _language_ and write a compiler
for it, which is just so much work that no sane person would want to do
it just to prove a simple point, and if someone did it anyway, you would
get a new language with a number of features that couldn't be used
anywhere else, and nothing would come of it until C++ incorporated a
braindamaged version of it so people could argue against Common Lisp not
adopting "modern trends in object-orientedness" or whatever their
argument was.
in brief, the strong typing community are solving the wrong problems the
wrong ways. the results of their work increasingly depend on accepting
that certain operations are _bad_ at run-time, and they provide the world
with their results in the shape of immature, specialized languages that
cater to their own aesthetics and very few other people's actual needs.
strong-typing language researchers prove their points with languages.
Lisp programmers and theoreticians alike prove them with macros.
so, to repeat myself: I'm not arguing against type theories, not arguing
against the usefulness of various forms of type inference, not arguing
against reasonable requests, not arguing against the desire to reason
about code, not arguing against tools to ensure correctness and all that
good stuff. I am arguing against the belief that types belong in the
compiler and not in the running system, and I am arguing that the
specific idea of a "parameterized type" is moot in Common Lisp, both
because we have typed objects and because we have a type hierarchy for
_all_ types with specialization on disjoint types. I am arguing that
parameterized types are _necessary_ when you accept disjoint types and
believe in the compiler as your only savior.
I hope this avoids any more arguments as if I were deriding or did not
understand type theory. I have a specific gripe with one of its core
premises, not the random, unlearned hostility to theories as such that
theoreticians who have a random, inexperienced hostility to practice so
frequently believe their critics suffer from.
#:Erik
> this entire issue is a joke on the explicit typing crowd. by refusing to
> understand that lists and vectors and other generic collections shouldn't
> _have_ to be specialized on the type, they _invent_ a stupid need that
> smart people don't have.
>
Can't quite resist this one, coming from C++ land as I do. Don't
worry, I'm not going to try and defend C++. I am quite enjoying
working in Lisp after years of working in C++.
That said, strongly typed languages aren't a bad idea by themselves.
There are quite a few kinds of applications where, however unlikely
you might think it is, you don't want to find out at runtime that
there is a type mismatch. Also, there is some performance improvement
when the type-checking can be done at compile time. Of course, this
latter benefit is available in Lisp, so there is no dispute there.
I think the problem with strongly typed languages is that they are
really bad for iterative development, which is a very fast way to
work. It is really nice to work in Lisp where I don't have to have the
design completely correct before I can start development. There is a
certain point, however, as the design stabilizes, where the compile
time type-checking is useful to guarantee certain exceptions will not
occur. (This is the same idea behind exception specifications.)
It is possible to implement these things rather easily in Lisp. It
would be easy enough to check method or function invocations against
type specializers or declarations before compiling. It is also
straightforward to implement collections and iterators. In fact, I
think for mission critical systems, this is probably done routinely.
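For example, here is a minimal sketch of catching a literal-argument
type mismatch at macro-expansion time. (CHECKED-CALL is hypothetical,
not an existing library.)
(defmacro checked-call (fn expected-type arg)
  ;; if the argument is a compile-time constant of the wrong type,
  ;; fail when the form is expanded rather than when it is run
  (when (and (constantp arg)
             (not (typep (eval arg) expected-type)))
    (error "~S called with ~S, which is not of type ~S"
           fn arg expected-type))
  `(,fn ,arg))
;; (checked-call isqrt (integer 0) 16)    ; expands normally
;; (checked-call isqrt (integer 0) "16")  ; error at expansion time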
What is unfortunate, I suppose, is that none of this code has found
its way into the public domain.
Or has it? =)
-D
* ······@tazent.com (Daniel J. Yoder)
| Can't quite resist this one, coming from C++ land as I do. Don't worry,
| I'm not going to try and defend C++. I am quite enjoying working in Lisp
| after years of working in C++.
well, it was six months of C++ that made me determined to find something
else to do with my life. I was a free-lance journalist for a while since I
just couldn't stomach programming anymore. I know where you come from.
| That said, strongly typed languages aren't a bad idea by themselves.
I try to separate "strongly typed" from "explicitly typed". I have no
qualms whatsoever with strong typing through type inference, à la ML;
it's the stupid need to write these things out explicitly that I find so
nauseating because the only thing the compiler does with it is bark at me
for not agreeing with it. how obnoxious. if it already knows what it
should have been, it should shut the <beep> up, and just do its job.
| There are quite a few kinds of applications where, however unlikely you
| might think it is, you don't want to find out at runtime that there is a
| type mismatch.
my current application is mission critical and such errors would be very
annoying, but of all the silly mistakes I have made, a type mismatch has
occurred only once. (ok, so there have been 49 programming errors caught
by the debugger in 6 months of operation, dealing with such things as
calling VECTOR-PUSH-EXTEND on a vector that wasn't actually adjustable --
I'm not sure how much extra stuff I'd have to write to make that error
detectable at compile-time, considering that the array is created by a
wrapper that adjusts the argument list a little and applies MAKE-ARRAY to
it, and the wrapper is passed its arguments from a configuration file...)
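in its simplest form, the mistake looks like this:
(let ((v (make-array 2 :fill-pointer 0)))  ; oops: no :adjustable t
  (vector-push-extend 1 v)
  (vector-push-extend 2 v)
  (vector-push-extend 3 v))  ; error here if V is not actually adjustable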
| It is really nice to work in Lisp where I don't have to have the
| design completely correct before I can start development.
oh, yeah. I used to feel as if I had to have The Perfect Class Hierarchy
before I could sit down with C++; the cost of redesign was just too high
to stomach. there's hardly a week that I don't rely on CLOS updating the
instances after a class has been changed in a running system, these days.
| Or has it? =)
not that I know of. I would like to have code analysis tools that reported
on calling patterns in my code, and with deep nesting of functions it is
sometimes useful to see where a variable is actually used, despite being
passed in to a very high-level function. and lots of other stuff that
needs intimate coupling with the compiler.
#:Erik
······@lavielle.com (Rainer Joswig) wrote:
>In article <··············@soggy.deldotd.com>, Brian Denheyer <······@deldotd.com> wrote:
>> When it comes to OO sorts of things, I CAN see that CLOS introduces a
>> nasty amount of overhead as opposed to C++ where you just toss
>> function pointers around.
>>
>> Any reasonable, insightful, comments ?
>
>Garnet did not really use CLOS. They had their own object system
>(prototype based, IIRC). I checked it out in CMU CL years ago -
>it was slow also because CMU CL has/had a weak GC implementation.
hmm, garnet used kr because of its simplicity and better performance
compared to CLOS in those days.
I personally really enjoy kr and favor it over CLOS, but unfortunately
kr's lifetime is over. it reminds me very much of how autocad handles
objects; everything's event-based.
"KR: Constraint-Based Knowledge Representation
Dario Giuse
October 1993
Abstract
KR is a very efficient knowledge representation language implemented in
Common Lisp. It provides powerful frame-based knowledge representation
with user-defined inheritance and relations, and an integrated
object-oriented programming system. In addition, the system supports a
constraint maintenance mechanism which allows any value to be computed
from a combination of other values. KR is simple and compact and does
not include some of the more complex functionality often found in other
knowledge representation systems. Because of its simplicity, however, it
is highly optimized and offers good performance. These qualities make it
suitable for many applications that require a mixture of good
performance and flexible knowledge representation."
--
Reini
······@lavielle.com (Rainer Joswig) writes:
> The LOOM project, btw., makes a similar statement about
> their new "language" Stella, which can be compiled to
> Lisp or C++. The C++ code runs ten times faster, they claim:
>
> http://www.isi.edu/isd/LOOM/Stella/index.html
Since that was written, the Lisp code has been improved, but we still
see a performance difference of roughly 3:1 in favor of C++. Our
testing indicates that this difference is largely caused by the extra
overhead of CLOS method dispatch versus C++ method dispatch (and
especially slot value accesses). In that sense there isn't too much
that can be done about it, since a large part of the method dispatch
overhead in CLOS is an inherent part of the need to be able to
dynamically redefine interfaces.
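A rough way to see the effect on any given implementation is to time a
DEFSTRUCT accessor (a direct access) against a CLOS accessor (a
generic-function dispatch). This is an illustrative micro-benchmark,
not our actual test suite:
(defstruct spoint (x 0 :type fixnum))
(defclass cpoint () ((x :initform 0 :accessor cpoint-x)))
(defun bench-slot-access ()
  (let ((s (make-spoint))
        (c (make-instance 'cpoint))
        (acc 0))
    (time (dotimes (i 1000000) (incf acc (spoint-x s))))  ; direct access
    (time (dotimes (i 1000000) (incf acc (cpoint-x c))))  ; CLOS dispatch
    acc))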
--
Thomas A. Russ, USC/Information Sciences Institute ···@isi.edu
In article <···············@sevak.isi.edu>,
Thomas A. Russ <···@sevak.isi.edu> wrote:
>
>Since that was written, the Lisp code has been improved, but we still
>see a performance difference of roughly 3:1 in favor of C++. Our
>testing indicates that this difference is largely caused by the extra
>overhead of CLOS method dispatch versus C++ method dispatch (and
>especially slot value accesses). In that sense there isn't too much
>that can be done about it, since a large part of the method dispatch
>overhead in CLOS is an inherent part of the need to be able to
>dynamically redefine interfaces.
>
Is that really the explanation for the perceived speed difference?
I thought Stella->Lisp used some low-level vector-based representation
instead of full-fledged CLOS.
Bernhard (who'd still like to know the real reason)
--
--------------------------------------------------------------------------
Bernhard Pfahringer
Austrian Research Institute for http://www.ai.univie.ac.at/~bernhard/
Artificial Intelligence ········@ai.univie.ac.at
···@sevak.isi.edu (Thomas A. Russ) writes:
> Since that was written, the Lisp code has been improved, but we still
> see a performance difference of roughly 3:1 in favor of C++. Our
> testing indicates that this difference is largely caused by the extra
> overhead of CLOS method dispatch versus C++ method dispatch (and
> especially slot value accesses). In that sense there isn't too much
> that can be done about it, since a large part of the method dispatch
> overhead in CLOS is an inherent part of the need to be able to
> dynamically redefine interfaces.
Do you have profile output or whatever that shows this? I'm not
doubting, but I'd be interested to see it, as I'm kind of interested
in CLOS performance. What implementation was it for? (performance
seems to vary a huge amount, from awful for PCL to really good for
some commercial ones, especially for slot access, which some people
seem to be able to optimise into the ground.)
--tim
In article <···············@tfeb.org>, Tim Bradshaw <···@tfeb.org> wrote:
> ···@sevak.isi.edu (Thomas A. Russ) writes:
>
> > Since that was written, the Lisp code has been improved, but we still
> > see a performance difference of roughly 3:1 in favor of C++. Our
> > testing indicates that this difference is largely caused by the extra
> > overhead of CLOS method dispatch versus C++ method dispatch (and
> > especially slot value accesses). In that sense there isn't too much
> > that can be done about it, since a large part of the method dispatch
> > overhead in CLOS is an inherent part of the need to be able to
> > dynamically redefine interfaces.
>
> Do you have profile output or whatever that shows this?
MCL, for example, lets you get better slot-access speed
by restricting classes to a kind of single inheritance
(primary classes). Other vendors may also offer certain speed
improvements...
--
http://www.lavielle.com/~joswig
Brian Denheyer <······@deldotd.com> writes:
> * Speed: We spend 5 years and lots of effort optimizing our Lisp
> code, but it was still pretty slow on "conventional" machines.
> The initial version of the C++ version, with similar
> functionality, appears to be about THREE TIMES FASTER than the
> current Lisp version without any tuning at all.
First of all, this has already been discussed on the list and it seems
to be the case that their lisp code is not all that optimized. I've
browsed it a bit and there's certainly a lack of declarations etc. If
you put it through CMUCL it gives a large number of efficiency notes
that could be worth checking out.
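The sort of thing the notes point at is undeclared numeric code. A
minimal sketch (not actual Garnet code) of the declarations that make
such notes go away:
(defun sum-vector (v)
  (declare (type (simple-array double-float (*)) v)
           (optimize (speed 3)))
  (let ((sum 0d0))
    (declare (type double-float sum))
    (dotimes (i (length v) sum)
      (incf sum (aref v i)))))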
Secondly, perhaps they were also evaluating on obsolete machines. I'm
running it with ACL/Linux on a 486 with 16MB and it is fast enough for
what I'm doing with it so any more modern machine shouldn't have too
many problems.
--
Lieven Marchand <···@bewoner.dma.be>
------------------------------------------------------------------------------
Few people have a talent for constructive laziness. -- Lazarus Long