From: Satan - The Evil 1
Subject: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <5puscn$e0h$1@cdn-news.telecom.com.au>
After reading the "Lisp in the Real World" thread I have
the following opinion :) to raise and would be interested
to hear some other "non-religious" opinions regarding
Lisp compared to languages such as C/C++ in the area of
application speed and efficiency.

*Please don't take this as a flame or a troll*

Lisp seems OK for prototyping and implementing higher level
business rules and for general scripting, but it simply
will never be able to compete with languages such as
C and C++ (and to a lesser extent languages such as VB 
and Delphi under Windows). 

Have you noticed...
  You don't see fast numerical libraries written in Lisp.
  You don't see scientific libraries written in Lisp.
  You don't see commercial games written in Lisp.
  You don't see application suites written in Lisp.

In fact, you don't see any mainstream commercial applications
written in Lisp for the basic reason that any
competitor will simply go out and write their competing
application in C/C++ and make a faster, more responsive
application that makes more efficient use of machine
resources.  Why do you think that, despite the massive
amount of hype, no mainstream apps have been written
in Java? Because it is too slow for the real world when
compared to equivalent code written in C or C++. 

I would say that C++ has such a bad reputation amongst Lisp
programmers because it takes several years to become
a very good and efficient C++ programmer whilst you can
quickly become an efficient Lisp programmer.  The massive
number of *bad* C++ programmers certainly doesn't help
its reputation.  An experienced C++ programmer can write
truly elegant, safe, and efficient code that leaves
the equivalent Lisp code in the dust.

Of course, the ease with which you can write good code in
Lisp is a major point in its favour.

But to sum up; we always seem to push hardware to its 
limits and because of this languages such as C and C++
will maintain their edge over languages such as Lisp.
I suppose one day we may have computers with more 
processing power than we could ever make use of 
(e.g. quantum computers) and then it will be commercially
feasible to throw away the low-level languages.  But I
imagine by that time Artificial Intelligences will have
done away with programmers altogether.

OK now... go ahead and rip my argument to shreds :)

Cheers,
- PCM

From: Shaun Flisakowski
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <5pv5qh$n94@spool.cs.wisc.edu>
In article <············@cdn-news.telecom.com.au>,
Satan - The Evil 1 <·····@vus002.telecom.com.au> wrote:
:After reading the "Lisp in the Real World" thread I have
:the following opinion :) to raise and would be interested
:to hear some other "non-religious" opinions regarding
:Lisp compared to languages such as C/C++ in the area of
:application speed and efficiency.
:
:Lisp seems OK for prototyping and implementing higher level
:business rules and for general scripting, but it simply
:will never be able to compete with languages such as
:C and C++ (and to a lesser extent languages such as VB 
:and Delphi under Windows). 
:
:Have you noticed...
:  You don't see fast numerical libraries written in Lisp.
:  You don't see scientific libraries written in Lisp.
:  You don't see commercial games written in Lisp.
:  You don't see application suites written in Lisp.
:
:In fact, you don't see any mainstream commercial applications
:written in Lisp for the basic reason that any
:competitor will simply go out and write their competing
:application in C/C++ and make a faster, more responsive
:application that makes more efficient use of machine
:resources.  Why do you think that, despite the massive
:amount of hype, no mainstream apps have been written
:in Java? Because it is too slow for the real world when
:compared to equivalent code written in C or C++. 


  I think that's more because most programmers don't
  _like_ Lisp very much.  Notice all the crap being made in/for
  VB - and that's _interpreted_, at least Lisp can be compiled.

  Look at the popularity of perl - a garbage collected interpreted
  language.  The upcoming popularity of Java, again garbage collected
  and interpreted.

  If programmers wanted to use Lisp, and there were an abundance of
  Lisp programmers, companies would make programs written in Lisp.

:But to sum up; we always seem to push hardware to its 
:limits and because of this languages such as C and C++
:will maintain their edge over languages such as Lisp.
:I suppose one day we may have computers with more 
:processing power than we could ever make use of 
:(e.g. quantum computers) and then it will be commercially
:feasible to throw away the low-level languages.  But I
:imagine by that time Artificial Intelligences will have
:done away with programmers altogether.

  Whoa.  I suspect quantum computers will come long before AI
  makes any significant headway; unless you mean that the
  indeterminism in quantum computers might allow something closer
  to an actual brain, in which case I dunno.

  Again we see the popularity of numerous "inefficient" languages,
  Java, Tk/Tcl, and Perl.

:OK now... go ahead and rip my argument to shreds :)

  I'm not a regular Lisp programmer, btw; I use mostly C/C++ and Delphi.

  Personally, I prefer Scheme to Lisp; it strikes me as a cleaner,
  more orthogonal language.

-- 
  Shaun        ········@cs.wisc.edu
  http://www.kagi.com/flisakow  - Shareware Windows Games, and Unix Freeware.
 "In your heart you know its flat."
                           -Flat Earth Society
From: Pierpaolo Bernardi
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <5q09q4$r44$1@pania.unipi.it>
:   I'm not a regular Lisp programmer, btw; I use mostly C/C++ and Delphi.

:   Personally, I prefer Scheme to Lisp; it strikes me as a cleaner,
:   more orthogonal language.

Do you mean Scheme is not Lisp?  This is weird.


: -- 
:   Shaun        ········@cs.wisc.edu
:   http://www.kagi.com/flisakow  - Shareware Windows Games, and Unix Freeware.
:  "In your heart you know its flat."
:                            -Flat Earth Society
From: Tyson Jensen
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <01bc8cb7$3cf0a940$c91dadcf@tjensen>
> Do you mean Scheme is not Lisp?  This is weird.

Scheme is a subset of lisp, like C is a subset of C++.  There are C
programmers out there who prefer C to C++, citing the simplicity and
standardization present in C.  Standards exist in C++, but they aren't
uniformly implemented by compilers such as Microsoft VC++.

-- 
Tyson Jensen
·······@mosbych1.com

 
From: Pierpaolo Bernardi
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <5q2lv2$u2e$1@serra.unipi.it>
Tyson Jensen (·······@mosbych1.com) wrote:
: > Do you mean Scheme is not Lisp?  This is weird.

: Scheme is a subset of lisp, like C is a subset of C++.  

This is false.

: There are C programmers out there who prefer C to C++,

Duh.  If they are C programmers, this is comprehensible.

Sorry, I don't care about C and C++.

The point is that Scheme _is_ lisp; you are confused about what lisp is.

Scheme is lisp, so is Common Lisp, so is Lisp 1.5, so is T, so is NIL,
so is Maclisp, so is Interlisp, so is Franz Lisp, so is Mulisp, so is
Multilisp, so is Standard Lisp, so is Portable Standard Lisp, so is
Oaklisp, so is elisp, so is xlisp, so is ISO Lisp, so is Eulisp, so is
*lisp, so is 1100 Lisp.
Sorry for the ones that I forgot, hope you got the idea anyway.

: -- 
: Tyson Jensen
: ·······@mosbych1.com
From: David Thornley
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <5q2pf4$ge7$1@darla.visi.com>
In article <············@serra.unipi.it>,
Pierpaolo Bernardi <········@cli.di.unipi.it> wrote:
>Tyson Jensen (·······@mosbych1.com) wrote:
>: > Do you mean Scheme is not Lisp?  This is weird.
>
>: Scheme is a subset of lisp, like C is a subset of C++.  
>
>This is false.
>
No, it is true.  Scheme is not a subset of Lisp, but one of many
implementations.  C is not a subset of C++, but the two languages
have a large common subset, much closer to the whole of C than the
whole of C++.  There are reasons why it is bad to write solely in
the subset, and you can get a good explanation out of comp.lang.c.

Therefore, Scheme is a subset of Lisp like C is a subset of C++
like fish are a subset of Amazonian life forms - all are false.

Having said that, I'd like to get back to my favorite C/C++/whatever
vs. Lisp rant.

Assume two programmers, both competent, one in C or C++ or something
similar, one in Lisp.

Both write a program to solve a given task.  At the end of a certain
time period, they have initial versions.  The C version is efficient,
but it doesn't work.  The Lisp version works, but is inefficient.
Now, it's obvious to everybody that the program has to work (well,
everybody not involved with Microsoft), but it isn't obvious that
it has to work efficiently.  There is a strong temptation to ship the
Lisp version and report to the customers that the C program is coming
along nicely.

Suppose that both programmers continue.  Both will end up with a
working, efficient program.  The Lisp programmer is likely to finish
sooner, and the Lisp version will be more maintainable (to good
Lisp programmers) than the C version (to good C programmers).
The Lisp version is likely to work better, and will be less susceptible
to certain classes of bugs (like slow memory leaks).

There's also the cultural thing.  Most programming books I've seen
make reference to machine efficiency (sometimes mistakenly).  There
are exceptions, notably Kernighan and Plauger's (is that correct?)
superb _Elements_of_Programming_Style_.  Most Lisp books mention
efficiency only briefly.  Until Norvig's excellent book came out,
I didn't have one that helped me write efficient Common Lisp programs.
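
(To give a rough idea of what such a book teaches -- this is only an
illustrative sketch, and the function and its name are made up, not
taken from Norvig -- declaration-heavy code like the following is what
lets a Common Lisp compiler generate numeric code in the same league
as C:

  ;; Illustrative only: full optimization plus declared argument and
  ;; accumulator types, so the compiler can open-code the float math.
  (defun dot-product (xs ys)
    (declare (optimize (speed 3) (safety 0))
             (type (simple-array double-float (*)) xs ys))
    (let ((sum 0.0d0))
      (declare (type double-float sum))
      (dotimes (i (length xs) sum)
        (incf sum (* (aref xs i) (aref ys i))))))

The efficiency is there; the books just rarely showed you how to ask
for it.)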

Another thing that gave rise to the inefficient Lisp myth is that
Lisp runtimes are usually considerably larger than older C
runtimes, and therefore there was a strict limit to how small a
Lisp executable could be, as opposed to a C executable.  This is
changing, as the standard applications are getting bigger and the
old, lean, C runtimes are replaced by the more modern and fatter
C++ runtimes, while Lisp runtimes aren't getting much bigger.

>: There are C programmers out there who prefer C to C++,
>
>Duh.  If they are C programmers, this is comprehensible.
>
Matter of opinion.  The thing to remember about C++ is that you
don't have to use all the features.  Operator overloading can get
you into deep weeds if you use it inappropriately, templates can
bloat your executables, RTTI can be used to make some really ugly
programs, and so on.  I prefer C++ to C because there are some
things you can do in C++ that you can't do anywhere near as well
in C, and I am old and scarred enough to use the neat powerful stuff
only when appropriate.

>The point is that Scheme _is_ lisp,  you are confused about what lisp is.
>
>Scheme is lisp, so is Common Lisp, so is Lisp 1.5, so is T, so is NIL,
>so is Maclisp, so is Interlisp, so is Franz Lisp, so is Mulisp, so is
>Multilisp, so is Standard Lisp, so is Portable Standard Lisp, so is
>Oaklisp, so is elisp, so is xlisp, so is ISO Lisp, so is Eulisp, so is
>*lisp, so is 1100 Lisp.
>Sorry for the ones that I forgot, hope you got the idea anyway.
>
Um, Pearl Lisp, the 8080 and 6800 Lisps in the old Doctor Dobb's
Journals I threw out last year, that CP/M Lisp I used to use and
can't remember the name of any more....  Oh well, I'd rather use
any of them than COBOL or Pascal (there are some I wouldn't prefer
over C, but that's another story).

David Thornley
From: Robert Monfera
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <33C5B958.70D1@interport.net.removethisword>
David Thornley wrote:

[snip]

> 
> Another thing that gave rise to the inefficient Lisp myth is that
> Lisp runtimes are usually considerably larger than older C
> runtimes, and therefore there was a strict limit to how small a
> Lisp executable could be, as opposed to a C executable.  This is
> changing, as the standard applications are getting bigger and the
> old, lean, C runtimes are replaced by the more modern and fatter
> C++ runtimes, while Lisp runtimes aren't getting much bigger.

I second this. SAP has been the world's most successful standard 
application for half a decade, with a footprint of 20GB storage and 
192MB - 2GB RAM (it just barely runs with 192MB, average is
256MB-1.5GB).
Even if you could write the same functionality in Lisp with a fraction
of this storage footprint, the RAM requirements would not really change.
A few megs of Lisp (running or development) environment is just not
significant at all.

Just my 2 Fille'r.

Robert
From: J.D. Jordan
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <33C4D182.89A4AA92@erols.com>
C a subset of C++??? C came first!

Tyson Jensen wrote:

> > Do you mean Scheme is not Lisp?  This is weird.
>
> Scheme is a subset of lisp, like C is a subset of C++.  There are C
> programmers out there who prefer C to C++, citing the simplicity and
> standardization present in C.  Standards exist in C++, but they aren't
> uniformly implemented by compilers such as Microsoft VC++.
>
> --
> Tyson Jensen
> ·······@mosbych1.com
>  

  
From: Dennis Weldy
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <fBsGrj5j8GA.188@news2.ingr.com>
It doesn't really matter which came first with regard to whether something
is a subset. Whether A is a subset of B depends on whether A intersect B =
A. If it does, then A is a subset of B. It doesn't matter whether you had B
first or not :-).

Strictly speaking though, there are enough differences between C and C++ to
make life interesting. :-)
So, realistically, there exists a set S = C intersect C++. S contains the
common language features and syntax. Note that S != C. 

Dennis

J.D. Jordan wrote in article <·················@erols.com>...

>C a subset of C++??? C came first!
>
>Tyson Jensen wrote:
>
>> > Do you mean Scheme is not Lisp?  This is weird.
>>
>> Scheme is a subset of lisp, like C is a subset of C++.  There are C
>> programmers out there who prefer C to C++, citing the simplicity and
>> standardization present in C.  Standards exist in C++, but they aren't
>> uniformly implemented by compilers such as Microsoft VC++.
>>
>> --
>> Tyson Jensen
>> ·······@mosbych1.com
>>  
>
>  
>
>.
> 
From: Martin Rodgers
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <MPG.e330392a4f045409898cc@news.demon.co.uk>
With a mighty <·················@erols.com>,
········@erols.com uttered these wise words...

> C a subset of C++??? C came first!

And Scheme existed before Common Lisp. Not that that will stop anyone from 
calling Scheme a subset of CL. Perhaps it would be better to say that 
C++ is a superset of C, and that CL is a larger language than Scheme?

I vaguely remember a language, I think called something like Comal, 
that was a superset of Basic. At least, it looked like it at the time, as 
every Basic available on micros was pathetically small, and most of 
them didn't even have WHILE/WEND.

For what it's worth, every Pascal compiler that I've used has been a 
superset of ISO Pascal. I've even witnessed a Pascal programmer 
referring to Modula-2 as Pascal. Does that make M-2 a superset of 
Pascal, or was he just wrong?

As he was arguing with some C programmers at the time, perhaps he 
was excused. I found it very hard to follow the debate, as neither 
side was talking about the same languages. So it could've been 
C/C++/K&R vs ISO Pascal/Turbo Pascal/M-2, respectively. No wonder I 
was confused!
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
            Please note: my email address is gubbish
                 Will write Lisp code for food
From: John Nagle
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <nagleED9qM2.46t@netcom.com>
(Martin Rodgers) writes:
>········@erols.com 

>> C a subset of C++??? C came first!

>And Scheme existed before Common Lisp. Not that will stop anyone from 
>calling Scheme a subset of CL. Perhaps it would be better to say that 
>C++ is a superset of C, and that CL is a larger language than Scheme?

    No, Scheme came after Common LISP.  Scheme was a reaction to Common
LISP by a group of people at MIT who wanted a simple, clean LISP.
Common LISP had too much junk in it at the insistence of the Symbolics
people, who wanted to justify their custom hardware.  The original
Scheme paper is a joy to read; in a few tightly-written pages it
defines the whole language.

>For what it's worth, every Pascal compiler that I've used has been a 
>superset of ISO Pascal. I've even witnessed a Pascal programmer 
>referring to Modula-2 as Pascal. Does that make M-2 a superset of 
>Pascal, or was he just wrong?

     Modula 2 is considered to belong to the Pascal/Modula/Ada family
of languages, but it is not a superset of Pascal.  It isn't even a
superset of Modula 1.  Pascal, Modula 1, and Modula 2 were designed
by Wirth; Modula 3 was designed at DEC, and Ada was designed through
a competition between four proposals.

					John Nagle
From: Martin Rodgers
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <MPG.e33514b889d00ff9898d0@news.demon.co.uk>
With a mighty <···············@netcom.com>,
·····@netcom.com uttered these wise words...

>     No, Scheme came after Common LISP.  Scheme was a reaction to Common
> LISP by a group of people at MIT who wanted a simple, clean LISP.
> Common LISP had too much junk in it at the insistence of the Symbolics
> people, who wanted to justify their custom hardware.  The original
> Scheme paper is a joy to read; in a few tightly-written pages it
> defines the whole language.

Hmm. I seem to recall finding references to Scheme: An interpreter for 
the extended lambda calculus, Memo 349, MIT AI Laboratory, 1975. Does 
Common Lisp predate this memo?

>      Modula 2 is considered to belong to the Pascal/Modula/Ada family
> of languages, but it is not a superset of Pascal.  It isn't even a
> superset of Modula 1.  Pascal, Modula 1, and Modula 2 were designed
> by Wirth; Modula 3 was designed at DEC, and Ada was designed through
> a competition between four proposals.

Exactly. That's why I mentioned it. IMHO a comparison between Scheme 
and Common Lisp that declares that one is a subset of the other is 
not unlike declaring that Pascal is a subset of Modula-2. I prefer to 
say only that they're different languages, with strong, possibly 
superficial, similarities. I feel that it's more honest to say that 
the real relationship between them is a historical one.

I'm happy to leave the exact nature of that relationship to the 
historians, language lawyers, and other pedants. ;) I'm a programmer. 
If I can use a language to write code that performs tasks useful to 
me, then that's good enough for me.

Language wars are for those who either have too much time to spare, or 
have too much to gain by spreading hostile memes. If there's an idea 
used in some language not currently exploited by the language that I'm 
using, there are a number of options. The one that I like best is to 
take that idea and add it (perhaps as an option) to my chosen 
language. This is how language speciation may occur, but it's not 
always necessary to create a new language in order to accomplish this, 
especially if your language is flexible enough to extend itself.

This may be why some people claim that Lisp semantics can include the 
semantics of any other language. Symbolic expressions may express 
anything we wish, making Lisp semantics (or meta semantics?) a 
superset of everything else that we can imagine. In order to be of 
practical use, we need only implement those semantics. This is true 
for all practical languages anyway, hence the superset argument.

So, perhaps there's no point in asking which Lisp dialect is a superset 
of another? The only real question is how much effort it would take to 
add the semantics of one dialect (or language) to another, and what 
would be the practical value of doing so? I expect the answer(s) to 
vary according to the needs and abilities of each programmer.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
            Please note: my email address is gubbish
                 Will write Lisp code for food
From: William D Clinger
Subject: the relationship between Scheme and Common Lisp
Date: 
Message-ID: <33CF92F2.7843@ccs.neu.edu>
John Nagle <·····@netcom.com> wrote:

> No, Scheme came after Common LISP.  Scheme was a reaction to Common
> LISP by a group of people at MIT who wanted a simple, clean LISP.

This is not true.  Scheme was invented in 1975 by Gerry Sussman
and Guy L Steele Jr; see MIT AI Memo 349.  Common Lisp did not
begin until after an ARPA "Lisp Community Meeting" in April 1981;
see the article by Steele and Gabriel in the History of Programming
Languages Conference (HOPL-II), SIGPLAN Notices 28(3), March 1993.

The lexical scoping of local variables in Common Lisp came from
Scheme, and Common Lisp had some influence on Scheme's later
evolution (e.g. Scheme's generic arithmetic), but by and large
these two languages evolved separately.
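
As a small illustration of the first point (my example, not from the
HOPL paper): lexical scoping is what makes a closure like the one
below behave the same way in Common Lisp as the equivalent
define/lambda would in Scheme.

  ;; Illustrative sketch: N is captured lexically by the returned
  ;; closure, so each counter keeps its own private state.
  (defun make-counter ()
    (let ((n 0))
      (lambda () (incf n))))

  ;; (let ((c (make-counter)))
  ;;   (list (funcall c) (funcall c) (funcall c)))   => (1 2 3)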

It is true that C is almost a subset of C++, but it is not true
that Scheme is a subset of Common Lisp.  There are significant
differences between Scheme and Common Lisp concerning scope rules,
tail recursion, generic arithmetic, data types, exceptions,
continuations, macros, and the Common Lisp Object System.
The relationship between IEEE/ANSI Scheme and ANSI Common Lisp
is more like the relationship between Modula-2 and Ada83 than
the relationship between C and C++.

William D Clinger
From: Shaun Flisakowski
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <5q35sa$s81@spool.cs.wisc.edu>
In article <············@pania.unipi.it>,
Pierpaolo Bernardi <········@cli.di.unipi.it> wrote:
>:   I'm not a regular Lisp programmer, btw; I use mostly C/C++ and Delphi.
>
>:   Personally, I prefer Scheme to Lisp; it strikes me as a cleaner,
>:   more orthogonal language.
>
>Do you mean Scheme is not Lisp?  This is weird.

  When I said Lisp, I was referring to Common Lisp, which I thought had
  been standardized.

  I know that Scheme has been standardized, which seems rather
  unusual for a mere "dialect".

-- 
  Shaun        ········@cs.wisc.edu
  http://www.kagi.com/flisakow  - Shareware Windows Games, and Unix Freeware.
 "In your heart you know its flat."
                           -Flat Earth Society
From: Martin Rodgers
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <MPG.e2ff2e1ca422ead9898c2@news.demon.co.uk>
With a mighty <··········@spool.cs.wisc.edu>,
········@fontina.cs.wisc.edu uttered these wise words...

>  When I said Lisp, I was referring to Common Lisp, which I thought had
>   been standardized.

Common Lisp and Scheme appear to me to be two dialects of the family 
of languages known as Lisp. Their semantics overlap, but CL 
doesn't entirely enclose those of Scheme, nor do Scheme's semantics 
entirely enclose those of CL.

The size of the language is a red herring. Think of the semantics as 
sets, and then draw a Venn diagram. The sets intersect.
 
>   I know that Scheme has been standardized, which seems rather
>   unusual for a mere "dialect".

Ohh, excellent flamebait. ;) Yep, Fortran is a subset of Basic, C++ is 
a subset of Smalltalk, and _everything_ is a subset of Lisp. Hmm. 
Well, that last one could be true (in the sense that _anything_ can be 
added to Lisp), but you might not get away with it in a newsgroup like 
comp.lang.c++. Would _you_ like to try it? Light the blue touch paper 
and stand well back...

With Lisp, talking about sets of semantics is probably meaningless, 
because we can extend the language in any way we like, so easily. If 
you want to be pedantic, and only use the language as defined by the 
language spec, then that's different. Does the Scheme spec specify a 
standard macro system? Is that even necessary?

Meta-circular evaluators (SICP style) make _anything_ possible. All 
you have to do is write your meta-circular evaluator.
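
(Here's the kind of thing I mean -- a toy sketch, nothing more, with
made-up names, covering only numbers, variables, QUOTE, IF, LAMBDA and
application:

  ;; A toy meta-circular evaluator in the SICP spirit.  Environments
  ;; are plain alists; no primitives, no error checking.
  (defun toy-eval (form env)
    (cond ((numberp form) form)
          ((symbolp form) (cdr (assoc form env)))
          ((eq (first form) 'quote) (second form))
          ((eq (first form) 'if)
           (if (toy-eval (second form) env)
               (toy-eval (third form) env)
               (toy-eval (fourth form) env)))
          ((eq (first form) 'lambda) (list 'closure form env))
          (t (toy-apply (toy-eval (first form) env)
                        (mapcar (lambda (a) (toy-eval a env))
                                (rest form))))))

  (defun toy-apply (fn args)
    (let ((params (second (second fn)))   ; lambda list of the closure
          (body   (third  (second fn)))   ; its body form
          (env    (third  fn)))           ; its captured environment
      (toy-eval body (append (mapcar #'cons params args) env))))

  ;; (toy-eval '((lambda (x) (if x 'yes 'no)) 1) '())  => YES

Ten minutes of typing, and you have a little Lisp of your own to bend
in whatever direction you like.)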
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
            Please note: my email address is gubbish
                 Will write Lisp code for food
From: Harley Davis
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <wkg1tlgiri.fsf@laura.ilog.com>
········@fontina.cs.wisc.edu (Shaun Flisakowski) writes:

> >:   Personally, I prefer Scheme to Lisp; it strikes me as a cleaner,
> >:   more orthogonal language.
> >
> >Do you mean Scheme is not Lisp?  This is weird.
> 
>   When I said Lisp, I was referring to Common Lisp, which I thought had
>   been standardized.

Common Lisp has been standardized; there is ANSI Common Lisp.  There
is also a dialect of Lisp called ISLisp that has been standardized by
ISO.  However, "Lisp" is a generic term for a family of languages and
it is unlikely that any standard will ever be called "The Standard
Lisp".  This was attempted with ISLisp and voted down; I doubt there
is any standards organization out there which will take up the banner
again.

-- Harley

-------------------------------------------------------------------
Harley Davis                            net: ·····@ilog.com
Ilog, Inc.                              tel: (415) 944-7130
1901 Landings Dr.                       fax: (415) 390-0946
Mountain View, CA, 94043                url: http://www.ilog.com/
From: Jason Trenouth
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <33e7b9bc.1369535968@newshost>
On 9 Jul 1997 04:58:57 GMT, ········@fontina.cs.wisc.edu (Shaun Flisakowski)
wrote:

> :Have you noticed...
> :  You don't see fast numerical libraries written in Lisp.
> :  You don't see scientific libraries written in Lisp.
> :  You don't see commercial games written in Lisp.
> :  You don't see application suites written in Lisp.
> :

None of the above are true, of course. BTW The third domain is perhaps the
most in fashion of those listed.

__Jason
From: Harley Davis
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <wken95gio8.fsf@laura.ilog.com>
·····@harlequin.co.uk (Jason Trenouth) writes:

> On 9 Jul 1997 04:58:57 GMT, ········@fontina.cs.wisc.edu (Shaun Flisakowski)
> wrote:
> 
> > :Have you noticed...
> > :  You don't see fast numerical libraries written in Lisp.
> > :  You don't see scientific libraries written in Lisp.
> > :  You don't see commercial games written in Lisp.
> > :  You don't see application suites written in Lisp.
> 
> None of the above are true, of course. BTW The third domain is perhaps the
> most in fashion of those listed.

That's interesting.  Could you give a couple of examples of commercial
games running compiled (or even interpreted) Lisp code.  How about an
application suite?  Anything mainstream?

Thanks,

-- Harley

-------------------------------------------------------------------
Harley Davis                            net: ·····@ilog.com
Ilog, Inc.                              tel: (415) 944-7130
1901 Landings Dr.                       fax: (415) 390-0946
Mountain View, CA, 94043                url: http://www.ilog.com/
From: Erik Naggum
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <3078047256675330@naggum.no>
* Harley Davis
| That's interesting.  Could you give a couple of examples of commercial
| games running compiled (or even interpreted) Lisp code.  How about an
| application suite?  Anything mainstream?

a friend of mine sent me some Lisp code that looked really odd some time
ago, and told me it was from a game called Abuse.  I don't know much about
it, but the Lisp code appeared to be definitions of characters and such,
and Lisp is the extension language of this game, running interpreted or
compiled after loading source code.  no compile-file exists.  he described
Abuse as a "sidescroller" game.  I don't know what that is; some sort of
shoot-em-up game.  see http://games.3dreview.com/abuse/files/lispedit.txt
or http://games.3dreview.com/abuse/minigames/minigames.html.

the new Nintendo 64-bit games are produced with Common Lisp, but I don't
know how much of Lisp is running in the actual game.  Nichimen Graphics and
their N-world is the place to go for Lisp game action.  check out
www.franz.com -- they had a lot of pointers there some time ago.  (I can't
check right now, 'cuz Netscape expresses their desire to have me upgrade
to their next version in their own peculiar way.)

#\Erik
-- 
Microsoft Pencil 4.0 -- the only virtual pencil whose tip breaks.
From: Steven Perryman
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <5qng2t$2l5@bhars12c.bnr.co.uk>
In article <··············@laura.ilog.com> Harley Davis <·····@laura.ilog.com> writes:

>> > :  You don't see commercial games written in Lisp.
>> > :  You don't see application suites written in Lisp.
>> 
>> None of the above are true, of course. BTW The third domain is perhaps the
>> most in fashion of those listed.

>That's interesting.  Could you give a couple of examples of commercial
>games running compiled (or even interpreted) Lisp code.  How about an
>application suite?  Anything mainstream?

In Object Expert magazine, 2 or 3 issues ago, there was an article on this
topic. A company that writes games was using Lisp (CLOS too ?? ) with
great success. They built a layer on top of Lisp specifically for games-
related needs, and called it GOOL (Games OO Lisp/Layer ?? ) .

I can't remember the name of the game(s) offhand.


Regards,
Steven Perryman
·······@nortel.co.uk
From: Barry Margolin
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <5qp9kb$f8q@pasilla.bbnplanet.com>
In article <··········@bhars12c.bnr.co.uk>,
Steven Perryman <·······@bnr.co.uk> wrote:
]In article <··············@laura.ilog.com> Harley Davis <·····@laura.ilog.com> writes:
]>That's interesting.  Could you give a couple of examples of commercial
]>games running compiled (or even interpreted) Lisp code.  How about an
]>application suite?  Anything mainstream?
]
]In Object Expert magazine, 2 or 3 issues ago, there was an article on this
]topic. A company that writes games were using Lisp (CLOS too ?? ) with
]great success. They built a layer on top of Lisp specifically for games-
]related needs, and called it GOOL (Games OO Lisp/Layer ?? ) .

I believe all the old Infocom games were written in ZDL (Zork Definition
Language, I think).  I heard that ZDL was derived from MDL, a Lisp dialect
that was in use at MIT in the 70's, and which was used to implement the
original Zork on the PDP-10.

-- 
Barry Margolin, ······@bbnplanet.com
BBN Corporation, Cambridge, MA
Support the anti-spam movement; see <http://www.cauce.org/>
From: Fred Haineux
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <bc-1907971824090001@17.127.18.96>
Barry Margolin <······@bbnplanet.com> wrote:
|  I believe all the old Infocom games were written in ZDL (Zork Definition
|  Language, I think).  I heard that ZDL was derived from MDL, a Lisp dialect
|  that was in use at MIT in the 70's, and which was used to implement the
|  original Zork on the PDP-10.

It pains me to note that someone has written a new "C syntax" language
that outputs pseudo-code for the MDL-running engine so that they can make
"new" infocom-style games without having to learn Lisp... (I suppose I
should've looked up the details....)

One of the reasons I went to MIT was a run-in with MDL courtesy of Prof.
Licklider at the AI-Lab back when I was an urchin....

bc
From: Vassili Bykov
Subject: Details about Zork (Re: But Lisp is *SLOW* Compared to C/C++)
Date: 
Message-ID: <wkk9ikfcp3.fsf_-_@cam.org>
[Not crossposted to comp.lang.c++ and comp.lang.programming]
··@wetware.com (Fred Haineux) writes:
> Barry Margolin <······@bbnplanet.com> wrote:
> |  I believe all the old Infocom games were written in ZDL (Zork Definition
> |  Language, I think).  I heard that ZDL was derived from MDL, a Lisp dialect
> |  that was in use at MIT in the 70's, and which was used to implement the
> |  original Zork on the PDP-10.
> 
> It pains me to note that someone has written a new "C syntax" language
> that outputs pseudo-code for the MDL-running engine so that they can make
> "new" infocom-style games without having to learn Lisp... (I suppose I
> should've looked up the details....)

It is not fair to say so.  Their goal was not writing Infocom-style
games without having to learn Lisp.  The goal was writing
Infocom-style games, period.  You can't get your hands on the
development system Infocom used (ZIL--"Zork Implementation
Language"--a stripped-down MDL): it ran on DECsystem-20s and, even if
anything remains of it now, is now a property of Activision.  So,
learning Lisp, MDL, or ZIL (most unlikely, "a stripped-down MDL" is as
much as you can find out about it now) would not help the tiniest bit.

Also, there was no "MDL-running engine" in Infocom games.  MDL itself
was only used to implement the original Zork at MIT (1977-1979) -- a
students' creation later reimplemented in Fortran and renamed
"Dungeon" after Zork became Infocom's trademark.  Infocom games,
starting with Zork I, II, and III (the original Zork split into three
parts to fit into tiny micros of the time) were written in ZIL which
compiled into a so-called Z-code: a specialized byte code designed to
compactly represent the specific data structures and operations of
interactive fiction (i.e. text adventures).  Z-code files ("story
files") were interpreted by a Z-code interpreter: a small virtual
machine implemented for a specific platform.  By targeting this
specialized code instead of a generic MDL engine they were able to fit
sophisticated games on small floppies and into tight RAMs.
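
(Not that the following has anything to do with the real Z-machine --
the opcodes and names are invented purely for illustration -- but the
general shape of such an interpreter is just a small dispatch loop
over a byte vector:

  ;; Made-up three-opcode machine, only to show the shape of the idea:
  ;; 0 = push the next byte, 1 = add the top two stack items,
  ;; 2 = print the top of the stack and halt.
  (defun run (code)
    (let ((pc 0) (stack '()))
      (loop
        (case (aref code pc)
          (0 (push (aref code (incf pc)) stack))
          (1 (push (+ (pop stack) (pop stack)) stack))
          (2 (return (print (pop stack)))))
        (incf pc))))

  ;; (run #(0 2 0 3 1 2)) prints 5.

The real Z-machine's instruction set is far richer, of course, but it
is still small enough to reimplement on almost anything, which is
exactly why the story files travelled so well.)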

A few years ago Graham Nelson, a math professor from Oxford, designed
a language called Inform, wrote a compiler which produces Z-code as
its output, and a run-time library--all to make text adventures
similar in "look and feel" to Infocom's--plus two excellent games,
Curses and Jigsaw, rivaling original Infocom ones in quality. Why
Z-code as the target?  Because there are Z-code interpreters for
virtually any machine you can think of, and anything compiled to
Z-code becomes accessible to anyone (plus the nostalgic value of "just
like Infocom's", of course).  You see, Inform didn't win that "market"
just because they let people make games without having to learn Lisp.
They let them make text adventures which are not worse (technically, at
least--gameplay is the designer's responsibility) than Infocom's, and
are easily accessible.  Most interestingly, the upcoming "Zork: The
Grand Inquisitor", a yet another grahical game set in Zork universe,
will also include a new "Zork: the Undiscovered Underground" text
adventure written by Mark Blanc, one of the original Zork
implementors--in Inform, not ZIL or MDL.

If there is a point to be made here, it is not grief over Nelson
choosing His Own Toy Language (TM) over Lisp (gee, he even had the
dangling else problem until version 5 or 6!), it is that with all the
inherent superiority of Lisp, superiority alone is not enough to
encourage people to use it.

Details:

  The Lore and Legends of Infocom (unofficial Infocom home page)
  <http://www.csd.uwo.ca/Infocom/>

  "Zork: A Computerized Fantasy Simulation Game" by P. David Lebling,
   Marc S. Blank, and Timothy A. Anderson. 
   IEEE Computer, vol. 12, no. 4, April 1979, pages 51-59. 
  (includes pieces of the original MDL Zork code)
  <http://www.csd.uwo.ca/Infocom/Articles/ieee.html>

  "How to Fit a Large Program Into a Small Machine"  by Marc S. Blank 
   and S. W. Galley.
   Creative Computing, July 1980, pages 80-87.
  <http://www.csd.uwo.ca/Infocom/Articles/small.html>

  Graham Nelson's Inform 6 home page
  <http://www.gnelson.demon.co.uk/inform.html>

  Interactive Fiction Archive
  <ftp://ftp.gmd.de/if-fiction>

  <news:rec.arts.int-fiction>

--Vassili
From: Fred Haineux
Subject: Re: Details about Zork (Re: But Lisp is *SLOW* Compared to C/C++)
Date: 
Message-ID: <bc-2207970943050001@17.127.18.96>
In article <·················@cam.org>, Vassili Bykov <······@cam.org> wrote:
|  It is not fair to say so.  Their goal was not writing Infocom-style
|  games without having to learn Lisp.  The goal was writing
|  Infocom-style games, period.  You can't get your hands on the
|  development system Infocom used (ZIL--"Zork Implementation
|  Language"--a stripped-down MDL): it ran on DECsystem-20s and, even if
|  anything remains of it now, is now a property of Activision.  So,
|  learning Lisp, MDL, or ZIL (most unlikely, "a stripped-down MDL" is as
|  much as you can find out about it now) would not help the tiniest bit.

Thank you for correcting me. I certainly wasn't right to imply that
someone hated Lisp or any such thing. Please consider that this was mostly
due to complete ignorance of the actual facts.

I do hanker for a ZIL of mine own, still, but not enough (so far) to get
off my ass and write one.

First, it's time to get that thesis running again after all these years.

bc
From: J. Holder
Subject: Re: Details about Zork (Re: But Lisp is *SLOW* Compared to C/C++)
Date: 
Message-ID: <5rd538$fju$1@europa.frii.com>
In article <·················@cam.org>, Vassili Bykov <······@cam.org> wrote:
|  It is not fair to say so.  Their goal was not writing Infocom-style
|  games without having to learn Lisp.  The goal was writing
|  Infocom-style games, period.  You can't get your hands on the
|  development system Infocom used (ZIL--"Zork Implementation
|  Language"--a stripped-down MDL): it ran on DECsystem-20s and, even if
|  anything remains of it now, is now a property of Activision.  So,
|  learning Lisp, MDL, or ZIL (most unlikely, "a stripped-down MDL" is as
|  much as you can find out about it now) would not help the tiniest bit.

Although I wouldn't mind seeing a compiler for a LISP-like language
targeting the Z-machine image format.  When I worked on JZIP, my freeware
Infocom interpreter for UNIX & DOS, (shameless plug:
http://ftp.gmd.de/if-archive/infocom/interpreters/zip/jzip201g.zip)
I was a little disappointed that there wasn't one.  But then again, I
haven't gotten off my butt and done it...  However, the best spot to
start is probably the MDL language docs, which you can still order from
MIT.  A recent email exchange of mine revealed that the papers were
written by the original implementors, too.  Here's the info:

( NOTE: Galley wrote The Witness and Seastalker; Lebling cowrote Zork I, II,
        III, and Enchanter, coauthored the original mainframe Zork, and wrote
        Starcross, Suspect, Spellbreaker, and The Lurking Horror on his own.)

----- included email from ····@MIT -----

From ······@MIT.EDU  Wed May 14 14:22:09 1997
Date: Wed, 14 May 1997 16:22:38 -0400
To: ·······@frii.com (John Holder)
From: trish reid <······@MIT.EDU>
Subject: Re: LCS papers
Cc: ····@MIT.EDU

          AUTHOR :Galley, Stuart Wilbur.
           TITLE :MDL primer and manual : for versions 54 and 104 / S. W. Galley
                  and Greg Pfister.
        LANGUAGE :ENGLISH
       PUBLISHED :Cambridge : Laboratory for Computer Science, Massachusetts
                  Institute of Technology, 1977.
   PHYSICAL DESC :270 p. ; 28 cm.
           NOTES :Title from cover.
                  Includes index.
    FUNDING INFO :Office of Naval Research contract N00014-75-C-0661
    BIBLIOGRAPHY :Bibliography: p. 260.
    OTHER AUTHOR :Pfister, Greg.

---------------------------------------------------------------------

          AUTHOR :Galley, S. W.
           TITLE :The MDL programming language / S.W. Galley and Greg Pfister.
        LANGUAGE :ENGLISH
       PUBLISHED :Cambridge, Mass. : Laboratory for Computer Science,
                  Massachusetts Institute of Technology, c1979.
   PHYSICAL DESC :276 p. : ill. ; 28 cm.
          SERIES :MIT/LCS/TR; 293
    FUNDING INFO :Supported by the Advanced Research Projects Agency of the
                  Dept. of Defense. N00014-75-C-0661
    BIBLIOGRAPHY :Includes bibliographical references.
    OTHER AUTHOR :Pfister, Gregory F.
    OTHER AUTHOR :Massachusetts Institute of Technology. Laboratory for Computer
                  Science.
     OTHER TITLE :Programming language, The MDL.

-------------------------------------------------------------------

          AUTHOR :Lebling, P. David.
           TITLE :The MDL programming environment / P. David Lebling.
        LANGUAGE :ENGLISH
       PUBLISHED :Cambridge, Mass. : Laboratory for Computer Science,
                  Massachusetts Institute of Technology, 1980.
   PHYSICAL DESC :iv, 137 p. ; 28 cm.
          SERIES :MIT/LCS/TR; 294
    BIBLIOGRAPHY :Includes bibliographical references and index.
    OTHER AUTHOR :Massachusetts Institute of Technology. Laboratory for Computer
                  Science.
     OTHER TITLE :Programming environment, The MDL.


Dear John,

These documents are available for $24 each, including airmail shipment.  The
first title on your list did not show up on our database, I have given the
cite of the closest title. If you would like to purchase a copy, please
reply with your VISA, MasterCard or American Express number with expiration
date and name as it appears on the card.  Or, if you prefer, you may fax
that information to us at 617-253-1690.  I have included ordering and price
information at the end of this message.

Thank you,

-Trish Reid
MIT Document Services
····@mit.edu

MIT LIBRARIES				
DOCUMENT SERVICES			
Room 14-0551				
Cambridge, MA USA 02139-4307
(voice) 617-253-5668
(fax) 617-253-1690
(e.mail) ····@MIT.EDU
WWW: http://lauren.mit.edu/depts/document_services/docs.html

DOCUMENT DELIVERY SERVICE
Domestic Rates

Journal Article / $12
  (price per article up to 30 pages; overage add $.25/page)
RUSH SERVICE:   ***Note: Does not include transmission charges***
 SAME DAY: $10.00 extra
 Requests rec'd by 12:00pm Eastern time: Out by 6:00 Same day 
NEXT DAY:  $5.00 extra

MIT Thesis
     Hardcopy / $51
     Microfiche/Microfilm (please specify) / $36
     Abstract only / $12
     
MIT Technical Report/Working Paper                
$12 / $17 / $24 each
(1-30 pages / 31-100 pages / 101-400 pages / 401+ add $.25/page)       

MIT Press Out-of-Print Books	                      
$51 up to 400 pages				  
Add $.25/pg 401+
 
ALL PRICES INCLUDE AIRMAIL POSTAGE        

 DELIVERY OPTIONS
     1st Class/Airmail  (included in ALL prices)
     Ariel Electronic Document Transmission System (no additional charge)
     Fax ($8.00 first 30 pages, add $.50/page overage)
     Federal Express Overnight Delivery (must supply your own FedEx account
number)
     DHL Express Courier ($25.00 for first 5 items)

 ORDERING OPTIONS
     Fax (617-253-1690)
     E-Mail (····@MIT.EDU)
     OCLC (symbol MYG)
     Mail (address above)
     Telephone (617-253-5668)

PAYMENT OPTIONS
     Invoice (domestic institutions only, not individuals)
     Credit card (Visa,Mastercard,American Exp.) -must include card #, exp.
date & name as it appears on card
     Prepayment check -must be in U.S. funds & drawn on U.S. bank; check
payable to MIT


At 01:37 PM 5/14/97 -0600, you wrote:
>
>I am studying LISP languages and their variants, and would like to
>know how much it would cost (and if it is possible) to order the following
>three M.I.T. Publications:
>
>Dornbrook, M.; Blank, M.: The MDL Programming Language Primer, M.I.T.
>   Computer Science Lab., MIT/LCS, Cambridge, Mass., 1980
>
>Galley, S.W.; Pfister, G.: The MDL Programming Language, M.I.T. Computer
>   Science Lab., MIT/LCS/TR-293, Cambridge, Mass., 1979
>
>Lebling, P.D.: The MDL Programming Environment, M.I.T. Computer Science
>   Lab., MIT/LCS, Cambridge, Mass., May 1980
>
>Thank you,
>John Holder
>-- 
>John Holder (·······@frii.com) /\            http://www.frii.com/~jholder/
>UNIX Specialist, Paranet Inc. <--> Raytracing|Fractals|Interactive Fiction
>http://www.paranet.com/        \/           Homebrewing|Strange Attractors
>


--
John Holder (·······@frii.com) /\            http://www.frii.com/~jholder/
UNIX Specialist, Paranet Inc. <--> Raytracing|Fractals|Interactive Fiction
http://www.paranet.com/        \/           Homebrewing|Strange Attractors
From: Gareth McCaughan
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <86lo30o6j8.fsf@g.pet.cam.ac.uk>
Fred Haineux wrote:

> It pains me to note that someone has written a new "C syntax" language
> that outputs pseudo-code for the MDL-running engine so that they can make
> "new" infocom-style games without having to learn Lisp... (I suppose I
> should've looked up the details....)

If you're talking about Inform, he didn't write it so that you wouldn't
have to learn Lisp; he wrote it because there wasn't *any* freely available
(or indeed non-freely available) way of producing code for the Z-machine.
Which, incidentally, runs a byte code that isn't particularly Lispish.

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Jason Trenouth
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <33ea87ce.14677453@newshost>
On 11 Jul 1997 18:31:35 -0700, Harley Davis <·····@laura.ilog.com> wrote:

> ·····@harlequin.co.uk (Jason Trenouth) writes:
> 
> > On 9 Jul 1997 04:58:57 GMT, ········@fontina.cs.wisc.edu (Shaun Flisakowski)
> > wrote:
> > 
> > > :Have you noticed...
> > > :  You don't see fast numerical libraries written in Lisp.
> > > :  You don't see scientific libraries written in Lisp.
> > > :  You don't see commercial games written in Lisp.
> > > :  You don't see application suites written in Lisp.
> > 
> > None of the above are true, of course. BTW The third domain is perhaps the
> > most in fashion of those listed.
> 
> That's interesting.  Could you give a couple of examples of commercial
> games running compiled (or even interpreted) Lisp code. 

How about Nichimen's content development system for Nintendo 64?

There have also been some relatively recent articles in some software
magazines about Lisp and games: particularly the use of hybrid Lisp/C systems
used to fast-track the development of games. However, I'm unable to find one
of these at the moment. Perhaps I dreamt them... :-(

> How about an
> application suite?  

I guess it depends what you mean by "application suite", but Lisp vendors (eg
ourselves, and presumably yourselves) have had GUI and DBI frameworks for
some time, so probably you meant something else...

> Anything mainstream?

:-j

__Jason
From: Ulric Eriksson
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <5smn1b$j3v$1@news.edu.stockholm.se>
In article <·················@newshost>,
Jason Trenouth <·····@harlequin.co.uk> wrote:
>On 11 Jul 1997 18:31:35 -0700, Harley Davis <·····@laura.ilog.com> wrote:
>
>> That's interesting.  Could you give a couple of examples of commercial
>> games running compiled (or even interpreted) Lisp code. 
>
>How about Nichimen's content development system for Nintendo 64?
>
>There have also been some relatively recent articles in some software
>magazines about Lisp and games: particularly the use of hybrid Lisp/C systems
>used to fast-track the development of games. However, I'm unable to find one
>of these at the moment. Perhaps I dreamt them... :-(

Abuse from Crack Dot Com. http://www.crack.com/abuse

>> How about an
>> application suite?  
>
>I guess it depends what you mean by "application suite", but Lisp vendors (eg
>ourselves, and presumably yourselves) have had GUI and DBI frameworks for
>sometime so probably you meant something else...

Perhaps something like {MS, Perfect, Smart} Office. My spreadsheet
SIAG is written in C and Scheme. http://www.edu.stockholm.se/siag

Spreadsheets using Scheme (in particular Guile) are discussed in
the mailing list ···@nortom.com.

Ulric Eriksson

-- 
I proactively leverage my synergies.
From: Vassil Nikolov
Subject: Re: Lisp Compared to C/C++---becoming an efficient programmer
Date: 
Message-ID: <33C3917D.6AFF@lri.fr>
Yet another popular subject...

Satan - The Evil 1 wrote:
> ... text omitted...
> 
> I would say that C++ has such a bad repution amongst Lisp
> programmers because it takes several years to become
> a very good and efficient C++ programmer whilst you can
> quickly become an efficient Lisp programmer.

In my opinion, no one can quickly become an efficient Lisp
programmer.

(No one can quickly become an efficient programmer in any
language, for that matter...)

And even if the above were true, I don't think it contributes much
to the negative attitude towards C++.

I also don't think that an efficient Lisp programmer would
have any problems becoming an efficient C++ programmer
(apart from overcoming their distaste for C++, that is).

> ... text omitted...

My 2 centimes worth, I guess.

-- 
Vassil Nikolov, visitor at LRI:
  Laboratoire de Recherche en Informatique, Universite Paris-Sud
Until 9 July 1997: ··············@lri.fr
Normally:          ········@bgearn.acad.bg
From: Gareth McCaughan
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <86g1toxofm.fsf@g.pet.cam.ac.uk>
Someone calling himself "Satan - The Evil 1" (but I beg leave to doubt
whether that's really who it was) wrote:

> Have you noticed...
>   You don't see fast numerical libraries written in Lisp.
>   You don't see scientific libraries written in Lisp.
>   You don't see commercial games written in Lisp.
>   You don't see application suites written in Lisp.
> 
> In fact, you don't see any mainstream commercial applications
> written in Lisp for the the basic reason that any
> competitor will simply go out and write their competing
> application in C/C++ and make a faster, more responsive
> application that makes more efficient use of machine
> resources.

That might be part of it. I think more relevant facts are:

  - There are way, way fewer Lisp programmers than C programmers
    around.

  - Lisp programmers are usually not very interested in writing
    mainstream commercial applications.

  - Vendors haven't really targetted their Lisp systems for the
    ability to produce mainstream commercial applications.

  - The people producing the mainstream commercial applications
    have a long history of writing stuff in C. They probably
    haven't even considered considering higher-level languages.

It's a cultural thing.

>             Why do you think that, despite the massive
> amount of hype, no mainstream apps have been written
> in Java? Because it is too slow for the real world when
> compared to equivalent code written in C or C++. 

Actually, there has been at least one mainstream app written in
Java. Unfortunately.

There's no reason why you couldn't make a Java compiler that
produces code at least as good as you get from a C or C++ compiler.
However, for cultural reasons this hasn't been done yet: everyone
is more interested (rightly) in writing Java compilers that target
the JVM, so that the code they produce is portable.

> I would say that C++ has such a bad repution amongst Lisp
> programmers because it takes several years to become
> a very good and efficient C++ programmer whilst you can
> quickly become an efficient Lisp programmer.

I don't agree. It's certainly much faster to become able to write
non-trivial programs in Lisp than to become able to write non-trivial
programs in C++, because Lisp is a much more elegant language. But
becoming *very good and efficient* in any language takes time, and
Lisp isn't an exception.

>                                               The massive
> number of *bad* C++ programmers certainly doesn't help
> its reputation.  An experienced C++ programmer can write
> truelly elegant, safe, and efficient code that leaves
> the equivalent Lisp code in the dust.

If this is true, it's only because of shortcomings in the available
Lisp implementations. In other words, if your argument is valid then
it's not an argument against Lisp but an argument against Allegro
Common Lisp, LispWorks, Macintosh Common Lisp, and all the other
specific compilers out there.

(And I'm not sure it's true at all.)

> But to sum up; we always seem to push hardware to its 
> limits and because of this languages such as C and C++
> will maintain their edge over languages such as Lisp.

This might be true if "languages such as Lisp" means "languages
that happen not to have any implementations with properties X,Y,Z".
But I don't think that's really a language issue.

And, of course, even if everything you've said is right, it
doesn't apply to every field of programming. You mentioned
artificial intelligence, for instance. If you're right in predicting
that programmers will be made obsolete by AI, then I bet you the
programs that make them obsolete won't be written in C.

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Martin Rodgers
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <MPG.e2dd36eef78c6b29898b9@news.demon.co.uk>
With a mighty <··············@g.pet.cam.ac.uk>,
·····@dpmms.cam.ac.uk uttered these wise words...

> Someone calling himself "Satan - The Evil 1" (but I beg leave to doubt
> whether that's really who it was) wrote:

Well, it's not me. ;-)

>   - There are way, way fewer Lisp programmers than C programmers
>     around.

This is hard to dispute. I doubt that book publishers are stupid. If 
there's money to be made by publishing books that sell, and books 
about a particular language do in fact sell, then they'll go for it. 
Either there's less interest in Lisp than C/C++, or there's a book 
publishing conspiracy.

>   - Lisp programmers are usually not very interested in writing
>     mainstream commercial applications.

Not too long ago a fellow Lisp programmer tried to convince me that 
there are more programmers who know Lisp than SQL. His evidence seemed 
to be anecdotal, as his sample of programmers only included his 
colleagues and the students that he teaches Lisp to. He later argued 
that his claim was true of programmers in the city where he lived, 
which I couldn't dispute.

Why were we discussing (by email) the relative popularity of SQL vs 
Lisp? We began by discussing the use of CL-HTTP as an alternative to 
the various - and popular - web server extensions that use database 
features. I suggested that most people wanted a familiar query 
language.
 
>   - Vendors haven't really targetted their Lisp systems for the
>     ability to produce mainstream commercial applications.

Coupling CL-HTTP with a Lisp package would be very tasty. If the Lisp 
aspects could be hidden behind a less "Lisp-ish" syntax, not unlike 
mu-simp, it might even look "friendly" to non-Lisp people (the kind of 
people who're scared off by the simplicity of the Lisp non-syntax).

Some heavy marketing might also help.
 
>   - The people producing the mainstream commercial applications
>     have a long history of writing stuff in C. They probably
>     haven't even considered considering higher-level languages.

This is why I believe that heavy marketing could help. It might even 
be necessary, if we consider the myths about Lisp that need killing 
before we can even begin to discuss Lisp with some people.

> It's a cultural thing.

Hmm. Lisp Machines are not what most people want, and yet this seems 
to be what many Lisp people are still lusting after. We know how 
bloody OS wars can be, but the culture that comes with an OS can also 
create friction. Consider the hostility toward command lines, for 
example. Any "negative" feature associated with an OS can do this. 
Those of us who don't feel such hostility perhaps understand that 
feature and appreciate it. Others may understand it and yet still 
dislike it. The rest could be baffled and scared of it, because it's 
something that they don't understand and it intimidates them.

To remove the hostility, perhaps we first have to create 
understanding, then an appreciation. This could be as true for 
programmers as it is for non-programmers. Being techie doesn't mean 
that a person will understand and appreciate all things technical.
 
> Actually, there has been at least one mainstream app written in
> Java. Unfortunately.

There are also many that don't get much attention, because they just 
quietly work for the people who wrote them and the people they work 
for. The same is true for a lot of languages, including Lisp.
 
> There's no reason why you couldn't make a Java compiler that
> produces code at least as good as you get from a C or C++ compiler.
> However, for cultural reasons this hasn't been done yet: everyone
> is more interested (rightly) in writing Java compilers that target
> the JVM, so that the code they produce is portable.

There are also historical reasons. We should be comparing Java to 
languages with an equally short history, or at least with languages at 
a point where their history was equally short.

Unfortunately, there are a few people who appear to prefer to ignore a 
language's history, and perhaps even distort it. If Java was 20 years 
old, perhaps it would be fair to compare it with C. Perhaps if it were 
15 years old, it might be fair to compare it with C++.

There's also the possibility that some people are using Java for 
things they might not have considered doing in C++. I know that there 
are things I won't hesitate to do in Lisp that I'd never have dreamed 
of doing in C or C++. One language opens up my imagination by making 
hard things easy, while the other makes simple things hard work. 
However, once I've written something in Lisp, and found that it works 
and is useful, I can then think about how to write it in C++.

This is not too different from writing a tool in a shell language, 
awk, or perl, producing a solution quickly and easily. Later, after 
you've used your tool enough to see that it's worth the trouble, you 
can re-write it in C++. If you started in C++, you might never start 
it, or possibly worse, spend a lot of time on it but never complete 
it, wasting your time and effort.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
            Please note: my email address is gubbish
                 Will write Lisp code for food
From: Larry Gordon
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <5q0r9v$sj1$1@news.flinet.com>
During the peak AI years in the 60's, special machines were built that
only ran Lisp (as I'm sure everyone reading this obvious troll /
crossposted thread in comp.lang.lisp is aware).
The fact is that Lisp was a language very useful for AI applications
because of its nature.  C++ exists because the UN*X community was
indoctrinated with C in college (because of the computers used in school).
C then took on C++ because of widespread support for object-oriented
methodologies and a programmer base already familiar with C.  Since C/C++
are easy to write to hardware in a manner that is familiar and
understandable to both hardware and software engineers, they have become
supported in mainstream computer systems.  APIs were written into the
operating systems (which have mainly been written in C/C++) and the
support for application developers was written in C/C++.

My opinion is that anyone who uses computers to develop
software and wants to have marketable skills needs to
learn the following:

C or C++
familiarity with unix and windowing systems
API libraries (MFC, STL) or those for the operating system you are using
User Interface design fundamentals
Event based programming
Embedded programming fundamentals
Familiarity with threads and multiprocess communication
Debugging
An editor style (Brief/Emacs)
Perl or Rexx
Other languages as needed (LISP could be added here in support of a
	particular project or as scripting for EMACS)
Object methodologies (OOA/D, Booch or OMT, patterns)
Familiarity with makefile utilities
Familiarity with version control and configuration management
Some type of prototyping package (visual basic/rexx)
Good coding style that is readable and understood by others
Good communication skills and the ability to get along with *ANYONE*
Complete understanding of the problem at hand
Total humility to admit when you do not understand something completely,
and the willingness to ask questions.
An understanding of full-time employment benefits as well as dealing with
contract engineering (you are either an ass kisser, an ass, or a
mercenary...)

The students/young professionals out there wondering what they need to
learn to be marketable need to become good engineers, knowledgeable in
tools and technologies, but must first be responsible human beings.

Just my rants.

I welcome any comments,
Larry

-----------------------------------------------------------------------
Laurence A. Gordon
Intelligent Medical Imaging, Inc     "No generalization is worth a damn,
4360 Northlake Blvd                  including this one." -- Will Rogers    
Palm Beach Gardens, FL 33410
(561) 627-0344 XT 3201                           DISCLAIMER
                                The opinions of the aforenamed individual
799 Sanctuary Cove Dr.          might not be the same as his employer's.
North Palm Beach, FL 33410      If you feel like telling his boss about how
(561) 776-6336                  much of an insensitive, obnoxious jerk the
                                author is, don't bother.  He already knows.
-----------------------------------------------------------------------
From: Georg Bauer
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <199707110807.a37639@ms3.maus.de>
MR> We began by discussing the use of CL-HTTP as an alternative to 
MR> the various - and popular - web server extensions that use database 
MR> features.

Blubber. What does a database extension to a web server have to do with
CL-HTTP? The latter is a _complete_ web server. If you need databases, you
will have to write an extension to CL-HTTP to access the database. And of
course you will then use the database's query language.

And I don't understand why people should be afraid of CL-HTTP because of
the Lispish "non-syntax" (utter bullshit, of course Lisp has a syntax -
every special form can be regarded as part of the syntax, even if it is a
very "thin" syntax). Only those who program extensions to the server need
to know Lisp. Most people will just install the server, install some
extensions, and use them together.

But if you need to build extensions, you can use a language that's much
better suited to the job than Perl - and you don't need to use this
primitive CGI stuff. (Actually, I've always wondered what's so special
about writing CGIs, since it's only a question of environment variables
and stdout. It's one of the most stupid interfaces possible, and could
only have been engineered by some Unix freaks. A good plug-in interface
would have been much better - we could have had a standardized, fast
interface for server extensions by now, if it were not for CGI.)
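
To make the point concrete, here is a minimal sketch of a CGI program in
Common Lisp - purely an illustration, not CL-HTTP code. ANSI CL has no
standard GETENV, so the helper below is an assumption standing in for
whatever your implementation provides (only the SBCL branch is filled in
as one real possibility).

  (defun getenv (name)
    ;; ANSI CL has no standard way to read environment variables; plug
    ;; in whatever your Lisp provides.  Only the SBCL case is shown.
    #+sbcl (sb-ext:posix-getenv name)
    #-sbcl (error "Supply your implementation's GETENV for ~A" name))

  (defun cgi-echo-query ()
    ;; A CGI reply really is just text on stdout: a header line, a
    ;; blank line, then the body.
    (format t "Content-Type: text/plain~%~%")
    (format t "You sent: ~A~%" (or (getenv "QUERY_STRING") "")))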
From: Martin Rodgers
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <MPG.e345bc15bc9c0289898d6@news.demon.co.uk>
With a mighty <···················@ms3.maus.de>,
···········@ms3.maus.westfalen.de uttered these wise words...

> MR> We began by discussing the use of CL-HTTP as an alternative to 
> MR> the various - and popular - web server extensions that use database 
> MR> features.
> 
> Blubber. What does a database extension to a web server have to do with
> CL-HTTP? The latter is a _complete_ web server. If you need databases, you
> will have to write an extension to CL-HTTP to access the database. And of
> course you will then use the database's query language.

Some webservers include CGI extension tools with database support. 
This is the kind of tool I was referring to. You're just saying what I 
was saying, that such extensions would need to be added to CL-HTTP.
 
> And I don't understand why people should be afraid of CL-HTTP because of
> the Lispish "non-syntax" (utter bullshit, of course Lisp has a syntax -
> every special form can be regarded as part of the syntax, even if it is a
> very "thin" syntax). Only those who program extensions to the server need
> to know Lisp. Most people will just install the server, install some
> extensions, and use them together.

As I've pointed out, this work has already been done, more than once. 
So the challenge is to convince people that CL-HTTP should be used 
when it won't actually do what's needed "out of the box". Time is 
money, so most people will prefer to spend that money (or less) on a 
commercial webserver.

This is unfortunately one of the areas in which Lisp loses. If you 
have a good reason for already using Lisp, the situation may be 
different, but it's a very bad way to sell Lisp as an idea to people 
who are currently using something else.
 
> But if you need to build extensions, you can use a language that's much
> better suited to the job than Perl - and you don't need to use this
> primitive CGI stuff. (Actually, I've always wondered what's so special
> about writing CGIs, since it's only a question of environment variables
> and stdout. It's one of the most stupid interfaces possible, and could
> only have been engineered by some Unix freaks. A good plug-in interface
> would have been much better - we could have had a standardized, fast
> interface for server extensions by now, if it were not for CGI.)

Who said anything about doing CGI in Perl? I've never done that, and I 
hope I never will. I only mentioned Perl in the context of general 
tool building:

> This is not too different from writing a tool in a shell language, 
> awk, or perl, producing a solution quickly and easily. Later, after 
> you've used your tool enough to see that it's worth the trouble, you 
> can re-write it in C++. If you started in C++, you might never start 
> it, or possibly worse, spend a lot of time on it but never complete 
> it, wasting your time and effort.

I was arguing in favour of high level languages that make translating 
ideas into code as rapid as possible. Lisp, for example. I think that 
we agree about CGI writing. I do most of mine in an extended HTML 
language, with the option for extending it in C++. I didn't design it, 
of course, otherwise I might've done it differently. So might most of 
us Lisp programmers.

However, most if not all of the CGI/database tools that I've used or 
read about appear to use an extended HTML. The one that I'm most 
familiar with (as my colleagues and I use it) allows SQL queries to be 
embedded in the HTML templates. I don't doubt that CL-HTTP could be 
extended to do the same thing, and that this would not expose the use 
of Lisp. Unfortunately, that cancels most of the advantages of using 
Lisp in the first place! You'd have to show that CL-HTTP is superior 
to all the rival web servers with database support, without talking 
about Lisp. Worse, you'd have to show that the time and money spent 
re-inventing a wheel in Lisp would be worth it.
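
For what it's worth, the wheel itself is small. Here is a rough Common
Lisp sketch of expanding SQL markers embedded in an HTML template; the
<!--SQL ...--> marker syntax and the RUN-QUERY stand-in are made up for
the illustration (they are not part of CL-HTTP or of the commercial tools
mentioned above), and well-formed markers are assumed.

  (defun run-query (sql)
    ;; Hypothetical stand-in for a real database interface; returns a
    ;; list of strings, one per result row.
    (declare (ignore sql))
    (list "<tr><td>example row</td></tr>"))

  (defun expand-template (template)
    ;; Copy TEMPLATE to the output, replacing each <!--SQL ...--> marker
    ;; with the rows returned for the embedded query.
    (with-output-to-string (out)
      (let ((pos 0))
        (loop
          (let ((start (search "<!--SQL " template :start2 pos)))
            (unless start
              (write-string template out :start pos)
              (return))
            (write-string template out :start pos :end start)
            (let* ((sql-start (+ start (length "<!--SQL ")))
                   (end (search "-->" template :start2 sql-start)))
              (dolist (row (run-query (subseq template sql-start end)))
                (write-string row out))
              (setf pos (+ end 3))))))))

Calling (expand-template "<table><!--SQL select name from jobs--></table>")
would splice whatever rows RUN-QUERY produced into the table.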

This is why I suggest that a Lisp vendor should package CL-HTTP (or a 
server like it) with their Lisp and market it heavily. Pick a magazine 
read by web developers and place an advert in it. Make it impressive.
I'm assuming that this is a market that you'd like to see Lisp used 
in. I certainly would, as I have an interest in using Lisp in that 
market. Perhaps you feel differently, or you know more web developers 
using Lisp than I do?

Remember the SQL vs Lisp question? A lot of the smarter web developers 
are coming from a database background. Perhaps if more of them knew 
and used Lisp than SQL, we'd see more web servers - perhaps even a 
commercial web server - using Lisp. Unfortunately, the nature of the 
market determines these things, not our wishful thinking. There are 
good reasons why database people know SQL. Lisp has a better chance of 
competing with Perl in the general CGI domain.

Lisp might even compete in more general areas, like web servers 
themselves. It would help if more people knew that Lisp could do this!
However, as we know too well, there's a great deal of ignorance of 
Lisp, and the capabilities of _modern_ Lisps. My hope is that this will 
change, that Java will help dispel the myths about garbage collection 
and dynamic languages, and that Dylan will also help.

But I've said all of that before...
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
            Please note: my email address is gubbish
                 Will write Lisp code for food
From: Rainer Joswig
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <joswig-ya023180001507971818260001@news.lavielle.com>
In article <·························@news.demon.co.uk>,
···@this_email_address_is_intentionally_left_crap_wildcard.demon.co.uk
(Martin Rodgers) wrote:

> But I've said all of that before...

How about trying for at least the next six months to post
only messages with pure *technical* content?

As an alternative you could start an initiative for a new newsgroup:
comp.lang.lisp.advocacy . 

Thank you,

Rainer Joswig

-- 
http://www.lavielle.com/~joswig/
From: Martin Rodgers
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <MPG.e372b323a4799609898e4@news.demon.co.uk>
With a mighty <·································@news.lavielle.com>,
······@lavielle.com uttered these wise words...

> How about trying for at least the next six months to post
> only messages with pure *technical* content?

How about admitting that there are _some_ people who don't understand 
Lisp? There must be a few, out there.

It would be great if everyone felt as you did, Rainer, and I'd welcome 
a world where Lisp was accepted by more people. Right now, I keep 
finding people expressing doubts about Java in the same way that 
others express doubts about Lisp. In case you hadn't noticed, Java is 
rather popular at the moment, yet it still gets attacked for being 
slow, using GC, and a number of other things that Java has, or is 
alleged to have, in common with Lisp. I don't believe that Lisp is 
slow, nor do I believe that Java will always be judged by the standard 
of the worst JVM.

Yet Lisp continues to be seen by "outsiders" in an even worse light.
So, how about discussing why this should be? Or is the answer too 
simple, that these unbelievers are just stupid? If they are stupid, 
then we have no chance of educating them. If they're merely ignorant, 
then we can change that.

> As an alternative you could start an initiative for a new newsgroup:
> comp.lang.lisp.advocacy . 

This isn't a personal attack on you, nor on Lisp. Instead, I'm trying 
to discuss Lisp's future in a constructive manner. Now, perhaps you 
could criticise the manner in which I'm doing this, but I don't see 
how my posts lack technical content. They may lack _code_, but I'm not 
concerned about code as much as the attacks I've seen on Lisp during 
the 5 years that I've been reading UseNet, and elsewhere. All of these 
attacks have been based on ignorance, which is what I'm trying to 
discuss with you.

What I've often felt was needed was a comp.lang.c++.advocacy 
newsgroup. Curiously, although there's a comp.lang.java.advocacy 
newsgroup, it appears to be full of anti-MS posts, while the anti-Java 
posts tend to appear in other places. The anti-Lisp posts have a habit 
of being crossposted with comp.lang.lisp, as you might expect.

Don't you ever get tired of all these people shitting on Lisp?
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
            Please note: my email address is gubbish
                 Will write Lisp code for food
From: Rainer Joswig
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <joswig-ya023180001707972306580001@news.lavielle.com>
In article <·························@news.demon.co.uk>,
···@this_email_address_is_intentionally_left_crap_wildcard.demon.co.uk
(Martin Rodgers) wrote:

> With a mighty <·································@news.lavielle.com>,
> ······@lavielle.com uttered these wise words...
> 
> > How about trying for at least the next six months to post
> > only messages with pure *technical* content?
> 
> How about admitting that there are _some_ people who don't understand 
> Lisp? There must be a few, out there.

But who cares? This has been going on for the last twenty years. Where
have you been? It is so boring. Somebody posts a random flame and
some self-styled Lisp programmers think they can have
a discussion. The rest of Usenet just laughs at them.

> Don't you ever get tired of all these people shitting on Lisp?

Not really. I just don't care that much about it. Lisp and its
various incarnations have enough weak points to keep a lot of
people busy improving it. I'm much more interested in applications
of Lisp and keeping people busy using it.

Still I don't see your contribution.
Your last (and first) posting with technical content was about
the non-performance of stream-based I/O. We knew that already.
This is a typical beginner question. But go on. How do
the various CLs implement streams? Is there a de facto standard?
How could it be improved? How could we get more *portable*
performance? Are there libraries that deal with that?
Don't waste your time with fruitless bozo discussions.

>                  Will write Lisp code for food

What have you accomplished? Why should a company hire you as a Lisp
programmer? Any references, examples, contributions?

-- 
http://www.lavielle.com/~joswig/
From: Martin Rodgers
Subject: Lisp elitism
Date: 
Message-ID: <MPG.e39d996cc53201d9898eb@news.demon.co.uk>
With a mighty <·································@news.lavielle.com>,
······@lavielle.com uttered these wise words...

> > How about admitting that there are _some_ people who don't understand 
> > Lisp? There must be a few, out there.
> 
> But who cares? This has been going on for the last twenty years. Where
> have you been? It is so boring. Somebody posts a random flame and
> some self-styled Lisp programmers think they can have
> a discussion. The rest of Usenet just laughs at them.

I agree that it's boring. So isn't it time to do something about it? 
If you can't be bothered, then just add comp.lang.c++ to your 
killfile. I did that a few years ago, mainly because it was worrying me 
that a few (hopefully a minority) C++ programmers could be so dumb.

But we must flame all newbies, right? <sigh>

After a while, it occurred to me that they probably don't realise how 
dumb their mistaken beliefs appear to us. After all, these myths are 
believed by a heck of a lot of people. How many of them have you met? 
Perhaps we only see things differently because I know more of these 
people than you do? Who knows?

I _do_ know that I've never seen a Lisp column in a mainstream 
computer magazine (or any mag). I suspect that the columns that _do_ 
appear would amuse you as much as they amuse me, but would you like to 
change this unfortunate situation, or help perpetuate it? What do you 
feel is better, greater "mindshare", or less? Or are you only 
concerned about whether _you_ can use Lisp, and screw anyone else?

Can you blame me for expressing an interest in using Lisp, when you do 
the same? My only crime is wishing that someone might pay me to use 
Lisp, so I can avoid using C++. What gives _you_ the right to this 
privilege, but not somebody else? Is that not elitism?

> > Don't you ever get tired of all these people shitting on Lisp?
> 
> Not really. I just don't care that much about it. Lisp and its
> various incarnations have enough weak points to keep a lot of
> people busy improving it. I'm much more interested in applications
> of Lisp and keeping people busy using it.

I have similar interests, but perhaps I don't - or can't - just ignore 
the shit that gets dumped on Lisp. Until somebody pays me to program 
in Lisp, I'll keep that interest. Then at least I could argue as you 
do, based on personal _work_ experience. As it is, I can only find 
people who either 1) don't know Lisp, or 2) know it but don't take it 
seriously. They do, however, take languages like SQL seriously.
 
> Still I don't see your contribution.

I'm not surprised. You don't see how Lisp's poor image is hurting the 
language. Are you suggesting that I'm the only programmer who'd like 
to use Lisp, but who is told to use something else? Sales of Lisp 
systems can help Lisp. Loss of sales can hurt it. See the recent post 
from an ILOG employee concerning the change in status of a Lisp, from 
commercial to non-commercial. Believe it or not, some people take 
support seriously, and refuse to use products without any. My guess is 
that ILOG Talk suffered from lack of sales because of its high price, 
not because of any technical problem.

> Your last (and first) posting with technical content was about
> the non-performance of stream-based I/O. We knew that already.
> This is a typical beginner question. But go on. How do
> the various CLs implement streams? Is there a de facto standard?

I've been using Lisp for years (I've even written my own interpreters 
and compilers). If you wish to call me a beginner, fine. I don't doubt 
that you have more experience using Lisp than I have, esp as I've 
often wasted a lot of time using less productive languages (like C).
My I/O code has performed well without fancy declarations, perhaps 
because the I/O code in the runtime support for my first Lisps was 
all very primitive - no better than a subset of the CL I/O functions, 
working only on text files. The runtime code was a thin layer above 
the C library functions, just adding an 8K buffer.
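
For concreteness, here is a minimal sketch of that kind of thin buffered
layer - a made-up example, not the old runtime code itself: read in 8K
chunks with READ-SEQUENCE and do the per-character work on the buffer.

  (defun count-lines (path)
    ;; Read PATH in 8K chunks and count newlines: one stream call per
    ;; buffer instead of one READ-CHAR call per character.
    (with-open-file (in path :element-type 'character)
      (let ((buffer (make-string 8192))
            (count 0))
        (loop
          (let ((n (read-sequence buffer in)))
            (when (zerop n)
              (return count))
            (dotimes (i n)
              (when (char= (char buffer i) #\Newline)
                (incf count))))))))

Whether this actually beats the per-character functions depends, of 
course, on how a particular CL implements its streams.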

O'caml can also perform well, and I/O code runs as fast as C can, and 
without any fancy declarations. Obviously O'caml is a very different 
language to CL. The I/O is certainly far less sophisticated than what 
we have in CL. Perhaps by offering I/O support barely better than what 
C offers, O'caml can avoid the overheads that CL forces on us? I 
wasn't even aware of any overhead until I began using Lisps written by 
other people.

> How could it be improved? How could we get more *portable*
> performance? Are there libraries that deal with that?
> Don't waste your time with fruitless bozo discussions.

Isn't ANSI Common Lisp portable enough? Isn't O'caml portable enough? 
Mac, Windows, and Unix support sounds pretty damn portable to me. Of 
course, there are a few other platforms not in that list, but I don't 
see why O'caml couldn't be ported to them if there was sufficient 
interest to justify the effort.
 
> >                  Will write Lisp code for food
> 
> What have you accomplished? Why should a company hire you as a Lisp
> programmer? Any references, examples, contributions?

Why would you care? So far, nobody has offered me a job as a Lisp 
programmer, leaving me with no professional experience as a Lisp 
programmer, so yes, why would anyone give me that chance? Thanks for 
explaining why I'm excluded from the ranks of professional Lisp 
programmers. Perhaps that's why so many other programmers are excluded 
from the same kind of work?

It's a vicious circle. Let's break the circle, instead of celebrating 
its strength. Then a few more programmers may discover the joys of 
using Lisp, thus making Lisp stronger.

Meanwhile, I've updated my sigfile. You seem to be objecting to Lisp 
gaining one more professional programmer, so I won't offend you 
further by offering my services as a Lisp programmer. You seem 
determined to deny that your attitude is hurting Lisp, and yet look at 
what you're doing _right here_. You're giving the impression that you 
wish to keep Lisp to yourself, and exclude anyone who doesn't share 
your fanatical support for the language. I hope this is _not_ your 
intention, but please let me know.

Denying that a lot of hostility toward Lisp exists isn't constructive.
Nor is denying the ignorance of Lisp going to help introduce Lisp to 
more programmers. Surely Lisp vendors would like to sell more copies 
of their products? Do you not feel the same way? Or is it better to 
keep Lisp an exclusive language, reserved for only those "right 
thinking" people with enough cash?

BTW, insisting that more programmers know Lisp than SQL won't help 
your credibility. High-street bookstores refute such claims by stocking 
more books on software using SQL than Lisp. Unless book publishers 
have yet to discover Lisp? Surely not. If you think that this isn't 
relevant, then it's a sign of elitism. IMHO education is the solution 
to ignorance, which in turn can help Lisp. Or would you prefer to keep 
most programmers ignorant of Lisp? How does _that_ help Lisp?
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
            Please note: my email address is gubbish
From: Pierre Mai
Subject: Re: Lisp elitism
Date: 
Message-ID: <m3vi25ble8.fsf@torus.cs.tu-berlin.de>
>>>>> "MR" == Martin Rodgers <···@this_email_address_is_intentionally_left_crap_wildcard.demon.co.uk> writes:

    >> But who cares? This has been going on for the last twenty
    >> years. Where have you been? It is so boring. Somebody posts a
    >> random flame and some self-styled Lisp programmers think they
    >> can have a discussion. The rest of Usenet just laughs at them.

    MR> After a while, it occurred to me that they probably don't
    MR> realise how dumb their mistaken beliefs appear to us. After
    MR> all, these myths are believed by a heck of a lot of people. How

So what?  You will _not_ change any of their "mistaken" beliefs.
You will not convince one single C/C++/whatever bigot, if all that
they are looking for is market share -- and this _is_ what most of
them are looking for.

That said, my personal opinion is that many of these so-called
programmers will die out sooner or later: it is getting easier and
easier to do the jobs _they_ are capable of doing, so sooner or later
non-programmers will do these jobs.  What remain are jobs that
someone who believes in fitting the problem to the tool will not be
able to solve.

    MR> perpetuate it? What do you feel is better, greater
    MR> "mindshare", or less? Or are you only concerned about whether
    MR> _you_ can use Lisp, and screw anyone else?

The simple fact is that if all C++ programmers start using Common
Lisp instead, we will have gained nothing:  those who use C++ for the
wrong reasons will just use Common Lisp for the wrong reasons.  And
because those are mostly barely competent at programming itself, we
will just have more incompetent Common Lisp programmers, so what is
gained?

This whole issue is _not_ about wrong and right languages -- there
exists no such (mythic) thing.  It is about a higher level of
thinking, a higher level of abstraction.  If you can only think in
terms of loops and branches (or recursion and branches, doesn't
matter), you will not be able to provide the kind of solutions that
are (really) needed in today's world.  If on the other hand, you have
learned how to abstract away from the many languages in existence, and
have thus gained real insight into the fundamental mechanisms
involved, you will be able to choose the right language for the job
(or even devise a new one, if that is what is needed [1]), and you
will be able to provide that solution.

So to make the world a better place, you will not have to convince the
masses to use Lisp, or any other language, but you will need to
educate the masses better, raise their level of expertise and broaden
their horizon.  This is a very thankless job, as it has yet to be
demonstrated that such things can achieve permanent results.  And
possibly the best way to do this might be to just go ahead, and live
this ideal.

If this sounds like some kind of Buddhist talk on enlightenment, this
might indeed give us some clue as to what is needed here.  You will
not bring anyone any closer to enlightenment by selling them little
pictures of Buddha, and praising their magic value.  The way to
enlightenment is steep, and everyone must go his way on his own.

    MR> Can you blame me for expressing an interest in using Lisp,
    MR> when you do the same? My only crime is wishing that someone
    MR> might pay me to use Lisp, so I can avoid using C++. What gives
    MR> _you_ the right to this privilege, but not somebody else? Is

One doesn't get paid for "using Lisp", just like a plumber is not
getting paid for using a hammer.  You are getting paid for providing
solutions.  If Lisp (or Dylan, or Ada, or Haskell, or Perl, or C++ or
...) is part of that solution, I will make this clear to my employer
and will expect him to respect my decision, if he cannot convince me
of reasons precluding this selection (there _are_ a number of
non-technical factors to be considered!).

Should my employer not agree with my decision, even after I have
demonstrated its viability (again including any non-technical
considerations), I will make it perfectly clear to him that under
these circumstances I will not be able to deliver the demanded
solution with the efficiency and/or stability possible or even
required.  If the discrepancy between problem and available/mandated
tools should be so high that the delivery of a stable solution is
doubtful (e.g. a mission-critical application in VB, ...) --
depending on the circumstances -- I will even have to go so far as to
refuse to work with the tools provided.  This is part of my
responsibility as a programmer, whether to my company, any clients or
even humankind (e.g. flight safety software, etc.).

Of course this kind of stance has its risks, and I might find myself
unemployed one day for "speaking up".  But until now none of my
employers has refused to listen to me (though sometimes he/she has
disagreed and sometimes even convinced me).

If, for this kind of "personal engagement", I get "rewarded" by being
able to use the tools I want/need, well, so be it.  If on the other
hand those that blindly accept whatever "management" throws at them
don't get the same treatment, I can't change that.  Those who never
voice their opinions (to the right people, that is, which does not
include this newsgroup) will most certainly never change things.

    MR> that not elitism?

See above:  If the concept of "enlightenment" (or ability, if you
prefer a less mythic term) is considered to be elitist, then yes, this
stance is elitist.

    MR> I'm not surprised. You don't see how Lisp's poor image is
    MR> hurting the language. Are you suggesting that I'm the only

This is just the same discussion that is happening in the Linux/*BSD
groups, the Ada/Eiffel/... groups, etc. etc.  Sorry but these
OS/Languages/Products are that way _because_ they are not being sold
as the mass users'/programmers' tool.  I would argue that you cannot
compete in today's mass market without making the same kind of
trade-offs (and this includes the many non-technical trade-offs) that you
object to in the first place.  In other words: you don't have to
change the products to meet the mass market's desires, you have to
change the mass market.  But, Erik Naggum has correctly stated this
before, and still you argue, so I suppose you will not listen to me
either, so I should just stop here, and do something more
productive...

One more point, though:  From all the postings so far, it seems that
CL is _not_ the right language for your problems, one reason being the
lack of support for the OS you bemoan so much.  So, then, why not
choose another language?  There are certainly many languages out
there; it's not just the choice between C/C++ and CL that you seem to
suggest!

Just as a side note, let me mention the project I'm currently involved
with.  I'm currently in charge of the ongoing maintenance and
development of a simulation of final-assembly factory floors.  This
project is implemented in C++ and Tcl/Tk for historical reasons,
running on DEC Alpha/Intel Workstations/SMP-Servers under Linux and
Digital Unix.  It is split up into different processes, communicating
via pipes.  I personally would never have chosen C++ for this
particular project (nor Tcl/Tk for that matter; there are
other, better bindings of the Tk widget set).  Currently we are
rewriting the simulation in parts, to adapt to new requirements.

I'm currently using CLOS to prototype selected parts of the new
simulation model.  The final implementation will remain in C++ for the
time being, because of time pressures and the amount of work involved
in a complete rewrite.  Whereas rewriting in C++ is feasible, prototyping
in C++ is clearly out of the question, and CLOS is an immensely
helpful tool for doing this kind of work.  As a matter of fact, given
the option, I would rewrite the simulation completely in CL, as this
would make a number of things much simpler (for instance the
introduction of multi-threading, or the handling of configuration
files, or the handling of dynamic decision policies).  Sadly this is
currently not possible, as our deadlines are very tight,
but should time permit, I'm considering doing the rewrite then.
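
To give a flavour of what that prototyping looks like, here is a toy
sketch (the class and function names are made up for the illustration,
not taken from our simulation): stations and jobs become classes, and a
generic function carries the simulation step, so a new kind of station is
just another DEFMETHOD.

  (defclass station ()
    ((name  :initarg :name  :reader station-name)
     (queue :initform '()   :accessor station-queue)))

  (defclass job ()
    ((id :initarg :id :reader job-id)))

  (defgeneric advance (station)
    (:documentation "Process one queued job, if any, at STATION."))

  (defmethod advance ((s station))
    ;; Default behaviour: pop one job from the queue and report it done.
    (let ((job (pop (station-queue s))))
      (when job
        (format t "~A finished job ~A~%" (station-name s) (job-id job)))
      job))

Specialising ADVANCE for a subclass of STATION changes the behaviour
without touching the existing code, which is much of what makes this kind
of prototyping so much quicker than doing it in C++.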

So am I getting paid for "using Lisp"?  In no way, but I'm using Lisp
none the less...

    MR> BTW, insisting that more programmers know Lisp than SQL won't
    MR> help your credibility. High-street bookstores refute such
    MR> claims by stocking more books on software using SQL than
    MR> Lisp. Unless book publishers have yet to discover Lisp? Surely

But this is no contradiction, and this is what makes "discussion" very
difficult:

1) Just because there are more books explaining A than B, it does not
follow that more people know A than B (one silly example is A being
calculus and B being breathing).

2) If I look around, I will see many more books on Java in bookshops
than books on COBOL (not COBOL 90, thank you), yet you would have to
argue hard that there are more people who know Java than COBOL.

Regs, Pierre.
From: Emergent Technologies Inc.
Subject: Re: Lisp elitism
Date: 
Message-ID: <5r0sqi$66h$1@newsie.cent.net>
 Pierre Mai wrote in article ...
>
>The simple fact is that if all C++ programmers start using Common
>Lisp instead, we will have gained nothing:  those who use C++ for the
>wrong reasons will just use Common Lisp for the wrong reasons.  And
>because those are mostly barely competent at programming itself, we
>will just have more incompetent Common Lisp programmers, so what is
>gained?
>

What is gained is that I can write my code in MSVisual CLOS and
not have to jump through hoops to interface it to other code, or convince
my clients that CLOS is good.

Even barely usable Common Lisp is orders of magnitude more useful
than the bulk of the C and C++ out there.
From: Martin Rodgers
Subject: Re: Lisp elitism
Date: 
Message-ID: <MPG.e3e73b5200a47ec9898f2@news.demon.co.uk>
With a mighty <············@newsie.cent.net>,
········@eval-apply.com uttered these wise words...

> What is gained is that I can write my code in MSVisual CLOS and
> not have to jump through hoops to interface it to other code, or convince
> my clients that CLOS is good.

Is it really CLOS that they object to? How interested are they in the 
programming details, and why? I'm not doubting that people can do 
this, but the reasons may be relevant. If they have programmers with a 
bias toward one language (say, C++), or against another (Lisp), that 
could certainly make it necessary to justify your choice of language. 

If their language preferences are more influenced by marketing that 
promotes a non-Lisp language, then we should ask why the pro-Lisp 
marketing isn't also shaping their views.

> Even barely usable Common Lisp is orders of magnitude more useful
> than the bulk of the C and C++ out there.
 
It certainly is! Full CL would be ideal, but considering how far less 
ideal C++ is, a compromise would be preferable. Perhaps a subset, like 
CL without features like EVAL and COMPILE at _runtime_. Very little 
would be lost, compared to losing Lisp altogether.

"Visual CLOS" sounds very tasty. Where can I buy it? ;)
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
"An operating system is a collection of things that don't fit into a
 language. There shouldn't be one." Daniel Ingalls, Byte August 1981
From: Emergent Technologies Inc.
Subject: Re: Lisp elitism
Date: 
Message-ID: <5rdm2g$lci$1@newsie2.cent.net>
Martin Rodgers arranged ascii chars in this pattern...
>Is it really CLOS that they object to? How interested are they in the
>programming details, and why? 

Actually, at this point, I don't accept clients who want code written
in C++ unless they wish to pay a significant premium for it. 
Unfortunately, I
often end up programming in C or C++ because I believe it is in my
clients' best interest.  The main engineering motivations here are

1.  compatibility with Windows, which is much easier to achieve with tools
from MS. Try writing a Windows NT device driver in anything but C++.

2.  standalone executables. Again, try writing a Windows NT device driver
in anything but C++.

3.  code that can be extended and maintained by one of the millions
of reasonably competent C or C++ programmers out there.  This is actually
far less important than it sounds.  If there were a really good Lisp/C
interface then any reasonably competent programmer should be able to extend
any program using a language of his or her own choice (even Visual Basic).
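
Just to illustrate what such a Lisp/C interface looks like in use, here
is a tiny sketch with the CFFI library - a portable FFI layer that
arrived well after this thread, used here only as an example; in practice
each Lisp has its own foreign-function incantation, and the foreign type
chosen below is an assumption about a typical Unix libc.

  ;; Assumes CFFI has already been loaded, e.g. (asdf:load-system "cffi").
  (cffi:defcfun ("strlen" c-strlen) :unsigned-long
    (str :string))

  ;; (c-strlen "hello") => 5

Once the boundary is that thin, the C side can stay C and the Lisp side
can stay Lisp.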

>
>"Visual CLOS" sounds very tasty. Where can I buy it? ;)
>--

If LISPMachine were still around, they'd sell it to you now (and develop it
later!).
Give me some time.  If I had a couple of megabucks (or even megapounds)
I'd have some people writing it as we speak.  Unfortunately, I don't think
a good Visual CLOS is a one-person affair.
From: Martin Rodgers
Subject: Re: Lisp elitism
Date: 
Message-ID: <MPG.e45517a7f86b18f98990a@news.demon.co.uk>
My body fell on Emergent Technologies Inc. like a dead horse,
who then wheezed these wise words:

> The main engineering motivations here are
> 
> 1.  compatibility with Windows, which is much easier to achieve with tools
> from MS. Try writing a Windows NT device driver in anything but C++.

Ah, yes. The device driver issue.
 
> 2.  standalone executables. Again, try writing a Windows NT device driver
> in anything but C++.

MS give you a certain amount of support, if you use C++ or VB. As for 
writing device drivers, could that be a special case? I get the 
impression that for MS, it isn't. They expect you to use a language 
that they sell. That's understandable. It's in their interest to lock 
us into the products that they sell.

Once a software house has invested heavily in one or two languages, 
it's in their interest to exploit them heavily, and not diversify by 
using even more languages. Perhaps C++ is "popular" for the same 
reason that the surname "Smith" is common. Not because of any inherent 
superiority, but simply because a small bias eventually becomes a big 
bias. A new company may be so small that they can only afford to 
invest in one language. If that language happens to be X (where X can 
be any language you like), then they may still be using it years 
later, when they can afford to also use one or two other languages. 
The "leaverage" they get from products for a particular language may 
be smaller, because their language skills are more diverse.

Another problem is that in a hypothetical company, like the kind I've 
described above, one of their programmers may be the first at that 
company to use a language other than X. Language Y then has a massive 
disadvantage, from their point of view, as only one of their 
programmers can use it. Any project that uses language Y will depend 
on the skills of a single programmer, making that project vulnerable.
If that single programmer is not available on a day when something 
_must_ be fixed in that project, time is lost. Time can be expensive, 
and it doesn't take long for the cost of using language Y to exceed 
the cost of the development system for Y.

The fear of losing money can be enough to deter companies from trying 
a language that their programmers don't currently use. You can't try 
something new without risk, but business people don't like risks that 
they don't understand. Programming issues have a tendency to be things 
that people higher up in a company don't appreciate. If they happen to 
be, or have once been, programmers themselves...then they may have 
their own personal biases. E.g. Bill Gates and his bias toward Basic.
The whims of a single person can, and do, dictate the choice of 
language used by vast numbers of programmers.

Clients are another matter. I can't comment on them too much, as I 
rarely meet them myself. However, in my experience it's their demands 
that determine the choice of language. In other words, if they ask for 
a Windows device driver, then the choice would likely be C/C++. If 
they want software that fits onto a single floppy disk, this too may 
force the choice to be C/C++.

Perhaps these demands may sound unreasonable, but they are still real 
demands. Should we instead be looking for clients with more reasonable 
demands? How easy is it to find such creatures? I guess that'll depend 
on who you know, where you look, etc.

BTW, I don't often get asked to write device drivers. So far, I've 
managed to avoid it. ;)
 
> 3.  code that can be extended and maintained by one of the millions
> of reasonably competent C or C++ programmers out there.  This is actually
> far less important than it sounds.  If there were a really good Lisp/C
> interface then any reasonably competent programmer should be able to extend
> any program using a language of his or her own choice (even Visual Basic).

ActiveX? The Script Engine classes do this. Unfortunately, that means 
using MFC, and no Lisp that I know of has support for MFC. Pythonwin 
does, which is how it supports ActiveX Scripting.

> If LISPMachine were still around, they'd sell it to you now (and develop it
> later!).
> Give me some time.  If I had a couple of megabucks (or even megapounds)
> I'd have some people writing it as we speak.  Unfortunately, I don't think
> a good Visual CLOS is a one-person affair.

I fear that you may be right. There are loads of neat things, like 
this, that never get done because nobody can commit themselves to it, 
or not enough people can make the commitment. It's risky, and there 
are too many safe jobs for C++ programmers. So it goes.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
    "My body falls on you like a dead horse" -- Portait Chinois
            Please note: my email address is gubbish
From: Will Hartung
Subject: Re: Lisp elitism
Date: 
Message-ID: <vfr750EDoH94.ELI@netcom.com>
Pierre Mai <····@cs.tu-berlin.de> writes:

>If this sounds like some kind of Buddhist talk on enlightenment, this
>might indeed give us some clue as to what is needed here.  You will
>not bring anyone any closer to enlightenment by selling them little
>pictures of Buddha, and praising their magic value.  The way to
>enlightenment is steep, and everyone must go his way on his own.

Oh Great! Thanx a LOT Pierre! Pop my bubble!

I guess I won't be asking Erik for an autographed photo now.

There goes my silver bullet, I guess.

:-)


-- 
Will Hartung - Rancho Santa Margarita. It's a dry heat. ······@netcom.com
1990 VFR750 - VFR=Very Red    "Ho, HaHa, Dodge, Parry, Spin, HA! THRUST!"
1993 Explorer - Cage? Hell, it's a prison.                    -D. Duck
From: Holger Schauer
Subject: Re: Lisp elitism
Date: 
Message-ID: <64d8ochikc.fsf@gmd.de>
>>"MR" == Martin Rodgers wrote on 18 Jul 97 19:51:57 GMT:

 MR> I _do_ know that I've never seen a Lisp column in a mainstream
 MR> computer magazine (or any mag).

Dr. Dobb's Journal had at least two this year already (although they
used XLisp and did not show anything particularly fancy). I am still
waiting for a nice article about recent commercial Common Lisp
systems, showing a little bit about CLOS or macros so that people can
have a look at what CL can do today (and this would surely do CL a
greater favour than your advocacy in cll. Do you believe that
C++-fanatics ever take a look at your postings in cll ? Go and post in
some other groups if you really wish to convince anybody.)

Holger
From: Martin Rodgers
Subject: Re: Lisp elitism
Date: 
Message-ID: <MPG.e3dbdab9dbcf4e69898f0@news.demon.co.uk>
With a mighty <··············@gmd.de>,
··············@gmd.de uttered these wise words...

> Dr. Dobb's Journal had at least two this year already (although they
> used XLisp and did not show anything particularly fancy). I am still

Would that be the infamous XLISP vs Perl comparison? I've not seen it 
myself (I've yet to see an issue of Dr Dobbs), but it didn't sound 
like a _Lisp_ column. How regular is it? A regular Lisp column could 
be what persuades me to subscribe to the magazine. Esp if it's a 
monthly column.

> waiting for a nice article about recent commercial Common Lisp
> systems, showing a little bit about CLOS or macros so that people can
> have a look at what CL can do today (and this would surely do CL a
> greater favour than your advocacy in cll. Do you believe that
> C++-fanatics ever take a look at your postings in cll ? Go and post in
> some other groups if you really wish to convince anybody.)
 
I've seen a column in Windows Tech Journal that was nearly a Lisp 
column, if only because it often featured Lisp code. The ideas 
discussed in it would certainly be familiar to Lisp people. I seem to 
recall Kelly Murray writing a few columns for the same mag.

I'm not trying to convince C++ fanatics that Lisp is a good tool. I'm 
trying to convince a few Lisp people that these fanatics are a real 
problem for some Lisp programmers. Do we know how many programmers 
would be using Lisp if not for other people telling them to use C++?

How would I last in comp.lang.c++? About as long as I do in Real Life, 
I suspect. I'd get the "blank look" treatment. I get it already, in 
other online forums (cix.co.uk, compuserve.com). It's so much easier 
to ignore the problem and just avoid C++ people and their petty 
problems. I'm not ready to give up on people offering me work, and go 
chasing people who are _not_.

If I thought that this was only my problem, I wouldn't bother starting 
a discussion about it. Ironically, it's rants like Erik Naggum's that 
make me think that it's time something changed. Why run away from a 
problem when we could discuss it and some possible solutions? I don't 
feel that anyone here wants to discuss it. Those who do, tend to reply 
by email. You can't have it both ways: either there's something wrong 
with non-Lisp software (e.g. almost all Windows software), and it 
should be changed, or there's no problem at all. Simply complaining  
about the situation but refusing to do anything constructive is not 
too impressive. Anyone can rant. I'd like to _discuss_ it.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
"An operating system is a collection of things that don't fit into a
 language. There shouldn't be one." Daniel Ingalls, Byte August 1981
From: Alasdair McIntyre
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <453787011wnr@thermo.thermo.reply-without-this-text.co.uk>
In article: <············@cdn-news.telecom.com.au>  
·····@vus002.telecom.com.au (Satan - The Evil 1) writes:
> 
> After reading the "Lisp in the Real World" thread I have
> the following opinion :) to raise and would be interested
> to hear some other "non-religious" opinions regarading
> Lisp compared to languages such as C/C++ in the area of
> application speed and efficiency.
[snip]
> Have you noticed...
>   You don't see fast numerical libraries written in Lisp.
>   You don't see scientific libraries written in Lisp.
>   You don't see commercial games written in Lisp.
>   You don't see application suites written in Lisp.
[snip]

I suggest you take a look at the following reference:

  R. Bradley Andrews (············@rbacomm.com)
  "Speeding Game Development and Debugging (Lisp
  finds a niche in the game development arena)",
  pp.28-31, Object Magazine, May 97,
  SIGS Publications (http://www.sigs.com).

Here are some edited quotes which you may find revealing:

  "Nichimen Graphics used LISP to devlop their powerful
   N-World development environment... Nintendo used N-World
   to produce all the characters in their flagship 64-bit
   game, "Mario 64" (including Mario himself)."

  "...Naughty Dog Software - makers of "Crash Bandicoot,"
   the current mascot for the Sony Playstation - have proven
   that it [LISP] can [hook together a modern game that gets
   played]. Like their previous release, "Way of the Warrior,"
   their current bestseller uses LISP code for significant
   parts of the game, including character control and AI
   functionality."

  "Gavin [Naughty Dog co-founder] emphasizes that some ideas
   about LISP are outdated... The speed issue is one where
   reality is different than the perception. "It is easy to
   construct a simple dialect which is just as efficient as
   C, but retains the dynamic and consistent qualities that
   make LISP a much more effective expression of one's
   programming intentions," adds Gavin."

   Copyright acknowledged.

Hope you find this informative.
-- 
alasdair mcintyre - thermoteknix
<initial>"dot"<surname>"at"<company>"dot"com
From: Fred Haineux
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <bc-0907971141180001@17.127.19.49>
·····@vus002.telecom.com.au (Satan - The Evil 1) wrote:
|  After reading the "Lisp in the Real World" thread I have
|  the following opinion :) to raise and would be interested
|  to hear some other "non-religious" opinions regarading
|  Lisp compared to languages such as C/C++ in the area of
|  application speed and efficiency.
|  
|  *Please don't take this as a flame or a troll*

Why not? It *IS* a troll. You can tell it's a troll because anyone with
half-a-brain would have looked in DejaNews and seen that this "brilliant
insight" is posted about once a WEEK, and has been for the last ZILLION
YEARS.

Does this reflect well on C programmers' intelligence or programming
style, if they can't even consider the idea that someone somewhere has
already talked about this topic? No wonder "reusable code" is such a
hot-button!

IF you want real answers to your questions, GET OFF YOUR ASS and go look.
Go ask a Lisp vendor what "real world" apps have been written. For crying
out loud, go read the back scroll of comp.lang.lisp
(<http://www.dejanews.com/> found almost 500 articles when I did a simple
query). Don't bother us Lisp weirdos. We're busy hacking dumb things with
lambda calculus.

IF you don't want answers, just go back to your C++ and write some code.
You have my OFFICIAL permission to do so. If you want to feel that C++ is
better, GO RIGHT AHEAD. I don't CARE.

After all, everyone knows that the BEST language in the ENTIRE WORLD is
COBOL, because there's more COBOL code than ALL OTHER LANGUAGES PUT
TOGETHER.

Have a really nice day,
HEINOUS

ps. If application speed is so damn important, explain damn near any
Microsoft product. "Slow, huge programs"? The most popular software in the
world -- and it's written in C.
From: Martin Cracauer
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <1997Jul9.100554.20174@wavehh.hanse.de>
[Followups trimmed]

You are a complete idiot: you know nothing about the problems in
optimizing highly abstract code and obviously don't program yourself.
But for other interested readers...

See http://www.cons.org/cracauer/lisp-c-fp-bench/ for an example of
equivalent fast Common Lisp and C floating point code. No overhead in
space or time, given a good Lisp compiler.
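
To give a flavour of what such code looks like (this little function is
an illustration of the technique, not taken from the benchmark page
above): with type declarations and an OPTIMIZE setting, a good CL
compiler can open-code the float arithmetic and keep it unboxed inside
the loop.

  (defun sum-of-squares (v)
    (declare (optimize (speed 3) (safety 0) (debug 0))
             (type (simple-array double-float (*)) v))
    ;; With these declarations the multiplications and additions compile
    ;; down to raw machine float operations, much as the equivalent C does.
    (let ((acc 0.0d0))
      (declare (type double-float acc))
      (dotimes (i (length v) acc)
        (incf acc (* (aref v i) (aref v i))))))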

·····@vus002.telecom.com.au (Satan - The Evil 1) writes:

>After reading the "Lisp in the Real World" thread I have
>the following opinion :) to raise and would be interested
>to hear some other "non-religious" opinions regarading
>Lisp compared to languages such as C/C++ in the area of
>application speed and efficiency.

Instead of reading this quite useless thread you would be better off
reading the recent thread where people actually tried to write
efficient C++ code using C++ abstraction methods like the STL.

While it is quite easy to write efficient code in C or C++ without
advanced features, C++ hasn't yet succeeded in providing higher
abstraction levels without major performance hits.

>I would say that C++ has such a bad repution amongst Lisp
>programmers because it takes several years to become
>a very good and efficient C++ programmer whilst you can
>quickly become an efficient Lisp programmer.  The massive
>number of *bad* C++ programmers certainly doesn't help
>its reputation.  An experienced C++ programmer can write
>truelly elegant, safe, and efficient code that leaves
>the equivalent Lisp code in the dust.

Only if you write your code at a pretty low level.

No doubt, writing efficient Common Lisp code requires a lot of
knowledge, but acquiring basic Common Lisp knowledge plus Common Lisp
optimization knowledge isn't really more work than learning to use C++
efficiently.

>Of course, the ease at which you can write good code in
>Lisp is a major point in its favour.

This is probably the most complete bullshit in your posting. Writing
*good* code is hard in any language. There is a lot of bad Lisp code
as well as bad C++ code.

You sound like "good" code doesn't have to be efficient in your
opinion. No wonder you sound like you don't code yourself. What make
code "good" in your opinion?

>But to sum up; we always seem to push hardware to its 
>limits and because of this languages such as C and C++
>will maintain their edge over languages such as Lisp.
>I suppose one day we may have computers with more 
>processing power than we could ever make use of 
>(e.g. quantum computers) and then it will be commercially
>feasible to throw away the low-level languages.  But I
>imagine by that time Artificial Intelligences will have
>done away with programmers alltogether.

Completely wrong point. There are no CPU cycles and RAM cells to waste,
not yet, and not in the future; the applications will require the resources
and programmers will never be allowed to use them to ease their
job. No inefficient language will ever make it into a mainstream
application language.

However, Common Lisp no longer belongs to the languages that waste
resources. There is an initial amount of memory that is bigger
than the initial amount a C++ program requires, but this is a constant
overhead. When the program grows under the application's data
requirements, the growth is equivalent in both languages. And today's
applications' requirements make the initial Lisp overhead for the
runtime look negligible, less than 10% overhead. That is nothing; any
coding inefficiency changes your resource requirements by a much
larger factor.

Martin
-- 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%% ···············@wavehh.hanse.de http://cracauer.cons.org %%%%%%%%%%%
%%
%% Unwanted commercial email will be billed at $250. You agree by sending.
%%
%% If you want me to read your usenet messages, don't crosspost to more 
%% than 2 groups (automatic filtering).
From: Craig Brozefsky
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <m2wwmukj7y.fsf@subject.cynico.com>
········@wavehh.hanse.de (Martin Cracauer) writes:

> Completely wrong point. There are no CPU cycles and RAM cells to waste,
> not yet, and not in the future; the applications will require the resources
> and programmers will never be allowed to use them to ease their
> job. No inefficient language will ever make it into a mainstream
> application language.

Visual Basic has a rather sizeable market and certainly shouldn't be
discounted here.
From: Martin Rodgers
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <MPG.e333939a932e31b9898cf@news.demon.co.uk>
With a mighty <··············@subject.cynico.com>,
·····@onshore.com uttered these wise words...

> Visual Basic has a rather sizeable market and certainly shouldn't be
> discounted here.

It may sound odd, but we could all learn a few things from VB. I'm not 
sure that it's the _language_ that makes this product so successful. I 
suspect that it's the IDE and the integration with the language.

If you're unfamiliar with the Visual Basic IDE, take a look at the IDE 
for Visual TCL and the screen shot. It's very similar to what a VB 
programmer using Windows will see. Forget the language and think about 
the IDE, what it does, and how useful that is to programmers. Not just 
Windows programmers, but also X Windows programmers. Yep, VisualTCL  
runs under a non-MS system. The screen shot is from a Linux machine.

VisualTCL: <URL:http://www.neuron.com/stewart/vtcl>

Performance of compiled code and language expressiveness aren't the 
only factors that affect programmer productivity. Sometimes they don't 
even enter the picture! Alas. Still, if programmer productivity is  
your aim, and tools like VB and VisualTCL achieve that (or at least 
many programmers believe that they do - play semantic games if you 
like), then why not discuss them?
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
            Please note: my email address is gubbish
                 Will write Lisp code for food
From: Mark Greenaway
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <868842387.689445@cabal>
Craig Brozefsky <·····@onshore.com> writes:

>········@wavehh.hanse.de (Martin Cracauer) writes:

>> Complete wrong point. There are no CPU cycles and RAM cells to waste,
>> not yet, not in future, the applications will require the resources
>> and programmers will never be allowed to use them to ease their
>> job. No inefficient language will ever make it into a mainstrain
>> application language.

>Visual Basic has a rather sizeable market and certainly shouldn't be
>discounted here.

If the original poster's statement were true, there would be no C. We would
all still be programming in assembly.

--
Mark
Certified Waifboy                   And when they come to ethnically cleanse me
                                    Will you speak out? Will you defend me? 
http://www.st.nepean.uws.edu.au/~mgreenaw         - Ich bin ein Auslander, PWEI
From: Bill House
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <01bc9008$15a4ef00$03d3c9d0@wjh_dell_133.dazsi.com>
Mark Greenaway <········@st.nepean.uws.edu.au> wrote in article
<················@cabal>...
>
> If the original poster's statement was true, there would be no C. We would
> all still be programming in assembly.
>
So true. Actually, as hardware resources get cheaper and faster, it really
becomes a question of who should be doing all the work -- the person or the
machine. I'll opt for the machine doing all the work whenever practical. After
all, that's what machines are there for -- to do the work that makes people's
lives easier.



Bill House
-- 
Note: my e-mail address has been altered to
confuse the enemy. 
From: Martin Rodgers
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <MPG.e34388cbbcbe7709898d2@news.demon.co.uk>
With a mighty <··························@wjh_dell_133.dazsi.com>,
······@nospam.housewebs.com uttered these wise words...

> So true. Actually, as hardware resources get cheaper and faster, it really
> becomes a question of who should be doing all the work -- the person or the
> machine. I'll opt for the machine doing all the work whenever practical. After
> all, that's what machines are there for -- to do the work that makes people's
> lives easier.

Most definitely. For years I've read that machines have better 
memories than we do, which is why I find the arguments against GUIs so 
odd. I feel the same way about high level languages - and the 
arguments against them.

It's almost as if some people are saying, "Bring back big timesharing 
systems, batch processing and punched cards." Even if most programmers 
wanted these things, and could justify them, we wouldn't necessarily 
get them. Not now, anyway!

If anyone still wants these things, they can certainly have them. I 
just don't see why we should all sink to the same low level. Can you 
seriously imagine all those multimedia games being thrown away, in 
favour of those old lunar lander sims? I've heard Andy Grove claiming 
that "Intel is bigger than the entire entertainment industry" (he was 
knocking the Java/NetPC idea). I wouldn't go that far, but there's an 
interesting point to be made about what _users_ want from a machine. 
The techies who like putting the machine first and humans second may 
want to turn the clock back (that may explain the myths about "wasted 
CPU cycles"), but I'd expect to see strong resistance from the rest of 
the industry.

Pandora's box has been opened. Try putting fire back in the box, now 
that so many people are playing with it.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
            Please note: my email address is gubbish
                 Will write Lisp code for food
From: Lynton Alan Fletcher
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <34376fdc.0@news.camtech.net.au>
Martin Cracauer wrote in message <·····················@wavehh.hanse.de>...
>[Followups trimmed]
>
>You are a complete idiot who knows nothing about the problems in
>optimizing highly abstract code and obviously don't program yourself,
>but for other interested reader...

You may "program" yourself, but you clearly don't "Software Engineer".
There are factors other than execution efficiency. There's portability,
reliability, modifiability, readability just to name a few. At the end of
the
day, most customers won't care if their software executes several
milliseconds slower!

For your information, comparing the execution speed of C++ and LISP
is a useless exercise. LISP works best on a specialised LISP machine.
C++ was designed for machines based on the Von-Neumann principle.

Try executing a C++ program on a LISP machine!

true LISP also has another advantage: you can use recursion without
worrying about stack overhead. True LISP compilers use a special
algorithm to change recursion into iteration.
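
To make that concrete, here is a minimal Common Lisp sketch (a toy
SUM-LIST of my own; note that the Common Lisp standard doesn't require
tail-call elimination, though most compilers do it). Because the
recursive call is the last thing the function does, the compiler can
turn it into a plain jump:

(defun sum-list (lst &optional (acc 0))
  ;; The recursive call is in tail position, so a compiler that merges
  ;; tail calls runs this in constant stack space -- i.e. as a loop.
  (if (null lst)
      acc
      (sum-list (rest lst) (+ acc (first lst)))))

;; (sum-list '(1 2 3 4 5)) => 15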


Cheers,
Lynton Fletcher
From: Bryant Brandon
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <20CE4640EB882CF6.EDB044D848198401.3E18CED7D6D02B76@library-proxy.airnews.net>
   Ahh!  Damn troll!  Die!  Die!  Die!

B.B.
From: ········@hotmail.com
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <868619535.14599@dejanews.com>
In article <············@cdn-news.telecom.com.au>,
  ·····@vus002.telecom.com.au (Satan - The Evil 1) wrote:
> [...]
> Have you noticed...
>   You don't see fast numerical libraries written in Lisp.
>   You don't see commercial games written in Lisp.
> [...]
> But to sum up; we always seem to push hardware to its
> limits and because of this languages such as C and C++
> will maintain their edge over languages such as Lisp.

I guess I will get the C-programming community against me by saying this,
but the truth has to be told:

Maybe, C is pretty fast, but its speed is nothing compared to the speed a
_real_ programmer can achieve.

Using assembler. (If the name doesn't ring a bell, I can tell you that
it's the only low-level language there is where every instruction is
translated into a single machine code instruction, i.e. assembler is
only an easy-to-use representation of machine code)

C doesn't push hardware to its limits; C pushes hardware _manufacturers_ to
their limits. Manufacturers who, since the dawn of high-level languages,
have been forced to improve the speed of hardware, instead of the
programmers writing fast, efficient and clean code.

Also I think all commercial games have code written in assembler.

For speed.

/Alfons

-------------------==== Posted via Deja News ====-----------------------
      http://www.dejanews.com/     Search, Read, Post to Usenet
From: Hrvoje Niksic
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <kig67uhtj67.fsf@jagor.srce.hr>
········@hotmail.com writes:

> Maybe, C is pretty fast, but its speed is nothing compared to the
> speed a _real_ programmer can achieve.

Exactly.

finger -l ·······@fly.cc.fer.hr

...or look at Appendix A of The Jargon Lexicon.

> Also I think all commercial games have code written in assembler.

Actually, you are wrong here.  Most of the commercial games written in
the last few years have been written in C, with bottlenecks optionally
coded in assembler for speed.

-- 
Hrvoje Niksic <·······@srce.hr> | Student at FER Zagreb, Croatia
--------------------------------+--------------------------------
Ask not for whom the <CONTROL-G> tolls.
From: ········@hotmail.com
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <868875811.23946@dejanews.com>
In article <···············@jagor.srce.hr>,
  Hrvoje Niksic <·······@srce.hr> wrote:
>
> ········@hotmail.com writes:
>
> > Maybe, C is pretty fast, but its speed is nothing compared to the
> > speed a _real_ programmer can achieve.
>
> Exactly.
>
> finger -l ·······@fly.cc.fer.hr
>
> ...or look at Appendix A of The Jargon Lexicon.
>
> > Also I think all commercial games have code written in assembler.
>
> Actually, you are wrong here.  Most of the commercial games written in
> the last few years have been written in C, with optional bottle-necks
> coded in assembler for speed.
>
That's what I meant.

-------------------==== Posted via Deja News ====-----------------------
      http://www.dejanews.com/     Search, Read, Post to Usenet
From: Emergent Technologies Inc.
Subject: [NOISE] Re: But C is *SLOW* Compared to Lisp
Date: 
Message-ID: <5qk8t6$23v$2@newsie.cent.net>
Every time I try to benchmark C performance, I have to discard
a number of runs to avoid dividing by zero... or is getting
the correct answer not an important factor in C benchmarking?

>C doesn't push hardware to its limits, C push hardware _manufacturers_ to
>their limits. Manufacturers who since the dawn of high-level languages
>has been forced to improve the speed of hardware, instead of the
>programmers writing fast, efficient and clean code.

C tends to push *me* to *my limits*.
 
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33CA6A40.7A44@capital.net>
Satan - The Evil 1 wrote:
> 
> After reading the "Lisp in the Real World" thread I have
> the following opinion :) to raise and would be interested
> to hear some other "non-religious" opinions regarading
> Lisp compared to languages such as C/C++ in the area of
> application speed and efficiency.
> 
> *Please don't take this as a flame or a troll*
> 
> Lisp seems OK for prototyping and implementing higher level
> business rules and for general scripting, but it simply
> will never be able to compete with languages such as
> C and C++ (and to a lesser extent languages such as VB
> and Delphi under Windows).
> 
> Have you noticed...
>   You don't see fast numerical libraries written in Lisp.
>   You don't see scientific libraries written in Lisp.
>   You don't see commercial games written in Lisp.
>   You don't see application suites written in Lisp.
> 
> In fact, you don't see any mainstream commercial applications
> written in Lisp for the the basic reason that any
> competitor will simply go out and write their competing
> application in C/C++ and make a faster, more responsive
> application that makes more efficient use of machine
> resources.  Why do you think that, despite the massive
> amount of hype, no mainstream apps have been written
> in Java? Because it is too slow for the real world when
> compared to equivalent code written in C or C++.
> 
> I would say that C++ has such a bad repution amongst Lisp
> programmers because it takes several years to become
> a very good and efficient C++ programmer whilst you can
> quickly become an efficient Lisp programmer.  The massive
> number of *bad* C++ programmers certainly doesn't help
> its reputation.  An experienced C++ programmer can write
> truelly elegant, safe, and efficient code that leaves
> the equivalent Lisp code in the dust.
> 
> Of course, the ease at which you can write good code in
> Lisp is a major point in its favour.
> 
> But to sum up; we always seem to push hardware to its
> limits and because of this languages such as C and C++
> will maintain their edge over languages such as Lisp.
> I suppose one day we may have computers with more
> processing power than we could ever make use of
> (e.g. quantum computers) and then it will be commercially
> feasible to throw away the low-level languages.  But I
> imagine by that time Artificial Intelligences will have
> done away with programmers alltogether.
> 
> OK now... go ahead an rip my argument to shreds :)
> 
> Cheers,
> - PCM


	Lisp is a deception. All Lisp compilers and interpreters that 
I've seen have been written in C, and run on top of a C program. I've 
seen a lot of LISP and PROLOG programmers, especially at the 
postgraduate level of computer science, who think that Lisp functions 
the same way as mathematics. They think that a call to a recursive 
function instantaneously returns a result. The fact is, these 
functions are broken down into machine-level instructions and executed 
the same way as a for-next loop.

	AI is a bunch of garbage as well. Do you know what the main goal 
of AI is? It is to develop a program that a person cannot 
distinguish from a human being. What does that have to do with 
intelligence? It's just an emulator. 

	The bottom line is, all computer programs, including AI 
programs, are just fast pocket calculators.



					Peaceman
From: Martin Rodgers
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <MPG.e348d6681633af79898db@news.demon.co.uk>
With a mighty <·············@capital.net>,
········@capital.net uttered these wise words...

> 	The bottom line is, all computer programs, including AI 
> programs, are just fast pocket calculators.

Nice troll. Nothing original, alas.

You might like to read the FAQ for comp.lang.lisp, and check every one 
of your "claims" against the reality. Section 4 should be esp 
illuminating!

<URL:http://www.cs.cmu.edu/afs/cs.cmu.edu/project/ai-
repository/ai/html/faqs/lang/lisp/top.html>

Alternately, try the "rtfm" site at MIT:
<URL:ftp://rtfm.mit.edu/pub/usenet-by-hierarchy/comp/lang/lisp/>

RTFM is rather appropriate, in your case. Read the FAQ and weep.
Then try something rather more constructive, please.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
            Please note: my email address is gubbish
                 Will write Lisp code for food
From: Marco Antoniotti
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <scf3ephuzij.fsf@infiniti.PATH.Berkeley.EDU>
In article <·············@capital.net> Sajid Ahmed the Peaceman <········@capital.net> writes:

   From: Sajid Ahmed the Peaceman <········@capital.net>
   Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
   Date: Mon, 14 Jul 1997 14:04:48 -0400
   Organization: Logical Net
   Reply-To: ········@capital.net
   Lines: 82
   References: <············@cdn-news.telecom.com.au>
   Mime-Version: 1.0
   Content-Type: text/plain; charset=us-ascii
   Content-Transfer-Encoding: 7bit
   X-Mailer: Mozilla 3.01 (WinNT; I)
   Xref: agate comp.lang.lisp:29244 comp.programming:52315 comp.lang.c++:281622

   Satan - The Evil 1 wrote:
   > 

   > OK now... go ahead an rip my argument to shreds :)
   > 
   > Cheers,
   > - PCM


	   Lisp is a deception. All lisp compilers and interpreters that 
   I've seen have been written in C, and run on top of a C program. I've
   seen a lot of LISP and PROLOG programmers, especially in the post
   graduate 
   level of computer science, think that lisp functions the same way as 
   mathematics. They think that a call to a recursive function
   instantaneously 
   returns a result. The fact is, these function is broken down into
   machine
   level instructions, and is executed the same way as a for next loop. 

Coming from a person who might seem to think that binary tree
traversal can be executed in "the same way as a for next loop", I
cannot help but dismiss the whole argument.  Why have C (or Pascal or
Algol or Lisp) in the first place?  Let's stick to FORTRAN. :)

Cheers
-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Thomas Hallock
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <c.hallock-1507971022590001@slip-90-9.ots.utexas.edu>
> Coming from a person who might seem to think that binary tree
> traversal can be executed in "the same way as a for next loop", I
> cannot help to dismiss the whole argument.  Why have C (or Pascal or
> Algol or Lisp) in the first place?  Let's stick to FORTRAN. :)

Well, correct me if I'm wrong, but I have read in many places that *all
recursive functions can be expressed as iterations*. Yeah, it's hard to
believe, but it's been proven; I just don't really know how to do such a
work of magic.
Note that this is done automatically in some really good compilers!
From: Marco Antoniotti
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <scf7mes5jab.fsf@infiniti.PATH.Berkeley.EDU>
   Delivery-Date: Tue, 15 Jul 1997 08:21:57 -0700
   Date: Tue, 15 Jul 1997 10:23:04 +0000
   From: ·········@mail.utexas.edu (Thomas Hallock)
   Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
   References: <············@cdn-news.telecom.com.au> <·············@capital.net> <···············@infiniti.PATH.Berkeley.EDU>
   Organization: bovine soft.

   (A copy of this message has also been posted to the following newsgroups:
   comp.lang.lisp, comp.programming,comp.lang.c++)

   > Coming from a person who might seem to think that binary tree
   > traversal can be executed in "the same way as a for next loop", I
   > cannot help to dismiss the whole argument.  Why have C (or Pascal or
   > Algol or Lisp) in the first place?  Let's stick to FORTRAN. :)

   well, correct me if i'm wrong, but i have read in many places that *all
   recursive functions can be expressed as iterations* yea, it's hard to
   believe, but it's been prooven, but i don't really know how to do such a
   work of magic.
   note that this is done automatically in some really good compilers!

You are right.  I was kinda flame baiting :).  You can always express a
recursive function with a loop.  However, some functions (e.g. binary
tree traversal) are "inherently" recursive, i.e. to remove the
recursive calls you have to explicitly manage a stack.  Old FORTRAN
is a good example of this burden posed on the programmer.
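
To make the point concrete, here is a minimal Common Lisp sketch
(assuming, for illustration only, that a node is a list of the form
(key left right) and NIL is the empty tree): the recursion is gone,
but only because the code now pushes the pending subtrees onto a
stack of its own.

(defun preorder-iterative (tree)
  ;; Iterative preorder traversal with an explicitly managed stack.
  (let ((stack (list tree))
        (result '()))
    (loop while stack
          do (let ((node (pop stack)))
               (when node
                 (push (first node) result)    ; visit the key
                 (push (third node) stack)     ; right subtree, done later
                 (push (second node) stack)))) ; left subtree, done next
    (nreverse result)))

;; (preorder-iterative '(1 (2 nil nil) (3 (4 nil nil) nil)))
;; => (1 2 3 4)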

Cheers
-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: W. Daniel Axline
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33CC7423.5A82@cse.unl.edu>
Marco Antoniotti wrote:
> 
>    Delivery-Date: Tue, 15 Jul 1997 08:21:57 -0700
>    Date: Tue, 15 Jul 1997 10:23:04 +0000
>    From: ·········@mail.utexas.edu (Thomas Hallock)
>    Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
>    References: <············@cdn-news.telecom.com.au> <·············@capital.net> <···············@infiniti.PATH.Berkeley.EDU>
>    Organization: bovine soft.
> 
>    (A copy of this message has also been posted to the following newsgroups:
>    comp.lang.lisp, comp.programming,comp.lang.c++)
> 
>    > Coming from a person who might seem to think that binary tree
>    > traversal can be executed in "the same way as a for next loop", I
>    > cannot help to dismiss the whole argument.  Why have C (or Pascal or
>    > Algol or Lisp) in the first place?  Let's stick to FORTRAN. :)
> 
>    well, correct me if i'm wrong, but i have read in many places that *all
>    recursive functions can be expressed as iterations* yea, it's hard to
>    believe, but it's been prooven, but i don't really know how to do such a
>    work of magic.
>    note that this is done automatically in some really good compilers!
> 
> You are right.  I was kinda flame baiting :).  You can always express a
> recursive function with a loop.  However, some functions (e.g. binary
> tree traversal) are "inherently" recursive.  I.e. to remove the
> recursive calls you have to explicitely manage a stack.  Old FORTRAN
> is a good example of this burden posed on the programmer.

I have written binary tree traversal algorithms which don't use a stack.
I would be interested to know whether this is true for some other
algorithms, however.
Here's an iterative preorder binary tree traversal.
(Preorder means process the root, then process the subtrees.)

void BT::Preorder2Screen()
{
  BTnode *temp = this->root;    // used to find a node with an untraversed right subtree
  BTnode *current = this->root; // start at the root
  while(temp!=NULL) // need to do this for each node
  {
    cout<<current->key<<" ";
    if(current->Lchild)
    {
      current = current->Lchild;
    }
    else
    {
      if(current->Rchild!=NULL)
      {
        current = current->Rchild;
      }
      else //go backwards
      {
        temp = current->parent;
        while((temp!=NULL)&&   // test for NULL *before* dereferencing temp
              ((temp->Rchild == NULL)||(temp->Rchild == current)))
        {
          current = temp;
          temp = temp->parent;
        }
        if (temp!=NULL)
          current = temp->Rchild;
      }
    }
  }//endwhile
  cout<<"\n\n";
}//end Preorder2Screen()
From: Michael Schuerig
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <19970716145002792776@rhrz-isdn3-p5.rhrz.uni-bonn.de>
W. Daniel Axline <·······@cse.unl.edu> wrote:

> I have written binary tree traversal algorithms which don't use a stack.
> I would be interested to know if this were true for some other
> algorithms,
> however.
> Here's an iterative preorder binary tree traversal. 
[snip]
>       temp = current->parent;
                        ^ That's the catch.

Effectively you're using a stack. Not the function call stack, but a
stack embedded into your tree. It's fine as long as you need the parent
pointers anyway, but if you only use them for traversal they impose a
possibly significant space overhead and they take time to update.

Michael

--
Michael Schuerig              The usual excuse for our most unspeakable
·············@uni-bonn.de       public acts is that they are necessary.
http://www.uni-bonn.de/~uzs90z/                       -Judith N. Shklar
From: Bill Wade
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <01bc91fa$bde89880$2864370a@janeway>
Michael Schuerig <······@uni-bonn.de> wrote in article
<····················@rhrz-isdn3-p5.rhrz.uni-bonn.de>...
> W. Daniel Axline <·······@cse.unl.edu> wrote:
> 
> > I have written binary tree traversal algorithms which don't use a
stack.
> > I would be interested to know if this were true for some other
> > algorithms,

There are plenty of recursive functions for which most of us don't use a
stack.  Iterative factorial() or fibonacci() are examples.

int fact(unsigned int n)
{
  int result = 1;
  while(n) result *= n--;
  return result;
}

> > however.
> > Here's an iterative preorder binary tree traversal. 
> [snip]
> >       temp = current->parent;
>                         ^ That's the catch.
> 
> Effectively you're using a stack. Not the function call stack, but a
> stack embedded into your tree. It's fine as long as you need the parent
> pointers anyway, but if you only use them for traversal they impose a
> possibly significant space overhead and they take time to update.

It's possible to do n-ary trees without child or parent pointers (a heap),
but some operations (such as swapping child trees or insertions at arbitrary
points) become much more expensive.  However, depth-first or breadth-first
traversals of a heap are easy to do without recursion.

It's hard to argue that fact() above has significant time or space penalties
compared to a recursive version.

The obvious iterative fib() function requires a linear number of additions
and no stack.  The obvious recursive version requires an exponential number
of additions.
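
For comparison, here is a quick sketch of both versions in Lisp (since
this is cross-posted to comp.lang.lisp; the function names are mine).
The doubly recursive version recomputes the same subproblems over and
over, so it needs an exponential number of additions; the loop needs
about n additions and constant space.

(defun fib-recursive (n)
  ;; Exponential number of additions: the calls for n-1 and n-2 each
  ;; recompute the smaller Fibonacci numbers from scratch.
  (if (< n 2)
      n
      (+ (fib-recursive (- n 1)) (fib-recursive (- n 2)))))

(defun fib-iterative (n)
  ;; Linear number of additions, constant space, no stack.
  (do ((i 0 (1+ i))
       (a 0 b)
       (b 1 (+ a b)))
      ((= i n) a)))

;; (fib-iterative 10) => 55, (fib-recursive 10) => 55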
From: Marco Antoniotti
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <scfyb76vihk.fsf@infiniti.PATH.Berkeley.EDU>
In article <··························@janeway> "Bill Wade" <·········@stoner.com> writes:

   From: "Bill Wade" <·········@stoner.com>
   Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
   Date: 16 Jul 1997 15:12:58 GMT
   Organization: NeoSoft, Inc.  
   X-Newsreader: Microsoft Internet News 4.70.1161
   Lines: 44
   Xref: agate comp.lang.lisp:29286 comp.programming:52441 comp.lang.c++:282065



   Michael Schuerig <······@uni-bonn.de> wrote in article
   <····················@rhrz-isdn3-p5.rhrz.uni-bonn.de>...
   > W. Daniel Axline <·······@cse.unl.edu> wrote:
   > 
   > > I have written binary tree traversal algorithms which don't use a
   stack.
   > > I would be interested to know if this were true for some other
   > > algorithms,

   There are plenty of recursive functions for which most of us don't use a
   stack.  Iterative factorial() or fibonicci() are examples.

   int fact(unsigned int n)
   {
     int result = 1;
     while(n) result *= n--;
     return result;
   }

You are not getting it.  There are "inherently recursive" function
definitions for which an equivalent iterative algorithm *requires* the
use of a stack.  No matter how disguised.

   > > however.
   > > Here's an iterative preorder binary tree traversal. 
   > [snip]
   > >       temp = current->parent;
   >                         ^ That's the catch.
   > 
   > Effectively you're using a stack. Not the function call stack, but a
   > stack embedded into your tree. It's fine as long as you need the parent
   > pointers anyway, but if you only use them for traversal they impose a
   > possibly significant space overhead and they take time to update.

   Its possible to do n-ary trees without child or parent pointers (a heap),
   but some operations (such as swapping child trees or insertions at arbitray
   points) become much more expensive.  However depth-first or breadth-first
   traversals of a heap are easy to do without recursion.

That, once again, is because - I surmise - you are assuming an array
implementation of the heap where you know exactly where "parent" and
"children" are (typically at (i/2), (2*i) and (2*i + 1) for a node at
'i' in a binary heap).  You are always allocating memory in some way
or other which eventually provides you with an "encoded stack".
You can't do that with purely malloc'ed data structures.
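
As a sketch of what I mean (1-based indexing into a vector, slot 0
unused), the climb back up the tree goes through (floor i 2): the
parent index plays exactly the role the parent pointer played in the
C++ code above, so the "stack" is encoded in the index arithmetic.

(defun heap-preorder (heap)
  ;; Iterative preorder over a complete binary tree stored in a vector.
  ;; Children of node i live at 2i and 2i+1, the parent at (floor i 2).
  (let ((n (1- (length heap)))
        (i 1)
        (result '()))
    (when (>= n 1)
      (loop
        (push (aref heap i) result)
        (cond ((<= (* 2 i) n) (setf i (* 2 i)))       ; descend to the left child
              (t                                      ; climb until a right sibling exists
               (loop
                 (when (= i 1)
                   (return-from heap-preorder (nreverse result)))
                 (if (and (evenp i) (<= (1+ i) n))
                     (progn (setf i (1+ i)) (return)) ; step to the right sibling
                     (setf i (floor i 2))))))))       ; keep climbing via the parent index
    (nreverse result)))

;; (heap-preorder #(nil 1 2 3 4 5 6 7)) => (1 2 4 5 3 6 7)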

   Its hard to argue that fact() above has significant time or space penalties
   compared to a recursive version.

   The obvious iterative fib() function requires a linear number of additions
   and no stack.  The obvious recursive version requires an exponential number
   of additions.

These are examples of non-inherently recursive functions and do not
change the terms of the debate a bit. If memory does not fail me,
there is even a closed-form solution for the Fibonacci numbers that
can - hear hear - be computed in O(1) time.

Again, try to do the binary tree traversal without extra memory allocated
in the basic data structures.

As you might have noticed, the title of this thread is now totally
bogus :)

Cheers
-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Marco Antoniotti
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <scfzprkfc1a.fsf@infiniti.PATH.Berkeley.EDU>
In article <···············@infiniti.PATH.Berkeley.EDU> ·······@infiniti.PATH.Berkeley.EDU (Marco Antoniotti) writes:

	...

   These are examples of non-inherently recursive functions and do not
   change a bit the terms of the debate. If memory does not fail me,
   there is even a closed form solution for the Fibonacci numbers that
   can - hear hear - be computed in O(1) time.

I was just chastised for my big mouth.  The closed form contains
exponentials that cannot be computed in constant time.

I guess *I* have to go back to the books.

Cheers
-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Marco Antoniotti
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <scfyb76vr0d.fsf@infiniti.PATH.Berkeley.EDU>
In article <·············@cse.unl.edu> "W. Daniel Axline" <·······@cse.unl.edu> writes:

   From: "W. Daniel Axline" <·······@cse.unl.edu>
   Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
   Date: Wed, 16 Jul 1997 02:11:31 -0500
   Organization: Internet Nebraska
   Reply-To: ·······@cse.unl.edu
   Mime-Version: 1.0
   Content-Type: text/plain; charset=us-ascii
   Content-Transfer-Encoding: 7bit
   X-Mailer: Mozilla 3.0Gold (Win95; I)
   Xref: agate comp.lang.lisp:29268 comp.programming:52406 comp.lang.c++:281961

   Marco Antoniotti wrote:
   > 
   >    Delivery-Date: Tue, 15 Jul 1997 08:21:57 -0700
   >    Date: Tue, 15 Jul 1997 10:23:04 +0000
   >    From: ·········@mail.utexas.edu (Thomas Hallock)
   >    Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
   >    References: <············@cdn-news.telecom.com.au> <·············@capital.net> <···············@infiniti.PATH.Berkeley.EDU>
   >    Organization: bovine soft.
   > 
   >    (A copy of this message has also been posted to the following newsgroups:
   >    comp.lang.lisp, comp.programming,comp.lang.c++)
   > 
   >    > Coming from a person who might seem to think that binary tree
   >    > traversal can be executed in "the same way as a for next loop", I
   >    > cannot help to dismiss the whole argument.  Why have C (or Pascal or
   >    > Algol or Lisp) in the first place?  Let's stick to FORTRAN. :)
   > 
   >    well, correct me if i'm wrong, but i have read in many places that *all
   >    recursive functions can be expressed as iterations* yea, it's hard to
   >    believe, but it's been prooven, but i don't really know how to do such a
   >    work of magic.
   >    note that this is done automatically in some really good compilers!
   > 
   > You are right.  I was kinda flame baiting :).  You can always express a
   > recursive function with a loop.  However, some functions (e.g. binary
   > tree traversal) are "inherently" recursive.  I.e. to remove the
   > recursive calls you have to explicitely manage a stack.  Old FORTRAN
   > is a good example of this burden posed on the programmer.

   I have written binary tree traversal algorithms which don't use a stack.
   I would be interested to know if this were true for some other
   algorithms,
   however.
   Here's an iterative preorder binary tree traversal. 
   (Preorder means process root, then process the subtrees.)

   void BT::Preorder2Screen()
   {
   BTnode *temp = this->root; //used to find a node with an untraversed
   right subtree
   BTnode *current = this->root; //start at the root
   int i;
   while(temp!=NULL) // need to do this for each node
   {
     cout<<current->key<<" ";
     if(current->Lchild)
     {
       current = current->Lchild;
     }
     else
     {
       if(current->Rchild!=NULL)
       {
	 current = current->Rchild;
       }
       else //go backwards
       {
	 temp = current->parent;
	 while(((temp->Rchild == NULL)||(temp->Rchild == current))&&
		  (temp!=NULL)) //shouldn't happen
	 {
	   current = temp;
	   temp = temp->parent;
	 }
	 if (temp!=NULL)
	    current = temp->Rchild;
       }
     }
   }//endwhile
   cout<<"\n\n";
   }//end Preorder2Screen()

You are embedding a 'parent' field in your tree nodes.  This is equivalent
to using up the memory needed for a call stack or for a node stack.
If you do not need the parent pointer for other purposes, this is a waste
of memory which makes the C++ program "inefficient" :)

The recursive solution is much more compact and readable.

   void
   BTNode::Preorder()
   {
     cout << key << ' ';		// Write this node's key and a space.

     if (Lchild != NULL)		// Left subtree first, as in your version.
       Lchild->Preorder();

     if (Rchild != NULL)
       Rchild->Preorder();
   }
   //end Preorder()
   // (The caller can print the final "\n\n" once, after the whole traversal.)

Of course, I could have written a standard function taking a BTNode
argument.  The code would have been even more compact, but I decided
to stick to your style.

Not bad for a Lisp programmer, is it?  Or maybe it is because I am
a Lisp programmer that I see the beauty and the ease with which you
can write recursive functions like these :)

BTW, I suggest taking a look at

	http://www.delorie.com/gnu/docs/GNU/standards_toc.html

Reading inconsistently formatted C/C++ code is worse than reading lots
of parentheses.

Cheers
-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Horst von Brand
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <m167u86nx4.fsf@pincoya.inf.utfsm.cl>
"W. Daniel Axline" <·······@cse.unl.edu> writes:

[...]

> I have written binary tree traversal algorithms which don't use a stack.

Using a parent pointer is cheating ;-)
-- 
Dr. Horst H. von Brand                       ···············@inf.utfsm.cl
Departamento de Informatica                     Fono: +56 32 654431
Universidad Tecnica Federico Santa Maria              +56 32 654239
Casilla 110-V, Valparaiso, Chile                Fax:  +56 32 797513
From: Joe Sysop
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <slrn5snfqs.1dm.sysop@uplift.sparta.lu.se>
On Tue, 15 Jul 1997 10:22:59 +0000, Thomas Hallock <·········@mail.utexas.edu> wrote:
>> Coming from a person who might seem to think that binary tree
>> traversal can be executed in "the same way as a for next loop", I
>> cannot help to dismiss the whole argument.  Why have C (or Pascal or
>> Algol or Lisp) in the first place?  Let's stick to FORTRAN. :)
>
>well, correct me if i'm wrong, but i have read in many places that *all
>recursive functions can be expressed as iterations* yea, it's hard to
>believe, but it's been prooven, but i don't really know how to do such a
>work of magic.
>note that this is done automatically in some really good compilers!

It is true, and often not that hard. But I don't see why compilers want
to make recursion iterative, since that (if the recursion is justified)
means making the code bigger -> a bigger/slower executable. Or am I wrong?

/Joe
From: W. Daniel Axline
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33CC7236.752C@cse.unl.edu>
Joe Sysop wrote:
> 
> On Tue, 15 Jul 1997 10:22:59 +0000, Thomas Hallock <·········@mail.utexas.edu> wrote:
> >> Coming from a person who might seem to think that binary tree
> >> traversal can be executed in "the same way as a for next loop", I
> >> cannot help to dismiss the whole argument.  Why have C (or Pascal or
> >> Algol or Lisp) in the first place?  Let's stick to FORTRAN. :)
> >
> >well, correct me if i'm wrong, but i have read in many places that *all
> >recursive functions can be expressed as iterations* yea, it's hard to
> >believe, but it's been prooven, but i don't really know how to do such a
> >work of magic.
> >note that this is done automatically in some really good compilers!
> 
> It is true, and often not that hard. But i don't see why compilers want
> to make recursion interative, since that (if the recursion is justified)
> means making the code bigger -> bigger/slower executable. Or am i wrong?
> 
> /Joe

Actually, I have been given to understand quite the opposite. Apparently 
whenever (during runtime) a function calls itself, it has to make another
complete copy of itself to run. You can see how this would tend to take
up much more in the way of resources than an iterative function.
I'm not sure what the exact effect on speed would be, but I imagine it
would be slower. In my limited experience, iterative algorithms have not
been that much larger than their recursive counterparts, just more 
difficult to divine.
From: Erik Naggum
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <3078042529703162@naggum.no>
* W. Daniel Axline
| Actually, I have been given to understand quite the opposite. Apparently
| whenver (during runtime) a function calls itself, it has to make another
| complete copy of itself to run. You can see how this would tend to take
| up much more in the way of resources than an iterative function.  I'm not
| sure what the exact effect on speed would be, but I imagine it would be
| slower. In my limited experience, iterative algorithms have not been that
| much larger than their recursive counterparts, just more difficult to
| divine.

I wonder what you mean by "a complete copy of itself".  it appears that you
think the actual function's _code_ is copied, and this is of course not
true.  however, a new stack frame is often created upon a
function call, with space for various registers, local variables, etc, etc.
this does consume resources.  languages that are made for or encourage
recursive function calls often offer "tail call merging" to handle the case
where a function returns simply what the function it is about to call would
return.  in such a case, the calling function's stack frame is undone
before the call (if necessary), and the called function returns to the
caller's caller, saving both time and memory.  (Scheme is "properly tail
recursive" because it requires an implementation to do tail calls as jumps,
not calls.  most Common Lisp implementations offer tail call merging.)

I previously made a serious mistake in believing that Lisp compilers were
so much faster than C compilers when the real difference was that the Lisp
compilers did tail call merging and the C compiler did not.  on a SPARC,
this translates to a very heavy penalty because of the way register windows
are saved on the stack, so the Lisp compilers won big, disproportionately.
(I haven't been able to compute the actual cost of a call so I could
discount it and get a better comparison.  for now, the GNU C compiler makes
recursion extremely costly on the SPARC.)

programmers who write recursive functions learn to use tail recursion soon
after they discover that each function call can take up hundreds of bytes
of memory.  e.g., the former of these two functions will use memory (stack
space) proportional to n, while the latter will use constant space if the
compiler merges tail calls.

(defun factorial (n)
  (if (plusp n)
    (* n (factorial (1- n)))
    1))

(defun factorial (n)
  (flet ((tail-factorial (n accumulator)
	   (if (plusp n)
	     (tail-factorial (1- n) (* n accumulator))
	     accumulator)))
    (tail-factorial n 1)))

let's hope this puts some needless worries about recursion to rest.

#\Erik
-- 
Microsoft Pencil 4.0 -- the only virtual pencil whose tip breaks.
From: Vassili Bykov
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <wklo36liji.fsf@cam.org>
Erik Naggum <····@naggum.no> writes:
> [...good points about tail recursion...]
>
> (defun factorial (n)
>   (flet ((tail-factorial (n accumulator)
> 	   (if (plusp n)
> 	     (tail-factorial (1- n) (* n accumulator))
> 	     accumulator)))
>     (tail-factorial n 1)))

For perfection, that's LABELS, not FLET.
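
I.e. (a minimal sketch of the corrected definition): FLET's local
functions can't see their own names, so the self-call only works under
LABELS.

(defun factorial (n)
  (labels ((tail-factorial (n accumulator)
             ;; LABELS makes TAIL-FACTORIAL visible inside its own body,
             ;; so the tail call to itself is legal.
             (if (plusp n)
                 (tail-factorial (1- n) (* n accumulator))
                 accumulator)))
    (tail-factorial n 1)))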

--Vassili
From: Michael Schuerig
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <19970716145006793044@rhrz-isdn3-p5.rhrz.uni-bonn.de>
W. Daniel Axline <·······@cse.unl.edu> wrote:

> Actually, I have been given to understand quite the opposite. Apparently
> whenver (during runtime) a function calls itself, it has to make another
> complete copy of itself to run.

You mean a complete copy of the function's *code*? No, definitely not. 
Parameters and local variables are allocated on the stack and control
flow continues at the beginning of the function.

If the recursive call is the last one in the function ("tail-recursion")
this can even be done without growing the stack as the new parameters
and variables replace the old ones. You get a function that looks
recursive, but is effectively executed iteratively. Lisp compilers do
this optimization; I'm not sure about others.


Michael

--
Michael Schuerig         P'rhaps he's hungry. Six volts make him smile.
·············@uni-bonn.de         And twelve volts would probably kill.
http://www.uni-bonn.de/~uzs90z/  -Jethro Tull, "Batteries Not Included"
From: Rainer Joswig
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <joswig-ya023180001707971426240001@news.lavielle.com>
In article <····················@rhrz-isdn3-p5.rhrz.uni-bonn.de>,
······@uni-bonn.de (Michael Schuerig) wrote:

> W. Daniel Axline <·······@cse.unl.edu> wrote:
> 
> > Actually, I have been given to understand quite the opposite. Apparently
> > whenver (during runtime) a function calls itself, it has to make another
> > complete copy of itself to run.
> 
> You mean a complete copy of the function's *code*? No, definitely not. 
> Parameters and local variables are allocated on the stack and control
> flow continues at the beginning of the function.
> 
> If the recursive call is the last one in the function ("tail-recursion")
> this can even be done without growing the stack as the new parameters
> and variables replace the old ones. You get a function that looks
> recursive, but effectively is executed iteratively. Lisp compilers do
> this optimization, I'm not sure about others.

The tail recursive version also doesn't need any function call -
it is just a jump.

For a normal Lisp compiler there won't be much speed difference.
(fac 1000) with bignum arithmetic needs 0.1 seconds on
my Macintosh Powerbook with MCL 4.1 in all three code versions. 
The iterative execution uses less space.

I prefer a system which lets me write functional-style code most of
the time (since this is clearer code most of the time) without much
speed penalty. We have that.

-- 
http://www.lavielle.com/~joswig/
From: Barry Margolin
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5qjqre$q84@tools.bbnplanet.com>
In article <·············@cse.unl.edu>,
W. Daniel Axline <·······@cse.unl.edu> wrote:
>Joe Sysop wrote:
>> It is true, and often not that hard. But i don't see why compilers want
>> to make recursion interative, since that (if the recursion is justified)
>> means making the code bigger -> bigger/slower executable. Or am i wrong?
>
>Actually, I have been given to understand quite the opposite. Apparently 
>whenver (during runtime) a function calls itself, it has to make another
>complete copy of itself to run. You can see how this would tend to take
>up much more in the way of resources than an iterative function.

Is one of you flame-baiting, or are you both completely clueless?

Neither recursive nor iterative functions are inherently larger than one
another; in general, the amount of code will be very similar.  Most of the
work is usually done in the body of the loop or recursive function, not in
the code that determines how to repeat itself, and this should be almost
identical in the two versions.

When a function recurses, it doesn't "make another complete copy of itself
to run."  There's just a single copy of a function, but perhaps a new
activation record (AKA stack frame) to hold the context of the recursive
call.  And if the function is tail-recursive, and the language
implementation supports tail-call optimization, the recursive call can use
the same activation record.  Sometimes the tail-recursive version of a
function is a little less obvious than the recursive version, though.

Here's an example of factorial in iterative, recursive, and tail-recursive
versions:

(defun fact-iter (n)
  (do ((result 1 (* result i))
       (i n (1- i)))
      ((< i 2) result)))

(defun fact-recurs (n)
  (if (< n 2)
      1
      (* n (fact-recurs (1- n)))))

(defun fact-tail-recurs (n)
  (labels ((recurs (i result)
             (if (< i 2)
                 result
                 (recurs (1- i) (* result i)))))
    (recurs n 1)))

In an implementation that supports tail-recursion optimization, the
iterative and tail-recursive versions should generate almost identical
code.  The recursive version, however, will use O(n) stack space for all
the recursive calls; once the recursion bottoms out, the multiplications
will be done as each call returns.

-- 
Barry Margolin, ······@bbnplanet.com
BBN Corporation, Cambridge, MA
Support the anti-spam movement; see <http://www.cauce.org/>
From: ? the platypus {aka David Formosa}
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <869117695.692532@cabal>
In <·············@cse.unl.edu> "W. Daniel Axline" <·······@cse.unl.edu> writes:

[...]

>Actually, I have been given to understand quite the opposite. Apparently 
>whenver (during runtime) a function calls itself, it has to make another
>complete copy of itself to run.

This is not true.  All that has to be done is to allocate a new bit of
stack.

[...]

> In my limited experience, iterative algorithms have not
>been that much larger than their recursive counterparts, just more 
>difficult to divine.

Ok the extened eucliden algorthum then.  I have seen no way to do it
iteraveravly that dosen't involve stacks and a hell of a lot hard to
underatand code.

--
Please excuse my spelling as I suffer from agraphia see the url in my header. 
Never trust a country with more peaple then sheep. Buy easter bilbies.
Save the ABC Is $0.08 per day too much to pay?   ex-net.scum and proud
I'm sorry but I just don't consider 'because its yucky' a convincing argument
From: Gareth McCaughan
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <86iuy9dblk.fsf@g.pet.cam.ac.uk>
David Formosa wrote:

> Ok the extened eucliden algorthum then.  I have seen no way to do it
> iteraveravly that dosen't involve stacks and a hell of a lot hard to
> underatand code.
> 
> --
> Please excuse my spelling as I suffer from agraphia see the url in my header.
I don't know whether you're a Lisp person or a C++ person (both groups
are in the headers), so I'll do this in pseudocode. It's not the
most efficient code you could write, but it works and it should be
pretty easy to understand.

  function euclid(integer x, y) returning integers d,a,b:
    // d is the gcd of x,y
    // ax+by = d
    // for simplicity we assume that x,y are both non-negative,
    // and that x>=y.
    integer xx=x, px=1, qx=0    // xx=px.x+qx.y always
    integer yy=y, py=0, qy=1    // yy=py.x+qy.y always
    while yy>0 do
      integer q=floor(xx/yy), r=xx-q*yy
      // now replace xx,yy with yy,r
      (xx,px,qx, yy,py,qy) := (yy,py,qy, r,px-q*py,qx-q*qy)
    return d=xx, a=px, b=qx

Not a stack in sight.

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: ? the platypus {aka David Formosa}
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <869228486.343021@cabal>
In <··············@g.pet.cam.ac.uk> Gareth McCaughan <·····@dpmms.cam.ac.uk> writes:

>David Formosa wrote:

>> Ok the extened eucliden algorthum then.  I have seen no way to do it
>> iteraveravly that dosen't involve stacks and a hell of a lot hard to
>> underatand code.

>I don't know whether you're a Lisp person or a C++ person (both groups
>are in the headers), so I'll do this in pseudocode.

The implemention you gave is for the eucliden algorthum.  The extened
eucliden algorthum takes 'p' and 'r' and returns q such that 

(p * q) mod r == 1

--
Please excuse my spelling as I suffer from agraphia see the url in my header. 
Never trust a country with more peaple then sheep. Buy easter bilbies.
Save the ABC Is $0.08 per day too much to pay?   ex-net.scum and proud
I'm sorry but I just don't consider 'because its yucky' a convincing argument
From: Gareth McCaughan
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <86g1tcxye2.fsf@g.pet.cam.ac.uk>
David Formosa wrote:

> The implemention you gave is for the eucliden algorthum.  The extened
> eucliden algorthum takes 'p' and 'r' and returns q such that 
> 
> (p * q) mod r == 1

No, the implementation I gave is for the extended Euclidean algorithm.
Given x,y it returns their gcd d, and integers a,b with ax+by=d. In
particular, if the gcd is 1 then you get ax+by=1, i.e. ax == 1 mod y,
which is exactly what you say above.

If you don't believe me, convert it into your favourite language and
try it.

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Igor
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33D112C3.C68B35D3@farmak.kiev.ua>

W. Daniel Axline wrote:

     Actually, I have been given to understand quite the opposite.
     Apparently whenever (during runtime) a function calls itself, it
     has to make another complete copy of itself to run.

A function does not make "another complete copy of itself to run."
It only allocates its local variables on the stack.

From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33D63DA8.2B99@capital.net>
Igor wrote:
> 
> W. Daniel Axline wrote:
> 
>      Actually, I have been given to understand quite the opposite.
>      Apparently
>      whenver (during runtime) a function calls itself, it has to make
>      another
>      complete copy of itself to run.
> 
>        Function does not make "another complete copy of itself to
>      run." Function only
>      allocates local variables on the stack

	Igor, most people in the LISP newsgroup can't read your message
(which is MIME encoded). They probably use the EMACS Gnus newsreader,
which is a good newsreader, but doesn't support MIME encoded messages.

						Peaceman
From: Alexey Goldin
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <m14t9ld1gq.fsf@spot.uchicago.edu>
Sajid Ahmed the Peaceman <········@capital.net> writes:


> 	Igor, most people in the LISP newsgroup can't read your message
> (which is MIME encoded). They probably use the EMACS Gnus newsreader,
> which is a good newsreader, but doesn't support MIME encoded messages.
> 
> 						Peaceman


It does.
From: Tyson Jensen
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33D90F58.3B534FAC@inconnect.com>
> Sajid Ahmed the Peaceman <········@capital.net> writes:
>
> >       Igor, most people in the LISP newsgroup can't read your
> message
> > (which is MIME encoded). They probably use the EMACS Gnus
> newsreader,
> > which is a good newsreader, but doesn't support MIME encoded
> messages.
> >
> >                                               Peaceman
>
> It does.

Long live EMACS!  If your EMACS doesn't do something, just download the
latest version.  Any new thing that comes around, you can bet that
someone, somewhere is writing an e-lisp extension for EMACS.

EMACS - it's not just a program, it's a lifestyle!

--
Tyson Jensen
Mosby Consumer Health       w: (801)-464-6217
·······@mosbych1.com        h: (801)-461-4687
·····@inconnect.com
From: Daniel Barker
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <Pine.GSO.3.95.970804061547.6152C-100000@holyrood.ed.ac.uk>
> Long live EMACS!  If your EMACS doesn't do something, just download the
> latest version.  Any new thing that comes around, you can bet that
> someone, somewhere is writing an e-lisp extension for EMACS.
> 
> EMACS - it's not just a program, it's a lifestyle!
> 
> --
> Tyson Jensen
> Mosby Consumer Health       w: (801)-464-6217
> ·······@mosbych1.com        h: (801)-461-4687
> ·····@inconnect.com

But if you want a text editor rather than a way of life, use ue. 


Daniel Barker,
Institute of Cell and Molecular Biology,
University of Edinburgh,
Daniel Rutherford Building,
King's Buildings,
Mayfield Road,
Edinburgh
EH9 3JR
From: William Clodius
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33CCE8EC.59E2@lanl.gov>
Joe Sysop wrote:
><snip>
> 
> It is true, and often not that hard. But i don't see why compilers want
> to make recursion interative, since that (if the recursion is justified)
                     \
> means making the code bigger -> bigger/slower executable. Or am i wrong?
> 
> /Joe

You are wrong, but it is not clear to me how wrong you are since

1. It is not clear to me what you mean by the phrase "if the recursion
is justified". The typical justification for the use of recursion over
iteration is that it more clearly represents the programmer's intent for
the properties of the code. However, clarity of intent need not have a
direct relationship with efficiency of implementation.

2. The most commonly quoted examples of replacement of recursion by
iteration, i.e., tail recursion elimination, do gain in efficiency by
the replacement, for a reasonably intelligent compiler. The procedure
call adds overhead and indirection not present in the equivalent
iterative construct. However it is not clear to me that typical
compilers can replace more sophisticated usages of recursion with
comparably efficient iterative implementations.

-- 

William B. Clodius		Phone: (505)-665-9370
Los Alamos Nat. Lab., NIS-2     FAX: (505)-667-3815
PO Box 1663, MS-C323    	Group office: (505)-667-5776
Los Alamos, NM 87545            Email: ········@lanl.gov
From: Mike Rilee
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33CD3DE2.41C6@hannibal.gsfc.nasa.gov>
William Clodius wrote:
> 
> Joe Sysop wrote:
> ><snip>
> >
> > It is true, and often not that hard. But i don't see why compilers want
> > to make recursion interative, since that (if the recursion is justified)
>                      \
> > means making the code bigger -> bigger/slower executable. Or am i wrong?
> >
> > /Joe
> 
> You are wrong, but it is not clear to me how wrong you are since
> 
> 1. It is not clear to me what you mean by the phrase "if the recursion
> is justified". The typical justification for the use of recursion over
> iteration is that it more clearly represents the programmer's intent for
> the properties of the code. However, clarity of intent need not have a
> direct relationship with efficiency of implementation.
> 
> 2. The most commonly quoted examples of replacement of recursion by
> iteration, i.e., tail recursion elimination, do gain in efficiency by
> the replacement, for a reasonably intelligent compiler. The procedure
> call adds overhead and indirection not present in the equivalent
> iterative construct. However it is not clear to me that typical
> compilers can replace more sophisticated useages of recursion with
> comparably efficient iterative implementations.
> 

Might one gain some advantage by first implementing some
reasonably complex functionality using a clear, if less
efficient, recursive style?  Then, if necessary, one could
implement the same functionality using an iterative style. 

This presupposes that it is easier to produce correct recursive
solutions.  It can be useful to have two solutions to the
same problem at hand.  Even if one is dramatically less
efficient than the other.

I would speculate that choosing to implement a more complex
solution first would run a greater risk of failure and might 
be more expensive in the long run.  Constructing a prototype
first would allow one to obtain experience solving the
problem (or pieces of it).  If the main effort fails then
the prototype would exist as a solution.


-- 
Mike Rilee NASA/GSFC Mailstop 930.0 Greenbelt, MD 20771
······@hannibal.gsfc.nasa.gov Ph. (301)286-4743 Fx. (301)286-1777
--
Composed using Oberon. http://www-cs.inf.ethz.ch/Oberon.html
From: William D Clinger
Subject: Patterson-Hewitt theorem (was "Lisp is *SLOW*")
Date: 
Message-ID: <33CF8810.6143@ccs.neu.edu>
Here is the real answer to the question of whether all recursive
programs can be converted into iterative programs.  The question
is a bit subtle, so we need precise definitions.

Definition.  A recursive program parameterized by an abstract data
type T consists of a set of first-order recursion equations over T.

Definition.  An iterative program over an abstract data type T
consists of a flowchart program over T.

Theorem.  Any recursive program parameterized by an abstract data
type T can be translated into an iterative program over the abstract
data type T' consisting of the union of T with auxiliary stacks.

Sketch of proof:  Compilers do this.  QED

Patterson-Hewitt Theorem.  There exists a recursive program
parameterized by an abstract data type T that cannot be
translated into an iterative program over T.

Sketch of proof:  Let T have operations

    b : T -> boolean
    i : T -> T
    f : T -> T
    g : T -> T
    h : T * T -> T

and consider the program F defined by

    F (x) = if b (x) then i (x) else h (F (f (x)), F (g (x)))

Suppose there exists an equivalent iterative program over T.
By a construction akin to the pumping lemma for finite state
automata, you can use the Herbrand interpretation of T to
construct two distinct inputs for which the iterative program
computes the same result, at least one of which must be incorrect.
Therefore no such iterative program exists.  QED

So the bottom line is that translating recursion into iteration
sometimes requires an auxiliary data structure such as a stack.
Real programming languages provide many choices for this auxiliary
structure.  For example, higher order languages allow recursion to
be translated into continuation-passing style (CPS), which is an
iterative form in which first class procedures are used as the
auxiliary data structure.
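
As a concrete sketch (in Common Lisp, with b, i, f, g, and h passed in
as ordinary functions; this is only an illustration, not part of the
proof), the F above can be rewritten in CPS.  Every recursive call to
CPS-F is then a tail call, so the control structure is iterative and
the chain of continuations plays the role of the stack.  (Common Lisp
does not require tail calls to be optimized, though many compilers do;
Scheme guarantees it.)

(defun cps-f (x b i f g h k)
  (if (funcall b x)
      (funcall k (funcall i x))
      (cps-f (funcall f x) b i f g h
             (lambda (left)
               (cps-f (funcall g x) b i f g h
                      (lambda (right)
                        (funcall k (funcall h left right))))))))

;; One possible instantiation: b tests (< x 2), i is identity,
;; f subtracts 1, g subtracts 2, h is +, giving the Fibonacci-shaped
;; recursion computed with tail calls only:
;; (cps-f 10 (lambda (x) (< x 2)) #'identity
;;           (lambda (x) (- x 1)) (lambda (x) (- x 2))
;;           #'+ #'identity)
;; => 55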

Will
From: Sajid Ahmed the Peaceman
Subject: Re: Patterson-Hewitt theorem (was "Lisp is *SLOW*")
Date: 
Message-ID: <33D10C2A.3240@capital.net>
William D Clinger wrote:
> 
> Here is the real answer to the question of whether all recursive
> programs can be converted into iterative programs.  The question
> is a bit subtle, so we need precise definitions.
> 
> Definition.  A recursive program parameterized by an abstract data
> type T consists of a set of first-order recursion equations over T.
> 
> Definition.  An iterative program over an abstract data type T
> consists of a flowchart program over T.
> 
> Theorem.  Any recursive program parameterized by an abstract data
> type T can be translated into an iterative program over the abstract
> data type T' consisting of the union of T with auxiliary stacks.
> 
> Sketch of proof:  Compilers do this.  QED
> 
> Patterson-Hewitt Theorem.  There exists a recursive program
> parameterized by an abstract data type T that cannot be
> translated into an iterative program over T.
> 
> Sketch of proof:  Let T have operations
> 
>     b : T -> boolean
>     i : T -> T
>     f : T -> T
>     g : T -> T
>     h : T * T -> T
> 
> and consider the program F defined by
> 
>     F (x) = if b (x) then i (x) else h (F (f (x)), F (g (x)))
> 
> Suppose there exists an equivalent iterative program over T.
> By a construction akin to the pumping lemma for finite state
> automata, you can use the Herbrand interpretation of T to
> construct two distinct inputs for which the iterative program
> computes the same result, at least one of which must be incorrect.
> Therefore no such iterative program exists.  QED
> 


	I don't know what the Herbrand interpretation is, 
but I can give you another proof. 

	Suppose you want to write a function that calculates the 
Sine or Cosine of a number. You can write a nonterminating recursive
function for it, which has no iterative counterpart.

	The point I'm making here is for functions used on 
a computer.  They're all translated into iterative assembly 
language code.

					Peaceman
From: Marco Antoniotti
Subject: Re: Patterson-Hewitt theorem (was "Lisp is *SLOW*")
Date: 
Message-ID: <scfwwmkz0z7.fsf@infiniti.PATH.Berkeley.EDU>
In article <·············@capital.net> Sajid Ahmed the Peaceman <········@capital.net> writes:

   From: Sajid Ahmed the Peaceman <········@capital.net>
   Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
   Date: Sat, 19 Jul 1997 14:49:14 -0400
   Organization: Logical Net
   Reply-To: ········@capital.net
   Lines: 55
   Mime-Version: 1.0
   Content-Type: text/plain; charset=us-ascii
   Content-Transfer-Encoding: 7bit
   X-Mailer: Mozilla 3.01 (WinNT; I)
   Xref: agate comp.lang.lisp:29374 comp.programming:52617 comp.lang.c++:282648

   William D Clinger wrote:
   > 
	...
   > 
   > Suppose there exists an equivalent iterative program over T.
   > By a construction akin to the pumping lemma for finite state
   > automata, you can use the Herbrand interpretation of T to
   > construct two distinct inputs for which the iterative program
   > computes the same result, at least one of which must be incorrect.
   > Therefore no such iterative program exists.  QED
   > 


	   I don't know what the Herbrand interpretation is, 
   but I can give you another proof.

	   Suppose you want to write a function that calculates the 
   Sine or Cosine of a number. You can write a nonterminating recursive
   function for it, which has no iterative counterpart.

I doubt that such a function would be useful.  Ever heard of
"numerical precision"?

	   The point I'm making here is for functions used on 
   a computer.  They're all translated into iterative assembly 
   language code.

Very well.  Translate this into purely iterative assembly code.  No
PUSH operations of *any* kind allowed, and no 'parent' fields or
extra memory explicitly allocated by the programmer allowed.  Using
continuation-passing style is also a no-no.

Note that this is perfectly acceptable C code (indented as per the GNU
coding standards) which a programmer might be asked to write.

The point is that neither you nor any C (or Lisp, or INTERCAL, or
Ada) compiler will be able to produce an iterative assembly
translation of this piece of code.  Inherently recursive algorithms
and data structures do exist.


==============================================================================

void
preorder_traversal (tree_node *tree)
{
  if (tree == 0)
    return;
  else
    {
      printf ("%d ", tree->key);
      preorder_traversal (tree->left);
      preorder_traversal (tree->right);
    }
}

==============================================================================

And, since we are at it, here is the whole program.  Tested and
working on a Solaris platform using gcc 2.7.2.

------------------------------------------------------------------------------

#include <stdio.h>
#include <assert.h>
#include <stdlib.h>
#include <unistd.h>		/* for getpid() */

typedef struct _tree_node
{
   int key;
   struct _tree_node *left;
   struct _tree_node *right;
} tree_node;

void
insert (tree_node *tree, int value)
{
  assert (tree != 0);
  
  if (value == tree->key)
    return;
  else if (value < tree->key)
    {
      if (tree->left == 0)
	{
	  tree->left = (tree_node*) malloc (sizeof (tree_node));
	  tree->left->key = value; /* Sorry.  No check on malloc return. */
	  tree->left->left = tree->left->right = 0; /* new node starts as a leaf */
	}
      else
	insert (tree->left, value);
    }
  else
    {
      if (tree->right == 0)
	{
	  tree->right = (tree_node*) malloc (sizeof (tree_node));
	  tree->right->key = value; /* Sorry.  No check on malloc return. */
	  tree->right->left = tree->right->right = 0; /* new node starts as a leaf */
	}
      else
	insert (tree->right, value);
    }
}


void
preorder_traversal (tree_node *tree)
{
  if (tree == 0)
    return;
  else
    {
      printf ("%d ", tree->key);
      preorder_traversal (tree->left);
      preorder_traversal (tree->right);
    }
}


void
inorder_traversal (tree_node *tree)
{
  if (tree == 0)
    return;
  else
    {
      inorder_traversal (tree->left);
      printf ("%d ", tree->key);
      inorder_traversal (tree->right);
    }
}

int
main (void)
{
  tree_node *root = (tree_node*) malloc (sizeof (tree_node));
  const int tree_size = 20;
  int count;

  /* Sorry.  No system error checking.. */

  srand (getpid ());

  root->key = rand () % 100;
  root->left = root->right = 0;	/* the root starts out with no children */

  for (count = 0; count < tree_size; count++)
    insert(root, rand () % 100);

  puts ("Preorder traversal\n");
  preorder_traversal (root);
  putchar ('\n');

  puts ("\nInorder traversal\n");
  inorder_traversal (root);
  putchar ('\n');

  return 0;
}

------------------------------------------------------------------------------

Enjoy.
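
For contrast, here is a sketch of what becomes possible once you allow
the auxiliary stack that Will Clinger's theorem says may be required
(in Lisp, assuming a TREE-NODE defstruct mirroring the C struct above;
illustration only, and of course it breaks the ground rules stated at
the top):

(defstruct tree-node key left right)

(defun preorder-traversal-iterative (tree)
  "Preorder traversal with an explicit stack instead of recursion."
  (let ((stack (list tree)))
    (loop until (null stack)
          do (let ((node (pop stack)))
               (when node
                 (format t "~d " (tree-node-key node))
                 ;; Push RIGHT first so that LEFT is visited first.
                 (push (tree-node-right node) stack)
                 (push (tree-node-left node) stack))))))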
-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Jens Kilian
Subject: Re: Patterson-Hewitt theorem (was "Lisp is *SLOW*")
Date: 
Message-ID: <5r4pom$e1q@isoit109.bbn.hp.com>
Marco Antoniotti (·······@infiniti.PATH.Berkeley.EDU) wrote:
> Very well.  Translate this into purely iterative assembly code.  No
> PUSH operations of *any* kind allowed, and no 'parent' fields or
> extra memory explicitly allocated by the programmer allowed.  Using
> continuation-passing style is also a no-no.

Every garbage collector capable of working in constant space does this.
You should add "does not modify the original data structure while
traversing it" to your list of no-nos.  (Mark bits are usually shaved off
existing pointers, so they aren't really "explicitly allocated".)

IIRC, Knuth also describes an "elegant" technique somewhere in his AoCP
that involves encoded parent pointers without taking up additional space.
I remember an example involving double-linked lists, but I'm not sure if
this also works for trees.

Of course, the resulting code will be *much* uglier than a simple recursive
function.  But there are people who don't care for elegance, or even
maintainability.  I keep wondering why they feel a need for asserting the
perceived superiority of their favourite language here in comp.lang.lisp.
Perhaps to them, the mere existence of Lisp seems a threat.
And it may well be.

Greetings,

	Jens.
--
··········@acm.org                 phone:+49-7031-14-7698 (HP TELNET 778-7698)
                                     fax:+49-7031-14-7351
PGP:       06 04 1C 35 7B DC 1F 26 As the air to a bird, or the sea to a fish,
0x555DA8B5 BB A2 F0 66 77 75 E1 08 so is contempt to the contemptible. [Blake]
From: Martin Rodgers
Subject: Re: Patterson-Hewitt theorem (was "Lisp is *SLOW*")
Date: 
Message-ID: <MPG.e40485c66d77e7c9898fa@news.demon.co.uk>
With a mighty <··········@isoit109.bbn.hp.com>,
·····@bbn.hp.com uttered these wise words...

> I keep wondering why they feel a need for asserting the
> perceived superiority of their favourite language here in comp.lang.lisp.
> Perhaps to them, the mere existence of Lisp seems a threat.
> And it may well be.

This seems to be the only plausible explanation. It might also explain 
many of the Java attacks, which use _identical_ arguments to attacks 
on Lisp. They're all bizarre, wrong, and easily refuted.

Are there any archives for comp.lang.lisp, so that we could refer such 
people to them? That way, they could simply read the efforts of the 
past, and save everybody's time.

Watching hordes of C++ programmers trying to convince us that Lisp 
can't do what it _is_ doing (and has been doing, for some time) used 
to remind me of seal culling. The C++ people were the baby seals, 
while Lisp programmers would be the seal trappers, gently tapping the 
heads of their prey with CS papers, Lisp software, and Lisp systems. 
The result was the quick death of any anti-Lisp argument. Eventually, 
the entire thread would die.

A few months would pass, and these events would repeat themselves, 
only the words would be different. The arguments would be equally 
clueless. Hence the image of baby seal culling. The difference between 
C++ programmers and seals is that, well, seals don't write software.

Scary, isn't it?
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
    "My body falls on you like a dead horse" -- Portait Chinois
            Please note: my email address is gubbish
From: Fred Haineux
Subject: the languages of cats and dogs...
Date: 
Message-ID: <bc-3107971140270001@17.127.10.119>
·······@nortel.ca (Chris Ebenezer) wrote:
|  To listen to many lisp advocates, lisp is :-
|  
|  1, More powerful at machine level than any low level language hitherto 
|     created. In fact Lisp creates a psychic interface enabling you
|     to coax the best out of the CPU *ooo now baby*
|  
|  2, More suitable for OO development than any other language except for
|     froo, a highly advanced mathematical language, details of which were
|     zapped from Marvin Minsky's brain in highly suspicious circumstances.
|     No one knows anymore details but its possible the *dark mutter* CIA
|     were involved.
|  
|  3, More suitable than any other language for scripting, in fact you should
|     burn all heretics who use the mysterious of sh, ksh, perl or even *shudder*
|     Tcl. Such people are all management lackeys, or very misguided at best.
|     They may only save themselves by committing themselves to a years   
|     intensive study of the oracles of SickPea.
|  
|  4, More intuitive than any natural language. After all who wants to speak in
|     English, SerboCroat or that African language with the tongue clicks. In
|     fact if we only re-engineered all our languages around lisp, we could
|     make full use of AI! Not only would that fish swim on your screen, but
|     it could talk to you!
|  
|  5, Ideal for writing extensions in, i mean look at emacs! It may be a bloated
|     pig of an editor, but it works, and more importantly is conceptually
|     elegant. whaddya mean you don't agree ? What are you anyway ? Some sort of
|     VB loving COMMUNIST ???
|  
|  
|  Did i miss anything out ?

Well, actually, you forgot the part about how since so many AI programs
have been written in Lisp, that the language itself has actually become
intelligent. "Well," you might say, "if this is so, why does it let me
write bad code?"

The answer is, of course, that Lisp is very much like a cat. Cats, you
see, understand every word you say. They don't pay any attention, usually,
but they do understand.

One could go along with the idea that C is much like a dog -- it assumes
you're the Lead Dog, and therefore follows your instructions implicitly.

This leaves C++, which seems to be neither cat nor dog.

I think that sums it up nicely....///..........

(the preceding was HUMOROUS FARCE, and should not be taken internally
unless under direction of competent counsel...)
From: Pierpaolo Bernardi
Subject: Re: Patterson-Hewitt theorem (was "Lisp is *SLOW*")
Date: 
Message-ID: <5rsacg$i2m$1@pania.unipi.it>
Chris Ebenezer (·······@nortel.ca) wrote:

: To listen to many lisp advocates, lisp is :-

: 1, More powerful at machine level than any low level language hitherto 
:    created. In fact Lisp creates a psychic interface enabling you
:    to coax the best out of the CPU *ooo now baby*

: 2, More suitable for OO development than any other language except for
:    froo, a highly advanced mathematical language, details of which were
:    zapped from Marvin Minsky's brain in highly suspicious circumstances.
:    No one knows anymore details but its possible the *dark mutter* CIA
:    were involved.

: 3, More suitable than any other language for scripting, in fact you should
:    burn all heretics who use the mysterious of sh, ksh, perl or even *shudder*
:    Tcl. Such people are all management lackeys, or very misguided at best.
:    They may only save themselves by committing themselves to a years   
:    intensive study of the oracles of SickPea.

: 4, More intuitive than any natural language. After all who wants to speak in
:    English, SerboCroat or that African language with the tongue clicks. In
:    fact if we only re-engineered all our languages around lisp, we could
:    make full use of AI! Not only would that fish swim on your screen, but
:    it could talk to you!

: 5, Ideal for writing extensions in, i mean look at emacs! It may be a bloated
:    pig of an editor, but it works, and more importantly is conceptually
:    elegant. whaddya mean you don't agree ? What are you anyway ? Some sort of
:    VB loving COMMUNIST ???

With the possible exception of nr. 4, you are completely right. [*]

IMO, programming languages should not be intuitive; they must be
self-consistent, founded on very few principles, and not rely on any
external _intuition_ to make sense.

Pierpaolo.

[*] 8-)
From: Martin Rodgers
Subject: Re: Patterson-Hewitt theorem (was "Lisp is *SLOW*")
Date: 
Message-ID: <MPG.e4c1229584db54e989910@news.demon.co.uk>
My body fell on Chris Ebenezer like a dead horse, thusly:

> : This seems to be the only plausible explanation. It might also explain 
> : many of the Java attacks, which use _identical_ arguments to attacks 
> : on Lisp. They're all bizarre, wrong, and easily refuted.
> 
> Or OTOH, this could be because certain Java/Lisp programmers advocate
> their programming language as a solution for all problems at every level
> of abstraction. All it needs is for someone to opinion that certain 
> problems may need assembler (as an an example) as an optimal solution,
> immediately a Java programmer will jump in saying that "no one programs in
> assembler anymore", and that in his opinion the problem "would be much better
> coded in Java, for in Java you could do x", and if you ^really^ want to
> program in assembler "you can wait 3 years until a native Java CPU is
> released", and in the meantime "theres a really kewl assembler for Java
> byte codes".

I've never seen a Java programmer jump in saying that "no one 
programs in assembler anymore". Where did you see this remarkable 
claim made? I'd dispute it myself!
 
> To listen to many lisp advocates, lisp is :-
> 
> 1, More powerful at machine level than any low level language hitherto 
>    created. In fact Lisp creates a psychic interface enabling you
>    to coax the best out of the CPU *ooo now baby*

I've never seen this claimed, either. Not in the 5 years that I've 
been reading comp.lang.lisp. Some possibly over zealous claims _may_ 
be made, but it's also possible that a few people simply find the 
claims hard to believe, maybe coz they've not actually checked to see 
what's being done with a particular language.

Let us not forget the Clarke law about sufficiently advanced tech 
being indistinguishable from magic. There's no magic to be found, but 
there is tech that does some amazing things. I frequently see this law 
in action, regarding all kinds of languages.
 
> 2, More suitable for OO development than any other language except for
>    froo, a highly advanced mathematical language, details of which were
>    zapped from Marvin Minsky's brain in highly suspicious circumstances.
>    No one knows anymore details but its possible the *dark mutter* CIA
>    were involved.

Perhaps someone better acquainted with mathematical languages could 
answer this one, as I've no idea what you're talking about. I'm more 
familiar with text processing, and compilers in particular.
 
> 3, More suitable than any other language for scripting, in fact you should
>    burn all heretics who use the mysterious of sh, ksh, perl or even *shudder*
>    Tcl. Such people are all management lackeys, or very misguided at best.
>    They may only save themselves by committing themselves to a years   
>    intensive study of the oracles of SickPea.

I've never seen this claimed, either. I don't doubt that Lisp can be 
used for scripting (consider scsh!). I used to use Forth for 
scripting, back when CP/M-68K was my OS. Well, the scripting in that 
OS is almost non-existent, so perhaps I had a good excuse.

When and where have you seen Lisp people insisting that only Lisp can 
do scripting? ISTR a nice little article published by Byte in which a 
William Gates outlined his plans for using Basic as a scripting tool.
Everyone can, and IMHO should be able to, use their chosen language for 
scripting. MS actually let us choose the language, unlike a great many 
others. Even if you're right about this point, and that _some_ Lisp 
people want to convert us all to Lisp, there are others promoting 
other languages, and they have much greater power to make their dreams 
come true.

I'm very relaxed about scripting, as I don't always have the luxury of 
a choice, and when I do, well, there's no need to convert anyone, nor 
any need to defend my choice(s). I read the Tcl thread, earlier this 
year, with a certain amount of amusement. This is because I have no 
strong feelings about what choices _other people make_. None of it 
affects me. Is this a positive attitude? I can't tell.
 
> 4, More intuitive than any natural language. After all who wants to speak in
>    English, SerboCroat or that African language with the tongue clicks. In
>    fact if we only re-engineered all our languages around lisp, we could
>    make full use of AI! Not only would that fish swim on your screen, but
>    it could talk to you!

Now you've completely lost me. Techies like us tend to reject 
"friendly" user interfaces, like "Bob". I'm not sure I'd like using 
such a beast, but then, I'm likely to, so why worry?

I can't remember the last time I saw a thread discussing user 
interfaces in comp.lang.lisp. Can you?
 
> 5, Ideal for writing extensions in, i mean look at emacs! It may be a bloated
>    pig of an editor, but it works, and more importantly is conceptually
>    elegant. whaddya mean you don't agree ? What are you anyway ? Some sort of
>    VB loving COMMUNIST ???

Well, I like the IDE in VB. Perhaps I'm a heretic? Nobody has burned 
me yet. If you wish to flamebait emacs users, don't look at me. Try an 
emacs newsgroup instead.
 
> Did i miss anything out ?

Any evidence? I can refer you to the recent "Lisp is SLOW" thread, 
which we're still posting to, plus various other recent threads, like 
the one about garbage collection (see dejanews). There's also the 
series of threads that have repeated themselves, of which I read at 
least 3 years worth - too bad I didn't archive them, because then you 
too could read them. You might learn something, even if it's how 
pointless it is to argue with people who know more about the subject 
than yourself. In this case, the subject is Lisp.

Here's my evidence: the Lisp FAQ and everything that it refers to. 
Books, software, companies, compilers, it's all there. As Henry Baker 
once said when referring someone to one of his papers: read it and 
weep. Alternatively, you might discover something useful. You never 
know, it might be a better argument for not using Lisp... Try it and 
let us all know what you find.

If you don't have time to do this, fair enough. After all, it's a big 
world, even if we only restrict ourselves to studying computers. Nobody 
can know about all of it and then talk about it with any authority. 
This is why you should consider that others may have a greater depth 
of knowledge in this one area. No doubt there are subjects that a Lisp 
programmer won't know, and with which you could win every argument.
I'm only saying that this isn't one of them.

Anyway, you've joined the party near the end, when everyone is 
leaving. The fun is over...until the next time. See you then.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
    "My body falls on you like a dead horse" -- Portait Chinois
            Please note: my email address is gubbish
From: Gareth McCaughan
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <86yb79xzxp.fsf@g.pet.cam.ac.uk>
"Sajid Ahmed the Peaceman" trolled:

> 	Lisp is a deception. All lisp compilers and interpreters that 
> I've seen have been written in C, and run on top of a C program.

How sad.

Scheme 48 is written entirely in Scheme. (Its virtual machine
is compiled to C, but that's just for portability; it's *written*
in Scheme.)

CMU Common Lisp is almost entirely written in Lisp. Some bits are
written in C for bootstrapping purposes.

I have a vague feeling someone told me Harlequin's Lisp compiler
is written entirely in Lisp.

>                                                                  I've
> seen a lot of LISP and PROLOG programmers, especially in the post
> graduate 
> level of computer science, think that lisp functions the same way as 
> mathematics. They think that a call to a recursive function
> instantaneously 
> returns a result. The fact is, these function is broken down into
> machine
> level instructions, and is executed the same way as a for next loop. 

Well, knock me down with a feather. Who would have thought it?
You'll be telling me that my hardware doesn't have a single run-emacs
operation next.

If you've seen a lot of postgraduate computer scientists who think
that recursive functions return instantaneously, then the only
conclusion I can draw is that you've been hanging around a university
where either they don't know how to teach computer science or they
get really really weak students. Too bad.

> 	AI is a bunch of garbage as well. Do you know what the main goal 
> of AI is? It is to develope a program where a person cannot
> distinguish the program from a human being. What does that have to do
> with intelligence? It's just an emulator. 

Thank you for sharing this profound insight with us. Are you an
emulator or a person, by the way? How are we supposed to tell?

> 	The bottom line is, all computer programs, including AI 
> programs, are just fast pocket calculators.

So much the better for pocket calculators, say I.

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Emergent Technologies Inc.
Subject: [PISSING CONTEST]Re: Lisp is *SLOW*
Date: 
Message-ID: <5qm6l4$fk$1@newsie.cent.net>
 Sajid Ahmed the Peaceman wrote in article <·············@capital.net>...
>
> Unfortunately, that is the experience that I've had.
>The postgraduate couses in computer science that I've seen were
>more like math courses than computer courses.
>

Yeah, go figure....
I wonder Y
 
From: Hrvoje Niksic
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <kig90z9cxmi.fsf@jagor.srce.hr>
Sajid Ahmed the Peaceman <········@capital.net> writes:

> All lisp compilers and interpreters that I've seen have been written
> in C, and run on top of a C program.

Now, here is a man who has seen a real lot of Lisp compilers. :-)

(rest of bait ignored.)

-- 
Hrvoje Niksic <·······@srce.hr> | Student at FER Zagreb, Croatia
--------------------------------+--------------------------------
Then...  his face does a complete change of expression.  It goes from
a "Vengeance is mine" expression, to a "What the fuck" blank look.
From: Martin Rodgers
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <MPG.e348930dcf8f1729898d8@news.demon.co.uk>
With a mighty <···············@jagor.srce.hr>,
·······@srce.hr uttered these wise words...

> Now, here is a man who has seen a real lot of Lisp compilers. :-)

There are a few _books_ that could refute his belief! A few years ago, 
I wrote one myself. Of course, it was very primitive, but it did 
compile a small subset of Scheme into C. It didn't take long to write, 
either. After all, I wrote it in Scheme...

I sense a round of URLs about to fly, but they could be pre-empted by 
a reference to the Lisp FAQ. Do any C++ programmers bother to read it?
I sometimes wonder, esp when someone tries to distort facts that are 
available for anyone with Internet access to check for themselves.

There's that nice little "rtfm" ftp site at MIT, where they keep the 
FAQs. Anyone who thinks that all Lisp compilers and interpreters are 
written in C should take a look. Still, perhaps Sajid Ahmed hasn't 
seen many Lisp compilers and interpreters? That might explain a spot 
of ignorance. Again, the Lisp FAQ can help fix that.

It certainly beats the old "open mouth and insert foot" strategy that 
all anti-Lisp attacks employ. Oops, perhaps I should qualify that 
by adding that this applies to all the attacks that _I've_ seen. There 
may be some out there that are better informed. If such things exist, 
I've not seen one during the 5 years that I've been reading UseNet.

Not one.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
            Please note: my email address is gubbish
                 Will write Lisp code for food
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33CE5827.658B@capital.net>
Martin Rodgers wrote:
> 
> With a mighty <···············@jagor.srce.hr>,
> ·······@srce.hr uttered these wise words...
> 
> > Now, here is a man who has seen a real lot of Lisp compilers. :-)
> 
> There are a few _books_ that could refute his belief! A few years ago,
> I wrote one myself. Of course, it was very primitive, but it did
> compile a small subset of Scheme into C. It didn't take long to write,
> either. After all, I wrote it in Scheme...
> 
> I sense a round of URLs about to fly, but they could be pre-empted by
> a reference to the Lisp FAQ. Do any C++ programmers bother to read it?
> I sometimes wonder, esp when someone tries to distort facts that are
> available for anyone with Internet access to check for themselves.
> 
> There's that nice little "rtfm" ftp site at MIT, where they keep the
> FAQs. Anyone who thinks that all Lisp compilers and interpreters are
> written in C should take a look. Still, perhaps Sajid Ahmed hasn't
> seen many Lisp compilers and interpreters? That might explain a spot
> of ignorance. Again, the Lisp FAQ can help fix that.
> 

	I took a look at the faq. It's mostly about the syntax of 
Lisp code. 
	Anyway, all lisp programs, as well as the compilers and 
interpreters are broken down into assembly level code, which is 
iterative. The thing I have a problem with is with people trying 
to write programs that are completely recursive, which is what lisp 
is about. That is the wrong way to go about it. It's a tremendous 
waste. 

					Peaceman
From: Reginald S. Perry
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <sxln2nlmjl9.fsf@yakko.zso.dec.com>
Sajid Ahmed the Peaceman <········@capital.net> writes:

> 	Anyway, all lisp programs, as well as the compilers and 
> interpreters are broken down into assembly level code, which is 
> iterative. The thing I have a problem with is with people trying 
> to write programs that are completely recursive, which is what lisp 
> is about. That is the wrong way to go about it. It's a tremendous 
> waste. 
> 

I would advise that you go back to school and retake the assembly
language programming course. When I took my course, using PDP-11
assembly language BTW, I wrote recursive, iterative and self-modifying
code. Thats the beauty and pain of assembly language. You can make it
anything you want it to be. Branches and jumps are not iterative, and
at the assembly language level you can essentially jump wherever you
want and since activation frames are an artifact of the argument
passing convention you impose on the architecture, you can discard
them at this level. This means that all sorts of crazy things are
possible. 

Also, in my view Lisp is not just about recursion just like C is not
just about machine level programming. The objective is to find a
medium in which it is straightforward to map abstract concepts into
computational entities. Certain concepts map well in C, some in
assembly language, others in Lisp+CLOS, others in C++,Eiffel, Sather,
Smalltalk, Perl, Tcl, whatever. So first find that medium that allows
you maximum expression, and then after you have the idea on canvas, so
to speak, then you can worry about wether you need to tweak the
medium.

Hopefully, you just program in Visual Basic and not some language where
you could hurt yourself and others, or, God forbid, learn something.


-Reggie
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33CFF8E5.2E0B@capital.net>
Reginald S. Perry wrote:
> 
> Sajid Ahmed the Peaceman <········@capital.net> writes:
> 
> >       Anyway, all lisp programs, as well as the compilers and
> > interpreters are broken down into assembly level code, which is
> > iterative. The thing I have a problem with is with people trying
> > to write programs that are completely recursive, which is what lisp
> > is about. That is the wrong way to go about it. It's a tremendous
> > waste.
> >
> 
> I would advise that you go back to school and retake the assembly
> language programming course. When I took my course, using PDP-11
> assembly language BTW, I wrote recursive, iterative and self-modifying
> code. Thats the beauty and pain of assembly language. You can make it
> anything you want it to be. Branches and jumps are not iterative, and
> at the assembly language level you can essentially jump wherever you
> want and since activation frames are an artifact of the argument
> passing convention you impose on the architecture, you can discard
> them at this level. This means that all sorts of crazy things are
> possible.
> 

	Branches and jumps are too iterative. 

	You agree that for next loops are iterative, right? 
	How about if statements? They both involve branches 
	and jumps.  


> Hopefully, you just program in Visual Basic 

	Sorry, no VB for me. 


						Peaceman
From: Reginald S. Perry
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <sxl7mekdtva.fsf@yakko.zso.dec.com>
Sajid Ahmed the Peaceman <········@capital.net> writes:

> Reginald S. Perry wrote:
> > 
> > Sajid Ahmed the Peaceman <········@capital.net> writes:
> > 
> > >       Anyway, all lisp programs, as well as the compilers and
> > > interpreters are broken down into assembly level code, which is
> > > iterative. The thing I have a problem with is with people trying
> > > to write programs that are completely recursive, which is what lisp
> > > is about. That is the wrong way to go about it. It's a tremendous
> > > waste.
> > >
> > 
> > I would advise that you go back to school and retake the assembly
> > language programming course. When I took my course, using PDP-11
> > assembly language BTW, I wrote recursive, iterative and self-modifying
> > code. Thats the beauty and pain of assembly language. You can make it
> > anything you want it to be. Branches and jumps are not iterative, and
> > at the assembly language level you can essentially jump wherever you
> > want and since activation frames are an artifact of the argument
> > passing convention you impose on the architecture, you can discard
> > them at this level. This means that all sorts of crazy things are
> > possible.
> > 
> 
> 	Branches a jumps are too iterative. 
> 
> 	You agree that for next loops are iterative, right? 
> 	How about if statements? They both involve branches 
> 	and jumps.  
> 

But, a branch or jump can be called iterative ONLY IF the branch or
jump takes you to a place where you will be traversing the same
section of code each time. BUT, in pure assembly language you can
write code which while it may jump to the same label, could be
traversing different code. One way to do this is by making space in
your data section where you will write the raw machine instructions
based on whatever conditions you want. Another thing you can do which
was used a lot in the 70s is set up what is called a jump table where
the code looks iterative in that you are jumping to the same area but
where in that area you jump depends on the value in a register. Things
like radix conversion used this technique.
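
In higher-level terms, a jump table is just dispatch through a table of
code addresses indexed by a value sitting in a register.  A rough
analogue in Lisp (only a sketch; the radix-conversion entries are made
up to echo the example just mentioned) is a vector of functions:

(defparameter *jump-table*
  (vector (lambda (n) (format nil "~b" n))     ; entry 0: binary
          (lambda (n) (format nil "~o" n))     ; entry 1: octal
          (lambda (n) (format nil "~d" n))     ; entry 2: decimal
          (lambda (n) (format nil "~x" n))))   ; entry 3: hexadecimal

(defun radix-convert (selector n)
  "Dispatch on SELECTOR the way a jump table dispatches on a register."
  (funcall (aref *jump-table* selector) n))

;; (radix-convert 3 255) => "FF"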

So while you can do things that look something like iteration, it
is really just a jump to someplace in the code. You are just mapping
your seemingly partial knowledge of C onto your limited understanding
of computer architecture. At the machine level, everything is just a
sequence of bytes. In order to avoid insanity, we map various
high-level concepts onto the machine-level architecture. But please
make no mistake, our mapping often constrains the things that are
possible at the machine level in order to gain clarity in
understanding the computational process. There will always be some
things that are better done at the machine level. And if you have
never really sat down and played and tried to _really understand_
what's going on at the machine level, your knowledge of programming
will always be limited and you will always get into silly arguments
where people are trying to explain something you don't understand in
terms of simpler things which you still don't understand.

-Reggie

-------------------
Reginald S. Perry                      e-mail: ·····@zso.dec.com   
Digital Equipment Corporation
Performance Manager Group	               
http://www.UNIX.digital.com/unix/sysman/perf_mgr/

The train to Success makes many stops in the state of Failure.
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33D64FAB.65AF@capital.net>
Reginald S. Perry wrote:
> 
> Sajid Ahmed the Peaceman <········@capital.net> writes:
> 
> >       Branches a jumps are too iterative.
> >
> >       You agree that for next loops are iterative, right?
> >       How about if statements? They both involve branches
> >       and jumps.
> >
> 
> But, a branch or jump can be called iterative ONLY IF the branch or
> jump takes you to a place where you will be traversing the same
> section of code each time. BUT, in pure assembly language you can
> write code which while it may jump to the same label, could be
> traversing different code. One way to do this is by making space in
> your data section where you will write the raw machine instructions
> based on whatever conditions you want. Another thing you can do which
> was used a lot in the 70s is set up what is called a jump table where
> the code looks iterative in that you are jumping to the same area but
> where in that area you jump depends on the value in a register. Things
> like radix conversion used this technique.
> 

	A jump instruction, in assembly would translate 
into the following:


  	MOVE  IP, address 

	
	which is iterative. 


				Peaceman
From: Gareth McCaughan
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <86afjdmw1b.fsf@g.pet.cam.ac.uk>
"Sajid Ahmed the Peaceman" wrote:

> 	A jump instruction, in assembly would translate 
> into the following:
> 
>   	MOVE  IP, address 
> 
> 	which is iterative. 

Perhaps it would help if you could say exactly what you mean by
"iterative" as opposed to "recursive".

In a piece of assembly language that looks like this (no, it's not
the assembly language of any specific processor known to me; no,
it isn't a usual assembly-language-ish syntax)

    func:   compare r1,=2
            branch-if-less skip:
            push-address after1:
            subtract r1,r1,=1
            jump func:             <---
    after1: push-register r0
            push-address after2:
            subtract r1,r1,=1
            jump func:             <---
    after2: pop-register r2
            add r0,r0,r2
            pop-program-counter
    skip:   move r0,r1
            pop-program-counter

it seems to me that the jumps I've labelled with <--- are *recursive*.

If you argue that they aren't recursive because at the level of
machine instructions there somehow isn't any concept of "function",
then I can equally argue that they aren't iterative either because
at the level of machine instructions there similarly isn't any
concept of "loop".


If you insist on ignoring the higher-level abstractions which
explain what a given piece of machine code means, then sure, you
can refuse to regard the code I give above as a recursive function;
it's just a load of instructions that push and pop things from a
stack and jump around. But then I can refuse to recognise

    loop:   store r1,[r0,r1]
            add r1,r1,=1
            compare r1,=1000
            branch-if-less loop:

as a loop, too: it's just a load of instructions that do arithmetic
and jump around.


In other words, your claim about what machine-level stuff does has
nothing whatever to do with the difference between iteration and
recursion.            

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Reginald S. Perry
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <sxlu3hkm8z5.fsf@yakko.zso.dec.com>
Sajid Ahmed the Peaceman <········@capital.net> writes:

> Reginald S. Perry wrote:
> > 
> > Sajid Ahmed the Peaceman <········@capital.net> writes:
> > 
> > >       Branches a jumps are too iterative.
> > >
> > >       You agree that for next loops are iterative, right?
> > >       How about if statements? They both involve branches
> > >       and jumps.
> > >
> > 
> > But, a branch or jump can be called iterative ONLY IF the branch or
> > jump takes you to a place where you will be traversing the same
> > section of code each time. BUT, in pure assembly language you can
> > write code which while it may jump to the same label, could be
> > traversing different code. One way to do this is by making space in
> > your data section where you will write the raw machine instructions
> > based on whatever conditions you want. Another thing you can do which
> > was used a lot in the 70s is set up what is called a jump table where
> > the code looks iterative in that you are jumping to the same area but
> > where in that area you jump depends on the value in a register. Things
> > like radix conversion used this technique.
> > 
> 
> 	A jump instruction, in assembly would translate 
> into the following:
> 
> 
>   	MOVE  IP, address 
> 
> 	
> 	which is iterative. 
> 

Well this is interesting. How in the world is a move iterative? How is
iterative defined in your world?

If you want to convince me, you will have to:

1) define iteration

2) define recursion

3) define your assembly pseudo-language subset. this subset has to be
   rich enough to describe computations that can be done today on modern
   machines. 

4) define the iterative operation in this language according to your
   definition of iterative.

5) define the recursive operation in this language according to your
   definition of recursive.

6) Show how the operations in 5) are equivalent to the operations in 4)

7) show that for every operation one can describe in your language,
   they all map in to some equivalent of 4)


Do this, and I will be convinced. I would advise you think carefully
before you start.


-Reggie
From: Henry Baker
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <hbaker-2407972214270001@10.0.2.1>
In article <···············@yakko.zso.dec.com>, ·····@yakko.zso.dec.com
(Reginald S. Perry) wrote:

> 2) define recursion

From the Maclisp manual:

"Recursion.  See recursion."
From: Marco Antoniotti
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <scfd8o7w43b.fsf@infiniti.PATH.Berkeley.EDU>
In article <·······················@10.0.2.1> ······@netcom.com (Henry Baker) writes:

   From: ······@netcom.com (Henry Baker)
   Newsgroups: comp.lang.lisp,comp.programming
   Date: Fri, 25 Jul 1997 06:14:27 GMT
   Organization: nil

   In article <···············@yakko.zso.dec.com>, ·····@yakko.zso.dec.com
   (Reginald S. Perry) wrote:

   > 2) define recursion

   From the Maclisp manual:

   "Recursion.  See recursion."

"Iteration: go to 'Iteration'" :)
-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Marco Antoniotti
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <scfen8ximk3.fsf@infiniti.PATH.Berkeley.EDU>
In article <·············@capital.net> Sajid Ahmed the Peaceman <········@capital.net> writes:

   From: Sajid Ahmed the Peaceman <········@capital.net>
   Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
   Date: Thu, 17 Jul 1997 13:36:39 -0400
   Organization: Logical Net
   Reply-To: ········@capital.net
   Lines: 34
   Mime-Version: 1.0
   Content-Type: text/plain; charset=us-ascii
   Content-Transfer-Encoding: 7bit
   X-Mailer: Mozilla 3.01 (WinNT; I)
   Xref: agate comp.lang.lisp:29306 comp.programming:52493 comp.lang.c++:282257

   Martin Rodgers wrote:
   > 
   > With a mighty <···············@jagor.srce.hr>,
   > ·······@srce.hr uttered these wise words...
   > 
   > > Now, here is a man who has seen a real lot of Lisp compilers. :-)
   > 
   > There are a few _books_ that could refute his belief! A few years ago,
   > I wrote one myself. Of course, it was very primitive, but it did
   > compile a small subset of Scheme into C. It didn't take long to write,
   > either. After all, I wrote it in Scheme...
   > 
   > I sense a round of URLs about to fly, but they could be pre-empted by
   > a reference to the Lisp FAQ. Do any C++ programmers bother to read it?
   > I sometimes wonder, esp when someone tries to distort facts that are
   > available for anyone with Internet access to check for themselves.
   > 
   > There's that nice little "rtfm" ftp site at MIT, where they keep the
   > FAQs. Anyone who thinks that all Lisp compilers and interpreters are
   > written in C should take a look. Still, perhaps Sajid Ahmed hasn't
   > seen many Lisp compilers and interpreters? That might explain a spot
   > of ignorance. Again, the Lisp FAQ can help fix that.
   > 

	   I took a look at the faq. It's mostly about the syntax of 
   Lisp code. 
	   Anyway, all lisp programs, as well as the compilers and 
   interpreters are broken down into assembly level code, which is 
   iterative. The thing I have a problem with is with people trying 
   to write programs that are completely recursive, which is what lisp 
   is about. That is the wrong way to go about it. It's a tremendous 
   waste. 

And in one fell swoop, 50 years of programming science and engineering
are thrown out the window (or left for the garbage collectors :) ).

This has nothing to do with Lisp or C/C++ or Assembly language.  This
has to do with good programming practices and the experience of the
programmer.  Next thing I'll hear from you is how to solve the
general TSP problem in linear time.  (Of course, I'd like to hear from
you a definition of "linear time" beforehand).


-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33CFF11B.5AB0@capital.net>
Marco Antoniotti wrote:
> 
> And in one fell swoop, 50 years of programming science and engineering
> are thrown out the window (or left for the garbage collectors :) ).
> 


	I feel bad for the people whose careers are based on this stuff. 
I once had a professor who went to school with Ted Kaczynski (aka the
Unabomber), who supposedly wrote the world's fastest sorting algorithm. 
The only catch was that you needed more elements than the total 
number of atoms in the universe for it to be faster than any of the 
so-called slower sorting routines.


					Peaceman
From: William Clodius
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33CE9AFD.31DF@lanl.gov>
Sajid Ahmed the Peaceman wrote:
> <snip>
>         Anyway, all lisp programs, as well as the compilers and
> interpreters are broken down into assembly level code, which is
> iterative. The thing I have a problem with is with people trying
> to write programs that are completely recursive, which is what lisp
> is about. That is the wrong way to go about it. It's a tremendous
> waste.
> <snip>

While most assemblers are iterative, I believe I read recently (in the
proceedings from the most recent PLDI conference, in a paper on developing
code generators for processors from the analysis of C compiler output)
that the assembler for the Tera computer is a dialect of Scheme. This,
of course, need not imply that the Tera assembly code is essentially
recursive, (or that the machine code generated from the assembly need
have a close resemblance to the assembly code), but is interesting none
the less.

Programming language design is about providing a means of expressing an
idea as clearly as possible (because recursion is often the clearest
way of expressing a concept, languages should always include recursion).
Implementation is about providing the "best" possible combination of
robustness and efficiency (if that is best achieved by translating the
recursion into an iterative process, then so be it).  And programming is
about serving the needs of the user of the code (and, since one important
user of the code is very often its maintainer, that implies that the
code should be written as clearly as possible, with subtle tricks (e.g.,
unclear iterative constructs) used rarely and documented well).

-- 

William B. Clodius		Phone: (505)-665-9370
Los Alamos Nat. Lab., NIS-2     FAX: (505)-667-3815
PO Box 1663, MS-C323    	Group office: (505)-667-5776
Los Alamos, NM 87545            Email: ········@lanl.gov
From: Fred Haineux
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <bc-1707971344080001@17.127.18.234>
|          Anyway, all lisp programs, as well as the compilers and 
|  interpreters are broken down into assembly level code, which is 
|  iterative. The thing I have a problem with is with people trying 
|  to write programs that are completely recursive, which is what lisp 
|  is about. That is the wrong way to go about it. It's a tremendous 
|  waste. 

If I write a clever compiler, it can turn recursion into iteration.
Lisp does that, automatically. (see note 1)

If I want to write iteration instead of recursion in Lisp, there are
plenty of ways to do so. I am not at any time required to use recursion. 
 
Here is an example I'm sure you'll understand:
(loop for x from 1 to 10 do (print "I am iterating"))

The Lisp function does not use recursion, but iterates in the exact same
way as this C program:
{for (x=1;x<=10;x++) {printf("I am iterating\n");};};  // (see note 2)

Please also note that in lisp, it's perfectly OK to say:
(defun eat (a b c)
"Combine a b and c into a single number."
   (let (x)             ;;;; x is a local variable, like "int x;" in C
    (progn              ;;;; (see note 3)
      (setf x 0)
      (setf x (+ a b))
      (setf x (* c x))
      x)))

which is precisely the same as this C program

int
eat (int a, int b, int c)
{
   int x;

   x=0;
   x=a+b;
   x=x*c;
   return x;
}


Bottom line: you can do any programming construct you like in Lisp.

fred

(note 1) Recursion has nothing to do with programming languages. You can
write recursive assembly code just as easily as recursive Lisp. You just
push things onto "the stack" and then pop them off when you are finishing
up. 

Indeed, every subroutine call, be it written in assembler, FORTRAN, C++,
or Lisp, does exactly this.

However, if you write a lisp function that explicitly uses recursion, the
compiler will be smart enough (in most cases) to compile an equivalent
function that uses iteration instead of recursion.

(note 2) I've put the brackets in here, even though they are unnecessary,
to call attention to the fact that C and Lisp have nearly identical
syntax. 

(note 3) Strictly speaking, this particular "(progn" is unnecessary (defun
has one "built in"), but I put it there to make it clear that Lisp has it,
and that it can be used just like any other function call, and that it
works just like you think it should: evaluate each statement in turn, and
return the result of the last one.

Note however, that C does not work precisely this way -- without the
explicit "return" statement, the result of the function is unpredictable,
despite the fact that the compiler will not signal an error!
From: Barry Margolin
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5qmqhj$sq8@pasilla.bbnplanet.com>
In article <···················@17.127.18.234>,
Fred Haineux <··@wetware.com> wrote:
>If I write a clever compiler, it can turn recursion into iteration.
>Lisp does that, automatically. (see note 1)

This is generally only true if the function is tail-recursive.  Many
recursive functions are not tail-recursive, and transforming an obviously
recursive function into a tail-recursive one may require somewhat contorted
coding.  See my post with the iterative, recursive, and tail-recursive
versions of factorial.

An extremely clever compiler might be able to figure out how to transform a
recursive function into a tail-recursive one, but this requires quite a bit
of flow analysis that's beyond most compiler writers.
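
(That factorial post isn't reproduced in this thread, but the three
shapes in question look roughly like the following sketch.)

;; Plain recursion: the multiply happens *after* the recursive call
;; returns, so the call is not a tail call.
(defun fact-rec (n)
  (if (<= n 1)
      1
      (* n (fact-rec (- n 1)))))

;; Tail-recursive: an accumulator carries the pending work, so the
;; recursive call is the last thing the function does.
(defun fact-tail (n &optional (acc 1))
  (if (<= n 1)
      acc
      (fact-tail (- n 1) (* n acc))))

;; Plain iteration.
(defun fact-iter (n)
  (let ((acc 1))
    (dotimes (i n acc)
      (setf acc (* acc (+ i 1))))))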

-- 
Barry Margolin, ······@bbnplanet.com
BBN Corporation, Cambridge, MA
Support the anti-spam movement; see <http://www.cauce.org/>
From: Henry Baker
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <hbaker-1807971011310001@10.0.2.1>
In article <··········@pasilla.bbnplanet.com>, Barry Margolin
<······@bbnplanet.com> wrote:

> In article <···················@17.127.18.234>,
> Fred Haineux <··@wetware.com> wrote:
> >If I write a clever compiler, it can turn recursion into iteration.
> >Lisp does that, automatically. (see note 1)
> 
> This is generally only true if the function is tail-recursive.  Many
> recursive functions are not tail-recursive, and transforming an obviously
> recursive function into a tail-recursive one may require somewhat contorted
> coding.  See my post with the iterative, recursive, and tail-recursive
> versions of factorial.

EVERY function can be made tail-recursive by means of continuation-passing.
That's what Michael Fischer proved in 1972.  (The original 'push'
technology :-)
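
For example, ordinary factorial in continuation-passing style (a
sketch): every call below is a tail call, and the pending
multiplications live in the chain of closures rather than on the
control stack.

(defun fact-cps (n k)
  (if (<= n 1)
      (funcall k 1)
      (fact-cps (- n 1)
                (lambda (r) (funcall k (* n r))))))

;; (fact-cps 10 #'identity) => 3628800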
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33CFF691.2877@capital.net>
Henry Baker wrote:
> 
> EVERY function can be made tail-recursive by means of continuation-passing.
> That's what Michael Fischer proved in 1972.  (The original 'push'
> technology :-)

	Good, but could you imagine writing code in lisp using tail 
recursion to evaluate a triple integral formula :

          x1   y1     z1
           /    /     /
          |     |     |        ex   ey   ez
          |     |     |    C  x    y    z     dx dy dz 
          /     /     /
        x0    y0    z0



	It would be far easier to leave the tail recursion out.  

	
						Peaceman
From: Johann Hibschman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5qov7f$q2t@agate.berkeley.edu>
In article <·············@capital.net>,
Sajid Ahmed the Peaceman  <········@capital.net> wrote:
>
>	Good, but could you imagine writing code in lisp using tail 
>recursion to eveluate a triple integral formula :
>
>          x1   y1     z1
>           /    /     /
>          |     |     |        ex   ey   ez
>          |     |     |    C  x    y    z     dx dy dz 
>          /     /     /
>        x0    y0    z0
>
>
>
>	It would be far easier to leave the tail recursion out.  

Actually, I've been doing some stuff sort of like this in Lisp.  First
of all, I feel obligated to say that iteration isn't really considered
to be bad style, which is what I think you were aiming for.  for loops
work just as well in Lisp as they do in C.  (do constructs, loop
constructs, etc.)

Now on to the cool bit.  Imagine you have a function which knows how
to do 1D integrals.  There are a zillion out there; call the one
you've got "integrate", so (integrate f x0 x1) does what you want it
to do.

Now, to integrate your 3D function f3(x, y, z), you just have to do:

(integrate
  #'(lambda (x)
      (integrate
        #'(lambda (y)
            (integrate
              #'(lambda (z) (funcall f3 x y z))
              z0 z1))
        y0 y1))
  x0 x1)

Pretty cool, eh?  You don't even have to explicitly write the three
nested for loops.  Sure, it's not a very good algorithm for doing 3D
integrals, but it's simple and it works.

Part of the appeal of Lisp is how closures can be used to pull off
trickery like this.
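
For the curious: any 1D quadrature routine will do for "integrate".  A
toy midpoint-rule version, purely to make the sketch above
self-contained, might be

(defun integrate (f a b &optional (n 100))
  "Fixed-step midpoint rule; good enough for a demonstration."
  (let ((h (/ (- b a) n)))
    (loop for i below n
          sum (* h (funcall f (+ a (* (+ i 0.5) h)))))))

With that, integrating (* x y z) over the unit cube via the nested
closures above gives approximately 1/8, as expected.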

- Johann






-- 
Johann A. Hibschman         | Grad student in Physics, working in Astronomy.
······@physics.berkeley.edu | Probing pulsar pair production processes.
From: Henry Baker
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <hbaker-1907970830560001@10.0.2.1>
In article <·············@capital.net>, ········@capital.net wrote:

> Henry Baker wrote:
> > 
> > EVERY function can be made tail-recursive by means of continuation-passing.
> > That's what Michael Fischer proved in 1972.  (The original 'push'
> > technology :-)
> 
>         Good, but could you imagine writing code in lisp using tail 
> recursion to eveluate a triple integral formula :
> 
>           x1   y1     z1
>            /    /     /
>           |     |     |        ex   ey   ez
>           |     |     |    C  x    y    z     dx dy dz 
>           /     /     /
>         x0    y0    z0

Yes.  You could learn a lot from Sussman & Abelson's book.

You could even learn how to write a program to do those neat ascii
integral formulae in lisp!
From: Karl M. Hegbloom
Subject: Tail recursion (?)
Date: 
Message-ID: <87en8qqti3.fsf_-_@bittersweet.inetarena.com>
>>>>> "Henry" == Henry Baker <······@netcom.com> writes:

    Henry> EVERY function can be made tail-recursive by means of
    Henry> continuation-passing.  That's what Michael Fischer proved
    Henry> in 1972.  (The original 'push' technology :-)

 Where can we read about that, preferably online?

-- 
··················@inetarena.com (Karl M. Hegbloom)
http://www.inetarena.com/~karlheg
Portland, OR  USA
Debian GNU 1.3  Linux 2.0.30+parport AMD K5 PR-133
From: Henry Baker
Subject: Re: Tail recursion (?)
Date: 
Message-ID: <hbaker-2307970704130001@10.0.2.1>
In article <·················@bittersweet.inetarena.com>,
············@inetarena.com (Karl M. Hegbloom) wrote:

> >>>>> "Henry" == Henry Baker <······@netcom.com> writes:
> 
>     Henry> EVERY function can be made tail-recursive by means of
>     Henry> continuation-passing.  That's what Michael Fischer proved
>     Henry> in 1972.  (The original 'push' technology :-)
> 
>  Where can we read about that, preferably online?

Fischer invented CPS, or at least showed how it could be used to convert
upward funargs into downward funargs.  Look for 'continuation-passing
style' or 'cps'.
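
A tiny illustration of the idea (just a sketch, not Fischer's construction):
the usual factorial does its multiply *after* the recursive call returns, but
once the "rest of the computation" is passed along as a continuation, the
recursive call ends up in tail position:

(defun fact (n)
  (if (zerop n) 1 (* n (fact (- n 1)))))     ; work left to do after the call

(defun fact-cps (n k)
  (if (zerop n)
      (funcall k 1)                          ; hand the base case to the continuation
      (fact-cps (- n 1)                      ; now a tail call
                #'(lambda (r) (funcall k (* n r))))))

;; (fact-cps 5 #'identity) => 120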
From: Steven Perryman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5r21nj$2av@bhars12c.bnr.co.uk>
In article <··········@pasilla.bbnplanet.com> Barry Margolin <······@bbnplanet.com> writes:
>In article <···················@17.127.18.234>,
>Fred Haineux <··@wetware.com> wrote:
>>If I write a clever compiler, it can turn recursion into iteration.
>>Lisp does that, automatically. (see note 1)

>This is generally only true if the function is tail-recursive.  Many
>recursive functions are not tail-recursive, and transforming an obviously
>recursive function into a tail-recursive one may require somewhat contorted
>coding.  See my post with the iterative, recursive, and tail-recursive
>versions of factorial.

>An extremely clever compiler might be able to figure out how to transform a
>recursive function into a tail-recursive one, but this requires quite a bit
>of flow analysis that's beyond most compiler writers.

When I was a final-year undergrad in 1987, the FP course I did had quite a bit
on 'transformation' techniques. The basis was that, using proof techniques
(such as unfold/induction hypothesis/fold proofs etc.), a system could convert
one functional program with reams of recursion into an equivalent program that
was more computationally efficient. One specific part of this was the
conversion of recursive programs into equivalent tail-recursive progs that
used accumulating parameters to effect the 'iteration'.

And obviously the 'on paper' examples were stuff like Fibonacci and factorial.
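
For factorial the transformation is small enough to show inline (a textbook
sketch): the accumulating parameter carries the partial product so that the
recursive call becomes a tail call, i.e. a loop for a compiler that cares:

(defun fact (n)                        ; plain recursion: multiply after the call
  (if (zerop n) 1 (* n (fact (- n 1)))))

(defun fact-acc (n acc)                ; accumulating parameter: tail-recursive
  (if (zerop n) acc (fact-acc (- n 1) (* n acc))))

;; (fact-acc 5 1) => 120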

Given that 10 years have passed, I'd like to think there are now quite a few
support systems (theorem provers etc) available to FP compiler writers to use
in areas like removing tail recursion.

As an aside, I still use the on-paper techniques for stuff like C++, where I
come up with an initial inefficient recursive algorithm, and then 'hand
transform' it into stuff using accumulating params and/or tail recursion
removal (so someone finds it more than academic :-) ) . 


Regards,
Steven Perryman
·······@nortel.co.uk
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33CFF426.506D@capital.net>
Fred Haineux wrote:
> 
> |          Anyway, all lisp programs, as well as the compilers and
> |  interpreters are broken down into assembly level code, which is
> |  iterative. The thing I have a problem with is with people trying
> |  to write programs that are completely recursive, which is what lisp
> |  is about. That is the wrong way to go about it. It's a tremendous
> |  waste.
> 
> If I write a clever compiler, it can turn recursion into iteration.
> Lisp does that, automatically. (see note 1)
> 

	All compilers do that. 

> If I want to write iteration instead of recursion in Lisp, there are
> plenty of ways to do so. I am not at any time required to use recursion.
  ...
> Bottom line: you can do any programming construct you like in Lisp.
> 
> fred

	If that is indeed the case, and considered to be good programming
style in lisp, I'll completely change my outlook on lisp, and accept 
it as a decent programming language. 
        What I've been told when I took a course in Lisp about 5 years
ago, was  that you could use iterative code, but it was considered bad
programming style, like using goto statements in other programming 
languages. 


						Peaceman
From: Bill House
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <01bc9550$19c51540$03d3c9d0@wjh_dell_133.dazsi.com>
Sajid Ahmed the Peaceman <········@capital.net> wrote in article
<·············@capital.net>...
> 
> 	If that is indeed the case, and considered to be good programming
> style in lisp, I'll completely change my outlook on lisp, and accept 
> it as a decent programming language. 
>         What I've been told when I took a course in Lisp about 5 years
> ago, was  that you could use iterative code, but it was considered bad
> programming style, like using goto statements in other programming 
> languages. 
> 
This only proves that there are opinionated teachers in any language. You
shouldn't hold one person's biased view of the world against Lisp. However,
recursive source code does not have to lead to wasteful runtime behavior, as
I'm sure others have explained. 

Perhaps it's time you reconsidered Lisp on its own merits and determined how
well it suits your needs without anyone else's biases? I don't think Lisp is
the only useful language, but it's a good one for many applications, and
perhaps the best for some.

Bill House
-- 
http://www.housewebs.com
Note: my e-mail address has been altered to
confuse the enemy. 
From: Fred Haineux
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <bc-2107971455040001@17.127.18.96>
Sajid Ahmed the Peaceman <········@capital.net> wrote in article
<·············@capital.net>...
>       If that is indeed the case, and considered to be good programming
> style in lisp, I'll completely change my outlook on lisp, and accept 
> it as a decent programming language. 
>         What I've been told when I took a course in Lisp about 5 years
> ago, was  that you could use iterative code, but it was considered bad
> programming style, like using goto statements in other programming 
> languages. 

In an email, I responded to "peaceman" telling him that in Lisp, using an
"obvious" syntax, that is to say one which is very readable, is more
important than using recursive programming all the time.

Some algorithms are iterative, and it's 100% OK to write those functions
that way. Use the "obvious" mode of expression.

If you have a choice between an iterative and a recursive algorithm that
does the same thing, you are 100% OK to use whichever you like better.
Heck, put them both in.
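
A throwaway example of "put them both in": summing a list reads fine either
way, and any Common Lisp will happily take both (the names here are made up):

(defun sum-list-rec (lst)                   ; the recursive phrasing
  (if (null lst)
      0
      (+ (car lst) (sum-list-rec (cdr lst)))))

(defun sum-list-iter (lst)                  ; the iterative phrasing
  (let ((total 0))
    (dolist (x lst total)
      (incf total x))))

;; (sum-list-rec '(1 2 3 4)) => 10, same as (sum-list-iter '(1 2 3 4))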

Many Lisp heads assume, as a first approximation, that any syntax will
produce "fast enough" code (hopefully CORRECT code, but hey). Many Lisp
programmers would rather write "obvious," correct code than twisted,
speedy code.

This is a different attitude from that of some C programmers, who seem to
care most about efficiency and less about correctness.

Interestingly, the famous book "Elements of Programming Style," by
Kernighan and Plauger, agrees that "obvious" code is better than "speedy"
code, because you rarely know where to optimize until you have run a
profiler, and "obvious" code is easier to maintain. These two guys, I am
told, are somewhat well-known to the C programmers, although this book
deals mostly with FORTRAN.

But anyway, this is a question of style, and "de gustibus non est
disputandum" -- there's no accounting for taste.

I think that the highly technical term for a teacher who browbeats
students into using recursive syntax exclusively, even when the algorithm is
better expressed iteratively, is "a bozo", although that's my opinion, not
everyone's. (It is an interesting mind-expansion exercise to practice
turning recursion into iteration, and vice versa, though.)
From: Henry Baker
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <hbaker-2207970842150001@10.0.2.1>
In article <···················@17.127.18.96>, ··@wetware.com (Fred
Haineux) wrote:

> In an email, I responded to "peaceman" telling him that in Lisp, using an
> "obvious" syntax, that is to say one which is very readable, is more
> important than using recursive programming all the time.
> 
> Some algorithms are iterative, and it's 100% OK to write those functions
> that way. Use the "obvious" mode of expression.

Unfortunately, "obvious" is in the mind of the beholder.  State-based
(iterative) programming is harder not only for humans to understand
(subjective), but also harder for compilers to understand (objective).

One of these days (in my more than ample spare time :-) I'd like to show
how even traditionally "iterative" Fortran matrix codes are prettier and
easier to understand in functional/recursive form.  (Hint: think linear
logic.)

If you think "iterative" codes are easier to understand, then please
explain to me how the "SVD" (Singular Value Decomposition) transformation
in "Numerical Recipes" works.  (This section of the book is actually
online in pdf form at the publisher's web site, or at least used to be.)

See

ftp://ftp.netcom.com/pub/hb/hbaker/sigplannotices/gigo-1997-03.html

for some additional examples of iterative v. recursive.
From: Fred Haineux
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <bc-2207970952110001@17.127.18.96>
In article <·······················@10.0.2.1>, ······@netcom.com (Henry
Baker) wrote:

|  In article <···················@17.127.18.96>, ··@wetware.com (Fred
|  Haineux) wrote:
|  
|  > In an email, I responded to "peaceman" telling him that in Lisp, using an
|  > "obvious" syntax, that is to say one which is very readable, is more
|  > important than using recursive programming all the time.
|  > 
|  > Some algorithms are iterative, and it's 100% OK to write those functions
|  > that way. Use the "obvious" mode of expression.
|  
|  Unfortunately, "obvious" is in the mind of the beholder.  State-based
|  (iterative) programming is harder not only for humans to understand
|  (subjective), but also harder for compilers to understand (objective).

"Yes, that's exactly right." Or, perhaps, "Yes, but..."

|  
|  One of these days (in my more than ample spare time :-) I'd like to show
|  how even traditionally "iterative" Fortran matrix codes are prettier and
|  easier to understand in functional/recursive form.  (Hint: think linear
|  logic.)
|  
|  If you think "iterative" codes are easier to understand, then please
|  explain to me how the "SVD" (Singular Value Decomposition) transformation
|  in "Numerical Recipes" works.  (This section of the book is actually
|  online in pdf form at the publisher's web site, or at least used to be.)
|  
|  See
|  
|  ftp://ftp.netcom.com/pub/hb/hbaker/sigplannotices/gigo-1997-03.html
|  
|  for some additional examples of iterative v. recursive.


It's good to have these references, but I think that I was making the
point that you can write in whichever style you please.

Obviously, freedom of choice is freedom to piss off your colleagues with
non-obvious code.

But it beats Obfuscated C all to heck.
From: Bill House
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <01bc977f$19e65ee0$03d3c9d0@wjh_dell_133.dazsi.com>
Henry Baker <······@netcom.com> wrote in article
<·······················@10.0.2.1>...
>
>[snip}
>
> If you think "iterative" codes are easier to understand, then please
> explain to me how the "SVD" (Singular Value Decomposition) transformation
> in "Numerical Recipes" works.  (This section of the book is actually
> online in pdf form at the publisher's web site, or at least used to be.)
> 
> See
> 
> ftp://ftp.netcom.com/pub/hb/hbaker/sigplannotices/gigo-1997-03.html
> 
> for some additional examples of iterative v. recursive.
> 
Actually, I don't think iteration is necessarily easier to understand than
recursion, but it is unfortunately true that most non-Lispers don't agree.
Also, many popular products for applications programming (like VB) don't know
anything about tail recursion, as witnessed by this dire warning from the VB5
help:

"Caution   Function procedures can be recursive; that is, they can call
themselves to perform a given task. However, recursion can lead to stack
overflow. The Static keyword usually isn't used with recursive Function
procedures."

And of course, the VC++ compiler is equally "broken", if we are to judge by the
standard of your excellent article. <g>

Therefore, I think that proper handling of tail-recursion, like lambda, is
yet-another-killer-Lisp technique that the masses are doomed to remain ignorant
of (unless there is a sudden resurgence of Lisp popularity). <sigh>

Bill House
-- 
http://www.housewebs.com
Note: my e-mail address has been altered to
confuse the enemy. 
From: Michael Schuerig
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <19970724185034349125@rhrz-isdn3-p5.rhrz.uni-bonn.de>
Bill House <······@nospam.housewebs.com> wrote:

> Therefore, I think that proper handling of tail-recursion, like lambda, is
> yet-another-killer-Lisp technique that the masses are doomed to remain
> ignorant of (unless there is a sudden resurgence of Lisp popularity).
> <sigh>

Have a look at anonymous functions in Pizza, a Java superset:
<http://www.cis.unisa.edu.au/~pizza/>.

Michael

--
Michael Schuerig                        Although condemned by moralist,
·············@uni-bonn.de           lying can have high survival-value.
http://www.uni-bonn.de/~uzs90z/                    -Richard L. Gregory
From: Marco Antoniotti
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <scfzprgz3mv.fsf@infiniti.PATH.Berkeley.EDU>
In article <·············@capital.net> Sajid Ahmed the Peaceman <········@capital.net> writes:

   Fred Haineux wrote:
   > 
   > |          Anyway, all lisp programs, as well as the compilers and
   > |  interpreters are broken down into assembly level code, which is
   > |  iterative. The thing I have a problem with is with people trying
   > |  to write programs that are completely recursive, which is what lisp
   > |  is about. That is the wrong way to go about it. It's a tremendous
   > |  waste.
   > 
   > If I write a clever compiler, it can turn recursion into iteration.
   > Lisp does that, automatically. (see note 1)
   > 

	   All compilers do that.

Not only do you need to look at some algorithms books, you also need to
look at some compiler books.  AFAIK, gcc does not do that in all
possible cases (I might be wrong).  In the Lisp world, KCL and most of
its derivatives did not properly eliminate tail recursive calls.

   > If I want to write iteration instead of recursion in Lisp, there are
   > plenty of ways to do so. I am not at any time required to use recursion.
     ...
   > Bottom line: you can do any programming construct you like in Lisp.
   > 
   > fred

	   If that is indeed the case, and considered to be good programming
   style in lisp, I'll completely change my outlook on lisp, and accept 
   it as a decent programming language.

Iteration in Lisp is good programming practice whenever it is useful.

	   What I've been told when I took a course in Lisp about 5 years
   ago, was  that you could use iterative code, but it was considered bad
   programming style, like using goto statements in other programming 
   languages. 

Iteration is considered bad programming practice when recursion does
not take a performance hit, the algorithm is inherently recursive
(yes!  such beasts do exist), and a recursive solution would be
much better suited, more readable, and more maintainable.  Nobody forbids
you from writing iterative (Common) Lisp code.  You have all you need in the
language to do just that.
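
E.g., all of the following are bread-and-butter Common Lisp, with no
recursion in sight (a quick sketch, adding the numbers 0..9 three ways):

(loop for i below 10 sum i)                 ; => 45

(let ((total 0))
  (dotimes (i 10 total)                     ; dotimes returns total at the end
    (incf total i)))                        ; => 45

(do ((i 0 (1+ i))
     (total 0 (+ total i)))                 ; parallel stepping, classic DO
    ((= i 10) total))                       ; => 45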

-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33CE5629.6904@capital.net>
Hrvoje Niksic wrote:
> 
> Sajid Ahmed the Peaceman <········@capital.net> writes:
> 
> > All lisp compilers and interpreters that I've seen have been written
> > in C, and run on top of a C program.
> 
> Now, here is a man who has seen a real lot of Lisp compilers. :-)
> 
> (rest of bait ignored.)
> 

	You're smart to ignore it :)
From: Mark Greenaway
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <868924581.409344@cabal>
Sajid Ahmed the Peaceman <········@capital.net> writes:

>	Lisp is a deception. All lisp compilers and interpreters that 
>I've seen have been written in C, and run on top of a C program. I've
>seen a lot of LISP and PROLOG programmers, especially in the post
>graduate 
>level of computer science, think that lisp functions the same way as 
>mathematics. They think that a call to a recursive function
>instantaneously 
>returns a result. The fact is, these function is broken down into
>machine
>level instructions, and is executed the same way as a for next loop. 

<sarcasm>Gosh, who'd have thought it?</sarcasm> Don't be so utterly
stupid! Any decent LISP programmer, in fact any decent programmer, knows
that. Efficiency is part of programming. But it is not the be-all and
end-all. Yes, the lower-level parts of many LISP systems might well be
written in C/Assembly etc.

The real question is: What is the most efficient/elegant/best way to
express a particular program? It might, for some problems, be in C or C++.
For some, it might be LISP or Prolog. If I can write a program which
efficiently does a job in 60 lines of well-documented LISP that would take
300 lines or more of C, and they both run at similar speeds, it would
seem that LISP is the better choice.

>	AI is a bunch of garbage as well. Do you know what the main goal 
>of AI is? It is to develope a program where a person cannot
>distinguish the program from a human being. What does that have to do
>with intelligence? It's just an emulator. 

Totally untrue. It would appear that you don't understand what you are
talking about.

>	The bottom line is, all computer programs, including AI 
>programs, are just fast pocket calculators.

This is not accurate either. All digital computers can be said to be
equivalent to a Universal Turing Machine. But not a calculator. A
calculator hasn't got several of the basic properties it needs.
--
Mark
Certified Waifboy                   And when they come to ethnically cleanse me
                                    Will you speak out? Will you defend me? 
http://www.st.nepean.uws.edu.au/~mgreenaw         - Ich bin ein Auslander, PWEI
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33CE5CA2.6FA1@capital.net>
Mark Greenaway wrote:
> 
> >       Lisp is a deception. All lisp compilers and interpreters that
> >I've seen have been written in C, and run on top of a C program. I've
> >seen a lot of LISP and PROLOG programmers, especially in the post
> >graduate
> >level of computer science, think that lisp functions the same way as
> >mathematics. They think that a call to a recursive function
> >instantaneously
> >returns a result. The fact is, these function is broken down into
> >machine
> >level instructions, and is executed the same way as a for next loop.
> 
> <sarcasm>Gosh, who'd have thought it?</sarcasm> Don't be so utterly
> stupid! 

	Calling people names doesn't do anything. 


>Any decent LISP programmer, in fact any decent programmer, knows
> that. Efficiency is part of programming. But it is not the be-all and
> end-all. Yes, the lower-level parts of many LISP systems might well be
> written in C/Assembly etc.
> 
> The real question is: What is the most efficient/elegant/best way to
> express a particular program? It might, for some problems, be in C or C++.
> For some, it might be LISP or Prolog. 

	It is true that LISP and Prolog may have some built-in functions 
that let the programmer write less code, but as far as speed is
concerned, the same program written in C would usually be faster. 



>If I can write a program which
> efficiently does a job in 60 lines of well-documented LISP that would take
> 300 lines or more of C, and they both run at similiar speeds, it would
> seem that LISP is the better choice.
> 
> >       AI is a bunch of garbage as well. Do you know what the main goal
> >of AI is? It is to develope a program where a person cannot
> >distinguish the program from a human being. What does that have to do
> >with intelligence? It's just an emulator.
> 
> Totally untrue. It would appear that you don't understand what you are
> talking about.
> 
> >       The bottom line is, all computer programs, including AI
> >programs, are just fast pocket calculators.
> 
> This is not accurate either. All digital computers can be said to be
> equivalent to a Universal Turing Machine. But not a calculator. A
> calculator hasn't got several of the basic properties it needs.
> --
> Mark
> Certified Waifboy                   And when they come to ethnically cleanse me
>                                     Will you speak out? Will you defend me?
> http://www.st.nepean.uws.edu.au/~mgreenaw         - Ich bin ein Auslander, PWEI
From: David Brabant [SNI]
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5qsmsq$kgj$1@horus.mch.sni.de>
>AI is a bunch of garbage as well. Do you know what the main goal
>of AI is? It is to develope a program where a person cannot
>distinguish the program from a human being. What does that have to do
>with intelligence? It's just an emulator.

 [news groups list trimmed to comp.lang.lisp]

 It seems that another Troll has joined the legion. 
 Scott Nuds, Colin James "the Right Reverend III", 
 Alicia Carla Longstreet, Ralph Silvermann ... 
 and now Mister Sajid Ahmed "the Peaceman". Oh My !
 Reading news is a vocation.

David


-- Software is neither science nor engineering. It's really an obscure form
of poetry.--

David BrabaNT,             | E-mail: ·············@csl.sni.be
Siemens Nixdorf (SNI),     | CIS:    100337,1733
Centre Software de Liège,  | X-400:  C=BE;A=RTT;P=SCN;O=SNI;OU1=LGG1;OU2=S1
2, rue des Fories,         |         S=BRABANT;G=DAVID
4020 Liège (BELGIUM)       | HTTP:   www.sni.de       www.csl.sni.be/~david
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33CE62F4.7C7B@capital.net>
Mark Greenaway wrote:
> 
> Sajid Ahmed the Peaceman <········@capital.net> writes:
> 
> >       Lisp is a deception. All lisp compilers and interpreters that
> >I've seen have been written in C, and run on top of a C program. I've
> >seen a lot of LISP and PROLOG programmers, especially in the post
> >graduate
> >level of computer science, think that lisp functions the same way as
> >mathematics. They think that a call to a recursive function
> >instantaneously
> >returns a result. The fact is, these function is broken down into
> >machine
> >level instructions, and is executed the same way as a for next loop.
> 
> <sarcasm>Gosh, who'd have thought it?</sarcasm> Don't be so utterly
> stupid! 

	Calling people names doesn't do anything. 


>Any decent LISP programmer, in fact any decent programmer, knows
> that. Efficiency is part of programming. But it is not the be-all and
> end-all. Yes, the lower-level parts of many LISP systems might well be
> written in C/Assembly etc.
> 
> The real question is: What is the most efficient/elegant/best way to
> express a particular program? It might, for some problems, be in C or C++.
> For some, it might be LISP or Prolog. If I can write a program which
> efficiently does a job in 60 lines of well-documented LISP that would take
> 300 lines or more of C, and they both run at similiar speeds, it would
> seem that LISP is the better choice.

	It is true that LISP has some built-in functions that allow 
a programmer to write less code. As far as speed is concerned, in almost 
every situation, the same program written in C would be faster than a 
similar program written in Lisp. Why? C is much closer to the machine-level 
assembly code that all computers run on. Many C compilers allow inline 
assembly language code within the program. 


	As far as the size of the program is concerned, most of the time C 
programs are smaller. Why? Good Lisp programs only allow recursive code, 
without any stop codes, whereas good C programs allow for both recursive 
and iterative code. 

	Have you ever seen the quicksort algorithm written in Lisp? 
Even though it is a recursive function, it still needs well over 
100 lines of code. In C it would only be 5 or 6 lines.



				Peaceman
From: Henry Baker
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <hbaker-1707971415170001@10.0.2.1>
In article <·············@capital.net>, ········@capital.net wrote:
>         Have you ever seen a the quicksort algorithm written in Lisp? 
> Even though it is a recursive function, it still needs well over 
> 100 lines of code. In C it would only be 5 or 6 lines.

;;; Quicksort a lisp list in considerably fewer than 100 lines of code :-)

(defun qs (x l)                             ; sort the list x onto the list l.
  (if (null x) l
    (let* ((i (car x)) (restx (cdr x))
           (high low (highlow restx i nil nil)))
      (qs low (cons i (qs high l))))))

(defun highlow (x i h l)     ; select the high and low elts of x onto h and l.
  (if (null x) (values h l)
    (let* ((firstx (car x)) (restx (cdr x)))
      (if (< firstx i) (highlow restx i h (cons firstx l))
        (highlow restx i (cons firstx h) l)))))

This is from my paper "A 'Linear Logic' Quicksort", ACM Sigplan Notices,
Feb. 1994.
ftp://ftp.netcom.com/pub/hb/hbaker/LQsort.html   (also .ps.Z)
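
(If your Lisp's LET* doesn't accept that two-variable binding of HIGH and
LOW -- it's an extension rather than standard Common Lisp -- the portable
spelling is just MULTIPLE-VALUE-BIND, with HIGHLOW unchanged; a sketch:)

(defun qs (x l)                             ; sort the list x onto the list l.
  (if (null x) l
    (let ((i (car x)) (restx (cdr x)))
      (multiple-value-bind (high low) (highlow restx i nil nil)
        (qs low (cons i (qs high l)))))))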
From: Marco Antoniotti
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <scf3epcgx57.fsf@infiniti.PATH.Berkeley.EDU>
In article <·······················@10.0.2.1> ······@netcom.com (Henry Baker) writes:

   In article <·············@capital.net>, ········@capital.net wrote:
   >         Have you ever seen a the quicksort algorithm written in Lisp? 
   > Even though it is a recursive function, it still needs well over 
   > 100 lines of code. In C it would only be 5 or 6 lines.

   ;;; Quicksort a lisp list in considerably fewer than 100 lines of code :-)

   (defun qs (x l)                             ; sort the list x onto the list l.
     (if (null x) l
       (let* ((i (car x)) (restx (cdr x))
	      (high low (highlow restx i nil nil)))
	 (qs low (cons i (qs high l))))))

   (defun highlow (x i h l)     ; select the high and low elts of x onto h and l.
     (if (null x) (values h l)
       (let* ((firstx (car x)) (restx (cdr x)))
	 (if (< firstx i) (highlow restx i h (cons firstx l))
	   (highlow restx i (cons firstx h) l)))))

   This is from my paper "A 'Linear Logic' Quicksort'  ACM Sigplan Notices
   Feb. 1994.
   ftp://ftp.netcom.com/pub/hb/hbaker/LQsort.html   (also .ps.Z)

Come on Henry.  This is another flame bait in disguise. :)  You should
have posted the version with arrays.  Otherwise, after the claim that
"Lisp only supports recursion", we'd also get the "Lisp does not
have arrays" crap. :)

Cheers
-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33CFFD28.7EA@capital.net>
Marco Antoniotti wrote:
> 
>    In article <·············@capital.net>, ········@capital.net wrote:
>    >         Have you ever seen a the quicksort algorithm written in Lisp?
>    > Even though it is a recursive function, it still needs well over
>    > 100 lines of code. In C it would only be 5 or 6 lines.
> 
>    ;;; Quicksort a lisp list in considerably fewer than 100 lines of code :-)
> 
>    (defun qs (x l)                             ; sort the list x onto the list l.
>      (if (null x) l
>        (let* ((i (car x)) (restx (cdr x))
>               (high low (highlow restx i nil nil)))
>          (qs low (cons i (qs high l))))))
> 
>    (defun highlow (x i h l)     ; select the high and low elts of x onto h and l.
>      (if (null x) (values h l)
>        (let* ((firstx (car x)) (restx (cdr x)))
>          (if (< firstx i) (highlow restx i h (cons firstx l))
>            (highlow restx i (cons firstx h) l)))))
> 
>    This is from my paper "A 'Linear Logic' Quicksort'  ACM Sigplan Notices
>    Feb. 1994.
>    ftp://ftp.netcom.com/pub/hb/hbaker/LQsort.html   (also .ps.Z)
> 
> Come on Henry.  This is another flame bait in disguise. :)  You should
> have posted the version with arrays.  Otherwise, after the claim that
> "Lisp does support only recursion", we'd get also the "Lisp does not
> have arrays" crap. :)
> 

	Looks like he got me hook, line, and sinker :) 


	The lisp code you have above is not considered to be good 
lisp programming style (or at least not what I was told is good lisp 
programming style):

>        (let* ((i (car x)) (restx (cdr x))
>               (high low (highlow restx i nil nil)))

	The i and restx are fine since you can easily substitute the rhs values 
wherever they are referenced in the latter part of the function. 
However you can't do the same with the high and low vars.  
	Anyway, I took the time out to make the necessary changes in your 
code to take care of this. It's about 80 lines. I know you guys will
say I'm putting too little on each line, but I don't have access to
G-emacs (written in Lisp, considered by many to be the world's best text
editor, and considered by some to be the world's most difficult to use
text editor), where you can match up each close parenthesis to its
corresponding open parenthesis. There has to be some kind of alignment
for the parentheses.


						Peaceman 

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(defun high (x i h)			; select the high elts of x onto h.
  (if 
    (null x) h 
    (if 
      (< (car x) i)  
        (high 
          (cdr x) 
          i 
          h
        )
      (high 
        (cdr x) 
        i 
        (cons 
          (car x) 
          h
        )
      )
    )
  )
) 
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(defun low (x i l)     			; select the low elts of x onto l.
  (if 
    (null x) l
    (if 
      (< (car x) i) 
        (low 
          (cdr x) 
          i 
          (cons 
            (car x) 
            l
          )
        ) 
      (low 
        (cdr x) 
        i 
        l
      )
    )
  )
)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(defun qs (x l)                 ; sort the list x onto the list l.
  (if 
    (null x) l
    (qs 
      (low 
        (cdr x) 
        (car x) 
        nil
      ) 
      (cons 
        (car x) 
        (qs 
          (high 
            (cdr x) 
            (car x) 
            nil
          ) 
          l
        )
      )
    )
  )
)
From: William Paul Vrotney
Subject: Re: Lisp is fine (was Lisp is *SLOW*)
Date: 
Message-ID: <vrotneyEDJp8s.MMv@netcom.com>
In article <············@capital.net> Sajid Ahmed the Peaceman
<········@capital.net> writes:

> > 
> >    (defun qs (x l)                             ; sort the list x onto the list l.
> >      (if (null x) l
> >        (let* ((i (car x)) (restx (cdr x))
> >               (high low (highlow restx i nil nil)))
> >          (qs low (cons i (qs high l))))))
> > 
> 
> 	The lisp code you have above is not considered to be good 
> lisp programming style,  (or what was told to me to be good lisp 
> programming style):
> 

Don't believe everything you hear.  The above style is far preferable to the
garbage you've suggested below.  Some poor authors still use your old
fashioned garbage style in their books, especially many C and C++ books.
That doesn't justify it.


> 	Anyway, I took the time out to make the neccesary changes in your 
> code to take care of this. It's about 80 lines. I know you guys will
> say I'm putting too little in each line, but I don't have access to
> G-emacs 

Everybody has access to GNU Emacs.  Download a copy and learn it.  If you
learn it, I guarantee it will change your prehistoric thinking.  After you
become more hip, convert your friends who still think this way.

> ( written in Lisp,considered by many as the world's best text editor, 
>   and considered by some as the world's most difficult to use text
> editor)
> where you can match up the close paranthesis to it's corresponding open
> paranthesis. There has to be some kind of alignment for the 
> parenthesis.
> 

Why?  I once encountered a large C program that used this style of formatting.
When I printed out a hardcopy to understand it, the whole last page was
literally nothing but indented braces!  Tell me how this is of any use to
anybody.


> 						Peaceman 
> 
> ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
> (defun qs (x l)                 ; sort the list x onto the list l.
>   (if 
>     (null x) l
>     (qs 
>       (low 
>         (cdr x) 
>         (car x) 
>         nil
>       ) 
>       (cons 
>         (car x) 
>         (qs 
>           (high 
>             (cdr x) 
>             (car x) 
>             nil
>           ) 
>           l
>         )
>       )
>     )
>   )
> )


Tell me how this style of formatting is of any use to anyone without a very
long straight-edge.  I think that it is old-fashioned and for some
programmers just a fetish.

-- 

William P. Vrotney - ·······@netcom.com
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is fine (was Lisp is *SLOW*)
Date: 
Message-ID: <33D11103.2A31@capital.net>
> Don't believe everything you hear.  The above style is far preferable to the
> garbage you've suggested below.  Some poor authors still use your old
> fashioned garbage style in their books, especially many C and C++ books.
> That doesn't justify it.
> 

	The above style is fine if you don't care about the 
readability of your code. You need to put spacing before each line 
in a program to keep track of what level you're in. 


> Everybody has access to GNU Emacs.  Down-load a copy and learn it.  If you
> learn it, I guarantee it will change your prehistoric thinking.  After you
> become more hip convert you friends who still think this way.
> 

	Believe me, I know how to use it. The IDEs of many compilers
today offer many of GNU Emacs's features. 


> Why?  I once encounter a large C program that used this style of formatting.
> When I printed out a hardcopy to understand it the whole last page was
> literally nothing but indented braces!  Tell me how this is of any use to
> anybody.
> 

	The spacing of the brackets tells you the end of a sublevel, 
or how deep you are in the program. By looking at the spacing, you can 
go up your code and find the previous line that has the same 
amount of spacing. You're then able to easily see the beginning 
and end of the section.



> Tell me how this style of formating is of any use to anyone without a very
> long straight-edge?  I think that it is old fashioned and for some
> programmers just a fetish.
> 


	Well, some programmers don't care about the readability of 
their code. Why not start every line at the very first space? 


						Peaceman
From: William Paul Vrotney
Subject: Re: Lisp is fine (was Lisp is *SLOW*)
Date: 
Message-ID: <vrotneyEDLowr.L57@netcom.com>
In article <·············@capital.net> Sajid Ahmed the Peaceman
<········@capital.net> writes:

> 
> 
> 	The spacing of the brackets tells you the end of sublevel, 
> or how deep you are in the program. By looking at the spacing, you can 
> go up your code and find the previous line that has the same 
> amount of spacing. You're then able to easily see the begining 
> and end of the section.
> 

For example do you mean that you can see the beginning and end of of a
section better with this

(defun faa (x)
  (cond ((numberp x) (+ x x)
         )
        (t (list x)
           )
        )
  )


than this?

(defun faa (x)
  (cond ((numberp x) (+ x x))
        (t (list x))))


> 
> 
> > Tell me how this style of formating is of any use to anyone without a very
> > long straight-edge?  I think that it is old fashioned and for some
> > programmers just a fetish.
> > 
> 
> 
> 	Well, some programmers don't care about the readability of 
> their code. Why not start every line at the very first space? 
> 
> 
> 						Peaceman

Obviously you did not understand my point.  If you knew Emacs the way you
say you do then you would know that Emacs does proper indenting for you.



-- 

William P. Vrotney - ·······@netcom.com
From: Bill House
Subject: Re: Lisp is fine (was Lisp is *SLOW*)
Date: 
Message-ID: <01bc9552$2d23a960$03d3c9d0@wjh_dell_133.dazsi.com>
William Paul Vrotney <·······@netcom.com> wrote in article
<·················@netcom.com>...
>
>[snip two examples of indenting]
>
I think that people can most easily read the style they are most accustomed to.


As someone who typically writes multi-language programs, I find that indenting
them all more or less the same way makes them all easier for me to read. If I
get too used to Lisp conventions, the others become subjectively less readable,
and vice versa. Human perception is unfortunately illogical. <shrug>

Bill House
-- 
http://www.housewebs.com
Note: my e-mail address has been altered to
confuse the enemy. 
From: William Paul Vrotney
Subject: Re: Lisp is fine (was Lisp is *SLOW*)
Date: 
Message-ID: <vrotneyEDnEC0.6yE@netcom.com>
In article <··························@wjh_dell_133.dazsi.com> "Bill House"
<······@nospam.housewebs.com> writes:

> 
> William Paul Vrotney <·······@netcom.com> wrote in article
> <·················@netcom.com>...
> >
> >[snip two examples of indenting]
> >
> I think that people can most easily read the style they are most accustomed to.
> 
> 
> As someone who typically writes multi-language programs, I find that indenting
> them all more or less the same way makes them all easier for me to read. If I
> get too used to Lisp conventions, the others become subjectively less readable,
> and vice versa. Human perception is unfortunately illogical. <shrug>
> 

Basically it sounds like you agree with me on Lisp conventions; however, I
don't have the vice-versa experience.  I suppose this is because I prefer
Lisp.

I too write multi-language programs and also use "the same way" on my C++
programs.  But I get a lot of heat from other C++ programmers who do not
have a Lisp background.  Instead of saying that my style is wrong for some
reason, they say that they don't like it because it reminds them of Lisp.
Sometimes I wonder if the reason for this is that an occurrence of "}}}}}"
in a C program dilutes the "LISP stands for "Lots of Irritating Single
Parentheses"" catch phrase.


-- 

William P. Vrotney - ·······@netcom.com
From: Marco Antoniotti
Subject: Re: Lisp is fine (was Lisp is *SLOW*)
Date: 
Message-ID: <scfyb70z3ah.fsf@infiniti.PATH.Berkeley.EDU>
In article <·················@netcom.com> ·······@netcom.com (William Paul Vrotney) writes:


   In article <·············@capital.net> Sajid Ahmed the Peaceman
   <········@capital.net> writes:

   > 
   > 	The spacing of the brackets tells you the end of sublevel, 
   > or how deep you are in the program. By looking at the spacing, you can 
   > go up your code and find the previous line that has the same 
   > amount of spacing. You're then able to easily see the begining 
   > and end of the section.
   > 

   For example do you mean that you can see the beginning and end of of a
   section better with this

   (defun faa (x)
     (cond ((numberp x) (+ x x)
	    )
	   (t (list x)
	      )
	   )
     )


   than this?

   (defun faa (x)
     (cond ((numberp x) (+ x x))
	   (t (list x))))


   > 
   > 
   > > Tell me how this style of formating is of any use to anyone without a very
   > > long straight-edge?  I think that it is old fashioned and for some
   > > programmers just a fetish.
   > > 
   > 
   > 
   > 	Well, some programmers don't care about the readability of 
   > their code. Why not start every line at the very first space? 
   > 
   > 
   > 						Peaceman

   Obviously you did not understand my point.  If you knew Emacs the way you
   say you do then you would know that Emacs does proper indenting for you.

Not only that.  He'd know that it also does the right indenting for
C/C++ using one or another of the more or less "standard styles" (K&R C or
GNU Coding or a few others - last I checked the code for cc-mode I was
kinda overwhelmed.  BTW cc-mode is written in Emacs Lisp :) ).

Cheers
-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is fine (was Lisp is *SLOW*)
Date: 
Message-ID: <33D64176.EE9@capital.net>
William Paul Vrotney wrote:
> For example do you mean that you can see the beginning and end of of a
> section better with this
> 
> (defun faa (x)
>   (cond ((numberp x) (+ x x)
>          )
>         (t (list x)
>            )
>         )
>   )
> 
> than this?
> 
> (defun faa (x)
>   (cond ((numberp x) (+ x x))
>         (t (list x))))
> 

	How about this: 
(defun faa (x)
   (cond 
     ((numberp x) (+ x x))
     (t (list x))
   )
)


	It's the same as what you had, except that each close parenthesis 
is indented to the same column as its matching open parenthesis. 

> Obviously you did not understand my point.  If you knew Emacs the way you
> say you do then you would know that Emacs does proper indenting for you.
> 

	I'm not the foremost expert in Emacs, but I do know that your 
statement is not entirely true. Whether or not Emacs does the proper 
indenting for you depends on whether or not Emacs is set to that mode.
One could set up a .emacs file to set the editor to always stay in 
standard text mode, and not ever in c or lisp mode. 


					Peaceman
From: William Paul Vrotney
Subject: Re: Lisp is fine (was Lisp is *SLOW*)
Date: 
Message-ID: <vrotneyEDsnBz.Hr9@netcom.com>
In article <············@capital.net> Sajid Ahmed the Peaceman
<········@capital.net> writes:

> 
> William Paul Vrotney wrote:
> > For example do you mean that you can see the beginning and end of of a
> > section better with this
> > 
> > (defun faa (x)
> >   (cond ((numberp x) (+ x x)
> >          )
> >         (t (list x)
> >            )
> >         )
> >   )
> > 
> > than this?
> > 
> > (defun faa (x)
> >   (cond ((numberp x) (+ x x))
> >         (t (list x))))
> > 
> 
> 	How about this: 
> (defun faa (x)
>    (cond 
>      ((numberp x) (+ x x))
>      (t (list x))
>    )
> )
> 
> 
> 	It's the same as what you had, except the close paranthesis 
> are in the same line as the matching open parenthesis. 

OK fine, but the question still stands, is it easier for you to see the
sections in

    (defun faa (x)
       (cond 
         ((numberp x) (+ x x))
         (t (list x))
       )
    )

Than in

    (defun faa (x)
       (cond 
         ((numberp x) (+ x x))
         (t (list x))))

?  This is your claim.

> 
> > Obviously you did not understand my point.  If you knew Emacs the way you
> > say you do then you would know that Emacs does proper indenting for you.
> > 
> 
> 	I'm not the foremost expert in Emacs, but I do know that your 
> statement is not entirely true. Whether or not Emacs does the proper 
> indenting for you depends on whether or not Emacs is set to that mode.
> One could set up a .emacs file to set the editor to always stay in 
> standard text mode, and not ever in c or lisp mode. 
> 

This statement doesn't support your argument much for two reasons.  One, you
want all of the leverage out of Emacs that you can get, so why would you
turn auto indenting off?  Emacs beginners sometimes do not know that they
can tailor the way Emacs does auto-indenting so that it auto indents exactly
the way they like it.  Two, language specific modes do a lot more than just
auto-indenting.  In C mode moving back and forth on conditionals and source
debugging code with a debugger like GDB are just two more examples.
-- 

William P. Vrotney - ·······@netcom.com
From: Rainer Joswig
Subject: Re: Lisp is fine (was Lisp is *SLOW*)
Date: 
Message-ID: <joswig-ya023180002407971250070001@news.lavielle.com>
In article <············@capital.net>, ········@capital.net wrote:


> (defun faa (x)
>    (cond 
>      ((numberp x) (+ x x))
>      (t (list x))
>    )
> )
> 
> 
>         It's the same as what you had, except the close paranthesis 
> are in the same line as the matching open parenthesis.

This is bad style. You are wasting white space for nothing.
Lisp programmers usually don't care about closing parentheses.
Indentation is more important. Both indentation and parenthesis
matching/counting are done for you by the Lisp environment
(Emacs, Genera, whatever, ...).

> statement is not entirely true. Whether or not Emacs does the proper 
> indenting for you depends on whether or not Emacs is set to that mode.

This is no difficulty.

> One could set up a .emacs file to set the editor to always stay in 
> standard text mode, and not ever in c or lisp mode. 

You can set up the .emacs file so that you can edit anything at all.
You can configure your car that it doesn't drive - so what?

-- 
http://www.lavielle.com/~joswig/
From: Fred Haineux
Subject: Parentheses and indenting, etc.
Date: 
Message-ID: <bc-2407971449560001@17.127.18.96>
······@lavielle.com (Rainer Joswig) wrote:
|  > (defun faa (x)
|  >    (cond 
|  >      ((numberp x) (+ x x))
|  >      (t (list x))
|  >    )
|  > )
|  > 
|  > 
|  >         It's the same as what you had, except the close paranthesis 
|  > are in the same line as the matching open parenthesis.
|  
|  This is bad style. You are wasting white space for nothing.
|  Lisp programmer usually don't
|  care about closing parentheses. Indentation is more important.
|  Both indentation and parenthesis matching/counting does
|  the Lisp environment (Emacs, Genera, whatever, ...)
|  for you.

One of my least favorite programmer arguments concerns which indenting
style is "best." For some reason, these usually smart people seem to
explode violently when presented with code that doesn't meet their
personal indenting style.

This usually results in someone in the org mandating a particular style,
which produces months of endless bickering, and eventually produces some
kind of compromise standard that is neither fish nor fowl and is
resoundingly ignored.

Luckily, many programmers eventually realize that there is a pretty-print
program ("cb" for the unix heads), and it can be set to produce the exact
style of indenting that is preferred.

What makes this my least favorite is that the argument itself is pointless
-- no amount of moving around the brackets is going to change the
resulting machine code. (And if it does, you should shoot your compiler
vendor, because one of the few STANDARDS of C or Lisp is that whitespace
is irrelevant.)

What makes the argument especially pointless is the fact that the need to
perform a particular indentation has vanished over the years, because C
code editors are finally starting to catch up to Lisp in functionality.

For many years, Lisp editors have provided functions such as "indent based
on semantics" (i.e., NOT just "indent the same as the previous line," which is
lame), "flash matching bracket," and "highlight the entire bracketed
expression" without even cracking a sweat. Most of them have also
supported "find the definition of this function I'm pointing at," "show me
the documentation for this system call," and "show me the arguments to the
function whose name I've just typed."

Now that C editors are doing some of these things (hurray!) there really
is no need to have the close brackets on separate lines. However, it's
still a matter of PERSONAL TASTE.

Remember what the philosophers said: "We demand rigidly defined areas of
doubt and uncertainty." I think indentation style should be one of them.
From: Walt Howard
Subject: Re: Parentheses and indenting, etc.
Date: 
Message-ID: <33d8d727.21258255@news.deltanet.com>
On Thu, 24 Jul 1997 14:49:56 -0700, ··@wetware.com (Fred Haineux)
wrote:

>······@lavielle.com (Rainer Joswig) wrote:
>|  > (defun faa (x)
>|  >    (cond 
>|  >      ((numberp x) (+ x x))
>|  >      (t (list x))
>|  >    )
>|  > )
>|  > 
>|  > 
>|  >         It's the same as what you had, except the close paranthesis 
>|  > are in the same line as the matching open parenthesis.
>|  
>|  This is bad style. You are wasting white space for nothing.
>|  Lisp programmer usually don't
>|  care about closing parentheses. Indentation is more important.
>|  Both indentation and parenthesis matching/counting does
>|  the Lisp environment (Emacs, Genera, whatever, ...)
>|  for you.

	Here's a great example of how two coding methods are right,
each for the people concerned.

	I was coding in Dataflex. My boss was also. It uses goto
labels etc. All my labels were symbolic names like
"CalculateValue" or "ShowResult". His labels were all numeric,
"L0001",  "L0002". For the life of me, I couldn't figure out why
he did this.

	Then it dawned on me. He used a lot of PAPER PRINTOUTS when
looking over code. The numeric labels made sense because they
were ordered down the page. He didn't use an editor much.

	I never use paper. For me, the symbolic labels made more
sense because I could use my editor to search.

	Walt Howard
From: Gareth McCaughan
Subject: Re: Parentheses and indenting, etc.
Date: 
Message-ID: <86zprc129l.fsf@g.pet.cam.ac.uk>
Walt Howard wrote:

> 	I was coding in Dataflex. My boss was also. It uses goto
> labels etc. All my labels were symbolic names like
> "CalculateValue" or "ShowResult". His labels were all numeric,
> "L0001",  "L0002". For the life of me, I couldn't figure out why
> he did this.
> 
> 	Then it dawned on me. He used a lot of PAPER PRINTOUTS when
> looking over code. The numeric labels made sense because they
> were ordered down the page. He didn't use an editor much.
> 
> 	I never use paper. For me, the symbolic labels made more
> sense because I could use my editor to search.

So why didn't your boss use labels like L001_CalculateValue
and L002_ShowResult?

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Walt Howard
Subject: Re: Parentheses and indenting, etc.
Date: 
Message-ID: <33d908af.8481572@news.deltanet.com>
On 25 Jul 1997 00:03:34 +0100, Gareth McCaughan
<·····@dpmms.cam.ac.uk> wrote:

>Walt Howard wrote:
>
>> 	I was coding in Dataflex. My boss was also. It uses goto
>> labels etc. All my labels were symbolic names like
>> "CalculateValue" or "ShowResult". His labels were all numeric,
>> "L0001",  "L0002". For the life of me, I couldn't figure out why
>> he did this.
>> 
>> 	Then it dawned on me. He used a lot of PAPER PRINTOUTS when
>> looking over code. The numeric labels made sense because they
>> were ordered down the page. He didn't use an editor much.
>> 
>> 	I never use paper. For me, the symbolic labels made more
>> sense because I could use my editor to search.
>
>So why didn't your boss use labels like L001_CalculateValue
>and L002_ShowResult?
>
	Because he was a total dip, why else?

	Walt Howard
From: Walt Howard
Subject: Re: Parentheses and indenting, etc.
Date: 
Message-ID: <33d7d4eb.20686438@news.deltanet.com>
On Thu, 24 Jul 1997 14:49:56 -0700, ··@wetware.com (Fred Haineux)
wrote:

>······@lavielle.com (Rainer Joswig) wrote:
>|  > (defun faa (x)
>|  >    (cond 
>|  >      ((numberp x) (+ x x))
>|  >      (t (list x))
>|  >    )
>|  > )
>|  > 
>|  > 
>|  >         It's the same as what you had, except the close paranthesis 
>|  > are in the same line as the matching open parenthesis.
>|  
>|  This is bad style. You are wasting white space for nothing.
>|  Lisp programmer usually don't
>|  care about closing parentheses. Indentation is more important.
>|  Both indentation and parenthesis matching/counting does
>|  the Lisp environment (Emacs, Genera, whatever, ...)
>|  for you.
>
>One of my least favorite programmer arguments concerns which indenting
>style is "best." For some reason, these usually smart people seem to
>explode violently when presented with code that doesn't meet their
>personal indenting style.

	It sucks, but it does affect readability. Programmers can more
quickly assimilate (i.e., find and fix bugs in) code they can read
more easily.

	I do believe no one has the right to "explode" over someone
else's style. That shows a marked lack of experience and/or an
overblown self-centeredness.

	The real problem is that people who don't indent like this:

	int function( int parameter )
	{
		for( int i = 0; i<10; i++)
		{
			// do something
		}
	}

	Are brain damaged. That's all I have to say about that. 

>This usually results in someone in the org mandating a particular style,
>which produces months of endless bickering, and eventually produces some
>kind of compromise standard that is both neither fish nor foul and
>resoundingly ignored.

	Yep.

>Luckily, many programmers eventually realize that there is a pretty-print
>program ("cb" for the unix heads), and it can be set to produce the exact
>style of indenting that is preferred.

	You can't do this in a large project. The reason is, you
check this code in. Often it becomes necessary to "diff" two
versions to see "what change made it break". If someone has
reformatted the whole thing and checked it in, it's impossible to
tell what REAL changes were made between those two versions.

>What makes the argument especially pointless is the fact that the need to
>perform a particular indentation has vanished over the years, because C
>code editors are finally starting to catch up to Lisp in functionality.

	Well, emacs ( an all around editor which includes a C mode
and a C++ mode ) has been doing that forever.

	Walt Howard
From: Erik Naggum
Subject: Re: Parentheses and indenting, etc.
Date: 
Message-ID: <3078809742308429@naggum.no>
* Fred Haineux
| Luckily, many programmers eventually realize that there is a pretty-print
| program ("cb" for the unix heads), and it can be set to produce the exact
| style of indenting that is preferred.

* Walt Howard
| You can't do this in a large project.  The reason is, you check this code
| in.  Often it becomes necessary to "diff" two versions to see "what
| change made it break".  If someone reformatted the whole thing and
| checked it in, its impossible to tell what REAL changes were made between
| those two versions.

it occurred to me that the problem is that "diff" is run on the text
representation of the code.  had it been possible to "diff" the _code_, each
programmer could check his code in and out in his own textual style.

#\Erik
-- 
Thomas J. Watson, founder of IBM: "man shall think.  machines shall work."
William H. Gates, III, Microsoft: "man shall work.  machines shall not."
From: Mariusz Zydyk
Subject: Re: Parentheses and indenting, etc.
Date: 
Message-ID: <33deb270.10254707@news>
On 25 Jul 1997 08:55:42 +0000, Erik Naggum <····@naggum.no> wrote:
[>it occurred to me that the problem is that "diff" is run on the text
[>representation of the code.  had it been possible "diff" the _code_, each
[>programmer could check his code in and out in his own textual style.

------- Well, no, you can't diff a binary. How would you be able to
tell where the difference mapped to in the source once you've found
it anyway? Also, diffing each programmer's source before checking
in might not work very well on a large project, where files might be
shared, or multiple people will be making changes to the same file.

What might work a bit better is to have a custom program to strip C
of any pretty features, extra spacing, etc., and then diff the files.
Hmmm... why not just follow the coding standard of whatever team/
project/company you are working at?

--
 Mariusz Zydyk                           http://www.ucalgary.ca/~mszydyk/
 Prince of Darkness                                          ···@null.net
           How do you make holy water? Boil the hell out of it.
From: Erik Naggum
Subject: Re: Parentheses and indenting, etc.
Date: 
Message-ID: <3079253923905075@naggum.no>
* Mariusz Zydyk
| On 25 Jul 1997 08:55:42 +0000, Erik Naggum <····@naggum.no> wrote:
| [>it occurred to me that the problem is that "diff" is run on the text
| [>representation of the code.  had it been possible "diff" the _code_, each
| [>programmer could check his code in and out in his own textual style.
| 
| ------- Well, no, you can't diff a binary.

sigh.  of course you can, and "code" doesn't mean "binary", but that's not
the issue.  my point is that you don't need to diff the whitespace to diff
the source.  most diff programs are anal about newlines, for instance.

| Hmmm... why not just follow a coding standard of whatever team/project
| /company you are working at.

sigh.  that's how this discussion started.

#\Erik
-- 
Thomas J. Watson, founder of IBM: "man shall think.  machines shall work."
William H. Gates, III, Microsoft: "man shall work.  machines shall not."
From: David Hanley
Subject: Re: Parentheses and indenting, etc.
Date: 
Message-ID: <33DF5480.239BED13@nospan.netright.com>
Mariusz Zydyk wrote:

> What might work a bit better is to have a custom program to strip C
> of any pretty features, extra spacing, etc, and then diff the files.
> Hmmm... why not just follow a coding standard of whatever team/project
>
> /company you are working at.

    Most diff algorithms work on a line basis.  Working on a character
or token basis would be substantially more computationally expensive, but
it's probably a good idea.  Then the diff program would be insensitive to
formatting.

dave
From: Walt Howard
Subject: Re: Parentheses and indenting, etc.
Date: 
Message-ID: <33e7ef71.27967430@news.deltanet.com>
On Wed, 30 Jul 1997 03:20:40 GMT, ···@null.net (Mariusz Zydyk)
wrote:

>On 25 Jul 1997 08:55:42 +0000, Erik Naggum <····@naggum.no> wrote:
>[>it occurred to me that the problem is that "diff" is run on the text
>[>representation of the code.  had it been possible "diff" the _code_, each
>[>programmer could check his code in and out in his own textual style.
>
>------- Well, no, you can't diff a binary. How would you be able to
>tell where the idfference mapped to in the source once you've found
>it anyways? Also, diffing each programmer's source before checking
>in might not work very well on a large project, where files might be
>shared, or multiple people will be making changes to the same file?
>
>What might work a bit better is to have a custom program to strip C
>of any pretty features, extra spacing, etc, and then diff the files.
>Hmmm... why not just follow a coding standard of whatever team/project
>/company you are working at.

	Well, in all honesty, the unix diff has a feature to ignore
whitespace in determining sameness.

	Walt Howard
From: Erik Naggum
Subject: Re: Parentheses and indenting, etc.
Date: 
Message-ID: <3079336798464772@naggum.no>
* Walt Howard
| Well, in all honesty, the unix diff has a feature to ignore whitespace in
| determining sameness.

not really.  it only collapses sequences of blanks and tabs into one blank.
this doesn't take care of such things as adding or removing insignificant
whitespace where zero whitespace is allowed, or the annoying newlines that
are also considered "whitespace" in all programming languages.

#\Erik
-- 
Thomas J. Watson, founder of IBM: "man shall think.  machines shall work."
William H. Gates, III, Microsoft: "man shall work.  machines shall not."
From: Richard A. O'Keefe
Subject: Re: Parentheses and indenting, etc.
Date: 
Message-ID: <5rp7p7$pcc$1@goanna.cs.rmit.edu.au>
On 25 Jul 1997 08:55:42 +0000, Erik Naggum <····@naggum.no> wrote:
[it occurred to me that the problem is that "diff" is run on the text
[representation of the code.  had it been possible "diff" the _code_, each
[programmer could check his code in and out in his own textual style.

···@null.net (Mariusz Zydyk) writes:
>------- Well, no, you can't diff a binary.

But Erik never suggested doing so.  He was merely adverting to the old
Interlisp proverb:  "a PROGRAM is not a LISTING".  In short, he was
suggesting something like diffing abstract syntax trees, and yes you
CAN do that.

>How would you be able to tell where the idfference mapped to in the
>source once you've found it anyways?

If it were a matter of diffing binaries, the answer is pathetically
obvious:  via the line number table.  When it's a matter of diffing
abstract syntax trees, part of Erik's point is: WHAT SOURCE CODE?
He's saying that there shouldn't be ANY distinguished listing.  In
fact, there might never ever have been a textual presentation of the
entire translation unit at one time.  The differ would show you the
context of the differences in your choice of layout.

For the record, Interlisp did pretty much what Erik suggests.

>What might work a bit better is to have a custom program to strip C
>of any pretty features, extra spacing, etc, and then diff the files.

This is precisely a crude textual approximation to Erik's suggestion.

>Hmmm... why not just follow a coding standard of whatever team/project
>/company you are working at.

What if a translation unit has to be shared between two teams with
different coding standards, or is produced (and maintained) by company
X but used (and therefore tested and debugged) by company Y, with
different coding standards?
-- 
Four policemen playing jazz on an up escalator in the railway station.
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
From: Tim Bradshaw
Subject: diffing syntax trees (was Re: Parentheses and indenting, etc.)
Date: 
Message-ID: <ey3raceunog.fsf_-_@staffa.aiai.ed.ac.uk>
[Note trimmed newsgroups]

* Richard A O'Keefe wrote:
> But Erik never suggested doing so.  He was merely adverting to the old
> Interlisp proverb:  "a PROGRAM is not a LISTING".  In short, he was
> suggesting something like diffing abstract syntax trees, and yes you
> CAN do that.

Is there anything out there to do this for Lisp? (an extant dialect,
i.e. not Interlisp; I don't want to have to turn a D-machine on).

I played with doing it a long time ago, but realised that the
algorithms were quite hard.  Even though I'm happy using diff-based
source control systems for my stuff, I've always thought a
syntactically-aware diff would be a very useful thing to have.  For
lisp it should be particularly simple since the `surface' syntax is so
simple.

One thing you do need to be able to do is have readable comments (like
interlisp). This isn't that hard for CL: it was the bit I had working!
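
(The core comparison, at least, is only a few lines -- this toy sketch
reports mismatches as (path old new) triples, with the path given as
positions from the root; it assumes proper lists and punts on exactly
the hard parts: insertions and deletions that shift later elements, and
comments:

(defun sexp-diff (old new &optional path)
  (cond ((equal old new) '())
        ((or (atom old) (atom new)
             (/= (length old) (length new)))
         (list (list (reverse path) old new)))
        (t (loop for o in old
                 for n in new
                 for i from 0
                 append (sexp-diff o n (cons i path))))))

;; (sexp-diff '(defun f (x) (+ x 1)) '(defun f (x) (+ x 2)))
;;   => (((3 2) 1 2))
)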

--tim
From: Steve Quist
Subject: Re: Parentheses and indenting, etc.
Date: 
Message-ID: <33de11db.3735200@59.55.4.69>
·····@netcom.com (Walt Howard) wrote:

>On Thu, 24 Jul 1997 14:49:56 -0700, ··@wetware.com (Fred Haineux)
>wrote:
>
[snip]
>	You can't do this in a large project. The reason is, you
>check this code in. Often it becomes necessary to "diff" two
>versions to see "what change made it break". If someone
>reformatted the whole thing and checked it in, its impossible to
>tell what REAL changes were made between those two versions.
>
So keep two configurations for a formatter around. Format to 
a standard style when checking in. Format to your preferred 
style when checking out. No one needs to complain, and 
real code differences are not lost in formatting "noise".
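
(For Lisp source, at least, the check-in formatter is nearly free,
since the pretty-printer can serve as the "standard style" -- a sketch
only: READ each form and PPRINT it back out.  Comments and read-time
constructs are discarded here, which a real check-in filter could not
afford to do.

(defun canonicalize-file (in-path out-path)
  ;; rewrite a file of Lisp forms in one canonical layout
  (with-open-file (in in-path)
    (with-open-file (out out-path :direction :output
                                  :if-exists :supersede)
      (let ((*print-case* :downcase)
            (*print-right-margin* 79))
        (loop for form = (read in nil 'eof)
              until (eq form 'eof)
              do (pprint form out)
                 (terpri out))))))
)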

Steve Quist
From: Marco Antoniotti
Subject: Re: Parentheses and indenting, etc.
Date: 
Message-ID: <scfbu3rw3m9.fsf@infiniti.PATH.Berkeley.EDU>
In article <···················@17.127.18.96> ··@wetware.com (Fred Haineux) writes:

   From: ··@wetware.com (Fred Haineux)
   Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
   Date: Thu, 24 Jul 1997 14:49:56 -0700
   Organization: Castle Wetware Internet Services, INC.
   Lines: 57

	...

   One of my least favorite programmer arguments concerns which indenting
   style is "best." For some reason, these usually smart people seem to
   explode violently when presented with code that doesn't meet their
   personal indenting style.

I find myself objecting to "inconsistent style".  I personally follow
the GNU Coding standards because they are reasonable and because they are
well supported by Emacs.

   This usually results in someone in the org mandating a particular style,
   which produces months of endless bickering, and eventually produces some
   kind of compromise standard that is both neither fish nor foul and
   resoundingly ignored.

I have been actively campaigning in my group to enforce the GNU Coding
standards.  This has led to some bickering, but it is a necessary evil
(again, not because the GNU coding standards are bad, but just because
they are a "coding standard").

   Luckily, many programmers eventually realize that there is a pretty-print
   program ("cb" for the unix heads), and it can be set to produce the exact
   style of indenting that is preferred.

Emacs is the program that should be used to do the indentation in the
first place :)

   What makes this my least favorite is that the argument itself is pointless
   -- no amount of moving around the brackets is going to change the
   resulting machine code. (And if it does, you should shoot your compiler
   vendor, because one of the few STANDARDS of C or Lisp is that whitespace
   is irrelevant.)

And here we see that you have written very little Lisp recently.
Whitespace (single spaces, at least) is all-important in Lisp. :)

   What makes the argument especially pointless is the fact that the need to
   perform a particular indentation has vanished over the years, because C
   code editors are finally starting to catch up to Lisp in
   functionality.

Emacs has had C and C++ modes (written in Emacs Lisp - of course) for
at least a decade.

	...

   Remember what the philosophers said: "We demand rigidly defined areas of
   doubt and uncertainty." I think indentation style should be one of them.

Yes and no.  Consensus on such matters is extremely important in an
organization, especially when you are not just producing one-shot
programs, but systems which will have to be maintained for a long time
to come.

Cheers
-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: ····@mWilden.com
Subject: Re: Parentheses and indenting, etc.
Date: 
Message-ID: <33D8F364.5635@mWilden.com>
Marco Antoniotti wrote:
> 
>> I think indentation style should be one of them.
> 
> Yes and no.  Consensus on such matter is extremely important in an
> organization, especially when you are not just producing one=shot
> programs, but systems which will have to be maintained for a long time
> to come.

I'm afraid I disagree. I haven't worked on any large projects, merely
medium ones (20-30 programmers), but I've never found other people's
indentation styles to severely hamper my work. In fact, someone's style
can be a useful signature to see who wrote what. :)

I hate everyone's indentation style but my own. The problem with a
standard is that many if not most people are forced to use a style they
don't like and aren't comfortable with. This is like forcing everyone to
use the same editor. Is consistency in this area (and I invoke Emerson
here) important enough to irritate people? Are there any practical
reasons for indentation standards--i.e. situations where productivity
may suffer without them, and considering that productivity may also
suffer with them?
From: Fred Haineux
Subject: Re: Parentheses and indenting, etc.
Date: 
Message-ID: <bc-2507971112320001@17.127.18.96>
·······@infiniti.PATH.Berkeley.EDU (Marco Antoniotti) wrote:
|  And here we see that you have written very little Lisp recently.
|  whitespaces (single ones at least) are all-important in Lisp. :)

I do understand your point -- that "f oo" is different than "foo " -- I
should have said "variances in non-zero amounts of whitespace are
considered irrelevant." But this point is pretty minor, no?

|  Yes and no.  Consensus on such matter is extremely important in an
|  organization, especially when you are not just producing one=shot
|  programs, but systems which will have to be maintained for a long time
|  to come.

Yes, and no. Having *a* coding standard is a Good Thing, without doubt,
because it improves programmer efficiency by some small, but reasonable,
amount.

The problem is that standards are almost always embroiled in bitter
argument and bickering. If everyone could agree: code however you like,
but "checkin" WILL run "cb" (with our company's "official standards
module" attached) -- hey, that'd be great, wouldn't it?

I frankly couldn't care THAT much about the particular standard. Even if
it offends mine eye, I can cope.

I reject categorically that one coding standard is "intrinsically winning"
and that all others are "obviously braindamaged."
From: Marco Antoniotti
Subject: Re: Parentheses and indenting, etc.
Date: 
Message-ID: <scf4t9f11ig.fsf@infiniti.PATH.Berkeley.EDU>
In article <···················@17.127.18.96> ··@wetware.com (Fred Haineux) writes:

   From: ··@wetware.com (Fred Haineux)
   Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
   Date: Fri, 25 Jul 1997 11:12:32 -0700
   Organization: Castle Wetware Internet Services, INC.
   Lines: 27

   ·······@infiniti.PATH.Berkeley.EDU (Marco Antoniotti) wrote:
   |  And here we see that you have written very little Lisp recently.
   |  whitespaces (single ones at least) are all-important in Lisp. :)

   I do understand your point -- that "f oo" is different than "foo " -- I
   should have said "variances in non-zero amounts of whitespace are
   considered irrelevant." But this point is pretty minor, no?

   |  Yes and no.  Consensus on such matter is extremely important in an
   |  organization, especially when you are not just producing one=shot
   |  programs, but systems which will have to be maintained for a long time
   |  to come.

   Yes, and no. Having *a* coding standard is a Good Thing, without doubt,
   because it improves programmer efficiency by some small, but reasonable,
   amount.

I am much more concerned with the overall efficiency of the "programmer's
team" and with software maintainability over medium and long periods
of time.

   The problem is that standards are almost always embroigled in bitter
   argument and bickering. If everyone could agree: code however you like,
   but "checkin" WILL run "cb" (with our company's "official standards
   module" attached) -- hey, that'd be great, wouldn't it?

Well,  people should just use Emacs and the C/C++ modes :)

   I frankly couldn't care THAT much about the particular standard. Even if
   it offends mine eye, I can cope.

   I reject categorically that one coding standard is "intrinsically winning"
   and that all others are "obviously braindamaged."

This is true.  I can adapt, but having *a* standard surely helps.

Cheers

-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Gareth McCaughan
Subject: Re: Parentheses and indenting, etc.
Date: 
Message-ID: <86204o2gv8.fsf@g.pet.cam.ac.uk>
Fred Haineux wrote:

> What makes this my least favorite is that the argument itself is pointless
> -- no amount of moving around the brackets is going to change the
> resulting machine code. (And if it does, you should shoot your compiler
> vendor, because one of the few STANDARDS of C or Lisp is that whitespace
> is irrelevant.)

Tut. It's perfectly legal for a compiler to produce different
machine code according to the formatting of the input, provided
the machine code does the same work. So it would in fact be
legal for (say) Microsoft's C compilers to spot that code is
indented according to the GNU coding standards, and insert
lots of delay loops. :-)

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: ? the platypus {aka David Formosa}
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <869324576.187583@cabal>
In <············@capital.net> Sajid Ahmed the Peaceman <········@capital.net> writes:

>Marco Antoniotti wrote:

[...A correctly styled lisp code...]

>	The lisp code you have above is not considered to be good 
>lisp programming style,  (or what was told to me to be good lisp 
>programming style):

[...]

>						Peaceman 

>;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
>(defun high (x i h)			; select the high elts of x onto h.
>  (if 
>    (null x) h 
>    (if 
>      (< (car x) i)  
>        (high 
>          (cdr x) 
>          i 
>          h
>        )
[...Rest of the code where each open bracket makes a new line...]

This is really poorly styled Lisp.  For one thing, a group of close
brackets should go all on one line.

This is Lisp code styled as per C.  It is impossible to read or debug.

--
Please excuse my spelling as I suffer from agraphia see the url in my header. 
Never trust a country with more peaple then sheep. Buy easter bilbies.
Save the ABC Is $0.08 per day too much to pay?   ex-net.scum and proud
I'm sorry but I just don't consider 'because its yucky' a convincing argument
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33D63CC4.5691@capital.net>
? the platypus {aka David Formosa} wrote:
> 
> >;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
> >(defun high (x i h)                    ; select the high elts of x onto h.
> >  (if
> >    (null x) h
> >    (if
> >      (< (car x) i)
> >        (high
> >          (cdr x)
> >          i
> >          h
> >        )
> [...Rest of the code where each open bracket makes a new line...]
> 
> This is realy poorly styled lisp. For one thing a group of close brackets
> gose all on one line.
> 
> This is lisp code styled as per C.  It is inpossable to read or debug.
> 

	To each their own. I find this style a lot easier to read than putting 
all the end parentheses on one line. Sure, it takes up more lines, but 
it is a lot easier to group together the commands at the specified
level.

						Peaceman
From: Marco Antoniotti
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <scf4t9kh3p8.fsf@infiniti.PATH.Berkeley.EDU>
In article <·············@capital.net> Sajid Ahmed the Peaceman <········@capital.net> writes:

   From: Sajid Ahmed the Peaceman <········@capital.net>
   Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
   Date: Wed, 23 Jul 1997 13:17:56 -0400
   Organization: CADVision Development Corp.
   Reply-To: ········@capital.net
   Lines: 27

   ? the platypus {aka David Formosa} wrote:
   > 
   > >(defun high (x i h)                    ; select the high elts of x onto h.
   > >  (if
   > >    (null x) h
   > >    (if
   > >      (< (car x) i)
   > >        (high
   > >          (cdr x)
   > >          i
   > >          h
   > >        )
   > [...Rest of the code where each open bracket makes a new line...]
   > 

	   To each their own. I find this style a lot easier to read
   than putting  
   all the end parenthesis on one line. Sure, it takes up more lines, but 
   it is a lot easier to group together the commands in the specified
   level.

I suppose you'd write the same in C as

int
high (int x[], int c, int i, int h)
{
	if
	  (c == 0)
		if
		   (x[c] < i)
			high(x,
			     c - 1,
			     i,
			     h)
	...
}


Give me a break!
-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Dennis Weldy
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <PyxcQQJl8GA.221@news2.ingr.com>
Wow! I just optimized 80 lines down to 28!  Bet it even runs faster too. ;-)
Yeah, I counted the blank lines...
<see below>
Anyways, if you were writing in C or C++...
int main(int argc, char **argv)
{
    int a = 0 ;
         b = 0 ;
         c = 0 ;
         d = 0 ;

    foo(a
           )
    bar(b
           )
    if(
          c < d
       )
    {
            thisfunc(
                            a,
                            b,
                            c,
                            d,
                            )
               ;
     }
}

Would you count that as 24 lines of code? :-) 
Your "it takes 80 lines to do this in lisp" doesnt really have too much
meaning if 52 of those lines is related solely to how the code is
formatted. Heck, I could write the whole thing in one line (text file input
to a lisp interpreter).As could the C version. DO we then start counting
characters? 

Dennis

(defun high (x i h) ; select the high elts of x onto h.
  (if (null x) h 
    (if  (< (car x) i)  
          (high  (cdr x)   i   h )
          (high  (cdr x)   i  (cons (car x)   h )  )
    )
  )
) 

(defun low (x i l)     ; select the low elts of x onto l.
  (if  (null x) l
    (if  (< (car x) i) 
        (low  (cdr x)  i  (cons  (car x)  l ) ) 
        (low  (cdr x)  i  l )
     )
   )
)

(defun qs (x l)                             ; sort the list x onto the list l.
   (if  (null x) l
        (qs  (low (cdr x)  (car x)  nil ) 
                 (cons  (car x)  
                              (qs  (high  (cdr x)  (car x)   nil )  l )
                 )
         )
    )
)
>.
>  

Sajid Ahmed the Peaceman wrote in article <············@capital.net>...

>Marco Antoniotti wrote:
>> 
>>    In article <·············@capital.net>, ········@capital.net wrote:
>>    >         Have you ever seen the quicksort algorithm written in Lisp?
>>    > Even though it is a recursive function, it still needs well over
>>    > 100 lines of code. In C it would only be 5 or 6 lines.
>> 
>>    ;;; Quicksort a lisp list in considerably fewer than 100 lines of code :-)
>> 
>>    (defun qs (x l)                             ; sort the list x onto the list l.
>>      (if (null x) l
>>        (let* ((i (car x)) (restx (cdr x))
>>               (high low (highlow restx i nil nil)))
>>          (qs low (cons i (qs high l))))))
>> 
>>    (defun highlow (x i h l)     ; select the high and low elts of x onto h and l.
>>      (if (null x) (values h l)
>>        (let* ((firstx (car x)) (restx (cdr x)))
>>          (if (< firstx i) (highlow restx i h (cons firstx l))
>>            (highlow restx i (cons firstx h) l)))))
>> 
>>    This is from my paper "A 'Linear Logic' Quicksort", ACM Sigplan
>>    Notices, Feb. 1994.
>>    ftp://ftp.netcom.com/pub/hb/hbaker/LQsort.html   (also .ps.Z)
>> 
>> Come on Henry.  This is another flame bait in disguise. :)  You should
>> have posted the version with arrays.  Otherwise, after the claim that
>> "Lisp does support only recursion", we'd get also the "Lisp does not
>> have arrays" crap. :)
>> 
>
> Looks like he got me hook line and sinker :) 
>
>
> The lisp code you have above is not considered to be good 
>lisp programming style,  (or what was told to me to be good lisp 
>programming style):
>
>>        (let* ((i (car x)) (restx (cdr x))
>>               (high low (highlow restx i nil nil)))
>
> The i and restx are fine since you can easily substitute the rhs values 
>wherever they are referenced in the latter part of the function. 
>However you can't do the same with the high and low vars.  
> Anyway, I took the time out to make the necessary changes in your 
>code to take care of this. It's about 80 lines. I know you guys will
>say I'm putting too little in each line, but I don't have access to
>G-emacs (written in Lisp, considered by many as the world's best text
>editor, and considered by some as the world's most difficult to use
>text editor) where you can match up a close parenthesis to its
>corresponding open parenthesis. There has to be some kind of alignment
>for the parentheses.
>
>
> Peaceman 
>
>;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
>(defun high (x i h) ; select the high elts of x onto h.
>  (if 
>    (null x) h 
>    (if 
>      (< (car x) i)  
>        (high 
>          (cdr x) 
>          i 
>          h
>        )
>      (high 
>        (cdr x) 
>        i 
>        (cons 
>          (car x) 
>          h
>        )
>      )
>    )
>  )
>) 
>;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
>(defun low (x i l)     ; select the low elts of x onto l.
>  (if 
>    (null x) l
>    (if 
>      (< (car x) i) 
>        (low 
>          (cdr x) 
>          i 
>          (cons 
>            (car x) 
>            l
>          )
>        ) 
>      (low 
>        (cdr x) 
>        i 
>        l
>      )
>    )
>  )
>)
>;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
>(defun qs (x l)                             ; sort the list x onto the list l.
>  (if 
>    (null x) l
>    (qs 
>      (low 
>        (cdr x) 
>        (car x) 
>        nil
>      ) 
>      (cons 
>        (car x) 
>        (qs 
>          (high 
>            (cdr x) 
>            (car x) 
>            nil
>          ) 
>          l
>        )
>      )
>    )
>  )
>)
>.
> 
From: Fred Gilham
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <u7n2ngicis.fsf@japonica.csl.sri.com>
Dennis Weldy wrote:

>Wow! I just optimized 80 lines down to 28! bet it even runs faster
>too. ;-) Yeah, I counted the blank lines...
><see below>

Since Henry's original code only had 2 functions, here's Sajid Ahmed's
code rewritten to use only two functions:

(defun part (x i o tst)
  (if (null x) o
    (if (funcall tst (car x) i) (part (cdr x) i o tst)
      (part (cdr x) i (cons (car x) o) tst))))

(defun qs (x l)                             ; sort the list x onto the list l.
  (if (null x) l
    (qs (part (cdr x) (car x) nil '>)
	(cons (car x) (qs (part (cdr x) (car x) nil '<) l)))))



(Since `high' and `low' are almost identical, there's no point in
having two functions....)

This gets us back to 8 lines of less than 80 chars each :-) and meets
the stylistic objection Sajid Ahmed had.  I assume the compiler would
do common subexpression elimination on the (car x) and (cdr x) calls
so it would be just as efficient as using let*, right?

-- 
-Fred Gilham    ······@csl.sri.com
From: Rainer Joswig
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <joswig-ya023180002007971312370001@news.lavielle.com>
In article <············@capital.net>, ········@capital.net wrote:

> G-emacs 
> ( written in Lisp,considered by many as the world's best text editor, 
>   and considered by some as the world's most difficult to use text
> editor)
> where you can match up the close paranthesis to it's corresponding open
> paranthesis. There has to be some kind of alignment for the 
> parenthesis.

VI should have done it, too.

Unless somebody has an extended LET*, a MULTIPLE-VALUE-BIND will
do the trick.

(defun qs (x l p)
  (if (null x)
    l
    (let ((i (first x)))
      (multiple-value-bind (high low)
                           (highlow (rest x) i nil nil p)
        (qs low (cons i (qs high l p)) p)))))

(defun highlow (x i h l p)
  (if (null x)
    (values h l)
    (let ((firstx (first x)))
      (if (funcall p firstx i)
        (highlow (rest x) i h (cons firstx l) p)
        (highlow (rest x) i (cons firstx h) l p)))))

; (qs '(3 1 5 6) nil #'<)

If I'm counting right, this is still a bit less than 100 lines. 


Btw., for formatting code you can use PPRINT:

(let ((*print-right-margin* 80)
      (*print-case* :downcase))
  (pprint '(defun qs (x l)
            (if 
              (null x) l
              (qs 
               (low 
                (cdr x) 
                (car x) 
                nil
                ) 
               (cons 
                (car x) 
                (qs 
                 (high 
                  (cdr x) 
                  (car x) 
                  nil
                  ) 
                 l
                 )
                )
               )
              )
            )))

Gives you a version that doesn't look like C:

(defun qs (x l)
  (if (null x)
      l
      (qs (low (cdr x) (car x) nil)
          (cons (car x) (qs (high (cdr x) (car x) nil) l)))))

-- 
http://www.lavielle.com/~joswig/
From: Robert Monfera
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33CEEBBE.7D05@interport.net.removethisword>
Sajid Ahmed the Peaceman wrote:

> Good Lisp programs only allow recursive code,
> without any stop codes, whereas good C programs allow for both recursive
> and iterative code.

How about Backus' FP (Functional Programming) and FFP (Formal FP)
languages? Or APL? They all provide elegant ways to resolve recursion
without the
go-down-the-bit-level-let's-program-the-X-Y-Z-registers-this-is-how-
I've-been-counting-beans-since-kindergarten loops of imperative
languages.  The ideas of FP, FFP and APL can be embedded in Lisp
because it IS a language that allows abstraction.

As for speed, my belief is that the more abstract the language is, the
more aggressively the application can be optimized by the compiler, and
it's an area where Lisp has an opportunity to go even further.  Anyway,
it's the Return on Investment that should tell you something about the
bottom line, and now good people are _far_ more expensive than good
hardware.

C or assembly is still better for high-volume, low-complexity
programming (e.g. quartz watch, calculator or washing machine logic).

Cheers,
Rob
From: Rainer Joswig
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <joswig-ya023180002107971731410001@news.lavielle.com>
In article <·············@interport.net.removethisword>,
·······@interport.net.removethisword wrote:

> Sajid Ahmed the Peaceman wrote:
> 
> > Good Lisp programs only allow recursive code,
> > without any stop codes, whereas good C programs allow for both recursive
> > and iterative code.

Bad C programs only allow iterative code.

-- 
http://www.lavielle.com/~joswig/
From: Marco Antoniotti
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <scf204wgwhx.fsf@infiniti.PATH.Berkeley.EDU>
In article <·············@capital.net> Sajid Ahmed the Peaceman <········@capital.net> writes:

   From: Sajid Ahmed the Peaceman <········@capital.net>
   Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
   Date: Thu, 17 Jul 1997 14:22:44 -0400
   Organization: Logical Net
   Reply-To: ········@capital.net
   Lines: 55
   Mime-Version: 1.0
   Content-Type: text/plain; charset=us-ascii
   Content-Transfer-Encoding: 7bit
   X-Mailer: Mozilla 3.01 (WinNT; I)
   Xref: agate comp.lang.lisp:29313 comp.programming:52509 comp.lang.c++:282315

   Mark Greenaway wrote:
   > 
	...

   >Any decent LISP programmer, in fact any decent programmer, knows
   > that. Efficiency is part of programming. But it is not the be-all and
   > end-all. Yes, the lower-level parts of many LISP systems might well be
   > written in C/Assembly etc.
   > 
   > The real question is: What is the most efficient/elegant/best way to
   > express a particular program? It might, for some problems, be in C or C++.
   > For some, it might be LISP or Prolog. If I can write a program which
   > efficiently does a job in 60 lines of well-documented LISP that would take
   > 300 lines or more of C, and they both run at similiar speeds, it would
   > seem that LISP is the better choice.

	   It is true that LISP has some built-in functions that allow 
   a programmer to write less code. As far as speed is concerned, in almost 
   every situation, the same program written in C would be faster than a 
   similar program written in Lisp. Why? C is much closer to the
   machine-level assembly code that all computers run on. Many C compilers
   allow inline assembly language code within the program.

How about

	(defun i-am-a-very-fast-lisp-function (x y z)
	  (* x (expt y z)))

or even

	(defun i-am-a-very-fast-lisp-function (x y z)
	  #I(x * y^^z))

followed by a

	(declaim (inline i-am-a-very-fast-lisp-function))

What now?
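
(To make that concrete -- a minimal sketch, with made-up names, and
with FIXNUM declarations that are an assumption about the data rather
than a general claim -- ordinary declarations typically let a
native-code Lisp compiler open-code the arithmetic, and DISASSEMBLE
shows what actually got generated:

	(defun fast-linear (a x b)
	  ;; assumes the inputs and the result all fit in fixnums
	  (declare (type fixnum a x b)
	           (optimize (speed 3) (safety 0)))
	  (the fixnum (+ (* a x) b)))

	;; (disassemble 'fast-linear)  ; inspect the generated machine code
)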

	   As far as the size of the program is concerned, most of the time C 
   programs are smaller? Why? Good Lisp programs only allow recursive code, 
   without any stop codes, whereas good C programs allow for both recursive 
   and iterative code.

Either you have not read all the posting or you have not seen a Lisp
program or both.

	   Have you ever seen the quicksort algorithm written in Lisp? 
   Even though it is a recursive function, it still needs well over 
   100 lines of code. In C it would only be 5 or 6 lines.

In Common Lisp it is even shorter!

	(defvar my-array (make-array 1000 :element-type 'single-float))

	(dotimes (i 1000)
	   (setf (aref my-array i) (random 1000000.0)))

        (sort my-array #'<)

Cheers
-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33D001A3.4F70@capital.net>
> 
> In Common Lisp it is even shorter!
> 
>         (defvar my-array (make-array 1000 :element-type 'single-float))
> 
>         (dotimes (i 1000)
>            (setf (aref my-array i) (random 1000000.0)))
> 
>         (sort my-array #'<)
> 
> Cheers
> --
> Marco Antoniotti


	Good, but does it use the quick sort algorithm? 

	If you want to talk about built-in functions, I'd like to 
note that the qsort function is included in the standard C 
run-time library.

Cheers.

					Peaceman
From: Marco Antoniotti
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <scf204s1ee1.fsf@infiniti.PATH.Berkeley.EDU>
In article <·············@capital.net> Sajid Ahmed the Peaceman <········@capital.net> writes:

   From: Sajid Ahmed the Peaceman <········@capital.net>
   Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
   Date: Fri, 18 Jul 1997 19:52:03 -0400
   Organization: Logical Net
   Reply-To: ········@capital.net
   Lines: 24
   Mime-Version: 1.0
   Content-Type: text/plain; charset=us-ascii
   Content-Transfer-Encoding: 7bit
   X-Mailer: Mozilla 3.01 (WinNT; I)
   Xref: agate comp.lang.lisp:29360 comp.programming:52584 comp.lang.c++:282532

   > 
   > In Common Lisp it is even shorter!
   > 
   >         (defvar my-array (make-array 1000 :element-type 'single-float))
   > 
   >         (dotimes (i 1000)
   >            (setf (aref my-array i) (random 1000000.0)))
   > 
   >         (sort my-array #'<)
   > 
   > Cheers
   > --
   > Marco Antoniotti


	   Good, but does it use the quick sort algorithm?

Do you know a better comparison-based sort algorithm?  (A case can be
made for heapsort over certain data structures.)

	   If you want to talk about built in functions, I'd like to 
   note that the qsort function is included in the standard c 
   run time libraries.

Yep.  And its signature is

	 void qsort(void *base, size_t nel, size_t width,
          int (*compar) (const void *, const void *));

The Common Lisp signatures are

	sort sequence predicate &key key => sorted-sequence

	stable-sort sequence predicate &key key => sorted-sequence
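
For comparison of use -- RECORDS here is just an invented list of
(name . score) pairs -- the :key argument does the work that the C
caller has to fold into the comparator:

	;; sort by score, highest first; COPY-LIST because SORT reuses
	;; the cells of its argument
	(sort (copy-list records) #'> :key #'cdr)

	;; stable-sort additionally keeps records with equal scores in
	;; their original order
	(stable-sort (copy-list records) #'> :key #'cdr)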

Cheers

-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: David Thornley
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5r0l5t$ecf$1@darla.visi.com>
In article <·············@capital.net>,
Sajid Ahmed the Peaceman  <········@capital.net> wrote:
>> 
>> In Common Lisp it is even shorter!
>> 
>>         (defvar my-array (make-array 1000 :element-type 'single-float))
>> 
>>         (dotimes (i 1000)
>>            (setf (aref my-array i) (random 1000000.0)))
>> 
>>         (sort my-array #'<)
>>
Shouldn't that be
	(setf my-array (sort my-array #'<))
?

Just because sort is a destructive function doesn't mean you can
omit the setf and get anything reasonable from it. 
>
>
>	Good, but does it use the quick sort algorithm? 
>
Dunno.  That's implementation-dependent.  Just like in C.

>	If you want to talk about built in functions, I'd like to 
>note that the qsort function is included in the standard c 
>run time libraries.
>
Yes, a function called "qsort" is included in the standard, and it
is defined as a sort function.  The standard makes no claim of the
algorithm.  Any claim it made would be vitiated by the "as-if"
attitude of the C standard, which states that any implementation
details mandated by the standard may be disregarded, provided this
cannot be detected by any strictly conforming program.

In other words, we've got precisely equal guarantees in Common Lisp
and C, and Common Lisp has a more versatile and easier-to-use
sort function.

As far as implementations go, well, it seems just as reasonable to
use efficient sorting algorithms in Lisp as in C.  If you have some
evidence that C implementations typically implement qsort() better,
in some manner, than Common Lisp implementations implement sort,
please let us know.

David Thornley
From: Marco Antoniotti
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <scfbu3wukxo.fsf@infiniti.PATH.Berkeley.EDU>
In article <············@darla.visi.com> ········@visi.com (David Thornley) writes:

   From: ········@visi.com (David Thornley)
   Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
   Date: 21 Jul 1997 21:43:25 GMT
   Organization: Vector Internet Services, Inc.
   Lines: 46
   NNTP-Posting-Date: 21 Jul 1997 16:43:25 CDT
   Xref: agate comp.lang.lisp:29412 comp.programming:52700 comp.lang.c++:282914

   In article <·············@capital.net>,
   Sajid Ahmed the Peaceman  <········@capital.net> wrote:
   >> 
   >> In Common Lisp it is even shorter!
   >> 
   >>         (defvar my-array (make-array 1000 :element-type 'single-float))
   >> 
   >>         (dotimes (i 1000)
   >>            (setf (aref my-array i) (random 1000000.0)))
   >> 
   >>         (sort my-array #'<)
   >>
   Shouldn't that be
	   (setf my-array (sort my-array #'<))
   ?

   Just because sort is a destructive function doesn't mean you can
   omit the setf and get anything reasonable from it. 

As a matter of fact, we can in this case since we are using arrays.
(Unless I was mistaken in my interpretation of CLtL2 and the
Hyperspec).  If the sequence were a list, then the setf would be
mandatory.  The punishment would be a "shorter" list (unless the
first element were also the first in the sorted list).

If we want, we could discuss at length why some of the design choices
of "destructive" operations in Common Lisp sometime have a
non-intuitive behavior.

-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: David Thornley
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5r2f2v$jet$1@darla.visi.com>
In article <···············@infiniti.PATH.Berkeley.EDU>,
Marco Antoniotti <·······@infiniti.PATH.Berkeley.EDU> wrote:
>In article <············@darla.visi.com> ········@visi.com (David Thornley) writes:
>
>
>   In article <·············@capital.net>,
>   Sajid Ahmed the Peaceman  <········@capital.net> wrote:
>   >> 
>   >> In Common Lisp it is even shorter!
>   >> 
>   >>         (defvar my-array (make-array 1000 :element-type 'single-float))
>   >> 
>   >>         (dotimes (i 1000)
>   >>            (setf (aref my-array i) (random 1000000.0)))
>   >> 
>   >>         (sort my-array #'<)
>   >>
>   Shouldn't that be
>	   (setf my-array (sort my-array #'<))
>   ?
>
>   Just because sort is a destructive function doesn't mean you can
>   omit the setf and get anything reasonable from it. 
>
>As a matter of fact, we can in this case since we are using arrays.
>(Unless I was mistaken in my interpretation of CLtL2 and the
>Hyperspec).  If the sequence were a list, then the setf would be
>mandatory.  The punishment would be a "shorter" lists (unless the
>first element were also the first in the sorted list).
>
Yes, you are correct.  Sorry about that.  (Not that my version is
incorrect, of course, just longer than necessary.)

>If we want, we could discuss at length why some of the design choices
>of "destructive" operations in Common Lisp sometime have a
>non-intuitive behavior.
>
I talked this over once with a guy on X3J13 (if that's the correct
designation).  I understand the issues involved.  (No, it wasn't
too embarrassing at the time.)

I do tend to write (<fun> <foo> ...) as (setf <foo> (<fun> <foo> ... ))
when <fun> is a destructive function.  I *think* that's legal; if
<foo> is a constant or parameter I don't think using destructive
functions on it is kosher.  Assuming it is legal, I don't think
it's bad style.  It does protect you from changes when you decide
to convert an array to a list, for example.

David Thornley
From: Erik Naggum
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <3078520173470566@naggum.no>
* Marco Antoniotti
| If we want, we could discuss at length why some of the design choices
| of "destructive" operations in Common Lisp sometime have a
| non-intuitive behavior.

do they?  the question is one of which value one looks at, I think.  `sort'
on a list returns a sorted list, but the cons cells that used to be that
list have been reused.  if we look at the return value of `sort', we get
the sorted list.  if we look at a random cons cell that has been reused in
some unspecified way, who's to tell?  like `nreverse' in one implementation
swaps the `car' of cons cells, and in another the `cdr', we cannot know
what a cons cell that has been destructively modified would contain, unless
the operation is specified by the specification of the language, and it
isn't for `sort' or `nreverse'.

or, another way: after (sort <list> <predicate>), <list> is _history_.

#\Erik
-- 
there was some junk mail in my mailbox.  somebody wanted to sell me some
useless gizmo, and kindly supplied an ASCII drawing of it.  "actual size",
the caption read.  I selected smaller and smaller fonts until it vanished.
From: Richard A. O'Keefe
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5r4bbd$kul$1@goanna.cs.rmit.edu.au>
········@visi.com (David Thornley) writes:
>Yes, a function called "qsort" is included in the standard, and it
>is defined as a sort function.  The standard makes no claim of the
>algorithm.  Any claim it made would be vitiated by the "as-if"
>attitude of the C standard, which states that any implementation
>details mandated by the standard may be disregarded, provided this
>cannot be detected by any strictly conforming program.

Note that there actually was a real commercial C system for the PC
where qsort() was implemented using Shell sort.

-- 
Four policemen playing jazz on an up escalator in the railway station.
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
From: Alex Measday
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5r5ck9$iof@drn.zippo.com>
In article <············@goanna.cs.rmit.edu.au>, ··@goanna.cs.rmit.edu.au
says...
>
> Note that there actually was a real commercial C system for the PC
> where qsort() was implemented using Shell sort.

Which might have been a good thing!  I vaguely recall a column by Jon
Bentley a number of years ago (in UNIX REVIEW?) in which he said that the
UNIX implementations of qsort() were flakey.  Does anyone else remember
the details?

                                       Alex Measday
                                       ····@integ.com
                                       Posted via http://drn.zippo.com/
From: Jon S Anthony
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <JSA.97Jul23195950@alexandria.organon.com>
In article <··········@drn.zippo.com> Alex Measday writes:

> In article <············@goanna.cs.rmit.edu.au>, ··@goanna.cs.rmit.edu.au
> says...
> >
> > Note that there actually was a real commercial C system for the PC
> > where qsort() was implemented using Shell sort.
> 
> Which might have been a good thing!  I vaguely recall a column by Jon
> Bentley a number of years ago (in UNIX REVIEW?) in which he said that the
> UNIX implementations of qsort() were flakey.  Does anyone else remember
> the details?

You don't consider implementing qsort with shell sort "flakey"?????

/Jon

-- 
Jon Anthony
OMI, Belmont, MA 02178
617.484.3383
"Nightmares - Ha!  The way my life's been going lately,
 Who'd notice?"  -- Londo Mollari
From: Marco Antoniotti
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <scf7mehh5mu.fsf@infiniti.PATH.Berkeley.EDU>
In article <··········@drn.zippo.com> Alex Measday writes:

   From: Alex Measday
   Newsgroups: comp.lang.lisp
   Date: 23 Jul 1997 09:48:09 -0700
   Organization: Integral Systems, Inc.
   Lines: 14

   In article <············@goanna.cs.rmit.edu.au>, ··@goanna.cs.rmit.edu.au
   says...
   >
   > Note that there actually was a real commercial C system for the PC
   > where qsort() was implemented using Shell sort.

   Which might have been a good thing!  I vaguely recall a column by Jon
   Bentley a number of years ago (in UNIX REVIEW?) in which he said that the
   UNIX implementations of qsort() were flakey.  Does anyone else remember
   the details?

If the implementation of qsort in the C library is flaky then it must
be fixed.  Using a slower algorithm (shellsort) and disguising it as a
faster one could be seen as mislabeling.  Call in the lawyers :)

As a matter of fact, one optimization of a Qsort-based algorithm would
be to use an asymptotically slower algorithm (e.g. insertion sort) when
the lengths of the subsequences to be (recursively) sorted fall below an
(implementation- and architecture-dependent) limit.
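
(A sketch of that cutoff trick on a simple vector -- the function names
and the cutoff of 16 are arbitrary choices for illustration, not a
tuned library routine:

(defun insertion-sort! (vec lo hi)
  ;; sort vec[lo..hi] in place by straight insertion
  (loop for i from (1+ lo) to hi
        do (let ((key (aref vec i))
                 (j (1- i)))
             (loop while (and (>= j lo) (> (aref vec j) key))
                   do (setf (aref vec (1+ j)) (aref vec j))
                      (decf j))
             (setf (aref vec (1+ j)) key)))
  vec)

(defun quicksort! (vec &optional (lo 0) (hi (1- (length vec))))
  ;; in-place quicksort that drops to insertion sort on short ranges
  (if (< (- hi lo) 16)                  ; cutoff: tune per implementation
      (insertion-sort! vec lo hi)
      (let ((pivot (aref vec (floor (+ lo hi) 2)))
            (i lo)
            (j hi))
        (loop while (<= i j)
              do (loop while (< (aref vec i) pivot) do (incf i))
                 (loop while (> (aref vec j) pivot) do (decf j))
                 (when (<= i j)
                   (rotatef (aref vec i) (aref vec j))
                   (incf i)
                   (decf j)))
        (quicksort! vec lo j)
        (quicksort! vec i hi)))
  vec)
)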

Cheers
-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: David Hanley
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33D7F99E.E8ABCF68@nospan.netright.com>
Marco Antoniotti wrote:

> In article <··········@drn.zippo.com> Alex Measday writes:
>
>    From: Alex Measday
>    Newsgroups: comp.lang.lisp
>    Date: 23 Jul 1997 09:48:09 -0700
>    Organization: Integral Systems, Inc.
>    Lines: 14
>
>    In article <············@goanna.cs.rmit.edu.au>,
> ··@goanna.cs.rmit.edu.au
>    says...
>    >
>    > Note that there actually was a real commercial C system for the
> PC
>    > where qsort() was implemented using Shell sort.
>
>    Which might have been a good thing!  I vaguely recall a column by
> Jon
>    Bentley a number of years ago (in UNIX REVIEW?) in which he said
> that the
>    UNIX implementations of qsort() were flakey.  Does anyone else
> remember
>    the details?
>
> If the implementation of qsort in the C library is flaky then it must
> be fixed.  Using a slower algorithm (shellsort) and disguise it as a
> faster one could be seen as mislabeling.  Call in the lawyers :)

    Can you derive the expected time of shell sort for us?
    (trick question: it's very hard).  Shellsort does pretty darn well
for quite large sets.  The expected time is roughly n^(2/3) with a
low constant factor.

    I'd certainly consider implementing qsort() as heapsort, for
example.  Heapsort, at least, has space guarantees, even in simple mode.
In fact, heapsort is in-place.

    For a destructive list sort, mergesort is probably optimal.

    dave
From: Gareth McCaughan
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <86afjb2aap.fsf@g.pet.cam.ac.uk>
David Hanley wrote:

>     Can you derive the expected time of shell sort for us?
>     (trick question it's very hard ).  Shellsort does pretty darn well
> for quie large sets.  The expected time is roughly n^(2/3) with a
> low constant factor.

Er, I think you mean n^(3/2). (And a good shellsort tends to do a bit
better than that, in practice.)

>     I'd certianly consider implementing qsort() as heapsort, for
> example.
> heapsort, at least, has space guarantees, even in simple mode.
> In fact, heapsort is in-place.

If you implement quicksort properly, and you know that it will
never be given a dataset with more than 2^32 elements, you only
have 64 words of stack overhead. The difference in code size
between one sort function and another is going to be way more
than that.

On the other hand, heapsort gives *time* guarantees too. (A really
good quicksort implementation is pretty reliable, too, but you have
to be quite careful in order to get a really good quicksort
implementation.)

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: David Hanley
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33D8BB38.B837A2E2@nospan.netright.com>
Gareth McCaughan wrote:

> David Hanley wrote:
>
> >     Can you derive the expected time of shell sort for us?
> >     (trick question it's very hard ).  Shellsort does pretty darn
> well
> > for quie large sets.  The expected time is roughly n^(2/3) with a
> > low constant factor.
>
> Er, I think you mean n^(3/2). (And a good shellsort tends to do a bit
> better than that, in practice.)

    Whupsie.  That was a careless transposition.  The point I was
trying to make was that such a strong reaction against shellsort
wasn't necessarily justified.

    I blew the qsort() supplied with my C compiler by giving it a sorted
set once.  I was really disappointed!  To me, that's much 'flakier' than
a sort that may be a bit slower in the average case.

> >     I'd certianly consider implementing qsort() as heapsort, for
> > example.
> > heapsort, at least, has space guarantees, even in simple mode.
> > In fact, heapsort is in-place.
>
> If you implement quicksort properly, and you know that it will
> never be given a dataset with more than 2^32 elements, you only
> have 64 words of stack overhead. The difference in code size
> between one sort function and another is going to be way more
> than that.

    True.  You do get space guarantees if you iterate over the larger
half of the partition and only recurse over the smaller half.  And you
only use 64 words of stack if you maintain the stack yourself (stack
frames have overhead!).  I was trying to make a point about the *simple*
quicksort, though, but you are certainly correct for the properly
optimized and tweaked version.  You can get space guarantees, and be
99.99% sure of getting roughly the average time.

> On the other hand, heapsort gives *time* guarantees too. (A really
> good quicksort implementation is pretty reliable, too, but you have
> to be quite careful in order to get a really good quicksort
> implementation.)
>
    Exactly.
From: Marco Antoniotti
Subject: qsort, hsort, ssort (Was: Re: Lisp is *SLOW*)
Date: 
Message-ID: <scfen8nw46l.fsf_-_@infiniti.PATH.Berkeley.EDU>
In article <··············@g.pet.cam.ac.uk> Gareth McCaughan <·····@dpmms.cam.ac.uk> writes:

   From: Gareth McCaughan <·····@dpmms.cam.ac.uk>
   Newsgroups: comp.lang.lisp
   Date: 25 Jul 1997 02:24:46 +0100
   Organization: University of Cambridge, England
   Lines: 29
   X-Newsreader: Gnus v5.3/Emacs 19.34

   David Hanley wrote:

   >     Can you derive the expected time of shell sort for us?
   >     (trick question it's very hard ).  Shellsort does pretty darn well
   > for quie large sets.  The expected time is roughly n^(2/3) with a
   > low constant factor.

   Er, I think you mean n^(3/2). (And a good shellsort tends to do a bit
   better than that, in practice.)

I do not think this is a good practice.  You are advertising a
function as using an O(n lg n) algorithm and then implementing it
using a slower one.  This is not acceptable.

   >     I'd certianly consider implementing qsort() as heapsort, for
   > example.
   > heapsort, at least, has space guarantees, even in simple mode.
   > In fact, heapsort is in-place.

   If you implement quicksort properly, and you know that it will
   never be given a dataset with more than 2^32 elements, you only
   have 64 words of stack overhead. The difference in code size
   between one sort function and another is going to be way more
   than that.

   On the other hand, heapsort gives *time* guarantees too. (A really
   good quicksort implementation is pretty reliable, too, but you have
   to be quite careful in order to get a really good quicksort
   implementation.)

Heapsort is a Theta(n * lg n) algorithm and as such is guaranteed to
achieve that performance.  However, I would object to calling the library
function 'qsort'.

-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Alex Measday
Subject: Re: qsort, hsort, ssort (Was: Re: Lisp is *SLOW*)
Date: 
Message-ID: <5ralks$sfi@drn.zippo.com>
For anyone who's interested, I found the following reference to Bentley's
QSORT column in a news archive on the web:

> To: ······@stat.wisc.edu
> Subject: Bentley's column
> Date: Sun, 26 Jan 1992 15:48:26 -0600
>                   ----
> In his "Software Exploratorium" column from the February 1992 issue of
> Unix Review, Jon Bentley discusses some unusual behaviour of the qsort
> algorithm that was discovered because of a bug report about S.  It is
> an interesting discussion featuring the work of Allan Wilks and Rick
> Becker in diagnosing the bug.
From: David Hanley
Subject: Re: qsort, hsort, ssort (Was: Re: Lisp is *SLOW*)
Date: 
Message-ID: <33D8E9AC.1AD505B@nospan.netright.com>
Marco Antoniotti wrote:

> In article <··············@g.pet.cam.ac.uk> Gareth McCaughan
> <·····@dpmms.cam.ac.uk> writes:
>
>    From: Gareth McCaughan <·····@dpmms.cam.ac.uk>
>    Newsgroups: comp.lang.lisp
>    Date: 25 Jul 1997 02:24:46 +0100
>    Organization: University of Cambridge, England
>    Lines: 29
>    X-Newsreader: Gnus v5.3/Emacs 19.34
>
>    David Hanley wrote:
>
>    >     Can you derive the expected time of shell sort for us?
>    >     (trick question it's very hard ).  Shellsort does pretty darn
> well
>    > for quie large sets.  The expected time is roughly n^(2/3) with a
>
>    > low constant factor.
>
>    Er, I think you mean n^(3/2). (And a good shellsort tends to do a
> bit
>    better than that, in practice.)
>
> I do not think this is a good practice.  You are advertising a
> function as using a O(n  * lg n) algorithm and then you implement it
> using a slower one.  This is not acceptable.

    Really?  Does it guarantee n lg n in the C standard?  I don't
recall many specifications concerning time taken in the C standard.

    In any case, the crossover between quicksort & shellsort
is high enough so that I doubt you'd ever notice the difference.
Except that you wouldn't be able to break it by giving
it a sorted set. :)

> Heapsort is a Theta(n * lg n) algorithm and as such is guaranteed to
> achieve that performance.  However, I would object calling the library
>
> function 'qsort'.

    Why?  I don't recall the algorithm for qsort being specified.
    Though I have had to write my own sort in C when the built-in
one was insufficient.  I've never had to do that in Common Lisp, though.
(sort ...) is pretty powerful, and you get stable-sort as well.

(back on the lisp topic, see?)

dave
From: Gareth McCaughan
Subject: Re: qsort, hsort, ssort (Was: Re: Lisp is *SLOW*)
Date: 
Message-ID: <86oh7qzmml.fsf@g.pet.cam.ac.uk>
David Hanley wrote:

[quoting Marco Antoniotti, here]
> > I do not think this is a good practice.  You are advertising a
> > function as using a O(n  * lg n) algorithm and then you implement it
> > using a slower one.  This is not acceptable.
> 
>     Really?  Does it guarantee n lg n in the C standard?    I don't
> recall many specification converning time taken in the C
> standard.

As I'm sure you both know, the C standard doesn't require that
|qsort()| be a quicksort. On the other hand, the name is, ah,
suggestive.

>     In any case, the crossover between quicksort & shellsort
> is high enough so that I doubt you'd ever notice the difference.
> Excepting that you wouldn't be able to break it by giving
> it a sorted set. :)

Sorting an array of 10000 integers on one platform I tried it on,
a good-ish quicksort was a factor of 2 better than the best shellsort
I found. I would expect (but don't have the figures to hand right now)
that using generic routines rather than specialised-to-integer ones
would show up the difference more rather than less.

And a good quicksort -- even a barely tolerable one -- will do just
fine on a sorted set. There are other kinds of input that do a better
job of showing up the inadequacies of some common quicksort
implementations.

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Richard A. O'Keefe
Subject: Re: qsort, hsort, ssort (Was: Re: Lisp is *SLOW*)
Date: 
Message-ID: <5rhf7i$88b$1@goanna.cs.rmit.edu.au>
Gareth McCaughan <·····@dpmms.cam.ac.uk> writes:
>As I'm sure you both know, the C standard doesn't require that
>|qsort()| be a quicksort. On the other hand, the name is, ah,
>suggestive.

As I'm sure anyone who is so picky about the implications of the name
already knows, the traditional UNIX qsort was _documented_ as being a
variant of quickERsort (which is a _specific_ version of quicksort).  Now,
I've seen a couple of more recent qsort() implementations, and they
certainly violate _that_ expectation.  So why is the expectation that
it is some kind of quicksort so much more important than the expectation
that it is the kind that it used to be _documented_ as being?

-- 
Four policemen playing jazz on an up escalator in the railway station.
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
From: Gareth McCaughan
Subject: Re: qsort, hsort, ssort (Was: Re: Lisp is *SLOW*)
Date: 
Message-ID: <864t9fwcu6.fsf@g.pet.cam.ac.uk>
Richard O'Keefe wrote:

> As I'm sure anyone who is so picky about the implications of the name
> already knows,

I hope I didn't give the impression that I think qsort() is somehow
required to be a quicksort. I said "suggestive", and that's exactly
what I meant.

On the other hand, I *do* object to gratuitously suboptimal algorithms
being used. I think anyone producing a C library that uses shellsort
is being lax. (And so is anyone producing one that uses quicksort
without giving very careful consideration to that worst case. Yes,
there are a lot of lax people out there.)

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Christopher B. Browne
Subject: Re: qsort, hsort, ssort (Was: Re: Lisp is *SLOW*)
Date: 
Message-ID: <slrn5tqd1g.min.cbbrowne@knuth.brownes.org>
On 28 Jul 1997 14:00:49 +0100, Gareth McCaughan <·····@dpmms.cam.ac.uk> posted:
>Richard O'Keefe wrote:
>> As I'm sure anyone who is so picky about the implications of the name
>> already knows,

>I hope I didn't give the impression that I think qsort() is somehow
>required to be a quicksort. I said "suggestive", and that's exactly
>what I meant.

>On the other hand, I *do* object to gratuitously suboptimal algorithms
>being used. I think anyone producing a C library that uses shellsort
>in being lax. (And so is anyone producing one that uses quicksort
>without giving very careful consideration to that worst case. Yes,
>there are a lot of lax people out there.)

It would be reasonable to use heapsort as a "base sort," as it has
reasonable worst-case performance. The luminary Jon Bentley presented
"good reason" to make this choice in a UNIX Review column about a year
and a half ago; he had some code where a sort was done on a set of
data where it unfortunately turned out that the sort keys contained
either "0" or "1," with no other values to pick from.

Quicksort headed off into pathological "bad performance," whilst
heapsort was its usual "mediocre" self, which was quite quick enough.

Until I saw this, I would have thought the "ultimate" method to be to
do things in two phases:

a) Randomize the order using some reasonable RNG, and then
b) Use Quicksort.

That approach defuses the problem with ordered sets; unfortunately it
*doesn't* deal with the problem of overspecified keys.
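A rough sketch of that two-phase idea in Lisp, just to pin it down --
SHUFFLE here is an ordinary Fisher-Yates pass, and QS stands for whatever
quicksort routine you already have (one appears later in this thread):

(defun shuffle (sequence)
  ;; Fisher-Yates: walk backwards, swapping each slot with a randomly
  ;; chosen earlier (or same) slot.
  (let ((v (coerce sequence 'vector)))
    (loop for i from (1- (length v)) downto 1
          do (rotatef (aref v i) (aref v (random (1+ i)))))
    (coerce v 'list)))

;; phase (a), then phase (b):
;;   (qs (shuffle data) #'<)
;; Ordered input can no longer hurt, but a key set like {0, 1} still can.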

-- 
Christopher B. Browne, ········@hex.net, ············@sdt.com
PGP Fingerprint: 10 5A 20 3C 39 5A D3 12  D9 54 26 22 FF 1F E9 16
URL: <http://www.hex.net/~cbbrowne/>
Q: What does the CE in Windows CE stand for?  A: Caveat Emptor...
From: Gareth McCaughan
Subject: Re: qsort, hsort, ssort (Was: Re: Lisp is *SLOW*)
Date: 
Message-ID: <8667tuc5va.fsf@g.pet.cam.ac.uk>
Christopher Browne wrote:

> It would be reasonable to use heapsort as a "base sort," as it has
> reasonable worst-case performance. The luminary Jon Bentley presented
> "good reason" to make this choice in a UNIX Review column about a year
> and a half ago; he had some code where a sort was done on a set of
> data where it unfortunately turned out that the sort keys contained
> either "0" or "1," with no other values to pick from.
> 
> Quicksort headed off into pathological "bad performance," whilst
> heapsort was its usual "mediocre" self, which was quite quick enough.

There are versions of quicksort that cope well with this situation.
Indeed, Jon Bentley co-authored a paper about one. (Bentley and McIlroy,
in "Software Practice & Experience", "Engineering a sort function";
implementing |qsort| without reading this paper should be punishable
by law.)
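For anyone who hasn't read it: the part most relevant to the 0-and-1
keys disaster is the "fat", three-way partition. A toy sketch of just
that idea -- nothing like the paper's actual in-place C code:

(defun qs3 (list cmp)
  ;; Split around the pivot into <, =, and > groups; the = group is
  ;; never recursed on, so masses of equal keys cost one linear pass.
  (if (null (rest list))
      list
      (let ((pivot (first list)) (less '()) (same '()) (greater '()))
        (dolist (x list)
          (cond ((funcall cmp x pivot) (push x less))
                ((funcall cmp pivot x) (push x greater))
                (t (push x same))))
        (nconc (qs3 less cmp) same (qs3 greater cmp)))))

(qs3 '(1 0 1 1 0 1 0) #'<) gives (0 0 0 1 1 1 1), with no quadratic blowup.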

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Richard A. O'Keefe
Subject: Re: qsort, hsort, ssort (Was: Re: Lisp is *SLOW*)
Date: 
Message-ID: <5rhf2g$7qn$1@goanna.cs.rmit.edu.au>
·······@infiniti.PATH.Berkeley.EDU (Marco Antoniotti) writes:
>   Er, I think you mean n^(3/2). (And a good shellsort tends to do a bit
>   better than that, in practice.)

>I do not think this is a good practice.  You are advertising a
>function as using a O(n  * lg n) algorithm and then you implement it
>using a slower one.  This is not acceptable.

But qsort() is *not* advertised as using an O(NlgN) algorithm at all.
There are *no* order-of-magnitude promises in the C standard whatsoever.
For example, I have used a C implementation from a serious vendor where
the cost of malloc() and free() was O(|number of blocks ever allocated|)
so that a simple
	for (i = 0; i < n; i++) free(malloc(sizeof (double)));
took O(N**2) time.  I was rather annoyed about that.

The C++ "Standard Template Library" specification DOES give big-Oh
promises for all sorts of things, including sorting, so Shell sort would
have no place there.

>   On the other hand, heapsort gives *time* guarantees too. (A really
>   good quicksort implementation is pretty reliable, too, but you have
>   to be quite careful in order to get a really good quicksort
>   implementation.)

>Heapsort is a Theta(n * lg n) algorithm and as such is guaranteed to
>achieve that performance.  However, I would object calling the library
>function 'qsort'.

Since the name is mandated by the C standard, and since there is no
standard way for a C programmer to find out what the algorithm is,
I would have no objection to someone providing a good heapsort (and there
are some really good variants of heapsort around) instead of an algorithm
that has O(N**2) worst case behaviour, with the worst case having been
_observed_ more often than the naive analysis in books would allow.

-- 
Four policemen playing jazz on an up escalator in the railway station.
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
From: Gareth McCaughan
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <86yb6w125z.fsf@g.pet.cam.ac.uk>
Alex Measday wrote:

> > Note that there actually was a real commercial C system for the PC
> > where qsort() was implemented using Shell sort.
> 
> Which might have been a good thing!  I vaguely recall a column by Jon
> Bentley a number of years ago (in UNIX REVIEW?) in which he said that the
> UNIX implementations of qsort() were flakey.

Pah. The machine on my desk at home has a C library in which |qsort()|
is (1) actually a shell sort and (2) flaky (someone apparently wasn't
quite sure whether to regard addresses as signed or unsigned when they
were writing it). *sigh*

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: David Hanley
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33CFAAB1.44146DE3@nospan.netright.com>
Sajid Ahmed the Peaceman wrote:

    I'm going to ignore most of what you said, as it is clear you don't
know either Lisp or C from your posts.  But you raise an interesting
point later...

> [speed stuff snipped]
>
>         As far as the size of the program is concerned, most of the
> time C
> programs are smaller? Why? Good Lisp programs only allow recursive
> code,
> without any stop codes, whereas good C programs allow for both
> recursive
> and iterative code.
>
>         Have you ever seen a the quicksort algorithm written in Lisp?
> Even though it is a recursive function, it still needs well over
> 100 lines of code. In C it would only be 5 or 6 lines.

    I'd very much like to see this 5 or 6 line C quicksort.  Can you
post the source?  In any case, here's a short Lisp quicksort I just
wrote (took me ~5 minutes):

(defun part (lst fn)
  (let ((ls nil) (gs nil))
    (dolist (x lst) (if (funcall fn x) (push x ls) (push x gs)))
    (list ls gs)))

(defun qs (lst cmp)
  (if (< (length lst) 2) lst
      (let* ((p (first lst))
             (pt (part (rest lst) #'(lambda (x) (funcall cmp x p)))))
        (nconc (qs (first pt) cmp) (list p) (qs (second pt) cmp)))))
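    (For what it's worth, it does run: (qs '(5 1 4 1 3) #'<) gives
(1 1 3 4 5).)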

    That's quicksort.  Not my sort of choice, but it works a-ok.
I just cooked it up really quickly to show an example.  Note the
iteration in the part (partition) function!
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33CFFE5C.3F69@capital.net>
Gareth McCaughan wrote:
> 
> "Sajid Ahmed the Peaceman" trolled:
> 
> >       Calling people names doesn't do anything.
> 
> Whereas, of course, saying "Lisp is a deception" is constructive
> argument.
> 

	You have my humble apologies.
From: Rainer Joswig
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <joswig-ya023180002307970043070001@news.lavielle.com>
In article <·············@capital.net>, ········@capital.net wrote:

> >Any decent LISP programmer, in fact any decent programmer, knows
> > that. Efficiency is part of programming. But it is not the be-all and
>         It is true that LISP has some built in functions that allows 
> a programmer to write less code. As far as speed is concerned, in almost 
> every situation, the same program written in C would be faster than a 
> similar program written in Lisp. Why? C is much closer to the machine
> level 
> assembly code that all computers run on.

Why should (+ 3 4) in Lisp be slower than 3 + 4 in C?

> Many C compilers allow inline
> assembly
> language code within the program. 

Many Lisp compilers too.
 
> 
>         As far as the size of the program is concerned, most of the time C 
> programs are smaller? Why? Good Lisp programs only allow recursive code, 
> without any stop codes, whereas good C programs allow for both recursive 
> and iterative code. 

What is a *good* Lisp program versus a *good* C program?
Both languages allow recursive and iterative implementations
of algorithms.

>         Have you ever seen a the quicksort algorithm written in Lisp? 
> Even though it is a recursive function, it still needs well over 
> 100 lines of code.

Only if a C programmer would format the code.

-- 
http://www.lavielle.com/~joswig/
From: Scott Fahlman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <ydracgshwp.fsf@CLYDE.BOLTZ.CS.CMU.EDU>
······@lavielle.com (Rainer Joswig) writes:

> Why should (+ 3 4) in Lisp be slower than 3 + 4 in C?

Well, if you are really talking about "(+ 3 4)", both should take the
same amount of time, namely zero.  Any self-respecting compiler would
constant-fold this operation out of existence.  But if you are talking
about "(+ x y)" there are two fundamental reasons why Lisp might be
slower:

1. If the types of x and y are not declared and can't be deduced by
the compiler, the Lisp + must do a runtime type-dispatch on both
arguments to select the right kind of arithmetic/coercion to do.  Lisp
must select among fixnums, bignums, several flavors of float, ratios,
and complex numbers.

2. In the most common case of fixnum-fixnum arithmetic, Lisp must
detect any overflow and coerce the result to a bignum.  In the event
of an overflow, C will quietly return the wrong answer.  OK,
technically it is the right answer, if "right" is defined as mod-N
arithmetic, where N is machine dependent, but that doesn't help the
guy whose nuclear plant just melted.

The first case goes away if you use a few declarations, but the CL
declaration language is admittedly awkward.  As for the second case, I
would argue that overflow detection is worth paying for in 99% of all
coding situations, though reasonable people can and do differ about
the desirability of rolling over into bignums vs. signalling an
error as the default behavior.

Yes, there are rare cases where getting the wrong answer is preferable
to wasting a cycle or risking a runtime exception, but usually this is
a recipe for dangerously unreliable code.
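To make both points concrete -- a minimal sketch, with invented names;
exactly what code a given compiler emits for the declared case is of
course implementation-dependent:

;; Point 1: with declarations the dispatch can be compiled away.
(defun add-df (x y)
  (declare (double-float x y)
           (optimize (speed 3) (safety 0)))
  (+ x y))        ; a good compiler emits a bare floating-point add here

;; Point 2: undeclared fixnum arithmetic rolls over into bignums
;; rather than wrapping:
;;   (+ most-positive-fixnum 1)  => a bignum one larger, not a negative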

-- Scott

===========================================================================
Scott E. Fahlman                        Internet:  ···@cs.cmu.edu
Principal Research Scientist            Phone:     412 268-2575
Department of Computer Science          Fax:       412 268-5576
Carnegie Mellon University              Latitude:  40:26:46 N
5000 Forbes Avenue                      Longitude: 79:56:55 W
Pittsburgh, PA 15213                    Mood:      :-)
===========================================================================
From: Fred Haineux
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <bc-3007971213170001@17.127.10.119>
Scott Fahlman <···@clyde.boltz.cs.cmu.edu> wrote:
|  1. If the types of x and y are not declared and can't be deduced by
|  the compiler, the Lisp + must do a runtime type-dispatch on both
|  arguments to select the right kind of arithmetic/coercion to do.  Lisp
|  must select among fixnums, bignums, several flavors of float, ratios,
|  and complex numbers.

Let's just point out that if you are using C++, it does the same thing as
Lisp in this case, and is just as slow.

In Lisp, you certainly can declare a variable to be a type, if you want to
avoid runtime dispatch.

In Lisp, the usual development cycle is...
1) write a prototype
2) add functionality until complete
3) integration test
4) optimize/declare types/etc.
5) make standalone and ship.
However, most Lisp programs stop development in the middle of stage 2 (grin)...

In C, the usual development cycle is:
1) optimize code for a new hash table algorithm that programmer thought up
on bus
2) implement system using new hash table algorithm as often as possible
3) add user interface
4) Test
5) Ship
However, most C programs stop development in the middle of stage 2 (grin)...
From: Will Hartung
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <vfr750EE5Mzo.7tG@netcom.com>
··@wetware.com (Fred Haineux) writes:

>In C, the usual development cycle is:
>1) optimize code for a new hash table algorithm that programmer thought up
>on bus
>2) implement system using new hash table algorithm as often as possible
>3) add user interface
>4) Test
>5) Ship
>However, most C programs stop development in the middle of stage 2 (grin)...

Hmm...in my experience, you seem to have 4 and 5 backwards.

-- 
Will Hartung - Rancho Santa Margarita. It's a dry heat. ······@netcom.com
1990 VFR750 - VFR=Very Red    "Ho, HaHa, Dodge, Parry, Spin, HA! THRUST!"
1993 Explorer - Cage? Hell, it's a prison.                    -D. Duck
From: Dennis Weldy
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <AOBlPX7n8GA.192@news2.ingr.com>
 

Fred Haineux wrote in article ...

>Scott Fahlman <···@clyde.boltz.cs.cmu.edu> wrote:
>|  1. If the types of x and y are not declared and can't be deduced by
>|  the compiler, the Lisp + must do a runtime type-dispatch on both
>|  arguments to select the right kind of arithmetic/coercion to do.  Lisp
>|  must select among fixnums, bignums, several flavors of float, ratios,
>|  and complex numbers.
>
>Let's just point out that if you are using C++, it does the same thing as
>Lisp in this case, and is just as slow.

Really? Hmm...have y'profiled it? ;-)
Anyways, in C++ the type of the object should be known at compile
time. All that has to occur is the lookup into the vtable:

21:       c = *ca + 5 ;
004011F6   mov         dword ptr [ebp-8],5
004011FD   lea         eax,dword ptr [ebp-8]
00401200   push        eax
00401201   mov         ecx,dword ptr [ca]
00401204   mov         edx,dword ptr [ecx]
00401206   mov         ecx,dword ptr [ca]
00401209   call        dword ptr [edx]
0040120B   mov         dword ptr [c],eax

Shouldn't be too shappy. Code to generate the coercions should already be
present. If virtual, then they'd use similar code. 
The main thing is that in C++, the evidently much slower case(?) of "x and
y are not declared and their types can't be deduced...." doesn't apply.
Each variable's type is known at compile time. 

What follows is the source
#include <iostream>

class A {
 int a ;
public:
 A(int _a) : a(_a) { }
 int get_a(void) { return a ; }
 virtual int operator +(const int &b) { std::cout << "A + operator" ;
return a+b ;} ;
} ;

class B : public A {
public:
 B(int _a) : A(_a) {}
 virtual int operator +(const int &b) { std::cout << "B + operator" ;
return get_a()+b ; } ;
} ;

int foo (A *ca) 
{
 int c ;

 c = *ca + 5 ;

 return c ;
}

int main(int argc, char **argv) {
 B cb(4) ;

 foo(&cb) ;

 return 0 ;
}

Dennis
From: Dennis Weldy
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <y#owRc7n8GA.191@news2.ingr.com>
 change shappy to shabby.  
Oh yeah, by "type at compile time" I mean that it's type or ancestor type
can be decuded at compile time. 

If I tried to pass an instance of class C (unrelated to A) calling foo with
that instance would generate a compile error ;-)

Dennis

Dennis Weldy wrote in article ...

> 
>Shouldn't be too shappy. Code to generate the coercions should already be
>present. If virtual, then they'd use similar code. 
>The main thing is that in C++, the evidently much slower case(?) of "x
and
>y are not declared and their types can't be deduced...." doesn't apply.
>Each variable's type is known at compile time. 
From: Rainer Joswig
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <joswig-ya023180003107970027530001@news.lavielle.com>
In article <··············@CLYDE.BOLTZ.CS.CMU.EDU>, Scott Fahlman
<···@clyde.boltz.cs.cmu.edu> wrote:

> ······@lavielle.com (Rainer Joswig) writes:
> 
> > Why should (+ 3 4) in Lisp be slower than 3 + 4 in C?
> 
> Well, if you are really talking about "(+ 3 4)", both should take the
> same amount of time, namely zero.  Any self-respecting compiler would
> constant-fold this operation out of existence.  But if you are talking
> about "(+ x y)"

I wanted to talk about the process of adding two numbers. ;-)

> 1. If the types of x and y are not declared and can't be deduced by
> the compiler, the Lisp + must do a runtime type-dispatch on both
> arguments to select the right kind of arithmetic/coercion to do.  Lisp
> must select among fixnums, bignums, several flavors of float, ratios,
> and complex numbers.

But if they are declared or can be deduced? Do you see
any Lisp-specific speed penalty?

> 2. In the most common case of fixnum-fixnum arithmetic, Lisp must
> detect any overflow and coerce the result to a bignum.

Even if I declare the result to be a fixnum?

(defun test (a b)
  (declare (fixnum a b)
           (optimize (speed 3) (safety 0)))
  (the fixnum (+ a b)))

(loop for i from 1 upto 10
      with start = (- most-positive-fixnum 5)
      do (print (test start i)))

MCL 4.1 gives me:

536870907 
536870908 
536870909 
536870910 
536870911 
-536870912 
-536870911 
-536870910 
-536870909 
-536870908

-- 
http://www.lavielle.com/~joswig/
From: Andreas Eder
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <m3en8fobr1.fsf@laphroig.mch.sni.de>
Hm,
CMU Common Lisp 17f x86-linux 1.3.3 gives me 

536870907 
536870908 
536870909 
536870910 
536870911 
536870912 
536870913 
536870914 
536870915 
536870916 

in interpretive mode; I have to compile to get the (expected ?)

536870907 
536870908 
536870909 
536870910 
536870911 
-536870912 
-536870911 
-536870910 
-536870909 
-536870908

Obviously, the interpreter ignores the declarations.
From: Jens Kilian
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5s467l$ber@isoit109.bbn.hp.com>
Andreas Eder (···@laphroig.mch.sni.de) wrote:
> Hm,
> CMU Common Lisp 17f x86-linux 1.3.3 gives me 

> 536870907 
> 536870908 
> 536870909 
> 536870910 
> 536870911 
> 536870912 
> 536870913 
> 536870914 
> 536870915 
> 536870916 

> in interpretive mode; I have to compile to get the (expected ?)

> 536870907 
> 536870908 
> 536870909 
> 536870910 
> 536870911 
> -536870912 
> -536870911 
> -536870910 
> -536870909 
> -536870908

> Obviously, the interpreter ignores the declarations.

It is free to.  A program declaring 

	(the fixnum (+ x y))

is not asserting that it *wants* a fixnum back; it is giving a *guarantee*
that the addition returns a fixnum.  If this guarantee turns out not to be
valid, the result of this operation is undefined.
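In practice the difference shows up with the safety setting. A sketch
(names invented; exactly when a compiler checks a THE form varies by
implementation):

(defun f-checked (x y)
  (declare (optimize (safety 3)))
  (the fixnum (+ x y)))    ; many compilers insert a type check here,
                           ; so a bignum result signals a TYPE-ERROR

(defun f-trusted (x y)
  (declare (optimize (speed 3) (safety 0)))
  (the fixnum (+ x y)))    ; the guarantee is simply trusted; if it is
                           ; false, the behaviour is undefined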

Greetings,

	Jens.
--
··········@acm.org                 phone:+49-7031-14-7698 (HP TELNET 778-7698)
  http://www.bawue.de/~jjk/          fax:+49-7031-14-7351
PGP:       06 04 1C 35 7B DC 1F 26 As the air to a bird, or the sea to a fish,
0x555DA8B5 BB A2 F0 66 77 75 E1 08 so is contempt to the contemptible. [Blake]
From: Richard A. O'Keefe
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5r4agg$k15$1@goanna.cs.rmit.edu.au>
Sajid Ahmed the Peaceman <········@capital.net> writes:
>	It is true that LISP has some built in functions that allows 
>a programmer to write less code. As far as speed is concerned, in almost 
>every situation, the same program written in C would be faster than a 
>similar program written in Lisp. Why? C is much closer to the machine
>level 
>assembly code that all computers run on. Many C compilers allow inline
>assembly
>language code within the program. 

This is an invalid argument.
To start with, C is _not_ close to the machine.
- most machines do not support C's auto-scaled pointer arithmetic directly
- *NONE* of the C89 integral types supported by the compilers on this
  machine (an UltraSPARC) match the machine's native register size (64 bits).
  [Yes, I know about long long, which is why I said _C89_ integral types.]
- C lets me specify a floating point type that does not exist on this
  machine.  On an older machine from the same manufacturer, NONE of C's
  floating point types were supported in hardware.
- There are some important differences between C functions and what you
  get in assembler.  In fact, on the immediate predecessor of this machine,
  it was actually useful that two C compilers had an option to use a
  calling convention that was _not_ the normal one on this machine.

But the main flaw is the assumption that compilers are irredeemably
stupid.  What matters is not how close to the machine the *starting* point
is, but how close the compiler can get it.

For the record, I have frequently found Scheme code compiled by Stalin to
run _faster_ than the corresponding C code.

Another reason the argument is flawed is that, as the Fortran people are
fond of pointing out, C is hard to optimise.  Languages (such as Fortran 90)
where the compiler can know more about the program than any standard C
compiler can know about any standard C program, permit better optimisation.

In the examples where Scheme code ran faster than C, generally Fortran
code went even faster.  And nobody talks about Fortran being "close to
assembler".

>	As far as the size of the program is concerned, most of the time C 
>programs are smaller?

This is meaningless.  _Which_ Lisp?  _Which_ C?  Which machine/OS?

>Why? Good Lisp programs only allow recursive code, 
>without any stop codes, whereas good C programs allow for both recursive 
>and iterative code.

You shot yourself in the foot there.
Lisp has *all* the control structures that C has.  ALL of them.
And then some.

>	Have you ever seen a the quicksort algorithm written in Lisp? 

Yes, several.

>Even though it is a recursive function, it still needs well over 
>100 lines of code. In C it would only be 5 or 6 lines.

Obviously, you have not seen a quicksort in C.  One that came with a
version of UNIX takes (by actual measurement) 177 lines.  The rather
faster one I wrote myself takes 162 lines.  There's an important
optimisation in it, without which the version would be about 80 lines.
The "Engineered" quicksort by Bentley & McIlroy is about 112 lines of C.
Since the partition step itself takes about 6 lines of C, it would be
rather hard to get a readable version of Quicksort in "5 or 6 lines" of C.

-- 
Four policemen playing jazz on an up escalator in the railway station.
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
From: ? the platypus {aka David Formosa}
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <868938030.121424@cabal>
In <·············@capital.net> Sajid Ahmed the Peaceman <········@capital.net> writes:

[...]

>	Lisp is a deception. All lisp compilers and interpreters that 
>I've seen have been written in C, and run on top of a C program.

Well then you are not looking hard enough.  Lisp predates C by many years.
There are many variants of Lisp that are written in Lisp.
In addition you could say that C is a deception, as all C compilers and 
interpreters are written in machine code.

> I've seen a lot of LISP and PROLOG programmers, especially in the post
>graduate level of computer science, think that lisp functions the same way as 
>mathematics.

That's what makes Lisp so easy to use.

> They think that a call to a recursive function instantaneously 
>returns a result.

No good programmer would believe that.

> The fact is, these function is broken down into machine
>level instructions, and is executed the same way as a for next loop. 

You seem to be implying that every recursive call gets recompiled.  This
is simply not true.  In addition there have been times where the recursive
version has been faster than the iterative form (I don't know why this is,
but timing says it's so.)

--
Please excuse my spelling as I suffer from agraphia see the url in my header. 
Never trust a country with more peaple then sheep. Buy easter bilbies.
Save the ABC Is $0.08 per day too much to pay?   ex-net.scum and proud
I'm sorry but I just don't consider 'because its yucky' a convincing argument
From: Marco Antoniotti
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <scf67uc5izj.fsf@infiniti.PATH.Berkeley.EDU>
In article <················@cabal> ? the platypus {aka David Formosa} <········@st.nepean.uws.edu.au> writes:

   From: ? the platypus {aka David Formosa} <········@st.nepean.uws.edu.au>
   Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
   Date: 15 Jul 1997 03:40:39 GMT
   Organization: UWS Nepean - Department of Computing
   Path: agate!ihnp4.ucsd.edu!munnari.OZ.AU!metro!metro!ob1.uws.edu.au!news
   Lines: 36

   In <·············@capital.net> Sajid Ahmed the Peaceman <········@capital.net> writes:

   [...]

   >	Lisp is a deception. All lisp compilers and interpreters that 
   >I've seen have been written in C, and run on top of a C program.

   Well then you are not looking hard enough.  Lisp predates C by meany years.
   There are many verents of lisp that are writton in lisp.
   In addtion you could say that C is a deception as all C compilers and 
   interpreters are written in mechean code.

You can go even further than that.  C compilers are written mostly in
C.  However, GNAT (the Ada 95 front end to the gcc backend) is written
partly in Ada 95 and a large portion of the CMU Common Lisp compiler
is written in Common Lisp.  This notion of "compiler bootstrapping" is
as old as computer programming itself.

Cheers
-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Marco Antoniotti
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <scf4t9sgxb9.fsf@infiniti.PATH.Berkeley.EDU>
In article <·················@alexandria.organon.com> ···@alexandria.organon.com (Jon S Anthony) writes:

   From: ···@alexandria.organon.com (Jon S Anthony)
   Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
   Date: 17 Jul 1997 23:11:36 GMT
   Organization: PSINet
   Lines: 23
   Distribution: world
   Xref: agate comp.lang.lisp:29312 comp.programming:52506 comp.lang.c++:282312

   In article <···············@infiniti.PATH.Berkeley.EDU> ·······@infiniti.PATH.Berkeley.EDU (Marco Antoniotti) writes:

   > You can go even further than that.  C compilers are written mostly in
   > C.  However, GNAT (the Ada 95 front end to the gcc backend) is written
   > partly in Ada 95 and a large portion of the CMU Common Lisp compiler

   Actually, the GNAT FE is entirely written in Ada95

   > is written in Common Lisp.  This notion of "compiler bootstrapping" is
   > as old as computer programming itself.

   What's more, the GNAT RTL is largely written in Ada95

Well.  I am not surprised.  I just wasn't up to date.

   and from what I
   see, most (all?) the CMU CL functions are written in CL (including
   EVAL...)

This is true as well.  I just did not want to stretch it too
much. (And there are portions of CMUCL written in C - after all it
must run on UN*X platforms).

Cheers
-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Gareth McCaughan
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <86afjl5ix4.fsf@g.pet.cam.ac.uk>
"Sajid Ahmed the Peaceman" trolled:

> > Thats what makes lisp so easy to use.
> > 
> 
> 	Well, in some circumstances, but if you try to write every program 
> using only recursive code, it makes things much much more difficult. 

It's just as well Lisp doesn't require you to use "only recursive
code", then.

> > You seem to be implying that every recursive call gets recompiled.
> 
> 	That is true in some languages, but not true in others. 

Name three languages that require all recursive function calls
to cause the function to be recompiled. In fact, name one.

> 	Every recursive function, whether in LISP, Prolog, C++, ML, or
> any other language, is translated into iterative assembly (machine)
> language code.

That's only true if you adopt so broad a definition of "iterative"
as to make your statement meaningless.

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33D0000B.7ACC@capital.net>
Gareth McCaughan wrote:
> 
> > > You seem to be implieing that every recursive call gets recompiled.
> >
> >       That is true in some languages, but not true in others.
> 
> Name three languages that require all recursive function calls
> to cause the function to be recompiled. In fact, name one.
> 

	Come to think of it, I can't remember any off hand. I
don't quite remember if Basic did this. 

	Anyway, I'm sure there are some language designs that don't
use the stack when making calls to functions. They would expand them 
inline like a macro definition. When the code would finally be 
compiled, there would be recompilations of the function calls. 

					Peaceman
From: ? the platypus {aka David Formosa}
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <869323883.942607@cabal>
In <·············@capital.net> Sajid Ahmed the Peaceman <········@capital.net> writes:

>Gareth McCaughan wrote:

[...]

>> Name three languages that require all recursive function calls
>> to cause the function to be recompiled. In fact, name one. 

>	Come to think of it, I can't remember any off hand. I
>don't quite remember if Basic did this. 

Basic didn't have function calls.

>	Anyway, I'm sure there are some language designs that don't
>use the stack when making calls to functions. They would expand them 
>inline like a macro definition.

In such a language it would be impossible to write recursive code.
I don't believe that such a language exists, as the size of the 
executables would be so massive as to be useless.

--
Please excuse my spelling as I suffer from agraphia see the url in my header. 
Never trust a country with more peaple then sheep. Buy easter bilbies.
Save the ABC Is $0.08 per day too much to pay?   ex-net.scum and proud
I'm sorry but I just don't consider 'because its yucky' a convincing argument
From: Dennis Weldy
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <#WyhE8Il8GA.174@news2.ingr.com>
? the platypus {aka David Formosa} wrote in article <················@cabal>...

>In <·············@capital.net> Sajid Ahmed the Peaceman <········@capital.net> writes:
>
>>Gareth McCaughan wrote:
>
>[...]
>
>>> Name three languages that require all recursive function calls
>>> to cause the function to be recompiled. In fact, name one. 
>
>> Come to think of it, I can't remember any off hand. I
>>don't quite remember if Basic did this. 
>
>Basic didn't have function calls.

 Not quite true. Several BASICs I've used have had the ability to define
procedures and functions. 

>
>> Anyway, I'm sure there are some language designs that don't
>>use the stack when making calls to functions. They would expand them 
>>inline like a macro definition.
>
>In such a languge it would be inpossable to write recursive code.
>I don't beleave that such a languge exists as the size of the 
>exicutables would be so massive as to be useless.

Agreed. Which is why the inline C++ keyword is ignored for recursive
functions.

>
>--
>Please excuse my spelling as I suffer from agraphia see the url in my header. 
>Never trust a country with more peaple then sheep. Buy easter bilbies.
>Save the ABC Is $0.08 per day too much to pay?   ex-net.scum and proud
>I'm sorry but I just don't consider 'because its yucky' a convincing argument
From: Martin Rodgers
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <MPG.e3bf8a690bf890b9898ed@news.demon.co.uk>
With a mighty <················@cabal>,
········@st.nepean.uws.edu.au uttered these wise words...

> Basic didn't have function calls.

It did 15 years ago. At least, it did in the MS Basic on the TRS-80. 
You could define functions and call them. Of course, not many people 
used them, but they _were_ there.

Curiously, the function parameters were dynamically scoped, just like 
some of the older Lisp dialects. You can still get this behaviour in 
Common Lisp, by declaring a name to be special.
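A tiny sketch of that, with invented names: *X* is special, so the
binding made by the parameter list is dynamic, much like the old
Basic's FN parameters:

(defvar *x* 10)              ; DEFVAR proclaims *X* special
(defun show-x () *x*)
(defun with-x-bound (*x*)    ; binding a special variable is dynamic
  (show-x))

(show-x)          => 10
(with-x-bound 6)  => 6       ; SHOW-X sees the caller's dynamic binding
(show-x)          => 10      ; and the old value is back afterwards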

In the case of MS Basic, without any way of declaring scope for a 
name, the only option was dynamic scoping. In this sense, it didn't 
have functions with explicit scope. Could this be what you mean?

More modern Basics _definitely_ have functions with explicit scoping.
 
> >	Anyway, I'm sure there are some language designs that don't
> >use the stack when making calls to functions. They would expand them 
> >inline like a macro definition.
> 
> In such a languge it would be inpossable to write recursive code.
> I don't beleave that such a languge exists as the size of the 
> exicutables would be so massive as to be useless.

I recall seeing the Winston and Horn animal program rewritten in Basic 
(yep, the same MS Basic I used on the TRS-80), and it used arrays for 
the "stack". The result was very ugly and hard to read. I didn't 
understand what the program did until I saw the Lisp version.

I guess you could say that a language that compiles into Combinatory 
Logic doesn't use a stack, and that recursive functions are copied. In 
fact, unless I've misunderstood it, calling a function involved 
copying it, substituting the arguments for the parameters. The entire 
program and its data is a single data structure which is gradually 
reduced to just the data.

However, unless we're talking about pure functional dialects of Lisp, 
like LispKit, we can say that this is not usually how Lisp works! I've 
seen a Scheme interpreter written in Haskell, but that's not a typical 
way to implement Lisp. In fact, if you were to implement Basic or C in 
Haskell, you'd have to do it in a similar way, and it would be equally 
atypical for those languages. Compiling Basic or C - or C++ - into 
Combinatory Logic might be interesting, but not many people would find 
it of practical value.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
"An operating system is a collection of things that don't fit into a
 language. There shouldn't be one." Daniel Ingalls, Byte August 1981
From: ? the platypus {aka David Formosa}
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <869468073.370861@cabal>
In <·························@news.demon.co.uk> ···@this_email_address_is_intentionally_left_crap_wildcard.demon.co.uk (Martin Rodgers) writes:

>With a mighty <················@cabal>,
>········@st.nepean.uws.edu.au uttered these wise words...

>> Basic didn't have function calls.

>It did 15 years ago. At least, it did in the MS Basic on the TRS-80. 

In the old versions that I was exposed to there were only 'goto' and 
'gosub' for flow control.  QBasic, which is the most modern Basic I can
find (ignoring VB), does have functions and scoping.  

[...]

>> In such a languge it would be inpossable to write recursive code.
>> I don't beleave that such a languge exists as the size of the 
>> exicutables would be so massive as to be useless.

>I recall seeing the Winston and Horn animal program rewritten in Basic 
>(yep, the same MS Basic I used on the TRS-80), and it used arrays for 
>the "stack". The result was very ugly and hard to read.

A solution that C programmers sometimes use to avoid recursion.  In
many cases avoiding recursion in this way has a negative performance
impact.

--
Please excuse my spelling as I suffer from agraphia see the url in my header. 
Never trust a country with more peaple then sheep. Buy easter bilbies.
Save the ABC Is $0.08 per day too much to pay?   ex-net.scum and proud
I'm sorry but I just don't consider 'because its yucky' a convincing argument
From: Martin Rodgers
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <MPG.e3ecd8866d2bdb09898f5@news.demon.co.uk>
With a mighty <················@cabal>,
········@st.nepean.uws.edu.au uttered these wise words...

> >It did 15 years ago. At least, it did in the MS Basic on the TRS-80. 
> 
> In the old versons that I was exposed to there where only 'goto' and 
> 'gosub' for flow control.  qbasic wich is the most moden basic I can
> find (ignoring VB) dose have functions and scoping.  

Old vesions of MS Basic on the TRS-80, or some other Basic? As I said, 
this Basic used _dynamic_ scoping, and most people didn't even know 
that it had functions. You may have been one of the many who missed 
them. I only know they were there because I read the entire manual.
 
> >I recall seeing the Winston and Horn animal program rewritten in Basic 
> >(yep, the same MS Basic I used on the TRS-80), and it used arrays for 
> >the "stack". The result was very ugly and hard to read.
> 
> A soultion that C programers sometimes use to avoid rcurstion.  In
> meany cases avoiding recursion in this way has a negitive performunce
> inpact.

And achieves little else. I've never had a problem with recursion, but 
then, I used to write recursive descent parsers. Some compiler people 
prefer this technique to the "yacc" style parser, and a few of them 
claim that recursive descent is more efficient. Well, I don't know 
about that, but there are certainly advantages to both techniques.

In both cases, you're thinking recursively, even if the code compiles 
to a state machine that uses an array for the "stack". Hmm. Y'know, 
even C uses a stack...Maybe some people have a problem with the _idea_ 
of recursion? I used to have a phobia of linked lists, until I wrote my 
first Forth system (which depended heavily on lists). After that, I 
had no problem with lists! In fact, I can't understand why I used to 
have so much trouble with them. Perhaps some people just need the 
right learning experience, to make them more comfortable with an idea? 
It could be that they just had a bad teacher, and so got an impression 
that recursion was somehow "difficult", and so not for them.

I was lucky with recursion. I discovered it via compiler theory, which 
was a subject that greatly interested me at the time. (It still does.)
No doubt that helped me a lot. Recursion makes so many other ideas 
easy to understand! How could I avoid it? I can't imagine being 
without it any more than I can imagine not being able to use lists, 
trees, graphs, stacks, and so on. It's fundamental.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
"An operating system is a collection of things that don't fit into a
 language. There shouldn't be one." Daniel Ingalls, Byte August 1981
From: Will Hartung
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <vfr750EDqHE1.16n@netcom.com>
? the platypus {aka David Formosa} <········@st.nepean.uws.edu.au> writes:

>In <·························@news.demon.co.uk> ···@this_email_address_is_intentionally_left_crap_wildcard.demon.co.uk (Martin Rodgers) writes:

>>With a mighty <················@cabal>,
>>········@st.nepean.uws.edu.au uttered these wise words...

>>> Basic didn't have function calls.

>>It did 15 years ago. At least, it did in the MS Basic on the TRS-80. 

>In the old versons that I was exposed to there where only 'goto' and 
>'gosub' for flow control.  qbasic wich is the most moden basic I can
>find (ignoring VB) dose have functions and scoping.  

Well, in TRS-80 Level II BASIC, they had a DEF FN construct that
allowed one to define equations as functions. They were simple
expressions, there wasn't any logic allowed, and they were called
"functions" at the time.

I think it was something like:

10 DEF FNR(X)=INT(RND(0)*X)+1
20 R=FNR(6)+FNR(6)
30 IF R=7 OR R=11 THEN PRINT "YOU WIN!" ELSE PRINT "YOU LOSE"

I don't recall if you were allowed to put other variables in the
equations or not. If you were, then I imagine they were just globals
(like everything else).

I think you were able to create functions with both the numeric and
strings types.

Funny, I thought I had blotted most of this stuff out of my psyche.

-- 
Will Hartung - Rancho Santa Margarita. It's a dry heat. ······@netcom.com
1990 VFR750 - VFR=Very Red    "Ho, HaHa, Dodge, Parry, Spin, HA! THRUST!"
1993 Explorer - Cage? Hell, it's a prison.                    -D. Duck
From: Martin Rodgers
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <MPG.e3fc286d255268a9898f8@news.demon.co.uk>
With a mighty <················@netcom.com>,
······@netcom.com uttered these wise words...

> Well, in TRS-80 Level II BASIC, they had a DEF FN construct that
> allowed one to define equations as functions. They were simple
> expressions, there wasn't any logic allowed, and they were called
> "functions" at the time.

That's the Basic that I'm thinking of.
 
> I think it was something like:
> 
> 10 DEF FNR(X)=INT(RND(0)*X)+1
> 20 R=FNR(6)+FNR(6)
> 30 IF R=7 OR R=11 THEN PRINT "YOU WIN!" ELSE PRINT "YOU LOSE"
> 
> I don't recall if you were allowed to put other variables in the
> equations or not. If you were, then I imagine they were just globals
> (like everything else).

Yes, from what I remember, the variables were dynamically scoped, 
which is another way of saying that they were global. In your example, 
variable X would shadow any already existing X, so that while FNR is 
running, X would be 6, but when FNR returns, the old value would 
become "visible" again.

As with many things in that Basic, and many other Basics of that 
generation, functions were very limited. No wonder they were so rarely 
used! Fortunately, these things change, and now it's not unusual for a 
Basic to use lexical scoping and compile to native code. Not that this 
stops some people from calling Basic "slow". People with long memories 
and little recent experience of Basic, I guess.

Hmm. <looks at thread subject> Just like Lisp, really. Most Lisps that 
I've used have compiled to native code, and yet some people can only 
remember the interpreted lisps, like XLISP. That could be because of 
the wide availability and age of XLISP, but it's no excuse for assuming 
that this is the best or fastest Lisp to be found.

That's a lot like assuming that Small C is the most sophisticated C 
compiler you can find, simply because it's the only one that you've 
found and used. A little more effort should give you a much better C 
compiler, like GNU C/C++.

On the other hand, you might have used a C interpreter...
 
> I think you were able to create functions with both the numeric and
> strings types.

That sounds familiar, so you could well be right.

> Funny, I thought I had blotted most of this stuff out of my psyche.

Same here. I blame David Formosa. ;)
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
"An operating system is a collection of things that don't fit into a
 language. There shouldn't be one." Daniel Ingalls, Byte August 1981
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33D63BED.311A@capital.net>
> 
> >       Anyway, I'm sure there are some language designs that don't
> >use the stack when making calls to functions. They would expand them
> >inline like a macro definition.
> 
> In such a languge it would be inpossable to write recursive code.

	You could certainly write recursive code, as long as the 
number of times the function calls itself is set at compile time. 


> I don't beleave that such a languge exists 

	They're out there. New programming languages are created every day.
That's what YACC is for. 


>as the size of the
> exicutables would be so massive as to be useless.
> 

	Most programming code out there (about 96%) is 
nonrecursive. You've been programming in Lisp too long. 


				Peaceman
From: Marco Antoniotti
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <scfzprcfd42.fsf@infiniti.PATH.Berkeley.EDU>
In article <·············@capital.net> Sajid Ahmed the Peaceman <········@capital.net> writes:

   From: Sajid Ahmed the Peaceman <········@capital.net>
   Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
   Date: Wed, 23 Jul 1997 13:14:21 -0400
   Organization: CADVision Development Corp.
   Reply-To: ········@capital.net
   Lines: 26
   Mime-Version: 1.0
   Content-Type: text/plain; charset=us-ascii
   Content-Transfer-Encoding: 7bit
   X-Mailer: Mozilla 3.01 (WinNT; I)
   Xref: agate comp.lang.lisp:29514 comp.programming:52870 comp.lang.c++:283436

   > 
   > >       Anyway, I'm sure there are some language designs that don't
   > >use the stack when making calls to functions. They would expand them
   > >inline like a macro definition.
   > 
   > In such a languge it would be inpossable to write recursive code.

	   You could certainly write recursive code, as long as the 
   number of times the function calls itself is set at compile time. 

Errare humanum est, perseverare diabolicum. (If my Latin does not
fail me). :)

Mr. the Peaceman,  do you have the slightest idea of what you are
talking about?

I still have to see the assembly code for the C tree traversal from
you.  But apart from that, if a function is tail-recursive (assuming
you grasped the concept by now) what you are usually interested in, is
that the algorithm is provably terminating (and no!  you can't answer
that question in its full generality).  In that case the function runs
in constant space, since it is translated into a loop (if the compiler
is smart enough as most Lisp compilers are, contrary to many C/C++
ones).  If the function is inherently recursive (prove they do not
exist, if you can), then the limit is the amount of memory of your
computer or some configuration parameter of the language run time
environment.
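To make the picture concrete, a sketch (names invented; whether the
tail call really becomes a jump depends on the compiler and its
settings):

(defun sum-tr (list acc)
  ;; The recursive call is in tail position: nothing remains to be done
  ;; after it returns, so a smart compiler reuses the current frame.
  (if (null list)
      acc
      (sum-tr (rest list) (+ acc (first list)))))

(defun sum-loop (list)
  ;; The loop the above is effectively turned into.
  (let ((acc 0))
    (dolist (x list acc)
      (incf acc x))))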

   > I don't beleave that such a languge exists 

	   There out there. New programming languages are created everyday.
   That's what YACC is for. 

A better way to experiment with new language semantics and constructs
is, of course, to extend CL or Scheme. :)

   >as the size of the
   > exicutables would be so massive as to be useless.
   > 

	   Most programming code out there (about 96%) is 
   nonrecursive. You've been programming is lisp too long. 

I'd like to see the reason for the 96% figure (why not 94% or 98%?)
And no, unfortunately I am programming in pure C these days.  It ain't
funny.  However, your arguments do not have *anything whatsoever* to
do with Lisp.  They concern good programming techniques.

I will never buy anything from CADVision Development Corp.

Cheers
-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33D8F4C2.117B@capital.net>
Marco Antoniotti wrote:
> 
> Mr. the Peaceman,  do you have the slightest idea of what you are
> talking about?
> 

	I know exactly what I'm talking about, thank you. 


> I still have to see the assembly code for the C tree traversal from
> you.  But apart from that, if a function is tail-recursive (assuming
> you grasped the concept by now) what you are usually interested in, is
> that the algorithm is provably terminating (and no!  you can't answer
> that question in its full generality).  In that case the function runs
> in constant space, since it is translated into a loop (if the compiler
> is smart enough as most Lisp compilers are, contrary to many C/C++
> ones).  If the function is inherently recursive (prove they do not
> exist, if you can), then the limit is the amount of memory of your
> computer or some configuration parameter of the language run time
> environment.
> 

	That's not the point. The point is, why make the functions 
tail recursive when simple iteration is good enough? It's a waste 
of time. 
	Just write the iterative code. It's faster and more efficient. 

	That's where my gripe in Lisp comes about. It's a 
programming language that compiles code into simpler assembly 
language machine code. It's not Mathematics, where the results 
of functions are instantaneously there. There is no need to 
write tail recursive functions, when simple iterative code will do. 


					Peaceman
From: Johann Hibschman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5rb0k1$983@agate.berkeley.edu>
Dang it.  I wasn't going to do this, but I'm bored.

In article <·············@capital.net>,
Sajid Ahmed the Peaceman  <········@capital.net> wrote:
>Marco Antoniotti wrote:
>> 
>> Mr. the Peaceman,  do you have the slightest idea of what you are
>> talking about?
>
>	I know exactly what I'm talking about, thank you. 

Well, if you know you're talking nonsense, then why should anyone
take you seriously?

>> I still have to see the assembly code for the C tree traversal from
>> you.  But apart from that, if a function is tail-recursive (assuming
>> you grasped the concept by now) what you are usually interested in, is
>> that the algorithm is provably terminating (and no!  you can't answer
>> that question in its full generality).  In that case the function runs
>> in constant space, since it is translated into a loop (if the compiler
>> is smart enough as most Lisp compilers are, contrary to many C/C++
>> ones).  If the function is inherently recursive (prove they do not
>> exist, if you can), then the limit is the amount of memory of your
>> computer or some configuration parameter of the language run time
>> environment.
>
>	That's not the point. The point is, why make the functions 
>tail recursive when simple iteration is good enough? It's a waste 
>of time. 

Whoah!  Earth to Peaceman.  What do you think people have been
talking about for the past few days?  You started this off by
claiming that:

(Dated 7/17)
Sajid Ahmed the Peaceman wrote in <·············@capital.net>:
>         Anyway, all lisp programs, as well as the compilers and
> interpreters are broken down into assembly level code, which is
> iterative. The thing I have a problem with is with people trying
> to write programs that are completely recursive, which is what lisp
> is about. That is the wrong way to go about it. It's a tremendous
> waste.

People have been discussing your first claim, that all assembly
level code is "iterative," which you have yet to define in a
meaningful way.

If I'm reading you correctly, when you say "That's not the point."
up above, you are tacitly admitting defeat on this point, but
aren't willing to actually state this, so instead you change the
subject over to the second point.

>	Just write the iterative code. It's faster and more efficient. 

Faster to write?  That depends completely on what you're used to.
More efficient?  No, because tail recursion is *equivalent* to
iteration.

If you're arguing that you can write iterative code faster than you
can write tail-recursive code, then I'll just have to take you at
your word.  Just don't think you've shown anything more significant
than that.

>	That's where my gripe in Lisp comes about. It's a 
>programming language that compiles code into simpler assembly 
>language machine code. It's not Mathematics, where the results 
>of functions are instantaneously there. There is no need to 
>write tail recursive functions, when simple iterative code will do. 

Here we go with that second point, the "I don't like Lisp because
my mean ol' Lisp instructor made me write everything recursively"
argument.  Hello?  Have you been listening? 

First of all, I'd like to see a single programming language that
doesn't compile code into "simpler assembly language machine code."
That's the whole point of a compiler.  And of course it's not
Mathematics, no one claimed it was.  Really, where do you get
these things?

If you prefer "simple iteration" to tail recursion, you are arguing
mere style preferences, because, as many others have pointed out,
tail recursion is equivalent to iteration.

In fact, you are arguing trivial style preferences, because in CL
you are perfectly free to write your iteration using any of the
"do" family of iterative constructs or the "loop" macro.

Satisfied?  I didn't think so.


Sigh.  I have to admit that was kind of fun; you're almost as bad
as the "Relativity is wrong because I don't like it" folks over
on sci.physics.


- Johann

P.S.  From now on, I'll be good.  Really.  No more responding to
      flame bait.  I should really find more things to do while my
      computation is running...

-- 
Johann A. Hibschman         | Grad student in Physics, working in Astronomy.
······@physics.berkeley.edu | Probing pulsar pair production processes.
From: Martin Rodgers
Subject: Lies about Lisp
Date: 
Message-ID: <MPG.e441b04838accdb989902@news.demon.co.uk>
My body fell on Johann Hibschman like a dead horse,
who then wheezed these wise words:

> If you prefer "simple iteration" to tail recursion, you are arguing
> mere style preferences, because, as many others have pointed out,
> tail recursion is equivalent to iteration.

Alas, some people don't know how little they know. I see this happen 
in the context of compiler theory all the time. To be fair, most 
people don't have the time to study compilers. However, when they 
claim to know all about compilers, in spite of their ignorance, I find 
it harder to forgive them.

Sajid Ahmed the Peaceman should by now be just starting to realise the 
true depth of his ignorance in this area. Strangely, his posts seem 
not to reflect this. I'm not sure what to conclude from this. It would 
be tempting to just write him off as an idiot, but I've seen enough 
clueless C++ programmers on UseNet behave differently, when pointed 
in the right direction, to suspect that Sajid Ahmed the Peaceman's 
motives are not based on a desire for enlightenment. Instead, I wonder 
if he's perhaps wishing to spread his ignorance?

He's certainly spreading his clueless memes to one or two newsgroups.
Fortunately, anyone reading this can check the facts for themselves.
A good place to start is with the Lisp FAQ:
<URL:http://www.cs.cmu.edu/afs/cs.cmu.edu/project/ai-repository/ai/html/faqs/lang/lisp/top.html>
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
    "My body falls on you like a dead horse" -- Portait Chinois
            Please note: my email address is gubbish
From: Martin Rodgers
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <MPG.e4418a3b63b3988989900@news.demon.co.uk>
My body fell on Sajid Ahmed the Peaceman like a dead horse,
who then wheezed these wise words:

> 	I know exactly what I'm talking about, thank you. 

You don't give that impression. Just the opposite, in fact.

> 	That's not the point. The point is, why make the functions 
> tail recursive when simple iteration is good enough? It's a waste 
> of time. 
> 	Just write the iterative code. It's faster and more efficient. 

Is it? I've not seen any evidence for that claim. Are you perhaps 
using an example with a poor implementation of Lisp, or even a 
compiler for another language that doesn't support recursion well?

You might as well "prove" that the Earth is flat, by only looking at a 
small piece of road that is flat. More realistic examples can also be 
found.
 
> 	That's where my gripe in Lisp comes about. It's a 
> programming language that compiles code into simpler assembly 
> language machine code. It's not Mathematics, where the results 
> of functions are instantaneously there. There is no need to 
> write tail recursive functions, when simple iterative code will do. 

Why use iteration when simple recursion will do? If you want to argue 
that programming at a lower level is always best, then you should be 
using assembly language for everything. Is this in fact what you do?

If a programmer chooses a particular language, their reasons for doing 
so need not depend on things like Mathematics. Recursion may be only one 
of many possible reasons for making a choice.

Note that Lisp doesn't depend on recursion. _You_ may think it does, 
but you insist on demonstrating your ignorance of Lisp. I strongly 
recommend that you go away and read a few of the Lisp tutorials in the 
Lisp FAQ (you did read the FAQ, didn't you?), and only then should you 
try to tell people what Lisp can or cannot do.

Assertions based on ignorance can be called mistruths. A less generous 
way of describing them would be as lies. Yes, I'm calling you a liar.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
    "My body falls on you like a dead horse" -- Portait Chinois
            Please note: my email address is gubbish
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33DA5024.593B@capital.net>
Martin Rodgers wrote:


> 
> My body fell on Sajid Ahmed the Peaceman like a dead horse,
> who then wheezed these wise words:
> 

	Sajid Ahmed the Peaceman then threw the dead horse 
off of himself, and went on with his business. 


> >       I know exactly what I'm talking about, thank you.
> 
> You don't give that impression. Just the opposite, in fact.
> 

	Sorry to burst your bubble. I'm not trying to impress 
you or anybody else. 

> >       That's not the point. The point is, why make the functions
> > tail recursive when simple iteration is good enough? It's a waste
> > of time.
> >       Just write the iterative code. It's faster and more efficient.
> 
> Is it? I've not seen any evidence for that claim. Are you perhaps
> using an example with a poor implementation of Lisp, or even a
> compiler for another language that doesn't support recursion well?



	Using recursive functions on a computer involves manipulating 
a stack. Using iterative statements does not. QED. 

> 
> You might as well "prove" that the Earth is flat, by only looking at a
> small piece of road that is flat. 

	What's that have to do with anything? 

	Like I said in the post that your replying to, my gripe 
is towards programmers (Lisp as well as others) living in their
own fantasy abstract mathematical world. It's time to accept 
reality.  


					Peaceman
From: Martin Rodgers
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <MPG.e446a93507eda33989907@news.demon.co.uk>
My body fell on Sajid Ahmed the Peaceman like a dead horse,
who then wheezed these wise words:

> 	Sorry to burst your bubble. I'm not trying to impress 
> you or anybody else. 

I didn't think you were trying to impress anyone. Oh no.
 
> 	Using recursive functions on a computer involves manipulating 
> a stack. Using iterative statements does not. QED. 

Really? How is it, then, that I can write tail recursive functions in 
C, and when compiled with a C compiler that optimises tail recursion, 
the resulting code reuses the activation record during a tail call?

You can also write recursive descent parsers in a wide variety of 
languages, including C. I first did this more than 10 years ago. Have 
you tried it yet?
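
If not, here is a toy sketch -- in Lisp, since this is comp.lang.lisp,
and using nothing beyond the standard language -- just to show the
shape of the technique.  The [ and ] tokens stand in for parentheses
so that they can sit in a quoted token list:

;; Grammar:  expr   ::= term   { "+" term }
;;           term   ::= factor { "*" factor }
;;           factor ::= number | "[" expr "]"
;; The remaining input lives in the special variable *TOKENS*.

(defvar *tokens* '())

(defun parse-expr ()
  (let ((value (parse-term)))
    (loop while (eq (first *tokens*) '+)
          do (pop *tokens*)
             (incf value (parse-term)))
    value))

(defun parse-term ()
  (let ((value (parse-factor)))
    (loop while (eq (first *tokens*) '*)
          do (pop *tokens*)
             (setf value (* value (parse-factor))))
    value))

(defun parse-factor ()
  (if (eq (first *tokens*) '[)
      (progn
        (pop *tokens*)                 ; consume "["
        (prog1 (parse-expr)            ; <- the descent recurses here
          (pop *tokens*)))             ; consume "]"
      (pop *tokens*)))                 ; otherwise, a plain number

;; (let ((*tokens* '(2 * [ 3 + 4 ]))) (parse-expr))  =>  14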

> > You might as well "prove" that the Earth is flat, by only looking at a
> > small piece of road that is flat. 
> 
> 	What's that have to do with anything?

See my above points about recursion.
 
> 	Like I said in the post that your replying to, my gripe 
> is towards programmers (Lisp as well as others) living in their
> own fantasy abstract mathematical world. It's time to accept 
> reality.  

So recursive descent parsers have no practical value? Could it be that 
they use abstract mathematical ideas that you object to? Say bye bye 
to all the world's compilers. Sajid Ahmed the Peaceman has declared 
them to be unnecessary applications of abstract mathematics.

You can program in all kinds of languages without going into any more
abstract mathematics than, say, ANSI C. Perhaps you just had a poor 
teacher who gave you the impression that Lisp is way too complex for 
you? If so, then find a better teacher. The Lisp FAQ can recommend a 
number of excellent books. A deeper understanding can come later.

Don't blame the language, friend. Blame your education. Some people 
get put off algebra for the same reasons. It's not too late to correct 
the damage. Stop attacking something because you don't understand it.

Someone with your lack of understanding is either ignorant or stupid. 
Now, ignorance can be cured with education. Are you willing to learn?
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
    "My body falls on you like a dead horse" -- Portait Chinois
            Please note: my email address is gubbish
From: Emergent Technologies Inc.
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5rdn9r$lci$2@newsie2.cent.net>
Sajid Ahmed the Peaceman wrote in article <·············@capital.net>...
>
> Using recursive functions on a computer involves manipulating
>a stack. Using iterative statements does not. QED.
>
> Like I said in the post that your replying to, my gripe
>is towards programmers (Lisp as well as others) living in their
>own fantasy abstract mathematical world. It's time to accept
>reality. 

Call me crazy, but I'm beginning to look forward to these posts.

It depends on your definition of ``recursion''.  The programmers I respect
and pay a lot of attention to will generally use the words recursion
and iteration interchangeably.  They are often talking about fixed points
of recursively enumerable functions.  The details of whether an
implementation of such a function pushes or pops the stack are immaterial
(both to me and them).

I find that the really good programmers --- ok, I'll name some names:
Henry Baker, Alan Bawden, Jonathan Rees, Will Clinger, Gerry Sussman,
Bill Rozas (and quite a few others) --- can take an extremely
abstract mathematical description of something, cast it to a
high level formal description in some language with virtually
no changes, and make it run blisteringly fast.  I don't think it is an
accident that all these people use Lisp to express their programs.

There is the occasional person who, while brilliant in their own field,
cannot code worth beans, but these people seem few and far between
to me.
From: Martin Rodgers
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <MPG.e454bc990c18e24989909@news.demon.co.uk>
My body fell on Emergent Technologies Inc. like a dead horse, thusly:

> Call me crazy, but I'm beginning to look forward to these posts.

They are entertaining, aren't they?
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
    "My body falls on you like a dead horse" -- Portait Chinois
            Please note: my email address is gubbish
From: ········@wat.hookup.net
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5rh38b$aei$1@nic.wat.hookup.net>
In <·············@capital.net>, Sajid Ahmed the Peaceman <········@capital.net> writes:
> ...
>> >       That's not the point. The point is, why make the functions
>> > tail recursive when simple iteration is good enough? It's a waste
>> > of time.
>> >       Just write the iterative code. It's faster and more efficient.
>> 
>> Is it? I've not seen any evidence for that claim. Are you perhaps
>> using an example with a poor implementation of Lisp, or even a
>> compiler for another language that doesn't support recursion well?
>
>
>
>	Using recursive functions on a computer involves manipulating 
>a stack. Using iterative statements does not. QED. 

Looks like you either didn't read the replies to your previous postings or
didn't understand them:  tail recursive functions (when properly compiled)
do not use the stack.  So what is the Q (in QED)?


>	Like I said in the post that your replying to, my gripe 
>is towards programmers (Lisp as well as others) living in their
>own fantasy abstract mathematical world. It's time to accept 
>reality.  

You mean the reality that people write about Lisp who don't know what they
are talking about and are unable to comprehend explanations given to them?

Hartmann Schaffer
From: Fred Haineux
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <bc-3107971235490001@17.127.10.119>
In article <·············@capital.net>, ········@capital.net wrote:
|          Using recursive functions on a computer involves manipulating 
|  a stack. Using iterative statements does not. QED. 

The point that we have been repeatedly making is that Lisp compilers are
smart enough to turn some cases of recursive functions into iterative
ones. By doing so, these functions do not use any stack. Even though you
write something that looks like it's going to use the stack, it doesn't.
Period.

This frees you to use recursive or iterative notation for a function, as
suits you. If you happen to like iteration, hey, go for it. Some functions
are easier to figure out iteratively. However, some functions are more
easily written recursively. If I write them that way, *I* will be able to
understand them more easily, and the compiler may or may not optimize it
into an iterative function, which may or may not use a stack to execute.
Personal style.

But this begs a bigger question: why is it so all-important not to use the
stack? Because the stack is slow?

So what!

When I write a Lisp program, I start by making a wild guess as to how the
program should work. Then I play with the program and repeatedly refine it
until it works right. If, in the course of development, I happen to
express a function as recursive or iterative, I don't care. I just get the
darn thing working. Whatever seems right, probably is. (I might have an
AHA! insight later, and redesign the program to be simpler or clearer.)

THEN I look at speed. The funny thing is: the Lisp compiler will often
make optimizations that produce fast code, without my having to think
about them. Assuming I still need to speed something up, I can pepper my
code with declarations (which are so much easier to add after you KNOW
what the variables are, instead of before, when you have to guess, and
therefore keep changing them....) and maybe do some profiling to see
what's slow and what's not.
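
For instance (a sketch only -- the exact declarations depend on what
you eventually learn about your data):

;; First draft: just get it working.
(defun sum-vector (v)
  (let ((total 0.0d0))
    (dotimes (i (length v) total)
      (incf total (aref v i)))))

;; Later, once I KNOW that V is always a simple vector of
;; double-floats, I can say so and ask the compiler to favour speed.
(defun sum-vector (v)
  (declare (type (simple-array double-float (*)) v)
           (optimize (speed 3) (safety 0)))
  (let ((total 0.0d0))
    (declare (type double-float total))
    (dotimes (i (length v) total)
      (incf total (aref v i)))))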

So when I get to looking at speed, I can do so all at once, as a global
problem. This is an enormous luxury -- you really ought to try it sometime.
Sure, in C and C++ you're stuck declaring your variables in advance,
but other than that, you can simply ignore speed concerns and write a loop
using whatever syntax seems appropriate.

I mean, it's not like C or C++ won't compile your program if it's
recursive and it OUGHT to be iterative. Indeed, C compilers these days
will, in fact, rewrite your code to optimize it. Just because C looks like
assembly code doesn't mean that compiling it is a one-to-one translation!

But both C and Lisp compilers let you have inline assembly language code,
if you want to.

...

I think your original question was, "Why does Lisp require you to write
functions using recursive notation?" 

The answer to that is, "Lisp does not." 

You then asked, "Why is it considered 'better style' to write functions
always in recursive notation?"

The answer to that is, "No, it is not considered better style to write
functions always in recursive notation. You can use iterative notation if
you like, and indeed, Lisp provides you with more kinds of iteration than
C. It's a question of personal style to decide which notation to use."

Then you asserted that recursive functions were necessarily slower than
iterative.

The answer to that is, "No, not necessarily. Good compilers optimize code."

I've gone on to assert that execution speed is not the be-all and end-all
of a program.

Certainly, there is a certain point at which additional speed is of no
value. I really couldn't care less if it takes me .01 or .001 seconds to
do something, UNLESS I intend to do more than ten thousand of them.

I assert further that there are two other measures of speed that are just
as important as execution speed: development speed, and comprehension
speed.

Development speed is the measure of how long it takes to write a program.
Comprehension speed is the measure of how long it takes someone to learn
how the code operates.

We are all familiar with the maxim, "six months later, you WILL have to
read the comments." Kernighan and Plauger, in "The Elements of Programming
Style," repeatedly enjoin the programmer to "Write clearly -- don't be too
clever." Indeed, of the one-hundred or so "rules" which are gone over,
MOST of them are simple variants of this fundamental rule.

Obviously, what might be obvious to you might be gibberish to someone
else. You will have to decide where to draw the line.
From: ········@wat.hookup.net
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5rdea4$l8g$1@nic.wat.hookup.net>
In <·············@capital.net>, Sajid Ahmed the Peaceman <········@capital.net> writes:
> ...
>	That's not the point. The point is, why make the functions 
>tail recursive when simple iteration is good enough? It's a waste 
>of time. 
>	Just write the iterative code. It's faster and more efficient. 
>
>	That's where my gripe in Lisp comes about. It's a 
>programming language that compiles code into simpler assembly 
>language machine code. It's not Mathematics, where the results 
>of functions are instantaneously there. There is no need to 
>write tail recursive functions, when simple iterative code will do. 

In case you didn't get it yet, the machine code for iteration and tail
recursion are indistinguishable.

Hartmann Schaffer
From: Emergent Technologies Inc.
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5rdng8$lci$3@newsie2.cent.net>
 ········@wat.hookup.net wrote in article <············@nic.wat.hookup.net>.
..
>In <·············@capital.net>, Sajid Ahmed the Peaceman <········@capital.
net> writes:
>> ...
>> That's not the point. The point is, why make the functions
>>tail recursive when simple iteration is good enough? It's a waste
>>of time.
>> Just write the iterative code. It's faster and more efficient.
>>
>> That's where my gripe in Lisp comes about. It's a
>>programming language that compiles code into simpler assembly
>>language machine code. It's not Mathematics, where the results
>>of functions are instantaneously there. There is no need to
>>write tail recursive functions, when simple iterative code will do.
>
>In case you didn't get it yet, the machine code for iteration and tail
>recursion are indistinguishable.
>
>Hartmann Schaffer

And as Guy Steele pointed out, the machine code for general recursion
and iteration are indistinguishable.  It's the subproblems that push the
stack, not the recursive/iterative calls.
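
To make that concrete (a throwaway sketch): in the first definition
below, the pending addition is the subproblem that keeps a frame
alive; in the second, the call is the last thing done, so a compiler
that optimises tail calls has nothing left to save.

;; Each call still owes an addition when it recurses, so each call
;; holds a stack frame open.
(defun len (items)
  (if (null items)
      0
      (+ 1 (len (rest items)))))

;; Here the recursive call *is* the answer; with tail-call
;; optimisation this compiles to the same jump a loop would.
(defun len2 (items &optional (so-far 0))
  (if (null items)
      so-far
      (len2 (rest items) (1+ so-far))))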
From: Rob Rodgers
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33da750b.178126507@news.wam.umd.edu>
········@wat.hookup.net wrote:
>In case you didn't get it yet, the machine code for iteration and tail
>recursion are indistinguishable.

That's not the most amusing thing about this thread though.  The most
amusing part is the way one side insists on writing everything one
way, whether it really is more natural to structure every problem that
way or not, and the other whines about it being slow even though
modern compilers output the same code for both _styles_.

But hey, sometimes a loop is.. just a loop.
From: David Thornley
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5ri8gi$gv0$1@darla.visi.com>
In article <··················@news.wam.umd.edu>,
Rob Rodgers <·····@acm.org> wrote:
>········@wat.hookup.net wrote:
>>In case you didn't get it yet, the machine code for iteration and tail
>>recursion are indistinguishable.
>
>That's not the most amusing thing about this thread though.  The most
>amusing part is the way one side insists on writing everything one
>way, whether it really is more natural to structure every problem that
>way or not, and the other whines about it being slow even though
>modern compilers output the same code for both _styles_.
>
From here, it looks like one side insists on writing everything
in a form they find natural, and the other side insists that everything
must be written to include gratuitous stack manipulation to be
stylish.  Nobody is objecting to the Peaceman's claim that
he finds iteration more natural.  If he were using Common
Lisp, we could tell him how to use loops.  However, he has been
told emphatically that looping is bad Lisp style, and so far refuses
to believe that some of us use things like do and loop and mapcar.

>But hey, sometimes a loop is.. just a loop.
>
Yup.  This is why Norvig, in Paradigms of Artificial Intelligence
Programming: Case Studies in Common Lisp, discussed about sixteen
different ways to do a loop in one of the earlier chapters.  Some things
want to be recursive, even at some cost in efficiency.  Some things do nicely
in tail recursion, which is (when the compiler is through with it)
the same thing as iteration.  Some things naturally loop, and Common
Lisp has more looping constructs than any other language I know of.

David Thornley
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33E25677.38A7@capital.net>
········@wat.hookup.net wrote:
> 
> In case you didn't get it yet, the machine code for iteration and tail
> recursion are indistinguishable.
> 
> Hartmann Schaffer

	Better go back to school and take a course in compiler design.
	One word: stack.
				
				Peaceman
From: Marco Antoniotti
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <scfpvrxpky1.fsf@infiniti.PATH.Berkeley.EDU>
In article <·············@capital.net> Sajid Ahmed the Peaceman <········@capital.net> writes:

   ········@wat.hookup.net wrote:
   > 
   > In case you didn't get it yet, the machine code for iteration and tail
   > recursion are indistinguishable.
   > 
   > Hartmann Schaffer

	   Better go back to school and take a course in compiler design.
	   One word: stack.

				   Peaceman

If there is a person who needs a compiler course it is you.

First you stated that

	all recursive programs can be expressed by iteration without
	the use of a stack

Now you essentially state

	tail-recursive definitions always need a stack.

Why don't *you* go back to school and take a serious compiler (or PL)
course and then come back here telling everybody that you were wrong
on these two counts?  There is nothing bad about admitting that you
are wrong.  At least you will have learned something.

PS.  Apologies to everybody else,  I just cannot resist.

Cheers
-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Rainer Joswig
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <joswig-ya023180000208970038290001@news.lavielle.com>
In article <·············@capital.net>, ········@capital.net wrote:

> ········@wat.hookup.net wrote:
> > 
> > In case you didn't get it yet, the machine code for iteration and tail
> > recursion are indistinguishable.
> > 
> > Hartmann Schaffer
> 
>         Better go back to school and take a course in compiler design.
>         One word: stack.
>                                 
>                                 Peaceman

How about not being so quick to send other people back to school?


Tail recursion does not use a stack.


(defun fac (n acc)
  (declare (optimize (speed 3) (space 0) (safety 0) (debug 0)))
  (if (zerop n)
    acc
    (fac (1- n) (* n acc))))

? (fac 5 1)
120


Macintosh Common Lisp 4.1 generated PowerPC code:


? (disassemble 'fac)
L0 
  (MFLR LOC-PC)
  (STWU SP -16 SP)
  (STW FN 4 SP)
  (STW LOC-PC 8 SP)
  (STW VSP 12 SP)
  (MR FN TEMP2)
  (LWZ IMM0 -117 RNIL)
  (TWLLT SP IMM0)
  (VPUSH ARG_Z)
  (:REGSAVE SAVE0 4)
  (VPUSH SAVE0)
  (MR SAVE0 ARG_Y)
  (MR ARG_Y SAVE0)
  (LI ARG_Z '0)
  (BLA .SPBUILTIN-EQ)
  (CMPW ARG_Z RNIL)
  (BEQ L96)
  (LWZ ARG_Z 4 VSP)
  (LWZ SAVE0 0 VSP)
  (LWZ LOC-PC 8 SP)
  (MTLR LOC-PC)
  (LWZ VSP 12 SP)
  (LWZ FN 4 SP)
  (LA SP 16 SP)
  (BLR)
L96 
  (MR ARG_Y SAVE0)
  (LI ARG_Z '1)
  (BLA .SPBUILTIN-MINUS)
  (MR ARG_Y ARG_Z)
  (VPUSH ARG_Y)
  (MR ARG_Y SAVE0)
  (LWZ ARG_Z 8 VSP)
  (BLA .SPBUILTIN-TIMES)
  (LWZ ARG_Y 0 VSP)
  (LA VSP 4 VSP)
  (LWZ SAVE0 0 VSP)
  (SET-NARGS 2)
  (MR TEMP2 FN)
  (LWZ LOC-PC 8 SP)
  (MTLR LOC-PC)
  (LWZ VSP 12 SP)
  (LWZ FN 4 SP)
  (LA SP 16 SP)
  (B L0)


O.k., let's see what's on the stack at n = 1:


(defun fac (n acc)
  (declare (optimize (speed 3) (space 0) (safety 0) (debug 0)))
  (when (= n 1)
    (print-call-history))
  (if (zerop n)
    acc
    (fac (1- n) (* n acc))))


There is only one call to FAC pending on the stack.


? (fac 110 1)
(3202728) : 0 "FAC" 88
  0 ACC:
1588245541522742940425370312709077287172441023447356320758174831844456716294
8183030959960131517678520479243672638179990208521148623422266876757623911219
200000000000000000000000000 ("required")
  1 : #<PROCESS Listener [Suspended] #x2D1FEC6> ("saved SAVE0")

(3202738) : 1 NIL NIL

(3202748) : 2 "CCL::CALL-CHECK-REGS" 80
  0 : FAC ("required")
  1 : (110 1) ("rest")
  2 : (#<PROCESS Listener [Suspended] #x2D1FEC6> 0 *TERMINAL-IO* (#<RESTART
ABORT #x3A63D3E> #<RESTART ABORT-BREAK #x3A63D66>) #<BOGUS object @
#x1B7CB6E> CCL::%PPC-APPLY-LEXPR-WITH-METHOD-CONTEXT (#<STANDARD-METHOD
INITIALIZE-INSTANCE (FRED-WINDOW)> #<CCL::STANDARD-KERNEL-METHOD
INITIALIZE-INSTANCE (WINDOW)> #<CCL::STANDARD-KERNEL-METHOD
INITIALIZE-INSTANCE (SIMPLE-VIEW)> #<CCL::STANDARD-KERNEL-METHOD
INITIALIZE-INSTANCE (CCL::INSTANCE-INITIALIZE-MIXIN)>
#<CCL::STANDARD-KERNEL-METHOD INITIALIZE-INSTANCE (STANDARD-OBJECT)>) 0)
  3 : FAC

(3202758) : 3 NIL NIL

(3202768) : 4 "CCL::TOPLEVEL-EVAL" 176
  0 : (FAC 110 1) ("required")
  1 : NIL ("optional")
  2 : NIL

(3202778) : 5 "CCL::READ-LOOP-INTERNAL" 716
  0 : *EVAL-QUEUE* ("saved SAVE0")
  1 : CCL::*CURRENT-STACK-GROUP* ("saved SAVE1")
  2 : CCL::*RESUME-STACK-GROUP-ARG* ("saved SAVE2")
  3 : #<BOGUS object @ #x1B7CB7E> ("saved SAVE3")
  4 CCL::*BREAK-LEVEL*: 0 (:SAVED-SPECIAL)
  5 CCL::*LAST-BREAK-LEVEL*: 0 (:SAVED-SPECIAL)
  6 *LOADING-FILE-SOURCE-FILE*: NIL (:SAVED-SPECIAL)
  7 CCL::*IN-READ-LOOP*: NIL (:SAVED-SPECIAL)
  8 CCL::*LISTENER-P*: T (:SAVED-SPECIAL)
  9 ***: NIL (:SAVED-SPECIAL)
  10 **: NIL (:SAVED-SPECIAL)
  11 *: NIL (:SAVED-SPECIAL)
  12 +++: NIL (:SAVED-SPECIAL)
  13 ++: NIL (:SAVED-SPECIAL)
  14 +: NIL (:SAVED-SPECIAL)
  15 ///: NIL (:SAVED-SPECIAL)
  16 //: NIL (:SAVED-SPECIAL)
  17 /: NIL (:SAVED-SPECIAL)
  18 -: NIL (:SAVED-SPECIAL)
  19 : (FAC 110 1)
  20 : #<RESTART ABORT-BREAK #x3A63D66>
  21 : #<RESTART ABORT #x3A63D3E>
  22 CCL::%RESTARTS%: ((#<RESTART ABORT #x3A63F66> #<RESTART ABORT-BREAK
#x3A63F8E>)) (:SAVED-SPECIAL)
  23 : T

(32027C8) : 6 NIL NIL

(32027D8) : 7 "CCL::READ-LOOP" 356
  0 : 0 ("required")
  1 : (#<RESTART ABORT #x3A63F66> #<RESTART ABORT-BREAK #x3A63F8E>) ("saved
SAVE0")
  2 : T
  3 : NIL
  4 CCL::*LISTENER-P*: NIL (:SAVED-SPECIAL)
  5 *EVAL-QUEUE*: NIL (:SAVED-SPECIAL)

(32027F8) : 8 "TOPLEVEL-LOOP" 48

(3202818) : 9 "Anonymous Function #x22E8E26" 44

(3202838) : 10 "CCL::RUN-PROCESS-INITIAL-FORM" 340
  0 : #<PROCESS Listener [Running] #x2D1FEC6> ("required")
  1 : (#<COMPILED-LEXICAL-CLOSURE #x2D1FF2E>) ("required")
  2 : CCL::*NEXT-STACK-GROUP* ("saved SAVE0")
  3 : NIL
  4 : #<RESTART ABORT-BREAK #x3A63F8E>
  5 : #<RESTART ABORT #x3A63F66>
  6 CCL::%RESTARTS%: NIL (:SAVED-SPECIAL)
  7 : #<COMPILED-LEXICAL-CLOSURE #x2D1FF2E>

(3202888) : 11 "CCL::%RUN-STACK-GROUP-FUNCTION" 796
  0 : #<BOGUS object @ #x3A63FFE> ("required")
  1 : 13109802 ("required")
  2 *TOP-LISTENER*: NIL (:SAVED-SPECIAL)

1588245541522742940425370312709077287172441023447356320758174831844456716294
8183030959960131517678520479243672638179990208521148623422266876757623911219
200000000000000000000000000




Compare it to the non-tail-recursive version:



(defun fac (n)
  (declare (optimize (speed 3) (space 0) (safety 0) (debug 0)))
  (when (= n 1)
    (print-call-history))
  (if (zerop n)
    1
    (* n (fac (1- n)))))



? (fac 7)
(32026C8) : 0 "FAC" 84
  0 : 2 ("saved SAVE0")

(32026D8) : 1 "FAC" 160
  0 : 3 ("saved SAVE0")

(32026E8) : 2 "FAC" 160
  0 : 4 ("saved SAVE0")

(32026F8) : 3 "FAC" 160
  0 : 5 ("saved SAVE0")

(3202708) : 4 "FAC" 160
  0 : 6 ("saved SAVE0")

(3202718) : 5 "FAC" 160
  0 : 7 ("saved SAVE0")

(3202728) : 6 "FAC" 160
  0 : #<PROCESS Listener [Suspended] #x2D1FEC6> ("saved SAVE0")

(3202738) : 7 NIL NIL

(3202748) : 8 "CCL::CALL-CHECK-REGS" 80
  0 : FAC ("required")
  1 : (7) ("rest")
  2 : (#<PROCESS Listener [Suspended] #x2D1FEC6> 0 *TERMINAL-IO* (#<RESTART
ABORT #x3A63D3E> #<RESTART ABORT-BREAK #x3A63D66>) #<BOGUS object @
#x1B7CB6E> CCL::%PPC-APPLY-LEXPR-WITH-METHOD-CONTEXT (#<STANDARD-METHOD
INITIALIZE-INSTANCE (FRED-WINDOW)> #<CCL::STANDARD-KERNEL-METHOD
INITIALIZE-INSTANCE (WINDOW)> #<CCL::STANDARD-KERNEL-METHOD
INITIALIZE-INSTANCE (SIMPLE-VIEW)> #<CCL::STANDARD-KERNEL-METHOD
INITIALIZE-INSTANCE (CCL::INSTANCE-INITIALIZE-MIXIN)>
#<CCL::STANDARD-KERNEL-METHOD INITIALIZE-INSTANCE (STANDARD-OBJECT)>) 0)
  3 : FAC

(3202758) : 9 NIL NIL

(3202768) : 10 "CCL::TOPLEVEL-EVAL" 176
  0 : (FAC 7) ("required")
  1 : NIL ("optional")
  2 : NIL

(3202778) : 11 "CCL::READ-LOOP-INTERNAL" 716
  0 : *EVAL-QUEUE* ("saved SAVE0")
  1 : CCL::*CURRENT-STACK-GROUP* ("saved SAVE1")
  2 : CCL::*RESUME-STACK-GROUP-ARG* ("saved SAVE2")
  3 : #<BOGUS object @ #x1B7CB7E> ("saved SAVE3")
  4 CCL::*BREAK-LEVEL*: 0 (:SAVED-SPECIAL)
  5 CCL::*LAST-BREAK-LEVEL*: 0 (:SAVED-SPECIAL)
  6 *LOADING-FILE-SOURCE-FILE*: NIL (:SAVED-SPECIAL)
  7 CCL::*IN-READ-LOOP*: NIL (:SAVED-SPECIAL)
  8 CCL::*LISTENER-P*: T (:SAVED-SPECIAL)
  9 ***: NIL (:SAVED-SPECIAL)
  10 **: NIL (:SAVED-SPECIAL)
  11 *: NIL (:SAVED-SPECIAL)
  12 +++: NIL (:SAVED-SPECIAL)
  13 ++: NIL (:SAVED-SPECIAL)
  14 +: NIL (:SAVED-SPECIAL)
  15 ///: NIL (:SAVED-SPECIAL)
  16 //: NIL (:SAVED-SPECIAL)
  17 /: NIL (:SAVED-SPECIAL)
  18 -: NIL (:SAVED-SPECIAL)
  19 : (FAC 7)
  20 : #<RESTART ABORT-BREAK #x3A63D66>
  21 : #<RESTART ABORT #x3A63D3E>
  22 CCL::%RESTARTS%: ((#<RESTART ABORT #x3A63F66> #<RESTART ABORT-BREAK
#x3A63F8E>)) (:SAVED-SPECIAL)
  23 : T

(32027C8) : 12 NIL NIL

(32027D8) : 13 "CCL::READ-LOOP" 356
  0 : 0 ("required")
  1 : (#<RESTART ABORT #x3A63F66> #<RESTART ABORT-BREAK #x3A63F8E>) ("saved
SAVE0")
  2 : T
  3 : NIL
  4 CCL::*LISTENER-P*: NIL (:SAVED-SPECIAL)
  5 *EVAL-QUEUE*: NIL (:SAVED-SPECIAL)

(32027F8) : 14 "TOPLEVEL-LOOP" 48

(3202818) : 15 "Anonymous Function #x22E8E26" 44

(3202838) : 16 "CCL::RUN-PROCESS-INITIAL-FORM" 340
  0 : #<PROCESS Listener [Running] #x2D1FEC6> ("required")
  1 : (#<COMPILED-LEXICAL-CLOSURE #x2D1FF2E>) ("required")
  2 : CCL::*NEXT-STACK-GROUP* ("saved SAVE0")
  3 : NIL
  4 : #<RESTART ABORT-BREAK #x3A63F8E>
  5 : #<RESTART ABORT #x3A63F66>
  6 CCL::%RESTARTS%: NIL (:SAVED-SPECIAL)
  7 : #<COMPILED-LEXICAL-CLOSURE #x2D1FF2E>

(3202888) : 17 "CCL::%RUN-STACK-GROUP-FUNCTION" 796
  0 : #<BOGUS object @ #x3A63FFE> ("required")
  1 : 13109802 ("required")
  2 *TOP-LISTENER*: NIL (:SAVED-SPECIAL)

5040


Can you see the difference?

-- 
http://www.lavielle.com/~joswig/
From: Alexey Goldin
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <m1g1sl8jkt.fsf@spot.uchicago.edu>
Sajid Ahmed the Peaceman <········@capital.net> writes:

 
> > Can you see the difference?
> > 
> > --
> > http://www.lavielle.com/~joswig/
> 
> 
> 	For your sake, I hope the Macintosh makes a come a back. 
> They have already filed for chapter 11, and are just being 
> bailed out by Microsoft. 
> 
> 				Peaceman


Boy, this is a technical argument...
From: Martin Rodgers
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <MPG.e54d2f5e67d4b71989930@news.demon.co.uk>
Sajid Ahmed the Peaceman wheezed these wise words:

> 	For your sake, I hope the Macintosh makes a come a back. 
> They have already filed for chapter 11, and are just being 
> bailed out by Microsoft. 

How does this support your argument? Are you saying that "recursion is 
expensive on every machine but the Mac"? Can you show us how recursion 
for the Mac is different from every other machine? Or are you just 
ignoring Rainer's excellent post by trying to divert the subject to an 
issue that is not at all relevant here?

TRO works on other machines, too. You must prove that it doesn't. 
("Your mission, should you accept it...") Even Mr Phelps couldn't do 
that, and you're already admitting that you've lost by resorting to 
such a cheap tactic (yeah, _more_ flamebait).
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
        "There are no limits." -- ad copy for Hellraiser
            Please note: my email address is gubbish
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33ECA98E.F6B@capital.net>
Martin Rodgers wrote:
> 
> Sajid Ahmed the Peaceman wheezed these wise words:
> 
> >       For your sake, I hope the Macintosh makes a come a back.
> > They have already filed for chapter 11, and are just being
> > bailed out by Microsoft.
> 
> How does this support your argument? Are you saying that "recursion is
> expensive on every machine but the Mac"? Can you show us how recursion
> for the Mac is different from every other machine? Or are you just the
> ignoring Rainer's excellent post by trying to divert the subject to an
> issue that is not at all relevant here?
> 
> TRO works on other machines, too. You must prove that it doesn't.
> ("Your mission, should you accept it...") Even Mr Phelps couldn't do
> that, and you're already admitting that you've lost by resorting to
> such a cheap tactic (yeah, _more_ flamebait).
> --


	I have the proof that it doesn't work (in all situations).
Rainer seems like a nice guy. I feel bad that he knows A lot 
about the MAC (soon to only be found in museums, unless Apple 
starts making PC clones called Macs).

					Peaceman
From: Martin Rodgers
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <MPG.e56ca11eeb48c5798994e@news.demon.co.uk>
Sajid Ahmed the Peaceman wheezed these wise words:

> 	I have the proof that it doesn't work (in all situations).
> Rainer seems like a nice guy. I feel bad that he knows A lot 
> about the MAC (soon to only be found in museums, unless Apple 
> starts making PC clones called Macs).

If you have proof that recursion is as expensive as you say, then show 
it to us. Otherwise, take your OS flamebait to another newsgroup, one 
where they'll appreciate it. Like comp.sys.mac.advocacy.

BTW, why did you not show us this proof sooner?
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
        "There are no limits." -- ad copy for Hellraiser
            Please note: my email address is gubbish
From: Patric Jonsson
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5sj0v6$qdn@mumrik.nada.kth.se>
In article <············@capital.net> ········@capital.net writes:
>	I have the proof that it doesn't work (in all situations).

Demo or die.


-- 
Patric Jonsson,·······@nada.kth.se;Joy, Happiness, and Banana Mochas all round.
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33EF46F9.67CE@capital.net>
Patric Jonsson wrote:
> 
> In article <············@capital.net> ········@capital.net writes:
> >       I have the proof that it doesn't work (in all situations).
> 
> Demo or die.
> 

	Don't kill me. 
	I posted an example in one of my previous posts.

						Peaceman
From: Martin Rodgers
Subject: Re: Lisp is *not* slow
Date: 
Message-ID: <MPG.e5a83e0a4ec6c5798998c@news.demon.co.uk>
Sajid Ahmed the Peaceman wheezed these wise words:

> 	Don't kill me. 
> 	I posted an example in one of my previous posts.

And your code was contrived so that it would not be tail recursive.
It was a poor use of _C_, never mind recursion.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
        "There are no limits." -- ad copy for Hellraiser
            Please note: my email address is gubbish
From: ········@wat.hookup.net
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5sfi30$o57$1@nic.wat.hookup.net>
In <·············@capital.net>, Sajid Ahmed the Peaceman <········@capital.net> writes:
>Rainer Joswig wrote:
> ... 
>	For your sake, I hope the Macintosh makes a come a back. 
>They have already filed for chapter 11, and are just being 
>bailed out by Microsoft. 

Could you please explain what this witty argument has to do with the cost
of (tail)recursion and your superior compiler knowledge?  Or are you bailing
out?

Hartmann Schaffer
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33ECB119.245D@capital.net>
········@wat.hookup.net wrote:
> 
> Could you please explain what this witty argument has to do with the cost
> of (tail)recursion and your superior compiler knowledge?  Or are you bailing
> out?
> 
> Hartmann Schaffer


	Let's take a look at the following example of tail recursion, 
(In C++, sorry don't know how to do references in Lisp) 


int factorial(int &number) {
   int x;
   if (number == 1) return 1; 
   x = number-1;
   return number * factorial(x);
} 


      If you don't know C++ : 

int factorial(int *number) {
   int x;
   if (*number == 1) return 1; 
   x = *number-1;
   return *number * factorial(&x);
} 


	There you have it, tail recurion that needs a stack.


					     Peaceman
From: Martin Rodgers
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <MPG.e56d07aa88c9389989953@news.demon.co.uk>
Sajid Ahmed the Peaceman wheezed these wise words:

> 	There you have it, tail recurion that needs a stack.

Yeah, in C++. Curiously, this has recently been discussed in 
comp.lang.lisp - were you reading those posts? It's a C/C++ problem. 
Lisp doesn't use references - it doesn't need them. The answer is 
amazingly simple: don't use references!

Meanwhile, people reading this can further amuse themselves, if they 
wish to, by reading some arguments for and against using Lisp:

<URL:http://www.wildcard.demon.co.uk/lisp/for.html>
<URL:http://www.wildcard.demon.co.uk/lisp/against.html>

I just wish I had more archives to offer, like all the threads in 
which people like Peaceman have tried to show that Lisp can't be used 
for things that it is used for. That way, we could save some time by all re-
reading those threads, and then the C++ people could look for some 
_new_ arguments. Obviously the old ones have failed, because some 
people are still using Lisp!

<URL:http://www.wildcard.demon.co.uk/archives/>

Peaceman, could you please tell us what you hope to gain by this 
attack on a language which you don't use, have no interest in using, 
and can't even understand?
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
        "There are no limits." -- ad copy for Hellraiser
            Please note: my email address is gubbish
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33EF46B5.16E9@capital.net>
Martin Rodgers wrote:
> 
> Sajid Ahmed the Peaceman wheezed these wise words:
> 
> >       There you have it, tail recurion that needs a stack.
> 
> Yeah, in C++. Curiously, this has recently been discussed in
> comp.lang.lisp - were you reading those posts? It's a C/C++ problem.
> Lisp doesn't use references - it doesn't need them. The answer is
> amazingly simple: don't use references!
> 

	Lisp uses No references? If that were the case you just stated
something that helps my argument. I will digress, however, because 
I think that Lisp probably does use references (aka pointers) in
many situations. If not, and you have an array or string, the array is 
*copied* every time you call the recursive function.

					Peaceman
From: Martin Rodgers
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <MPG.e5a2d502904c49d989988@news.demon.co.uk>
Sajid Ahmed the Peaceman wheezed these wise words:

> 	Lisp uses No references? If that were the case you just stated
> something that helps my argument.

How does it do that? Like this, perhaps?

> Pointer arithmetic in C has some advantages. Lisp doesn't have pointer
> arithmetic. Therefore, lisp sucks. Refute. 
>
> David Hanley 

Lisp doesn't need pointer arithmetic. In fact, a great many languages 
get by just fine _without pointer arithmetic_. The same is true for 
references. Read a good book about Lisp and you might discover why.

> I will digress, however, because 
> I think that Lisp probably does use references (aka pointers) in
> many situations. If not, and you have an array or string, the array is 
> *copied* every time you call the recursive function.

That's a different kind of reference. Again, you should read a good 
Lisp book. Winston and Horn's Lisp tutorial has an excellent section 
on this very issue. Don't assume that Lisp behaves in the same way as 
C/C++ - it doesn't. You have to pay attention in order to learn this.
Can you do that, please?
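
To be concrete (a two-minute sketch, standard CL only): the vector
below is not copied when it is passed; the callee sees the very same
object, which is why your "the array is *copied* every time" worry
does not arise.

(defun poke (v)
  ;; V is the very same vector the caller passed; nothing is copied.
  (setf (aref v 0) 99)
  v)

(defvar *v* (vector 1 2 3))

;; (poke *v*)  =>  #(99 2 3)
;; *v*         =>  #(99 2 3)   ; the caller's vector, changed in place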

Or are you attacking Lisp simply because it is different from C++? 
That would be a plausible explanation for your posts to this thread.
Can you deny it?

BTW, your recursive code was contrived. A cleaner version wouldn't use 
a reference to pass a parameter, thus enabling tail recursion. Yes, it 
can be done even in C! This has been pointed out to you many times, 
but you always ignore it. Are you an idiot or a fool? If you are 
neither, why can you not refute C examples? You do know the C 
language, don't you? It appears that you do.

Perhaps you refuse to answer these points by choice, because they 
destroy your whole argument? It's plausible.

Would you like to show that you're not clueless? That's also easy. 
Name the Lisps that you've used. Are any of them commercial systems, 
like LispWorks and Allegro CL? Here's LispWorks:

<URL:http://www.harlequin.co.uk/products/ads/lispworks/lispworks.html>

There's a free version of ACL for you to play with:
<URL:http://www.franz.com/frames/dload.main.html>

Show us some code that runs with ACL/PC (I'm sure you can find a 
machine that can run it) that demonstrates the problems that you 
describe. Can you do that? Why don't you do it? 
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
        "There are no limits." -- ad copy for Hellraiser
            Please note: my email address is gubbish
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33F1CFA8.5EEC@capital.net>
Martin Rodgers wrote:
> 
> Sajid Ahmed the Peaceman wheezed these wise words:
> 
> >       Lisp uses No references? If that were the case you just stated
> > something that helps my argument.
> 
> How does it do that? Like this, perhaps?
> 
> > Pointer arithmetic in C has some advantages. Lisp doesn't have pointer
> > arithmetic. Therefore, lisp sucks. Refute.
> >
> > David Hanley
> 
> Lisp doesn't need pointer arithmetic. In fact, a great many languages
> get by just fine _without pointer arithmetic_. The same is true for
> references. Read a good book about Lisp and you might discover why.
> 

	Pointer arithmetic is faster, than a list that can hold a variety
of different types as elements. Lets say you want to access the 1000th
element of a list. With pointer arithmetic, your there in an instance. 
With the list of varying elements, you have to traverse 999 elements 
before you get to the thousandth. This is true, unless you have some
array
pointing to the location of each element, but then your back to pointer 
arithmetic again. 

        
	Deleting elements is also a lot quicker. The programmer 
can write code to easily and quickly free up memory when it's no longer 
needed. If you leave it up to a garbage collector, it has to determine 
whether or not a particular part of memory is needed before it can free
it... translation .. slow. 

> > I will digress, however, because
> > I think that Lisp probably does use references (aka pointers) in
> > many situations. If not, and you have an array or string, the array is
> > *copied* every time you call the recursive function.
> 
> That's a different kind of reference. 

	Do tell. How are they different? 


> Again, you should read a good
> Lisp book.  Winston and Horn's Lisp tutorial has an excellent section
> on this very issue. Don't assume that Lisp behaves in the same way as
> C/C++ - it doesn't. 


	I don't think so. I've had enough of Lisp already.  

>You have to pay attention in order to learn this.
> Can you do that, please? 
> Or are you attacking Lisp simply because it is different from C++?
> That would be a plausible explanation for your posts to this thread.
> Can you deny it?


	Nah... I'm attacking Lisp because theres too much recursion in it. 
I have yet to see a lisp program without any recursion. I know that
you guys could probably easily write some. 

        

> 
> BTW, your recursive code was contrived. A cleaner version wouldn't use
> a refence to pass a parameter, 

	The code was meant as an example, not something that's clean
and fast. 

>thus enabling tail recursion. 

	You mean TRO. 


> Yes, it
> can be done even in C! This has been pointed out to you many times,
> but you always ignore it. 

	I know that. I've been saying it all along. All recursive code
is translated into 'iterative' assembly/machine language code. Using 
recursive functions adds a step.	

> Are you an idiot or a fool? 

     No. Are you?    

> If you are
> neither, why can you not refute C examples? You do know the C
> language, don't you? It appears that you do.
> 

	OK, you got it. Recursive in C code is just as bad as 
recursive Lisp code. Stick to the iterative form in both languages,
and use recursion only when it's better to do so. 


					Peaceman
From: Espen Vestre
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <w6wwlq2jh6.fsf@gromit.online.no>
Sajid Ahmed the Peaceman <········@capital.net> writes:

> With the list of varying elements, you have to traverse 999 elements 
> before you get to the thousandth. This is true, unless you have some
> array

and

> 	OK, you got it. Recursive in C code is just as bad as 
> recursive Lisp code. Stick to the iterative form in both languages,
> and use recursion only when it's better to do so. 

I think this should be enough evidence that there is no point
in carrying on this discussion before mr. Peaceman has taken
an elementary Lisp course, could we all agree to close it and
get back to discussing _interesting_ things?

Sigh.

--

  Espen Vestre
From: Mukesh Prasad
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33F1E45C.270F@polaroid.com>
Espen Vestre wrote:
> 
> Sajid Ahmed the Peaceman <········@capital.net> writes:
> 
> > With the list of varying elements, you have to traverse 999 elements
> > before you get to the thousandth. This is true, unless you have some
> > array
> 
> and
> 
> >       OK, you got it. Recursive in C code is just as bad as
> > recursive Lisp code. Stick to the iterative form in both languages,
> > and use recursion only when it's better to do so.
> 
> I think this should be enough evidence that there is no point
> in carrying on this discussion before mr. Peaceman has taken
> an elementary Lisp course, could we all agree to close it and
> get back to discussing _interesting_ things?

So what exactly is wrong with these two statements?

At best I see nothing but a difference of opinion.
Personally, in C or in Lisp, I like the idea of "use
recursion only when it's better to do so."  You may
not (use recursion when it's worse?), but neither
of these opinions is absolute.

And in C or in Lisp, unless there is an external unrelated
reference to the 1000th element, I don't know of a way to get
to the 1000th element of a list without traversing 999.  Do you?
From: Andreas Eder
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <m33eodq7ac.fsf@laphroig.mch.sni.de>
Mukesh Prasad writes:
>And in C or in Lisp, unless there is an external unrelated
>reference to the 1000th element, I don't know of a way to get
>to the 1000th element of a list without traversing 999.  Do you?

No, but no one in his right mind would use such a data structure. If you must
access elements in a list at position 1000, then your design is wrong. Switch to
a data representation with arrays or hash tables. You have it right there in Lisp.
You have a lot more infrastructure built for you in Lisp than in C. Use it!
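
For example (a trivial sketch, standard CL only), both of these reach
"element 1000" in constant time, with no pointer arithmetic and no
walk over 999 cells:

;; A vector: indexed access, just as with a C array.
(defvar *numbers* (make-array 2000 :initial-element 0))
(setf (aref *numbers* 1000) 42)
(aref *numbers* 1000)                    ; => 42

;; A hash table: constant time on average, and the key need not even
;; be a number.
(defvar *by-name* (make-hash-table :test #'equal))
(setf (gethash "element-1000" *by-name*) 42)
(gethash "element-1000" *by-name*)       ; => 42, T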

Andreas
From: Mukesh Prasad
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33F2FD7B.68A6@polaroid.com>
Andreas Eder wrote:
> 
> Mukesh Prasad writes:
> >And in C or in Lisp, unless there is an external unrelated
> >reference to the 1000th element, I don't know of a way to get
> >to the 1000th element of a list without traversing 999.  Do you?
> 
> No, but no one in his right mind would use such a data structure. If you must
> access elements in a list at position 1000, then your design is wrong.

You are right in all you say, though you miss the point here.
What you are saying does not address how the statement
"to get to the 1000th element of the list, one must
traverse 999", is wrong.  Actually, it is the *reason* why
such a design would be wrong.  The statement is entirely
accurate as it stands.
From: Martin Rodgers
Subject: Re: Lisp is *NOT* slow
Date: 
Message-ID: <MPG.e5e81111841cdc89899c1@news.demon.co.uk>
Andreas Eder wheezed these wise words:

> No, but no one in his right mind would use such a data structure. If you must
> access elements in a list at position 1000, then your design is wrong. Switch to
> a data representation with arrays or hash tables. You have it right there in Lisp.
> You have a lot more infrastructure built for you in Lisp than in C. Use it!

Trees are also useful for quickly finding an object identified by a 
key. In an array, you have a simple range of keys that can be mapped 
directly to physical addresses, but when the keys are sparse, or 
worse, not even numeric, then a tree is much better.

You could even use a hash table of trees. I tend to use hash tables of 
lists, but that's because the number of collisions tends to be small. 
In a program where I expected collisions to be more frequent, I might 
well consider either rehashing or trees. Perhaps even a simple table 
of sorted items would do, as long as the number of collisions is 
small enough to keep insertions cheap, or insertions are infrequent. A 
binary search could then be used for resolving lookup collisions.
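
In Lisp terms the hash-table-of-lists idea is just a vector of
buckets.  A minimal sketch, standard CL only, with SXHASH standing in
for the string hash shown below:

(defconstant +buckets+ 211)

(defvar *table* (make-array +buckets+ :initial-element '()))

(defun store (key value)
  ;; Each bucket is an alist; colliding keys simply share a bucket.
  (push (cons key value)
        (aref *table* (mod (sxhash key) +buckets+)))
  value)

(defun lookup (key)
  (cdr (assoc key (aref *table* (mod (sxhash key) +buckets+))
              :test #'equal)))

;; (store "peaceman" 0)   (lookup "peaceman")  =>  0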

In any case, some kind of profiling would help determine how well the 
solution is working, plus where and how it might need improving.
I regret not having profiling in VC++, but it's possible to add manual 
profiling in this case. I just count the number of string comparisons.

This works well, regardless of which language I use. It's all pretty 
simple stuff that you can find in basic CS books. For hashing strings, 
I use the hashing function from the "Dragon" book on compiler theory:

#define PRIME 211
#define EOS   '\0'   /* end-of-string marker */

unsigned long hash(const char *s)
{
    register char *p;
    register unsigned long h, g;

    h = 0;
    for (p = (char *) s; *p != EOS; p++) {
        h = (h << 4) + (*p);
        g = h & 0xf0000000;
        if (g) {
            h = h ^ (g >> 24);
            h = h ^ g;
        }
    }
    return (h % PRIME);
}

I expect this is obvious to most people reading this, so I'm only 
posting it for the benefit of people like Peaceman. ;-) I doubt that 
he's read the "Dragon" book, because if he had read it, he might be 
posting some better quality arguments, as they'd be based on some 
connection with reality.

On the other hand, there's a lot of compiler theory that may be too 
close to maths for Peaceman. The above code, for example. Never mind 
ideas like strength reduction, data flow analysis, graphs, NFA to DFA 
transformations, etc.

So let's start with something simple, like hash tables. That shouldn't 
be too challenging, should it? Considering how useful these things can 
be, the small effort required easily pays for itself.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
        "There are no limits." -- ad copy for Hellraiser
            Please note: my email address is gubbish
From: William Clodius
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33F1E47B.2F1C@lanl.gov>
Sajid Ahmed the Peaceman wrote:
> <snip>
>         Pointer arithmetic is faster, than a list that can hold a variety
> of different types as elements. Lets say you want to access the 1000th
> element of a list. With pointer arithmetic, your there in an instance.
> With the list of varying elements, you have to traverse 999 elements
> before you get to the thousandth. This is true, unless you have some
> array
> pointing to the location of each element, but then your back to pointer
> arithmetic again.

Not quite. Arrays can be mapped to pointers, but they generally have
much cleaner, more easily optimized, and safer semantics than general
pointer arithmetic. (In principle ANSI C pointers allow the detection of
unsafe operations, but compilers have been slow to implement beyond what
is required. C pointers remain difficult to optimize.)

>         Deleting elements is also a lot quicker. The programmer
> can write code to easily and quickly free up memory when it's no longer
> needed. If you leave it up to a garbage collector, it has to determine
> whether or not a particular part of memory is needed before it can free
> it... translation .. slow.

How easy it is to determine when to delete a reference depends on the
data structure, otherwise problems with memory leaks would not exist.
Garbage collection is rarely significantly slower than hand coding;
indeed, since it is written by an expert, it is often faster than
typical hand coding. A more common problem is that many implementations
put off collecting until it is necessary and then stop all other
processing until the collection is completed. On average this is the
most efficient way to implement the collection, but for real time
systems or systems interfacing with humans this need not be the most
appropriate implementation. There are other ways of implementing garbage
collection however. See Wilson's surveys on garbage collection

http://www.cs.utexas.edu/users/oops/

> <snip most of the clueless discussion of recursive code>
> 
>         OK, you got it. Recursive in C code is just as bad as
> recursive Lisp code. Stick to the iterative form in both languages,
> and use recursion only when it's better to do so.
> <snip>

No, recursion in C is worse than in Lisp because it is more error prone
and more complicated for a compiler to optimize.

You might also change your nickname. No one who is as aggressive in his
postings as you are would be considered a "Peaceman" by others.

-- 

William B. Clodius		Phone: (505)-665-9370
Los Alamos Nat. Lab., NIS-2     FAX: (505)-667-3815
PO Box 1663, MS-C323    	Group office: (505)-667-5776
Los Alamos, NM 87545            Email: ········@lanl.gov
From: Georg Bauer
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <199708151305.a11171@ms3.maus.de>
Hi!

> How easy it is to determine when to delete a reference depends on the
> data
> structure, otherwise problems with memory leaks would not exist. Garbage
> collection is rarely significantly slower than hand coding, indeed, since
> it is written by an expert, it is often faster than typical hand coding.

Actually it very often is much better to leave cleanup to the garbage
collector. One reason is, if the data structures get complex enough,
people have to program parts of a garbage collector themselves - and
often they end up with such genius' approach like simple reference
counting (don't know how often I found this stupid approach to memory
cleanup in C code, but it was far too often - reference counting is a
synonym for memory leaks in most cases). A correct and efficient garbage
collector is a bit slower than fully optimized manual deallocation,
that's right (actually the ParcPlace folks once had some numbers on
that; they came up with 3% overhead for the garbage collection). But a
good garbage collector gets the job done right; most C programmers don't.

And actually I prefer to leave stupid work to stupid machines - and
finding garbage is one such thing I happily leave to the system.

Oh, and garbage collectors have other features, too: some really improve
the locality of data, so that paging gets more efficient in
virtual-memory architectures (a copying garbage collector, for example, is
capable of this - it can put all connected data into the same or nearby
pages).

Even C gets much more fun with Boehm's GC. Ok, that means some memory
leaks, too, but only in very obscure situations. Most garbage is
collected, and that is _much_ better than most C/C++ programs manage - ever
looked at the memory allocation of something from the evil empire, after it
has run for several hours? No wonder current PCs need so much memory, when
all the applications do is fill it up with garbage :-)

bye, Georg
From: J Bell
Subject: Refcount GC comment
Date: 
Message-ID: <5t8oea$5he$1@newsie2.cent.net>
Georg Bauer wrote in article <···················@ms3.maus.de>...

>Actually it very often is much better to leave cleanup to the garbage
>collector. One reason is, if the data structures get complex enough,
>people have to program parts of a garbage collector themselves - and
>often they end up with such genius' approach like simple reference
>counting (don't know how often I found this stupid approach to memory
>cleanup in C code, but it was far too often - reference counting is a
>synonym for memory leaks in most cases

And reference counting is such a dog.  It is bad enough that reference
counting gives a performance hit proportional to use --- on a multithreaded
system you have to synchronize reference count maintenance with
EVERY THREAD ON EVERY USE!  Now instead of just bumping an indirect
integer, you call into the OS for a locked read-modify-write cycle every
time you pass the pointer around.

I suppose that locked bus cycles will eventually become commonplace in
user code just for this purpose, but it seems to me that if we wanted
specialized hardware for GC, we would have been better off with LispM's.

Most GC algorithms propose a read or write barrier, refcount GC seems to
want a touch barrier.  Plus ca change, plus ca suck.
From: Henry Baker
Subject: Re: Refcount GC comment
Date: 
Message-ID: <hbaker-1808970605020001@10.0.2.1>
In article <············@newsie2.cent.net>, "J Bell"
<········@eval-apply.com> wrote:
> Plus ca change, plus ca suck.
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

I love this quote.  Is it yours?
From: Mark DePristo
Subject: Re: Refcount GC comment
Date: 
Message-ID: <5tccj7$8tr@news.acns.nwu.edu>
J Bell (········@eval-apply.com) wrote:

: Most GC algorithms propose a read or write barrier, refcount GC seems to
: want a touch barrier.  Plus ca change, plus ca suck.

Do you know of any good introductory resources about GC?  I'm interested in
a broad survey of GC techniques and analysis.  

Mark DePristo <··········@nwu.edu>
From: Jens Kilian
Subject: Re: Refcount GC comment
Date: 
Message-ID: <5te0t5$pft@isoit109.bbn.hp.com>
Mark DePristo (·········@Godzilla.cs.nwu.edu) wrote:
> Do you know of any good introductory resources about GC?  I'm interested in
> a broad survey of GC techniques and analysis.  

This URL should get you started:

	ftp://ftp.cs.utexas.edu/pub/garbage/

HTH,
	Jens.
--
··········@acm.org                 phone:+49-7031-14-7698 (HP TELNET 778-7698)
  http://www.bawue.de/~jjk/          fax:+49-7031-14-7351
PGP:       06 04 1C 35 7B DC 1F 26 As the air to a bird, or the sea to a fish,
0x555DA8B5 BB A2 F0 66 77 75 E1 08 so is contempt to the contemptible. [Blake]
From: Chris Cox
Subject: Re: Refcount GC comment
Date: 
Message-ID: <01bcae12$e4b45880$925d80c1@sash.long.harlequin.co.uk>
Mark,
You may find our MM pages a good place to start:
http://www.harlequin.com/mm/reference/

-- 
Chris Cox   			······@harlequin.co.uk
Product Marketing   		Tel:  (01954) 785484
Harlequin Ltd                           	Fax:  (01954) 785444
Longstanton House, Woodside, Longstanton, Cambridge, CB4 5BU


Mark DePristo <·········@Godzilla.cs.nwu.edu> wrote in article
<··········@news.acns.nwu.edu>...
> J Bell (········@eval-apply.com) wrote:
> 
> : Most GC algorithms propose a read or write barrier, refcount GC seems
> : to want a touch barrier.  Plus ca change, plus ca suck.
> 
> Do you know of any good introductory resources about GC?  I'm interested
> in a broad survey of GC techniques and analysis.  
> 
> Mark DePristo <··········@nwu.edu>
From: Rob Warnock
Subject: Re: Refcount GC comment
Date: 
Message-ID: <5tb4u6$g72@fido.asd.sgi.com>
J Bell <········@eval-apply.com> wrote:
+---------------
| And reference counting is such a dog.  It is bad enough that reference
| counting gives a performance hit proportional to use --- on a multithreaded
| system you have to sychronize reference count maintenance with
| EVERY THREAD ON EVERY USE!  Now instead of just bumping an indirect
| integer, you call into the OS for a locked read-modify-write cycle every
| time you pass the pointer around.
+---------------

Not on all machines. For instance, MIPS architecture processors
[R4000 and later] have a pair of instructions -- "load linked" and
"store conditional" (LL/SC) -- that can be used to do exclusive
access from user-mode only... and not just multi-threaded but also
multi-processor! And if you use the address of the heap object to
hash into a table of such locks, you can tune the space/speed tradeoff
however you like. Other processors such as the DEC Alpha, the IBM
RS/6000, and the PowerPC have similar capabilities.
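
Here's a minimal sketch of that lock-table trick (C11 atomics standing in
for LL/SC; the table size and the hash are arbitrary illustrative choices,
not anybody's production code):

#include <stdatomic.h>
#include <stdint.h>

#define LOCK_TABLE_SIZE 1024            /* tune space vs. contention here */

static atomic_uint lock_table[LOCK_TABLE_SIZE];   /* zero == unlocked */

/* Hash the object's address to pick its lock; many objects share one lock. */
static atomic_uint *lock_for(const void *obj) {
    uintptr_t a = (uintptr_t)obj;
    return &lock_table[(a >> 4) % LOCK_TABLE_SIZE];  /* drop alignment bits */
}

static void obj_lock(const void *obj) {
    atomic_uint *l = lock_for(obj);
    while (atomic_exchange_explicit(l, 1u, memory_order_acquire))
        ;                               /* spin entirely in user mode */
}

static void obj_unlock(const void *obj) {
    atomic_store_explicit(lock_for(obj), 0u, memory_order_release);
}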

[But don't take this as advocating ref counting, understand...  ;-}  ]

+---------------
| I suppose that locked bus cycles will eventually become commonplace
| in user code just for this purpose...
+---------------

Already are, on several commercial large SMP and CC/NUMA machines.
That's how you get near-linear scaleups [on some problems] with
dozens or hundreds of CPUs.

[Note: The locks are *not* implemented with "locked bus cycles"
exactly -- that would be way too slow! -- but with mechanisms
that provide equivalent effect.]

+---------------
| but it seems to me that if we wanted specialized hardware for GC,
| we would have been better off with LispM's.
+---------------

Yet another case where "general-purpose" caught up with (and surpassed?)
special-purpose (e.g., LispMs)...?


-Rob

-----
Rob Warnock, 7L-551		····@sgi.com   http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673 [New area code!]
2011 N. Shoreline Blvd.		FAX: 650-933-4392
Mountain View, CA  94043	PP-ASEL-IA
From: Martin Rodgers
Subject: Re: Lisp is *NOT* slow
Date: 
Message-ID: <MPG.e5ccf1fb8ad76569899bc@news.demon.co.uk>
Sajid Ahmed the Peaceman wheezed these wise words:

> 	Pointer arithmetic is faster, than a list that can hold a variety
> of different types as elements. Lets say you want to access the 1000th
> element of a list. With pointer arithmetic, your there in an instance. 
> With the list of varying elements, you have to traverse 999 elements 
> before you get to the thousandth. This is true, unless you have some
> array
> pointing to the location of each element, but then your back to pointer 
> arithmetic again. 

Who said anything about lists? You can use arrays, too, y'know.

Oh, but then you wouldn't.
         
> 	OK, you got it. Recursive in C code is just as bad as 
> recursive Lisp code. Stick to the iterative form in both languages,
> and use recursion only when it's better to do so. 

<ahem>

#include <stdio.h>
#include <stdlib.h>

int *p = NULL;

/* Each call gets a fresh automatic variable i.  If the compiler turns
   the tail call into a jump, the frame is reused, p stays equal to &i
   and this runs forever; otherwise the address changes and we exit on
   the second call. */
void r(int i)
{
	if (p == NULL)
		p = &i;
	printf("%d %p\n", i, &i);

	if (p != &i)
		exit(0);

	r(i + 1);
}

You might like to put all of your anti-recursion arguments into your 
CV. There must be some top class jobs waiting for someone with your 
talents. Not necessarily _programming_ jobs, of course.

Try marketing and/or advertising. You'll be right at home.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
        "There are no limits." -- ad copy for Hellraiser
            Please note: my email address is gubbish
From: Rainer Joswig
Subject: Lisp is *Fast*
Date: 
Message-ID: <joswig-ya023180001408970012150001@news.lavielle.com>
In article <·············@capital.net>, ········@capital.net wrote:

>         Pointer arithmetic is faster, than a list that can hold a variety
> of different types as elements. Lets say you want to access the 1000th
> element of a list.

You simply have detected that different data structures have
different properties.

Common Lisp has, for example, lists, arrays and hashtables (and some more).
Each data structure has different access, delete or insert speed.
But this is independent of any programming language and
has nothing to do with Lisp. If for a particular application
lists are too slow - use arrays. If arrays are too slow, for
example for indexed element access, use hashtables.

O.k., let's do a reality check:

; Here comes a bit Lisp code. Are you still with me?

; Let's generate a pair of a random string and a number.
; The number is the sum of the character ascii values in the string.

(defun make-pair ()
  (loop with length = (random 100)
        with string = (make-string length)
        for i below length
        for char = (code-char (+ 32 (random 90)))
        summing (char-code char) into sum
        do (setf (aref string i) char)
        finally (return (cons sum string))))

; let's define a size parameter of 10000.

(defparameter *size* 10000)

; Now we make a list of 10000 such pairs.

(defparameter *the-list*
  (loop repeat *size*
        collect (make-pair)))

; Now we put the same elements into an array.

(defparameter *the-vector*
  (loop with vector = (make-sequence 'vector *size*)
        for item in *the-list*
        for i below (length vector)
        do (setf (aref vector i) item)
        finally (return vector)))


; Now it's time for a hashtable of the same data.
; keys are the numbers and values are the strings.

(defparameter *the-hashtable*
  (loop with table = (make-hash-table )
        repeat *size*
        for (key . value) in *the-list*
        do (setf (gethash key table) value)
        finally (return table)))

; Now we want to access a string indexed by a number from
; the various datastructures.

(defun test (item)
  (print (time (find item *the-list* :key #'first)))
  (print (time (find item *the-vector* :key #'first)))
  (print (time (gethash item *the-hashtable*))))

; Let's see at what position the number 4990 (a guess) is in the list?

? (position 4990 *the-list* :key #'first)
9995

; Now what results do you expect?
? (test 4990)

(FIND ITEM *THE-LIST* :KEY #'FIRST) took 22 milliseconds (0.022 seconds) to run.
(FIND ITEM *THE-VECTOR* :KEY #'FIRST) took 29 milliseconds (0.029 seconds)
to run.
(GETHASH ITEM *THE-HASHTABLE*) took 0 milliseconds (0.000 seconds) to run.

;Finding the 9995th element in this 10000 element unsorted list takes
;22 milliseconds on my Mac.
;Finding the 9995th element in this 10000 element unsorted array takes
;29 milliseconds on my Mac.
;Finding the value for the key 4990 in a hashtable is (way) below 1 millisecond
;on my Mac.

; Direct access of element 9995 in my list:

? (time (nth 9995 *the-list*))
(NTH 9995 *THE-LIST*) took 15 milliseconds (0.015 seconds) to run.


; Direct access of element 9995 in my vector:

? (time (aref *the-vector* 9995))
(AREF *THE-VECTOR* 9995) took 1 milliseconds (0.001 seconds) to run.


Conclusion: Everything looks like we expect it. Lisp is fast.


> With pointer arithmetic, your there in an instance. 
> With the list of varying elements, you have to traverse 999 elements 
> before you get to the thousandth. This is true, unless you have some
> array

Yes, let's simply take an array.

> pointing to the location of each element, but then your back to pointer 
> arithmetic again. 

With the difference that in Lisp array operations usually are
safe, compared to pointer arithmetic in C. First, in Lisp
you know that you are accessing an array (otherwise you'll get an error;
this is the idea of typed data), and second, access outside the
array bounds isn't possible either (you also get an error here).


;;; Let's get an array of 20 elements of random numbers

? (defparameter *numbers*
  (let ((array (make-array 20)))
    (loop for i below (length array)
          do (setf (aref array i)
                   (random 200))
          finally (return array))))
*NUMBERS*

;;; here it is

? *numbers*
#(93 191 106 33 28 114 160 115 136 2 5 188 172 92 104 93 27 187 95 194)

;;; access out of bounds

? (aref *numbers* (random 50))


> Error: Array index 30 out of bounds for #<SIMPLE-VECTOR 20> .
> While executing: "Unknown"
> Type Command-. to abort.
See the Restarts… menu item for further choices.
1 > 
Aborted


;;; Try to access something not being an array as an array

? (aref nil 30)

> Error: value NIL is not of the expected type ARRAY.
> While executing: CCL::%AREF1
> Type Command-. to abort.
See the Restarts… menu item for further choices.
1 > 



>         
>         Deleting elements is also a lot quicker.

How do you delete? Faster than what?

> The programmer 
> can write code to easily and quickly free up memory when it's no longer 
> needed. If you leave it up to a garbage collector, it has to determine 
> whether or not a particular part of memory is needed before it can free
> it... translation .. slow. 

The programmer can write code to easily and quickly free up memory.

Translation: error prone. Duplicate code all over the system. Slow.

>         I don't think so. I've had enough of Lisp already.  

Still you don't understand basic issues. How come?

>         Nah... I'm attacking Lisp because theres too much recursion in it.

I like Lisp software with recursion. It often makes code clearer
and shorter.
 
> I have yet to see a lisp program without any recursion. I know that
> you guys could probably easily write some. 

Sure. Common Lisp has a lot of constructs for iterative programming.
Probably more than most other programming languages.
You may want to read the documentation for the LOOP
construct of ANSI Common Lisp:
http://www.harlequin.com/education/books/HyperSpec/Body/mac_loop.html
It provides more features for iterative programming than
you might be able to deal with. ;-)


>         OK, you got it. Recursive in C code is just as bad as 
> recursive Lisp code. Stick to the iterative form in both languages,
> and use recursion only when it's better to do so. 

And that's quite often, since a lot of data structures
(trees, etc.) have recursive definitions. I like it.
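
Even in C the natural way to walk a recursively defined structure is
recursive -- a tiny sketch (the struct here is made up just for the
example):

typedef struct node {
    int          value;
    struct node *left, *right;
} node;

/* The function's shape mirrors the type's definition: a tree is either
   empty or a value plus two subtrees. */
int tree_sum(const node *t) {
    if (t == NULL)
        return 0;
    return t->value + tree_sum(t->left) + tree_sum(t->right);
}

Flattening that into an explicit loop means maintaining your own stack of
pending subtrees, which buys you nothing in clarity.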

-- 
http://www.lavielle.com/~joswig/
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *Fast*
Date: 
Message-ID: <33F35D90.1406@capital.net>
Rainer Joswig wrote:
> Common Lisp has for example lists, arrays and hashtables (and some more).
> Each datastructure has different access, delete or insert speed.
> But this is independent of any programming language and
> has nothing to do with Lisp. If for a particular application
> lists are to slow - use arrays. If arrays are to slow, for
> example for indexed element access, use hashtables.
> 

	Arrays would be faster than hashtables, if it is possible 
to use them. 


> O.k., let's do a reality check:
> 
> ; Here comes a bit Lisp code. Are you still with me?
> 
> ; Let's generate a pair of a random string and a number.
> ; The number is the sum of the character ascii values in the string.
> 
> (defun make-pair ()
>   (loop with length = (random 100)
>         with string = (make-string length)
>         for i below length
>         for char = (code-char (+ 32 (random 90)))
>         summing (char-code char) into sum
>         do (setf (aref string i) char)
>         finally (return (cons sum string))))
> 
> ; let's define a size parameter of 10000.
> 
> (defparameter *size* 10000)
> 
> ; Now we make a list of 10000 such pairs.
> 
> (defparameter *the-list*
>   (loop repeat *size*
>         collect (make-pair)))
> 
> ; Now we put the same elements into an array.
> 
> (defparameter *the-vector*
>   (loop with vector = (make-sequence 'vector *size*)
>         for item in *the-list*
>         for i below (length vector)
>         do (setf (aref vector i) item)
>         finally (return vector)))
> 
> ; Now it's time for a hashtable of the same data.
> ; keys are the numbers and values are the strings.
> 
> (defparameter *the-hashtable*
>   (loop with table = (make-hash-table )
>         repeat *size*
>         for (key . value) in *the-list*
>         do (setf (gethash key table) value)
>         finally (return table)))
> 
> ; Now we want to access a string indexed by a number from
> ; the various datastructures.
> 
> (defun test (item)
>   (print (time (find item *the-list* :key #'first)))
>   (print (time (find item *the-vector* :key #'first)))
>   (print (time (gethash item *the-hashtable*))))
> 
> ; Let's see at what position the number 4990 (a guess) is in the list?
> 
> ? (position 4990 *the-list* :key #'first)
> 9995
> 
> ; Now what results do you expect?
> ? (test 4990)
> 
> (FIND ITEM *THE-LIST* :KEY #'FIRST) took 22 milliseconds (0.022 seconds) to run.
> (FIND ITEM *THE-VECTOR* :KEY #'FIRST) took 29 milliseconds (0.029 seconds)
> to run.
> (GETHASH ITEM *THE-HASHTABLE*) took 0 milliseconds (0.000 seconds) to run.
> 
> ;Finding the 9995th element in this 10000 element unsorted list takes
> ;22 milliseconds on my Mac.
> ;Finding the 9995th element in this 10000 element unsorted array takes
> ;29 milliseconds on my Mac.
> ;Finding the value for the key 4990 in a hashtable is (way) below 1 millisecond
> ;on my Mac.
> 
> ; Direct access of element 9995 in my list:
> 
> ? (time (nth 9995 *the-list*))
> (NTH 9995 *THE-LIST*) took 15 milliseconds (0.015 seconds) to run.
> 
> ; Direct access of element 9995 in my vector:
> 
> ? (time (aref *the-vector* 9995))
> (AREF *THE-VECTOR* 9995) took 1 milliseconds (0.001 seconds) to run.
> 
> Conclusion: Everything looks like we expect it. Lisp is fast.

	That's the kind of lisp code I like to see. 
	It's good that you're using iterative, nonrecursive code. 
If you wrote the above in recursive-only code, it would take at 
least twice as long to run, and about ten times as long to write. 


> 
> > With pointer arithmetic, your there in an instance.
> > With the list of varying elements, you have to traverse 999 elements
> > before you get to the thousandth. This is true, unless you have some
> > array
> 
> Yes, let's simply take an array.
> 
> > pointing to the location of each element, but then your back to pointer
> > arithmetic again.
> 
> With the difference that in Lisp array operations usually are
> safe compared to pointer arithmetic in C. First in Lisp
> you know that you access an array (otherwise you'll get an error,
> this is the idea of typed data) and second access outside the
> array range isn't possible, too (you also get an error here).
> 

	That also means that the compiler needs to add code to check
the bounds of the array, as well as the string that states the error
message, to say the least.  




> >         Deleting elements is also a lot quicker.
> 
> How do you delete? Faster than what?
> 
> > The programmer
> > can write code to easily and quickly free up memory when it's no longer
> > needed. If you leave it up to a garbage collector, it has to determine
> > whether or not a particular part of memory is needed before it can free
> > it... translation .. slow.
> 
> The programmer can write code to easily and quickly free up memory.
> 
> Translation: error prone. Duplicate code all over the system. Slow.
> 

	Perhaps, but it depends on the programmer.



> >         OK, you got it. Recursive in C code is just as bad as
> > recursive Lisp code. Stick to the iterative form in both languages,
> > and use recursion only when it's better to do so.
> 
> And that's quite often, since a lot data structures
> (trees, etc.) have recursive definitions. I like it.
> 

	Well, in real world programming (at least for C++),
you never write any code to access data structures. C++ has the 
famous container classes that let you implement linked lists, 
binary trees, arrays, and almost any other structure that you can 
think of. This enables a programmer to reuse code, and not have 
to rewrite it every time the programmer wants to use a particular 
data structure.  


						Peaceman
From: Marco Antoniotti
Subject: Re: Lisp is *Fast*
Date: 
Message-ID: <scf67t8jrgo.fsf@infiniti.PATH.Berkeley.EDU>
Why I do this, I do not know.... :{

In article <·············@capital.net> Sajid Ahmed the Peaceman <········@capital.net> writes:

   From: Sajid Ahmed the Peaceman <········@capital.net>
   Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
   Date: Thu, 14 Aug 1997 15:33:36 -0400
   Organization: Logical Net
   Reply-To: ········@capital.net
   Lines: 166


   Rainer Joswig wrote:
   > Common Lisp has for example lists, arrays and hashtables (and some more).
   > Each datastructure has different access, delete or insert speed.
   > But this is independent of any programming language and
   > has nothing to do with Lisp. If for a particular application
   > lists are to slow - use arrays. If arrays are to slow, for
   > example for indexed element access, use hashtables.
   > 

	   Arrays would be faster than hashtables, if it is possible 
   to use them.

Yeah.  Tell me how to index a set of elements with no prior knowledge
of the size using a string as key.

Your ignorance of Basic Data Structures and their relative strength is
appalling.
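
Just to make the point concrete: the moment the key is a string and the
number of elements is unknown in advance, what you end up hand-rolling in
C is roughly this (a minimal sketch, with an arbitrary bucket count and
hash function -- i.e. a poor man's GETHASH):

#include <stdlib.h>
#include <string.h>

#define NBUCKETS 256                    /* arbitrary; collisions just chain */

typedef struct entry {
    char         *key;
    void         *value;
    struct entry *next;
} entry;

static entry *buckets[NBUCKETS];

static unsigned hash_key(const char *s) {   /* simple multiplicative string hash */
    unsigned h = 0;
    while (*s)
        h = h * 31 + (unsigned char)*s++;
    return h % NBUCKETS;
}

void ht_put(const char *key, void *value) { /* sketch: no duplicate-key handling */
    unsigned h = hash_key(key);
    entry *e = malloc(sizeof *e);
    e->key = malloc(strlen(key) + 1);
    strcpy(e->key, key);
    e->value = value;
    e->next = buckets[h];
    buckets[h] = e;
}

void *ht_get(const char *key) {
    for (entry *e = buckets[hash_key(key)]; e != NULL; e = e->next)
        if (strcmp(e->key, key) == 0)
            return e->value;
    return NULL;                        /* not found */
}

A plain array only works when the keys are small, dense integers known up
front.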

   > O.k., let's do a reality check:
   > 
   > ; Here comes a bit Lisp code. Are you still with me?
   > 
	...

   > Conclusion: Everything looks like we expect it. Lisp is fast.

	   That's the kind of lisp code I like to see. 
	   It's good that your using iterative nonrecursive code. 
   If you wrote the above in recursive only code, it would take at 
   least twice as long to run, and about ten times as long to write. 

You just don't get it.  In this case writing a recursive piece of code
would have required a different structure.  The point that Rainer was
making was that: (a) you have the most powerful iteration construct
available in Common Lisp and (b) different data structures behave
differently.

	...

   > 
   > The programmer can write code to easily and quickly free up memory.
   > 
   > Translation: error prone. Duplicate code all over the system. Slow.
   > 

	   Perhaps, but it depends on the programmer.

I am a much better programmer than you are (seeing your code has
convinced me of that) and yet I make *a lot* of stupid mistakes (mostly
memory leaks) when I program in C/C++.


   > >         OK, you got it. Recursive in C code is just as bad as
   > > recursive Lisp code. Stick to the iterative form in both languages,
   > > and use recursion only when it's better to do so.
   > 
   > And that's quite often, since a lot data structures
   > (trees, etc.) have recursive definitions. I like it.
   > 

	   Well, the in real world programming (at least for C++),
   you never write any code access datastructures. C++ has the 
   famous container classes that lets you implement linked lists, 
   binary trees, arrays, and almost any other structure that you can 
   think of. This enables a programmer to reuse code, and not have 
   to rewrite it every time the programmer wants to use a particular 
   data structure.  

The "famous container classes" of C++ are dispersed in a plethora of
incompatible class libraries.  If you use the MFC in your program and
then need to link in a piece of code developed with some other library
(I am not mentioning the STL, since I do not know exactly which compilers it
compiles on), then your run-time image bloats.

This is one of the strengths of Common Lisp (the API and Data
Structures that you really need are mostly there and standardized)
that the Java development team learned very well.

Your statement above shows again a partial knowledge of "the real
world" and of its problems. :)

Cheers
-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Marco Antoniotti
Subject: Re: Lisp is *Fast*
Date: 
Message-ID: <scf4t8sjn05.fsf@infiniti.PATH.Berkeley.EDU>
In article <···············@infiniti.PATH.Berkeley.EDU> ·······@infiniti.PATH.Berkeley.EDU (Marco Antoniotti) writes:

   From: ·······@infiniti.PATH.Berkeley.EDU (Marco Antoniotti)
   Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
   Date: 14 Aug 1997 16:05:27 -0700
   Organization: California PATH Program - UC Berkeley
   Path: agate!usenet
   Lines: 101
   Sender: ·······@infiniti.PATH.Berkeley.EDU

	...

   The "famous container classes" of C++ are dispersed in a plethora of
   incompatible class libraries.  If you use the MFC in your program and
   then need to link in a piece of code developed with some other library
   (I am not mentioning the STL, since I do not know exactly where it
   compiles on) then your run-time image bloats.

Ok.  Apologies again.  My terminology is incorrect.  `run-time image
bloat' is not quite appropriate. :}
-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Eric Sauthier
Subject: Re: Lisp is *Fast*
Date: 
Message-ID: <33F42570.1839@siemens.ch>
Marco Antoniotti wrote:
> 
> Your ignorance of Basic Data Structures and their relative strength is
> appalling.

100% agreed! Reading Mr Peaceman's postings makes me think that he is one
of those guys who prefers tweaking his code rather than rethinking
the algorithms used when facing non-trivial problems. No matter how well
optimized a piece of code that implements an exponential algorithm is,
using another algorithm that performs the task in polynomial time will
be more efficient on all but small inputs - all right, I know that linear
programming is usually solved with the (worst-case exponential) simplex
algorithm rather than a polynomial-time method, because it's faster for
typical problem instances, but this is an exception. 

Don't get me wrong though, I think that code optimization is an important
step but that it is only a "small step" in the process of writing good
programs.
  
> I am a much better programmer than you are (seeing your code has
> conviced me of that) and yet I make *a lot* of stupid mistakes (mostly
> memory leaks) when I program in C/C++.

And you're by far not the only one! The mere existence of products like
Purify (a fine product, BTW) tends to prove this point.

>    Rainer Joswig wrote
>    >  Sajid Ahmed the Peaceman wrote
>    > >         OK, you got it. Recursive in C code is just as bad as
>    > > recursive Lisp code. Stick to the iterative form in both languages,
>    > > and use recursion only when it's better to do so.
>    >
>    > And that's quite often, since a lot data structures
>    > (trees, etc.) have recursive definitions. I like it.
>    >

IMHO, the bottom line of all this is that the choice of recursion or
iteration depends on the nature of the problem to solve - gee, sounds as
if I just wrote that water is wet - and that average Lisp programmers seem
to tackle problems that are more recursive in nature than average C/C++
programmers do - or at least used to tackle; many former Lisp programmers
have been forced into using C/C++, and I'm one of them. 

One thing I'm sure of is that stating that lisp is slow because it calls
for writing recursive code is total and utter nonsense. Had Mr Peaceman
mentioned the lack of performance of lisp code due to factors such as,
for instance, the boxing/unboxing of numbers (in my opinion these
problems appear only in poorly written lisp code), I would have
considered him knowledgeable enough to discuss the matter in this
newsgroup.

His postings, however, show that he is only ranting and that he doesn't
deserve more attention in this newsgroup.

Eric Sauthier
From: David Thornley
Subject: Re: Lisp is *Fast*
Date: 
Message-ID: <5t1okq$68p$1@darla.visi.com>
In article <·············@capital.net>,
Sajid Ahmed the Peaceman  <········@capital.net> wrote:
>Rainer Joswig wrote:
>> Common Lisp has for example lists, arrays and hashtables (and some more).
>> Each datastructure has different access, delete or insert speed.
>> But this is independent of any programming language and
>> has nothing to do with Lisp. If for a particular application
>> lists are to slow - use arrays. If arrays are to slow, for
>> example for indexed element access, use hashtables.
>> 
>
>	Arrays would be faster than hashtables, if it is possible 
>to use them. 
>
If you're accessing the array only by numeric index, yes.  For other
purposes, such as lookups, hashtables are generally faster.  Looking
at your sentences carefully, I think you're saying the same thing in
sufficiently different words that it looks like you're disagreeing.
>
>> O.k., let's do a reality check:
>> 
>> ; Here comes a bit Lisp code. Are you still with me?
>> 
>> Conclusion: Everything looks like we expect it. Lisp is fast.
>
>	That's the kind of lisp code I like to see. 
>	It's good that your using iterative nonrecursive code. 
>If you wrote the above in recursive only code, it would take at 
>least twice as long to run, and about ten times as long to write. 
>
Lisp has the best set of iteration constructs I've ever seen.  It
also handles recursion much better.  I don't understand your point
about recursive-only code, though:  if I were writing this sort of
thing recursively, I could easily use tail recursion, which is
equivalent to iteration.  However, this isn't a naturally recursive
algorithm (or set of algorithms).
>
>> > pointing to the location of each element, but then your back to pointer
>> > arithmetic again.
>> 
>> With the difference that in Lisp array operations usually are
>> safe compared to pointer arithmetic in C. First in Lisp
>> you know that you access an array (otherwise you'll get an error,
>> this is the idea of typed data) and second access outside the
>> array range isn't possible, too (you also get an error here).
>
>	That also means that the compiler needs to add code to check
>the bounds of the array, as well as the string that states the error
>message, to say the least.  
>
If you like, you can tell the compiler to optimize it, and add
declarations until the compiler omits the run-time checks.  (I am
told you can do the same in Ada.)  The normal behavior is to check
array limits.  C and (to a lesser extent) C++ are optimized to give
you the results quickly, be they right or wrong.  Lisp and Ada are
optimized to give you the right results, be they slow or fast.

Unless you know that the arithmetic or array access or whatever is
in a highly-used spot in your program, which normally means running
it and profiling it, correctness is usually the better choice.
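
For a sense of scale, here is a rough C sketch of what a checked access
amounts to -- one compare and one rarely-taken branch per access (the
struct and names are purely illustrative, not any real Lisp runtime's
layout):

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    size_t length;                      /* the vector knows its own length */
    int    data[];                      /* C99 flexible array member */
} checked_vector;

int cv_ref(const checked_vector *v, size_t i) {
    if (i >= v->length) {               /* the entire run-time cost of safety */
        fprintf(stderr, "index %zu out of bounds for vector of length %zu\n",
                i, v->length);
        abort();                        /* a Lisp system signals a catchable error */
    }
    return v->data[i];
}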

You're overlooking the "strong typing" argument also.  In Lisp,
an array is an array, and the Lisp system should reject it if it
isn't.  In C or C++, an array is a pointer, and any arbitrary
pointer can be used as an array.  The freedom this gives you is
not useful most of the time, and it permanently restricts the
C or C++ compiler's ability to optimize code.  Common Lisp can
crunch numbers as fast as Fortran, depending on implementations,
while C and C++ can't.  The reason is that explicit pointer
arithmetic is legal in C and C++, and not in Common Lisp or Fortran,
and this inhibits the compiler from certain optimizations.
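
Here is a small C sketch of that aliasing problem; C99's `restrict' below
is just a stand-in for the no-alias guarantee that Fortran and Lisp arrays
give the compiler for free:

/* Because d and s may point into the same memory, the compiler must
   re-load *s on every iteration of this loop. */
void scale(float *d, const float *s, int n) {
    for (int i = 0; i < n; i++)
        d[i] = *s * d[i];
}

/* With the no-alias promise, *s can be hoisted into a register and the
   loop optimized much more aggressively. */
void scale_noalias(float * restrict d, const float * restrict s, int n) {
    for (int i = 0; i < n; i++)
        d[i] = *s * d[i];
}

Without the promise, the store through d[i] might change *s, so the
compiler has to assume the worst.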

>
>> >         Deleting elements is also a lot quicker.
>> 
>> How do you delete? Faster than what?
>> 
>> > The programmer
>> > can write code to easily and quickly free up memory when it's no longer
>> > needed. If you leave it up to a garbage collector, it has to determine
>> > whether or not a particular part of memory is needed before it can free
>> > it... translation .. slow.
>>
Modern garbage collectors are pretty much as fast as manual memory
management.  They tend to clump up the management, though, so you can
get annoying delays.  You can cut down on the delays if you're willing
to take a slight overall performance hit.

Getting memory management right is very difficult, particularly in C,
and it's the source of lots of bugs.  I've seen references to memory
management taking about 30% of the programming effort in large C
projects, and I'm willing to believe it.  Many large Unix programs
leak memory like crazy and trust the operating system to clean up
after the program is through.
 
>> The programmer can write code to easily and quickly free up memory.
>> 
>> Translation: error prone. Duplicate code all over the system. Slow.
>
>	Perhaps, but it depends on the programmer.
>
Really?  One of the best programmers I know, who worked in a very
well-run shop, was involved in shipping a program that had a bug in
it, simply because he'd freed a data object and then used it.  If
said friend can't handle memory management precisely, most people
will write error-prone code, and if they try to compensate for it
it'll wind up slow.

Look at all the discussion of reference-counting pointers in C++
books.  There are a whole lot of very good programmers who are
using a very primitive form of garbage collection because they
can, and can't get anything better.
>
>
>> >         OK, you got it. Recursive in C code is just as bad as
>> > recursive Lisp code. Stick to the iterative form in both languages,
>> > and use recursion only when it's better to do so.
>> 
>> And that's quite often, since a lot data structures
>> (trees, etc.) have recursive definitions. I like it.
>> 
>
>	Well, the in real world programming (at least for C++),
>you never write any code access datastructures. C++ has the 
>famous container classes that lets you implement linked lists, 
>binary trees, arrays, and almost any other structure that you can 
>think of. This enables a programmer to reuse code, and not have 
>to rewrite it every time the programmer wants to use a particular 
>data structure.  
>
The Draft C++ Standard has a lot of containers, but not as many as
Lisp.  They were going to put in hash tables, but ran out of time;
this, I think, would put C++ real close to Common Lisp in useful
data structures provided.  Lisp lists work well for linked lists
and trees of various sorts, Lisp has arrays, Lisp has hash tables,
Lisp has structs.

Realistically, every commercial Common Lisp system has all these
neat data structures and algorithms built in, while every commercial
C++ system is converging on the Draft Standard on its own course.
In another three years or so, you'll be able to use the C++ Standard
Library reliably.

For several years it has seemed to me that C++ is trying to be Lisp
as a C extension.  All things being equal, I'd rather go with the
real thing.

David Thornley
From: Christopher B. Browne
Subject: Re: Lisp is *Fast*
Date: 
Message-ID: <slrn5v8mjo.6ql.cbbrowne@knuth.brownes.org>
On Thu, 14 Aug 1997 15:33:36 -0400, Sajid Ahmed the Peaceman
<········@capital.net> posted:
>Rainer Joswig wrote:
>> Common Lisp has for example lists, arrays and hashtables (and some more).
>> Each datastructure has different access, delete or insert speed.
>> But this is independent of any programming language and
>> has nothing to do with Lisp. If for a particular application
>> lists are to slow - use arrays. If arrays are to slow, for
>> example for indexed element access, use hashtables.

>	Arrays would be faster than hashtables, if it is possible 
>to use them. 

Can you tell us in what situations arrays are faster than hash tables?

Or vice versa?

I trust that you realize that linked lists provide unordered access in
O(N) time, which is asymptotically optimal for this sort of operation,
supposing you need to visit every element...

With the apparent quality of your computer science education, answering
any of this may be problematic...
-- 
Christopher B. Browne, ········@hex.net, ············@sdt.com
PGP Fingerprint: 10 5A 20 3C 39 5A D3 12  D9 54 26 22 FF 1F E9 16
URL: <http://www.hex.net/~cbbrowne/>
Q: What does the CE in Windows CE stand for?  A: Caveat Emptor...
From: Richard A. O'Keefe
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5t9744$67d$1@goanna.cs.rmit.edu.au>
Sajid Ahmed the Peaceman <········@capital.net> writes:
>	Lisp uses No references? If that were the case you just stated
>something that helps my argument. I will digress, however, because 
>I think that Lisp probably does use references (aka pointers) in

The context was C++.  In C++, pointers and references are two different
things, with different syntax and semantics.  (Addresses are yet a third
thing, with no syntax.)

-- 
Four policemen playing jazz on an up escalator in the railway station.
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
From: Christopher Oliver
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5sio53$5qn@guy-smiley.traverse.com>
Sajid Ahmed the Peaceman (········@capital.net) wrote:
: 	Let's take a look at the following example of tail recursion, 

... which is NOT an example of tail recursion.  Not only are you doing
further computation with the returned value of the procedure, but you're
also passing in an automatic variable.

Following are two somewhat long winded examples with code, but please
bear with me.

Since I'm more comfortable with C:

: int factorial(int *number) {
:    int x;
:    if (*number == 1) return 1; 
:    x = *number-1;
:    return *number * factorial(&x);
: } 

This is not tail recursive.  Instead you should have written:

static int fact_aux(int n, int p) {
  return n <= 1 ? p : fact_aux(n-1, p * n);
}

int fact(int n) {
  return fact_aux(n, 1);
}

Results for 'gcc -O3 -S -fomit-frame-pointer':

        .file   "fact.c"
        .version        "01.01"
gcc2_compiled.:
.text
        .align 4
.globl fact
        .type    fact,@function
fact:
        movl 4(%esp),%edx         ; We fetch our initial argument here.
        movl $1,%eax              ; We initialize our running product here.
.L11:
        cmpl $1,%edx              ; Are we done?
        jle .L10                  ; If so, return our value.
        imull %edx,%eax           ; Otherwise, compute next running product.
        decl %edx                 ; Decrement iteration counter.
        jmp .L11                  ; Loop (NOT CALL SUBROUTINE!)
        .align 4
.L10:
        ret
.Lfe1:
        .size    fact,.Lfe1-fact
        .ident  "GCC: (GNU) 2.7.2.1"

I see no explicit recursion here.


With that out of the way, let's examine the compilation with Attardi's
EcoLisp to C of the following tail recursive factorial in Lisp:

(defun fact (n)
  (labels ((fact-aux (n p)
		     (if (<= n 1)
			 p
		       (fact-aux (- n 1) (* n p)))))
    (fact-aux n 1)))

The meat of the compilation:

/*      function definition for FACT                                  */
static L1(int narg, object V1)
{
  VT3 VLEX3 CLSR3
    TTL:
    RETURN(LC2(2, (V1), MAKE_FIXNUM(1)) /*  FACT-AUX        */ );
}
/*      local function FACT-AUX                                       */
static LC2(int narg, object V1, object V2)
{
  VT4 VLEX4 CLSR4
    TTL:
  if (!(number_compare((V1), MAKE_FIXNUM(1)) <= 0))
    {
      goto L2;
    }
  VALUES(0) = (V2);
  RETURN(1);
L2:
  {
    object V3;
    V3 = number_minus((V1), MAKE_FIXNUM(1));
    V2 = number_times((V1), (V2));
    V1 = (V3);
  }
  goto TTL;
}

While there is consing going on here, I see loops built with goto, but
no explicit recursion despite its presence in the original Lisp.

: 	There you have it, tail recurion that needs a stack.

You're not looking too well, son.  Would you like to play some more?

--
Christopher Oliver                     Traverse Communications
Systems Coordinator                    223 Grandview Pkwy, Suite 108
······@traverse.com                    Traverse City, Michigan, 49684
   Some mornings it just doesn't seem worth it to gnaw through the
   leather straps.      -- Emo Phillips
From: Alexey Goldin
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <m1afirdl61.fsf@spot.uchicago.edu>
Sajid Ahmed the Peaceman <········@capital.net> writes:
> 
> 
> 	Let's take a look at the following example of tail recursion, 
> (In C++, sorry don't know how to do references in Lisp) 
> 
> 
> int factorial(int &number) {
>    int x;
>    if (number == 1) return 1; 
>    x = number-1;
>    return number * factorial(x);
> } 
> 
> 
>       If you don't know C++ : 
> 
> int factorial(int *number) {
>    int x;
>    if (*number == 1) return 1; 
>    x = *number-1;
>    return *number * factorial(&x);
> } 
> 
> 



But this is not a tail-recursive subroutine. Here is a tail-recursive one
(in C):


int factorial_tail(int number, int max, int so_far) {
   if (number > max) return so_far; 
   return factorial_tail(number+1, max, so_far*number);
} 

and you have to use this one to call it:

int factorial(int number) {
  return factorial_tail(1,number, 1);
}


And here is Sparc assembly (compiled with gcc)

	.file	"fact.c"
	.version	"01.01"
gcc2_compiled.:
	.global .umul
.section	".text"
	.align 4
	.global factorial_tail
	.type	 factorial_tail,#function
	.proc	04
factorial_tail:
	!#PROLOGUE# 0
	save %sp,-104,%sp
	!#PROLOGUE# 1
	mov %i0,%o1
	mov %i2,%o0
.LL6:
	cmp %o1,%i1
	bg .LL5
	nop
	call .umul,0
	add %o1,1,%l0
	b .LL6                      <----------- watch this 
	mov %l0,%o1
.LL5:
	ret
	restore %g0,%o0,%o0
.LLfe1:
	.size	 factorial_tail,.LLfe1-factorial_tail
	.align 4
	.global factorial
	.type	 factorial,#function
	.proc	04
factorial:
	!#PROLOGUE# 0
	save %sp,-104,%sp
	!#PROLOGUE# 1
	mov 1,%o1
	mov 1,%o0
.LL15:
	cmp %o1,%i0
	bg .LL14
	nop
	call .umul,0
	add %o1,1,%l0
	b .LL15
	mov %l0,%o1
.LL14:
	ret
	restore %g0,%o0,%o0
.LLfe2:
	.size	 factorial,.LLfe2-factorial
	.ident	"GCC: (GNU) 2.7.2"




As you see, gcc agrees with me that tail recursion here does not require a
stack.
From: Emergent Technologies Inc.
Subject: 6.001
Date: 
Message-ID: <5sjd7q$st5$1@newsie2.cent.net>
Sajid Ahmed the Peaceman wrote in article <·············@capital.net>...
>int factorial(int *number) {
>   int x;
>   if (*number == 1) return 1;
>   x = *number-1;
>   return *number * factorial(&x);
>}
>
>
> There you have it, tail recurion that needs a stack.
>

This doesn't look like tail recursion to me.  The result of the recursive
call to factorial is used in a further computation.  Try it this way:

.int tfact1 (int number, int result) {
.    return (number == 0)
.        ? result
.        : tfact1 (number - 1, number * result);
.}

.int tfact (int number) {
.    return tfact1 (number, 1);
.}

Note that what makes this tail recursive is that the return value is
*directly* computed by another call to tfact1.  This can therefore be
turned into a loop, and I'd bet a *lot* of C++ compilers can do it
(even though I don't think they can do it for the general case).
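
(For reference, here is roughly the loop such a compiler is free to produce
from tfact1 -- a sketch, not any particular compiler's output:)

int tfact_loop(int number) {
    int result = 1;
    while (number != 0) {           /* same test as the recursive base case */
        result = number * result;   /* same arguments as the recursive call */
        number = number - 1;
    }
    return result;
}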

All this is covered in chapter 1 of Structure and Interpretation of
Computer Programs.  It is also pointed out that the factorial program
as originally written runs in O(n) time and O(n) space, while the
tail recursive one runs in O(n) time and O(1) space.  It is this that
makes the so-called recursive version such a dog, not the recursion
itself.
From: Christopher Oliver
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5sj8ag$dgh@guy-smiley.traverse.com>
Sajid Ahmed the Peaceman (········@capital.net) wrote:
: 	Let's take a look at the following example of tail recursion, 

... which is NOT an example of tail recursion.  Not only are you doing
further computation with the returned value of the procedure, but you're
also passing in an automatic variable.

Following are two somewhat long winded examples with code, but please
bear with me.

Since I'm more comfortable with C:

: int factorial(int *number) {
:    int x;
:    if (*number == 1) return 1; 
:    x = *number-1;
:    return *number * factorial(&x);
: } 

This is not tail recursive.  Instead you should have written:

static int fact_aux(int n, int p) {
  return n <= 1 ? p : fact_aux(n-1, p * n);
}

int fact(int n) {
  return fact_aux(n, 1);
}

Results for 'gcc -O3 -S -fomit-frame-pointer':

        .file   "fact.c"
        .version        "01.01"
gcc2_compiled.:
.text
        .align 4
.globl fact
        .type    fact,@function
fact:
        movl 4(%esp),%edx         ; We fetch our initial argument here.
        movl $1,%eax              ; We initialize our running product here.
.L11:
        cmpl $1,%edx              ; Are we done?
        jle .L10                  ; If so, return our value.
        imull %edx,%eax           ; Otherwise, compute next running product.
        decl %edx                 ; Decrement iteration counter.
        jmp .L11                  ; Loop (NOT CALL SUBROUTINE!)
        .align 4
.L10:
        ret
.Lfe1:
        .size    fact,.Lfe1-fact
        .ident  "GCC: (GNU) 2.7.2.1"

I see no explicit recursion here nor any stack use within the iteration.
Where's the beef?


With that out of the way, let's examine the compilation with Attardi's
EcoLisp to C of the following tail recursive factorial in Lisp:

(defun fact (n)
  (labels ((fact-aux (n p)
		     (if (<= n 1)
			 p
		       (fact-aux (- n 1) (* n p)))))
    (fact-aux n 1)))

The meat of the compilation:

/*      function definition for FACT                                  */
static L1(int narg, object V1)
{
  VT3 VLEX3 CLSR3
    TTL:
    RETURN(LC2(2, (V1), MAKE_FIXNUM(1)) /*  FACT-AUX        */ );
}
/*      local function FACT-AUX                                       */
static LC2(int narg, object V1, object V2)
{
  VT4 VLEX4 CLSR4
    TTL:
  if (!(number_compare((V1), MAKE_FIXNUM(1)) <= 0))
    {
      goto L2;
    }
  VALUES(0) = (V2);
  RETURN(1);
L2:
  {
    object V3;
    V3 = number_minus((V1), MAKE_FIXNUM(1));
    V2 = number_times((V1), (V2));
    V1 = (V3);
  }
  goto TTL;
}

While there is consing going on here, I see loops built with goto, but
no explicit recursion despite its presence in the original Lisp.  There
doesn't seem to be any repeated allocation of automatic variables either.
Hmmm...  Curious!

: 	There you have it, tail recurion that needs a stack.

You're not looking too good, son.  Would you like to play some more?

--
Christopher Oliver                     Traverse Communications
Systems Coordinator                    223 Grandview Pkwy, Suite 108
oliver -at- traverse -dot- com         Traverse City, Michigan, 49684
   Some mornings it just doesn't seem worth it to gnaw through the
   leather straps.      -- Emo Phillips
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33EF4ABC.1941@capital.net>
Christopher Oliver wrote:
> 
> Sajid Ahmed the Peaceman (········@capital.net) wrote:
> :       Let's take a look at the following example of tail recursion,
> 
> ... which is NOT and example of tail recursion.  Not only are you doing
> further computation with the returned value of the procedure, but you're
> also passing in an automatic variable.
> 
> Following are two somewhat long winded examples with code, but please
> bear with me.
> 
> Since I'm more confortable with C:
> 
> : int factorial(int *number) {
> :    int x;
> :    if (*number == 1) return 1;
> :    x = *number-1;
> :    return *number * factorial(&x);
> : }
> 
> This is not tail recursive.  Instead you should have written:
> 

	OK, if that's your definition of tail recursion, how about this? 


 int factorial(int *number, int result) {
    int x;
    if (*number == 1) return result;
    x = *number-1;
    return  factorial(&x, result * number);
  }



	I know that this isn't efficient code, but it's just there as an 
example. TRO fails when you have a local variable and are not allowed to 
execute the destructor on it. You have to put the local variable
somewhere. The stack is the appropriate place. 




				Peaceman
From: Gareth McCaughan
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <86wwls1tuk.fsf@g.pet.cam.ac.uk>
"Sajid Ahmed the Peaceman" wrote:

> 	OK, if that's your definition of tail recursion, how about this? 
> 
> 
>  int factorial(int *number, int result) {
>     int x;
>     if (*number == 1) return result;
>     x = *number-1;
>     return  factorial(&x, result * number);
>   }
> 
> 
> 
> 	I know that this isn't efficient code, but it's just there as an 
> example. TRO fails when you have a local variable, and not allowed to 
> execute the destructor on it. You have to put the local variable
> somewhere.
> The stack is the appropriate place. 

(For those poor benighted souls who haven't read the entire preceding
thread, this is in support of SAtP's claim that tail recursion can't
necessarily be done without using stack space.)

Yes, this is an example of how bad use of C's "features" turns
code that ought to be tail-recursive into code that isn't (at
least in the sense of being subject to tail-call optimisation).

What it shows is that certain points of the definition of the
C language were not designed with tail-call optimisation in mind.

It's an absolutely language-specific problem, and has nothing at all
to do with SAtP's claims that Lisp is slow, recursion is slow, recursion
implies space-inefficiency, etc etc. Lisp doesn't have first-class
reference objects comparable to C's pointers, so that if the compiler
sees

    (let ((x (cons 123 123)) (y 234))
      ;; blah blah blah
      (some-function x y))

it knows it's safe to forget about the bindings of X and Y before
calling SOME-FUNCTION, and that remains true whatever arguments
you give to SOME-FUNCTION. On the other hand, the cons cell to which
X is bound is still there; it continues to exist across the call;
but that's because Lisp doesn't (in general) use stack allocation
for "non-immediate" objects. Most systems will allocate the cons
cell in the heap, and pick it up later in a garbage collection if
appropriate. This is rather different from the C idiom of

    { int x[15];
      /* blah blah blah */
      return foo(x);
    }

and possibly a bit less efficient (aha! could this be what SAtP is
trying to get at?), but

  - a good garbage collector will result in the efficiency loss
    being rather small;

  - it's paid for with big winnage in convenience and clarity;

  - it avoids the ghastly bugs that can result in C programs from
    passing around pointers to stack-allocated objects (if you've
    never been bitten by this, you're lucky);

  - in the common case where you *don't* pass the object around,
    a good compiler is entirely at liberty to stick the whole thing
    on the stack. (I don't know whether they actually do this.)

(If you use Henry Baker's idea of "lazy allocation" you can actually
do *all* your allocation, of whatever kind of object, temporary or
not, on the stack. Has anyone actually tried doing this? Is the
necessary write barrier cheap enough to make it worth while?)

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Christopher Oliver
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5soguf$c7g@guy-smiley.traverse.com>
"Sajid Ahmed the Peaceman" wrote:
> 	OK, if that's your definition of tail recursion, how about this? 
> 
> 
>  int factorial(int *number, int result) {
>     int x;
>     if (*number == 1) return result;
>     x = *number-1;
>     return  factorial(&x, result * number);
>   }
> 
> 
> 
> 	I know that this isn't efficient code, but it's just there as an 
> example. TRO fails when you have a local variable, and not allowed to 
> execute the destructor on it.

Precisely.  You're doing something (destroying an automatic variable)
on return from the recursive call, thus I would say this is NOT tail
recursive.  It might be possible to transform this code into iteration
using a sufficiently clever compiler, but then you probably bend
the meaning of C's automatic variables.  Even then, I could write
a subtly different routine where conventional semantics (i.e. number
is a pointer to stack memory holding an int) are needed for a call
depending on the data passed to the routine.  Then, such a transform
wouldn't work.  Defeating TRO in a compiler with a given tail
recursion doesn't prove that there is no iterative form for your routines;
it proves your compiler isn't sufficiently clever.  Stumping the
compiler doesn't necessarily mean you've found a counterexample.

Red herring:  Don't you mean "return factorial(&x, result * *number);"

Gareth McCaughan (·····@dpmms.cam.ac.uk) wrote:
: (For those poor benighted souls who haven't read the entire preceding
: thread...

Benighted?  This must be some usage of the word benighted of which I
was blissfully unaware.  ;-)

--
Christopher Oliver                     Traverse Communications
Systems Coordinator                    223 Grandview Pkwy, Suite 108
oliver -at- traverse -dot- com         Traverse City, Michigan, 49684
"Getting wrong answers faster is NOT helping the end user." - R. O'Keefe
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33F0F0A6.C47@capital.net>
Christopher Oliver wrote:
> Precisely.  You're doing something (destroying an automatic variable)
> on return from the recursive call, thus I would say this is NOT tail
> recursive.  It might be possible to transform this code into itera-
> tion using a sufficiently clever compiler, but then you probably bend
> the meaning of C's automatic variables.  Even then, I could write
> a subtly different routine where conventional semantics (I.e. number
> is a pointer to stack memory holding an int) are needed for a call
> depending on the data passed to the routine.  Then, such a transform
> wouldn't work.  Defeating TRO in a compiler with a given tail recurs-
> ion doesn't prove that there is no iterative form for your routines;
> it proves your compiler isn't sufficiently clever.  Stumping the comp-
> iler doesn't necessarily mean you've found a counterexample.
> 

	I agree with you one hundred percent. The code is 
easily translatable into iterative code, and in fact is translated 
when it is compiled. 

	The reason I posted the code was to show that you may need a stack,
even if you have tail recursion (how it ever led up to this 
argument, I don't know).   




> Red herring:  Don't you mean "return factorial(&x, result * *number);"

	You have a good eye. 


					Peaceman
From: Gareth McCaughan
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <86oh741248.fsf@g.pet.cam.ac.uk>
Christopher Oliver wrote:

> : (For those poor benighted souls who haven't read the entire preceding
> : thread...
> 
> Benighted?  This must be some usage of the word benighted of which I
> was blissfully unaware.  ;-)

That was, of course, the point. Happily I'm off on holiday for a couple
of weeks now, so I'll be, er, benighted too. I bet the thread will
still be running when I get back.

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Torsten Poulin Nielsen
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <slrn5v2pdt.i0f.torsten@tyr.diku.dk>
"Sajid Ahmed the Peaceman" wrote:
> 	OK, if that's your definition of tail recursion, how about this? 
> 
> 
>  int factorial(int *number, int result) {
>     int x;
>     if (*number == 1) return result;
>     x = *number-1;
>     return  factorial(&x, result * number);
>   }
> 

Sigh, why obfuscate?

Try something like

int fak(int n, int acc)
{
  if (n == 0) return acc;
  return fak(n - 1, n * acc);
}

Which on my MIPS (with SGI's cc) gives something
like

 #   1	int fak(int n, int acc)
 #   2	{
	.ent	fak 2
fak:
	.option	O2
	.frame	$sp, 0, $31
$32:
	.loc	2 2
	.loc	2 3
 #   3	  if (n == 0) return acc;
	bne	$4, 0, $33
	.loc	2 3
	move	$2, $5
	.livereg	0x2000FF0E,0x00000FFF
	j	$31
$33:
	.loc	2 4
 #   4	  return fak(n - 1, n * acc);
	addu	$2, $4, -1
	mul	$5, $4, $5
	move	$4, $2
	b	$32                         <--------- Branch!
$34:
	.livereg	0x2000FF0E,0x00000FFF
	j	$31
	.end	fak

Which certainly looks iterative, quite unlike your
bad example.

-Torsten
From: Marco Antoniotti
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <scf3eoem1lm.fsf@infiniti.PATH.Berkeley.EDU>
In article <······················@tyr.diku.dk> ·······@tyr.diku.dk (Torsten Poulin Nielsen) writes:

   From: ·······@tyr.diku.dk (Torsten Poulin Nielsen)
   Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
   Date: 13 Aug 1997 07:43:57 GMT
   Organization: Department of Computer Science, U of Copenhagen
   Lines: 57
   Sender: ·······@tyr.diku.dk
   X-Newsreader: slrn (0.9.0.0 (BETA) UNIX)
   Xref: agate comp.lang.lisp:29921 comp.programming:54056 comp.lang.c++:287105

   "Sajid Ahmed the Peaceman" wrote:
   > 	OK, if that's your definition of tail recursion, how about this? 
   > 
   > 
   >  int factorial(int *number, int result) {
   >     int x;
   >     if (*number == 1) return result;
   >     x = *number-1;
   >     return  factorial(&x, result * number);
   >   }
   > 

   Sigh, why obfuscate?

Because he is a "real" programmer. :)

   Try something like

   int fak(int n, int acc)
   {
     if (n == 0) return acc;
     return fak(n - 1, n * acc);
   }

On top of that he does not get the very simple fact that his first function
is not tail recursive.

Cheers
-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Torsten Poulin Nielsen
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <slrn5v4hdi.k75.torsten@tyr.diku.dk>
In article <···············@infiniti.PATH.Berkeley.EDU>, Marco Antoniotti wrote:
And in article <······················@tyr.diku.dk> I wrote:
>>   Sigh, why obfuscate?
>
>Because he is a "real" programmer. :)

Oh yes, I forgot, but then again I guess I'm just one of those poor, clueless
computer scientists who deals in useless things like abstractions and
so on, instead of bits.

>On top of that he does not get the very simple fact that his first function
>is not tail recursive.

I know, but I thought it was futile to mention it. What I don't get is why
on earth he decided to pass `number' as a pointer in the first place. What
purpose would that serve? He was so close to getting it right (if we overlook
the fact that the function was wrong, because he forgot to dereference
`number'). He even had the accumulator ...

I get the shivers when I think about the "quality" products his company
must be grinding out.

For reference:

 "Sajid Ahmed the Peaceman" wrote:
   >  int factorial(int *number, int result) {
   >     int x;
   >     if (*number == 1) return result;
   >     x = *number-1;
   >     return  factorial(&x, result * number);
   >   }

-Torsten
 
From: Sajid Ahmed the Peaceman
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33F3A9C4.1E38@capital.net>
Torsten Poulin Nielsen wrote:
> 
> I know, but I thought it was futile to mention it. What I don't get is why
> on earth he decided to pass `number' as a pointer in the first place. What
> purpose should that have? He was so close to getting it right (if we overlook
> the fact that the function was wrong, because he forgot to dereference
> `number'). He even had the accumulator ...
> 

	It's there as an example. 

> I get the shivers when I think about the "quality" products his company
> must be grinding out.
> 

	The products I grind out are so fast, it will make you shiver :) 


					Peaceman
From: Torsten Poulin Nielsen
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <slrn5v811v.326.torsten@tyr.diku.dk>
In article <·············@capital.net>, Peaceman wrote:

>> I know, but I thought it was futile to mention it. What I don't get is why
>> on earth he decided to pass `number' as a pointer in the first place. What
>> purpose should that have? He was so close to getting it right (if we overlook
>> the fact that the function was wrong, because he forgot to dereference
>> `number'). He even had the accumulator ...
>
>	It's there as an example. 

Hmm, but that still doesn't explain the use of a reference when ordinary
pass-by-value would be both sufficient and better. What purpose did it serve?
I'm assuming here that you had a purpose. Did you want show that C is
difficult to compile efficiently? I fail to see how it demonstrated any
of your points regarding Lisp.

>	The products I grind out are so fast, it will make you shiver :) 

<grin> but are they correct?

-Torsten
From: Martin Rodgers
Subject: Re: Lisp is *NOT* slow
Date: 
Message-ID: <MPG.e5cc7d8a45e952e9899b5@news.demon.co.uk>
Marco Antoniotti wheezed these wise words:

> On top of that he does not get the very simple fact that his first function
> is not tail recursive.

Perhaps he does get it but, as he's trolling, chooses to ignore it.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
        "There are no limits." -- ad copy for Hellraiser
            Please note: my email address is gubbish
From: Emergent Technologies Inc.
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5solkp$2tc$1@newsie2.cent.net>
 Sajid Ahmed the Peaceman wrote in article <·············@capital.net>...

[Non tail recursive code in C elided.]

> I know that this isn't efficient code, but it's just there as an
>example. TRO fails when you have a local variable, ....

Yes, C++ is indeed completely inadequate in this regard.  However, I thought
we were discussing the problems with Lisp.

~jrm
From: ········@wat.hookup.net
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5ssol9$tif$1@nic.wat.hookup.net>
In <·············@capital.net>, Sajid Ahmed the Peaceman <········@capital.net> writes:
> ...
>
>	Let's take a look at the following example of tail recursion, 
>(In C++, sorry don't know how to do references in Lisp) 
>
>
>int factorial(int &number) {
>   int x;
>   if (number == 1) return 1; 
>   x = number-1;
>   return number * factorial(x);
>} 
>
>
>      If you don't know C++ : 
>
>int factorial(int *number) {
>   int x;
>   if (*number == 1) return 1; 
>   x = *number-1;
>   return *number * factorial(&x);
>} 
>
>
>	There you have it, tail recursion that needs a stack.
>
>
>					     Peaceman

1. This had nothing to do with what I asked you.  Are you into creative
   quoting?

2. Only a nut would write factorial like this.

3. Please try to understand tail recursion before you talk about it.  This
   example isn't tail recursive.

Hartmann Schaffer
From: Martin Rodgers
Subject: Re: Lisp is *NOT* slow
Date: 
Message-ID: <MPG.e5cc78e60da609b9899b4@news.demon.co.uk>
········@wat.hookup.net wheezed these wise words:

> 1. This had nothing to do with what I asked you.  Are you into creative
>    quoting?

It appears that he majored in troll writing. ;)
 
> 2. Only a nut would write factorial like this.

Or a troll writer. aka a nut.
 
> 3. Please try to understand tail recursion before you talk about it.  This
>    example isn't tail recursive.

Some people aren't patient enough to try to understand something 
before attacking it. They want to burn a book before reading it, ban a  
film before seeing it, or in this case, condemn an idea based solely 
on the fact that they're too clueless to understand it.

Let the jury decide.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
        "There are no limits." -- ad copy for Hellraiser
            Please note: my email address is gubbish
From: Marco Antoniotti
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <scf4t8um1yi.fsf@infiniti.PATH.Berkeley.EDU>
In article <·············@capital.net> Sajid Ahmed the Peaceman <········@capital.net> writes:

   From: Sajid Ahmed the Peaceman <········@capital.net>
   Newsgroups: comp.lang.lisp,comp.programming,comp.lang.c++
   Date: Sat, 09 Aug 1997 14:04:09 -0400
   Organization: Logical Net
   Reply-To: ········@capital.net
   Lines: 35
   Mime-Version: 1.0
   Content-Type: text/plain; charset=us-ascii
   Content-Transfer-Encoding: 7bit
   X-Mailer: Mozilla 3.01 (WinNT; I)
   Xref: agate comp.lang.lisp:29788 comp.programming:53754 comp.lang.c++:286388

   ········@wat.hookup.net wrote:
   > 
   > Could you please explain what this witty argument has to do with the cost
   > of (tail)recursion and you superior compiler knowledge?  Or are you bailing
   > out?
   > 
   > Hartmann Schaffer


	   Let's take a look at the following example of tail recursion, 
   (In C++, sorry don't know how to do references in Lisp) 


   int factorial(int &number) {
      int x;
      if (number == 1) return 1; 
      x = number-1;
      return number * factorial(x);
   } 


	 If you don't know C++ : 

   int factorial(int *number) {
      int x;
      if (*number == 1) return 1; 
      x = *number-1;
      return *number * factorial(&x);
   } 


	   There you have it, tail recursion that needs a stack.


Sorry, but I could not resist joining in the self-satisfied chorus
of all those who are now ROTFLing.

YOU JUST DON'T GET IT!

The factorial example above is what separates the "real" programmers
from the "quiche eaters" :)  The function is NOT tail recursive.

You eat too much quiche :)

-- 
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472
From: Gareth McCaughan
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <86afj14icf.fsf@g.pet.cam.ac.uk>
"Sajid Ahmed the Peaceman" trolled:

> > In case you didn't get it yet, the machine code for iteration and tail
> > recursion are indistinguishable.
> > 
> > Hartmann Schaffer
> 
> 	Better go back to school and take a course in compiler design.
> 	One word: stack.

Better go back to school and take a course in computer programming.
One word: tail.

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Martin Rodgers
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <MPG.e4d352ad6abba8b989912@news.demon.co.uk>
My body fell on Sajid Ahmed the Peaceman like a dead horse, thusly:

> One word: stack.

Many books on compiler theory neglect a certain issue.
Two words: tail recursion.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
    "My body falls on you like a dead horse" -- Portait Chinois
            Please note: my email address is gubbish
From: Dennis Weldy
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <xoZY1Z7n8GA.191@news2.ingr.com>
 Do read what's been written previously. He didn't say recursion IN
GENERAL. Rather, he was quite specific about TAIL RECURSION. As has been
written before, there are compilers that are smart enough to turn TAIL
RECURSION into ITERATION. Thus, the programmer doesn't have to. 

Dennis

Sajid Ahmed the Peaceman wrote in article <·············@capital.net>...

>········@wat.hookup.net wrote:
>> 
>> In case you didn't get it yet, the machine code for iteration and tail
>> recursion are indistinguishable.
>> 
>> Hartmann Schaffer
>
> Better go back to school and take a course in compiler design.
> One word: stack.
> 
> Peaceman
>.
> 
From: ········@wat.hookup.net
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5s2ocq$a7n$1@nic.wat.hookup.net>
Sajid Ahmed the Peaceman wrote in article <·············@capital.net>...

>········@wat.hookup.net wrote:
>> 
>> In case you didn't get it yet, the machine code for iteration and tail
>> recursion are indistinguishable.
>> 
>> Hartmann Schaffer
>
> Better go back to school and take a course in compiler design.
> One word: stack.
> 
> Peaceman

Do you know anything about compilers but the word?

Hartmann Schaffer
From: David Brabant
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5s1rrr$mdo$1@horus.mch.sni.de>
>> In case you didn't get it yet, the machine code for iteration and tail
>> recursion are indistinguishable.
>> 
>> Hartmann Schaffer
>
>	Better go back to school and take a course in compiler design.
>	One word: stack.
>				
>				Peaceman


  Coming from YOU, this comment seems quite surrealistic.

David

--
David BrabaNT,             | E-mail: ·······················@csl.sni.be
Siemens Nixdorf (SNI),     | CIS:    100337(,)1733
Centre Software de Li�ge,  | X-400:  C=BE;A=RTT;P=SCN;O=SNI;OU1=LGG1;OU2=S1
2, rue des Fories,         |         S=BRABANT;G=DAVID
4020 Li�ge (BELGIUM)       | HTTP:   www.sni.de       www.csl.sni.be/~david
From: Jon Buller
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33E5F4CE.1897@nortel.com>
Sajid Ahmed the Peaceman wrote:
> 
> ········@wat.hookup.net wrote:
> >
> > In case you didn't get it yet, the machine code for iteration and tail
> > recursion are indistinguishable.
> >
> > Hartmann Schaffer
> 
>         Better go back to school and take a course in compiler design.
>         One word: stack.
> 
>                                 Peaceman

Well, I was trying to stay out of this, but "Peaceman" really should
listen to those who have studied/learned more...  As evidence, I
present the following "C" code!

     int ifact (int n, int r) {
       while (n > 0) {
         r *= n--;
       }
       return r;
     }

     int rfact (int n, int r) {
       if (n == 0)
         return r;
       return rfact (n - 1, r * n);
     }

The output of the command "gcc --version" is 2.7.2; it is running on
a NetBSD-1.2G/pc532 box (so the output is for an ns32k arch.)

The command "gcc -O2 -S t.c" produces:
     #NO_APP
     gcc2_compiled.:
     ___gnu_compiled_c:
     .text
             .align 2
     .globl _ifact
             .type    _ifact,@function
     _ifact:
             enter [],0
             movd 8(fp),r1
             movd 12(fp),r0
             cmpqd 0,r1
             bge L3
     L4:
             muld r1,r0
             addqd -1,r1
             cmpqd 0,r1
             blt L4
     L3:
             exit []
             ret 0
     Lfe1:        .size    _ifact,Lfe1-_ifact
             .align 2
     .globl _rfact
             .type    _rfact,@function
     _rfact:
             enter [],0
             movd 8(fp),r1
             movd 12(fp),r0
     L8:
             cmpqd 0,r1
             beq L7
             muld r1,r0
             addqd -1,r1
             br L8
             .align 2,0xa2
     L7:
             exit []
             ret 0
     Lfe2:
             .size    _rfact,Lfe2-_rfact


Now, notice one thing here: The code produced for the recursive
form of the factorial is better than the iterative code!  I kept
the "r" argument to hopefully make the comparison of the assembly
easier.  However, I did try replacing the while loop with a for
loop, and a for loop with an index counting the other direction
(1 -> n) and it made no real difference in the generated code.

I was trying to get the output to be the same for both, but after
*multiple* attempts with the iterative version, I have been unable
to get gcc to generate code as good as my *first* recursive version!

Now the thing I want you to think about is this, *GCC-2.7.2* takes
an iterative version of some code, and a recursive version of the
same code, and makes better code for the recursive version in *EVERY*
case I tried.  The recursive code is smaller, faster, and uses the
same amount of stack for any arguments.

Now do you still hold the same opinion on who should be in a compiler
design class?

Jon Buller
From: Gareth McCaughan
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <86rac9532d.fsf@g.pet.cam.ac.uk>
Jon Buller wrote:

>      int ifact (int n, int r) {
>        while (n > 0) {
>          r *= n--;
>        }
>        return r;
>      }
> 
>      int rfact (int n, int r) {
>        if (n == 0)
>          return r;
>        return rfact (n - 1, r * n);
>      }
...
> The command "gcc -O2 -S t.c" produces:
...
> Now, notice one thing here: The code produced for the recursive
> form of the factorial is better than the iterative code!

Am I missing something? It looks to me as if the inner loop of
the recursive version is one instruction longer and will therefore
be slower. (Of course, for this particular case you'll never go
round the loop very many times, unless you actually *want* to
know <something large> factorial mod 2^whatever...)

I completely agree that "Sajid Whatever-it-was the Peaceman" is
completely wrong in claiming that recursion is invariably more
inefficient, and I completely agree that this example shows that
his feared stack explosion doesn't happen; but I don't think the
code produced for the recursive version is *better*. Or did I
miss something?

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Mukesh Prasad
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33E608B9.79CF@nspmpolaroid.com>
Gareth McCaughan wrote:
[snip -- primary concern not addressed in the following]
> 
> I completely agree that "Sajid Whatever-it-was the Peaceman" is
> completely wrong in claiming that recursion is invariably more
> inefficient, and I completely agree that this example shows that

I think he may have a couple of valid points:

1)  Recursion has been used more in Lisp-like languages.
    The optimization techniques for tail-recursion developed
    in the context of Lisp, because that's where
    they were most needed.  (Of course, once developed,
    nothing stopped GCC from taking advantage of the techniques.)
    If you pick up an early Lisp book, you will find
    recursion heavily emphasized as the "natural" technique
    for Lisp.  (Lately de-emphasized in the wake of intensive
    performance concerns.)
2)  Recursion is indeed inherently more expensive, otherwise
    the whole field of "elimination of tail recursion"
    would have been un-necessary.

Moreover, not all recursive problems are tail-recursive,
or as easily amenable to being compiled iteratively.
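
To make the distinction concrete, a minimal sketch (not from the original
post): the first version still has a multiply pending when the recursive
call returns, so the call is not in tail position; the second has nothing
left to do after the call.

    int fact_plain(int n) {
        if (n <= 1) return 1;
        return n * fact_plain(n - 1);     /* multiply happens AFTER the call */
    }

    int fact_acc(int n, int acc) {        /* call as fact_acc(n, 1) */
        if (n <= 1) return acc;
        return fact_acc(n - 1, acc * n);  /* nothing left to do: a tail call */
    }
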
From: Erik Naggum
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <3079704731964042@naggum.no>
* Mukesh Prasad
| 2)  Recursion is indeed inherently more expensive, otherwise
|     the whole field of "elimination of tail recursion"
|     would have been un-necessary.

this is obviously false.  function calls have always been expensive, and
will continue to be.  if a tail call could universally be replaced with a
jump, be that to itself or to any other function, much would be saved in
performance.  designing a language and implementing calling conventions
such that this is possible is actually very hard work.  e.g., C blew it,
and therefore C++.  Scheme focused on this aspect from very early on, and
tail-call merging has been a standard feature in Lisp compilers a long
time.  since C blew it so disastrously, it's no wonder it took the C
community so long to get it right.  we should also not ignore the fact that
the Free Software Foundation and GNU project is led by people with
extensive experience from the Lisp world.
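
for illustration, a contrived sketch of a tail call to a _different_
function -- the case where a call-becomes-jump convention pays off even
though nothing calls itself:

    /* every call below is a tail call to the other function */
    bool is_odd(unsigned n);

    bool is_even(unsigned n) {
        if (n == 0) return true;
        return is_odd(n - 1);
    }

    bool is_odd(unsigned n) {
        if (n == 0) return false;
        return is_even(n - 1);
    }

without tail-call merging this needs O(n) stack; with it, both calls become
jumps and the pair runs in constant space.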

since I live in an uncontaminated world, without MS bugs or problems,
could somebody who has already been exposed to MS tell us whether their
compilers for C or C++ do tail-call merging?  that could be instructive.

#\Erik
-- 
404 Give me a URL I cannot refuse.
From: Martin Rodgers
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <MPG.e503786411edd45989919@news.demon.co.uk>
My body fell on Erik Naggum like a dead horse, thusly:

> since I live in an un contaminated world, without MS bugs or problems,
> could somebody who has already been exposed to MS tell us whether their
> compilers for C or C++ do tail-call merging?  that could be instructive.

VC++ 4.0 certainly handles tail recursion, _if_ you use the optimiser. 
Just like GNU C.

I don't think that you're disagreeing with Mukesh Prasad, as you're 
both saying the same thing. It's true that recursion is "more 
expensive" than not using recursion, in the sense that it can use stack 
space, but it's not true that tail recursion is more expensive than an 
iterative approach. The ifact and rfact functions should produce 
similar performance.

It's possible to "optimise" the second resursive call in quicksort, by 
replacing it with a while loop, but compilers like VC++ and GNU C can 
optimise the call _without_ a source level transformation of code.
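
The hand transformation looks roughly like this (a sketch only, with a
simple Lomuto-style partition for brevity):

    #include <algorithm>   // std::swap

    // recurse on the left part; loop on the right part instead of
    // making the second recursive call
    void quicksort(int *a, int lo, int hi) {
        while (lo < hi) {
            int pivot = a[hi], i = lo;
            for (int j = lo; j < hi; ++j)
                if (a[j] < pivot) std::swap(a[i++], a[j]);
            std::swap(a[i], a[hi]);
            quicksort(a, lo, i - 1);   // first call stays recursive
            lo = i + 1;                // "second call" becomes the loop
        }
    }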

IMHO we should let the compiler do this kind of work for us. The only 
reason for making source level optimisations _in C_ is that C compilers 
tend not to use optimisation by default. We have to explicitly tell them to 
optimise the code, and this often makes the compile time longer.

Lisp programmers probably don't notice compile times. I know that I 
usually don't! Whether I code in C or Lisp, I spend a lot of time 
recompiling single functions. The difference is that in C, this takes 
a _lot_ longer than in Lisp, where it can be very hard to measure the 
time to compile a single function.

This will vary from compiler to compiler, of course. Fortunately, in 
Common Lisp, there's the option of an interpreter - perfect for 
testing code that will be run once and then replaced by a new version 
a few seconds later. C interpreters also exist, but I'm not sure if 
they allow interpreted and compiled code to be mixed in the same 
program. Perhaps this is why C/C++ people are so confused? They may be 
judging Lisp by the limitations of C/C++ compilers, by assuming that 
the same rules apply to both languages.

Compilers - I love 'em! There are no limits.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
        "There are no limits." -- ad copy for Hellraiser
            Please note: my email address is gubbish
From: David Thornley
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5sa17h$188$1@darla.visi.com>
In article <············@newsie2.cent.net>,
Emergent Technologies Inc. <········@eval-apply.com> wrote:
>While I wish to avoid having the body of Martin Rodgers fall upon me like
>a dead horse, I do want to follow up on Erik Naggum's question:
>>> since I live in an uncontaminated world, without MS bugs or problems,
>>> could somebody who has already been exposed to MS tell us whether their
>>> compilers for C or C++ do tail-call merging?  that could be instructive.
>
>in theory, tail-call merging can be done by a C or C++ compiler, but in
>practice, actual code tends to make that impossible.  Both C and C++
>more or less require (I'm not sure if it is an explicit requirement, or
>whether it is an expected one) that the ``automatic'' variables 
>are valid until the return statement.  If the address of one of these is
>passed to an unknown function, it essentially prevents tail recursion.
>
IIRC, the standard does say something like that (except of course that
variables can be discarded if they're not going to be used further),
but I don't see what the problem is.  If the address of a local variable
is passed to an unknown function, and the program makes any use of
that address or variable after the return of the owning function,
that's undefined behavior (and therefore a no-no).  Since tail
recursion is essentially different behavior at the "return" statement,
I don't see that this causes any problems.

>C++ is even worse.  Someone correct me if I'm wrong here, but I
>believe that compiler created temporaries are supposed to be deleted
>
>upon exit from the surrounding block.

At the end of the full expression (i.e., an expression that is not
a subexpression of any other expression) they're created in.  This
is in the Draft Standard.  The ARM (which is what C++ used to use
more or less as a standard) is very vague on when temporaries go
away, which Stroustrup later considered a bad idea.

>  This would kill tail recursion.
>In any case, explicit class variables have their destructors run when
>the enclosing block exits, even in the event that an exception is
>raised.  This is equivalent to an UNWIND-PROTECT in Lisp, and
>is definitely *not* tail recursive.
>
This means that blocks with catch clauses can't be tail-recursive,
I think.  (The interesting thing is that exceptions in Java are
handled by the "finally" section, which does look like an UNWIND-PROTECT
to me.)

Aside from that, it seems to me that there is stuff that becomes
undefined or must be done at the return statement, and I don't
see why this can't be handled at that point by the iteration.

>The thing that's really funny about all this is that the C++ people
>are discovering issues that were dealt with by the Lisp people 20 years
>ago, and they seem to have no idea that any prior body of work exists.
>Look at how your typical C++ program collects garbage --- each class
>implements its own ad hoc reference count GC.  In any C++ program of
>any size, a tremendous amount of time is spent incrementing and
>decrementing reference counts. 
>
The big problem is that (a) handling memory properly is hard, and
(b) writing a good garbage collector is even harder.  We're spoiled
in Lisp by having good garbage collection available.  Java has
garbage collection, as well as a few other good ideas that (AFAIK)
became widely used in Lisp.

The result is that, unless you buy a commercial product or use
something like Boehm's collector, you can't implement decent
garbage collection.  Since you can't do it right, and handling storage
manually is highly nontrivial, what are you going to do?  Reference
counts are easy.  They also are inefficient and use memory non-locally,
but that may be the least of the evils.
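
The "ad hoc reference count GC" being described usually amounts to
something like this bare-bones sketch (class invented for illustration;
no thread safety, no cycle handling):

    // every copy, assignment, and destruction touches the shared count
    class Buffer {
        int *data;
        int *refs;                       // shared reference count
    public:
        explicit Buffer(int n) : data(new int[n]), refs(new int(1)) {}
        Buffer(const Buffer &o) : data(o.data), refs(o.refs) { ++*refs; }
        Buffer &operator=(const Buffer &o) {
            ++*o.refs;                   // increment first: safe for self-assignment
            release();
            data = o.data; refs = o.refs;
            return *this;
        }
        ~Buffer() { release(); }
    private:
        void release() {
            if (--*refs == 0) { delete [] data; delete refs; }
        }
    };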

>Look also at ``functors''.  These are throwaway classes that encapsulate
>state with a function.  A lisp person would call them closures.  The thing
>
Do we want to get started on the relative merits of the class systems,
including generics that specialize on more than one parameter and the
awkwardness or otherwise of multiple inheritance?

Probably not.

It would be nice if there was some sort of rule or natural law that
said that no language inferior to Common Lisp could become popular,
but that doesn't seem to be the case.

David Thornley
From: Emergent Technologies Inc.
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5sb6t9$ks5$1@newsie2.cent.net>
 David Thornley wrote in article <············@darla.visi.com>...
>In article <············@newsie2.cent.net>,
>Emergent Technologies Inc. <········@eval-apply.com> wrote:
>>in theory, tail-call merging can be done by a C or C++ compiler, but in
>>practice, actual code tends to make that impossible.  Both C and C++
>>more or less require (I'm not sure if it is an explicit requirement, or
>>whether it is an expected one) that the ``automatic'' variables
>>are valid until the return statement.  If the address of one of these is
>>passed to an unknown function, it essentially prevents tail recursion.
>>
>IIRC, the standard does say something like that (except of course that
>variables can be discarded if they're not going to be used further),
>but I don't see what the problem is.  If the address of a local variable
>is passed to an unknown function, and the program makes any use of
>that address or variable after the return of the owning function,
>that's undefined behavior (and therefore a no-no).  Since tail
>recursion is essentially different behavior at the "return" statement,
>I don't see that this causes any problems.

It is different because C doesn't consider the tail call to be a return.
Suppose you were perverse enough to do this: 

    {
        int x;
        foo (&x);
        return bar ();
    }

When bar is called, x is still alive, and it may actually be available to
bar --- we don't know what foo did to it.  In this case, we can't
deallocate the frame and jump to bar; we have to do a full recursive call.

C++ is even stricter.  In the case where we
cannot deallocate x because we don't know what foo did to it, we have
a worse problem. Deallocation is in essence a no-op. It is
the immediate re-allocation that gets you in trouble. In C++, however,
destroying an object can cause arbitrary code to be run.
Consider this:

    {
        FOO x;
        return bar();
    }

and suppose that bar has side effects and the destructor for x does also.
C++ requires that the side effects for bar occur *before* those for
destroying x.  This breaks tail recursion.

>>C++ is even worse.  Someone correct me if I'm wrong here, but I
>>believe that compiler created temporaries are supposed to be deleted
>>
>>upon exit from the surrounding block.
>
>At the end of the full expression (i.e., an expression that is not
>a subexpression of any other expression) they're created in.  This
>is in the Draft Standard.  The ARM (which is what C++ used to use
>more or less as a standard) is very vague on when temporaries go
>away, which Stroustrup later considered a bad idea.

Ok.  This lessens the `scope' of the problem, so to speak.
But the problem is still there: any temporaries created as arguments
to a tail-called function must live until the enclosing return statement
is executed.  So doing something like ``return foo (bar + baz + quux);''
where bar, baz, and quux are some complicated class that overloads +
will create some compiler temporaries that are supposed to last until
the return is executed; they cannot be collected before the transfer
to foo, and again, tail recursion is broken.

>
>>  This would kill tail recursion.
>>In any case, explicit class variables have their destructors run when
>>the enclosing block exits, even in the event that an exception is
>>raised.  This is equivalent to an UNWIND-PROTECT in Lisp, and
>>is definitely *not* tail recursive.
>>
>This means that blocks with catch clauses can't be tail-recursive,
>I think.  (The interesting thing is that exceptions in Java are
>handled by the "finally" section, which does look like an UNWIND-PROTECT
>to me.)

I'll concede this one because I'm not an expert on how the exception
handling works in C++.

>Aside from that, it seems to me that there is stuff that becomes
>undefined or must be done at the return statement, and I don't
>see why this can't be handled at that point by the iteration.

The thing is that in C++, tail recursion is different from iteration in
that you iterate *after* you complete the last expression of a sequence
while with tail recursion you transfer control *as* the last expression
of a sequence.  And C++ says that your classes hang around until
*after*.  This doesn't apply to WHILE, FOR, and DO because the
control transfer is outside the block whereas tail recursion
allows the control transfer to happen inside the block.

>>Look also at ``functors''.  These are throwaway classes that encapsulate
>>state with a function.  A lisp person would call them closures.  The thing
>>
>Do we want to get started on the relative merits of the class systems,
>including generics that specialize on more than one parameter and the
>awkwardness or otherwise of multiple inheritance?
>
>Probably not.

Definitely not!  I was just pointing out that a Lisp compiler, when faced
with a single inheritance specialization (a closure) can do a bang-up job
on the sucker while C++ tends to generate a ream of indirect references.
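
For anyone who hasn't met the term, the ``functor'' in question is just a
small class like this (an illustrative sketch; the name is invented):

    // a throwaway class that carries captured state and is called
    // through operator() -- roughly what a Lisp closure gives you for free
    struct AddN {
        int n;                                    // the captured state
        explicit AddN(int n_) : n(n_) {}
        int operator()(int x) const { return x + n; }
    };

    // usage:  AddN add5(5);  add5(37)  ==>  42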

>
>It would be nice if there was some sort of rule or natural law that
>said that no language inferior to Common Lisp could become popular,
>but that doesn't seem to be the case.
>
>David Thornley
>

C++ promotes code re-use:  C++ programs generally have
more textually copied sections of source code than most other languages.

~jrm
 
From: David Hanley
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33E8DA97.732F84F9@nospan.netright.com>
David Thornley wrote:

> In article <············@newsie2.cent.net>,
> Emergent Technologies Inc. <········@eval-apply.com> wrote:
> >in theory, tail-call merging can be done by a C or C++ compiler, but in
> >practice, actual code tends to make that impossible.  Both C and C++
> >more or less require (I'm not sure if it is an explicit requirement, or
> >whether it is an expected one) that the ``automatic'' variables
> >are valid until the return statement.  If the address of one of these is
> >passed to an unknown function, it essentially prevents tail recursion.
> >
> IIRC, the standard does say something like that (except of course that
> variables can be discarded if they're not going to be used further),
> but I don't see what the problem is.  If the address of a local variable
> is passed to an unknown function, and the program makes any use of
> that address or variable after the return of the owning function,
> that's undefined behavior (and therefore a no-no).  Since tail
> recursion is essentially different behavior at the "return" statement,
> I don't see that this causes any problems.

    Come, now:

int foo2(int *p);   /* assume foo2 is defined elsewhere */

int foo()
{
    int a = 5;
    return foo2( &a );
}

is perfectly valid C but would break in an environment which supported
tail-recursion.

dave
From: Richard A. O'Keefe
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5sbm2c$3b2$1@goanna.cs.rmit.edu.au>
"Bill Wade" <·········@stoner.com> writes:
>If I want portable, efficient code

Ah, but if you wanted efficient code, you wouldn't be using quicksort
in the first place.
-- 
Four policemen playing jazz on an up escalator in the railway station.
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
From: Michael Schuerig
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <19970808034913634118@rhrz-isdn3-p25.rhrz.uni-bonn.de>
Richard A. O'Keefe <··@goanna.cs.rmit.edu.au> wrote:

> Ah, but if you wanted efficient code, you wouldn't be using quicksort
> in the first place.

??

What would you use instead? Bottom-up heapsort?

Michael
--
Michael Schuerig              The usual excuse for our most unspeakable
·············@uni-bonn.de       public acts is that they are necessary.
http://www.uni-bonn.de/~uzs90z/                       -Judith N. Shklar
From: Erik Naggum
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <3080025429729377@naggum.no>
* Jamie Zawinski
| If you have a recursive function that does a dynamic bind, or allocation,
| or whatever, around each call to itself, then I wouldn't consider that
| tail-recursive.  Sure, you've saved the call-stack, but that's not the
| whole story.

I agree that it's not the whole story, but I'm talking more about the
general ability to do tail-call merging than when specifically applied to
calling yourself.  my point was that other call-related expenses may be
worth saving.  e.g., on a SPARC, stack growth per call is large, and the
cost of save and restore involving register windows moved to and from stack
is very high.  other CPUs have other costs.  we do tail-call merging for
several good reasons and I believe all of them are related to reducing
costs, so if we can identify additional costs and ways to reduce them, that
would be a win.

| What reasons were you thinking of?  Perhaps there are also *other*
| reasons not to attempt to do tail-calls with emacs-lisp, but the dynamic
| binding showstopper was the point at which I stopped thinking about it...

I wanted to show that the way dynamic bindings and unwind-protects are
organized into a separate stack in Emacs Lisp (and probably not _entirely_
differently in other Lisps) was not an argument _against_ tail-call merging
in the presence of these forms.  you argue against tail-call merging based
on other factors in Emacs Lisp, and that's cool -- I didn't expect Emacs
Lisp to get tail-call merging just because it's possible, when it isn't
cost effective.

the question is only "how do you know how far to unwind", and if that is
something like a "mark" on a stack, like in Emacs Lisp, instead of constant
number of elements on the stack, like the C calling convention where the
caller must clean up the stack, I see opportunities for saving more costs.
not that I know any other Lisp intimately, but from what I have read
elsewhere, this does seem like a straight-forward way to do it, and would,
in other words, not remove the prospect of having efficient function calls
through the use of tail-call merging just because more powerful features
are also needed and used.

| Emacs-lisp isn't a good example of much of anything, really.

maybe not a _good_ example of anything, but a valid example of many things.
I'm aware that you don't like Emacs Lisp, and I have my own set of gripes,
but that doesn't mean everything that comes from Emacs Lisp is tainted.

#\Erik
-- 
404 Give me a URL I cannot refuse.
From: Emergent Technologies Inc.
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5slfoc$6sb$1@newsie2.cent.net>
Sorry for the belated reply, I seem to be on a slow pipe.

 Erik Naggum wrote in article <················@naggum.no>...
>my reading of ANSI X3.226 Common Lisp also gives me the impression that
>there is ample support in the specification for making the following
>(contrived) function be tail-recursive.
>
>    (defvar *foobar* 0)
>    (defun complex-tail-merge-test (x)
>      (let ((*foobar* (1+ *foobar*)))
>        (unwind-protect
>            (if (zerop x)
>                nil
>                (complex-tail-merge-test (1- x)))
>          (print *foobar*))))
>
>my analysis is also partly based on the internals of GNU Emacs, which does
>not do tail-call merging, but the dynamic binding and unwind-forms stack is
>not the reason for that.
>
>#\Erik

This isn't tail recursive because it runs with O(n) space (even if the
compiler compiles the recursive call as a jump).  The continuation passed to 
complex-tail-merge-test is different each time.  I do understand your
argument though, and although it may seem like I'm nit picking in this
case, I think most people would agree that tail recursion is what happens
when you pass your continuation to another function.

~jrm
From: Richard A. O'Keefe
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5sblp1$30b$1@goanna.cs.rmit.edu.au>
Mukesh Prasad <·······@nspmpolaroid.com> writes:
>2)  Recursion is indeed inherently more expensive, otherwise
>    the whole field of "elimination of tail recursion"
>    would have been un-necessary.

I can't speak about Lisp, but I do know that in the history of Prolog,
TRO support was added to DEC-10 Prolog *KNOWING THAT IT WOULD BE SLOWER*
than pure recursion, but purely in order to save memory (if we had more
than 250k in a program, the ERCC operators asked us to run it overnight!)
It became an essential part of all later Prolog systems precisely so that
we could *stop* writing iterative code!

What's expensive is not recursion, but retaining resources you are no longer
using.  RECURSIVE CALLS AREN'T ANY DIFFERENT FROM ANY OTHER CALLS.  The
key point about TRO is *not* that it turns the call into a jump, but that
it reclaims the stack frame *before* the procedure returns.  The Quintus
Prolog compiler supported an additional (and very simply implemented)
optimisation:  environment trimming.  Basically, the compiler allocated
variables in the stack frame in reverse order of death time, and as each
variable's death time was reached, the stack frame was trimmed back.
Environment trimming and tail call optimisation were done for _all_ procedures
and calls, not just self-recursions.

With object-oriented programming, even when your code is not recursive, it
still has a heck of a lot of procedure calls.  It's not just recursive
styles that have to use 'the stack'.

-- 
Four policemen playing jazz on an up escalator in the railway station.
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
From: Mukesh Prasad
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33EB3651.3DF6@polaroid.com>
Richard A. O'Keefe wrote:
> 
> Mukesh Prasad <·······@nspmpolaroid.com> writes:
> >2)  Recursion is indeed inherently more expensive, otherwise
> >    the whole field of "elimination of tail recursion"
> >    would have been un-necessary.
> 
> I can't speak about Lisp, but I do know that in the history of Prolog,
> TRO support was added to DEC-10 Prolog *KNOWING THAT IT WOULD BE SLOWER*
> than pure recursion, but purely in order to save memory (if we had more
> than 250k in a program, the ERCC operators asked us to run it overnight!)
> It became an essential part of all later Prolog systems precisely so that
> we could *stop* writing iterative code!
> 
> What's expensive is not recursion, but retaining resources you are no longer
> using.  RECURSIVE CALLS AREN'T ANY DIFFERENT FROM ANY OTHER CALLS.  The

If ... you ... code ... an ... algorithm ... using
... iterations ... or ... eliminated ... tail ...
recursion ... in ... place ... of ... standard ...
recursion..., you ... are ... reducing ... the ...
number ... of... calls.
From: Christopher B. Browne
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <slrn5v51ve.lvb.cbbrowne@knuth.brownes.org>
On Fri, 08 Aug 1997 11:08:01 -0400, Mukesh Prasad
<·······@removit-polaroid.com> posted:
>Richard A. O'Keefe wrote:
>> What's expensive is not recursion, but retaining resources you are no longer
>> using.  RECURSIVE CALLS AREN'T ANY DIFFERENT FROM ANY OTHER CALLS.  The
>
>If ... you ... code ... an ... algorithm ... using
>... iterations ... or ... eliminated ... tail ...
>recursion ... in ... place ... of ... standard ...
>recursion..., you ... are ... reducing ... the ...
>number ... of... calls.

In C, function calls are moderately expensive as they require allocation
of stack frames with associated memory allocation and cleanup.  Because
function calls are thus expensive, functions are encouraged to be of
moderate to large size.

In FORTRAN, the situation seems to be even somewhat more drastic, with
the corresponding result that FORTRAN encourages even larger functions
and subroutines.

With LISP, the fact that functions tend to get heavily factored has
encouraged improving the efficiency of calls.  Fewer calls are preferable,
but this is less critical than it is for the other languages.

-- 
Christopher B. Browne, ········@hex.net, ············@sdt.com
PGP Fingerprint: 10 5A 20 3C 39 5A D3 12  D9 54 26 22 FF 1F E9 16
URL: <http://www.hex.net/~cbbrowne/>
Q: What does the CE in Windows CE stand for?  A: Caveat Emptor...
From: Emergent Technologies
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <wku3gqt7ko.fsf@eval-apply.com>
··@goanna.cs.rmit.edu.au (Richard A. O'Keefe) writes:

> RECURSIVE CALLS AREN'T ANY DIFFERENT FROM ANY OTHER CALLS.  

I'd like to add:

AND THEY DON'T COST ANYTHING.

That's a slight exaggeration, they usually cost a jump instruction.  But I think 
Steele effectively argues this in

Guy Lewis Steele Jr. 
Debunking the "expensive procedure call" myth, 
or procedure call implementations considered harmful, 
or lambda, the ultimate GOTO. 
In ACM Conference Proceedings, pages 153--162. ACM, 1977. 

Remember that most of the cases presented to show ``iteration''
better than ``recursion'' are comparing programs that run in O(1)
space to programs that run in O(n) space --- this is a bit
disingenuous.  I could equally as well argue that recursion is
obviously superior to iteration by comparing a purely iterative bubble
sort to a purely recursive mergesort.
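
Spelled out, that comparison would look something like the sketch below
(both routines deliberately naive, names invented):

    #include <algorithm>   // std::swap
    #include <cstddef>     // std::size_t
    #include <vector>

    // purely iterative bubble sort: O(1) extra space, O(n^2) time
    void bubble_sort(std::vector<int> &a) {
        for (std::size_t i = 0; i + 1 < a.size(); ++i)
            for (std::size_t j = 0; j + 1 < a.size() - i; ++j)
                if (a[j] > a[j + 1]) std::swap(a[j], a[j + 1]);
    }

    // purely recursive mergesort: O(n) extra space, O(n log n) time
    std::vector<int> merge_sort(const std::vector<int> &a) {
        if (a.size() <= 1) return a;
        std::vector<int> left(a.begin(), a.begin() + a.size() / 2);
        std::vector<int> right(a.begin() + a.size() / 2, a.end());
        left = merge_sort(left);
        right = merge_sort(right);
        std::vector<int> out;
        std::size_t i = 0, j = 0;
        while (i < left.size() && j < right.size())
            out.push_back(left[i] <= right[j] ? left[i++] : right[j++]);
        while (i < left.size())  out.push_back(left[i++]);
        while (j < right.size()) out.push_back(right[j++]);
        return out;
    }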

~jrm
From: Jon Buller
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33E754A4.7E75@nortel.com>
Gareth McCaughan wrote:
> 
> Jon Buller wrote:
> 
[ Trivial factorial examples removed ]
> ...
> > The command "gcc -O2 -S t.c" produces:
> ...
> > Now, notice one thing here: The code produced for the recursive
> > form of the factorial is better than the iterative code!
> 
> Am I missing something? It looks to me as if the inner loop of
> the recursive version is one instruction longer and will therefore
> be slower. (Of course, for this particular case you'll never go
> round the loop very many times, unless you actually *want* to
> know <something large> factorial mod 2^whatever...)

Uh... no.  I have lost the ability to count.  It seems to only
happen just before I post, but somehow it seems to always happen
before I post 8-)

I was looking at that code thinking: Gee, the while loop has 2
compare instructions, and the recursive function merged them into
one so it MUST be better, I guess I don't even need to look closer
or actually *count* the instructions...  Just in case someone wonders
if it's an antique CISC that allows that to occur, I checked the
same code on a SPARC last night, and they are both within a few
instructions of each other on that machine as well.

Of course this unfortunately does not counter the argument "Well
sure you can show such results with such a simple toy, but it'll
never work on 'real code'".  I haven't the time to generate such
an example, nor would I flood the net with it if I had it.  But I
am sure the outcome of such an example would be the same as this
one, provided the C parameter passing semantics didn't get in the
way, and the GCC optimizer didn't just simply give up on a large
input function.

> I completely agree that "Sajid Whatever-it-was the Peaceman" is
> completely wrong in claiming that recursion is invariably more
> inefficient, and I completely agree that this example shows that
> his feared stack explosion doesn't happen; but I don't think the
> code produced for the recursive version is *better*. Or did I
> miss something?

Now if only our flame-baiter would admit to the obvious facts as
readily...

Jon Buller
From: ? the platypus {aka David Formosa}
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <869799380.842569@cabal>
In <·············@capital.net> Sajid Ahmed the Peaceman <········@capital.net> writes:

[...]

>> >They would expand them [...functions...] inline like a macro definition.
>> 
>> In such a language it would be impossible to write recursive code.

>	You could certainly write recursive code, as long as the 
>number of times the function calls itself is set at compile time. 

In most cases where you use recursion you don't know how many
times you will repeat.

[...]

>> as the size of the
>> executables would be so massive as to be useless.
>> 

>	Most programming code out there (about 96%) is 
>nonrecursive. You've been programming in lisp too long. 

OK, let's imagine a module gets called 10 or 12 times: you
have now got 10 or 12 times the module size in extra code in your 
program.

--
Please excuse my spelling as I suffer from agraphia see the url in my header. 
Never trust a country with more peaple then sheep. Buy easter bilbies.
Save the ABC Is $0.08 per day too much to pay?   ex-net.scum and proud
I'm sorry but I just don't consider 'because its yucky' a convincing argument
From: Will Hartung
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <vfr750EDL36L.H60@netcom.com>
Sajid Ahmed the Peaceman <········@capital.net> writes:

>Gareth McCaughan wrote:
>> 
>> Name three languages that require all recursive function calls
>> to cause the function to be recompiled. In fact, name one.
>> 

>	Come to think of it, I can't remember any off hand. I
>don't quite remember if Basic did this. 

>	Anyway, I'm sure there are some language designs that don't
>use the stack when making calls to functions. They would expand them 
>inline like a macro definition. When the code would finally be 
>compiled, there would be recompilations of the function calls. 

I once wrote a particularly ugly set of Macros for the PDP-11 that
used a stack for user written routines, but used the registers for
the "primitives". It did, however, save any registeres used by the
primitives on the stack, and restored them later.

Of course, all of these primitives were effectively inlined into the
code. When a program was compiled, I had about 10% actual code, and
90% saving/restoring registers. Like I said, it was horribly ugly. On
the bright side, though, all of my assembly language projects were
less than 30 "lines" of code.

But even with this horror, recursive functions were only inlined and
expanded once during compile, and then the stack exploded to insane
sizes during the recursion.

It was a great experiment when I wrote it, but quite terrible for
anything practical.

Heck, even calculators don't do what you suggest. Even INTERCAL
doesn't do what you suggest!

-- 
Will Hartung - Rancho Santa Margarita. It's a dry heat. ······@netcom.com
1990 VFR750 - VFR=Very Red    "Ho, HaHa, Dodge, Parry, Spin, HA! THRUST!"
1993 Explorer - Cage? Hell, it's a prison.                    -D. Duck
From: Nicholas Arthur Ambrose
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33D65B66.3CB8F589@interdyn.com>
Sajid Ahmed the Peaceman wrote:

> Gareth McCaughan wrote:
> >
> > > > You seem to be implying that every recursive call gets
> > > > recompiled.
> > >
> > >       That is true in some languages, but not true in others.
> >
> > Name three languages that require all recursive function calls
> > to cause the function to be recompiled. In fact, name one.
> >
>
>         Come to think of it, I can't remember any off hand. I
> don't quite remember if Basic did this.
>
>         Anyway, I'm sure there are some language designs that don't
> use the stack when making calls to functions. They would expand them
> inline like a macro definition. When the code would finally be
> compiled, there would be recompilations of the function calls.
>
>                                         Peaceman

   I thought that Fortran77 didn't use the stack. I believe It also
doesn't allow recursion without some programming trickery.
Nick
From: William Clodius
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33D68C9C.41C6@lanl.gov>
Nicholas Arthur Ambrose wrote:
> <snip>
>    I thought that Fortran77 didn't use the stack. I believe It also
> doesn't allow recursion without some programming trickery.
> Nick

Richard O'Keefe may correct me on this, but I believe that the primary
reason for the addition of the SAVE statement to Fortran 77 was to allow
implementations to use stacks in a reliable manner. Technically I
believe Fortran 66 could be implemented using stacks, but entities
became undefined when they left their scope, so that local entities could
not be used in a standard conforming manner to retain state. However,
because most implementations did not use stacks, a significant body of
code was written that assumed that local entities always retained their
values, making it difficult, of course, for compilers to use stacks and
still satisfy their customers. It is true, however, that stacks were not
as useful in Fortran 77 as they were for languages with recursion or
block scoping, let alone more explicitly stack based languages such as
Forth or Pop.

As to recursion, I don't know anyone who wrote standard conforming
recursive Fortran 77 procedures, although I have met a few that thought
they had done so. In every case they were wrong. Some were using
compiler specific extensions, but most had code that worked until new
standard conforming optimizations were implemented (such as the usage
of a stack). In some sense it is possible to emulate recursion in
Fortran 77, for example the following might be considered a form of
anonymous recursion

      FACTOR = 1
10    IF (M .EQ. 0) THEN
         FACTOR = 1 * FACTOR
      ELSE
         FACTOR = FACTOR * M
         M = M - 1
         GO TO 10
      ENDIF

but similar reasoning might consider iteration to be a way of emulating
recursion.

-- 

William B. Clodius		Phone: (505)-665-9370
Los Alamos Nat. Lab., NIS-2     FAX: (505)-667-3815
PO Box 1663, MS-C323    	Group office: (505)-667-5776
Los Alamos, NM 87545            Email: ········@lanl.gov
From: Richard A. O'Keefe
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5r6hsj$bf9$1@goanna.cs.rmit.edu.au>
William Clodius <········@lanl.gov> writes:

>Nicholas Arthur Ambrose wrote:
>> <snip>
>>    I thought that Fortran77 didn't use the stack. I believe It also
>> doesn't allow recursion without some programming trickery.
>> Nick

>Richard O'Keefe may correct me on this, but I believe that the primary
>reason for the addition of the SAVE statement to Fortran 77 was to allow
>implementations to use stacks in a reliable manner. Technically I
>believe Fortran 66 could be implemented using stacks, but entities
>became undefined when they left their scope, so that local entites could
>not be used in a standard conforming manner to retain state. However,
>because most implementations did not use stacks, a significant body of
>code was written that assumed that local entities always retained their
>values, making it difficult, of course, for compilers to use stacks and
>still satisfy their customers. It is true, however, that stacks were not
>as useful in Fortran 77 as they were for languages with recursion or
>block scoping, let alone more explicitly stack based languages such as
>Forth or Pop.

I would phrase this differently.

The Fortran 66 specification was very carefully crafted to allow
overlays to work.  (Overlays were extremely important at the time.)
In particular, it was important that when a subroutine (which might
never be called again) exited, any data that had been brought into
memory for its use should be allowed to vanish.  So the rule was
that if there are no subprograms active that mention a particular
COMMON block, that COMMON block just plain doesn't exist, and if a
subprogram isn't active, it's local variables don't exist either
(so their values don't have to be written back to the disc when
the overlay containing that subprogram is dropped).

A consequence of this concern for overlays was that Fortran 66 could
be implemented using stacks throughout, and several Fortran 66 implementations
(notably the one from Burroughs for the B6700, a _very_ nice Fortran for its
day) did in fact do this.  This potential was particularly important for
multithreaded use of Fortran.  Burroughs Fortran was multithreaded back in
the '60s, and the PrimeOS operating system (or systems; did PrimeOS for the
P300 and PrimeOS for the 50 series have much in common?) was actually written
in Fortran.

The _problem_ was that a lot of other Fortran 66 implementations didn't,
notably the ones for the IBM mainframes.  I have seen _really_ weird code
where you would call a subroutine passing 20 parameters, once, and then
repeatedly call an entry point of the subroutine, passing only one or two
parameters.  People were expecting the old parameters to retain the value
they had last time!  Many Fortran programmers never bothered to read (the
really rather clear and useful) Fortran standard, and were unaware that
their code rather flagrantly failed to conform in this area, just as many
Fortran programmers assumed that 'one-trip' loops were required by the
standard (which had carefully avoided saying anything about them).

So yes, SAVE was added so that old broken code could be repaired and made
to work in stack-oriented implementations.  (Like the UNIX Fortran compilers
I use from time to time.)

>As to recursion, I don't know anyone who wrote standard conforming
>recursive Fortran 77 procedures,

They couldn't.  A *program* that conforms to the Fortran 66 or Fortran 77
standards *must not* use recursion.  However, an *implementation* that
conforms to those standards is not required to diagnose it as an error and
may implement recursion.  Several Fortran 66 implementations did, and many
Fortran 77 implementations do.  (Like those UNIX Fortran compilers.)
Fortran 90 _requires_ support for recursion, and one level of block scope.

-- 
Four policemen playing jazz on an up escalator in the railway station.
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
From: Dima Zinoviev
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <53wwmh9xa9.fsf@pavel.physics.sunysb.edu>
In article <·················@interdyn.com> Nicholas Arthur Ambrose <·····@interdyn.com> writes:

>      I thought that Fortran77 didn't use the stack. I believe It also
>   doesn't allow recursion without some programming trickery.
>   Nick

No it did not (and still does not :-) Once a friend of mine was given
an assignment to implement a recursive algorithm in Fortran, so he had
to emulate the stack using several arrays (remember -- Fortran does
not have structures, either!)

Besides the simplicity of the implementation, there was one more
reason for not using the stack: the speed. Indeed, when C or Lisp make
a function call, a lot of run-time work is done on the
stack, while in Fortran, a function call gets translated into a single
'jump' instruction.
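
The trick amounts to something like the following sketch (shown in C-style
code rather than Fortran, purely to show arrays standing in for the
missing call stack):

    /* no recursion: an array plays the part of the call stack */
    int fact_with_arrays(int n) {            /* fine for small n */
        int pending[64];                     /* the hand-made "stack" */
        int top = 0;
        int result = 1;

        while (n > 1)                        /* "push" the recursive calls */
            pending[top++] = n--;

        while (top > 0)                      /* "pop" them, do the deferred work */
            result *= pending[--top];

        return result;
    }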



-- 
Keep talking!                                    /~~~~~~~~~~~~~~~
Dmitry Zinoviev                                 / /~~~~~~~~/     
                                               /  `~~~~~~~'     
_From the Other Side of the World ____________/  Long Island, NY 
From: Barry Margolin
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5r65nv$40n@tools.bbnplanet.com>
In article <··············@pavel.physics.sunysb.edu>,
Dima Zinoviev <······@pavel.physics.sunysb.edu> wrote:
>Besides the sipmlicity of the implementation, there was one more
>reason for not using the stack: the speed. Indeed, when C or Lisp make
>a function call, a lot of run-time job is being done on the
>stack, while in Fortran, a function call gets translated into a single
>'jump' instruction.

Not quite true.  Even though there's no stack, the parameters still have
to be filled in.  The difference is that this can be done by assigning to
fixed addresses rather than as an offset from the stack pointer.  On early
computers, the performance difference might have been noticeable, but it
would be in the noise these days.

Also, the return address has to be saved somewhere, since a subroutine or
function can be called from multiple places.  Again, the difference is that
there can be a fixed location for each procedure's return address
information, since procedures don't have to be reentrant.  Many early CPU's
had an instruction that would store the old PC in the target address and
then jump to the address following it (on the PDP-8 this was the "JMS
<address>" instruction); a return would be done by doing an indirect jump
through the procedure's starting address (on the PDP-8, "JMP I <address>").
As recursive programming languages and "pure" text pages gained popularity,
this technique lost its utility.

The main advantage of the Fortran-77 model is that the compiler could
determine the total memory requirements of the program.  With recursion
available, there's generally no way to determine the amount of memory that
will be needed for the stack.  Again, this was much more important in the
early days, when memory was extremely limited.

-- 
Barry Margolin, ······@bbnplanet.com
BBN Corporation, Cambridge, MA
Support the anti-spam movement; see <http://www.cauce.org/>
From: Richard A. O'Keefe
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5r6ii9$cfg$1@goanna.cs.rmit.edu.au>
······@pavel.physics.sunysb.edu (Dima Zinoviev) writes:
>Besides the sipmlicity of the implementation, there was one more
>reason for not using the stack: the speed. Indeed, when C or Lisp make
>a function call, a lot of run-time job is being done on the
>stack, while in Fortran, a function call gets translated into a single
>'jump' instruction.

This is back to front.  On modern machines, and even on the machines that
were current when F77 was finalised, putting variables on the stack is/was
as fast as OR FASTER THAN making them global.  "in Fortran" a function
call gets translated into whatever the compiler wants to translate it to.
The one thing it _can't_ be is just a jump, because it does have to figure
out how to come back!

To make this really obvious, consider the fact that you would like to
bind local variables to registers for speed.  The rules of Fortran 66
and Fortran 77 (in the absence of a SAVE statement) say that you
_don't_ have to load the register from memory on entry and you _don't_
have to store it back on exit.  There's a load and a store per variable
saved.

Also consider the IBM 360.  On that machine, the hardware does not do
absolute addressing.  If you want to refer to a statically allocated
variable, you have to have a base register pointing nearby.  And
the instruction format only lets you have an offset of 0..4k from that
base register.  Anything you can allocate on the stack reduces the
demand for extra base registers in _all_ the subprograms.  Modern
machines have bigger offsets in their addresses, but not a _lot_ bigger.

-- 
Four policemen playing jazz on an up escalator in the railway station.
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
From: Reginald S. Perry
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <sxlsox2n96p.fsf@yakko.zso.dec.com>
······@pavel.physics.sunysb.edu (Dima Zinoviev) writes:

> In article <·················@interdyn.com> Nicholas Arthur Ambrose <·····@interdyn.com> writes:
> 
> >      I thought that Fortran77 didn't use the stack. I believe It also
> >   doesn't allow recursion without some programming trickery.
> >   Nick
> 
> No it did not (and still does not :-) Once a friend of mine was given
> an assignment to implement a recursive algorithm in Fortran, so he had
> to emulate the stack using several arrays (remember -- Fortran does
> not have structures, either!)
> 
> Besides the sipmlicity of the implementation, there was one more
> reason for not using the stack: the speed. Indeed, when C or Lisp make
> a function call, a lot of run-time job is being done on the
> stack, while in Fortran, a function call gets translated into a single
> 'jump' instruction.

This can't be true in general. Where do the procedure arguments go if
you have more than can be stored in registers? On the stack,
right?


-Reggie

-------------------
Reginald S. Perry                      e-mail: ·····@zso.dec.com   
Digital Equipment Corporation
Performance Manager Group	               
http://www.UNIX.digital.com/unix/sysman/perf_mgr/

The train to Success makes many stops in the state of Failure.
From: ········@wat.hookup.net
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5rdenk$l8g$2@nic.wat.hookup.net>
In <···············@yakko.zso.dec.com>, ·····@yakko.zso.dec.com (Reginald S. Perry) writes:
> ... 
>> No it did not (and still does not :-) Once a friend of mine was given
>> an assignment to implement a recursive algorithm in Fortran, so he had
>> to emulate the stack using several arrays (remember -- Fortran does
>> not have structures, either!)

This is an implementation decision.  There is no problem allocating local
variables on the stack as long as they need not preserve their values
between invocations of a function.
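
For concreteness, here is a minimal C sketch of the array-as-stack trick
the quoted poster describes having to use in Fortran.  The function
name, the fixed depth limit and the choice of factorial are all invented
for the illustration; factorial is of course trivial, but the same
pattern carries over to algorithms that genuinely need a stack.

#include <stdio.h>

#define MAXDEPTH 64

long fact_with_array_stack(int n)
{
    int  pending[MAXDEPTH];        /* arguments of the "calls" still open */
    int  top = 0;
    long result = 1;

    while (n > 1)                  /* the "push" phase: simulated calls   */
        pending[top++] = n--;

    while (top > 0)                /* the "pop" phase: simulated returns  */
        result *= pending[--top];

    return result;
}

int main(void)
{
    printf("10! = %ld\n", fact_with_array_stack(10));   /* 3628800 */
    return 0;
}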

>> Besides the simplicity of the implementation, there was one more
>> reason for not using the stack: the speed. Indeed, when C or Lisp make

Depends on the way addressing works on the target hardware.  There are
quite a few machines around where a stack would actually be faster than
static storage.

>> a function call, a lot of run-time work is done on the
>> stack, while in Fortran, a function call gets translated into a single
>> 'jump' instruction.
>
>This can't be true in general. Where do the procedure arguments go if
>you have greater than what can be stored in registers? On the stack
>right?

You can assign static storage for the function parameters and the caller
copies them there (not that I would recommend it, but it is possible).
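
A hedged C sketch of that scheme, with invented names: the caller copies
each argument into a fixed slot belonging to the callee, and the callee
reads its "parameters" from those slots rather than from a stack frame.
It works, but it is neither recursive nor reentrant, since a nested call
would overwrite the slots -- one good reason not to recommend it.

#include <stdio.h>

/* the callee's argument block, in static storage */
static double sumsq_arg_x, sumsq_arg_y;

static double sumsq_static(void)
{
    return sumsq_arg_x * sumsq_arg_x + sumsq_arg_y * sumsq_arg_y;
}

int main(void)
{
    /* the caller copies the arguments into the static slots first */
    sumsq_arg_x = 3.0;
    sumsq_arg_y = 4.0;
    printf("%f\n", sumsq_static());    /* prints 25.000000 */
    return 0;
}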

Hartmann Schaffer
From: Richard A. O'Keefe
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <5r4b22$kjh$1@goanna.cs.rmit.edu.au>
Sajid Ahmed the Peaceman <········@capital.net> writes:
>Gareth McCaughan wrote:
>> Name three languages that require all recursive function calls
>> to cause the function to be recompiled. In fact, name one.

>	Come to think of it, I can't remember any off hand. I
>don't quite remember if Basic did this. 

There is nothing in any BASIC standard to require this.
Traditionally, "GOSUB" and "RETURN" manipulated a control stack
just like you'd expect.  Multi-line functions in BASIC are, if
compiled, compiled just like functions in Pascal or C.

>	Anyway, I'm sure there are some language designs that don't
>use the stack when making calls to functions.

Instead of repeating your assertion, PROVIDE SOME EVIDENCE FOR IT!

There have certainly been languages that didn't use *a* stack for
function calls, such as Simula 67, Interlisp, Burroughs Algol, &c.
That's because they supported multiple threads of control, so there
were multiple stacks, or cactus stacks, or spaghetti stacks.

>They would expand them 
>inline like a macro definition. When the code would finally be 
>compiled, there would be recompilations of the function calls. 

The Algol 60 standard *described* all procedure calls using the "copy rule".
Every Algol 60 *implementation* I've ever heard of used a stack of
activation records, just like Pascal or C.

Even dBase III isn't defined to copy functions when they're called!
-- 
Four policemen playing jazz on an up escalator in the railway station.
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
From: Robert Monfera
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <33CEEDFA.2626@interport.net.removethisword>
Sajid Ahmed the Peaceman wrote:

>         Every recursive function, whether in LISP, Prolog, C++, ML, or
> any other language, is translated into iterative assembly (machine)
> language code.

And?

(You remember Backus's contribution to Fortran? And his later invention
of FFP as a corrective step? If von Neumann had lived longer, he'd have
changed processor history in a similar way, IMO.)

(Non-Lisp programmers, sorry for the nested parentheses.)

Robert
From: ? the platypus {aka David Formosa}
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <869227516.553837@cabal>
In <············@capital.net> Sajid Ahmed the Peaceman <········@capital.net> writes:

>? the platypus {aka David Formosa} wrote:

[...]

>> > think that lisp functions the same way as mathematics.
>> 
>> Thats what makes lisp so easy to use.

>	Well, in some circumstances, but if you try to write every program 
>using only recursive code, it makes things much much more difficult. 

Did I say anything about using only recursive code?  Because of Lisp's
closeness to mathematics it is easy to write mathematical code.  In fact
it is easy to prove Lisp code correct.

[...]

>> You seem to be implying that every recursive call gets recompiled.

>	That is true in some languages, but not true in others. 

No compiler writer worth hir salt would do this.

[...]

>	Every recursive function, whether in LISP, Prolog, C++, ML, or
>any other language, is translated into iterative assembly (machine)
>language code.

This is not true; in fact it is perfectly possible to write recursive
assembly code directly.
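
A small C illustration of why the blanket claim cannot hold (the struct
and the function are invented for the example): a non-tail-recursive
function like this one needs per-call state -- the two partial results
-- so there is no straightforward loop translation short of building an
explicit stack, and a typical compiler simply emits a genuinely
recursive call.

#include <stddef.h>

struct node { struct node *left, *right; };

int height(const struct node *t)
{
    int hl, hr;

    if (t == NULL)
        return 0;
    hl = height(t->left);          /* recursive, and not in tail position */
    hr = height(t->right);
    return 1 + (hl > hr ? hl : hr);
}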

--
Please excuse my spelling as I suffer from agraphia see the url in my header. 
Never trust a country with more peaple then sheep. Buy easter bilbies.
Save the ABC Is $0.08 per day too much to pay?   ex-net.scum and proud
I'm sorry but I just don't consider 'because its yucky' a convincing argument
From: Michael Schuerig
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <1997071818042014409@rhrz-isdn3-p26.rhrz.uni-bonn.de>
Sajid Ahmed the Peaceman <········@capital.net> wrote:

>   Well, in some circumstances, but if you try to write every program 
> using only recursive code, it makes things much much more difficult. 

Would you mind having a look at some actual Lisp code?

Michael

--
Michael Schuerig           Opinions are essentially bets on the truth of
·············@uni-bonn.de   sentences in a language that you understand.
http://www.uni-bonn.de/~uzs90z/                       -Daniel C. Dennett
From: Martin Cracauer
Subject: Re: Lisp is *SLOW*
Date: 
Message-ID: <1997Jul15.060533.20793@wavehh.hanse.de>
[Newsgroups trimmed]

Sajid Ahmed the Peaceman <········@capital.net> writes:

>	Lisp is a deception. All lisp compilers and interpreters that 
>I've seen have been written in C, and run on top of a C program. 

The CMU implementation of Common Lisp (http://www.cons.org/cmucl/)
implements very few things in C.

Obviously, since we are running on top of a C program (the Unix kernel :-) 
we can't avoid controlling system resources through C calls, since
this is the only interface the kernel offers. The same applies to the
Motif interface.

Half of the garbage collectors are written in C, and so is the low-level
debugger.

Everything else - including the compiler (in the narrow sense), the
"real" debugger, the system building infrastructure, the object system
and the (X11-capable) editor - is written purely in Lisp. Not to
forget the Xlib interface, which isn't a binding to the C Xlib, but is
implemented purely in Lisp on top of a simple socket layer.

Martin
-- 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%% ···············@wavehh.hanse.de http://cracauer.cons.org %%%%%%%%%%%
%%
%% Unwanted commercial email will be billed at $250. You agree by sending.
%%
%% If you want me to read your usenet messages, don't crosspost to more 
%% than 2 groups (automatic filtering).
From: ····@erols.com
Subject: Re: But Lisp is *SLOW* Compared to C/C++
Date: 
Message-ID: <869021193.5012@dejanews.com>
(prompt "AND YET... ")

Acad, one of the biggie commercial apps, uses an interpreted dialect of
LISP as its main customization language -- even though it is itself
written in C.  Yes, it offered a C customization extension a few
releases back, BUT!  You need LISP to load and intercede between the C
core of Acad and the C functions you created.

 ... And slow interpreted languages are, conversely, faster to write in
than speedier compiled ones.

Never underestimate the programmer's abilities as a _major_ factor in
execution speed.

In article <············@cdn-news.telecom.com.au>,
  ·····@vus002.telecom.com.au (Satan - The Evil 1) wrote:
> [original article quoted in full -- snipped]

-------------------==== Posted via Deja News ====-----------------------
      http://www.dejanews.com/     Search, Read, Post to Usenet