From: Joshua M Yelon
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <C1oo5L.AB9@news.cso.uiuc.edu>
A lot of these discussions have focused around syntax.  I don't think
that's why Lisp isn't a mainstream language.  As usual, I think the
main reason is efficiency.

Unfortunately, I believe that Common Lisp will never be made efficient.
It just has too many things in the language that prevent efficiency.
Here is a random smattering:

* Lisp is full of pointers - everything in lisp requires extra
pointer dereferences.  For example, declaring an array of structs
in C gives you a big chunk of memory, containing a whole bunch of
structs end-to-end.  In lisp, this gives you a big chunk of memory
containing pointers to separately allocated structs.  This may have
some advantages, but it requires an extra pointer dereference each
time you access a struct.

* Lisp is too big to optimize it all.  When a language is small, like C,
it is possible to design optimization algorithms that correctly deal
with all the constructs.  Lisp is simply too big to optimize all of it;
it would simply take too many man-years.

* The lisp type system is weak.  It is very hard to make enough assertions
using the lisp type system.  For example, consider this code, which
appears to be all-fixnum-math:

    (declare (fixnum x))
    (declare (fixnum y))
    (declare (fixnum z))
    (setf x (- (* x y) z))

but according to the definitions in the lisp type system, the result of
the multiplication could be a bignum, which might be reduced back to
a fixnum by the subtraction.  So even in what appears to be fully 
fixnum-declared code, the compiler must allow for bignums.

* The lisp type-system is not very expressive.  For example, I cannot
declare that X is a list of fixnums.  This prevents the compiler from
making fixnum assumptions about (car x).  Again, I have to pour on a lot
of declarations in order to get proper performance.

* Global variables in Lisp will never be typed.  For example, suppose I
say: (defvar *foo*) (proclaim '(fixnum *foo*)).  The compiler cannot
change the representation of the value in *foo* to an efficient fixnum
representation - it must store it as a type T.  Why?  Because any
reference to (symbol-value x), if X happens to be '*foo*, must return
the value properly.

* It is very hard to get lisp to vectorize a case-statement.  If the
labels of a case statement are symbols, they are not necessarily at
contiguous addresses, and therefore vectorization is impossible.  If
the labels are integers, the vectorization is easy, but the code is
unreadable.  In C, it is possible to use an enum which is both readable
and vectorizable.

* Lisp "tree-shakers" are up against too much to be really effective.
Using eval will break them completely.  Even using something as simple
as "(remove e l)" will force the linker to include the subroutine that
removes elements from lists, the subroutine that removes elements from
arrays, the subroutine that removes bits from bitvectors, and the
subroutine that removes characters from strings.  (Of course, you could
implement these all as a single subroutine performing abstract low-level
operations on sequence types, but this requires a lot of overhead, and
greatly slows down the subroutine, so it isn't really acceptable.)

* Lisp closures effectively prohibit the implementation of static scoping
through a register-display.

* The existence of the garbage collector forces all operations to keep
values in a format that the garbage collector understands.  This
effectively prohibits the use of many optimization techniques, such as
storing pointer values in available non-address registers.

The list keeps going, but I have to leave...

- Josh

From: Richard Fateman
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <1kh8ug$e3m@agate.berkeley.edu>
In article <··········@news.cso.uiuc.edu> ·······@ehsn11.cen.uiuc.edu (Joshua M Yelon) writes:
>
>* Lisp is too big to optimize it all. When a language is small, like C,
>it is possible to design optimization algorithms that correctly deal
>with all the constructs.  Lisp is simply too big to optimize all of it,
>it would simply take too many man-years.

There is no need to optimize all of it.  All that is needed is to optimize
those parts that are performance critical.  So far as I can tell, these
include function call/return; fixed size arithmetic (fixnum, floats);
array access; maybe I/O?  At least some of this has already been done.

One system I've been trying out (Allegro CL) seems to produce some arithmetic
code sequences about as good as Sun Micro's f77 (-fast optimization).
It does, however, require careful declarations for the Lisp, and some
restraint on the part of the Lisp programmer in the use of the language.

>
>* The lisp type system is weak.  It is very hard to make enough assertions
>using the lisp type system.  For example, consider this code, which
>appears to be all-fixnum-math:
>
>    (declare (fixnum x))
>    (declare (fixnum y))
>    (declare (fixnum z))
>    (setf x (- (* x y) z))
>
>but according to the definitions in the lisp type system, the result of
>the multiplication could be a bignum, which might be reduced back to
>a fixnum by the subtraction.  So even in what appears to be fully 
>fixnum-declared code, the compiler must allow for bignums.

This code is not fully declared.  This declaration could repair your
problem, with a good enough compiler (e.g. Allegro CL)

  (setf x (the fixnum (- (the fixnum (* x y)) z)))

;; note there is also no guarantee that the negative of a fixnum is a fixnum..

A way around the problem with globals of type float may be to make them
globals of type (simple-array single-float (1)).


-- 
Richard J. Fateman
·······@cs.berkeley.edu   510 642-1879
From: Joshua M Yelon
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <C1qJy0.6oA@news.cso.uiuc.edu>
·······@peoplesparc.Berkeley.EDU (Richard Fateman) writes:

>In article <··········@news.cso.uiuc.edu> ·······@ehsn11.cen.uiuc.edu (Joshua M Yelon) writes:
>>
>>* Lisp is too big to optimize it all. When a language is small, like C,
>>it is possible to design optimization algorithms that correctly deal
>>with all the constructs.  Lisp is simply too big to optimize all of it,
>>it would simply take too many man-years.

>There is no need to optimize all of it.  All that is needed is to optimize
>those parts that are performance critical.  So far as I can tell, these
>include function call/return; fixed size arithmetic (fixnum, floats);
>array access; maybe I/O?  At least some of this has already been done.

Well, that's ok if those are the parts you're using.  But what if they
aren't?

I'll give you an example - I once wrote a theorem prover.  Skipping
the details, I needed a data structure (call it a "PredCalcVar")
supporting three operations:

    * it needed a value slot that I could fetch and store,
    * it needed a way to mark the value-slot as empty,
    * I needed to be able to check (typep x 'predcalcvar).

These seemed to be exactly the sorts of operations for which I could
use symbols - the value slot would be (SYMBOL-VALUE X), and I could
check for emptiness using (BOUNDP X), and I could check the type with (SYMBOLP X).

Unfortunately, fetching the value slot using symbol-value turned out
to be tremendously slow - a look at the compiler output for symbol-value
revealed an explicit function call to the symbol-value function (rather
than just a pointer dereference).  To make matters worse, the function
itself contained a check for boundp, which was completely unneeded
given that I was protecting against this myself.

Apparently, somebody felt that explicit calls to the symbol-value
function did not count among the "parts that are performance
critical".  Indeed, a perusal of a large library of lisp code to which
I have access turned up only one call to symbol-value, and that
wasn't in an important place.  So, the designers of the lisp _were_
justified in focusing their attention elsewhere - except when it
came to my program.

So, I decided not to use symbols.  I switched over to
(defstruct predcalcvar value), although that meant certain headaches -
for example, I was forced to write my own simple package-system for
predcalcvars, since predcalcvars must also be interned.

It turned out that the operation (PREDCALCVAR-VALUE x) was much faster
than (SYMBOL-VALUE X) - it had been inlined - but the operation
(PREDCALCVAR-P X) was much slower than (SYMBOLP X)!  Apparently, the
system had to scan down the type inheritance hierarchy before it
decided that an object wasn't a predcalcvar.  Of course, this too could
have been optimized out - but somebody apparently must have decided
that other things were more important.

Perhaps the worst thing about this was that there was nothing I could
do.  I couldn't rewrite the code myself, using lower-level, more
carefully coded operations.  In the end, I was simply stuck.

Hence my point about lisp being just too big to optimize.  It may be
that the most common half of the language is well-optimized - but
if you need something from the other half, there's nothing you can do.

>>* The lisp type system is weak.  It is very hard to make enough assertions
>>using the lisp type system.  For example, consider this code, which
>>appears to be all-fixnum-math:
>>
>>    (declare (fixnum x))
>>    (declare (fixnum y))
>>    (declare (fixnum z))
>>    (setf x (- (* x y) z))
>>
>>but according to the definitions in the lisp type system, the result of
>>the multiplication could be a bignum, which might be reduced back to
>>a fixnum by the subtraction.  So even in what appears to be fully 
>>fixnum-declared code, the compiler must allow for bignums.

>This code is not fully declared.  This declaration could repair your
>problem, with a good enough compiler (e.g. Allegro CL)
>
>  (setf x (the fixnum (- (the fixnum (* x y)) z)))

I realize that the code wasn't fully declared; I just said that it
"appears to be fully fixnum-declared".  In order to get fully-declared
code, one typically does what you did: you write "the fixnum" next to
every operator.  That's an absurd amount of work, just to get it to
use integers.  Which is what I meant when I said "It's hard to make
enough assertions".  Imagine telling a C programmer that he had
to write the words "the fixnum" next to every + symbol...
From: david kuznick
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <1993Feb1.130152.5187@titan.ksc.nasa.gov>
In article <··········@news.cso.uiuc.edu>, ·······@ehsn11.cen.uiuc.edu (Joshua M Yelon) writes:
 
|>·······@peoplesparc.Berkeley.EDU (Richard Fateman) writes: 
|>>This code is not fully declared.  This declaration could repair your
|>>problem, with a good enough compiler (e.g. Allegro CL)
|>>
|>>  (setf x (the fixnum (- (the fixnum (* x y)) z)))
|>
|>I realize that the code wasn't fully declared; I just said that it
|>"appears to be fully fixnum-declared".  In order to get fully-declared
|>code, one typically does what you did: you write "the fixnum" next to
|>every operator.  That's an absurd amount of work, just to get it to
|>use integers.  Which is what I meant when I said "It's hard to make
|>enough assertions".  Imagine telling a C programmer that he had
|>to write the words "the fixnum" next to every + symbol...
|>
|>
Ahhh.  But what's wrong with defining a macro like FIXNUM+ that puts the "the"s
in for you?

--
**
David Kuznick
·······@meglos.mdcorp.ksc.nasa.gov
MUTLEY! Do something!  - D.D.
From: Joshua M Yelon
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <C1sB8J.6sM@news.cso.uiuc.edu>
·······@meglos.mdcorp.ksc.nasa.gov (david kuznick) writes:

>Ahhh.  But what's wrong with defining a macro like FIXNUM+ that puts
>the "the"s in for you?

You still have to write "fixnum" next to every operator - which is absurd.

By the way, a better solution might be:

(defmacro using-fixnum-math (&rest body)
    `(locally
        (declare (ftype (function (fixnum fixnum) fixnum) +))
        (declare (ftype (function (fixnum fixnum) fixnum) -))
        (declare (ftype (function (fixnum fixnum) fixnum) *))
        (declare (ftype (function (fixnum fixnum) fixnum) /))
        (declare (ftype (function (fixnum fixnum) fixnum) LOGAND))
        (declare (ftype (function (fixnum fixnum) fixnum) LOGIOR))
        ...
        ...
        ,@ body))

That would reduce the problem to just declaring the variables.
From: John D. Burger
Subject: Re: FIXNUM arithmetic (Was Why Isn't Lisp a Mainstream ...)
Date: 
Message-ID: <1993Feb2.233012.21033@linus.mitre.org>
An example like:

  (declare (type fixnum x y))
  (+ (* x x) y)

isn't FIXNUM arithmetic.  I presume people realize this; they simply
prefer the risk of an overflow to the runtime overhead of potential
BIGNUM arithmetic.  My personal preference runs the other way.

Note that if you know the limits of your numbers, many compilers do the
appropriate type inference (e.g. Allegro and CMU CL).  So you could
declare X and Y to be of type (INTEGER 0 100) instead of FIXNUM.  I do
this often, usually defining a new type such as SMALL-COUNTING-NUMBER. 
The above example runs an order of magnitude faster in Allegro using
(INTEGER 0 100) as opposed to FIXNUM, while the code is less than half
the size.

John Burger                                         ····@mitre.org
From: Espen J. Vestre
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <1kquvcINNbpl@coli-gate.coli.uni-sb.de>
In article <············@coli-gate.coli.uni-sb.de> Espen J. Vestre,
·····@coli.uni-sb.de writes:
>I defined the following function
>  (defun deref-sym-chain (symb)
>    (when symb
>      (deref-sym-chain (symbol-value symb))))
>and let it dereference a chain of 3000 symbols (ending in nil).  The
>execution time was 0.18 seconds.  I wouldn't call _that_ tremendously
>slow, although a similar dereference operation for your suggested
>structure objects needed "only" 0.12 seconds.

I was very tired when I wrote this :-)

I missed one zero in each timing...  Anyway, I tried something similar
again, and it seems like the structures really are the winners.  Here's
the results:
(defun deref-sym-chain (symb)
  (if (boundp symb)
    (deref-sym-chain (symbol-value symb))
    symb))

(defun deref-str-chain (str)
  (if (var-bound str)
    (deref-str-chain (var-val str))
    (var-val str)))

? (time (dotimes (i 100) (deref-str-chain str-chain)))
(DOTIMES (I 100) (DEREF-STR-CHAIN STR-CHAIN)) took 408 milliseconds
(0.408 seconds) to run.
Of that, 2 milliseconds (0.002 seconds) were spent in The Cooperative
Multitasking Experience.
NIL
? (time (dotimes (i 100) (deref-sym-chain sym-chain)))
(DOTIMES (I 100) (DEREF-SYM-CHAIN SYM-CHAIN)) took 3207 milliseconds
(3.207 seconds) to run.
Of that, 80 milliseconds (0.080 seconds) were spent in The Cooperative
Multitasking Experience.
NIL


(the chains to be dereferenced were again 3000 elements long, so both
operations are quite fast)

sorry for bothering you :-)
--------------------------------------------------------------
Espen J. Vestre,                          ·····@coli.uni-sb.de
Universitaet des Saarlandes,        
Computerlinguistik, Gebaeude 17.2 
Im Stadtwald,                          tel. +49 (681) 302 4501
D-6600 SAARBRUECKEN, Germany           fax. +49 (681) 302 4351
--------------------------------------------------------------
From: Espen J. Vestre
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <1km7hjINNa77@coli-gate.coli.uni-sb.de>
In article <··········@news.cso.uiuc.edu> Joshua M Yelon,
·······@ehsn11.cen.uiuc.edu writes:
>I'll give you an example - I once wrote a theorem prover.  Skipping

you "once" wrote?
Have you looked at the performance of new lisp implementations?

I tried to reconstruct your problems (using Macintosh Common Lisp 2.0p2),
but it seems they're gone:

>Unfortunately, fetching the value slot using symbol-value turned out
>to be tremendously slow - 

I defined the following function
  (defun deref-sym-chain (symb)
    (when symb
      (deref-sym-chain (symbol-value symb))))
and let it dereference a chain of 3000 symbols (ending in nil).  The
execution time was 0.18 seconds.  I wouldn't call _that_ tremendously
slow, although a similar dereference operation for your suggested
structure objects needed "only" 0.12 seconds.

>(PREDCALCVAR-P X) was much slower than (SYMBOLP X)!  

I can't reproduce this either, it rather looks like SYMBOLP is slightly
(but only slightly) slower than the structure-type-function.  Both are
fast, though.

--------------------------------------------------------------
Espen J. Vestre,                          ·····@coli.uni-sb.de
Universitaet des Saarlandes,        
Computerlinguistik, Gebaeude 17.2 
Im Stadtwald,                          tel. +49 (681) 302 4501
D-6600 SAARBRUECKEN, Germany           fax. +49 (681) 302 4351
--------------------------------------------------------------
From: Joshua M Yelon
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <C1v0F1.7En@news.cso.uiuc.edu>
Espen J. Vestre <·····@coli.uni-sb.de> writes:

>I defined the following function
>  (defun deref-sym-chain (symb)
>    (when symb
>      (deref-sym-chain (symbol-value symb))))
>and let it dereference a chain of 3000 symbols (ending in nil).  The
>execution time was 0.18 seconds.  I wouldn't call _that_ tremendously
>slow.

Well, actually, that _is_ pretty embarrassing performance.  Let me show
you why.

Let's compare your Common Lisp implementation to a dumb, non-optimizing C
compiler.  Here's the same program in C:

deref(symb)
    register list *symb;
    {
    while (symb) symb=symb->value;
    return 0;
    }

Ok: here is what should come out of that dumb, non-optimizing C compiler:

         move.l sp(10),a1     ; fetch symb off the stack into a register var.
test:    tst.l  a1            ;
         beq    endloop       ;
         mov.l  a1(4), sp(12) ;
         mov.l  sp(12), a1    ;
         bra    test          ;
endloop:

Notice that I am assuming a very dumb compiler here - it didn't invert the
loop, it used a temp variable sp(12) during the pointer dereference, and it
didn't allocate that temp variable in a register, it put it on the stack,
and it didn't notice that the tst.l instruction is unneeded.

So, how fast does that run?  Assuming an 8mhz 68000 (a really old Mac),
it takes a total of 4+6+28+16+10 = 64 cycles, which can be repeated
125,000 times per second.

In comparison to that, the 15,000 per second that your Common Lisp
implementation can do is pretty silly-looking.

If we were to put the C program into a modern C compiler, we'd get this
inner loop:

    loop: mov.l  a2(4), a2
          bne    loop

which takes 26 cycles, and can be repeated 300,000 times a second.

That's if you have a slow mac.  If you have a fast mac, then things
are much worse than I thought...

Now, a final word.  This isn't CLtL's fault - it's the fault of your
implementation.  I have seen a lisp (akcl) that came within a factor
of two of the optimized C code, after I patched the compiler a little
bit to help it out (I had to teach it how to optimize symbol-value).
Better is theoretically possible... but the limitations I stated
earlier still apply.

With the exception of the vectorized switches... somebody showed me
how to talk a lisp into vectorizing a switch, without sacrificing
readability!  Take a look:

    (defconstant APPLE 1)
    (defconstant BANANA 2)
    (defconstant ORANGE 3)

    (case fruit
        (#.APPLE ...)
        (#.BANANA ...)
        (#.ORANGE ...))

Thanks to whoever pointed this out to me... I wish I remembered
who it was.
From: Wade Hennessey
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <WADE.93Feb2230536@kobold.sunrise.stanford.edu>
In article <··········@news.cso.uiuc.edu> ·······@ehsn11.cen.uiuc.edu (Joshua M Yelon) writes:
   Date: Wed, 3 Feb 1993 06:23:24 GMT

   Espen J. Vestre <·····@coli.uni-sb.de> writes:

   >I defined the following function
   >  (defun deref-sym-chain (symb)
   >    (when symb
   >      (deref-sym-chain (symbol-value symb))))
   >and let it dereference a chain of 3000 symbols (ending in nil).  The
   >execution time was 0.18 seconds.  I wouldn't call _that_ tremendously
   >slow.

   Well, actually, that _is_ pretty embarrassing performance.  Let me show
   you why.

   Let's compare your Common Lisp implementation to a dumb, non-optimizing C
   compiler.  Here's the same program in C:

   deref(symb)
       register list *symb;
       {
       while (symb) symb=symb->value;
       return 0;
       }

   Ok: here is what should come out of that dumb, non-optimizing C compiler:

	    move.l sp(10),a1     ; fetch symb off the stack into a register var.
   test:    tst.l  a1            ;
	    beq    endloop       ;
	    mov.l  a1(4), sp(12) ;
	    mov.l  sp(12), a1    ;
	    bra    test          ;
   endloop:


I think that most CL implementations produce reasonable code for the 
DEREF-SYM-CHAIN function shown above.  For example, WCL produces the
following SPARC code:

	sethi %hi(_s_lsp_NIL+5),%g2
	or %g2,%lo(_s_lsp_NIL+5),%o0
L6:
	cmp %o1,%o0
	be L3
	nop
	b L6
	ld [%o1-1],%o1
L3:
	retl
	add %sp,0,%sp

I think that many CL implementations are capable of producing similar
code on such a simple example as long as the compiler is told that
speed is more important than safety. -wade
From: Espen J. Vestre
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <1krn0cINNh8g@coli-gate.coli.uni-sb.de>
In article <··········@news.cso.uiuc.edu> Joshua M Yelon,
·······@ehsn11.cen.uiuc.edu writes:
>So, how fast does that run?  Assuming an 8mhz 68000 (a really old Mac),
>it takes a total of 4+6+28+16+10 = 64 cycles, which can be repeated
>125,000 times per second.
>
>In comparison to that, the 15,000 that your Common Lisp implementation
>can do is pretty silly-looking.
>
>If we were to put the C program into a modern C compiler, we'd get this
>inner loop:
>
>    loop: mov.l  a2(4), a2
>          bne    loop
>
>which takes 26 cycles, and can be repeated 300,000 times a second.
>
>That's if you have a slow mac.  If you have a fast mac, then things
>are much worse than I thought...

ok, ok, ok.  When you started talking about _theorem_provers_, I forgot
that the thread was really about why Lisp isn't a mainstream language :-)

In the context of theorem proving, my semantics for "tremendously slow"
is of ...er... quite another magnitude!  (I don't deny that it's possible
to write faster theorem provers in C than in Lisp, btw.  It all depends
on how many man-months you are willing to invest to achieve that goal.  I
wonder, by the way, how many times named symbols have been reimplemented
in C...)

And yes: It _is_ a fast mac, a Quadra 700 (15x (?) the speed of that slow
mac), but as I said, I had messed things up a bit.  The real figures for
the given problem were:
symbol implementation: 203 000 ops/sec
struct implementation: 1 027 000 ops/sec
cons implementation: 1 176 000 ops/sec

>Now, a final word.  This isn't CLtL's fault - it's the fault of your
>implementation.  I have seen a lisp (akcl) that came within a factor

OK.  I agree that CLtL doesn't say that symbol-value HAS to SIGNAL an
error when given wrong arguments.  So the compiler could indeed have
produced faster code when given (optimize (speed 3)(safety 0)(debug 0)). 
But: Do you really want a program like a theorem prover to be sloppy
about error catching??

A final word from me too:  It's very easy to demonstrate how fast
well-programmed C programs might be.  This doesn't prove that there's no
place for Common Lisp in the Real World, though.  In fact, when I look
into the Real World, I see dozens upon dozens of ill-programmed, CPU- and
memory-sucking programs.  Some of the best-selling programs on this
planet belong to this category.....
--------------------------------------------------------------
Espen J. Vestre,                          ·····@coli.uni-sb.de
Universitaet des Saarlandes,        
Computerlinguistik, Gebaeude 17.2 
Im Stadtwald,                          tel. +49 (681) 302 4501
D-6600 SAARBRUECKEN, Germany           fax. +49 (681) 302 4351
--------------------------------------------------------------
From: Scott McKay
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <19930203154121.7.SWM@SUMMER.SCRC.Symbolics.COM>
    Date: Wed, 3 Feb 1993 01:23 EST
    From: Joshua M Yelon <·······@ehsn11.cen.uiuc.edu>

    Espen J. Vestre <·····@coli.uni-sb.de> writes:

    >I defined the following function
    >  (defun deref-sym-chain (symb)
    >    (when symb
    >      (deref-sym-chain (symbol-value symb))))
    >and let it dereference a chain of 3000 symbols (ending in nil).  The
    >execution time was 0.18 seconds.  I wouldn't call _that_ tremendously
    >slow.

    Well, actually, that _is_ pretty embarrassing performance.  Let me show
    you why.

    Let's compare your Common Lisp implementation to a dumb, non-optimizing C
    compiler.  Here's the same program in C:

Of course, your code is unsafe, if you admit the same semantics for a
symbol's value as for Lisp.  In Lisp's semantics, symb->value will not
always be another symbol.  So, while I agree with you that the Lisp
performance is far worse than it should be, you should be a little more
careful to compare apples with apples.

    deref(symb)
	register list *symb;
	{
	while (symb) symb=symb->value;
	return 0;
	}

    Ok: here is what should come out of that dumb, non-optimizing C compiler:

	     move.l sp(10),a1     ; fetch symb off the stack into a register var.
    test:    tst.l  a1            ;
	     beq    endloop       ;
	     mov.l  a1(4), sp(12) ;
	     mov.l  sp(12), a1    ;
	     bra    test          ;
    endloop:

    Notice that I am assuming a very dumb compiler here - it didn't invert the
    loop, it used a temp variable sp(12) during the pointer dereference, and it
    didn't allocate that temp variable in a register, it put it on the stack,
    and it didn't notice that the tst.l instruction is unneeded.

    So, how fast does that run?  Assuming an 8mhz 68000 (a really old Mac),
    it takes a total of 4+6+28+16+10 = 64 cycles, which can be repeated
    125,000 times per second.

Presumably a test for SYMBOLP could be done inline in about the same
number of cycles, meaning that a safe Lisp implementation of this could
achieve about 60,000 times per second.  Worse than C, but not by much.  

Furthermore, setting safety declarations down and speed declarations up
could probably get you back up to around 125,000 times per second as well.

I do not know if there are Lisp compilers that do this, but there is
certainly no good reason why they cannot.

    In comparison to that, the 15,000 that your Common Lisp implementation
    can do is pretty silly-looking.

    If we were to put the C program into a modern C compiler, we'd get this
    inner loop:

	loop: mov.l  a2(4), a2
	      bne    loop

    which takes 26 cycles, and can be repeated 300,000 times a second.

At the risk of leaving you with "Core dumped", if you use Lisp's
semantics for symbols.  The same techniques could yield similar
performance for Lisp code.  Again, I don't know how good the best Lisp
compilers are at this.

If your conclusion is, "Lisp is not a mainstream language because, in
part, all Lisp compilers suck", then I certainly agree with you.

    That's if you have a slow mac.  If you have a fast mac, then things
    are much worse than I thought...

    Now, a final word.  This isn't CLtL's fault - it's the fault of your
    implementation.  I have seen a lisp (akcl) that came within a factor
    of two of the optimized C code, after I patched the compiler a little
    bit to help it out (I had to teach it how to optimize symbol-value).
    Better is theoretically possible... but the limitations I stated
    earlier still apply.

    With the exception of the vectorized switches... somebody showed me
    how to talk a lisp into vectorizing a switch, without sacrificing
    readability!  Take a look:

	(defconstant APPLE 1)
	(defconstant BANANA 2)
	(defconstant ORANGE 3)

	(case fruit
	    (#.APPLE ...)
	    (#.BANANA ...)
	    (#.ORANGE ...))

    Thanks to whoever pointed this out to me... I wish I remembered
    who it was.

Hmm, I would have defined a DEFINE-ENUMERATION macro that expands into
a set of DEFCONSTANTs.  Lisp compilers are allowed to inline constants
at compile-time, so here is what my code would look like:

 (define-enumeration fruits
   APPLE
   ORANGE
   BANANA)
 
 (case a-fruit
   (APPLE ...)
   (ORANGE ...)
   (BANANA ...))
 
Hey, that looks a lot like it does in C.  What a coincidence! :-)
From: Joshua M Yelon
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <C1vt8u.Fn1@news.cso.uiuc.edu>
···@stony-brook.scrc.symbolics.com (Scott McKay) writes:

[ demonstrating how to do a vectorizeable switch-statement in lisp ]

>Hmm, I would have defined a DEFINE-ENUMERATION macro that expands into
>a set of DEFCONSTANTs.  Lisp compilers are allowed to inline constants
>at compile-time, so here is what my code would look like:
>
> (define-enumeration fruits
>   APPLE
>   ORANGE
>   BANANA)
> 
> (case a-fruit
>   (APPLE ...)
>   (ORANGE ...)
>   (BANANA ...))
> 
>Hey, that looks a lot like it does in C.  What a coincidence! :-)

Yeah, but it's wrong.  The case statement doesn't evaluate its labels.
Therefore, a-fruit is being compared to the symbols (APPLE ORANGE BANANA),
not to the numbers (1 2 3).

My earlier post suggested read-macros as a solution, but that was wrong
too - the compiler is under no obligation to evaluate the defconstants,
it must merely "analyze" them in its own internal way.  Here's a solution
which I _do_ think is right:

#.(progn
    (defconstant APPLE 1)
    (defconstant BANANA 2)
    ...)

(case a-fruit
    (#.APPLE ...)
    (#.BANANA ...)
    ...)

That way, the reader is doing the substitution of constants, and we can
ignore the complexities of whether this is happening at load or compile
time, and that whole can of worms.
From: Bill York
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <YORK.93Feb3161511@oakland-hills.lucid.com>
In article <··········@news.cso.uiuc.edu> ·······@ehsn11.cen.uiuc.edu (Joshua M Yelon) writes:

   ···@stony-brook.scrc.symbolics.com (Scott McKay) writes:

   [ demonstrating how to do a vectorizeable switch-statement in lisp ]

   >Hmm, I would have defined a DEFINE-ENUMERATION macro that expands into
   >a set of DEFCONSTANTs.  Lisp compilers are allowed to inline constants
   >at compile-time, so here is what my code would look like:
   >
   > (define-enumeration fruits
   >   APPLE
   >   ORANGE
   >   BANANA)
   > 
   > (case a-fruit
   >   (APPLE ...)
   >   (ORANGE ...)
   >   (BANANA ...))
   > 
   >Hey, that looks a lot like it does in C.  What a coincidence! :-)

   Yeah, but it's wrong.  The case statement doesn't evaluate its labels.
   Therefore, a-fruit is being compared to the symbols (APPLE ORANGE BANANA),
   not to the numbers (1 2 3).

   My earlier post suggested read-macros as a solution, but that was wrong
   too - the compiler is under no obligation to evaluate the defconstants,
   it must merely "analyze" them in its own internal way.  Here's a solution
   which I _do_ think is right:

   #.(progn
       (defconstant APPLE 1)
       (defconstant BANANA 2)
       ...)

   (case a-fruit
       (#.APPLE ...)
       (#.BANANA ...)
       ...)

   That way, the reader is doing the substitution of constants, and we can
   ignore the complexities of whether this is happening at load or compile
   time, and that whole can of worms.

Maybe Scott meant something more like:

;;; Substrate
(defmacro defenumeration (&rest items)
  `(eval-when (compile load eval)
     ,@(loop for item in items
	     for count from 0
	     collect
	     `(defconstant ,item ,count))))

(defmacro switch (candidate &rest clauses)
  `(case ,candidate
     ,@(loop for clause in clauses
	     for enumerate = (car clause)
	     do
	     (unless (boundp enumerate)
	       (warn "No value for enumeration item ~S" enumerate))
	     collect
	     `(,(symbol-value (first clause)) . ,(rest clause)))))

;;; User code
(defenumeration apple orange grape)

(defun fruit-association (fruit)
  (switch fruit
	  (apple 'teacher)
	  (grape 'wine)
	  (orange 'anita-bryant)))
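For illustration, here is a hand expansion of that SWITCH, assuming the
enumeration assigns APPLE=0, ORANGE=1, GRAPE=2 (a sketch, not actual
macroexpansion output):

```lisp
;; Hand expansion of the SWITCH above, assuming APPLE=0, ORANGE=1, GRAPE=2.
;; All case labels are now small contiguous integers, so the compiler is
;; free to compile the dispatch as a jump table.
(defun fruit-association (fruit)
  (case fruit
    (0 'teacher)
    (2 'wine)
    (1 'anita-bryant)))
```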
From: Simon Leinen
Subject: Re: CASE (was: Why Isn't Lisp a Mainstream Language?)
Date: 
Message-ID: <SIMON.93Feb5131853@liasg2.epfl.ch>
···@stony-brook.scrc.symbolics.com (Scott McKay) writes:

   Which brings me to CASE.  I continually forget that it really
   doesn't evaluate its arguments.  So I have to ask, *why* do I
   continually forget?  Well, I forget because (IMHO) we blew it on
   CASE -- it should evaluate constant forms at compile-time.

What about non-constant forms?

   I do not think I have ever written a single CASE form that wants to
   dispatch on symbol names, unless they are self-evaluating (i.e.,
   keywords).

One argument against your proposal is that the compile-time
dependencies would be hairier if constants were evaluated - see
the EVAL-WHEN that became necessary in the fixed version of your last
example.  If you force the user to say #. if she wants a case tag to
be evaluated at compile time, it is at least obvious that there is a
problem :-)

The only problem *I* have with CASE is that non-lists are accepted as
case tags.  This leads to people writing NIL as a case selector (seen
in CLUE), meaning the symbol NIL, which works under some
(non-X3J13-conforming) Lisps but breaks under others because these
interpret this as the empty list, as mandated by the standard.
Paranoid as I am, I ALWAYS put parentheses around my case selectors.
-- 
Simon.
From: Jeff Dalton
Subject: Re: CASE (was: Why Isn't Lisp a Mainstream Language?)
Date: 
Message-ID: <8398@skye.ed.ac.uk>
In article <····················@SUMMER.SCRC.Symbolics.COM> ···@stony-brook.scrc.symbolics.com (Scott McKay) writes:

>Which brings me to CASE.  I continually forget that it really doesn't
>evaluate its arguments.  So I have to ask, *why* do I continually
>forget?  Well, I forget because (IMHO) we blew it on CASE -- it should
>evaluate constant forms at compile-time.  I do not think I have ever
>written a single CASE form that wants to dispatch on symbol names,
>unless they are self-evaluating (i.e., keywords).
>
>    ;;; User code
>    (defenumeration apple orange grape)
>
>    (defun fruit-association (fruit)
>      (switch fruit
>	      (apple 'teacher)
>	      (grape 'wine)
>	      (orange 'anita-bryant)))

I often write case forms that dispatch on symbol names that are not
self-evaluating.

I don't want to use enumeration types instead.  I want symbols
in my data, not numbers that are symbols only in source code.

I don't mind if someone adds a new kind of case.  But I don't want
to swap it for the one we have.

-- jd
From: Dorai Sitaram
Subject: Re: CASE (was: Why Isn't Lisp a Mainstream Language?)
Date: 
Message-ID: <C2ytpp.DLu@rice.edu>
In article <····@skye.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <····················@SUMMER.SCRC.Symbolics.COM> ···@stony-brook.scrc.symbolics.com (Scott McKay) writes:
>
>>Which brings me to CASE.  I continually forget that it really doesn't
>>evaluate its arguments.  So I have to ask, *why* do I continually
>>forget?  Well, I forget because (IMHO) we blew it on CASE -- it should
>>evaluate constant forms at compile-time.  I do not think I have ever
>>written a single CASE form that wants to dispatch on symbol names,
>>unless they are self-evaluating (i.e., keywords).
>>
>>    ;;; User code
>>    (defenumeration apple orange grape)
>>
>>    (defun fruit-association (fruit)
>>      (switch fruit
>>	      (apple 'teacher)
>>	      (grape 'wine)
>>	      (orange 'anita-bryant)))
>
>I often write case forms that dispatch on symbol names that are not
>self-evaluating.
>
>I don't want to use enumeration types instead.  I want symbols
>in my data, not numbers that are symbols only in source code.
>
>I don't mind if someone adds a new kind of case.  But I don't want
>to swap it for the one we have.

I too often write case forms that dispatch on symbol names.
But an "evaluating" case (evcase) doesn't disallow this.

(evcase fruit
  ('apple 'teacher)
  ('grape 'wine)
  ('orange 'anita-bryant))

(The ' isn't the matter, is it?  Programmers typically quote
their use of symbols elsewhere, so quoting them in a case
clause shouldn't strike them as too bizarre.)

Is there any implementation-driven reason why non-evaluating
case is the form privileged (with primitive status) in
Lisps?  Like, it allows tighter code?  I can't see how, since
all such presumably easy cases of evcase can be caught by
mere syntactic inspection (compile-time).  Please correct me
if I'm missing something.
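For concreteness, EVCASE might be sketched as a macro over COND (this is
my own hypothetical definition, with EQL-style matching assumed; tag
forms are evaluated at run time, so quoted symbols, constants, or any
other expressions all work as tags):

```lisp
;; A sketch of an "evaluating" CASE.  Each clause tag is an arbitrary
;; expression evaluated at run time; OTHERWISE acts as a default clause.
(defmacro evcase (keyform &rest clauses)
  (let ((key (gensym "KEY")))
    `(let ((,key ,keyform))
       (cond ,@(loop for (tag . body) in clauses
                     collect (if (eq tag 'otherwise)
                                 `(t ,@body)
                                 `((eql ,key ,tag) ,@body)))))))

(evcase 'grape
  ('apple 'teacher)
  ('grape 'wine)
  (otherwise 'unknown))   ; => WINE
```

A compiler could notice, by syntactic inspection, that all the tags are
constants and emit the same code it would for an ordinary CASE.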

--d



-- 

·····@cs.rice.edu      !    It may be that the gulfs will wash us down;
·····@owlnet.rice.edu  !      it may be we shall touch the Happy Isles.
From: Kellomäki Pertti
Subject: Re: CASE (was: Why Isn't Lisp a Mainstream Language?)
Date: 
Message-ID: <PK.93Feb26121920@talitiainen.cs.tut.fi>
In article <··········@rice.edu> ·····@cs.rice.edu (Dorai Sitaram) writes:
   (The ' isn't the matter, is it?  Programmers typically quote
   their use of symbols elsewhere, so quoting them in a case
   clause shouldn't strike them as too bizarre.)

Many novice Scheme programmers do just this, and get code that works
but for other reasons than they think. I have seen a lot of code like

  (case direction
    ('north (do-the-north-thing))
    ('south (do-the-south-thing)))

which after it has been processed by read looks to the interpreter
like the syntactically correct form

  (case direction
    ((quote north) (do-the-north-thing))
    ((quote south) (do-the-south-thing)))

Strictly speaking this is not legal, as the symbol quote appears twice
in the literal lists, but I have not seen any compiler or interpreter
complain about it.
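The form the novices presumably intended lists the bare symbols as
(unevaluated) key lists.  A minimal runnable sketch, with the
DO-THE-*-THING calls replaced by hypothetical stand-in values:

```lisp
;; Correct use of CASE: each clause key is a *list* of unevaluated
;; symbols, so no QUOTE ever appears among the keys.
(defun heading (direction)
  (case direction
    ((north) 'going-north)
    ((south) 'going-south)
    (otherwise 'lost)))
```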
--
Pertti Kellom\"aki (TeX format)  #       These opinions are mine, 
  Tampere Univ. of TeXnology     #              ALL MINE !
      Software Systems Lab       #  (but go ahead and use them, if you like)
From: lawrence.g.mayka
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <LGM.93Feb4091819@cbnewsc.ATT.COM>
In article <··········@news.cso.uiuc.edu> ·······@ehsn11.cen.uiuc.edu (Joshua M Yelon) writes:

   #.(progn
       (defconstant APPLE 1)
       (defconstant BANANA 2)
       ...)

   (case a-fruit
       (#.APPLE ...)
       (#.BANANA ...)
       ...)

It should suffice to use EVAL-WHEN on the DEFCONSTANTs:

(eval-when (compile load eval)
  (defconstant APPLE 1)
  (defconstant BANANA 2)
  ...)

Even this measure is not necessary if the file containing the
DEFCONSTANTs has already been loaded by the time compilation of their
usage takes place.


        Lawrence G. Mayka
        AT&T Bell Laboratories
        ···@iexist.att.com

Standard disclaimer.
From: Michael Greenwald
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <michaelg.728587393@Xenon.Stanford.EDU>
·······@ehsn11.cen.uiuc.edu (Joshua M Yelon) writes:

>·······@peoplesparc.Berkeley.EDU (Richard Fateman) writes:

>>In article <··········@news.cso.uiuc.edu> ·······@ehsn11.cen.uiuc.edu (Joshua M Yelon) writes:
>>>

>It turned out that the operation (PREDCALCVAR-VALUE x) was much faster
>than (SYMBOL-VALUE X) - it had been inlined - but the operation
>(PREDCALCVAR-P X) was much slower than (SYMBOLP X)!  Apparently, the
>system had to scan down the type inheritance hierarchy before it
>decided that an object wasn't a predcalcvar.  Of course, this too could
>have been optimized out - but somebody apparently must have decided
>that other things were more important.

>Perhaps the worst thing about this was that there was nothing I could
>do.  I couldn't rewrite the code myself, using lower-level, more
>carefully coded operations.  In the end, I was simply stuck.

Once you had determined the performance issues, why couldn't you have
rolled your own data structure and hidden it using macrology: i.e. (if
it turned out to be faster on your system) represent them as symbols
and have PREDCALCVAR-P be a macro which just did a SYMBOLP.
Similarly, PREDCALCVAR-VALUE could be SYMBOL-VALUE with a declaration
declaring that its argument was always a symbol.  In many lisps that
would remove the excessive type checking.
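The suggested macrology might look like this (a sketch under the
assumption that predcalcvars are represented directly as symbols; the
names simply shadow the slow accessors):

```lisp
;; Hide the representation behind macros.  PREDCALCVAR-P becomes a plain
;; SYMBOLP test, and PREDCALCVAR-VALUE becomes SYMBOL-VALUE with a
;; declaration that lets many compilers skip the run-time type check.
(defmacro predcalcvar-p (x)
  `(symbolp ,x))

(defmacro predcalcvar-value (x)
  `(symbol-value (the symbol ,x)))
```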

Of course, if run-time dispatching on type turned out to be a
performance bottleneck, you could always build your own "type system"
for the critical datatypes at the cost of an extra byte (or word) per
object: simply tag them with your own type-tag, and dispatch on that
fixnum.  It all depends on what your real cost is, and how the various
functions in your application are being used.  

Most compiled languages can usually be induced to perform well; often,
however, it takes some creativity, rewriting, or new algorithms.
Sometimes the extra energy isn't worth it.  The nice thing is that you
can usually choose the language that best fits your application, and
makes the development task easiest.  Then, you can expend the extra
effort only on the critical bottlenecks in your application.  Less
energy expended, higher leverage.  In pessimistic scenarios the energy
you saved in development and prototyping vs. the cost of optimizing
the critical paths is a wash.  Often, the optimization is a smaller
burden.  (There are commercial markets, I'm sure, where performance is
absolutely critical in every line of code.  Given the current state
of Lisp compilers, perhaps (I really don't know) people coding for
this market should >always< use a C compiler.)

>I realize that the code wasn't fully declared, I just said that it
>"appears to be fully fixnum-declared".  In order to get fully-declared
>code, one typically does what you did: you write "the fixnum" next to
>every operator.  That's an absurd amount of work, just to get it to
>use integers.  Which is what I meant when I said "It's hard to make
>enough assertions".  Imagine telling a C programmer that he had
>to write the words "the fixnum" next to every + symbol...

It wouldn't be hard to write a macro that put "the fixnum" (or "the
>whatever<") around every subform in a computation, if the burden of
subform declarations turned out to be onerous (I can imagine it might,
in a large numerical application).
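Such a macro might be sketched as follows (the name FIXNUM! is
hypothetical; it recursively wraps (THE FIXNUM ...) around every
subform of an arithmetic expression):

```lisp
;; Wrap (THE FIXNUM ...) around every subform of an arithmetic
;; expression, so the programmer writes the declaration only once.
(defmacro fixnum! (form)
  (if (consp form)
      `(the fixnum
            (,(car form)
             ,@(mapcar (lambda (sub) `(fixnum! ,sub)) (cdr form))))
      `(the fixnum ,form)))

;; (fixnum! (- (* x y) z)) expands into
;; (the fixnum (- (the fixnum (* (the fixnum x) (the fixnum y)))
;;                (the fixnum z)))
```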

(I'm a little taken aback that I've just recommended macrology as a
solution to two problems --- just a note: I'm not a big fan or
overuser of macros.  I usually limit them to DEFxxx forms, or the
occasions where you need syntax.  But then again, I wouldn't mind
adding "the fixnum" to all the places where arithmetic actually turned
out to be the bottleneck in most of my programs, and runtime type
dispatching (predcalcvar-p) usually doesn't end up being in the critical
path of my code.  As noted above, there are often ways around it when
it slows you down.)
From: lawrence.g.mayka
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <LGM.93Jan31163951@cbnewsc.ATT.COM>
In article <··········@news.cso.uiuc.edu> ·······@ehsn11.cen.uiuc.edu (Joshua M Yelon) writes:

   * It is very hard to get lisp to vectorize a case-statement.  If the
   labels of a case statement are symbols, they are not necessarily at
   contiguous addresses, and therefore vectorization is impossible.  If
   the labels are integers, the vectorization is easy, but the code is
   unreadable.  In C, it is possible to use an enum which is both readable
   and vectorizeable.

Use #. (e.g., #.SYMBOLIC-CONSTANT) to force read-time evaluation.


        Lawrence G. Mayka
        AT&T Bell Laboratories
        ···@iexist.att.com

Standard disclaimer.
From: Scott McKay
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <19930201154151.3.SWM@SUMMER.SCRC.Symbolics.COM>
    Date: Sat, 30 Jan 1993 15:12 EST
    From: Joshua M Yelon <·······@ehsn11.cen.uiuc.edu>

    A lot of these discussions have focused around syntax.  I don't think
    that's why Lisp isn't a mainstream language.  As usual, I think the
    main reason is efficiency.

    Unfortunately, I believe that Common Lisp will never be made efficient.
    It just has too many things in the language that prevent efficiency.
    Here is a random smattering:

    * Lisp is full of pointers - everything in lisp requires extra
    pointer dereferences.  For example, declaring an array of structs
    in C gives you a big chunk of memory, containing a whole bunch of
    structs end-to-end.  In lisp, this gives you a big chunk of memory
    containing pointers to separately allocated structs.  This may have
    some advantages, but it requires an extra pointer dereference each
    time you access a struct.

The "extra pointer dereference" may be a red herring.  Memory references
are not common operations in most programs -- around 1 in 10 to 1 in 20
instructions are memory references in most programs (at least according
to some papers I read a few years back).  So one occasional extra cycle
per memory reference is just not a big deal when it typically costs you
on the order of 10ns (assuming no memory cache miss) on modern machines.
Furthermore, many of the modern C and C++ programs I have seen typically
use an "object reference" style anyway, and I honestly don't think I've
ever seen anyone use the technique of allocating a big array of structs,
although admittedly I do not read tons of C/C++ code.

Also, amazingly (to me anyway), memory references on one of the machines
I am using now (the DEC alpha) are almost free.  This is because load
and store instructions can be dual-issued with most other instructions.
Presumably as cache sizes increase on chips and more architectures use a
"multiple boxes" design, "free" memory references will become commonplace.

    * Lisp is too big to optimize it all. When a language is small, like C,
    it is possible to design optimization algorithms that correctly deal
    with all the constructs.  Lisp is simply too big to optimize all of it,
    it would simply take too many man-years.

Again, I don't think this is completely true.  The basic number of
constructs in Lisp is not large (although you'd never know it to read
dpANS!).  Only the most basic constructs in Lisp need to be seriously
optimized (function calling, flow control, the object and type systems,
memory management).  Your claim is sort of like saying C is too hard to
optimize because there are so many C libraries.  Certainly, as your
later message makes clear, it can be hard for a Lisp implementor to
decide what should be optimized.

    * The lisp type system is weak.  It is very hard to make enough assertions
    using the lisp type system.  For example, consider this code, which
    appears to be all-fixnum-math:

	(declare (fixnum x))
	(declare (fixnum y))
	(declare (fixnum z))
	(setf x (- (* x y) z))

    but according to the definitions in the lisp type system, the result of
    the multiplication could be a bignum, which might be reduced back to
    a fixnum by the subtraction.  So even in what appears to be fully 
    fixnum-declared code, the compiler must allow for bignums.

Of course (he says with his tongue in his cheek), a "sufficiently smart
compiler" could do the right thing here and generate an unboxed 2-word
fixnum on the stack for (* X Y), because it knows the result X is a
fixnum.  Hmm, I bet even Python doesn't do that.  In C, this would simply
return the wrong answer, right?
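For comparison, the fully declared version that current compilers want
looks something like this (the programmer, not the compiler, asserts
that the intermediate product fits in a fixnum):

```lisp
(defun fixnum-calc (x y z)
  (declare (fixnum x y z))
  ;; The inner THE asserts that the product itself is a fixnum; if it
  ;; overflows, the consequences are undefined -- much as in C.
  (the fixnum (- (the fixnum (* x y)) z)))
```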

    * The lisp type-system is not very expressive.  For example, I cannot
    declare that X is a list of fixnums.  This prevents the compiler from
    making fixnum assumptions about (car x).  Again, I have to pour on a lot
    of declarations on in order to get proper

Yeah, this is a serious problem.  Common Lisp's whole type declaration
system seems quite clumsy to me.

    * Global variables in Lisp will never be typed.  For example, suppose I
    say: (defvar *foo*) (proclaim '(fixnum *foo*)).  The compiler cannot
    change the representation of the value in *foo* to an efficient fixnum
    representation - it must store it as a type T.  Why?  Because any
    reference to (symbol-value x), if X happens to be '*foo*, must return
    the value properly.

Maybe not.  One might envision a trick whereby "direct" reads of *FOO*
don't go through *FOO*'s real value cell but use some hidden cell
instead that is a fixnum.  Writes to *FOO* update both the real value
cell and the hidden cell (this assumes writes are uncommon).
SYMBOL-VALUE continues to hack the value cell.  This is admittedly a
kludge, but it could be done.
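A related, purely source-level variation on the trick can be sketched
with DEFINE-SYMBOL-MACRO (names hypothetical): "direct" references to
the variable go through a type assertion on a hidden cell, while
SYMBOL-VALUE of the hidden name continues to behave normally.

```lisp
;; The real storage lives in *FOO-CELL*; *FOO* becomes a symbol macro
;; that asserts fixnum-ness on every direct read, which a compiler can
;; exploit.  This only helps declarations, not representation, but it
;; shows the flavor of the hidden-cell idea.
(defvar *foo-cell* 0)

(define-symbol-macro *foo* (the fixnum *foo-cell*))
```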

I am always a little dubious of programs that make heavy use of things
like SYMBOL-VALUE, although I could not articulate why.  It is certainly
not one of the things I would optimize heavily in the language, although
you have already said you have other opinions on that subject.

    * It is very hard to get lisp to vectorize a case-statement.  If the
    labels of a case statement are symbols, they are not necessarily at
    contiguous addresses, and therefore vectorization is impossible.  If
    the labels are integers, the vectorization is easy, but the code is
    unreadable.  In C, it is possible to use an enum which is both readable
    and vectorizeable.

This isn't really a fair comparison -- you are saying "the convenient
but inefficient construct I could use in Lisp is less efficient than the
*only* construct I can use in C."  In Lisp, you can also use an "enum"
(which expands into a defconstant that produces integer tags) instead of
symbols, and at least several modern Lisp compilers will vectorize the
CASE statement.  Admittedly, some Lisps are not so hot at it when the
enumeration is sparse.

    * Lisp "tree-shakers" are up against too much to be really effective.
    Using eval will break them completely.  Even using something as simple
    as "(remove e l)" will force the linker to include the subroutine that
    removes elements from lists, the subroutine that removes elements from
    arrays, the subroutine that removes bits from bitvectors, and the
    subroutine that removes characters from strings.  (Of course, you could
    implement these all as a single subroutine performing abstract low-level
    operations on sequence types, but this requires a lot of overhead, and
    greatly slows down the subroutine, so it isn't really acceptable.)

Yeah, good tree shaking is hard, even though most programs don't use
EVAL.  A well-declared Lisp program compiled by a compiler that can do
type inferencing can do a lot better than either an undeclared program
or a program compiled by a poor compiler.  Unfortunately, by this
metric, most current Lisp compilers are poor.  I personally don't think
that doing careful type-declarations in a Lisp program violates its
spirit of dynamic typing -- as proponents of statically typed languages
as often claim, it really can be an aid to the programmer as well as to
the compiler.

    * Lisp closures effectively prohibit the implementation of static scoping
    through a register-display.

Hmm...

    * The existence of the garbage collector forces all operations to keep
    values in a format that the garbage collector understands.  This
    effectively prohibits the use of many optimization techniques, such as
    storing pointer values in available non-address registers.

GCs are too large a topic to address here, but there are more than a few
people who think that GCs are so important to the writing of reliable,
robust software in bounded time, that they are worth this sort of cost.
"Extremism in the elimination of dangling references and memory leaks is
no vice."
From: Smith
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <1993Feb3.172101.16025@linus.mitre.org>
I've been laughing a lot lately.  Up until 2 years ago most everything
I was involved with was written in Lisp.  Then I led a development
effort in C and C++.  Then worked on another one in C.  For the last
year or so I've been working on a project written in Ada.  Over the
years I've started reading comp.lang.lisp, comp.lang.c, comp.lang.c++,
and comp.lang.ada.  The funny thing is that comp.lang.lisp and
comp.lang.ada seem to be filled with almost identical arguments.  They
both seem obsessed with the fact that their language isn't as accepted
as C is.  Too bad, too, because these people probably could be
spending their time much more productively if they just built good
systems in whatever language.

Many of the people I've come in contact with who make the language
decisions are not that well informed about all the languages and all
the issues.  They turn to their supporting staff for input, and those
people ask questions of the programmers like, "What do you want to
use?"  In most cases the answer is C or, most recently, C++.  The Lisp
people claim that C and C++ are too strongly typed and get in the way
of good programming.  The Ada people claim that C and C++ are too
weakly typed and get in the way of good programming.  It's an
interesting fact that it is a fairly rare occurrence to see comparisons
with other languages on comp.lang.c++.

I would say that the Lisp world has brought a lot of pain upon
itself by doing two things:
1) They too tightly identified Lisp with AI, and AI has become
strongly identified with hype and little else; 
2) They have not produced anything of such great value that it could
be held up as an example of what the language could do.  I'm sure that
many people would argue this point, but I've seen a lot of Lisp
programming and I can't remember anything that could not have been
done better (i.e., smaller, faster executing, more maintainable) in
another language.

I'm not saying there is no place for Lisp.  Personally I love the
language and would recommend it to anyone who wants to do rapid
prototyping.  I am saying that people who want Lisp to be accepted as
is C or C++ must do something other than complain the world doesn't
understand them.

Barry Smith
From: P. T. Withington
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <19930203210856.8.PTW@TOMMY.SCRC.Symbolics.COM>
    Date: Wed, 3 Feb 1993 12:21 EST
    From: Smith <···@mbunix.mitre.org>

[...]

		    I can't remember anything that could not have been
    done better (i.e., smaller, faster executing, more maintainable) in
    another language.

But what about SOONER?  

As an example:  I think many people would agree that X would not exist,
had it not been for Lisp-based prototypes, where the initial concepts of
"window oriented" systems were explored.

    I'm not saying there is no place for Lisp.  Personally I love the
    language and would recommend it to anyone who wants to do rapid
    prototyping.  

I guess you answered my question.

But I would also add OFTENER as attribute of Lisp development.  Lisp
also excels at incremental, exploratory, or evolutionary development.

		  I am saying that people who want Lisp to be accepted as
    is C or C++ must do something other than complain the world doesn't
    understand them.

I would be happy if the world simply understood that different languages
were appropriate for different situations, and no language is
appropriate for all situations.
From: Espen J. Vestre
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <1ktf04INNsip@coli-gate.coli.uni-sb.de>
In article <····················@TOMMY.SCRC.Symbolics.COM> P. T.
Withington, ···@riverside.scrc.symbolics.com writes:
>As an example:  I think many people would agree that X would not exist,
>had it not been for Lisp-based prototypes, where the initial concepts of
>"window oriented" systems were explored.
>
>    I'm not saying there is no place for Lisp.  Personally I love the
>    language and would recommend it to anyone who wants to do rapid
>    prototyping.  
>
>I guess you answered my question.
>
>But I would also add OFTENER as attribute of Lisp development.  Lisp
>also excels at incremental, exploratory, or evolutionary development.

Add to that special-purpose programming, which there still is a lot of
out there in the "mainstream", i.e., programs which are to be used at
only one installation, and where quick development time is more important
than optimized performance.  For special-purpose programming, hardware
costs nowadays may be quite insignificant, so whether a lisp program
runs twice as slowly, or even slower, is really not very important.
--------------------------------------------------------------
Espen J. Vestre,                          ·····@coli.uni-sb.de
Universitaet des Saarlandes,        
Computerlinguistik, Gebaeude 17.2 
Im Stadtwald,                          tel. +49 (681) 302 4501
D-6600 SAARBRUECKEN, Germany           fax. +49 (681) 302 4351
--------------------------------------------------------------
From: Thomas M. Breuel
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <1kqustINN62d@life.ai.mit.edu>
In article <·····················@linus.mitre.org>, you write:
|> The Lisp
|> people claim that C and C++ are too strongly typed and get in the way
|> of good programming.  The Ada people claim that C and C++ are too
|> weakly typed and get in the way of good programming.  It's an
|> interesting fact that it is a fairly rare occurrence to see comparisons
|> with other languages on comp.lang.c++.

But on comp.lang.c++, you see lots of discussions about GC and safety.
Those are, in fact, the areas in which C and C++ fall so woefully
short, and the cost that you and I pay as a C/C++ user is lower
productivity and more obscure, undetected bugs.

|> I'm not saying there is no place for Lisp.  Personally I love the
|> language and would recommend it to anyone who wants to do rapid
|> prototyping.  I am saying that people who want Lisp to be accepted as
|> is C or C++ must do something other than complain the world doesn't
|> understand them.

One large part of the success of C/C++ is backwards compatibility,
support, and availability. Those are not primarily technical problems
for other languages, they are social ones that can only be addressed 
by discussions and politics.

But there is something that developers of new languages (Scheme, SML,
CommonLisp, etc.) should take notice of: if they want to succeed in the
marketplace, they should address (perhaps conceptually uninteresting
but) very important issues like: a standard foreign function interface,
standard concurrency, floating point optimization, standard source 
management facilities, guaranteed real-time automatic storage management, 
and standard facilities for the integration of software from different 
sources. 

C and C++ are pretty lousy as far as programming languages go, but they
provide just a little bit more functionality in these pragmatic areas
than other languages, and that (though not always) makes them the
better choice for many practical problems.

Personally, I hope that in a few years, a language like SML or Scheme
will address those pragmatic issues as well, so that I can write not
only my high-level code but also my "take 10 bits out of the frame
grabber register and stuff them into the X11 image structure" code
in that language.

				Thomas.
From: Jonathan Edwards
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <C1xFnG.K8D@world.std.com>
All this discussion of comparative language features is irrelevant.
The most important reason for choosing a computer language is that the
people around you use it.
Computer languages, like natural languages, are cultural artifacts.
Lisp is an artifact of the AI community, a small and isolated group.
C is an artifact of the UNIX (and now PC) community, which is taking over
the computing world.

BTW, hasn't anyone noticed that Symbolics just filed Chapter 11 bankruptcy?

-- 
Jonathan Edwards				·······@intranet.com
IntraNet, Inc					617-527-7020
From: Scott McKay
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <19930204173524.2.SWM@SUMMER.SCRC.Symbolics.COM>
    Date: Thu, 4 Feb 1993 08:47 EST
    From: Jonathan Edwards <·······@world.std.com>

    All this discussion of comparative language features is irrelevant.
    The most important reason for choosing a computer language is that the
    people around you use it.
    Computer languages, like natural languages, are cultural artifacts.
    Lisp is an artifact of the AI community, a small and isolated group.
    C is an artifact of the UNIX (and now PC) community, which is taking over
    the computing world.

Crack is taking over inner cities.  That doesn't mean it's good.

Also, Lisp is hardly an artifact of the AI community.  The AI community
embraced Lisp, not the other way around.  I certainly know of more Lisp
systems that are not AI than Lisp systems that are AI.

Furthermore, natural languages are not cultural artifacts, either.  It
is probably more accurate to say that culture is an artifact of language.

    BTW, hasn't anyone noticed that Symbolics just filed Chapter 11 bankruptcy?

What does that have to do with anything?  Symbolics is hardly the sole
vendor of Lisp, and it's not even the largest vendor.

This lemming mentality is, in my opinion, the major reason that most
modern software is so crappy and has failed to reach even the modest
goals that most programmers have.
From: Barry Margolin
Subject: Symbolics Chapter 11 (was Re: Why Isn't Lisp a Mainstream Language?)
Date: 
Message-ID: <1ku7dbINNgvr@early-bird.think.com>
In article <··········@world.std.com> ·······@world.std.com (Jonathan Edwards) writes:
>BTW, hasn't anyone noticed that Symbolics just filed Chapter 11 bankruptcy?

They filed for Chapter 11 *reorganization*.  They have not gone out of
business, and I don't expect them to (well, not this year).  The main
causes of the reorganization seem to be the 1992 slump in the economy
(several key contracts were cancelled because the customers were feeling
the pinch) and some long-term leases and other past business decisions that
were draining their cash reserves.  The filing will hopefully allow them to
straighten out the latter problems and re-emerge more healthy.
-- 
Barry Margolin
System Manager, Thinking Machines Corp.

······@think.com          {uunet,harvard}!think!barmar
From: Jarmo Ahonen
Subject: Re: Symbolics Chapter 11 (was Re: Why Isn't Lisp a Mainstream Language?)
Date: 
Message-ID: <1993Feb8.091125.17708@cs.joensuu.fi>
······@think.com (Barry Margolin) writes:

>In article <··········@world.std.com> ·······@world.std.com (Jonathan Edwards) writes:
>>BTW, hasn't anyone noticed that Symbolics just filed Chapter 11 bankruptcy?

>They filed for Chapter 11 *reorganization*.  They have not gone out of
>business, and I don't expect them to (well, not this year).  The main
>causes of the reorganization seem to be the 1992 slump in the economy
>(several key contracts were cancelled because the customers were feeling
>the pinch) and some long-term leases and other past business decisions that
>were draining their cash reserves.  The filing will hopefully allow them to
>straighten out the latter problems and re-emerge more healthy.
>-- 
>Barry Margolin
>System Manager, Thinking Machines Corp.

Hopefully they can emerge as a profitable company. I personally think
that Symbolics machines and their programs were *wonderful* but
painfully slow... 

Unfortunately they were also expensive...

I'd really like a PC compatible Symbolics :-) :-) :-)
Well, seriously, an add-on card would also be nice... :-) :-)
From: Barry Margolin
Subject: Re: Symbolics Chapter 11 (was Re: Why Isn't Lisp a Mainstream Language?)
Date: 
Message-ID: <1l6703INN43i@early-bird.think.com>
In article <·····················@cs.joensuu.fi> ·······@cs.joensuu.fi (Jarmo Ahonen) writes:
>Hopefully they can emerge as a profitable company. I personally think
>that Symbolics machines and their programs were *wonderful* but
>painfully slow... 

Symbolics machines were generally about as fast as other workstations (at
running Lisp) that were introduced at the same time.  The problem is that
companies like Sun, Apple and Intel could come out with new, faster
versions of their machines every 6-12 months, while Symbolics would take
several years and then suddenly catch up.  Unfortunately, in the meantime
they would lag way behind.

>I'd really like a PC compatible Symbolics :-) :-) :-)
>Well, seriously, an add-on card would also be nice... :-) :-)

Are you talking about a Symbolics add-on to a PC, or a PC add-on to a
Symbolics?  Symbolics did have a 386 coprocessor at one time.  You could
open a DOS window on your console.  I don't think this product did very
well, though.  I think the main target for it was customers of CLOE, their
PC Lisp; it would permit you to test your application on a PC without
having to have two workstations on your desk.

-- 
Barry Margolin
System Manager, Thinking Machines Corp.

······@think.com          {uunet,harvard}!think!barmar
From: Wayne Allen
Subject: Re: Symbolics Chapter 11 (was Re: Why Isn't Lisp a Mainstream Language?)
Date: 
Message-ID: <WA.93Feb8143657@raven.mcc.com>
In article <············@early-bird.think.com> ······@think.com (Barry Margolin) writes:

   In article <·····················@cs.joensuu.fi> ·······@cs.joensuu.fi (Jarmo Ahonen) writes:
   >Hopefully they can emerge as a profitable company. I personally think
   >that Symbolics machines and their programs were *wonderful* but
   >painfully slow... 

   Symbolics machines were generally about as fast as other workstations (at
   running Lisp) that were introduced at the same time.  The problem is that

   ...



Ready about! Hard to port!...

A friend of mine, waiting for his Symbolics to re-boot at 3am, wrote the 
following...


    unscheduled down time
    
    
    another listless moment 
    drifting off
    until kicked awake by a cold hard boot
    no more recurring nightmares to be iterated through
    just an aimless wanderer
    come home again to root
    
    a brief pause in the routine of
    dedication to any single task
    whose endless repetitions
    can kill the demon 
    who else might stop and ask
    
    when i die 
    will it start 
    all over again
    at the commanding touch of 
    the user 
    my friend
    
    in the background
    begins that nagging process 
    of reflection
    that given any two points of disaffection
    computes the line of some heretofore 
    unseen connection
    
    thus i
    begin
    searching through the intermediate results 
    of plans and procedures
    all come to nought
    lost without a thread of control
    and no key to escape
    from the cycle in which i'm caught
    
    these idle moments to analyze
    my life become a digital dream
    i compute therefore i am
    at least while there's still data in the stream
    
    yet as bits of memory collect together
    like garbage in the corners of my mind
    am i no more than just
    another counter keeping track of time
    
    symbols, symbols, everywhere
    and not a meaning to be found.
    could it be i'm only standing
    on simulated ground.
    
    don't clear the window
    don't turn the page
    might as well face it
    we're all now living in the information age
    
    no distortion no distractions
    no loss or gain
    no sense of morals
    no fear of pain
    
    the highest value
    no more than to calculate
    perfection measured by the time
    it takes to emulate.

  -- Mark F. Alexandre



--
 wa | Wayne Allen, ··@mcc.com
    | MCC/ISD, 3500 West Balcones Center Dr, Austin, Tx 78759 (512)338-3754
From: Jarmo Ahonen
Subject: Re: Symbolics Chapter 11 (was Re: Why Isn't Lisp a Mainstream Language?)
Date: 
Message-ID: <1993Feb9.082511.13055@cs.joensuu.fi>
······@think.com (Barry Margolin) writes:

>In article <·····················@cs.joensuu.fi> ·······@cs.joensuu.fi (Jarmo Ahonen) writes:
(stuff deleted)

>>I'd really like a PC compatible Symbolics :-) :-) :-)
>>Well, seriously, an add-on card would also be nice... :-) :-)

>Are you talking about a Symbolics add-on to a PC, or a PC add-on to a
>Symbolics?  Symbolics did have a 386 coprocessor at one time.  You could
>open a DOS window on your console.  I don't think this product did very
>well, though.  I think the main target for it was customers of CLOE, their
>PC Lisp; it would permit you to test your application on a PC without
>having to have two workstations on your desk.

Yes, I know there was a PC add-on board for the Symbolics. 
I suppose I didn't express myself clearly enough...
I would like to have one of the following:
1) A Symbolics system for my PC, i.e. an OS + other stuff ported to the
   Intel architecture,

but since that is not something to expect (the Intel architecture is not
such a fine processor :-( ), the other interesting possibility would be 

2) an integrated "Symbolics Machine" as a bus-master card for MCA or EISA
   or Local Bus (even ISA?); the PC would handle the keyboard,
   I/O ports and so on. It might be possible to have e.g. OS/2 act
   as the front end for such a card.

The only problem is that the card would have to be *cheap*. In other words, it
should cost less than 2000 USD (including enough memory and all the software).
That will probably not happen, because the hardware alone could easily cost
much more than that... :-(

So it seems that I'm only dreaming about a Symbolics in my PC :-( :-(. 
From: Martin Cracauer
Subject: Re: Symbolics Chapter 11 (was Re: Why Isn't Lisp a Mainstream Language?)
Date: 
Message-ID: <1993Feb9.104230.4912@wavehh.hanse.de>
······@think.com (Barry Margolin) writes:

>In article <·····················@cs.joensuu.fi> ·······@cs.joensuu.fi (Jarmo Ahonen) writes:
>>Hopefully they can emerge as a profitable company. I personally think
>>that Symbolics machines and their programs were *wonderful* but
>>painfully slow... 

>Symbolics machines were generally about as fast as other workstations (at
>running Lisp) that were introduced at the same time.  The problem is that
>companies like Sun, Apple and Intel could come out with new, faster
>versions of their machines every 6-12 months, while Symbolics would take
>several years and then suddenly catch up.  Unfortunately, in the meantime
>they would lag way behind.

Do you think Symbolics could bring up a Genera-like system on
stock hardware ? Is that even possible ?

I cannot think that selling hardware could get Symbolics out of their
current situation. They are creating great software, but they have no
chance of staying in the race against stock machines.

By the way, in my opinion one of the greatest disadvantages of
Symbolics machines is their pricing policy. The base price for a
machine is one thing, but, at least in Germany, the prices for
RAM expansion are horrible. When you buy such a machine, you cannot be
sure of keeping it up to date without enormous costs.

-- 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer <········@wavehh.hanse.de>, WAVEDATA, Norderstedt, Germany
From: Ted Dunning
Subject: Re: Symbolics Chapter 11 (was Re: Why Isn't Lisp a Mainstream Language?)
Date: 
Message-ID: <TED.93Feb9165003@lole.nmsu.edu>
In article <····················@wavehh.hanse.de> ········@wavehh.hanse.de (Martin Cracauer) writes:

   I cannot think that selling hardware could get Symbolics out of their
   current situation.


hey.... selling hardware is what got them *into* their current
situation.
From: Richard Harter
Subject: Re: Symbolics Chapter 11 (was Re: Why Isn't Lisp a Mainstream Language?)
Date: 
Message-ID: <1993Feb10.071548.22475@smds.com>
In article <················@lole.nmsu.edu> ···@nmsu.edu (Ted Dunning) writes:

$In article <····················@wavehh.hanse.de> ········@wavehh.hanse.de (Martin Cracauer) writes:
$
$   I cannot think that selling hardware could get Symbolics out of their
$   current situation.

$hey.... selling hardware is what got them *into* their current situation.
        ^
       not

Bad, bad, type -- you forgot the not.
-- 
Richard Harter: SMDS Inc.  Net address: ··@smds.com Phone: 508-369-7398 
US Mail: SMDS Inc., PO Box 555, Concord MA 01742.    Fax: 508-369-8272
In the fields of Hell where the grass grows high
Are the graves of dreams allowed to die.
From: Ted Dunning
Subject: Re: Symbolics Chapter 11 (was Re: Why Isn't Lisp a Mainstream Language?)
Date: 
Message-ID: <TED.93Feb10093101@lole.nmsu.edu>
In article <······················@smds.com> ··@smds.com (Richard Harter) writes:

   In article <················@lole.nmsu.edu> ···@nmsu.edu (Ted Dunning) writes:

   $In article <····················@wavehh.hanse.de> ········@wavehh.hanse.de (Martin Cracauer) writes:
   $
   $   I cannot think that selling hardware could get Symbolics out of their
   $   current situation.

   $hey.... selling hardware is what got them *into* their current situation.
	   ^
	  not

   Bad, bad, type -- you forgot the not.


ok... trying to sell hardware is what got them into their current situation.

but in fact, i am convinced that even when they *did* sell some
hardware, and even with the exorbitant maintenance costs, they still
lost money on much of the hardware in the field.
From: Barry Margolin
Subject: Re: Symbolics Chapter 11 (was Re: Why Isn't Lisp a Mainstream Language?)
Date: 
Message-ID: <1lc5c9INNabf@early-bird.think.com>
In article <····················@wavehh.hanse.de> ········@wavehh.hanse.de (Martin Cracauer) writes:
>Do you think Symbolics could bring up a Genera-like system on
>stock hardware ? Is that even possible ?

They claim to be working on a port to the DEC Alpha chip (this was
announced at last year's ALU conference).  They don't think that Genera on
a standard 32-bit processor is worthwhile.
-- 
Barry Margolin
System Manager, Thinking Machines Corp.

······@think.com          {uunet,harvard}!think!barmar
From: Brad Cote
Subject: Re: Symbolics Chapter 11 (was Re: Why Isn't Lisp a Mainstream Language?)
Date: 
Message-ID: <44821@sdcc12.ucsd.edu>
········@wavehh.hanse.de (Martin Cracauer) writes:

>Do you think Symbolics could bring up a Genera-like system on
>stock hardware ? Is that even possible ?

According to a guy I know (who just got laid off), the biggest problem
with any port was the word size used by Symbolics, which I believe is
either 40 or 36 bits. They were supposedly studying the Alpha chip,
which will be (or is) one of the first 64-bit chips available, as a
candidate for rehosting the Genera software. This could still be their plan.

Say, does anyone know if Statice is a current product? Has anyone
built a successful product with it?

                                    Brad Cote'
                                    University of California, San Diego
                                    STARE Project
From: Dr C. Bridgewater
Subject: Re: Symbolics Chapter 11 (was Re: Why Isn't Lisp a Mainstream Language?)
Date: 
Message-ID: <1993Feb9.080553.9908@cc.ic.ac.uk>
In article <·····················@cs.joensuu.fi> ·······@cs.joensuu.fi (Jarmo Ahonen) writes:
>······@think.com (Barry Margolin) writes:
>
>>In article <··········@world.std.com> ·······@world.std.com (Jonathan Edwards) writes:
>>>BTW, hasn't anyone noticed that Symbolics just filed Chapter 11 bankruptcy?
>
>>They filed for Chapter 11 *reorganization*. They have not gone out of
>>business, and I don't expect them to (well, not this year)... 
>>The filing will hopefully allow them to straighten out the latter problems
>>and re-emerge more healthy.
>>--
>>Barry Margolin

Best of luck to them ....

>
>Hopefully they can emerge as a profitable company. I personally think
>that Symbolics machines and their programs were *wonderful* but
>painfully slow... 
>
>Unfortunately they were also expensive...
>
>I'd really like a PC compatible Symbolics :-) :-) :-)
>Well, seriously, an add-on card would also be nice... :-) :-)
>

How about a MacIvory card? Genera-on-a-chip for the Macintosh users of
this world. Before you ask, a Macintosh is a "Personal Computer", just not
of the IBM/Intel/Microsoft flavor <evil grin>.

Regards,

Colin

(A 3630 user who's not afraid to errrm .... use it)

****************************************************************************
*  Colin Bridgewater		     *    ·············@uk.ac.ic   * \   / *
*  Head Robot Wrangler		     * tel:+44-(0)71-589-5111x4842 *  \ /  *
*  Construction Robotics Research    * BE KIND TO SPIDERS & SNAILS * --*-- *
*  Department of Civil Engineering   *  -------------------------  *  / \  *
*  Imperial College, London, UK.     *  alias 'the happy hacker'   * /   \ *
****************************************************************************
From: Jeff Dalton
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <8399@skye.ed.ac.uk>
In article <··········@world.std.com> ·······@world.std.com (Jonathan Edwards) writes:
>All this discussion of comparative language features is irrelevant.
>The most important reason for choosing a computer language is that the
>people around you use it.

But I chose Lisp when the people around me weren't using it.
So that wasn't the most important factor then.  Nor is it the
most important now.  

>Computer languages, like natural languages, are cultural artifacts.

True.

>Lisp is an artifact of the AI community, a small and isolated group.
>C is an artifact of the UNIX (and now PC) community, which is taking over
>the computing world.

So?  Both Lisp and C were once used only by small groups.  Lots of
things invented in the AI community are now used all over the place.

>BTW, hasn't anyone noticed that Symbolics just filed Chapter 11 bankruptcy?

That's Lisp machines (part of the mistaken idea that specialized
hardware was the way to go, which had so many in its grip in the
pre-RISC early 80s), not the Lisp language.

-- jd
From: Thomas M. Breuel
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <TMB.93Feb3173326@arolla.idiap.ch>
>>>>> On Mon, 1 Feb 1993 15:40:56 GMT, ···@stony-brook.scrc.symbolics.com (Scott McKay) said:

>     * Lisp is full of pointers - everything in lisp requires extra
>     pointer dereferences. [...]

> The "extra pointer dereference" may be a red herring. [...] So one
> occasional extra cycle per memory reference is just not a big deal
> [...]  Futhermore, many of the modern C and C++ programs I have seen
> typically use an "object reference" style anyway, and I honestly
> don't think I've ever seen anyone use the technique of allocating a
> big array of structs, although admittedly I do not read tons of
> C/C++ code.

Actually, it is quite common in numerical or seminumerical programs
to have constructs like:

   struct vec2 { float x,y; };
   vec2 *vs = new vec2[10000000];

The problem with using an extra level of indirection is mainly one of
space, not time. Most implementations of Lisp and similar languages
will require at least twice as much memory for the above construct as
the usual Pascal or C implementations.

					Thomas.
From: Jeff Dalton
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <8335@skye.ed.ac.uk>
In article <················@arolla.idiap.ch> ···@idiap.ch writes:
>>>>>> On Mon, 1 Feb 1993 15:40:56 GMT, ···@stony-brook.scrc.symbolics.com (Scott McKay) said:
>
>>     * Lisp is full of pointers - everything in lisp requires extra
>>     pointer dereferences. [...]

Not everything...

>Actually, it is quite common in numerical or seminumerical programs
>to have constructs like:
>
>   struct vec2 { float x,y; };
>   vec2 *vs = new vec2[10000000];
>
>The problem with using an extra level of indirection is mainly one of
>space, not time. Most implementations of Lisp and similar languages
>will require at least twice as much memory for the above construct as
>the usual Pascal or C implementations.

Actually, it's quite common for Lisp implementations to provide
float arrays that do not require any extra pointers and hence
avoid this space cost.

Even good old Franz Lisp could do this, so it's not like this
is an optimization that's appeared only in the last year or so.
(Someone should be able to say what MacLisp could do.)

-- jd
From: Thomas M. Breuel
Subject: Re: Why Isn't Lisp a Mainstream Language?
Date: 
Message-ID: <TMB.93Feb9105752@arolla.idiap.ch>
>>>>> On 8 Feb 93 16:41:16 GMT, ····@aiai.ed.ac.uk (Jeff Dalton) said:
>>Actually, it is quite common in numerical or seminumerical programs
>>to have constructs like:
>>
>>   struct vec2 { float x,y; };
>>   vec2 *vs = new vec2[10000000];
>>
>>The problem with using an extra level of indirection is mainly one of
>>space, not time. Most implementations of Lisp and similar languages
>>will require at least twice as much memory for the above construct as
>>the usual Pascal or C implementations.
>
> Actually, it's quite common for Lisp implementations to provide
> float arrays that do not require any extra pointers and hence
> avoid this space cost.

But "vs" isn't a "float" array; it is an array of a user-defined
structure (it is coincidental that, in this example, all the structure
components have the same type).

Of course, you can simulate space efficient arrays of structures in
CommonLisp the way you do it in FORTRAN: by having a separate array
for each component. But that causes no end of headaches and somehow
defeats the purpose of using Lisp in the first place, which is
presumably to make programming easier and place fewer stumbling blocks
in your way.

Languages like C++ give you the choice between implementing arrays of
structures as arrays of pointers to structures or as concatenations of
structures. Allowing the programmer to make the right tradeoffs here
is important.

					Thomas.