From: Jeffrey Jacobs
Subject: No market, etc
Date: 
Message-ID: <4851@well.UUCP>
Skef Wholey (no "r") writes:

>Common Lisp was a compromise, and it looks that way.  I freely admit
>that.  The same can be said of the US constitution (Electoral college?
>What a silly way to elect a President!) and the US government in
>general.

Munich, Yalta and the Paris Peace Accords (Vietnam) were also compromises!
Compromise is seldom good or bad by itself; it is the result that is important.
(And Common Lisp is hardly the U.S. Constitution).

>This, as I argued before, is more a result of the development of
>low-cost high-performance "stock" hardware (e.g. Sun-4) than the
>development of Common Lisp. 

Wish I could agree, but I can't.  LMI was doomed from the beginning; you can't
survive by saying that you are only going to build faster, more expensive hardware,
and to heck with price/market considerations.  Sales of Lisp for workstations
are not that great; I would estimate combined sales by Lucid and Franz' Allegro
at less than 4,000 copies, which is roughly equal to total Symbolics sales over
a period of several years.

The problem is simply lack of demand!

And your argument has nothing to do with the problems that Inference, Carnegie,
Intellicorp and Teknowledge are having.

>The Common Lisp designers were foresighted
>enough to design the language so that implementations on such stock
>hardware could be reasonably efficient.

Gee, I hate to get back into character assassination, but this is pure bogus B.S.

Perhaps we have different views of what constitutes "stock hardware".  If
you micro-code it for Lisp, it's not stock.  Stock consists of VAX, 680x0, INTEL
80x8x, Gould, Prime, WE 3200, etc.

I would certainly like to see any evidence that you can produce that shows that
any consideration was given to stock hardware.  Gabriel & Brooks "A Critique..."
certainly says otherwise, as does my own reading and interpretation.

In the past 4 years of flaming, nobody, but nobody has ever advanced this argument.
In fact, many people have argued just the opposite, that Common Lisp is not intended
to be run on stock hardware.

In fact, you effectively contradict this by saying:

>Small computers are getting bigger all the time, and becoming more
>capable of running Common Lisp.  How about this: Common Lisp is not a
>car (or a cdr!), it is a track to be run.

Now if you substitute "obstacle course" for "track", then we can agree <grin>.

Any idiot can design software that won't perform well; it happens all the time.
A good part of our business has always been fixing performance problems.

>  Small machines are getting to
>the point where they can run this track fast enough to make them
>competitive.

So what?  This is 1987, nearly 1988.  CL was basically designed in 1982.  The
world isn't really interested in waiting for the day that small machines are
good enough to be competitive.  CL simply isn't desired by the world at large.
(And what do you mean by "small": a Mac II with 4 meg, a uVax, a $36K Symbolics,
an Intel 80386 with 8 meg?)

This problem has been around longer than I have.  There always seems to be a
belief that Lisp must stay ahead of the capabilities of the current generation of
hardware.  But Lisp has not influenced, and does not influence, mainstream hardware
development.  Hardware tagging and type checking are not to be found outside
of the Lisp machine world (unless you microcode it yourself, in which case it's
not "stock").

Think of the possibilities if a standard Lisp could run on current
architectures; instead of everybody sitting around waiting to be able to do
things when the hardware catches up, things would be progressing, and the next
generation of hardware would expand the number of things that could be done,
and improve the things we have available now!

>Well, gosh, I was away from CMU for fourteen months (on a leave of
>absence from grad school) working as a High Paid Lisp Wizard in Cambridge...

One well-known Lisp developer's leave of absence does not a high demand
for Common Lisp make.  (In fact, we can make a case that one of the reasons
Common Lisp is so big and complex is to keep the supply of Lisp programmers
small so that the few around are sure to find jobs <grin>).

>Then the commercial world won't have to worry about it, and they can
>develop something better.  Right? 

Wrong!  The commercial world has always believed that Lisp is too big and too
slow for commercial use and is not about to put any money into Lisp R&D.
In fact, I will go so far as to predict that the majority of commercial
implementations by hardware vendors will never be Steele complete, and that
in general the performance and resources required will be such that their
Common Lisps will simply languish in the catalogue, with very few users.

>And large programs have been
>ported from their experimental Lisp form into a "productized" C form.  I
>was involved with such a port myself.  Not too terrible if you've got
>the right tools, and/or write some of the code with that port in mind.

Riiiiight!  Ask Inference and Carnegie about it <grin>. But in the real world, this
is generally an unacceptable approach.  "Gee, Mr. Boss, we'll spend six months to a
year doing it in Lisp; then we'll take another year to rewrite it in C, then we'll
fix any lingering performance problems".

This approach has turned off more potential users of Lisp and AI technology
than anything else.  (BTW, I am *not* a big C fan).

Finally, THERE IS NO REASON THAT WE CAN'T HAVE A LISP
THAT WILL ULTIMATELY PRODUCE EFFICIENT, GOOD
RUNTIME CODE!!!  Lisp, more than any other language, had the
potential to provide a development environment that would give
the user an unmatched path from experimentation to highly polished
end product.  Common Lisp killed this potential by becoming just another
programming language, sort of a Pascal with parentheses.

Re Lisp in Space; I'm not referring to SDI.  One of John Seely Brown's
earliest inspirational lectures had to do with the problems of robots
and artificial intelligence exploring the planets.

FYI, most space-borne software is written in JOVIAL, not assembly.  Future
work is to be in Ada.

Merry Christmas!

 Jeffrey M. Jacobs
 CONSART Systems Inc.
 Technical and Managerial Consultants
 P.O. Box 3016, Manhattan Beach, CA 90266
 (213)376-3802
 CIS:75076,2603
 BIX:jeffjacobs
 USENET: ·······@well.UUCP

From: Jeffrey Mincy
Subject: Re: No market, etc
Date: 
Message-ID: <14218@think.UUCP>
In article <····@well.UUCP> ·······@well.UUCP (Jeffrey Jacobs) writes:
>Skef Wholey (no "r") writes:
>>The Common Lisp designers were foresighted
>>enough to design the language so that implementations on such stock
>>hardware could be reasonably efficient.

>Gee, I hate to get back into character assassination, but this is pure bogus B.S.

Actually, it is not bogus.

>Perhaps we have different views of what constitutes "stock hardware".  If
>you micro-code it for Lisp, it's not stock.  Stock consists of VAX, 680x0, INTEL
>80x8x, Gould, Prime, WE 3200, etc.

No, Skef understands what "stock hardware" is.

>I would certainly like to see any evidence that you can produce that shows that
>any consideration was given to stock hardware.  Gabriel & Brooks "A Critique..."
>certainly says otherwise, as does my own reading and interpretation.

There is nothing in Common Lisp that cannot be implemented efficiently.
It is only a matter of programming and understanding.

The only thing that Common Lisp (any Lisp, for that matter) requires that
stock hardware does not have is type tags.  A few other things, like
microcode for doing cdr-coded lists and pointer chasing for incremental
GCs, are nice but not necessary.
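To make that concrete, here is a rough C sketch of doing type tags entirely in
software on a byte-addressed stock machine.  The two-bit tag width and the tag
assignments are invented for the illustration, not taken from any particular
implementation:

#include <stdint.h>
#include <assert.h>

typedef uintptr_t lispobj;          /* one Lisp value = one machine word     */

#define TAG_MASK   ((lispobj)3)     /* low two bits hold the type tag        */
#define TAG_FIXNUM ((lispobj)0)     /* 00 = immediate integer                */
#define TAG_CONS   ((lispobj)1)     /* 01 = pointer to a cons cell           */

static lispobj  make_fixnum(intptr_t n)  { return (lispobj)n << 2; }
static intptr_t fixnum_value(lispobj x)  { return (intptr_t)x >> 2; }  /* arithmetic shift assumed */
static int      fixnump(lispobj x)       { return (x & TAG_MASK) == TAG_FIXNUM; }

int main(void)
{
    lispobj x = make_fixnum(1987);
    assert(fixnump(x));             /* a type check is a mask and a compare  */
    assert(fixnum_value(x) == 1987);
    return 0;
}

The point is simply that a runtime type check on stock hardware is a mask and
a compare; no microcode is involved.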

I used to work at Data General (maker of the "stock hardware" MV series),
in the Common Lisp group.  I have thought about these issues.

I'll be more than happy to talk about individual features of Common Lisp
and how they can be implemented efficiently on stock hardware.

> Jeffrey M. Jacobs


-- jeff
seismo!godot.think.com!mincy
From: Lou Steinberg
Subject: Re: No market, etc
Date: 
Message-ID: <2616@aramis.rutgers.edu>
In article <····@well.UUCP> ·······@well.UUCP (Jeffrey Jacobs) writes:

> [...] But Lisp has not influenced, and does not influence, mainstream hardware
> development.

Do you consider the DEC-10 (and DEC-20) architecture mainstream?  How
about the SPARC (Sun-4) architecture?  Both were explicitly designed
to be good at both Fortran and Lisp.  (Of course, ideas about what
was "good for lisp" have evolved some in between those two examples.)

In particular, despite your claim that

>  Hardware tagging and type checking are not to be found outside
> of the Lisp machine world

the SPARC architecture does indeed have some hardware type tagging and
checking.  
-- 
					Lou Steinberg

uucp:   {pretty much any major site}!rutgers!aramis.rutgers.edu!lou 
arpa:   ···@aramis.rutgers.edu
From: Charles Hedrick
Subject: hardware tagging in SPARC and DEC-20
Date: 
Message-ID: <509@athos.rutgers.edu>
Some comments on the use of tags in the SPARC and DEC-20.  We should
not give the impression that either of these machines has hardware
tagging in quite the generality of the Lisp Machine.  It is true that
both of these machines were designed specifically with Lisp in mind.
However I believe the ability to use tags on the DEC-20 fell out by
accident.  Lisp was considered in the original PDP-10 design, but the
main effect was all those half-word instructions.  This allowed a
full-word (36-bit) CONS cell, with half-word (18 bit) pointers for the
CAR and CDR.  Since the PDP-10 had 18 bit addresses, that was fine.
One could also have 72-bit CONS cells with two tagged pointers, but
nobody did that and I doubt that the designers had it in mind.  With
the DEC-20, the address space expanded to 30 bits (only 24 bits of
which were actually implemented), and indexing was done in a manner
that allowed a representation with 6-bit tag fields.  We (I supervised
the writing of two versions of DEC-20 Lisp) used it to give 32-bit
integers and reals (using several adjacent type codes), and typed
pointers.  This cleaned up lots of code, and made for nice fast type
dispatches.  We were able to come up with a code sequence that
resulted in almost no type-checking overhead for integer add and
subtract (i.e. one or two extra instructions, for about a factor of 2
slowdown.  But it is very unlikely that a high enough percentage of
the instructions would be arithmetic that this would be noticeable).
Everything other than 32-bit integers trapped to a subroutine for
handling.  However as far as I can tell, the DEC-20 designers were not
thinking of this when they did the design.  It just fell out of the
way addressing happened to work.
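For what it's worth, the representation amounts to something like the following
toy C sketch, with the 36-bit word simulated in a 64-bit integer and the type
codes invented purely for the example:

#include <stdint.h>
#include <stdio.h>

#define ADDR_BITS 30                           /* 30-bit address space       */
#define ADDR_MASK ((1ULL << ADDR_BITS) - 1)
#define TAG_MASK  077ULL                       /* 6-bit type field above it  */

enum { T_FIXNUM = 1, T_CONS = 2, T_SYMBOL = 3 };   /* illustrative codes only */

static uint64_t make_word(unsigned tag, uint64_t addr)
{ return ((uint64_t)tag << ADDR_BITS) | (addr & ADDR_MASK); }

static unsigned word_tag(uint64_t w)  { return (unsigned)((w >> ADDR_BITS) & TAG_MASK); }
static uint64_t word_addr(uint64_t w) { return w & ADDR_MASK; }

int main(void)
{
    uint64_t p = make_word(T_CONS, 0123456);   /* a typed pointer into the heap */
    /* A type dispatch is one shift/mask plus a compare or an indexed jump.    */
    printf("tag %o, address %llo\n", word_tag(p), (unsigned long long)word_addr(p));
    return 0;
}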

Clearly the SPARC designers were thinking of Lisp.  There are special
opcodes for tagged add and subtract.  The tag is the low-order two
bits.  The instruction does an ordinary integer add or subtract; in addition to
the other tests, if either operand has a non-zero tag field, overflow is set.  I don't
know how other implementors would use this, but I have some first
thoughts.  (Note however that I have no plans to do a SPARC Lisp.
Several quite competent groups are doing it already.)  Obviously a tag
value of zero is used for 30-bit integers.  I think I would use tag
values of 1 and 3 for 31-bit reals and tag value 2 for numbers that
need to be pointers (bignums and larger reals).  I suggest type codes
1 and 3 for reals so that the 31 high-order bits are significant.  Tag
value 2 would indicate a pointer to an object whose first byte would
be a further type code.  Typical code would do the add or subtract
instruction, and follow that immediately by a conditional branch,
which would branch to an appropriate exception-handling routine.
There would have to be several entry points to exception-handling, to
handle arguments in various possible combinations of registers.  (This
is the reason for suggesting test on condition rather than a trap,
although depending upon details that I haven't looked at, one might be
able to win with a trap handler.)  This implementation would add one
extra instruction for each add or subtract that happened to be an
integer (the conditional branch, which would not be taken).  31-bit
floating point would require something on the order of 10 additional
instructions, assuming we were willing to have the compiler compile
specific code for each individual instance, and that no no-op's are
needed. [branch, move arg 1, mask arg 1, indexed branch, move arg 2,
mask arg 2, indexed branch, mask arg 1, mask arg 2, float operation,
branch, assuming the no-ops can be eliminated].  If the machine is
really 1.6 MFLOP and 16 RISC MIP, adding 10 non-floating point
instructions per floating operation slows it down about a factor of 2,
which I would find perfectly acceptable for generic arithmetic applied
to floating point.  64-bit floating point would have the above 10
instructions, plus a CONS and some moves.  Probably it would still be
close to a factor of 2 slow-down, since a double-precision floating-point
operation is somewhat slower also.  (Of course this doesn't count the
overhead of the eventual garbage-collection.)  The dispatch overhead
for bignums would almost certainly be invisible.  Let me hasten to say
that I haven't really tried any of this, but I doubt that I'm off by
much.
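In C, the generic add I have in mind comes out roughly as below (untested, and
with the tag layout as above: fixnums carry tag 00 in the low two bits).  On
the SPARC the explicit mask-and-test collapses into the tagged add instruction
plus the single conditional branch:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef uintptr_t lispobj;
#define TAG_MASK 3u

/* Out-of-line handler: a real one would dispatch on the tags to short floats,
 * bignums and boxed numbers.  Stubbed out here.                             */
static lispobj generic_add_slow(lispobj a, lispobj b)
{
    (void)a; (void)b;
    fprintf(stderr, "non-fixnum add: dispatch on tags here\n");
    exit(1);
}

static lispobj generic_add(lispobj a, lispobj b)
{
    if (((a | b) & TAG_MASK) == 0)   /* both tags 00: a plain integer add works   */
        return a + b;                /* (overflow handling omitted in the sketch) */
    return generic_add_slow(a, b);   /* everything else goes out of line          */
}

int main(void)
{
    lispobj a = 20u << 2, b = 22u << 2;                        /* fixnums 20, 22 */
    printf("%lu\n", (unsigned long)(generic_add(a, b) >> 2));  /* prints 42      */
    return 0;
}

The fast path pays only the untaken branch per add or subtract, which is where
the one-extra-instruction figure above comes from; all the interesting cases
land in the out-of-line handler.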

In summary it seems to me that the SPARC design would get roughly the
same results as the DEC-20, namely a slowdown of about a factor of 2
in arithmetic instructions.  Since not all of your program is
arithmetic instructions, the actual results should be better than
that, like maybe 1.5.  I find that quite acceptable for generic
arithmetic, and can't imagine that people would bother to use type
declarations except in very unusual cases.  Without tag support what I
think would be hurt the most is simple counting.  I believe it would
be slowed by roughly an order of magnitude over what you could get
with optimal declarations.

In addition, tag support on SPARC could be used to help in
implementing type security.  In order to do this right, one would have
to live with 30-bit short floats, and use one of the tag values for
characters (which are the only other object in Lisp that one would
like to have not take up any heap space).  With this encoding one
could make sure that one did not try to follow pointers that were
actually integer constants, etc.  This would be based on the fact that
SPARC requires full-word operations to be done on objects at full-word
boundaries.  In this encoding there would be only one tag value for
objects that are pointers.  Suppose that value is 3.  Then
representations for objects on the heap would be arranged so that
full-word operations were done with an offset of 1, i.e. instructions
like load ... 1(%i0).  If such an instruction is done with anything
not ending in 3 (i.e. an integer, short floating-point or char), an
alignment trap occurs.  This is the first time I have ever heard the
RISC alignment restriction advertised as an advantage, but I agree
that in fact one could make use of it as such.
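A toy C rendering of that check, with the alignment trap simulated in software
(the addresses and names are made up purely for the demonstration):

#include <stdint.h>
#include <stdio.h>

#define TAG_MASK 3u
#define TAG_PTR  3u                          /* the single pointer tag       */

static int would_trap(uintptr_t tagged)
{
    uintptr_t effective = tagged + 1;        /* "load ... 1(%i0)"            */
    return (effective & TAG_MASK) != 0;      /* misaligned => alignment trap */
}

int main(void)
{
    uintptr_t heap_cell = 0x1000;                   /* word-aligned heap object  */
    uintptr_t ptr    = heap_cell | TAG_PTR;         /* tagged pointer: ends in 3 */
    uintptr_t fixnum = 7u << 2;                     /* tag 0: ends in 0          */
    printf("pointer traps: %d, fixnum traps: %d\n",
           would_trap(ptr), would_trap(fixnum));    /* prints 0, then 1          */
    return 0;
}
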
From: Juha Heinänen
Subject: Re: No market, etc
Date: 
Message-ID: <2248@korppi.tut.fi>
In article <····@well.UUCP> ·······@well.UUCP (Jeffrey Jacobs) writes:

>end product.  Common Lisp killed this potential by becoming just another
>programming language, sort of a Pascal with parentheses.

Wirth probably wouldn't like your comparison.  Considering its size,
Common Lisp is better compared to Ada.

Why this argument about Common Lisp?  Lisp doesn't live or die with
Common Lisp, or is someone forcing it on you?  We have been using
Scheme for more than a year and are extremely happy with it: no
problems with size or complexity.
-- 
	Juha Heinanen
	Tampere Univ. of Technology
	Finland
	··@tut.fi (Internet), tut!jh (UUCP)
From: Ozan Yigit
Subject: Re: No market, etc
Date: 
Message-ID: <265@yunexus.UUCP>
In article <····@korppi.tut.fi> ··@tut.fi (Juha Heinänen) writes:
>Why this argument about Common Lisp?  Lisp doesn't live or die with
>Common Lisp, or is someone forcing it on you?  We have been using
>Scheme for more than a year and are extremely happy with it: no
>problems with size or complexity.

	Right. I am looking forward to the day the names "lisp"
	and "scheme" will be synonymous, and all others will be
	referred to as, uhm, others. CL will be fondly remembered
	as the dinosaur that made "scheme" the true Lisp.

happy scheming.

oz
-- 
Those who lose the sight	     Usenet: [decvax|ihnp4]!utzoo!yunexus!oz
of what is really important 	    	     ......!seismo!mnetor!yunexus!oz
are destined to become 		     Bitnet: ··@[yusol|yulibra|yuyetti]
irrelevant.	    - anon	     Phonet: +1 416 736-5257 x 3976