From: James King
Subject: Need: A Lisp to run in 640 K
Date: 
Message-ID: <532@rd1632.Dayton.NCR.COM>
I am looking for a Common Lisp that will run in 640 K on a PC
XT or AT.  Am I dreaming?  Is there a subset that can run in 640 K?

Any help is appreciated.

Jim King   (513)-445-1090
········@dayton.ncr.com
From: Brian Leverich
Subject: Re: Need: A Lisp to run in 640 K
Date: 
Message-ID: <1703@randvax.UUCP>
In article <···@rd1632.Dayton.NCR.COM> ····@rd1632.Dayton.NCR.COM
(James King) writes:

>I am looking for a Common Lisp that will run in 640 K on a PC
>XT or AT.  Am I dreaming?  Is there a subset that can run in 640 K?

Common LISPs they aren't, but Northwest Algorithms' UO-LISP and TI's
PC-Scheme are the two highest-performance PC list-processing environments
I know, and they're cheap: UO-LISP is about $150 and PC-Scheme is $99.

UO-LISP has both an interpreter and a compiler, and compiled code runs very fast.
GOOD news:
   o  On a 640k PC, there's about 300k space for compiled functions and
      vectors, and garbage collection is very fast.
BAD news:
   o  The bignum and real arithmetic is slow (in particular, UO-LISP
      doesn't know about co-processor chips);
   o  The cons space is tiny, with only 16k cons cells.  This is made
      even worse by the bignum and real representations, which can
      easily chew up six or more cons cells per number.
If you're doing symbolic work, don't need big data structures, and can
compile all your functions, UO-LISP is a big win.
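To put that cons budget in perspective, a quick back-of-the-envelope
(written in Scheme notation for convenience; UO-LISP syntax differs a bit,
and the six-cells-per-number figure is my own rough estimate from above):

```scheme
;; With 16k (16384) cons cells and ~6 cells per bignum/real,
;; the cons space tops out below 3,000 live numbers:
(quotient 16384 6)   ; => 2730
```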

PC-Scheme is fairly close to the R3RS standard (the Revised^3 Report on Scheme).
GOOD news:
   o  Knows about math coprocessors.
   o  At a sometimes substantial performance penalty, you can use either
      expanded or extended memory to grow the heap to about 2 meg.
      (On a 640k PC, you have about a 300k heap.)
   o  Compiler understands tail recursion and is generally pretty bright
      about optimizing code.  Code runs fast.
   o  Calls to modules written in other languages are supported.
   o  Documentation is very good.
BAD news:
   o  Garbage collecting is fairly slow and obtrusive.
   o  Just executing functions (without doing any consing or whatnot)
      generates wreckage on the heap.  Doing bignum and real arithmetic
      also chews up heap space.  Some innocent-looking code can cause
      an awful lot of garbage collecting.
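As a quick illustration of the tail-recursion point above: a loop written
as a tail call runs in constant stack space, so the compiler can turn it
into a plain jump. (This is a generic R3RS sketch, not PC-Scheme-specific
code.)

```scheme
(define (count-up i n)
  (if (> i n)
      'done
      (count-up (+ i 1) n)))  ; tail call: compiled as a jump, no stack growth

(count-up 1 100000)           ; => done, without blowing the stack
```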

I'll provide additional detail if it's useful.  -B
-- 
  "Simulate it in ROSS"
  Brian Leverich                       | U.S. Snail: 1700 Main St.
  ARPAnet:     ········@rand-unix      |             Santa Monica, CA 90406
  UUCP/usenet: decvax!randvax!leverich | Ma Bell:    (213) 393-0411 X7769