From: Maciej Borecki
Subject: compile
Date: 
Message-ID: <bv6k23$c54$1@atlantis.news.tpi.pl>
hi
i'm a newbie in lisp programming, is it possible to compile lisp into an
executable file under linux or windows? if so please give me some advice
or links...

thx
--
nachos

From: Brian Mastenbrook
Subject: Re: compile
Date: 
Message-ID: <270120042042050331%NOSPAMbmastenbNOSPAM@cs.indiana.edu>
In article <············@atlantis.news.tpi.pl>, Maciej Borecki
<······@poczta.fm> wrote:

> hi
> i'm a newbie in lisp programming, is it possible to compile lisp into
> an executable file under linux or windows? if so please give me some
> advice or links...

http://www.telent.net/lisp/according_to/hello-world.html for Linux and
SBCL.

-- 
Brian Mastenbrook
http://www.cs.indiana.edu/~bmastenb/
From: Karl A. Krueger
Subject: Re: compile
Date: 
Message-ID: <bv6lf1$4vd$1@baldur.whoi.edu>
Maciej Borecki <······@poczta.fm> wrote:
> i'm a newbie in lisp programming, is it possible to compile lisp into
> an executable file under linux or windows? if so please give me some
> advice or links...

This is a frequently-asked question.  Try this FAQ for links:

	http://alu.cliki.net/Frequently%20Asked%20Questions

-- 
Karl A. Krueger <········@example.edu>
Woods Hole Oceanographic Institution
Email address is spamtrapped.  s/example/whoi/
"Outlook not so good." -- Magic 8-Ball Software Reviews
From: André Thieme
Subject: Re: compile
Date: 
Message-ID: <bv6m5s$80a$1@ulric.tng.de>
Maciej Borecki wrote:

> hi
> i'm a newbie in lisp programming, is it possible to compile lisp into 
> an executable file under linux or windows? if so please give me some 
> advice or links...

Yes, this is possible. One example is Corman Lisp:
http://www.cormanlisp.com/

You can build native .exe files that run under windows. You could even 
make .dll's in Lisp which your friend uses then in his C++ applications.
From: Henrik Motakef
Subject: Re: compile
Date: 
Message-ID: <x7smi1qcph.fsf@crocket.internal.henrik-motakef.de>
Maciej Borecki <······@poczta.fm> writes:

> i'm a newbie in lisp programming, is it possible to compile lisp into
> an executable file under linux or windows? if so please give me some
> advice or links...

Yes, but not with all implementations.

First, note that building an executable is different from compiling to
native code. Many implementations (almost all, actually) have
native-code compilers.

As for building a standalone executable (which is, in most cases, not
really necessary, although frequently requested), most commercial
implementations support it, for example Lispworks and Allegro CL,
which have Windows and Linux versions and support other platforms as
well. You can get free restricted versions of them, but these do not
support building executables. Corman Lisp for Windows supports
creating .exe and .dll files as well.

As for the free implementations, there is some support for that in
both CMUCL and SBCL (both of which run on several Unix platforms),
although the SBCL way probably counts as a dirty trick...

The big question however is: why do you think you need that? Is there
really a problem that cannot be solved if some more generic lisp
runtime environment is the executable, and it loads and executes your
compiled code on startup?
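
For illustration, the "generic runtime" approach can be as simple as
this sketch (the file name and the MAIN entry point are made up here,
and the exact way you tell the runtime to load code at startup differs
between implementations):

  ;; Ahead of time: compile the source to a fasl file.
  (compile-file "my-app.lisp")   ; produces e.g. my-app.fasl or my-app.x86f

  ;; At startup: the stock runtime loads the compiled code and calls
  ;; the (hypothetical) entry point.
  (load "my-app")
  (my-app:main)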

This has been discussed in this newsgroup many, many times. Reading
some old threads on groups.google.com might be enlightening.
From: Rahul Jain
Subject: Re: compile
Date: 
Message-ID: <8765emyz1o.fsf@nyct.net>
Henrik Motakef <············@henrik-motakef.de> writes:

> First, note that building an executable is different from compiling to
> native code. Many implementations (almost all, actually) have
> native-code compilers.

Also, you can build an executable and not compile (your code) to native
machine code.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Ari Johnson
Subject: Re: compile
Date: 
Message-ID: <ZtlUb.15251$fD.14445@fed1read02>
I have a question on the side, here.  I'm new to comp.lang.lisp but this 
thread is along the lines of what I've been playing with lately.  I 
wrote a Lisp interpreter (my own dialect, imagine a subset of Scheme) 
for embedding in another project once, and have been trying to think of 
ways to improve it.  One thing that I'd like to do is make it into a 
to-native-code compiler, specifically for the Z80 in my TI-85.  (Don't 
ask 'why', I'm infamous for answering 'because I got bored'. :P)

There are a few points where I don't know how other compilers do things, though:
   1. What do you do about garbage collection?  Is this part of a 
standard library that gets linked into every executable, like _init from 
crt0.s in C does?  I'm guessing it gets called by the memory allocation 
routines when needed, and that those are also linked in.
   2. What about type resolution?  The compiler has no way of knowing 
what data type a given expression will evaluate to, so how are things 
like + implemented?  Do they have to play bitwise tricks like my 
interpreter does (a la GNU Scheme's method), or is there a better way?
   3. How are function parameters marshalled?  As a list or are they 
pushed on the stack as in normal C calling conventions?  If the latter, 
how do you deal with optional and 'rest' arguments?
   4. How are macros implemented?  Do you have to include a complete 
Lisp interpreter in the linked executable to evaluate the arguments?  I 
guess that knowing how 'eval' works in compiled code would be very 
helpful here.
   5. How does memory management typically work?  I was thinking that my 
current "here is a block of cells to use for your conses, and you can 
malloc() other stuff" system that the interpreter uses would work just 
fine in the compiled code, but if there's a better way I'd love to hear 
it, especially since malloc() isn't exactly available per se on my TI-85.
   6. What am I missing?


I also want to implement tail recursion elimination, but that should be 
trivial with the compiled code whereas it is not with my interpreter.

Thanks to anyone who responds...try to reply via e-mail too so I know 
that you've responded, but I'll keep all real discussion on the 
newsgroup since this seems like a relatively hot topic. :)

Ari
From: Lars Brinkhoff
Subject: Re: compile
Date: 
Message-ID: <85llnim0tx.fsf@junk.nocrew.org>
Ari Johnson <·····@hotmail.com> writes:
> I wrote a Lisp interpreter ...  One thing that I'd like to do is
> make it into a to-native-code compiler ...  There are a few points
> that I don't know how other compilers do, though:
>    1. What do you do about garbage collection?  Is this part of a
> standard library that gets linked into every executable, like _init
> from crt0.s in C does?  I'm guessing it gets called by the memory
> allocation routines when needed, and that those are also linked in.

There is a collector called the Boehm-Demers-Weiser conservative
garbage collector (usually just "Boehm GC") that works as a
replacement for malloc.  However, many Lisp compilers (or rather
runtimes) implement their own memory management completely bypassing
malloc.  Generational and copying garbage collectors seem popular.  If
you're serious about this, there's a book called Garbage Collection
you should read.

>    2. What about type resolution?  The compiler has no way of knowing
> what data type a given expression will evaluate to, so how are things
> like + implemented?  Do they have to play bitwise tricks like my
> interpreter does (a la GNU Scheme's method), or is there a better way?

Usually, all values carry type information.  That way, a function like
+ can check all of its arguments to see that they are the right type.
A good paper to read is
  ftp://ftp.cs.indiana.edu/pub/scheme-repository/doc/pubs/typeinfo.ps.gz
Most (but not all) Lisp implementations represent values by what the
paper calls "typed words".  Perhaps that's what you mean by "bitwise
tricks".

>    3. How are function parameters marshalled?  As a list or are they
> pushed on the stack as in normal C calling conventions?  If the
> latter, how do you deal with optional and 'rest' arguments?

More like the latter.  Passing the first few arguments in registers is
also common (that's also a normal C convention on many machines).  The
called function is usually responsible for doing any &optional, &rest,
&key, etc processing of its arguments.
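
For example (just a sketch of the callee-does-the-parsing point):

  ;; The caller simply passes 1, 2, 3, 4; the called function itself
  ;; defaults B and conses up MORE from whatever arguments remain.
  (defun f (a &optional (b 10) &rest more)
    (list a b more))

  ;; (f 1)       => (1 10 NIL)
  ;; (f 1 2 3 4) => (1 2 (3 4))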

>    4. How are macros implemented?  Do you have to include a complete
> Lisp interpreter in the linked executable to evaluate the arguments?

Absolutely not.  All macros in a function are expanded until there are
no macro invocations left.  Then the resulting code is compiled.
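
A trivial sketch of what that means:

  (defmacro square (x) `(* ,x ,x))
  (defun g (n) (square (+ n 1)))

  ;; The compiler macroexpands first, so it effectively compiles
  ;;   (defun g (n) (* (+ n 1) (+ n 1)))
  ;; and no trace of SQUARE -- and no interpreter -- is needed at run time.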

>    5. How does memory management typically work?

See 1.

>    6. What am I missing?

It seems like you should read up on Lisp compilers.  The book Lisp in
Small Pieces is usually recommended.

-- 
Lars Brinkhoff,         Services for Unix, Linux, GCC, HTTP
Brinkhoff Consulting    http://www.brinkhoff.se/
From: Ari Johnson
Subject: Re: compile
Date: 
Message-ID: <WZbVb.18613$fD.11225@fed1read02>
Lars Brinkhoff wrote:
> There is a collector called the Boehm-Demers-Weiser conservative
> garbage collector (usually just "Boehm GC") that works as a
> replacement for malloc.  However, many Lisp compilers (or rather
> runtimes) implement their own memory management completely bypassing
> malloc.  Generational and copying garbage collectors seem popular.  If
> you're serious about this, there's a book called Garbage Collection
> you should read.

I probably should read it, as well as Lisp in Small Pieces as you later 
mentioned.  Too bad I'm broke - maybe the bookstore has 'em and I can 
just hang out there all weekend. :)

> Usually, all values carry type information.  That way, a function like
> + can check all of its arguments to see that they are the right type.
> A good paper to read is
>   ftp://ftp.cs.indiana.edu/pub/scheme-repository/doc/pubs/typeinfo.ps.gz
> Most (but not all) Lisp implementations represent values by what the
> paper calls "typed words".  Perhaps that's what you mean by "bitwise
> tricks".

Essentially, yes it is.  But isn't that a horribly slow way to do basic 
math?  C can compile '(int) a + (int) b' as what amounts to a single 
instruction on the CPU, whereas most Lisp compilers from what you're 
telling me are going to generate a function call and a bunch of runtime 
type checking.

> Absolutely not.  All macros in a function are expanded until there are
> no macro invocations left.  Then the resulting code is compiled.

How about eval?

Thanks for your response (and the others I got).  I'm learning.
From: Joe Marshall
Subject: Re: compile
Date: 
Message-ID: <y8re36oj.fsf@comcast.net>
Ari Johnson <·····@hotmail.com> writes:

> C can compile '(int) a + (int) b' as what amounts to a single
> instruction on the CPU,

C computes (a + b) mod (implementation dependent value) in a single
instruction if you give it integers.  It computes something random if
you give it bignums or strings.


-- 
~jrm
From: Henrik Motakef
Subject: Re: compile
Date: 
Message-ID: <x7broa7gcc.fsf@crocket.internal.henrik-motakef.de>
Ari Johnson <·····@hotmail.com> writes:

> But isn't that a horribly slow way to do basic math?  C can compile
> '(int) a + (int) b' as what amounts to a single instruction on the
> CPU, whereas most Lisp compilers from what you're telling me are
> going to generate a function call and a bunch of runtime type
> checking.

CL compilers generally can do so as well, if they know that a and b
are fixnums. They just have to choose their tag bits cleverly.

Of course, the function cl:+ in all its glory deals with any kind of
cl:number, including fixnums and bignums, complex numbers etc.  So
unless you help your compiler a little (or it is sufficiently smart to
figure out that a given call to cl:+ will never try to add a bignum to
a float or some such), it will indeed have to produce more code than a
single instruction.

> > Absolutely not.  All macros in a function are expanded until there are
> > no macro invocations left.  Then the resulting code is compiled.
> 
> How about eval?

What do you mean? EVAL itself is a function, not a macro. It has to
MACROEXPAND its argument, of course. In a function like (defun foo ()
(eval '(bar 1 2 3))), where BAR is a macro, it would not be
macroexpanded at compile-time.
From: Ari Johnson
Subject: Re: compile
Date: 
Message-ID: <KPfVb.20621$fD.20538@fed1read02>
Henrik Motakef wrote:
> Ari Johnson <·····@hotmail.com> writes:
> 
> 
>>But isn't that a horribly slow way to do basic math?  C can compile
>>'(int) a + (int) b' as what amounts to a single instruction on the
>>CPU, whereas most Lisp compilers from what you're telling me are
>>going to generate a function call and a bunch of runtime type
>>checking.
> 
> 
> CL compilers generally can do so as well, if they know that a and b
> are fixnums. They just have to choose their tag bits cleverly.

What do you mean 'cleverly'?  Without ruining the correct operation of 
+, there's no way to boil it down to a single CPU instruction, unless 
your compiler is very smart or very stupid (for example, there's no 
reason to compile (+ 3 3) to anything other than 6 :P).

> Of course, the function cl:+ in all its glory deals with any kind of
> cl:number, including fixnums and bignums, complex numbers etc.  So
> unless you help your compiler a little (or it is sufficiently smart to
> figure out that a given call to cl:+ will never try to add a bignum to
> a float or some such), it will indeed have to produce more code than a
> single instruction.

Short of only being given constant parameters, isn't part of the fun 
that you never know what type you'll be getting? ;)

> 
> 
>>>Absolutely not.  All macros in a function are expanded until there are
>>>no macro invocations left.  Then the resulting code is compiled.
>>
>>How about eval?
> 
> 
> What do you mean? EVAL itself is a function, not a macro. It has to
> MACROEXPAND its argument, of course. In a function like (defun foo ()
> (eval '(bar 1 2 3))), where BAR is a macro, it would not be
> macroexpanded at compile-time.

So you're saying you *do* have to have a full interpreter linked to 
every executable?
From: Darius
Subject: Re: compile
Date: 
Message-ID: <20040207202227.00003756@derek>
On Sat, 07 Feb 2004 17:57:10 -0700
Ari Johnson <·····@hotmail.com> wrote:

> Henrik Motakef wrote:
> > Ari Johnson <·····@hotmail.com> writes:
> > 
> > 
> >>But isn't that a horribly slow way to do basic math?  C can compile
> >>'(int) a + (int) b' as what amounts to a single instruction on the
> >>CPU, whereas most Lisp compilers from what you're telling me are
> >>going to generate a function call and a bunch of runtime type
> >>checking.
> > 
> > 
> > CL compilers generally can do so as well, if they know that a and b
> > are fixnums. They just have to choose their tag bits cleverly.
> 
> What do you mean 'cleverly'?  Without ruining the correct operation of
> +, there's no way to boil it down to a single CPU instruction, unless 
> your compiler is very smart or very stupid (for example, there's no 
> reason to compile (+ 3 3) to anything other than 6 :P).
> 
> > Of course, the function cl:+ in all its glory deals with any kind of
> > cl:number, including fixnums and bignums, complex numbers etc.  So
> > unless you help your compiler a little (or it is sufficiently smart
> > to figure out that a given call to cl:+ will never try to add a
> > bignum to a float or some such), it will indeed have to produce more
> > code than a single instruction.
> 
> Short of only being given constant parameters, isn't part of the fun 
> that you never know what type you'll be getting? ;)

Except for when you do.

> >>>Absolutely not.  All macros in a function are expanded until there
> >>>are no macro invocations left.  Then the resulting code is compiled.
> >>
> >>How about eval?
> > 
> > 
> > What do you mean? EVAL itself is a function, not a macro. It has to
> > MACROEXPAND its argument, of course. In a function like (defun foo
> > ()(eval '(bar 1 2 3))), where BAR is a macro, it would not be
> > macroexpanded at compile-time.
> 
> So you're saying you *do* have to have a full interpreter linked to 
> every executable?

Um, you'd need a full interpreter to evaluate 'eval' anyways.
From: Matthew Danish
Subject: Re: compile
Date: 
Message-ID: <20040208015848.GP8667@mapcar.org>
On Sat, Feb 07, 2004 at 05:57:10PM -0700, Ari Johnson wrote:
> Henrik Motakef wrote:
> >Ari Johnson <·····@hotmail.com> writes:
> >
> >
> >>But isn't that a horribly slow way to do basic math?  C can compile
> >>'(int) a + (int) b' as what amounts to a single instruction on the
> >>CPU, whereas most Lisp compilers from what you're telling me are
> >>going to generate a function call and a bunch of runtime type
> >>checking.
> >
> >
> >CL compilers generally can do so as well, if they know that a and b
> >are fixnums. They just have to choose their tag bits cleverly.
> 
> What do you mean 'cleverly'?  Without ruining the correct operation of 
> +, there's no way to boil it down to a single CPU instruction, unless 
> your compiler is very smart or very stupid (for example, there's no 
> reason to compile (+ 3 3) to anything other than 6 :P).

It is possible for a reasonably smart compiler to infer the types of the
arguments to functions.  In addition, CL:+ is not redefinable.  Hence,
advanced CL compilers like Python (used by CMUCL, SBCL, Scieneer) can
perform extensive analysis and optimization with results like this:


CL-USER> (defun foo (x y)
           (declare (type fixnum x y)
                    (optimize (speed 3)
                              (safety 0)))
           (the fixnum (+ x y)))
FOO
CL-USER> (disassemble *)
; Compiling LAMBDA (X Y): 
; Compiling Top-Level Form: 
4873F300:       .ENTRY "LAMBDA (X Y)"(x y)   ; (FUNCTION (FIXNUM FIXNUM) FIXNUM)
      18:       POP   DWORD PTR [EBP-8]
      1B:       LEA   ESP, [EBP-32]
      1E:       ADD   EDX, EDI               ; No-arg-parsing entry point
      20:       MOV   ECX, [EBP-8]
      23:       MOV   EAX, [EBP-4]
      26:       ADD   ECX, 2
      29:       MOV   ESP, EBP
      2B:       MOV   EBP, EAX
      2D:       JMP   ECX
      2F:       NOP

Which is an ADD instruction followed by some function-call frame
boilerplate.  It is possible to sacrifice the generality of the CL
functions for speed; you just need to let the compiler know.

(In this example I made the types explicit, but Python is also very good
at inference).

> >Of course, the function cl:+ in all its glory deals with any kind of
> >cl:number, including fixnums and bignums, complex numbers etc.  So
> >unless you help your compiler a little (or it is sufficiently smart to
> >figure out that a given call to cl:+ will never try to add a bignum to
> >a float or some such), it will indeed have to produce more code than a
> >single instruction.
> 
> Short of only being given constant parameters, isn't part of the fun 
> that you never know what type you'll be getting? ;)

And yet it is possible, in closed forms or when declared, to know what
types you will be getting.

> >>>Absolutely not.  All macros in a function are expanded until there are
> >>>no macro invocations left.  Then the resulting code is compiled.
> >>
> >>How about eval?
> >
> >
> >What do you mean? EVAL itself is a function, not a macro. It has to
> >MACROEXPAND its argument, of course. In a function like (defun foo ()
> >(eval '(bar 1 2 3))), where BAR is a macro, it would not be
> >macroexpanded at compile-time.
> 
> So you're saying you *do* have to have a full interpreter linked to 
> every executable?

Lisp code is generally run from within the Lisp runtime, whether or not
it is compiled.  If the Lisp runtime isn't running, then it needs to be
started somehow first.  Most Lisp programmers work with an incremental
method inside an image-based environment; the run-time is always going,
while you make changes, recompile, and test your code.  When the program
is finally ready for distribution, then you distill the necessary parts
of the image into an executable which is linked with the runtime.  If
you do not need EVAL, it is at this point that you leave out the
interpreter or compiler.  Commercial implementations often require extra
licensing fees if you DO want to leave it in.
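
In SBCL, for instance, that distilling step looks roughly like this (a
sketch only: the file names and the MAIN entry point are invented, and
each implementation spells this differently):

  (load "my-app.fasl")                      ; the compiled application code
  (sb-ext:save-lisp-and-die "my-app.core"   ; dump the current image
                            :toplevel #'my-app:main)
  ;; The dumped core is then started by the SBCL runtime executable.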

The other important point which must be made clear is that the expansion
of macros is NOT related to EVAL.  The expansion of macros happens at
compile-time, if code is compiled, or just before it is interpreted.
Naturally, if you want to use EVAL on code, then that code may be
subject to macroexpansion.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Edi Weitz
Subject: Re: compile
Date: 
Message-ID: <m3isiifire.fsf@bird.agharta.de>
On Sat, 07 Feb 2004 17:57:10 -0700, Ari Johnson <·····@hotmail.com> wrote:

> Henrik Motakef wrote:
>>
>> CL compilers generally can do so as well, if they know that a and b
>> are fixnums. They just have to choose their tag bits cleverly.
>
> What do you mean 'cleverly'?  Without ruining the correct operation of
> +, there's no way to boil it down to a single CPU instruction, unless
> your compiler is very smart or very stupid

See <http://sbcl-internals.cliki.net/tag%20bit> for an example. Note
that the tag bits for fixnums are 000 and 100, so the effect is that
you have 30 bits available for the number itself and additions and
subtractions "just work."

>> Of course, the function cl:+ in all its glory deals with any kind
>> of cl:number, including fixnums and bignums, complex numbers etc.
>> So unless you help your compiler a little (or it is sufficiently
>> smart to figure out that a given call to cl:+ will never try to add
>> a bignum to a float or some such), it will indeed have to produce
>> more code than a single instruction.
>
> Short of only being given constant parameters, isn't part of the fun
> that you never know what type you'll be getting? ;)

You can make a promise to your compiler. Look up DECLARE in the ANSI
spec. With corresponding optimization settings most CL compilers will
believe you. Compare the following in CMUCL:

  (disassemble (compile nil (lambda (a b)
                              (+ a b))))

  (disassemble (compile nil (lambda (a b)
                              (declare (optimize speed (safety 0))
                                       (fixnum a b))
                              (+ a b))))

> So you're saying you *do* have to have a full interpreter linked to
> every executable?

No. First, some Lisps (like Corman, MCL, SBCL) don't even have an
interpreter. Second, most Lisps that can deliver statically linked
"stand-alone" executables have an option to excise dynamic features
like the compiler or EVAL from the resulting application. It is your
responsibility as the programmer not to use them after you've thrown
them away... :)

Edi.
From: Thomas F. Burdick
Subject: Re: compile
Date: 
Message-ID: <xcvhdy29vub.fsf@famine.OCF.Berkeley.EDU>
Edi Weitz <···@agharta.de> writes:

> On Sat, 07 Feb 2004 17:57:10 -0700, Ari Johnson <·····@hotmail.com> wrote:
>
> > So you're saying you *do* have to have a full interpreter linked to
> > every executable?

Well, you said you're interested in your own Lisp dialect: do you want
EVAL, or not?  EVAL is an interpreter, so if you want it included, you
have to include it.

> No. First, some Lisps (like Corman, MCL, SBCL) don't even have an
> interpreter. Second, most Lisps that can deliver statically linked
> "stand-alone" executables have an option to excise dynamic features
> like the compiler or EVAL from the resulting application. It is your
> responsibility as the programmer not to use them after you've thrown
> them away... :)

And third, it's not like an interpreter is big or hard.  Especially
for "a subset of scheme".

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Ari Johnson
Subject: Re: compile
Date: 
Message-ID: <llhVb.21367$fD.16673@fed1read02>
Thomas F. Burdick wrote:

> Edi Weitz <···@agharta.de> writes:
> 
> 
>>On Sat, 07 Feb 2004 17:57:10 -0700, Ari Johnson <·····@hotmail.com> wrote:
>>
>>
>>>So you're saying you *do* have to have a full interpreter linked to
>>>every executable?
> 
> 
> Well, you said you're interested in your own Lisp dialect: do you want
> EVAL, or not?  EVAL is an interpreter, so if you want it included, you
> have to include it.

Touche.

> 
>>No. First, some Lisps (like Corman, MCL, SBCL) don't even have an
>>interpreter. Second, most Lisps that can deliver statically linked
>>"stand-alone" executables have an option to excise dynamic features
>>like the compiler or EVAL from the resulting application. It is your
>>responsibility as the programmer not to use them after you've thrown
>>them away... :)
> 
> 
> And third, it's not like an interpreter is big or hard.  Especially
> for "a subset of scheme".
> 

It only took me a few days to write it, so this is true.  But if your 
goal is to run on a TI-85, you don't want an entire interpreter sitting 
around if you can help it. :)
From: Pascal Bourguignon
Subject: Re: compile
Date: 
Message-ID: <8765eib5w7.fsf@thalassa.informatimago.com>
Ari Johnson <·····@hotmail.com> writes:

> Thomas F. Burdick wrote:
> 
> > Edi Weitz <···@agharta.de> writes:
> >
> >>On Sat, 07 Feb 2004 17:57:10 -0700, Ari Johnson <·····@hotmail.com> wrote:
> >>
> >>
> >>>So you're saying you *do* have to have a full interpreter linked to
> >>>every executable?
> > Well, you said you're interested in your own Lisp dialect: do you
> > want
> > EVAL, or not?  EVAL is an interpreter, so if you want it included, you
> > have to include it.
> 
> Touche.

Missed!  SBCL has no interpreter but it still has eval.  It just
compiles before executing.  [Except for simple sexps such as (eval '*my-var*)].


-- 
__Pascal_Bourguignon__                     http://www.informatimago.com/
There is no worse tyranny than to force a man to pay for what he doesn't
want merely because you think it would be good for him.--Robert Heinlein
http://www.theadvocates.org/
From: Thomas F. Burdick
Subject: Re: compile
Date: 
Message-ID: <xcv8yjd9hyc.fsf@famine.OCF.Berkeley.EDU>
Ari Johnson <·····@hotmail.com> writes:

> >>On Sat, 07 Feb 2004 17:57:10 -0700, Ari Johnson <·····@hotmail.com> wrote:
> >>
> >>>So you're saying you *do* have to have a full interpreter linked to
> >>>every executable?

> Thomas F. Burdick wrote:
> 
> > Well, you said you're interested in your own Lisp dialect: do you want
> > EVAL, or not?  EVAL is an interpreter, so if you want it included, you
> > have to include it.

[ Apparently I wasn't precise enough for this forum: EVAL is an
  evaluator, which in the case of a tiny environment like a
  calculator, certainly means an interpreter, not a compiler! ]

> > And third, it's not like an interpreter is big or hard.  Especially
> > for "a subset of scheme".
> 
> It only took me a few days to write it, so this is true.  But if your 
> goal is to run on a TI-85, you don't want an entire interpreter sitting 
> around if you can help it. :)

"An entire interpreter" -- you might not be clear on just how tiny an
interpreter can be :-).  Really, once you have a compiler, and the
runtime to support the compiled Lisp, a tiny interpreter, written in
that same Lisp, can be 50-100 LOC, max.  Hopefully this should compile
down to very very little.
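
To give a sense of the scale, here is a toy evaluator for a small
Scheme-ish subset (purely illustrative: dynamically scoped,
single-expression bodies, no error handling):

  (defun tiny-eval (form env)
    (cond ((symbolp form)
           (let ((binding (assoc form env)))
             (if binding (cdr binding) (symbol-function form))))
          ((atom form) form)                     ; numbers etc. self-evaluate
          (t (case (first form)
               ((quote)  (second form))
               ((if)     (if (tiny-eval (second form) env)
                             (tiny-eval (third form) env)
                             (tiny-eval (fourth form) env)))
               ((lambda) form)                   ; no real closures, for brevity
               (t (tiny-apply (tiny-eval (first form) env)
                              (mapcar (lambda (a) (tiny-eval a env))
                                      (rest form))
                              env))))))

  (defun tiny-apply (fn args env)
    (if (and (consp fn) (eq (first fn) 'lambda))
        (tiny-eval (third fn)
                   (append (mapcar #'cons (second fn) args) env))
        (apply fn args)))                        ; primitives come from the host

  ;; (tiny-eval '((lambda (x) (+ x 1)) 41) '()) => 42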

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Ari Johnson
Subject: Re: compile
Date: 
Message-ID: <USCVb.30468$fD.13153@fed1read02>
Thomas F. Burdick wrote:

> "An entire interpreter" -- you might not be clear on just how tiny an
> interpreter can be :-).  Really, once you have a compiler, and the
> runtime to support the compiled Lisp, a tiny interpreter, written in
> that same Lisp, can be 50-100 LOC, max.  Hopefully this should compile
> down to very very little.

You have a point, but one problem is that the compile-time environment 
and runtime environment are going to be hard to keep in sync that way, 
since I plan to write the compiler in C (maybe in Lisp, but that doesn't 
make it any easier) on Unix, while the runtime will likely be as small as 
I can make it, and therefore in Z80 assembler, on a machine without a 
filesystem or anything else at its disposal.  The dilemma is giving you 
the same dialect of Lisp when you parse as when you compile as when you 
run.

Now, as to writing any part of the runtime, including the interpreter, 
in the language itself...there are a lot of challenges I can see in 
that.  Then again, probably easier there than in Z80 assembler, but...
From: Christopher C. Stacy
Subject: Re: compile
Date: 
Message-ID: <uk72ymcql.fsf@news.dtpq.com>
>>>>> On 07 Feb 2004 17:57:32 -0800, Thomas F Burdick ("Thomas") writes:

 Thomas> Edi Weitz <···@agharta.de> writes:
 >> On Sat, 07 Feb 2004 17:57:10 -0700, Ari Johnson <·····@hotmail.com> wrote:
 >> 
 >> > So you're saying you *do* have to have a full interpreter linked to
 >> > every executable?

 Thomas> Well, you said you're interested in your own Lisp dialect: do
 Thomas> you want EVAL, or not?  EVAL is an interpreter, so if you
 Thomas> want it included, you have to include it.

EVAL is most certainly not an interpreter - rather, it is a function
that implements the Lisp abstraction of "evaluation".
From: javuchi
Subject: Re: compile
Date: 
Message-ID: <m2fzdl9e3j.fsf@servidor.kalandra>
······@news.dtpq.com (Christopher C. Stacy) writes:

> EVAL is most certainly not an interpreter - rather, it is a function
> that implements the Lisp abstraction of "evaluation".

And a good linker should omit statically linking unused code. That is:
if you don't use EVAL in your program, it should not be linked into your
statically generated executable. Am I right?
I think GCC does this. If I write a statically linked "hello world"
application in C, it will not waste megabytes on the hard disk, just
some kilobytes... and glibc is huge...

-- 
javuchi
From: Ari Johnson
Subject: Re: compile
Date: 
Message-ID: <sBDVb.30669$fD.27945@fed1read02>
javuchi wrote:
> And a good linker should omit to statically link to not used code. That is:
> if you don't use EVAL in your program, it should not be linked into your
> statically generated executable. Am I right?
> I think GCC does this. If I program a statically linked "hello world"
> application into C, this will not waste some megabytes in the hard disk, just
> some kilobytes... and glibc is huge...

I don't know that there's going to be any linking in the normal sense 
going on.  It's been forever since I've coded for my TI-85, but I'm 
pretty sure the build system isn't that smart.  I may shop around for a 
good enough linker, but there certainly can't be many options in that 
regard.  My solution is going to have to be keeping the runtime small. :|
From: Joe Marshall
Subject: Re: compile
Date: 
Message-ID: <3c9l9d2k.fsf@comcast.net>
javuchi <······@nospam.com> writes:

> ······@news.dtpq.com (Christopher C. Stacy) writes:
>
>> EVAL is most certainly not an interpreter - rather, it is a function
>> that implements the Lisp abstraction of "evaluation".
>
> And a good linker should omit to statically link to not used code. That is:
> if you don't use EVAL in your program, it should not be linked into your
> statically generated executable. Am I right?

The problem is that you may in fact use EVAL in your code.  Your code
could throw a runtime error, enter the debugger, start a
read-eval-print loop...  and suddenly you're using EVAL.

-- 
~jrm
From: Ari Johnson
Subject: Re: compile
Date: 
Message-ID: <eyDVb.30646$fD.28184@fed1read02>
Joe Marshall wrote:
> The problem is that you may in fact use EVAL in your code.  Your code
> could throw a runtime error, enter the debugger, start a
> read-eval-print loop...  and suddenly you're using EVAL.

Believe me, there will be no reading on the calculator.  Writing the 
runtime is going to be hard enough without implementing a complete 
parser in Z80 assembler.
From: Paul Wallich
Subject: Re: compile
Date: 
Message-ID: <c073v1$gl6$1@reader2.panix.com>
Ari Johnson wrote:

> Joe Marshall wrote:
> 
>> The problem is that you may in fact use EVAL in your code.  Your code
>> could throw a runtime error, enter the debugger, start a
>> read-eval-print loop...  and suddenly you're using EVAL.
> 
> 
> Believe me, there will be no reading on the calculator.  Writing the 
> runtime is going to be hard enough without implementing a complete 
> parser in Z80 assembler.

A complete parser in Z80 assembler? You mean like the one included in 
Waltz Lisp (not a CL but still perfectly functional) back in the early 
80s? Or various schemes of the same era? If you get to compile from a 
real machine there should be plenty of room.

paul
From: Ari Johnson
Subject: Re: compile
Date: 
Message-ID: <ZLEVb.31187$fD.26185@fed1read02>
Paul Wallich wrote:
> A complete parser in Z80 assembler? You mean like the one included in 
> Waltz Lisp (not a CL but still perfectly functional) back in the early 
> 80s? Or various schemes of the same era? If you get to compile from a 
> real machine there should be plenty of room.

I'm not saying it can't be done.  It would be tedious, of course, but 
the real reason I don't want to do a parser in Z80 assembler is because 
I have very limited memory to work with, and I can't picture a scenario 
in which I would really want to input Lisp on the calculator.

If I were writing a straight-up interpreter so as to basically have a 
calculator that works in a Lisp dialect in the same way that the HP48 
can be said to work in a Forth dialect, that would be a different goal 
altogether.  I'm trying to keep my runtime small so I can write real 
programs.  I can always expand the runtime to include a REPL if I change 
my goals.
From: Christopher C. Stacy
Subject: Re: compile
Date: 
Message-ID: <uk72w3dtv.fsf@news.dtpq.com>
>>>>> On Sun, 08 Feb 2004 20:56:50 -0700, Ari Johnson ("Ari") writes:

 Ari> Joe Marshall wrote:
 >> The problem is that you may in fact use EVAL in your code.  Your code
 >> could throw a runtime error, enter the debugger, start a
 >> read-eval-print loop...  and suddenly you're using EVAL.

 Ari> Believe me, there will be no reading on the calculator.  Writing the
 Ari> runtime is going to be hard enough without implementing a complete
 Ari> parser in Z80 assembler.

This message appears to be in response to the anonymous 
person who posted as "javuchi".   Is that you?

I don't think Common Lisp could be easily implemented on a Z80.
If that's what you're thinking of doing, my recommendation would
be to give up and write the program in assembly language.

Alternatively, you could use Common Lisp (on a real computer) to
implement a compiler for some language that you would invent,
and cross-compile it to the Z-80.
From: Ari Johnson
Subject: Re: compile
Date: 
Message-ID: <85HVb.32209$fD.26721@fed1read02>
Christopher C. Stacy wrote:
> This message appears to be in response to the anonymous 
> person who posted as "javuchi".   Is that you?

Me?  Nope.  I replied to one of his posts, though, earlier.

> I don't think Common Lisp could be easily implemented on a Z80.
> If that's what you're thinking of doing, my recommendation would
> be to give up and write the program in assembly languge.

Who said Common Lisp?  I want to compile a by-me dialect of Lisp with 
the intention of being compact.  The TI-85 isn't any less powerful than 
computers were when Lisp first came about, so it's possible...you just 
have to trim a lot of fat.  And, of course, the most effective way to 
trim fat is to start with something sans fat and build up only muscle.

> Alternatively, you could use Common Lisp (on a real computer) to
> implement a compiler for some language that you would invent,
> and cross-compile it to the Z-80.

That's the idea - I won't be compiling anything on the calculator itself 
(that would make it a Lispulator, which is a tempting prospect, but one 
that can be realized after the runtime and cross-compiler are working), 
but will indeed be cross-compiling with the Z80 target.
From: Pascal Bourguignon
Subject: Re: compile
Date: 
Message-ID: <87znbsw1ci.fsf@thalassa.informatimago.com>
Ari Johnson <·····@hotmail.com> writes:
> Who said Common Lisp?  I want to compile a by-me dialect of Lisp with
> the intention of being compact.  The TI-85 isn't any less powerful
> than computers were when Lisp first came about, so it's possible...you
> just have to trim a lot of fat.  And, of course, the most effective
> way to trim fat is to start with something sans fat and build up only
> muscle.

The most compact code is virtual machine code.  I guess that even
on a Z80 processor, virtual machine code would be smaller than native
code.  If speed is not so important...  Then a nice property is that
you have to write only the virtual machine in "assembler".
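
As a sketch of how little machinery such a VM needs (the opcodes and
encoding below are invented purely for illustration):

  ;; A toy dispatch loop for a stack-based byte-coded VM.  Each opcode
  ;; is a single byte and operands live on the stack, so the code
  ;; carries far fewer operand bits than native instructions would.
  (defun run (code)
    (let ((stack '()) (pc 0))
      (loop
        (case (aref code pc)
          (0 (push (aref code (incf pc)) stack))        ; PUSH-CONST <byte>
          (1 (push (+ (pop stack) (pop stack)) stack))  ; ADD
          (2 (push (* (pop stack) (pop stack)) stack))  ; MUL
          (3 (return (pop stack))))                     ; HALT
        (incf pc))))

  ;; (run #(0 3 0 4 1 3)) => 7    ; push 3, push 4, add, halt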


-- 
__Pascal_Bourguignon__                     http://www.informatimago.com/
There is no worse tyranny than to force a man to pay for what he doesn't
want merely because you think it would be good for him.--Robert Heinlein
http://www.theadvocates.org/
From: Will Hartung
Subject: Re: compile
Date: 
Message-ID: <c090op$13mdss$2@ID-197644.news.uni-berlin.de>
"Pascal Bourguignon" <····@thalassa.informatimago.com> wrote in message
···················@thalassa.informatimago.com...
>
> Ari Johnson <·····@hotmail.com> writes:
> > Who said Common Lisp?  I want to compile a by-me dialect of Lisp with
> > the intention of being compact.  The TI-85 isn't any less powerful
> > than computers were when Lisp first came about, so it's possible...you
> > just have to trim a lot of fat.  And, of course, the most effective
> > way to trim fat is to start with something sans fat and build up only
> > muscle.
>
> The most compact codes are  virtual machines codes.  I guess that even
> on a Z80 processor, virtual  machine code would be smaller than native
> code.  If speed  is not so important...  Then a  nice property is that
> you have to write in "assembler" only the virtual machine.

That's the advantage of classic threaded Forths. On a 16-bit platform,
pretty much every instruction is only 16 bits, so they tend to be very
compact. Add to that the fact that the stack is used directly for parameter
passing, and you net even more savings in space, since the parameters are
not embedded in the code per se, like registers or immediates, etc.

Limit yourself to 255-ish high-level operations, and you can get even more
savings.

It would be interesting to see what kind of aggressively optimized,
"tree-shaken"/statically-linked image a compiler could create for smaller
programs: take a high-level program (written in whatever) and compile it
into a threaded implementation, with the popular routine calls essentially
"Huffman"-encoded during the final build.

Regards,

Will Hartung
(·····@msoft.com)
From: Ari Johnson
Subject: Re: compile
Date: 
Message-ID: <NLXVb.41024$fD.38273@fed1read02>
Pascal Bourguignon wrote:
> The most compact codes are  virtual machines codes.  I guess that even
> on a Z80 processor, virtual  machine code would be smaller than native
> code.  If speed  is not so important...  Then a  nice property is that
> you have to write in "assembler" only the virtual machine.

You have a really good point that I hadn't thought of (yeah, I'd thought 
of a VM, but didn't think about the advantages other than abstraction it 
could provide).

A VM would be challenging to make work on the TI-85, due to storage, but 
I'm sure I could find a way to coerce the calculator to work as desired.

Now, the question is what kind of VM to code.  Should I write a generic 
VM, or one targeted to Lisp (that whole bottom-up thing)?  A generic VM 
would be nice because I could target other languages to it, but, then 
again, I'm operating under the theory that you people are all right and 
that I won't /want/ any other languages once I have a working Lisp compiler.
From: Pascal Bourguignon
Subject: Re: compile
Date: 
Message-ID: <87isiffo7h.fsf@thalassa.informatimago.com>
Ari Johnson <·····@hotmail.com> writes:

> Pascal Bourguignon wrote:
> > The most compact codes are  virtual machines codes.  I guess that even
> > on a Z80 processor, virtual  machine code would be smaller than native
> > code.  If speed  is not so important...  Then a  nice property is that
> > you have to write in "assembler" only the virtual machine.
> 
> You have a really good point that I hadn't thought of (yeah, I'd
> thought of a VM, but didn't think about the advantages other than
> abstraction it could provide).
> 
> A VM would be challenging to make work on the TI-85, due to storage,
> but I'm sure I could find a way to coerce the calculator to work as
> desired.
> 
> Now, the question is what kind of VM to code.  Should I write a
> generic VM, or one targeted to Lisp (that whole bottom-up thing)?  A
> generic VM would be nice because I could target other languages to it,
> but, then again, I'm operating under the theory that you people are
> all right and that I won't /want/ any other languages once I have a
> working Lisp compiler.

If you design a Lisp-specific VM then you can make it maximally
efficient and compact for Lisp.  That is, each of your primitive Lisp
operators would be implemented with only one opcode.

Have a look at the clisp VM: http://clisp.cons.org/impnotes/bytecode.html
(but keep in mind that clisp is implemented in C, so there is some
strange stuff, like a stack for C values in parallel to the Lisp
stack). http://clisp.cons.org/impnotes/intr-set.html


-- 
__Pascal_Bourguignon__                     http://www.informatimago.com/
There is no worse tyranny than to force a man to pay for what he doesn't
want merely because you think it would be good for him.--Robert Heinlein
http://www.theadvocates.org/
From: Raymond Wiker
Subject: Re: compile
Date: 
Message-ID: <86isifl1tc.fsf@raw.grenland.fast.no>
Ari Johnson <·····@hotmail.com> writes:

> Pascal Bourguignon wrote:
>> The most compact codes are  virtual machines codes.  I guess that even
>> on a Z80 processor, virtual  machine code would be smaller than native
>> code.  If speed  is not so important...  Then a  nice property is that
>> you have to write in "assembler" only the virtual machine.
>
> You have a really good point that I hadn't thought of (yeah, I'd
> thought of a VM, but didn't think about the advantages other than
> abstraction it could provide).
>
> A VM would be challenging to make work on the TI-85, due to storage,
> but I'm sure I could find a way to coerce the calculator to work as
> desired.

        The Apple-II had a 16-bit VM as part of the ROM; it executed a
16-bit instruction set named "Sweet-16". The Apple II was a 1.125
MHz 6502 with 16 kB RAM in the basic configuration; I doubt that the
TI-85 is more restricted.
 
-- 
Raymond Wiker                        Mail:  ·············@fast.no
Senior Software Engineer             Web:   http://www.fast.no/
Fast Search & Transfer ASA           Phone: +47 23 01 11 60
P.O. Box 1677 Vika                   Fax:   +47 35 54 87 99
NO-0120 Oslo, NORWAY                 Mob:   +47 48 01 11 60

Try FAST Search: http://alltheweb.com/
From: Ari Johnson
Subject: Re: compile
Date: 
Message-ID: <W6hWb.52359$fD.30914@fed1read02>
Raymond Wiker wrote:
>         The Apple-II had a 16-bit VM as part of the ROM; it executed a
> 16-bit instruction set named "Sweet-16". The Apple II was a 1.125
> MHz 6502 with 16 kB RAM in the basic configuration; I doubt that the
> TI-85 is more restricted.

Wow, I had no idea the Apple II had such a thing.  Were most programs 
written to Sweet-16 or to 6502?  How about the AppleBasic 
interpreter...was that Sweet-16 or 6502?  And no, the Z80 with 32KB 
(20-some accessible to me) is certainly not as restricted.
From: Raymond Wiker
Subject: Re: compile
Date: 
Message-ID: <8665eekmvs.fsf@raw.grenland.fast.no>
Ari Johnson <·····@hotmail.com> writes:

> Raymond Wiker wrote:
>>         The Apple-II had a 16-bit VM as part of the ROM; it executed a
>> 16-bit instruction set named "Sweet-16". The Apple II was a 1.125
>> MHz 6502 with 16 kB RAM in the basic configuration; I doubt that the
>> TI-85 is more restricted.
>
> Wow, I had no idea the Apple II had such a thing.  Were most programs
> written to Sweet-16 or to 6502?  How about the AppleBasic
> interpreter...was that Sweet-16 or 6502?  And no, the Z80 with 32KB
> (20-some accessible to me) is certainly not as restricted.

        I'm not sure how much the Sweet-16 interpreter was actually
used; apparently it was used for some Apple utilities, at least. See

http://www.apple2history.org/history/ah03.html

-- 
Raymond Wiker                        Mail:  ·············@fast.no
Senior Software Engineer             Web:   http://www.fast.no/
Fast Search & Transfer ASA           Phone: +47 23 01 11 60
P.O. Box 1677 Vika                   Fax:   +47 35 54 87 99
NO-0120 Oslo, NORWAY                 Mob:   +47 48 01 11 60

Try FAST Search: http://alltheweb.com/
From: David Golden
Subject: Re: compile
Date: 
Message-ID: <FJHVb.1997$rb.56352@news.indigo.ie>
Christopher C. Stacy wrote:


> Alternatively, you could use Common Lisp (on a real computer) to
> implement a compiler for some language that you would invent,
> and cross-compile it to the Z-80.

Forth is a good choice on Z-80
From: Ari Johnson
Subject: Re: compile
Date: 
Message-ID: <nZHVb.32787$fD.14515@fed1read02>
David Golden wrote:
> Forth is a good choice on Z-80

Forth is too easy.  And, besides, which language would you rather have 
at your disposal: Forth or a smallish Lisp? :)
From: Will Hartung
Subject: Re: compile
Date: 
Message-ID: <c08kap$13p963$2@ID-197644.news.uni-berlin.de>
"Ari Johnson" <·····@hotmail.com> wrote in message
·························@fed1read02...
> David Golden wrote:
> > Forth is a good choice on Z-80
>
> Forth is too easy.  And, besides, which language would you rather have
> at your disposal: Forth or a smallish Lisp? :)

Depending on how small the Lisp is or how large the Forth is, some may argue
there is a lot of similarity sans syntax.

e.g., the HP48 REPL is a bastardization of both and quite capable. It has
syntax akin to Forth, but the dynamism and high-level mechanisms of
Lisp/Scheme.

That's my biggest complaint with Forth: its lack of dynamism, at least as
far as the behavior of the stock vocabulary and standard idioms goes. I
understand WHY it is the way it is, but that doesn't mean I have to enjoy
it.

But, on the other side of the coin, the dynamism of Lisp is what gets it
into trouble with things like small runtimes and compilers, so...your call.

Regards,

Will Hartung
(·····@msoft.com)
From: Christopher C. Stacy
Subject: Re: compile
Date: 
Message-ID: <uoes83dzr.fsf@news.dtpq.com>
>>>>> On 09 Feb 2004 03:33:04 +0100, javuchi  ("javuchi") writes:

 javuchi> ······@news.dtpq.com (Christopher C. Stacy) writes:
 >> EVAL is most certainly not an interpreter - rather, it is a function
 >> that implements the Lisp abstraction of "evaluation".

 javuchi> And a good linker should omit to statically link to not used
 javuchi> code. That is: if you don't use EVAL in your program, it
 javuchi> should not be linked into your statically generated
 javuchi> executable. Am I right?

 javuchi> I think GCC does this. If I program a statically linked
 javuchi> "hello world" application into C, this will not waste some
 javuchi> megabytes in the hard disk, just some kilobytes... and glibc
 javuchi> is huge...

These are implementation issues, so you need to find an implementation
of Common Lisp that will dump out things the way you like.  
Perhaps none exists that will meet your requirements.

If you want, Xanalys will dump out static executables (EXE or DLLs
on Windows) of about 2-3 MB.  I don't think you can get that one
to do anything smaller.  If you promise it that you won't call
EVAL, it will eliminate EVAL from the image (as it will do with
various other features you might not be using).
I think it will do something similar on Linux.

I haven't used Franz's product, but I believe that it has 
similar capabilities.

I've been using CMUCL on Linux, where the FASL files can be
easily made "executable". However, it relies on the CMU runtime,
so that might not work for you.

Good luck finding a Common Lisp implementation that you like.
From: javuchi
Subject: Re: compile
Date: 
Message-ID: <m2wu6w1a8x.fsf@servidor.kalandra>
······@news.dtpq.com (Christopher C. Stacy) writes:

>  javuchi> I think GCC does this. If I program a statically linked
>  javuchi> "hello world" application into C, this will not waste some
>  javuchi> megabytes in the hard disk, just some kilobytes... and glibc
>  javuchi> is huge...
> 
> These are implementation issues, so you need to find an implementation
> of Common Lisp that will dump out things the way you like.  
> Perhaps none exists that will meet your requirements.


I think none of them exists for Linux. This wouldn't be a problem if, for
example, CMUCL and SBCL were included with normal distributions. Just 10-20MB
of implementation isn't too much... just compare it to, for example, KDE.

The main problem is: if your application is going to be released to the
general public, then you are forcing your users to download 10-20MB of Lisp
implementation, and then install it... and then install your program.
If the program were statically linked and of a reasonable size (just 1-2MB
for its implementation, apart from the program itself), there would be no
problem and your users could install your program without trouble.


> If you want, Xanalys will dump out static executables (EXE or DLLs
> on Windows) of about 2-3 MB. I don't think you can get that one
> to do anything smaller.  If you promise it that you won't call
> EVAL, it will eliminate EVAL from the image (as it will do with
> many various other features you might not be using).
> I think it will do something similar on Linux.

But if I'm using Linux (which is free) and release my program with a free
license, I of course shall use a free implementation.

> Good luck finding a Common Lisp implementation that you like.

I did: CMUCL/SBCL fits for me. But it might not fit for the person who wants
it to work on embedded CPUs, and there might be some difficulties for people
releasing applications for normal users... at least until CMUCL or SBCL
becomes popular enough to be included as standard, the way glibc is, for example.

-- 
javuchi
From: Tim Lavoie
Subject: Re: compile
Date: 
Message-ID: <87fzdkp1o2.fsf@theasylum.dyndns.org>
>>>>> "javuchi" == javuchi  <······@nospam.com> writes:

    javuchi> ······@news.dtpq.com (Christopher C. Stacy) writes: I
    javuchi> think GCC does this. If I program a statically linked
    javuchi> "hello world" application into C, this will not waste
    javuchi> some megabytes in the hard disk, just some
    javuchi> kilobytes... and glibc is huge...


    javuchi> I think none of them exists for Linux. This wouldn't be a
    javuchi> problem if, for example, CMUCL and SBCL is included with
    javuchi> normal distributions. Just 10-20Mb of implementations
    javuchi> isn't too much... just compare it to, for example, KDE.

    javuchi> The main problem is: if your application is going to be
    javuchi> released for the grand public, then you are forcing your
    javuchi> users to download 10-20Mb of Lisp implementation, and
    javuchi> then install it... and then install your program.  If the
    javuchi> program would be statically linked and at a resonable
    javuchi> size (just 1-2Mb for its implementation, apart from the
    javuchi> program itself), there would be not problem and your user
    javuchi> could install your program without problems.


Maybe it's just me, but it seems like the existing method of including
an image would be less of a burden if minimal images could be used.

I (think) I understand the part where tree-shakers might have a hard
time deciding what pieces will never be used, but the programmer might
know. Plug-ins and extensibility aside, those who want to be able to
deliver a standalone app might not need the extra goodies in the full
20 MB image, and may be content with some static limitations.
From: Eric Marsden
Subject: Re: compile
Date: 
Message-ID: <wzibro7i73e.fsf@melbourne.laas.fr>
>>>>> "tl" == Tim Lavoie <········@spamcop.net> writes:

  tl> I (think) I understand the part where tree-shakers might have a hard
  tl> time deciding what pieces will never be used, but the programmer might
  tl> know. Plug-ins and extensibility aside, those who want to be able to
  tl> deliver a standalone app might not need the extra goodies in the full
  tl> 20 MB image, and my be content with some static limitations.

I did some work on CMUCL along these lines, to create an "embedded"
runtime that has a much lower footprint than a standard image. It is
built without the compiler and without various development-oriented
facilities: documentation strings, inspector, describe, the
disassembler, etc. The image is built optimizing for space (which
means that more bytecode is used instead of native code, and less
debugging information is available). The embedded image is around
4.9MB, and at runtime without any user code loaded you have 3.7MB RSS
of which 1.9MB is shareable (this is on x86/Linux).

  <URL:ftp://ftp.linux.org.uk/pub/lisp/cmucl/experimental/>

The idea is that you compile your code to FASL files using a separate
compiler-enabled image, load them into the embedded image and dump a
custom image for your application.
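
In outline (the file names are hypothetical; see the CMUCL documentation
for EXT:SAVE-LISP's exact options):

  ;; 1. In the full, compiler-enabled image:
  (compile-file "my-app.lisp")                 ; => my-app.x86f

  ;; 2. In the embedded, compiler-less image:
  (load "my-app.x86f")
  (ext:save-lisp "my-app.core" :init-function #'my-app:main)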
  
Unfortunately, the CLOS implementation in CMUCL requires the compiler
to be present, so CLOS is not available in these images. This is all
experimental, and hasn't been much used AFAIK.
  
-- 
Eric Marsden                          <URL:http://www.laas.fr/~emarsden/>
From: Tim Lavoie
Subject: Re: compile
Date: 
Message-ID: <87y8ralkc5.fsf@theasylum.dyndns.org>
>>>>> "Eric" == Eric Marsden <········@laas.fr> writes:

    Eric> I did some work on CMUCL along these lines, to create an
    Eric> "embedded" runtime that has a much lower footprint than a
    Eric> standard image. It is built without the compiler and without
    Eric> various development-oriented facilities: documentation
    Eric> strings, inspector, describe, the disassembler, etc. The
    Eric> image is built optimizing for space (which means that more
    Eric> bytecode is used instead of native code, and less debugging
    Eric> information is available). The embedded image is around
    Eric> 4.9MB, and at runtime without any user code loaded you have
    Eric> 3.7MB RSS of which 1.9MB is shareable (this is on
    Eric> x86/Linux).

That's pretty neat, and starts getting pretty small. I guess in this
sense, the programmer *is* the tree shaker. I haven't tried this
myself; is it hard to remove the parts you don't want?

  Cheers,
  Tim
From: Eric Marsden
Subject: Re: compile
Date: 
Message-ID: <wzid68levpg.fsf@melbourne.laas.fr>
>>>>> "tl" == Tim Lavoie <········@spamcop.net> writes:

  tl> That's pretty neat, and starts getting pretty small. I guess in this
  tl> sense, the programmer *is* the tree shaker. I haven't tried this
  tl> myself; is it hard to remove the parts you don't want?

if you know how to build CMUCL, and if the functionality you want to
remove is self-contained, it's quite easy. Most of the time it just
means omitting certain source files from the build process.

Getting CMUCL to generate smaller code is quite delicate, however,
since you tend to run into compiler bugs or undocumented assumptions
made by CMUCL's source code about the levels of various optimization
qualities.

-- 
Eric Marsden                          <URL:http://www.laas.fr/~emarsden/>
From: Tim Lavoie
Subject: Re: compile
Date: 
Message-ID: <87u11xim2r.fsf@theasylum.dyndns.org>
>>>>> "ecm" == Eric Marsden <········@laas.fr> writes:

>>>>> "tl" == Tim Lavoie <········@spamcop.net> writes:
    tl> That's pretty neat, and starts getting pretty small. I guess
    tl> in this sense, the programmer *is* the tree shaker. I haven't
    tl> tried this myself; is it hard to remove the parts you don't
    tl> want?

    ecm> if you know how to build CMUCL, and if the functionality you
    ecm> want to remove is self-contained, it's quite easy. Most of
    ecm> the time it just means omitting certain source files from the
    ecm> build process.

    ecm> Getting CMUCL to generate smaller code is quite delicate,
    ecm> however, since you tend to run into compiler bugs or
    ecm> undocumented assumptions made by CMUCL's source code on the
    ecm> levels of various optimization qualities.

Hi Eric,

Thanks for the details. I haven't had to deploy anything yet myself,
since I'm just exploring for my own interest. It's good to know what
might be involved should a need arise though.

Is it possible (CMUCL or otherwise) to just un-define various pieces
and then save an image? I could see a situation where my development
image is fine, but I would simply like to trim away what I can from
something I use elsewhere.

  Tim
From: Eric Marsden
Subject: Re: compile
Date: 
Message-ID: <wziu11xbihm.fsf@melbourne.laas.fr>
>>>>> "tl" == Tim Lavoie <········@spamcop.net> writes:

  tl> Is it possible (CMUCL or otherwise) to just un-define various pieces
  tl> and then save an image? I could see a situation where my development
  tl> image is fine, but would simply like to trim away what I can from
  tl> something I use elsewhere.

for CMUCL, undefining in the sense of MAKUNBOUND and FMAKUNBOUND is
not sufficient, for two reasons. Firstly, code and data in the
read-only and static spaces (the parts of an image that are sharable
in the virtual memory sense) can't be reclaimed by the garbage
collector. You need to do a full build, and a process called genesis,
to remove stuff from these spaces.

Secondly, there can be hidden references to symbols or function
objects that prevent them from being garbage collected, even if you
remove one path to them.
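
In other words, something like the following (the names are invented)
will happily save an image, but the image won't get noticeably smaller:

    (fmakunbound 'my-unused-function)   ; removes one reference...
    (makunbound '*my-unused-table*)
    (ext:save-lisp "trimmed.core")      ; ...but code in the read-only and
                                        ; static spaces gets dumped again
                                        ; regardless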

-- 
Eric Marsden                          <URL:http://www.laas.fr/~emarsden/>
From: Tim Lavoie
Subject: Re: compile
Date: 
Message-ID: <87n07pigzz.fsf@theasylum.dyndns.org>
>>>>> "ecm" == Eric Marsden <········@laas.fr> writes:

    ecm> for CMUCL, undefining in the sense of MAKUNBOUND and
    ecm> FMAKUNBOUND is not sufficient, for two reasons. Firstly, code
    ecm> and data in the read-only and static spaces (the parts of an
    ecm> image that are sharable in the virtual memory sense) can't be
    ecm> reclaimed by the garbage collector. You need to do a full
    ecm> build, and a process called genesis, to remove stuff from
    ecm> these spaces.

    ecm> Secondly, there can be hidden references to symbols or
    ecm> function objects that prevent them from being garbage
    ecm> collected, even if you remove one path to them.

OK, thanks for the explanation.

  Tim
From: Raymond Toy
Subject: Re: compile
Date: 
Message-ID: <4n65edi93g.fsf@edgedsp4.rtp.ericsson.se>
>>>>> "ecm" == Eric Marsden <········@laas.fr> writes:

>>>>> "tl" == Tim Lavoie <········@spamcop.net> writes:
    tl> Is it possible (CMUCL or otherwise) to just un-define various pieces
    tl> and then save an image? I could see a situation where my development
    tl> image is fine, but would simply like to trim away what I can from
    tl> something I use elsewhere.

    ecm> for CMUCL, undefining in the sense of MAKUNBOUND and FMAKUNBOUND is
    ecm> not sufficient, for two reasons. Firstly, code and data in the
    ecm> read-only and static spaces (the parts of an image that are sharable
    ecm> in the virtual memory sense) can't be reclaimed by the garbage
    ecm> collector. You need to do a full build, and a process called genesis,
    ecm> to remove stuff from these spaces.

    ecm> Secondly, there can be hidden references to symbols or function
    ecm> objects that prevent them from being garbage collected, even if you
    ecm> remove one path to them.

I understand there also used to be a tree-shaker for CMUCL to remove
unused stuff, but that it didn't really save all that much (a couple
of MB?), so I don't think it's in the current sources anymore.

Ray
From: Rob Warnock
Subject: Re: compile
Date: 
Message-ID: <1YOdnXDMS6kKpbvdXTWc-w@speakeasy.net>
Edi Weitz  <···@agharta.de> wrote:
+---------------
| You can make a promise to your compiler. Look up DECLARE in the ANSI
| spec. With corresponding optimization settings most CL compilers will
| believe you. Compare the following in CMUCL:
| 
|   (disassemble (compile nil (lambda (a b)
|                               (+ a b))))
| 
|   (disassemble (compile nil (lambda (a b)
|                               (declare (optimize speed (safety 0))
|                                        (fixnum a b))
|                               (+ a b))))
+---------------

You get an even bigger gain by declaring that the result stays in range:

    (disassemble (compile nil (lambda (a b)
                                (declare (optimize speed (safety 0))
                                         (fixnum a b))
                                (the fixnum (+ a b)))))

Of course, it's not cool to lie to the compiler, so you better mean it...


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Matthew Danish
Subject: Re: compile
Date: 
Message-ID: <20040205171016.GM8667@mapcar.org>
On Wed, Feb 04, 2004 at 11:34:30PM -0700, Ari Johnson wrote:
> There are a few points that I don't know how other compilers do, though:
>   1. What do you do about garbage collection?  Is this part of a 
> standard library that gets linked into every executable, like _init from 
> crt0.s in C does?  I'm guessing it gets called by the memory allocation 
> routines when needed, and that those are also linked in.

This is part of the runtime.  In a C program, the runtime is implemented
by libc and the linker takes care of resolving calls to runtime
functions.  You could adopt this model for your Lisp, but a real Lisp
has more functionality.  A typical CL implementation lets you interact
with the system, lets you compile and run code in parallel to other
running code, and provides extensive error handling through the
condition system.  To do all this while compiling to machine code
typically requires tight integration between the compiler and the
runtime system.  See SBCL, CMUCL, and OpenMCL, for example.

>   2. What about type resolution?  The compiler has no way of knowing 
> what data type a given expression will evaluate to, so how are things 
> like + implemented?  Do they have to play bitwise tricks like my 
> interpreter does (a la GNU Scheme's method), or is there a better way?

In a weakly typed language, you could just assume that the arguments
were of the correct type.  In Lisp, which is a strongly typed language,
if an incorrectly typed value is given to an operation, then that
operation should signal an error.  The typical way for an implementation
to do this is via type-tags.  For example, the type `fixnum' in CL is
meant to be represented as a machine-word with a few bits set aside for
tags.  For more complicated data types, such as arrays, the data+type
info is allocated on the heap and a pointer is used to refer to it.
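
As a toy model of the idea in Lisp itself (only to illustrate the
arithmetic; a real implementation does this in machine code, and the
tag layout below is made up):

    ;; pretend a machine word is an integer, with a 2-bit tag in the low bits
    (defconstant +tag-bits+   2)
    (defconstant +fixnum-tag+ #b00)

    (defun box-fixnum (n)   (ash n +tag-bits+))      ; tag 00 = fixnum
    (defun unbox-fixnum (w) (ash w (- +tag-bits+)))
    (defun word-tag (w)     (ldb (byte +tag-bits+ 0) w))

    ;; with an all-zero fixnum tag, addition needs no untagging at all:
    (defun tagged-add (a b)
      (if (and (= (word-tag a) +fixnum-tag+)
               (= (word-tag b) +fixnum-tag+))
          (+ a b)                  ; the sum of two tagged fixnums is tagged
          (error "type error: operands are not fixnums")))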

>   3. How are function parameters marshalled?  As a list or are they 
> pushed on the stack as in normal C calling conventions?  If the latter, 
> how do you deal with optional and 'rest' arguments?

It is possible to mimic C calling conventions to some extent, but the
greater capabilities and heavier use of functions in Lisp programs
demand something both more flexible and faster.  The C calling conventions
for Linux/x86, in particular, are not wonderfully suited even to modern
C programs.  If you really care about performance, it would be best to
pass positional parameters in registers as much as possible, and also
return multiple values in registers.  Naturally, for more complicated
lambda-lists the implementation cannot be so efficient: for optional
parameters you will need to pass the number of actual arguments, and if
a &rest parameter is present then a list will need to be allocated
(though not necessarily on the heap; with a dynamic-extent declaration
this could be done on the stack).
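
For instance (standard CL; whether the list really ends up on the stack
is up to the implementation):

    (defun sum (&rest numbers)
      (declare (dynamic-extent numbers))   ; the &rest list may be stack-allocated
      (apply #'+ numbers))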

This is assuming that by `subset of Scheme' you mean `lacking call/cc'.
If you have call/cc, you need to heap-allocate function-call frames and
allow them to be garbage collected eventually, since they will have
indefinite extent.

Alternatively, if you want to have exception handling, you will need to
maintain a stack of handlers alongside the call-stack, that know when
they were established and when they should be removed (besides the
exception handling info).

>   4. How are macros implemented?  Do you have to include a complete 
> Lisp interpreter in the linked executable to evaluate the arguments?  I 
> guess that knowing how 'eval' works in compiled code would be very 
> helpful here.

Macros are completely expanded at compile time, most likely before the
code reaches one of the intermediate representations; an interpreter
expands them in much the same way.  EVAL does not enter into it.
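
A quick illustration (SQUARE is just a made-up example macro):

    (defmacro square (x) `(* ,x ,x))

    (macroexpand '(square y))   ; => (* Y Y)

    ;; the compiled code contains only the expansion; SQUARE itself is gone:
    (disassemble (compile nil (lambda (a) (square a))))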

But as for EVAL: in compiled code it would simply be another function
in the run-time which invokes the interpreter (or, alternatively,
quickly runs the compiler).

>   5. How does memory management typically work?  I was thinking that my 
> current "here is a block of cells to use for your conses, and you can 
> malloc() other stuff" system that the interpreter uses would work just 
> fine in the compiled code, but if there's a better way I'd love to hear 
> it, especially since malloc() isn't exactly available per se on my TI-85.

Typically objects are allocated on the heap with type-tags, but this is
not the sort of question that is so easily answered.  Designing a proper
runtime memory management scheme for Lisp systems is a whole book in
itself.  At the simplest you could probably use an existing GC library
and output assembly code which calls it then sets up the allocated
memory with the proper low-level structure of your objects.

>   6. What am I missing?

Intermediate representations, basic blocks, dataflow analysis,
instruction selection, register allocation, and much more.  The problems
of Lisp compilation consist, to a large degree, of the same problems
that compilation of any high level programming language poses.

> I also want to implement tail recursion elimination, but that should be 
> trivial with the compiled code whereas it is not with my interpreter.

Well, it should be trivial with either.
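
To make that concrete, here is a hand-written sketch (not taken from any
particular implementation): the tail call in COUNT-DOWN can be compiled,
or interpreted, as the equivalent loop.

    (defun count-down (n)
      (if (zerop n) 'done (count-down (1- n))))   ; tail call = "reset N, jump back"

    (defun count-down-loop (n)
      (loop until (zerop n) do (setf n (1- n)))
      'done)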

> since this seems like a relatively hot topic. :)

Not really.  It's been done many times, which gives you lots of examples
to look at, though.


Books to read:

_Lisp in Small Pieces_, Queinnec
_Paradigms of AI Programming_, Norvig
_Advanced Compiler Design and Implementation_, Muchnick

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Ray Dillinger
Subject: Re: compile
Date: 
Message-ID: <402284A5.D9B5B634@sonic.net>
Ari Johnson wrote:
> 
> ways to improve it.  One thing that I'd like to do is make it into a
> to-native-code compiler, specifically for the Z80 in my TI-85.  (Don't
> ask 'why', I'm infamous for answering 'because I got bored'. :P)

Scheme on a Z80?  Whee!
 
> There are a few points that I don't know how other compilers do, though:
>    1. What do you do about garbage collection?  Is this part of a
> standard library that gets linked into every executable, like _init from
> crt0.s in C does?  I'm guessing it gets called by the memory allocation
> routines when needed, and that those are also linked in.

A garbage collector basically has to provide three calls:  

1) GCMalloc, which allocates memory on a garbage-collected heap.  GCMalloc
   needs to know how much *working* memory an object must contain and needs
   to have a map of which fields in it are pointers.  (If you design all your
   objects so that pointers are compacted toward the front, just knowing 
   how many pointers are in an object will suffice). 

2) Register, which identifies some pointer or GCMalloc'd structure as 
   being in the "root set" for garbage collection.

3) Collect, which frees GCMalloc'd objects not reachable by traversing
   known pointers starting from the root set.  In some implementations,
   Collect is actually a separate thread that loops infinitely.  In 
   others, it's called implicitly every time there's a jump to a lower
   address.  If it's called very often, it's usually made "incremental",
   meaning it doesn't do very much before it returns and it may take 
   hundreds of calls to reap all the floating garbage. 
   
These three things work together.  Usually malloc is called to get one 
or more 'arenas' of memory that GCMalloc allocates from.  This has other 
benefits too, since if you allocate particular types from particular 
arenas, you can recognize the type of something by the high-order bits
of the pointer that points to it.  GCMalloc usually allocates at least 
a word or two more than is visible to the program to facilitate the 
operation of Collect, although the contents of those extra words will 
vary depending on the collection algorithm. 
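
Those three calls can be modelled in a few lines of Lisp (a toy
mark-and-sweep, purely for illustration; the names and the object
representation are invented):

    (defstruct gc-object marked data pointers)  ; POINTERS: other gc-objects

    (defvar *heap*  '())    ; everything ever GCMalloc'd
    (defvar *roots* '())    ; the root set

    (defun gcmalloc (data &rest pointers)
      (let ((obj (make-gc-object :data data :pointers pointers)))
        (push obj *heap*)
        obj))

    (defun register (obj)          ; make OBJ part of the root set
      (push obj *roots*)
      obj)

    (defun collect ()
      ;; mark: everything reachable from the roots
      (labels ((mark (obj)
                 (unless (gc-object-marked obj)
                   (setf (gc-object-marked obj) t)
                   (mapc #'mark (gc-object-pointers obj)))))
        (mapc #'mark *roots*))
      ;; sweep: drop unreachable objects, clear marks for the next cycle
      (setf *heap* (delete-if-not #'gc-object-marked *heap*))
      (dolist (obj *heap*) (setf (gc-object-marked obj) nil))
      (length *heap*))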


>    2. What about type resolution?  The compiler has no way of knowing
> what data type a given expression will evaluate to, so how are things
> like + implemented?  Do they have to play bitwise tricks like my
> interpreter does (a la GNU Scheme's method), or is there a better way?

Depends on the sophistication of the system.  A GCMalloc'd value will 
always carry information indicating its type.  Immediate values (numbers
and characters, in most implementations) don't have to be GCMalloc'd and 
can be stored in the same space as a pointer to some other kind of value; 
they are widely implemented using bitwise tricks for performance:  If you 
determine that you can allocate all objects on 8-byte boundaries, it 
follows that the low 3 bits of an address will always be zero.  That 
means you can use those three bits to indicate non-pointer types as long
as the values of the non-pointer immediate types fit into (pointersize - 
3) bits of memory.  Sometimes there will be auxiliary registers in an 
object and the 'marked' pointers indicate only which auxiliary register
to look in.  This allows full-sized or even oversized 'immediate' values. 

Things that are too big for immediate representation have to be GCMalloc'd,
and in that case, if GCMalloc always takes the memory for particular 
types from particular arenas, then you can determine the type from the 
high bits of the pointer. Although it's usually simpler to just follow
the pointer and read the typetag in the object, it can be a cache miss,
so knowing the type before following the pointer is a performance win 
sometimes. 

Sometimes an implementation is able to *prove* that all calls to + from
a particular set of call sites will have an integer*integer call type.  
In that case it can compile a special version of '+' that skips all 
typechecking and link it to (or, if + is a constant, inline it at) 
that set of call sites, while simultaneously having a version of '+' 
that performs normal typechecking and is linked to other call sites. 

>    3. How are function parameters marshalled?  As a list or are they
> pushed on the stack as in normal C calling conventions?  If the latter,
> how do you deal with optional and 'rest' arguments?

The simplest implementation is for arguments to be a list.  Higher 
performance puts them on the stack or (if the system has continuations) 
into a GCMalloc'd invocation frame.  Functions that take optional and 
'rest' arguments most commonly take their additional arguments as a 
list.  For higher performance, they may be compiled in multiple versions 
for different numbers of arguments, or the call sites may be "extended" 
in compilation with dummy arguments so as to fake a call to a routine
that takes all the args. 

>    4. How are macros implemented?  Do you have to include a complete
> Lisp interpreter in the linked executable to evaluate the arguments?  I
> guess that knowing how 'eval' works in compiled code would be very
> helpful here.

Macros are implemented as a set of instructions for a Lisp program about
how to transform another Lisp program.  The result is a macro-less Lisp 
program which is then compiled.  In Lisp and Scheme, macros no longer 
exist after compilation. 

>    5. How does memory management typically work?  

Carefully. 

>    6. What am I missing?

Since you mentioned scheme specifically, I'll say continuations.
 
				Bear
From: Erik Naggum
Subject: Re: compile
Date: 
Message-ID: <3284276275271451KL2065E@naggum.no>
* Maciej Borecki
| I'm a newbie in lisp programming, is possibly to compile lisp into an
| executable file under linux or windows?  If so plz give me some piece
| of advice or links...

  The most common reason to ask this question is that you want to write
  and run your programs the same way you were used to in languages whose
  tools convert a dead text file into a dead executable file which you
  give the spark of life from the user interface.  Wanting this is not a
  bad thing, but it is not the only option.  Common Lisp environments
  are just like the shell under Unix where you extend the basic language
  with constructs of your own.  Instead of adding an executable to the
  vocabulary of your user interface, the Common Lisp way is to add new
  functions to the Common Lisp environment.  If you are into graphical
  user interfaces, you can add your Common Lisp functions to the Common
  Lisp environment's graphical user interface, too, but this is usually
  so much work that working with the text interface is recommended.

  Integrated development environments are all the rage in all sorts of
  weird programming languages these days, but Common Lisp environments
  have always had integrated editors, and the most obvious example of
  this is Emacs, but there are others.  I highly recommend that you find
  such an editor to write your code, because they all allow you to send
  the code to the Common Lisp compiler or evaluator and then you can see
  the results of your coding, testing, or experimentation immediately.
  What is commonly referred to as the "edit-compile-link-run" cycle is
  perhaps the largest difference from Common Lisp environments, because
  in Common Lisp, the whole "compile-link-run" part is invoked with a
  single key-stroke (or key-chord) and completes in milliseconds.  This
  interactive coding style is one of Common Lisp's good selling points,
  but it also requires that you keep better track of the changes you
  make to your source code than you would in most other languages, since
  you can make changes to the environment in which you compile and run
  your code that do not make it into the source file, in which case you
  lose context information if you have to reload everything from scratch.

  A friend of mine illustrated this problem well.  He is the kind of
  computer user who keeps a lot of things going simultaneously.  All of
  a sudden, the X server crashed and he was thrown out to the login
  prompt.  His sudden grief was very evident on his face, but all he
  said, barely audible, was "My context...".

-- 
Erik Naggum | Oslo, Norway                                      2004-028

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Thomas F. Burdick
Subject: Re: compile
Date: 
Message-ID: <xcvllnretbd.fsf@famine.OCF.Berkeley.EDU>
Erik Naggum <····@naggum.no> writes:

>   If you are into graphical user interfaces, you can add your Common
>   Lisp functions to the Common Lisp environment's graphical user
>   interface, too, but this is usually so much work that working with
>   the text interface is recommended.

This is true in general, but when using MCL and CMUCL/Garnet, I find
myself adding menu items and GUI buttons more often, because it's not
much work there.  Love those environments.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: André Thieme
Subject: Re: compile
Date: 
Message-ID: <bv98rf$189$1@ulric.tng.de>
Erik Naggum wrote:

>   The most common reason to ask this question is that you want to write
>   and run your programs the same way you were used to in languages whose
>   tools convert a dead text file into a dead executable file which you
>   give the spark of life from the user interface.  Wanting this is not a
>   bad thing, but it is not the only option.

It might depend on the clientele for whom an application is made. If I 
program an office application or a little game under Windows then my 
users expect an executable from me. So under these circumstances it can 
be a very important option to compile directly into .exe files.
From: Daniel Barlow
Subject: Re: compile
Date: 
Message-ID: <87ektjsq7d.fsf@noetbook.telent.net>
André Thieme <······································@justmail.de> writes:

> It might depend on the clientele for who an application is made. If I
> program an office application or a little game under Windows then my
> users expect from me an executable. So under these circumstances it

They do?  I try to avoid Windows as far as I can, but whenever I do
have to use it, it seems that most applications use something like
InstallShield to unpack a zillion little files around the place, then
put an entry on the Start menu, and I never really have to worry too
much about whether there's a single .exe or not.

Maybe my experience is atypical, I don't know.


-dan

-- 

 http://web.metacircles.com/ - Open Source software development and support
From: Peter Seibel
Subject: Re: compile
Date: 
Message-ID: <m3ptd2mulb.fsf@javamonkey.com>
Daniel Barlow <···@telent.net> writes:

> André Thieme <······································@justmail.de> writes:
> 
> > It might depend on the clientele for who an application is made. If I
> > program an office application or a little game under Windows then my
> > users expect from me an executable. So under these circumstances it
> 
> They do? I try to avoid Windows as far as I can, but whenever I do
> have to use it it seems that most applications use something like
> InstallShield to unpack a zillion little files around the place,
> then put an entry on the Start menu, and I never really have to
> worry too much about whether there' a single .exe or not.
> 
> Maybe my experience is atypical, I don't know.

Hmmm. Does anyone have any experience using any of the "standard"
installation tools to install Lisp apps? I think Daniel is
right--that's how most software gets installed on Windows. No
particular reason to think it would be particularly hard. Just curious
if anyone has actually tried it.

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Ray Fink
Subject: Re: compile
Date: 
Message-ID: <101is1mm1k8bm22@news.supernews.com>
"Peter Seibel" <·····@javamonkey.com> wrote in message
···················@javamonkey.com...
> Hmmm. Does anyone have any experience using any of the "standard"
> installation tools to intall Lisp apps? I think Daniel is
> right--that's how most software gets installed on Windows. No
> particular reason to think it would be particularly hard. Just curious
> if anyone has actually tried it.

Yes.  It's no big deal... the same as any other application.   A .exe, some
dll's, some data files, Start menu items, some user-selectable install
options, etc.

-- Ray
From: Henrik Motakef
Subject: Re: compile
Date: 
Message-ID: <x7smhzu444.fsf@crocket.internal.henrik-motakef.de>
André Thieme <······································@justmail.de> writes:

> Erik Naggum wrote:
> 
> >   The most common reason to ask this question is that you want to write
> >   and run your programs the same way you were used to in languages whose
> >   tools convert a dead text file into a dead executable file which you
> >   give the spark of life from the user interface.  Wanting this is not a
> >   bad thing, but it is not the only option.
> 
> It might depend on the clientele for who an application is made. If I
> program an office application or a little game under Windows then my
> users expect from me an executable. So under these circumstances it
> can be a very important option to compile directly into .exe files.

A pretty good approximation is: Would it be possible to deploy the
application if it had been written in Java?

If so, it is not any harder to deploy a Lisp app. Esp. on Microsoft
platforms, you'd have to ship your app with a matching JVM anyway,
because otherwise you might be very surprised about the differences in
the interpretation of "Write Once, Run Anywhere" between Sun and MS (I
once had a client with 4 different JVMs installed, 1 from Microsoft
and 3 from /the same/ installation of an Oracle client, on an
otherwise virgin Win2kPro). If Java would work, you could just provide
a startup script like one of those that hide the brokenness of Java's
pseudo-URI-based namespace model, i.e. one that invokes
"my-favourite-lisp -eval my-main-function" instead of "java -jar
guesswhereiam.jar -cp Y:\\didntguessthisdid.jar
org.net.sourceforge.uninteresting_project.changes_next_summer_anyway.but_long_name_hence_pretty_OO_and_stuff.JMVGFHOWMainFactoryManagerProviderInterfaceListener"
directly.

If your clients would even accept a VB6 or a .NET "executable", you
should have already won, because no Lisp environment could possibly be
more problematic to set up than these. If they would only allow "real"
Win32 executables, you might have to do 5 minutes of work yourself, or
use one of the several CL implementations that support building .exe
executables.
From: André Thieme
Subject: Re: compile
Date: 
Message-ID: <bv9cn3$3mn$1@ulric.tng.de>
Yes, I agree with all that.
I just mean that a Windows user is not interested in downloading Lisp and 
typing something into the read-eval-print loop.
I'm sorry if my text was so easy to misunderstand.

I am personally interested to "bring Lisp to the desktop". I am only 
missing an easily programmable, platform-independent GUI toolkit that 
offers me all the power today's GUIs allow (Win, Aqua, KDE, GNOME, etc...).

This wxLisp project looks great. However, it is simply not making 
advances because the project still needs some C++ hackers to help out.
From: Thomas F. Burdick
Subject: Re: compile
Date: 
Message-ID: <xcvd693ekn7.fsf@famine.OCF.Berkeley.EDU>
You're using a broken character encoding  <······································@justmail.de> writes:

> Yes, I agree with all that.
> I just mean, a windows user is not interested in downloading lisp and 
> typing something in the read-eval-print loop.
> I'm sorry if my text was so easy to misunderstand.
> 
> I am personally interested to "bring Lisp to the desktop". I am only 
> missing an easy programmable plattform independent gui toolkit that 
> offers me all power todays gui allow (win, aqua, kde, gnome, etc...).

What's wrong with LispWorks' CAPI?  Sounds better than wxWindows to
me.  If I got a client today who'd want an end user app on a
combination of Windows &/or Mac &/or Unix, I'd go buy a copy of LW
tomorrow.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: André Thieme
Subject: Re: compile
Date: 
Message-ID: <bv9omp$bi2$1@ulric.tng.de>
Thomas F. Burdick wrote:

>>I am personally interested to "bring Lisp to the desktop". I am only 
>>missing an easy programmable plattform independent gui toolkit that 
>>offers me all power todays gui allow (win, aqua, kde, gnome, etc...).
> 
> 
> What's wrong with LispWork's CAPI?  Sounds better than wxWindows to
> me.  If I got a client today who'd want an end user app on a
> combination of Windows &/or Mac &/or Unix, I'd go buy a copy of LW
> tomorrow.


Nothing is wrong with CAPI; however, it is still not perfect. The price 
for LispWorks is not very low. If CAPI were included in Corman Lisp, 
for example, I would already have bought it!
And from all the screenshots I have seen so far, only on the Mac do the 
results look very good.
From: Marc Spitzer
Subject: Re: compile
Date: 
Message-ID: <86bron3111.fsf@bogomips.optonline.net>
André Thieme <······································@justmail.de> writes:

>
> Nothing is wrong with CAPI, however it is still not perfect. The price
> for LispWorks is not very low. If CAPI would be included in Corman

$900 with free runtimes, dirt cheap.  

marc
From: André Thieme
Subject: Re: compile
Date: 
Message-ID: <bv9u88$f77$1@ulric.tng.de>
Marc Spitzer wrote:
> André Thieme <······································@justmail.de> writes:
> 
> 
>>Nothing is wrong with CAPI, however it is still not perfect. The price
>>for LispWorks is not very low. If CAPI would be included in Corman
> 
> 
> $900 with free runtimes, dirt cheap.

Of course it is a good price for what they offer.
However, the student version of Corman is easier for me to pay for, at $125.
Allegro Pro is around $650, but I don't know how it compares to the $999 
version of LispWorks.
From: Thomas F. Burdick
Subject: Re: compile
Date: 
Message-ID: <xcvznc7ctb7.fsf@famine.OCF.Berkeley.EDU>
André Thieme  <······································@justmail.de> writes:

> Marc Spitzer wrote:
> > André Thieme <······································@justmail.de> writes:
> > 
> > 
> >>Nothing is wrong with CAPI, however it is still not perfect. The price
> >>for LispWorks is not very low. If CAPI would be included in Corman
> > 
> > 
> > $900 with free runtimes, dirt cheap.
> 
> Of course it is a good price for what they offer.
> However, the student version of Corman is easier to pay for me, 125$.
> Allegro pro is around 650$, but I don't know how it compares to the 999$ 
> version of LispWorks.

Looking at their site, their academic price is $599.  It's 1-2 months'
rent, so that's a fair amount for a student, but the difference in
price from Corman is probably worth less than the effort of making your
own or waiting for wxLisp.

Or, hell, you could just wait a few more months to get an even bigger
exchange-rate discount.  Being as I get paid in USD, I liked it better
when my $1 was worth 7FF, but oh well, it's good for you.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Ray Dillinger
Subject: Re: compile
Date: 
Message-ID: <40188A0D.5B77E852@sonic.net>
André Thieme wrote:
> 
> Marc Spitzer wrote:
> > André Thieme <······································@justmail.de> writes:
> >
> >
> >>Nothing is wrong with CAPI, however it is still not perfect. The price
> >>for LispWorks is not very low. If CAPI would be included in Corman
> >
> >
> > $900 with free runtimes, dirt cheap.
> 
> Of course it is a good price for what they offer.
> However, the student version of Corman is easier to pay for me, 125$.
> Allegro pro is around 650$, but I don't know how it compares to the 999$
> version of LispWorks.

Heh.  I have a copy of Franz' Allegro Common Lisp, still in shrink-wrap, 
that I'd sell you.  

The catch is, it's version 2.0, which is *TEN* years old.  According 
to the box, it requires 8 Mbytes of RAM and 12 Mbytes of hard disk 
space, and will run on Windows 3.1 or Windows NT (given the era, this 
probably means WinNT 3.51). 

This is the "professional version" and as such it includes the "Runtime 
generator for royalty-free delivery", which is what you need to make a
runtime image that people can load & execute with a doubleclick. According
to the box, it also includes some kind of drag&drop GUI builder, which 
most likely generates GUIs functionally equivalent to modern GUIs, but
built with widgets so far out of style they'd look stylishly retro at 
this point.

However, I doubt this is really what you want.  Given that it's for 
a Microsoft platform, there's virtually no chance this code will run 
on a current WinNT, let alone WinME or WinXP.  I've never used this 
version myself, being mainly a Unix/Linux user at home.  I've been 
saving it, shrink-wrap and all, because maybe a retrocomputist or 
somebody with an old program which has succumbed to bit rot on more 
recent versions of the product may want it some day.

					Bear
From: Pascal Costanza
Subject: Re: compile
Date: 
Message-ID: <bvar22$srk$1@newsreader2.netcologne.de>
André Thieme wrote:
> Marc Spitzer wrote:
> 
>> André Thieme <······································@justmail.de> writes:
>>
>>
>>> Nothing is wrong with CAPI, however it is still not perfect. The price
>>> for LispWorks is not very low. If CAPI would be included in Corman
>>
>> $900 with free runtimes, dirt cheap.
> 
> Of course it is a good price for what they offer.
> However, the student version of Corman is easier to pay for me, 125$.
> Allegro pro is around 650$, but I don't know how it compares to the 999$ 
> version of LispWorks.

All of these products have free personal/evaluation versions. Just try 
them out.


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Espen Vestre
Subject: Re: compile
Date: 
Message-ID: <kwwu7bytt0.fsf@merced.netfonds.no>
André Thieme <······································@justmail.de> writes:

> And from all screenshots I have seen so far only on the Mac the
> results look very good.

I agree that the Motif look of the Linux version only looks pretty to
those who think that anything from the 80s is pretty, but what's wrong
with the Windows version (except that you might argue that Windows per
se is ugly)?

-- 
  (espen)
From: Martti Halminen
Subject: Re: compile
Date: 
Message-ID: <pan.2004.01.28.21.41.42.502436@kolumbus.fi>
On Wed, 28 Jan 2004 22:16:41 +0100, André Thieme wrote:

> Erik Naggum wrote:
> 
>>   The most common reason to ask this question is that you want to write
>>   and run your programs the same way you were used to in languages whose
>>   tools convert a dead text file into a dead executable file which you
>>   give the spark of life from the user interface.  Wanting this is not a
>>   bad thing, but it is not the only option.
> 
> It might depend on the clientele for who an application is made. If I 
> program an office application or a little game under Windows then my 
> users expect from me an executable. So under these circumstances it can 
> be a very important option to compile directly into .exe files.

Why would they expect that, when they are not getting that in most other
present-day software either? These days the software might consist of half a
dozen .dll files, plenty of auxiliary files, sound files, templates,
perhaps a little database etc, and several subdirectories, not to speak of
Java stuff where the delivered software might include a full JRE, also.

I'd say what most users want is something they can install with a few
clicks with InstallShield or whatever installer it comes with, and can be
started with a doubleclick. Whether the internals are .exe, .dll, Java
class libraries etc, is of no importance to the average user (as long as
it is buzzword-compatible...). If it happens to be a Lisp image with the
application already included, that won't show to the end-user.

--