From: Matt Wette
Subject: LISP for embedded systems
Date: 
Message-ID: <7k1zbo9ike.fsf_-_@jpl.nasa.gov>
There is interest in flying LISP in space and there has been some
debate on whether this is wise or not.  I think this can be broken
down into two more fundamental issues: (1) is LISP the right
development environment, and (2) will LISP run in embedded systems?

My LISP knowledge is mediocre at best.  [I've programmed some ~1000
line programs in Scheme and read probably 1/2 of Graham's ANSI lisp
book.  My background is control theory, lots of programming in
Fortran, C, Perl, Scheme, Java, sh, etc., and a fair amount of embedded
work (VxWorks, etc.).]

I'd like to give my views on what I perceive as issues wrt LISP in
embedded systems and see what knowledgeable people have to say about
it.

The issues I see are the following:

 1. Running LISP doesn't mean using the language.  It means using the
    environment.  The major LISP implementations don't place much
    emphasis on real-time issues, so the well-understood real-time
    environments worked out for the POSIX.1c, etc. standards are
    probably not directly available to the designer.

 2. LISP uses garbage collection.  This is an issue with Java as well
    as LISP.
     a. One needs to be able to guarantee, for real-time performance, that
        heap space management doesn't become a resource issue which
        prevents fast context switches.  That is, I don't want LISP
        to mask interrupts or hold semaphores for more than a few
        machine instructions.
     b. One needs to do things in interrupt handlers.  Can you run
        LISP in a (real) interrupt handler?
	
 3. LISP has to carve all data from the heap.  Therefore LISP might
    be too slow.  
    If I want to write code to be fast, I'd use C, not LISP, for just
    this reason.  You can code so that malloc is never called.  In C
    the compiler will allocate your objects on the stack at compile
    time: no runtime overhead.

 4. CLOS is slow.
    I've read through the MOP book a bit.  My understanding is that
    there has got to be a huge runtime overhead for dispatch (of
    generic functions).  Compare this to single-inheritance systems
    (e.g., Java), where dispatch (my guess) is <5 machine
    instructions.

 5. LISP is big.  Therefore it may slow down the processor.
    Here is the rationale.  Say my embedded code is 1 MB in C;
    that's a moderate to large application.  In LISP you get the whole
    kitchen sink.  I've seen moderate LISP in embedded systems come in
    at > 30 MB.  A 30 MB program is going to run slower than a 1 MB
    program because you are going to see a significant increase in
    cache misses.  This is especially important in embedded systems,
    which don't have the huge caches that desktop boxes do.

There's probably more, but I have to go.

Matt
-- 
Matthew.R.Wette at jpl.nasa.gov -- I speak for myself, not for JPL.

From: Matt Wette
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <7kzoyc7ygp.fsf@jpl.nasa.gov>
Matt Wette <···············@jpl.nasa.gov> writes:

> The issues I see are the following:
> 
  [...]

> There's probably more, but I have to go.

 6. I suspect that most (all?) LISP implementations don't schedule
    tasks the way most RTOSs do.  That is, if a LISP task is running
    and an interrupt comes in, can the LISP system switch to another
    task in a small (say < 0.1 millisecond), bounded amount of time?

 7. Here's another important one.  The only LISP I know of that runs
    on a realtime OS must run as a single OS thread.  It does its own
    multithreading within that thread.  So there is *no* way to have
    two LISP tasks with a C task in between.  I'd also guess that the
    LISP environment has no way to handle the classic priority
    inversion problem.
Matt

-- 
Matthew.R.Wette at jpl.nasa.gov -- I speak for myself, not for JPL.
From: Tim Bradshaw
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <ey3k8pgge37.fsf@lostwithiel.tfeb.org>
* Matt Wette wrote:
>  2. LISP uses garbage collection.  This is an issue with Java as well
>     as LISP.
>      a. One needs to be able to guarantee, for real-time performance, that 
>         heap space management doesn't become a resource issue which
>         prevents fast context switches.  That is, I don't want LISP 
>         to mask interrupts or hold semaphores for more than a few 
> 	machine instructions.
>      b. One needs to do things in interrupt handlers.  Can you run
> 	LISP in an (real) interrupt handler?

Real time issues with GCs are fairly well understood, and there is a
significant literature on them.  I don't know whether any of the
current COTS lisp systems use a GC with any real-time guarantees.  I
believe that the Symbolics GC was essentially real-time up to a couple
of omissions (I think array allocation was one).  Of course, you can
write in such a way that the GC just never runs.
	
>  3. LISP has to carve all data from the heap.  Therefore LISP might
>     be too slow.  
>     If I want to write code to be fast, I'd use C, not LISP, for just
>     this reason.  You can code so that malloc is never called.  In C
>     the compiler will allocate your objects on the stack at compile
>     time: no runtime overhead.

What makes you think you cannot do this in Lisp?

>  4. CLOS is slow.
>     I've read through the MOP book a bit.  My understanding is that
>     there has got to be a huge runtime overhead for dispatch (of
>     generic functions).  Compare this to single-inheritance systems
>     (e.g., Java), where dispatch (my guess) is <5 machine
>     instructions.

Did you *measure* any CLOS performance?  The MOP given in AMOP is
*not* part of the CL standard.

>  5. LISP is big.  Therefore it may slow down the processor.
>     Here is the rationale.  If my embedded code might be 1meg in C.
>     That's a moderate to large application.  In LISP you get the whole
>     kitchen sink.  I've seen moderate LISP in embedded systems come in
>     at > 30Meg.  A 30Meg program is going to run slower than a 1Meg
>     program because you are going to see a significant increase in
>     cache misses.  This is especially important in embedded systems,
>     which don't have the huge caches that desktop boxes do.

Could you give cites?  I'm currently working with a 160,000-line
system (plus some 20k lines of C), and the dumped image for that,
with all the debugging info in, is about 6 MB.  I'm not sure how much
smaller it would be without that stuff compiled in.

--tim
From: Christopher R. Barry
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <873dw332sb.fsf@2xtreme.net>
Tim Bradshaw <···@tfeb.org> writes:

> Could you give cites? I'm currently working with a 160,000 (+ some 20k
> lines of C) line system, and the dumped image for that, with all the
> debugging info in, is about 6Mb, I'm not sure how much smaller it
> would be without that stuff compiled in.

160,000 lines of Lisp? What does this puppy do? (If you are at liberty
to disclose any details, that is.)

Christopher
From: Erann Gat
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <gat-2409991827550001@milo.jpl.nasa.gov>
When I first read this posting I missed this short but crucial passage:

In article <·················@jpl.nasa.gov>, Matt Wette
<···············@jpl.nasa.gov> wrote:

> I'd like to give my views on what I perceive as issues wrt LISP in
> embedded systems and see what knowledgeable people have to say about
> it.

I skipped ahead to the following, which I interpreted as Matt expressing
statements of fact rather than inquiries.  I answered them in a way that
was thus inappropriate, and I apologize.  (Ten years of battling
ignorance and prejudice have made me defensive and paranoid.)
I hope you'll allow me another crack at it.

> The issues I see are the following:
> 
>  1. Running LISP doesn't mean using the language.  It means using the 
>     environment.  The major LISP implementations don't have lots of 
>     emphasis in real-time issues, so the well-understood real-time 
>     environments worked out for the POSIX.1c, etc standards are
>     probably not directly available to the designer.

I don't advocate using Lisp for hard-real-time code so I won't defend
it.  I do know that there are hard-real-time Lisps out there and that
they have been used in commercial applications (telephone switching),
but that's the extent of my knowledge.

>  2. LISP uses garbage collection.  This is an issue with Java as well
>     as LISP.
>      a. One needs to be able to guarantee, for real-time performance, that 
>         heap space management doesn't become a resource issue which
>         prevents fast context switches.  That is, I don't want LISP 
>         to mask interrupts or hold semaphores for more than a few 
>         machine instructions.
>      b. One needs to do things in interrupt handlers.  Can you run
>         LISP in an (real) interrupt handler?

Again, if you're not using Lisp for hard-real-time these are simply
non-issues.

>  3. LISP has to carve all data from the heap.  Therefore LISP might
>     be too slow.  
>     If I want to write code to be fast, I'd use C, not LISP, for just
>     this reason.  You can code so that malloc is never called.  In C
>     the compiler will allocate your objects on the stack at compile
>     time: no runtime overhead.

No, Lisp does not have to carve all data from the heap.  Most Lisps
will do this by default because it's the safest thing to do, but most
Lisps do allow stack-allocated objects just like C.  (In fact, it is
possible to write Lisp code that never garbage collects.)

>  4. CLOS is slow.
>     I've read through the MOP book a bit.  My understanding is that
>     there has got to be a huge runtime overhead for dispatch (of
>     generic functions).  Compare this to single-inheritance systems
>     (e.g., Java), where dispatch (my guess) is <5 machine
>     instructions.

A good CLOS compiler can in principle generate code that is as good
as C++ (with appropriate declarations).  Most compilers aren't quite
that good, but will recognize common cases where the full generality
of generic function dispatch is not needed and optimize those cases.

Another thing to say about this is that you seem to be implicitly
assuming that runtime performance is the only metric worth optimizing.
I dispute that premise.  What we should be optimizing is overall
system costs, including development, debugging, and maintenance
costs.  Machine cycles are cheap.  Human cycles are expensive.
Lisp is better at optimizing machine cycles than most people think,
but the real win is how it optimizes human cycles.  It's just *so*
much easier to write and maintain a Lisp program than a C program.

>  5. LISP is big.  Therefore it may slow down the processor.
>     Here is the rationale.  If my embedded code might be 1meg in C.
>     That's a moderate to large application.  In LISP you get the whole
>     kitchen sink.  I've seen moderate LISP in embedded systems come in
>     at > 30Meg.  A 30Meg program is going to run slower than a 1Meg
>     program because you are going to see a significant increase in
>     cache misses.  This is especially important in embedded systems,
>     which don't have the huge caches that desktop boxes do.

Like I mentioned before, we now have a port of MCL whose footprint is
about 2 MB that runs on vxWorks.

> 6. I suspect that most (all?) LISP implementations don't schedule
>    tasks the way most RTOSs do.  That is, if a LISP task is running
>    and an interrupt comes in, can the LISP system switch to another
>    task in a small (say < 0.1 millisecond), bounded amount of time?

Externally there is no difference whatsoever between a vxWorks thread
that is running Lisp and one that is running C.  (In fact, most
Lisps - MCL included - actually *are* C programs.  There is a
kernel written in C that implements the core functionality.  Most
of the interesting stuff, of course, happens on the heap, but that's
just a big statically allocated array as far as the rest of the
system is concerned.)

> 7. Here's another important one.  The only LISP I know of that runs
>    on a realtime OS must run as a single OS thread.  It does its own
>    multithreading within that thread.  So there is *no* way to have
>    two LISP tasks with a C task in between.  I'd also guess that the
>    LISP environment has no way to handle the classic priority
>    inversion problem.

Our MCL port is fully integrated with the vxWorks scheduler.  Lisp
threads and C threads are both scheduled by vxWorks and can have
interleaved priorities.

Erann Gat
···@jpl.nasa.gov
From: Chuck Fry
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <37ec1be8$0$210@nntp1.ba.best.com>
I've combined Matt's concerns into one followup.

In article <·················@jpl.nasa.gov>,
Matt Wette  <···············@jpl.nasa.gov> wrote:
>There is interest in flying LISP in space and there has been some
>debate on whether this is wise or not.  I think this can be broken
>into two or so more fundamental issues.  (1) is LISP the right
>development environment and (2) will LISP run in embedded systems.

Matt, before I start addressing your issues point-by-point, allow me to
point out that several of them are issues of Lisp implementations, and
not strictly those of the language definition itself.  

It's a chicken-and-egg problem: there aren't many Lisp systems optimized
for real-time control off the shelf today, because there hasn't been
much demand, but they do exist.  Gensym used to offer one, Symbolics
developed Minima for a telephone switching application, and Rod Brooks's
L is currently available from IS Robotics.  For more on L, see:
 <http://www.isr.com/home/home_technology.html>

Also, some of the performance issues you raise are programming technique
issues, which are under the control of the programmer.  You don't write
embedded C code like you would GUI code on a PC or workstation, and in
general I wouldn't expect production embedded Lisp code to be written
like proof-of-concept AI research code either.  One of the problems in
the Lisp community is that relatively few people really understand how
to write tight, efficient code, but that's an issue of culture,
training, and purpose, and not something inherent in the language.

Given these issues, what does Lisp have to offer the embedded community?

(1) Better choice of built-in control structures: UNWIND-PROTECT,
flexible and robust exception handling, closures, etc.  And if you don't
have what you need, you can probably write a macro for it.

(2) Better ability to defer implementation decisions via abstractions,
especially using macros.  With conventional languages, all too often the
programmer must prematurely choose representations and algorithms.  Lisp
programmers can often defer those choices until it becomes clear which
areas of the program are bottlenecks.

(3) Better program safety.  True, garbage collection doesn't prevent
memory leaks, but it will prevent dereferencing null pointers!

(4) More-than-adequate performance, with modern optimizing compilers and
careful program design.

>The issues I see are the following:
>
> 1. Running LISP doesn't mean using the language.  It means using the 
>    environment.  The major LISP implementations don't have lots of 
>    emphasis in real-time issues, so the well-understood real-time 
>    environments worked out for the POSIX.1c, etc standards are
>    probably not directly available to the designer.

True enough.  Again, this is because there's been little demand for
real-time Lisp implementations, and not something inherent about the
language.  But I should point out POSIX.1c isn't considered part of the
C language spec either.

> 2. LISP uses garbage collection.  This is an issue with Java as well
>    as LISP.
>     a. One needs to be able to guarantee, for real-time performance, that 
>        heap space management doesn't become a resource issue which
>        prevents fast context switches.  That is, I don't want LISP 
>        to mask interrupts or hold semaphores for more than a few 
>	machine instructions.

There are a wide variety of approaches to garbage collection, each with
different performance characteristics in space and time.  Gary Byers,
currently at JPL, has told me that one can implement garbage collection
in Common Lisp in such a way as to guarantee hard real-time response,
but doing so requires compromises that penalize performance in other
ways.  This may be a worthwhile tradeoff, depending on the application.

And again, there's nothing inherent in the Common Lisp language
specification that prevents a programmer from writing code that only
allocates memory at startup, or at build time for that matter.

>     b. One needs to do things in interrupt handlers.  Can you run
>	LISP in an (real) interrupt handler?

In theory, yes.  (Lisp Machines did it all the time!)  The limitations
of what you can and can't do in an interrupt handler apply to all
languages equally.

> 3. LISP has to carve all data from the heap.  Therefore LISP might
>    be too slow.  
>    If I want to write code to be fast, I'd use C, not LISP, for just
>    this reason.  You can code so that malloc is never called.  In C
>    the compiler will allocate your objects on the stack at compile
>    time: no runtime overhead.

Some Lisp implementations have offered stack allocation as well (I'm not
sure which ones currently offer it) via the DYNAMIC-EXTENT declaration.
And it is possible to write Lisp code in a style similar to C code.
With modern optimizing Common Lisp compilers, I've written tight inner
loops that generate instruction sequences similar to what I'd expect
from equivalent C code.  And they don't allocate memory either.

> 4. CLOS is slow.
>    I've read through the MOP book a bit.  My understanding is that
>    there has got to be a huge runtime overhead for dispatch (of
>    generic functions).  Compare this to single-inheritance systems
>    (e.g., Java), where dispatch (my guess) is <5 machine
>    instructions.

Again, it's all about tradeoffs.  CLOS is intended to be general, as
befits a language used for research, and so compromises performance to a
degree.  Contrast this to C++ where performance is king, and
expressibility and convenience suffer.  Single inheritance object
systems are great... until you have an application that requires code to
be duplicated.

CLOS probably is not as slow as you think it is.  There have been some
very clever implementations of method dispatch; certainly most of them
are faster than exhaustive CASE statements!  And some Lisp systems
(notably MCL) offer the concept of "base classes", with large
improvements in slot access time in exchange for some loss of
generality.

And if you don't need the generality, don't use it!  I avoid using CLOS
when I know the code I'm writing is in the critical path of an
application.

> 5. LISP is big.  Therefore it may slow down the processor.
>    Here is the rationale.  If my embedded code might be 1meg in C.
>    That's a moderate to large application.  In LISP you get the whole
>    kitchen sink.  

This is less of an issue than you think.  It's true that there's a
certain minimum size to the runtime system.  But remember that
Harlequin's Lisp *as flown on DS1* allows you to remove major subsystems
that you don't need, for a considerable space savings.  Other
implementations offer similar options.

>		    I've seen moderate LISP in embedded systems come in
>    at > 30Meg.  A 30Meg program is going to run slower than a 1Meg
>    program because you are going to see a significant increase in
>    cache misses.  This is especially important in embedded systems,
>    which don't have the huge caches that desktop boxes do.

If you're referring to the DS1 Remote Agent Experiment, I don't think
you can call that "moderate Lisp in embedded systems".  There was a lot
more fat that could have been cut out, had we had the time and budget
for it.

Cache misses are an important issue and a key reason for the poor
performance of some Lisp programs, but I think it's more an issue of
locality-of-reference in data structure and algorithm design than one of
program size.  E.g. how big was the RAD6000's cache?  Maybe 4 KB?  How
are you going to avoid generating cache misses in a 1 MB program?  I
don't care what language it's written in, you're going to blow out of a
4 KB cache pretty frequently.

> 6. I suspect that most (all?) LISP implementations don't schedule
>    tasks the way most RTOSs do.  That is, if a LISP task is running
>    and an interrupt comes in, can the LISP system switch to another
>    task in a small (say < 0.1 millisecond), bounded amount of time?
>
> 7. Here's another important one.  The only LISP I know of that runs
>    on a realtime OS must run as a single OS thread.  It does its own
>    multithreading within that thread.  So there is *no* way to have
>    two LISP tasks with a C task in between.  I'd also guess that the
>    LISP environment has no way to handle the classic priority
>    inversion problem.

The multitasking implementations on the Common Lisps that I'm familiar
with are pretty primitive, I'll admit.  But note that Allegro CL for
Windows is already using native threads.  And L claims to handle
thousands of task switches per second.

To sum up, there's probably no single Lisp implementation off the shelf
today that addresses all of your concerns.  And some of these are points
I raised in our presentation of RAX to the Lisp Users Group Meeting last
year.  But there is nothing inherent in the language that prevents them
from being addressed.  I think the potential benefits bear investigating.

 -- Chuck, again not speaking for NASA or the Lisp vendors
--
	    Chuck Fry -- Jack of all trades, master of none
 ······@chucko.com (text only please)  ········@home.com (MIME enabled)
Lisp bigot, mountain biker, car nut, sometime guitarist and photographer
The addresses above are real.  All spammers will be reported to their ISPs.
From: Jason Trenouth
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <AkzvNxAV8sj359bpy1fTujQfa+sA@4ax.com>
On 25 Sep 1999 00:48:40 GMT, ······@best.com (Chuck Fry) wrote:

> It's a chicken-and-egg problem: there aren't many Lisp systems optimized
> for real-time control off the shelf today, because there hasn't been
> much demand, but they do exist.  Gensym used to offer one, Symbolics
> developed Minima for a telephone switching application, 

Several people have mentioned in passing that a hard real-time Common Lisp was
used for AT&T's telephone switching. FTR it was Harlequin's "Hercules" -- a
LispWorks variant with a hard real-time GC. Some of the Symbolics Minima folks
later worked at Harlequin, but not on Hercules.

__Jason
From: Jonathan
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <7sotq8$npo$1@news8.svr.pol.co.uk>
Chuck Fry wrote:

> I've combined Matt's concerns into one followup...

> Cache misses are an important issue and a key reason for the poor
> performance of some Lisp programs, but I think it's more an issue of
> locality-of-reference in data structure and algorithm design than one of
> program size.  E.g. how big was the RAD6000's cache?  Maybe 4 KB?  How
> are you going to avoid generating cache misses in a 1 MB program?  I
> don't care what language it's written in, you're going to blow out of a
> 4 KB cache pretty frequently.

The obvious: execution speed isn't always a design priority - please don't
assume that I'm saying it should be. I'm not. Nor I think are Matt or Chuck.
But it is sometimes *very* useful. One of the major attractions of Lisp to
me is that it is a high performance yet dynamically typed language - if I
didn't need reasonable execution speed, I'd probably have stuck with Python
for the jobs I'm moving to Lisp.

So:

Poor use of the cache is one of the most frequently encountered reasons for
low execution speed for programs written in any language, including C or
even assembler. Fixing the problem sometimes involves changing an algorithm
in a way that may require more machine instructions to be executed, but
which requires less traffic between the cache and main memory.  The
result can be a large speed increase.  Altering data structures so that
items of data likely to be used closely together in time are placed close
together in physical memory can work very well too (when an item is brought
into the cache its neighbour may come along for free). "High Performance
Computing" by Dowd is a good introduction to this sort of stuff, and most of
the advice there should cross quite easily to Lisp. (If anyone actually
cares about this stuff and wants more references, mail me.)

Another important technique is to align data so that items don't straddle
cache lines unnecessarily (briefly: data gets brought into the cache in say
32 byte chunks that are aligned on 32 byte boundaries, a 4 byte data object
that has its first byte on one cache line and the second, third, and fourth on
another needs two cache lines to be fetched instead of one - the performance
hit can be huge.) You also need to pad data out sometimes - to put dummy
data into structures to stop cache line boundaries being crossed by items.
(An array of 23-byte structures will mostly be poorly aligned even if the
array starts on a 32 byte boundary. So you might want to pad each structure
up to 32 bytes so every one of them is now cache line aligned.) Sometimes
you re-arrange data for similar reasons. This sort of thing might be taken
care of by the compiler, but sometimes compilers mess it up, so paranoids do
it themselves - I do. In C and some other languages a programmer can
allocate memory for himself and start allocating from the first aligned
address inside that block.

At the moment my Lisp knowledge is scanty - although I'm more impressed with
it than any other language I've used so far. So I'm wondering:

1 Given Lisp's lack of an address and data size operator (true?), how do you
check data alignment and the need for padding on objects? Does anyone know
how good the compilers are at padding and aligning data automatically? I
would have thought it was a fairly basic thing, but I spoke to an engineer
working on a commercial (non-Lisp) language project recently and he was
frighteningly ignorant of the whole area. A little attention in this area
can often give amazing results.

2 In C++, data items declared together are stored together, subject only to
any padding the compiler might do. So typically (i.e. on every compiler I've
used)

class foo
{
    float a, b;
    char c;
    float d;
};

will have a layout of:

offset 0:  a
offset 4:  b
offset 8:  c
offset 12: d   (3 bytes of padding after c keep d 4-byte aligned)

and will occupy 16 bytes. You might change it to:

class foo
{
    float a, b;
    float d;
    char c;
    char pad[3];
};

to make the padding explicit and keep the size at exactly 16 bytes, so that
two will fit inside a cache line without causing a misalignment of what
follows. Does anyone know if Lisp compilers follow similar rules for
structures?


Jonathan Coup, XWare
From: Chuck Fry
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <37f00025$0$206@nntp1.ba.best.com>
In article <············@news8.svr.pol.co.uk>,
Jonathan <········@meanwhile.freeserve.co.uk> wrote:
>At the moment my Lisp knowledge is scanty - although I'm more impressed with
>it than any other language I've used so far. So I'm wondering:
>
>1 Given Lisp's lack of an address and data size operator (true?), how do you
>check data alignment and the need for padding on objects? Does anyone know
>how good the compilers are at padding and aligning data automatically? 

This is not often a concern in Lisp because the most basic data type is
an object reference, which is essentially a machine pointer with some
type information added.  Consequently most stock hardware Lisp
implementations I'm aware of align everything on word (4-byte)
boundaries.  Conses and double floats are often 8-byte aligned, and
arrays and other data objects 16-byte aligned.

>2 In C++, data items declared together are stored together, subject only to
>any padding the compiler might do. So typically (ie on every compiler I've
>used)
>
>class foo
>{
>    float a, b;
>    char c;
>    float d;
>}
>
>will have a layout of:
>
>offset 0: a
>offset 4: b
>offset 8: c
>offset 10:d
>
>and will occupy 14 bytes. You might change it to:
>
>class foo
>{
>    float a, b;
>    float d;
>    char c;
>    char pad[3]
>}
>
> to get 16 byte alignment so that two will fit inside a cache line without
>causing a misalignment of what follows. Does any one know if Lisp compilers
>follow similar rules for structures?

Unless I've *really* been hiding my head in the sand, few Lisp
implementations support such fine-grained data structure definitions
natively, although most support it for references to external (C) data
structures.  So again it's rarely a concern.

 -- Chuck
--
	    Chuck Fry -- Jack of all trades, master of none
 ······@chucko.com (text only please)  ········@home.com (MIME enabled)
Lisp bigot, mountain biker, car nut, sometime guitarist and photographer
The addresses above are real.  All spammers will be reported to their ISPs.
From: Rainer Joswig
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <joswig-2809990200200001@194.163.195.67>
In article <··············@nntp1.ba.best.com>, ······@best.com (Chuck Fry) wrote:

> Unless I've *really* been hiding my head in the sand, few Lisp
> implementations support such fine-grained data structure definitions
> natively, although most support it for references to external (C) data
> structures.  So again it's rarely a concern.

I'm not so sure about that. Many Lisp systems have
a lot of *internal* data structures with special layout
features (from raster arrays to special vectors for
implementing structures, ...).

If you are going to do a lot of data processing, control
over the layout of the data is not that uninteresting.

Take: 

(defclass foo ()
  ((bar :type (SIMPLE-VECTOR 2))
   (baz :type (SIMPLE-VECTOR 4))))

In certain situations I may want to have bar and baz
inlined into the object layout. So instead
of having pointers to vectors, I'd like to have
the vectors directly included.
(Or take objects which directly include other objects.)

We were discussing this recently with Martin Cracauer,
who would have an interesting application for that.
It's a real problem for him that most CL implementations
don't give you *that* much control over the layout of
arrays, structures, or classes.
From: Thomas A. Russ
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <ymiln9rt2rw.fsf@sevak.isi.edu>
······@lavielle.com (Rainer Joswig) writes:
> Take: 
> 
> (defclass foo ()
>   ((bar :type (SIMPLE-VECTOR 2))
>    (baz :type (SIMPLE-VECTOR 4))))
> 
> In certain situations I may want to have bar and baz
> inlined into the object layout. So instead
> of having pointers to vectors, I'd like to have
> the vectors directly included.
> (Or take objects which directly include other objects.)

Yes.  It seems that there are (at least) two different reasons why one
would have a slot that contains items of a structured type:

  (1)  A reference to a separate stand-alone object
  (2)  A structured value that is an intrinsic part of the object
       itself.

The standard implementation via pointer is ideal for case (1), but may
be less ideal, depending on circumstances, for case (2).  I recall
seeing that an OODB system (Versant) had a way of declaring that the
purpose of the structured type or object filler was case 2.  If that
declaration was made, the system would allocate in-line storage for the
object.  This had the potential for greatly improving object retrieval
time from the database, since contiguous storage was available.  One
could imagine adding an allocation type of :embedded to the current
:class and :instance slot allocation types.

Now, there are likely some features of CLOS and Lisp that make this a
bit more difficult to arrange.  In particular, one should not expect to
be able to create separate pointers to the structured values in an
object that is in a slot of type (2), since I would imagine it would
wreak some havoc on the garbage collection scheme to have pointers into
the internal data storage area of an object.

It isn't clear to me how one could enforce this restriction and still
have a somewhat useful system.

There may also be some concerns about redefining such a class, but that
would, I think, be easier to handle -- even if the solution was to say
you couldn't do that at run-time.

Enough rambling for now.

-- 
Thomas A. Russ,  USC/Information Sciences Institute          ···@isi.edu    
From: Kent M Pitman
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <sfwvh8uvkrx.fsf@world.std.com>
···@sevak.isi.edu (Thomas A. Russ) writes:

> [...] It seems that there are (at least) two different reasons why one
> would have a slot that contains items of a structured type:
> 
>   (1)  A reference to a separate stand-alone object
>   (2)  A structured value that is an intrinsic part of the object
>        itself.
> 
> The standard implementation via pointer is ideal for case (1), but may
> be less ideal, depending on circumstances, for case (2).  I recall
> seeing that an OODB system (Versant) had a way of declaring that the
> purpose of the structured type or object filler was case 2.  If that
> declaration was made, the system would allocate in-line storage for the
> object.  [...]

> Now, there are likely some features of CLOS and Lisp that make this a
> bit more difficult to arrange.  In particular, one should not expect to
> be able to create separate pointers to the structured values in an
> object that is in a slot of type (2), since I would imagine it would
> wreak some havoc on the garbage collection scheme to have pointers into
> the internal data storage area of an object.

I think a bunch of gc's could handle this but not all.  And the
language tries not to create situations which force a particular
implementation, especially given that implementations are already
deployed and would have to be retrofitted.  It'd be better if it were
done as an invisible "optimization" by a block compiler observing that
the use of the structure type was limited to situations where it could
tell the values never escaped.  Of course, that's not a totally trivial task. ;-)
Need the mythical SCC [Sufficiently Clever Compiler].  

A related variant of this involves recursive structures.  Consider a tree
that has subnodes of itself that are trees and is both an external and internal
version at the same time.  An "outer pointer" needs to be typed, but the
recursive nodes don't need to be for many situations.  The problem is akin
to heap- vs stack-consing of numbers.  You pass it around in efficient form
among trusted friends and you make a wrapper around it when you have to pass
off to someone you don't know.  Still, that's complicated.
From: Francis Leboutte
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <a9rsN3KJPAKQbhxRUYk=+EG4acLP@4ax.com>
Matt Wette <···············@jpl.nasa.gov> wrote:

...

> 5. LISP is big.  Therefore it may slow down the processor.
>    Here is the rationale.  If my embedded code might be 1meg in C.
>    That's a moderate to large application.  In LISP you get the whole
>    kitchen sink.  I've seen moderate LISP in embedded systems come in
>    at > 30Meg.  A 30Meg program is going to run slower than a 1Meg
>    program because you are going to see a significant increase in
>    cache misses.  This is especially important in embedded systems,
>    which don't have the huge caches that desktop boxes do.

I can speak about a 76,000-line CL application whose runtime image is 5.1
Mb (this includes a graphic subsystem). I have not tried to reduce the
image size (I know I could; I'm not interested, as the application has to
be delivered on a PC in a nuclear plant, not on a space ship).

--
Francis Leboutte
··········@skynet.be  http://users.skynet.be/algo
From: Paolo Amoroso
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <37edc3f0.828552@news.mclink.it>
On 24 Sep 1999 13:10:41 -0700, Matt Wette <···············@jpl.nasa.gov>
wrote:

>  4. CLOS is slow.
>     I've read through the MOP book a bit.  My understanding is that
>     there has got to be a huge runtime overhead for dispatch (of
>     generic functions).  Compare this to single-inheritance systems
>     (e.g., Java), where dispatch (my guess) is <5 machine
>     instructions.

There is a paper by Henry Baker titled (quoting from memory) "CLOStrophobia
- Its Etiology and Cure" that addresses some of your concerns. It's
available at Baker's Web site:

  ftp://ftp.netcom.com/pub/hb/hbaker/home.html


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/
From: Marco Antoniotti
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <lwd7v4pjv1.fsf@copernico.parades.rm.cnr.it>
·······@mclink.it (Paolo Amoroso) writes:

> On 24 Sep 1999 13:10:41 -0700, Matt Wette <···············@jpl.nasa.gov>
> wrote:
> 
> >  4. CLOS is slow.
> >     I've read through the MOP book a bit.  My understanding is that
> >     there has got to be a huge runtime overhead for dispatch (of
> >     generic functions).  Compare this to single-inheritance systems
> >     (e.g., Java), where dispatch (my guess) is <5 machine
> >     instructions.
> 
> There is a paper by Henry Baker titled (quoting from memory) "CLOStrophobia
> - Its Etiology and Cure" that addresses some of your concerns. It's
> available at Baker's Web site:
> 
>   ftp://ftp.netcom.com/pub/hb/hbaker/home.html
> 

I read the paper.  I doubt that what is proposed there can make it
into the standard: the constraints posed on the CPL seemed to me too
"different".  I am not very familiar with the "sealing" protocol used
in Dylan, but maybe that would be easier to include.

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Paul Tarvydas
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <380D2384.3D2B6E7A@tscontrols.com>
Matt Wette wrote:

[This has probably little to do with your point ... :-]

>  5. LISP is big.

LISP, the language, itself is not big.  The language was invented in the 50's
and ran well on small (relative to the present) machines.  To rub this point
in, here's an anecdote:

I consider myself to be a compiler guy and a real-time programmer.  The very
first compiler I built was a LISP compiler...

Around 1978, I manually wire-wrapped a Z80 system with 12K (yes K, not M) of
RAM.  I transliterated Fritz van der Watteren's LISP for the 6800 to Z80
assembler (about 5K of Z80 machine code, 4K on the 6800).  It had full-blown
garbage collection (albeit, mark-and-sweep).

I then typed in the LISP compiler from J.R. Allen's book "The Anatomy of
Lisp" (fixing mistakes and typos as I went).

This compiler fit within the limited space.  It successfully compiled and ran
various (albeit small) LISP programs.

So, in my experience, you can fit (1) a LISP interpreter with garbage
collection, (2) a LISP compiler, (3) LISP source code and (4) compiled LISP
programs into as little as 12K of RAM.  If you want to add smarter garbage
collection, you probably need to add some more K's.  If you want to add lots
of LISP libraries, you probably need to add some more K's.  And so on.

[You can see some of the LISP code that ran on my Z80 machine in the Sept. 78
or 79 issue of Dr. Dobb's Journal "A Potpourri of Lisp Functions"].

[Later, I went on to learn about larger languages, like Small C (also in DDJ
(1981, I think)).]

[Even later, I learned about and wrote compilers for monstrous languages like
C, Concurrent Euclid, C++, Eiffel, et al].

[Note that garbage collection latency in my LISP was well-bounded - it never
exceeded 7K* in one pass :-].

Paul Tarvydas
·········@tscontrols.com


* 7K = 12K total RAM less 5K for the interpreter.
From: moribund
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <pNmP3.742$W12.69758@typhoon2.gnilink.net>
    Speaking of small Lispish interpreters, what is the smallest one anyone
has run into that implemented enough of the language to be useful (defun,
+-/*, loop constructs, lists, etc).  I'm not talking about doing tricks with
a precompiler to make the source tiny either.  It would be a great learning
exercise in building interpreters nonetheless.  :)

                Damond

Paul Tarvydas <·········@tscontrols.com> wrote in message
······················@tscontrols.com...
> Matt Wette wrote:
>
> [This has probably little to do with your point ... :-]
>
> >  5. LISP is big.
>
> LISP, the language, itself is not big.  The language was invented in the
> 50's and ran well on small (relative to the present) machines.  To rub
> this point in, here's an anecdote:
>

[snip]
From: Marco Antoniotti
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <lw3dv6orzk.fsf@copernico.parades.rm.cnr.it>
"moribund" <·······@iximd.com> writes:

>     Speaking of small Lispish interpreters, what is the smallest one anyone
> has run into that implemented enough of the language to be useful (defun,
> +-/*, loop constructs, lists, etc).  I'm not talking about doing tricks with
> a precompiler to make the source tiny either.  It would be a great learning
> exercise in building interpreters nonetheless.  :)

Yes and no.  "Enough of the language" is, in my experience, a good
excuse to write a "yet another incompatible version of the language".

> 
> Paul Tarvydas <·········@tscontrols.com> wrote in message
> ······················@tscontrols.com...
> > Matt Wette wrote:
> >
> > [This has probably little to do with your point ... :-]
> >
> > >  5. LISP is big.

Well. With PCs coming with 64M standard and up, your notion of 'big'
may change.

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Howard R. Stearns
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <380E126F.D326B1DA@elwood.com>
moribund wrote:
> 
>     Speaking of small Lispish interpreters, what is the smallest one anyone
> has run into that implemented enough of the language to be useful (defun,
> +-/*, loop constructs, lists, etc).  I'm not talking about doing tricks with
> a precompiler to make the source tiny either.  It would be a great learning
> exercise in building interpreters nonetheless.  :)
> 
>                 Damond

I knew a guy in school, Steve Rosenthal, who implemented some Scheme
dialect on an HP41C.  I doubt if it was compatible with anything... Last
I heard (three years ago?) he was at some interpreted-C startup in
Boston.

> 
> Paul Tarvydas <·········@tscontrols.com> wrote in message
> ······················@tscontrols.com...
> > Matt Wette wrote:
> >
> > [This has probably little to do with your point ... :-]
> >
> > >  5. LISP is big.
> >
> > LISP, the language, itself is not big.  The language was invented in the
> > 50's and ran well on small (relative to the present) machines.  To rub
> > this point in, here's an anecdote:
> >
> 
> [snip]
From: R. Matthew Emerson
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <87k8ohdcul.fsf@nightfly.apk.net>
··@world.std.com (Jeff DelPapa) writes:

> If you limit things to interpreters, there was a usable one that ran
> on a 4kw (12 bit words) pdp-8.  (it was available from the DECUS
> software library).

I think I actually have that.  It's PDP-1 Lisp, by L. Peter Deutsch,
written in 1963--1964, while he was in high school.

-matt
From: Rob Warnock
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <7uoh3r$6davm@fido.engr.sgi.com>
moribund <·······@iximd.com> wrote:
+---------------
|     Speaking of small Lispish interpreters, what is the smallest one anyone
| has run into that implemented enough of the language to be useful...
+---------------

TinyScheme <URL:http://www.altera.gr/dsouflis/tinyscm.html> is claimed to
contain almost all of R5RS Scheme, is only ~64 KB compiled on x86 Linux,
and is useful enough that its author's company uses it in their Web server
as a scripting language.

I'm sure you could do something much smaller if you have a looser definition
of "useful"...


-Rob

-----
Rob Warnock, 8L-846		····@sgi.com
Applied Networking		http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673
1600 Amphitheatre Pkwy.		FAX: 650-933-0511
Mountain View, CA  94043	PP-ASEL-IA
From: moribund
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <TQYP3.1019$mq6.58966@typhoon2.gnilink.net>
Rob Warnock <····@rigden.engr.sgi.com> wrote in message
·················@fido.engr.sgi.com...

> I'm sure you could do something much smaller if you have a looser
> definition of "useful"...

    That's going to make all the difference...  :)

    I've run into SIOD recently as well.  Downloaded and built the thing.
Comes with all kinds of little goodies.

                Damond
From: Christopher R. Barry
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <87wvsftcke.fsf@2xtreme.net>
"moribund" <·······@iximd.com> writes:

> Rob Warnock <····@rigden.engr.sgi.com> wrote in message
> ·················@fido.engr.sgi.com...
> 
> > I'm sure you could do something much smaller if you have a looser
> > definition of "useful"...
> 
>     That's going to make all the difference...  :)
> 
>     I've run into SIOD recently as well.  Downloaded and built the thing.
> Comes with all kinds of little goodies.

Don't waste your time with SIOD. C is a better scripting/extension
language than SIOD. If it must be Scheme, then use a real Scheme....

Christopher
From: Tim Bradshaw
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <ey3k8oeiclx.fsf@lostwithiel.tfeb.org>
* Christopher R Barry wrote:

> Don't waste your time with SIOD. C is a better scripting/extension
> language than SIOD. If it must be Scheme, then use a real Scheme....

This, of course, despite the fact that SIOD has been used as a
scripting / extension language for many systems with great success by
people with a lot more experience than you (see the festival speech
synthesis system for instance).

--tim
From: Christopher R. Barry
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <87n1t9u8s6.fsf@2xtreme.net>
Tim Bradshaw <···@tfeb.org> writes:

> * Christopher R Barry wrote:
> 
> > Don't waste your time with SIOD. C is a better scripting/extension
> > language than SIOD. If it must be Scheme, then use a real Scheme....
> 
> This, of course, despite the fact, that SIOD has been used as a
> scripting / extension language for many systems with great success by
> people with a lot more experience than you (see the festival speech
> synthesis system for instance).

(SayText "I've been using the Debian Festival package for over a year,
Tim. I've been using The Gimp extensively. SIOD has caused me way too
much pain. It has a single iteration construct: while. Nor does it
support recursion in all systems that use it, because the
programmer who embedded it into the application did not do the job
right (see Gimp). There are no &optional, &key etc. parameters. Even C
has VA_ARGS. It has no mod function, nor any similar operation.
Actually, it is missing a great deal of math functions that you need
to do computer graphics for writing Gimp scripts. It does not have the
kind of typing guarantees with numbers that you have with Common Lisp
or C. If a routine produces a number larger than an int, you get a
float. It is a piece of filth garbage trash. I don't understand your
defense of it. Are you speaking from extensive experience with it? I
could write something better while blind-folded and drunk hacking LOGO
on an Apple ][. As for your cute remark: C++ has been used as a
programming language and a plugin/extension language for many systems
with great success by people with a lot more experience than you (see
Adobe Photoshop for instance).)

Christopher
From: Tim Bradshaw
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <ey3emelisqy.fsf@lostwithiel.tfeb.org>
* Christopher R Barry wrote:

(I choose merely a selection here, more would be cruel I think)

> There are no &optional, &key etc. parameters. Even C
> has VA_ARGS.

Are you aware how scheme does optional arguments?  I guess not.

> It has no mod function, nor any similar operation.

for hack value I added one to the sources (which I don't recall ever
looking at before).  7 lines, about 5 minutes, including finding a C
book to look up the mod operator in C.  I'm not a genius programmer,
rather, this is a fundamentally trivial task.

> I
> could write something better while blind-folded and drunk hacking LOGO
> on an Apple ][. 

But adding mod was too hard?

--tim
From: Marco Antoniotti
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <lwd7u5gk42.fsf@copernico.parades.rm.cnr.it>
Tim Bradshaw <···@tfeb.org> writes:

> * Christopher R Barry wrote:
> 
> (I choose merely a selection here, more would be cruel I think)
> 
> > There are no &optional, &key etc. parameters. Even C
> > has VA_ARGS.
> 
> Are you aware how scheme does optional arguments?  I guess not.

#define SARCASM_ON
They are optional in the sense that every implementor has the choice
to be RxRS compliant or not :)

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Marc Battyani
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <E4D2764DB092A614.2DB9522258C9124A.B3B9CBBC9DDF87EE@lp.airnews.net>
Marco Antoniotti <·······@copernico.parades.rm.cnr.it> wrote in message
···················@copernico.parades.rm.cnr.it...
> #define SARCASM_ON

What's this, Marco: a #define in c.l.l.!
Tss tss!
The correct syntax is defconstant, #+, (with-sarcasm-on ...) or whatever you
want, but definitely not #define...

Marc Battyani
Sorry couldn't resist...
From: Marco Antoniotti
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <lwu2nfq4kd.fsf@copernico.parades.rm.cnr.it>
"Marc Battyani" <·············@csi.com> writes:

> Marco Antoniotti <·······@copernico.parades.rm.cnr.it> wrote in message
> ···················@copernico.parades.rm.cnr.it...
> > #define SARCASM_ON
> 
> What's this, Marco: a #define in c.l.l.!
> Tss tss!
> The correct syntax is defconstant, #+, (with-sarcasm-on ...) or whatever you
> want, but definitely not #define...
> 
> Marc Battyani
> Sorry couldn't resist...

Sorry.  I forgot to post the code for the #d dispatching macro :)

Cheers


-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Christopher R. Barry
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <87so31royy.fsf@2xtreme.net>
Tim Bradshaw <···@tfeb.org> writes:

> * Christopher R Barry wrote:
> 
> Are you aware how scheme does optional arguments?  I guess not.

I know how scheme does &rest arguments:

  ((lambda (x . y) (list x y)) 1 2 3 4 5)  =>  (1 (2 3 4 5))

I don't know how to do &optional or &key ones, however. Do enlighten
me.

Note: I don't _hate_ scheme (the programming language). I really like
for example that you can do:

  ((if #t * -) 3 3)  =>  9

You can get the effect in Common Lisp, but it doesn't have the same
prettiness and coolness. But prettiness and coolness doesn't get
large, complex software systems built.

> > It has no mod function, nor any similar operation.
> 
> for hack value I added one to the sources (which I don't recall ever
> looking at before).  7 lines, about 5 minutes, including finding a C
> book to look up the mod operator in C.  I'm not a genius programmer,
> rather, this is a fundamentally trivial task.

I think you're missing the point. SIOD is supposed to be something
that you embed into a program for extension. Its sole purpose is to
prevent people from having to modify the C sources of a program when
they want a new feature. Do you not see a problem that you have to
modify the C sources of the extension language itself? Especially when
you have to modify the C sources of an extension language to add a
feature to it which is built into the C language in which it is
written?

> > I
> > could write something better while blind-folded and drunk hacking LOGO
> > on an Apple ][. 
> 
> But adding mod was too hard?

No, not hard, but _painful_. I'll describe my experience. Technically,
I did not need the mod function but the rem function. Over the set of
positive numbers (which is all I was working with), mod and rem do the
same thing; they do the division and return the remainder. Let's
pretend that C only has + - * / (since SIOD doesn't have much more
than this). Now I want to figure out what

 (mod 14 4)

is. I can begin by dividing the number 14 by divisor 4:

  14 / 4  =>  3

The result is the integer 3. The result of dividing two integers in C
is always an integer. I can then take the difference of the number and
the product of the result and divisor, which is 2, and there is my
remainder. If the divisor is larger than the number then the remainder
will be the number. The following C code captures everything:

  int remainder(int number, int divisor) {
      int result;
      if (divisor > number)
	  return number;
      result = number / divisor;
      return number - divisor * result;
  }

Now Common Lisp will return 7/2 for (/ 14 4). However, if Common Lisp
did not have MOD and REM, you could implement them trivially using
either FLOOR, TRUNCATE, CEILING, or ROUND. SIOD doesn't even have a
round function.

Now in SIOD:

  (/ 14 4) => 3.5

There is no way for me to use that value to compute the remainder of
the operation that produced it. And there is no function in SIOD that
will take 14 and 4 and give me something helpful to compute the
remainder. There isn't even a way I know of to convert a float to an
integer. So anyways, to do mod in SIOD you must perform the division
in software. Additionally, the only way to iterate is with while or
recursion. I went with while. Here it is:

  (define (mod number divisor)
    (if (= divisor 0)
	(error "Divisor can't be zero!"))
    (if (or (< number 0) (< divisor 0))
	(error "Arguments can't be negative in this quicky version!"))
    (cond ((> number divisor)
	   (let ((counter 0)
		 (remainder 0))
	     (while (>= (set! remainder (- number (* counter divisor)))
			divisor)
		    (set! counter (+ counter 1)))
	     remainder))
	  ((< number divisor) number)
	  (t 0)))

Now this function per se was trivial to write. But I found it hard to
believe that any language implementation could be so braindamaged as
to force you to implement something in software in this manner; so
I spent a _long_ time reading through the SIOD reference manual
looking for A Better Way. There really isn't. I couldn't believe it.
All of this was _frustrating_. It was painful. I want to spend my time
solving _real_ problems, not dealing with issues like this which
shouldn't exist.

And this was just the _beginning_ of my frustrations with SIOD and
specifically how it was done with the Gimp.

I don't want to waste any more time discussing this. If you think SIOD
is fine; use it and be happy. But I won't stop recommending against it.

Christopher
From: Rolf-Thomas Happe
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <r5g0z07lg2.fsf@bonnie.mathematik.uni-freiburg.de>
Christopher R. Barry:
> Tim Bradshaw <···@tfeb.org> writes:
> > > It has no mod function, nor any similar operation.
> > 
> > for hack value I added one to the sources (which I don't recall ever
> > looking at before).  7 lines, about 5 minutes, including finding a C
> > book to look up the mod operator in C.  I'm not a genius programmer,
> > rather, this is a fundamentally trivial task.

Aside: C specifies the behaviour of its mod operator only for positive
input.  The value of -4 % 3, for instance, is machine dependent.

[...]
> integer. So anyways, to do mod in SIOD you must perform the division
> in software. Additionally, the only way to iterate is with while or
> recursion. I went with while. Here it is:
[...]

BTW, there's a more obvious "quick solution":

;; X: real number, P: positive number ==> value: X modulo P.
(define (mod x p)
  (cond ((< x 0)  (mod (+ x p) p))
	((>= x p) (mod (- x p) p))
	(else x)))

(R5RS Scheme's QUOTIENT/REMAINDER/MODULO procedures are more general.)

rthappe
From: Tim Bradshaw
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <ey3d7u4j32r.fsf@lostwithiel.tfeb.org>
* Christopher R Barry wrote:
> Tim Bradshaw <···@tfeb.org> writes:
>> * Christopher R Barry wrote:
>> 
>> Are you aware how scheme does optional arguments?  I guess not.

> I know how scheme does &rest arguments:

>   ((lambda (x . y) (list x y)) 1 2 3 4 5)  =>  (1 (2 3 4 5))

> I don't know how to do &optional or &key ones, however. Do enlighten
> me.

It doesn't.  And to quote your original comments about siod:

	If it must be Scheme, then use a real Scheme....


--tim
From: Rob Warnock
Subject: &optional and &key in Scheme  [was: Re: LISP for embedded systems ]
Date: 
Message-ID: <7v0f4u$7rj7p@fido.engr.sgi.com>
Christopher R. Barry <······@2xtreme.net> wrote:
+---------------
| Tim Bradshaw <···@tfeb.org> writes:
| > Are you aware how scheme does optional arguments?  I guess not.
| 
| I know how scheme does &rest arguments:
|   ((lambda (x . y) (list x y)) 1 2 3 4 5)  =>  (1 (2 3 4 5))
| I don't know how to do &optional or &key ones, however. Do enlighten me.
+---------------

You can of course "do" &optional and &key (or rather, provide somewhat
similar functionality) with explicit post-processing of the &rest argument
in your function. [Note: In Scheme, most people use simple symbols with
no colon prefix as "keywords", though MzScheme provides an explicit
"keyword" datatype that may be helpful.] This is made much more readable
(but certainly no faster) by using convenience routines, so that you could
write the CL function:

	(defun foo (a b c &key d (e 123) (f 456 f-p))
	  ... )

in Scheme like this, maybe:

	(define (foo a b c . rest)
	  (let ((d (key-value rest 'd))
		(e (key-value rest 'e 123))
		(f (key-value rest 'f 456))
		(f-p (key-present? rest 'f)))
	    ... ))

Or assuming you had non-hygienic macros available (such as "defmacro",
which most Schemes either have or can craft up out of whatever low-level
macros they *do* have), you might decide that by convention your macro
would capture the variable name "rest", and write something like this:

	(define (foo a b c . rest)
	  (with-parsed-keys (d (e 123) (f 456 f-p))
	    ... ))

As far as &optional goes, if you use explicit parsing you usually
have to manually supply an index, e.g., from some actual code I wrote
a long time ago (before I knew that CL called them &optional, or I
might have called the helper function "opt-value" instead of "dflt")

	(define (dump . rest)		; &optional addr size
	  (let* ((addr (dflt rest 0 dump-prev-addr))
		 (size (dflt rest 1 dump-prev-size))
		 (j (modulo addr 16)))
	    ...))

Or, as above, with macro support:

	(define (dump . rest)		; &optional addr size
	  (with-parsed-optionals ((addr dump-prev-addr)
				  (size dump-prev-size))
	    ... ))

[Or even knock yourself out and write a "defun" macro that does it all
for you automatically...]  ;-}  ;-}

Yes, it's ugly. Yes, CL does it better, and it's all IN THE STANDARD!
But also, yes, with a little macrology[*] one can approximate the
convenience even in Scheme.


-Rob

[*] And SIOD *does* have just enough of a macro facility to enable
defining "defmacro", and hence be able to write in the above style.

And with a bunch of work (once), you can even use SIOD's macros to
get (almost) all of the R4RS syntax.

-----
Rob Warnock, 8L-846		····@sgi.com
Applied Networking		http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673
1600 Amphitheatre Pkwy.		FAX: 650-933-0511
Mountain View, CA  94043	PP-ASEL-IA
From: Marco Antoniotti
Subject: Re: &optional and &key in Scheme  [was: Re: LISP for embedded systems ]
Date: 
Message-ID: <lwso2zq4ex.fsf@copernico.parades.rm.cnr.it>
····@rigden.engr.sgi.com (Rob Warnock) writes:

	... all interesting stuff I used to play around with Franz Lisp
        Opus xx.xx, circa 1984/85.

> Or, as above, with macro support:
> 
> 	(define (dump . rest)		; &optional addr size
> 	  (with-parsed-optionals ((addr dump-prev-addr)
> 				  (size dump-prev-size))
> 	    ... ))
> 
> [Or even knock yourself out and write a "defun" macro that does it all
> for you automatically...]  ;-}  ;-}
> 
> Yes, it's ugly. Yes, CL does it better, and it's all IN THE STANDARD!
> But also, yes, with a little macrology[*] one can approximate the
> convenience even in Scheme.

Or one can bite the bullet and switch to a real (Common) Lisp. :)

Cheers


-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Fernando D. Mato Mira
Subject: Re: &optional and &key in Scheme  [was: Re: LISP for embedded systems ]
Date: 
Message-ID: <38146474.DB8D7581@iname.com>
Marco Antoniotti wrote:

> ····@rigden.engr.sgi.com (Rob Warnock) writes:
>
> > [Or even knock yourself out and write a "defun" macro that does it all
> > for you automatically...]  ;-}  ;-}
> >
> > Yes, it's ugly. Yes, CL does it better, and it's all IN THE STANDARD!
> > But also, yes, with a little macrology[*] one can approximate the
> > convenience even in Scheme.
>
> Or one can bite the bullet and switch to a real (Common) Lisp. :)

Or one can admit defeat and rip off CMUCL :->

--
((( DANGER )) LISP BIGOT (( DANGER )) LISP BIGOT (( DANGER )))

Fernando D. Mato Mira
Real-Time SW Eng & Networking
Advanced Systems Engineering Division
CSEM
Jaquet-Droz 1                   email: matomira AT acm DOT org
CH-2007 Neuchatel                 tel:       +41 (32) 720-5157
Switzerland                       FAX:       +41 (32) 720-5720

www.csem.ch      www.vrai.com     ligwww.epfl.ch/matomira.html
From: Marco Antoniotti
Subject: Re: &optional and &key in Scheme  [was: Re: LISP for embedded systems ]
Date: 
Message-ID: <lwn1t7pp9h.fsf@copernico.parades.rm.cnr.it>
"Fernando D. Mato Mira" <········@iname.com> writes:

> Marco Antoniotti wrote:
> 
> > ····@rigden.engr.sgi.com (Rob Warnock) writes:
> >
> > > [Or even knock yourself out and write a "defun" macro that does it all
> > > for you automatically...]  ;-}  ;-}
> > >
> > > Yes, it's ugly. Yes, CL does it better, and it's all IN THE STANDARD!
> > > But also, yes, with a little macrology[*] one can approximate the
> > > convenience even in Scheme.
> >
> > Or one can bite the bullet and switch to a real (Common) Lisp. :)
> 
> Or one can admit defeat and rip off CMUCL :->
> 

Meaning?  :)  CMUCL has its idiosyncrasies, but it is a wonderful
system.  After all you get what you pay for.  If you want a commercial
strength CL, Franz and Harlequin will be happy to sell you one.

The point is that CMUCL comes at the same "price" of
many Schemes :)

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Robert Monfera
Subject: Re: &optional and &key in Scheme  [was: Re: LISP for embedded systems ]
Date: 
Message-ID: <381456DC.6E7577FF@fisec.com>
Marco Antoniotti wrote:
...
> > > Or one can bite the bullet and switch to a real (Common) Lisp. :)
> >
> > Or one can admit defeat and rip off CMUCL :->
> >
> 
> Meaning?  :)  CMUCL has its idiosyncrasies, but it is a wonderful
> system.  After all you get what you pay for.  If you want a commercial
> strength CL, Franz and Harlequin will be happy to sell you one.

Probably ripping off CMUCL is praise of its quality, i.e., "Go
understand the CMUCL solution and reimplement it in your favourite
Scheme."  At least that's how I understood it.

Robert
From: Fernando D. Mato Mira
Subject: Re: &optional and &key in Scheme  [was: Re: LISP for embedded systems ]
Date: 
Message-ID: <3814A3C3.D7EF65AA@iname.com>
Robert Monfera wrote:

> Marco Antoniotti wrote:
> ...
> > > > Or one can bite the bullet and switch to a real (Common) Lisp. :)
> > >
> > > Or one can admit defeat and rip off CMUCL :->
> > >
> >
> > Meaning?  :)  CMUCL has its idiosyncrasies, but it is a wonderful
> > system.  After all you get what you pay for.  If you want a commercial
> > strength CL, Franz and Harlequin will be happy to sell you one.
>
> Probably ripping off CMUCL is praise of its quality, i.e., "Go
> understand the CMUCL solution and reimplement it in your favourite
> Scheme."  At least that's how I understood it.

Meaning: give a couple of touches to the CMUCL sources, tweak your
Scheme implementation a bit (or do things like
(define otherwise 'otherwise)), declare `keywords'
((defkeyword foo) -> (define foo 'foo)), load the CMUCL looping
constructs, then load DEFMACRO, then load DEFUN, then load DEFSETF...

Silly, isn't it? OK, where are the CL implementations with native
multithreading on IRIX, Solaris, Win32, Linux?

--
((( DANGER )) LISP BIGOT (( DANGER )) LISP BIGOT (( DANGER )))

Fernando D. Mato Mira
Real-Time SW Eng & Networking
Advanced Systems Engineering Division
CSEM
Jaquet-Droz 1                   email: matomira AT acm DOT org
CH-2007 Neuchatel                 tel:       +41 (32) 720-5157
Switzerland                       FAX:       +41 (32) 720-5720

www.csem.ch      www.vrai.com     ligwww.epfl.ch/matomira.html
From: Robert Monfera
Subject: Re: &optional and &key in Scheme  [was: Re: LISP for embedded systems ]
Date: 
Message-ID: <3814A60C.C86FA50A@fisec.com>
"Fernando D. Mato Mira" wrote:
...
> OK, where are the CL implementations with native
> multithreading on IRIX, Solaris, Win32, Linux?

Win32: Lispworks, ACL 5.0+, Corman Lisp...

Mac: ?  

Unix: Hopefully the question is 'when', not 'if'.  The reason is
that, AFAIK, native threading is needed for applications that scale to
tens of processors (e.g., Sun enterprise servers) without having to
fire up that many separate CL processes and communicate between them.

What are the reasons for not using native multithreading on Unix, and
are they weighing more than what's traded?

Thanks,
Robert
From: Tim Bradshaw
Subject: Re: &optional and &key in Scheme  [was: Re: LISP for embedded systems ]
Date: 
Message-ID: <ey3puy3qiay.fsf@lostwithiel.tfeb.org>
* Robert Monfera wrote:

> What are the reasons for not using native multithreading on Unix, and
> are they weighing more than what's traded?

As I understand it the reasons are really twofold: 

	1. The `standard' (symbolics-derived) multithreading API which
	   most CLs implement is just not suitable for large-scale
	   multiprocessors.

	2. There is no standard threading API for Unix boxes.  Yes,
	   there is posix threads, but no one actually implements it
	   compatibly, so you have to reimplement for every platform,
	   which makes it very expensive.

I don't really understand (1).  Of course, something like
WITHOUT-INTERRUPTS & friends are basically death, but even the
symbolics documentation says you should use suitable locks instead, so
this doesn't seem to be that bad.  I can't see why a programming style
that uses locks to achieve synchronisation won't work, and it's
supported in the current model. Perhaps I'm being impossibly naive.
Or perhaps the problem is that oodles of code out there uses
WITHOUT-INTERRUPTS to do critical sections, and that modifying this is
really hard (because such modifications are widespread as now
*everyone* who gets at a given object needs to respect the lock on
it).

It's not clear to me if the various ATOMIC-* operations (ATOMIC-INCF &
so on) are implementable reasonably on multiprocessors; I guess it
depends on whether they have these store-conditional type operators --
and I think they kind of must have something like that, because you
need *something* like that to implement locks.
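The retry loop Tim is gesturing at can be sketched concretely. The
following is an illustrative Python sketch (not any real Lisp
runtime's code): an ATOMIC-INCF built from a compare-and-swap
primitive, which is how load-locked/store-conditional hardware would
typically be used. The CAS here is simulated with a lock; on real
hardware it is a single instruction or instruction pair.

```python
import threading

class Cell:
    """A mutable cell with a simulated compare-and-swap primitive."""
    def __init__(self, value=0):
        self.value = value
        self._cas_lock = threading.Lock()  # stands in for the hardware CAS

    def compare_and_swap(self, expected, new):
        # Atomically: if value == expected, store new and report success.
        with self._cas_lock:
            if self.value == expected:
                self.value = new
                return True
            return False

def atomic_incf(cell, delta=1):
    # Classic lock-free retry loop: read, compute, CAS; retry if
    # another thread raced in between the read and the CAS.
    while True:
        old = cell.value
        if cell.compare_and_swap(old, old + delta):
            return old + delta

counter = Cell()
threads = [threading.Thread(
               target=lambda: [atomic_incf(counter) for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Four threads each increment 1000 times; every increment eventually
succeeds exactly once, so no updates are lost.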

(2) is a pretty good argument from an implementor's point of view,
obviously pretty worthless from a user's.

I can also see another argument

	3. It may be very hard to guarantee the kind of safety that
	   lisp programmers are used to on a multiprocessor -- for
	   instance if you're assigning a variable, and you are doing it
	   from two processors without any locking, you may just end
	   up with either something mangled in memory (so the GC will
	   fall over later), or with two completely different versions
	   in different caches or something, and in any case this may
	   result in C-like `fall off the edge of the world' behaviour
	   rather than the lisp-like elegant failure we are used to.

There's also the issue of distributed GC &c, which may be a whole
nightmare implementationally, though I think it's theoretically a
solved problem.

--tim
From: Michael L. Harper
Subject: Re: &optional and &key in Scheme  [was: Re: LISP for embedded systems ]
Date: 
Message-ID: <3814F553.EBA3893B@alcoa.com>
I am somewhat confused about this issue. What you say below makes some
sense. But, for example, Allegro 5.0.1 now supports native
multiprocessing on NT, although I have not played with it. In general,
most of the solutions they used there should apply to the problem on
UNIX. My understanding of the problem for Franz on UNIX is that
pthreads simply does not provide all of the functionality they need --
for example, the concept of run and arrest reasons and their
associated functions. Perhaps Duane will provide some clarification.

Tim Bradshaw wrote:

> * Robert Monfera wrote:
>
> > What are the reasons for not using native multithreading on Unix, and
> > are they weighing more than what's traded?
>
> As I understand it the reasons are really twofold:
>
>         1. The `standard' (symbolics-derived) multithreading API which
>            most CLs implement is just not suitable for large-scale
>            multiprocessors.
>
>         2. There is no standard threading API for Unix boxes.  Yes,
>            there is posix threads, but no one actually implements it
>            compatibly, so you have to reimplement for every platform,
>            which makes it very expensive.
>
> I don't really understand (1).  Of course, something like
> WITHOUT-INTERRUPTS & friends are basically death, but even the
> symbolics documentation says you should use suitable locks instead, so
> this doesn't seem to be that bad.  I can't see why a programming style
> that uses locks to achieve synchronisation won't work, and it's
> supported in the current model. Perhaps I'm being impossibly naive.
> Or perhaps the problem is that oodles of code out there uses
> WITHOUT-INTERRUPTS to do critical sections, and that modifying this is
> really hard (because such modifications are widespread as now
> *everyone* who gets at a given object needs to respect the lock on
> it).
>
> It's not clear to me if the various ATOMIC-* operations (ATOMIC-INCF &
> so on) are implementable reasonably on multiprocesors, I guess it
> depends on whether they have these store-conditional type operators --
> and I think they kind of must have something like that because you
> need *something* like that to implement locks.
>
> (2) is a pretty good argument from an implementor's point of view,
> obviously pretty worthless from a user's.
>
> I can also see another argument
>
>         3. It may be very hard to guarantee the kind of safety that
>            lisp programmers are used to on a multiprocessor -- for
>            instance if you're assigning a variable, and you are doing it
>            from two processors without any locking, you may just end
>            up with either something mangled in memory (so the GC will
>            fall over later), or with two completely different versions
>            in different caches or something, and in any case this may
>            result in C-like `fall off the edge of the world' behaviour
>            rather than the lisp-like elegant failure we are used to.
>
> There's also the issue of distributed GC &c, which may be a whole
> nightmare implementationally, though I think it's theoretically a
> solved problem.
>
> --tim
From: Duane Rettig
Subject: Re: &optional and &key in Scheme  [was: Re: LISP for embedded systems ]
Date: 
Message-ID: <4emeigljn.fsf@beta.franz.com>
"Michael L. Harper" <··············@alcoa.com> writes:

> I am confused about this issue somewhat. What you say below makes some
> sense. But, for example, Allegro 5.0.1 now supports multiprocessing native
> on NT although I have not played with it.

You are correct, though it is important to understand that we support
native OS threads, not symmetric multiprocessing.  What this means is
that you get multiprocessing of both lisp and non-lisp in a
single-processor sense; threads can run when resources are available,
one at a time (just as on all multiprocessing systems with a single
processor).  An added benefit is that on multiple-processor systems,
the other processors can run as many non-lisp code sections at a time
as there are processors.  (A thread may have both lisp code and
non-lisp code run on it; I define a non-lisp code section as the time
when non-lisp code is being run, i.e. when the lisp heap is not being
accessed.)  The restriction is that the lisp heap can only be accessed
by one thread at a time.
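The "one active lisp thread" model described here can be sketched
illustratively (this is a Python sketch of the idea, not Allegro's
actual implementation): all threads share one heap lock, and a thread
drops it while running foreign (non-lisp) code, so many foreign
sections can run in parallel while heap access stays serialized.

```python
import threading

heap_lock = threading.Lock()  # only one thread in the "lisp heap" at a time
heap = []                     # stands in for the lisp heap

def lisp_section(value):
    # Heap access: serialized by the global heap lock.
    with heap_lock:
        heap.append(value)

def foreign_section(n):
    # Heap lock is NOT held here; on a multiprocessor this part
    # can run on any processor concurrently with other threads.
    return sum(range(n))

def worker(value, n):
    lisp_section(value)           # serialized
    result = foreign_section(n)   # runs in parallel
    lisp_section(result)          # serialized again

threads = [threading.Thread(target=worker, args=(i, 100)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each worker records its id and the result of its foreign computation;
the heap stays consistent because every mutation holds the lock.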

> In general, most of the solutions
> they used there should apply to the problem on UNIX. My understanding of the
> problem for Franz on UNIX is that pthreads simply does not provide all of
> the functionality they need. For example, the concept of run and arrest
> reasons and their associated functions. Perhaps Duane will provide some
> clarification.

This is correct.  Solaris (version >= 2.6, I believe) does have all the
Posix compliance we need, but we have found small non-conformances in
all of the others, in areas critical to our design.

It was interesting to me to find that vendors can be fairly testy about
this non-conformance; last time I posted about this, a linux implementor
mailed me saying that linux did conform.  I didn't know the particulars,
but our multiprocessing guru listed about three or four areas where their
Posix conformance didn't measure up.  The reply was something like "Oh,
yeah, that. ...".  The promise was that those non-conformances were being
worked on and fixes were imminent.  I haven't kept up with the progress
on this.

> Tim Bradshaw wrote:
> 
> > * Robert Monfera wrote:
> >
> > > What are the reasons for not using native multithreading on Unix, and
> > > are they weighing more than what's traded?

The short answer to this original question is that we simply haven't
done it yet.  But I am not sure what "weighing more than what's traded"
means, nor do I know what the final goal of "native multiprocessing"
is for this particular question (it means a different thing to
everyone who asks it).  I'm going to assume a worst case that includes
full, SMP-style multiprocessing (where multiple threads can access
and modify the lisp heap simultaneously on different processors).
In practice, this is done much less often than is thought, because even
on true SMP systems, the granularity of threads and locked data is much
more on a per-process basis than fine-grained control over every
data access.

> > As I understand it the reasons are really twofold:
> >
> >         1. The `standard' (symbolics-derived) multithreading API which
> >            most CLs implement is just not suitable for large-scale
> >            multiprocessors.
> >
> >         2. There is no standard threading API for Unix boxes.  Yes,
> >            there is posix threads, but no one actually implements it
> >            compatibly, so you have to reimplement for every platform,
> >            which makes it very expensive.
> >
> > I don't really understand (1).  Of course, something like
> > WITHOUT-INTERRUPTS & friends are basically death, but even the
> > symbolics documentation says you should use suitable locks instead, so
> > this doesn't seem to be that bad.  I can't see why a programming style
> > that uses locks to achieve synchronisation won't work, and it's
> > supported in the current model. Perhaps I'm being impossibly naive.
> > Or perhaps the problem is that oodles of code out there uses
> > WITHOUT-INTERRUPTS to do critical sections, and that modifying this is
> > really hard (because such modifications are widespread as now
> > *everyone* who gets at a given object needs to respect the lock on
> > it).

Well, you _said_ you don't understand (1), but then you demonstrate a
knowledge of some of the issues that makes me disbelieve your original
statement :-)

In addition, one piece of the lisp-machine multiprocessing model that
is extremely hard to implement on GP hardware is that process switching
can be governed per thread (stack group) by a lisp function that runs
to determine run reasons.  This means that something has to set up the
thread to run, to run the function, to determine whether or not to
set up the thread to run.  It's very complex, using a set of semaphores
that makes my head swim (I'm not the one implementing this).

> > It's not clear to me if the various ATOMIC-* operations (ATOMIC-INCF &
> > so on) are implementable reasonably on multiprocesors, I guess it
> > depends on whether they have these store-conditional type operators --
> > and I think they kind of must have something like that because you
> > need *something* like that to implement locks.

As far as I know, all architectures have some sort of load-locked/
store-conditional instruction pair, or some other semaphore support.
For our OS-threads implementation (the one which allows the
one-active-lisp-thread/many-active-non-lisp-threads model), this is
not an issue, precisely because we ensure that only one lisp thread
can access the heap at a time.

> > (2) is a pretty good argument from an implementor's point of view,
> > obviously pretty worthless from a user's.

Precisely.  We did two versions of our native OS-threads model, one
on NT, and one on Posix.  Most unix vendors achieved Posix almost-
compliance by putting a Posix veneer over their own thread
implementations.  It wasn't enough.  We could have chosen to do N
more implementations, one per hardware-vendor's native-threads
implementation.  But that seems too expensive a tradeoff.  Instead,
we're waiting for better Posix compliance.  Of course, we always
respond to customers (or hardware vendors) wanting to buy contracts
for specific work on specific platforms, but nobody has yet wanted
to pay for this kind of work.

> > I can also see another argument
> >
> >         3. It may be very hard to guarantee the kind of safety that
> >            lisp programmers are used to on a multiprocessor -- for
> >            instance if you're assigning a variable, and you are doing it
> >            from two processors without any locking, you may just end
> >            up with either something mangled in memory (so the GC will
> >            fall over later), or with two completely different versions
> >            in different caches or something, and in any case this may
> >            result in C-like `fall off the edge of the world' behaviour
> >            rather than the lisp-like elegant failure we are used to.
> >
> > There's also the issue of distributed GC &c, which may be a whole
> > nightmare implementationally, though I think it's theoretically a
> > solved problem.

It is true that it is theoretically solved.  But the cost of such
fine-grained control is speed (which is the whole purpose of
allowing SMP in the first place).  A read-barrier would be needed,
and best-case estimates are that the basic lisp would slow down
by at least 30% (more realistic estimates are that the basic lisp
would be half the speed of current single-processor lisps).  This
means that on a two-processor system, the speed advantage is a wash
(you may as well just run two faster separate lisp processes with a
socket connection between the two for best performance).  I suspect
that when systems with more than two processors start becoming the
norm, we will see more of a real demand for this full SMP, and a
willingness to put up with a slower lisp in order to get it.

-- 
Duane Rettig          Franz Inc.            http://www.franz.com/ (www)
1995 University Ave Suite 275  Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253   ·····@Franz.COM (internet)
From: Roger Corman
Subject: Re: &optional and &key in Scheme [was: Re: LISP for embedded systems ]
Date: 
Message-ID: <38160ccb.6743797@nntp.best.com>
>> > I can also see another argument
>> >
>> >         3. It may be very hard to guarantee the kind of safety that
>> >            lisp programmers are used to on a multiprocessor -- for
>> >            instance if you're assigning a variable, and you are doing it
>> >            from two processors without any locking, you may just end
>> >            up with either something mangled in memory (so the GC will
>> >            fall over later), or with two completely different versions
>> >            in different caches or something, and in any case this may
>> >            result in C-like `fall off the edge of the world' behaviour
>> >            rather than the lisp-like elegant failure we are used to.
>> >
>> > There's also the issue of distributed GC &c, which may be a whole
>> > nightmare implementationally, though I think it's theoretically a
>> > solved problem.
>
>It is true that it is theoretically solved.  But the cost of such
>fine-grained control is speed (which is the whole purpose of
>allowing SMP in the first place).  A read-barrier would be needed,
>and best-case estimates are that the basic lisp would slow down
>by at least 30% (more realistic estimates are that the basic lisp
>would be half the speed of current single-processor lisps).  This
>means that on a two-processor system, the speed advantage is a wash
>(you may as well just run two faster separate lisp processes with a
>socket connection between the two for best performance).  I suspect
>that when systems with more than two processors start becoming the
>norm, we will see more of a real demand for this full smp, and a
>willingness to put up with a slower lisp in order to get it.
>
>-- 
>Duane Rettig          Franz Inc.            http://www.franz.com/ (www)

I don't understand why it is harder to support SMP with lisp threads
than with other languages like C++ or Java (which manage it, at least
in some implementations).

Regarding the lisp heap, just make it thread-safe, like the C/C++
memory management functions have to be. Performance is an issue?
In Corman Lisp, a heap allocation function includes a quick check of
the number of threads going into the memory allocation functions. If
only one thread, it bypasses the critical section; otherwise it uses a
critical section (20 instructions or so) for synchronization. An
optimization I am working on is a per-thread new space (ephemeral
heap, generation 0), so that memory allocation will always be full
speed with no synchronization necessary. The collector has to stop all
other threads when it runs, however.
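The per-thread new-space optimization Roger describes can be sketched
as follows. This is an illustrative Python sketch, not Corman Lisp
internals; all names (NURSERY_SIZE, allocate, etc.) are made up for
the example. The fast path is a lock-free bump allocation into a
thread-private nursery; only refilling the nursery from the shared
heap takes the lock.

```python
import threading

NURSERY_SIZE = 64            # size of each per-thread allocation chunk
shared_heap_lock = threading.Lock()
shared_heap_used = 0         # bump pointer into the shared heap

class ThreadNursery(threading.local):
    """Per-thread allocation state; each thread sees its own fields."""
    def __init__(self):
        self.free = 0        # next free address in this thread's nursery
        self.end = 0         # end of this thread's nursery

nursery = ThreadNursery()

def allocate(nbytes):
    global shared_heap_used
    if nursery.free + nbytes > nursery.end:
        # Slow path: grab a fresh nursery chunk from the shared heap
        # under the lock. This is the only synchronized step.
        with shared_heap_lock:
            start = shared_heap_used
            shared_heap_used += NURSERY_SIZE
        nursery.free = start
        nursery.end = start + NURSERY_SIZE
    # Fast path: bump-allocate with no synchronization at all.
    addr = nursery.free
    nursery.free += nbytes
    return addr

a = allocate(8)   # triggers the slow path once, then bumps
b = allocate(8)   # pure fast path: adjacent to the first allocation
```

Consecutive allocations from one thread are adjacent addresses, and
the shared bump pointer only advances a whole chunk at a time.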

Regarding setting of variables, I don't see what the problem is there
either. Setting a variable should be a single instruction (well, maybe
not on all processors, but definitely on Intel). Obviously if a
sequence of instructions will cause an object to be in an inconsistent
state it needs a critical section to protect it during this time. This
is just the same as in C++ or Java.

In Corman Lisp, all special variable bindings are per thread (except
the global binding) which I have found makes it incredibly easy to set
up a lot of "per thread globals". This makes writing multi-threaded
code easier than in C or C++, for example. I am not sure how Allegro
handles these (I assume the same way).
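The per-thread special-binding behavior Roger describes has a rough
analogue in thread-local storage. The sketch below is illustrative
Python, not Corman Lisp's mechanism: each thread's binding shadows a
shared global binding, and rebinding in one thread is invisible to
the others.

```python
import threading

# Global ("default") bindings, visible to every thread.
GLOBAL_BINDINGS = {"*print-base*": 10}
_local = threading.local()   # per-thread binding stacks live here

def special_value(name):
    # A per-thread binding shadows the global one, if present.
    bindings = getattr(_local, "bindings", {})
    return bindings.get(name, GLOBAL_BINDINGS[name])

def bind_special(name, value):
    # Establish a binding visible only in the calling thread.
    if not hasattr(_local, "bindings"):
        _local.bindings = {}
    _local.bindings[name] = value

results = {}

def worker(base):
    bind_special("*print-base*", base)      # thread-local rebinding
    results[base] = special_value("*print-base*")

threads = [threading.Thread(target=worker, args=(b,)) for b in (2, 8, 16)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After the workers finish, the main thread still sees the global
binding: each rebinding was confined to its own thread, which is what
makes "per-thread globals" so convenient.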

By the way, as far as I know, Corman Lisp does support SMP (runs
multiple lisp threads on multiple processors). However, I have not
done enough testing on multi-processor NT systems to make guarantees
in this regard. The tests I have tried worked fine, but I haven't
spent much time on it. There are a number of lisp objects which are
not thread safe yet (packages, for example, and clos generic function
tables). Some of these will be fixed in the upcoming version.

Roger Corman
From: Christopher R. Barry
Subject: Re: &optional and &key in Scheme [was: Re: LISP for embedded systems ]
Date: 
Message-ID: <87ln8ppapb.fsf@2xtreme.net>
·····@xippix.com (Roger Corman) writes:

> I don't understand why it is harder to support SMP with lisp threads
> than with other languages like C++ or Java (which manage it, at least
> in some implementations).

I've heard people mention a product that Franz used to make called
Allegro CLIP which did multi-processor SMP--but on a Sequent....

Christopher
From: Tim Bradshaw
Subject: Threading [was Re: &optional and &key in Scheme [was: Re: LISP for embedded systems ]]
Date: 
Message-ID: <ey3u2nd2put.fsf_-_@lostwithiel.tfeb.org>
* Roger Corman wrote:
> The collector has to stop all
> other threads when it runs, however.

It's important to realise that something like this is crippling for
machines with significant numbers of CPUs, because of Amdahl's law.
If the Lisp needs to spend a proportion g (g < 1) of its time in GC,
and if this is a serial activity, then the best speedup (over a single
CPU) you will ever get is 1/g, no matter how many processors you throw
at it.
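The bound is easy to make concrete. A small Python sketch of Amdahl's
law: with serial fraction g and p processors, the speedup over one CPU
is 1 / (g + (1 - g)/p), which approaches 1/g as p grows.

```python
def amdahl_speedup(g, p):
    """Speedup over one CPU when a fraction g of the work is serial
    (e.g. a stop-the-world GC) and the rest scales across p CPUs."""
    return 1.0 / (g + (1.0 - g) / p)

# With 10% of time in a serial GC, even vast numbers of CPUs
# can never deliver better than a 10x speedup.
for p in (2, 8, 64, 1024):
    print(p, round(amdahl_speedup(0.10, p), 2))
```

On 64 processors the 10%-serial program gets under 9x, already far
from the 64x one might hope for.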

Of course, most of the uses for Lisp on multiprocessors are probably
not trying to speed up some single computation, but to run lots of
server threads for instance, but the same thing applies -- if the GC
(or other components of the system) are serial, you will never see the
benefit of really big multiprocessors.  This is particularly toxic for
serial GC, because for a server-type application, almost all of the
rest of the work done may be completely scalable.

Does anyone know if Java VMs have multiprocessor GCs?  What about
typical C malloc/free based apps?  Any naive malloc/free will be
serial, for sure.

My impression (based on knowing some people who do big multiprocessor
applications) is that (despite the fact that `everyone knows' that you
can do multiprocessor stuff in C so you should be able to do it in
Lisp), it is extraordinarily hard to get things to scale to machines
with more than a few processors, and also reasonably non-portable to
do so.  *If* you have the resource (or the specific right people I
suspect) to throw at the problem though, and you can find a problem
people want solved, you can win, because the people who need those
problems solved are willing to pay really serious money to have them
solved.  I suspect in fact that there are really only a tiny number of
really scalable systems out there -- it just happens that one of them
is Oracle, which solves a problem on big machines that many people
want solved and are willing to pay very big money for.

--tim
From: Kaelin Colclasure
Subject: Re: Threading [was Re: &optional and &key in Scheme [was: Re: LISP for embedded systems ]]
Date: 
Message-ID: <1qGR3.13$%D.4767@newsin1.ispchannel.com>
Tim Bradshaw <···@tfeb.org> wrote in message
·······················@lostwithiel.tfeb.org...
> * Roger Corman wrote:
> > The collector has to stop all
> > other threads when it runs, however.
>
> It's important to realise that something like this is crippling for
> machines with significant numbers of CPUs, because of Amdahl's law.
> If the Lisp needs to spend a proportion g (g < 1) of its time in GC,
> and if this is a serial activity, then the best speedup (over a single
> CPU) you will ever get is 1/g, no matter how many processors you throw
> at it.
>
> Of course, most of the uses for Lisp on multiprocessors are probably
> not trying to speed up some single computation, but to run lots of
> server threads for instance, but the same thing applies -- if the GC
> (or other components of the system) are serial, you will never see the
> benefit of really big multiprocessors.  This is particularly toxic for
> serial GC, because for a server-type application, almost all of the
> rest of the work done may be completely scalable.
>
> Does anyone know if Java VMs have multiprocessor GCs?  What about
> typical C malloc/free based apps?  Any naive malloc/free will be
> serial, for sure.

It's my understanding that Java's GC runs as a low-priority thread,
and that a hook is provided to explicitly do GC from a foreground
thread. I've definitely seen Java apps that always have ready threads
experience resource (memory) problems because GC was never running.

All "stock" thread-safe C allocators use a mutex to protect the heap
data structures from corruption. And hey, "this is crippling for
machines with significant numbers of CPUs" too. But this is what the
majority of applications written for such machines use today.

There are allocators with specialized interfaces for threaded
applications -- and they do claim massive performance deltas. And of
course, lots of C programmers do their own special-purpose allocators
to attack this problem.

> My impression (based on knowing some people who do big multiprocessor
> applications) is that (despite the fact that `everyone knows' that you
> can do multiprocessor stuff in C so you should be able to do it in
> Lisp), it is extraordinarily hard to get things to scale to machines
> with more than a few processors, and also reasonably non-portable to
> do so.  *If* you have the resource (or the specific right people I
> suspect) to throw at the problem though, and you can find a problem
> people want solved, you can win, because the people who need those
> problems solved are willing to pay really serious money to have them
> solved.  I suspect in fact that there are really only a tiny number of
> really scalable systems out there -- it just happens that one of them
> is Oracle, which solves a problem on big machines that many people
> want solved and are willing to pay very big money for.

It's interesting that you call out Oracle, because last I knew, Oracle
used process parallelism -- not threads. This was one of Sybase's main
slams against Oracle's product line.

Given sufficient physical memory, process parallelism seems to be the
way to go with the current generation of Lisp implementations. I'm
not saying that there's not room for improvement in this state of
affairs -- but Lisp is getting the job done for me.

-- Kaelin
From: Roger Corman
Subject: Re: Threading [was Re: &optional and &key in Scheme [was: Re: LISP for embedded systems ]]
Date: 
Message-ID: <38174d41.88781220@nntp.best.com>
On 27 Oct 1999 09:02:34 +0100, Tim Bradshaw <···@tfeb.org> wrote:

>* Roger Corman wrote:
>> The collector has to stop all
>> other threads when it runs, however.
>
>It's important to realise that something like this is crippling for
>machines with significant numbers of CPUs, because of Amdahl's law.
>If the Lisp needs to spend a proportion g (g < 1) of its time in GC,
>and if this is a serial activity, then the best speedup (over a single
>CPU) you will ever get is 1/g, no matter how many processors you throw
>at it.
>...

You make good points, Tim, and I certainly agree about the effects GC
can have (or any single-point memory management system) on SMP
behavior. My point is that it would be nice if Lisp at least met the
behavior standards of C, C++ or Java in the same environment. On NT
this would mean at least allowing a process to take some advantage of
2-4 processors if they are available. Any use of them (even 80% of the
time) is better than nothing. I also think that with a per-thread new
space there may be some advantage over C or C++. Of course you could
do a per thread malloc/free heap in C, but without a copying,
generational collector, this might be difficult to manage correctly.
The generational collector makes it easy to move long-lived stuff into
a permanent, shared heap when it makes sense.

Roger
From: William Deakin
Subject: Re: Threading [was Re: &optional and &key in Scheme [was: Re: LISP for  embedded systems ]]
Date: 
Message-ID: <3816CC2C.404C74B4@pindar.com>
Tim Bradshaw wrote:

> Of course, most of the uses for Lisp on multiprocessors are probably not
> trying to speed up some single computation, but to run lots of server
> threads for instance, but the same thing applies -- if the GC (or other
> components of the system) are serial, you will never see the benefit of
> really big multiprocessors.  This is particularly toxic for serial GC,
> because for a server-type application, almost all of the rest of the
> work done may be completely scalable.

This prompts me to ask: has anybody tried to run/port lisp to a
massively parallel system (I don't know, a Cray 68000 or something)?

> I suspect in fact that there are really only a tiny number of really
> scalable systems out there -- it just happens that one of them is
> Oracle, which solves a problem on big machines that many people want
> solved and are willing to pay very big money for.

As I understand it (although why I use the word understand I don't
know; there is more stand and less under): if each processor has
memory partitioned for processor-specific jobs, plus a pool of memory
that can be allocated and locked for swapping and sharing data between
processors/threads, then you could make this scalable. The
thread/process running on each partition would have its own gc,
scaling local gc to each process. I could then see some mark-and-sweep
gc, for example, for the pool memory, so that once a processor had
gc'd its references into the pool memory, the pool could finally be
cleared up.

This is probably a load of old cod; it has nothing to do with current
machines and I suspect it would require specialised hardware to do
this stuff. But I am in agreement with Tim: if you can put a man on
the moon, I'm sure you can sort out some form of gc mechanism for lisp.

Best Regards,

:) will
From: Fernando D. Mato Mira
Subject: Re: Threading [was Re: &optional and &key in Scheme [was: Re: LISP for  embedded systems ]]
Date: 
Message-ID: <3816F3F4.5ABC20E5@iname.com>
William Deakin wrote:

>
> This prompts me to ask: Has anybody tried to run/port lisp to a massive
> parallel system (I don't know, a Cray 68000 or something?)

"lisp"? Massive repetitions of the same answer incoming! ;-)

From: Christopher R. Barry
Subject: Re: Threading [was Re: &optional and &key in Scheme [was: Re: LISP for   embedded systems ]]
Date: 
Message-ID: <87904ops3t.fsf@2xtreme.net>
"Fernando D. Mato Mira" <········@iname.com> writes:

> William Deakin wrote:
> 
> >
> > This prompts me to ask: Has anybody tried to run/port lisp to a massive
> > parallel system (I don't know, a Cray 68000 or something?)

Lisps like that have been mentioned here several times before. There
was the Connection Machine which had up to 65536 processors and ran
*Lisp. Allegro CL used to run on Crays and they made Allegro CLIP
which ran on the Sequent.

Christopher
From: Paolo Amoroso
Subject: Re: Threading [was Re: &optional and &key in Scheme [was: Re: LISP for   embedded systems ]]
Date: 
Message-ID: <381c1cf3.3634350@news.mclink.it>
On Wed, 27 Oct 1999 18:29:58 GMT, ······@2xtreme.net (Christopher R. Barry)
wrote:

> Lisps like that have been mentioned here several times before. There
> was the Connection Machine which had up to 65536 processors and ran
> *Lisp. Allegro CL used to run on Crays and they made Allegro CLIP

The *Lisp simulator has been ported to CMUCL by Fred Gilham and is
available at:

  ftp://ftp.csl.sri.com/pub/users/gilham/starlisp.tar.gz


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/
From: Fernando D. Mato Mira
Subject: Re: Threading [was Re: &optional and &key in Scheme [was: Re: LISP for    embedded systems ]]
Date: 
Message-ID: <3818C3A5.99726324@iname.com>
"Christopher R. Barry" wrote:

> *Lisp. Allegro CL used to run on Crays and they made Allegro CLIP

Just for completeness, let's mention that PSL was ported to the Cray over
10 years ago, although that's about vector supercomputing, not parallelism
or threading.

--
((( DANGER )) LISP BIGOT (( DANGER )) LISP BIGOT (( DANGER )))

Fernando D. Mato Mira
Real-Time SW Eng & Networking
Advanced Systems Engineering Division
CSEM
Jaquet-Droz 1                   email: matomira AT acm DOT org
CH-2007 Neuchatel                 tel:       +41 (32) 720-5157
Switzerland                       FAX:       +41 (32) 720-5720

www.csem.ch      www.vrai.com     ligwww.epfl.ch/matomira.html
From: William Deakin
Subject: Re: Threading [was Re: &optional and &key in Scheme [was: Re: LISP for    embedded systems ]]
Date: 
Message-ID: <3817F7E4.A418F0F@pindar.com>
Christopher R. Barry wrote:

> Lisps like that have been mentioned here...

Where is here?

> There was the Connection Machine which had up to 65536 processors and ran
> *Lisp. Allegro CL used to run on Crays and they made Allegro CLIP which ran
> on the Sequent.

OK then, did these work using the parallelism? And how did they gc?

Cheers,

:) will
From: Sashank Varma
Subject: Re: Threading [was Re: &optional and &key in Scheme [was: Re: LISP for embedded systems ]]
Date: 
Message-ID: <sashank-2810991005090001@129.59.212.53>
In article <·················@pindar.com>, ········@pindar.com wrote:

[snip]
>This prompts me to ask: Has anybody tried to run/port lisp to a massive
>parallel system (I don't know, a Cray 68000 or something?)
[snip]

there is connection machine lisp, a dialect of common lisp for the
(massively parallel) connection machine.  it is described in a
number of places, including danny hillis' dissertation (published
separately as a book) and a chapter by skef wholey in a book edited
by peter lee.  there are other sources of information, i'm sure.

also, steele provides a succinct description (and implementation)
of xectors in CLtL2.

plus there are descriptions of other parallel dialects of lisp
(qlisp?) scattered throughout the literature, several in a collection
of papers dedicated to john mccarthy.

let me know if you need specific references and i'll look them up
when i get the time.

sashank
From: ·················@my-deja.com
Subject: Re: &optional and &key in Scheme [was: Re: LISP for embedded systems ]
Date: 
Message-ID: <7v65p0$bjl$1@nnrp1.deja.com>
In article <·············@beta.franz.com>,
  Duane Rettig <·····@franz.com> wrote:
> "Michael L. Harper" <··············@alcoa.com> writes:
>
> > I am confused about this issue somewhat. What you say below makes some
> > sense. But, for example, Allegro 5.0.1 now supports multiprocessing native
> > on NT although I have not played with it.
>
> You are correct, though it is important to understand that we do support
> Native OS-threads, not symmetric multiprocessing.  What this means is that
> you get multiprocessing of both lisp and non-lisp in a single-processor
> sense; that threads can run when resources are available, one at a time
> (just as it occurs on all multiprocessing systems on single processors).
> An added benefit is that on multiple processor systems, the other processors
> can run as many non-lisp code sections at a time as there are processors.
> (a thread may have lisp code and non-lisp code run on it; I define a
> non-lisp code section as that time when the non-lisp code is being run,
> i.e. when the lisp heap is not being accessed).  The restriction is that
> the lisp heap can only be accessed by one thread at a time.

Could one write ``non-lisp code'' in lisp by writing routines that work,
say, on the stack and not the heap, and return through a suitable lisp
handler? Would there be any advantage to this over starting threads
running foreign code?

I have in mind massively data parallel problems in which one might be
able to get by using lisp to manage a large number of OS level threads
on multiprocessor/single-system-image machines.

Mike Rilee RAYTHEON/NASA/GSFC Mlstp 930, B28/S207 Greenbelt, MD 20771
·················@gsfc.nasa.gov Ph. (301)286-4743 Fx. (301)286-1634
Computing in Sun-Earth Connections: http://lep694.gsfc.nasa.gov/rilee


From: Marco Antoniotti
Subject: Re: &optional and &key in Scheme  [was: Re: LISP for embedded systems ]
Date: 
Message-ID: <lwk8oba3ww.fsf@copernico.parades.rm.cnr.it>
Robert Monfera <·······@fisec.com> writes:

> Marco Antoniotti wrote:
> ...
> > > > Or one can bite the bullet and switch to a real (Common) Lisp. :)
> > >
> > > Or one can admit defeat and rip off CMUCL :->
> > >
> > 
> > Meaning?  :)  CMUCL has its idiosyncrasies, but it is a wonderful
> > system.  After all you get what you pay for.  If you want a commercial
> > strength CL, Franz and Harlequin will be happy to sell you one.
> 
> Probably ripping off CMUCL is a praise of its quality , i.e., "Go
> understand the CMUCL solution and reimplement it in your favourite
> Scheme."  At least that's how I understood it.

Ok. I just learnt something new about how to use (abuse?!?) the
English language.

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Rob Warnock
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <7uu2i5$7b40a@fido.engr.sgi.com>
Christopher R. Barry <······@2xtreme.net> wrote, speaking of SIOD:
+---------------
| If a routine produces a number larger than an int, you get a float.
+---------------

(*Hee-hee*)  SIOD has *only* floats!  (Well, C doubles.)

	LISP numberp(LISP x)
	{if FLONUMP(x) return(sym_t); else return(NIL);}
	...
	LISP flocons(double x)
	{LISP z;
	 long n;
	 if ((inums_dim > 0) &&
	     ((x - (n = (long)x)) == 0) &&
	     (x >= 0) &&
	     (n < inums_dim))
	   return(inums[n]);
	 NEWCELL(z,tc_flonum);
	 FLONM(z) = x;
	 return(z);}

An "integer" is just a double that happens to fit in a C "long".
Here's how it tells (in "lprin1g()") whether to print a decimal
point or not:

	    case tc_flonum:
	      n = (long) FLONM(exp);
	      if (((double) n) == FLONM(exp))
		sprintf(tkbuffer,"%ld",n);
	      else
		sprintf(tkbuffer,"%g",FLONM(exp));
	      gput_st(f,tkbuffer);
	      break;

So is it even worse than you thought?  ;-}


-Rob

p.s. Unlike you, though, I still think it has its uses...

-----
Rob Warnock, 8L-846		····@sgi.com
Applied Networking		http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673
1600 Amphitheatre Pkwy.		FAX: 650-933-0511
Mountain View, CA  94043	PP-ASEL-IA
From: Christopher R. Barry
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <87n1t8sis3.fsf@2xtreme.net>
····@rigden.engr.sgi.com (Rob Warnock) writes:

> though, I still think it has its uses...

When would it be a better choice than Guile, or a more complete
implementation?

Christopher
From: Rob Warnock
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <7v0g54$7pij0@fido.engr.sgi.com>
Christopher R. Barry <······@2xtreme.net> wrote:
+---------------
| ····@rigden.engr.sgi.com (Rob Warnock) writes:
| > though, I still think it [SIOD] has its uses...
| 
| When would it be a better choice than Guile, or a more complete
| implementation?
+---------------

I think that's up to the user, and also very much depends on the
application & environment. One of SIOD's *really* nice features is
how small and fast-starting it is. Here are some (admittedly very
old) results of running a trivial task in several flavors of Scheme
[except the first, which is just /bin/sh]. Each number is the lowest
time repeatably seen in a number of runs.

	Total start-up & run time for a trivial task
	in Scheme: (display "hello, world!")(newline)

	XXX     time    notes
	===     ====    =====
	sh      0.02	Used /bin/sh's builtin "echo"
	siod	0.05	Not R4RS, so no "display"; used "print".
	libscm  0.12
	gsi     0.15    Very first run took 0.68 sec [1.8 MB executable?]
	scm     0.30
	mz      0.50
	rs      0.70
	scsh    1.15

But as soon as you do something more complicated, SIOD's simple
interpreter starts to fall behind other implementations which do
some sort of preprocessing (compiling or "half-compiling") or
shallow binding of lexicals, etc.

So for classic HTTP cgi-bin scripts [for which the SIOD distribution
includes various bits of sample code] where startup time of a new Unix
process is important, SIOD still has a place, IMHO.  For most other
applications, use a "real" Scheme. (My current favorite is MzScheme,
but YMMV.)
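The flavor of measurement behind the table above is easy to reproduce: time a trivial "hello, world!" from a cold start, several times, and keep the lowest number. Only the /bin/sh baseline is shown runnable here; the interpreter paths are assumptions you'd substitute for whatever is installed.

```shell
# Recreate the trivial-task benchmark. Write the Scheme one-liner to a
# file, then time each interpreter's cold start on it.
printf '(display "hello, world!")(newline)\n' > hello.scm

time sh -c 'echo "hello, world!"'    # /bin/sh baseline (builtin echo)
# time siod hello.scm                # hypothetical: needs SIOD on PATH
# time mzscheme -r hello.scm         # hypothetical: needs MzScheme on PATH
```

As the post notes, take the lowest of several runs so that file-cache effects (like gsi's slow very first run) don't dominate.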


-Rob

-----
Rob Warnock, 8L-846		····@sgi.com
Applied Networking		http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673
1600 Amphitheatre Pkwy.		FAX: 650-933-0511
Mountain View, CA  94043	PP-ASEL-IA
From: Rainer Joswig
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <joswig-2510990833510001@194.163.195.67>
In article <············@fido.engr.sgi.com>, ····@rigden.engr.sgi.com (Rob Warnock) wrote:

> But as soon as you do something more complicated, SIOD's simple
> interpreter starts to fall behind other implementations which do
> some sort of preprocessing (compiling or "half-compiling") or
> shallow binding of lexicals, etc.


You can do some preprocessing in SIOD, too. We use
SIOD so that we load a toplevel file which requires
a lot of other files (maybe up to twenty, I think).
All these files can be combined into one "preprocessed"
file.
From: Rob Warnock
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <7v3q4q$3n7i@fido.engr.sgi.com>
Rainer Joswig <······@lavielle.com> wrote:
+---------------
| ····@rigden.engr.sgi.com (Rob Warnock) wrote:
| > But as soon as you do something more complicated, SIOD's simple
| > interpreter starts to fall behind other implementations which do
| > some sort of preprocessing (compiling or "half-compiling") or
| > shallow binding of lexicals, etc.
| 
| You can do some preprocessing in SIOD, too.
| We use SIOD so that we load a toplevel file which requires
| a lot of other files (maybe up to twenty, I think).
| All these files can be combined into one "preprocessed" file.
+---------------

O.k., I assume you're talking about the "csiod" tool, which reads
a bunch of forms and writes them in FASL format, prepended by a
"#!/path/to/siod" header. Yes, that helps startup times for larger
programs, but *doesn't* help the post-startup performance at all!

When I said "preprocessing", I was referring to more aggressive things,
like stack-allocating lexical vars that aren't captured in closures,
beta reductions (where it's safe to do so), etc.

But SIOD does no rewriting of S-exprs at all. It's still a "pure
interpreter". In fact, it doesn't even memoize macro expansions!
They're re-expanded every time the enclosing forms are executed.

By comparison, even though SCM is also a pure interpreter, it at
least preprocesses (well, lazily rewrites then memoizes the first
time a closure is evaluated) lexical variables into "depth+offset"
tokens to speed lookups, and expands macros only once. And MzScheme,
of course, does "real compiling" of all expressions (also doing macro
expansion once only, at compile time).


-Rob

p.s. Perhaps we should take this offline, or at least move it to
comp.lang.scheme.  ;-}

-----
Rob Warnock, 8L-846		····@sgi.com
Applied Networking		http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673
1600 Amphitheatre Pkwy.		FAX: 650-933-0511
Mountain View, CA  94043	PP-ASEL-IA
From: Bob Bane
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <3815D4F0.C72BE3B9@removeme.gst.com>
Rob Warnock wrote:
> 
> But SIOD does no rewriting of S-exprs at all. It's still a "pure
> interpreter". In fact, it doesn't even memoize macro expansions!
> They're re-expanded every time the enclosing forms are executed.
>

SIOD actually does memoize macro expansions, at least as of version 3.0:

> (defmac (breedle form) `(cons ,(cadr form) ,(caddr form)))
breedle-macro

> breedle-macro
#<CLOSURE (form) (replace form (list (quote cons) (cadr form) (caddr
form)))>

> (define (foo  x y) (breedle y x))
#<CLOSURE (x y) (breedle y x)>

> foo
#<CLOSURE (x y) (breedle y x)>

> (foo 2 3)
(3 . 2)

> foo
#<CLOSURE (x y) (cons y x)>

SIOD is not 100% pure Scheme, it's not blindingly fast, and its C coding
style is idiosyncratic, but it's small and easy to hack.  In passing as
part of my current project, I've added interfaces to MySQL and an
S-expression based message passing system, changed the reader (NIL reads
and prints as (), back-quote in C, optional case-smashing for symbols),
and added a simple backtrace-on-error, all in little snippets of code. 
Someday in my Copious Free Time, I'll package the changes and submit
them to gjc.
From: Rob Warnock
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <7v5tgv$fcjm@fido.engr.sgi.com>
Bob Bane  <····@removeme.gst.com> wrote:
+---------------
| Rob Warnock wrote:
| > But SIOD... doesn't even memoize macro expansions!
| > They're re-expanded every time the enclosing forms are executed.
| 
| SIOD actually does memoize macro expansions, at least as of version 3.0:
| 
| > (defmac (breedle form) `(cons ,(cadr form) ,(caddr form)))
| breedle-macro
| > breedle-macro
| #<CLOSURE (form) (replace form (list (quote cons) (cadr form) (caddr form)))>
+---------------

Aha! Got me!! I had forgotten the "defmac" macro, which is defined in
the startup file "siod.scm":

	(define (replace before after)
	  (set-car! before (car after))
	  (set-cdr! before (cdr after))
	  after)

	(define (defmac-macro form)
	  (let ((sname (car (cadr form)))
		(argl (cdr (cadr form)))
		(fname nil)
		(body (prognify (cddr form))))
	    (set! fname (symbolconc sname '-macro))
	    (list 'begin
		  (list 'define (cons fname argl)
			(list 'replace (car argl) body))
		  (list 'define sname (list 'quote fname)))))

	(define defmac 'defmac-macro)

In my test codes, I was using the "bare" underlying SIOD macro mechanism
[more polite than calling it a "hack"], that is, that if the value of the
function position of an application is a symbol, the symbol is taken to
name a macro expansion function [e.g., as shown above for how "defmac"
itself is defined]. This mechanism does not memoize expansions.

What I was missing is that macros defined using the "defmac" utility macro
are themselves defined to *rewrite* (with "replace") any forms that they're
applied to. [Since the form *must* be an "application" S-expr, to trigger
the macro mechanism, there's always a cons cell there to be overwritten.]

So we're both right, sort of: The underlying macro mechanism *doesn't*
memoize, but the way of defining macros provided in the standard startup
file *does*.

[I suspect I overlooked this because "defmac" appears *nowhere* in the
"siod.html" documentation! (*sigh*)]


-Rob

-----
Rob Warnock, 8L-846		····@sgi.com
Applied Networking		http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673
1600 Amphitheatre Pkwy.		FAX: 650-933-0511
Mountain View, CA  94043	PP-ASEL-IA
From: Bob Bane
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <38172A22.AA4FC05B@removeme.gst.com>
Rob Warnock wrote:
> 
> So we're both right, sort of: The underlying macro mechanism *doesn't*
> memoize, but the way of defining macros provided in the standard startup
> file *does*.
> 
> [I suspect I overlooked this because "defmac" appears *nowhere* in the
> "siod.html" documentation! (*sigh*)]
> 
Which it really ought to, because the underlying
symbol-value-is-a-symbol mechanism is in the core interpreter, even if
defmac isn't loaded, ready to screw you to the wall if you mistype
something.  When it sees (foo x) and the value of foo is foo-macro, it
constructs (foo-macro '(foo x)) and evaluates it.  I discovered this one
by accidentally ending a cond with

	((t ...))

instead of
	(t ...)

Ouch!
From: Eugene Zaikonnikov
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <940684105.295927@lxms.cit.org.by>
> Don't waste your time with SIOD. C is a better scripting/extension
> language than SIOD. If it must be Scheme, then use a real Scheme....
>
Come on Chris, it's not that bad. For its primary purpose SIOD is much
better than the majority of pure RxRS dialects.

--
  Eugene.
From: Christopher R. Barry
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <87904ttz9o.fsf@2xtreme.net>
"Eugene Zaikonnikov" <······@removeme.cit.org.by> writes:

> > Don't waste your time with SIOD. C is a better scripting/extension
> > language than SIOD. If it must be Scheme, then use a real Scheme....
> >
> Come on Chris, it's not that bad. For its primary purpose SIOD is much
> better than the majority of pure RxRS dialects.

Please see my reply to Tim. Hopefully the needless size, complexity,
and impurity of the following SIOD script I wrote for Gimp to automate
the processing and generation of icons for web pages will bring
closure to this. Note that there are no &optional, &key, etc.
parameters, so many Gimp functions take a mandatory 8 parameters, most
of which are 0 or TRUE/FALSE. You can't quickly look them up from
Emacs either using an arglist function. It's not unlike the Windows
API, where every function wants 50 fields, most of which are 0.

This script would need extra work and documentation to be useful to
people other than myself (you also need a recent development version
of the Gimp). The basic idea is if you have a lot of icons in a single
image like


     +------+------+------+------+------+------+
     |      |      |      |      |      |      |
     | Icon | Icon | Icon | Icon | Icon | Icon |
     +------+------+------+------+------+------+
     |      |      |      |      |      |      |
     | Icon | Icon | Icon | Icon | Icon | Icon |
     +------+------+------+------+------+------+


you break them up into separate icons, do layer operations on them,
convert them to the 216-color web palette, name them, save them,
etc. It is conceptually very simple, and it saves you about 100+
mouse clicks to just automate it all.

But it takes 5000 keystrokes worth of SIOD to automate it all. I could
easily imagine being able to do this with only a single, very readable
Common Lisp function if The Gimp had a sane API and CL as an extension 
language.

Christopher


--------
;;;
;;;  make-web-icons.scm
;;;
;;;  Copyright (C) 1999 Christopher R. Barry
;;;


(set! EXPAND_AS_NECESSARY 0)		;Fix bug?
(set! WEB_PALETTE 2)			;Fix bug?
(set! MAKE_PALETTE 0)			;Fix bug?


;; Define mod function for positive integers
(define (mod number divisor)
  (if (= divisor 0)
      (error "Divisor can't be zero!"))
  (if (or (< number 0) (< divisor 0))
      (error "Arguments can't be negative in this quicky version!"))
  (cond ((> number divisor)
	 (let ((counter 0)
	       (remainder 0))
	   (while (>= (set! remainder (- number (* counter divisor)))
		      divisor)
		  (set! counter (+ counter 1)))
	   remainder))
	((< number divisor) number)
	(t 0)))				;number equals divisor so remainder = 0

(define (script-fu-make-web-icons-alt image drawable
				      initial-x initial-y
				      width height
				      h-space v-space
				      columns rows
				      names-list)
  ;; Begin by merging visible layers if there is more than one
  (if (> (car (gimp-image-get-layers image))
	 1)
      (gimp-image-merge-visible-layers image EXPAND_AS_NECESSARY))
  ;; Do the icons
  (let* ((number-of-icons (* columns rows))
	 (current-icon 1)
	 (x-pos initial-x)
	 (y-pos initial-y))
    (while (<= current-icon number-of-icons) ;Iterate over all icons
	   ;; Make the new image objects needed for each iteration
	   (let* ((icon-image (car (gimp-image-new width height RGB)))
		  (icon-drawable (car (gimp-layer-new icon-image width height
						      RGBA_IMAGE "Icon"
						      100 NORMAL)))
		  ;; Name the file  ???  (aref names-list current-icon)  ???
		  (filename (string-append "/home/cbarry/graphics/site/"
					   (number->string current-icon)
					   ".gif")))
	     (gimp-image-add-layer icon-image icon-drawable -1)
	     (gimp-drawable-fill icon-drawable TRANS-IMAGE-FILL)
	     ;; Select the icon, copy it out, paste it into the new
	     ;; image, and anchor it.
	     (gimp-rect-select image x-pos y-pos width height REPLACE FALSE 0)
	     (gimp-edit-copy drawable)
	     (gimp-floating-sel-anchor
	      (car (gimp-edit-paste icon-drawable TRUE)))
	     ;; Convert to indexed
	     (gimp-convert-indexed icon-image 0 MAKE_PALETTE
				   255 FALSE TRUE "ignored-palette")
	     ;; Save the file
	     (file-gif-save TRUE icon-image icon-drawable filename filename
			    0 0 0 0)
	     ;;; Setup next loop
	     (set! current-icon (+ current-icon 1))
	     ;; Setup coordinates
	     (if (and (not (= current-icon 1)) ;If not doing the first icon
		      (or (= columns 1) ;And if there is only 1 column
			  (= (mod current-icon columns)
			     1)))	;Or we're 1 past last column
		 ;; Go back and do next column
		 (begin
		   (set! x-pos initial-x)
		   (set! y-pos (+ y-pos height v-space)))
		 ;; Else do next icon to the right
		 (set! x-pos (+ x-pos width h-space)))))))

(script-fu-register "script-fu-make-web-icons"
		    "<Image>/Chris/Make web icons"
		    "Converts an image with many icons into separate icons."
		    "Christopher Barry"
		    "Copyright (C) 1999 Christopher Barry"
		    "Sept. 28th 1999"
		    ""
		    SF-IMAGE "The image:"                      0
		    SF-DRAWABLE "The layer:"                   0
		    SF-VALUE "Starting X coordinate:"          "22"
		    SF-VALUE "Starting Y coordinate:"          "1"
		    SF-VALUE "Icon width:"                     "15"
		    SF-VALUE "Icon height:"                    "14"
		    SF-VALUE "Horizontal space between icons:" "2"
		    SF-VALUE "Vertical space between icons:"   "2"
		    SF-VALUE "Number of columns:"              "5"
		    SF-VALUE "Number of rows:"                 "3"
		    SF-VALUE "Space-delimited list of names:"  "0")
From: Eugene Zaikonnikov
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <940763916.673000@lxms.cit.org.by>
Christopher R. Barry <······@2xtreme.net> wrote in message
···················@2xtreme.net...
[snip]
>
> But it takes 5000 keystrokes worth of SIOD to automate it all. I could
> easily imagine being able to do this with only a single, very readable
> Common Lisp function if The Gimp had a sane API and CL as an extension
> language.
>
I may be wrong, but Guile is the official scripting language for Gimp, isn't
it? IIRC, GJC did the first SIOD version when Linux was not even on the
drawing board, let alone Gimp.
When I need to do a one-time solution or to write a simple script, SIOD is
fine for me. And it is as simple as the first Soviet tractor, so I can easily
understand how it works and add any ad-hoc functionality whenever I want.
Perhaps I use SIOD for what you'd choose Perl, and maybe that is the wrong
approach. But I have neither the time nor a pressing need to move to Perl.

Regs,
  Eugene.
From: Tim Bradshaw
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <ey3bt9oj2nw.fsf@lostwithiel.tfeb.org>
* Eugene Zaikonnikov wrote:
> When I need to do a one-time solution or to write a simple script, SIOD is
> fine for me. And it is as simple as the first Soviet tractor, so I can easily
> understand how it works and add any ad-hoc functionality whenever I want.
> Perhaps I use SIOD for what you'd choose Perl, and maybe that is the wrong
> approach. But I have neither the time nor a pressing need to move to Perl.

Right, that's the point exactly.  SIOD (and some other schemes, like
elk, and probably some CLs like Eclipse) are just *trivial* to
integrate with C (and probably C++ too).  For extension languages to
large C systems, the integration work can easily dominate all the
other programming you have to do, so it's absolutely crucial to make
that part easy, and to stop the footprint of the extension language
dominating the application. Even if it makes the language less
beautiful.  That emphatically is not true of some of the more
heavyweight scheme/lisp systems -- look at scheme48 for a fine example
of how to get this wrong (it may have improved, but until recently its
image size and startup time made scsh basically unusable).

--tim
From: Christopher R. Barry
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <87puy4siwc.fsf@2xtreme.net>
"Eugene Zaikonnikov" <······@removeme.cit.org.by> writes:

> Christopher R. Barry <······@2xtreme.net> wrote in message
> ···················@2xtreme.net...
> [snip]
> >
> > But it takes 5000 keystrokes worth of SIOD to automate it all. I could
> > easily imagine being able to do this with only a single, very readable
> > Common Lisp function if The Gimp had a sane API and CL as an extension
> > language.
> >
>  I can be wrong, but Guile is an official scripting language for Gimp, isn't
> it?

No. It will be in the future, however. No work has been done on this
yet, so the 1.2.x series will not have it.

Christopher
From: Marco Antoniotti
Subject: Re: LISP for embedded systems
Date: 
Message-ID: <lwemelgk7s.fsf@copernico.parades.rm.cnr.it>
"Eugene Zaikonnikov" <······@removeme.cit.org.by> writes:

> > Don't waste your time with SIOD. C is a better scripting/extension
> > language than SIOD. If it must be Scheme, then use a real Scheme....
> >
> Come on Chris, it's not that bad. For it's primary purpose SIOD is much
> better than majority of pure RxRS dialects.
                          ^^^^^^^^^

Not many of them around :)

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa