From: Frank Buss
Subject: The Lisp CPU
Date: 
Message-ID: <cf8nh3$ath$1@newsreader2.netcologne.de>
The Verilog program is not yet done (not enough time until weekend), but 
I've collected some ideas and written a basic concept:

http://www.frank-buss.de/lispcpu/index.html

The byte-array is not so elegant, because it could be simulated with a 
list, but I think it is necessary, because of limited memory and large 
blocks are needed for sound and bitmaps. The word-size will be 32 bits, but 
this is independent of the general concept.

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de

From: Will Hartung
Subject: Re: The Lisp CPU
Date: 
Message-ID: <2nqe1vF3hcv1U1@uni-berlin.de>
"Frank Buss" <··@frank-buss.de> wrote in message
·················@newsreader2.netcologne.de...
> The Verilog program is not yet done (not enough time until weekend), but
> I've collected some ideas and written a basic concept:
>
> http://www.frank-buss.de/lispcpu/index.html
>
> The byte-array is not so elegant, because it could be simulated with a
> list, but I think it is necessary, because of limited memory and large
> blocks are needed for sound and bitmaps. The word-size will be 32 bits,
> but this is independent of the general concept.

Were you planning on "hardcoding" things like the Reader to convert text
into the internal representations? Or will the Reader be "software"?

Macros are a compiler function, not a runtime function, so do they need to
be at the hardware level?

Just curious...

Regards,

Will Hartung
(·····@msoft.com)
From: Frank Buss
Subject: Re: The Lisp CPU
Date: 
Message-ID: <cf9ja9$8mv$1@newsreader2.netcologne.de>
"Will Hartung" <·····@msoft.com> wrote:

> Were you planning on "hardcoding" things like the Reader to convert
> text into the internal representations? Or will the Reader be
> "software"? 

I plan to code the reader in a Lisp program. But of course I need to run it 
on a PC first to produce the bootstrap binary representation of the reader 
s-expression :-)

> Macros are a compiler function, not a runtime function, so do they
> need to be at the hardware level?

I think you are right, and I can integrate it into the reader, which simply 
expands macros at read-time.

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: mikel
Subject: Re: The Lisp CPU
Date: 
Message-ID: <ZH7Sc.2425$QJ3.1133@newssvr21.news.prodigy.com>
Frank Buss wrote:
> "Will Hartung" <·····@msoft.com> wrote:
> 
> 
>>Were you planning on "hardcoding" things like the Reader to convert
>>text into the internal representations? Or will the Reader be
>>"software"? 
> 
> 
> I plan to code the reader in a Lisp program. But of course I need to run it 
> on a PC first to produce the bootstrap binary representation of the reader 
> s-expression :-)
> 
> 
>>Macros are a compiler function, not a runtime function, so do they
>>need to be at the hardware level?
> 
> 
> I think you are right and I can integrate it in the reader, which simply 
> expands it at read-time.
> 

How do you plan to handle lambdas and closures? Environments? Maybe 
environments can be represented as alists, and a closure as a list of 
instructions plus an environment.
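That idea can be sketched very roughly like this (Python used only for
illustration, not Lisp; all the function names here are my own invention,
not part of Frank's design):

```python
# Environments as alists, and a closure as code plus its defining
# environment. A hypothetical sketch, not the actual CPU design.

def extend(env, name, value):
    """Prepend a binding; the old alist is shared, not copied."""
    return [(name, value)] + env

def lookup(env, name):
    """ASSOC-style search: the first binding for `name` wins."""
    for n, v in env:
        if n == name:
            return v
    raise NameError(name)

def make_closure(params, body, env):
    """A closure is just the code paired with its environment."""
    return ("closure", params, body, env)

def apply_closure(closure, args, evaluate):
    """Bind parameters in the *captured* environment, then evaluate."""
    _, params, body, env = closure
    for p, a in zip(params, args):
        env = extend(env, p, a)
    return evaluate(body, env)
```

A lookup walks the alist front-to-back, so shadowing falls out for free:
whichever binding was prepended last is found first.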
From: Frank Buss
Subject: Re: The Lisp CPU
Date: 
Message-ID: <cfbk0k$56n$1@newsreader2.netcologne.de>
mikel <·····@evins.net> wrote:

> How do you plan to handle lambdas and closures? Environments? Maybe 
> environments can be represented as alists, and a closure as a list of 
> instructions plus an environment.

Why alists? I don't have much experience in Lisp (so I'll learn a lot by 
writing a reader and evaluator), but perhaps an environment could be a 
list of local symbols. Then a naive implementation would be to deep-copy 
this list on each function call and initialize the arguments.

But how do I look up the values from within a function body? Another idea: 
assuming that a symbol pointer points to a structure with 3 words, as 
described in http://www.frank-buss.de/lispcpu/index.html , I can redefine 
the value-slot: it doesn't point to the actual value, but to a list of 
values. Every time a function is called, a new value is prepended to the 
list of values for every argument symbol. At function exit the values are 
removed again. With this concept the reader can substitute each symbol 
name with the pointer to the one and only symbol, and function argument 
mapping, lookup, shadowing etc. become O(1) operations.
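The per-symbol value list (this is what's usually called shallow dynamic
binding) might be sketched like this — Python for illustration only, and
`Symbol` is a made-up stand-in for the three-word symbol structure:

```python
# Shallow dynamic binding: each symbol carries its own stack of values.
# Lookup is O(1); function call/exit push and pop bindings.

class Symbol:
    def __init__(self, name):
        self.name = name
        self.values = []          # most recent binding at the front

    def value(self):              # O(1) lookup, no list search
        return self.values[0]

def call(param_symbols, args, body):
    """Push a binding for every argument symbol, run the body, pop again."""
    for sym, arg in zip(param_symbols, args):
        sym.values.insert(0, arg)
    try:
        return body()
    finally:                      # unwind even on error
        for sym in param_symbols:
            sym.values.pop(0)
```

Inside `body` the symbol sees the innermost binding; after the call the
previous binding is visible again.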

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: mikel
Subject: Re: The Lisp CPU
Date: 
Message-ID: <Q0hSc.5419$Mf3.1389@newssvr29.news.prodigy.com>
Frank Buss wrote:
> mikel <·····@evins.net> wrote:
> 
> 
>>How do you plan to handle lambdas and closures? Environments? Maybe 
>>environments can be represented as alists, and a closure as a list of 
>>instructions plus an environment.
> 
> 
> Why alists? 

Because

(let ((a 0)
       (b 1)
       (c 2))
   ...)

is easily and naturally represented as the alist

'((a . 0)
   (b . 1)
   (c . 2))

Actually, from a machine's point of view it's maybe more natural and 
efficient to represent that same environment as

#(0 1 2)

or, rather, to represent a particular environment frame that way, and a 
whole environment as something like

(list frame0 frame1 frame2 ...)

but that sort of assumes that you have a compiler that is turning 
variable references into frame indexes. I had the idea from reading 
earlier that you wanted the chip to evaluate something closer to the 
surface sexprs, rather than running compiled code.
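The frame representation above can be sketched in a few lines (Python
standing in for the #(...) vectors; the compiler is assumed, not shown):

```python
# An environment is a list of frames, innermost first. A compiled
# variable reference is a (frame, offset) pair: two indexed fetches,
# no name search. Illustration only.

def env_ref(env, frame_index, offset):
    """What a compiled variable reference does at run time."""
    return env[frame_index][offset]

# (let ((a 0) (b 1) (c 2)) ...) compiles to the frame [0, 1, 2];
# inside it, a reference to `b` becomes frame 0, offset 1:
env = [[0, 1, 2]]
```

The variable names are gone by run time; only the indexes remain.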

> I don't have much experience in Lisp (so I'll learn a lot by 
> writing a reader and evaluator),

You bet! That's the way to do it, all right.

> but perhaps an environment could be a 
> list of local symbols. Then a naive implementation would be to deep-copy 
> this list on function call and initializing the arguments.
> 
> But how do I lookup the values from within a function body?

If the environment is an alist, you use ASSOC. If you are executing 
compiled code, then your compiler turns a variable reference into an 
index into the environment to find the right frame, and an index into 
the frame to find the right value (the variable names can disappear 
unless you need them for debugging).

> Another idea:
> Assuming that a symbol pointer points to a structure with 3 words, like 
> described in http://www.frank-buss.de/lispcpu/index.html , I can redefine 
> the value-slot: it doesn't point to the actual value, but to a list of 
> values. Everytime a function is called, for every argument symbol a new 
> value is prepended to the list of values for this symbol. At function 
> exit the values are removed again. With this concept it would be possible 
> for the reader to substitute the symbol name with the pointer to the one 
> and only symbol and function argument mapping, lookup, shadowing etc. is 
> a O(1) operation.

The value of a symbol will then depend upon the control sequence of your 
code, rather than on the lexical structure of your code. This will lead 
to difficult-to-debug problems when programmers looking at the source 
code think a variable should have one value, and the machine following 
the thread of control thinks it should have another. Things will get 
worse if you try to support multiple simultaneous threads of control.

If you use alists (or the less obvious but more efficient framed 
discipline) then you can extend environments as needed by 
non-destructively appending alists. If you prepend new variable values 
on the front of an alist (without altering the old alist) then all the 
code that saw the old alist still sees the same variables, but all the 
code that sees the new alist sees the new variable. If the old 
environment has a variable A and the new environment has a variable A, 
the code that references A in the new environment gets the new value, 
but the code that references A in the old environment still gets the old 
value.

You'll also want closures: sequences of operations with attached 
environments.
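The non-destructive extension described above can be shown concretely
(Python tuples standing in for conses; a sketch, not a spec):

```python
# Prepending to an alist gives new code the new binding while any code
# still holding the old alist keeps seeing the old one.

def extend(env, name, value):
    return ((name, value), env)      # a "cons": new pair onto old alist

def lookup(env, name):
    while env is not None:
        (n, v), env = env
        if n == name:
            return v
    raise NameError(name)

outer = extend(None, "a", 1)         # old environment: a = 1
inner = extend(outer, "a", 2)        # new environment shadows a

# lookup(outer, "a") still finds 1; lookup(inner, "a") finds 2.
```

Nothing in `outer` was mutated, so both views of `a` coexist safely.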
From: Frank Buss
Subject: Re: The Lisp CPU
Date: 
Message-ID: <cfd37d$dut$1@newsreader2.netcologne.de>
mikel <·····@evins.net> wrote:

> but that sort of assumes that you have a compiler that is turning 
> variable references into frame indexes. I had the idea from reading 
> earlier that you wanted the chip to evaluate something closer to the 
> surface sexprs, rather than running compiled code.

yes, so alists are a good idea, but then it would be too slow, because I 
want to keep the reader very easy: it should only convert the s-expression 
text to the binary tree representation, without compilation, assigning 
indexes and the like.

>>   Another idea:
[...]
> The value of a symbol will then depend upon the control sequence of
> your code, rather than on the lexical structure of your code. This
> will lead to difficult-to-debug problems when programmers looking at
> the source code think a variable should have one value, and the
> machine following the thread of control thinks it should have another.
> Things will get worse if you try to support multiple simultaneous
> thread of control. 

I think dynamic binding will be no problem for my application, because most 
of the time I reference only local variables, and in the rare cases where I 
reference global variables, the names should be written like in CL: *name*. 
Do you have another example where dynamic binding could cause problems? I 
found only academic examples, like http://tinyurl.com/4ogn5 , which don't 
occur in normal programs if you are a bit careful when coding.

> You'll also want closures: sequences of operations with attached 
> environments.

This could be a problem with dynamic binding. This example:

(defun foo (fun z) (funcall fun z))
(let ((z 1)) (foo (lambda (y) (+ z y)) 2 ))

would give 3 with lexical binding, but 4 with my dynamic binding concept. I 
don't know if this is a common case in CL, but of course it could cause 
trouble if you are used to programming with lexical binding. On the other 
hand, the implementation of defun and lambda would be very easy without 
lexical binding.
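The 3-vs-4 difference can be reproduced directly (Python's default
closures give the lexical result; the dynamic case is simulated here with
an explicit binding stack, which is my own framing of the concept):

```python
# The foo/funcall example under lexical vs. dynamic binding of z.

def lexical():
    z = 1
    def foo(fun, z):          # foo's own z does NOT affect the lambda
        return fun(z)
    return foo(lambda y: z + y, 2)   # lambda captures the outer z = 1

def dynamic():
    bindings = {"z": [1]}     # dynamic binding stack: (let ((z 1)) ...)
    def foo(fun, z_arg):
        bindings["z"].insert(0, z_arg)   # foo's parameter rebinds z
        try:
            return fun(z_arg)
        finally:
            bindings["z"].pop(0)
    # the lambda looks z up at call time, seeing foo's binding z = 2:
    return foo(lambda y: bindings["z"][0] + y, 2)

# lexical() -> 3, dynamic() -> 4
```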

Looks like dynamic binding is even more useful for some special 
applications:

http://citeseer.ist.psu.edu/moreau96syntactic.html

With special "future" variables even multithreading will be no problem, but 
that's something for later; first I'll implement a single-threaded Lisp 
CPU.

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: mikel
Subject: Re: The Lisp CPU
Date: 
Message-ID: <GctSc.216$O72.86@newssvr14.news.prodigy.com>
Frank Buss wrote:
> mikel <·····@evins.net> wrote:
> 
> 
>>but that sort of assumes that you have a compiler that is turning 
>>variable references into frame indexes. I had the idea from reading 
>>earlier that you wanted the chip to evaluate something closer to the 
>>surface sexprs, rather than running compiled code.
> 
> 
> yes, so alists are a good idea, but then it is too slow, because I want to 
> keep the reader very easy, it should only convert the s-expression text to 
> the binary tree representation, without compilation and assigning indexes 
> and the like.

Not sure I understand; alists will be pretty fast for the typical case.

> 
> 
>>>  Another idea:
> 
> [...]
> 
>>The value of a symbol will then depend upon the control sequence of
>>your code, rather than on the lexical structure of your code. This
>>will lead to difficult-to-debug problems when programmers looking at
>>the source code think a variable should have one value, and the
>>machine following the thread of control thinks it should have another.
>>Things will get worse if you try to support multiple simultaneous
>>threads of control. 
> 
> 
> I think dynamic binding will be no problem for my application, because most 
> time I reference only local variables and in the rare cases I reference 
> global variables, the names should be written like in CL:*name*. Do you 
> have another example where dynamic binding could cause problems? I found 
> only academic examples, like http://tinyurl.com/4ogn5 , which doesn't occur 
> in normal programs, if you are a bit careful when coding.

Examples are a little hard to construct without being contrived. In 
simple programs they almost never happen, because the problem is 
relatively obvious. They happen when programs get big enough that they 
don't all fit on the screen at the same time. At that point you want the 
logic of a subroutine to correspond as much as is practical with what 
you can see on the screen. Dynamic binding makes possible situations like:

(defun foo ()
   ...
   (setf my-var 1)
   ...)

(defun bar ()
   (let ((my-var "hello"))
     ...
     (foo)
     (do-string-thing my-var)
   ))

...and then you get a mysterious error because do-string-thing expects a 
string, but you called foo, which bound my-var to an integer. The problem 
arises because foo and bar are in different and perhaps ostensibly 
unrelated files.

The example is contrived, but the situation is not all that unusual in 
practice. It's not that it happens all the time; the trouble is more 
that when it does it's a royal pain in the ass.
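The failure can be made tangible with the dynamic environment simulated
as a dict of binding stacks (names taken from the example above; the
mechanics are my own illustration):

```python
# foo silently clobbers bar's binding of my-var under dynamic scope.

env = {"my-var": []}

def foo():
    env["my-var"][0] = 1          # setf: mutates the *current* binding

def do_string_thing(s):
    return s.upper()              # expects a string

def bar():
    env["my-var"].insert(0, "hello")   # (let ((my-var "hello")) ...)
    try:
        foo()                          # rebinds my-var to an integer
        return do_string_thing(env["my-var"][0])
    finally:
        env["my-var"].pop(0)

# bar() raises AttributeError: the int 1 has no .upper()
```

The error surfaces far from its cause, which is exactly the debugging
pain described above.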

> 
> 
>>You'll also want closures: sequences of operations with attached 
>>environments.
> 
> 
> This could be a problem with dynamic binding. This example:
> 
> (defun foo (fun z) (funcall fun z))
> (let ((z 1)) (foo (lambda (y) (+ z y)) 2 ))
> 
> would give 3 with lexical binding, but 4 with my dynamic binding concept. I 
> don't know, if this is a common case in CL, but of course this could cause 
> trouble, if you are used to program with lexical binding, but on the other 
> side the implementation of defun and lambda would be very easy without 
> lexical binding.

Well, if you don't support lexical binding, it won't be Common Lisp 
anyway, so maybe you don't have to worry about that. :-)

> 
> Looks like dynamic binding is even more useful for some special 
> applications:
> 
> http://citeseer.ist.psu.edu/moreau96syntactic.html

Yes, sometimes dynamic binding is what you want.

> 
> With special "future" variables even multithreading will be no problem, but 
> that's something for later, first I'll implement a single threaded Lisp 
> CPU.
> 
From: Frank Buss
Subject: Re: The Lisp CPU
Date: 
Message-ID: <cfdp8b$3jh$1@newsreader2.netcologne.de>
mikel <·····@evins.net> wrote:

> Not sure I understand; alists will be pretty fast for the typical
> case. 

not as fast as direct access to a fixed memory position, because you have 
to traverse the list, which is too slow. I didn't mention that I want to 
build a real-time Lisp CPU with guaranteed response time.

> Dynamic binding makes possible situations like: 
> 
> (defun foo ()
>    ...
>    (setf my-var 1)
>    ...)
> 
> (defun bar ()
>    (let ((my-var "hello"))
>      ...
>      (foo)
>      (do-string-thing my-var)
>    ))

you are right, this is another possible source of errors. Depending on 
the programming style it is not too contrived, but if you keep in mind 
that setf modifies a dynamic variable, while "let" introduces a new local 
variable, I think you could avoid it. It could be done, Emacs Lisp proves 
it :-)

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: mikel
Subject: Re: The Lisp CPU
Date: 
Message-ID: <uduSc.5546$zo7.5390@newssvr29.news.prodigy.com>
Frank Buss wrote:

> mikel <·····@evins.net> wrote:
> 
> 
>>Not sure I understand; alists will be pretty fast for the typical
>>case. 
> 
> 
> not as fast as direct access to a fixed memory position, because you have 
> to traverse the list, which is too slow. I didn't mention that I want to 
> build a real-time Lisp CPU with guaranteed response time.

You got me there.

> 
> 
>>Dynamic binding makes possible situations like: 
>>
>>(defun foo ()
>>   ...
>>   (setf my-var 1)
>>   ...)
>>
>>(defun bar ()
>>   (let ((my-var "hello"))
>>     ...
>>     (foo)
>>     (do-string-thing my-var)
>>   ))
> 
> 
> you are right, this is another possible source of errors. Depending on 
> the programming style it is not too contrived, but if you keep in mind 
> that setf modifies a dynamic variable, while "let" introduces a new local 
> variable, I think you could avoid it. It could be done, Emacs Lisp proves 
> it :-)
> 

Fair enough.
From: ·······@Yahoo.Com
Subject: Re: The Lisp CPU
Date: 
Message-ID: <REM-2004aug15-005@Yahoo.Com>
> From: Frank Buss <··@frank-buss.de>
> > Not sure I understand; alists will be pretty fast for the typical
> > case.
> not so fast as direct access to a fixed memory position, because you
> have to traverse the list, which is too slow. I didn't mention that I
> want to built a real-time Lisp CPU with guaranteed response time.

It's impossible to guarantee response time if you allow any program
whatsoever to be run, because some programs do so many operations it's
impossible for them to run as fast as they are supposed to, such as
bubble-sort on a very large array. So the only thing you can guarantee
is that for any given algorithm there's a way to program it to run
quickly on your CPU (as well as *other* ways to program it that don't
run quickly at all). There's a simple fix for a-list searches taking
too long, which can be applied to most programs that are too slow for
that reason, so you shouldn't worry about it right now before you have
anything working at all. So don't fall into the trap of PREMATURE
OPTIMIZATION, where you worry so much about it, and spend so much time
tweaking your CPU to be faster here and faster there, that you never
ship a product, never even have a working prototype to test. Wait until
you have something working, then if it runs too slow, profile it to see
where it's taking too much time.

So what's the simple trick? If profiling shows there's a tight loop
where some particular variable is referenced millions of times, but that
variable was stuck on the a-list a long time ago, so it's way down the
list, with more recent bindings ahead of it, so every time you
reference that variable you do a long search again: re-bind that
particular variable around your tight loop so it's now at the head of
the a-list! All the other variables, which you seldom reference, will be
slightly slower, while that one most-commonly-used variable will be
very fast. Because you're using dynamic binding, the following code
  (let ((x x))
    ...)
won't change which variables are accessible nor their values; it merely
makes x faster to access.

But what if you have several such variables? Rate them most-crucial
second-most-crucial etc., and re-bind them all with the most-crucial
innermost.
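A quick measurement of the trick (Python lists of pairs standing in for
the a-list; the search depth stands in for access cost — my illustration,
not part of the original post):

```python
# Re-binding a deep variable at the front of the a-list around a hot
# loop: same value, far shorter search.

def lookup(alist, name):
    """Linear ASSOC-style search; also report how far we had to go."""
    for depth, (n, v) in enumerate(alist):
        if n == name:
            return v, depth
    raise NameError(name)

# `limit` was bound long ago, so 100 newer bindings sit in front of it:
alist = [("tmp%d" % i, i) for i in range(100)] + [("limit", 7)]
value, depth = lookup(alist, "limit")     # depth == 100: slow every time

# (let ((limit limit)) ...) around the loop: same value, now in front.
rebound = [("limit", value)] + alist
_, depth2 = lookup(rebound, "limit")      # depth2 == 0: fast
```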

But what if there are a whole bunch of such variables? So the sixth one
still takes a while. In that case you might need to re-write the tight
loop to access slots in a single structure rather than separate
variables. So instead of variables x1 x2 x3 x4 x5 x6, you'd have (caar
xstr) (cdar xstr) (caadr xstr) (caddr xstr) (cdadr xstr) (cdddr xstr),
where x1 and x2 are very fast and the rest are slightly slower. If you
know how to build a Huffman code, you can use a similar method to
optimize the total cost of a set of variables that have known
(profiled) frequencies of usage.
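The Huffman analogy can be carried out literally: build a binary Huffman
tree over the usage frequencies and read off each variable's depth, i.e.
how many car/cdr steps its slot costs (a sketch; the frequencies are
invented):

```python
# Assign car/cdr depths to variables by Huffman coding their profiled
# usage counts: the hottest variable gets the shallowest slot.

import heapq

def huffman_depths(freqs):
    """freqs: {name: usage count} -> {name: depth in the access tree}."""
    heap = [(f, i, {name: 0}) for i, (name, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)                  # tie-breaker so dicts never compare
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {n: d + 1 for n, d in list(d1.items()) + list(d2.items())}
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

# x1 is used far more often than the rest, so it ends up shallowest:
depths = huffman_depths({"x1": 50, "x2": 20, "x3": 5, "x4": 5})
```

This minimizes the frequency-weighted total access cost, exactly as a
Huffman code minimizes expected code length.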

(I'm assuming you've already cleaned up any profiled-critical code that
is blatantly poorly written such as large bubble sorts etc., to where
a-list searching for variable lookups is the remaining slowdown)

Note if most of the time you write your code as if using lexical
variables, i.e. the index variable of a loop is bound right there
around the loop rather than re-using some binding from way way up the
a-list, at least your index variable will never be slow. But the
upper-limit of a loop could have been bound long ago, in which case
re-binding it around the tight loop might give a good speedup. But even
then, typically you pass the upper limit as a parameter, forcing it to
be re-bound, rather than referring to it via dynamic scope, so again no
problem in the first place, usually. For example, a typical usage might
be to fetch a distantly-bound variable *once* and pass it as an argument
to a function, where it serves as the upper bound of a loop. So there's *one*
very slow access to the original binding and a large number of very
fast accesses to the re-binding as a function parameter.
From: Alex Vinokur
Subject: Re: The Lisp CPU
Date: 
Message-ID: <2ob5slF8hrmmU1@uni-berlin.de>
········@Yahoo.Com" <··········@YahooGroups.Com> wrote in message ······················@Yahoo.Com...
[snip]
> But what if there are a whole bunch of such variables? So the sixth one
> still takes a while. In that case you might need to re-write the tight
> loop to access slots in a single structure rather than separate
> variables. So instead of variables x1 x2 x3 x4 x5 x6, you'd have (caar
> xstr) (cdar xstr) (caadr xstr) (caddr xstr) (cdadr xstr) (cdddr xstr),
> where x1 and x2 are very fast and the rest are slightly slower. If you
> know how to build a Huffman code, you can use a similar method to
> optimize the total cost of a set of variables that have known
> (profiled) frequencies of usage.
[snip]

Perhaps the n-ary Huffman Template Algorithm might help to do that:
http://sourceforge.net/projects/huffman-ta/
http://alexvn.freeservers.com/s1/huffman_template_algorithm.html
Note. The algorithm has been written in C++.

-- 
   Alex Vinokur
     http://mathforum.org/library/view/10978.html
     http://sourceforge.net/users/alexvn
From: Frank Buss
Subject: Re: The Lisp CPU
Date: 
Message-ID: <cfq3p2$690$1@newsreader2.netcologne.de>
··········@YahooGroups.Com (·······@Yahoo.Com) wrote:

> So don't fall into the trap of PREMATURE
> OPTIMIZATION, where you worry so much about it, and spend so much time
> tweaking your CPU to be faster here and faster there, that you never
> ship a product, never even have a working prototype to test. Wait until
> you have something working, then if it runs too slow, profile it to see
> where it's taking too much time.

My idea with real time was that I want to be able to predict the time a 
program needs to run in the worst case. Using association lists at the 
hardware level for variable bindings, this would be difficult. But you are 
right, perhaps this can be optimized later; first I should do the 
prototype, without too many thoughts about speed.

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Peter Scott
Subject: Re: The Lisp CPU
Date: 
Message-ID: <659f82ff.0408130813.658e4bb8@posting.google.com>
Frank Buss <··@frank-buss.de> wrote in message news:<············@newsreader2.netcologne.de>...
> The Verilog program is not yet done (not enough time until weekend), but 
> I've collected some ideas and written a basic concept:
> 
> http://www.frank-buss.de/lispcpu/index.html

I'm writing a program to simulate this CPU, after a fashion. I hope to
learn a lot, and I'm having fun. But there are two problems:

First, what is a primitive? Right now my printer function just says
(error "WTF is a primitive?") when it is applied to a primitive,
because I have no clue what one is.

Second, the byte-array can be applied as storage to both strings and
other vectors. How do you tell them apart? This is important for a
printer function. If you want to represent the type information in 3
bits, you still have 2 more types that you haven't used. Could one be
a string?

Good luck,
Peter
From: Pascal Bourguignon
Subject: Re: The Lisp CPU
Date: 
Message-ID: <87y8kiet76.fsf@thalassa.informatimago.com>
·········@chase3000.com (Peter Scott) writes:

> Frank Buss <··@frank-buss.de> wrote in message news:<············@newsreader2.netcologne.de>...
> > The Verilog program is not yet done (not enough time until weekend), but 
> > I've collected some ideas and written a basic concept:
> > 
> 
> I'm writing a program to simulate this CPU, after a fashion. I hope to
> learn a lot, and I'm having fun. But there are two problems:
> 
> First, what is a primitive? Right now my printer function just says
> (error "WTF is a primitive?") when it is applied to a primitive,
> because I have no clue what one is.

A primitive is a "function" that is not implemented "in the language"
you're considering, but directly by the implementation.  Lisp special
operators are primitives (even if the implementation is written in
Lisp!).

Another example: in Unix, all the system calls documented in man
section 2 are Unix primitives. The library functions documented in man
section 3 and in your program are not Unix primitives.

In your case, all the opcodes of your processor are the primitives.
All the functions compiled into these opcodes won't be primitives.

If you have a CAR opcode, directly executed by your processor's wiring
or microcode, then your CAR will be a primitive.


> Second, the byte-array can be applied as storage to both strings and
> other vectors. How do you tell them apart? This is important for a
> printer function. If you want to represent the type information in 3
> bits, you still have 2 more types that you haven't used. Could one be
> a string?

Yes, you have to type-tag all data.  If you have a lot of tags, you
may use an indirection for some of them (e.g. tag 111 may mean that
the address field points to a type tag byte followed by the data).

BUT, does your processor have primitive functions to process these
strings or other types you may define?  If not, then don't tag them!
You'll use what is available at the processor level to implement
Lisp-level user types.  I.e., you don't need a string type if there's no
string primitive; the string type, and the other user types, will be
implemented on top of the processor byte-array. For the processor it'll
be a byte array, but the Lisp system running on top of it will take care
of checking the first byte as a type tag to avoid confusion between
strings and structures or other types.
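The tagging scheme can be sketched as bit arithmetic on a 32-bit word
(the concrete tag values and the 3-bit width are assumptions for the
illustration, not taken from the actual design):

```python
# Pack a 3-bit type tag into the low bits of a 32-bit word, leaving a
# 29-bit payload (a fixnum or a cell address). One tag value is kept
# as an escape to an indirect type byte.

TAG_BITS = 3
TAG_FIXNUM, TAG_SYMBOL, TAG_CONS, TAG_INDIRECT = 0b000, 0b001, 0b010, 0b111

def make_word(tag, payload):
    """Pack a 29-bit payload and a 3-bit tag into one 32-bit word."""
    return ((payload & 0x1FFFFFFF) << TAG_BITS) | tag

def word_tag(word):
    return word & 0b111

def word_payload(word):
    return word >> TAG_BITS

w = make_word(TAG_CONS, 1234)     # a cons whose cell address is 1234
```

The hardware dispatches on `word_tag`; TAG_INDIRECT means "look at the
first byte of the pointed-to block for the real type".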


> > http://www.frank-buss.de/lispcpu/index.html

Why are your numbers unsigned?
Don't you want to call them fixnums (in the hope of implementing bignums later)?


byte-array size: why not use a "number" for the size?  If you specify
another type, you'll have to have other circuits to process it.

Your set of primitives does not feel right.

Start from your types.  You need to have primitives to
create/access/modify data of your types.

    number:
            (make-number value)   --> number
            (number-value number) --> value   (?)
            (+ number number)     --> number;overflow (and/or carry)
            (* number number)     --> number;overflow (or low;high)
            (< number number)     --> {smallest,NIL}
            (= number number)     --> T or NIL

    symbol:
            (make-symbol name)    --> symbol  (so here, we seem to be needing
                                               a string type. But we could
                                               accept at the processor level
                                               to name a symbol with any
                                               data object:
                                                 (make-symbol 42)
                                                 (make-symbol '(a b c)) )
            (symbol-set-value    symbol value)    --> value
            (symbol-set-function symbol function) --> function
            (symbol-set-name     symbol name)     --> name
            (symbol-value    symbol)              --> value
            (symbol-function symbol)              --> function
            (symbol-name     symbol)              --> name

    list (or "cons"):
            (cons value value)      --> cons
            (car cons)              --> value
            (cdr cons)              --> value
            (setcar cons value)     --> value
            (setcdr cons value)     --> value

    byte-array:
            (make-byte-array length init-value) --> byte-array
            (length byte-array)                 --> number
            (byte-array-ref byte-array index)   --> byte (! perhaps converted 
                                                            to number?)
            (byte-array-set byte-array index byte) --> byte (! perhaps a number
                                                            converted to byte?)

    all values:
            (eq value value)       --> T or NIL

            
How could you store a Lisp value into a byte array without primitives
to convert the Lisp value into bytes and vice versa?  Perhaps
it'd be better to have a plain Lisp value-array.  Or you could drop
the byte-array for now and go the string = character-list way.

    primitive:

How do you store functions?  Do you want a function type, or will you
be happy with storing functions as lists of primitives and other
values?

Let's take this little function as an example:

    (defun fact (n) (if (< 1 n) (* n (fact (+ n -1))) 1))

defun is not primitive because it can be implemented as:

    (symbol-set-function 'fact 
                          (lambda (n) (if (< 1 n) (* n (fact (+ n -1))) 1)))

Oh oh! LAMBDA.

If you keep only one control primitive WHILE, this lambda will need to
be compiled as:

    (lambda (n)
        (let ((first t))
             (while (and (< 1 n) first)
                (setf first nil)
                (* n (fact (+ n -1))))
             (while (and (not (< 1 n)) first)
                (setf first nil)
                1)))

Perhaps it could be simpler, but it'll still be messy. I'd advise adding
control primitives, at least IF or COND. Let's try with IF:
    
    (lambda (n)
         (if (< 1 n)
             (* n (fact (+ n -1))) ;; here, we need to call a function.
             1))

So we need a primitive to call a function: FUNCALL. Now our fact compiles to:

    (lambda (n)
         (IF (< 1 n)
             (* n (FUNCALL (SYMBOL-FUNCTION (QUOTE fact)) (+ n -1)))
             1))

QUOTE!

    primitives:
        (IF cond then)
        (IF cond then else)
        (FUNCALL function args...)
        (QUOTE thing)

        You'll need something for the bindings, to handle local
        variables and function parameters. Either only by LAMBDA, or
        with a LET primitive.

Note: The "names" of the above defined primitives are not symbols!
      They're only names for the primitive values.
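The derived primitive set (IF, QUOTE, LAMBDA, FUNCALL, SYMBOL-FUNCTION,
plus a little arithmetic) is small enough to exercise in a toy evaluator.
A sketch in Python, with tuples standing in for conses — an illustration
of the idea, not the processor:

```python
# A minimal evaluator over the primitives derived above, enough to run
# the fact example. IF here always takes both branches; environments
# are plain dicts.

functions = {}                      # the symbol-function slots

def evaluate(expr, env):
    if isinstance(expr, int):                 # a number evaluates to itself
        return expr
    if isinstance(expr, str):                 # a variable reference
        return env[expr]
    op = expr[0]
    if op == "QUOTE":
        return expr[1]
    if op == "IF":                            # (IF cond then else)
        return evaluate(expr[2] if evaluate(expr[1], env) else expr[3], env)
    if op == "LAMBDA":                        # (LAMBDA params body)
        return ("closure", expr[1], expr[2], env)
    if op == "FUNCALL":
        fn = evaluate(expr[1], env)
        args = [evaluate(a, env) for a in expr[2:]]
        _, params, body, defenv = fn
        return evaluate(body, dict(defenv, **dict(zip(params, args))))
    args = [evaluate(a, env) for a in expr[1:]]   # remaining primitives
    if op == "+": return args[0] + args[1]
    if op == "*": return args[0] * args[1]
    if op == "<": return args[0] < args[1]
    if op == "SYMBOL-FUNCTION": return functions[args[0]]
    raise ValueError(op)

# (symbol-set-function 'fact (lambda (n) ...)):
functions["fact"] = evaluate(
    ("LAMBDA", ("n",),
     ("IF", ("<", 1, "n"),
      ("*", "n", ("FUNCALL", ("SYMBOL-FUNCTION", ("QUOTE", "fact")),
                  ("+", "n", -1))),
      1)), {})
```

Calling it through the same primitives,
`("FUNCALL", ("SYMBOL-FUNCTION", ("QUOTE", "fact")), 5)`, recurses down
to 1 and multiplies back up.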

If the tag for symbol is 001 and the tag for primitives is 011, then:
         symbol IF could be:   0000 0010 1001 1001
while primitive IF could be:   1000 0000 0001 0011
Of course, (symbol-function 'IF) == primitive IF


    
-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

Our enemies are innovative and resourceful, and so are we. They never
stop thinking about new ways to harm our country and our people, and
neither do we.
From: Frank Buss
Subject: Re: The Lisp CPU
Date: 
Message-ID: <cfo0rm$d7d$1@newsreader2.netcologne.de>
Pascal Bourguignon <····@mouse-potato.com> wrote:

>> > http://www.frank-buss.de/lispcpu/index.html
> 
> Why your numbers are unsigned?
> Don't you want to call them fixnum (in the hope of implementing later
> bignums)? 

you are right, signed numbers are more useful.

> byte-array size: why not use a "number" for the size?  If you specify
> an other type, you'll have to have other circuits to process them.

I don't think this adds more circuits, because if I implement it all 
in hardware, it is difficult to re-use the circuits which normally handle 
fixnums for handling the length anyway. But it is more consistent to use 
the same type, so I've changed this.

> Start from your types.  You need to have primitives to
> create/access/modify data of your types.
> 
>     number:
>             (make-number value)   --> number
>             (number-value number) --> value   (?)
>             (+ number number)     --> number;overflow (and/or carry)
>             (* number number)     --> number;overflow (or low;high)

I don't think I'll handle overflows in my first version. Everything will 
be fixnums.

>             (< number number)     --> {smallest,NIL}
>             (= number number)     --> T or NIL

is it possible to use T as a constant symbol? Then I don't need an extra 
type for it.

>             (symbol-set-value    symbol value)    --> value
>             (symbol-set-function symbol function) --> function
>             (symbol-set-name     symbol name)     --> name
>             (symbol-value    symbol)              --> value
>             (symbol-function symbol)              --> function
>             (symbol-name     symbol)              --> name

my idea was to use a list as a function; do I then need a function
type? For example, FUNCALL would take a list as an argument instead of
a function.
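The "function is just a list" idea could be sketched like this (a
Python stand-in for the evaluator; the (params body) list layout and
the operator names are assumptions for illustration):

```python
# A function here is plain list data of the form (params body); FUNCALL
# takes that list, binds the parameters, and evaluates the body.

def lisp_eval(expr, env):
    if isinstance(expr, str):      # symbol reference
        return env[expr]
    if isinstance(expr, int):      # self-evaluating number
        return expr
    op, *args = expr
    if op == '+':
        return sum(lisp_eval(a, env) for a in args)
    if op == 'funcall':            # (funcall fn-list arg...)
        fn_list = args[0]          # the "function" is a raw list
        values = [lisp_eval(a, env) for a in args[1:]]
        return funcall(fn_list, values, env)
    raise ValueError(f"unknown operator {op}")

def funcall(fn_list, arg_values, env):
    params, body = fn_list         # just destructure the list
    local = dict(env)
    local.update(zip(params, arg_values))
    return lisp_eval(body, local)
```

For example, `lisp_eval(['funcall', [['n'], ['+', 'n', 'n']], 21], {})`
evaluates the list-function ((n) (+ n n)) applied to 21. No separate
function type is needed, at the cost of no arity or type checking.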

>     list (or "cons"):
>             (cons value value)      --> cons
>             (car cons)              --> value
>             (cdr cons)              --> value
>             (setcar cons value)     --> value
>             (setcdr cons value)     --> value

yes, that's already there.

>     byte-array:

for the first version I removed it. But I think I need random access,
so I changed it to an array type which can hold all types.

> Let's take this little function as an example:
> 
>     (defun fact (n) (if (< 1 n) (* n (fact (+ n -1))) 1))
> 
> defun is not primitive because it can be implemented as:
> 
>     (symbol-set-function 'fact 
>                           (lambda (n) (if (< 1 n) (* n (fact (+ n
>                           -1))) 1))) 

yes, that's possible. Perhaps I should implement it like this in the 
reader.

> If you keep only one control primitive WHILE, this lambda will need to
> be compiled as:
> 
>     (lambda (n)
>         (let ((first t))
>              (while (and (< 1 n) first)
>                 (setf first nil)
>                 (* n (fact (+ n -1))))
>              (while (and (not (< 1 n)) first)
>                 (setf first nil)
>                 1)))

why? I don't want to compile it, I just want to evaluate the raw function 
at run-time.

>     primitives:
>         (IF cond then)
>         (IF cond then else)
>         (FUNCALL function args...)
>         (QUOTE thing)
> 
>         You'll need something for the bindings, to handle local
>         variables and function parameters. Either only by LAMBDA, or
>         with a LET primitive.
> 
> Note: The "names" of the above defined primitives are not symbols!
>       They're only names for the primitive values.

why not? My idea was, if the reader reads something like "if", it creates a 
symbol (if not already created) and sets the function-slot to the 
primitive. This makes it easier to implement reflection or debugging, when 
you want to print the internal s-expression representation.
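Frank's reader scheme could be sketched like this (a Python stand-in;
the slot names and the primitive table are assumptions, not the actual
implementation):

```python
# The reader interns each name once; for built-ins like "if" it stores
# the primitive in the symbol's function slot, so the symbol itself
# survives for reflection and printing.

symbol_table = {}

def intern(name):
    if name not in symbol_table:
        symbol_table[name] = {'name': name, 'value': None, 'function': None}
    return symbol_table[name]

PRIMITIVES = {'if': '<prim-if>', 'quote': '<prim-quote>'}  # illustrative

def read_token(name):
    sym = intern(name)
    if sym['function'] is None and name in PRIMITIVES:
        sym['function'] = PRIMITIVES[name]  # (symbol-function 'if) => primitive
    return sym
```

Re-reading "if" returns the identical symbol object, and printing the
internal s-expression can fall back on the symbol's name slot.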

I've updated the Lisp CPU page:

http://www.frank-buss.de/lispcpu/index.html

I think it is easier to implement the CPU in Lisp first, and then in some 
hardware description language.

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: ·······@Yahoo.Com
Subject: Re: The Lisp CPU
Date: 
Message-ID: <REM-2004aug15-006@Yahoo.Com>
> From: Frank Buss <··@frank-buss.de>
> my idea was to use a list as a function, do I need then a function
> type? For example, FUNCALL would take a list as an argument, instead
> of a function.

I thought you said this had to run realtime applications with
time-critical constraints?? In that case you definitely don't want to
compile a function to be a list at runtime. Instead, you want a
function to compile to an ARRAY at runtime. Then the CPU can jump
directly from one instruction to another as needed to implement program
loops and decisions, instead of having to count 1 2 3 4 ... down from
the front of the list every time there's a jump. Or did you intend to
restrict program flow to Dijkstra's primitives, in which case you
*never* need to jump back to an earlier point in a block of code, all
you ever need to do is branch to two different lists depending on a
decision (and return to where you came from when done with either), or
execute a WHILE primitive that repeatedly alternates between testing
the exit condition and executing a list of instructions again before
proceeding to the next instruction. So both WHILE and IF would use the
same primitive-subroutine-calling mechanism to execute their sub-blocks
of code before returning to the main block. For example:
  (set 'foo (cons x (if tst y z)))
might compile into (where <name> are opcode primitives, while just name
by itself is a symbol reference, and I'm using dotted lists for each
single instruction to save storage) (Data Stack shown after each
instruction executed. Control Stack is exercise for reader.):
 ((<PushImmediate> . foo) ;DStack contains foo
  (<PushSymVal> . x)      ;DStack contains val(x), foo
  (<PushSymVal> . tst)    ;DStack contains val(tst), val(x), foo
  (<If> . ((PushSymVal . y)) . ((PushSymVal . z)))
      ;After executing the <If>, upon entering one of the two branches,
      ; stack contains val(x), foo
         ;If first branch chosen, inside it after instruction,
         ; dstack contains val(y), val(x), foo
         ;If second branch chosen, inside it after instruction,
         ; dstack contains val(z), val(x), foo
         ;That top of dstack will be referred to as something after return:
                          ;DStack contains something, val(x), foo
  (<Cons>)                ;DStack contains (val(x) . something), foo
  (<Set>)                 ;Val(foo) side-effected
                          ;DStack contains (val(x) . something)
    ;If this code fragment is used at non-last statement level in PROGN,
    ; there's one more machine instruction which properly belongs to the
    ; PROGN not to this code fragment:
  (<PopDiscard>)          ;DStack now empty
  . ...any more code that follows at same level of source program...)
Of course FORTH programmers would like that RPN way of coding!
One nice thing about using RPN in this way is that order of
sub-expression evaluation is preserved by pushing result of early
things on the stack long before they are needed. So if instead of
(if tst y z) that sub-expression changed the value of x, it's the
*original* value of x that's passed to <Cons>.
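The data-stack execution described above could be sketched like this (a
Python stand-in for the strawman machine; the opcode encoding as
tuples is an assumption):

```python
# RPN execution of the compiled form of (set 'foo (cons x (if tst y z))).
# <If> runs the chosen branch against the same data stack, exactly as in
# the stack traces above.

def run(program, dstack, symbols):
    for instr in program:
        op = instr[0]
        arg = instr[1] if len(instr) > 1 else None
        if op == 'PushImmediate':
            dstack.append(arg)
        elif op == 'PushSymVal':
            dstack.append(symbols[arg])
        elif op == 'If':
            then_code, else_code = arg
            chosen = then_code if dstack.pop() else else_code
            run(chosen, dstack, symbols)   # sub-block shares the dstack
        elif op == 'Cons':
            cdr, car = dstack.pop(), dstack.pop()
            dstack.append((car, cdr))      # (val(x) . something)
        elif op == 'Set':
            value, name = dstack.pop(), dstack.pop()
            symbols[name] = value          # side-effect val(foo)
            dstack.append(value)

code = [('PushImmediate', 'foo'),
        ('PushSymVal', 'x'),
        ('PushSymVal', 'tst'),
        ('If', ([('PushSymVal', 'y')], [('PushSymVal', 'z')])),
        ('Cons',),
        ('Set',)]

symbols = {'x': 1, 'tst': True, 'y': 2, 'z': 3}
run(code, [], symbols)
# symbols['foo'] is now (1, 2), i.e. (val(x) . val(y))
```

Note how val(x) was pushed before the branch ran, so the order of
sub-expression evaluation is preserved automatically.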

Hmm, PROGN is fun to compile with each expression *inline* (all steps
as CARs of cells strung sequentially down a single very long CDR-linked
list):
  (PROGN stmt1 stmt2 stmt3)
would compile into:
  (...code for stmt1
   last of code for stmt1  ;DStack contains result from stmt1
   (<PopDiscard>)          ;DStack empty
   ...code for stmt2
   last of code for stmt2  ;DStack contains result from stmt2
   (<PopDiscard>)          ;DStack empty
   ...code for stmt3
   last of code for stmt3  ;DStack contains result from stmt3
  )

Calling LISP-level functions would be very similar to calling primitive
opcodes, except to preserve order of evaluation you'd have to have a
third stack for pending functions. This is Scheme-like:
  (myfun (car x) y)
would compile to:
  ((<PushSymVal> . myfun) ;DStack contains val(myfun)
   (<PopSaveFunction>)    ;DStack empty  FStack contains val(myfun)
   (<PushSymVal> . x)     ;DStack contains val(x)  FStack contains val(myfun)
   (<Car>)                ;DStack contains ...  FStack contains val(myfun)
   (<PushSymVal> . y)     ;DStack contains val(y), ...  FStack contains val(myfun)
   (<Funcall> . 2)        ;DStack contains what myfun returned, FStack empty
   )

Note that <Funcall> checks that the item on the FStack is really a
compiled lambda expression, i.e. a valid LISP-defined function-object
that has been compiled, and then makes sure the number of lambda
variables matches the hardcoded number of parameters in the CDR of the
<Funcall> instruction, and if so then it pushes the two new bindings
onto the environment a-list and starts running down the tail of that
compiled-lambda object. I see no reason why (lambda (a b) ...code...)
can't compile into (<LambdaTag> (a b) ...compiledcode...). This runtime
argnumber check is needed because under the Scheme/LISP1 system it's
common to accidentally re-bind the value of a symbol to be some function
that takes a different number of arguments from what the source code
assumed it'd take. By comparison, built-in opcode primitives can't
change between compilation and runtime, so it's safe for them to just
pop off however many (fixed) number of DStack items they need and push
back on their single return value. (To emulate multiple return values,
return a list of them, as oldtimer LISP programmers did, or return a
vector of them like Java programmers do now.) (No keyword or optional
parameters of course in your super-small LISP, right?)
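The runtime argument-count check described above could be sketched like
this (a Python stand-in; the LambdaTag tuple layout and the Halt
behaviour are assumptions):

```python
# <Funcall> pops the function from the FStack, verifies it is a
# compiled lambda, checks the hardcoded argument count against the
# lambda list, then binds the arguments and runs the body.

class Halt(Exception):
    """Stands in for the machine HALTing on a bad call."""

def make_compiled_lambda(params, code):
    return ('LambdaTag', params, code)  # (<LambdaTag> (a b) ...compiledcode...)

def funcall(fstack, dstack, nargs, env):
    fn = fstack.pop()
    if not (isinstance(fn, tuple) and fn[0] == 'LambdaTag'):
        raise Halt("not a compiled lambda")
    _, params, code = fn
    if len(params) != nargs:
        raise Halt("wrong number of arguments")
    args = [dstack.pop() for _ in range(nargs)]
    args.reverse()                       # restore left-to-right order
    bindings = list(zip(params, args)) + env   # push onto the a-list
    return code(bindings)                # run the compiled body
```

Halting on a mismatch is what keeps the data stack from being left
mis-synched for everything executed after the return.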

> I don't want to compile it, I just want to evaluate the raw function
> at run-time.

Huh?? I thought you had realtime critical applications? You want to do
all macro-expansions at compile time, to produce runtime code that
calls only a very reduced set of opcode primitives and LISP-defined
functions, right? You might as well compile s-expressions into RPN
steps (FORTH primitives in essence) as I've strawmanned above, and
while you're doing that you might as well change the CAR of a compiled
lambda expression to show it's compiled. If somebody CONSes up a lambda
expression at runtime, and then passes it to APPLY or FUNCALL, or
CONSes up a form and then passes it to EVAL, the runtime would have to
be smart enough to see it's raw s-expression data rather than compiled
list-of-steps data. Or you could simply forbid such activity??

> I think it is easier to implement the CPU in Lisp first, and then in
> some hardware description language.

That's an understatement. I suggest you install every version of your
simulator, from the very first incomplete version, as a CGI WebServer
application, so that we can play with it and get a feel for where
you're heading and excite any bugs in your design or your emulation
before they get too entrenched.
From: Frank Buss
Subject: Re: The Lisp CPU
Date: 
Message-ID: <cfq5fj$9rj$1@newsreader2.netcologne.de>
··········@YahooGroups.Com (·······@Yahoo.Com) wrote:

>> From: Frank Buss <··@frank-buss.de>
>> my idea was to use a list as a function, do I need then a function
>> type? For example, FUNCALL would take a list as an argument, instead
>> of a function.
> 
> I thought you said this had to run realtime applications with
> time-critical constraints?? In that case you definitely don't want to
> compile a function to be a list at runtime. Instead, you want a
> function to compile to an ARRAY at runtime. Then the CPU can jump
> directly from one instruction to another as needed to implement
> program loops and decisions, instead of having to count 1 2 3 4 ...
> down from the front of the list every time there's a jump. 

I don't think that I need to count from the beginning of a list to 
implement a loop. My idea for the while loop was that there will be two 
stacks (each implemented with a list) for loops. With my while-construct 
like this:

(while condition form)

the hardware pushes the condition-list (or expression) on one stack and 
the form-list on the other stack. After each pass through the loop, both 
are popped from the stacks, the condition is evaluated, and both are 
pushed again if necessary. They need to be stacks, because loops can be 
nested.
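The two-stack WHILE could be sketched like this (a Python stand-in with
a deliberately tiny evaluator; the stacks would be CDR-linked lists in
the hardware, and all names are illustrative):

```python
# (while condition form): save the condition and the body on parallel
# stacks, re-test the saved condition before every iteration, and pop
# both when the loop finishes. Nesting works because inner loops push
# on top of outer ones.

def lisp_eval(expr, env):
    if isinstance(expr, str):
        return env[expr]
    if isinstance(expr, int):
        return expr
    op, *args = expr
    if op == 'setq':
        env[args[0]] = lisp_eval(args[1], env)
        return env[args[0]]
    if op == '<':
        return lisp_eval(args[0], env) < lisp_eval(args[1], env)
    if op == '+':
        return lisp_eval(args[0], env) + lisp_eval(args[1], env)
    if op == 'while':
        return eval_while(args[0], args[1], env)
    raise ValueError(op)

cond_stack, form_stack = [], []       # lists standing in for hardware stacks

def eval_while(condition, form, env):
    cond_stack.append(condition)      # save both parts for re-use
    form_stack.append(form)
    while lisp_eval(cond_stack[-1], env):
        lisp_eval(form_stack[-1], env)
    cond_stack.pop()                  # loop done: unwind this nesting level
    form_stack.pop()
    return None

env = {'i': 0}
lisp_eval(['while', ['<', 'i', 5],
           ['setq', 'i', ['+', 'i', 1]]], env)
# env['i'] is now 5
```

No position counting is needed: the saved condition and form are
whole lists, re-entered from their heads each iteration.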

> Or did you
> intend to restrict program flow to Dijkstra's primitives, 

do you have any web reference to "Dijkstra's primitives"? I don't know 
it.

> in which
> case you *never* need to jump back to an earlier point in a block of
> code, all you ever need to do is branch to two different lists
> depending on a decision (and return to where you came from when done
> with either), or execute a WHILE primitive that repeatedly alternates
> between testing the exit condition and executing a list of
> instructions again before proceeding to the next instruction. So both
> WHILE and IF would use the same primitive-subroutine-calling mechanism
> to execute their sub-blocks of code before returning to the main
> block. For example: 

I don't understand your example in detail, but it looks similar to my 
stack solution.

> (No keyword or optional parameters of course in your super-small
> LISP, right?) 

right :-)

> 
>> I don't want to compile it, I just want to evaluate the raw function
>> at run-time.
> 
> Huh?? I thought you had realtime critical applications? You want to do
> all macro-expansions at compile time, to produce runtime code that
> calls only a very reduced set of opcode primitives and LISP-defined
> functions, right? 

yes, this was my first idea, but I wonder if this is just another 
premature optimization and I should add a macro type instead. I can't 
see all the consequences, but it looks like an interesting concept, 
because one consequence would be that it is possible to build or modify 
a macro-list at evaluation time.
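A runtime macro type along these lines could be sketched like this (a
Python stand-in; representing an expander as a tagged entry in a table
is an assumption for illustration):

```python
# If the operator names a macro, the evaluator hands it the
# *unevaluated* argument forms, then evaluates whatever form comes
# back. Because the table is ordinary data, macros can be redefined or
# built at evaluation time.

macros = {}

def defmacro(name, expander):
    macros[name] = expander          # a macro maps forms to a new form

def lisp_eval(expr, env):
    if isinstance(expr, str):
        return env[expr]
    if isinstance(expr, int):
        return expr
    op, *args = expr
    if op in macros:
        return lisp_eval(macros[op](*args), env)  # expand, then evaluate
    if op == 'if':
        return lisp_eval(args[1] if lisp_eval(args[0], env) else args[2], env)
    if op == '>':
        return lisp_eval(args[0], env) > lisp_eval(args[1], env)
    raise ValueError(op)

# (unless test form) expanding to (if test 0 form), with 0 standing in
# for NIL -- defined at run time, not compile time:
defmacro('unless', lambda test, form: ['if', test, 0, form])
```

The cost is an expansion on every evaluation of a macro call, which is
exactly the premature-optimization trade-off being weighed here.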

>> I think it is easier to implement the CPU in Lisp first, and then in
>> some hardware description language.
> 
> That's an understatement I suggest you install every version of your
> simulator, from the very first incomplete version, as a CGI WebServer
> application, so that we can play with it and get a feel for where
> you're heading and excite any bugs in your design or your emulation
> before they get too entrenched.

I can't see why I should install it as a CGI program, because the 
emulator will be so short and written in Common Lisp that everyone can 
open the web page or text file with the listing, copy and paste it into 
a running Lisp system, and then start some repl-function for the CPU. I 
don't want to hide the source.

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Frank Buss
Subject: Re: The Lisp CPU
Date: 
Message-ID: <cg0ip6$kfa$1@newsreader2.netcologne.de>
Frank Buss <··@frank-buss.de> wrote:

>> That's an understatement I suggest you install every version of your
>> simulator, from the very first incomplete version, as a CGI WebServer
>> application, so that we can play with it and get a feel for where
>> you're heading and excite any bugs in your design or your emulation
>> before they get too entrenched.

I have a first version for review:

http://www.frank-buss.de/lispcpu/lispcpu.lisp.txt

The reader is refactored and cleaner than the one at 
http://www.frank-buss.de/lispcpu/index.html, a printer is implemented 
(which helped a lot in getting the reader correct), and I've started 
the evaluator. There is no function support at this time, but nested 
primitives can be executed.

First I tried to write the evaluator as a state machine, reading a 
memory word at every simulated clock cycle, which would be easy to 
implement in a hardware description language. But this was too much at 
one time, so I concentrated on the evaluator logic. When this is 
finished, I can translate it to a state-machine version.

Next I'll add DEFUN and convert all Lisp-structures in the evaluator 
(like the args-list) to the Lisp-CPU memory structures.

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Pascal Bourguignon
Subject: Re: The Lisp CPU
Date: 
Message-ID: <87smak5bjf.fsf@thalassa.informatimago.com>
Frank Buss <··@frank-buss.de> writes:
> I have a first version for review:
> 
> http://www.frank-buss.de/lispcpu/lispcpu.lisp.txt

(princ #\Newline) is called (terpri)

Too bad you did not try yet to implement WHILE. That's where the fun is.

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

Our enemies are innovative and resourceful, and so are we. They never
stop thinking about new ways to harm our country and our people, and
neither do we.
From: Frank Buss
Subject: Re: The Lisp CPU
Date: 
Message-ID: <cg8jpc$h87$1@newsreader2.netcologne.de>
Pascal Bourguignon <····@mouse-potato.com> wrote:

> Frank Buss <··@frank-buss.de> writes:
>> I have a first version for review:
>> 
>> http://www.frank-buss.de/lispcpu/lispcpu.lisp.txt
> 
> (princ #\Newline) is called (terpri)

thanks, I've changed this.

> Too bad you did not try yet to implement WHILE. That's where the fun
> is. 

I've implemented it, it was easy. Some other features now working:

- progn (was a 3-liner for the right result, because the arguments were
already evaluated) 

- added mem-print-object at the end of the test call to show the result
of the evaluator 

- #'+ #'- and #'* now support more than 2 arguments (the difficult part
was to find the function REDUCE with Google, which I knew under the name
FOLDL from Haskell) 

- refactoring: all functions with low-level access to memory, like the
symbol-set/get functions, now check their pointer parameter, which has
to be a pointer with the type-bits properly set (perhaps the checks
could be extracted to a macro?); added set-cdr and set-car (same as the
normal cdr and car, but with tagged memory pointers for the simulated
memory) 

- adding QUOTE and SET primitives

- adding <, >, =, /=, <= and >= primitives (same code should be
extracted to a macro) 

The current implementation can be downloaded from here:

http://www.frank-buss.de/lispcpu/lispcpu.lisp.txt

The main page is slightly updated to match the current implementation:

http://www.frank-buss.de/lispcpu/index.html

PS: The refactorings were very easy with Lisp, compared to other
languages like Java. I added the type-check in the low-level functions
and executed the code. Then the debugger and call stack showed me where
I had to change the high-level functions. Sometimes I needed more info;
then I switched to the REPL, examined some memory with "MEM-PRINT-OBJECT
some-address" and did some calls to other functions for testing. 

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: ·······@Yahoo.Com
Subject: Re: The Lisp CPU
Date: 
Message-ID: <REM-2004aug20-005@Yahoo.Com>
> From: Frank Buss <··@frank-buss.de>
> do you have any web reference to "Dijkstra's primitives"? I don't
> know it.

I might be confused as to who first listed them. I was pretty sure it
was Dijkstra, but I could be mistaken. The FORTH advocates were pretty
strong on D-charts, which is a graphical representation of what I'm
talking about.

Basically the idea is that when combining small tools to make a larger
tool ("tool" = function, procedure, method, etc.), there are only three
ways the small tools should be arranged in making the larger tool:
- Time-sequence: First tool 1, then tool 2, etc., like PROGN
- Conditional: If tool-predicate returns true then call tool 1 else
call tool 2, like IF.
- While loop: While tool-predicate keeps returning true, call tool 1
again, but when tool-predicate returns false, we're done here, like
WHILE.

Note that he didn't discuss data access, such as SETQ,
global/fluid/lexical variables, etc., only how control of flow is
organized, so he "solved" only part of the problem of how to be sure
programs are "clean".
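The three D-structures above can be read as the only three combinators
for composing small tools into larger ones; a sketch (Python stand-in,
with "tools" as functions on a state dictionary, which is purely an
illustrative encoding):

```python
# The three Dijkstra/D-chart composition rules as higher-order
# functions: time-sequence (PROGN), conditional (IF), while loop (WHILE).

def seq(*tools):                 # PROGN: run tools in time-sequence
    def combined(state):
        for t in tools:
            state = t(state)
        return state
    return combined

def cond(pred, tool1, tool2):    # IF: choose one of two tools
    return lambda state: tool1(state) if pred(state) else tool2(state)

def loop(pred, tool):            # WHILE: repeat tool while pred holds
    def combined(state):
        while pred(state):
            state = tool(state)
        return state
    return combined

# An iterative factorial built from only these three combinators:
fact = seq(lambda s: {'n': s['n'], 'acc': 1},
           loop(lambda s: s['n'] > 1,
                lambda s: {'n': s['n'] - 1, 'acc': s['acc'] * s['n']}))
```

Every control transfer is either "next", a two-way branch, or a
back-edge owned by a single WHILE, so no free-form jump is ever needed.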

No, I don't have a Web reference handy. I'll do a Google search and see
what turns up...

http://citeseer.ist.psu.edu/context/3323/0
   Edsger W. Dijkstra. Go to statement considered harmful. Communications
   of the ACM, 11(3):147--148, March 1968.

http://216.239.57.104/search?q=cache:M02N2ThLhs0J:tara.unl.ac.uk/~11cooperc/FLOW2.ps+d-charts+dijkstra&hl=en&ie=UTF-8
   The
   debate on the virtues of structured as opposed to unstructured
   programming, was strongly influenced by Dijkstra in the paper Go to
   statement considered harmful, [6]. This paper advocates the
   replacement of Go To statements by Do While and If Then Else
   constructs which are known as D-structures, and the flowgraphs arising
   from this style of programming are known as D-charts.

Those references I found seem to confirm Dijkstra as the originator of
that particular set of primitives for putting pieces of program into
larger pieces.

> > Huh?? I thought you had realtime critical applications? You want to do
> > all macro-expansions at compile time, to produce runtime code that
> > calls only a very reduced set of opcode primitives and LISP-defined
> > functions, right?

> yes, this was my first idea, but I wonder if this is just another
> premature optimization ...

Even though this results in optimization of runtime speed, it's not for
that purpose, it's to minimize the amount of logic you need to
carefully design into hardware and "burn" into the gate array. The idea
is to make the data fully LISP in some sense, that is you really have
pointy structures based on CONS, and a true garbage collector, but
otherwise your machine is a very simple stack architecture that calls
LISP primitives directly at the machine level. You wouldn't even have
EVAL in the first version of the machine. A cross-compiler running in
CL on some nice easy-to-use machine would generate blocks of machine
code, which would be simulated on that nice machine to make sure the
code had been compiled correctly, then downloaded to the gate array for
execution, to see if the hardware actually works. Of course this means
that in your first version you wouldn't be able to run a program that
created a pointy structure (internal form of s-expression) and then
passed it to EVAL. I would suggest on first attempt, don't even
implement APPLY in hardware. Maybe not even implement FUNCALL in
hardware. Just implement a machine-level function-calling opcode that
has the number of arguments built-in, and have each function definition
tagged with the number of arguments it takes, with a runtime check to
verify the two agree. (Yeah, I know the cross-compiler can check all
that, but you know how Murphy loves to creep in, so having the machine
HALT if number of args is wrong, rather than blindly gobble the wrong
number of arguments from the data stack and leave the stack mis-synched
with respect to whatever will be done after return, causing impossible
to diagnose runtime bugs later, you get the idea?)

Note that in programming the three D-primitives, you wouldn't be
EVALuating an s-expression, instead you'd be executing machine code
like this (notation: <opcode> [ptr to block of code] symbolname, with
implied <Return1val> at end of each referenced program block):
  <PushImmed> [<CodeHeader0args>
               <PushSymVal> lst
               <CallEndp>
               <CallNot>
               <Return1val>]
  <PushImmed> [<CodeHeader0args>
               <PushSymVal> lst
               <CallCar>
               <PushSymVal> dosomething
               <CallFunction1arg>
               <PushSymVal> revres
               <CallCons>
               <CallSetq>   revres
               <Return1val>]
  <CallWhile>

  <PushImmed> [...perform test..., leave boolean result on stack]
  <PushImmed> [...do one action, leave result on stack]
  <PushImmed> [...do alternative action, leave result on stack]
  <CallIfElse>

  <PushImmed> [...perform test..., leave boolean result on stack]
  <PushImmed> [...do one action, leave result on stack]
  <CallIf>

Hmm, to reduce the number of gates you have to "burn", instead of
having separate opcodes for if-then-else and simple if, you could have
only if-then-else, and emulate the simple if like this:
  <PushImmed> [... perform test ... leave boolean result on stack]
  <PushImmed> [...do one action ... leave result on stack]
  <PushImmed> [<CodeHeader0args>
               <PushNIL>    ;or alternately <PushSymVal> nil
               <Return1val>]
  <CallIfElse>
There could be a single copy of those three lines of machine code,
compiled from (lambda () 'nil), or alternately (lambda () nil), which
would be called from whereever needed in compiled code.

Note that the machine doesn't run down CDR chains to do a PROGN,
instead it just executes consecutive instructions in RAM, the same as
it executes consecutive instructions to push arguments on stack then
call some function (built-in opcode such as <CallCons> or
cross-compiled RAM function such as dosomething which is called via
built-in <CallFunction1arg>). I always liked the PDP-11 style of
representing all the various addressing modes, with plus/minus literal
offsets from base registers such as stack pointer or PC. Using separate
machine stacks for data (parameters to functions) etc., and separate
IBM-370-style base registers for commonly referenced sections of RAM,
the generality of the addressing notation is nice.

So the machine would have these modes:
- Doing nothing while waiting to be invoked to do something.
- Downloading a new program or patching an existing program via download.
- Running a program from RAM.

> I can't see why I should install it as a CGI program, because the
> emulator would be so short and written in Common Lisp, so that
> everyone can open the web page or text file with the listing and copy
> and paste it to a running Lisp system

Not everyone has access to a Lisp system. For example, some people are
homeless and their only access to the net is by going to a public
library where the *only* thing they can do on the computer is run
Microsoft Internet Explorer. (If nobody hires me within the next few
months, I'll be in that condition.) Also a CGI demo will always have
the latest version, and you can even set up multiple demos to show the
old version (with discussion what was wrong with it that needed fixing)
and the latest version etc. Then anyone who likes the demo can, if they
have access to a suitable Lisp system, download the latest stable
version and run it themselves.

Regarding the emulator for the proposed or under-construction computer:
> I don't want to hide the source.

CGI doesn't *require* you to hide the source. There's nothing stopping
you from *both* having the demo on your cgi-bin and the source on your
public_html.
From: Frank Buss
Subject: Re: The Lisp CPU
Date: 
Message-ID: <cg8lpn$kav$1@newsreader2.netcologne.de>
··········@YahooGroups.Com (·······@Yahoo.Com) wrote:

> Basically the idea is that when combining small tools to make a larger
> tool ("tool" = function, procedure, method, etc.), there are only
> three ways the small tools should be arranged in making the larger
> tool: 
> - Time-sequence: First tool 1, then tool 2, etc., like PROGN
> - Conditional: If tool-predicate returns true then call tool 1 else
> call tool 2, like IF.
> - While loop: While tool-predicate keeps returning true, call tool 1
> again, but when tool-predicate returns false, we're done here, like
> WHILE.

thanks, this is what I want to implement. No GOTO, not even at the 
machine level; only a real Lisp CPU with WHILE, IF and PROGN.

> Even though this results in optimization of runtime speed, it's not
> for that purpose, it's to minimize the amount of logic you need to
> carefully design into hardware and "burn" into the gate array. 

I think the basic concept of Lisp is easy enough that I don't need that 
many gates.

> Note that the machine doesn't run down CDR chains to do a PROGN,
> instead it just executes consecutive instructions in RAM, the same as
> it executes consecutive instructions to push arguments on stack then
> call some function (built-in opcode such as <CallCons> or
> cross-compiled RAM function such as dosomething which is called via
> built-in <CallFunction1arg>). 

sorry, but then it will be just another normal CPU, perhaps with some 
special instructions, for which special compilers can produce optimized 
machine code. My vision is a CPU which runs the CDR chain, without the 
need to compile it to a very different representation first.

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Rob Warnock
Subject: Re: The Lisp CPU
Date: 
Message-ID: <5r-dnZQYGeWNnrbcRVn-sw@speakeasy.net>
Frank Buss  <··@frank-buss.de> wrote:
+---------------
| thanks, this is what I want to implement. No GOTO, even not at the 
| machine level, only a real Lisp CPU with WHILE, IF and PROGN.
+---------------

By the way, if you allow even one global variable, then with PROGN,
IF, and WHILE you can efficiently emulate *arbitrary* patterns of
spaghetti-code GOTOs. [See William Wulf and Mary Shaw, "Global Variable
Considered Harmful", ACM "SIGPLAN Notices", Vol. 8, Issue 2 (Feb. 1973)
<http://portal.acm.org/citation.cfm?id=953355>.]

;-}
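The emulation Warnock alludes to is the classic state-variable
trampoline; a sketch (Python stand-in, with an invented spaghetti
program as the example):

```python
# One global "program counter" variable plus WHILE and IF emulate
# arbitrary GOTOs: each labelled block runs when pc matches its label,
# and "goto L" is just an assignment pc = L.

def run_spaghetti(n):
    pc = 1             # the single "global" variable
    acc = 0
    while pc != 0:     # WHILE: keep dispatching until we "goto" exit
        if pc == 1:    # L1: acc += n; goto L3
            acc += n
            pc = 3
        elif pc == 3:  # L3: n -= 1; if n > 0 goto L1 else goto exit
            n -= 1
            pc = 1 if n > 0 else 0
    return acc         # sums n + (n-1) + ... + 1
```

So banning GOTO syntactically does not by itself ban GOTO-shaped
control flow, which is the point of the Wulf/Shaw paper cited above.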


-Rob

p.s. IMHO, for a "real Lisp CPU" you also need abstraction,
a.k.a. LAMBDA and function call.

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Frank Buss
Subject: Re: The Lisp CPU
Date: 
Message-ID: <2p7ga7Fhvcc5U1@uni-berlin.de>
····@rpw3.org (Rob Warnock) wrote:

> p.s. IMHO, for a "real Lisp CPU" you also need abstraction,
> a.k.a. LAMBDA and function call.

yes, this was one major missing feature. I've added this and some more 
features to my current implementation:

- refactored alloc-list with mem-cons
- changed set-car/cdr to mem-rplaca/rplacd
- renamed get-car/cdr to mem-car/cdr
- extended value-slot of a symbol to a list for dynamic scope shadowing
- new primitives: defun, if, setq, nil, cons, car and cdr

Now it is getting interesting and you can write something like this:

(progn
  (defun square (n) (if (> n 0) (cons (* n n) (square (- n 1)))))
  (square 4))

which produces this output:

(16 9 4 1 )

LAMBDA and FUNCALL are easy to add (because of the new function-type), 
perhaps someone who wants to study the code could try it. And the code 
needs a bit more clean-up. Perhaps defining a class for the memory would 
be a good idea, with all the low-level accessors like mem-cons, mem-car 
etc. as methods.

The updated source:

http://www.frank-buss.de/lispcpu/lispcpu.lisp.txt

and minimal changed documentation:

http://www.frank-buss.de/lispcpu/index.html

The next major thing will be Garbage Collection, which could be hardware 
optimized, but first I'll implement it in Lisp.

There are interesting concepts with tagged memory pages, backpointers, 
incremental or stop-and-go collection etc. I think I need an incremental 
GC, because one of my goals is a game and I don't want to stop the game 
for more than a few ms, because otherwise movements on the screen will 
be discontinuous and there could be dropouts in the music. Are there any 
more things I have to think about for GC?
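One standard shape for such an incremental GC is tri-color marking with
a bounded amount of work per step; a sketch (Python stand-in; the heap
model of (car, cdr) index pairs with -1 for nil is an assumption):

```python
# Incremental tri-color marking: cells start WHITE, roots are GREYed,
# and each gc_step blackens at most `budget` grey cells, so a game loop
# can interleave marking with rendering. Surviving WHITE cells after
# marking completes are garbage.

WHITE, GREY, BLACK = 0, 1, 2

def gc_init(heap, roots):
    colors = [WHITE] * len(heap)
    grey = []
    for r in roots:
        colors[r] = GREY
        grey.append(r)
    return colors, grey

def gc_step(heap, colors, grey, budget):
    """Blacken up to `budget` grey cells; return True when marking is done."""
    while grey and budget > 0:
        cell = grey.pop()
        colors[cell] = BLACK
        for child in heap[cell]:            # the (car, cdr) pointers
            if child != -1 and colors[child] == WHITE:
                colors[child] = GREY
                grey.append(child)
        budget -= 1
    return not grey

# heap[i] = (car, cdr); cell 3 is unreachable garbage:
heap = [(1, 2), (-1, -1), (1, -1), (0, -1)]
colors, grey = gc_init(heap, roots=[0])
while not gc_step(heap, colors, grey, budget=2):
    pass                                    # game logic would run here
free_list = [i for i, c in enumerate(colors) if c == WHITE]
```

One thing to add to the list of GC concerns: a mutator running between
steps needs a write barrier, or it can hide a live cell behind an
already-black one and get it collected.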

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Pascal Bourguignon
Subject: Re: The Lisp CPU
Date: 
Message-ID: <87u0v6eshn.fsf@thalassa.informatimago.com>
·········@chase3000.com (Peter Scott) writes:
> I'm writing a program to simulate this CPU, after a fashion. I hope to
> learn a lot, and I'm having fun. But there are two problems:

And about simulating this CPU properly, I'd check EVAL/APPLY from SICP:
http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-26.html#%_sec_4.1

But since a Mealy or Moore Finite State Machine is not equivalent to a
Turing Machine (you'd need at least to add a stack), you'll probably
find it easier, after reading the following chapter in SICP, to
microcode EVAL and APPLY over a classical register (and stack) machine.

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

Our enemies are innovative and resourceful, and so are we. They never
stop thinking about new ways to harm our country and our people, and
neither do we.