From: Frank Buss
Subject: Cells compared to Flow-Based Programming
Date: 
Message-ID: <ybytt8hbcovn.19lg9cifbvfv5$.dlg@40tude.net>
I've read nearly half of the book http://www.jpaulmorrison.com/fbp/book.pdf
from the page http://www.jpaulmorrison.com/fbp/ and I think it would be
interesting for some people in this newsgroup to summarize the concepts of
Flow-Based Programming (FBP), compare it to Cells, and see what we can
learn from it. If you'd like to read some of the basic concepts yourself,
take a look at http://en.wikipedia.org/wiki/Flow-based_programming for FBP
and http://bc.tech.coop/blog/030911.html for Cells.

Cells is a spreadsheet-like object-oriented CLOS extension: you define
classes with getter and setter methods, and whenever a value is changed,
all values that are calculated from it change too; listeners can be
registered and are called on value change.
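The spreadsheet idea is small enough to sketch in plain Lisp. The sketch
below is NOT the actual Cells API; CELL, CELL-SET and the slot names are
hypothetical, and it handles only one level of dependency, just to show the
"dependent values recalculate and listeners fire on change" behaviour:

```lisp
;; Hypothetical sketch of the spreadsheet idea, not the real Cells API.
;; A cell is either an input value or a formula over other cells.
(defstruct cell value formula dependents listeners)

(defun cell-set (cell new-value)
  "Change an input cell, recompute its dependents, fire listeners."
  (setf (cell-value cell) new-value)
  (dolist (dep (cell-dependents cell))
    (setf (cell-value dep) (funcall (cell-formula dep)))
    (dolist (listener (cell-listeners dep))
      (funcall listener (cell-value dep))))
  new-value)

;; Example: *area* is calculated from *width*.
(defparameter *width* (make-cell :value 3))
(defparameter *area*
  (make-cell :formula (lambda () (* (cell-value *width*) 10))))
(push *area* (cell-dependents *width*))

(cell-set *width* 5)
(cell-value *area*) ; => 50
```

A real implementation would of course track dependencies automatically and
handle chains of formulas; this only shows the propagation idea.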

FBP is very different: when developing applications, you start by
defining information packets (IPs). In Lisp an IP could be a hashtable or
any other Lisp object. Then you define some processes and interconnect
them. A process has inputs and outputs for receiving and sending IPs. The
process itself is a program with local storage. A process can be
instantiated multiple times and can be configured with configuration IPs
(there are preemptive and cooperative multitasking implementations).
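In Lisp those building blocks are easy to sketch. The following is a
minimal, hypothetical reading (none of these names come from the book): an
IP is any Lisp object, a connection is a FIFO queue, and a process is a
body function with input and output connections.

```lisp
;; Hypothetical sketch of the FBP building blocks (names invented).
(defstruct connection (queue '()))

(defun send-ip (conn ip)
  (setf (connection-queue conn)
        (append (connection-queue conn) (list ip))))

(defun receive-ip (conn)
  (pop (connection-queue conn)))

(defstruct process name body inputs outputs)

(defun step-process (p)
  "One cooperative activation: consume one IP per input, apply the
body, send the result to every output. (A real driver would check
that all inputs have IPs before popping any of them.)"
  (let ((ips (mapcar #'receive-ip (process-inputs p))))
    (when (notany #'null ips)
      (let ((result (apply (process-body p) ips)))
        (dolist (out (process-outputs p))
          (send-ip out result))))))

;; A one-process network: double whatever arrives.
(defparameter *in* (make-connection))
(defparameter *out* (make-connection))
(defparameter *doubler*
  (make-process :name 'double :body (lambda (x) (* 2 x))
                :inputs (list *in*) :outputs (list *out*)))

(send-ip *in* 21)
(step-process *doubler*)
(receive-ip *out*) ; => 42
```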

The main motivation of FBP is to reuse stable, tested processes across
many applications. According to the book, IBM used it successfully in many
projects; in one example spanning three projects, the third project
achieved a reuse rate of about 97% (PDF pages 42/43 in the book). This
sounds impressive, but they were mostly talking about business
applications, like accounting applications for a bank, and if you have
three projects for three banks, I would expect such a rate from most
modern concepts and languages. But in those days, when people were writing
assembler, COBOL, PL/I and other strange things on mainframes (some of the
languages the author used at the time the book was written), this would
have been more difficult.

I think Cells is finer-grained: there are not a few connected black
boxes, but a network of connected values. FBP has the advantage that
connections are explicit, but the IP concept doesn't allow connecting one
output to multiple inputs (though you can build a process with one input
and two outputs that duplicates the input), as the spreadsheet-like Cells
does. I don't know which concept is better; maybe it depends on the
application. But I think every Cells network can be transformed into an
FBP-like network, while the other direction would be more difficult, so
FBP may be the more general concept.

Cells is somewhat orthogonal to FBP: you could use Cells objects as IPs
inside processes, and you could use FBP networks inside the update rules
of Cells objects.

Anyone interested in implementing FBP in Lisp? Many things mentioned in
the book that may be difficult in other languages, like the loose coupling
of components with IPs and the difficulties of typed structures, linking,
etc., are no problem in Lisp. An interesting problem would be to implement
the "driver" in Lisp, which provides an API for building and invoking the
network. For testing it might be nice to implement a simple virtual
machine, which could be used to implement process activation and
deactivation and preemptive multitasking in Common Lisp. Another idea
would be to use multitasking with Scheme-like continuations, as
implemented in ARNESI for Common Lisp.
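One possible shape of such a driver is a cooperative scheduler that keeps
activating any process that can make progress until the network is
quiescent. A minimal sketch, with all names invented for illustration:

```lisp
;; Hypothetical "driver" sketch: each process is a thunk that returns
;; T when it did some work (consumed an IP) and NIL when it was
;; starved; the driver cycles until the network is quiescent.
(defun run-network (processes)
  (loop while (some #'funcall processes)))

;; A two-stage pipeline over plain lists used as queues.
(defparameter *source* (list 1 2 3))
(defparameter *middle* '())
(defparameter *sink* '())

(defun doubler ()
  (when *source*
    (push (* 2 (pop *source*)) *middle*)
    t))

(defun collector ()
  (when *middle*
    (push (pop *middle*) *sink*)
    t))

(run-network (list #'doubler #'collector))
*sink* ; => (2 4 6)
```

A preemptive or continuation-based driver would replace the thunks with
suspendable processes, but the scheduling loop stays the same shape.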

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de

From: George Neuner
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <gtmg34l11tdfngtsb5e7jm6sii64i7shr6@4ax.com>
On Sat, 24 May 2008 01:30:20 +0200, Frank Buss <··@frank-buss.de>
wrote:

>I've read nearly half of the book http://www.jpaulmorrison.com/fbp/book.pdf
>from the page http://www.jpaulmorrison.com/fbp/ and I think it would be
>interesting for some people in this newsgroup to summarize the concepts of
>Flow-Based Programming (FBP), compare it to Cells, and see what we can
>learn from it. If you'd like to read some of the basic concepts yourself,
>take a look at http://en.wikipedia.org/wiki/Flow-based_programming for FBP
>and http://bc.tech.coop/blog/030911.html for Cells.
>
>Cells is a spreadsheet-like object-oriented CLOS extension: you define
>classes with getter and setter methods, and whenever a value is changed,
>all values that are calculated from it change too; listeners can be
>registered and are called on value change.
>
>FBP is very different: when developing applications, you start by
>defining information packets (IPs). In Lisp an IP could be a hashtable or
>any other Lisp object. Then you define some processes and interconnect
>them. A process has inputs and outputs for receiving and sending IPs. The
>process itself is a program with local storage. A process can be
>instantiated multiple times and can be configured with configuration IPs
>(there are preemptive and cooperative multitasking implementations).
>
>The main motivation of FBP is to reuse stable, tested processes across
>many applications. According to the book, IBM used it successfully in many
>projects; in one example spanning three projects, the third project
>achieved a reuse rate of about 97% (PDF pages 42/43 in the book). This
>sounds impressive, but they were mostly talking about business
>applications, like accounting applications for a bank, and if you have
>three projects for three banks, I would expect such a rate from most
>modern concepts and languages. But in those days, when people were writing
>assembler, COBOL, PL/I and other strange things on mainframes (some of the
>languages the author used at the time the book was written), this would
>have been more difficult.
>
>I think Cells is finer-grained: there are not a few connected black
>boxes, but a network of connected values. FBP has the advantage that
>connections are explicit, but the IP concept doesn't allow connecting one
>output to multiple inputs (though you can build a process with one input
>and two outputs that duplicates the input), as the spreadsheet-like Cells
>does. I don't know which concept is better; maybe it depends on the
>application. But I think every Cells network can be transformed into an
>FBP-like network, while the other direction would be more difficult, so
>FBP may be the more general concept.
>
>Cells is somewhat orthogonal to FBP: you could use Cells objects as IPs
>inside processes, and you could use FBP networks inside the update rules
>of Cells objects.
>
>Anyone interested in implementing FBP in Lisp? Many things mentioned in
>the book that may be difficult in other languages, like the loose coupling
>of components with IPs and the difficulties of typed structures, linking,
>etc., are no problem in Lisp. An interesting problem would be to implement
>the "driver" in Lisp, which provides an API for building and invoking the
>network. For testing it might be nice to implement a simple virtual
>machine, which could be used to implement process activation and
>deactivation and preemptive multitasking in Common Lisp. Another idea
>would be to use multitasking with Scheme-like continuations, as
>implemented in ARNESI for Common Lisp.

You're reading too much into particular implementations and uses
described in the book.  Flow based programming, and the dataflow model
in general, is about streaming data through a series of "filters"
which examine and/or transform it.  The granularity of the filter -
process, thread, function, multiply-adder, whatever - is not really
relevant to the concept.

Originally dataflow was conceived as a software model of a parallel
hardware scheme called "asynchronous logic with registers".  The
hardware idea was to have separately clocked (or free running analog)
logic units connected by register queues so each unit could run at its
own speed, triggered whenever input was available and not worrying
(much) about synchronization for input or output.

The simplest of the software dataflow models is one of server
processes communicating by messaging, but that is not the only
possible model.  Somebody recently mentioned the Linda programming
language - a somewhat different implementation where all communication
between application units is performed through a software associative
memory.
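For concreteness, the Linda primitives (OUT, RD, IN over an associative
tuple memory) can be sketched in a few lines. This is a toy version with
invented names: it matches with EQUAL plus a ? wildcard, and simply
returns NIL where real Linda would block waiting for a match.

```lisp
;; Toy sketch of Linda's associative memory (names invented): OUT
;; adds a tuple, RD finds a matching tuple, IN finds and removes it.
(defparameter *tuple-space* '())

(defun tuple-out (tuple)
  (push tuple *tuple-space*))

(defun tuple-match-p (pattern tuple)
  (and (= (length pattern) (length tuple))
       (every (lambda (p x) (or (eq p '?) (equal p x)))
              pattern tuple)))

(defun tuple-rd (pattern)
  (find-if (lambda (tuple) (tuple-match-p pattern tuple))
           *tuple-space*))

(defun tuple-in (pattern)
  (let ((tuple (tuple-rd pattern)))
    (when tuple
      (setf *tuple-space* (remove tuple *tuple-space* :count 1)))
    tuple))

;; A worker leaves a result; anyone who names its shape can pick it up.
(tuple-out '(result job-1 42))
(tuple-rd '(result job-1 ?)) ; => (RESULT JOB-1 42)
```

Note how processes and connections never appear: components coordinate
purely by the shapes of the tuples they read and write.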

You're also getting too caught up in the communication abstraction.
The book's "information packets" are nothing more than the logical
"wires" connecting the functional units of the application.  It really
doesn't matter whether the wiring is implemented using IPC messaging,
in(ter)-process signaling, function callbacks, shared memory or actual
wires.

I don't know too much about Kenny's Cell projects except from skimming
what he has written here, but IIUC, Cells allows objects to be linked
into dependency notification chains so that a change to one object can
affect all its dependents.  AFAICT from such a simple understanding,
Cells qualifies as an implementation of dataflow.

George
--
for email reply remove "/" from address
From: Frank Buss
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <1sy3be1b2w94e.b3p97cfmlwo6$.dlg@40tude.net>
George Neuner wrote:

> The simplest of the software dataflow models is one of server
> processes communicating by messaging, but that is not the only
> possible model.  Somebody recently mentioned the Linda programming
> language - a somewhat different implementation where all communication
> between application units is performed through a software associative
> memory.

Erlang has another nice idea for connections between processes, similar to
the mailbox functions in LispWorks.

> You're also getting too caught up in the communication abstraction.
> The book's "information packets" are nothing more than the logical
> "wires" connecting the functional units of the application. 

I think this is wrong, because the information packets are the data that
travel through the connections.

> It really
> doesn't matter whether the wiring is implemented using IPC messaging,
> in(ter)-process signaling, function callbacks, shared memory or actual
> wires.

Yes, this is true. I think it could be very useful for high performance
applications with multiple CPU cores and multiple computers.

> I don't know too much about Kenny's Cell projects except from skimming
> what he has written here, but IIUC, Cells allows objects to be linked
> into dependency notification chains so that a change to one object can
> affect all its dependents.  AFAICT from such a simple understanding,
> Cells qualifies as an implementation of dataflow.

This depends on how you define "dataflow" :-)

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483879b6$0$25066$607ed4bc@cv.net>
Frank Buss wrote:
> George Neuner wrote:
> 
> 
>>The simplest of the software dataflow models is one of server
>>processes communicating by messaging, but that is not the only
>>possible model.  Somebody recently mentioned the Linda programming
>>language - a somewhat different implementation where all communication
>>between application units is performed through a software associative
>>memory.
> 
> 
> Erlang has another nice idea for connections between processes, similar to
> the mailbox functions in LispWorks.
> 
> 
>>You're also getting too caught up in the communication abstraction.
>>The book's "information packets" are nothing more than the logical
>>"wires" connecting the functional units of the application. 
> 
> 
> I think this is wrong, because the information packets are the data that
> travel through the connections.


I am reminded of the discovery that electric current is a moving hole. I 
certainly think of Cells as dataflow, but /that/ datum just flows from 
the cell that calculated it to the cell that uses it to calculate some 
other datum potentially quite different. Ah, the box is now 3cm wide? 
Then the launch decision on all ICBMs aimed at Canada is now t. 
Something like that.


> 
> 
>>It really
>>doesn't matter whether the wiring is implemented using IPC messaging,
>>in(ter)-process signaling, function callbacks, shared memory or actual
>>wires.
> 
> 
> Yes, this is true. I think it could be very useful for high performance
> applications with multiple CPU cores and multiple computers.
> 
> 
>>I don't know too much about Kenny's Cell projects except from skimming
>>what he has written here, but IIUC, Cells allows objects to be linked
>>into dependency notification chains so that a change to one object can
>>affect all its dependents.  AFAICT from such a simple understanding,
>>Cells qualifies as an implementation of dataflow.
> 
> 
> This depends on how you define "dataflow" :-)
> 

Word.

kenny

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: George Neuner
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <aslh34949tc5u8sitrjhmk4psor7n7vuuq@4ax.com>
On Sat, 24 May 2008 21:48:50 +0200, Frank Buss <··@frank-buss.de>
wrote:

>George Neuner wrote:
>
>> You're also getting too caught up in the communication abstraction.
>> The book's "information packets" are nothing more than the logical
>> "wires" connecting the functional units of the application. 
>
>I think this is wrong, because the information packets are the data that
>travel through the connections.

Not exactly ... the "control IPs" are not application data but
commands to start/stop processes and make/break connections.  Thus
control IPs at least are logically part of the wiring of the
distributed application.  I suppose it's debatable.

Right now I class FBP as interesting rather than really useful.  My
first impression upon reading the book is that the author[*] hasn't
adequately addressed program scaling complexity.  From what I can see,
FBP programs will be at least as complex as MP programs.

[*] I'm unsure of the author's surname.  Being a stupid American, the
name "van Nostrand Reinhold", as written, looks strange.  Google
didn't help.  My Anglish guess would be "Reinhold van Nostrand" but I
wouldn't dare to state it as if I really knew.



The elements of FBP are similar to Carriero and Gelernter's Linda
system (the author admits this).  The major difference is that FBP
seems to be designed as a rather traditional distributed component
scheme that requires explicit process creation and IPC connections
between components.  The components of a Linda application communicate
via a (logical) shared scoreboard maintained cooperatively by Linda
servers.  Linda processes and connections are created implicitly by
naming them as data sources or sinks, and lightweight processes can
migrate between servers (Java-Linda makes this particularly effective
because servers can exchange bytecode even in heterogeneous networks).

Existing Linda implementations don't scale physically as well as MP
does and FBP might, but Linda's programming model is completely
ignorant of the number of CPUs.  Server implementation permitting, the
same Linda program code will run unchanged on any number of processors
and transparently exploit any component parallelism in the
application.

George
--
for email reply remove "/" from address
From: chris
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <87od47t5i2.fsf@dba2.int.libertyrms.com>
George Neuner <·········@/comcast.net> writes:
> [*] I'm unsure of the author's surname.  Being a stupid American, the
> name "van Nostrand Reinhold", as written, looks strange.  Google
> didn't help.  My Anglish guess would be "Reinhold van Nostrand" but I
> wouldn't dare to state it as if I really knew.

The author's surname is Morrison.  van Nostrand is the name of the
publishing house.
-- 
(reverse (concatenate 'string "ofni.sesabatadxunil" ·@" "enworbbc"))
http://linuxdatabases.info/info/finances.html
"I tell my students to think of Steele's book as the Oxford English
Dictionary and Norvig's as the complete works of Shakespeare."
-- Prof. Wendy Lenhert (Massachusetts)
From: George Neuner
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <h2lk949bi4geecj66kuka01idab3bsgr69@4ax.com>
On Tue, 05 Aug 2008 23:19:03 GMT, chris
<·····@dba2.int.libertyrms.com> wrote:

>George Neuner <·········@/comcast.net> writes:
>> [*] I'm unsure of the author's surname.  Being a stupid American, the
>> name "van Nostrand Reinhold", as written, looks strange.  Google
>> didn't help.  My Anglish guess would be "Reinhold van Nostrand" but I
>> wouldn't dare to state it as if I really knew.
>
>The author's surname is Morrison.  van Nostrand is the name of the
>publishing house.

Thank you.  I saw the name Morrison cited in the bibliography, but it is
not anywhere in the text I have.

George
From: Paul Tarvydas
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <g1enj6$mq5$1@aioe.org>
George Neuner wrote:

> You're also getting too caught up in the communication abstraction.
> The book's "information packets" are nothing more than the logical
> "wires" connecting the functional units of the application.  It really
> doesn't matter whether the wiring is implemented using IPC messaging,
> in(ter)-process signaling, function callbacks, shared memory or actual
> wires.

It matters if you are addressing performance :-).  Our VF technology is
similar to FBP (event-driven instead of IP's).  We implemented and shipped
real products around 1995 on 8-bit 8051's with 32K of RAM (128K ROM).  IPC
messaging, inter-process signalling, RTOS's, C, etc. were all non-starters. 
The paradigm of "events" and "wires" made it possible to imagine and
implement a tiny kernel and full-blown applications (600-page spec,
resulting in about the equivalent of 300 "threads" iirc) on such stunted
hardware.

It also matters if you believe (like I do) that it is valuable to decompose
a paradigm into its most primitive elements.

pt
From: George Neuner
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <0h3m34d2jh8otvjnkk36ic26umjjuhmbf7@4ax.com>
On Mon, 26 May 2008 12:18:34 -0400, Paul Tarvydas
<········@visualframeworksinc.com> wrote:

>George Neuner wrote:
>
>> You're also getting too caught up in the communication abstraction.
>> The book's "information packets" are nothing more than the logical
>> "wires" connecting the functional units of the application.  It really
>> doesn't matter whether the wiring is implemented using IPC messaging,
>> in(ter)-process signaling, function callbacks, shared memory or actual
>> wires.
>
>It matters if you are addressing performance :-).  Our VF technology is
>similar to FBP (event-driven instead of IP's).  We implemented and shipped
>real products around 1995 on 8-bit 8051's with 32K of RAM (128K ROM).  IPC
>messaging, inter-process signalling, RTOS's, C, etc. were all non-starters. 
>The paradigm of "events" and "wires" made it possible to imagine and
>implement a tiny kernel and full-blown applications (600-page spec,
>resulting in about the equivalent of 300 "threads" iirc) on such stunted
>hardware.
>
>It also matters if you believe (like I do) that it is valuable to decompose
>a paradigm into its most primitive elements.

Of course performance matters.  I was simply pointing out that the
implementation described in the book was only one particular way of
doing it and not to read too much into the method.  The author himself
compared his IP abstraction to several other coordination methods
without really explaining why he felt IP was better.

George
--
for email reply remove "/" from address
From: Slobodan Blazeski
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <4db62f7d-1ccd-4e89-9a9a-a2c3705a7a61@k30g2000hse.googlegroups.com>
On May 26, 9:30 pm, George Neuner <·········@/comcast.net> wrote:
> On Mon, 26 May 2008 12:18:34 -0400, Paul Tarvydas
> <········@visualframeworksinc.com> wrote:
> >George Neuner wrote:
>
> >> You're also getting too caught up in the communication abstraction.
> >> The book's "information packets" are nothing more than the logical
> >> "wires" connecting the functional units of the application.  It really
> >> doesn't matter whether the wiring is implemented using IPC messaging,
> >> in(ter)-process signaling, function callbacks, shared memory or actual
> >> wires.
>
> >It matters if you are addressing performance :-).  Our VF technology is
> >similar to FBP (event-driven instead of IP's).  We implemented and shipped
> >real products around 1995 on 8-bit 8051's with 32K of RAM (128K ROM).  IPC
> >messaging, inter-process signalling, RTOS's, C, etc. were all non-starters.
> >The paradigm of "events" and "wires" made it possible to imagine and
> >implement a tiny kernel and full-blown applications (600-page spec,
> >resulting in about the equivalent of 300 "threads" iirc) on such stunted
> >hardware.
>
> >It also matters if you believe (like I do) that it is valuable to decompose
> >a paradigm into its most primitive elements.
>
> Of course performance matters.  I was simply pointing out that the
> implementation described in the book was only one particular way of
> doing it and not to read too much into the method.  The author himself
> compared his IP abstraction to several other coordination methods
> without really explaining why he felt IP was better.
>
> George
> --
> for email reply remove "/" from address

The main idea of FBP is that you have processing centers linked
by routes along which IPs move.
Let's start with a dumb idea. We have a function called double:
(defun double (x) (* 2 x))
Now we design a processing center like this:
(make-center #'double)
=> double-center-1

Now our program is just defining a path of links.

(path user-input double-center-1 print-center)

Note that we don't call the functions. Data just moves on its way.
To spice things up a little, add a few more paths:
(path double-center-1 inverse mod print-center)

And this one:
(path double-center double-center print-center)

There is no need to call anything, no spawning of processes, etc. All
our job is to define paths. The IPs know which path they belong to, so
they move along the path. The processing center knows which path sent it
those IPs, so it sends the result IPs along the same path. That's it.
Everything else is just an implementation technique.
While explicit parallelization may find a place at the function level,
at the path level everything is implicitly parallel.
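The MAKE-CENTER / PATH pseudocode above can be made to run with a couple
of helper definitions. Everything below is one illustrative reading of the
idea, not an API from the book: a center wraps a function and a list of
downstream centers, PATH wires centers together, and INJECT pushes an IP
through the wired network.

```lisp
;; Illustrative implementation of MAKE-CENTER / PATH / INJECT.
(defstruct (center (:constructor %make-center)) fn (next '()))

(defun make-center (fn)
  (%make-center :fn fn))

(defun path (&rest centers)
  "Wire each center to the next one on the path."
  (loop for (a b) on centers
        while b
        do (push b (center-next a)))
  centers)

(defun inject (center ip)
  "Send one IP into a center; result IPs move along the wired paths."
  (let ((result (funcall (center-fn center) ip)))
    (if (center-next center)
        (dolist (n (center-next center))
          (inject n result))
        result)))

(defparameter *double-center* (make-center (lambda (x) (* 2 x))))
(defparameter *print-center*  (make-center #'print))

(path *double-center* *print-center*)
(inject *double-center* 21) ; prints 42
```

Here INJECT is still ordinary function calling under the hood; a driver
with queues and a scheduler would be needed for the implicit parallelism.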
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483c18c3$0$11621$607ed4bc@cv.net>
Slobodan Blazeski wrote:
> On May 26, 9:30 pm, George Neuner <·········@/comcast.net> wrote:
> 
>>On Mon, 26 May 2008 12:18:34 -0400, Paul Tarvydas
>><········@visualframeworksinc.com> wrote:
>>
>>>George Neuner wrote:
>>
>>>>You're also getting too caught up in the communication abstraction.
>>>>The book's "information packets" are nothing more than the logical
>>>>"wires" connecting the functional units of the application.  It really
>>>>doesn't matter whether the wiring is implemented using IPC messaging,
>>>>in(ter)-process signaling, function callbacks, shared memory or actual
>>>>wires.
>>
>>>It matters if you are addressing performance :-).  Our VF technology is
>>>similar to FBP (event-driven instead of IP's).  We implemented and shipped
>>>real products around 1995 on 8-bit 8051's with 32K of RAM (128K ROM).  IPC
>>>messaging, inter-process signalling, RTOS's, C, etc. were all non-starters.
>>>The paradigm of "events" and "wires" made it possible to imagine and
>>>implement a tiny kernel and full-blown applications (600-page spec,
>>>resulting in about the equivalent of 300 "threads" iirc) on such stunted
>>>hardware.
>>
>>>It also matters if you believe (like I do) that it is valuable to decompose
>>>a paradigm into its most primitive elements.
>>
>>Of course performance matters.  I was simply pointing out that the
>>implementation described in the book was only one particular way of
>>doing it and not to read too much into the method.  The author himself
>>compared his IP abstraction to several other coordination methods
>>without really explaining why he felt IP was better.
>>
>>George
>>--
>>for email reply remove "/" from address
> 
> 
> The main idea of FBP is that you have processing centers linked
> by routes along which IPs move.
> Let's start with a dumb idea. We have a function called double:
> (defun double (x) (* 2 x))
> Now we design a processing center like this:
> (make-center #'double)
> => double-center-1
> 
> Now our program is just defining a path of links.
> 
> (path user-input double-center-1 print-center)
> 
> Note that we don't call the functions. Data just moves on its way.
> To spice things up a little, add a few more paths:
> (path double-center-1 inverse mod print-center)
> 
> And this one:
> (path double-center double-center print-center)
> 
> There is no need to call anything,...

Looks like calls to me, if I am the one laying out the path:

(print-center (double-center (double-center <input>)))

Change it to Smalltalk and "send messages" and you still have the same 
paradigm: programmer-centric, hand-implemented, hard-wired dataflow, and 
wham yer dead.

kt



-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Slobodan Blazeski
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <ede5f21f-b1cd-4b4b-b8eb-9fda14dcfacb@k37g2000hsf.googlegroups.com>
On May 27, 4:20 pm, Ken Tilton <···········@optonline.net> wrote:
> Slobodan Blazeski wrote:
> > On May 26, 9:30 pm, George Neuner <·········@/comcast.net> wrote:
>
> >>On Mon, 26 May 2008 12:18:34 -0400, Paul Tarvydas
>
> >><········@visualframeworksinc.com> wrote:
>
> >>>George Neuner wrote:
>
> >>>>You're also getting too caught up in the communication abstraction.
> >>>>The book's "information packets" are nothing more than the logical
> >>>>"wires" connecting the functional units of the application.  It really
> >>>>doesn't matter whether the wiring is implemented using IPC messaging,
> >>>>in(ter)-process signaling, function callbacks, shared memory or actual
> >>>>wires.
>
> >>>It matters if you are addressing performance :-).  Our VF technology is
> >>>similar to FBP (event-driven instead of IP's).  We implemented and shipped
> >>>real products around 1995 on 8-bit 8051's with 32K of RAM (128K ROM).  IPC
> >>>messaging, inter-process signalling, RTOS's, C, etc. were all non-starters.
> >>>The paradigm of "events" and "wires" made it possible to imagine and
> >>>implement a tiny kernel and full-blown applications (600-page spec,
> >>>resulting in about the equivalent of 300 "threads" iirc) on such stunted
> >>>hardware.
>
> >>>It also matters if you believe (like I do) that it is valuable to decompose
> >>>a paradigm into its most primitive elements.
>
> >>Of course performance matters.  I was simply pointing out that the
> >>implementation described in the book was only one particular way of
> >>doing it and not to read too much into the method.  The author himself
> >>compared his IP abstraction to several other coordination methods
> >>without really explaining why he felt IP was better.
>
> >>George
> >>--
> >>for email reply remove "/" from address
>
> > The main idea of FBP is that you have processing centers linked
> > by routes along which IPs move.
> > Let's start with a dumb idea. We have a function called double:
> > (defun double (x) (* 2 x))
> > Now we design a processing center like this:
> > (make-center #'double)
> > => double-center-1
>
> > Now our program is just defining a path of links.
>
> > (path user-input double-center-1 print-center)
>
> > Note that we don't call the functions. Data just moves on its way.
> > To spice things up a little, add a few more paths:
> > (path double-center-1 inverse mod print-center)
>
> > And this one:
> > (path double-center double-center print-center)
>
> > There is no need to call anything,...
>
> Looks like calls to me, if I am the one laying out the path:
>
> (print-center (double-center (double-center <input>)))
>
> Change it to smalltalk and "send messages" and you still have the same
> paradigm: programmer-centric hand-implementd hard-wired dataflow, and
> wham yer dead.
You are right, but how does Cells solve this?
>
> kt
>
> --
> http://smuglispweeny.blogspot.com/
> http://www.theoryyalgebra.com/
> ECLM rant: http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
> ECLM talk: http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483c4299$0$25038$607ed4bc@cv.net>
Slobodan Blazeski wrote:
> On May 27, 4:20 pm, Ken Tilton <···········@optonline.net> wrote:
> 
>>Slobodan Blazeski wrote:
>>
>>>On May 26, 9:30 pm, George Neuner <·········@/comcast.net> wrote:
>>
>>>>On Mon, 26 May 2008 12:18:34 -0400, Paul Tarvydas
>>
>>>><········@visualframeworksinc.com> wrote:
>>
>>>>>George Neuner wrote:
>>
>>>>>>You're also getting too caught up in the communication abstraction.
>>>>>>The book's "information packets" are nothing more than the logical
>>>>>>"wires" connecting the functional units of the application.  It really
>>>>>>doesn't matter whether the wiring is implemented using IPC messaging,
>>>>>>in(ter)-process signaling, function callbacks, shared memory or actual
>>>>>>wires.
>>
>>>>>It matters if you are addressing performance :-).  Our VF technology is
>>>>>similar to FBP (event-driven instead of IP's).  We implemented and shipped
>>>>>real products around 1995 on 8-bit 8051's with 32K of RAM (128K ROM).  IPC
>>>>>messaging, inter-process signalling, RTOS's, C, etc. were all non-starters.
>>>>>The paradigm of "events" and "wires" made it possible to imagine and
>>>>>implement a tiny kernel and full-blown applications (600-page spec,
>>>>>resulting in about the equivalent of 300 "threads" iirc) on such stunted
>>>>>hardware.
>>
>>>>>It also matters if you believe (like I do) that it is valuable to decompose
>>>>>a paradigm into its most primitive elements.
>>
>>>>Of course performance matters.  I was simply pointing out that the
>>>>implementation described in the book was only one particular way of
>>>>doing it and not to read too much into the method.  The author himself
>>>>compared his IP abstraction to several other coordination methods
>>>>without really explaining why he felt IP was better.
>>
>>>>George
>>>>--
>>>>for email reply remove "/" from address
>>
>>>The main idea of the fbp is that you have  processing centers linked
>>>by routes where IP moves.
>>>Lets' start with a dumb idea . We have a function called double :
>>>(defun double (x) (* 2 x))
>>>Now we design a processing center like this :
>>>(make-center #'double)
>>>=> double-center-1
>>
>>>Now we have our program is just defining a path of links .
>>
>>>(path user-input double-center-1 print-center)
>>
>>>Note that we don't call the functions. Data just moves on its way.
>>>To spice things up a little, add a few more programs:
>>>(path  double-center-1 inverse mod print-center)
>>
>>>And this one:
>>>(path double-center  double-center print-center)
>>
>>>There is no need to call anything, ...
>>
>>Looks like calls to me, if I am the one laying out the path:
>>
>>(print-center (double-center (double-center <input>)))
>>
>>Change it to smalltalk and "send messages" and you still have the same
>>paradigm: programmer-centric hand-implemented hard-wired dataflow, and
>>wham yer dead.
> 
> You are right but how does cells solve this?

Imagine you have been asked to write a report program that will take in 
all the financial data for a company for last year and print out monthly 
totals for revenue, costs, taxes, etc etc with subtotals etc, totalling 
each for the year, computing a bottomline by month and then for the 
whole year.

You have a great idea. Work out how to import the raw data into 
VisiCalc. How would VisiCalc solve that?

Hint: it will /not/ involve you looking at the VisiCalc columns and rows 
thinking "OK, the data starts here in A1 then flows over to B12 then 
jumps out to C7 and D9". That is push, VisiCalc is pull and really just 
declarative: B12 = A1 + 42. I do not think about "pulling" from A1, I 
just think about computing B12. The flow is an emergent property, not 
part of my thinking.
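
For the curious, the pull model can be sketched in a few lines (Python here
for brevity, and purely illustrative -- this is not Cells, just the bare
dependency-tracking idea): reading a cell inside a formula records the
dependency, so I write the B12 rule and the flow is emergent.

```python
# A minimal sketch (NOT Cells itself) of the "pull" model described above:
# a cell is either an input value or a formula; reading a cell inside a
# formula records a dependency, and setting an input invalidates dependents.

class Cell:
    _current = []          # stack of formula cells being evaluated

    def __init__(self, value=None, formula=None):
        self.formula = formula
        self._value = value          # None doubles as "needs recompute"
        self.dependents = set()

    def get(self):
        # record that whoever is computing right now depends on us
        if Cell._current:
            self.dependents.add(Cell._current[-1])
        if self.formula is not None and self._value is None:
            Cell._current.append(self)
            try:
                self._value = self.formula()
            finally:
                Cell._current.pop()
        return self._value

    def set(self, value):
        self._value = value
        self._invalidate_dependents()

    def _invalidate_dependents(self):
        for d in self.dependents:
            d._value = None
            d._invalidate_dependents()

# B12 = A1 + 42, written declaratively -- the flow is emergent:
a1 = Cell(value=1)
b12 = Cell(formula=lambda: a1.get() + 42)
print(b12.get())   # 43
a1.set(100)
print(b12.get())   # 142
```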

The neat thing is that you can now add columns for new months and watch 
results pile up if you can get a real-time feed out of the CFO, and as 
they make corrections to last year's data....

With Cells you stop and think about how to solve your problem as if your 
application were a spreadsheet model. In RoboCup, a first order set of 
Cells took raw feeds from the server (which is simulated sensory data, 
so we have some work to do just to build a "mental model" of where we 
are on the field and where everything is) and, well, computed just that, 
a model of the field, players, goals, and ball. Now I know where I am 
and in what direction lies the ball etc etc.

The next layer of Cells decided on a strategy (the goalie might decide 
to position themself ideally between the ball and goal until a shot is 
taken (a higher order result), at which point the strategy rule should 
recompute a new strategy: catch the ball). (Hmmm, I should have had it 
consider tipping the ball over the goal or just clearing it.)

The strategy instance computes a series of "tactics" -- steps to take to 
achieve the strategy. Each tactic had a rule to decide when it had been 
achieved, or when it should be abandoned.

The cool thing is that I am in a game with twenty-one other players so I 
can be in the middle of dribbling the ball downfield when my next sight 
input arrives and the worldview cells rerun and suddenly the rule to 
decide whether or not it is time to tap the ball gets back NIL 
for position of ball -- it has gone out of sight -- and with luck I have 
allowed for that and my design either drops back to tactic "find ball" 
or as the designer I have had the foresight to code up the "abandon" 
rule on the dribble strategy to go to true if I can no longer even see 
the ball -- I mean, what's the point, right? (OK, OK, I know, this is 
just an example, a real footballer does not need to see the ball 
continuously, no quibbling plz <g>.) But in any case it is all 
declarative. I cannot think about /particular/ events because the rules 
I write must work for all circumstances as long as the instance I am 
serving is alive. I decide what happens in /any/ event, or better, for 
any state of the surrounding world, specifically the state that matters 
to this slot being calculated by this rule.

Look, it is a new paradigm, OK? Old if you count VisiCalc, but then it 
may as well be from Mars because we programmers have been hand-animating 
our models for fifty years and cannot imagine an application built in 
the mindset of a CFO playing with VisiCalc. It took me a solid year to let 
go of the hand-animation habit even after falling in love with Cells, 
just to let you know how deep is the habit.

kzo

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483c457f$0$25041$607ed4bc@cv.net>
Ken Tilton wrote:
> 
> 
> Slobodan Blazeski wrote:
> 
>> On May 27, 4:20 pm, Ken Tilton <···········@optonline.net> wrote:
>>
>>> Slobodan Blazeski wrote:
>>>
>>>> On May 26, 9:30 pm, George Neuner <·········@/comcast.net> wrote:
>>>
>>>
>>>>> On Mon, 26 May 2008 12:18:34 -0400, Paul Tarvydas
>>>
>>>
>>>>> <········@visualframeworksinc.com> wrote:
>>>
>>>
>>>>>> George Neuner wrote:
>>>
>>>
>>>>>>> You're also getting too caught up in the communication abstraction.
>>>>>>> The book's "information packets" are nothing more than the logical
>>>>>>> "wires" connecting the functional units of the application.  It 
>>>>>>> really
>>>>>>> doesn't matter whether the wiring is implemented using IPC 
>>>>>>> messaging,
>>>>>>> in(ter)-process signaling, function callbacks, shared memory or 
>>>>>>> actual
>>>>>>> wires.
>>>
>>>
>>>>>> It matters if you are addressing performance :-).  Our VF 
>>>>>> technology is
>>>>>> similar to FBP (event-driven instead of IP's).  We implemented and 
>>>>>> shipped
>>>>>> real products around 1995 on 8-bit 8051's with 32K of RAM (128K 
>>>>>> ROM).  IPC
>>>>>> messaging, inter-process signalling, RTOS's, C, etc. were all 
>>>>>> non-starters.
>>>>>> The paradigm of "events" and "wires" made it possible to imagine and
>>>>>> implement a tiny kernel and full-blown applications (600-page spec,
>>>>>> resulting in about the equivalent of 300 "threads" iirc) on such 
>>>>>> stunted
>>>>>> hardware.
>>>
>>>
>>>>>> It also matters if you believe (like I do) that it is valuable to 
>>>>>> decompose
>>>>>> a paradigm into its most primitive elements.
>>>
>>>
>>>>> Of course performance matters.  I was simply pointing out that the
>>>>> implementation described in the book was only one particular way of
>>>>> doing it and not to read too much into the method.  The author himself
>>>>> compared his IP abstraction to several other coordination methods
>>>>> without really explaining why he felt IP was better.
>>>
>>>
>>>>> George
>>>>> -- 
>>>>> for email reply remove "/" from address
>>>
>>>
>>>
>>>> The main idea of FBP is that you have processing centers linked
>>>> by routes where IPs move.
>>>> Let's start with a dumb idea. We have a function called double:
>>>> (defun double (x) (* 2 x))
>>>> Now we design a processing center like this :
>>>> (make-center #'double)
>>>> => double-center-1
>>>
>>>
>>>> Now our program is just defining a path of links.
>>>
>>>
>>>> (path user-input double-center-1 print-center)
>>>
>>>
>>>> Note that we don't call the functions. Data just moves on its way.
>>>> To spice things up a little, add a few more programs:
>>>> (path  double-center-1 inverse mod print-center)
>>>
>>>
>>>> And this one:
>>>> (path double-center  double-center print-center)
>>>
>>>
>>>> There is no need to call anything, ...
>>>
>>>
>>> Looks like calls to me, if I am the one laying out the path:
>>>
>>> (print-center (double-center (double-center <input>)))
>>>
>>> Change it to smalltalk and "send messages" and you still have the same
>>> paradigm: programmer-centric hand-implemented hard-wired dataflow, and
>>> wham yer dead.
>>
>>
>> You are right but how does cells solve this?
> 
> 
> Imagine you have been asked to write a report program that will take in 
> all the financial data for a company for last year and print out monthly 
> totals for revenue, costs, taxes, etc etc with subtotals etc, totalling 
> each for the year, computing a bottomline by month and then for the 
> whole year.
> 
> You have a great idea. Work out how to import the raw data into 
> VisiCalc. How would VisiCalc solve that?
> 
> Hint: it will /not/ involve you looking at the VisiCalc columns and rows 
> thinking "OK, the data starts here in A1 then flows over to B12 then 
> jumps out to C7 and D9". That is push, VisiCalc is pull and really just 
> declarative: B12 = A1 + 42. I do not think about "pulling" from A1, I 
> just think about computing B12. The flow is an emergent property, not 
> part of my thinking.
> 
> The neat thing is that you can now add columns for new months and watch 
> results pile up if you can get a real-time feed out of the CFO, and as 
> they make corrections to last year's data....
> 
> With Cells you stop and think about how to solve your problem as if your 
> application were a spreadsheet model. In RoboCup, a first order set of 
> Cells took raw feeds from the server (which is simulated sensory data, 
> so we have some work to do just to build a "mental model" of where we 
> are on the field and where everything is) and, well, computed just that, 
> a model of the field, players, goals, and ball. Now I know where I am 
> and in what direction lies the ball etc etc.
> 
> The next layer of Cells decided on a strategy (the goalie might decide 
> to position themself ideally between the ball and goal until a shot is 
> taken (a higher order result), at which point the strategy rule should 
> recompute a new strategy: catch the ball). (Hmmm, I should have had it 
> consider tipping the ball over the goal or just clearing it.)
> 
> The strategy instance computes a series of "tactics" -- steps to take to 
> achieve the strategy. Each tactic had a rule to decide when it had been 
> achieved, or when it should be abandoned.
> 
> The cool thing is that I am in a game with twenty-one other players so I 
> can be in the middle of dribbling the ball downfield when my next sight 
> input arrives and the worldview cells rerun and suddenly the rule to 
> decide whether or not it is time to tap the ball gets back NIL 
> for position of ball -- it has gone out of sight -- and with luck I have 
> allowed for that and my design either drops back to tactic "find ball" 
> or as the designer I have had the foresight to code up the "abandon" 
> rule on the dribble strategy to go to true if I can no longer even see 
> the ball -- I mean, what's the point, right? (OK, OK, I know, this is 
> just an example, a real footballer does not need to see the ball 
> continuously, no quibbling plz <g>.)

Actually, it would not be a quibble. Thanks to Cells I am now in a 
position to insert a true mental model above the perception layer 
computed from the raw layer (really just two or three kinds of input). The 
mental model would persist over time in ways plausible given the laws of 
physics and even certain assumptions ("if no one else has kicked the 
ball it should be here based on the last observed location, speed, and 
direction").

Great fun, programming this way. :)

kt

> But in any case it is all 
> declarative. I cannot think about /particular/ events because the rules 
> I write must work for all circumstances as long as the instance I am 
> serving is alive. I decide what happens in /any/ event, or better, for 
> any state of the surrounding world, specifically the state that matters 
> to this slot being calculated by this rule.
> 
> Look, it is a new paradigm, OK? Old if you count VisiCalc, but then it 
> may as well be from Mars because we programmers have been hand-animating 
> our models for fifty years and cannot imagine an application built in 
> the mindset of a CFO playing with VisiCalc. It took me a solid year to let 
> go of the hand-animation habit even after falling in love with Cells, 
> just to let you know how deep is the habit.
> 
> kzo
> 

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Peter Hildebrandt
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483c4e42$0$90268$14726298@news.sunsite.dk>
Slobodan Blazeski wrote:
>>> The main idea of FBP is that you have processing centers linked
>>> by routes where IPs move.
>>> Let's start with a dumb idea. We have a function called double:
>>> (defun double (x) (* 2 x))
>>> Now we design a processing center like this :
>>> (make-center #'double)
>>> => double-center-1
>>> Now our program is just defining a path of links.
>>> (path user-input double-center-1 print-center)
>>> Note that we don't call the functions. Data just moves on its way.
>>> To spice things up a little, add a few more programs:
>>> (path  double-center-1 inverse mod print-center)
>>> And this one:
>>> (path double-center  double-center print-center)
>>> There is no need to call anything, ...
>> Looks like calls to me, if I am the one laying out the path:
>>
>> (print-center (double-center (double-center <input>)))
>>
>> Change it to smalltalk and "send messages" and you still have the same
>> paradigm: programmer-centric hand-implemented hard-wired dataflow, and
>> wham yer dead.
> You are right but how does cells solve this?

CL-USER> (require :cells)
CL-USER> (defpackage :flow (:use :cells :utils-kt :cl))
CL-USER> (in-package :flow)
FLOW> (defmd node ()
	in
	out)
FLOW> (defmd double-node (node)
	:out (c? (* 2 (^in))))
FLOW> (defmd print-node (node)
	:out (c? (print (^in))))
FLOW> (defun path (&rest nodes)
	(when (car nodes)
	  (multiple-value-bind (start node)
               (apply #'path (butlast nodes))
	    (let ((self (make-instance (car (last nodes))
                                        :in (if node
                                            (c? (out node))
                                            (c-in 0)))))
	      (values (or start self) self)))))
FLOW> (path 'double-node 'print-node 'double-node 'print-node)
DOUBLE-NODE26
FLOW> (setf (in *) 4)
8
16

What else do we have?  Inverse and mod?
FLOW> (defmd mod3-node (node)
	:out (c? (mod (^in) 3)))
FLOW> (defmd inverse-node (node)
	:out (c? (if (eql (^in) 0) 0 (/ (^in)))))
FLOW> (path 'double-node 'inverse-node 'mod3-node 'print-node)
FLOW> (setf (in *) 4)
1/8
FLOW> (setf (in **) 1/8)
1

Satisfied?

Peter

>> kt
>>
>> --
>> http://smuglispweeny.blogspot.com/
>> http://www.theoryyalgebra.com/
>> ECLM rant: http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
>> ECLM talk: http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
>>
> 
From: Alex Mizrahi
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <48381560$0$90263$14726298@news.sunsite.dk>
 FB> FBP is very different:

from what you describe it's different only terminologically..

 FB>  When developing applications, you start with defining information
 FB> packets (IP). In Lisp an IP could be a hashtable or any other Lisp
 FB> object.

so IP is just a fancy way to say "any object"? maybe it matters for
distributed implementations, where IPs have to be serializable

 FB>  Then you define some processes and interconnect them. A process has
 FB> inputs and outputs for processing IPs. The process itself is a program,
 FB> with local storage. A process can be instantiated multiple times and
 FB> can be configured with configuration IPs (there are preemptive and
 FB> cooperative multitasking implementations).

in Cells terminology, process is model (class), process instance is an
object, interconnect is synapse.

 FB> The main motivation of FBP is to reuse stable and tested processes for
 FB> many applications.

so is OOP, and articles you've linked say that OOP is quite related to FBP

 FB> I think Cells is more fine granular: there are not a few connected
 FB> black boxes, but a network of connected values.

in other words, Cells doesn't hide guts of processes from you, right?

 FB> better, maybe this depends on the application. But I think every Cells
 FB> network can be transformed to a FBP-like network, but the other
 FB> direction would be more difficult, so FBP may be a more general
 FB> concept.

to my untrained eye Cells seems like a particular implementation of
FBP concepts. can you show an example of what general FBP can do
but Cells can't? 
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483835e8$0$15186$607ed4bc@cv.net>
Alex Mizrahi wrote:
>  FB> FBP is very different:
> 
> from what you describe it's different only terminologically..

I am curious if the "feel" is different. In Cells I think about state in 
a static steady-state kinda way, just as a spreadsheet author thinks 
about only how to compute the cell in front of them but then has to 
write one rule for all time and all circumstances. Yes, it makes 
event-handling trickier to think about, one giveback of Cells.

FBP sounds as if one thinks about a pipeline of processing and 
transformations one wants for state.

We may have dataflow in both models and much different programming 
experiences. But the IBM folks report the same high productivity Cells 
users enjoy, so perhaps the key is exactly what Brooks identified: 
decomposing the complexity of the interdependence of large numbers 
of kinds of state. As long as one does that (and both seem to) we're good.

FBP may be more work than Cells, btw. I do not think about these wires 
coming in or about laying out some circuit, I Just Write the Code: 
dependencies are detected automatically for me. eg, a spreadsheet author 
(once they learn how one works) Just Writes Rules. I get the feeling 
the FBP folks have to think in terms of circuits feeding circuits and 
getting them wired together.

> 
>  FB>  When developing applications, you start with defining information
>  FB> packets (IP). In Lisp an IP could be a hashtable or any other Lisp
>  FB> object.
> 
> so IP is just a fancy way to say "any object"? maybe it matters for
> distributed implementations, where IPs have to be serializable
> 
>  FB>  Then you define some processes and interconnect them. A process has
>  FB> inputs and outputs for processing IPs. The process itself is a program,
>  FB> with local storage. A process can be instantiated multiple times and
>  FB> can be configured with configuration IPs (there are preemptive and
>  FB> cooperative multitasking implementations).
> 
> in Cells terminology, process is model (class), process instance is an
> object, interconnect is synapse.

And the idea of process as object is kinda weird to me. RDF has shaken 
my affection for OO, but not state as central to application design. FBP 
almost sounds as if it is still code-oriented, the way flowcharts make 
processing preeminent and the data just happens to slide along these arrows.

> 
>  FB> The main motivation of FBP is to reuse stable and tested processes for
>  FB> many applications.
> 
> so is OOP, and articles you've linked say that OOP is quite related to FBP
> 
>  FB> I think Cells is more fine granular: there are not a few connected
>  FB> black boxes, but a network of connected values.
> 
> in other words, Cells doesn't hide guts of processes from you, right?

I guess each Cell could be considered a black box, but they are not so 
general to my mind as a black box. Different instances of the same class 
get different rules for the same slot, so these rules are not 
super-reusable abstracted functions, they are very specific hard-coded 
little value generators, often closures over values that further tailor 
their output.

What ends up being reusable is the class (yippee! The Grail!) because 
the class no longer says so much about an instance. ie, Cells cheats. :)

kt

> 
>  FB> better, maybe this depends on the application. But I think every Cells
>  FB> network can be transformed to a FBP-like network, but the other
>  FB> direction would be more difficult, so FBP may be a more general
>  FB> concept.
> 
> to my untrained eye Cells seems like a particular implementation of
> FBP concepts. can you show an example of what general FBP can do
> but Cells can't? 
> 
> 

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Frank Buss
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <d2ij7bsu1jzj$.1rg36cyaaizft.dlg@40tude.net>
Ken Tilton wrote:

> FBP sounds as if one thinks about a pipeline of processing and 
> transformations one wants for state.

Yes, that's right. The author compares it with the pipe operator found in
DOS and Unix shells: There are lots of simple programs, like sort, uniq,
etc., and you can feed the output of one program to the input of another
program.
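
To make the pipe analogy concrete, here is a rough sketch (Python for
brevity; the names are mine, not from the book) with generators standing in
for the simple programs, composed like `sort | uniq`:

```python
# Each "process" consumes an upstream iterator and yields downstream, so
# stages compose without any stage calling another by name.

def source(items):
    for item in items:
        yield item

def sort(upstream):
    yield from sorted(upstream)

def uniq(upstream):
    previous = object()              # sentinel unequal to anything
    for item in upstream:
        if item != previous:
            yield item
        previous = item

def pipeline(*stages):
    # wire the stages together: output of each feeds the next
    stream = stages[0]
    for stage in stages[1:]:
        stream = stage(stream)
    return stream

words = source(["pear", "apple", "pear", "apple", "banana"])
print(list(pipeline(words, sort, uniq)))   # ['apple', 'banana', 'pear']
```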

> We may have dataflow in both models and much different programming 
> experiences. But the IBM folks report the same high productivity Cells 
> users enjoy, so perhaps the key is exactly what Brooks identified: 
> decomposing the complexity of the interdependence of large numbers 
> of kinds of state. As long as one does that (and both seem to) we're good.

While trying to implement the telegram problem in FBP-style Lisp, this was
exactly my feeling: You can concentrate on writing just one process, with
explicitly defined inputs and outputs. Then you can connect the components
and the program just works (if you've tested the components before and the
specifications of the ports match).

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <48387d67$0$25029$607ed4bc@cv.net>
Frank Buss wrote:
> Ken Tilton wrote:
> 
> 
>>FBP sounds as if one thinks about a pipeline of processing and 
>>transformations one wants for state.
> 
> 
> Yes, that's right. The author compares it with the pipe operator found in
> DOS and Unix shells: There are lots of simple programs, like sort, unique
> etc., and you can feed the output of one program to the input of another
> program.
> 
> 
>>We may have dataflow in both models and much different programming 
>>experiences. But the IBM folks report the same high productivity Cells 
>>users enjoy, so perhaps the key is exactly what Brooks identified: 
>>decomposing the complexity of the interdependence of large numbers 
>>of kinds of state. As long as one does that (and both seem to) we're good.
> 
> 
> While trying to implement the telegram problem in FBP-style Lisp, this was
> exactly my feeling: You can concentrate on writing just one process, with
> explicitly defined inputs and outputs. Then you can connect the components
> and the program just works (if you've tested the components before and the
> specifications of the ports match).
> 

Yes, that is the right phrase. After we invented Cells (three were 
present at the creation) we would turn to each other regularly and say 
in quiet awe, "It just works" after trying something ridiculously 
unlikely. And we said it more as if it were a question: "How can it just 
work?".

I think the answer is we all should have been programming with language 
constructs supporting cause and effect all along, then it would not seem 
so odd. I mean, the world Just Works, right?

Those were heady days, a true sense of having stumbled onto something. 
Only took a day to implement, too. Another message from god in there 
somewhere. :)

kt


-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Paul Tarvydas
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <g1erar$5fm$1@aioe.org>
Frank Buss wrote:
> While trying to implement the telegram problem in FBP-style Lisp, this was
> exactly my feeling: You can concentrate on writing just one process, with
> explicitly defined inputs and outputs. Then you can connect the components
> and the program just works (if you've tested the components before and the
> specifications of the ports match).

I think that what you are experiencing are the benefits of "encapsulation".

OOP apologists use "encapsulation" as a buzzword, but IMO OOP does not
achieve encapsulation as good as that of "processes"
(multi-tasking, threads, whatever).

When you follow the rules of process-based programming, you do not scribble
outside of a process' boundaries.  Your communication with other processes
is explicit and well-defined.  Everything the process does is "visible at a
glance" (locality of reference).

OOP increases reuse, but breaks encapsulation through inheritance.  It is
difficult to understand a set of methods of a class without also
understanding the methods of the ancestor(s) - hence, locality of
reference - one of the benefits of encapsulation - is broken by
inheritance.

The problem with processes is context-switch overhead.  Ideally, you want to
push the paradigm down to the lowest levels of the language, e.g. down to the
subroutine or statement level (e.g. each subroutine is a concurrent,
stand-alone unit).  I.e., you want to architect your code with 100's,
1000's, 10,000's of "processes".

Cells, FBP, Stackless Python, and my event-based stuff are variants of
process-based programming (reactive), with attempts to reduce the overhead
so that the reactive paradigm can be used at a fine grain.
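
As an illustrative sketch of that fine-grained flavor (Python generators
standing in for ultra-light "processes"; this is not VF or Stackless, just
the shape of the idea), thousands of such processes cost a stack frame each
rather than an OS thread:

```python
# "Processes" as generators in a tiny round-robin cooperative scheduler.

from collections import deque

def scheduler(processes):
    ready = deque(processes)
    while ready:
        proc = ready.popleft()
        try:
            next(proc)               # run until the process yields
            ready.append(proc)       # cooperative: go to the back of the queue
        except StopIteration:
            pass                     # process finished

def counter(name, n, log):
    for i in range(n):
        log.append((name, i))
        yield                        # give up control voluntarily

log = []
scheduler([counter("a", 2, log), counter("b", 2, log)])
print(log)   # [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```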

pt
From: Frank Buss
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <tvmp8kuaxn1f$.kpjws99l19su$.dlg@40tude.net>
Alex Mizrahi wrote:

> so IP is just a fancy way to say "any object"? maybe it matter for 
> distributed implementations, where IPs have to be serializable

Yes, in the FBP book an IP can be any object. There are special IPs, like
brackets for implementing sub-streams. But this is just a concept, not
built into the framework.

> in Cells terminology, process is model (class), process instance is an 
> object, interconnect is synapse.

What is a Cells synapse? I haven't used Cells very much, but I don't think
this maps very well, because in FBP all processes are running in parallel,
which is not possible with Cells.

>  FB> The main motivation of FBP is to reuse stable and tested processes for
>  FB> many applications.
> 
> so is OOP, and articles you've linked say that OOP is quite related to FBP

Yes, the local storage and instances of processes are a bit similar to
classes and objects, and an FBP implementation can be done in OOP, as was
done with Java and threads:

http://www.jpaulmorrison.com/fbp/jsyntax.htm

> for my untrained eye Cells seems to be like a particular implementation of
> FBP concepts. can you show an example what general FBP can do
> but Cells can't?

I'm sure you can implement everything in Cells which you can implement in
FBP, but it might look different, and I'm not sure if reusability for large
applications is as high as with FBP.

For example, take a look at the telegram problem:

http://en.wikipedia.org/wiki/Flow-based_programming#.22Telegram_Problem.22

Your task is to write a function that reformats a text for a specified
line width, e.g. this call:

(telegram "
Du bist am Ende- was du bist. Setz dir Perücken auf von Millionen Locken,
Setz deinen Fuß auf ellenhohe Socken, Du bleibst doch immer, was du bist."
30)

outputs these lines:

Du bist am Ende- was du bist.
Setz dir Perücken auf von
Millionen Locken, Setz deinen
Fuß auf ellenhohe Socken, Du
bleibst doch immer, was du
bist.

In Lisp, it could look like below. I think implementing the driver should
not be too difficult. E.g. in LispWorks there are some nice multithreading
functions and the "mailbox" object, which could be used for the
connections.

The implementation is longer than a conventional implementation, but the
coupling between the processes is very loose, so unlike in conventional
OOP programs, reusing it in other programs is easier. A nice GUI for placing
the process instances, configuring them, and drawing the connections would be
cool.

;;;
;;; split a string into lines
;;;
(defun read-sequence ()
  (with-input-from-string (s (config-port-read 'input-string))
    (loop for line = (read-line s nil) while line do
          (port-send 'line line))))

(add-config-port #'read-sequence 'input-string)
(add-output-port #'read-sequence 'line)

;;;
;;; split a line into words
;;;
(defun de-compose ()
  (loop for line = (port-read 'line)
        while line do
        (loop for word in (split-sequence #\Space line) do
              (port-send 'word word)))
  (port-send 'word nil))

(add-input-port #'de-compose 'line)
(add-output-port #'de-compose 'word)

;;;
;;; merge words into a line
;;;
(defun re-compose ()
  (let ((line "")
        (line-width (config-port-read 'line-width)))
    (loop for word = (port-read 'word)
          while word do
          ;; start a line, flush a full line, or append with a joining space
          (cond ((zerop (length line))
                 (setf line word))
                ((> (+ (length line) 1 (length word)) line-width)
                 (port-send 'line line)
                 (setf line word))
                (t
                 (setf line (concatenate 'string line " " word)))))
    (port-send 'line line))
  (port-send 'line nil))

(add-config-port #'re-compose 'line-width)
(add-input-port #'re-compose 'word)
(add-output-port #'re-compose 'line)

;;;
;;; write lines to output
;;;
(defun write-sequence ()
  (loop for line = (port-read 'line)
        while line do (format t "~a~%" line)))

(add-input-port #'write-sequence 'line)

;;;
;;; create network and run it
;;;
(defun telegram (string line-width)
  (let ((read-sequence (make-process #'read-sequence))
        (de-compose (make-process #'de-compose))
        (re-compose (make-process #'re-compose))
        (write-sequence (make-process #'write-sequence)))
    (configure read-sequence :port 'input-string :value string)
    (configure re-compose :port 'line-width :value line-width)
    (connect :sender read-sequence :sender-port 'line
             :receiver de-compose :receiver-port 'line)
    (connect :sender de-compose :sender-port 'word
             :receiver re-compose :receiver-port 'word)
    (connect :sender re-compose :sender-port 'line
             :receiver write-sequence :receiver-port 'line)
    (run-fbp-network)))
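
The framework functions above (make-process, port-send, connect,
run-fbp-network) are hypothetical; I have not written that driver. As a
sanity check that the network itself works, here is the de-compose and
re-compose pipeline as a runnable Python sketch, with stdlib queues as the
connections, one thread per process, and None as the end-of-stream marker
(the reader and writer processes are folded into the driver):

```python
import queue
import threading

def de_compose(inbox, outbox):
    # split incoming lines into words
    while (line := inbox.get()) is not None:
        for word in line.split():
            outbox.put(word)
    outbox.put(None)

def re_compose(inbox, outbox, width):
    # merge incoming words into lines no longer than `width`
    line = ""
    while (word := inbox.get()) is not None:
        if line and len(line) + 1 + len(word) > width:
            outbox.put(line)
            line = word
        else:
            line = word if not line else line + " " + word
    if line:
        outbox.put(line)
    outbox.put(None)

def telegram(text, width):
    lines_in, words, lines_out = queue.Queue(), queue.Queue(), queue.Queue()
    procs = [threading.Thread(target=de_compose, args=(lines_in, words)),
             threading.Thread(target=re_compose, args=(words, lines_out, width))]
    for p in procs:
        p.start()
    for line in text.splitlines():       # the read-sequence process, inlined
        lines_in.put(line)
    lines_in.put(None)
    result = []
    while (line := lines_out.get()) is not None:   # write-sequence, inlined
        result.append(line)
    for p in procs:
        p.join()
    return result

print(telegram("Du bleibst doch immer, was du bist.", 12))
# ['Du bleibst', 'doch immer,', 'was du bist.']
```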

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Sohail Somani
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <4n_Zj.3865$Yp.1777@edtnps92>
Frank Buss wrote:

> What is a Cells synapse? I haven't used Cells very much, but I don't think
> this maps very well, because in FBP all processes are running in parallel,
> which is not possible with Cells.

Why is it not possible?
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <48387839$0$25026$607ed4bc@cv.net>
Sohail Somani wrote:
> Frank Buss wrote:
> 
>> What is a Cells synapse?

I am surprised Pascal Costanza has not answered this already. (I slay 
myself.) Synapses are anonymous Cells local to a rule which mediate not 
a CLOS slot but rather any arbitrary form within a larger Cell formula (er, 
a Lisp form that calculates a Cell value).

One possible use: If I have an expensive calculation in a rule and any 
of its inputs tends to jump around a lot in ways that matter but not all 
that much, I can wrap the form that accesses that bad boy in 
(with-synapse....) wherein I make sure the other value has changed by 
amount X before "firing". The Cell to which I am local then runs.
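
In sketch form (Python for neutrality; purely illustrative, not the actual
with-synapse API), a synapse is just change-filtering state closed over by
the rule:

```python
# Only report "changed" when the value has moved by at least `threshold`
# since the last time the synapse fired -- so the expensive downstream
# rule reruns only for changes that matter.

def make_synapse(threshold):
    last_fired = [None]              # closed-over state, like a synapse's memory

    def changed_enough(value):
        if last_fired[0] is None or abs(value - last_fired[0]) >= threshold:
            last_fired[0] = value
            return True
        return False

    return changed_enough

fire = make_synapse(5)
print([fire(v) for v in [0, 2, 4, 6, 20, 22]])
# [True, False, False, True, True, False]
```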

>> I haven't used Cells very much, but I don't think
>> this maps very well, because in FBP all processes are running in parallel,
>> which is not possible with Cells.
> 
> 
> Why is it not possible?

Currently Cells enforces what I call "data integrity". See the manifesto 
for details, but basically it means things should happen in order. If I 
depend on A and B, and B happens also to depend on A, when A changes I 
want to see its new value and the new value of B that results from the 
change to A. This and other matters require strict enforcement of linear 
calculation (tho in some cases folks would be free to charge ahead with 
their calculations, they just gotta check that they are free to go 
ahead). Btw, where Phillip Eby discusses the name "Trellis" for that 
package he mentions Gelernter's Trellis likewise involved a master 
clock -- er, that is how Cells does its integrity thing.
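
Concretely, the A/B case reads roughly like this in Cells notation (a
from-memory sketch; slot options abbreviated):

```lisp
(defmodel integrity-demo ()
  ((a :initarg :a :accessor a :initform (c-in 1))
   ;; b depends on a
   (b :accessor b :initform (c? (* 10 (a self))))
   ;; c depends on a and on b: data integrity means that when a changes,
   ;; c's rule runs once and sees the new a together with the new b,
   ;; never the new a with a stale b
   (c :accessor c :initform (c? (+ (a self) (b self))))))
```

After (setf (a it) 2), c settles at 22 in a single recalculation; without
integrity it could transiently see a=2 with the stale b=10 and compute 12.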

All that said, originally things were much more wild wild west and stuff 
did get calculated out of order and I wrote hundreds of thousands of 
lines of Lisp that way. Basically the universe is smooth: one gets out 
of synch for brief periods of time and catches up, with commensurately 
small deviations from goodness having arisen. Until I did RoboCup; then 
the wheels came off and I did the integrity thing.

I really do not know the issues in parallel programming. I suppose one 
could just let the data flow through the dependency graph willy-nilly 
without regard to a master clock, and based on my cited experience I am 
sure it would go fine; one would just have to do a little extra work 
(FLW) where "close" is not good enough.

kt



-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Frank Buss
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <1eaq15sr78ow6$.fa07xkpvojyz.dlg@40tude.net>
Sohail Somani wrote:

> Frank Buss wrote:
> 
>> What is a Cells synapse? I haven't used Cells very much, but I don't think
>> this maps very well, because in FBP all processes are running in parallel,
>> which is not possible with Cells.
> 
> Why is it not possible?

We have to ask Kenny about this, but I don't think that Cells is 
multithreading-safe, and the concept of Cells is not to have many parallel 
running processes.

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Sohail Somani
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <xB%Zj.3877$Yp.2575@edtnps92>
Frank Buss wrote:
> Sohail Somani wrote:
> 
>> Frank Buss wrote:
>>
>>> What is a Cells synapse? I haven't used Cells very much, but I don't think
>>> this maps very well, because in FBP all processes are running in parallel,
>>> which is not possible with Cells.
>> Why is it not possible?
> 
> We have to ask Kenny about this, but I don't think that Cells is 
> multithreading-safe, and the concept of Cells is not to have many parallel
> running processes.

Gotcha, but the general concept can be executed in parallel if done properly.

Or maybe I should say "done a specific way".
From: Rainer Joswig
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <joswig-20B189.22101324052008@news-europe.giganews.com>
In article <·······························@40tude.net>,
 Frank Buss <··@frank-buss.de> wrote:

> Alex Mizrahi wrote:
> 
> > so IP is just a fancy way to say "any object"? maybe it matters for 
> > distributed implementations, where IPs have to be serializable
> 
> Yes, in the FBP book an IP can be any object. There are special IPs, like
> brackets for implementing sub-streams. But this is just a concept, not
> built into the framework.
> 
> > in Cells terminology, process is model (class), process instance is an 
> > object, interconnect is synapse.
> 
> What is a Cells synapse? I haven't used Cells very much, but I don't think
> this maps very well, because in FBP all processes are running in parallel,
> which is not possible with Cells.
> 
> >  FB> The main motivation of FBP is to reuse stable and tested processes for
> >  FB> many applications.
> > 
> > so is OOP, and articles you've linked say that OOP is quite related to FBP
> 
> Yes, the local storage and instances of processes are a bit similar to
> classes and objects, and an FBP implementation can be done in OOP, as done
> with Java and threads:
> 
> http://www.jpaulmorrison.com/fbp/jsyntax.htm
> 
> > for my untrained eye Cells seems to be like a particular implementation of
> > FBP concepts. can you show an example what general FBP can do
> > but Cells can't?
> 
> I'm sure you can implement everything in Cells which you can implement in
> FBP, but it might look different, and I'm not sure if reusability for large
> applications is as high as with FBP.
> 
> For example, take a look at the telegram problem:
> 
> http://en.wikipedia.org/wiki/Flow-based_programming#.22Telegram_Problem.22
> 
> Your task is to write a function, which reformats a text for a specified
> line width, e.g. this call:
> 
> (telegram "
> Du bist am Ende- was du bist. Setz dir Perücken auf von Millionen Locken,
> Setz deinen Fuß auf ellenhohe Socken, Du bleibst doch immer, was du bist."
> 30)
> 
> outputs these lines:
> 
> Du bist am Ende- was du bist.
> Setz dir Perücken auf von
> Millionen Locken, Setz deinen
> Fuß auf ellenhohe Socken, Du
> bleibst doch immer, was du
> bist.
> 
> In Lisp, it could look like below. I think implementing the driver should
> not be too difficult. E.g. in LispWorks there are some nice multithreading
> functions and the "mailbox" object, which could be used for the
> connections.
> 
> The implementation is longer than a conventional implementation, but the
> coupling between the processes is very loose, so unlike in conventional
> OOP programs, reusing it in other programs is easier. A nice GUI for placing
> the process instances, configuring them, and drawing the connections would be
> cool.
> 
> ;;;
> ;;; split a string into lines
> ;;;
> (defun read-sequence ()
>   (with-input-from-string (s (config-port-read 'input-string))
>     (loop for line = (read-line s nil) while line do
>           (port-send 'line line))))
> 
> (add-config-port #'read-sequence 'input-string)
> (add-output-port #'read-sequence 'line)
> 
> ;;;
> ;;; split a line into words
> ;;;
> (defun de-compose ()
>   (loop for line = (port-read 'line)
>         while line do
>         (loop for word in (split-sequence #\Space line) do
>               (port-send 'word word)))
>   (port-send 'word nil))
> 
> (add-input-port #'de-compose 'line)
> (add-output-port #'de-compose 'word)
> 
> ;;;
> ;;; merge words into a line
> ;;;
> (defun re-compose ()
>   (let ((line "")
>         (line-width (config-port-read 'line-width)))
>     (loop for word = (port-read 'word)
>           while word do
>           (if (> (+ (length line) 1 (length word))
>                  line-width)
>               (progn
>                 (port-send 'line line)
>                 (setf line word))
>             (setf line (if (string= line "")
>                            word
>                            (concatenate 'string line " " word)))))
>     (port-send 'line line))
>   (port-send 'line nil))
> 
> (add-config-port #'re-compose 'line-width)
> (add-input-port #'re-compose 'word)
> (add-output-port #'re-compose 'line)
> 
> ;;;
> ;;; write lines to output
> ;;;
> (defun write-sequence ()
>   (loop for line = (port-read 'line)
>         while line do (format t "~a~%" line)))
> 
> (add-input-port #'write-sequence 'line)
> 
> ;;;
> ;;; create network and run it
> ;;;
> (defun telegram (string line-width)
>   (let ((read-sequence (make-process #'read-sequence))
>         (de-compose (make-process #'de-compose))
>         (re-compose (make-process #'re-compose))
>         (write-sequence (make-process #'write-sequence)))
>     (configure read-sequence :port 'input-string :value string)
>     (configure re-compose :port 'line-width :value line-width)
>     (connect :sender read-sequence :sender-port 'line
>              :receiver de-compose :receiver-port 'line)
>     (connect :sender de-compose :sender-port 'word
>              :receiver re-compose :receiver-port 'word)
>     (connect :sender re-compose :sender-port 'line
>              :receiver write-sequence :receiver-port 'line)
>     (run-fbp-network)))

See the history of Scheme:
http://research.sun.com/projects/plrg/JAOO-SchemeHistory-2006public.pdf

Scheme was created to understand the Actors theory from Carl Hewitt.

 Inspired in part by SIMULA and Smalltalk, Carl Hewitt 
 developed a model of computation around "actors" 
 Every agent of computation is an actor 
 Every datum or data structure is an actor 
 An actor may have "acquaintances" (other actors it knows) 
 Actors react to messages sent from other actors 
 An actor can send messages only to acquaintances 
 and to actors received in messages 
 "You don't add 3 and 2 to get 5; instead, you send 3 
 a message asking it to add 2 to itself"

This means that programs are networks of concurrent Actors that
send messages and react to messages.

There are a couple of Actor implementations. I did a rough port
of ABCL/R2 to Clozure CL and to another Lisp system. The actors are
running concurrently.

Here is a snippet of an actor (example from ABCL1) from a Car Wash simulation:

You can see that the object reacts to a couple of messages (:arrive, :available, ...),
creates other objects and sends messages to them. There are different types of
messages. Typically each object has an input queue of messages and executes
them (this can be interrupted by 'express messages'). In the original Actor
model (IIRC) the Actor copies itself and the copy executes the next message and so on.

[object Manager
  (state [history := [createHistory <== :new]]
         [car-q := [createPriorityQueue <== [:new #'< #'third]]]
         [worker-q := [createPriorityQueue <== [:new #'< #'third]]]
         [previous-worker-available-time := 0])

  (script
   (=> [:arrive X]   ; X ::= [:car CarId ArrivalTime]
       [car-q <= [:enqueue X]]
       (processing))
   (=> [:available X]   ; X ::= [:worker Worker AvailableTime]
       [worker-q <= [:enqueue X]]
       (processing))
   (=> [:undo [:available [:worker Worker AvailableTime]]]
       (loop-forever
        (match [history <== :top]
          (is [:record Car ArrivalTime PreviousWorker PreviousAvailableTime]
              where (and (>= PreviousAvailableTime AvailableTime)
                         (not (eq Worker PreviousWorker)))
              [history <= :pop]
              [car-q <= [:undequeue [:car Car ArrivalTime]]]
              [worker-q <=
                  [:undequeue [:worker PreviousWorker PreviousAvailableTime]]]
              [PreviousWorker <=
                  [:undo [:wash Car :from (max ArrivalTime PreviousAvailableTime)]]])
          (otherwise (return))))
       (match [history <== :top]
         (is [:record Car ArrivalTime PreviousWorker PreviousAvailableTime]
             where (and (eq Worker PreviousWorker)
                        (= AvailableTime PreviousAvailableTime))
             [history <= :pop]
             [car-q <= [:undequeue [:car Car ArrivalTime]]]
             [PreviousWorker <=
                 [:undo [:wash Car :from (max ArrivalTime PreviousAvailableTime)]]])
         (otherwise
          [worker-q <= [:remove [:worker Worker AvailableTime]]]))
       (match [history <== :top]
         (is nil [previous-worker-available-time := 0])
         (is [:record _ _ _ AvailableTime]
             [previous-worker-available-time := AvailableTime])))
   (=> :reset
       [[car-q worker-q history] <= :reset]
       [previous-worker-available-time := 0])
   (=> [:print-history]
       [history <= :print]))
  (routine
   (processing ()
     (loop-forever
      (match [worker-q <== :top]
        (is [:worker Worker AvailableTime]
            where (>= AvailableTime previous-worker-available-time)
            (match [car-q <== :top]
              (is nil (return))
              (is [:car Car ArrivalTime]
                  (washing Car ArrivalTime Worker AvailableTime))))
        (is [:worker Worker AvailableTime]
            where (< AvailableTime previous-worker-available-time)
            (roll-back Worker AvailableTime))
        (otherwise (return)))))
   (washing (car arrival-time worker available-time)
     [Worker <= [:wash car :from (max arrival-time available-time)]]
     [car-q <= :dequeue]
     [worker-q <= :dequeue]
     [history <= [:push [:record car arrival-time worker available-time]]]
     [previous-worker-available-time := available-time])
   (roll-back (worker available-time)
     (match [history <== :top]
       (is [:record Car ArrivalTime PreviousWorker PreviousAvailableTime]
           [history <= :pop]
           [car-q <= [:undequeue [:car Car ArrivalTime]]]
           [PreviousWorker <=
               [:undo [:wash Car :from (max ArrivalTime PreviousAvailableTime)]]]
           [worker-q <= :dequeue]
           [worker-q <=
               [:undequeue [:worker PreviousWorker PreviousAvailableTime]]]
           [worker-q <= [:undequeue [:worker worker available-time]]]))
     (match [history <== :top]
       (is nil [previous-worker-available-time := 0])
       (is [:record _ _ _ AvailableTime]
           [previous-worker-available-time := AvailableTime]))))]

-- 
http://lispm.dyndns.org/
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <48387ffb$0$25042$607ed4bc@cv.net>
Rainer Joswig wrote:
> In article <·······························@40tude.net>,
>  Frank Buss <··@frank-buss.de> wrote:
> 
> 
>>Alex Mizrahi wrote:
>>
>>
>>>so IP is just a fancy way to say "any object"? maybe it matter for 
>>>distributed implementations, where IPs have to be serializable
>>
>>Yes, in the FBP book an IP can be any object. There are special IPs, like
>>brackets for implementing sub-streams. But this is just a concept, not
>>built into the framework.
>>
>>
>>>in Cells terminology, process is model (class), process instance is an 
>>>object, interconnect is synapse.

Sorry, I missed this. In vanilla Cells, the "interconnect" is as dumb as 
it can be: all you can do is leave a wake-up call with any Cell you use 
(well, I mean, that happens transparently by default). But the idea is 
the same. If you ask me my value, when my output changes you just get an 
unspecific* wake-up call that tells you /something/ you used changed (or 
you are being brought to life at make-instance time).

A synapse (as described elsewhere) improves this somewhat by letting a 
subform of a rule compute a local value and have its own dependencies, 
but then it is the same: folks still get a non-specific Recompute! command.

* I lied. There is a special variable bound to the Cell that changed. But 
I have never used it for anything other than debugging, when I could not 
understand why a rule was running.

kt
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <48388420$0$11618$607ed4bc@cv.net>
Rainer Joswig wrote:
> 
> See the history of Scheme:
> http://research.sun.com/projects/plrg/JAOO-SchemeHistory-2006public.pdf
> 
> Scheme was created to understand the Actors theory from Carl Hewitt.
> 
>  Inspired in part by SIMULA and Smalltalk, Carl Hewitt 
>  developed a model of computation around "actors" 
>  Every agent of computation is an actor 
>  Every datum or data structure is an actor 
>  An actor may have "acquaintances" (other actors it knows) 
>  Actors react to messages sent from other actors 
>  An actor can send messages only to acquaintances 

That sounds different, as if one is writing code that says, OK, now send 
a message to precisely here. That is classic imperative management of 
state change and its consequences by the programmer, and has nothing of 
the declarative quality of Cells.

I would suggest the metaphor of a spreadsheet, but that has actually 
been resoundingly and repeatedly demonstrated as insufficient to make 
the point. :)

kt

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483895fa$0$15202$607ed4bc@cv.net>
Ken Tilton wrote:
> 
> 
> Rainer Joswig wrote:
> 
>>
>> See the history of Scheme:
>> http://research.sun.com/projects/plrg/JAOO-SchemeHistory-2006public.pdf
>>
>> Scheme was created to understand the Actors theory from Carl Hewitt.
>>
>>  Inspired in part by SIMULA and Smalltalk, Carl Hewitt  developed a 
>> model of computation around "actors"  Every agent of computation is an 
>> actor  Every datum or data structure is an actor  An actor may have 
>> "acquaintances" (other actors it knows)  Actors react to messages sent 
>> from other actors  An actor can send messages only to acquaintances 
> 
> 
> That sounds different, as if one is writing code that says, OK, now send 
> a message to precisely here. That is classic imperative management of 
> state change and its consequences by the programmer, and has nothing of 
> the declarative quality of Cells.

Btw, I am not disagreeing that Actors sounds like dataflow, but as Mr. 
Buss says it depends on the meaning of the word "dataflow" and as I said 
the devil is in the details, tho in this case this is no detail: the 
beauty of a spreadsheet is that I never worry about who I have to tell 
when I change, the software tends to that automatically, nor do I have 
to step back from writing my formula to do explicit stuff (as in a 
subscribe/notify pattern) in order to get updated -- I just write my rule.

Same with Cells.

> 
> I would suggest the metaphor of a spreadsheet, but that has actually 
> been resoundingly and repeatedly demonstrated as insufficient to make 
> the point. :)

Oops. :)

kt

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Sohail Somani
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <xO0_j.3887$Yp.864@edtnps92>
Ken Tilton wrote:

>> Rainer Joswig wrote:
>>
>>>
>> I would suggest the metaphor of a spreadsheet, but that has actually 
>> been resoundingly and repeatedly demonstrated as insufficient to make 
>> the point. :)
> 
> Oops. :)

Actually, in the latest version of Excel, if your add-ins are specified 
as reentrant (Excel4 API-using add-ins are not, iirc), MS Excel may 
calculate cells in parallel.

At least that's what I last heard when dealing with Excel 12 propaganda.
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <4838f2d8$0$11600$607ed4bc@cv.net>
Ken Tilton wrote:
> 
> 
> Ken Tilton wrote:
> 
>>
>>
>> Rainer Joswig wrote:
>>
>>>
>>> See the history of Scheme:
>>> http://research.sun.com/projects/plrg/JAOO-SchemeHistory-2006public.pdf
>>>
>>> Scheme was created to understand the Actors theory from Carl Hewitt.
>>>
>>>  Inspired in part by SIMULA and Smalltalk, Carl Hewitt  developed a 
>>> model of computation around "actors"  Every agent of computation is 
>>> an actor  Every datum or data structure is an actor  An actor may 
>>> have "acquaintances" (other actors it knows)  Actors react to 
>>> messages sent from other actors  An actor can send messages only to 
>>> acquaintances 
>>
>>
>>
>> That sounds different, as if one is writing code that says, OK, now 
>> send a message to precisely here. That is classic imperative 
>> management of state change and its consequences by the programmer, and 
>> has nothing of the declarative quality of Cells.
> 
> 
> Btw, I am not disagreeing that Actors sounds like dataflow, but as Mr. 
> Buss says it depends on the meaning of the word "dataflow" and as I said 
> the devil is in the details, tho in this case this is no detail: the 
> beauty of a spreadsheet is that I never worry about who I have to tell 
> when I change, the software tends to that automatically, nor do I have 
> to step back from writing my formula to do explicit stuff (as in a 
> subscribe/notify pattern) in order to get updated -- I just write my rule.
> 
> Same with Cells.
> 
>>
>> I would suggest the metaphor of a spreadsheet, but that has actually 
>> been resoundingly and repeatedly demonstrated as insufficient to make 
>> the point. :)

But we cellsers won't give up on it:

    http://pypi.python.org/pypi/Cellulose/0.2

Cells begat PyCells, PyCells begat Trellis and Cellulose. So there are 
three Python implementations of Cells now, tho I have a feeling PyCells 
is moribund. Cellulose gets a lot of downloads, I think because it is 
used in something else. Not sure how Trellis is doing.

kt


-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Slobodan Blazeski
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <a4222535-9c0b-4a09-a0e4-b4d7c2e9efe4@e39g2000hsf.googlegroups.com>
On May 24, 3:17 pm, "Alex Mizrahi" <········@users.sourceforge.net>
wrote:
>  FB> FBP is very different:
>
> from what you describe it's different only terminologically..
>
>  FB>  When developing applications, you start with defining information
>  FB> packets (IP). In Lisp an IP could be a hashtable or any other Lisp
>  FB> object.
>
> so IP is just a fancy way to say "any object"? maybe it matters for
> distributed
> implementations, where IPs have to be serializable
>
>  FB>  Then you define some processes and interconnect them. A process has
>  FB> inputs and outputs for processing IPs. The process itself is a program,
>  FB> with local storage. A process can be instantiated multiple times and
>  FB> can be configured with configuration IPs (there are preemptive and
>  FB> cooperative multitasking implementations).
>
> in Cells terminology, process is model (class), process instance is an
> object,
> interconnect is synapse.
>
>  FB> The main motivation of FBP is to reuse stable and tested processes for
>  FB> many applications.
>
> so is OOP, and articles you've linked say that OOP is quite related to FBP
>
>  FB> I think Cells is more fine-grained: there are not a few connected
> black
>  FB> boxes, but a network of connected values.
>
> in other words, Cells doesn't hide guts of processes from you, right?
>
>  FB> better, maybe this depends on the application. But I think every Cells
>  FB> network can be transformed to a FBP-like network, but the other
>  FB> direction would be more difficult, so FBP may be a more general
>  FB> concept.
>
> for my untrained eye Cells seems to be like a particular implementation of
> FBP concepts. can you show an example what general FBP can do
> but Cells can't?

Cells looks to me more like an event-driven than a dataflow concept.
Dataflow removes the micromanagement of function calls. When data
arrives it gets processed.
Imagine managing a restaurant:
In traditional programming you'll have to tell the employees what to
do.
Jim, get some meat. Mary, serve table 23. Cory, clean the floor. Dru, make
lasagna, etc.
So you have to take care of coordination between them.
In dataflow you just specify the rules.
Mary, you're the waitress: whenever customers arrive, take their orders
and give them to the chef.
Dru, you're the chef: whenever you get orders, prepare the food ordered;
whenever you miss some ingredient, give an order to Jim.
Jim, you're the supplier: whenever you get an order from the chef, go
buy it.
So if you have a bottleneck in the kitchen you just add another chef.
Parallelism goes (almost) for free.
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483c20cc$0$25053$607ed4bc@cv.net>
Slobodan Blazeski wrote:
> On May 24, 3:17 pm, "Alex Mizrahi" <········@users.sourceforge.net>
> wrote:
> 
>> FB> FBP is very different:
>>
>>from what you describe it's different only terminologically..
>>
>> FB>  When developing applications, you start with defining information
>> FB> packets (IP). In Lisp an IP could be a hashtable or any other Lisp
>> FB> object.
>>
>>so IP is just a fancy way to say "any object"? maybe it matter for
>>distributed
>>implementations, where IPs have to be serializable
>>
>> FB>  Then you define some processes and interconnect them. A process has
>> FB> inputs and outputs for processing IPs. The process itself is a program,
>> FB> with local storage. A process can be instantiated multiple times and
>> FB> can be configured with configuration IPs (there are preemptive and
>> FB> cooperative multitasking implementations).
>>
>>in Cells terminology, process is model (class), process instance is an
>>object,
>>interconnect is synapse.
>>
>> FB> The main motivation of FBP is to reuse stable and tested processes for
>> FB> many applications.
>>
>>so is OOP, and articles you've linked say that OOP is quite related to FBP
>>
>> FB> I think Cells is more fine granular: there are not a few connected
>>black
>> FB> boxes, but a network of connected values.
>>
>>in other words, Cells doesn't hide guts of processes from you, right?
>>
>> FB> better, maybe this depends on the application. But I think every Cells
>> FB> network can be transformed to a FBP-like network, but the other
>> FB> direction would be more difficult, so FBP may be a more general
>> FB> concept.
>>
>>for my untrained eye Cells seems to be like a particular implementation of
>>FBP concepts. can you show an example what general FBP can do
>>but Cells can't?
> 
> 
> Cells looks to me more like an event-driven than a dataflow concept.

I am reminded of the four blind men analyzing an elephant. If You People 
Actually Used these things you would know what they are and not be 
falling back on useless analogies.

Events are hard for Cells, change is easy.

> Dataflow removes the micromanagement of function calls. When data
> arrives it gets processed.
> Imagine managing a restaurant:

Yeah!!!! Argument from analogy!

> In traditional programming you'll have to tell the employees what to
> do.
> Jim, get some meat. Mary, serve table 23. Cory, clean the floor. Dru, make
> lasagna, etc.
> So you have to take care of coordination between them.
> In dataflow you just specify the rules.

Which is the exact same thing as traditional programming... well, no, 
it is worse. Try Prolog or constraints or any non-deterministic (name 
well chosen) paradigm.

> Mary, you're the waitress: whenever customers arrive, take their orders
> and give them to the chef.
> Dru, you're the chef: whenever you get orders, prepare the food ordered;
> whenever you miss some ingredient, give an order to Jim.
> Jim, you're the supplier: whenever you get an order from the chef, go
> buy it.
> So if you have a bottleneck in the kitchen you just add another chef.

The problem Brooks identified in NSB ("No Silver Bullet") was not handling 
more data of the same kind, it was handling increasing numbers of kinds of 
data and the combinatorial explosion of dependencies arising therefrom. The 
problem is not more customers, the problem is adding a casino to the 
restaurant and then entertainment and then getting food to the showgirls 
between acts.

> Parallelism goes (almost) for free.

That would be the chorus line. Yes, it's free, they make it up on the slots.

hth, kenny

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Slobodan Blazeski
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <3e76fe84-fffa-486e-bbdd-c2cf283667a8@k30g2000hse.googlegroups.com>
On May 27, 4:54 pm, Ken Tilton <···········@optonline.net> wrote:
> Slobodan Blazeski wrote:
> > On May 24, 3:17 pm, "Alex Mizrahi" <········@users.sourceforge.net>
> > wrote:
>
> >> FB> FBP is very different:
>
> >>from what you describe it's different only terminologically..
>
> >> FB>  When developing applications, you start with defining information
> >> FB> packets (IP). In Lisp an IP could be a hashtable or any other Lisp
> >> FB> object.
>
> >>so IP is just a fancy way to say "any object"? maybe it matter for
> >>distributed
> >>implementations, where IPs have to be serializable
>
> >> FB>  Then you define some processes and interconnect them. A process has
> >> FB> inputs and outputs for processing IPs. The process itself is a program,
> >> FB> with local storage. A process can be instantiated multiple times and
> >> FB> can be configured with configuration IPs (there are preemptive and
> >> FB> cooperative multitasking implementations).
>
> >>in Cells terminology, process is model (class), process instance is an
> >>object,
> >>interconnect is synapse.
>
> >> FB> The main motivation of FBP is to reuse stable and tested processes for
> >> FB> many applications.
>
> >>so is OOP, and articles you've linked say that OOP is quite related to FBP
>
> >> FB> I think Cells is more fine granular: there are not a few connected
> >>black
> >> FB> boxes, but a network of connected values.
>
> >>in other words, Cells doesn't hide guts of processes from you, right?
>
> >> FB> better, maybe this depends on the application. But I think every Cells
> >> FB> network can be transformed to a FBP-like network, but the other
> >> FB> direction would be more difficult, so FBP may be a more general
> >> FB> concept.
>
> >>for my untrained eye Cells seems to be like a particular implementation of
> >>FBP concepts. can you show an example what general FBP can do
> >>but Cells can't?
>
> > Cells looks to me more like an event-driven than a dataflow concept.
>
> I am reminded of the four blind men analyzing an elephant. If You People
> Actually Used these things you would know what they are and not be
> falling back on useless analogies.
>
> Events are hard for Cells, change is easy.
>
> > Dataflow removes the micromanagement of function calls. When data
> > arrives it gets processed.
> > Imagine managing a restaurant:
>
> Yeah!!!! Argument from analogy!
>
> > In traditional programming you'll have to tell the employees what to
> > do.
> > Jim, get some meat. Mary, serve table 23. Cory, clean the floor. Dru, make
> > lasagna, etc.
> > So you have to take care of coordination between them.
> > In dataflow you just specify the rules.
>
> Which is the exact same thing as traditional programming... well, no,
> it is worse. Try Prolog or constraints or any non-deterministic (name
> well chosen) paradigm.
>
> > Mary, you're the waitress: whenever customers arrive, take their orders
> > and give them to the chef.
> > Dru, you're the chef: whenever you get orders, prepare the food ordered;
> > whenever you miss some ingredient, give an order to Jim.
> > Jim, you're the supplier: whenever you get an order from the chef, go
> > buy it.
> > So if you have a bottleneck in the kitchen you just add another chef.
>
> The problem Brooks identified in NSB was not handling more data of the
> same kind, it was handling increasing numbers of kinds of data and the
> combinatorial explosion of dependencies arising therefrom. The problem
> is not more customers, the problem is adding a casino to the restaurant
> and then entertainment and then getting food to the showgirls between acts.
>
> > Parallelism goes (almost) for free.
>
> That would be the chorus line. Yes, it's free, they make it up on the slots.
>
> hth, kenny
>
Ok, you got me. Any idea for a cool problem with Cells? Pure Cells only,
no GUI crap please.
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483c3ab2$0$25036$607ed4bc@cv.net>
Slobodan Blazeski wrote:
> On May 27, 4:54 pm, Ken Tilton <···········@optonline.net> wrote:
> 
>>Slobodan Blazeski wrote:
>>
>>>On May 24, 3:17 pm, "Alex Mizrahi" <········@users.sourceforge.net>
>>>wrote:
>>
>>>>FB> FBP is very different:
>>
>>>>from what you describe it's different only terminologically..
>>
>>>>FB>  When developing applications, you start with defining information
>>>>FB> packets (IP). In Lisp an IP could be a hashtable or any other Lisp
>>>>FB> object.
>>
>>>>so IP is just a fancy way to say "any object"? maybe it matters for
>>>>distributed implementations, where IPs have to be serializable
>>
>>>>FB>  Then you define some processes and interconnect them. A process has
>>>>FB> inputs and outputs for processing IPs. The process itself is a program,
>>>>FB> with local storage. A process can be instantiated multiple times and
>>>>FB> can be configured with configuration IPs (there are preemptive and
>>>>FB> cooperative multitasking implementations).
>>
>>>>in Cells terminology, process is model (class), process instance is an
>>>>object,
>>>>interconnect is synapse.
>>
>>>>FB> The main motivation of FBP is to reuse stable and tested processes for
>>>>FB> many applications.
>>
>>>>so is OOP, and articles you've linked say that OOP is quite related to FBP
>>
>>>>FB> I think Cells is more fine-grained: there are not a few connected
>>>>FB> black boxes, but a network of connected values.
>>
>>>>in other words, Cells doesn't hide guts of processes from you, right?
>>
>>>>FB> better, maybe this depends on the application. But I think every Cells
>>>>FB> network can be transformed to a FBP-like network, but the other
>>>>FB> direction would be more difficult, so FBP may be a more general
>>>>FB> concept.
>>
>>>>for my untrained eye Cells seems to be like a particular implementation of
>>>>FBP concepts. can you show an example what general FBP can do
>>>>but Cells can't?
>>
>>>Cells looks to me more like an event-driven than a dataflow concept.
>>
>>I am reminded of the four blind men analyzing an elephant. If You People
>>Actually Used these things you would know what they are and not be
>>falling back on useless analogies.
>>
>>Events are hard for Cells, change is easy.
>>
>>
>>>Dataflow removes the micromanagement of function calls. When data
>>>arrives it gets processed.
>>>Imagine managing a restaurant:
>>
>>Yeah!!!! Argument from analogy!
>>
>>
>>>In traditional programming you'll have to tell the employees what to
>>>do.
>>>Jim get some meat. Mary serve table 23. Cory clean the floor. Dru make
>>>lasagna etc.
>>>So you have to take care of coordination between them.
>>>In dataflow you just specify the rules.
>>
>>Which is the exact same thing as traditional programming... well, no,
>>it is worse. Try Prolog or constraints or any non-deterministic (name
>>well chosen) paradigm.
>>
>>
>>>Mary you're the waitress, whenever customers arrive pick their orders
>>>and give them to the chef.
>>>Dru you're the chef, whenever you get orders prepare the food ordered.
>>>Whenever you miss some ingredient give an order to Jim.
>>>Jim you're the supplier. Whenever you get an order from the chef go
>>>buy it.
>>>So if you have a bottleneck in the kitchen you just add another chef.
>>
>>The problem Brooks identified in NSB was not handling more data of the
>>same kind, it was handling increasing numbers of kinds of data and the
>>combinatorial explosion of dependencies arising therefrom. The problem
>>is not more customers, the problem is adding a casino to the restaurant
>>and then entertainment and then getting food to the showgirls between acts.
>>
>>
>>>Parallelism goes (almost) for free.
>>
>>That would be the chorus line. Yes, it's free, they make it up on the slots.
>>
>>hth, kenny
>>
>>--
>>http://smuglispweeny.blogspot.com/
>>http://www.theoryyalgebra.com/
>>ECLM rant: http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
>>ECLM talk: http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
> 
> Ok you got me. Any idea for a cool problem with Cells? Pure Cells only,
> no GUI crap please.

Let's turn that around; that question still puts the tool first: what 
applications (or libraries -- AC is having fun with a Web programming 
library built on Cells) do you consider cool?

kt


-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483c5218$0$11616$607ed4bc@cv.net>
Ken Tilton wrote:
> 
> 
> Slobodan Blazeski wrote:
> 
>> Ok you got me. Any idea for a cool problem with Cells? Pure Cells only,
>> no GUI crap please.
> 
> 
> Let's turn that around, that question still has the tool in front: what 
> applications (or libraries -- AC is having fun with a Web programming 
> library built on Cells) do you consider cool?

btw, any simulation of course would be a blast. I think we are always 
building models when we program computers, but when you are /really/ 
modelling something else (like a football match) Cells /really/ pays off.

That said, there is a second class of problem well-handled by Cells but 
not so much cool as useful: driving another system from Lisp, whether it 
be a C GUI, a sophisticated graphics engine like OpenGL, a Web browser, 
or a C physics engine such as ODE. Over here the Lisp model Does Its 
Thing, while Cell observers drive the other system over there, feedback 
from that system coming in the form of C callbacks or http requests as 
the case may be.
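
That split -- model here, driven system there -- comes down to an observer 
per slot. A minimal sketch, assuming the Cells defobserver macro; 
gtk-label, gtk-widget-id, and gtk-set-text are hypothetical stand-ins for 
the FFI wrappers of "the other system":

```lisp
;; Hypothetical sketch: the Lisp model owns TEXT; the observer fires
;; whenever TEXT changes and pushes the new value across to the C GUI.
;; GTK-LABEL, GTK-WIDGET-ID, and GTK-SET-TEXT are illustrative names,
;; not real cells-gtk API.
(defobserver text ((self gtk-label) new-value old-value)
  (declare (ignore old-value))
  (gtk-set-text (gtk-widget-id self) new-value))
```

Feedback comes back the other way as C callbacks (or http requests) that 
simply setf an input cell, and Cells takes it from there.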

I always marvelled at the dual role played by Cells in something like 
Cells-Gtk or Celtk -- animating the application model I have in mind 
/and/ gluing that model to Gtk or Tcl/Tk. Then I finally got it: Cells 
is simply about change and simply gives application state causal power 
over other application state. Causation and change are as fundamental as 
it gets so dual/multiple roles are to be expected.

kt

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Slobodan Blazeski
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <4cf1ccb4-304b-4858-b448-027ccba6be31@k30g2000hse.googlegroups.com>
On May 27, 4:54 pm, Ken Tilton <···········@optonline.net> wrote:
> Which is the exact same thing as traditional programming... well, no,
> it is worse. Try Prolog or constraints or any non-deterministic (name
> well chosen) paradigm.

I already did that. Reminds me of the Russian joke:

I'm very ill.
Try drugs.
Tried. Ain't work.
Try herbs.
Tried. Ain't work.
Try surgery.
Tried. Ain't work.

Then pray to god
Tried. Ain't work.

:)
From: Paul Tarvydas
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <g1iol3$g5j$1@aioe.org>
Ken Tilton wrote:
> Events are hard for Cells, change is easy/

Huh?

I would have thought (without close examination :-) that Cells was based
on "events".

Obviously, our nomenclature does not match.

From my perspective, a "change" is an "event".  A change causes a chain of
reactions - a chain of events sent to the dependents interested in the
event.

I think of an "event" as a pulse of data (scalar or non-scalar).  The pulse
is propagated throughout the system.  It might cause other components to
produce further "events" (data pulses).  

The pulses of data are uni-directional (no call-return protocol implied).

A reactive system.

I perceive "events" (uni-directional data pulses) to be the most basic unit
of communication between software components.

Reading your past few postings, I surmise that you have some other
interpretation of the concept of "event".

Would you please explain to me what you think of as an "event" vs. "change"?

Thanx

pt
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483d00de$0$25019$607ed4bc@cv.net>
Paul Tarvydas wrote:
> Ken Tilton wrote:
> 
>>Events are hard for Cells, change is easy.
> 
> 
> Huh?
> 
> I would have thought (without close examination :-) that Cells was based
> on "events".
> 
> Obviously, our nomenclature does not match.

The nomenclature is fine; rather, our subjects do not match.

I am talking about the programming experience. If I write a rule that 
says the area is the length times the width then I am not thinking about 
events. I /am/ taking huge delight in knowing that when in the course of 
(yes) human events something transpires to occasion a change in the 
width that the area will miraculously change at effectively (to other 
code playing by the Cells rules) the exact same time. I believe this is 
how Mr. Sulzberger resolved all these action-at-a-distance matters, but 
we should not digress in the absence of beer.

So the rule A = l x w is code written in an eventless mindset with its 
invariance enforced by one dataflow hack or another. Dandy.
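
In Cells that eventless rule is literally one line. A minimal sketch, 
assuming the Cells defmodel, c-in, and c? macros; the rectangle model and 
its slot names are illustrative, not from the thread:

```lisp
;; Illustrative sketch: the area rule written once, with no event
;; handling in sight. C-IN marks the independent (settable) cells;
;; C? holds the rule.
(defmodel rectangle ()
  ((len   :initarg :len   :accessor len   :initform (c-in 10))
   (width :initarg :width :accessor width :initform (c-in 5))
   ;; The rule: true for all values of the independent variables.
   (area  :accessor area  :initform (c? (* (len self) (width self))))))

;; A later (setf (width r) 7) is "the course of human events"; Cells
;; recomputes (area r) at effectively the same time.
```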

But now we have a different rule that worries about a mouse-down event. 
A button with a slot called "clicked?". But a mousedown /happens/, it 
does not exist. Well, it exists ephemerally -- its existence is also its 
demise. An example will make this clearer.

If I say:

    (when (mouse-down *system*)
       (when (pt-within (mouse-pos *system*) (bounding-box self))
           ...I have been clicked! Do amazing things...))

And suppose we get a mouse-down event in this GUI button which happens 
to launch all ICBMs on Canada. That is OK, the button was disabled, the 
launch commander was just showing off for the cute new recruit, got a 
giggle out of her... I digress.

Now things get serious. No, not that. Canada freezes maple syrup 
production to drive up prices. Bush hopes a third war will distract the 
world from the first two he started and throws Congress a biscuit who 
runs in a circle, rolls over, and declares the hockey puck a weapon of 
mass destruction. All the silos go to defcon 42 and the launch interface 
software enables all the gui buttons which now need to be redrawn with a 
thicker frame so now they are bigger. The bounding box has changed. The 
above rule runs.

The launch commander is necking with the recruit having left the mouse 
positioned over the button, so if the mouse-down slot is populated... 
bye bye moose people. That's OK, he is necking, not holding down the 
mouse button. But what is the value of the mouse-down slot of the system 
instance? Is it the same as when he clicked the disabled button to show 
off? Bye bye moose people. And that would be an event.
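
Cells' escape hatch here is the ephemeral cell. A sketch, assuming the 
:cell :ephemeral slot option from the Cells library; the system model 
below is illustrative:

```lisp
;; An ephemeral slot propagates a new value to its dependents once,
;; then silently reverts to NIL -- its existence is also its demise.
;; So when the bounding box changes later, MOUSE-DOWN is already NIL
;; again and the stale click cannot re-fire the launch rule.
(defmodel system ()
  ((mouse-down :cell :ephemeral :initform (c-in nil)
               :accessor mouse-down)
   (mouse-pos :initform (c-in nil) :accessor mouse-pos)))
```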

So this is the tricky bit -- I can no longer think the way a spreadsheet 
modeller thinks. They build models as if there was no such thing as 
change by writing a rule that will be true regardless of events, for all 
values of all independent variables: area is length times width. boom, I 
am done. change is someone else's problem -- ah, there ya go. In the 
only paradigm most yobbos understand, they are in charge of propagating 
change. They are not like me, they are all smart, they /like/ being in 
charge. That is why they live in a forum where they can kick people who 
joke about their typos. I digress. I am not smart like them, I love 
automatic change management even more than I like automatic memory 
management. I could program faster in C with Cells/C than a yobbo could 
program in Lisp without it. Hell, the not-to-be method would handle the 
fricken GC...hmmm, should I go back? Run 20% faster? See, this is the 
thing: Cells is just about change (or change and cause and effect but I 
keep thinking they must be the same because of something Buddha said) so 
automatic memory -- what is that about? Anyone? Good, Tommy. Change.

> 
> From my perspective, a "change" is an "event".  A change causes a chain of
> reactions - a chain of events sent to the dependents interested in the
> event.
> 
> I think of an "event" as a pulse of data (scalar or non-scalar).  The pulse
> is propagated throughout the system.  It might cause other components to
> produce further "events" (data pulses).  
> 
> The pulses of data are uni-directional (no call-return protocol implied).
> 
> A reactive system.
> 
> I perceive "events" (uni-directional data pulses) to be the most basic unit
> of communication between software components.

Then stop thinking about communication between components. When you see 
area = length x width, do you see length and width components 
communicating with an area component? If so, fine, but does that not 
also feel a tad contrived? Smalltalkers say 2+3 is a message to 2 
telling it to add 3 to itself. chya. And worse than contrived, doh!, now 
you are the one thinking about architecting the dataflow from here to 
there to there to here to -- what? -- achieve the application 
functionality. Cue Brooks, that does not scale. I need a declarative 
paradigm that lets me write one rule at a time in isolation and have my 
application emerge from the sum of the rules.


Think about the mindset of a spreadsheet modeller, the CFO using the 
software to model the finances of his employer. Great, the huge model 
gets filled with literal data such as the current prime rate or current 
tax tables in some cells and formulas in others. [This happens to be 
exactly how cell-driven slots work in CLOS.] OK, now the CFO starts 
experimenting with different values for exchange rate and cost of fuel. 
Both are volatile so he experiments a lot. The people who wrote VisiCalc 
sweated bullets over getting the numbers displayed to recalculate 
properly in the face of these change events, but the CFO just wrote 
rules in a state of mind oblivious to change, relying on the underlying 
tool to handle all that.

> 
> Reading your past few postings, I surmise that you have some other
> interpretation of the concept of "event".

No, I just do not want to /think/ about events or change /while 
programming/. That is a slippery slope that ends up with Brooks being 
right: manual management of change by the programmer is ineluctably hard 
once a too-low threshold of application complexity is reached.

I want to be precisely exactly completely totally is anybody listening 
like a spreadsheet modeller, writing one rule for all time that will 
compute the right value for this cell and have Cells handle the rest.

When after ten years of C I made automatic memory management a "must 
have" on my shopping list for a new language, I was not saying I had no 
intention ever again to allocate and later no longer need blocks of 
memory -- I just did not want to think about it any more.

> 
> Would you please explain to me what you think of as an "event" vs. "change"?
> 

The trick then -- and hey, maybe I missed an easier way! -- is that 
unlike anything a spreadsheet modeller deals with it seems to me there 
is some data such as "the mouse just went to a down state" that forces 
me to step away from a steady-state state of mind and think about events 
/at the application level/ -- not just something happening in the 
background that Cells has to deal with. Ah, even better than the above 
scenario: suppose the launch commander is doing a drag and drop and 
happens to pass over the launch button just as it gets enabled. By GUI 
standards that is /not/ a mouse click, tho the mouse would be down over 
the enabled button.

As for there being an easier way, perhaps there is a way to duck the 
special handling of events by using time as state (with Lisp time 
granularity then being sufficiently imprecise as to take out Canada on 
an unlucky day) but Tilton's Laws of Programming frown on making things 
more complicated in order to make things simpler: if a mousedown is not 
like a mouse position they should be different in the same way.

peace, out, kzo

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Peter Hildebrandt
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483d13c1$0$90270$14726298@news.sunsite.dk>
Ken Tilton wrote:
> If I say:
> 
>    (when (mouse-down *system*)
>       (when (pt-within (mouse-pos *system*) (bounding-box self))
>           ...I have been clicked! Do amazing things...))

The more I think about it, the more I believe this is how we should do 
events in Cells-Gtk4.  Well, the deadline is on Friday.  Some time after 
that.

> And suppose we get a mouse-down event in this GUI button which happens 
> to launch all ICBMs on Canada. That is OK, the button was disabled, the 
> launch commander was just showing off for the cute new recruit, got a 
> giggle out of her... I digress.
> 
> Now things get serious. No, not that. Canada freezes maple syrup 
> production to drive up prices. Bush hopes a third war will distract the 
> world from the first two he started and throws Congress a biscuit who 
> runs in a circle, rolls over, and declares the hockey puck a weapon of 
> mass destruction. All the silos go to defcon 42 and the launch interface 
> software enables all the gui buttons which now need to be redrawn with a 
> thicker frame so now they are bigger. The bounding box has changed. The 
> above rule runs.
> 
> The launch commander is necking with the recruit having left the mouse 
> positioned over the button, so if the mouse-down slot is populated... 
> bye bye moose people. That's OK, he is necking, not holding down the 
> mouse button. But what is the value of the mouse-down slot of the system 
> instance? Is it the same as when he clicked the disabled button to show 
> off? Bye bye moose people. And that would be an event.

Thanks for that story.  You made my day :-)

> 
> The trick then -- and hey, maybe I missed an easier way! -- is that 
> unlike anything a spreadsheet modeller deals with it seems to me there 
> is some data such as "the mouse just went to a down state" that forces 
> me to step away from a steady state state of mind and think about events 
> /at the application level/ -- not just something happening in the 
> background that Cells has to deal with. Ah, even better than the above 
> scenario: suppose the launch commander is doing a drag and drop and 
> happens to pass over the launch button just as it gets enabled. By GUI 
> standards that is /not/ a mouse click, tho the mouse would be down over 
> the enabled button.

I created my own little event handling for the cairo-drawing-area widget 
in cells-gtk, which deals with clicks, selection (single elements and 
multiple elements by drawing a transparent box), and dragging.  And I 
can tell you, the code is ugly.

If only I had thought of ephemeral cells.  Well, time to think about 
cells-gtk4.

Btw, my current project contains a physics simulator using cells-ode as 
a backend, the gl-drawing-area widget of cells-gtk3 for visualization, 
and a small DSL to specify scenes in terms of s-expressions.  I will 
factor out the relevant stuff and merge it with cells-gtk next week.
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483d6209$0$25030$607ed4bc@cv.net>
Peter Hildebrandt wrote:
> 
> Thanks for that story.  You made my day :-)

Thanks. I am hoping it will discourage people from asking me for 
documentation again.

> I created my own little event handling for the cairo-drawing-area widget 
> in cells-gtk, which deals with clicks, selection (single elements and 
> multiple elements by drawing a transparent box), and dragging.  And I 
> can tell you, the code is ugly.

Right, this all started when I mentioned that declarative is fun but not 
event-friendly. Ephemerals help a lot, but still some transparency is 
lost: we are no longer Just Writing A Formula.

> 
> If only I had thought of ephemeral cells.  Well, time to think about 
> cells-gtk4.
> 
> Btw, my current project contains a physics simulator using cells-ode as 
> a backend, the gl-drawing-area widget of cells-gtk3 for visualization, 
> and a small DSL to specify scenes in terms of s-expressions.  I will 
> factor out the relevant stuff and merge it with cells-gtk next week.

Damn, sounds like you got pretty far with Cells-ODE. I thought I was 
going to be extending Cells left and right to support that. Glad to hear 
it went well, I gotta add that to Cello. I mentioned to the organizers 
of ECLM how thrilling it was that my 3D GUI buttons really were 3D. One 
of them responded, "So what?". My fault, I forgot to mention that the 
best part was actually the highlighting change needed to indicate that a 
button was being held down (moused down but not yet released): I 
achieved this by actually moving the button further, well, down into the 
keyboard. I am sure that would have him jumping up and down. And now 
with Cells-ODE I can actually put little springs under the GUI buttons 
and when we get the mouse-up event...

:)

kt

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Peter Hildebrandt
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483d716d$0$90262$14726298@news.sunsite.dk>
Ken Tilton wrote:
> Damn, sounds like you got pretty far with Cells-ODE. I thought I was 
> going to be extending Cells left and right to support that.

I need to do some performance checking, tho.  We might still need to 
create a with-one-propagation macro for atomic changes.  The stuff is 
really slow at this point, about 1 fps.  But hold on, this might be due 
to my using OpenGL's selection, processing about 10MB of vertex data at 
every iteration.

> Glad to hear 
> it went well, I gotta add that to Cello. I mentioned to the organizers 
> of ECLM how thrilling it was that my 3D GUI buttons really were 3D. One 
> of them responded, "So what?". My fault, I forgot to mention that the 
> best part was actually the highlighting change needed to indicate that a 
> button was being held down (moused down but not yet released): I achieved 
> this by actually moving the button further, well, down into the 
> keyboard. I am sure that would have him jumping up and down. And now 
> with Cells-ODE I can actually put little springs under the GUI buttons 
> and when we get the mouse-up event...

Can't wait to play with it.  As a matter of fact, I don't know whether 
there are native springs in ODE.  IIRC, you need to do some black magic 
for that.

But wait -- we have CELLS-ode!  A spring is defined by F = D*x, so 
what about something like

(make-instance 'slider-joint
                :body1 *environment*
                :body2 my-button
                :max-force (c? (vector 0 0 (* D (z my-button)))))


A cells driven physics simulator will be an awesome toy ... but I still 
have a thesis to write ;-)

TTL,
Peter
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483d94aa$0$11599$607ed4bc@cv.net>
Peter Hildebrandt wrote:
> I need to do some performance checking, tho. 

Oh, right. Sounds like you were able to defer that.

> We might still need to 
> create a with-one-propagation macro for atomic changes. 

I thought I did that -- maybe I forgot to document it. :) ISTR doing the 
general case of with-client-propagation so users could play all sorts of 
games.

> The stuff is 
> really slow at this point, about 1 fps.  But hold on, this might be due 
> to my using OpenGL's selection, processing about 10MB of vertex data at 
> every iteration.

Oy. Can you check every tenth iteration?

> 
> Can't wait to play with it.  As a matter of fact, I don't know whether 
> there are native springs in ODE.

Yeah, as I wrote, I recalled being surprised at how little a physics 
engine does for us. I have to wonder if a Lisp physics engine might be 
in order. But maybe there is more there than I realize.


>  IIRC, you need to do some black magic 
> for that.
> 
> But wait -- we have CELLS-ode!  A spring is defined by F = Dx --> So 
> what about something like
> 
> (make-instance 'slider-joint
>                :body1 *environment*
>                :body2 my-button
>                :max-force (c? (vector 0 0 (* D (z button)))))
> 
> 
> A cells driven physics simulator will be an awesome toy ... but I still 
> have a thesis to write ;-)

Must be nice having the finish line in sight. Good luck with that.

kt

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Peter Hildebrandt
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483dc53f$0$90273$14726298@news.sunsite.dk>
Ken Tilton wrote:
> Peter Hildebrandt wrote:
>> We might still need to create a with-one-propagation macro for atomic 
>> changes. 
> 
> I thought I did that -- maybe I forgot to document it. :) ISTR doing the 
> general case of with-client-propagation so users could play all sorts of 
> games.

Stupid me.  Now I remember us talking about it on cells-devel.  I will 
dig it up and integrate it with cells-ode next week.

>> The stuff is really slow at this point, about 1 fps.  But hold on, 
>> this might be due to my using OpenGL's selection, processing about 10MB 
>> of vertex data at every iteration.
> 
> Oy. Can you check every tenth iteration?

As a matter of fact, that is what I do.  The GUI provides me with a few 
spin-buttons to select how often to run ODE, how often to redraw OpenGL, 
and how often to process the output.  But this is all for academics 
anyway, so who is ever going to care?

>> Can't wait to play with it.  As a matter of fact, I don't know whether 
>> there are native springs in ODE.
> 
> Yeah, as I wrote I recalled being surprised at how little a physics 
> engine does for us. I have to wonder if a Lisp physics engine might be 
> in order. But maybe there is more there than I realize.

Yep.  Before starting to work with ODE I seriously considered writing my 
own physics engine and realized quickly that it was not as easy as it 
looks.  It is quite simple to get the toy examples right, but once you 
face Real Problems it gets ugly.  You find yourself dealing with numeric 
stability, numeric performance issues, optimization here and there, 
borderline cases ...

Decided I could imagine nicer things to do on a sunny afternoon.


> Must be nice having the finish line in sight. Good luck with that.

Thanks.  And as a matter of fact, it was the best decision of the past 
year or so to set an ultimate deadline for delivery.  I suggest you do 
something similar for your Algebra software.  It has a wonderful way of 
focussing the mind :-)

Cheers,
Peter
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483dfef2$0$25024$607ed4bc@cv.net>
Peter Hildebrandt wrote:
> Ken Tilton wrote:
> 
>> Peter Hildebrandt wrote:
>>
>>> We might still need to create a with-one-propagation macro for atomic 
>>> changes. 
>>
>>
>> I thought I did that -- maybe I forgot to document it. :) ISTR doing 
>> the general case of with-client-propagation so users could play all 
>> sorts of games.
> 
> 
> Stupid me.  Now I remember us talking about it on cells-devel.  I will 
> dig it up and integrate it with cells-ode next week.
> 
>>> The stuff is really slow at this point, about 1 fps.  But hold on, 
>>> this might be due to my using OpenGL's selection, processing about 
>>> 10MB of vertex data at every iteration.
>>
>>
>> Oy. Can you check every tenth iteration?
> 
> 
> As a matter of fact, that is what I do.  The GUI provides me with a few 
> spin-buttons to select how often to run ODE, how often to redraw OpenGL, 
> and how often to process the output.

Nice. I was wondering if there would be some way to leverage synapses to 
automatically and selectively tune how often the ODE side got polled. 
There would be a base rate (perhaps no more frequent than the highest 
frequency required by any Cell) and then individual Cells or groups of 
cells or something could decide which subset of the base rate samples to 
process: Ah, not much is going on, let's wait 10ms before recomputing. 
The cool thing would be that the synapse could adjust this over time, so 
if things did get crazy up goes the sampling rate.

>  But this is all for academics 
> anyway, so who is ever going to care?

Academics!? Omigod!! If I had known that...
> 
>>> Can't wait to play with it.  As a matter of fact, I don't know 
>>> whether there are native springs in ODE.
>>
>>
>> Yeah, as I wrote I recalled being surprised at how little a physics 
>> engine does for us. I have to wonder if a Lisp physics engine might be 
>> in order. But maybe there is more there than I realize.
> 
> 
> Yep.  Before starting to work with ODE I seriously considered writing my 
> own physics engine and realized quickly that it was not as easy as it 
> looks.  It is quite simple to get the toy examples right, but once you 
> face Real Problems it gets ugly.  You find yourself dealing with numeric 
> stability, numeric performance issues, optimization here and there, 
> borderline cases ...
> 
> Decided I could imagine nicer things to do on a sunny afternoon.

Exactly. As long as ODE offers value it is more useful and fun to add 
value elsewhere.

> 
> 
>> Must be nice having the finish line in sight. Good luck with that.
> 
> 
> Thanks.  And as a matter of fact, it was the best decision of the past 
> year or so to set an ultimate deadline for delivery.  I suggest you do 
> something similar for your Algebra software.  It has a wonderful way of 
> focussing the mind :-)

Yep. I went into final approach about six weeks ago, committed to doing 
the absolute minimum to have something in folks' hands by, oh, July 1. 
It's a lot more fun programming when I can get to something tricky and 
just say f**k it, they'll live. :)

kt


-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Sohail Somani
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <Kjn%j.332$i74.37@edtnps91>
Ken Tilton wrote:

> Yep. I went into final approach about six weeks ago, committed to doing 
> the absolute minimum to have something in folks' hands by, oh, July 1. 
> It's a lot more fun programming when I can get to something tricky and 
> just say f**k it, they'll live. :)

If you want to put your mouth where your foot is:

http://kalzumeus.com/2008/05/27/incredible-interest-in-the-30-day-sprint/

That is my official invitation to you.

Really. Would love to see you reach the finish line.
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483e2958$0$11612$607ed4bc@cv.net>
Sohail Somani wrote:
> Ken Tilton wrote:
> 
>> Yep. I went into final approach about six weeks ago, committed to 
>> doing the absolute minimum to have something in folks' hands by, oh, 
>> July 1. It's a lot more fun programming when I can get to something 
>> tricky and just say f**k it, they'll live. :)
> 
> 
> If you want to put your mouth where your foot is:
> 
> http://kalzumeus.com/2008/05/27/incredible-interest-in-the-30-day-sprint/
> 
> That is my official invitation to you.

Why thank you. I'll think about it. But my experience has been that I 
need to minimize distraction ruthlessly, so I might have to pass. Sounds 
like fun, tho.

> 
> Really. Would love to see you reach the finish line.

Not to worry, I have a friendly letter here from the IRS asking when 
they might expect to see last year's taxes, those tend to focus the mind 
wonderfully as well. :)

kt

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Sohail Somani
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <xXp%j.395$cV.278@edtnps92>
Ken Tilton wrote:

>> Really. Would love to see you reach the finish line.
> 
> Not to worry, I have a friendly letter here from the IRS asking when 
> they might expect to see last year's taxes, those tend to focus the mind 
> wonderfully as well. :)

Well, best of luck focusing. Though I hear lots of (((())))) make you 
cross-eyed.
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <48404749$0$11644$607ed4bc@cv.net>
Sohail Somani wrote:
> Ken Tilton wrote:
> 
>> Yep. I went into final approach about six weeks ago, committed to 
>> doing the absolute minimum to have something in folks' hands by, oh, 
>> July 1. It's a lot more fun programming when I can get to something 
>> tricky and just say f**k it, they'll live. :)
> 
> 
> If you want to put your mouth where your foot is:
> 
> http://kalzumeus.com/2008/05/27/incredible-interest-in-the-30-day-sprint/
> 
> That is my official invitation to you.
> 

I have reconsidered. How do I accept? (Help a dinosaur.) I gather any 
blog entry should be tagged 30day. Anything else?

kt

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Sohail Somani
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <9WY%j.39$Gn.31@edtnps92>
Ken Tilton wrote:
> 
> 
> Sohail Somani wrote:
>> Ken Tilton wrote:
>>
>>> Yep. I went into final approach about six weeks ago, committed to 
>>> doing the absolute minimum to have something in folks' hands by, oh, 
>>> July 1. It's a lot more fun programming when I can get to something 
>>> tricky and just say f**k it, they'll live. :)
>>
>>
>> If you want to put your mouth where your foot is:
>>
>> http://kalzumeus.com/2008/05/27/incredible-interest-in-the-30-day-sprint/
>>
>> That is my official invitation to you.
>>
> 
> I have reconsidered. How do I accept? (Help a dinosaur.) I gather any 
> blog entry should be tagged 30day. Anything else?

Yep.

I've added your feed to the aggregate feed: 
http://feeds.feedburner.com/30dayers

If you tag your posts with "30day" it should show up here:

http://smuglispweeny.blogspot.com/feeds/posts/default/-/30day?alt=rss

The url immediately above is the feed I'm aggregating so if you make a 
post and it didn't show up, one of us screwed up.

An initial post declaring that you are throwing your hat in the ring 
would get other people caught up on what you are up to. I'm sure you 
will draw much interest with your.. interesting... writing style :-)

Best of luck!

Sohail
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <4840753c$0$25056$607ed4bc@cv.net>
Sohail Somani wrote:
> Ken Tilton wrote:
> 
>>
>>
>> Sohail Somani wrote:
>>
>>> Ken Tilton wrote:
>>>
>>>> Yep. I went into final approach about six weeks ago, committed to 
>>>> doing the absolute minimum to have something in folks' hands by, oh, 
>>>> July 1. It's a lot more fun programming when I can get to something 
>>>> tricky and just say f**k it, they'll live. :)
>>>
>>>
>>>
>>> If you want to put your mouth where your foot is:
>>>
>>> http://kalzumeus.com/2008/05/27/incredible-interest-in-the-30-day-sprint/ 
>>>
>>>
>>> That is my official invitation to you.
>>>
>>
>> I have reconsidered. How do I accept? (Help a dinosaur.) I gather any 
>> blog entry should be tagged 30day. Anything else?
> 
> 
> Yep.
> 
> I've added your feed to the aggregate feed: 
> http://feeds.feedburner.com/30dayers
> 
> If you tag your posts with "30day" it should show up here:
> 
> http://smuglispweeny.blogspot.com/feeds/posts/default/-/30day?alt=rss
> 
> The url immediately above is the feed I'm aggregating so if you make a 
> post and it didn't show up, one of us screwed up.

Is a "tag" a "label"? It has been thirty seconds and I still do not see:

http://smuglispweeny.blogspot.com/search/label/30day%20Algebra%20software%20Lisp%20application

kt

> 
> An initial post declaring that you are throwing your hat in the ring 
> would get other people caught up on what you are up to. I'm sure you 
> will draw much interest with your.. interesting... writing style :-)
> 
> Best of luck!
> 
> Sohail

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Sohail Somani
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <L4%%j.90$Gn.6@edtnps92>
Ken Tilton wrote:
> 
> 
> Sohail Somani wrote:
>> http://smuglispweeny.blogspot.com/feeds/posts/default/-/30day?alt=rss
>>
>> The url immediately above is the feed I'm aggregating so if you make a 
>> post and it didn't show up, one of us screwed up.
> 
> Is a "tag" a "label"? It has been thirty seconds and I still do not see:
> 
> http://smuglispweeny.blogspot.com/search/label/30day%20Algebra%20software%20Lisp%20application 

Yikes. That is the ugliest URL ever. Anyway, in the interest of keeping 
the dinosaur old, I've adjusted the feed to pick up your interesting 
category name.

It should be picked up sometime between now and July ;-)
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <48408355$0$25026$607ed4bc@cv.net>
Sohail Somani wrote:
> Ken Tilton wrote:
> 
>>
>>
>> Sohail Somani wrote:
>>
>>> http://smuglispweeny.blogspot.com/feeds/posts/default/-/30day?alt=rss
>>>
>>> The url immediately above is the feed I'm aggregating so if you make 
>>> a post and it didn't show up, one of us screwed up.
>>
>>
>> Is a "tag" a "label"? It has been thirty seconds and I still do not see:
>>
>> http://smuglispweeny.blogspot.com/search/label/30day%20Algebra%20software%20Lisp%20application 
> 
> 
> 
> Yikes. That is the ugliest URL ever. Anyway, in the interest of keeping 
> the dinosaur old, I've adjusted the feed to pick up your interesting 
> category name.
> 
> It should be picked up sometime between now and July ;-)

Do I hear you saying "and /only/ 30day"?

:)

kt


-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Sohail Somani
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <QU%%j.106$Gn.24@edtnps92>
Ken Tilton wrote:
> 
> 
> Sohail Somani wrote:
>> Ken Tilton wrote:
>>> Is a "tag" a "label"? It has been thirty seconds and I still do not see:
>>>
>>> http://smuglispweeny.blogspot.com/search/label/30day%20Algebra%20software%20Lisp%20application 
>>
>> Yikes. That is the ugliest URL ever. Anyway, in the interest of 
>> keeping the dinosaur old, I've adjusted the feed to pick up your 
>> interesting category name.
>>
>> It should be picked up sometime between now and July ;-)
> 
> Do I hear you saying "and /only/ 30day"?

I thought of all people, Lisp programmers would DWIM (do what I mean).

:-)
From: Sohail Somani
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <6%Y%j.40$Gn.17@edtnps92>
Sohail Somani wrote:
> Ken Tilton wrote:
>>
>>
>> Sohail Somani wrote:

>>> That is my official invitation to you.
>>>
>>
>> I have reconsidered. How do I accept? (Help a dinosaur.) I gather any 
>> blog entry should be tagged 30day. Anything else?
> 
> Yep.

Err that is, no you should not need to do anything else.

Another reason I got fired from customer support.

Kidding.
From: Peter Hildebrandt
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483e7487$0$90271$14726298@news.sunsite.dk>
Ken Tilton wrote:
> Nice. I was wondering if there would be some way to leverage synapses to 
> automatically and selectively tune how often the ODE side got polled. 
> There would be a base rate (perhaps no more frequent than the highest 
> frequency required by any Cell) and then individual Cells or groups of 
> cells or something could decide which subset of the base rate samples to 
> process: Ah, not much is going on, let's wait 10ms before recomputing. 
> The cool thing would be that the synapse could adjust this over time, so 
> if things did get crazy up goes the sampling rate.

That'd be really nice.  Then of course we'd have to intertwine the 
stepping function with cells, so that cells knows when it was called the 
last time (and with which step size and with how many iterations).
Then a synapse would know when (at which data pulse) it was updated the 
last time and how much simulated time had passed since.  I believe the 
fire decision should be a function of those two.

> Yep. I went into final approach about six weeks ago, committed to doing 
> the absolute minimum to have something in folks' hands by, oh, July 1. 
> It's a lot more fun programming when I can get to something tricky and 
> just say f**k it, they'll live. :)

Great.  I'm looking forward to seeing it released :)

Peter
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483e9fef$0$15163$607ed4bc@cv.net>
Peter Hildebrandt wrote:
> Ken Tilton wrote:
> 
>> Nice. I was wondering if there would be some way to leverage synapses 
>> to automatically and selectively tune how often the ODE side got 
>> polled. There would be a base rate (perhaps no more frequent than the 
>> highest frequency required by any Cell) and then individual Cells or 
>> groups of cells or something could decide which subset of the base 
>> rate samples to process: Ah, not much is going on, let's wait 10ms 
>> before recomputing. The cool thing would be that the synapse could 
>> adjust this over time, so if things did get crazy up goes the sampling 
>> rate.
> 
> 
> That'd be really nice.  Then of course we'd have to intertwine the 
> stepping function with cells, so that cells knows when it was called the 
> last time (and with which step size and with how many iterations).

Search on "f-sensitivity". :) The synapse does all the work, including 
keeping track of the last time it fired. But then the question is 
whether that work is more expensive than Just Firing, or so expensive as 
to produce little gain. Steele started by having variable assignment be 
a constraint in his CPL (the classic All X All the Time error, aka the 
"Hey! Let's use it for /everything!" error) and soon enough found out 
how slow slow could be. If so, we could, as you suggested, extend Cells 
a little to support some "hard-wired" dataflow that bypasses the Cells 
machinery.
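[For the record, a sketch of what such a throttling synapse might look like. This is guesswork written against the Cells synapse idiom: the `with-synapse` shape and the `:propagate`/`:no-propagate` return codes are recalled from memory, and `sim-time` and the `sample-form` plumbing are invented here, not taken from the actual Cells or cells-ode source.]

```lisp
;; Hypothetical sketch, not the real Cells/cells-ode API.  A synapse
;; that lets a sampled value through only when at least MIN-INTERVAL of
;; simulated time has elapsed since it last fired.
(defmacro f-ode-throttle (synapse-id min-interval sample-form)
  `(with-synapse ,synapse-id ((last-fired nil))
     (let ((now (sim-time)))              ; assumed accessor on the sim clock
       (if (or (null last-fired)
               (>= (- now last-fired) ,min-interval))
           (progn (setf last-fired now)
                  (values ,sample-form :propagate))
           (values nil :no-propagate)))))

;; Used inside a rule, something like:
;;   (c? (f-ode-throttle :ode-poll 0.01 (ode-sample self)))
```

An adaptive version would also close over some recent-activity measure and shrink min-interval when the samples start changing fast, which is the "up goes the sampling rate" behavior described above.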

kt

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Paul Tarvydas
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <g1mmem$mdf$1@aioe.org>
Ken Tilton wrote:

> The nomenclature is fine, rather our subjects do not match.

I'm lost.  I can't figure out if you're agreeing with me or arguing with me.

So, I'll just change the subject and respond with things that tangentially
fired some of my neurons.

(BTW, if you keep up the veiled political threats, I'll, I'll, report you to
the CSA, exchange all of my canmeros for euros and request that CAARP bring
on a new ice age, rendering all the oil on the planet uselessly
solidified).

> But now we have a different rule that worries about a mouse-down event.
> A button with a slot called "clicked?". But a mousedown /happens/, it
> does not exist. Well, it exists ephemerally -- its existence is also its
> demise. 

Yes.  Events are discrete packets.

Furthermore, a part that fires events from one of its output pins does not
know "where" the event goes.  It might go to one receiver or many receivers
or zero receivers.

Yes, a part can fire events to an output pin, even if the pin is NC (no
connection).  

[Try that, you call-return people!]

When the launch commander sat down on the launch button, the button fired
off (one) "I'm going down" event.  At that point in time, the rest of the
system was in a "don't care" state and the event was unceremoniously dumped
into the Fargo wood-chipper.

> No, I just do not want to /think/ about events or change /while
> programming/. That is a slippery slope that ends up with Brooks being
> right: manual management of change by the programmer is ineluctably hard
> once a too-low threshold of application complexity is reached.

If that's what he actually said, then he's simply wrong.  It's only too hard
to manage change (control flow) if you tangle it up with other orthogonal
concerns (e.g. like data design or getting rid of goto's).

OTOH, I do believe that designing notation to fit a paradigm is a good thing
(comes from a physics background).  If the target paradigm does not need
events, then they shouldn't be exposed to the user.

I am, though, personally interested in the grand unified theory of
softwaring and that's certainly where I was coming from.

> The trick then -- and hey, maybe I missed an easier way! -- is that

Maybe.

I once built an injection-molding control language which allowed for "active"
variables inside expressions (akin to what I think Cells does).  I needed
to build the compiler using 8 passes just to keep things straight in my
mind and to calculate the global dependency chains.

After completing that exercise, I stepped back and noticed a "pattern" which
eventually turned into a simple event-shepherding kernelette and truly
encapsulated doo-dads (parts) that simply potato-gunned discrete events at
each other.

> As for there being an easier way, perhaps there is a way to duck the 
> special handling of events by using time as state

That direction leads to
<upcase><bold><biggest-font>T</upcase></bold></biggest-font>rouble.

Read up on "race conditions" in an electronics text.  The Right Way to solve
these problems is to admit that you cannot resolve time with sufficient
accuracy to determine the time-ordering of events.  Sometimes 'a' appears
to occur before 'b', sometimes the other way around.  There is always a
unit of time smaller than your ability to measure and, hence, there is
always the possibility that you cannot resolve the order of two events. 
The Correct Answer is to defensively, explicitly design in
anti-race-condition measures (well-understood and well-documented in
hardware design texts).

pt
Toronto, Canada
From: Dihydrogen Monoxide
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <aZH%j.3220$Q57.3098@nlpi065.nbdc.sbc.com>
On Wed, 28 May 2008 02:51:16 -0400, Ken Tilton wrote:

> As for there being an easier way, perhaps there is a way to duck the
> special handling of events by using time as state (with Lisp time
> granularity then being sufficiently imprecise as to take out Canada on
> an unlucky day) but Tilton's Laws of Programming frown on making things
> more complicated in order to make things simpler: if a mousedown is not
> like a mouse position they should be different in the same way.
> 
> peace, out, kzo

you could just have a multi step process

button = cell tied directly to the mouse (like a volatile variable in C)
this one receives information from the mouse just like the other example 
received the temperature. it's mostly a proxy, it passes on its contents 
whenever they are received to the switch which actually makes the decision.

the key here is that mice send nothing when nothing CHANGES.

switch = cell which either points to nowhere or to another cell
when up is received it points to nowhere
when down is received it points to the cell that does the processing

the reason for the switch between limbo and work is that if the button is 
held down and some other event needs to know then the switch can be 
queried for the cell to go to rather than trying to beg the mouse to 
report.

essentially Cells needs transistors.
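
[That proxy-plus-switch idea could be sketched with Cells' own constructors. `defmodel`, `c-in`, and `c?` are real Cells; the model names, slot names, and the `*proxy*`/`*worker*` plumbing are invented here for illustration:]

```lisp
;; Sketch of the proxy + switch idea.  The mouse driver does
;; (setf (mouse-state *proxy*) :down) only on change; the switch's
;; TARGET slot then points either nowhere or at the working cell.
(defmodel mouse-proxy ()
  ((mouse-state :initarg :mouse-state
                :initform (c-in :up)
                :accessor mouse-state)))

(defmodel click-switch ()
  ((target :accessor target
           :initform (c? (ecase (mouse-state *proxy*)
                           (:up nil)               ; limbo: point nowhere
                           (:down *worker*))))))   ; point at the processor
```

[Anything that later needs to know whether the button is held down queries TARGET instead of begging the mouse to report.]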

Or you could get physematical:
Every cell can have up to the umpteenth derivative.
State, Change, Acceleration, Jump, Flash, Sneeze
each could be tied to a cell



-- 
http://dihymo.blogspot.com
http://ntltrmllgnc.stumbleupon.com
From: Paul Tarvydas
Subject: Re: Visual Frameworks vs. Cells compared to Flow-Based Programming
Date: 
Message-ID: <g1c08j$hhn$1@aioe.org>
<shameless plug>

You might also be interested in Visual Frameworks(TM).

Here are two short papers about it:

http://www.visualframeworksinc.com/DEBS2007-paper63.pdf
http://www.visualframeworksinc.com/CCECE06-0379.pdf

(more info / slides at www.visualframeworksinc.com).

The current version is written in Lispworks (this version is a work in
progress).  I'm open to ideas on how to move it forward.

</shameless plug>

pt
From: Frank Buss
Subject: Re: Visual Frameworks vs. Cells compared to Flow-Based Programming
Date: 
Message-ID: <15q067xb0y76s.zm0211zzevi$.dlg@40tude.net>
Paul Tarvydas wrote:

> Here are two short papers about it:
> 
> http://www.visualframeworksinc.com/DEBS2007-paper63.pdf
> http://www.visualframeworksinc.com/CCECE06-0379.pdf

This looks interesting. But it looks like it is a bit different from Cells and
FBP. Is it right that in your system only one process (called "part" in
your system) is active at the same time? How would you implement the
telegram problem with your system?
http://en.wikipedia.org/wiki/Flow-based_programming#.22Telegram_Problem.22 

In a multithreading FBP system, the ReadSeq process can run all the time,
feeding lines of strings to the next process, which runs in parallel. But
if I understand your system correctly, it would need to fill up the queue
to the next process with all lines of the whole file, then go idle, and
then the next process reads and evaluates the queue.

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Paul Tarvydas
Subject: Re: Visual Frameworks vs. Cells compared to Flow-Based Programming
Date: 
Message-ID: <g1jucf$51l$1@aioe.org>
Frank Buss wrote:


> This looks interesting. But it looks like it is a bit different from Cells
> and FBP. Is it right that in your system only one process (called "part"
> in your system) is active at the same time? 

No - EXACTLY the opposite.  

The idea is that every part is asynchronous and stand-alone.  

Ideally every part would get its own cpu.  

When we have only one cpu to work with (or fewer cpus than there are parts),
we have to "fake it".  

When we fake it with a kernel in an embedded system, we use hardware
interrupts to supply the "threads" of execution - we can have at least as
many "threads" active as there are hardware interrupt levels.  We *could*
fake it using an RTOS and a process for each part instance, I suppose, but
our real-life requirements didn't allow for such (space and time) luxuries.  

Nor was it necessary to use an RTOS and processes - VF gives the benefits
without incurring the cost of fully-preemptive processes.

We have faked it under Windows (C) and Lispworks.  A real project (an IDE
running under Windows) used something like O(10,000) part instances, iirc. 
A simulation achieved O(1M) instances.

Example: currently, I am working on a GUI for a PDF page-layout system.  I
needed a way to update properties on an object property sheet every
time I made a change on the page (and v.v.).  In Smalltalk, I would have
used the listener pattern, in CL, I would use a hoary complex of callbacks,
in kt-land, I would have used cells, but I was in too much of a hurry to
hit a trade show to learn a new (to me) technology.  I built a part class
and a schematic class and used these to shepherd events around my system (I
made the deadline :-).

> How would you implement the 
> telegram problem with your system?
> http://en.wikipedia.org/wiki/Flow-based_programming#.22Telegram_Problem.22

Similarly to FBP solution, I guess.

ReadSeq would read lines from the file and send each line out as an event.

DeCompose would chop up the line into words and send each word out as an
event.  If DeCompose was "slower" than ReadSeq, line-events would pile up
at the input (in order) of DeCompose.  If DeCompose was "faster" than
ReadSeq, it would finish chopping and sending words, then go idle waiting
for another line event to wake it up.

And so on.

In fact, having used this stuff in embedded systems for so long, my
knee-jerk reaction would be to replace ReadSeq with ReadChar - and have it
send single characters as events.  Decompose would be replaced by Tokenize,
which would buffer characters until it sees white space, then it would send
a word event.  And so on.

The choice of records vs. characters is an architectural decision, not
something imposed by the system.
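
[A toy sketch of the character-level variant just described, with plain closures standing in for VF parts. The queue plumbing and every name below are invented for illustration, not taken from VF; a real VF system would queue the events and let the kernelette shepherd them between parts rather than using direct calls.]

```lisp
;; Toy sketch of ReadChar -> Tokenize as described above, with a
;; function call standing in for event delivery.
(defun make-tokenize (emit-word)
  "Return a part that buffers character events and fires a word event
at EMIT-WORD whenever it sees white space."
  (let ((buffer '()))
    (lambda (char)
      (if (member char '(#\Space #\Tab #\Newline))
          (when buffer
            (funcall emit-word (coerce (nreverse buffer) 'string))
            (setf buffer '()))
          (push char buffer)))))

(defun read-char-part (stream tokenize)
  "Send each character of STREAM as an event, then a final newline so
the last word gets flushed."
  (loop for c = (read-char stream nil)
        while c do (funcall tokenize c)
        finally (funcall tokenize #\Newline)))
```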

> In a multithreading FBP system, the ReadSeq process can run all the time,

VF uses cooperative multitasking (as does Stackless Python, as far as I
understand).  VF parts are fully asynchronous and stand-alone, but the
(severe) overheads of full preemption are avoided with a trade-off. 

At first, it may seem that the trade-off (no preemption) is too severe, but,
if you switch your mind-set to the "reactive" paradigm  - everything is
INPUT driven - the trade-off melts away and seems completely natural and
fun (powerful) to use.

The weird-est (fun-est) part of VF is that you feel utterly compelled to
draw (semantically complete) diagrams of the software.  

And, this leads to the revelation that all diagramming tools suck as code
editors.  

The current implementation (written in lispworks) is a grand experiment at
inventing an emacs-like diagram editor (esp. employing "point" and "mark"
cursors) and "factbases" (using PAIP) to infer graphical objects from
graphical primitives (lines, dots, text), parse them, and infer their
semantic meaning.

pt
From: Frank Buss
Subject: Re: Visual Frameworks vs. Cells compared to Flow-Based Programming
Date: 
Message-ID: <1aata8e0mxoj.18e30kn3yxc3r$.dlg@40tude.net>
Paul Tarvydas wrote:

> We have faked it under Windows (C) and Lispworks.  A real project (an IDE
> running under Windows) used something like O(10,000) part instances, iirc. 
> A simulation achieved O(1M) instances.

For an embedded system with interrupts this sounds like it is not too
difficult, but how did you do this with Lispworks in Windows? I assume you
don't create 1M threads, which would be really slow.

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Paul Tarvydas
Subject: Re: Visual Frameworks vs. Cells compared to Flow-Based Programming
Date: 
Message-ID: <g1p55e$j12$1@aioe.org>
Frank Buss wrote:

> For an embedded system with interrupts this sounds like it is not too
> difficult, but how did you do this with Lispworks in Windows? I assume you
> don't create 1M threads, which would be really slow.

Short answer: I write a kernelette (or a class) that shepherds events from
output queues to input queues between parts.  The existence of the
kernelette "maps" call-return semantics into an input-driven (reactive)
paradigm.  Then it all just works.  The components are so cheap that you
can have 10,000's of them lying around.

Longer answer:  It really is as simple as above, but I suspect that you want
me to elucidate further :-).

It's a cheesy form of cooperative multi-tasking, where each "thread"
promises to "get in and get out quickly".  (From what I've read, stackless
python is similar).

Each part has an input queue (of pending incoming events), an output queue,
a busy flag, some code (either hierarchically more networks of parts, or
state diagrams, or textual code), some state and local data and an entry
point called by the kernelette to begin processing the input queue.  When
called, the part processes an (or all[*]) input event(s) and possibly
produces output events, which are queued on the output queue.  On exit, the
output queue is processed (the events are delivered to their destinations
using the wiring tables) and a chain reaction of input-event processing
occurs.

That's it in a nutshell.  This allows you to write "completely"
encapsulated[+] reactive components that communicate via events and don't
need their own (often preallocated) stacks (like in fully preemptive
multi-processing).  As you can see, the cost of these components is more
like the cost of an "object" rather than a (full-blown) "process".  You can
have 10,000's of these things lying around, since they don't consume cpu
until they are activated.  Activation is cheaper than a full-blown context
switch (pop queue, case on "pin" index).
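As a rough illustration of the mechanism described above (input/output
queues, busy flag, wiring tables, chain reaction), here is a minimal
single-threaded sketch in Python. All names are invented; this is one
reading of the description, not actual VF code:

```python
from collections import deque

class Part:
    """A reactive component: input queue, output queue, busy flag, handler."""
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler      # fn(part, pin, data) -> None
        self.inq = deque()
        self.outq = deque()
        self.busy = False

class Kernelette:
    """Shepherds events from output queues to input queues between parts."""
    def __init__(self):
        self.wires = {}             # (part, out_pin) -> [(dest, in_pin), ...]

    def wire(self, src, out_pin, dst, in_pin):
        self.wires.setdefault((src, out_pin), []).append((dst, in_pin))

    def send(self, part, pin, data):
        part.outq.append((pin, data))

    def inject(self, part, pin, data):
        part.inq.append((pin, data))
        self.run(part)

    def run(self, part):
        # chain reaction: activate parts until all queues drain
        pending = [part]
        while pending:
            p = pending.pop()
            if p.busy:
                continue                        # busy flag blocks reentrancy
            while p.inq:
                p.busy = True
                pin, data = p.inq.popleft()     # pop queue, case on pin
                p.handler(p, pin, data)
                p.busy = False
                # deliver output events to destinations via wiring tables
                while p.outq:
                    out_pin, out_data = p.outq.popleft()
                    for dst, in_pin in self.wires.get((p, out_pin), []):
                        dst.inq.append((in_pin, out_data))
                        pending.append(dst)
```

Toy usage: wire a doubling part to a logging part, inject one event, and
the kernelette delivers the resulting output event along the wire.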

Disadvantages: (a) no preemption, (b) you have to promise to get-in get-out
quickly.

Advantages: (c) you get to think and program in the reactive paradigm (which
I claim is more appropriate for many modern problems), (d) you can build
structured, hierarchical networks (using "schematics" - described in the
papers), (e) you can draw and compile diagrams.

pt


[*] Whether a part eats one event at a time or loops through all events is a
scheduling decision.  It matters most when tuning embedded systems.

[+] Total encapsulation is achieved only if the internal action (textual)
language is controlled / integrated into the system.  You can use an
off-the-shelf language, like CL, for the code snippets, but you have to
promise to play nice.
From: Frank Buss
Subject: Re: Visual Frameworks vs. Cells compared to Flow-Based Programming
Date: 
Message-ID: <1iagc0cikad7q$.nnb97qwy4e8r.dlg@40tude.net>
Paul Tarvydas wrote:

> Disadvantages: (a) no preemption, (b) you have to promise to get-in get-out
> quickly.
> 
> Advantages: (c) you get to think and program in the reactive paradigm (which
> I claim is more appropriate for many modern problems), (d) you can build
> structured, hierarchical networks (using "schematics" - described in the
> papers), (e) you can draw and compile diagrams.

This concept helps to reduce the context-switching overhead, but it makes
programming harder, because the promise to get-in get-out quickly requires
additional explicit process state variables, which could otherwise remain
implicit in the code. E.g. when reading a file and sending out tokens,
with preemption you just write one loop, but with your concept you have to
return and store the last state after each send, if you can't push the
whole contents of a big file onto the queue at once.
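For comparison, a language with coroutines can keep that per-process loop
state implicit even under cooperative scheduling. A sketch in Python
(illustrative only), where the tokenizer's state survives each "send"
inside the suspended generator rather than in explicit state variables:

```python
def tokenizer(chars):
    """Yield whitespace-separated words; the loop state (buffer, position
    in the input) is implicit in the suspended generator frame."""
    buffer = []
    for c in chars:
        if c.isspace():
            if buffer:
                yield "".join(buffer)   # "send" without losing loop state
                buffer = []
        else:
            buffer.append(c)
    if buffer:
        yield "".join(buffer)           # flush the last word at end of input
```

Each `yield` plays the role of a send that returns control to the
scheduler, yet the code still reads as one loop.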

Another disadvantage is that you don't utilize multi-core CPUs very well.
What do you think about another concept: a thread group (not related to
Java thread groups), which runs a set of processes in its own thread, with
the lightweight cooperative threading running within each thread group?

A thread group need not be running all the time: if no component within it
is active, its system thread could be released back to a thread pool, to
avoid constantly creating and destroying system threads.

Deciding which processes can be grouped should be easy. A process like a
file-reading loop can get its own thread (a thread group with one
process).

Thread groups could be compared to different clock domains in electronics,
or to a group of components coupled to other component groups by a bus
like I2C.
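One sketch of this thread-group idea in Python: each group owns one worker
thread that drains an event queue cooperatively, and a cross-group send is
just a post onto another group's queue, analogous to crossing a clock
domain. This is an illustration of the proposal, not an existing
implementation; all names are invented:

```python
import queue
import threading

class ThreadGroup:
    """One OS thread running a set of cooperative handlers. An idle group
    just blocks on its queue (a pool-backed version would instead return
    the thread to the pool while the queue stays empty)."""
    def __init__(self, name):
        self.name = name
        self.events = queue.Queue()   # (handler, data) pairs for this group
        self.worker = threading.Thread(target=self._loop, daemon=True)
        self.worker.start()

    def post(self, handler, data):
        """Send an event into this group (may be called from any thread)."""
        self.events.put((handler, data))

    def _loop(self):
        while True:
            handler, data = self.events.get()
            if handler is None:
                break                 # shutdown sentinel
            handler(data)             # cooperative: must return quickly

    def stop(self):
        self.events.put((None, None))
        self.worker.join()
```

A file-reading loop would live alone in one group, with the rest of the
cascade in another; only posts cross the boundary.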

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Paul Tarvydas
Subject: Re: Visual Frameworks vs. Cells compared to Flow-Based Programming
Date: 
Message-ID: <g1s4m4$a9g$1@aioe.org>
Frank Buss wrote:

> This concept helps to reduce the context switching overhead but it makes
> programming harder, because the promise to get-in get-out quickly requires
> additional process state variables, which could be implicit to the code
> otherwise. E.g. when reading a file and sending out tokens, you just write
> one loop with preemption, but with your concept you have to return and
> store the last state after each send, if you can't send all the file
> contents of a big file to the queue.

I think that this analysis is based on a faulty base assumption.  You seem
to assume that I have to build this in assembly language and that you get
to use a kernel plus a programming language suited for your paradigm.

At the assembler level, I think that I have to save less state than the
process-based method.  Preemption requires that every cpu register be saved
on the stack and that the stack pointer be changed.  My method needs to
save an index in a state variable.  The rest of the registers don't need to
be
saved (because, (a) either the unit of work has been completed (getting
out) and only the "next state" needs to be saved, or (b) a hardware
interrupt has occurred and the hardware does (at least some of) the saving
for me).

Of course, you don't see that happening in a process-based system when you
use a kernel and a stack-based programming language.

Likewise, imagine that my method uses a kernel and an anti-stack-based
programming language.  (Indeed, I do use a language suited to the
paradigm - it's even a graphical programming language).

Equally invisible work for the programmer in both cases.  My method is more
efficient (because it has to save fewer registers) on bare hardware. 

(This, in fact, was the driving factor for why we invented this method - we
needed to beat the performance of PLC's and RTOS kernels for an injection
molding controller).

(Aside: another analogy for VF is to think of a system of communicating
device drivers. Every component is a device driver.  Some of them talk to
real hardware, some talk to other device drivers.  All VF components
operate "below" the rtos (if you bother to install one)).

> Another disadvantage is, that you don't utilize multi-core CPUs very well.

Why do you say this?  I would guess that we would do better on multi-core
CPU's.  Either you misunderstand my method or I misunderstand how you
reached that conclusion (I'll hold off on commenting about the thread group
stuff until I understand your point).

pt
From: Frank Buss
Subject: Re: Visual Frameworks vs. Cells compared to Flow-Based Programming
Date: 
Message-ID: <3auak9itycul$.4ru8us8crxs8.dlg@40tude.net>
Paul Tarvydas wrote:

> I think that this analysis is based on a faulty base assumption.  You seem
> to assume that I have to build this in assembly language and that you get
> to use a kernel plus a programming language suited for your paradigm.

My comments referred to what you described about your Lisp implementation.
I think I understand the system on embedded hardware with interrupts, but
for the Lisp implementation you wrote that there are no real threads, just
cooperative multitasking, and every process needs to get in and get out
quickly. I interpreted "get-out" as returning from a function call.

Maybe some example code could help: how would you write the telegram
problem with your Lisp implementation, maybe with single chars, like you
suggested?

>> Another disadvantage is, that you don't utilize multi-core CPUs very well.
> 
> Why do you say this?

Because you wrote that your Lisp implementation is not preemptive, which
implies to me that you use a single thread for the scheduler, but maybe
I'm wrong.

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Paul Tarvydas
Subject: Re: Visual Frameworks vs. Cells compared to Flow-Based Programming
Date: 
Message-ID: <g1ufg9$6d8$1@aioe.org>
Frank Buss wrote:

>> 
>> Why do you say this?
> 
> Because you wrote that your Lisp implementation is not preemptive, which
> implies to me that you use a single thread for the scheduler, but maybe
> I'm wrong.
> 

Ah - your question is clearer to me now.

In fact, *neither* implementation is preemptive.  The embedded kernel *can*
use different interrupts to "stack" a few levels of code, but we found in
practice that it was better to attach all code to the same interrupt level
and to eschew "priorities".

I see that what I didn't say is that the workstation kernels were used for
simulation or simply for allowing us to program in the reactive paradigm -
the true multi-processing aspects did not concern us on the workstation
implementations.

If I were to write a multi-cpu / multi-core implementation, I would place
one kernel on each cpu - i.e. one independent thread on each cpu (which is
the best that you can do with any approach - you can never have more than
one cpu per cpu, and you only waste cpu cycles by layering fake "processes"
onto a cpu).  [If this previous sentence doesn't make sense, wait for an
example...]

> Maybe some example code could help, how you would write the telegram

Good suggestion.  I'll cobble something together.

pt
From: Paul Tarvydas
Subject: Re: Visual Frameworks vs. Cells compared to Flow-Based Programming
Date: 
Message-ID: <g21pin$3sg$1@aioe.org>
Your suggestion for an example is a good one.  I hope this helps :-).

Here is a pseudo-code mockup of the Telegram problem in VF.  I just now
invented the pseudo-code syntax and I have left out non-pertinent details
(hopefully, I didn't insert too many blunders) - I simply want to talk
about the simple low-level mechanisms of what happens (use a fixed font to
view the pseudo-code).  (If I were doing this with the actual tool, the
parts would appear on a "schematic" diagram and their insides would be
drawn as state diagrams.  I'm guessing that pseudo-code will make this
discussion easier, esp. on non-graphical nntp).

For simplicity of discussion, I've skipped over issues like multiple
whitespace characters, pin typing, etc.

There are four "code" parts and one "schematic" part (which organizes the
parts and specifies their interconnection).  Discussion below.

part Reader:
  input pins: char, eof
  output pins: out-char, open

  initially:
    send DesiredFileName to open
    goto idle

  state idle:
    on char:
      send char to out-char
      quit
    on eof:
      goto done
  end state
  
  state done:
    quit
  end state

end part

part Tokenizer:
  input pins: in-char
  output pins: word

  initially:
    buffer = allocate()
    goto waiting-for-chars

  state waiting-for-chars:
    on in-char and not whitespacep (in-char):
      appendchar (buffer, in-char)
      quit
    on in-char and whitespacep (in-char):
      send buffer to word
      buffer = allocate()
      quit
  end state
end part

part ReCompose:
  input pins: word
  output pins: record

  initially:
    recbuff = allocate()
    goto waiting-for-words

  state waiting-for-words:
    on word:
      when (length(word) + length(recbuff) + 1) > DesiredRecordLength:
        send recbuff to record
        recbuff = allocate()
      end
      append (recbuff, word)
      append (recbuff, Space)
      quit
  end state
end part

part WriteSeq:
  input pins: record
  output pins:

  initially: 
    goto waiting-for-records

  state waiting-for-records:
    on record:
      write (file, record)
      quit
  end state
end part

schematic Telegram:
  parts
    rdr : Reader
    tok : Tokenizer
    recomp : ReCompose
    wr : WriteSeq

  nets:
    wire <filesystem>.char to rdr.char
    wire <filesystem>.eof to rdr.eof
    wire rdr.open to <filesystem>.open
    wire rdr.out-char to tok.in-char
    wire tok.word to recomp.word
    wire recomp.record to wr.record
end schematic


This is just a straight-forward network of four parts cascaded together.  

I have imagined an "event-driven" file system.  After a file is opened, the
file system sends one "ready" event every time a character is ready to be
read from the file.  It sends a different "eof" event when the file has
been exhausted.  (I discuss how to map a "normal" "pull"-type file system
onto this software below.)

I included a "quit" statement as syntactic sugar only to remind us that
execution of action code stops at that point.


Loose details of operation:

1) The system wakes up, all the parts are instantiated and the initially
code is executed for each part.  The state variable for each part is set
to the appropriate start state (again, done explicitly here for expository
purposes).

1a) If the start-up generated any events, they are processed.  In our case,
an open event is sent to the filesystem (allow me to skip over the details
of how this communication with the filesystem works).

2) The system goes live - it simply goes to a wait mode until something
happens.

3) The file system sends a character to the Reader.

An input event [char,<character-value>] is placed on the input queue of
Reader.

(By this I mean: an event id identifying the 'char' pin and a single
character as the data, e.g. ('char . #\P) or similar).

4) The kernel sets Reader's busy flag and activates Reader.

5) Reader pops one event from its input queue and cases on its state (idle)
and executes code (if any) that pertains to the input event (char).  

In this case, the incoming character is simply sent out as an event on
output pin "out-char".  This causes the event [out-char, <character-value>]
to be placed on the output queue of Reader.  

The action code sequence quits.

The busy flag of Reader is set to false (it is no longer busy).

6) The kernel then delivers the event from the output queue to its
destination(s).  The event [out-char, <character-value>] is moved and
converted to [in-char, <character-value>] on the input queue of the
Tokenizer.

Minor detail: the event is converted from an "output" event to an "input"
event by changing the output pin name (or index) to an input pin name (or
index).  If an output goes to more than one input pin, clones of the event
are placed in each receivers' input queues with the appropriate input pin
names.

Now, Reader has no events on its input queue and Tokenizer has one event on
its input queue.

7) Tokenizer sets its busy flag to true, then cases on the incoming event. 
It either stuffs the character into the buffer and goes back to idle, or it
sends the buffer and allocates a new buffer.

Then it quits.  Its busy flag is set to false.  If it generated an output
event, then the event would be delivered (to ReCompose) - i.e. a chain
reaction of events on down the line.

8) The rest of the events continue in the same manner as described above.  

At some point, the file system sends another character to Reader and the
whole process repeats.  When the file system sends an "eof" event, Reader
goes into its "done" state and nothing else happens (this is a toy example,
after all).
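The walkthrough above can be condensed into something runnable. Here is a
compressed Python rendering of the four-part cascade, in which each "send"
collapses into a direct call to the next part's input handler (a
condensation for illustration, not the VF tool's semantics; method names
are mine, and the trailing-space detail of ReCompose is simplified):

```python
class Telegram:
    """Toy rendering of Reader -> Tokenizer -> ReCompose -> WriteSeq."""
    def __init__(self, max_len):
        self.max_len = max_len   # DesiredRecordLength
        self.word_buf = []       # Tokenizer state
        self.rec_buf = ""        # ReCompose state
        self.records = []        # WriteSeq output

    def on_char(self, c):        # Reader sends each char to the Tokenizer
        if c.isspace():
            if self.word_buf:
                self.on_word("".join(self.word_buf))
                self.word_buf = []
        else:
            self.word_buf.append(c)

    def on_word(self, word):     # Tokenizer sends each word to ReCompose
        if self.rec_buf and len(self.rec_buf) + 1 + len(word) > self.max_len:
            self.on_record(self.rec_buf)   # record full: flush it
            self.rec_buf = ""
        self.rec_buf = word if not self.rec_buf else self.rec_buf + " " + word

    def on_record(self, rec):    # ReCompose sends each record to WriteSeq
        self.records.append(rec)

    def on_eof(self):            # Reader's eof event: flush partial buffers
        if self.word_buf:
            self.on_word("".join(self.word_buf))
        if self.rec_buf:
            self.on_record(self.rec_buf)
```

Feeding it a string character by character plays the role of the
event-driven file system sending "ready" events, then "eof".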

Is it sufficient to stop the description at this point?


Detail: It is not obvious in this example, but the "busy" flag BLOCKS
preemption / reentrancy.  A part must finish what it is doing before
looking for more work.

I haven't covered the case where the file system sends bursts of "ready"
events faster than the software can handle them.  There are two more cases:

1) Ready events show up while Reader is still busy.  In this case, the
[char,<...>] event is placed (in order) on the input queue of Reader.  When
Reader reaches its "quit" statement, it becomes re-activated if there is
more (/ new) work in its input queue.

2) Ready events show up while Reader is not busy, but some part further down
the chain is still busy.  In this case, the chain proceeds to execute and
stops when it hits the part which is busy, leaving an input event in its
queue and simply "returning" (RTI).  This is an example of stacked,
partial-preemption.  The busy part is temporarily preempted by the incoming
interrupt.  Its execution state remains on the stack and Reader gets the
CPU (scribbling on the stack above the saved state, similar to what happens
with CALL/RETURN).  When this new chain of events withers and quits, the
stack is popped (RTI) and the interrupted part resumes from where it left
off.

The embedded kernel and the workstation kernel are essentially the same,
except that the embedded kernel needs to have IRQ-OFF's placed in the
correct places, whereas this doesn't matter in a "dumb" workstation
version.

Does this description give a better overview of what happens and how the
equivalent of "processes" are implemented inexpensively?

pt


ps.  Suppose we wanted to do this with a "normal" file system - a "pull"
system - or we wanted to simulate the system on a workstation before
committing it to an embedded system.

We can add a simple feedback loop to the Reader part.  Every time it reads a
character from the file, it also sends itself a "go" event - which causes
it to read another character, and so on.

This particular feedback loop might cause the Reader part to read the whole
file - it depends on the scheduling policy within the kernel (whether
Reader continues until its input queue is empty, or whether Tokenizer gets
to run before Reader gets another chance).  

If we wanted to explicitly write the code so that it keeps queue levels down
to a minimum, we can add "synchronization" handshakes to the code.  The
part furthest down the chain to consume an event is responsible for telling
the Reader to "go" and read another character.  

I think that the code would look like this (I can't test it, since it is
pseudo code):


part Reader:
  input pins: go
  output pins: out-char, loopback

  initially:
    file = open (DesiredFile)
    send true to loopback  ;; once - and never again
    goto idle

  state idle:
    on go:
      if eof(file) 
        goto done
      end
      c = getc(file)
      send c to out-char
      quit
  end state
  
  state done:
  end state

end part

part Tokenizer:
  input pins: in-char
  output pins: word, request

  initially:
    buffer = allocate()
    goto waiting-for-chars

  state waiting-for-chars:
    on in-char and not whitespacep (in-char):
      appendchar (buffer, in-char)
      send true to request
      quit
    on in-char and whitespacep (in-char):
      send buffer to word
      buffer = allocate()
      quit
  end state
end part

part ReCompose:
  input pins: word
  output pins: record, request

  initially:
    recbuff = allocate()
    goto waiting-for-words

  state waiting-for-words:
    on word:
      if (length(word) + length(recbuff) + 1) > DesiredRecordLength:
        send recbuff to record
        recbuff = allocate()
      else
        send true to request
      end
      append (recbuff, word)
      append (recbuff, Space)
      quit
  end state
end part

part WriteSeq:
  input pins: record
  output pins: request

  initially: 
    goto waiting-for-records

  state waiting-for-records:
    on record:
      write (file, record)
      send true to request
      quit
  end state
end part

schematic Telegram:
  parts
    rdr : Reader
    tok : Tokenizer
    recomp : ReCompose
    wr : WriteSeq

  nets:
    wire rdr.out-char to tok.in-char
    wire tok.word to recomp.word
    wire recomp.record to wr.record
    wire (rdr.loopback, tok.request, recomp.request, wr.request) to rdr.go
end schematic


To me, this is a great feature (others may disagree).  The engineer gets to
control the design at very low levels, yet the design intent is clear
(because he does the above with diagrams).


pps.  I also glossed over memory management, because we lispers like not to
have to think about that.  But, let's say that we wanted to tighten the
design up to use statically allocated buffers.  Each part would send its
data (e.g. a word) in a buffer to the next part in the chain.  When that
part is finished with the buffer, it "returns" the buffer by sending it
back on a recycle wire.  Again, the recycle wires would be explicitly drawn
on the schematic and visible to everyone.
From: Rob Warnock
Subject: Re: Visual Frameworks vs. Cells compared to Flow-Based Programming
Date: 
Message-ID: <qLKdnWnvHKZvH9_VnZ2dnUVZ_jOdnZ2d@speakeasy.net>
Paul Tarvydas  <········@visualframeworksinc.com> wrote:
+---------------
| At the assembler level, I think that I have to save less state than the
| process-based method.  Preemption requires that every cpu register be saved
| on the stack and that the stack pointer be changed.  My method needs to
| save an index in a state variable.  The rest of registers don't need to be
| saved (because, (a) either the unit of work has been completed (getting
| out) and only the "next state" needs to be saved, or (b) a hardware
| interrupt has occurred and the hardware does (at least some of) the saving
| for me).
+---------------

This sounds similar to the approach I took when designing the kernel
of the "WSched" multi-tasking communications operating system for
the DEC PDP-8 [augmented with DCA's proprietary "Bus Window" MMU]
back at Digital Communications Associates (DCA) in Atlanta in 1972.

The entire O/S ran with interrupts *off*[1], but by fiat [enforced
during design reviews!] each task was required to execute one
"@SERVICE" macro[2] at *least* once every 200 machine cycles[3]
and at least once per trip through every data-dependent loop
[such as traversing an arbitrary-length linked list]. This ensured
that the scheduler ["event dispatcher" in modern parlance] got a
chance to re-prioritize things at least every 200 cycles (~250 us)
if a new event had arrived from the outside world [such as a user-
typed character]. The *only* state preserved by the "@SERVICE" macro
was the current PC and the current "Bus Window" mapping ["page table
base pointer", in modern parlance].

Newly-arrived events were linked to the end of the list (task queue)
associated with their priority, and contained the PC of the task
responsible for servicing them. A task pre-empted by an "@SERVICE"
became a new "event" whose "task" was simply to return to the PC
after the "@SERVICE" which pre-empted it. Task scheduling was thus
trivial -- simply run the first task on the highest-priority non-empty
task queue. [There were only 3 or 4 priorities, hence only 3 or 4
task queues.]
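The scheduling rule just described (newly-arrived events go to the tail
of their priority's queue; the dispatcher always runs the head of the
highest-priority non-empty queue) can be sketched in a few lines of
Python. This is a reconstruction of the rule, not WSCHED code:

```python
from collections import deque

class WSchedLike:
    """Fixed set of priority levels, one FIFO task queue per level."""
    def __init__(self, levels=4):
        self.queues = [deque() for _ in range(levels)]  # 0 = highest

    def post(self, priority, task):
        """Link a new event/task to the end of its priority's queue."""
        self.queues[priority].append(task)

    def step(self):
        """Run one task; return False when every queue is empty."""
        for q in self.queues:
            if q:
                q.popleft()()   # a task is just its continuation (saved PC)
                return True
        return False

    def run(self):
        while self.step():
            pass
```

A pre-empted task would simply be re-posted as a continuation closure,
making dispatch uniform across events, continuations, and timed tasks.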

This design [plus the "Bus Window", which made manipulating the
8-word "chunks" which were the basic units of allocation *much*
easier] was *extremely* efficient -- average CPU consumption to
get a character into and *out* of the system was only ~100 us
(or ~83 cycles) -- and we were able to keep over a hundred local
user terminals plus several remote terminal-concentrator network
lines serviced using a cheap, "slow", little PDP-8... much to the
consternation of the competition [DEC themselves!] who were using
the more expensive, much faster, "obviously better" DEC PDP-11 CPU
for the same job.  ;-}


-Rob

[1] Except that the "@SERVICE" macro[2] turned interrupts on for
    exactly *one* instruction as a cheap way to poll the common bus
    interrupt request line. It was cheaper even than testing for
    "any interrupt" then conditionally calling the scheduler, since
    the interrupt hardware did both the test & the "call" for us.

[2] Using the "8BAL" macro pre-processor [think "m4", but nicer syntax].
    "@SERVICE" was defined as "ION; CLA; IOF", and tested whether
    anyone was pulling on the Omnibus's interrupt request line. The
    ION (Interrupt ON) instruction had a one-cycle delay built into it
    [for reasons having to do with the PDP-8's instruction set], which
    was why the CLA (CLear Accumulator) was there, and "@SERVICE" was
    thus *documented* to clear the AC [which was often needed on the
    PDP-8 anyway].

[3] The DEC PDP-8's instruction set was very simple, and programmers
    could trivially manually count machine cycles taken by instructions,
    as there were only a handful of different cases: basic instructions
    were one cycle; memory-reference added a cycle; indirection during
    memory-reference added a cycle; "auto-incrementing" during indirection
    added a cycle, and storing back into memory added a cycle. [The only
    instruction that could do *all* of those was "ISZ @FOO", where "FOO"
    was one of the eight same-page magic auto-incrementing locations
    (locations #o10-#o17), and cost 5 cycles.]

    Well, actually, on the PDP-8/e reading from memory took 1.2 us
    and writing took 1.4 us, so technically there were *two* possible
    cycle lengths, but we didn't need to be that precise and so didn't
    bother with the difference.  ;-}

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Paul Tarvydas
Subject: Re: Visual Frameworks vs. Cells compared to Flow-Based Programming
Date: 
Message-ID: <g1uejg$343$1@aioe.org>
Rob Warnock wrote:

> This sounds similar to the approach I took when designing the kernel

It wouldn't surprise me.  I think that Paul Morrison says that FBP was used
in production in the '60's.

The meme has been around for a while.  I think that it is increasing in
strength.

Our variant doesn't even need the @SERVICE call - everything tends to be
coded as state machines and the code snippets that are executed during
transitions/entries/exits are all very short.  

Our original problem was that "polling" was way too inefficient.  PLC's
(employing periodic polling) were being used on injection molding
machines, but a certain event ("cut-over") occurred in such a short time
span that polling strategies would completely miss the events.  We had to
bolt fairly complicated code directly onto interrupts - then we discovered
that the code would be better-structured if we used state charts, then we
discovered that we might as well do all of the code that way, regardless of
whether it was bolted to an interrupt or to another piece (chain) of
software.  An RTOS had been bought in, but we eventually ignored it, since
all of the real code was written below the RTOS.

pt
From: Rob Warnock
Subject: Re: Visual Frameworks vs. Cells compared to Flow-Based Programming
Date: 
Message-ID: <EMqdnW8xmqMRxt7VnZ2dnUVZ_j6dnZ2d@speakeasy.net>
Paul Tarvydas  <········@visualframeworksinc.com> wrote:
+---------------
| Our variant doesn't even need the @SERVICE call - everything tends
| to be coded as state machines and the code snippets that are executed
| during transitions/entries/exits are all very short.  
+---------------

To some extent IBM's PARS (Programmed Airline Reservation System)
[later broken into APPS and ACP ("Airline Control Program"), the
latter later renamed TPF ("Transaction Processing Facility")]
used many of the same principles, see:

    http://en.wikipedia.org/wiki/Airlines_Control_Program
    http://www.blackbeard.com/tpf/tpfhist.htm

[For many years Bank of America and other banks used TPF to run
the centralized part of their ATM applications.]

+---------------
| Our original problem was that "polling" was way too inefficient.
| PLC's (employing periodic polling) were being used on injection
| molding machines, but a certain event ("cut-over") occurred in
| such a short time span that polling strategies would completely
| miss the events.
+---------------

That's what was nice about @SERVICE: you just stuck it in someplace
where you already needed a CLA [which happens a lot in PDP-8 code,
since there is no LOAD -- you have to clear (CLA) then add (TAD)],
and if there was no pending interrupt request then it was "free".
[Well, nearly; it cost 3 cycles instead of true CLA's 1 cycle.]

+---------------
| We had to bolt fairly complicated code directly onto interrupts -
| then we discovered that the code would be better-structured if we
| used state charts, then we discovered that we might as well do all
| of the code that way, regardless of whether it was bolted to an
| interrupt or to another piece (chain) of software.
+---------------

Well, much of our code *was* state machines, with the edges sometimes
being new events and sometimes being just the continuations resulting
from the required @SERVICE points [recall that I said at least one was
mandatory in every indefinite-length loop].

+---------------
| An RTOS had been bought in, but we eventually ignored it,
| since all of the real code was written below the RTOS.
+---------------

Heh! Our WSCHED *was* the "RTOS", if you're going to use those terms.
Someday I should do a full write-up of WSCHED, but here are some
salient points:

- The "Bus Window" MMU redefined the first 64 words of each PDP-8
  "field" [4 kwords] as being virtual memory: 8 pages (we called
  them "chunks") of 8 words each. The virtual mapping for each field
  was the *same*, so if you put some data into the virtually-addressed
  area and did a cross-field subroutine call the data was still there
  in the same relative locations in the target field, which made
  cross-field calls *much* faster. *Only* the first 64 words of
  each field were virtual; all other addresses remained as before.

- The "page table" was thus itself an 8-word chunk and by software
  convention was always mapped into the first virtual page (loc. 0-7)
  [trivially done by having word 0 of each page table contain the
  page number of that page table itself], so that you could change
  the currently-active mappings for locations #o10-#o77 by depositing
  the new mappings directly into locations 1-7. [The MMU snooped such
  stores and copied them into its TLB automagically.]

- This 8-word chunkiness was pervasive throughout the data structures
  of the system. *All* dynamic data was organized in chunk-sized pieces,
  and linked lists were constructed by having word 7 of a chunk contain
  the chunk (page) number of the next chunk in the list. If a chunk in
  such a list was currently mapped in (virtual) locations #o70-77, one
  could remap those locations to the next chunk in the list with only
  two instructions [assuming the AC was clear, almost always the case] --
  "TAD 77 ; DCA 7". That took word 7 of the current chunk and slammed it
  into the 7th mapping of the current page table. Et voila!

- The application (data communications) layer of WSCHED was organized
  around the MMU as a sort of "software crossbar" between some input
  device (say, UART#73) and some output device (say, network trunk #5
  logical channel #17). Virtual pages 1-3 were owned by the "source",
  while pages 4-6 were owned by the "destination". [Page 7 was universally
  a "scratch" page, mainly used as a temp for linked-list traversals,
  as above.] A reverse mapping -- flipping the roles of "source" and
  "destination" always existed at a fixed place in the "destination"
  area, so one could flip roles in two instructions -- "TAD 46; WENABL".

  [Aside: When a device was not "connected" in the software crossbar,
  by convention it was mapped to itself. This meant that one could
  create a crossbar connection with only six load/store pairs to
  swap the "destination" mappings of the two devices, whereupon they
  were suddenly "connected" as above.]

- Associated with each multi-unit device controller was a vector of
  page tables (called "contexts") for the units of that controller.
  So all you had to do to process some input was add the unit number
  of the interrupting device to the base of the page tables for that
  controller and "light the context" with a WENABL instruction, then
  call the "source" input-handling subroutine whose address was at
  a fixed address in the "context" (virtual area).

Anyway, not to go on *too* long [though I probably have already!],
the point being that scheduled-task control blocks were built out
of these same "chunks". When you ran a task (or a continuation from
an @SERVICE) you simply "lit its context" and then did a subroutine
return into the saved PC stored *in* the context [at yet another
fixed location in the now-active virtual memory]. Input-data-driven
"software crossbar" data flow, event-driven, state-driven, or timed
scheduled tasks -- all of these models co-existed within WSCHED and
all used the *same* dispatch mechanism! The whole thing meshed together
*quite* nicely, thank you very much.  ;-}  ;-}


-Rob

p.s. I have replicated the WSCHED style a number of times over the
years in various embedded network applications, except using "source"
and "destination" base or index registers (now that CPUs *have* such
things -- the PDP-8 didn't!) instead of a special-purpose MMU [starting
with the Zilog Z-80, in which we used the X & Y registers for "source"
and "destination"]. The "software crossbar" style is still quite useful,
as is the interrupt-off "very-light-weight RTOS" style.

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <rem-2008may29-003@yahoo.com>
> From: Frank Buss <····@frank-buss.de>
> ... I think it would be interesting for some people of this
> newsgroup to summarize the concepts of Flow-Based Concepts (FBP),
> compare it to Cells and what we can learn from it.

Is either of them of any value whatsoever in the first place?

> If you like to read some of the basic concepts yourself, take a
> look at
> http://en.wikipedia.org/wiki/Flow-based_programming for FBP

   ... flow-based programming (FBP) is a programming
   paradigm that defines applications as networks of "black box"
   processes, which exchange data across predefined connections by
   message passing. These black box processes can be reconnected
   endlessly to form different applications without having to be changed
   internally. ...

This sounds like bullshit. What really is "message passing"?
How is that any different from a function (procedure) being passed
a message consisting of parameters and returning a message
consisting of a set of return values?
Surely two black boxes can't pass messages directly between them,
because neither can possibly know of the other's existence. Some
higher-level controller must decide which message is to be passed
to which black box and then collect whatever return messages it
produces and decide where to forward them next.
So I don't see how FBP is any different from procedural programming,
where a main application calls the various subroutine components.
If a subroutine component needs to return more than one set of
return values, some immediately and some later, then all we need to
arrange is that it returns a set of two return values, one of them
the bundle of immediate return values, and the other a
continuation which will be automatically called again.

   ... a network of
   asynchronous processes communicating by means of streams of structured
   data chunks, called "information packets" (IPs). ...

That doesn't require anything more than multi-processing procedural
programming. Whenever you call a function (procedure) which returns
both a bundle of immediate return values and a continuation, you
fork the process to handle each separately in parallel. Whenever
the controller needs to send some return values to one procedure
and other return values to another procedure, again the controller
needs only fork the process and run the two forwardings in
parallel.
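For what it's worth, that controller idea fits in a few lines of Python
(a minimal single-threaded sketch with made-up names; a work queue stands
in for real process forking): each call returns a bundle of immediate
values plus an optional continuation, and the controller keeps draining
the queue until no continuations remain.

```python
from collections import deque

def counter_step(n):
    """One 'procedure' in Maas's scheme: returns immediate values plus
    an optional continuation. Emits n, continues until 3 (arbitrary cutoff)."""
    cont = (lambda: counter_step(n + 1)) if n < 3 else None
    return [n], cont

def run(step):
    """Tiny controller: collect immediate values, re-queue continuations."""
    results, work = [], deque([step])
    while work:
        values, cont = work.popleft()()
        results.extend(values)
        if cont is not None:
            work.append(cont)
    return results

print(run(lambda: counter_step(1)))  # [1, 2, 3]
```

A real FBP-ish runtime would pop continuations onto separate threads or
processes instead of a deque, but the shape of the control flow is the same.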

What if a procedure needs several different inputs from different
sources before it will be ready to return some values? That's easy
in procedural programming too: All you need is an OWN (static)
variable that keeps the internal state from one call to the next,
and a NULL type of return that is simply discarded, in a case where
there are no return values yet. (It's these NULL return values that
cause a decrease in the total number of processes running in
parallel. Otherwise the number of processes would keep increasing
any time there's a fork due to returnValues+continuation or
returnValues to more than one destination.)

> More than one process can execute the same piece of code.

So each instantiation of the same piece of code is working with a
different instance of the OWN variable containing the internal
state, right? So presumably the controller creates new instances of
each piece of code, each with a virgin internal state, and then
subsequent calls to that piece of code with *different* sets of
parameters would cause the no-longer-virgin internal states to be
different in content, right?

In a language like Java, the different instances of internal state
for the same piece of code could be emulated by using an instance
variable. To start a new instance of code+state, you call a
constructor, which returns an object with one internal field, the
state. Then you make a method call using that object, which updates
that state before returning. CLOS could do something similar.
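In Python the same emulation is a one-screen class (my own illustration,
not anything from the book): each instance carries its own formerly-OWN
state, and distinct instances of the same "piece of code" diverge as they
receive different inputs.

```python
class Summer:
    """One 'process instance': internal state survives across calls,
    and separate instances keep separate state (the OWN-variable idea)."""
    def __init__(self):
        self.total = 0          # the 'virgin internal state'

    def send(self, x):
        self.total += x         # update state before returning
        return self.total

a, b = Summer(), Summer()       # two instantiations of the same code
a.send(1); a.send(2)
b.send(10)
print(a.send(3), b.send(20))    # 6 30
```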

   The network definition is usually diagrammatic, and is converted into
   a connection list in some lower-level language or notation. FBP is
   thus a visual programming language at this level. ...

This sounds like more bullshit. It's the GUI which is diagrammatic.
The *actual* network definition is a connection list, which is
somewhat hidden from view when using the GUI, but it's really there
all the time, and it's quite apparent when writing "programs that
write programs", as is often done in Lisp. It would be a royal
pain, when writing a program that writes a FBP program, to need to
convert all the details into a visual diagram (or mouse motions to
draw a visual diagram) which is somehow fed back into the GUI.
I remember programming in HyperTalk like that. For example, a
script to create a button would read somewhat like this:
  FUNCTION MakeButton(x1,y1,x2,y2,newname)
    choose the button tool;
    move the mouse to x1,y1;
    drag the mouse to x2,y2;
    choose the select tool;
    select the chunk at (x1+x2)/2,(y1+y2)/2;
    set the name of the selected chunk to newname;
  END MakeButton;
(Take the precise syntax with a grain of salt. The main point is
 that there's an interactive editor for building a GUI by installing
 buttons etc. into the drawing pane, whereby certain menu actions
 are needed to get into and out-of button-mode, and certain mouse
 motions within button-mode are needed to create a new button, and
 from a script you meticulously describe those menu+mouse actions
 to accomplish the same GUI effect.)

   ... IPs traveling across a given connection
   (actually it is their "handles" that travel) constitute a "stream",
   which is generated and consumed asynchronously ...

So that pretty much precludes running such a process on multiple
CPUs connected across the InterNet? Or does a process on one
machine owning a handle for data on another machine have a way to
request that other machine to eventually send the actual data?
(What does Java call that? It's been a while. It's more than just
 RMI+SOAP+JNDI. But I forget the name. It's related to Enterprise Beans?)

   ... The task, called the "Telegram Problem", originally
   described by Peter Naur, is to write a program which accepts lines of
   text and generates output lines of a different length, without
   splitting any of the words in the text (we assume no word is longer
   than the size of the output lines). ...

This isn't well specified. It leaves some important facts unstated.
For example, are all the lines of input given at the start, and the
program has access to them all before needing to start working on
the first output line? Or are the lines of input given
incrementally and a line of output should be generated as soon as
there's enough data from input to be sure that no more words will
need to be added to the currently-getting-built line of output? Are
the lines of input given in gulps, a whole line as an atomic input,
or are the lines of input given character by character?
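Assuming the incremental reading -- lines arrive one at a time, and an
output line is emitted as soon as the next word is known not to fit --
a generator sketch in Python might look like this (my own illustration,
not Naur's formulation; `width` is the output line length):

```python
def telegram(lines, width):
    """Streaming reflow: consume input lines word by word, emit an output
    line as soon as the next word would overflow (no word is ever split)."""
    out = []
    for line in lines:
        for word in line.split():
            if out and len(" ".join(out + [word])) > width:
                yield " ".join(out)
                out = []
            out.append(word)
    if out:                      # flush the final partial line
        yield " ".join(out)

print(list(telegram(["the quick brown", "fox jumps"], 10)))
# ['the quick', 'brown fox', 'jumps']
```

Note the generator never needs the whole input in hand, which is exactly
the property the batch reading of the problem would not force you to have.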

> and http://bc.tech.coop/blog/030911.html for Cells.

   ... analogous to a spreadsheet cell (e.g. --
   something in which you can put a value or a formula and have it
   updated automatically based on changes in other "cell" values).

That's a rather limited framework for developing applications.

> Anyone interested in implementing FBP in Lisp?

Do you mean implement it in some particular vendor-supplied
implementation of Lisp which happens to support multiple threads?
Or somehow implement it in ANSI CL, somehow emulating threads by
means of some sort of polling loop?


From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483eba39$0$11644$607ed4bc@cv.net>
Robert Maas, http://tinyurl.com/uh3t wrote:
>>From: Frank Buss <····@frank-buss.de>
>>... I think it would be interesting for some people of this
>>newsgroup to summarize the concepts of Flow-Based Concepts (FBP),
>>compare it to Cells and what we can learn from it.
> 
> 
> Is either of them of any value whatsoever in the first place?
> 
> 
>>If you like to read some of the basic concepts yourself, take a
>>look at
>>http://en.wikipedia.org/wiki/Flow-based_programming for FBP
> 
> 
>    ... flow-based programming (FBP) is a programming
>    paradigm that defines applications as networks of "black box"
>    processes, which exchange data across predefined connections by
>    message passing. These black box processes can be reconnected
>    endlessly to form different applications without having to be changed
>    internally. ...
> 
> This sounds like bullshit. What really is "message passing"?
> How is that any different from a function (procedure) being passed
> a message consisting of parameters and returning a message
> consisting of a set of return values?

The rest of what you wrote also makes sense in a similar vein, but it is 
tantamount to looking at Lisp and asking (an extreme case) how it is any 
different than a Turing machine. The question confuses theoretical 
equivalence with the programmer experience -- the mechanisms may be 
inter-transformable where the programmer experience is not.

As for how is the programmer experience different with other paradigms, 
it seems to me some people can look at small examples of something new 
like FBP or OO or Prolog and Just Get It, other people do not. The ones 
who get it may not end up /liking it/, but that would be after giving it 
a try, again without bringing a lot of emotional baggage to the table 
like "how is this different?" and "why should I learn it?".

Hey, it's just a tool, relax.

Early adopters are always looking for new toys and they tend to be 
brighter than average and have a lot of confidence that they can browse 
one or two pages of info and "get it (or not)" so they do. While they 
browse there is this huge suspension of disbelief because they 
understand the new toy will seem weird at first and that they themselves 
are creatures of habit so "ya gotta give it a chance".

Consider structured programming. Sure, no different! Not. For the 
programmer, all the difference in the world is had just from limiting 
oneself to a tight subset of constructs possible. Those of us who were 
open to it really enjoyed it once we lived with it. I remember the 
epiphany I had when I realized I could look at my program doing 
something daft and know for a fact which branch in the code had been 
taken, telling me all the things that must be true for me to have gotten 
there (which conditional branches had been followed, which flags were in 
which state, etc etc). That was so cool, let alone the fact that there 
were so many fewer bugs to find. Again, the programmer experience: work 
and effort got shifted from the expensive tailend debugging phase to the 
cheaper design/planning phase /by the paradigm/.

Likewise with Cells, one is compelled to put all the logic to determine 
a slot of an instance in one lexical chunk of code. That very discipline 
turns out to be empowering, part of why I am down on the many backdoors 
in KR: the power lies to some degree precisely in the absence of backdoors.

If you read the stuff on FBP, you will find one of its advocates talking 
about the same discipline forced on programmers to translate their usual 
imperativethink into flowthink. The good news is that this is actually 
quite fun and offers the Golden Trade-Off: think harder on more 
interesting problems to avoid massive amounts of tedious mind-numbing work.

hth,kt

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Paul Tarvydas
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <g1mhm0$3sb$1@aioe.org>
Robert Maas, http://tinyurl.com/uh3t wrote:

> This sounds like bullshit. What really is "message passing"?
> How is that any different from a function (procedure) being passed
> a message consisting of parameters and returning a message
> consisting of a set of return values?

The difference is: synchronization.

Re-read what you wrote above - it reveals the dogma of synchronization.

A function/procedure passes a message, then returns a set of values.  All
the while that the function/procedure is running, the caller is stalled
waiting for an answer (the return), even if it doesn't really care about
the answer.

In message passing (event sending, as I prefer to call it), the caller does
not wait for a response.  You cannot do that with function calls unless you
invent cumbersome extraneous baggage (e.g. an RTOS or an event loop or the
utterly insane concept of RPCs).

In reactive programming, NOTHING is synchronized unless the programmer makes
it so.

In call-return programming (functional, procedural, etc) EVERYTHING is
synchronized - always.  
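The distinction can be sketched in a few lines of Python (an illustrative
queue-and-thread version, not any particular FBP runtime): the sender's
put() returns immediately and no reply is awaited, whereas a function call
would have stalled the sender until the callee returned.

```python
import queue
import threading

events = queue.Queue()
consumed = []

def consumer():
    # Runs on its own thread; the sender never waits for this work.
    while True:
        msg = events.get()
        if msg is None:          # sentinel: shut down
            break
        consumed.append(msg)

t = threading.Thread(target=consumer)
t.start()
for i in range(3):
    events.put(i)                # "event send": returns at once, no answer
events.put(None)
t.join()
print(consumed)                  # [0, 1, 2]
```

If the sender *does* need an answer, it must arrange synchronization itself
(a reply queue, say) -- which is the point: nothing is synchronized unless
the programmer makes it so.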

People are currently wracking their brains trying to figure out how to
program multi-core cpu's.  They think that it's a hard problem.  But only
because they cannot think outside of the call-return box.  Same thing
happened in the early Windows days - people had 6-month learning curves
before they could produce even a simple Windows program.  That's because
Windows (GUI) programming is a reactive problem and people were trying to
fit anti-reactive programming languages and thinking onto the problem
space.  Same with network protocols.

Do you happen to know anything about hardware design?  TTL, say.  On a
circuit board populated with TTL chips, are the chips all synchronized or
are they asynchronous?  Do hardware people achieve better success rates in
their designs than software people do in their designs?  

Do chip designers offer some kind of guarantee that their chips will work as
specified in the data sheets?  

Do software designers offer equivalent guarantees?

pt
From: George Neuner
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <t79u34pobe1jmb5ekl2rvs3301ntnjcv9g@4ax.com>
On Thu, 29 May 2008 11:26:53 -0400, Paul Tarvydas
<········@visualframeworksinc.com> wrote:

>Do you happen to know anything about hardware design?  TTL, say.  On a
>circuit board populated with TTL chips, are the chips all synchronized or
>are they asynchronous?  

TTL is not asynchronous in the manner that software people generally
use the term.  

In an unclocked design each part is slaved to the part(s) that provide
input to it.  Any directed web of TTL logic is synchronous relative to
stability of its input - the propagation time to produce a result once
input stabilizes is predictable.  The only really independent circuits
are loops with no external inputs - such as fixed frequency
oscillators - and even then the individual TTL components of the macro
loop circuit are themselves synchronous relative to one another.

A hardware analogy more in keeping with a software discussion would be
two incompatibly clocked macro parts with handshaking data transfer
between them.


>Do hardware people achieve better success rates in
>their designs than software people do in their designs?  

Yes.  But the reason is because they don't invent new logic but simply
reuse standardized components having known behaviors.  Even when the
components are interconnected in novel ways, the behavior of the
result is predictable.


>Do chip designers offer some kind of guarantee that their chips will work as
>specified in the data sheets?  

No.  They offer "warranty" ... which is a very different concept from
"guarantee".


>Do software designers offer equivalent guarantees?

No.  But some select few do offer warranty.

George
--
for email reply remove "/" from address
From: Paul Tarvydas
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <g1psec$bs0$1@aioe.org>
George Neuner wrote:

> On Thu, 29 May 2008 11:26:53 -0400, Paul Tarvydas
> <········@visualframeworksinc.com> wrote:
> 
>>Do you happen to know anything about hardware design?  TTL, say.  On a
>>circuit board populated with TTL chips, are the chips all synchronized or
>>are they asynchronous?
> 
> TTL is not asynchronous in the manner that software people generally
> use the term.
> <snip>

Maybe I don't follow what you mean.

My point was that from the perspective of a single component, the arrival of
inputs is not predictable - inputs can arrive at that component "at any
time" in "any" order.  The inputs to a chip do not all arrive at the "same"
time like parameters to a subroutine do.

Furthermore, as you point out, one can calculate the propagation time for a
signal through a circuit largely due to the fact that the components of the
circuit are independent (unlike software, where subroutines depend, deeply,
on other subroutines).

A chip can be characterized before it is used in a circuit.  Circuits made
from pre-characterized chips are easier to design.  Software built with
subroutines does not work this way due to the tangle of dependencies (maybe
it is theoretically possible, but I never see anyone doing that).

>>Do hardware people achieve better success rates in
>>their designs than software people do in their designs?
> 
> Yes.  But the reason is because they don't invent new logic but simply

[If that's true, why was my National Semiconductor catalogue from the
late '70's so much thinner than one from the '80's?]

> reuse standardized components having known behaviors.  Even when the
> components are interconnected in novel ways, the behavior of the
> result is predictable.

Yes.  My point is that this is a good model / goal for building software.

>>Do chip designers offer some kind of guarantee that their chips will work
>>as specified in the data sheets?
> 
> No.  They offer "warranty" ... which is a very different concept from
> "guarantee".

If I understand correctly, a warranty is time-limited, a guarantee is a
statement of fact ("it will operate thusly").

What, then, is a data sheet and what is the purpose of the "test circuit"
often supplied on a data sheet?  What is the purpose of a truth table
supplied in a data sheet?

Every hardware designer I knew used data sheets as statements of operation
(which I think is called a "guarantee").  They knew not to rely on
non-specified behaviour and they knew to complain bitterly if the specified
behaviour was not delivered.

>>Do software designers offer equivalent guarantees?
> 
> No.  But some select few do offer warranty.

A software attempt at datasheets is the set of Java library reference
manuals.  In my experience they fall far short of the sort of utility that
hardware datasheets provide(d).  I claim that this is because software
subroutines (classes, et al) are too wound together and too interdependent
to allow a clean characterization of operation and behaviour as is possible
for hardware.  I think that using reactive software components and circuits
built using reactive software components is a step in the "right"
direction.

pt
From: George Neuner
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <78c144hqhh97inkm0amudfn338rkib7d30@4ax.com>
On Fri, 30 May 2008 17:48:59 -0400, Paul Tarvydas
<········@visualframeworksinc.com> wrote:

>George Neuner wrote:
>
>> On Thu, 29 May 2008 11:26:53 -0400, Paul Tarvydas
>> <········@visualframeworksinc.com> wrote:
>> 
>>>Do you happen to know anything about hardware design?  TTL, say.  On a
>>>circuit board populated with TTL chips, are the chips all synchronized or
>>>are they asynchronous?
>> 
>> TTL is not asynchronous in the manner that software people generally
>> use the term.
>> <snip>
>
>Maybe I don't follow what you mean.

Then let's try this.  

In software, asynchronous does not mean "parallel" but rather means
"not sequential" and there is the notion of independence.  It's based
on the ideas in Hoare's book, "Communicating Sequential Processes" - a
good read if you haven't already.  An electronic version is at
http://www.usingcsp.com/cspbook.pdf.

My objection to your TTL reference was that, although it's true that
the components operate in parallel, the cascade logic itself is
sequential.  Each basic block (NAND, NOR, etc.) represents a
sequential function.  A connected web of blocks can only represent a
sequential function or an oscillating function (which is really just a
brawl of 2 or more sequential functions).  Achieving non-sequential
logic as in the software model requires multiple webs with little or
no signal interaction between them.


>My point was that from the perspective of a single component, the arrival of
>inputs is not predictable - inputs can arrive at that component "at any
>time" in "any" order.  The inputs to a chip do not all arrive at the "same"
>time like parameters to a subroutine do.

Ok.  It was unclear to me that that is where you were going with the
argument.


>Furthermore, as you point out, one can calculate the propagation time for a
>signal through a circuit largely due to the fact that the components of the
>circuit are independent (unlike software, where subroutines depend, deeply,
>on other subroutines).
>
>A chip can be characterized before it is used in a circuit.  Circuits made
>from pre-characterized chips are easier to design.  Software built with
>subroutines does not work this way due to the tangle of dependencies (maybe
>it is theoretically possible, but I never see anyone doing that).

Software can be analyzed in a similar way (see Gries, Barendregt,
etc.).  It's just that few developers know how to formally prove
software and even fewer bother to do it except when the results will
be published.  It's also the case that some popular programming
languages have features that impede easy analysis.


>>>Do hardware people achieve better success rates in
>>>their designs than software people do in their designs?
>> 
>> Yes.  But the reason is because they don't invent new logic ...
>
>[If that's true, why was my National Semiconductor catalogue from the
>late '70's so much thinner than one from the '80's?]

Because the complexity of "standard" parts has been increasing.  That
does not refute my argument that hardware developers simply combine
standard parts - all that's happened is that what was previously board
level macro circuitry has been proven useful enough to be integrated
into a standard IC.

I think you'll agree that the vast majority of hardware designers are
content to pick parts out of catalogs rather than do materials
chemistry to build senary logic gates.

Software developers, OTOH, do invent ad-hoc symbolism and accompanying
predicate systems every time they write a program - usually
under-specified, almost always incomplete, rarely coverage tested and
limited to "expected" inputs.


>> they reuse standardized components having known behaviors.  Even when the
>> components are interconnected in novel ways, the behavior of the
>> result is predictable.
>
>Yes.  My point is that this is a good model / goal for building software.

I agree.

The problem of standardized software components will be solved as soon
as everyone can agree on a common paradigm neutral, multi-language,
cross-platform, immutable, type safe, modular delivery system.

I don't see that happening in my lifetime (and I expect to live
another 30-40 years).  


>>>Do chip designers offer some kind of guarantee that their chips will work
>>>as specified in the data sheets?
>> 
>> No.  They offer "warranty" ... which is a very different concept from
>> "guarantee".
>
>If I understand correctly, a warranty is time-limited, a guarantee is a
>statement of fact ("it will operate thusly").

Historically "warrant" was limited to the lifetime of the giver.
However, companies have (potentially) unlimited lifetimes.  

There is a legal difference between "warranty" and "guarantee".  A
warrant is a statement of authority to take action in matters related
to whatever is warranted.  A guarantee is a promise of responsibility
for an obligation.


>What, then, is a data sheet and what is the purpose of the "test circuit"
>often supplied on a data sheet?  What is the purpose of a truth table
>supplied in a data sheet?

A data sheet is just a piece of paper.  The warranty that the data
sheet is a good faith description of the operation of a part and that
the company will replace parts that do not conform to the description
is what gives meaning to the data sheet.


>Every hardware designer I knew used data sheets as statements of operation
>(which I think is called a "guarantee").  They knew not to rely on
>non-specified behaviour and they knew to complain bitterly if the specified
>behaviour was not delivered.

If everyone jumped off a cliff would you do it too?  Those designers
were relying on an incorrect understanding of the meaning of the
documents.


>>>Do software designers offer equivalent guarantees?
>> 
>> No.  But some select few do offer warranty.
>
>A software attempt at datasheets is the set of Java library reference
>manuals.  In my experience they fall far short of the sort of utility that
>hardware datasheets provide(d).  I claim that this is because software
>subroutines (classes, et al) are too wound together and too interdependent
>to allow a clean characterization of operation and behaviour as is possible
>for hardware.  I think that using reactive software components and circuits
>built using reactive software components is a step in the "right"
>direction.

The best software "data sheets" I've seen were for Intel's Integrated
Performance Primitive libraries.

Component software is a great idea but intractably hard in practice.
Honestly I can't care much about the lacking in Java's documentation
because Java as it exists now is not a suitable candidate for a really
useful componentized software platform.  Neither is .NET, or Corba, or
COM or anything I've yet seen.

George

--
for email reply remove "/" from address
From: Paul Tarvydas
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <g1s86q$nm0$1@aioe.org>
George Neuner wrote:

> on the ideas in Hoare's book, "Communicating Sequential Processes" - a
> good read if you haven't already.

Both, CSP and Occam are referenced in my papers that I posted links to
earlier in this thread.

pt
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <rem-2008jun01-009@yahoo.com>
> From: George Neuner <·········@/comcast.net>
(compared to hardware designers:)
> few developers know how to formally proof software and even fewer
> bother to do it except when the results will be published.

It's not even possible to prove hardware correct: quantum
mechanics, small impurities in materials production, and variation
in physical assembly all make the properties of a hardware device
impossible to pin down within provable bounds, and anything proven
from such assumptions would then be GIGO. At best, statistical
estimates of success rate and MTBF (Mean Time Between Failures) can
be predicted on the basis of experience. Accordingly, complaining
that software isn't formally proved seems an unfair complaint in
this hardware-vs-software discussion.

Perhaps a more appropriate metaphor would be the legal system, of
adversarial contestants, one to show evidence for something, the
other to find flaws in the evidence leading to reasonable doubt. In
the software contest, perhaps each unit (function, method, etc.)
can be specified per initial state assumed, D/P process that
occurs, and final state claimed if the initial state was within
spec. The person claiming a unit is "working" could post a demo of
the unit for testing, by both automated test rigs (some written by
an adversary) and manual/interactive attempts to make the unit
fail. If the unit has withstood a reasonable time period of such
testing, then it can be trusted. With formal specs for starting
state and final state, automated verification of matching states of
output from one unit to input of another unit, and automated
management of a whole network of such dependencies, simple
connections of the specs between units can be used to prove that
combinations would work too, without needing to test each
combination. (Maybe you're saying that the overall system for
managing such analysis of multi-unit software isn't available "off
the shelf" for popular programming languages, so it's not worth
people bothering to even try that methodology except if they're
going to publish the result in a scholarly journal?)

> It's also the case that some popular programming languages have
> features that impede easy analysis.

It's not necessary to *use* such difficult-to-analyze features, you
know? If the parse tree for the software is readily available, as
it *is* for Lisp (and has been since the very start nearly fifty
years ago), it's possible to automatically analyze the software to
check what features it uses and warn if any difficult-to-analyze
feature is used, and note exactly where it's used. Then anyone
wishing to prove that the different individually-tested units can
be proven to work together, can attempt to modify the code (if
necessary) to eliminate use of any difficult-to-analyze feature,
and if successful then apply whatever multi-unit-analyzer software
is available. Have you investigated that idea to see how feasible
it might be? (Note that different features might cause difficulty
with different methods of multi-unit analysis, so a different
difficult-code detector might be needed for a different
multi-unit-analysis tool.)

> I think you'll agree that the vast majority of hardware designers
> are content to pick parts out of catalogs rather than do materials
> chemistry to build senary logic gates.

Back to that metaphor: Software is nearly *always* assembled from
low-level units that come from a catalog, namely the "Principles of
Operation" manual for the CPU when writing machine/assembly
language code, or the "API documentation" when writing code in a
higher-level language. For example, we have the Common Lisp
Hyper-Spec <http://www.isr.ist.utl.pt/library/docs/HyperSpec/Front/>
and the JavaDoc <http://java.sun.com/j2se/1.4.2/docs/api/index.html>
as "catalogs" of what API units are available for use (and
well-tested and believed to be working) in our application-level
software.

> Software developers, OTOH, do invent ad-hoc symbolism and
> accompanying predicate systems every time they write a program

All within the framework provided by the API (or Principles of
Operation) manual. Nothing is invented de novo except *intentions*
of use of some existing type of data. Even new classes in CLOS or
Java are sub-classes of an existing class, with well-specified
technology for introducing specific features within the general
capability provided by the API. Application programmers do *not*
delve deep into the underlying machine language to defeat the
restrictions of the API, much less modify the hardware to generate
new machine language instructions to take advantage of.
So I believe here you're making much ado about nothing.

> usually under-specified, almost always incomplete,

Now *this* is a valid point. During rapid prototyping, both true
R&D to explore what's easy to accomplish via new algorithms, and
trying to understand user requirements by quickly implementing what
you think the user wants to see if that's what the user really
wants before fixing the design too firmly, it's actually an
advantage to keep specs flexible and ambiguous and be willing to
change them as you learn new things about what's doable or what the
user wants respectively. Only later after you think you really do
have it right (new algorithm "works" in a useful way, or user is
happy with the way your prototype demo works), *then* is the time
to write up the formal specs and make sure the specs match what the
user (or researcher) wants accomplished and also make sure the
software satisfies the specs. If formal specs aren't written up at
this time, or the specs are substantially incomplete, I agree the
programmer has been negligent, and if the supervisor/manager of the
programmer fails to take the time to review the specs, that person
is negligent. (In a single-person project, such as I've been doing
while unemployed, it'd be nice to find a peer to review my specs,
but alas the one time recently I posted some specs for review
nobody bothered to say anything except a vague remark that my specs
are incomplete, without bothering to tell me what aspect of them is
allegedly incomplete, so I've gotten no feedback to improve my
specs.)

As a regular practice, I put documentation at the front of each
function definition and in front of or alongside each global
declaration. I use semicolon comments instead of CL documentation
strings, because it's easier to format them, and because I don't
have to include them each time I copy-and-paste a modified function
definition from the local edit buffer across a dialup modem into
the remote Unix shell where CMUCL is running. But if I ever were to
make any of my code public
 (if anybody ever wanted it and were willing to somehow pay me for
  my work, either with money or a job offer or public accolades),
I'd be glad to convert my semicolon-comments into a documentation
string that is retrievable by (DESCRIBE <symbolThatNamesTheFunction>).
In that per-function documentation I state what global situation if
anything is assumed, then what each parameter is supposed to be,
then what the function does (and in some cases *how* it does it),
and finally what side effects
 (either globally, or locally within a passed data structure)
and/or return value(s) there is/are. It's all in English of course,
nothing formal that could be automatically verified, but I'd be
glad to convert it to any formal specification language anyone
wants if they are willing to pay me for that work.

> rarely coverage tested and limited to "expected" inputs.

Personally, I test my code for both expected cases and out-of-bounds
values (with type/range checking of parameters whenever it seems reasonable).
In particular, when developing line at a time, I test each error message
itself *as if* that condition had occurred before I test the conditional
(when SomethingWrong (error ...)) or (unless EverythingGood (error ...)).
I find that mistakes in composing the error message and parameters
to it, usually missing parameters or ugly formatting, are very very
common, and getting them fixed before I move on is best practice.
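That practice can be sketched in a few lines (Python rather than CL here; the function and its error message are invented examples):

```python
def safe_sqrt(x):
    # Range check with a descriptive error message.
    if x < 0:
        raise ValueError(f"safe_sqrt: argument must be non-negative, got {x!r}")
    return x ** 0.5

# Exercise the error message itself *as if* the condition had occurred,
# before trusting the conditional in normal use: a malformed format
# string or missing parameter shows up here, not in production.
try:
    safe_sqrt(-1)
except ValueError as e:
    assert "got -1" in str(e)

print(safe_sqrt(9))   # 3.0
```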

> The problem of standardized software components will be solved as
> soon as everyone can agree on a common paradigm neutral,
> multi-language, cross-platform, immutable, type safe, modular
> delivery system.

How about a syntax-free development and archival system, which
automatically lists which known programming languages support any
given module and offers a menu for converting the syntax-free
software to whichever of the known programming languages the user
wishes to express it in (port it to)? I've proposed this idea
several times recently, but nobody else has expressed any serious
interest.

> I don't see that happening in my lifetime (and I expect to live
> another 30-40 years).

If you like my syntax-free idea, would you volunteer to offer me
emotional support for my idea and also brainstorm with me to work
out the details of the design and also perform beta tests on any
code I write to implement the idea per our agreed-upon design?

> Historically "warrant" was limited to the lifetime of the giver.
> However, companies have (potentially) unlimited lifetimes.

Except that when a merger happens the new owner conveniently forgets
any promises made by the previous company. For example, I opened an
account with Northern California Savings, because they had a
wonderful commercial on TV with a chain of dominos representing a
person's lifetime, which comes to an end at the end of the domino
chain, plus the fact they offered me free cheque printing for the
lifetime of the account. They were bought by Great Western Savings,
which honored the promise, which was later bought by Washington
Mutual, which refused to honor the promise and started charging me
for cheque printing.

> A data sheet is just a piece of paper.  The warranty that the
> data sheet is a good faith description of the operation of a part
> and that the company will replace parts that do not conform to the
> description is what gives meaning to the data sheet.

Back to that metaphor: I'd be glad to replace any unit of software
that after sale fails to perform as I claimed, given that the
underlying Common Lisp system and underlying operating system
aren't at fault.
(A nice thing about software is that if a whole batch of copies of
 a software unit all fail, only one replacement needs be provided,
 and can then be simply copied to replace all the other copies that
 were also faulty. It's not like a food recall or chip recall where
 massive quantities of defective product must be physically returned
 to the manufacturer who then must recycle or land-fill etc.)

> Component software is a great idea but intractably hard in practice.

Do you mean that setting up formal specs for *all* the functions in
Common Lisp or *all* the classes/methods in Java etc. is
intractably hard (I would agree there), or that even doing formal
specs for a relatively well-behaved subset is intractably hard
 (I might disagree with you if that's what you mean, but I'd need
  to see you clearly state what you are saying before I post formal
  rebuttal).

> Honestly I can't care much about the lacking in Java's
> documentation because Java as it exists now is not a suitable
> candidate for a really useful componentized software platform.

Is any decent-sized subset of the Java 1.3 or 1.4 API suitable?

> Neither is .NET, or Corba, or COM or anything I've yet seen.

How about a subset of Common Lisp??
From: George Neuner
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <a9r844h23kpcktqkpcj1i3klstbb33oh5c@4ax.com>
On Sun, 01 Jun 2008 22:31:41 -0700,
···················@spamgourmet.com (Robert Maas,
http://tinyurl.com/uh3t) wrote:

>> From: George Neuner <·········@/comcast.net>
>(compared to hardware designers:)
>> few developers know how to formally proof software and even fewer
>> bother to do it except when the results will be published.
>
>Given that it's not even possible to proof hardware, because
>quantum mechanics and small impurities in materials production and
>variation in physical assembly actions all make the properties of a
>hardware device impossible to prove within bounds, and anything
>proven from such an assumption would then be GIGO. At best
>statistical estimates of success rate and MTBF (Mean Time Between
>Failure) can be predicted on the basis of experience. Accordingly
>complaining that software isn't formally proofed seems an unfair
>complaint in this hardware-vs-software discussion.

You clipped what I responded to which was a lament that software
couldn't be analyzed like hardware.

In any event, proofs begin with a set of postulates assumed to be
true.  A software proof usually starts from the assumption that the
execution hardware will perform as spec'd.


>Perhaps a more appropriate metaphor would be the legal system, of
>adversarial contestants, one to show evidence for something, the
>other to find flaws in the evidence leading to reasonable doubt. In
>the software contest, perhaps each unit (function, method, etc.)
>can be specified per initial state assumed, D/P process that
>occurs, and final state claimed if the initial state was within
>spec. 

It's not a bad idea but 


>The person claiming a unit is "working" could post a demo of
>the unit for testing, by both automated test rigs (some written by
>an adversary) and manual/interactive attempts to make the unit
>fail. If the unit has withstood a reasonable time period of such
>testing, then it can be trusted. With formal specs for starting
>state and final state, automated verification of matching states of
>output from one unit to input of another unit, and automated
>management of a whole network of such dependencies, simple
>connections of the specs between units can be used to prove that
>combinations would work too, without needing to test each
>combination. (Maybe you're saying that the overall system for
>managing such analysis of multi-unit software isn't available "off
>the shelf" for popular programming languages, so it's not worth
>people bothering to even try that methodology except if they're
>going to publish the result in a scholarly journal?)

I'm saying that most developers simply don't know how to proof
software.  Testing is not proofing.  Automatic proofing tools would
certainly help - but any possible tool is limited by the halting
problem and so there will always be code that the tool cannot prove
correct.  And the tool would have to be simple enough for code-monkeys
to use and have warnings they could understand.

[Incidently, I'm not excepting myself.  I *did* know how to write a
proof at one time, and even had to do so professionally for some
medical device software I once worked on.  But I haven't done a
software proof in well over a decade and I doubt I could sit down and
do one now without methodology research.]


>> It's also the case that some popular programming languages have
>> features that impede easy analysis.
>
>It's not necessary to *use* such difficult-to-analyze features, you
>know? 

Of course.  However, a language with imperative features is inherently
more difficult to analyze than a language without them.  The entire
point of some standard compiler transformations is to "functionalize"
the code and make it somewhat easier to analyze.

But even so, there are no "functional" CPUs ... all are imperative.
So, by rights, to prove the software, you have to prove both that the
source correctly implements the intent of the program under the
semantics of the source language, and also that the imperative
translation implements identical intent under the semantics of the
assembler language.

A good compiler is very hard to write - over the years I've been
involved with two projects.


>If the parse tree for the software is readily available, as
>it *is* for Lisp (and has been since the very start nearly fifty
>years ago), it's possible to automatically analyze the software to
>check what features it uses and warn if any difficult-to-analyze
>feature is used, and note exactly where it's used. Then anyone
>wishing to prove that the different individually-tested units can
>be proven to work together, can attempt to modify the code (if
>necessary) to eliminate use of any difficult-to-analyze feature,
>and if successful then apply whatever multi-unit-analyzer software
>is available. Have you investigated that idea to see how feasible
>it might be? (Note that different features might cause difficulty
>with different methods of multi-unit analysis, so a different
>difficult-code detector might be needed for a different
>multi-unit-analysis tool.)

With imperative languages, it's a lot harder than it sounds - there
are myriad special cases where a potentially unsafe construct is
suitably constrained and therefore not being used in a dangerous
manner.  If you simply reported any use of the unsafe construct,
developers would start ignoring the warnings.  You have to do a lot of
work to sort out what is dangerous from what looks like it might be
dangerous.


>> I think you'll agree that the vast majority of hardware designers
>> are content to pick parts out of catalogs rather than do materials
>> chemistry to build senary logic gates.
>
>Back to that metaphor: Software is nearly *always* assembled from
>low-level units that come from a catalog, namely the "Principles of
>Operation" manual for the CPU when writing machine/assembly
>language code, or the "API documentation" when writing code in a
>higher-level language. For example, we have the Common Lisp
>Hyper-Spec <http://www.isr.ist.utl.pt/library/docs/HyperSpec/Front/>
>and the JavaDoc <http://java.sun.com/j2se/1.4.2/docs/api/index.html>
>as "catalogs" of what API units are available for use (and
>well-tested and believed to be working) in our application-level
>software.

True, but not relevant.  The discussion is about component (re)use of
off-the-shelf software by different authors, possibly written using
different languages.

Paul made an analogy to hardware, but to be similar, software
components must be in some immutable form that can be executed but not
modified by the developer.


>> Software developers, OTOH, do invent ad-hoc symbolism and
>> accompanying predicate systems every time they write a program
>
>All within the framework provided by the API (or Principles of
>Operation) manual. Nothing is invented de novo except *intentions*
>of use of some existing type of data. 


That is exactly what I'm talking about.  The semantics of the platform
are not relevant.  The program has its own symbology and semantics.
The hardware may see an integer value, say 42, but nothing constrains
the program to treat 42 as a number - it can be a symbol representing
anything that is meaningful to the problem domain.

The logic and semantics defined on the unique symbology of the program
is *always* ad-hoc.


>Application programmers do *not* delve deep into the underlying 
>machine language to defeat the restrictions of the API, much less
>modify the hardware to generate new machine language instructions 
>to take advantage of.  So I believe here you're making much ado 
>about nothing.

It's only necessary to use data in a way that's inconsistent with its
nominal appearance.  Like using 42 to mean 

  1 - open silo doors
  0 - detach fuel line
  1 - spin up gyros
  0 - retract gantry
  1 - start ignition
  0 - deny recall


>> The problem of standardized software components will be solved as
>> soon as everyone can agree on a common paradigm neutral,
>> multi-language, cross-platform, immutable, type safe, modular
>> delivery system.
>
>How about a syntax-free development and archival system, which
>automatically lists which known programming languages support any
>given module and offers a menu for converting the syntax-free
>software to whichever of the known programming languages the user
>wishes to express it in (port it to)? I've proposed this idea
>several times recently, but nobody else has expressed any serious
>interest.

I honestly have no idea what a "syntax-free" development system would
even look like.  There is no such animal as a syntax free language and
back-translation from a language neutral IR to any particular high
level language more often than not produces garbage.

Java and .NET have the right idea - secure managed environments, a
common low level distribution format and compilation to native code at
load time (or alternatively at install time).  It's the current
implementations of the idea(s) that are lacking.


>> Historically "warrant" was limited to the lifetime of the giver.
>> However, companies have (potentially) unlimited lifetimes.
>
>Except that when a merger happens the new owner conveniently forgets
>any promises made by the previous company. For example, I opened an
>account with Northern California Savings, because they had a
>wonderful commercial on TV with a chain of dominos representing a
>person's lifetime, which comes to an end at the end of the domino
>chain, plus the fact they offered me free cheque printing for the
>lifetime of the account. They were bought by Great Western Savings,
>which honored the promise, which was later bought by Washington
>Mutual, which refused to honor the promise and started charging me
>for cheque printing.

Those promises were not warrants (or guarantees) ... they were simply
corporate policies and as such were subject to change.  

In many countries, an acquiring company must uphold obligations of the
acquired company.  A warrant, however, is not an obligation or
responsibility - it is just an assumption of authority.  The acquiring
company can choose not to assume authority over products or services
of the acquired company that they don't like.  In that case the
orphaned products and services generally die unless someone else
assumes them.


>> Component software is a great idea but intractably hard in practice.
>
>Do you mean that setting up formal specs for *all* the functions in
>Common Lisp or *all* the classes/methods in Java etc. is
>intractably hard (I would agree there), or that even doing formal
>specs for a relatively well-behaved subset is intractably hard
> (I might disagree with you if that's what you mean, but I'd need
>  to see you clearly state what you are saying before I post formal
>  rebuttal).

I mean exactly what I said previously.  To recap: For software to be
componentized, it must become like IC hardware - an immutable
black-box deliverable which can only be executed according to its
specified interface.  It must be impossible or at least unfeasibly
hard to tamper with the deliverable.  It must be plug and play at
least on all the popular platforms and usable from any language or
development system that runs on those platforms.

That could be achieved now, but it would be nearly impossible to get
everyone to agree on an appropriate common delivery and management
system.


>> Honestly I can't care much about the lacking in Java's
>> documentation because Java as it exists now is not a suitable
>> candidate for a really useful componentized software platform.
>
>Is any decent-sized subset of the Java 1.3 or 1.4 API suitable?

No.  

Ignoring Java's deficiencies as a source language, the JVM is not a
good execution platform for languages whose features differ
significantly from Java.  Advanced languages features not found in
Java can be implemented on the JVM with varying degrees of difficulty,
but their performance is frequently poor (see SISC for an example).
Sun, as yet, has shown no inclination to expand the JVM to accommodate
other languages.  The delivery format is not externally tamper-proof
and reflection allows runtime discovery and manipulation of objects in
ways the designer did not intend.

The same is true of .NET although its VM makes a bit more of an effort
to accommodate languages that Microsoft doesn't promote - for example,
.NET (as of version 2) directly supports tail recursion.

COM and Corba are ok but components are not hardware neutral like
bytecode files nor are they immune to tampering.


>> Neither is .NET, or Corba, or COM or anything I've yet seen.
>
>How about a subset of Common Lisp??

No.  Lisp has no non-source external format for code.  The whole point
of component software is to use it as is and prevent modifications
that might affect the reliability of the code.


George
--
for email reply remove "/" from address
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <rem-2008jun01-008@yahoo.com>
> > How is that any different from a function (procedure) being passed
> > a message consisting of parameters and returning a message
> > consisting of a set of return values?
> From: Paul Tarvydas <········@visualframeworksinc.com>
> The difference is : synchronization.

Synchronization is good if you need to be sure data is available
before you try to process it. Now you can sidestep the question by
setting up a continuation or callback where something will use the
data whenever it finally becomes available, but still there must be
some kind of synchronization between availability of the data and
activation of the callback or continuation.

> A function/procedure passes a message, then returns a set of
> values.  All the while that the function/procedure is running, the
> caller is stalled waiting for an answer (the return), even if it
> doesn't really care about the answer.

Wrong, if you have multi-processing in the first place, with the
essential primitives: Fork Join and PollStatus. There's no need for
message passing. There's only need for first-class handles on
processes with primitives for doing those three essential
primitives with them.

If you need to invoke two tasks, and you won't need either result
before calling the other, then you FORK your process, and invoke
each task from a different thread. Then at the point where you need
*both* final results to be somehow combined, that's where you JOIN
the two threads. If you can operate either with or without a
particular result, but having it is better than not having it, you
can poll the process that produces that result to see if it's
gotten to that point yet, and if so use the result, else go on
without it.
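A rough sketch of that Fork/Join pattern, using Python's threading module as a stand-in for those primitives (the task names and workloads are invented):

```python
# Fork/Join sketch: fork two tasks, do not wait on either result until
# the point where both must be combined.
import threading

results = {}

def task(name, work):
    results[name] = work()          # each forked task computes independently

# Fork: start both tasks without waiting on either.
t1 = threading.Thread(target=task, args=("a", lambda: sum(range(1000))))
t2 = threading.Thread(target=task, args=("b", lambda: 6 * 7))
t1.start(); t2.start()

# Join: block only at the point where *both* results must be combined.
t1.join(); t2.join()
print(results["a"] + results["b"])  # combined only after the join
```

The PollStatus primitive corresponds roughly to t1.is_alive() here: a non-blocking check that lets the caller proceed without the result if the task isn't done yet.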

> In message passing (event sending, as I prefer to call it), the
> caller does not wait for a response.

In forked threads, only the sub-caller that needs the result for
further computation waits for it; the other sub-caller that is off
doing something else at the same time does *not* wait for it. How
is the multi-threaded procedural paradigm any different from the
message-passing paradigm, except that the multi-threaded procedural
paradigm has a better view of dependencies via the Join and Poll
primitives? (I envision a boolean-vector Poll interface: When
activating a process that you'll need to poll later to learn of
completion of various milestones, you pass it a boolean vector that
you have a handle on, all bits initially FALSE, and you set it to
turn on various elements when the corresponding milestones are
completed. The interface between the sub-process and the boolean
vector allows turning on bits but not turning off bits, so once a
bit is on you won't be stabbed in the back by the sub-process
changing its mind later. The vector is actually extra-boolean,
which might be implemented by a NULL pointer that can *once* be set
to a non-NULL return value.)
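A sketch of such a write-once milestone vector (Python, illustrative only; the sub-process side may set a bit but can never clear one):

```python
# Write-once "boolean vector" poll interface: the worker may turn a
# milestone bit on, never off, so a caller that saw TRUE can rely on it.
import threading

class MilestoneVector:
    def __init__(self, n):
        self._bits = [False] * n
        self._lock = threading.Lock()

    def set(self, i):                 # worker side: turn a bit on, never off
        with self._lock:
            self._bits[i] = True

    def poll(self, i):                # caller side: non-blocking status check
        with self._lock:
            return self._bits[i]

mv = MilestoneVector(3)
assert not mv.poll(1)
mv.set(1)                             # once on, the bit stays on
assert mv.poll(1)
```

The "extra-boolean" variant mentioned above would store a write-once result value instead of True, so polling and fetching the result become one operation.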

> You cannot do that with function calls unless you invent
> cumbersome extraneous baggage (e.g. an rtos or an event loop or the
> utterly insane concept of rpc's).

Well if you have a single-threaded procedural system, then indeed
you can't fork threads, so you need to explicitly run an event
loop, and make sure each emulated thread pauses at frequent
intervals to avoid hogging the CPU. On the Macintosh under System
6, the system did something like that. Applications could be paused
only at specific points where the application did a system call. If
any application ran a tight loop that didn't do even one system
call the whole time, it'd lock the machine until it finished. In
other systems, CPU clock interrupts are enabled, forcing every
application to be interruptable at regular intervals (except during
critical sections of code when interrupts are disabled; an
application that hung inside a critical section forever would
likewise hang the CPU, but that's less likely than using the Mac
way.) On systems with virtual machines, there's no need for user
applications to disable interrupts at all, because if the
application is inside an application-level critical section the
system can nevertheless interrupt the virtual machine and save its
entire state and run another application for a while.

But IMO this is moot because modern procedural languages (on modern
operating systems such as Linux) support multiple threads with
operating-system support to automatically timeshare the various
threads. No event loop is needed there.

> In reactive programming, NOTHING is synchronized unless the
> programmer makes it so.

OK, at this point I gotta look up that jargon on Google/WikiPedia:

 <http://www-sop.inria.fr/meije/rp/>
Nice menu of implementations of the idea, but no overall definition.

 <http://www-sop.inria.fr/meije/rp/generalPresentation/index.html>
OK, this is more what I need to get started with understanding the idea.
   Reactive Programming considers software systems whose behaviors
   basically consist in reacting to activations. These systems are often
   called reactive systems. System reactions, also called instants, can
   take various forms, for example sequences of successive phases, or
   oscillations eventually leading to stabilization. ...

Is that a good explanation of what you're talking about?

I think my proposed feedback-loop methodology for finding narrower
and narrower nested interval-arithmetic approximations to fixed
points of iterators (usually some form of Newton's method) would
fit into that paradigm. It can run in a single process, always
choosing the sub-call that is most likely to result in narrowing the
parent interval, although in close calls it would be nice to be
able to fork the process and narrow both input intervals at the
same time and whichever returns first would get first chance to
narrow the parent interval.

     * The possibility to dynamically create and run new components
       during execution eases the programming task, as it does not impose
       a system to have a static maximum number of running components.
That sounds exactly like the Fork (Split) primitive of ordinary
multi-processing, which works fine within the procedural-programming style.

   The basic idea of Reactive-C was to propose a programming style close
   to C, in which program behaviors are defined in terms of reactions to
   activations. Reactive-C programs can react differently when activated
   for the first time, for the second time, and so on. Thus a new
   dimension appears for the programmer: the logical time induced by the
   sequence of activations, each pair of activation/reaction defining one
   instant.
I see a possible problem or two: What if the 'program' is still
running from an earlier activation when somebody tries to
activate it a second time. Does the second attempt **HANG** until
the processing from the first activation has completed, or is the
second attempt bundled into an activation record and put into a
queue so that the activator doesn't need to wait until that second
activation is complete? What if two processes each try to activate
a third process at exactly the same time? Is it a race condition as
to which activation is processed first? What if two processes try
to do the **first** activation of a third process? Which of them is
allowed to get *FIRST* privilege and which is relegated to getting
only *SECOND* activation? Or does an attempted *FIRST* activation
always start a new instance of the program, so that if two attempts
occur at the same time then two instances are started and they run
as separate processes, with each first-activator having a handle
only on the process that *it* activated?

 <http://en.wikipedia.org/wiki/Reactive_programming>
   ... in an imperative programming setting, a := b + c would
   mean that a is being assigned the result of b + c in the instant the
   expression is evaluated. In reactive programming it could instead mean
   that we set up a dynamic data-flow from b and c to a: whenever the
   value of c or b is changed, then a is automatically updated.

That's actually the inverse of my interval-arithmetic proposal, and
for that purpose it seems inferior because it performs excessive
computation that nobody needs yet and might never need.
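For concreteness, a toy sketch of that dataflow reading of a := b + c (Python, names invented; real Cells or reactive systems track dependencies automatically rather than by hand-registered listeners):

```python
# Toy dataflow cell: changing b or c re-runs the rule and updates a.
class Cell:
    def __init__(self, value=None):
        self.value = value
        self.listeners = []          # rules to re-run when this cell changes

    def set(self, value):
        self.value = value
        for rule in self.listeners:
            rule()

b, c, a = Cell(1), Cell(2), Cell()

def rule():                          # the dataflow rule: a := b + c
    a.value = b.value + c.value

b.listeners.append(rule)
c.listeners.append(rule)
rule()                               # establish the initial value
print(a.value)                       # 3
b.set(10)                            # updating b propagates to a
print(a.value)                       # 12
```

Note that the rule fires on *every* input change whether or not anyone wants a's value yet, which is exactly the input-driven behavior criticized below.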

Consider for example Newton's method for interval arithmetic:
We set up a starting interval which isolates one zero
(x-axis crossing point) of a function, narrow enough that Newton's
method applied to that interval will approximately double the
number of significant digits (square the error fraction).
Then we hook Newton's method to feed that interval back to itself,
so that each time Newton's method is called the interval's accuracy
is squared.

By my method, anybody who needs that value checks if it's already
accurate enough, and if so just uses the current interval, else it
specifies a recursive call to Newton's method then upon return
checks again to see if it's accurate enough now. Anyone just
wanting "more" accuracy will unconditionally call Newton's method
once.
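That demand-driven scheme might be sketched like so (Python floats standing in for true interval arithmetic, so this is illustrative only):

```python
# Demand-driven Newton refinement: iterate only when a caller asks for
# more accuracy than the stored approximation already has.
class LazyNewtonSqrt:
    def __init__(self, n):
        self.n = n
        self.x = n            # crude starting approximation
        self.err = n          # pessimistic initial error bound

    def refine_once(self):
        self.x = (self.x + self.n / self.x) / 2   # one Newton step
        self.err = abs(self.x * self.x - self.n)  # cheap residual bound

    def get(self, tol):
        while self.err > tol:                     # compute only on demand
            self.refine_once()
        return self.x

s = LazyNewtonSqrt(2.0)
print(s.get(1e-6))    # iterates a few times, then stops
print(s.get(1e-3))    # already accurate enough: no further work done
```

Nothing runs between requests, so storage and CPU stay proportional to the accuracy somebody actually asked for, in contrast to the feedback loop running endlessly.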

By reactive programming, once the feedback loop is activated, the
Newton's method process will run endlessly
 (because its input has changed, it will be run, which changes its
  output, which feeds back to its input, causing its input to be
  changed again, causing it to be activated again, and so on and so
  on and so on),
repeatedly squaring the accuracy, thus repeatedly doubling the
amount of storage needed to hold the more-accurate result. Program
size grows exponentially in just this one component even if nobody
really needs more than twenty or thirty significant digits here.
Within seconds the working set for this one process exceeds the
actual 5 gigabytes of RAM, throwing virtual memory for the whole
system into thrashing, and then eventually the system crashes
because it has run out of the available 500 gigabytes of swap space.

Was it McCarthy who stated that the only important thing about a
program is what output it produces, that nothing should be computed
unless it will affect output? My interval-arithmetic
lazy-evaluation evaluate-only-as-much-as-needed approach would seem
to satisfy McCarthy's statement, whereas reactive programming would
seem to violate it. The difference is between McCarthy's
output-driven processing and reactive programming's input-driven
processing. McCarthy (and I) do only as much computing as needed to
produce the required output. Reactive programming does all
computing that is allowed by the input, which may be many orders of
magnitude more than is cost-effective for the required output. Is
that a correct comparison??

Now there's one disadvantage of the extreme McCarthy/Maas idea,
latency. If we wait until somebody asks for something before it's
computed, there may be a delay before it's available, whereas if we
pre-compute something in anticipation that somebody might ask for
it, then ideally it's already available for immediate return
whenever somebody finally does ask for it. It seems silly to sit
idle waiting for somebody to ask something, if the computer can
predict likely things somebody might want and use idle cycles to
pre-compute them and stage them for immediate retrieval. So perhaps
a slight variation of my proposed idea would be to use idle cycles
to (starting from the top) shrink the intervals just a little bit
more than previously asked for. Still, each level in the network
leading toward output is interval-shrunken only a little bit at a
time, and after a reasonable amount of interval-shrinkage occurs
maybe it's better to have idle cycles than to fill up all of memory
computing more and more accurate results that are less and less
likely to actually be wanted.
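A toy Common Lisp sketch of that demand-driven idea (invented for
illustration, not from either posting): the interval bracketing
(sqrt x) is narrowed by bisection only when a consumer demands a
tighter bound, and each demand resumes from the interval already
computed, so no more accuracy is ever produced than the output needs.

```lisp
(defun make-lazy-sqrt (x)
  "Return a closure holding an interval [lo,hi] that brackets (sqrt X).
Calling it with a TOLERANCE narrows the interval just enough, then
returns the pair (lo hi)."
  (let ((lo 0.0d0)
        (hi (max x 1.0d0)))   ; initial bracket: 0 <= sqrt(x) <= max(x,1)
    (lambda (tolerance)
      (loop while (> (- hi lo) tolerance)
            do (let ((mid (/ (+ lo hi) 2)))
                 (if (> (* mid mid) x)
                     (setf hi mid)
                     (setf lo mid))))
      (list lo hi))))

;; Asking coarsely first and finely later does only the incremental
;; bisection steps in between; nothing at all runs when nobody asks.
;; (let ((root2 (make-lazy-sqrt 2.0d0)))
;;   (funcall root2 0.1d0)      ; a few steps
;;   (funcall root2 1.0d-6))    ; resumes from the stored interval
```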

> In call-return programming (functional, procedural, etc)
> EVERYTHING is synchronized - always.

Only in single-thread programming. Everything in a given thread is
synchronized to its caller and anything it calls, but separate
threads don't need to be synchronized except at the point where a
JOIN is needed to merge results from different threads.

> People are currently wracking their brains trying to figure out
> how to program multi-core cpu's.  They think that it's a hard
> problem.  But only because they cannot think outside of the
> call-return box.

I think you're getting into strawman territory now.

Call-return, to top-down lazy-traverse a network of data
dependencies (including Newton's method etc. feedback loops), seems
to me quite sufficient for interval arithmetic and other
data/processing.

Machine interrupts for device events (mouse click etc.), or a
background clock-driven device poll loop, either of which then does
a call-return to the appropriate event-handler for the appropriate
widget, seem sufficient to handle GUIs. (Ok, I'll admit that *some*
of the event handling is done asynchronously, where one widget
passes an activation to another widget and doesn't wait for a done
signal before either doing something else or going idle to wait for
another local event. But at least it waits for the activated widget
to reply as to whether the activation was accepted or not??)

CGI (or PHP, JSP, RMI etc.) to connect clients to servers, together
with a call-return linkage to a relational database to manage all
persistent data, seems sufficient to handle
multi-machine-across-network applications.

Now maybe that doesn't cover all possible computer applications,
but it covers more than you seem to admit.

As to your derogatory implications that people who write software
don't assure their customers that the software will work: Would you
be willing to hire me to work for you, on the condition that we
block+fund the work according to use cases, and you don't pay me
for a particular use case until and unless it works as specified
(and all previously-paid use cases within the same application
 continue to work, interoperating with the new use case)?
From: Frank Buss
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <rjdnes3z38lv$.p831iavcuuk7$.dlg@40tude.net>
Robert Maas, http://tinyurl.com/uh3t wrote:

> This sounds like bullshit. What really is "message passing"?
> How is that any different from a function (procedure) being passed
> a message consisting of parameters and returning a message
> consisting of a set of return values?
> Surely two black boxes can't pass messages directly between them,
> because neither can possibly know of the other's existence. Some
> higher-level controller must decide which message is to be passed
> to which black box and then collect whatever return messages it
> produces and decide where to forward them next.

You are right: using a main function, which calls functions and other
functions with the returned values, could be one way to implement it,
and with continuations it could even be pseudo-multithreaded. Of
course, the code would not be very readable, because processes can
have state and there can be multiple instances of one process, so you
have to maintain states in your main function and add a state
parameter for each function. And you have to write a lot of code for
the network of queues and processes. Looks like this could be
simplified with a framework with a DSL and some macros. The result
would be a cooperative system like Paul Tarvydas described.
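Such a framework could start from a very small macro. A toy Common
Lisp sketch (the `wire` macro is invented here for illustration; a
real system like the one Paul Tarvydas described would also need
queues, process state, and a scheduler):

```lisp
;; (wire source f g h) expands into (h (g (f source))): a linear
;; pipeline of "processes", written in upstream-to-downstream order.
(defmacro wire (source &rest stages)
  (reduce (lambda (form stage) (list stage form))
          stages
          :initial-value source))

;; Example: feed one "IP" through two processing stages.
;; (wire "abc" string-upcase reverse) expands to
;; (reverse (string-upcase "abc"))
```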

> So I don't see how FBP is any different from procedural programming,
> where a main application calls the various subroutine components.

As Kenny wrote, everything is a Turing machine. The difference is the
higher-level concept: with the same argument you could say that you
don't see any difference between procedural programming and
object-oriented programming. OO is just calling functions with a this
pointer, plus some more logic to implement inheritance, polymorphism
etc. The point is the higher-level concept of "objects". It is not
important how it is implemented, as you can see e.g. in the
object-oriented framework GTK, which is implemented in C, which
doesn't know about classes.

Same with FBP: when designing an FBP program, you think about data
that is processed by a network of processes.

I don't have experience with FBP, but I think one additional main
difference compared to structured programming is the focus on data instead
of tasks. In structured programming you have a problem, which you
split into simpler problems until you can implement them as
functions. In FBP it looks like it is more data-oriented: You have
some input data and some output data, and with the help of processes
you split the data into simpler streams, which are fed to
sub-networks and merged together at the output.
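That data-oriented view can be sketched in Common Lisp as a toy
single-threaded network (all names invented; a real FBP runtime would
add scheduling, blocking reads, and multiple process instances):

```lisp
;; A "wire" is a one-cell box holding a queue of IPs; a "process" is
;; an ordinary function applied to each IP arriving on its input wire.
(defun make-wire () (list nil))

(defun send (wire ip)
  (push ip (car wire)))

(defun drain (wire)
  "Remove and return all queued IPs in arrival order."
  (prog1 (nreverse (car wire))
    (setf (car wire) nil)))

(defun run-filter (process in-wire out-wire)
  "Apply PROCESS to every IP waiting on IN-WIRE, sending results downstream."
  (dolist (ip (drain in-wire))
    (send out-wire (funcall process ip))))

;; Wiring a two-stage network: source wire A -> upcasing process -> wire B.
(let ((a (make-wire))
      (b (make-wire)))
  (dolist (ip '("credit" "debit"))
    (send a ip))
  (run-filter #'string-upcase a b)
  (drain b))   ; => ("CREDIT" "DEBIT")
```

The wiring, not the processes, carries the application structure: the
same `string-upcase` "process" can be dropped unchanged into any other
network.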

> What if a procedure needs several different inputs from different
> sources before it will be ready to return some values?

In a real preemptive system this is no problem: e.g. if you need a value
from wire x and a value from wire y, first read a value from wire x, then
read a value from wire y, then process the values. Each read could block
until there is some data in the wire (but you have to be careful to avoid
deadlock problems).

> That's easy
> in procedural programming too: All you need is an OWN (static)
> variable that keeps the internal state from one call to the next,
> and a NULL type of return that is simply discarded, in a case where
> there are no return values yet.

Maybe this is possible for simple cases, but it sounds a bit
complicated and not like intuitively understandable code. And what do
you do if you need multiple instances of processes? You could arrange
static variables in arrays and pass context information to the
function to say which instance is meant, but the code would become
even more unreadable.

> This sounds like more bullshit. It's the GUI which is diagrammatic.
> The *actual* network definition is a connection list, which is
> somewhat hidden from view when using the GUI, but it's really there
> all the time, and it's quite apparent when writing "programs that
> write programs", as is often done in Lisp. It would be a royal
> pain, when writing a program that writes a FBP program, to need to
> convert all the details into a visual diagram (or mouse motions to
> draw a visual diagram) which is somehow fed back into the GUI.

That's true, and the diagram is just one way to visualize it. In the
book the author wrote that a blind person was happy to use the
concept, because it made work so much easier for him. It doesn't
matter how you build or
visualize the network of connected processes.

>    ... IPs traveling across a given connection
>    (actually it is their "handles" that travel) constitute a "stream",
>    which is generated and consumed asynchronously ...
> 
> So that pretty much precludes running such a process on multiple
> CPUs connected across the InterNet? Or does a process on one
> machine owning a handle for data on another machine have a way to
> request that other machine to eventually send the actual data?

I think this is an implementation detail. When traveling to other
machines it sounds like a good idea to transfer the data to which the
handle points as well. In the original FBP concept an outgoing
connection feeds only one incoming connection (but an incoming
connection can be fed from multiple outgoing connections) and you
have to explicitly destroy an IP. This supersedes GC, and there will
be a one-to-one relation between a handle and the associated data.

>> Anyone interested in implementing FBP in Lisp?
> 
> Do you mean implement it in some particular vendor-supplied
> implementation of Lisp which happens to support multiple threads?
> Or somehow implement it in ANSI CL, somehow emulating threads by
> means of some sort of polling loop?

I think a mix of both would be nice, see my other posting today.

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <rem-2008jun02-001@yahoo.com>
> From: Frank Buss <····@frank-buss.de>
> You are right, using a main function, which calls functions and
> other functions with the returned values, could be one way to
> implement it and with continuations even pseudo-multithreaded. Of
> course, the code would not be very readable, because processes can
> have state and there can be multiple instances of one process, so
> you have to maintain states in your main function and add a state
> parameter for each function.

No, you only need a state parameter for each continuation that has
internal state, and probably that would be inside the continuation
anyway so you don't have to add anything new after you already have
the continuation (except the persistent handle on that continuation).

And you don't need anything maintained in the main function except
handles on continuations that are directly kept by the main
function. Other continuation handles can be kept elsewhere, in
whichever *other* continuation needs to keep it available for local
use. And return values can contain handles on other continuations,
which are immediately passed as parameters to other continuations,
so many handles don't need to actually be kept anywhere long.
For many D/P applications the main function needn't deal with
continuations at all. It just calls a function to do input
processing, then passes the return value (a record containing a
continuation for further input processing and the accumulated input
data so-far) to the function that does the main processing, then
passes the return value from it to the function that does output
generation (report writing). During debugging it might be written as:
 (defun main (sysargs)
   (prog (inrec outrec)
     (setq inrec (process-input sysargs))
     (setq outrec (main-processing inrec))
     (generate-output outrec)
     (return 0)
     ))
After debugging it might be collapsed to:
 (defun main (sysargs)
   (generate-output (main-processing (process-input sysargs)))
   0
   )
Deep inside the three functions that are called would be all the
code that explicitly instantiates continuations and steps them as
needed and builds a record containing a handle on the continuation
and later code that picks fields out of the passed record.

> And you have to write a lot of code for the network of queues and
> processes. Looks like this could be simplified with a framewirk [sic]
> with a DSL and some macros. The result would be a cooperative
> system like Paul Tarvydas described.

Sure, a package to add another layer of API to support a DSL for
this sort of thing, within an existing programming language such as
CL or Java, would be reasonable.

> ... When designing a FBP program, you think about data, which is
> processed by a network of processes.

Hopefully you think about *both* data and processes. Without data
there's nothing to process, but without processes there's just
static data sitting there like a Web page that nobody ever loads
into a browser.

> I don't have experience with FBP, but I think one additional main
> difference compared to structured programming is the focus on data
> instead of tasks.

Repeating myself: concentrating on just the front, or rear, wheel
of a bicycle, at the expense of the other, isn't a good way to
maintain a bicycle. Likewise, concentrating on only data and
ignoring processing is only half correct. A bicycle needs
both wheels, neither is more important, and data-processing
software needs both data and processing.

> In structured programming you have a problem, which you split in
> simpler problems, until you can implement it as functions. In FBP
> it looks like it is more data-oriented: You have some input data
> and some output data and with the help of processes, you split the
> data to simpler streams, which are fed to sub-networks and are
> merged together at the output.

I actually debug my mostly-procedural code that way, working from
input forward to output, except that I develop my code bottom-up,
instead of as a haywire of network connections, as much as possible. But
my "dataflow" software actually is somewhat like you describe, but
again all in a procedural paradigm: I have a separate function to
make sure each control point in the data flow is not just generated
but in fact up to date with respect to any earlier-in-flow data
that it depends on.
 (And when I make an incompatible change in some algorithm for
  producing some particular data, I "hardwire" a timestamp in that
  control point so that any output that was computed before the
  incompatible change is automatically regarded as "out of date",
  forcing it and any data depending on it to be recomputed the next
  time such data is requested. Here's an actual example:
    ;... set date1 and date2 to UT timestamps of the two inputs ...
    (setq insym (make-symbol "LABELS+PHPROPS")) ;make JOIN of the two inputs
    (length (setf (get insym :DATA) ...))
    (setf (get insym :DATE)
          (max date1 date2 3419829547 ;When GR installed to replace 01
                           3419834222 ;When closure bug fixed
                           3419916361 ;When mustkeep flag eliminated
                           ))
  )
So the haywire (graph) of dataflow is effected by the call
relationships between these various data control-points.
It would be relatively trivial to automatically process my function
definitions to trace calls of one function by another and thereby
build the actual data-flow graph as an explicit set of links, which
could then be printed in some graphical form
 (that last part is more difficult, similar to automatic circuit
  layout, if you want to minimize the number of wires that cross so
  that the output will be visually legible).

> > What if a procedure needs several different inputs from different
> > sources before it will be ready to return some values?
> In a real preemptive system this is no problem: e.g. if you need
> a value from wire x and a value from wire y, first read a value
> from wire x, then read a value from wire y, then process the
> values. Each read could block until there is some data in the wire
> (but you have to be careful to avoid deadlock problems).

Agreed. The rather trivial code to analyse the call pattern of the
functions representing the data control points, or alternately a
formal representation of the data-flow graph whereupon the function
definitions for control points are generated automatically, would
then support detecting any data loops that either are flat-out
disallowed or are a cause for concern as to whether they are stable.
 (With my time-stamped control points, loops must be flat-out
  disallowed. With my proposed interval-arithmetic dataflow, loops
  are very common, and stability must be guaranteed, i.e. don't set
  up a Newton's method loop until after the zero of the function has
  been narrowed enough that Newton's method is immediately provably
  strictly nested output-subIntervalOf-input. Before stability has
  been achieved, a more complicated algorithm that uses
  divide-and-conquer would sit in that dataflow control point, to be
  replaced by Newton's method after stability has been achieved.)

> > That's easy
> > in procedural programming too: All you need is an OWN (static)
> > variable that keeps the internal state from one call to the next,
> > and a NULL type of return that is simply discarded, in a case where
> > there are no return values yet.
> Maybe this is possible for simple cases, but sounds a bit
> complicated and not like intuitively understandable code. And what
> do you do if you need multiple instances of processes? You could
> arrange static variables in arrays and passing context information
> to the function which instance is meant, but the code would become
> even more unreadable.

Common Lisp functions don't have OWN variables. Only lexical
closures (and CLOS objects) do. So what you do is write (for each
"class" of function-with-OWN-state) just one (1) FUNCTION that makes
reference to an external state record, and then each time you want
to instantiate it with separate OWN state you create a lexical
closure that contains that external state-record inside it. Doh!!

If you need more than one function sharing the same OWN state, it's
trivial to build a closure containing all those functions sharing a
single lexical variable containing the shared state. For example,
if you have five related functions that share an
external state, and you want to make three instances of that
set-of-five, with each set having its own separate persistent
state, you make three closures each of which has one lexical
variable and five functions.

(Are there any newbies reading this thread who are intrigued by what
 I'm saying but would need me to compose a worked-out example of
 such a thing before they would really understand? For example,
 five functions referencing a vector of two numeric values, which do:
  - Use the current first value (and the current second value
     before the call) to compute a new second value, which is
     returned.
  - Use the current first value (and the current second value
     before the call) in a different way to compute a new second
     value, which is returned.
  - Use the current first value (and the current second value
     before the call) in yet a different way to compute a new
     second value, which is returned.
  - Replace the first value.
  - Print the current state on the stdout.
  The constructor for this closure (actually a vector of
  intertwined closures) of course sets initial values for both.)
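For any such newbies, here is one possible worked-out sketch
(invented code; addition, multiplication, and subtraction stand in
for the three "different ways" of computing a new second value):

```lisp
;; Constructor returns a vector of five intertwined closures, all
;; sharing one two-element state vector. Each constructor call makes
;; an independent instance with its own persistent OWN state.
(defun make-five (first second)
  (let ((state (vector first second)))
    (vector
     ;; new second := first + old second, returned
     (lambda () (setf (aref state 1) (+ (aref state 0) (aref state 1))))
     ;; new second := first * old second, returned
     (lambda () (setf (aref state 1) (* (aref state 0) (aref state 1))))
     ;; new second := first - old second, returned
     (lambda () (setf (aref state 1) (- (aref state 0) (aref state 1))))
     ;; replace the first value
     (lambda (new) (setf (aref state 0) new))
     ;; print the current state on stdout
     (lambda () (format t "state = ~S~%" state)))))

;; Two instances with separate state:
;; (defvar *p* (make-five 10 1))
;; (defvar *q* (make-five 2 5))
;; (funcall (aref *p* 0))  ; => 11   (p's second is now 11)
;; (funcall (aref *q* 1))  ; => 10   (q is untouched by p)
```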

> ... the diagram is just one way to visualize it. In the book the
> author wrote, that a blind person was happy to use the concept,
> because it made work so much easier for him.

Does the blind person have some kind of braille-like feely "screen"
as a display device, so that by feeling the "screen" with the fingers
the traces of the "wires" connecting between nodes could be
followed, and the pseudo-braille icons at each node could be felt to
identify each of them? Indeed such a BGUI would seem a good match
for a blind person who doesn't like listening to vocalized source
code syntax so much.

> > ... CPUs connected across the InterNet? Or does a process on one
> > machine owning a handle for data on another machine have a way to
> > request that other machine to eventually send the actual data?
> I think this is an implementation detail. When traveling to other
> machines it sounds like a good idea to transfer the data to which
> the handle points, as well.

I strongly disagree in the case where the handle points to a
massive database of which the requestor will traverse only a very
small part that it needs and ignore the rest. Lazy/incremental-FTP
as is done with RMI/SOAP or whatever seems a better design. Of
course you can add an extra layer of query language, sort of like
how SQL works, whereby you specify in the query exactly what parts
of the data record you really need, and *only* those particular
fields are returned in on-the-fly-created records within the result
set. If you do that, then indeed transmitting immediately *all* the
fields that were requested, instead of just a handle to them, would
be optimal.

> In the original FBP concept an outgoing connection feeds only one
> incoming connection (but an incoming connection can be fed from
> multiple outgoing connections) and you have to explicitly destroy an
> IP. This supersedes GC and there will be a one-to-one relation
> between a handle and the associated data.

Hmmm, is the claim that GC doesn't work across a network, not even
the most modern generational GC algorithms, so it's best to scrap
the whole idea of GC for such cross-machine references and revert
to C-style malloc/free or C++-style new/delete? I don't approve of
that idea. What if system A requests system B to reserve a resource
and give system A a handle to it, intending to tell system B to
delete it sometime later. But system A crashes, or the application
unexpectedly quits, etc., so the DELETE command is never
transmitted, so system B holds onto the resource indefinitely?
Maybe better for system B, when in need of a local GC, to ask
system A "which of these handles are you still maintaining", and
hold onto any that system A says it still is maintaining, and GC
the rest (unless somebody else has them). If system A doesn't
respond within a reasonable time (20 minutes?), go ahead and assume
A has crashed and won't be holding onto *any* stale pointers any
longer. Summary of my proposal: For local references, do the usual
mark and sweep. For remote references, keep explicit info about
which remote machines *might* still have a reference, and query
them whenever there are no longer any local references. Since
dealing with remote references has a lot more overhead than local
references anyway, we can afford the luxury of an EQ-hash-table to
keep track of which objects might have remote references and what
machines might have those references, with a single bit in each
object to tell whether there *might* be an entry in the
EQ-hash-table worth checking during GC. A separate EQUAL-hash-table
would provide the reverse mapping, so that when it's necessary to
query a remote host about a particular reference, *all/most* of the
currently believed-active references to my machine can be inquired
about in a single network datagram, and if the query times out then
*all* such references from that host can be deleted (from the hash
tables) en masse leaving them available for GC if nobody else has
handles on them.
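The bookkeeping half of that proposal (the two hash tables, not the
network protocol) might be sketched like this in Common Lisp; all
names here are invented:

```lisp
;; Maps each locally held object (EQ identity) to the list of remote
;; hosts that *might* still reference it. When a host fails to answer
;; a keep-alive query, every reference attributed to it is dropped in
;; one pass, leaving the objects to ordinary local GC.
(defvar *remote-refs* (make-hash-table :test #'eq))

(defun note-remote-ref (object host)
  "Record that HOST was handed a reference to OBJECT."
  (pushnew host (gethash object *remote-refs*) :test #'equal))

(defun drop-host (host)
  "Forget all references attributed to HOST, e.g. after a query timeout."
  (maphash (lambda (object hosts)
             (let ((rest (remove host hosts :test #'equal)))
               (if rest
                   (setf (gethash object *remote-refs*) rest)
                   (remhash object *remote-refs*)))) ; current key: allowed
           *remote-refs*))

(defun remotely-referenced-p (object)
  "True if some remote host might still hold a handle on OBJECT."
  (nth-value 1 (gethash object *remote-refs*)))
```

The reverse mapping (host to references, for batching one datagram
per host) would be a second table maintained in the same two
functions.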
From: Robert Maas, http://tinyurl.com/uh3t
Subject: 419 spammer harvested address (was: Cells compared to Flow-Based Programming)
Date: 
Message-ID: <rem-2008aug27-002@yahoo.com>
> From: ···················@spamgourmet.com (Robert Maas, http://tinyurl.com/uh3t)
That variant e-mail address was posted only once, in
this article on Jun.02:
 <http://groups.google.com/group/comp.lang.lisp/msg/e156db70df59c938>
 = Message-ID: <·················@yahoo.com>
but today I've had to disable the address, because after nearly
three months some spammer finally harvested that address and sent
me a spam to that address, reported here:
 <http://groups.google.com/group/news.admin.net-abuse.sightings/msg/4c90936b378a0185>
 = Message-ID: <·················@yahoo.com>
If you want to very-belatedly e-mail me privately, see my Web site
for a Turing test that leads to a non-published e-mail address, or
check some other article I posted recently for a public but
disguised address that hasn't yet been harvested.

By the way, as recently as *today*, Barack Obama's
campaign organization is still sending out spam to
addresses that never signed up for any mailing from
him, but which addresses were used in a *comment* to
his comment WebForm asking him *not* to send them any
e-mail. Harvesting e-mail addresses to use for some
purpose other than how they were originally supplied
should be a crime. Regardless of where the address came
from, putting somebody on an e-mail mass-mailing list
without first asking them to confirm they wanted the
mailings should also be a crime. Obama's campaign has
repeatedly committed what should be two separate crimes.
Obama and his campaign workers should be in prison
already. But of course Obama is corrupt like many other
members of Congress, and probably voted for CAN-SPAM
which gives them a legal exemption to all anti-spam
laws.

The last three such spam to one of my spamtrap
addresses are summarized below:

   291 Behind the scenes in Denver
   SpamTrap -- My mom, the girls, and I left home in Chicago and got to
   Denver yesterday. What a beautiful city! The convention started this
   morning, and everyone... Michelle Obama
   ····@...
   Aug 26, 2008 1:28 am

   292 Did you see Michelle?
   SpamTrap -- I am so lucky to be married to the woman who delivered
   that speech last night. Michelle was electrifying, inspiring, and
   absolutely magnificent. I... Barack Obama
   ····@...
   Aug 26, 2008 3:55 pm

   293 Rolling up our sleeves
   SpamTrap -- This has been a convention of extraordinary moments. Ted
   Kennedy passing the torch to a new generation. Michelle Obama moving
   the crowd to tears.... Jon Carson, BarackOba...
   ····@...
   5:42 pm
From: Espen Vestre
Subject: Re: 419 spammer harvested address
Date: 
Message-ID: <m1ej4aryg3.fsf@vestre.net>
So what? I get appr. 3000 spam mails every day, but I can manage it.
-- 
  (espen)
From: Stanisław Halik
Subject: Re: 419 spammer harvested address
Date: 
Message-ID: <g9e51n$31k9$1@opal.icpnet.pl>
thus spoke Robert Maas, http://tinyurl.com/uh3t <···················@spamgourmet.com.disabled>:

>> From: ···················@spamgourmet.com (Robert Maas,
>> http://tinyurl.com/uh3t)
> but today I've had to disable the address, because after nearly
> three months some spammer finally harvested that address and sent
> me a spam to that address, reported here:

What's the problem? Turn on greylisting and use SpamAssassin with SARE
rulesets. It's not a big deal. Got a couple hundred attempts to spam my
address daily, the very few that get through greylisting get flagged by
SA. There's not a global problem with spam, but dimwitted postmasters.

-- 
The great peril of our existence lies in the fact that our diet consists
entirely of souls. -- Inuit saying
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: 419 spammer harvested address
Date: 
Message-ID: <rem-2008sep30-008@yahoo.com>
> From: =?UTF-8?Q?Stanis=C5=82aw?= Halik <··············@tehran.lain.pl>
> Turn on greylisting and use SpamAssassin with SARE rulesets.

I don't have either option. Perhaps you will provide me with a free
account on whatever computer *you* use which does have those
options.

> There's not a global problem with spam, but dimwitted postmasters.

I don't know of any e-mail service available to me currently that
does it right. All known postmasters are currently dimwitted, some
more dimwitted than others. Perhaps whatever system you are using,
which you are bragging about, could be made available to me? If and
when I finish some other projects and have time+energy to start
implementing my new e-mail system, it'll do spam filtering **right**.


-
Nobody in their right mind likes spammers, nor their automated assistants.
To open an account here, you must demonstrate you're not one of them.
Please spend a few seconds to try to read the text-picture in this box:

/----------------------------------------------------------------------------\
| |\/||_|   |~ _ ||  _  ._ _  _    _ _|  _|_ _   |\ | _      \/ _ ._|
| |  | _|  ~|~(_)||<_\  | | |(_)\/}_(_|   | (_)  | \|}_\/\/  / (_)| |<
|  |~._ _ ._ _   |~ _ |o |~ _ ._._ o _    |\/||_|   |~._o _._  _| _   _ ||
| ~|~| (_)| | |  |_(_|||~|~(_)| | ||(_|o  |  | _|  ~|~| |}_| |(_|_\  (_|||
| _|_._o _ _|  _|_ _    _ _ |_|  ''|       _ ._._   |_| _    o  ~|~|_  _._ _)||
|  | | |}_(_|   | (_)  _\(_| _|    |  \/\/(_|| | |   _|(_)|_|o   | | |}_| }_ ||
| |_  _  ._  _    _   ._ |~o._ (~|  _|_|_  _._ _    _ ._  _|  ._  _    _ ._  _
| |_)}_  | |(_)  _\|_|| ~|~|| | _|   | | |}_| }_)  (_|| |(_|  | |(_)  (_)| |}_
|  _   _._    _ _ ._ _ _ ''  |\/||_|       _  _  _||_|) _   _    _|_ _o _| _
| }_\/}_| |  (_(_|| }__\o    |  | _|  \/\/(_)(_)(_| _| _\  (_)|_| | _\|(_|}_
|  _ _    _._ _ _|      o_|_|_    _._  _        |\ | _      \/ _ ._| ) _   _
| (_(_)\/}_| }_(_|  \/\/| | | |  _\| |(_)\/\/o  | \|}_\/\/  / (_)| |< _\  (_|
| | _ ._  _||_|  _|_ _     ._        |_  _._   |_| _    )._ _  _|_|_  _
| |(_)| |}_| _|   | (_)\/\/| |)  \/\/| |}_| |   _|(_)|_| | }_   | | |}_
|  _ ._ ||_|   _   ._ |~ _._  |_  _ |_|   _ ._ _    ._  _|
| (_)| || _|  _\|_|| ~|~}_|   |_)(_) _|  (_|| (_)|_|| |(_|o
\----(Rendered by means of <http://www.schnoggo.com/figlet.html>)------------/
     (You don't need JavaScript or images to see that ASCII-text image!!
      You just need to view this in a fixed-pitch font such as Monaco.)

Then enter your best guess of the text (200-300 chars) into this TextArea:
   +------------------------------------------------------------+
   |                                                            |
   |                                                            |
   |                                                            |
   |                                                            |
   |                                                            |
   +------------------------------------------------------------+

If you get the text 90% correct, that will be good enough to qualify
for interactive coaching towards the exactly correct answer.
But if you get it less than 90% correct, your entire /16 will be
permanently blacklisted, and your sysadmin will be sent one e-mail
telling your IP number and date&time which caused the blacklisting,
and as a result you will probably be expunged from your ISP.
From: Kaz Kylheku
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <1a21a3d9-e277-41f2-b063-aa21ce38b2ab@i76g2000hsf.googlegroups.com>
On May 23, 4:30 pm, Frank Buss <····@frank-buss.de> wrote:
> I've read nearly half of the book http://www.jpaulmorrison.com/fbp/book.pdf
> from the page http://www.jpaulmorrison.com/fbp/

I read all of it, some 13 years ago.

> FBP is very different: When developing applications, you start with
> defining information packets (IP). In Lisp an IP could be a hashtable or
> any other Lisp object. Then you define some processes and interconnect
> them. A process has inputs and outputs for processing IPs. The process
> itself is a program, with local storage.

Where storage constitutes state!

> A process can be instantiated
> multiple times and can be configured with configuration IPs (there are
> preemptive and cooperative multitasking implementations).

I worked on some projects which incorporated these FBP concepts, and
realized it's a mess. FBP sounds good on paper, but it's
``impossible'' to debug.

If something goes wrong---for instance, a process obtains bad data
from somewhere---you don't have a useful call trace. The top of your
thread called some ``getmessage'' function, which pulled the bad
message from a queue where it was deposited by another thread that has
since gone on to do other things.

If you've ever debugged a communication protocol, you know what I
mean.

You're going to end up with silly abstraction inversions. Programmers
will want simple function calls, and they will emulate them by
defining information packet types which are like functions. The
configuration IP concept in FBP is a perfect example of this. You just
want to call a simple function with arguments to configure a process
(ideally, just once via a constructor call), but instead, you have to
instantiate a process, hook it into this network, and send it a
packet.

The nice thing about functions is that the caller is suspended until
the function returns, and the caller synchronously collects the return
value. You can set breakpoints, single-step the entire process if
necessary, and view the activation chain.

Also, did you get to the part of the book where it describes very
stateful processes which, for instance, parse nested syntax? It even
uses Lisp notation. What happens if the stream of packets being parsed
by some process has a syntax error? Ah, right. FBP applications
consist of debugged processes, so that would never happen in the
field. :)

> According to the book, at IBM they have successfully used it
> in many projects and one example with three projects, in the third project
> they achieved a reuse rate of about 97% (PDF pages 42/43 in the book).

A reuse rate of 97% means that you really don't have a new project.
You just tweaked something in the existing project and /called/ it a
new project.

If your ``new'' project has 97% of the code of the ``old'' one, you've
only done a minimal amount of new work. This could only be because the
requirements have only changed in a trivial way.

Or it could also be that 97% of your effort was spent in developing
infrastructure whose details are irrelevant to the problem domain. If
Common Lisp is 99 times bigger than the average program you write,
then each time you write a new program, you can claim 99% reuse. :)
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <483edd89$0$15203$607ed4bc@cv.net>
Kaz Kylheku wrote:
> On May 23, 4:30 pm, Frank Buss <····@frank-buss.de> wrote:
> 
>>I've read nearly half of the book http://www.jpaulmorrison.com/fbp/book.pdf
>>from the page http://www.jpaulmorrison.com/fbp/
> 
> 
> I read all of it, some 13 years ago.
> 
> 
>>FBP is very different: When developing applications, you start with
>>defining information packets (IP). In Lisp an IP could be a hashtable or
>>any other Lisp object. Then you define some processes and interconnect
>>them. A process has inputs and outputs for processing IPs. The process
>>itself is a program, with local storage.
> 
> 
> Where storage constitutes state!
> 
> 
>>A process can be instantiated
>>multiple times and can be configured with configuration IPs (there are
>>preemptive and cooperative multitasking implementations).
> 
> 
> I worked on some projects which incorporated these FBP concepts, and
> realized it's a mess. FBP sounds good on paper, but it's
> ``impossible'' to debug.

Cells has issues that way, especially since I shifted to a queue 
structure to handle propagation: you don't have a call stack to follow 
when you land in the debugger. But only rarely do I have a problem of 
the "hey, why am I recalculating?" kind. The whole point of declarative 
programming is to write rules that cover all the cases by calling on 
all the input they need, so all I need to know to debug a rule is its 
inputs. If I see a crazy input, Tilton's Law cuts in and I go worry 
about that Cell.

But, yes, the paradigm shift does introduce some issues.
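A toy illustration of the situation described above (this is not Cells
itself; `Cell`, `rule`, and `users` are invented names for a minimal
Python sketch): propagation walks a queue rather than a call stack, so
when a rule misbehaves the only evidence available is its inputs.

```python
class Cell:
    """A toy dataflow cell: a rule recomputes it from the cells it reads."""
    def __init__(self, rule=None, value=None):
        self.rule = rule
        self.value = value
        self.users = []              # cells whose rules read this one

    def set(self, value):
        self.value = value
        pending = list(self.users)   # queue-driven propagation: no call stack
        while pending:
            cell = pending.pop(0)
            cell.value = cell.rule()
            pending.extend(cell.users)

a = Cell(value=1)
b = Cell(rule=lambda: a.value * 2)   # b's rule names its one input, a
a.users.append(b)

a.set(5)
print(b.value)   # 10
```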

> 
> If something goes wrong---for instance, a process obtains bad data
> from somewhere---you don't have a useful call trace. The top of your
> thread called some ``getmessage'' function, which pulled the bad
> message from a queue where it was deposited by another thread that has
> since gone on to do other things.
> 
> If you've ever debugged a communication protocol, you know what I
> mean.
> 
> You're going to end up with silly abstraction inversions. Programmers
> will want simple function calls, and they will emulate them by
> defining information packet types which are like functions.

And this is why we love a multi-paradigm language. Things like FBP and 
Cells should be used selectively within the code beast. Unfortunately 
the usual reaction people have when they hit on something new is "Hey, 
let's make a new language." They missed Tilton's Law:

     Make new datatypes, not new languages.

kt

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <rem-2008jun02-003@yahoo.com>
> From: Ken Tilton <···········@optonline.net>
> Make new datatypes, not new languages.

I agree. And furthermore they don't need to be actual new datatypes.
New intentions on old datatypes work fine too.
(And in a sense, you really can't invent new datatypes within Lisp
 or Java, you can only make new sub-classes of existing classes.
 The closest you can get to a brand-new datatype is when you
 directly sub-class STANDARD-OBJECT or java.lang.Object respectively.)

Once you have a truly generic datatype, such as nested lists with
sufficient types of atomic constituents, you can emulate any
intentional datatype you want. Several common methodologies exist:
- Have the intention of the data known a priori by the functions
   you call to accept the data as input.
- Store the intentional type of the data in the CAR of the head
   cell of each "object", with the CDR pointing to the actual data.
- Use an uninterned symbol as the header of each separate "object",
   with the property list containing the intentional tag(s) and the
   value cell holding the actual value. This has the advantage that
   an object can be "of multiple types" i.e. satisfy multiple
   interfaces, merely by having multiple type-tags in the property
   list.
- Define STRUCT or CLOS classes, sigh, do you really need to do that?
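The second and third methodologies carry over to any language with
generic aggregates; a rough Python analogue (tuples standing in for
cons cells and a set of tags for the property list - the names here are
invented for illustration):

```python
# Methodology 2: intentional type in the head position, data in the tail.
point = ("point", (3, 4))

def magnitude(obj):
    tag, data = obj
    assert tag == "point", "wrong intentional type"
    x, y = data
    return (x * x + y * y) ** 0.5

# Methodology 3, roughly: one object satisfying multiple "interfaces"
# by carrying several type tags alongside its actual value.
obj = {"tags": {"point", "drawable"}, "value": (3, 4)}

print(magnitude(point))            # 5.0
print("drawable" in obj["tags"])   # True
```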
From: Tim X
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <87hccgjx2f.fsf@lion.rapttech.com.au>
Kaz Kylheku <········@gmail.com> writes:

<snip>
>
> I worked on some projects which incorporated these FBP concepts, and
> realized it's a mess. FBP sounds good on paper, but it's
> ``impossible'' to debug.
>
> If something goes wrong---for instance, a process obtains bad data
> from somewhere---you don't have a useful call trace. The top of your
> thread called some ``getmessage'' function, which pulled the bad
> message from a queue where it was deposited by another thread that has
> since gone on to do other things.
>
<snip>
>
> The nice thing about functions is that the caller is suspended until
> the function returns, and the caller synchronously collects the return
> value. You can set breakpoints, single-step the entire process if
> necessary, and view the activation chain.
>

I agree that this does create some issues and doesn't necessarily fit
well with traditional debugging paradigms, but I'm not sure it follows
that it is therefore a bad paradigm/methodology.

Given the developments in multi-core hardware, threads, multiprocessor
systems, distributed systems, greater use of network-based services,
etc., I think the whole model of software as largely synchronous
function calls may be too limiting a paradigm for how we design, write
and debug software. We probably need to develop new techniques, in the
same way we had to develop new techniques for structuring and debugging
software as the problems it addressed became more complex and
unstructured spaghetti code showed its limitations as complexity
increased.

I wouldn't argue that this is the one and only paradigm for software
development, but it may be, for a certain class of problems, a better
paradigm, even if it creates a need to approach debugging from a new
direction.

> A reuse rate of 97% means that you really don't have a new project.
> You just tweaked something in the existing project and /called/ it a
> new project.

A claimed reuse rate of x% is meaningless without details of how much
the requirements differ, so I agree that it doesn't mean anything.
However, for the same reasons, you can't state that it wasn't a new
project either.


> If your ``new'' project has 97% of the code of the ``old'' one, you've
> only done a minimal amount of new work. This could only be because the
> requirements have only changed in a trivial way.
>

That's just an assumption. It is possible the claims are true, that the
projects had very different requirements, and that the approach really
does provide this type of advantage. Without details of how long the
original project took and to what extent the requirements differ, you
can't really make any definite statement based on just the amount of
reuse. It is possible the original project took 10 times as long to
develop as it would have taken with a different approach, and that at
the end there was such a lot of code, libraries and black boxes that
you could solve any problem with just 3% extra coding (that 3% could
take another 4 years of course - it is all relative).



> Or it could also be that 97% of your effort was spent in developing
> infrastructure whose details are irrelevant to the problem domain. If
> Common Lisp is 99 times bigger than the average program you write,
> then each time you write a new program, you can claim 99% reuse. :)

Exactly. This doesn't say anything about the approach, only about the
lack of sufficient information to make a call.

Tim


-- 
tcross (at) rapttech dot com dot au
From: Paul Tarvydas
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <g1p5ps$l5q$1@aioe.org>
Tim X wrote:

> Given the developments in multi-core hardware, threads, multi processor
> systems, distributed systems, greater use of network based services
> etc, I think the whole model of software as largely synchronous function
> calls may be too limiting a paradigm for how we design, write and debug
> software. We probably need to develop new techniques in the same way we
> had to develop new techniques in how we structure and debug software as
> our software and the problems they addressed became more complex and
> unstructured spaghetti code showed its limitations as complexity
> increased.

I agree, but note that you are overlooking the idea of re-using "old"
techniques.  Reactive programming enables the borrowing of well-worn
techniques from the hardware world:

- unit test / test circuits - reactive programming makes unit testing
very easy: create a stimulus unit that sends events into the unit under
test, and an observer that watches the outputs of the reactive unit and
compares them with expected values; it's easier in the reactive
paradigm because input-driven code doesn't implicitly drag along large
libraries of hidden stuff it depends on

- test probes - "tap" watchpoints into the reactive circuit and observe
values over time (have the Cells people tried this?)

- back-to-back testing - use two copies of the same component, copy 1 drives
inputs into copy 2, copy 2 sends its responses back to copy 1

- techniques for handling race conditions

- reuse of "patterns" that don't naturally occur to people when thinking in
the call-return paradigm, e.g. "daisy chain" for resource management,
multiplex / demultiplex for load balancing.
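The stimulus/observer arrangement in the first bullet is easy to sketch
(in Python here; `doubler` is an invented stand-in for any input-driven
unit): a stimulus drives events in, an observer collects the outputs
and compares them with expected values.

```python
def doubler(inbox, emit):
    """The reactive unit under test: consumes events, emits responses."""
    for event in inbox:
        emit(event * 2)

stimulus = [1, 2, 3]     # the event stream driven into the unit

observed = []            # the observer just collects outputs...
doubler(stimulus, observed.append)

expected = [2, 4, 6]     # ...and compares them with expected values
assert observed == expected
print("unit passed")
```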

pt
From: Ken Tilton
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <48402d64$0$25019$607ed4bc@cv.net>
Paul Tarvydas wrote:
> Tim X wrote:
> 
> 
>>Given the developments in multi-core hardware, threads, multi processor
>>systems, distributed systems, greater use of network based services
>>etc, I think the whole model of software as largely synchronous function
>>calls may be too limiting a paradigm for how we design, write and debug
>>software. We probably need to develop new techniques in the same way we
>>had to develop new techniques in how we structure and debug software as
>>our software and the problems they addressed became more complex and
>>unstructured spaghetti code showed its limitations as complexity
>>increased.
> 
> 
> I agree, but note that you are overlooking the idea of re-using "old"
> techniques.  Reactive programming enables the borrowing of well-worn
> techniques from the hardware world:
> 
> - unit test / test circuits - reactive programming makes unit test very
> easy - create a stimulus unit that sends events into the unit, create an
> observer that watches the outputs of the reactive unit and compares them
> with expected values ; it's easier in the reactive paradigm because
> input-driven code doesn't implicitly drag along large libraries of hidden
> stuff that it's dependent upon
> 
> - test probes - "tap" watchpoints into the reactive circuit and observe
> values over time (have the Cells people tried this?)

defobserver? :) That is a normal part of programming with Cells, e.g. 
how a widget's color attribute triggers a redraw event when it takes on 
a new color. Observers are generic in Standard Cells (via a GF 
specialized on slot name, instance, new-value, old-value, and the 
kitchen sink), but in my toy RDF implementation observers accidentally 
became Cell-specific. Not sure that matters here. Anyway, yeah, they 
are ready and available for debugging and other kinds of program 
analysis.
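For readers who haven't seen Cells, the shape of the idea is roughly
this (a Python sketch, not Cells' actual defobserver API; `Slot` and
`observers` are invented names): an observer is called with the new and
old value on every change, so a logging observer is a free watchpoint.

```python
class Slot:
    """A toy observable slot in the spirit of a Cells slot."""
    def __init__(self, value, observers=()):
        self.value = value
        self.observers = list(observers)

    def set(self, new):
        old, self.value = self.value, new
        for obs in self.observers:      # fire every observer on change
            obs(new, old)

log = []   # a "tap": record (old, new) pairs over time
color = Slot("red", observers=[lambda new, old: log.append((old, new))])
color.set("blue")
print(log)   # [('red', 'blue')]
```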

kt


> 
> - back-to-back testing - use two copies of the same component, copy 1 drives
> inputs into copy 2, copy 2 sends its responses back to copy 1
> 
> - techniques for handling race conditions
> 
> - reuse of "patterns" that don't naturally occur to people when thinking in
> the call-return paradigm, e.g. "daisy chain" for resource management,
> multiplex / demultiplex for load balancing.
> 
> pt
> 

-- 
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
ECLM rant: 
http://video.google.com/videoplay?docid=-1331906677993764413&hl=en
ECLM talk: 
http://video.google.com/videoplay?docid=-9173722505157942928&q=&hl=en
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <rem-2008jun02-002@yahoo.com>
> From: Kaz Kylheku <········@gmail.com>
> I worked on some projects which incorporated these FBP concepts,
> and realized it's a mess. FBP sounds good on paper, but it's
> ``impossible'' to debug.
> If something goes wrong---for instance, a process obtains bad data
> from somewhere---you don't have a useful call trace. The top of your
> thread called some ``getmessage'' function, which pulled the bad
> message from a queue where it was deposited by another thread that has
> since gone on to do other things.

I suffered a similar disaster a couple of years ago when I tried to
write a system consisting of several mutually recursive functions
that explored a very large virtual space to generate a suitable
random element of it, with continuations to ensure it never explored
the same randomly-chosen part of the space again after that part had
been exhausted. After I abandoned the project, I realized that what I
should have done was to code each piece of "business logic" as a
standalone function that takes a starting state and returns a final
state, and then have a very simple toplevel recursive main program
that calls the appropriate business-logic function at each step in
the recursive exploration. So if I ever find the time/energy/mood to
do it again, I'll refactor all the existing code per the new design,
test each business-logic function by itself, and expect success at
long last?
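That state-in/state-out shape can be sketched in a few lines (in
Python, with a toy ledger standing in for the search domain; the
function names are invented): each piece of business logic is a
standalone function from state to state, and the toplevel driver merely
threads state through them, so every step can be tested in isolation.

```python
# Each piece of "business logic" takes a starting state, returns a final state.
def deposit(state, amount):
    return {**state, "balance": state["balance"] + amount}

def withdraw(state, amount):
    if amount > state["balance"]:
        return {**state, "errors": state["errors"] + 1}
    return {**state, "balance": state["balance"] - amount}

# A very simple toplevel driver that just threads the state through the steps.
def run(state, steps):
    for fn, arg in steps:
        state = fn(state, arg)
    return state

final = run({"balance": 0, "errors": 0},
            [(deposit, 100), (withdraw, 30), (withdraw, 200)])
print(final)   # {'balance': 70, 'errors': 1}
```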

> If you've ever debugged a communication protocol, you know what I mean.

I did, but it was a simple two-party point-to-point protocol, which
was easy to debug. Something like IP, with multiple levels of
domain name servers, caches containing records with expiration
dates, multiple subnets with gateways and routers, would probably
illustrate your point fine.

> You're going to end up with silly abstraction inversions.
> Programmers will want simple function calls, and they will
> emulate them by defining information packet types which are like
> functions.

*Real* programmers will think in terms of state vectors and
processing steps which change that state. :-)

> If Common Lisp is 99 times bigger than the average program you
> write, then each time you write a new program, you can claim 99%
> reuse. :)

Hey, I think that's a great advertising gimmick. Instead of giving
in to the turds that complain they want executables, we can brag
with the "sound bite" that with Common Lisp most applications
achieve 99% re-use of code, without needing to make copies of that
re-used code on the disk!! Most people just accept sound bites
without checking the devil in the details, so this may be the way
to make Common Lisp more generally popular. (Java can make the same
claim, but let's keep this advertising gimmick secret to ourselves,
OK?) Common Lisp applications are so small relative to their power
that they can be transmitted from place to place over the Internet
in a fraction of a second.

All we need to actually *do*, as opposed to just talking, is build
a Lisp-based Web browser (or a Lisp-based plug-in for existing Web
browsers) which supports Lisp applets just the same as how Java
applets are already supported. If we can make flashy animations
that are just a click away in a Web browser, which show an
advertising banner identifying them as using Common Lisp technology
(the same as my cellphone has a banner advertising that it uses
Java in the cartoony games that come with it), maybe people will
get curious what is this wonderful technology that is better than
JavaScript or Java applets.

Well one other thing that might be useful is making Common Lisp
more modular, so that only a barebones kernel of it need be
downloaded initially, and additional pieces are automatically
downloaded only as needed later.

Hey, wouldn't it be fun to implement a downloadable replacement for
the usual Web browsers? If somebody clicks on our download link,
their current Web browser downloads it and installs it as a plug-in,
but then our CL plug-in completely takes over the browser so that
none of the original browser is ever run again, and the user doesn't
see any change in performance except that spam pop-ups are better
regulated (avoided) and other subtle factors are better than with
the original browser.
From: ········@visualframeworksinc.com
Subject: Re: Cells compared to Flow-Based Programming
Date: 
Message-ID: <4de33776-4a86-4617-8bb0-2107775edebb@l64g2000hse.googlegroups.com>
FYI, the latest issue of DDJ contains an article (and a reference to a
book) about event-based programming:

http://www.ddj.com/architect/208801141

pt