From: ···········@yahoo.com
Subject: Writer Responsibility of Autonomous Units
Date: 
Message-ID: <1174424763.524761.59640@o5g2000hsb.googlegroups.com>
Regardless of efficiency concerns and possible multi-threading or
stack issues...

Suppose I have an autonomous neuron:

(defclass neuron ()
  ((dendrites ...)
   (axon ...)
   (memory ...)
   ...))

any instance of which may or may not have an 'input slot, since the
specialized 'memory slot *might* itself track the inputs and outputs
of the neuron. Signal transmission from a generic neuron, through the
synapses, to the generic output neurons of a network might proceed as
follows:

Setting the input-of a neuron
  fires the neuron,
    which sets the output-of the neuron,
      which feeds the signal forward,
        setting the input-of each axonal synapse,

Setting the input-of a synapse
  fires the synapse,
    which transfers the signal to the output-of the synapse,
      which feeds the signal forward to the post-synaptic neuron,
        until the signals from other pre-synaptic neurons satisfy
this neuron's integrator,
          which sets the input-of this neuron,
            etcetera.
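
For concreteness, one way the elided slots of the neuron class above
might be filled in is sketched below. The input, output, and activator
slots, the accessor names, and the initforms are all assumptions for
illustration, not a fixed design:

(defclass neuron ()
  ((dendrites :initarg :dendrites :accessor dendrites-of
              :initform '())   ; incoming synapses
   (axon      :initarg :axon :accessor axon-of
              :initform '())   ; outgoing synapses
   (memory    :initarg :memory :accessor memory-of
              :initform '())   ; optional input/output history
   ;; INPUT gets only a reader here; the custom (setf input-of)
   ;; writer shown below does the storing and the firing.
   (input     :reader input-of :initform 0)
   (output    :accessor output-of :initform 0)
   (activator :initarg :activator :accessor activator-of
              :initform #'identity)))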

This brings up the question: how much is setf allowed to do? Is it
ultimately up to the API designer?

From the aforementioned algorithm, the designer of the (input-of
neuron) writer might envision the following:

(defmethod (setf input-of) (value (element neuron))
  ;; Note: this example illustrates the case of a neuron class
  ;; which does have an 'input slot.
  (let ((input (setf (slot-value element 'input) value)))
    ;; CONTEXT is assumed to be bound elsewhere, e.g. a special
    ;; variable describing the current simulation context.
    (fire element (activator-of element) input context)
    input))
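
A companion writer for the synapse half of the cascade might be
sketched along the same lines. The synapse class itself, its input,
output, and weight slots, and the integrate call on the post-synaptic
neuron are all assumed names here:

(defmethod (setf input-of) (value (element synapse))
  (setf (slot-value element 'input) value)
  ;; "Firing" a synapse just scales the signal by the synaptic
  ;; weight and stores it on the output side.
  (let ((output (* value (weight-of element))))
    (setf (slot-value element 'output) output)
    ;; The post-synaptic neuron's integrator decides whether enough
    ;; signal has accumulated to set that neuron's input in turn.
    (integrate (post-neuron-of element) output))
  value)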

As I recall--and correct me if I'm wrong--Cells employs the same
manner of action-at-a-distance upon setting the input of a cell (or
even upon reading a cell's output, judging from a browse of the
source code). While this might not be common practice, this example
of prior art demonstrates to me that it has been done. I just wonder
whether it might violate the expectations of a Lisp programmer, of
whatever level of experience, who happens upon the source code.

What is your opinion?

Jack

From: Pascal Bourguignon
Subject: Re: Writer Responsibility of Autonomous Units
Date: 
Message-ID: <874pofyp6m.fsf@voyager.informatimago.com>
···········@yahoo.com writes:
> [...]
> Setting the input-of a neuron
> [...]
> What is your opinion?

It's up to you.

The main problem you'll have is that your neural network probably has
loops, so (setf (input-of x) ...) will eventually recursively call
(setf (input-of x) ...), and if your neural network doesn't converge,
it will never end.

Put another way, this design decision is valid only when you have a
simple network architecture without loops, or when you have proved
that your neural network always converges.

Moreover, there are probably a lot of inputs coming in together
(thousands of retinal impulses, thousands of frequency levels, etc.).

It is probably easier to do a shallow store of the inputs, and then
later (or in a parallel thread) run the neural network activity
update.
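
Something like the following sketch, say, where the writer does only
the shallow store and a separate pass does the propagation. The
*pending-neurons* list and update-network-activity are names invented
for this example, and the context argument from your original writer
is omitted for brevity:

(defvar *pending-neurons* '()
  "Neurons whose inputs changed since the last update pass.")

(defmethod (setf input-of) (value (element neuron))
  (setf (slot-value element 'input) value)
  (pushnew element *pending-neurons*)   ; shallow store only, no firing
  value)

(defun update-network-activity ()
  "Fire every neuron whose input changed, in one batch."
  (let ((pending (shiftf *pending-neurons* '())))
    (dolist (element pending)
      (fire element (activator-of element) (input-of element)))))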

-- 
__Pascal Bourguignon__
http://www.informatimago.com
http://pjb.ogamita.org
From: George Neuner
Subject: Re: Writer Responsibility of Autonomous Units
Date: 
Message-ID: <hdm2031d5bvm3vu0s0v1esrdeik6orudoe@4ax.com>
On Wed, 21 Mar 2007 09:29:53 +0100, Pascal Bourguignon
<···@informatimago.com> wrote:

>···········@yahoo.com writes:
>> [...]
>> Setting the input-of a neuron
>> [...]
>> What is your opinion?
>
>It's up to you.
>
>The main problem you'll have is that your neural network probably has
>loops, so (setf (input-of x) ...) will eventually recursively call
>(setf (input-of x) ...), and if your neural network doesn't converge,
>it will never end.

For a feed-forward network, rather than threading the activations
directly, it's better to queue nodes with modified input, process the
queue with an interruptible loop and, if necessary, use iteration
counters to limit the number of times nodes are revisited.  

You're screwed if the problem diverges, but using a queue and counters
helps to contain problems which produce oscillatory behaviors.
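
A rough sketch of that shape, with QUEUE-EMPTY-P, DEQUEUE, VISITS-OF,
and FIRE left as assumed helpers:

(defparameter *max-visits* 100
  "Cap on how many times a single node may be revisited per run.")

(defun propagate (queue)
  "Process nodes with modified input until the queue drains, skipping
any node that has hit its visit limit."
  (loop until (queue-empty-p queue)
        do (let ((node (dequeue queue)))
             (when (< (visits-of node) *max-visits*)
               (incf (visits-of node))
               ;; Firing a node is expected to enqueue any downstream
               ;; nodes whose inputs it modifies.
               (fire node (activator-of node) (input-of node))))))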


>Put another way, this design decision is valid only when you have a
>simple network architecture without loops, or when you have proved
>that your neural network always converges.

I'd still use an activation queue just for generality.

George
--
for email reply remove "/" from address
From: ···········@yahoo.com
Subject: Re: Writer Responsibility in Autonomous Units
Date: 
Message-ID: <1174501434.602101.118820@n59g2000hsh.googlegroups.com>
George Neuner wrote:
> On Wed, 21 Mar 2007 09:29:53 +0100, Pascal Bourguignon
> wrote:
> >It's up to you.

I guess I answered my own question with the reference to Cells. Thanks
for the validation.

> >
> >The main problem you'll have is that your neural network probably
> >has loops, so (setf (input-of x) ...) will eventually recursively
> >call (setf (input-of x) ...), and if your neural network doesn't
> >converge, it will never end.
>

A cyclic graph would cause problems with my simple example, of course,
and I had considered that a network with loops would require a
different strategy to avoid infinite recursion, which might be
addressed in the integrator, for instance. This gets away from my
question, but since I decided to use a concrete example instead of
inventing a foo-bar-baz system for illustration, I guess I opened the
door.
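
For instance, an integrator along these lines could refuse to re-fire
a neuron that is already in the middle of its own firing cascade. The
accumulated and firing-p slots, the threshold-of reader, and the
integrate generic function are all assumptions for the sketch:

(defmethod integrate ((element neuron) signal)
  (with-slots (accumulated firing-p) element
    (incf accumulated signal)
    ;; Only fire when the threshold is met AND we are not already
    ;; inside this neuron's own firing cascade.
    (when (and (not firing-p)
               (>= accumulated (threshold-of element)))
      (setf firing-p t)
      (unwind-protect
           (setf (input-of element) accumulated)
        (setf firing-p nil)))))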

> For a feed-forward network, rather than threading the activations
> directly, it's better to queue nodes with modified input, process the
> queue with an interruptible loop and, if necessary, use iteration
> counters to limit the number of times nodes are revisited.

I like the idea. It might address some issues that I had foreseen.
Thanks.

> You're screwed if the problem diverges, but using a queue and counters
> helps to contain problems which produce oscillatory behaviors.

Small steps. If a specific problem causes difficulty, it'll require a
change in the system.

> >Put another way, this design decision is valid only when you have a
> >simple network architecture without loops, or when you have proved
> >that your neural network always converges.

I do realize that better solutions exist for certain problems. For
example, the most efficient implementation for an MLP with homogeneous
layers or a growing neural gas definitely would not involve individual
object-oriented processing units, but for my purposes individual units
will give me greater flexibility and better focus. Likewise, recurrent
traversal of nodes without mechanisms for avoiding infinite recursion
would be the wrong hammer for cyclic graphs. The fire hasn't started
yet, so hold off on calling 911.

> I'd still use an activation queue just for generality.

I'll look into it. Thanks, again.

Jack
From: ··········@hotmail.com
Subject: Re: Writer Responsibility in Autonomous Units
Date: 
Message-ID: <1174560960.773719.198300@l75g2000hse.googlegroups.com>
> A cyclic graph would cause problems with my simple example, of course,
> and I had considered that a network with loops would require a
> different strategy to avoid infinite recursion, which might be
> addressed in the integrator, for instance. This gets away from my
> question, but since I decided to use a concrete example, instead of
> inventing a foo-bar-baz system for illustration, I guess I opened the
> door.

A network that never converges can emulate a memory: you just need to
stop the world and read the output once in a while. That's a recurrent
neural network.
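
For example, something like this sketch could sample such a memory;
run-step, output-neurons-of, and output-of are assumed names:

(defun sample-memory (network &key (steps 100))
  ;; Advance the recurrent dynamics for a bounded number of steps,
  ;; then "stop the world" and read whatever the outputs hold.
  (loop repeat steps do (run-step network))
  (mapcar #'output-of (output-neurons-of network)))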

Action-at-a-distance is, if I guess correctly, used because it saves
memory. Otherwise you would have a duplicate value on the output cell
going to the next input.