From: Will Hartung
Subject: Working with LWW MP package.
Date: 
Message-ID: <vfr750Ew5vt1.Duq@netcom.com>
I'm using the multi-processing package in LispWorks fer Winders 4.01.

I was wondering if anyone could provide some tips.

What I have is several connections to assorted clients that all need
access to a central core of routines. There are several shared
resources that are used by the routines that I need to coordinate.

The client calls up the server and gets its own process (or thread, I
suppose) in LWW, which is spawned by the (start-server ...) function
in the COMM package.

The client sends requests to this process, which then passes them
along to the central routines of the server.

There isn't a lot of rocket science in the core routines themselves,
in that they're pretty atomic. The routines look some stuff up, and
send the result back to the client. 

I'm working out how to maintain this atomicity (is that a word?) for
the central process.

My thought was to have the clients queue up their requests and then
wait until the request is fulfilled. But how do I notify each client
that its request is done?

One thought is to use something like:

(defun handle-request (stuff)
  (let ((req (make-req-item))
        (local-cons (cons nil nil)))
    ;; assuming req-item is a structure with data and status slots
    (setf (req-item-data req) stuff)
    (setf (req-item-status req) local-cons)
    (atomically-queue-req req)
    ;; mp:process-wait puts the calling process into a wait state
    ;; until the supplied predicate returns true
    (mp:process-wait "Waiting..."
                     #'(lambda () (car local-cons)))))

(defun core-driver ()
  ;; loop forever, sleeping whenever the queue is empty
  (loop
    (mp:process-wait "Waiting for request"
                     #'(lambda () (not (queue-empty *main-req-queue*))))
    (process-queue *main-req-queue*)))

(defun process-queue (req-queue)
  (let ((req (pop-queue req-queue)))
    (when req
      (crunch-data (req-item-data req))
      ;; flipping the car is what releases the waiting client
      (setf (car (req-item-status req)) t))))
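In case it helps, here's roughly what I have in mind for the helpers
(make-req-item, atomically-queue-req, queue-empty, pop-queue) -- all
the names are my own, and the queue is just a lock-protected list,
using mp:make-lock and mp:with-lock:

```lisp
(defstruct req-item
  data      ; the client's request payload
  status)   ; a cons whose car gets set to t when done

(defstruct (req-queue (:constructor make-req-queue ()))
  (items nil)
  (lock (mp:make-lock :name "req-queue-lock")))

(defvar *main-req-queue* (make-req-queue))

(defun atomically-queue-req (req &optional (queue *main-req-queue*))
  ;; mp:with-lock serializes access so two clients can't
  ;; clobber each other's pushes
  (mp:with-lock ((req-queue-lock queue))
    (setf (req-queue-items queue)
          (nconc (req-queue-items queue) (list req)))))

(defun queue-empty (queue)
  (null (req-queue-items queue)))

(defun pop-queue (queue)
  (mp:with-lock ((req-queue-lock queue))
    (pop (req-queue-items queue))))
```

Nothing fancy -- appending with nconc keeps the queue FIFO, which
matters if clients should be served in arrival order.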

Now, this seems fairly straightforward. But there seems to be a lot
of polling going on, and I'm not really sure whether that's a Bad
Thing or not.

The other thought, though, is to use stack groups, and pass those
along with the request so that the (process-queue ...) function can
just restart the calling thread, rather than have the calling thread
sit there glaring at some rogue cons floating around, waiting for it
to change.
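A middle ground I've considered, assuming I'm reading the docs right:
keep the cons-flag scheme, but stash the calling process in the
request (an extra owner slot -- my own invention) so the driver can
nudge it with mp:process-wakeup, which as I understand it makes the
scheduler re-test that process's wait function immediately rather
than on the next scheduling tick:

```lisp
(defun process-queue-with-wakeup (req-queue)
  (let ((req (pop-queue req-queue)))
    (when req
      (crunch-data (req-item-data req))
      (setf (car (req-item-status req)) t)
      ;; ask the scheduler to re-test the client's wait
      ;; function right away instead of waiting for the
      ;; next pass through the wait list
      (mp:process-wakeup (req-item-owner req)))))
```

The client would set the owner slot to (mp:get-current-process)
before queueing. That keeps the simple cons-based handshake but
trims the wakeup latency.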

I'm just wondering whether there is any real benefit to pursuing the
stack-group concept, or whether (process-wait ...) is low-impact
enough (assuming the test is cheap enough).

Another option is to skip the queue entirely and have each client
just call the core routines directly. But then there seems to be a
lot of lock grabbing and releasing, and there didn't seem to be an
elegant way to have a lot of processes wait for a lock to free up
and then make the mad rush to reset it. Then again, I guess that's
what the (with-lock ...) macro does. Actually, does (with-lock ...)
release waiting processes in the order they requested the lock? And
does anyone know if locks are built on top of the (process-wait ...)
layer?
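To make the direct-call option concrete, this is the kind of thing I
was imagining -- *core-lock* is a made-up name, and each COMM process
would call this instead of queueing:

```lisp
(defvar *core-lock* (mp:make-lock :name "core-lock"))

(defun call-core-directly (stuff)
  ;; Each client process grabs the lock, runs the (atomic)
  ;; core routine, and lets go on the way out. mp:with-lock
  ;; releases the lock even if crunch-data signals an error,
  ;; so one crashed client can't wedge everybody else.
  (mp:with-lock (*core-lock*)
    (crunch-data stuff)))
```

Since the core routines are pretty atomic anyway, the lock is held
only briefly, which is the case where this approach is cheapest.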

Of course if there is some approach that I've missed, then I'm all
ears for that too.

Anyway, thanx for any insight you may provide.

-- 
Will Hartung - Rancho Santa Margarita. It's a dry heat. ······@netcom.com
1990 VFR750 - VFR=Very Red    "Ho, HaHa, Dodge, Parry, Spin, HA! THRUST!"
1993 Explorer - Cage? Hell, it's a prison.                    -D. Duck