From: Robert Maas, http://tinyurl.com/uh3t
Subject: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <rem-2008may06-001@yahoo.com>
In conjunction with my ProxHash R&D project,
in late April I started work on a new dataflow automatic-updater
utility which is somewhat like the Unix 'make' utility.

Background: Unix's 'make' uses a list of dependencies between various
disk files to automatically update a final output file such as an
executable. Typical use is to make sure each source file is compiled
to yield a more recent compiled file, and to make sure each
executable file is at least as recent as all compiled modules it's
built from. Java's 'ant' utility goes further, supporting updating
of JAR files containing class files in a similar manner.

But each of those utilities works *only* with dependencies
between various disk files, and each effectively runs like a shell
script, starting a new utility (compiler, loader, JAR updater) for
each step in the dataflow that needs to be re-done.

New: My new utility 2008-4-MayLoad.lisp checks dependencies between
both disk files and in-memory data values (in a Lisp environment),
relating in-memory data values with the precursor data needed to
compute them and/or disk files used to back them up to avoid
time-consuming re-calculation. It does all its work from within
that single Lisp environment, not needing to start up any other
executables.
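To make the idea concrete, here is a minimal, hypothetical sketch (not
MayLoad's actual API; every name here is invented for illustration) of a
dependency node relating an in-memory value to its precursor values and
an optional backup file:

```lisp
;; Hypothetical sketch, NOT MayLoad's real data structure or API.
(defstruct node
  name           ; symbol identifying the value
  compute-fn     ; closure recomputing the value from precursor values
  inputs         ; list of precursor NODEs
  backup-file    ; pathname used to save/restore the value, or NIL
  (value nil)    ; cached in-memory value
  (timestamp 0)) ; universal time when VALUE was last brought up to date

(defun node-stale-p (node)
  "A node is stale if it was never computed or some input is newer."
  (or (zerop (node-timestamp node))
      (some (lambda (in) (> (node-timestamp in) (node-timestamp node)))
            (node-inputs node))))

(defun update-node (node)
  "Bring NODE's inputs up to date, then recompute NODE if needed."
  (mapc #'update-node (node-inputs node))
  (when (and (node-compute-fn node) (node-stale-p node))
    (setf (node-value node)
          (apply (node-compute-fn node)
                 (mapcar #'node-value (node-inputs node)))
          (node-timestamp node) (get-universal-time))))
```

The point of the sketch is only that the same timestamp comparison 'make'
applies to files can be applied to in-memory values, all within one Lisp
image.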

The first draft of my description of the algorithms is here:
  <http://www.rawbw.com/~rem/NewPub/MayLoadSpec.txt>
I'm soliciting feedback in two areas:
- Any obvious flaws in the algorithms, especially where I claim
   that some fact is "provably true". Can you produce any
   counterexample? Based on the described algorithms, can you find
   any case where any value is unnecessarily re-calculated or
   re-loaded or re-saved?
- Any suggested changes in wording to make the description easier
   to understand without changing the meaning.

Also I'm curious whether anybody else has already written a utility
like this, and if not I'm curious whether anybody likes my idea and
might have a practical use for it and would like to beta-test it
later when the code is more stable.

From: Leslie P. Polzer
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <0d439c29-2626-49b7-a984-9c4acb5a19e8@b1g2000hsg.googlegroups.com>
On May 7, 12:07 am, ·················@SpamGourmet.Com (Robert Maas,
http://tinyurl.com/uh3t) wrote:
>
> The first draft of my description of the algorithms is here:
>   <http://www.rawbw.com/~rem/NewPub/MayLoadSpec.txt>
> I'm soliciting feedback in two areas:
> - Any obvious flaws in the algorithms, especially where I claim
>    that some fact is "provably true". Can you produce any
>    counterexample? Based on the described algorithms, can you find
>    any case where any value is unnecessarily re-calculated or
>    re-loaded or re-saved?
> - Any suggested changes in wording to make the description easier
>    to understand without changing the meaning.

I'm going to take a look at it. In the meantime, do you know this
paper:

  http://cs-www.cs.yale.edu/homes/dvm/papers/lisp05.pdf


> Also I'm curious whether anybody else has already written a utility
> like this, and if not I'm curious whether anybody likes my idea and
> might have a practical use for it and would like to beta-test it
> later when the code is more stable.

I'm very sure I will have use for this, please include me in that
testing phase.

  Leslie
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <rem-2008may07-001@yahoo.com>
> From: "Leslie P. Polzer" <·············@gmx.net>
> do you know this paper:
> http://cs-www.cs.yale.edu/homes/dvm/papers/lisp05.pdf

I have no way to read PDF files here.

> I'm very sure I will have use for this, please include me in that
> testing phase.

Google Groups doesn't show your full e-mail address, so I needed to
TELNET to a regular NNTP server to pull up your article. I found:
  ·············@gmx.net
I suppose I could have guessed that from your family name,
but it's 4 hours past my bed time and I'm not fully alert.
I'll make a memo of your address to contact you by direct e-mail, OK?
From: moi
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <437b5$48218365$5350c6a6$22023@cache110.multikabel.net>
On Wed, 07 May 2008 02:09:03 -0700, Robert Maas, http://tinyurl.com/uh3t
wrote:

>> From: "Leslie P. Polzer" <·············@gmx.net> do you know this
>> paper:
>> http://cs-www.cs.yale.edu/homes/dvm/papers/lisp05.pdf
> 
> I have no way to read PDF files here.
> 
I dumped the PDF into ASCII for you:



         A Framework for Maintaining the Coherence of a Running Lisp

                              Drew McDermott
                    Yale Computer Science Department
                            P.O. Box 208285
                       New Haven, CT 06520-8285
                        ··············@yale.edu
Keywords: Inference, algorithms, consistency.
                                Abstract

     During Lisp software development, it is normal to revise and reload
programs and data structures continually. The result is that the state of
the Lisp process can become “incoherent,” with updates to “supporting
chunks” coming after updates to the chunks they support. The word chunk is
used here to mean any entity, content, or entity association, or anything
else modelable as up to date or out of date. To maintain coherence requires
explicit management of an acyclic network of chunks, which can depend on
conjunctions and disjunctions of other chunks; further, the updating of a
chunk can require additional chunks. In spite of these complexities, the
system presented in this paper is guaranteed to keep the chunk network up
to date if each chunk’s “deriver” is correct, the deriver being the code
that brings that chunk up to date.
      Lisp novices are often surprised to find out that Lisp is a “shell” 
as well as a compiler. Unlike C++
or Java, one starts Lisp, loads functions and data into it, and plays 
around with them. The advantages
of this architecture are well known: it’s easy to modify the system 
incrementally — and experimentally;
it’s easy to combine two or more programs; the debugger is easily 
integrated with the rest of the run-time
system; and so forth.
      Because the typical Lisp session lasts a long time, the programmer 
must worry about keeping it in a
“good” state. Tools for maintaining “goodness” include features such as 
unwind-protect, which ensure that
even when a program bombs its sensitive side effects can be undone. 
Although the same functionality can
be found in other languages (e.g., Java’s finally clauses), only in Lisp 
is it a routine tool used by the
programmer to ensure that the current core image can continue to execute 
in what I will call a coherent
state. Another example is the distinction between defvar and 
defparameter, which would be meaningless
in most languages, but in Lisp is crucial to making sure that one can 
reload a file while undoing “just the
right set” of global-variable assignments since the file was last loaded.[1]
      However, the built-in facilities of Lisp do not address the 
coherence problem in any systematic way.
For example, although it is easy to reload a file after making some bug 
fixes, it often happens that the
reloaded file initialized some table, and entries were made in it by 
files loaded later. The other files must
then be reloaded, unless that causes further glitches. Sometimes one can 
use defvar to avoid reinitializing
the table, but sometimes that’s simply inadequate; some of the later 
entries are to be retained and some
discarded, and there’s no obvious way to sort them out. Often one must 
resort to restarting Lisp and
reloading the program’s files in the original order.
      There is nothing really wrong with restarting, except that it often
requires you to take special measures to get back
to the point you were at before the restart. Every Lisp programmer has 
had to build “restart scripts,”
sequences of expressions whose evaluation will get the Lisp back to the 
middle of a debugging sequence.
Aside from this nuisance, it seems as if there ought to be a way to 
formalize the dependencies among the
parts of a Lisp “session” in such a way that revising one part causes the 
other parts to revise themselves to
restore coherence. Rather than maintaining a collection of ad-hoc restart 
scripts, one could instead assert
   [1] Lisp is not entirely alone in having coherence issues. Prolog makes
an analogous distinction between loading and reloading a file. And, of
course, Prolog also has the analogue of a read-eval-print loop.
the relationships among parts explicitly and permanently. The explicit 
statement allows anyone reading
the code to understand how the parts relate. The fact that the assertions 
are permanent means that as
further system development occurs, the smooth functioning of pieces 
developed earlier can be taken for
granted.
     One symptom of the absence of such a formal theory of session 
coherence is how hard it is to get
defsystem right.[2] defsystem is, of course, the Lisp world’s analogue of 
make. There are several versions
of it. Some are complex, some simple. Some are “procedural” and some 
“declarative.” The former are
more like make in that they mainly organize sequences of actions in the 
space of compiling and loading
files. “Declarative” systems attempt to define a system as a collection 
of files on which different operations
can be defined. The popular ASDF [5] seems to have caught on quickly 
because it is cleanly written and
supplies a nice distributed package-management facility. None of these 
systems address the issues dealt
with here, including how to keep a system coherent as different versions 
of files are loaded.
     The present paper may be related to work on persistent objects [1, 
2], in that it connects entities in
process memory to entities in long-term storage. It is possible that such 
a facility might be useful as a
foundation for the system I describe, but, as we will see, the real 
problem is getting the logic right. The
details of how objects and their interdependencies are implemented are 
not that crucial.
1       Chunks
We introduce the term chunk to mean a piece of information in a 
particular state, form, or location.
Examples of chunks are
     • A file
     • A Lisp file as loaded into memory in an executable form
     • In a table of S-expression handlers, the subset corresponding to
       executable Lisp expressions whose cars are one of the built-in Lisp
       special operators
     • An association between two directories S and C, such that object
       files compiled from source files in S belong in C.
     The components of the Chunk data structure will be unfolded as this 
paper progresses. However, one
thing that will in general not appear as part of a chunk is the entity 
that it keeps track of. For one thing,
there may be no such entity. The chunk for “File foo loaded into memory” 
does not correspond to any
information not found in foo. If a chunk depends on no other chunk it is 
said to be given; otherwise, it is
derived. The information associated with a given chunk is set from 
outside the system, and so is likely to
be describable by a noun phrase, such as “Contents of file F.” If no noun 
phrase is applicable, as is usually
the case with derived chunks, I will use a declarative phrase, such as 
“File F is loaded.” Either way, we say
the chunk manages the noun phrase or the statement: “Chunk C manages 
‘File F is loaded’.” If derived
chunk C manages P, then C is up to date when and only when P is true. A 
given chunk, by contrast, is
up to date when and only when the system knows the date when the 
information it manages last changed.
In general, the goal of the coherence system is to make sure that all[3]
chunks are up to date.
     A chunk can obviously be almost any piece or state of information, 
but the intent is that it be “largish.”
If a spreadsheet cell is supposed to hold the total of a column of cells, 
that could be analyzed as a chunk
   [2] See [4] for a discussion of some of the issues involved in the
design of system-description macros.
   [3] I’ll qualify this quantifier shortly.
(managing “The total of those numbers is stored in that cell”), but the
mechanisms I propose may not be cost-effective for something so small,
especially if the chunks change frequently.[4]
     I introduce the generic function derive that brings a chunk up to 
date: For a derived chunk, (derive c)
computes something, moves something around, translates something, or does 
some other transformation of
data. For a “given” chunk, derive just verifies the date when the content 
the chunk manages last changed.
All the chunk-management system knows is what is revealed by the return 
value of derive. If it returns
nil, it has determined that the chunk is already up to date. If it 
returns a number > 0, that means that
it or someone else changed something, and that the number is the exact 
time when the change occurred.
Time can be measured in whatever scheme is convenient (so long as it’s 
expressible by a number, and so
long as all the chunks connected to each other use the same scheme).
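
As a concrete illustration of the protocol just described, a caller of
derive might record the reported date like this. (chunk-derive-and-record
is a name that appears in the paper’s later code, but this body is an
assumed sketch, not the paper’s actual implementation.)

```lisp
;; Hedged sketch of using the DERIVE return-value protocol:
;; NIL means "already up to date"; a number > 0 is the change time.
(defun chunk-derive-and-record (c)
  (let ((reported-date (derive c)))
    (cond ((null reported-date)
           ;; The deriver found the chunk already up to date.
           nil)
          (t
           ;; Something changed at REPORTED-DATE; record it.
           (setf (Chunk-date c) reported-date)
           reported-date))))
```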
     On some occasions it is useful to determine the date of a chunk 
without running its deriver (i.e., the
method supplied for derive applied to chunks of this class). The generic 
function (derive-date c) finds the
date at which c was last derived, if possible. When the date is not 
available, the deriver may return nil,
meaning that one should assume the previously stored date to be accurate, 
or the constant +no-info-date+
(an integer < 0, and hence not a legal date), meaning that there is no 
way to know the date without calling
derive. Which of these is appropriate depends on exactly what the chunk 
manages.
     An obvious, and classic, example of a dependency among chunks is the 
relation between an object file
and its source file. When its source file changes, the object file is out 
of date. We identify two chunks here
(to start with): one managing the source file and one managing “The 
object file is the result of compiling
the source file.” If the source file changes, then the compiled file must 
be rederived; that is, derive must be
applied to it, and the method for derive must call the compiler to 
recompile it. We use the term basis for
the set of chunks that a given chunk depends on in this way: If chunk F 
is in the basis of chunk G, then if
F changes G must be recomputed — or placed anew, or transformed, or 
whatever operation corresponds
to G’s being “up to date.” G is said to be a derivee of F.
     There is one escape clause, however. A chunk can exist but be 
dormant, in the sense that the chunk
system is not required to track it. Application programs tell the chunk 
system when to flip the chunk from
dormant to managed, the term I’ll use for a chunk that the chunk system 
must keep up to date. I’ll return
to this issue in section 2.
     Dependencies are of three sorts:
   1. Conjunctive: This is what is captured by the chunk’s basis. If the
      basis of C is {B1, ..., Bn}, then C must be changed whenever some of
      the Bi have changed, but only after all have been brought up to date.
   2. Disjunctive: A special class of Chunks are the or-chunks. In
      addition to a basis, such a chunk has a non-empty set of disjuncts.
      The or-chunk is up to date if one of its disjuncts is (in addition
      to its basis).
   3. Transient: Sometimes a chunk C is not dependent on chunk R, but
      cannot be updated unless R is up to date. The set of such R’s are
      said to be the update basis of C.
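
The pieces introduced so far suggest a class layout roughly like the
following. This is a hedged sketch: the slot arrangement and the value of
+no-info-date+ are assumptions, with accessor names taken from the code
shown later in the paper.

```lisp
;; Assumed sketch of the Chunk classes; not the paper's actual code.
(defconstant +no-info-date+ -1)  ; negative, hence never a legal date

(defclass Chunk ()
  ((basis        :accessor Chunk-basis        :initform '()) ; conjunctive deps
   (update-basis :accessor Chunk-update-basis :initform '()) ; transient deps
   (derivees     :accessor Chunk-derivees     :initform '()) ; chunks depending on this one
   (date         :accessor Chunk-date         :initform +no-info-date+)
   (managed      :accessor Chunk-managed      :initform nil)))

(defclass Or-chunk (Chunk)
  ((disjuncts :accessor Or-chunk-disjuncts :initarg :disjuncts)
   (default   :accessor Or-chunk-default   :initarg :default)))
```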
     The chunk network of figure 1 provides an example. Chunk C2 represents
the compiled version of file file2.lisp, itself represented by chunk F2.
In addition, F2 uses macros defined in file file1.lisp, represented by
chunk F1. That means C2 depends on the chunk M1 = (:macros file1.lisp),
which represents the macros in F1. Hence, file2.lisp needs to be recompiled
if either file2.lisp or M1 changes. In chunk jargon, C2, if it is managed,
seems to have as its basis {F2, M1}. (M1, in turn, has {F1} as its basis.)
   [4] Tilton’s [6] CELLS system provides spreadsheet-like functionality
in Lisp. The issues that arise in that application are, as we will see,
not the same as the ones I am talking about.
   [Figure 1 (ASCII art lost in conversion): dependencies among file
   chunks. Nodes: F2 (“File file2.lisp”), C2 (“file2.fasl is the compiled
   version of file2.lisp”), F1 (“File file1.lisp”), M1 (“Macros defined
   in file1.lisp”), LM1 (“Macros of file1.lisp are loaded”), L1
   (“file1.lisp has been loaded”), and SM1 (“file1.lisp has been slurped
   for macros”).
   Figure 2 (ASCII art lost in conversion): an unstable dependency cycle
   among chunks c, d, and h.]
     But that’s not quite adequate. If file2.lisp actually needs to be
recompiled, it is not enough that M1 be up to date; it is also required
that the macros in file1.lisp be loaded into the running Lisp. We introduce
a new chunk LM1 = (:loaded (:macros file1.lisp)). LM1 can be brought up to
date by going through the file and evaluating all the macro definitions it
contains. I’ll call this slurping the file. It’s unusual to make use of a
slurper in the Lisp world, but bear with me. LM1 must be up to date in
order to bring C2 up to date, that is, in order to compile file2. So the
set {LM1} is the update basis of C2; this is a transient dependency
(indicated by an arrow with a double white head).
     We go on to observe that, if file1.lisp or its compiled version has
been loaded, it is unnecessary to slurp it. So LM1 must be an or-chunk,
with two chunks as disjuncts: L1 = (:loaded file1.lisp) and
SM1 = (:slurped (:macros file1.lisp)). The second is marked as the default
disjunct of LM1 (indicated by the double-headed black arrow pointing to
the triangle indicating disjunction). To ensure that a managed or-chunk
always has a current selection (a managed base), we require that every
or-chunk have a default disjunct, the one that gets managed if none of the
others are.
     The default method for derive applied to an Or-chunk is instructive:
(defmethod derive ((orch Or-chunk))
  (let ((date nil))
    (dolist (d (Or-chunk-disjuncts orch)
               (or date
                   (error "No disjunct of or-chunk ~s is managed and up to date"
                          orch)))
      (cond ((and (Chunk-managed d)
                  (chunk-up-to-date d))
             (cond ((or (not date)
                        (and (< (Chunk-date d) date)
                             (>= (Chunk-date d) 0)))
                    (setq date (Chunk-date d)))))))))
    Subclasses of Or-chunk may need to do more, but a “bare OR” simply
represents that one of its disjuncts is up to date. The method just
searches through the disjuncts checking the dates of the managed,
up-to-date disjuncts, and returns the date of the one brought up to date
earliest. It turns out that we don’t need another slot in the Chunk class
to keep track of its current selection; we just store a singleton list
with its current selection as the chunk’s update basis. The semantics are
exactly as required: that the selection be up to date at the point where
the or-chunk is derived.
    It would be nice if we could insist that every derive method be purely
local, in the sense that it does
absolutely nothing except bring the chunk up to date, and in particular 
does not change the chunk network
or the state of any chunk besides itself. Unfortunately, in realistic 
systems some chunks’ purpose is to mess
around with other chunks. For example, if a file F specifies in its 
header what other files are required to
be loaded before it is loaded, then the chunk managing “F ’s header is 
loaded” will alter the basis of the
chunk managing “file F is loaded.”
    In the rest of the paper, I will use the following terminology. An 
immediate supporter of a chunk C is
an element of its basis, or its default disjunct if C is an or-chunk. The 
supporters of C are those chunks
related by the transitive closure of the “immediate supporter” relation.
    The height of a chunk is then defined in the obvious way. If the 
chunk is not an or-chunk and has an
empty basis, then it is called a leaf chunk, and has a height of 0. 
Otherwise, its height is 1 + the height of
its highest immediate supporter.
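
That height definition translates directly into code. In this hedged
sketch the accessor names follow the paper’s code, while
immediate-supporters and Or-chunk-default are hypothetical helpers
introduced only for illustration:

```lisp
;; Hedged sketch of the height definition; helper names are invented.
(defun immediate-supporters (c)
  "Elements of C's basis, plus its default disjunct if C is an or-chunk."
  (if (typep c 'Or-chunk)
      (cons (Or-chunk-default c) (Chunk-basis c))
      (Chunk-basis c)))

(defun chunk-height (c)
  "0 for a leaf chunk; else 1 + the height of the highest immediate supporter."
  (let ((supps (immediate-supporters c)))
    (if (null supps)
        0
        (+ 1 (reduce #'max (mapcar #'chunk-height supps))))))
```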
2     Keeping Track of Managed Chunks
Each chunk contains two binary flags: manage-request, which keeps track of
whether the user has requested that the chunk be managed; and managed,
which is true if manage-request is true, or if it is necessary to manage
this chunk in order to manage one of its derivees. I will use the letter R
to abbreviate the manage-request markers on chunks. A marking R is a
function that assigns R(c) = t to a chunk c if and only if the user has
requested that c be managed. A marking M is a similar function: M(c) = t
iff c is managed.
    If M(c) = t, a local cause of M(c) with respect to a given M and R is
one of three things:
   1. c itself, in the case where R(c) = t;
   2. a chunk d such that c is an element of the basis of d and M(d) = t;
   3. or an or-chunk h such that c is the default disjunct of h,
      M(h) = t, and for every other disjunct c′ of h, M(c′) = nil.
A supporting path for M(cn) with respect to M and R is a sequence of
chunks c0, c1, ..., cn such that M(ci) = t for all i ∈ [0, n], R(c0) = t,
and for all i ∈ [0, n−1], ci is a local cause of M(ci+1). (n may = 0.)
A supporting path c0, ..., cn is always in the “down” direction: ci has
height greater than ci+1, and a change in the management status of ci can
cause a change in the management status of ci+1, but never vice versa.
    An M is a closure of R if
     {c | M(c) = t} = {c | there is a supporting path for M(c)
                           with respect to M and R}
There is in general more than one possible closure. A key job of the
chunk-management system is to find one of them. It is carried out by two
mutually recursive programs, chunk-manage and chunk-unmanage, which are
called to bring the management flags back to closure after the user calls
chunk-request-mgt or chunk-terminate-mgt to change the manage-request flag
of some chunk.
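
Ignoring or-chunks for the moment, the mutual recursion might be sketched
as follows. The paper gives no bodies for these functions, so this is a
hedged reconstruction; the slot accessors (Chunk-manage-request in
particular) are assumptions.

```lisp
;; Hedged sketch of the mutual recursion, or-chunks ignored.
(defun chunk-manage (c)
  (unless (Chunk-managed c)
    (setf (Chunk-managed c) t)
    ;; Every element of a managed chunk's basis must itself be managed.
    (mapc #'chunk-manage (Chunk-basis c))))

(defun chunk-unmanage (c)
  (when (and (Chunk-managed c)
             (not (Chunk-manage-request c))       ; no direct user request
             ;; ... and no managed derivee still provides a local cause:
             (notany #'Chunk-managed (Chunk-derivees c)))
    (setf (Chunk-managed c) nil)
    ;; Ceasing to manage C may remove the last local cause for its basis.
    (mapc #'chunk-unmanage (Chunk-basis c))))
```

Or-chunks are exactly what breaks this simple picture, as the text below
explains.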
      (chunk-manage c) is called whenever a local cause for c to be 
managed is detected. (chunk-unmanage
c) is called whenever the last local cause for c to be managed is 
removed. chunk-manage’s essential task is
to make sure that the basis of a managed chunk is managed, which is a 
simple recursion. chunk-unmanage
checks to see if ceasing to manage chunk c removes the last local cause 
for some of the chunks in its basis,
and if so unmanages them as well. These simple recursions are made more 
complex by the existence of or-
chunks. Suppose a chunk c becomes managed, at which point it is the only 
managed non-default disjunct
of or-chunk h. Call the default disjunct d. If h itself is managed, then 
it was a local cause for d with
respect to the previous set of management markings. In the new set h 
provides no cause for d to become
managed, so if there is no other cause, chunk-unmanage must be called to 
mark it unmanaged. The opposite
flip can occur when c becomes unmanaged. Without or-chunks, (un)management
propagation would be
monotonic, and it would obviously converge to a closed set of marks M . 
Or-chunks make it nonmonotonic,
in the sense that marking one chunk can cause another to become unmarked. 
This raises the possibility of
infinite loops.
      In fact, it is not hard to construct a network of chunks that does 
allow infinite loops. Figure 2 shows
the simplest case. Here or-chunk h has two disjuncts c and d, with d 
being the default. d’s basis happens
to be {c}. Initially none of the chunks is managed. If chunk-manage is 
called with h as argument, it first
sets M (h) = t, then M (d) = t, then M (c) = t. At this point there is no 
longer any local cause for d, so it
becomes unmanaged. Now there’s no reason to manage c, so . . . . In this 
case there isn’t a legal marking.
In other networks there are multiple possible markings; in others, 
various combinations must be tried until
a stable labeling can be found.
      In all of these examples, the chunk network contains a certain kind
of pathological subgraph. A down link is a pair of chunks c1, c2 such that
c2 is an immediate supporter of c1. A lateral chain is a sequence
c, h, d1, d2, ..., dn, where h is an or-chunk, d1 is its default chunk, c
is another disjunct of h, and for all i such that 1 ≤ i < n, di, di+1 is a
down link. Note that such a chain is defined independent of any marking of
the chunks (cf. supporting paths). The idea is that for some assignment of
“managed” or “unmanaged” status to the nodes involved, a change in the
management status of c can cause a change in the status of d1 and hence to
dn.[5] This is called a lateral chain because it allows management marks
to flow from a chunk at one height to a chunk at some unpredictable other
height. A lateral cycle is defined as a series c0, ..., cn such that
cn = c0 and for all i ∈ [0, n−1], there is a lateral chain connecting ci
and ci+1. It is fairly straightforward to prove:
          Theorem: manage and unmanage can get into an infinite recursion
     only if one of them is applied to a chunk c0 that is part of a
     lateral cycle c0, ..., cn.
Unfortunately, lack of space prevents me from including the proof here.
      This theorem is good news, because it means a simple algorithm can 
handle all the nonpathological
cases, and detect the pathological ones. It is easy to see why lateral 
cycles are pathological. The purpose
of or-chunks is to allow for a default information source to be 
supplanted by a larger source once there is
a reason to load it. In a lateral cycle, each chunk ci plays the role of 
“default subset” in the lateral chain
to its left, and as “contingent superset” in the lateral chain to its 
right. It would be unusual to see this
pattern carried out for two or three iterations, but downright absurd to 
see it form a cycle, because the
intuitive subset/superset picture would result in a chunk being a 
superset of itself.
      Hence rather than try to develop sophisticated algorithms for coping
with lateral cycles, we adopt the much simpler tactic of detecting them
and signaling an error. This is easy to do. We simply augment chunk-manage
with code to set the management state of its argument to :in-transition;
and augment chunk-unmanage with code to check whether the state =
:in-transition, indicating an attempt to reset before the completion of a
set.
   [5] Actually, there are chunk graphs in which a given lateral chain can
never become a conduit in this way, because the required marking is not in
fact consistent with the graph’s topology. As will shortly become clear,
we err on the side of caution by regulating all lateral chains, not just
those that are effective.
3     Updating Chunks
A chunk actually does something useful when it is updated, meaning 
rederived if necessary so as to be
consistent with its supporters. Exactly how it is determined that some 
chunks require updating is outside
the scope of the chunk system. For instance, consider the (leaf) chunk 
corresponding to the contents of
a source file. Its deriver does not do anything to the contents of the 
file, but merely changes the chunk’s
date, if necessary, to equal the write date of the file. There may 
perhaps be a way to have the file system
send a signal to Lisp when the write date changes, but for now I assume 
that after editing a file the user
tells the chunk system to check the new write date and infer the 
consequences of its having changed.
    The program that takes over at this point is called chunks-update 
(plural because in the general case we
have a set of chunks that have changed). The job of (chunks-update 
chunks) is to rederive all the chunks,
but it takes this opportunity to update all the supporters and derivants 
of the directly affected chunks.
(Chunk c1 is a derivant of c2 if c2 is a supporter of c1 .) This is a 
surprisingly complex operation, because
of two factors:
   1. At the time a chunk is derived, its update basis (p. 3) must be up 
to date.
   2. Updating one chunk may cause the basis of other chunks to change.
    Setting these two factors aside for the nonce, the basic algorithm is 
fairly standard:
    (defun chunks-update (chunks)
      (let (derive-mark)
        (labels ((chunks-leaves-up-to-date (chunkl)
                   (let ((need-updating '()))
                     (dolist (ch chunkl need-updating)
                       (let ((sl (check-leaves-up-to-date ch)))
                         (setq need-updating
                               (nconc sl need-updating))))))

                 (check-leaves-up-to-date (ch)
                   (chunk-derive-date-and-record ch)
                   (cond ((and (chunk-is-leaf ch)
                               (= (Chunk-date ch) +no-info-date+))
                          (chunk-derive-and-record ch)))
                   (let ((to-be-derived (check-from-derivees ch)))
                     (cond ((chunk-is-leaf ch)
                            to-be-derived)
                           (t
                            (nconc to-be-derived
                                   (chunks-leaves-up-to-date
                                      (Chunk-basis ch)))))))

                 (check-from-derivees (ch)
                   (let ((updatees
                            (remove-if
                               (lambda (c)
                                 (or (chunk-up-to-date c)
                                     (not (Chunk-managed c))))
                               (set-latest-support-date ch))))
                     (cons ch
                           (chunks-leaves-up-to-date updatees))))

                 (derivees-update (ch)
                   (cond ((and (Chunk-managed ch)
                               (not (Chunk-derive-in-progress ch))
                               (not (chunk-date-up-to-date ch))
                               ;; Run the deriver when and only when
                               ;; its basis is up to date --
                               (every #'chunk-up-to-date (Chunk-basis ch))
                               (not (chunk-is-marked ch derive-mark)))
                          (chunk-mark ch derive-mark)
                          (chunk-derive-and-record ch)
                          (derivees-derivees-update (Chunk-derivees ch)))))

                 (derivees-derivees-update (l)
                   (dolist (d l)
                     (dolist (c (set-latest-support-date d))
                       (derivees-update c)))))
          ;; BODY OF LABELS BEGINS HERE
          (setq derive-mark chunk-event-num*)
          (setq chunk-event-num* (+ chunk-event-num* 1))
          (let ((chunks-needing-update (chunks-leaves-up-to-date chunks)))
            (dolist (ch chunks-needing-update)
              (derivees-update ch))))))
The final version of the algorithm is much more complex than this, but we 
can already discern some
subtleties. The function chunks-update calls a few important subroutines:
     • (set-latest-support-date c): Compute the date of the most recently 
updated base of chunk c. If
       it is later than c’s latest-support-date, reset that slot of c, 
and repeat the computation for each
       derivee of c. Returns a list of all the derivants of c whose 
latest-support-date (the date of its most recently changed supporter) 
has changed.
     • (chunk-derive-and-record c): Apply derive to c, and change c’s 
date to the date returned by derive
       if it is later than c’s old date. While this is going on, set the 
derive-in-progress slot of c to t.
     • (chunk-derive-date-and-record c): Apply derive-date to c, and set 
c’s date accordingly. Returns t
       if the new date is newer than c’s old date.
     • (chunk-mark c m) and (chunk-is-marked c m): See below.
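The propagation performed by set-latest-support-date can be sketched as
follows. This is a hypothetical Python rendering of the bullet above, not
the YTools implementation; the slot names (date, latest_support_date,
basis, derivees) merely mirror the paper's vocabulary, and chunks are
modeled as plain dicts.

```python
# Hypothetical sketch of set-latest-support-date, not the YTools code.
# A chunk's supporters are its basis plus the supporters of its basis,
# so the newest-support date folds in each base's own support date.
def set_latest_support_date(chunk):
    changed = []                    # chunks whose slot actually changed

    def visit(c):
        latest = max((max(b["date"], b["latest_support_date"])
                      for b in c["basis"]),
                     default=0)
        if latest > c["latest_support_date"]:
            c["latest_support_date"] = latest
            changed.append(c)
            for d in c["derivees"]:     # repeat for each derivee
                visit(d)

    visit(chunk)
    return changed
```

Note that the recursion stops as soon as a chunk's slot is already late
enough, so an unchanged date never propagates further upward.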
     The chunks-update algorithm proceeds in two phases.⁶ During the 
outer call to chunks-leaves-up-to-date, it sweeps through the chunk 
network finding all the “questionable” chunks reachable from the given 
list. A chunk is questionable if it is managed, it is out of date, and 
either it is in the list chunks, or it supports a questionable chunk, 
or it is derived from a questionable chunk. The out-of-dateness test 
is not performed using the dates pasted on chunks when the sweep 
starts, but on the dates that emerge when leaves supporting 
questionable chunks are (re-)derived.
   ⁶ For brevity, I’ve omitted the (entirely straightforward) code that 
checks for support cycles during the sweeps through the chunk network.
     This sweep returns a list of non-leaf questionable chunks. In the 
second phase, the outer call to
derivees-update, these chunks are re-derived. Note that derivees-update 
simply discards a chunk whose
basis is not up to date. That’s because the questionable basis chunks 
must themselves be on the list of
chunks to try deriving; when derivees-update gets to them, and re-derives 
them, it will then call itself
recursively to derive their derivees. Eventually all questionable 
derivees will be updated, except for the
rare cases in which derive determines that a questionable basis is up to 
date after all.
     The algorithm uses a marking scheme to ensure that no chunk is 
derived more than once. The global
fixnum chunk-event-num* is stored as derive-mark, and then incremented so 
that the same number is never
used again. Whenever derive is applied to a chunk, the chunk’s 
update-marks field is used to record
that it is now marked with derive-mark. The function chunk-is-marked 
checks to see whether the chunk
has already been marked with derive-mark. If the report is positive, then 
the chunk is not derived again.
Through all the complexities that are added to chunks-update, this 
property is preserved, because no
matter how the chunk network changes, the deriver of a chunk is supposed 
to take all relevant data into
account when it runs.
     Lack of space prevents me from giving a thorough description of the 
actual chunks-update program. The
following is a very skimpy sketch of the layers of complexity that must 
be added to the basic code above.
     The update basis of a chunk must be up to date before the chunk is 
derived. This requires a change
to derivees-update. However, before control gets to that point, the 
update basis must be managed, or its
components will not be updated. We must add code to check-leaves-up-to-
date to call chunk-manage on
the update basis of a chunk that might be updated. All such temporarily 
managed chunks are placed on a
list, and when chunks-update is finished, it calls chunk-unmanage on 
them. This code is unwind-protected
so that the temporary management is undone even if chunks-update 
terminates in some abnormal way.
     In addition to the derivation mark, the chunk-update system must use 
different marks to mark chunks
that have been seen during chunks-leaves-up-to-date and those seen during 
derivees-update. The same
global counter, chunk-event-num*, is used for this purpose, and we call 
the two marks down-mark and
up-mark respectively. We do not provide three slots on each chunk to keep 
track of these marks, because
of the possibility of unexpected calls to chunks-update, a topic to which 
I now turn.
     As I have mentioned more than once, there is no way to keep chunk 
derivers from calling chunks-update.
When it happens, we must let the call proceed, because it may change the 
outcome of the current call. For
instance, one system of chunks may keep track of which files depend on 
which other files, while another
keeps track of the compilation and load states of files. During an update 
of the latter system, an update of
the former may occur, thus changing which files should be compiled or 
loaded. We can say informally
that the first system is “meta” to the second, but I’ve made no attempt 
to introduce explicit “layers” and
“metalayers” to the chunk system. Instead, when a call to chunks-update 
detects that another call has
happened, it simply restarts.
     Restarting means allocating new values for down-mark and up-mark, 
then marking from chunks all over
again. The way marks are managed is that each chunk has a list of marks. 
To tell if a chunk is marked
with m, the system checks to see if m is in the list. Rather than use 
member, as it traverses the list it
deletes marks that are no longer in use. To tell if a mark is still in 
use requires chunks-update and other
“mark-allocating” functions to discard marks they have allocated; this 
occurs when chunks-update exits,
normally or abnormally.
     Although the details of this scheme are entirely orthogonal to chunk 
management, it does give us
an easy way of testing whether chunks-update has been called by someone 
while chunks-update was in
progress: simply check to see if some other process has allocated a chunk 
mark. When this event is
detected, chunks-update drops what it is doing, and restarts.
4     Applications and Conclusions
The biggest application of the chunk system is the YTools File Manager 
(YTFM) [3], but it is impossible to
talk about all its intricacies in the space available. Besides, a simple 
example will show better how much
value is added by using chunks.
    Let’s suppose that a file tab.lisp initializes a table with some sort 
of S-expression handlers, each
associated with a symbol that can occur as the car of an S-expression. In 
tab.lisp we can have this code:
(declare-chunk handler-table-init
    :contents
       ((defparameter handler-table* (make-hash-table ...))))
In a later file handlers.lisp we can write
(declare-chunk special-form-handlers (:depends-on handler-table-init)
    :contents
       ((setf (gethash 'cond handler-table*)
               (lambda (x y z) ...))
        (setf (gethash 'let handler-table*)
               (lambda (x y z) ...))))
To make the declare-chunk macro work, all we need to do is define a class 
File-segment-chunk, which has
two kinds of base chunk: the File-chunk of the file the chunk declaration 
appears in, and the chunks it
is declared to depend on. The File-chunk abstraction is supplied by the 
YTFM, as is the closely related
Loaded-file-chunk, which manages “File F is loaded into memory.” We need 
the latter for the update
basis of a File-segment-chunk; to update a chunk declared in file F , it 
is necessary (and sufficient!) for
F to be loaded. The contents of a File-segment-chunk become a function 
with zero arguments, to be
called by derive when applied to an element of the class. In the example, 
if tab.lisp is reloaded, then if
handlers.lisp hasn’t changed since it was last loaded, then derive calls 
the function, thus re-evaluating
the two setfs. If handlers.lisp has changed, then the deriver does 
nothing (because the file will have
been reloaded before the deriver is called). This is the simplest scheme, 
but it is easy to explore other
alternatives, such as “slurping” the file to find and evaluate just the 
chunk definition.
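Reduced to its essentials, the declare-chunk behavior might look like the
following Python sketch. The names and structure are hypothetical, not
the YTFM API: each segment's contents become a zero-argument thunk, and
re-deriving a segment first brings any stale dependency up to date, then
re-runs the thunk.

```python
# Hypothetical sketch of the declare-chunk idea; not the YTFM code.
handler_table = {}

def init_table():
    handler_table.clear()          # like the defparameter in tab.lisp

segments = {}                      # name -> {"deps", "thunk", "date"}

def declare_chunk(name, depends_on, thunk):
    segments[name] = {"deps": depends_on, "thunk": thunk, "date": 0}

def rederive(name, clock):
    seg = segments[name]
    # Bring the basis up to date first (necessary -- and sufficient).
    for d in seg["deps"]:
        if segments[d]["date"] == 0 or segments[d]["date"] > seg["date"]:
            rederive(d, clock)
    seg["thunk"]()                 # re-evaluate the segment's contents
    clock[0] += 1
    seg["date"] = clock[0]
```

In this toy model, re-deriving the table initializer wipes the table, and
re-deriving the handlers segment re-evaluates the two setf-like updates,
just as in the example above.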
    The point is that this mechanism allows fine-grained control over the 
rebuilding of Lisp sessions. Once
the dependency has been declared, the developer can stop worrying about 
it, confident that the chunk
manager will always reconfigure data structures properly as files are 
debugged and reloaded. With this
confidence, the times when the user must give up and reload everything 
can be reduced to a minimum.
References
[1] Jim Farley. Java Distributed Computing. O’Reilly, 1998.
[2] Heiko Kirschke. Persistent Lisp Objects! http://plob.sourceforge.net/plob.html, 2005.
[3] Drew McDermott. YTools: A Package of Portable Enhancements to Common Lisp. 
    Available at http://cs-www.cs.yale.edu/homes/dvm/papers/ytdoc.pdf, 2005.
[4] Kent Pitman. The Description of Large Systems. Technical Report 801, MIT AI, 1984. 
    Now available at http://www.nhplace.com/kent/Papers/Large-Systems.html.
[5] Rosenberg. ASDF: Another System Definition Facility. http://www.cliki.net/asdf, 2004.
[6] Kenny Tilton. Cells: A Dataflow Extension to CLOS. http://common-lisp.net/project/cells/, 2005.

HTH,
AvK
From: John W. Krahn
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <tZfUj.1164$KB3.699@edtnps91>
moi wrote:
> On Wed, 07 May 2008 02:09:03 -0700, Robert Maas, http://tinyurl.com/uh3t
> wrote:
> 
>>> From: "Leslie P. Polzer" <·············@gmx.net> do you know this
>>> paper:
>>> http://cs-www.cs.yale.edu/homes/dvm/papers/lisp05.pdf
>> I have no way to read PDF files here.
>>
> I dumped the PDF into ASCII for you:

That's definitely not ASCII, the quotation marks are outside of the 
range of the ASCII character set.


John
-- 
Perl isn't a toolbox, but a small machine shop where you
can special-order certain sorts of tools at low cost and
in short order.                            -- Larry Wall
From: Logan Shaw
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <4823c059$0$3387$4c368faf@roadrunner.com>
John W. Krahn wrote:
> moi wrote:
>> On Wed, 07 May 2008 02:09:03 -0700, Robert Maas, http://tinyurl.com/uh3t
>> wrote:

>>>> From: "Leslie P. Polzer" <·············@gmx.net> do you know this
>>>> paper:
>>>> http://cs-www.cs.yale.edu/homes/dvm/papers/lisp05.pdf
>>> I have no way to read PDF files here.

>> I dumped the PDF into ASCII for you:

> That's definitely not ASCII, the quotation marks are outside of the 
> range of the ASCII character set.

It strikes me that the phrase "an ASCII file" has been used as a shorthand
way of saying "a file which is a stream of simple characters".  But now that
Unicode is overtaking ASCII in popularity, maybe we need some better
terminology.

People have said "plain text file" for a while, but is that a good term
for something that's in Unicode?  Is UTF-8 plain enough to call it "plain"?

I suppose the term "plain text" is ambiguous, so it's sort of a judgment
call whether you co-opt it for UTF-8.

   - Logan
From: John Thingstad
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <op.uavete02ut4oq5@pandora.alfanett.no>
On Fri, 09 May 2008 05:12:39 +0200, Logan Shaw  
<············@austin.rr.com> wrote:

> John W. Krahn wrote:
>> moi wrote:
>>> On Wed, 07 May 2008 02:09:03 -0700, Robert Maas,  
>>> http://tinyurl.com/uh3t
>>> wrote:
>
>>>>> From: "Leslie P. Polzer" <·············@gmx.net> do you know this
>>>>> paper:
>>>>> http://cs-www.cs.yale.edu/homes/dvm/papers/lisp05.pdf
>>>> I have no way to read PDF files here.
>
>>> I dumped the PDF into ASCII for you:
>
>> That's definitely not ASCII, the quotation marks are outside of the  
>> range of the ASCII character set.
>
> It strikes me that the phrase "an ASCII file" has been used as a  
> shorthand
> way of saying "a file which is a stream of simple characters".  But now  
> that
> Unicode is overtaking ASCII in popularity, maybe we need some better
> terminology.
>
> People have said "plain text file" for a while, but is that a good term
> for something that's in Unicode?  Is UTF-8 plain enough to call it  
> "plain"?
>
> I suppose the term "plain text" is ambiguous, so it's sort of a judgment
> call whether you co-opt it for UTF-8.
>
>    - Logan

It isn't ambiguous to me. It is a file containing only text; the format is  
unspecified.
Most editors manage to auto-detect the encoding, so they just don't care. If this  
is a problem, how about writing such an auto-detect library? (Gave me an idea  
anyhow.)

LispWorks editor uses a emacs header to determine coding.
Looks like:

;;-*-mode: lisp; coding: utf-16;-*-

In an email header it is specified like:

Content-Type: text/plain; charset=ISO-8859-1; format=flowed

.. and in a web page like this:

<META http-equiv="Content-Type" content="text/html; charset=UTF-8">

Note that the content type is text/plain for all plain text, separate from the  
charset.
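Such an auto-detect library could begin by honoring the Emacs-style
cookie shown above. A small illustrative Python sketch (hypothetical,
not any existing library) that pulls the coding: value out of a file's
first line:

```python
import re

# Illustrative sketch: extract the `coding:` value from an Emacs-style
# file-variables line such as ";;-*-mode: lisp; coding: utf-16;-*-".
def emacs_coding(first_line):
    m = re.search(r'-\*-(.*?)-\*-', first_line)
    if not m:
        return None
    for part in m.group(1).split(';'):
        key, _, value = part.partition(':')
        if key.strip().lower() == 'coding':
            return value.strip()
    return None
```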

--------------
John Thingstad
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <rem-2008may08-006@yahoo.com>
> >> http://cs-www.cs.yale.edu/homes/dvm/papers/lisp05.pdf
> > I have no way to read PDF files here.
> From: moi <····@invalid.address.org>
> I dumped the PDF into ASCII for you:

Hey, thanks. The way you did it is much better than the way Google
does it. Your way, it's actually legible! How did you do it?

> During Lisp software development, it is normal to revise and
> reload programs and data structures continually.

Although that's vaguely similar to what I do, there's a major
difference: I do all my editing on my Macintosh, then copy&paste
across dialup modem to Unix, the only place where CMUCL is
available for me to use, where I do all line-by-line testing, all
full-function testing, and all R&D testing. When I need to restart
Lisp, because I lost my dialup connection or it's another day and
I'm logging in again, I upload (to Unix) all the files that have
changed locally since last uploaded, then start Lisp and run the
initialization sequence to get all active source files loaded into
Lisp. Then I can start re-building whatever data I have in the Lisp
environment. In the past I needed to recompute everything from
original input, so I ran a script for doing that. Now with my
automatic dataflow software, I just call a few functions to request
bringing up-to-date the input for whatever I'm in the middle of
developing, which automatically loads expensively-computed data
from disk instead of recomputing it. Thus I have no need to
automatically keep track of write-date on sourcefiles, and it
wouldn't do any good because there's no way to automatically upload
files from Macintosh to Unix. I only need to watch timestamps on
the data, not the sourcefiles. The system described in this PDF
(now ASCII text) document deals with both sourcecode and data.

A "chunk" per the paper is similar to a "control point" as I used
the term. Perhaps "chunk" is the data itself and "control point" is
the logical record that tells about the "chunk" and determines how
the data will be automatically brought up to date if it isn't
already. Thus from a nitpicky technical point, chunks and control
points are different aspects of the same process, but they match
1-1 so we can talk about either just the same without going astray.

Most of the time a piece of data either is or is not generated or
loaded already, and the timestamp is overkill. The main place where
timestamps would be useful is if I change the definition of how
some data in the chain is computed, such as if I change the
ProxHash algorithm to use a different random number generator. I
would simply delete the backup copy of that one data value from
both Lisp memory and disk, thereby forcing the dataflow system to
re-compute it and re-save it. At that point, since the timestamp is
the date saved, all data dependent on it would show as obsolete and
needing re-computing if I ever ask for them. The timestamps would
save me the burden of trying to manually invalidate each later data
value, and possibly overlooking one of them, resulting in inconsistent
values.
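The invalidation-by-timestamp behavior described here can be sketched in
Python. This is an illustrative toy, not the MayLoad code: each value's
timestamp is its save date, a value is obsolete whenever any input is
newer, and deleting one value forces everything downstream of it to be
recomputed on demand.

```python
# Illustrative toy of timestamp-driven invalidation; not MayLoad itself.
store = {}      # name -> (value, timestamp)
deps = {}       # name -> list of input names
recipes = {}    # name -> function of the input values

def declare(name, inputs, fn):
    deps[name] = inputs
    recipes[name] = fn

def obsolete(name):
    if name not in store:
        return True
    _, ts = store[name]
    return any(obsolete(i) or store[i][1] > ts for i in deps[name])

def demand(name, clock):
    # Recompute only when needed, stamping with a fresh save date.
    if obsolete(name):
        vals = [demand(i, clock) for i in deps[name]]
        clock[0] += 1
        store[name] = (recipes[name](*vals), clock[0])
    return store[name][0]

def invalidate(name):
    store.pop(name, None)   # like deleting the backup copy of one value
```

Deleting one value and re-demanding a downstream one recomputes exactly
the chain in between, with no manual bookkeeping.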

> The result is that the state of the Lisp process can become
> “incoherent,” with updates to “supporting chunks”
> coming after updates to the chunks they support.

Yes, that's the basic problem expressed nicely.

> The word chunk is used here to mean any entity, content, or
> entity association, or anything else modelable as up to date or out
> of date. To maintain coherence requires explicit management of an
> acyclic network of chunks, which can depend on conjunctions and
> disjunctions of other chunks;

Yes. The key is that it's acyclic, else it's impossible to
terminate recursion. This kind of dataflow is very different from
the feedback loops to converge on fixed points of functions during
interval arithmetic calculations.

Conjunctions are easy to understand: One resultant chunk depends
on two supporting chunks. (One control point has two inputs.)

Disjunctions aren't so obvious. Is this like when there might be a
backup copy of the data on disk, whereby the value can either be
re-computed or loaded from backup file depending on which is more
recent, but if output is as recent as latest of supporting and
saved chunk then neither re-compute nor load is needed?
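The disjunctive case as described in this question can be put in a
minimal Python sketch (hypothetical, assuming simple comparable dates):
a value is brought up to date either by recomputing from its supporter
or by loading a saved backup, whichever source is more recent, and
neither action is taken if the in-memory copy is already as fresh as
both.

```python
# Illustrative sketch of the disjunctive update choice; not MayLoad.
def bring_up_to_date(mem_date, support_date, backup_date,
                     recompute, load_backup):
    if mem_date >= max(support_date, backup_date):
        return "fresh"            # neither recompute nor load needed
    if backup_date >= support_date:
        load_backup()             # backup is current: cheaper than recompute
        return "loaded"
    recompute()                   # supporter is newer than the backup
    return "recomputed"
```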

> the built-in facilities of Lisp do not address the coherence
> problem in any systematic way.

Agreed.

> For example, although it is easy to reload a file after making
> some bug fixes, it often happens that the reloaded file initialized
> some table, and entries were made in it by files loaded later.

I don't write my code that way. Loading a file doesn't initialize a
table. Instead, loading a file merely makes available the functions
needed to initialize the table, and the functions which determine
under what circumstances the table would need to be initialized. If
the table has already been initialized, the reloaded software won't
have any reason to require it to be initialized again.

> There is nothing really wrong with restarting. It often requires
> you to take special measures to get back to the point you were at
> before the restart.

This is why I used to have a script that computed all the values
that were needed for my current R&D work, and why *now* I have
instead the automatic dataflow to compute or reload those same
values in a more optimal way.

Note that several years ago I had a weaker form of automatic
dataflow. It used timestamps to load or recompute data as needed,
but if I wanted to save to disk I needed to call that function
manually. It also used two globals per control point, one of which
was the actual data value, and one of which was the timestamp and
other info, both on the value cell of the global symbol. The new
automatic dataflow is essentially a refactoring of that old code to
have only a single symbol per control point, using properties
rather than value cell to store timestamp and data value (and
eventually other info about the control point).
From: moi
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar  to Unix 'make')
Date: 
Message-ID: <8e006$48240367$5350c6a6$17916@cache120.multikabel.net>
On Thu, 08 May 2008 22:54:19 -0700, Robert Maas, http://tinyurl.com/uh3t
wrote:

>> >> http://cs-www.cs.yale.edu/homes/dvm/papers/lisp05.pdf
>> > I have no way to read PDF files here.
>> From: moi <····@invalid.address.org>
>> I dumped the PDF into ASCII for you:
> 
> Hey, thanks. The way you did it is much better than the way Google does
> it. Your way, it's actually legible! How did you do it?

Well, basically just cut(select all from within the PDF-viewer) && paste 
(via an editor, since my newsreader does not seem to like big chunks).

AvK
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <rem-2008may10-004@yahoo.com>
> >> I dumped the PDF into ASCII for you:
> > Hey, thanks. The way you did it is much better than the way Google does
> > it. Your way, it's actually legible! How did you do it?
> From: moi <····@invalid.address.org>
> Well, basically just cut(select all from within the PDF-viewer) && paste
> (via an editor, since my newsreader does not seem to like big chunks).

Ah, sort of like what I do at some public computer center which has
Note Pad available (only 2 such places that I know of in all of
Santa Clara County), when I'm looking at a MS-Word document that
I've previously created at the public library
 (where MS-IE was the *only* program available until a few months
  ago, and even now Note Pad isn't available, but MS-Word has been
  available for the past few months, so that's the only way I can
  accumulate notes to myself then e-mail at end of session)
with lots of photos and descriptions of where I found them and what
they look like. Except even though I can e-mail that MS-Word
document to myself from the library, I can't view it from home. So
now at the one-of-two public computer lab I want to extract just
the text so that I can view those descriptions from home where I
can't see images or MS-Word documents at all: I load the document
into MS-Word, select all, start a Note Pad, paste into it, which
copies only the text parts and converts the type to TXT, then
finally select-all and paste into MS-IE Yahoo! Mail. It does no
good to copy from MS-Word and paste into MS-IE Yahoo! Mail
directly, because then the images are included and the whole e-mail
is MIME format MS-Word or somesuch which defeats the objective.

Anyway, thanks again for the service, so I don't have to spend
hours commuting to a public computer lab by public transit or
bicycle just to look at one file in the midst of discussion. Too
bad it requires manual copy&paste, so it's not feasible to set it
up as a Web service. (What Google does is crap by comparison, as I
mentioned before: Appx. 1.5 lines of text then a blank line then
1.5 lines of text then a blank line etc. all through a document,
making it totally painful to try to comprehend it.)
From: moi
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar  to Unix 'make')
Date: 
Message-ID: <5e44$4825ddd5$5350c6a6$13945@cache90.multikabel.net>
On Sat, 10 May 2008 10:05:47 -0700, Robert Maas, http://tinyurl.com/uh3t
wrote:

>> >> I dumped the PDF into ASCII for you:
>> > Hey, thanks. The way you did it is much better than the way Google

You're welcome.

>
> created at the public library
>  (where MS-IE was the *only* program available until a few months
>   ago, and even now Note Pad isn't available, but MS-Word has been
>   available for the past few months, so that's the only way I can
>   accumulate notes to myself then e-mail at end of session)
> with lots of photos and descriptions of where I found them and what they
> look like. Except even though I can e-mail that MS-Word document to
> myself from the library, I can't view it from home. So now at the
> one-of-two public computer lab I want to extract just the text so that I
> can view those descriptions from home where I can't see images or
> MS-Word documents at all: I load the document into MS-Word, select all,

My guess is that a wordprocessor-like program has an option "save as"
(sometimes you have to deselect some non-features, agree to losing the 
formatting, etc.). I don't know if it is scriptable. I can give it a try, 
but I am afraid that most of the work would be in avoiding being used as 
an open mail-relay.
 
> start a Note Pad, paste into it, which copies only the text parts and
> converts the type to TXT, then finally select-all and paste into MS-IE
> Yahoo! Mail. It does no good to copy from MS-Word and paste into MS-IE
> Yahoo! Mail directly, because then the images are included and the whole
> e-mail is MIME format MS-Word or somesuch which defeats the objective.
> 
Yes, God's ways are mysterious, aren't they :-]

> Anyway, thanks again for the service, so I don't have to spend hours
> commuting to a public computer lab by public transit or bicycle just to
> look at one file in the midst of discussion. Too bad it requires manual
> copy&paste, so it's not feasible to set it up as a Web service. (What
> Google does is crap by comparison, as I mentionned before: Appx. 1.5

Google does what it can. It *needs* the plain ASCII just to tokenise.
Reconstructing the formatting is much harder. PDF sucks. Good old 
PostScript did an excellent job, and so did ps2asc et al.

> lines of text then a blank line then 1.5 lines of text then a blank line
> etc. all through a document, making it totally painful to try to
> comprehend it.)

"if it works, it's obsolete" (Marhall McLuhan)
http://www.marshallmcluhan.com/poster.html

AvK
From: Logan Shaw
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <48251874$0$31752$4c368faf@roadrunner.com>
Robert Maas, http://tinyurl.com/uh3t wrote:
>> From: moi <····@invalid.address.org>

>> During Lisp software development, it is normal to revise and
>> reload programs and data structures continually.
> 
> Although that's vaguely similar to what I do, there's a major
> difference: I do all my editing on my Macintosh, then copy&paste
> across dialup modem to Unix, the only place where CMUCL is
> available for me to use, where I do all line-by-line testing, all
> full-function testing, and all R&D testing. When I need to restart
> Lisp, because I lost my dialup connection or it's another day and
> I'm logging in again,

Assuming you are doing everything at the terminal, which is a fair
assumption if you're using dialup, try out the "screen" program
under Unix.  It's a terminal emulator within a terminal.  It
supports multiple sessions, which is to say that it can emulate
more than one terminal at a time and let you switch between them.

The reason it's interesting in your situation is that it has
essentially a client-server architecture.  When your modem hangs
up, the client will die, but the server (which maintains the state
of the emulated terminals -- and thus all the programs running
within them) remains, and you can reattach to it with "screen -r".

Obviously, you cannot use it to keep your programs alive forever
because there are other limits, but the unreliability of modems
is, I would guess, among the more annoying limits.

> I upload (to Unix) all the files that have
> changed locally since last uploaded, then start Lisp and run the
> initialization sequence to get all active source files loaded into
> Lisp. Then I can start re-building whatever data I have in the Lisp
> environment. In the past I needed to recompute everything from
> original input, so I ran a script for doing that. Now with my
> automatic dataflow software, I just call a few functions to request
> bringing up-to-date the input for whatever I'm in the middle of
> developing, which automatically loads expensively-computed data
> from disk instead of recomputing it. Thus I have no need to
> automatically keep track of write-date on sourcefiles, and it
> wouldn't do any good because there's no way to automatically upload
> files from Macintosh to Unix.

Sure there is.  Drop to the command line on the Mac and use "rsync".
This assumes you have TCP/IP connectivity between the Mac and the
Unix machine, but that can be accomplished over dialup.  Use
"rsync -a localdir remotehost:remotedir", for example.

   - Logan
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <rem-2008may10-005@yahoo.com>
> > ... I do all my editing on my Macintosh, then copy&paste
> > across dialup modem to Unix, the only place where CMUCL is
> > available for me to use, where I do all line-by-line testing, all
> > full-function testing, and all R&D testing. When I need to restart
> > Lisp, because I lost my dialup connection or it's another day and
> > I'm logging in again,
> From: Logan Shaw <············@austin.rr.com>
> Assuming you are doing everything at the terminal

I don't know what you mean by "terminal". My Macintosh, as any
Macintosh since they were first invented, has multiple windows on
screen, overlapping so that I can select a mostly-hidden window by
clicking on some visible corner of it. One window is the VT100
terminal-emulator, others are Finder or McSink. All my editing is
in various McSink windows, not the one VT100 terminal-emulator
window. So it's not correct to say that everything I do is at the
(VT100) terminal, since most typing is done at some other window.

> which is a fair assumption if you're using dialup

I don't know what you mean by "dialup". When most people nowadays
talk about "dialup", they mean PPP or SLIP or DSL, not VT100. I
don't have anything available except VT100 here. VT100 emulator on
my Mac goes directly through my modem then through voice-grade
phone connection to modem at ISP which then goes through some
TELNET-like digital link to the actual shell machine.

> try out the "screen" program under Unix.

Several people have suggested that over the past several years, but
it has been too much of a nuisance to even consider it, until last
night. Normally the modem loses carrier just a few times per day,
some days not at all during the ten hours I'm dialed in, so it's only a
minor inconvenience to have to re-dial and re-establish whatever I
was doing at the time. But last night things got **HORRIBLE**, with
modem hanging up on the average every five minutes, all evening and
through the night until I went to bed. I finally decided times were
desperate enough to try screen. (I didn't see your article until
after I got up this morning, Saturday, so your article was a
coincidence!) But every time I tried to 'man screen', the modem
would hang up before I could get to the important commands I
needed. Finally the modem stayed connected enough to see page 5
where it said
       If you're impatient and want to get started without  doing
       a  lot more reading, you should remember this one command:
       "C-a ?".  Typing these two characters will display a  list
       of  the available screen commands and their bindings.
before it disconnected again. So the next time I dialed in I said
'screen' and then tried that list-commands command, but it was
inscrutable, so I had to go back to the manual to see more, except
it disconnected again, so I was back to trying over and over to
display enough of the manual to see how to attach to my
disconnected screen. It looked like -d -r was closest to what I
wanted, even though -D -R was the author's favorite. But I read
further and decided to try just -r by itself, saving -d -r for the
case where -r didn't work because the old screen had failed to
become detached when the modem lost carrier. It had taken a half
hour of re-dialings over-and-over just to find out about C-a ? and
start screen the first time, and it took another half hour
of re-dialings over-and-over before I could finally see that info
about -r etc. and decide what to try, and -r worked fine.
So for the rest of the night I was re-dialing over and over, about
every 2-5 minutes, it was **horribly bad**, and each time I'd
compose one line of code on my Mac while I was disconnected and
then re-dial then 'screen -r' then copy&paste the one line of code
into screen-CMUCL before the modem would disconnect again.

So now I'm using screen to protect myself from loss of context when
modem loses carrier. Already this morning, modem lost carrier once
while I was composing the previous followup (to moi, the nice
person who converted PDF text to plain ASCII text and posted it),
but so-far it hasn't dropped carrier again while posting that and
composing this new followup to you. So the modems seem to be in
halfway decent shape now. But since I've made the investment to
learn how to use screen, I might continue using it.

One thing about it I don't like: Last night, after I finally got my
new test routine written and debugged and running, where it was
spewing out lots of text (with SLEEPs every so often to avoid
overflowing modem buffers), I was letting it run a few minutes,
then ctrl-C to interrupt it to a breakpoint, then scrolling the
VT100 emulator back to copy&paste the stuff that had gone
off-screen during that batch, then starting it again then ctrl-C to
copy&paste. If the modem lost carrier while it was stopped in a
breakpoint, I'd just finish my batch of copy&paste before
re-dialing. But one time the modem lost carrier while my program
was spewing out, before I could interrupt it, and by the time I
came back it had spewed out much more than a screenful, which is
*not* saved by screen, so I lost all that output and had to stop
the whole process at that point. I spent a half hour browsing the
manual, interrupted by lost modem about every 2-5 minutes, and
never did find any way to scroll back within screen to see anything
that had scrolled off-screen while the modem was disconnected. I
saw some command for starting some mode that saves stuff that
scrolled off-screen, but I couldn't find any way to scroll back
even after that mode had been entered.

> It's a terminal emulator within a terminal.

I'm not sure that makes any sense, unless by "terminal" you mean a
Unix termcap sort of thing. I guess that's what you mean.

> It supports multiple sessions, which is to say that it can
> emulate more than one terminal at a time and let you switch between
> them.

For my purpose, that isn't necessary. I'm content to ctrl-Z and
switch jobs just as I did before starting to use screen.

> The reason it's interesting in your situation is that it has
> essentially a client-server architecture.  When your modem hangs
> up, the client will die, but the server (which maintains the state
> of the emulated terminals -- and thus all the programs running
> within them) remains, and you can reattach to it with "screen -r".

*now* you tell me "screen -r"!! Last night it would have taken me
three hours of re-dialing to find your article, and I might never
have found it at all because Google Groups would need to be
re-started from scratch each time the modem lost carrier, and I
wouldn't be able to remember where I left off, that's even *if* I
could somehow magically read your mind that you had posted a
suggestion to use screen and you had included "screen -r" within
it. It took only an hour to find the info via repeatedly re-dialing
and re-starting 'man screen', which turned out to be faster and
less frustrating. But thanks for trying. If I had seen your article
just *before* the modem went totally bad, and if it had been fresh in
my mind when the modem did go bad, it would have saved me a half
hour.

Hmmm, still dialed in, modem hasn't disconnected again the whole
time I've been composing this followup, much better than last night!!

> Obviously, you cannot use it to keep your programs alive forever
> because there are other limits,

Yeah, like the admin here doesn't condone tying up resources that
are not being used for long periods of time. It's explicitly
forbidden for PPP users to run a program that automatically
transmits something through the connection at regular intervals to
keep it from timing out the connection and needing (automatic)
re-dial. I think if I were to keep a screen active when I go to bed
at night, or when I go away from home, that would be frowned on
too. So last night, when modem crapped out just as I was getting
ready to go to bed, before I could shut down screen, I deliberately
re-dialed just to properly shut down screen to avoid having it sit
there while I was sleeping.

*** At this point in my composing followup, modem crapped out again ***
Re-dialing at 10:50 PDT ... ISP's dialup phone line is busy, trying
again ... not busy, establishing 19200 bps modem, logging in,
reattaching screen, now 10:52 PDT.

> Drop to the command line on the Mac and use "rsync".

**what** command line???
Macintosh System 7.5.5 doesn't have a command line!!
None of the applications that I've ever used on a Mac have command line either!

> This assumes you have TCP/IP connectivity between the Mac and the
> Unix machine

No I don't. Didn't you read the part about using VT100 dialup into
Unix shell?? Please read this FYI:
  <http://www.rawbw.com/~rem/NewPub/mySituation.html>

> but that can be accomplished over dialup.

The VT100 emulator understands **only** VT100. AFAIK there's no way
to run rsync over VT100 terminal emulator. Only Kermit and Zterm
work over it. And Kermit only half works, because the Kermit on
FreeBSD Unix has a bug: It fails to convert between DOS and Unix
newlines. So before I download any file via Kermit, I need to go
into Emacs to change all newlines to (*) CR-LF combinations, and
after I upload any file from the Mac I need to create a one-line
Unix-format file, append the DOS-format uploaded file after it,
then go into Emacs, where I change all but the first newline from
CR-LF back to just LF. Copy&paste directly across
modems is much more convenient, so I use Kermit only when I need to
upload or download a really big file that would be too much trouble
to chop into 30k pieces or copy&paste. But in any case, I'm pretty
sure Kermit can't do rsync. And zterm doesn't even exist on FreeBSD
Unix. (man and whereis both turn up empty.)
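For what it's worth, the newline juggling described above can be
scripted on the Unix side instead of done by hand in Emacs. A minimal
sketch using standard awk and tr (the filenames are just placeholders):

```shell
# Round-trip between Unix (LF) and DOS (CR-LF) line endings,
# e.g. around a Kermit transfer that leaves newlines unconverted.
printf 'first line\nsecond line\n' > unix.txt

# unix -> dos: re-emit each line with an explicit CR before the LF
awk '{ printf "%s\r\n", $0 }' unix.txt > dos.txt

# dos -> unix: simply delete every CR
tr -d '\r' < dos.txt > roundtrip.txt

cmp -s unix.txt roundtrip.txt && echo "round trip ok"   # prints "round trip ok"
```

Both awk and tr are standard on FreeBSD, so this sidesteps the Kermit
bug entirely for the Unix side of the transfer.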

(*) MODEM LOST CARRIER AGAIN AT 10:57, REDIALING, BACK ON AT 10:59

> Use "rsync -a localdir remotehost:remotedir", for example.

If I had a machine that could do it I would, but I don't.
From: Logan Shaw
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <4826047e$0$5693$4c368faf@roadrunner.com>
Robert Maas, http://tinyurl.com/uh3t wrote:
>>> ... I do all my editing on my Macintosh, then copy&paste
>>> across dialup modem to Unix, the only place where CMUCL is
>>> available for me to use, where I do all line-by-line testing, all
>>> full-function testing, and all R&D testing. When I need to restart
>>> Lisp, because I lost my dialup connection or it's another day and
>>> I'm logging in again,
>> From: Logan Shaw <············@austin.rr.com>
>> Assuming you are doing everything at the terminal
> 
> I don't know what you mean by "terminal". My Macintosh, as any
> Macintosh since they were first invented, has multiple windows on
> screen, overlapping so that I can select a mostly-hidden window by
> clicking on some visible corner of it. One window is the VT100
> terminal-emulator, others are Finder or McSink.

By "doing everything at the terminal", I meant on the Unix side.
Perhaps I should have said "assuming all your interaction with
Unix takes place through a Unix terminal session".  (The alternative
would be starting up an X11 session and remotely displaying windows
from the Unix machine to the Mac.  It is uncommon to do this with
dialup, but definitely possible, and in fact I used to do just
that, occasionally.)

>> which is a fair assumption if you're using dialup
> 
> I don't know what you mean by "dialup". When most people nowadays
> talk about "dialup", they mean PPP or SLIP or DSL, not VT100.

Others may mean something different when they say "dialup", but
when I say dialup, I mean using a traditional modem over a
traditional telephone line.  I mean using a system where in order
to make the network connection, you have to dial a telephone number,
because you are using a telephone line.  What you do with that
two-way stream of data that the modem allows, once you've established
the stream, does not affect whether I call it dialup.  :-)

For what it's worth, you could either (a) use a terminal emulator on
your local side to talk directly to a session on the remote side over
the dialup link, or (b) use PPP or SLIP to multiplex connections
to multiple sessions (using multiple terminal emulators).  There
are other options as well, so my point was merely that the slow
speed of dialup generally makes graphical interaction with the remote
system not very practical, and thus you are left with character-based
interaction, and on Unix that usually means interacting through
a terminal session.  Which is the case where 'screen' is useful.
Therefore, I was making a connection between dialup and 'screen'
being useful.

> So the next time I dialed in I said
> 'screen' and then tried that list-commands command, but it was
> inscrutable, 

Yes, one of the weaknesses of 'screen' is that the names of its
commands and its terse explanations on the help screen are very
much inscrutable.  Once you've learned to use them, they're still
hard to remember sometimes.  Eventually sheer muscle memory will
probably win out and overcome that, but it's harder to learn than
it needs to be.

> so I had to go back to the manual to see more, except
> it disconnected again, so I was back to trying over and over to
> display enough of the manual to see how to attach to my
> disconnected screen. It looked like -d -r was closest to what I
> wanted, even though -D -R was the author's favorite. But I read
> further and decided to try just -r by itself, saving -d -r for the
> case where -r didn't work because the old screen had failed to
> become detached when the modem lost carrier.

That is usually my policy.  It's rare (though possible!) that
the hangup of the modem fails to detach the previous screen.
Even if it does, I am fine with retyping "screen -d" to detach
it, then typing my "screen -r" again.  Usually, when something
was expected to work but fails, I'd rather take a moment to
analyze it and check if I understand what happened, rather
than just unconditionally charging forward.

> One thing about it I don't like: Last night, after I finally got my
> new test routine written and debugged and running, where it was
> spewing out lots of text (with SLEEPs every so often to avoid
> overflowing modem buffers), I was letting it run a few minutes,
> then ctrl-C to interrupt it to a breakpoint, then scrolling the
> VT100 emulator back to copy&paste the stuff that had gone
> off-screen during that batch, then starting it again then ctrl-C to
> copy&paste. If the modem lost carrier while it was stopped in a
> breakpoint, I'd just finish my batch of copy&paste before
> re-dialing. But one time the modem lost carrier while my program
> was spewing out, before I could interrupt it, and by the time I
> came back it had spewed out much more than a screenful, which is
> *not* saved by screen, so I lost all that output and had to stop
> the whole process at that point. I spent a half hour browsing the
> manual, interrupted by lost modem about every 2-5 minutes, and
> never did find any way to scroll back within screen to see anything
> that had scrolled off-screen while the modem was disconnected.

There is no explicit feature within 'screen' to do this.  But, you
can accomplish it by (ab)using the 'copy' feature.  Type control-A
then "[" (or control-A ESC instead) to enter copy mode, and from
there you can navigate around using arrow keys.  Hitting enter will
mark the beginning of a region of text to copy, and hitting enter
again will mark the end and perform the copy.  But you can also
just navigate around and see what you want to see, hitting ESC to
abort the copy interaction and return to the end of the buffer.

You can enable different sorts of keybindings to use within copy
mode.  By default, I think it is 'vi'-like, but it can be changed
to be 'emacs'-like.
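Also, 'screen' keeps only a fairly small number of scrollback lines
per window by default (around 100, I believe), so if you want copy
mode to be able to reach everything that scrolled past during a
disconnection, it's worth raising the limit in ~/.screenrc (the
10000-line figure below is just an arbitrary choice):

```
# ~/.screenrc -- keep a large scrollback buffer for each window
defscrollback 10000
```

This setting doesn't change normal interaction at all; keys like
RETURN are only reinterpreted after you explicitly enter copy mode.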

The one thing you can't do, as far as I know, is select more than
one screen full of text from the terminal emulator side (i.e. in
your case, the Mac side) to copy out to another program on the
computer running the terminal emulator.  This is because the
terminal emulator does not know about the scrollback feature of
'screen'.  Indeed, scrolling around using the feature that 'screen'
provides will just appear to the terminal emulator to be more
output that it should buffer.  It can make it harder for the
terminal emulator to keep a useful scrollback history of its own.

> I
> saw some command for starting some mode that saves stuff that
> scrolled off-screen, but I couldn't find any way to scroll back
> even after that mode had been entered.

Ah yes, then you've found copy mode already.

>> It's a terminal emulator within a terminal.
> 
> I'm not sure that makes any sense, unless by "terminal" you mean a
> Unix termcap sort of thing. I guess that's what you mean.

Yes, I mean that the Unix system is expecting to talk to a terminal
(and draw characters onto a grid, and take keyboard input).  Your
Mac provides a terminal emulator that the Unix system can interact
with like it is a terminal.  'screen' is a program that knows how to
interact with a terminal, and it also knows how to emulate a terminal.
So Unix treats 'screen' as a terminal, 'screen' behaves as a terminal,
and 'screen' treats your Mac terminal emulator as a terminal as well.

> For my purpose, that isn't necessary. I'm content to ctrl-Z and
> switch jobs just as I did before starting to use screen.

Yes, and sticking to control-Z can be a lot less confusing, because
'screen' isolates separate jobs into separate visual contexts,
which makes it harder to track what's going on, whereas ctrl-Z
intermixes them, which is aesthetically uglier but which allows
you to see the big picture better.

> Yeah, like the admin here doesn't condone tying up resources that
> are not being used for long period of time. It's explicitly
> forbidden for PPP users to run a program that automatically
> transmits something through the connection at regular intervals to
> keep it from timing out the connection and needing (automatic)
> re-dial. I think if I were to keep a screen active when I go to bed
> at night, or when I go away from home, that would be frouned on
> too.

Quite possibly, but it depends.  On such a system, modem lines are
often the scarcest resource, because every line costs a monthly fee
from the telephone company.

Main memory and CPU time on the machine are also limited resources,
but not very scarce.  At worst, you may use some of those resources,
but at best your process will be idle and Unix will swap the memory
pages out to disk, and you will use neither RAM nor CPU time.

So, the negatives might be small indeed.  Your system admin probably
reserves the right to not like it, but whether he exercises the right
is hard to say.  :-)

>> Drop to the command line on the Mac and use "rsync".
> 
> **what** command line???
> Macintosh System 7.5.5 doesn't have a command line!!
> None of the applications that I've ever used on a Mac have command line either!

Oh, you said you were using a Mac, and I assumed you meant a
relatively modern one.  Nevertheless, the old classic Macs were
a lot of fun, and make a perfectly serviceable computer in many
respects.

>> This assumes you have TCP/IP connectivity between the Mac and the
>> Unix machine
> 
> No I don't. Didn't you read the part about using VT100 dialup into
> Unix shell?? Please read this FYI:
>   <http://www.rawbw.com/~rem/NewPub/mySituation.html>

Ah, that is a fairly limited situation.  It would probably be
possible to get PPP going, since Mac OS 7.5.5 supports, if I
recall correctly, Open Transport PPP, and there *may* be software
on the Unix side to do PPP even without any help from root.
(Basically, you'd have a TCP/IP stack in userland software with
the ability to transparently convert your TCP streams into
Berkeley socket calls, so that when you send a "SYN", it does
a connect().)

But, even if you got that working, it would be slow, and to my
knowledge 'rsync' does not run on the classic Macs.

> And Kermit only half works, because the Kermit on
> FreeBSD Unix has a bug: It fails to convert between DOS and Unix
> newlines.

Perhaps kermit is treating text files as binary files, and thus
leaving all bytes unchanged.  I'm far from a kermit expert, but
I believe it supports character set conversion and file format
conversion and things like that.  So it may be possible to get
it to do the conversion as part of the transfer, with the right
settings.  Unfortunately, the documentation is not great (if you
don't have the book), so it will not be trivial.

> So before I download any file via Kermit, I need to go
> into Emacs to change all newlines to (*) CR-LF combinations, and
> after I upload any file from Mac I need to create a one-line
> Unix-format file then append the DOS-format uploaded file after it
> then go into Emacs which recognizes to change all but the first
> newline from CR-LF back to just LF.

You could probably do this at the command line to remove a little
of the tedium.  I am surprised you don't want Mac format, which
has a simple CR at each line end (rather than CR LF).  It's easy
to convert from a Unix (LF at line end) to old-style Mac (CR at
line end) file format with e.g.:

	tr '\n' '\r' < unix-file > mac-file

And the other direction with:

	tr '\r' '\n' < mac-file > unix-file

You could set up a shell function or alias for that fairly easily
to avoid all that typing:

	u2m ()
	{
	    tr '\n' '\r' < "$1" > "$1".mac
	}

	m2u ()
	{
	    tr '\r' '\n' < "$1" > "$1".unix
	}

You should be able to run "u2m foo" to convert a Unix-newlines (LF) file
called "foo" into a Mac-newlines (CR) file called "foo.mac", and
similarly for the "m2u".

   - Logan
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <rem-2008may11-002@yahoo.com>
> >> Assuming you are doing everything at the terminal
> > I don't know what you mean by "terminal". My Macintosh, as any
> > Macintosh since they were first invented, has multiple windows on
> > screen, overlapping so that I can select a mostly-hidden window by
> > clicking on some visible corner of it. One window is the VT100
> > terminal-emulator, others are Finder or McSink.
> From: Logan Shaw <············@austin.rr.com>
> By "doing everything at the terminal", I meant on the Unix side.
> Perhaps I should have said "assuming all your interaction with
> Unix takes place through a Unix terminal session".

Thanks for clarifying. Yes, that's my only option for access to
Unix, or to InterNet (indirectly) from home. Most stuff is done in
scrolling mode, but lynx operates in "full screen" mode instead.
But in both cases it's a Unix terminal session on a Unix shell-only
account on a computer system which is dedicated to shell+Web.

> The alternative would be starting up an X11 session and remotely
> displaying windows from the Unix machine to the Mac.

That would require some sort of packet-based communication
connecting the Mac to Unix, which is not available here.

> It is uncommon to do this with dialup, but definitely possible,
> and in fact I used to do just that, occasionally.

OK, just to satisfy my curiosity: What kind of packet connection
did you have between your Mac and the remote Unix? (I'm guessing
PPP or SLIP or DIALNET or ...?)

> when they say "dialup", but when I say dialup, I mean using a
> traditional modem over a traditional telephone line.  I mean using
> a system where in order to make the network connection, you have to
> dial a telephone number, because you are using a telephone line.
> What you do with that two-way stream of data that the modem allows,
> once you've established the stream, does not affect whether I call
> it dialup.  :-)

OK, you're using "dialup" in the generic sense, *anything* using
modems over voice-grade lines, as opposed to DSL or EtherNet or
direct TCP/IP or cable modem or other non-voice-grade digital
service. When I say I use dialup modems with VT100 emulator, I am
using "dialup" in the same sense. But most people nowadays use
"dialup" as synonym for dialup-PPP/or/SLIP.

** SECOND MODEM DROPPAGE TODAY AT 11:28, REDIALING IN BACKGROUND AS
I CONTINUE COMPOSING THIS RESPONSE LOCALLY. PREVIOUS LOSS OF
CARRIER WAS AT 11:12, ORIGINAL LOGIN WAS AT 10:49, SO MODEM'S MEAN
TIME BETWEEN FAILURE IS ABOUT 20 MINUTES SO-FAR TODAY.

> For what it's worth, you could either (a) use a terminal emulator
> on your local side to talk directly to a session on the remote side
> over the dialup link,

That's what I'm doing. That's my only option.

> or (b) use PPP or SLIP to multiplex connections to multiple
> sessions (using multiple terminal emulators).

That's not an option for me. My Macintosh has only a 68030 CPU, too
slow to do PPP/SLIP efficiently, and only 8MB of RAM, which thrashed
horribly in 1998 when I tried a free one-month trial of AT&T
WorldNet: it took 20 minutes just to download the AT&T WorldNet home
page, and 5 minutes just to scroll **LOCALLY** when I clicked the
scroll bar on MS-IE after the page was already downloaded to MS-IE.

> There are other options as well, so my point was merely that the
> slow speed of dialup generally makes graphical interaction with the
> remote system not very practical, and thus you are left with
> character-based interaction, and on Unix that usually means
> interacting through a terminal session.

** THIRD MODEM DROPPAGE TODAY AT 11:35, REDIALING IN BACKGROUND AS
I CONTINUE COMPOSING LOCALLY. I DIDN'T EVEN *USE* UNIX DURING THAT
BRIEF UPTIME, DIDN'T EVEN BOTHER TO REATTACH SCREEN BEFORE IT
DROPPED CARRIER. NOT REATTACHING SCREEN NOW EITHER, UNTIL I FINISH
COMPOSING OR NEED UNIX FOR SOME REASON.

It is true that downloading images over 19200 bps dialup takes
quite a while, compared to comparably-reasonable quantities of
text. A full screen of text, if I'm scrolling through a file with
'more', or paging through a Web page with lynx, takes 1920
characters (24*80 VT100 screen), i.e. about 1 second, whereas a JPG
to fill the screen would be about 25-100k bytes, taking between a
quarter-minute and a full minute. But that's only on a computer
fast enough with enough memory to run PPP or SLIP efficiently. Even
a full minute to download a full-screen JPG would be "fast"
compared to what this machine is capable of. I tried to download
an image from Mars from a NASA WebSite over AT&T WorldNet, and
after one hour of download it had gotten only the first half inch
of the top of the image. (I was using GIF-watcher which allows
peeking at a partially-written image while the disk file was still
open for writing from the Web browser. Every 5 or 10 minutes I'd
use it to check progress, which was far slower than the Bob&Ray
skit of Slow Talkers of America.)
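The transfer-time arithmetic above checks out: with 8N1 framing a
serial byte costs roughly 10 bits, so 19200 bps moves about 1920
bytes per second. The same numbers in shell arithmetic:

```shell
bps=19200
bytes_per_sec=$((bps / 10))       # 8 data bits + start/stop bits ~ 10 bits/byte

screen_chars=$((24 * 80))         # one full VT100 text screen = 1920 chars
echo "text screen: $((screen_chars / bytes_per_sec)) sec"   # prints "text screen: 1 sec"
echo "100 KB JPEG: $((100 * 1024 / bytes_per_sec)) sec"     # prints "100 KB JPEG: 53 sec"
```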

If I could run the Web from here, without images, but spend the
extra quarter-to-full minute to download an occasional image
deliberately, that would be entirely reasonable. Most of the time
the images in Web sites are distracting advertisements flashing all
over the screen, of *negative* value to me. Yahoo! Mail (when I'm
at public library or other public computer lab which has such
access) is especially bad with animated GIFs flashing so badly I
can't concentrate on what I'm trying to read from my e-mail
folders. But when I was trying AT&T WorldNet, even after I waited
20 minutes for the AT&T WorldNet home page to download, then immediately
typed in a URL for one of my own text-only Web pages, still it took
several minutes to download each text-only Web page, compared to
a fraction of a second for download plus one second to display with
VT100 dialup to lynx. So even if I could get a PPP service that had
a text-only home page, still it wouldn't be decent here on this
slow small-RAM Mac.

> Which is the case where 'screen' is useful.

Actually 'screen' is mostly useful only when the modems are flaky,
so that each time the modem disconnects I can get back to exactly
the same screen I had last (except AFAIK 'screen' doesn't keep
scrollback data the way the local VT100 emulator does here, so
direct VT100 when modems are reliable is better than 'screen'
reconnects in that respect).

Other than dealing with reconnections due to flaky modems, what
other valuable use do you consider 'screen' to have? Personally
I've gotten so used to ctrl-Z and %<number> to switch jobs that I
don't at all need 'screen''s feature of switching between multiple
terminal sessions over a single 'screen' activation. (*) Also,
using a single session allows me to locally (**) scroll back to
other things I've been doing recently, which multiple 'screen'
sessions within single (***) VT100 screen wouldn't allow me.

* FOURTH MODEM DROPPAGE TODAY AT 11:57 SINCE FIRST LOGIN AT 10:49
(MEAN TIME BETWEEN FAILURES APPX. 17 MINUTES), REDIALING IN
BACKGROUND AS I CONTINUE COMPOSING LOCALLY. AGAIN, I DIDN'T EVEN
*USE* UNIX DURING THAT BRIEF UPTIME, DIDN'T EVEN BOTHER TO REATTACH
SCREEN BEFORE IT DROPPED CARRIER. NOT REATTACHING SCREEN NOW
EITHER, UNTIL I FINISH COMPOSING OR NEED UNIX FOR SOME REASON.

** FIFTH MODEM DROPPAGE TODAY AT 12:02, SAME REMARKS AS ABOVE, BUT
MTBF DOWN BECAUSE THAT WAS SUCH A SHORT UPTIME.

*** SIXTH MODEM DROPPAGE TODAY AT 12:04, GETTING INTO HORRIBLY-BAD
MODE JUST LIKE SATURDAY NIGHT!!

> It's rare (though possible!) that the hangup of the modem fails
> to detach the previous screen.

There is one situation where loss of connection *does* almost
always fail to disconnect my login: When I'm at a public InterNet
terminal which supports TELNET, and I've used it to log into my
shell account. That way I can not only see my list of things-to-do,
which I edit from home then look at when at a public terminal, but
actually edit the file to check off things I've already done (so I
won't forget I did them and see them still in the to-do list the
next time I'm at a public terminal). From time to time, the TELNET
program on Windows gets a disconnect from host, and almost every
time that happens, when I re-connect and log back in again, I see
I'm still logged in at another terminal, and I don't know how to
kill the other session except to run 'ps' and manually kill the
other processes with the magic KILL flag to force an immediate
KILL, which is usually too much trouble to bother.
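The manual cleanup described above can at least be reduced to a
couple of lines. A sketch, assuming a ps that accepts -t to select
processes by controlling terminal (the tty name below is a
placeholder; 'who' shows the real one):

```shell
# Force-kill everything still attached to a stale login terminal.
stale_tty=ttyp3                    # placeholder; read the real name from 'who'
for pid in $(ps -t "$stale_tty" -o pid=); do
    kill -9 "$pid"                 # SIGKILL: cannot be caught or ignored
done
```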

Not being familiar with 'screen', I had to consider the possibility
that 'screen' might not detach when the modem craps (*) out, so I
needed to consider learning both the detach-and-reattach and
just-reattach commands. But in fact, what you say is true, it
always detached, so just 'screen -r' has been sufficient so-far.

* 7TH MODEM DROPPAGE TODAY AT 12:11.

> Even if it does, I am fine with retyping "screen -d" to detach
> it, then typing my "screen -r" again.  Usually, when something was
> expected to work but fails, I'd rather take a moment to analyze it
> and check if I understand what happened, rather than just
> unconditionally charging forward.

Hmm, given that -r usually is sufficient, I agree with your logic
and will probably do what you suggest if ever is otherwise, now
that you've explained it.

> ... (ab)using the 'copy' feature.  Type control-A then "[" (or
> control-A ESC instead) to enter copy mode, and from there you can
> navigate around using arrow keys.  Hitting enter will mark the
> beginning of a region of text to copy, and hitting enter again will
> mark the end and perform the copy.

I don't understand how that will recover what scrolled off the
'screen''s emulation of a VT100 screen because my running program
spewed it during the time after the modem unexpectedly disconnected
until the time I could log back in and reattach the screen session.

I need to put 'screen' into a mode of saving (*) *all* text that
rolls off the top of the screen, so that *if* during that mode I
lose modem connection I can later scroll back to view it. After I
put 'screen' into such a save-all-text mode, I need to be able to
interact with my Lisp or other interactive session normally,
including the ability to press RETURN (ENTER) at the end of each
command (line of input) without 'screen' interpreting it as some
special get-out-of-mode or mark-end-of-saved-data command.
Your description of C-a [ doesn't sound like it'd be useful to me.

* 8TH MODEM DROPPAGE TODAY AT 12:19, RE-DIAL GOT BUSY SIGNAL,
REDIALED AGAIN AT 12:20.

> >> It's a terminal emulator within a terminal.
> > I'm not sure that makes any sense, unless by "terminal" you mean a
> > Unix termcap sort of thing. I guess that's what you mean.
> Yes, I mean that the Unix system is expecting to talk to a terminal
> (and draw characters onto a grid, and take keyboard input).  Your
> Mac provides a terminal emulator that the Unix system can interact
> with like it is a terminal.  'screen' is a program that knows how to
> interact with a terminal, and it also knows how to emulate a terminal.
> So Unix treats 'screen' as a terminal, 'screen' behaves as a terminal,
> and 'screen' treats your Mac terminal emulator as a terminal as well.

Yes. Thanks for clarifying that my guess was correct.

> > For my purpose, that isn't necessary. I'm content to ctrl-Z and
> > switch jobs just as I did before starting to use screen.
> Yes, and sticking to control-Z can be a lot less confusing, because
> 'screen' isolates separate jobs into separate visual contexts,
> which makes it harder to track what's going on, whereas ctrl-Z
> intermixes them, which is aesthetically uglier but which allows
> you to see the big picture better.

Yes, especially as I mentioned that the VT100 emulator on my Mac
keeps track of appx. 34 screensful of stuff that's scrolled off the
top of the 24-line window, and I can scroll to any point within
that appx. 800-line virtual screen using the local (Mac) scrollbar
on that window to go back to view something, even to copy blocks of
stuff from already-scrolled-past stuff to paste into local edit
window. I can even go back a few screens to see some line of input
echoed back and copy that and then paste it back in to be executed
again.

> >> Drop to the command line on the Mac and use "rsync".
> > **what** command line???
> > Macintosh System 7.5.5 doesn't have a command line!!
> > None of the applications that I've ever used on a Mac have command line either!
> Oh, you said you were using a Mac, and I assumed you meant a
> relatively modern one.

Bad assumption. Take a look at what I have:
* REATTACHING 'SCREEN' NOW BECAUSE I'M ACTUALLY GOING TO DO
  SOMETHING ON UNIX, GETTING THE URL TO PASTE IN HERE:
<http://www.rawbw.com/~rem/NewPub/mySituation.html>
Note that I bought my current Mac for $200 (plus $100 for monitor),
more than ten years ago, from the used/surplus-computer table at a
small Mac-only shop (went out of business 5+ years ago). The
monitor died about 5 years ago, so I had to drive to Fry's (before
my car died 4.5 years ago) and buy a VGA monitor plus a Mac-to-VGA
adaptor (because Mac stuff was no longer being sold anywhere I
could find), another $100 cost. So far that new Dell monitor has
survived, except that a couple of years ago the '-' button wore
out, so I can no longer adjust brightness and have to completely
shut it off each night when I go to bed.

> Nevertheless, the old classic Macs were a lot of fun, and make a
> perfectly serviceable computer in many respects.

Macintosh Performa isn't a "classic". It's newer than a "Macintosh
II", which is newer than a "Macintosh SE", which is newer than the
"Macintosh Plus" which is the only other Mac I owned. The Plus died
in 1999, power/video board overheated and died for the third time,
and it would cost too much to (*) repair it when Y2K would have
made it impossible to set the date&time after end of 1999 anyway.

* 9TH MODEM DROPPAGE TODAY AT 12:40, SINCE FIRST LOGIN TODAY AT 10:49,
MEAN TIME BETWEEN FAILURES 12 MINUTES, OR 11 MINUTES IF YOU COUNT
THAT BUSY SIGNAL AS ANOTHER FAILURE.

> It would probably be possible to get PPP going, since Mac OS
> 7.5.5 supports, if I recall correctly Open Transport PPP, and
> there *may* be software on the Unix side to do PPP even without
> any help from root.

Like I said, I already tried that in 1998, just a few months after
I bought this used computer, and it was so horrendously slow as to
be worse than useless. But if you believe you can make it run
faster, orders of magnitude faster, like 1-2 seconds instead of 5
minutes per 2k text-only Web page, like 30 seconds instead of 20
minutes per typical Web page with images, like half-second instead
of 5 minutes per scrolling local window within InterNet Explorer,
feel free to arrange to come over here and give it a try.

Emulating PPP via SLiRP etc. isn't allowed over a shell-only
account, so we'd need to use *your* PPP account on *your* ISP when
testing, but if you can demonstrate a way to make it work at decent
speed then I might be glad to pay the extra $5 per month to upgrade
my shell-only account to a PPP+shell account.

> But, even if you got that working, it would be slow, and to my
> knowledge 'rsync' does not run on the classic Macs.

Nor mid-archaic Macs. "classic" Mac is Plus or SE running System
4.2 or 6.0.3 or 6.0.7, while mid-archaic Mac (*) is Mac II or
Performa running 6.0.7 or 7.5.5, and semi-new Mac is PowerBook
running 7.5.5, and really-new Mac runs Mac OS X (Unix-based)
with Intel x86 CPU.

* 10TH MODEM DROPPAGE TODAY AT 12:55, SINCE FIRST LOGIN TODAY AT 10:49.

> I am surprised you don't want Mac format, which has a simple CR
> at each line end (rather than CR LF).

I *do* want Mac format at my end. Kermit on the Mac automatically
converts between DOS format along the communication channel and Mac
format on the local disk. Kermit on Unix, on my previous ISP prior
to 2000, automatically converted between DOS format on the
communications channel and Unix format on the remote disk. But the
current ISP has a broken version of Kermit that thinks the host
files must be in DOS format, and breaks totally if I try to
download a file that isn't in DOS format, because it thinks a Unix
format file is all one huge long line, so the entire file is
transmitted as a single packet, which utterly breaks Kermit when it
reaches end-of-file and *still* hasn't seen the DOS end-of-line
mark.

It's been years since I was dumb enough to try to download a text
file without first converting to DOS format, so I don't remember
the exact error message, so I'll try it just now to refresh my
memory ... OK, it gets all the way to the end of the file, showing
progress in number of bytes, then the VT100 emulator's Kermit
utility beeps and puts up an alert saying that the download was
unsuccessful. The Unix Kermit server now prints out this error
message:

3AEToo many retriesA
*************************
SEND-class command failed.
 Packets sent: 161
 Retransmissions: 11
 Timeouts: 1
 Transfer canceled by receiver.
 Receiver's message: "Too many retries"
 Most recent local error: "Unknown error: 0"

HINTS... If the preceding error messages do not explain the failure:
 . Give me a SET FLOW XON/XOFF command and try again.
 . Try it again with SET PREFIXING ALL.
 . Try it again with SET STREAMING OFF.
 . Try it again with SET PARITY SPACE.
 . As a last resort, give a ROBUST command and try again.
Also:
 . Be sure the source file has read permission.
 . Be sure the target directory has write permission.
(Use SET HINTS OFF to suppress hints.)
*************************

(/home/users/rem/) C-Kermit>

Now if I convert the Unix file to DOS format by
  query replace   ^J    ^M^J
then try kermit download again ... hmm, that fails too.
Loading into the local text editor, I see the first download attempt
is all one line (which sometimes crashes the text editor and freezes
the entire Mac, although fortunately not this time), while the second
attempt is a perfectly fine Mac-format file up to the point where it
hits the Google Groups copyright character, which is outside the
US-ASCII character set, which explains why the otherwise-good
download crashed. Here's the relevant text:
   Create a group - Google Groups - Google Home - Terms of Service -
   Privacy Policy
                             <A9>2008 Google
That's printed by 'more', which shows hexadecimal code of any
non-ASCII character.

Yeah, before replying to your message, I did this:
- In google groups, clicked on SHOW ORIGINAL button.
- Used lynx to print the entire Web page (*) to a file called 'tmp'.
- Ctrl-Z out of lynx, and used 'more' to spew out screen-at-a-time
   the whole 'tmp' file to VT100 emulator.
- Scrolled back emulator locally to find start of the spewed text.
- Dragged cursor from top of spewed text (your message) to bottom,
   auto-scrolling as the cursor sits just below bottom edge of window,
   until it reached end of your message.
- Used copy-from-emulator command to copy *all* that selected text,
   your entire message, header and body, but not the copyright banner
   at the bottom because I didn't think I'd want to reply to that.
- Clicked over to my local NNTP edit and pasted your entire message
   into there.
Then at that point in replying to your message, when I wanted to
give a demo of Kermit download, I did 'ls -lt | more' to see what
recent file I might want to download, saw 'tmp' sitting there,
decided to try it, not realizing it wasn't a valid US-ASCII text
file, because of that one Latin-1 copyright symbol near the end. (**)

* 11TH MODEM DROPPAGE TODAY AT 13:16, SINCE FIRST LOGIN TODAY AT 10:49.
RE-DIAL FAILED BECAUSE OF LINE BUSY, RE-DIALED AGAIN AT 12:16

** 12TH MODEM DROPPAGE TODAY AT 13:24.

So let me try a different file which doesn't have any non-USASCII
characters in it ... OK, I was mistaken, it doesn't crash Kermit,
it downloads fully, but then it shows as one very long line, which
crashes my Macintosh if the file is larger than the Mac allows
single lines to be in a TextEdit window. It's a good thing I used a
relatively small file, just 3329 characters (all on
one line after download). I remember the last time I tried to load
such a long file into an edit buffer. It slowly tried to cram more
and more of the text into the edit buffer, first showing that
horizontal scrolling was needed, then starting to wrap around to
show strange black characters on top of the text it had previously
displayed, then that overprinting line got more and more black,
until the entire Mac froze and I needed to cold-boot.

> It's easy to convert from a Unix (LF at line end) to old-style
> Mac (CR at line end) file format

That's no good, because Kermit requires DOS format (CR LF), then
converts to Mac format by omitting LF characters when writing to
disk. That's why if I download a Unix-format file (LF only), on the
Mac with the LF removed it's all one very very long line.

Let me try in Emacs (which I understand better and can see visually
as I work), replacing LF (Unix) with just CR (Mac) ... done, in
EMACS buffer shows as one long wrapped line with ^M where all the
line breaks used to be. Now gonna try downloading with Kermit ...
hmm, it downloaded fine. Taking a look on the Mac ... hmm, normal
correct Mac file. Oh well, I guess it doesn't make any difference
whether I convert to DOS or Mac format before Kermit download, so
long as I don't leave it Unix format!! Reverse engineering the
Kermit algorithm, I think when writing to Mac disk the Kermit
feature of the VT100 emulator just passes CR like regular text and
omits LF, so both CR-LF and CR on transmission end up CR on Mac but
LF by itself ends up *nothing*.
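
For concreteness, the three conversions can be sketched with
standard Unix tools (the file names here are invented for the demo;
this is not my exact Emacs/Kermit workflow):

```shell
# Sketch of the three classic line-ending conventions, using only
# standard Unix tools. File names are invented for this demo.

printf 'line one\nline two\n' > unix.txt         # Unix: LF at each line end

tr '\n' '\r' < unix.txt > mac.txt                # old Mac: CR at each line end

awk '{ printf "%s\r\n", $0 }' unix.txt > dos.txt # DOS: CR LF at each line end

tr -d '\r' < dos.txt > roundtrip.txt             # DOS back to Unix: drop CRs
```

Per the reverse-engineered Kermit behavior above, either mac.txt or
dos.txt would download to the Mac correctly, but unix.txt would not.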

So I stand corrected: converting LF to either CR-LF or just CR works
equally well, but in EMACS (*) if I convert to just CR then it's all one
run-on line which is difficult to see whereas CR-LF shows as normal
line breaks with ^M at end of each line, which is better to view
IMO. So I'll stick to CR-LF whenever I need to download a file that
is too large to download via 'more'.

** 13TH MODEM DROPPAGE TODAY AT 13:46, SINCE FIRST LOGIN TODAY AT 10:49.

By the way, if I try to 'more' that Mac-converted file, it looks like:
   From ·············@yahoo.com Wed Aug 1 04:17:43 2007^M   [space.gif]^M   X-Ap
parently-To: ·······@yahoo.com via 209.191.68.160; Wed, 01 Aug^M   2007 04:17:43
 -0700^M   [space.gif]^M   X-Originating-IP: [209.191.68.157]^M   [space.gif]^M
  Return-Path: <>^M   [space.gif]^M   Authentication-Results: mta494.mail.mud.ya
hoo.com from=yahoo.com;^M   domainkeys=pass (ok)^M   [space.gif]^M   Received: f
rom 209.191.68.157 (HELO web34708.mail.mud.yahoo.com)^M   (209.191.68.157) by mt
a494.mail.mud.yahoo.com with SMTP; Wed, 01 Aug^M   2007 04:17:43 -0700^M   [spac
..
If I instead convert to DOS format, then 'more' shows me:
   From ·············@yahoo.com Wed Aug 1 04:17:43 2007
   [space.gif]
   X-Apparently-To: ·······@yahoo.com via 209.191.68.160; Wed, 01 Aug
   2007 04:17:43 -0700
   [space.gif]
   X-Originating-IP: [209.191.68.157]
   [space.gif]
   Return-Path: <>
   [space.gif]
   Authentication-Results: mta494.mail.mud.yahoo.com from=yahoo.com;
   domainkeys=pass (ok)
   [space.gif]
..
so apparently 'more', given a DOS-format file, transparently treats
it as if it were a Unix-format file, compared to EMACS which when in
Unix-file mode shows a ^M at the end of each line. Now if I *start*
EMACS on an already-DOS file, it uses DOS mode and doesn't show me
the ^M, just like 'more' does. Let me try to trick 'more' by having
the first few lines be Unix mode then convert the rest to DOS mode
.. in EMACS in Unix mode it looks like:
   From ·············@yahoo.com Wed Aug 1 04:17:43 2007
   [space.gif]
   X-Apparently-To: ·······@yahoo.com via 209.191.68.160; Wed, 01 Aug
   2007 04:17:43 -0700^M
   [space.gif]^M
   X-Originating-IP: [209.191.68.157]^M
   [space.gif]^M
..
Now let me try 'more' on that:
   From ·············@yahoo.com Wed Aug 1 04:17:43 2007
   [space.gif]
   X-Apparently-To: ·······@yahoo.com via 209.191.68.160; Wed, 01 Aug
   2007 04:17:43 -0700
   [space.gif]
   X-Originating-IP: [209.191.68.157]
   [space.gif]
..
hmmm, apparently 'more' treats each line separately, allowing a mix
of DOS and Unix format lines, both showing without any ^M.

OK, I think we've definitely reached the point of TMI.

> with e.g.:
>         tr '\n' '\r' < unix-file > mac-file

Emacs query-replace is easier because (1) I've used it many times,
(2) I can single-step it via comma or space before I press
exclamation-mark to do all the rest, so I can see if I'm getting it
correct before I bury myself too deep, (3) the final result is
right there on-screen before I save to file, to give me additional
confidence I got it right.

> You could set up a shell function or alias for that fairly easily
> to avoid all that typing:

Except I need to do it so rarely that it's faster to do the Emacs
query replace manually than to search ~/bin/* to try to find which
shell script I made for doing it more automatically. (*)
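
If I ever did want one, the kind of helper meant here could be as
small as this (the function name is my own invention, not an
existing script of mine):

```shell
# Hypothetical helper of the kind suggested: write a DOS-format
# (CR LF) copy of a Unix-format file next to the original.
unix2dos_copy() {
    awk '{ printf "%s\r\n", $0 }' "$1" > "$1.dos"
}

printf 'hello\nworld\n' > demo.txt
unix2dos_copy demo.txt          # writes demo.txt.dos with CR LF line ends
```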

** 14TH MODEM DROPPAGE TODAY AT 14:00, SINCE FIRST LOGIN TODAY AT 10:49.

OK, so far 13 modem droppages just during the time it took me to
compose this followup text locally on my Mac. Now let's see if I
can upload it to the NNTP server before the modem drops carrier
again ...

AFTER I GOT THE TIMESTAMP, JUST AS I WAS ABOUT TO COPY&PASTE THE TELNET
COMMAND TO CONNECT TO THE NNTP SERVER:
14TH MODEM DROPPAGE TODAY AT 14:04, SINCE FIRST LOGIN TODAY AT 10:49.
MEAN TIME BETWEEN FAILURE <14 MINUTES.
From: Logan Shaw
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <48279551$0$31717$4c368faf@roadrunner.com>
Robert Maas, http://tinyurl.com/uh3t wrote:
> That's not an option for me. My Macintosh has only a 68030 CPU, too
> slow to do PPP/SLIP efficiently, and only 8MB of RAM, thrashed
> horribly in 1998 when I tried a free one-month trial of AT&T
> WorldNet, took 20 minutes just to download the AT&T WorldNet home
> page, and 5 minutes just to scroll **LOCALLY** when I clicked the
> scroll bar on MS-IE after the page was already downloaded to MS-IE.
     :
     :
> quarter-minute and a full minute. But that's only on a computer
> fast enough with enough memory to run PPP or SLIP efficiently.
     :
     :
> If I could run the Web from here, without images, but spend the
> extra quarter-to-full minute to download an occasional image
> deliberately, that would be entirely reasonable.

Not that you're likely to want to go this route, but if it were a
high priority for you to achieve this, you could probably pull it
off by installing NetBSD on that machine as a replacement for Mac OS.
A Motorola 68030 is not a very fast CPU, but with a well-implemented
TCP/IP stack and serial driver, it should be able to support PPP at
19200 BPS, given the proper software, and I would expect NetBSD to
be able to do that.

The mac68k port is still apparently maintained, judging by the fact
that the latest release, NetBSD 4.0, is available for mac68k:

     http://www.netbsd.org/ports/mac68k/
     ftp://ftp.netbsd.org/pub/NetBSD/NetBSD-4.0/mac68k/INSTALL.txt

Of course, you would have to give up the Mac-ness of the system,
but in exchange you would gain the ability to run lots of modern
Unix software.  Much of what has been ported to NetBSD should run
on the mac68k edition.

NetBSD/mac68k supports the Performa 600, and has a minimum memory
requirement of 8 MB, so it may be able to run OK on your hardware.

I'm not saying that it's a good idea, but it would be fun to try
as an experiment, just to see if it would work.

   - Logan
From: George Neuner
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <r7nf24ljq8er5j2rjok2uiica7ci5ejf7h@4ax.com>
On Sun, 11 May 2008 19:55:19 -0500, Logan Shaw
<············@austin.rr.com> wrote:

>Robert Maas, http://tinyurl.com/uh3t wrote:
>> That's not an option for me. My Macintosh has only a 68030 CPU, too
>> slow to do PPP/SLIP efficiently, and only 8MB of RAM, thrashed
>> horribly in 1998 when I tried a free one-month trial of AT&T
>> WorldNet, took 20 minutes just to download the AT&T WorldNet home
>> page, and 5 minutes just to scroll **LOCALLY** when I clicked the
>> scroll bar on MS-IE after the page was already downloaded to MS-IE.
>     :
>     :
>> quarter-minute and a full minute. But that's only on a computer
>> fast enough with enough memory to run PPP or SLIP efficiently.
>     :
>     :
>> If I could run the Web from here, without images, but spend the
>> extra quarter-to-full minute to download an occasional image
>> deliberately, that would be entirely reasonable.
>
>Not that you're likely to want to go this route, but if it were a
>high priority for you to achieve this, you could probably pull it
>off by installing NetBSD on that machine as a replacement for Mac OS.
>A Motorola 68030 is not a very fast CPU, but with a well-implemented
>TCP/IP stack and serial driver, it should be able to support PPP at
>19200 BPS, given the proper software, and I would expect NetBSD to
>be able to do that.

The 68030 isn't the problem - I once worked on a system that had a
40 MHz 030 driving 4 10-T Ethernet ports and a pair of 56K modems all
while doing 3D image processing, print preparation and print
monitoring.  

Most likely the culprit is MacOS - as you noted.  But even so, 19.2
should be nothing for an 030 Mac with 8MB unless there is quite a bit
of VMM thrashing ... the RS-422 ports are 1Mbps and the AppleTalk
ports 5.5? Mbps, and they are generally reliable under heavy loads.
It's been a while since I played with Macs or MacOS, but IIRC the
network and serial interrupts have higher priority than the disk.
There would have to be a hell of a lot of thrashing going on before
there would be PPP timeouts - the TCP/IP stack is pinned in memory and
the serial port and modem are both buffered.

Is it possible to close applications or shut down daemons and free up
some memory?  Not much help I know, but most Macs I've seen had tons
of shit running in the background.


>The mac68k port is still apparently maintained, judging by the fact
>that the latest release, NetBSD 4.0, is available for mac68k:
>
>     http://www.netbsd.org/ports/mac68k/
>     ftp://ftp.netbsd.org/pub/NetBSD/NetBSD-4.0/mac68k/INSTALL.txt
>
>Of course, you would have to give up the Mac-ness of the system,
>but in exchange you would gain the ability to run lots of modern
>Unix software.  Much of what has been ported to NetBSD should run
>on the mac68k edition.
>
>NetBSD/mac68k supports the Performa 600, and has a minimum memory
>requirement of 8 MB, so it may be able to run OK on your hardware.
>
>I'm not saying that it's a good idea, but it would be fun to try
>as an experiment, just to see if it would work.
>
>   - Logan

--
for email reply remove "/" from address
From: Pascal J. Bourguignon
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <7clk2e2vql.fsf@pbourguignon.anevia.com>
George Neuner <·········@/comcast.net> writes:

> Most likely the culprit is MacOS - as you noted.  But even so, 19.2
> should be nothing for an 030 Mac with 8MB unless there is quite a bit
> of VMM thrashing ... the RS-422 ports are 1Mbps and the AppleTalk
> ports 5.5? Mbps, and they are generally reliable under heavy loads.
> It's been a while since I played with Macs or MacOS, but IIRC the
> network and serial interrupts have higher priority than the disk.
> There would have to be a hell of a lot of thrashing going on before
> there would be PPP timeouts - the TCP/IP stack is pinned in memory and
> the serial port and modem are both buffered.

IIRC, on the Macs with serial ports, only the "A" or "modem" port
would have interrupts be of higher priority.  The "B" or "printer"
port wouldn't.

-- 
__Pascal Bourguignon__
From: George Neuner
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <jljj24dlft5odkdgt430ql3vnv064ib303@4ax.com>
On Tue, 13 May 2008 15:12:50 +0200, ···@informatimago.com (Pascal J.
Bourguignon) wrote:

>George Neuner <·········@/comcast.net> writes:
>
>> Most likely the culprit is MacOS - as you noted.  But even so, 19.2
>> should be nothing for an 030 Mac with 8MB unless there is quite a bit
>> of VMM thrashing ... the RS-422 ports are 1Mbps and the AppleTalk
>> ports 5.5? Mbps, and they are generally reliable under heavy loads.
>> It's been a while since I played with Macs or MacOS, but IIRC the
>> network and serial interrupts have higher priority than the disk.
>> There would have to be a hell of a lot of thrashing going on before
>> there would be PPP timeouts - the TCP/IP stack is pinned in memory and
>> the serial port and modem are both buffered.
>
>IIRC, on the Macs with serial ports, only the "A" or "modem" port
>would have interrupts be of higher priority.  The "B" or "printer"
>port wouldn't.

You're definitely right about the ports being different.  

I don't have my Inside Macintosh books handy (they're in a box
somewhere - I haven't worked on a Mac in ages), but my hazy
recollection of the priority order puts the disk 5th, behind the
system timer, vertical retrace, Appletalk and serial A.

George
--
for email reply remove "/" from address
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <rem-2008jun27-001@yahoo.com>
Why this response is so belated:
  <http://groups.google.com/group/misc.misc/msg/cea714440e591dd2>
= <······················@yahoo.com>
> From: George Neuner <·········@/comcast.net>
> ... 19.2 should be nothing for an 030 Mac with 8MB unless there
> is quite a bit of VMM thrashing ...

That was 1998 when I tried AT&T WorldNet and found it to be so
horrendously slow that it was worse than useless. That was a long
time ago, but I seem to recall it was thrashing an awful lot. I
suspect InterNet Explorer takes a huge chunk of RAM, and the PPP
stack takes another large chunk, and random-accesses within it all
pretty badly. But I have no way to try it again to see if my memory
is accurate.

> There would have to be a hell of a lot of thrashing going on
> before there would be PPP timeouts - the TCP/IP stack is pinned
> in memory and the serial port and modem are both buffered.

I have no idea how to get info about whether any PPP timeouts
occurred. Without any data, I can't discuss this topic further.

> Is it possible to close applications or shut down daemons and
> free up some memory?  Not much help I know, but most Macs I've
> seen had tons of shit running in the background.

Yeah, but most of the used memory is in the system: When I go to
Finder and mouse-down on the Apple in the upper left corner and
drag down to the first entry, "About This Macintosh ...", it shows
me a bar chart with numbers, which at the moment show:
- McSink            20k
- Risk             512k  (wasn't running when I was trying PPP service)
- System Software 5002k
- VersaTerm        284k  (wasn't running when I was trying PPP service)
- Xlisp 2.1g       800k  (wasn't running when I was trying PPP service)
so it's not like anything I normally have running *also* (in
addition to InterNet Explorer), except the System Software, is
taking up a lot of RAM.

I'm not aware of any daemons that normally run on a Macintosh with
System 7.5.5, so perhaps you can give me an idea how to search for
them to see if any are running that I don't know about?


-
Nobody in their right mind likes spammers, nor their automated assistants.
To open an account here, you must demonstrate you're not one of them.
Please spend a few seconds to try to read the text-picture in this box:

/------------------------------------------------------------------------\
|     |\/|       _  _   _     _   _  _  _|  _   |_   _ |  _    .  _      |
|     |  | \/   _) (_) | )   | ) (- (- (_| _)   | ) (- | |_)   | | )     |
|          /                                             |               |
|      _  |  _   _  __ |_   _  _       _   _  |_                         |
|     (_| | (_) (-     |_) |  (_| ,   | ) (_) |_                         |
|           _/                                                           |
|          _   _   _|  _  _  __ |_   _  _                                |
|     \)/ (_) | ) (_| (- |      |_) |  (_| .                             |
\------(Rendered by means of <http://www.schnoggo.com/figlet.html>)------/
     (You don't need JavaScript or images to see that ASCII-text image!!
      You just need to view this in a fixed-pitch font such as Monaco.)

Then enter your best guess of the text (40-50 chars) into this TextField:
          +--------------------------------------------------+
          |                                                  |
          +--------------------------------------------------+
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <rem-2008jun26-001@yahoo.com>
> Date: Sun, 11 May 2008 19:55:19 -0500
Why this response is so belated:
  <http://groups.google.com/group/misc.misc/msg/cea714440e591dd2>
= <······················@yahoo.com>
> From: Logan Shaw <············@austin.rr.com>
> if it were a high priority for you to achieve this, you could
> probably pull it off by installing NetBSD on that machine as a
> replacement for Mac OS.

First of all, I like the way that I can rearrange icons on the desktop
and inside folders and have them stay put until I move them again.
For a personal computer, this is infinitely better than MicroSoft
Windows where every time I re-open a window it rearranges the icons
to standard layout and every time I put an icon on the desktop it
immediately moves to the next available standard grid position
nowhere near where I want it.
The Mac as a personal computer is also better than Unix which
doesn't even have icons on a desktop in the first place.

But it's moot because my Mac has only 19.9 MB free so I can't
install anything of any significant size here anyway.

> I'm not saying that it's a good idea, but it would be fun to try
> as an experiment, just to see if it would work.

I'm sure 19.9 MB wouldn't be enough disk space to do it just for
fun to see if it would work.

Thanks for at least trying to suggest an idea. Here's a description
of my situation here, in case you have any more ideas and want to
check if they would work here before spending a lot of time/energy
composing a newsgroup article I have to dismiss as not possible
here:  <http://www.rawbw.com/~rem/NewPub/mySituation.html>
From: Mark Wooding
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <slrng2iq7o.ihm.mdw@metalzone.distorted.org.uk>
Robert Maas, http://tinyurl.com/uh3t <···················@SpamGourmet.Com> wrote:

> I need to put 'screen' into a mode of saving (*) *all* text that
> rolls off the top of the screen, so that *if* during that mode I
> lose modem connection I can later scroll back to view it.

It does that automatically.  If you find that it's not storing enough
history for you, put

  defscrollback 1000

or some other number of lines in your ~/.screenrc.  (You say that your
Mac terminal emulator stores about 34 screenfuls of 24-line screens, so
that's 816 lines; 1000 is a slight upgrade, then.)
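
For example, a minimal ~/.screenrc along these lines might read
(the values are just illustrations):

```
# ~/.screenrc -- example settings
defscrollback 1000      # lines of scrollback history kept per window
startup_message off     # skip the copyright splash on startup
```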

> Your description of C-a [ doesn't sound like it'd be useful to me.

It's a bit tedious to use, but effective.  Basically, C-a [ puts screen
into a strange selecting-text mode during which you can wander back and
forth through the scrollback history; when you've seen what you wanted
to see, hit ESC to return to the usual terminal-emulation mode.  The
cursor jumps back to wherever it was before, and the display snaps back
to showing what it was showing before you started.

So you have to switch back and forth between viewing scrollback and
normal interaction, which is annoying, but the feature is nonetheless
very useful.  Essentially, when you've been screwed by another modem
droppage: reconnect, hit `screen -r', C-a [ to review history, scroll
about to see what you missed, ESC, and then continue.

> Let me try in Emacs (which I understand better and can see visually
> as I work), replacing LF (Unix) with just CR (Mac) ... done, in
> EMACS buffer shows as one long wrapped line with ^M where all the
> line breaks used to be.

Golly.  Which version of Emacs are you using?  Modern Emacsen (certainly
Emacs 21, 2002) will notice Unix, DOS and Mac line endings automatically
and just continue to edit them.  You can convert by visiting the file,
hitting C-x RET f undecided-{unix,dos,mac} RET C-x C-s (say).  There's
an indicator in the mode line telling you which coding system (and in
particular which line-ending) Emacs is using.

-- [mdw]
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <rem-2008may13-001@yahoo.com>
> > I need to put 'screen' into a mode of saving (*) *all* text that
> > rolls off the top of the screen, so that *if* during that mode I
> > lose modem connection I can later scroll back to view it.
> From: Mark Wooding <····@distorted.org.uk>
> It does that automatically.  If you find that it's not storing enough
> history for you, put
>   defscrollback 1000
> or some other number of lines in your ~/.screenrc.  (You say that your
> Mac terminal emulator stores about 34 screenfuls of 24-line screens, so
> that's 816 lines; 1000 is a slight upgrade, then.)

Oh, thanks, I misunderstood what somebody said before and/or what
the 'man' page said. I tried copy mode just now, and in copy mode I
looked at the 'man' page (I gave the 'man screen' command and ran
down several pages *before* entering copy mode, then used copy mode
as a way to scroll back). The default was to go back only about
four full screens. I had the darndest time getting *out* of copy
mode, back to regular mode. Finally in desperation I just pressed
space, which said it had set a mark, then pressed space again,
which said it had copied 0 characters, then pressed space again,
which finally was seen by 'man' already in progress. I ran 'man' a
while longer until I noticed that it said copy mode was like a vi
editor. It said any characters not listed would exit copy mode, but
the list of copy-mode characters was already past the four-screen
buffer, and I couldn't remember what all those characters had been,
so I took a wild guess and pressed =, and it said copy mode was
aborted, so I guess I got lucky.

> > Your description of C-a [ doesn't sound like it'd be useful to me.
> It's a bit tedious to use, but effective.  Basically, C-a [ puts screen
> into a strange selecting-text mode during which you can wander back and
> forth through the scrollback history; when you've seen what you wanted
> to see, hit ESC to return to the usual terminal-emulation mode.

Oh. I should have scrolled forward (locally on my Mac) in the edit
buffer that contains your message I'm editing my reply to. Then I
would have seen that and not been so frustrated. But I hadn't read
your entire message so I didn't know it later had the info I
wanted. Oh well. Somebody long ago (when the entire ARPANET had
only 64 hosts, 90% of which were military and inaccessible) said good
advice is to read everything before doing anything, but nowadays if
you try to read the entire InterNet before responding to anything
.. you get the idea? (Do you remember that ad on TV a few years
ago when somebody upgraded to high-speed DSL, and they actually
were able to read all the way to the end of the InterNet, and the
computer monitor displayed a message YOU HAVE REACHED THE END OF
THE INTERNET. THERE IS NO MORE TO SEE.)

> > Let me try in Emacs (which I understand better and can see visually
> > as I work), replacing LF (Unix) with just CR (Mac) ... done, in
> > EMACS buffer shows as one long wrapped line with ^M where all the
> > line breaks used to be.
> Golly.  Which version of Emacs are you using?

GNU Emacs 20.7.2 (i386-unknown-freebsdelf4.1.1, X toolkit)
 of Sat Nov 18 2000 on ...
Copyright (C) 1999 Free Software Foundation, Inc.

> Modern Emacsen (certainly Emacs 21, 2002) will notice Unix, DOS
> and Mac line endings automatically and just continue to edit them.

Do you mean if you are already editing a Unix-format file, so Emacs
is in Unix-file mode, and in the edit you *change* line endings
with query replace, and you're still in that same edit buffer? Or
do you mean if you *start* to edit a file which is *already* in
some non-Unix format? I'll do an experiment where I change line
endings to be Mac format (bare CR without LF), exit editor,
re-start editor, and see what's on-screen ... OK, in that case,
*after* I re-visit that file (I re-started EMACS on the
file, but I presume killing the buffer then C-x C-f the file again
would have worked too), it shows (Mac) instead of (DOS) or :---
near the left of the mode line, and on-screen it looks normal
instead of ^M mess. So we're both right, just talking about
different situations.

> You can convert by visiting the file, hitting
> C-x RET f undecided-{unix,dos,mac} RET C-x C-s (say).

Let me try figuring out what you mean there ...

The target text contains the following non ASCII character(s):
          latin-iso8859-1: ?
These can't be encoded safely by the coding system undecided-mac.

Dang, that copyright symbol again!! That's what I get for being
lazy, editing the most recent file on the disk, which is your
message downloaded from Google Groups via lynx P command.

Hmm, if the purpose of all this is to prepare a file for download
with Kermit, your way is safer, because it warns me if the file has
any non-USASCII characters in it, instead of discovering that near
the end of a Kermit download when Kermit bombs.

> There's an indicator in the mode line telling you which coding system

Yes, I know about that, noticed it after I *uploaded* a file from
my Mac and noticed that EMACS showed it as a DOS file.

> and in particular which line-ending) Emacs is using.

I don't see that specifically. The mode line here just says:
---1(Mac)---F1  tmpmac            (Fundamental)--L1--Top------------------------
 That's when I load back in the file I previously converted to Mac
 format using      query replace    ^J     ^M
From: Mark Wooding
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <slrng2ji36.ihm.mdw@metalzone.distorted.org.uk>
Robert Maas, http://tinyurl.com/uh3t <···················@SpamGourmet.Com> wrote:

>> Modern Emacsen (certainly Emacs 21, 2002) will notice Unix, DOS
>> and Mac line endings automatically and just continue to edit them.
>
> Do you mean if you are already editing a Unix-format file, so Emacs
> is in Unix-file mode, and in the edit you *change* line endings
> with query replace, and you're still in that same edit buffer? Or
> do you mean if you *start* to edit a file which is *already* in
> some non-Unix format?

It applies some heuristics to the file at visit-time to decide what
kinds of line-endings it has.  It then converts the buffer to an
internal format (probably Unix-style line endings) and remembers what
transformation it applied so that it can do the inverse thing when you
save the buffer.
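A rough sketch of that visit-time heuristic in Python (illustrative only; Emacs's real detection is more involved): count each line-ending style in the raw bytes and pick the majority.

```python
def detect_line_ending(data: bytes) -> str:
    """Guess a file's line-ending convention ('unix', 'dos' or 'mac')
    by counting each style and picking the majority, roughly the way
    an editor's visit-time heuristic would."""
    crlf = data.count(b"\r\n")
    lf = data.count(b"\n") - crlf   # bare LFs, excluding CRLF pairs
    cr = data.count(b"\r") - crlf   # bare CRs, excluding CRLF pairs
    counts = {"dos": crlf, "unix": lf, "mac": cr}
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else "unix"   # no newlines: default
```

Once the style is known, the editor converts to one internal representation and remembers which inverse transformation to apply at save time.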

Messing with C-x RET C-f changes Emacs's idea of which transformation to
apply when saving.  In particular, it doesn't actually change the buffer
contents at all.

There's also (on Emacs 22 only :-( ) C-x RET C-r which tells Emacs that
its heuristics guessed the wrong coding system and should try to read
the file into your buffer again using one you specify explicitly.

>> You can convert by visiting the file, hitting
>> C-x RET f undecided-{unix,dos,mac} RET C-x C-s (say).
>
> Let me try figuring out what you mean there ...
>
> The target text contains the following non ASCII character(s):
>           latin-iso8859-1: ?
> These can't be encoded safely by the coding system undecided-mac.
>
> Dang, that copyright symbol again!! That's what I get for being
> lazy, editing the most recent file on the disk, which is your
> message downloaded from Google Groups via lynx P command.

Hmm.  I got that wrong.

  * Just `unix', `mac' or `dos' work and you don't need the `undecided-'
    prefix.

  * Emacs 20 doesn't behave the same way as my Emacs 22 here, and
    insists on asking you which coding system to use if it can't work it
    out.  It may be better to shut it up by explicitly asking for
    utf-8-unix or latin-1-mac or whatever.  In particular, when it asks
    you at save time which coding system to use, it /doesn't/ let you
    specify the line-ending convention and seems to end up in an awful
    mess.

Sorry.

>> There's an indicator in the mode line telling you which coding system
>
> I don't see that specifically. The mode line here just says:
> ---1(Mac)---F1  tmpmac            (Fundamental)--L1--Top------------
      ^^^^^
That bit's the line-ending.  The `1' is telling you that it's using a
one-byte representation for characters.

-- [mdw]
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <rem-2008may13-002@yahoo.com>
> From: Mark Wooding <····@distorted.org.uk>
> It applies some heuristics to the file at visit-time to decide
> what kinds of line-endings it has.  It then converts the buffer to
> an internal format (probably Unix-style line endings) and remembers
> what transformation it applied so that it can do the inverse thing
> when you save the buffer.

Ah, thanks for the explanation. Since EMACS was originally invented
on a PDP-10 where newline is CR-LF, I imagine maybe this mapping
was originally invented to avoid the problem of C-f or C-b ending
up halfway between the CR and LF (or complicated logic to count two
characters in the buffer as one when executing such a command,
especially with repeat count). But I'm just guessing there.

> Messing with C-x RET C-f changes Emacs's idea of which
> transformation to apply when saving.  In particular, it doesn't
> actually change the buffer contents at all.

Ah, thanks for the extra explanation!

> * Emacs 20 doesn't behave the same way as my Emacs 22 here, and
>   insists on asking you which coding system to use if it can't work it
>   out.  It may be better to shut it up by explicitly asking for
>   utf-8-unix or latin-1-mac or whatever.

That wouldn't be any good at all, because I have no way to view
anything except US-ASCII, and Kermit bombs when it encounters any
character with the parity bit on. It's better that I force it to
check that all characters are US-ASCII and warn me if that's violated
*before* I start Kermit to download it.

Early last year I wrote software to convert between various
notational conventions:
- UTF-8
- Latin-1
- US-ASCII with brace-pictures for non-USASCII characters.
If after I see the warning I investigate and find it's an
unimportant part, then I just delete the offending character before
trying to convert the file again. On the other hand if I discover
there are a *lot* of non-USASCII characters then I look carefully
to see whether I'm looking at UTF-8 or Latin-1. For example:
Lat1: As<ED> dando una mirada a todos estos a<F1>os, ve<ED>a que no hab<ED>a
BraP: As{i'} dando una mirada a todos estos a{n~}os, ve{i'}a que no hab{i'}a
Utf8: <E8><B5><B7> <E5><88><9D> <E3><80><80> <E7><A5><9E> <E5><89><B5>
Just glancing at the hexadecimal notations given by 'more' shows
whether the non-USASCII bytes are in groups with first byte in
different range from other bytes (UTF-8), or non-USASCII bytes are
individual (Latin-1). Once I recognize whether a passage is in
UTF-8 or Latin-1 I can apply the appropriate conversion to brace
pictures, whereupon I can see the text visually here.

> >> There's an indicator in the mode line telling you which coding system
> > I don't see that specifically. The mode line here just says:
> > ---1(Mac)---F1  tmpmac            (Fundamental)--L1--Top------------
>       ^^^^^
> That bit's the line-ending.  The `1' is telling you that it's
> using a one-byte representation for characters.

Loading a DOS-format file (converted yesterday, earlier in this
thread), into EMACS, whereupon I see:
----(DOS)---F1  tmp2dos           (Fundamental)--L1--Top------------------------
I don't see a "2" before the (DOS), so I guess the notation is inconsistent.

> Latin-1:                             <A9>2008 Google

Yeah, I copied that part too from GG's display of your message.
<A9> must be the copyright symbol.
From: Pascal J. Bourguignon
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <7cprrq2vvg.fsf@pbourguignon.anevia.com>
···················@SpamGourmet.Com (Robert Maas, http://tinyurl.com/uh3t) writes:
> 14TH MODEM DROPPAGE TODAY AT 14:04, SINCE FIRST LOGIN TODAY AT 10:49.
> MEAN TIME BETWEEN FAILURE <14 MINUTES.

Yes, they REALLY want you to upgrade to ADSL.

-- 
__Pascal Bourguignon__
From: jellybean stonerfish
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <UbnVj.3044$17.1388@newssvr22.news.prodigy.net>
On Sat, 10 May 2008 11:09:38 -0700, Robert Maas, http://tinyurl.com/uh3t
wrote:



> I finally decided times were desperate enough to try
> screen. (I didn't see your article until after I got up this morning,
> Saturday, so your article was a coincidence!) But every time I tried to
> 'man screen', the modem would hang up before I could get to the
> important commands I needed. Finally the modem stayed connected enough
> to see page 5 where it said
>        If you're impatient and want to get started without  doing a  lot
>        more reading, you should remember this one command: "C-a ?". 
>        Typing these two characters will display a  list of  the
>        available screen commands and their bindings.
> before it disconnected again.

Reading all of the stuff you did to read the screen manpage made my head 
hurt.  I am not familiar with the terminal you are using, but is there 
any way to save the output locally?  Then you can do a command like

    man screen | cat

and the data is saved locally in a clean text format.
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <rem-2008may11-001@yahoo.com>
> > I finally decided times were desperate enough to try
> > screen. (I didn't see your article until after I got up this morning,
> > Saturday, so your article was a coincidence!) But every time I tried to
> > 'man screen', the modem would hang up before I could get to the
> > important commands I needed. Finally the modem stayed connected enough
> > to see page 5 where it said
> >        If you're impatient and want to get started without  doing a  lot
> >        more reading, you should remember this one command: "C-a ?".
> >        Typing these two characters will display a  list of  the
> >        available screen commands and their bindings.
> > before it disconnected again.
> From: jellybean stonerfish <··········@geocities.com>
> Reading all of the stuff you did to read the screen manpage made my head
> hurt.  I am not familiar with the terminal you are using, but is there
> any way to save the output locally?

During the critical time, when the modem's mean-time-between-failure
was less than 2 minutes, there was no way to get more than just the
first page or two of the man pages for 'screen' downloaded before
it'd die again and I'd need to start from the top. In theory, if I
knew in advance it was going to be so bad, each time it died I
might have copied what had printed to the VT100 so-far into a local
edit buffer, then after reconnecting I might have used the 'more'
feature of 'man' to skip immediately to the point I had already
reached, to pick up where I left off. But the manual effort of
copying and pasting the
appropriate / command to skip might have occupied the last ten
seconds available before the next disconnect, so I might not have
even gotten the / command executed before it died again. Maybe I
was too optimistic. Each time I re-dialed, I hoped the modem would
stay up long enough to get all the man pages output in one piece.
That never happened, but at least one time I got to the place where
it told about the C-a ? command, and then after I got screen
started before the modem disconnected again, one other time it got
to the place where it told me how to reattach that old screen
session, which I had to do about a hundred times that night.

> Then you can do a command like
>     man screen | cat
> and the data is saved locally in a clean text format.

I don't know what you're trying to say there. Where would I "do" a
command like that? The only place I could "do" that command would
be on the Unix shell account, but the modem dies before that
command could finish executing, so what good would that do??

By the way, Saturday modem connections were flaky again, although
not quite as bad as Friday night on the average. Occasionally the
modem would stay connected for up to a half hour at a time. Today,
so-far the modem has stayed connected all the time since I first
dialed in today (22 minutes ago), through finding your article and
composing this followup so-far. I'm using 'screen' just in case it
goes bad again.

** AT 11:12 PDT, 23 MINUTES AFTER CONNECT, JUST AS I WAS TRYING TO
GET TIMESTAMP TO UPLOAD THIS FOLLOWUP, MODEM LOST CARRIER AGAIN.
RE-DIALING... BACK ONLINE AT 11:14 READY TO UPLOAD THIS.
From: Pascal J. Bourguignon
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <7cfxsuuyxz.fsf@pbourguignon.anevia.com>
···················@SpamGourmet.Com (Robert Maas, http://tinyurl.com/uh3t) writes:

>> From: "Leslie P. Polzer" <·············@gmx.net>
>> do you know this paper:
>> http://cs-www.cs.yale.edu/homes/dvm/papers/lisp05.pdf
>
> I have no way to read PDF files here.

You're too pessimistic!

Go to google advanced search, and give the criteria that will select this file:

http://www.google.com/search?hl=en&lr=&as_qdr=all&ie=ISO-8859-1&q=allinurl:+dvm+papers+lisp05+site:cs.yale.edu+filetype:pdf&btnG=Search

Then "click" on the "View as HTML", or the "Google recommends
visiting our text version of this document." link.

The proof is in w3m:
------------------------------------------------------------------------
Location: http://www.google.com/search?hl=en&lr=&as_qdr=all&ie=ISO-8859-1&q=allinurl:+dvm+papers+lisp05+site:cs.yale.edu+filetype:pdf&btnG=Search
Web Images Maps News Shopping Gmail more ▼ Video Groups Books Scholar Finance Blogs
YouTube Calendar Photos Documents Reader
even more »
                                                                                                           Sign in

 Google [allinurl: dvm papers lisp05 site:cs.yale.edu filetype:pdf     ]  Search    Advanced Search               
                                                                                    Preferences                   

 Web             Results 1 - 1 of 1 from cs.yale.edu for allinurl: dvm papers lisp05 filetype:pdf. (0.27 seconds) 
Tip: Save time by hitting the return key instead of clicking on "search"
[PDF] A Framework for Maintaining the Coherence of a Running Lisp Drew ...
File Format: PDF/Adobe Acrobat - View as HTML                                                                     
Your browser may not have a PDF reader available. Google recommends visiting our text version of this document.   
A Framework for Maintaining the Coherence. of a Running Lisp. Drew McDermott. Yale Computer Science Department. PO
Box 208285. New Haven, CT 06520-8285 ...                                                                          
www.cs.yale.edu/homes/dvm/papers/lisp05.pdf - Similar pages                                                       

                    [allinurl: dvm papers lisp05 site:cs.yale.edu filetype:pdf     ]  Search                      
                                                                                                                  
  Search within results | Language Tools | Search Tips | Dissatisfied? Help us improve | Try Google Experimental  

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
             ©2008 Google - Google Home - Advertising Programs - Business Solutions - About Google              
------------------------------------------------------------------------


You must be more optimistic to be able to find quick and easy
solutions to your problems...


-- 
__Pascal Bourguignon__
From: Leslie P. Polzer
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to 	Unix 'make')
Date: 
Message-ID: <b2009cf2-2dac-488d-b739-532ad4f272d2@r66g2000hsg.googlegroups.com>
On May 7, 12:07 am, ·················@SpamGourmet.Com (Robert Maas,

> I'm soliciting feedback in two areas:
> - Any obvious flaws in the algorithms, especially where I claim
>    that some fact is "provably true". Can you produce any
>    counterexample? Based on the described algorithms, can you find
>    any case where any value is unnecessarily re-calculated or
>    re-loaded or re-saved?
> - Any suggested changes in wording to make the description easier
>    to understand without changing the meaning.

What is "precursor data"? Can you give an example?
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <rem-2008may07-002@yahoo.com>
> From: "Leslie P. Polzer" <·············@gmx.net>
> What is "precursor data"? Can you give an example?

For my current R&D project, there is just one manually-created
input file, and everything else chains from that source.

If A -> B -> C,
then A is a precursor to B, and B is a precursor to C.
If A -> D ---\
              >---->E
      C------/
then C and D are both precursors to E.
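A minimal sketch in Python (rather than Lisp) of recompute-on-demand over such a precursor graph; the names `Node`, `touch` and `current` are made up for illustration and are not the actual MayLoad code:

```python
class Node:
    """One value in the dataflow, with its precursors."""
    def __init__(self, name, compute, precursors=()):
        self.name = name
        self.compute = compute          # function of the precursor values
        self.precursors = list(precursors)
        self.value = None
        self.stamp = 0                  # logical timestamp of last update

clock = 0                               # global logical clock

def touch(node, value):
    """Simulate manually editing a source node."""
    global clock
    clock += 1
    node.value, node.stamp = value, clock

def current(node):
    """Recursively ensure every precursor is current, then recompute
    this node only if some precursor is newer than our cached value."""
    global clock
    for p in node.precursors:
        current(p)
    if node.precursors and any(p.stamp > node.stamp for p in node.precursors):
        clock += 1
        node.value = node.compute(*(p.value for p in node.precursors))
        node.stamp = clock
    return node.value
```

With A -> B -> C, asking for C first brings B (and transitively A) up to date; asking again recomputes nothing, since no precursor's stamp has advanced.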

The single input file (at this time) is just a list of descriptions
of Transferrable/Soft skills, a private file which is essentially:
  <http://www.rawbw.com/~rem/NewPub/ProxHash/labsatz.txt>
except without the labels at the left margin.
This list of sentences is loaded into memory, and used to generate
two items, the list of labels you see at the left in that file
above, and the parse into a list of words normalized to lower case.
Each list-of-words is converted to histograms of bigrams trigrams
and tetragrams. These are summed to yield the whole-corpus
histograms. The single-record histograms are divided by the
whole-corpus histograms to yield the ratio histograms. These are
merged into a single histogram per record:
  <http://www.rawbw.com/~rem/NewPub/ProxHash/trans-skills-freqrats.txt>
These are normalized to yield:
  <http://www.rawbw.com/~rem/NewPub/ProxHash/trans-skills-normfreqrats.txt>
Those are converted to ProxHash vectors:
  <http://www.rawbw.com/~rem/NewPub/ProxHash/trans-skills-proxhashvecs.txt>
Most of those lists of intermediate values are attached to
properties on the labels, making them randomly accessible in later
algorithms. The first use of the ProxHash vectors, randomly
accessed via the properties on symbols, is to generate the optimal
graph in 2-d (using just two of the ProxHash components):
  <http://www.rawbw.com/~rem/NewPub/ProxHash/trans-skills-links2d.txt>
That graph is then upgraded to successively higher dimensional metrics:
  <http://www.rawbw.com/~rem/NewPub/ProxHash/trans-skills-links3d.txt>
  <http://www.rawbw.com/~rem/NewPub/ProxHash/trans-skills-links4d.txt>
  <http://www.rawbw.com/~rem/NewPub/ProxHash/trans-skills-links10d.txt>
  <http://www.rawbw.com/~rem/NewPub/ProxHash/trans-skills-links64d.txt>
That's as far as I have the dataflow utilized so-far.

The thread on ProxHash success has more details about the
methodology, but the above should give you a good enough idea of
the dataflow in general, which is all you need to understand to get
an idea what I'm using my new MayLoad utility for. The use of
normfreqrats as input to compute proxhashvecs is the most
time-consuming task in all the above, so it was important to be
able to save at least those items to disk to avoid needing to
re-compute from scratch every time the modem loses carrier and I
have to re-start my Lisp environment.
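The backing-to-disk idea can be sketched like this in Python; the function names and the use of pickle are assumptions for illustration, not MayLoad's actual mechanism:

```python
import os
import pickle

def cached_compute(input_path, cache_path, expensive_fn):
    """Return expensive_fn(input_path), loading the result from
    cache_path when that backup file is at least as recent as the
    input, and recomputing (then re-saving) otherwise.  This is how
    an expensive step can survive a crashed session."""
    if (os.path.exists(cache_path)
            and os.path.getmtime(cache_path) >= os.path.getmtime(input_path)):
        with open(cache_path, "rb") as f:
            return pickle.load(f)        # backed-up value still current
    result = expensive_fn(input_path)    # the time-consuming step
    with open(cache_path, "wb") as f:
        pickle.dump(result, f)           # back it up for the next session
    return result
```

After a dropped connection, restarting and calling this again loads the saved result instead of redoing the computation, as long as the input file hasn't changed.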
From: Pascal J. Bourguignon
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <7ck5i6uzgp.fsf@pbourguignon.anevia.com>
···················@SpamGourmet.Com (Robert Maas, http://tinyurl.com/uh3t) writes:
> The thread on ProxHash success has more details about the
> methodology, but the above should give you a good enough idea of
> the dataflow in general, which is all you need to understand to get
> an idea what I'm using my new MayLoad utility for. The use of
> normfreqrats as input to compute proxhashvecs is the most
> time-consuming task in all the above, so it was important to be
> able to save at least those items to disk to avoid needing to
> re-compute from scratch every time the modem loses carrier and I
> have to re-start my Lisp environment.

#+sbcl  sb-ext:save-lisp-and-die
#+clisp ext:saveinitmem
#+cmu   ext:save-lisp
etc.

-- 
__Pascal Bourguignon__
From: Frank Buss
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <xqgxphntqjzl$.a0shurvv130o.dlg@40tude.net>
Robert Maas, http://tinyurl.com/uh3t wrote:

> The thread on ProxHash success has more details about the
> methodology, but the above should give you a good enough idea of
> the dataflow in general, which is all you need to understand to get
> an idea what I'm using my new MayLoad utility for. The use of
> normfreqrats as input to compute proxhashvecs is the most
> time-consuming task in all the above, so it was important to be
> able to save at least those items to disk to avoid needing to
> re-compute from scratch every time the modem loses carrier and I
> have to re-start my Lisp environment.

If you are using a Unix command prompt, try "screen". If you lose the
connection, the session is not closed, but you can re-attach it on the next
login with screen -r (maybe depends on nohup settings for your user
account, but works great for root accounts :-)

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Alex Mizrahi
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <4821bdf7$0$90269$14726298@news.sunsite.dk>
 RM> an idea what I'm using my new MayLoad utility for. The use of
 RM> normfreqrats as input to compute proxhashvecs is the most
 RM> time-consuming task in all the above, so it was important to be
 RM> able to save at least those items to disk to avoid needing to
 RM> re-compute from scratch every time the modem loses carrier and I
 RM> have to re-start my Lisp environment.

When I was experimenting with computational linguistics, lengthy
computations were indeed a problem, and I guess some fancy utility that
can track dependencies and cache data in files would help.

but that will work fine only if it is easy to use.

Unfortunately, from your specification it's not clear how this stuff
can be used. You talk about the :DATE and :DATA properties of a symbol
used by the update algorithm, but you don't say how the algorithm knows
which file to look at, or how to compute a value.

I find the algorithm rather trivial; perhaps the most important thing
is the dependency-description "language". Do you have one? Can you give
us some _small_ but _full_ example of MayLoad being used, in the form
of Lisp source code and a REPL session log? Preferably without
mentioning the bloody ProxHash details :)
From: Pascal J. Bourguignon
Subject: Re: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make')
Date: 
Message-ID: <7cod7iv9yz.fsf@pbourguignon.anevia.com>
·················@SpamGourmet.Com (Robert Maas, http://tinyurl.com/uh3t) writes:

> In conjunction with my ProxHash R&D project,
> in late April I started work on a new dataflow automatic-updater
> utilty which is somewhat like the Unix 'make' utility.
>
> Background: Unix's 'make' uses a list of dependencies between various
> disk files to automatically update a final output file such as an
> executable. Typical use is to make sure each source file is compiled
> to yield a more recent compiled file, and to make sure each
> executable file is at least as recent as all compiled modules it's
> built from. The Java's 'ant' utility goes further to support
> updating of JAR files containing Class files in a similar manner.

Well, it goes further only if you mean >=, but not >.
make is also able to update ar files containing .o files in a similar manner.
You just have to say so, using the syntax: libmine.a(fun.o)

-*- mode: compilation; default-directory: "/tmp/" -*-
Compilation started at Wed May  7 09:34:41

cd /tmp ; cat Makefile ; make clean all
#-------------------------
all:libmine.a
libmine.a:libmine.a(fun.o)
libmine.a(fun.o):fun.c
clean:
	rm -f libmine.a
#-------------------------
rm -f libmine.a
cc    -c -o fun.o fun.c
ar rv libmine.a fun.o
ar: creating libmine.a
a - fun.o
rm fun.o

Compilation finished at Wed May  7 09:34:41


> The first draft of my description of the algorithms is here:
>   <http://www.rawbw.com/~rem/NewPub/MayLoadSpec.txt

> - :DATE = The timestamp (Universal Time) for which this data is correct

ant uses checksums, because timestamps don't have enough resolution
anymore.  That's even more true when you work in RAM instead of
launching external disk based processes.
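A tiny Python sketch of the checksum alternative: record a content hash at build time and call the output stale exactly when the input's current hash differs, so clock resolution never enters into it:

```python
import hashlib

def content_stamp(data: bytes) -> str:
    """A content checksum serving in place of a timestamp."""
    return hashlib.sha256(data).hexdigest()

def needs_rebuild(recorded_hash: str, current_input: bytes) -> bool:
    """Stale iff the input's content has actually changed, regardless
    of how quickly the two versions were written."""
    return recorded_hash != content_stamp(current_input)
```

The trade-off is that hashing reads the whole input, whereas a timestamp check is a single stat; for in-memory values the hashing cost is usually negligible.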

> Requesting a particular item of live data to be "current" traces backwards
> recursively through the dataflow to make sure all inputs are "current",
> per the following algorithm at each control point:

I can hear Kenny crying "Cells!" here...


-- 
__Pascal Bourguignon__
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Spammer harvested this address, 11 messages discarded (was: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make'))
Date: 
Message-ID: <rem-2008jun28-001@yahoo.com>
> Date: Tue, 06 May 2008 15:07:31 -0700
> From: ·················@SpamGourmet.Com (Robert Maas, http://tinyurl.com/uh3t)
Attention anyone who might have wanted to reply to me privately:

Shortly after I posted my thread-starter, a spammer harvested my
address and sent me two Nigerian 419 spam before I could disable
the address.

After I disabled this address, eleven additional e-mail were sent
to this address, all discarded, no NDN issued in any case. Most of
these additional/discarded messages were probably from the spammer,
but I have no way to know. Not one legitimate message arrived prior
to the two Nigerian spam.

In case anyone reading this thread sent me e-mail and I never
replied, that's why, I never received your e-mail.

Anybody who wants to reply to me privately on this topic
 (if you tried before, I hope you saved a backup copy of your message text),
try this address instead (camouflaged here):
 make <digit2> <dot> <digit5> <dot> CalRobert <at the same domain as before>
Spammers flood my mailbox with several hundred e-mail per day.
Blame it on them that I have to take such drastic measures to try
to separate the very very few legitimate e-mail I receive from the
thousand times more spam that I receive.
Blame it on Yahoo! Mail for not providing any proper spam-filtering
system whereby I could **reject** any e-mail I don't want, and I
have to resort to a third-party (SpamGourmet) filtering system
which also has the bad design of accepting-then-discarding e-mail
to addresses I've already disabled instead of rejecting the e-mail
so that you'd get a NDN (Non-Delivery Notice) from your mail agent,
but at least SpamGourmet lets me spawn variants of the same basic
address that all feed to the same mailbox so that I can disable
some variant addresses and keep others active while needing to look
in only one mailbox for all the e-mail that arrived via all those
variants.
From: Ariel
Subject: Re: Spammer harvested this address, 11 messages discarded (was: Requesting critique of spec of my new MayLoad utility (similar to Unix 'make'))
Date: 
Message-ID: <20080628055755.eafaac1c.no@mail.poo>
On Sat, 28 Jun 2008 04:58:14 -0700
·················@spamgourmet.com (Robert Maas, http://tinyurl.com/uh3t) wrote:

> > Date: Tue, 06 May 2008 15:07:31 -0700
> > From: ·················@SpamGourmet.Com (Robert Maas, http://tinyurl.com/uh3t)
> Attention anyone who might have wanted to reply to me privately:
> 
> Shortly after I posted my thread-starter, a spammer harvested my
> address and sent me two Nigerian 419 spam before I could disable
> the address.
[...]

Not sure about how Yahoo handles mail, but I've found that greylisting has worked magic for cutting spam at the server level.  The best part is it doesn't pull false positives based on message content, but delays acceptance of an email (with a 45x error message telling the sender to retry delivery soon).  This is very effective for stopping the random-return-address and volatile-open-proxy styles of spam.  The only false positives are from old SMTP servers that don't adhere to the SMTP RFC (the list is very short and virtually unseen on the interweb).  Definitely worth looking into if you or your mailserver admin can be bothered with implementing it (these days there are implementations for all the mainstream SMTP servers).
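The policy described above can be sketched in Python as a toy in-memory model (real greylisting implementations live inside the MTA and persist their state):

```python
import time

class Greylist:
    """First delivery attempt for an unseen (ip, sender, recipient)
    triple gets a 450 temporary failure; a retry after the configured
    delay is accepted.  Legitimate MTAs retry; most spamware doesn't."""
    def __init__(self, delay=300):
        self.delay = delay
        self.seen = {}                  # triple -> time of first attempt

    def check(self, ip, sender, recipient, now=None):
        now = time.time() if now is None else now
        triple = (ip, sender, recipient)
        first = self.seen.setdefault(triple, now)
        if now - first >= self.delay:
            return "250 accept"
        return "450 try again later"
```

Because the decision keys on the SMTP envelope rather than the message body, there is no content-based misclassification, matching the point above.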
-a
From: Moi
Subject: Re: Spammer harvested this address, 11 messages discarded (was:  Requesting critique of spec of my new MayLoad utility (similar to Unix  'make'))
Date: 
Message-ID: <861da$4866369e$5350c294$15945@cache110.multikabel.net>
On Sat, 28 Jun 2008 04:58:14 -0700, Robert Maas, http://tinyurl.com/uh3t
wrote:

WRT your original post (after two months ;-) a few remarks:

A "chunk" contains various things:
0 - { bookkeeping: timestamp , reference to filename, flags} 
1 - the function name or body (or a reference to it)
2 - the "names" of the parameters that it uses (or references to them)
3 - the actual values for these parameters (or references to them)
4 - the resulting function value.

1 is necessary to re-evaluate the function, if needed.
2 is necessary to construct the dependency graph (DAG).
3 is the search key for the caching. (Note: there could be multiple chunks
for the same function name, but for different function argument values.)
4 is the cached function value (payload).

You probably want the topology {0,1,2,3} in core all the time.
It would be pretty costly to do a disk read for every node traversal you do.
(But a traversal would end once you find a node that is up to date wrt
its precursors.)

If there is no other process writing to the backing files you can
always assume that if a chunk is in core -->> it is valid, and if it is
as recent as or more recent than its precursors -->> the cached value is
valid. If it is out-of-date --> recompute it (and mark it dirty, or
write it to file).

The same goes *recursively* for the children (precursors), which implies
that you can never check a chunk's validity _before having checked its
children's validity_. Which may become costly.

As you noted yourself before: changing the function bodies or the
topology invalidates a chunk plus all of its descendants.
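The per-argument-values caching of point 3 can be sketched in Python (illustrative names, not actual MayLoad code): one chunk per (function name, argument tuple), plus an invalidation that drops every chunk a changed function produced:

```python
chunks = {}   # (function name, argument values) -> cached result

def cached_call(fn, *args):
    """One chunk per (name, args) pair: the same function name can
    have many chunks, one per distinct argument tuple."""
    key = (fn.__name__, args)
    if key not in chunks:
        chunks[key] = fn(*args)   # payload: the cached function value
    return chunks[key]

def invalidate(fn_name):
    """Changing a function's body invalidates every chunk it produced."""
    for key in [k for k in chunks if k[0] == fn_name]:
        del chunks[key]
```

A fuller version would also walk the dependency graph and invalidate descendants, as the paragraph above requires.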

HTH,
AvK