From: Conrad Barski
Subject: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <a4af10cf.0311040716.6ec9a405@posting.google.com>
After reading Raymond's new book, TAOUP, I've been thinking about how
some of the concepts in the book could be applied to LISP programming.
Briefly, one of the central principles of the book is that programs
should be small, should favor non-binary text formats for everything,
and fit into one of three roles:


1. Filter Programs (i.e. gcc) are programs that transform data from
stdin to stdout and can be piped.

2. "Mechanism" Programs (i.e. gdb) have state and/or require user
interaction and therefore feature a CLI.

3. "Policy" Programs (i.e. KDevelop) are "skins" to add a GUI (or
other interface enhancements) to programs of type 1 or 2 by "slaving"
them. They do this through piping for type 1, and sockets (or other
IPC) for type 2 programs.


Since writing light processes is still messy in LISP due to core
overhead, it makes more sense to me, in LISP, to develop "programs" to
function within the REPL instead of a UNIX prompt.

If one thinks of the REPL as a command line (which it is, after all)
then there is an obvious lisp analogue for type 1 programs, as these
are basically calls to FP (functional-programming) functions. (Of
course, UNIX pipes are somewhat more akin to function calls in a lazy
language, since they execute only as input becomes available)
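
To make the pipe analogy concrete, here's a throwaway sketch (PIPE is
just a name I made up for this post, not anything standard):

```lisp
;; A shell pipeline is just left-to-right function composition.
;; STRING-UPCASE and REVERSE play the role of two "filter" programs.
(defun pipe (&rest fns)
  "Compose FNS left-to-right, like a shell pipeline."
  (lambda (x)
    (reduce (lambda (acc fn) (funcall fn acc))
            fns
            :initial-value x)))

;; analogous to:  echo hello | tr a-z A-Z | rev
(funcall (pipe #'string-upcase #'reverse) "hello")
```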

However, LISP really has no obvious analogue for type 2 programs, or
CLI programs. Having such a program would allow one to design the
MECHANISM of a program using an interactive prompt, which minimizes
arbitrary interface implementation and allows for easier unit testing,
which is always difficult when GUIs are involved. Unlike UNIX CLIs, it
makes more sense to me for a LISP CLI to use SEXPs instead of simple
ASCII text.

Of course, (as with most things) it is trivial to add such CLI
programming concepts to LISP, as in my implementation below. The core
algorithm is in "cli-repl", which is basically an input-output pump
for a CLI object. The CLI object itself I implemented as a structure
(in this case a primitive RPN calculator that greets you by name).
You can pass the object "command line parameters" by feeding
parameters to a constructor function. The structure's variables allow
it to maintain state. Any inputs into the CLI object are passed into
its "cli-eval" method.

Such a design allows for easy testing of code with a minimal
interface, and for easy unit testing by passing batch commands (via
the "bat" param - see example below). After the "mechanism" code has
been completed, a GUI (i.e. "policy") could then be written to drive
the CLI object (parsing its SEXPs) without using the "cli-repl" pump,
instead calling the CLI object's "cli-eval" method directly.

Anyway, this is just a brainstorm I had. Any comments welcome. I'm
sure lots of people are already doing something like this. Am I just
re-inventing the wheel here?


(defun cli-repl (cli &optional bat)
  (labels ((box (x)
             (if (consp x)
                 x
                 (list x)))
           (prn (x)
             (pprint (box x))
             x)
           (process (fn)
             (fresh-line)
             (princ '>)
             (prn (let ((x (funcall fn)))
                    (if (eq x 'dump)
                        cli
                        (cli-eval cli (box x)))))))
    (prn (cli-init cli))
    (mapc #'(lambda (x) (process #'(lambda () (prn x)))) bat)
    (loop (process #'read))))

(defstruct rpncalc stack name)

(defmethod cli-init ((x rpncalc))
  `(welcome-to-rpn-calc ,(rpncalc-name x)))

(defmethod cli-eval ((x rpncalc) exp)
  (symbol-macrolet ((stack (rpncalc-stack x)))
    (labels ((binop (fn)
               (push (apply fn (reverse (list (pop stack) (pop stack))))
                     stack)))
      (case (first exp)
        (push (push (second exp) stack) `(,stack push-complete))
        (add (binop #'+))
        (sub (binop #'-))))))

(defun create-rpncalc (name)
  (make-rpncalc :name name))

(cli-repl (create-rpncalc 'bob)) 


**** sample session ****

(WELCOME-TO-RPN-CALC BOB)
>(push 8)

((8) PUSH-COMPLETE)
>(push 3)

((3 8) PUSH-COMPLETE)
>(push 2)

((2 3 8) PUSH-COMPLETE)
>add

(5 8)
>sub

(3)
>(push 2)

((2 3) PUSH-COMPLETE)
>

**** end session ****


use of batch commands:

(cli-repl (create-rpncalc 'jim) '((push 3) (push 2) sub))

From: Will Hartung
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <bo8os1$1b1bnu$1@ID-197644.news.uni-berlin.de>
"Conrad Barski" <·····················@yahoo.com> wrote in message
·································@posting.google.com...
> Briefly, one of the central principles of the book is that programs
> should be small, should favor non-binary text formats for everything,
> and fit into one of three roles:
>
>
> 1. Filter Programs (i.e. gcc) are programs that transform data from
> stdin to stdout and can be piped.
>
> 2. "Mechanism" Programs (i.e. gdb) have state and/or require user
> interaction and therefore feature a CLI.
>
> 3. "Policy" Programs (i.e. KDevelop) are "skins" to add a GUI (or
> other interface enhancements) to programs of type 1 or 2 by "slaving"
> them. They do this through piping for type 1, and sockets (or other
> IPC) for type 2 programs.
>
> Since writing light processes is still messy in LISP due to core
> overhead, it makes more sense to me, in LISP, to develop "programs" to
> function within the REPL instead of a UNIX prompt.

What, are you talking about "Lisp Scripts"? CLISP isn't dramatically more
expensive than Perl.

> If one thinks of the REPL as a command line (which it is, after all)
> then there is an obvious lisp analogue for type 1 programs, as these
> are basically calls to FP (functional-programming) functions. (Of
> course, UNIX pipes are somewhat more akin to function calls in a lazy
> language, since they execute only as input becomes available)
>
> However, LISP really has no obvious analogue for type 2 programs, or
> CLI programs. Having such a program would allow one to design the
> MECHANISM of a program using an interactive prompt, which minimizes
> arbitrary interface implementation and allows for easier unit testing,
> which is always difficult when GUIs are involved. Unlike UNIX CLIs, it
> makes more sense to me for a LISP CLI to use SEXPs instead of simple
> ASCII text.

Lisp and UNIX systems are quite different. While you could lever the UNIX
ideology into a Lisp environment, why go through the effort?

What needs to be realized is that the UNIX shell, its files, and the
scattering of scripts and utilities are all part of the global "UNIX
image". The shell is, essentially, the REPL of a UNIX system; the PATH
and its directories are its "package" system.

People talk about how expensive Lisp is, with its memory requirements,
its image files, etc. Then they compare it to "Hello World" in
C/Perl/Python/FORTRAN/Intercal and say "see how simple compared to
Lisp", not recognizing ALL of the boilerplate that "Hello World" brings
along with it, notably the entire UNIX kernel, runtime environment,
libraries, etc. You can't run "Hello World" without booting the machine
first.

"Well, you have to boot the machine to use Lisp too!" Very true, but, for
example, I have several GB of data on my PC to make it work like a Unix Box.
Cygwin (very useful, FYI) is ENORMOUS, and quite expensive so that I can get
"Hello World" running.

Think how little space a system would need on the disk drive if it were only
to boot BSD and dump straight into CMUCL (and X), compared to what a stock
BSD UNIX system would require. You get an X Session, CMUCL, Hemlock and toss
the rest away. "Lean and mean". (Silly, too, but that's a detail).

UNIX entities work with each other through formatted character streams. The
IPC standard is the pipe, and most everything else leverages off of that
concept.

Lisp intercommunicates with a much richer data model, using structured
data objects (either simple lists or more advanced structures). It CAN
communicate through textual S-exprs, but within the image there's no
need.

Regexps are "new" to Lisp compared to Unix because Lisp data
structures tend not to be torn apart with them; they've been
unnecessary, whereas they're required to leverage the "stream of
characters" format of Unix.

The CLI in UNIX is important because parsing that data to make it
usable is a major part of the daily task. Lisp's reader and structured
objects do that for us already, so we use them in that way. Who needs
"awk" when we can use CADR or NTH? We can simply use the data rather
than convert it first.

This is why the Unix Way is so important in Unix, and why it's basically
irrelevant in Lisp.

Regarding Type 2 programs, the entire system is a Type 2 program. Since you
have accessible state, the programs simply store it and let the individual
function manipulate it. Why write an interface to such a thing when you have
the REPL already? Stuff the state into a default *global*, and work away.
You're always "inside" the program, whereas in Unix you either run the code
to completion, or interrupt and interact with it using a custom interface.
The UNIX system has no global state beyond files, so again this use is
dictated by the Unix environment.
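
To sketch what I mean (the names *RPN-STACK*, RPN-PUSH, etc. are
invented for illustration), the whole "mechanism" of an RPN calculator
collapses to a special variable and a few functions, with the REPL
itself as the interface:

```lisp
;; State lives in a special variable; the REPL is the "CLI".
(defvar *rpn-stack* '())

(defun rpn-push (n)
  (push n *rpn-stack*))

(defun rpn-add ()
  (push (+ (pop *rpn-stack*) (pop *rpn-stack*)) *rpn-stack*))

(defun rpn-sub ()
  ;; Pop in stack order, subtract in entry order: 8 3 sub => 5.
  (let ((b (pop *rpn-stack*))
        (a (pop *rpn-stack*)))
    (push (- a b) *rpn-stack*)))
```

No pump, no protocol: you just call the functions at the prompt and
inspect *RPN-STACK* whenever you like.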

While Lisp needs to peacefully co-exist with Unix environments,
there's really no need for it to actually adopt their methods when they
are really quite foreign to the whole Lisp design.

(P.S. Certified UNIX bigot here, I can dance the CLI dance with the best of
them, I like it, I do it everyday -- but that doesn't mean I want it in
Lisp)

Regards,

Will Hartung
(·····@msoft.com)
From: Pascal Bourguignon
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <87u15jpygo.fsf@thalassa.informatimago.com>
"Will Hartung" <·····@msoft.com> writes:
> What, are you talking about "Lisp Scripts"? CLISP isn't dramatically more
> expensive that Perl.

Not at all indeed:

$ time perl -e 'for($i=1;$i<1000;$i++){print 1+$i;}'>/dev/null

real    0m0.022s
user    0m0.020s
sys     0m0.000s


$ time clisp -norc -x '(dotimes (i 1000) (print (+ 1 i)))'>/dev/null

real    0m0.065s
user    0m0.030s
sys     0m0.030s


> [...] 
> Regarding Type 2 programs, the entire system is a Type 2 program. Since you
> have accessible state, the programs simply store it and let the individual
> function manipulate it. Why write an interface to such a thing when you have
> the REPL already? Stuff the state in to a default *global*, and work away.
> You're always "inside" the program, whereas in Unix you either run the code
> to completion, or interrupt and interact with it using a custom interface.
> The UNIX system has no global state beyond files, so again this use is
> dictated by the Unix environment.

Persistency.  The situation would be different if the lisp images were
stored on disk rather than in core.  I really long for EROS and its
permanent store.  Or perhaps I should replace the REPL and make it a
REPLSI, saving the image after each command.
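
Something like this, where SAVE-FN stands in for whatever the
implementation provides (EXT:SAVEINITMEM under clisp,
SB-EXT:SAVE-LISP-AND-DIE under SBCL; just a sketch of the shape):

```lisp
;; One REPLSI step: evaluate a form, then snapshot the image.
(defun eval-and-save (form save-fn)
  (prog1 (eval form)
    (funcall save-fn)))

;; The full REPLSI is then just the usual loop around it:
(defun replsi (save-fn)
  (loop (fresh-line)
        (princ "* ")
        (print (eval-and-save (read) save-fn))))
```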

 
> While Lisp need peacefully co-exist with Unix enviroments, there's really no
> need for it to actually adopt their methods when they are really quite
> foreign to whole Lisp Design.
> 
> (P.S. Certified UNIX bigot here, I can dance the CLI dance with the best of
> them, I like it, I do it everyday -- but that doesn't mean I want it in
> Lisp)
> 
> Regards,
> 
> Will Hartung
> (·····@msoft.com)

-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Will Hartung
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <bo9i84$1arkvr$1@ID-197644.news.uni-berlin.de>
"Pascal Bourguignon" <····@thalassa.informatimago.com> wrote in message
···················@thalassa.informatimago.com...
> > The UNIX system has no global state beyond files, so again this use is
> > dictated by the Unix environment.
>
> Persistency.  The situation would be different if the lisp images were
> stored on disk  rather than in core.  I really long  for EROS and it's
> permament store.  Or  perhaps I should replace the REPL  and make it a
> REPLSI, saving the image after each command.

For a type 2 program in UNIX, you'd (typically) need to actually give it a
"save" command, and that functionality can be added to persist independent
bits of Lisp as well, unless you can think of some examples that I'm
missing. For level 1 pipe sequences, there's only persistence at each end in
UNIX, and many times even those are temporary (as folks need to branch the
logic and that's difficult to do with a simple piped command line).

But to be fair, because files are the "only" method of manipulating
global state within the UNIX system, persistent files are much more
common.

However, since you mention snapshotting the image, conceptually you
have the opportunity to save the state of your entire system for
recovery (or analysis, say) later, even mid-error, mid-debug,
mid-whatever. To replicate that in UNIX you can use the ever-subtle
SIGQUIT to (hopefully) capture a core image, but it seems to be used in
more limited ways than in Lisp. Plus once you have the image, you only
have the pretty low-level commands of gdb, plus whatever you may have
in your program, to actually do analysis later, assuming you can get
your program to restart without wiping its state.

I do agree with the EROS comment though; then you won't have any
qualms about interning that 200MB data set to play with natively in
internal format, make copies, tweak it a bit here, tweak it there, and
"externalize" it at your leisure.

Regards,

Will Hartung
(·····@msoft.com)
From: Daniel Barlow
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <87ad7cf3tu.fsf@noetbook.telent.net>
·····················@yahoo.com (Conrad Barski) writes:

> After reading Raymond's new book, TAOUP, I've been thinking about how
> some of the concepts in the book could be applied to LISP programming.

After reading TAOUP, I was left thinking that it said more about ESR
and less about Unix.  I particularly admired the sleight of hand that
let him claim Emacs as an example of the Unix Philosophy(sic) - I
doubt that many real unix fans (Al Viro comes to mind, for some
reason) would agree, and I don't think RMS would either.

> Briefly, one of the central principles of the book is that programs
> should be small, should favor non-binary text formats for everything,
> and fit into one of three roles:
>
>
> 1. Filter Programs (i.e. gcc) are programs that transform data from
> stdin to stdout and can be piped.
>
> 2. "Mechanism" Programs (i.e. gdb) have state and/or require user
> interaction and therefore feature a CLI.
>
> 3. "Policy" Programs (i.e. KDevelop) are "skins" to add a GUI (or
> other interface enhancements) to programs of type 1 or 2 by "slaving"
> them. They do this through piping for type 1, and sockets (or other
> IPC) for type 2 programs.

I would characterise the traditional unix philosophy as 

- the shell is the user interface
- programs should do one thing each
- "everything is a stream of octets".  
- pipes, temporary files, etc can be used to compose programs

(Disclaimer: I only really started using unix about a decade ago, so
this is based more on what I've read than on contemporary experience).
I don't think that Emacs or Netscape (or, frankly, fetchmail, which
does some stuff with ETRN that's completely unrelated to its core
purpose) fit this model.

This model has the appeal of transparency and simplicity (which is a
major draw, don't get me wrong), but little else.  For example, the
data representation in some problems is intrinsically more complex
than 'bag of bytes' - thus we see layered representations like XML.
Ever tried dealing with XML using filters and pipes?  The standard
Unix tools expect one record per line: they're a poor fit.  To apply
Unix philosophy usefully to "modern" unix apps, you need to dilute it
almost to the point of "air is good; competition is bad; I like
jello".  There's not anything unique to Unix (or really, any insight
of any kind) in the design principle that "a program should be small
unless there is a need for it to be big"

Anyway, the Lisp "equivalent", I think, would be

- the repl is the user interface
- functions should do one thing each
- everything is an object
- functions can be composed by passing parameters and return values

There's no _need_ to serialise everything down to a stream of bytes,
if you have standard and fairly pervasive support (inspector,
debugger, etc) for looking at the objects as they cross interfaces.
Of course, it's when you have binary protocols _without_ this
support that you start wishing for unix-style design.

(And it's when you have the typical unix utility's approach to error
reporting that you start offering sacrifice to the authors of strace
or truss or par or whatever system call tracer your platform uses, but
while it doesn't help that unix's policy is to return sentinel values
on error instead of signalling an exception, I think that it's still
at least partly the fault of the programs concerned for not checking
said sentinels)


-dan

-- 

   http://web.metacircles.com/cirCLe_CD - Free Software Lisp/Linux distro
From: Thomas F. Burdick
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <xcv65i02fjd.fsf@famine.OCF.Berkeley.EDU>
Daniel Barlow <···@telent.net> writes:

> ·····················@yahoo.com (Conrad Barski) writes:
> 
> > After reading Raymond's new book, TAOUP, I've been thinking about how
> > some of the concepts in the book could be applied to LISP programming.
> 
> After reading TAOUP, I was left thinking that it said more about ESR
> and less about Unix.  I particularly admired the sleight of hand that
> let him claim Emacs as an example of the Unix Philosophy(sic) - I
> doubt that many real unix fans (Al Viro comes to mind, for some
> reason) would agree, and I don't think RMS would either.

Wow, that *is* an impressive sleight!  Makes me curious about the
book, actually.

> Anyway, the Lisp "equivalent", I think, would be
> 
> - the repl is the user interface
> - functions should do one thing each
> - everything is an object
> - functions can be composed by passing parameters and return values

I agree.  If you design your domain-specific language (that your app
is written in) with interactivity in mind, you can use the repl as
your cli.  Developing the GUI on top of such a cli is better than
doing it on top of a unix-like cli, because objects keep their
identity and type between the layers.

> There's no _need_ to serialise everything down to a stream of bytes,
> if you have standard and fairly pervasive support (inspector,
> debugger, etc) for looking at the objects as they cross interfaces.

When I first realized that I could cause my gui to call CL:BREAK
whenever I wanted, I flipped.

> Of course, it's when you have binary protocols _without_ this
> support that you start wishing for unix-style design.
> 
> (And it's when you have the typical unix utility's approach to error
> reporting that you start offering sacrifice to the authors of strace
> or truss or par or whatever system call tracer your platform uses, but
> while it doesn't help that unix's policy is to return sentinel values
> on error instead of signalling an exception, I think that it's still
> at least partly the fault of the programs concerned for not checking
> said sentinels)

I think a lot of Unix/C programmers are scared of just writing their
own, reasonably behaving, functions around libc because they're
worried they'd be working against the philosophy of the system.  And
besides, they're encouraged to return nonsense in their own libraries
:-(

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Paolo Amoroso
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <87oevrdga2.fsf@plato.moon.paoloamoroso.it>
Thomas F. Burdick writes:

> I agree.  If you design your domain-specific language (that your app
> is written in) with interactivity in mind, you can use the repl as
> your cli.  Developing the GUI on top of such a cli is better than
> doing it on top of a unix-like cli, because objects keep their
> identity and type between the layers.

This reminds me of CLIM.


Paolo
-- 
Why Lisp? http://alu.cliki.net/RtL%20Highlight%20Film
From: Thomas F. Burdick
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <xcv4qxi0ynl.fsf@famine.OCF.Berkeley.EDU>
Paolo Amoroso <·······@mclink.it> writes:

> Thomas F. Burdick writes:
> 
> > I agree.  If you design your domain-specific language (that your app
> > is written in) with interactivity in mind, you can use the repl as
> > your cli.  Developing the GUI on top of such a cli is better than
> > doing it on top of a unix-like cli, because objects keep their
> > identity and type between the layers.
> 
> This reminds me of CLIM.

I've often heard comments like, "you have to use it to understand it,
you can't get it from the documentation" applied to CLIM.  This is too
bad, because I've also heard lots of good things about it (this
comment included).  Does anyone have any good pointers to
understanding CLIM without reading the entire reference manual then
writing a significant system in it?

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Christian Lynbech
Subject: CLIM (was Re: LISP & "The Art of Unix Programming")
Date: 
Message-ID: <ofr80m53v6.fsf_-_@situla.ted.dk.eu.ericsson.se>
>>>>> "Thomas" == Thomas F Burdick <···@famine.OCF.Berkeley.EDU> writes:

Thomas> Does anyone have any good pointers to understanding CLIM
Thomas> without reading the entire reference manual then writing a
Thomas> significant system in it?

How about getting CLIM, browsing the spec and then writing a small system?

At least this is my approach. McCLIM is mature enough that you can get
something done, and from the examples and pattern-matching-browsing in
the reference manual it isn't that hard to get something up on the
screen.

If you have access to a suitably sophisticated version of ACL (whether
that includes the free personal edition I do not know) you will find a
CLIM Users Guide in the doc directory, even if you do not have CLIM. I
just discovered this recently, so I haven't really used it much, but I
would expect it to be a bit more convenient than the spec.

If you have a commercial Lisp, you could also consider calling up the
vendor and challenging them to convince you to buy CLIM :-)

------------------------+-----------------------------------------------------
Christian Lynbech       | christian ··@ defun #\. dk
------------------------+-----------------------------------------------------
Hit the philistines three times over the head with the Elisp reference manual.
                                        - ·······@hal.com (Michael A. Petonic)
From: Paolo Amoroso
Subject: Re: CLIM (was Re: LISP & "The Art of Unix Programming")
Date: 
Message-ID: <87fzh1lgrc.fsf@plato.moon.paoloamoroso.it>
Christian Lynbech writes:

>>>>>> "Thomas" == Thomas F Burdick <···@famine.OCF.Berkeley.EDU> writes:
>
> Thomas> Does anyone have any good pointers to understanding CLIM
> Thomas> without reading the entire reference manual then writing a
> Thomas> significant system in it?
>
> How about getting CLIM, browsing the spec and then writing a small system?

I'm afraid I don't have shortcuts. I personally have a brute force,
multipass, teach yourself in 10 years approach to learning CLIM: read
the specification, read source code, play with code, repeat the whole
thing. I am collecting all the documentation and bits of code I can
get my hands on.

No, I am not even *remotely* a CLIM guru. But I am seeing glimpses of
light more and more frequently. Similarly to learning an advanced
language like Common Lisp, I think motivation is important. One of my
motivations is playing with McCLIM, particularly with the Lisp
Listener. Maybe it's just me, but the listener looks cool beyond all
recognition: I can feel the Lisp Machine between my toes.

The McCLIM developers are quietly producing a gem. Perennial kudos to
them.


> If you have access to a suitable sophisticated version of ACL (whether
> that includes the free personal edition I do not know) you will find a
> CLIM Users Guide in the doc directory, even if you do not have CLIM. I

I second Christian's recommendation. The "CLIM 2 User Guide" for
Allegro CL, freely available for download at Franz's site, is a great
and possibly overlooked resource. Its title is the key: it's a "user
guide", not just a reference document. It contains tiny but useful
bits of pragmatics on how to actually design and structure a CLIM
application, i.e. it also shows you the big picture.

Another useful resource is part II (unfortunately, the only available
one) of LUV 93's CLIM tutorial. A careful read of the slides and
sample code helps in understanding certain features of CLIM.


Paolo
-- 
Why Lisp? http://alu.cliki.net/RtL%20Highlight%20Film
From: Bulent Murtezaoglu
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <87k76ddaw5.fsf@acm.org>
>>>>> "TFB" == Thomas F Burdick <···@famine.OCF.Berkeley.EDU> writes:
    TFB> ...  Does
    TFB> anyone have any good pointers to understanding CLIM without
    TFB> reading the entire reference manual then writing a
    TFB> significant system in it?

I was able to pick it up in a matter of a few days in 94 or 95,
using the tiny examples that Franz's documentation then had (and the
Symbolics manual for CLIM 1.0 which was obsolete but gave me a much
better idea).  You don't need to write a significant system in it, but
writing something that accomplishes a task you know and care about
does help.  I was familiar with xlib and Xt at that time and CLIM was
indeed a relief.  As another poster suggested, just start with the
examples in the vendor documentation and then try to implement
something that falls outside the menu/tab/dialog GUI.  You will need
the reference manual but you'll be looking things up rather than
reading it cover-to-cover in a vacuum.

Good luck/have fun.

cheers,

BM
From: Bruce Stephens
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <878ymuo1bg.fsf@cenderis.demon.co.uk>
···@famine.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> Daniel Barlow <···@telent.net> writes:

[...]

>> After reading TAOUP, I was left thinking that it said more about ESR
>> and less about Unix.  I particularly admired the sleight of hand that
>> let him claim Emacs as an example of the Unix Philosophy(sic) - I
>> doubt that many real unix fans (Al Viro comes to mind, for some
>> reason) would agree, and I don't think RMS would either.
>
> Wow, that *is* an impressive sleight!  Makes me curious about the
> book, actually.

The whole thing is online,
<http://www.catb.org/~esr/writings/taoup/html/>, as well as being
available on processed dead trees.

I think ESR stretches the title rather opportunistically.  I'm not
sure he's really saying that Emacs is an example of the Unix
Philosophy, however.  He's (rightly) rather more equivocal than that.
There are clearly aspects which one might regard as Unixy, and aspects
which one really can't.

Similarly the section on the web browser as universal front
end---sure, that can make sense, but it's more an idea of the 1990s
rather than specifically an idea from Unix.  (As far as I can tell,
anyway.)

[...]
From: Pascal Bourguignon
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <874qxjrdt5.fsf@thalassa.informatimago.com>
Daniel Barlow <···@telent.net> writes:
> There's no _need_ to serialise everything down to a stream of bytes,
> if you have standard and fairly pervasive support (inspector,
> debugger, etc) for looking at the objects as they cross interfaces.

Lisp has mappers (map, mapcar, maphash, etc.).  But the main
difference between lisp mappers and pipes is that lisp mappers
evaluate their inputs before processing them, while pipes imply
synchronization of processing.  Of course, we can easily implement
delayed evaluation.
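
For instance, a minimal DELAY/FORCE pair (just one way to do it):

```lisp
;; DELAY wraps a form in a memoizing thunk; FORCE runs it.
;; Enough to make a mapper behave more like a pipe: nothing is
;; computed until somebody actually asks for the value.
(defmacro delay (form)
  `(let ((done nil)
         (value nil))
     (lambda ()
       (unless done
         (setf value ,form
               done t))
       value)))

(defun force (thunk)
  (funcall thunk))
```

(FORCE (DELAY (expensive))) runs EXPENSIVE only on the first FORCE;
later FORCEs return the cached value.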

> Of course, it's when you have binary protocols _without_ this
> support that you start wishing for unix-style design.

Just imagine receiving some .class files along with a flow of binary
objects (assuming you don't have any JVM nor any documentation about
it) vs. receiving a flow of ASCII.  Even if there is no "method"
described in the flow of ASCII data, it may be, and often IS, more
readable and processable (with your own algorithms) than the binary
objects.

-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Will Hartung
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <bo99am$1a7q2s$1@ID-197644.news.uni-berlin.de>
"Pascal Bourguignon" <····@thalassa.informatimago.com> wrote in message
···················@thalassa.informatimago.com...
> Just imagine receiving some .class files along with a flow of binary
> objects (assuming you don't have any JVM nor any documentation about
> it) vs. receiving a flow of ASCII.  Even if there is no "method"
> described in the flow of ASCII data, it may be, and often IS, more
> readable and processable (with your own algorithms) than the binary
> objects.

To a point, but I can guarantee you that were that ASCII annotated in, say,
Urdu, it may as well be in binary for me.

One of the advantages of an ASCII format is the apparent transparency
of the format to a casual reader, but that's only if the ASCII format
is designed to be transparent. BASE64 is, technically, an ASCII format,
for all the good that does.

But by utilizing a readable ASCII format, you relieve the burden on the
application of creating a tool to manage that format, delegating it to the
lowly text editor. And on Unix there are several other tools available
specifically for that purpose of munging text files.

In Lisp, we rely on a binary format and compromise perhaps by using formats
that are easily rendered and read by the printer and reader (or simply
extending the system to leverage the printer and reader for our custom
objects). This gives the developer the performance of a binary protocol
along with the convenience and ease of use of an ASCII protocol, within the
Lisp image.
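
For example (POINT here is a made-up class, and #. is only one of
several ways to do this), a PRINT-OBJECT method can emit a form that
the ordinary reader turns back into an equivalent object:

```lisp
;; Extending the printer/reader pair to a custom object: print a
;; form that READ rebuilds into an equivalent instance.
(defclass point ()
  ((x :initarg :x :reader point-x)
   (y :initarg :y :reader point-y)))

(defmethod print-object ((p point) stream)
  ;; #. makes READ rebuild the object (requires *READ-EVAL*).
  (format stream "#.(make-instance 'point :x ~S :y ~S)"
          (point-x p) (point-y p))
  p)
```

Now PRIN1 and READ round-trip POINTs just as they round-trip lists,
so the "serialization format" comes essentially for free.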

Outside the image, all bets are off. Binary, ASCII, XML, whatever.

Regards,

Will Hartung
(·····@msoft.com)
From: Alain Picard
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <87k76fppkc.fsf@memetrics.com>
Daniel Barlow <···@telent.net> writes:

> ·····················@yahoo.com (Conrad Barski) writes:
>
>> After reading Raymond's new book, TAOUP, I've been thinking about how
>> some of the concepts in the book could be applied to LISP programming.
>
> After reading TAOUP, I was left thinking that it said more about ESR
> and less about Unix.  I particularly admired the sleight of hand that
> let him claim Emacs as an example of the Unix Philosophy(sic) - I

Indeed.  The _reason_ emacs is so good is that it
flouts this Unix "philosophy".  That's why my 
mailer/browser/editor/code-control/chess-interface/etc
REALLY can cooperate.


I think the unix philosophy is adequately explored
in the Unix Hater's Handbook; now online 
(at http://www.art.net/~hopkins/Don/unix-haters/handbook.html)
for those not fortunate to have read it already. 

It's grossly unfair, but dangerously close to the truth,
and always funny.
From: Paolo Amoroso
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <87sml4dr2p.fsf@plato.moon.paoloamoroso.it>
Conrad Barski writes:

> After reading Raymond's new book, TAOUP, I've been thinking about how
> some of the concepts in the book could be applied to LISP programming.
[...]
> 2. "Mechanism" Programs (i.e. gdb) have state and/or require user
> interaction and therefore feature a CLI.
[...]
> However, LISP really has no obvious analogue for type 2 programs, or
> CLI programs. Having such a program would allow one to design the

Maxima?


Paolo
-- 
Why Lisp? http://alu.cliki.net/RtL%20Highlight%20Film
From: Conrad Barski
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <a4af10cf.0311041805.562647e7@posting.google.com>
> I particularly admired the sleight of hand that
> let him [ESR] claim Emacs as an example of the Unix Philosophy(sic)

Yeah, you definitely have to take your mind into a very special and
convoluted place to follow his argumentation on emacs. I personally
_love_ emacs, but it clearly breaks most design principles anyone has
ever committed to paper. I think there are just a certain few entities
in software (kernels, emacs) where power and flexibility are of such
an utmost importance that you cannot rely on generalized design
patterns and need to use patterns highly specific to the problem
domain.

> For example, the
> data representation in some problems is intrinsically more complex
> than 'bag of bytes' - thus we see layered representations like XML.
> Ever tried dealing with XML using filters and pipes?

Well, I think a central (although perhaps unstated) argument in TAOUP
is that using complex/powerful tools such as XML as generalized
solutions even in simple situations may not always be a good thing-
The problems of treating even simple data as XML are greater than the
problems of maintaining differing data formats (some simple, some
complex) that allow simple data to remain simple.

Now I use XML in my daytime work very heavily. And I can understand
the draw of XML in a corporate environment- When faced with two
complicated entities that need to communicate it is often more
cost-effective to use a complex protocol based on XML rather than
redesign and simplify away the complexity that is the initial cause of
the problem.

> To apply Unix philosophy usefully to "modern" unix apps ...

For the reasons stated above, I remain somewhat unconvinced that
a "modern" unix app should automatically rely on a more complex,
hierarchical protocol, such as XML. Although that does seem to be
where things are headed right now.

> If you design your domain-specific language (that your app
> is written in) with interactivity in mind, you can use the repl as
> your cli.  Developing the GUI on top of such a cli is better than
> doing it on top of a unix-like cli, because objects keep their
> identity and type between the layers.

I can understand your argumentation for using the REPL as the CLI, but
I think arguments can be made for the other side as well. For
instance, the REPL of LISP is mostly impenetrable to a non-programmer.
Conversely, if one develops a specific CLI to a program, even a bright
non-programmer can easily learn the syntax and perform QA on the
entirety of an app even before the GUI-generating "policy" program is
available. Also, any unhandled exceptions a non-programmer can trigger
within the CLI are of importance, whereas a non-programming tester
could generate exceptions that have no intrinsic value in the LISP
REPL in two seconds flat. Additionally, although having objects
maintain identity between layers has many benefits, there are arguably
some extra transparency benefits in having a CLI accept/return ASCII
data and print meaningful result indicators instead of requiring the
direct calling of object functions.

Clearly, at a lower level of organization, object and procedural
programming styles have an important place, but I like the argument in
TAOUP that at the program-level, directly calling object functions (or
using other highly proprietary protocols) is best avoided.

> What, are you talking about "Lisp Scripts"? CLISP isn't dramatically more
> expensive than Perl.

If I'm running multiple, very small CLISP scripts that pipe to each
other, my impression is that this would require separate CLISP cores
to be loaded for each process involved in the piping. Are you saying
this isn't true? Or are you saying that Perl carries the same
overhead? (I am not trying to argue, just trying to get the facts
straight myself...)

> Why write an interface to such a thing when you have
> the REPL already? Stuff the state in to a default *global*, and work away.

Right- It seems to me that for complex, state-requiring LISP
"programs" (I'm using that word in the broadest sense possible) the
proverbial "LISP way" is to either (1) use globals for state and drive
a DSL through the REPL text reader or (2) use an object interface that
properly localizes state but has a proprietary non-text interface.

My argument is that at certain levels of design there is a benefit of
a third option, which is (3) use an object interface to properly
localize state but have an SEXP interface that does not allow objects
to be manipulated directly between levels. Such a "CLI pattern" seems
to be considered somewhat contrary to the "LISP way" and I'm trying to
understand why.
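
For what it's worth, a toy version of option (3) fits in a few lines;
the COUNTER class and the command names below are invented for
illustration, not part of any existing "CLI pattern" library:

```lisp
;; Option (3) sketched: state lives in an object, but the "CLI" accepts
;; and returns s-expressions only -- callers never touch the object.
(defclass counter () ((n :initform 0 :accessor counter-n)))

(defvar *counter* (make-instance 'counter))

(defun cli-eval (sexp)
  "Dispatch a command sexp; always return a result sexp, never an object."
  (case (first sexp)
    (inc  (incf (counter-n *counter*) (or (second sexp) 1))
          (list :ok (counter-n *counter*)))
    (show (list :value (counter-n *counter*)))
    (t    (list :error :unknown-command))))

;; (cli-eval '(inc 5)) => (:OK 5)
;; (cli-eval '(show))  => (:VALUE 5)
```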

-Thanks
From: Thomas F. Burdick
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <xcvvfpz1cjj.fsf@famine.OCF.Berkeley.EDU>
·····················@yahoo.com (Conrad Barski) writes:

> > I particularly admired the sleight of hand that
> > let him [ESR] claim Emacs as an example of the Unix Philosophy(sic)
> 
> Yeah, you definitely have to take your mind into a very special and
> convoluted place to follow his argumentation on emacs. I personally
> _love_ emacs, but it clearly breaks most design principles anyone has
> ever committed to paper. I think there are just a certain few entities
> in software (kernels, emacs) where power and flexibility are of such
> an utmost importance that you cannot rely on generalized design
> patterns and need to use patterns highly specific to the problem
> domain.
> 
> > For example, the
> > data representation in some problems is intrinsically more complex
> > than 'bag of bytes' - thus we see layered representations like XML.
> > Ever tried dealing with XML using filters and pipes?
> 
> Well, I think a central (although perhaps unstated) argument in TAOUP
> is that using complex/powerful tools such as XML as generalized
> solutions even in simple situations may not always be a good thing-
> The problems of treating even simple data as XML are greater than the
> problems of maintaining differing data formats (some simple, some
> complex) that allow simple data to remain simple.
> 
> Now I use XML in my daytime work very heavily. And I can understand
> the draw of XML in a corporate environment- When faced with two
> complicated entities that need to communicate it is often more
> cost-effective to use a complex protocol based on XML rather than
> redesign and simplify away the complexity that is the initial cause of
> the problem.
> 
> > To apply Unix philosophy usefully to "modern" unix apps ...
> 
> For the reasons stated above, I remain somewhat unconvinced that
> a "modern" unix app should automatically rely on a more complex,
> hierarchical protocol, such as XML. Although that does seem to be
> where things are headed right now.
> 
> > If you design your domain-specific language (that your app
> > is written in) with interactivity in mind, you can use the repl as
> > your cli.  Developing the GUI on top of such a cli is better than
> > doing it on top of a unix-like cli, because objects keep their
> > identity and type between the layers.
> 
> I can understand your argumentation for using the REPL as the CLI, but
> I think arguments can be made for the other side as well. For
> instance, the REPL of LISP is mostly impenetrable to a non-programmer.

Bah!  Do you just suspect this, or is this from experience -- because
I have experience to the contrary.  Certainly it's not the optimal
non-programmer interface, but I've seen non-programmers get
the hang of it pretty quickly.

> Conversely, if one develops a specific CLI to a program, even a bright
> non-programmer can easily learn the syntax and perform QA on the
> entirety of an app even before the GUI-generating "policy" program is
> available.

Well, if you're using the REPL as a CLI, you need to have the user
live >=90% of the time in your app-specific language, not CL.

> Also, any unhandled exceptions a non-programmer can trigger
> within the CLI are of importance, whereas a non-programming tester
> could generate exceptions that have no intrinsic value in the LISP
> REPL in two seconds flat. Additionally, although having objects
> maintain identity between layers has many benefits, there are arguably
> some extra transparency benefits in having a CLI accept/return ASCII
> data and print meaningful result indicators instead of requiring the
> direct calling of object functions.

I think you're thinking about this wrong: you don't just write the
code you need to get the program to work, then tell the user, "here ya
go!".  If you're viewing the repl as an interface, you want to make
sure that you have all the functions and macros written that you'd
need to make it a reasonable, if paren-filled, interface.

> > What, are you talking about "Lisp Scripts"? CLISP isn't dramatically more
> > expensive than Perl.
> 
> If I'm running multiple, very small CLISP scripts that pipe to each
> other, my impression is that this would require separate CLISP cores
> to be loaded for each process involved in the piping. Are you saying
> this isn't true? Or are you saying that Perl carries the same
> overhead? (I am not trying to argue, just trying to get the facts
> straight myself...)

You have to load the CLISP core each time, and it then interprets the
script.  Perl has to load its image each time, too.  Once upon a time,
CLISP used to be *faster* to load but it looks like the Perl
maintainers have put work into making it load faster, while I doubt
the CLISP maintainers have worried about this for years; now Perl
takes about as long as bash, awk, grep, etc., while CLISP is 2x slower
to start up.  Seems like a crazy time to worry about it, because we're
talking about ~1/100 second.


-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Pascal Bourguignon
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <87y8uvnn6w.fsf@thalassa.informatimago.com>
·····················@yahoo.com (Conrad Barski) writes:
> 
> > What, are you talking about "Lisp Scripts"? CLISP isn't dramatically more
> > expensive than Perl.
> 
> If I'm running multiple, very small CLISP scripts that pipe to each
> other, my impression is that this would require separate CLISP cores
> to be loaded for each process involved in the piping. Are you saying
> this isn't true? Or are you saying that Perl carries the same
> overhead? (I am not trying to argue, just trying to get the facts
> straight myself...)

Please remember that any file read is kept in core as long as that
memory is in use and not needed for something else.  When you pipe
the same program several times, it is read only once; the same text is
shared.  The second time, you only pay the initialization of a new
process.  (Same for Perl; it's a generic feature of Unix.)
 

$ time clisp -norc -x '(ext:quit)'

real    0m0.803s
user    0m0.040s
sys     0m0.040s

$ time clisp -norc -x '(ext:quit)'

real    0m0.071s
user    0m0.050s
sys     0m0.020s

$ time clisp -norc -x '(ext:quit)'

real    0m0.068s
user    0m0.040s
sys     0m0.020s


-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
From: Rob Warnock
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <HJqdnTaZu_GAaDWiXTWc-w@speakeasy.net>
Pascal Bourguignon  <····@thalassa.informatimago.com> wrote:
+---------------
| Please remember that any file read is kept in core as long as that
| memory is in use and not needed for something else.  When you pipe
| the same program several times, it is read only once; the same text is
| shared.  The second time, you only pay the initialization of a new
| process.  (Same for Perl; it's a generic feature of Unix.)
| 
| $ time clisp -norc -x '(ext:quit)'
...
| $ time clisp -norc -x '(ext:quit)'
| real    0m0.068s
| user    0m0.040s
| sys     0m0.020s
+---------------

Exactly, which is why I have switched to CMUCL for all my "scripting",
even though the first time it runs it's a lot slower than CLISP. Here are
the times for the tenth running of each on my machine (1.4 GHz Athlon):

    % time clisp -norc -x '(ext:quit)'
    0.016u 0.008s 0:00.02 50.0%     7140+4348k 0+0io 0pf+0w
    %
...
    % time cmucl -noinit -eval '(quit)'
    0.012u 0.005s 0:00.01 100.0%    252+3620k 0+0io 0pf+0w
    %

Note: The individual "user" and "system" times seem to move around
a lot, but in both cases the *sum* of the two seems to remain reasonably
constant, that is, 24ms for CLISP and 16-17ms for CMUCL.

And then when you compare loading and running some substantial piece
of compiled code in each, well, CMUCL just gets faster...  ;-}


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Damien R. Sullivan
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <boauik$tvv$1@hood.uits.indiana.edu>
····@rpw3.org (Rob Warnock) wrote:

>And then when you compare loading and running some substantial piece
>of compiled code in each, well, CMUCL just gets faster...  ;-}

But if it's not compiled, CMUCL is pretty slow.  How could I slip compilation
into
time lisp -noinit -eval '(dotimes (i 1000) (print (+ 1 i))) (quit)' > /dev/null
?

-xx- Damien X-) 
From: Rob Warnock
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <MLKdnXmwOdE8mTSiXTWc-g@speakeasy.net>
Damien R. Sullivan <········@cs.indiana.edu> wrote:
+---------------
| ····@rpw3.org (Rob Warnock) wrote:
| >And then when you compare loading and running some substantial piece
| >of compiled code in each, well, CMUCL just gets faster...  ;-}
| 
| But if it's not compiled, CMUCL is pretty slow.
+---------------

Not really:

% time clisp -norc -x '(dotimes (i 1000) (print (+ 1 i)))' > /dev/null
0.014u 0.022s 0:00.03 100.0%    3966+2828k 0+0io 0pf+0w
% time cmucl -noinit -eval '(progn (dotimes (i 1000) (print (+ 1 i))) (quit))' > /dev/null
0.021u 0.014s 0:00.03 100.0%    140+2869k 0+0io 0pf+0w
% 

Or given what I said about needing to sum user+sys:

CLISP: 36ms
CMUCL: 35ms

About equal, actually...


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Will Hartung
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <boe781$1ct5g9$1@ID-197644.news.uni-berlin.de>
"Rob Warnock" <····@rpw3.org> wrote in message
···························@speakeasy.net...
> Damien R. Sullivan <········@cs.indiana.edu> wrote:
> +---------------
> | ····@rpw3.org (Rob Warnock) wrote:
> | >And then when you compare loading and running some substantial piece
> | >of compiled code in each, well, CMUCL just gets faster...  ;-}
> |
> | But if it's not compiled, CMUCL is pretty slow.
> +---------------
>
> Not really:

Even if it were, it's, frankly, irrelevant to the task IMHO.

When you're piping together bits-of-unix to get a task done, when you need
to type something as even mildly complicated as that loop, odds are you're
heading for an editor for that blip of the pipe anyway. The magic of the
Unix shell and its plethora of utilities is that they do pretty nice things
using very succinct commands, and that's what makes them usable for on the
fly pipes.

When a pipe gets to over, say, 60 characters, it's almost immediately
unwieldy (at least for me). I might continue pushing it, but when it
encroaches on the next line, it's easy enough to stick "> x.x" onto the
end to "save my work".

Anyway, once you're in the "plop it in a file mode", compiling it is a
heartbeat away IF it's even necessary.

But the real key is simply that 99% of these tasks are so short that
the performance is not even an issue. The overall performance of the task is
typically far lower than the Impatience Quotient of the user.

When the timing gets over the ImpQ, then you may start looking at
optimizations. If that optimization is simply a (compile) away, so much the
better.

The goal is to get these tasks into an "automatic mode". Where you think
"X", and a 50 character pipe, subtly different from the last time, pours
out of your fingers.

Odds are when it gets slow, it's going to be because you actually have to
think about the process (which takes longer than the process typically), or
it's to the point where it hits your ImpQ and you THEN think about it and
say "Wait, ^C" and start over.

When you go "oh wait", check the syntax, see it's ok, then you pop a window
and look at your files and notice that they've finally crossed that thin
threshold from "fast enough" to "punishingly slow". Then you reevaluate the
process, the requirements, etc, and start all over.

Regards,

Will Hartung
(·····@msoft.com)
From: Daniel Barlow
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <87vfpysxjn.fsf@noetbook.telent.net>
·····················@yahoo.com (Conrad Barski) writes:

> Well, I think a central (although perhaps unstated) argument in TAOUP
> is that using complex/powerful tools such as XML as generalized
> solutions even in simple situations may not always be a good thing-
> The problems of treating even simple data as XML are greater than the
> problems of maintain differing data formats (some simple, some
> complex) that allow simple data to remain simple.

I didn't suggest that XML should be the answer to every question.  I
did suggest that sometimes the line-based filtering approach of Unix
falls down, and that the Unix folk in private know that - which is why
things like XML have taken off.

In fact, the line-based filters fall down more than the average unix
user wants to think about or admit.  For example, take the RtL
challenge: write a shell script which greps every file in $PWD not
matching *.titles, for the text 'Switch Date 1990s' (may be broken
over two lines) and moves the matching file into another directory.
Sounds trivial?  Now make it work when the filenames may contain
spaces.  You're quite rapidly into arcane bits of shell syntax.

:; touch "file 1"
:; touch "file 2"
:; touch foo bar
:; for i in * ; do touch "$i.titles" ;done # need quotes to protect space
:; ls
bar  bar.titles  file 1  file 1.titles  file 2  file 2.titles  foo  foo.titles

# ... ok so far.  i think we have to use find to get a list of files 
# _not_ matching *.titles, but any external command will demonstrate.
# let's make it easier by not worrying about whether the search text
# spans multiple lines yet

:; grep -l Switch `find . -name \*.titles -prune -o -print`
grep: ./file: No such file or directory
grep: 1: No such file or directory
grep: ./file: No such file or directory
grep: 2: No such file or directory
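
The space-safe fix does exist, but it lands squarely in the "arcane bits
of shell syntax": NUL-delimited filenames via -print0 and xargs -0.  A
sketch, redone in a throwaway directory:

```shell
# Same setup as above, in a scratch directory.
dir=$(mktemp -d)
cd "$dir"
printf 'Switch Date 1990s\n' > "file 1"
printf 'nothing to see\n'    > "file 2"
touch "file 1.titles" "file 2.titles"

# NUL can never occur in a filename, so -print0 | xargs -0 is safe
# where the backtick version was not.
find . -name '*.titles' -prune -o -type f -print0 \
    | xargs -0 grep -l 'Switch'
# prints: ./file 1
```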


> Now I use XML in my daytime work very heavily. And I can understand
> the draw of XML in a corporate environment- When faced with two
> complicated entities that need to communicate it is often more
> cost-effective to use a complex protocol based on XML rather than
> redesign and simplify away the complexity that is the initial cause of
> the problem.

Sometimes the problem is essentially complex, and simplifying essential
complexity away is usually not a good idea.

Sometimes the problem is not essentially complex, but the Unix
approach /introduces/ complexity - as in the example above where they
made the decision to delimit files in a list with a marker that's 
a valid filename constituent.  If the shell had real lists, and
backticks meant "create a list of the output" instead of "take the
output, collapse newlines to whitespace, and return a string for
reparsing by the caller" as it in practice seems to, this situation
would not have arisen.

> For the reasons stated above, I remain still somewhat unconvinced that
> a "modern" unix app automatically should rely on a more complex,
> hierarchical protocol, such as XML. Although that does seem to be
> where things are headed right now.

I don't want to get hung up on the specifics of XML or whether
hierarchy is a good idea, because in happier circumstances I'd be the
seventh-to-last person on the planet to promote XML for anything.
It's merely an example of traditional Unix thinking refusing to
acknowledge problem complexity.  Look at BIND for another example:
they move from a line-based configuration file in BIND 4 to a
curly-bracket C-like syntax in BIND 8.  Now all your standard Unix
tools are a bit less useful for deciding which zones you're serving
names for, but at least you have somewhere to say per-zone who AXFR
requests are allowed from.

> available. Also, any unhandled exceptions a non-programmer can trigger
> within the CLI are of importance, whereas a non-programming tester
> could generate exceptions that have no intrinsic value in the LISP
> REPL in two seconds flat. Additionally, although having objects

I'm not sure what your argument is here.  If I type rubbish at my repl,
I probably get a debugger.  If I type rubbish at a unix shell, I get 
"not found" messages, or unexpected answers, or a segmentation fault.
I'm not sure that one behaviour is intrinsically better than another.

(I agree that the Lisp debugger is a bit scary if you get dumped in it
and don't know lisp.  That's simply a packaging issue, though.  Bind
*debugger-hook* to something that pops up a dialog box saying "Error
found, debug this (advanced users only) or continue?")
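
That packaging is only a few lines of standard CL; the prompt wording
here is invented:

```lisp
;; Sketch: intercept the debugger for non-programmers.  Inside the hook
;; *DEBUGGER-HOOK* is rebound to NIL, so INVOKE-DEBUGGER reaches the
;; real debugger.
(defun friendly-debugger (condition hook)
  (declare (ignore hook))
  (format *query-io* "~&Error found: ~A~%Debug this (advanced users only)? [y/n] "
          condition)
  (if (char-equal (read-char *query-io*) #\y)
      (invoke-debugger condition)   ; hand off to the real debugger
      (abort condition)))           ; unwind back to the repl

(let ((*debugger-hook* #'friendly-debugger))
  (error "demo"))   ; lands in FRIENDLY-DEBUGGER, not the raw debugger
```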

> My argument is that at certain levels of design there is a benefit of
> a third option, which is (3) use an object interface to properly
> localize state but have an SEXP interface that does not allow objects
> to be manipulated directly between levels. Such a "CLI pattern" seems
> to be considered somewhat contrary to the "LISP way" and I'm trying to
> understand why.

Because it adds complexity, is my gut response.  If you don't need to
serialise, don't serialise.  If you do need to transform the objects
somehow at the boundary (perhaps the two layers are in different
conceptual security domains, and data coming from the lower one needs
vetting) then there's some argument for serialising as a means to do
this, but (a) it's quite hard to design right - we see above that the
unix shell doesn't really manage it - and (b) it often gives us no
gain in exchange for quite considerable pain.

After all, transfers between kernel and userspace have similar trust
boundary issues, and I still don't see anyone suggesting that
getdents() (Linux-only, google should find a manpage) should return a
space-separated list of files to be parsed by the caller, instead of
the vector of dirent objects that it actually does come back with.


-dan

-- 

   http://web.metacircles.com/cirCLe_CD - Free Software Lisp/Linux distro
From: Paolo Amoroso
Subject: Re: LISP & "The Art of Unix Programming"
Date: 
Message-ID: <ff4743a5.0311050048.d397459@posting.google.com>
·····················@yahoo.com (Conrad Barski) wrote in message news:<····························@posting.google.com>...

> After reading Raymond's new book, TAOUP, I've been thinking about how

See also:

  Two views of the Unix philosophy
  http://lwn.net/Articles/53806/


Paolo